Gmapping for Magni

Hi, I am trying to install the gmapping package on Magni, but when I type:

$ sudo apt-get install ros-kinetic-openslam-gmapping

The output is:

Could someone help me with this? 🙂

Thank you all.

Sorry about the delay, this got lost in my inbox over the holidays.

Looks like you are out of disk space, try removing some files, or expanding the VM size if you are running on a VM.

Rohan

Hi,

I corrected this with the new image that you made available. But I was running on a Magni Silver.

Thank you!

Hi Support, All, @rohbotics @mjstn2011

Since there is already a thread on gmapping, I might as well post my questions here about slam_gmapping, which I'm planning to test.

  1. Is this the motor controller that I should be looking at?
    https://github.com/UbiquityRobotics/ubiquity_motor

  2. Also, when I run gmapping, do I connect it like this (code line below):

and will this be able to communicate with Magni's motor controller hardware?

  3. I found out that the subscribed topic of ubiquity_motor is cmd_vel (geometry_msgs/Twist message), so is something required to publish the 'twist' message on the cmd_vel topic so that the two motors can be controlled?
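
If it helps, here is a minimal sketch of publishing a Twist on /cmd_vel from a rospy node. The node name, the 0.5 m/s speed limit, and the 10 Hz rate are my own illustrative assumptions, not Magni defaults:

```python
def clamp_speed(v, limit=0.5):
    """Clamp a commanded speed to a safe limit.
    The 0.5 m/s limit is an assumed value, not a documented Magni maximum."""
    return max(-limit, min(limit, v))

def main():
    # rospy imports are kept inside main() so the helper above stays reusable
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("twist_demo")  # hypothetical node name
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)  # many ROS motor drivers time out without a steady command stream

    msg = Twist()
    msg.linear.x = clamp_speed(0.2)   # forward speed in m/s
    msg.angular.z = 0.0               # no rotation

    while not rospy.is_shutdown():
        pub.publish(msg)
        rate.sleep()

# On the robot: call main() (requires a running roscore and the Magni stack)
```

Running `rostopic echo /cmd_vel` in another console should then show the published messages.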

let me know if I’m on the wrong track,

hopefully see your reply soon,
thanks

Can you explain what sort of hardware you want to run with gmapping?
Are you going to try to get a Lidar working to map an area and then drive around in that area?

We have not yet put together an example of a full navigation stack and gmapping ourselves.
That is because many people using ROS with gmapping have published examples on YouTube and elsewhere on the web. It is a very advanced topic, but I will sum up the basic things that have to be done to get it all going. You have a large job ahead of you learning to do all of this.

  1. If you are talking about a Lidar, perhaps one with a serial interface? If so (your question 2), you have to configure the ROS driver for your particular Lidar to use a serial port, such as one you add with a USB-to-serial adapter plugged into the Pi. By default that will then be /dev/ttyUSB0. So step one is for your driver to publish to a ROS topic; people often publish to /scan. You have to use a USB-to-serial adapter to get another serial port, because the Raspberry Pi is using the only serial port it has to control the MCB motor controller.

  2. The Magni starts up all sorts of things, so by default you can send movement (Twist) messages to /cmd_vel and Magni will move around. That is part of our product. Magni also publishes its wheel odometry to /odom. You asked about 'twist': to run that, open a separate console to the Magni and run this command:

```
rosrun teleop_twist_keyboard teleop_twist_keyboard.py
```

It prints some help on adjusting speed and using certain keyboard keys like 'i', 'l', 'k', and ','.

  3. Actually doing gmapping is involved as well. The basic command to start it in its own window is:

```
rosrun gmapping slam_gmapping scan:=scan
```

You should dig into this page: http://wiki.ros.org/gmapping

  4. Next, if all that runs (which is complicated), you drive the robot around, optionally using 'rviz' on a laptop set up as the ROS slave to the Magni Raspberry Pi; it is possible to see the map, given that rviz is configured properly.

  5. If you do get a map generated, you MUST save it using a tool called 'map_saver'. This is done, as an example, with:

```
rosrun map_server map_saver -f newmap
```

That would create your new map as 'newmap.pgm' along with a 'newmap.yaml' metadata file.
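
The slam_gmapping command above is often wrapped in a small launch file. Here is a minimal sketch, assuming the default /scan topic and the usual base_link/odom frame names (not verified against Magni's actual configuration):

```xml
<launch>
  <!-- slam_gmapping builds a 2D occupancy grid from laser scans plus tf -->
  <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <remap from="scan" to="/scan"/>
    <param name="base_frame" value="base_link"/>
    <param name="odom_frame" value="odom"/>
  </node>
</launch>
```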

You should look around for ROS postings people have done involving gmapping, lidar, and rviz, because this is a very involved topic. I will say that a great many people do this, so it is a 'mature technology', but it is just very involved.

Good luck and have fun learning ROS and gmapping,
Mark

Hi @mjstn2011,

Basically I’m using the idea here:

(open source online)
http://wiki.ros.org/Robots/Nox

I want to try that on Magni, using a Kinect for gmapping.

is that even possible? please let me know,

I have been trying for days already with freenect_launch, trying to display point clouds in RViz, with no luck.

when I do roslaunch freenect_launch freenect.launch

and then rostopic list

I can see all the topics being published, for instance /camera/depth/registered_points and /camera/depth/raw_image, etc…

I can even choose these in the PointCloud display dropdown in RViz, but just 'nothing' displays in rviz…

My initial setup is just connecting it to the Magni VM on my Windows 10 laptop, and even there I struggle to display anything from the Kinect in the Magni VM's rviz. The Magni VM can actually see that the Kinect is connected via the USB bus: when I do lsusb, I can see all 3 of them present (Microsoft camera, audio, and motor controller).

I also tried on the Raspberry Pi on Magni (not sure which version this one is; I connected the Kinect directly to it), and set export ROS_MASTER_URI and export ROS_HOSTNAME on the Magni VM, but rviz still comes up blank. The RPi camera works, but nothing displays from the Kinect.

Please help, I'm facing a brick wall at the moment.

I also want to add that I have been looking deeply through a lot of posts all over the internet about Kinect SLAM navigation with Ubuntu 16.04 Kinetic, and about just displaying point clouds; some are on YouTube and some on ROS. I tried a lot of them and it is going nowhere; I'm still unable to display anything from the Kinect in RViz in the Magni VM. (Things I tried: freenect, openni, openni tracker, etc.)

I look forward to your reply,
Thanks

I have not run a Kinect as a source for mapping.
For mapping we have used our optical fiducial stuff, and the very common Lidar-plus-gmapping solution, which is strictly 2D from the Lidar.

I have used the Asus depth camera, which is basically the same thing as a Kinect but with different drivers.
That was ROS running on an Odroid, and purely a demo to show at a Maker Faire a couple of years ago, so people could walk up and see their image in depth-mode colors, not point cloud format.

So I don't have an answer for using a 3D point cloud to make 3D maps live for navigation. I have seen it done, as you have, but not in enough detail to duplicate it myself.

Sounds like a really cool project.

Mark

@mjstn2011,

After endless nights, I connected the Kinect again to the Magni Raspberry Pi; I'm very unsure whether I'm on the wrong track or not…

@mjstn2011 @rohbotics

Please let me know how I can find out the Magni wheel radius (or diameter), and also the distance between the two motor wheels.

Thanks, I look forward to your reply,

Those parameters are all in /opt/ros/kinetic/share/magni_bringup/param/base.yaml

For what you specifically asked and in meters:
wheel_separation : 0.33
wheel_radius : 0.1015
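
As an aside, those two numbers are exactly what standard differential-drive kinematics uses to turn a /cmd_vel Twist into individual wheel speeds. A sketch of the math (this is the textbook formula, not code taken from the ubiquity_motor driver):

```python
WHEEL_SEPARATION = 0.33   # meters, from base.yaml
WHEEL_RADIUS = 0.1015     # meters, from base.yaml

def twist_to_wheel_speeds(linear_x, angular_z):
    """Convert a body twist (m/s forward, rad/s yaw) to left/right
    wheel angular velocities in rad/s for a differential-drive base."""
    v_left = linear_x - angular_z * WHEEL_SEPARATION / 2.0
    v_right = linear_x + angular_z * WHEEL_SEPARATION / 2.0
    return v_left / WHEEL_RADIUS, v_right / WHEEL_RADIUS

# Pure rotation in place: the wheels spin at equal and opposite rates
left, right = twist_to_wheel_speeds(0.0, 1.0)
```

The equal-and-opposite result for a pure rotation is a quick sanity check on the signs.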

Inmoov,
I do not have experience in point cloud navigation using Kinect.
I am very curious as to what you may find and hope you share your experiences and successes.
I will ask if anybody has insight into that topic. As you are quickly learning, it is at the cutting edge of navigation, so it will not be easy, but at the same time it will be rewarding as you progress.
Mark

@mjstn2011, ok got it thanks

Hi @mjstn2011 @rohbotics,

Is move_base included with the Magni (Raspberry Pi image)?

I'm facing the error below: unable to launch node of type move_base/move_base, and joint_publisher as well (process died).

And below is the launch file I’m trying to use for Magni navigation:

Please advise,

I look forward to your reply,
Thanks
Roger

I would be surprised if it is not present but I don’t have a clean image that I can be certain has never had it installed.

Run this on the Magni: rospack find move_base (the normal place it would show is):
/opt/ros/kinetic/share/move_base

Unless you have done installs from source, which normally go in /home/ubuntu/catkin_ws/src, it would be in the above location. All ROS package installs end up below /opt/ros/kinetic.

We have a launch file for it that starts it up like this, so try this all by itself (just to see if move_base gets found and can 'try' to start):

roslaunch magni_nav move_base.launch

@mjstn2011 @rohbotics

OK, it's my mistake; I was using another VM with no move_base in it.

BUT I downloaded the Magni VM for VirtualBox again fresh, and tried to install full updates (sudo apt-get update, sudo apt-get upgrade) and also freenect_launch (sudo apt install ros-kinetic-freenect-launch). All failed, see below:

It fails straight away, which is very frustrating, because it works in my other VM (with Ubuntu 16.04 and ROS Kinetic on it).

Failed trying to install freenect_launch on your VirtualBox VM:


Failed to do full updates on the Magni VM (sudo apt update and sudo apt upgrade both failed):

It talks about some public keys not being available, and the repository not being updated:
image

And it works on my own VM (both the full updates and the freenect_launch installation), but that one has no Magni move_base.

(I checked the internet connections, it is very stable).
image

I really need this. I also want to let you know that since day 1 I have had issues fully updating it; I vaguely think it's the same errors with sudo apt update and upgrade. I don't have any issues with full updates on my other VMs with a similar Ubuntu 16.04 plus ROS Kinetic setup (it's just that they don't have the Magni installations), but now it has gotten to the point where I need Magni's move_base for navigation. Please help me (I'm frustrated, back to square one again); I can't even install simple things on the Magni-provided VM.

So to sum up my issue is:

  1. roslaunch magni_nav move_base.launch (this seems to run in the Magni VM)
  2. But I cannot install full updates (sudo apt-get update and upgrade fail), and I can't even do a simple sudo apt install ros-kinetic-freenect-launch. These are major issues (I'm missing a lot of dependencies). I really need to install Kinect support and other packages for my use case.

I’m using this VM image from your website:

I look forward to your reply, I really need it…
Thanks
Roger

I would like to clarify a bit the role of the VirtualBox VM in your system when working with a Magni.
Are you doing all these upgrades and installs because you have seen some problem running the VirtualBox VM as an rviz station? If that is the case, we have a VirtualBox issue to work out.

The normal config is that you run either a Linux laptop or our VirtualBox VM, so you can run graphical tools like rviz, plotting software, or visualizations of data on some topic in the system.

It is on the Magni that the entire navigation stack and the drivers for hardware would normally be installed and run, which in your case means supporting the Kinect. We don't use the Kinect in any official way (yet).

The laptop or VirtualBox VM is, for the most part, a way to run rviz and see graphics as the mapping or navigation progresses. The VirtualBox VM we supply has pre-installed all the things required for navigation on a ROS Kinetic system that is configured to use the Magni as the ROS master node.

Given the above here are some observations from your post:

  • If you are trying to get point cloud support for your Magni: 'freenect' is a driver that supports the Microsoft Kinect. This driver runs on the Magni, where the Kinect would normally be plugged in, and the camera pose must be configured in your URDF model so that it is known.
  • move_base normally runs on the Magni itself. move_base talks directly to Magni ROS components to get the location (pose) of the robot within the map, to issue /cmd_vel messages, and so on, to make the robot reach its final goal. So why install move_base on the VirtualBox VM?
  • In a similar way, gmapping normally runs on the Magni itself, not on the VirtualBox VM.
  • I do not support the VirtualBox image we distribute, nor do I tend to use it; I use my own laptop all the time for rviz and other plot visualizations. Sometimes there may be a need to add definitions for certain messages, for example if a Python script on the VirtualBox VM has to send out-of-the-ordinary messages on a ROS topic and those message types are not installed on the VM. I have not seen you mention why you felt you needed to do a full upgrade on the VirtualBox VM. When we talk about full updates and upgrades, it is generally for the Magni.
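
For what it's worth, once move_base is running on the Magni, a navigation goal can be sent to it from a script anywhere on the ROS network via actionlib. A hedged sketch, assuming the standard move_base action interface (the node name and goal coordinates are illustrative):

```python
def make_goal_pose(x, y, yaw):
    """Return a planar position and the (z, w) quaternion terms for a yaw rotation."""
    import math
    return (x, y, 0.0), (math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def send_goal(x, y, yaw):
    # ROS imports kept inside the function so the helper above stays reusable
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("goal_sender")  # hypothetical node name
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"
    goal.target_pose.header.stamp = rospy.Time.now()
    (gx, gy, _), (qz, qw) = make_goal_pose(x, y, yaw)
    goal.target_pose.pose.position.x = gx
    goal.target_pose.pose.position.y = gy
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw

    client.send_goal(goal)
    client.wait_for_result()

# On a live system: send_goal(1.0, 0.0, 0.0)  # requires move_base running on the Magni
```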

In order to better document the role of the Magni robot vs the role of the workstation, I have redone the start of our 'how to set up a workstation' page to state what I have stated above, so that all users can see this information early on in their involvement with Magni.

Enhanced introduction section now here: https://learn.ubiquityrobotics.com/connecting

We constantly use input from our many users who post on this forum to recognize subjects that we have not documented well enough (or at all). The issue of what to install on a Magni vs what to install on the laptop comes up a LOT, and we had missed the boat on discussing it up to this point.

Thanks,
Mark

Hi @mjstn2011,

OK, that is super clear now. I thought it could be the other way around, with the VirtualBox VM doing the heavy lifting (the navigation and mapping calculations); I understand now that the Magni robot is the one doing all that, with move_base and the rest of the navigation stack. Thanks for making it clear, and the enhanced introduction also looks great and is easy to understand. So the VirtualBox VM is basically a screen with rviz (a visual tool) for Magni; now I get it. If the VirtualBox VM is just an rviz station, I have no problems with it, and I can even use my own VM with a similar setup for that (just Ubuntu 16.04 and ROS Kinetic). And yes, before I was confused about what to install on the Magni vs what to install on the laptop; now I'm clear on it, thank you.

Since I corrected that (Magni as the ROS master node running the navigation launch, and my VM being the RViz visualizer), things are looking better, but I'm still not receiving map data in RViz.

Here are the issues I face below now and hoping you can help,

  1. I'm facing the warnings below when I run the navigation launch directly on the Magni robot.

Do you know what they are and how I can go about fixing them?

  2. I also ran RViz in a view on my laptop (my own VM).

So here the fixed frame is base_link, and as you can see, for kinect and camera it displays 'No transform'.

The black box represents Magni robot

  3. And when I select Map instead: I moved Magni with the mobile app and the black box moves as well in RViz, but kinect and camera still show 'No transform'. The white part is the Kinect; it does not move with it.

  4. And then when I select kinect in RViz, kinect and camera show 'Transform OK', but then the map transform is not OK.

So I don't really get it; can you please point me in the right direction? I thought I had gone the wrong way here. I also had the thought that I may have loaded an incorrect URDF file during launch. I searched the Magni packages and saw this 'magni.urdf.xacro' file at image

but it doesn't really look like a URDF file, because it doesn't end with .urdf, so can you also please let me know where the magni.urdf file is located?

  5. Also, you mentioned above that "This driver runs on the Magni where the Kinect would be normally plugged in and configured for your URDF model so the camera pose is known." I have the Kinect sensor model in STL format; how do I go about adding it to the Magni URDF model, linking it so that the camera pose is known? I'm confused on this one.
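
On the STL question: the usual URDF/xacro pattern is to add a link whose visual is the mesh, plus a fixed joint attaching it to base_link. A sketch, where the package path, link names, scale, and mounting offsets are all placeholders to be replaced with values measured on the actual robot:

```xml
<link name="kinect_link">
  <visual>
    <geometry>
      <!-- path and scale are placeholders for your own STL -->
      <mesh filename="package://my_description/meshes/kinect.stl" scale="0.001 0.001 0.001"/>
    </geometry>
  </visual>
</link>

<joint name="kinect_joint" type="fixed">
  <parent link="base_link"/>
  <child link="kinect_link"/>
  <!-- mounting position/orientation measured on the robot (placeholder values) -->
  <origin xyz="0.1 0.0 0.3" rpy="0 0 0"/>
</joint>
```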

I look forward to your reply,

Thank you,

Roger

@mjstn2011 , @rohbotics

since I’m still waiting for your reply,

I was trying roslaunch magni_nav move_base.launch on the Magni robot, and it starts with a bunch of errors, shown below in the screenshot.

On my laptop VM I fired up RViz, and I see no Magni robot model there.

Please help rectify this as soon as you can, thanks.

Roger

I don't recall if we have told you about what many of us consider one of the best books for quickly getting a great feel for the many components of ROS navigation.
I think you would very much enjoy the most excellent PDF version of a book by Patrick Goebel (a highly respected ROS person). The book is called 'ROS By Example'.

It is an excellent, fast-paced way to really dig into a great many of the things you are asking about, and a book you would greatly benefit from. Look for version 1.1.0; in a quick search I found it in a few places, but here is one: http://file.ncnynl.com/ros/ros_by_example_v1_indigo.pdf
The whole book is great, though you may know a lot of it by now; still, it is worth getting Patrick's views and teaching. For the specific navigation questions you have been asking, I suggest chapters 7 and 8 in particular. I recommend it to anyone trying to work with ROS and robots.

We are just about ready to release another example, based on an RPLIDAR with the Magni, targeted at the many Magni users who often ask questions about Lidar-based navigation. I will post back here once we release the lidar example.

We do not have a point cloud demo planned at this time, but one is likely within perhaps the next year, although it will probably be based on something newer than the Kinect.

@mjstn2011

OK, you suggested I go somewhere else, to a book,

but can you please explain why, after I followed your instruction to launch move_base on the Magni robot and then RViz on the VM, no robot model of Magni is showing? I want an answer to that: have I done something wrong for it not to show up?

I would have assumed this would work out of the box…

Please respond,

Thanks
Roger