Magni Demo Using RPLidar Now Posted

To answer your question: no ready-made solution for what you are describing exists in our repository at this time.

You would have to write a Python script (or something similar) that takes in the /odom topic and saves x,y waypoints into a table at regular intervals. Your script could then play back the waypoints, waiting for each one to complete before moving to the next. In the move_basic/scripts folder I have discussed and supplied a script called move_patterns.py, which would help you form the playback part of your own program. You could create arrays like the fixed ones I have in that program and fill your array with the waypoint x,y points you gather along your path. This is just a suggestion.
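As a rough illustration, here is a minimal sketch of the recording side. The node name, the output file, and the 0.5 meter spacing threshold are my own assumptions for illustration, not anything taken from move_patterns.py:

#!/usr/bin/env python
# Minimal waypoint recorder sketch. The node name, output file, and the
# 0.5 m spacing threshold are illustrative assumptions.
import math
import rospy
from nav_msgs.msg import Odometry

waypoints = []  # table of (x, y) points gathered along the path

def odom_callback(msg):
    x = msg.pose.pose.position.x
    y = msg.pose.pose.position.y
    # Only record a new waypoint once the robot has moved far enough
    if not waypoints or math.hypot(x - waypoints[-1][0], y - waypoints[-1][1]) > 0.5:
        waypoints.append((x, y))
        rospy.loginfo('Recorded waypoint %d: %.2f, %.2f', len(waypoints), x, y)

if __name__ == '__main__':
    rospy.init_node('waypoint_recorder')
    rospy.Subscriber('/odom', Odometry, odom_callback)
    rospy.spin()
    # After shutdown, dump the table so it can be pasted into a playback array
    with open('waypoints.txt', 'w') as f:
        for x, y in waypoints:
            f.write('%.3f %.3f\n' % (x, y))

The playback side would then step through that table, sending each point as a goal and waiting for completion before the next, the way move_patterns.py steps through its fixed arrays.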

@mjstn2011, OK, I will locate them and look into it. Thanks.

I have now written up the lidar navigation on our main doc site; the page for this lidar nav material is here: https://learn.ubiquityrobotics.com/lidar_navigation

The advantage is nicer formatting and pictures now.


Hi guys,

I have mapped a hallway and loaded the map into RViz. Interestingly enough, the robot's lidar return does not match the map in RViz. I have tried to give it a 2D Pose Estimate, but it does not overwrite the current position, so the robot goes to the wrong place when I give it a 2D Nav Goal. I was wondering what could possibly have gone wrong.

At the highest level, work through making a map and then navigating within that map using a very simple one-room map, with a couple of boxes or other objects in the room so it does not appear as a featureless square, which can confuse AMCL. You are showing a very complex map above. Start simple and get that to work.

Another thing I mention in the demo: until you really understand all the parts of doing nav, start from a reboot of the robot at a specific spot you mark on the floor with masking tape, one piece for each of the two main drive wheels. Then, after you have made a map and saved it, move on to using that map by having the robot start fresh from a reboot at the exact place where you started making the map.

Once you can do all those things in a simple map, then move on to a more complex map.

Something I have not explained much is that when you start doing navigation (after the map exists), the name of the map file matters: the launch file needs it to know the map's location.
In magni_lidar_maprunner_amcl.launch, that is in this line:
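The line looks something like this (the exact form in your copy of the file may differ; this is from memory):

<node name="map_server" pkg="map_server" type="map_server"
      args="$(find magni_lidar)/maps/tinyroom.yaml"/>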

I suggest that after saving the map you give the map file a human-readable name that describes the room or area you are mapping, and save that map in the maps folder of the magni_lidar area. Above I use 'tinyroom.yaml', but maybe your map is 'lapmap.yaml' or something else.

Hi @mjstn2011 thank you for your reply.

I had indeed successfully built a couple of maps, saved them, and loaded them in magni_lidar/maps. Then I replaced tinyroom.yaml with the name of the map I wanted to load in both the maprunner and maprunner_amcl launch files. When I loaded my map, what I meant to say was that everything went well at the beginning, as the laser return did indeed match the map at first (see picture below). I also placed the robot at the starting spot (I will retry mapping and marking the wheel positions as recommended), but after a few 2D Nav Goal points the lidar returns would tilt away from the real world in RViz. If you look closely at the picture in the post above, the loaded map matches the lidar return, but they are tilted.

I believed this could be solved just by telling the robot where it was with the 2D Pose Estimate. I had played with the turtlebot3 simulation and thought that would work here as well. But instead, when I give a 2D Pose Estimate, the image kind of flickers between the real position and the position at which it stopped.

Before mapping the hallway, I started in small square rooms, which gave me the same problem. I will go over the whole process one more time with the tape on the floor as advised.

P.S.: I also believe that a move_base setup using the sonars would help with navigation and path planning for people interested in lidar.
Best regards

Thanks for the additional explanation. I was not sure how much you had played with this before, but it sounds like you took all the right steps; sorry, it was not clear to me how much you had already done.

You are in fact using the AMCL-enabled launch file, and you are also using the RViz '/map' frame.
AMCL tries to get a fix on the robot pose, and there is then an offset from /odom to where AMCL thinks the robot is located. I have seen the laser scan be off a bit and then, after half a minute or so, gradually adjust back to the map edges. This disturbed me too, but since mine was getting back to the normal laser outline I did not worry much.

I am also only just learning and have been hoping some 'experts' could come in and answer your question here. There are so many details; in this case it appears to me that either AMCL is not feeding back the correction properly, or some AMCL parameter is not set to correct fast enough before things get so far off that it can no longer make the correction. One thing I am noting is that you have a huge gap in that hallway: because the lidar cannot see the end of the hall, AMCL gets confused. How about you try blocking off the hall with cardboard, so that maybe AMCL will be able to correct? This is a known problem when the lidar cannot find a wall; that range becomes unknown and may confuse AMCL.
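If you want to experiment with how aggressively it corrects, AMCL has some standard parameters you could try tightening in the launch file. The values below are illustrative guesses, not settings I have tested on the Magni:

<node pkg="amcl" type="amcl" name="amcl">
  <!-- Update the filter after smaller motions so corrections come sooner
       (AMCL defaults are 0.2 m and about 0.52 rad); values are guesses -->
  <param name="update_min_d" value="0.1"/>
  <param name="update_min_a" value="0.2"/>
  <!-- Non-zero recovery alphas let AMCL recover from a bad pose estimate -->
  <param name="recovery_alpha_slow" value="0.001"/>
  <param name="recovery_alpha_fast" value="0.1"/>
</node>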

Just my thoughts. Sorry but I’m not sure of the root cause here.

Mark

Hi Mark, thank you for your help.

I ended up fixing the issue by deleting the RViz setup on my desktop and downloading it again (which in theory is the same thing).

But as I was building a map, my robot started acting weird. I drove the robot around with keyboard_teleop from my computer, but after a while the robot would stop moving and then burst out executing the commands I had entered earlier. This was dangerous, as the robot would not consider the obstacles around it. It was almost as if the commands went into a buffer while the robot lagged, and then the robot performed them all at once.

I was wondering whether this was due to multiple nodes running at the same time, or whether it is just a Ubiquity robot defect. Please advise.

Thank you.

Oh shoot, I never hit the Reply key so this has been a ‘draft’ for a few days, sorry.

When building a map, a process called gmapping can take a lot of CPU. I wonder if the robot's CPU got extremely busy due to the size and complexity of the map while you were entering teleop commands, and then, as you say, the queued commands were dequeued and ran all at once.

Here are the things to try to debug this issue:

  • Get back to me with your firmware revision and MCB board revision, as this is getting involved. You can see the firmware revision in the /diagnostics topic easily using:
rostopic echo /diagnostics | grep -A 1  'Firmware [DV]'

The board revision is in bright white text on the left edge of the board for rev 5.2 and beyond. If you don't see it along the left side, this page shows how to identify the board:

  • Always set your speed when mapping to be very slow, like 0.15 meters/sec or slower (use the z key in teleop).
  • Open another window over ssh and run the command 'top' there. This will show CPU loading as well as memory usage. See whether the robot's CPU usage maxes out or its memory runs out, with CPU loads near or at 100% for seconds at a time, once the map starts getting larger. Robots (in simple software setups), and certainly this demo, can start acting oddly when resources max out. This step is just to gather debug info on whether that is involved.

Hi Mark,

Thank you for your reply; I did not realize you had answered this. My firmware version value is “30” and I have a rev 5.0 board. I have also run the 'top' command and could see the slam_gmapping %CPU value often reached 100, just as you mentioned.

Moreover, I was wondering if you could point me in the right direction. I bought an RPLIDAR A3 with a longer range to map faster, but as I started mapping, the gmapping node did a terrible job. I changed the serial baud rate from 115200 to 256000 in both the mapmaker launch file and rplidarNode.cpp, but still no luck. I also moved the robot at the lowest speed and it still was not mapping properly. I placed the lidar at the same location as the RPLIDAR A1; should I modify the lidar-to-base_link translation value?
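For reference, here is roughly how the baud rate is set in my launch file (serial_port and serial_baudrate are the stock rplidar_ros parameters; the port name may differ on your setup):

<node pkg="rplidar_ros" type="rplidarNode" name="rplidarNode">
  <param name="serial_port" type="string" value="/dev/ttyUSB0"/>
  <!-- The A3 runs at 256000 baud; the A1 uses 115200 -->
  <param name="serial_baudrate" type="int" value="256000"/>
</node>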

It would be valuable to think about several of the most common things that nuke performance on these relatively small processors.

When you run ‘top’ you will get a display like this:
top - 02:11:10 up 2:45, 1 user, load average: 0.63, 0.70, 0.63
Tasks: 151 total, 1 running, 102 sleeping, 0 stopped, 0 zombie
%Cpu(s): 5.6 us, 4.9 sy, 0.0 ni, 88.9 id, 0.0 wa, 0.0 hi, 0.6 si, 0.0 st
KiB Mem : 895508 total, 278716 free, 334504 used, 282288 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 478852 avail Mem

It (sadly) does not take a whole lot to overwhelm the Pi3. More recently we ship the Pi4, but to run that properly you must have an MCB board of revision 5.2 or greater (due to some power supply changes made partly to support the Pi4).
The Pi4 did not exist when we made MCB 5.0, and we are not clairvoyant (sadly).
Nowadays we run the Pi4 with 4GB, and there is plenty of memory in that case.

But before jumping all the way to the conclusion that RAM is the issue, I think the thing to do is look at the performance and then find ways to lower the loading by easing up on the speed of gmapping. The larger the map, (sadly) the greater the load on gmapping. As I stated, your map is 'rather large and complex' for a Pi3.

I have not had to optimize gmapping so far, but I think there are ways, and perhaps people out there explaining how to do it. Things like slowing down the rate of lidar data and slowing down the map publication speed from gmapping both help; see the sketch below.
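As a sketch of what that can look like, slam_gmapping exposes standard parameters for both knobs; the values here are illustrative, not tuned settings:

<node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
  <!-- Process only every 2nd laser scan to cut CPU load (default is 1) -->
  <param name="throttle_scans" value="2"/>
  <!-- Rebuild and publish the map every 10 seconds (default is 5.0) -->
  <param name="map_update_interval" value="10.0"/>
</node>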

You have identified that it is capping out on CPU loading. Also look to see whether the 'top' display shows that you are just about out of memory. When the poor Pi runs out of memory (as in any Linux system), it starts to spend a HUGE amount of its time swapping memory in and out using what is called virtual memory. If it hits that, the processor will drop to its knees and 'cry'. So we must avoid that.

So in general we try to be sure the Pi has swap memory available in the first place by creating a swap file, but frankly, if you get into a mode of using virtual memory, what you really want is more memory. We have recently started to support the Pi4, but only on current MCB boards (rev 5.2 and 5.3). On older images we may not have set up swap. So run 'top':

The 3rd line down shows load, and the 4th line shows whether you have available memory.
If the 'free' memory number is really low, you are likely swapping.
Example: KiB Mem : 895508 total, 277904 free, 334012 used,
The line above is good because a high percentage is still free.

The 5th line down starts with 'KiB Swap:', and a lot of the time this may be 0.
That means no attempt to set up a swap file has happened.
Setting up swap keeps the processor from getting bottlenecked and flat-out stopping cold, but it will not solve your problem. Once swap is being used all the time, things bog down majorly, so we want to avoid using swap more than we want to optimize it.
But you may want to look at setting up a swap file by searching the web for 'setup raspberry pi swap on ubuntu' or something; you may find it interesting.
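For example, on Ubuntu the usual recipe is a few commands like these (the 1 GB size is just an illustrative choice):

# Create and enable a 1 GB swap file (size is an illustrative choice)
sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# To make it permanent, add this line to /etc/fstab:
# /swapfile none swap sw 0 0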

On an older issue from this thread: I have started to put together some efforts to support move_base along with real costmaps using our sonars. This is a big task, and it is not ready for 'prime time'.
I simply want it to be known that we work on this as little slices of time become available. It is a really big 'second step' in our efforts to show people how to do indoor 2D navigation. We are extremely busy with many things, as I have mentioned, so all of these demos are done in 'our spare time', which is minimal at best. Thank you for your understanding.
