We now have a demo using an RPLidar that shows how to run a full nav stack with gmapping and AMCL. Find out about all that sort of cool stuff here:
This came about because it has been a frequently asked question for ages, so why not have a starter page to learn about nav and actually play with it using a Magni!
To use the demo as-is, see the picture at the top level of the demo. Only the X and Y offsets matter, so I just show a picture from the top. The lidar height from the floor does not matter much, but the lidar should be high enough to hit your walls.
Also, as I drive it around, it is making new walls and areas that don't exist… Is it because I drove Magni back to the same spot again? Please advise. (This was not the case with frame_grabber.exe from RPLidar.)
And as I move around, it does not create any extra unknown areas (unlike launching mapmaker and then RViz, which makes a lot of unknown areas and extra walls, and misses space as well).
More accurate scan from frame_grabber.exe
Here is a photo of how I mounted the RPLIDAR onto Magni. After looking at your photo, I think the position is similar, except mine may be a bit higher.
I think if you take a top-down view of your Magni (rather than the perspective view you have shown here), you will see that the lidar in this position is pushed forward much more than @mjstn2011's lidar in his picture. Knowing that mount and lidar myself, I estimate that you are offset from his by at least 6 cm positive in the x direction. (BTW, I had a chuckle when I saw that you put the mount on backwards, but it makes sense given you were trying to get the spool to be behind the lidar.)
You could take a wild guess and correct for it by adding 6 cm to the x value in your lidar_translation frame, i.e.:
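For example, if you were setting the transform from a little Python node, the change would look something like this rough sketch. I'm guessing the frame names `base_link` and `laser` here; match them, and the original numbers, to whatever the magni_lidar launch files actually use, since in the demo the transform normally lives in the launch file and the cleanest fix is just to edit the x number there.

```python
#!/usr/bin/env python
# Rough sketch only: publish the base_link -> laser static transform with the
# x offset bumped by ~6 cm. Frame names and numbers are guesses; match them to
# whatever the magni_lidar launch files actually use.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('lidar_static_tf')

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'base_link'          # robot body frame (assumed name)
t.child_frame_id = 'laser'               # lidar frame (assumed name)
t.transform.translation.x = 0.0 + 0.06   # old x plus the estimated 6 cm forward offset
t.transform.translation.y = 0.0
t.transform.translation.z = 0.2          # height barely matters, as noted above
t.transform.rotation.w = 1.0             # no rotation change in this sketch

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()
```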
But the most robust way to do it would be to measure the offset yourself and then put the appropriate numbers in the lidar_translation variable.
Regarding your queries about the gmapping software, from my perspective that looks like pretty normal operation of gmapping. The robot can get lost, your map can break unexpectedly, and false returns from the RPLidar can make the robot falsely see through walls; sadly, that is just the current state of the art of ROS mapping. I find that the map breaks when I move quickly, and especially when I rotate quickly, so in mapping mode I usually drive KRYTN really sloooooooow. However, @mjstn2011 might have a different perspective.
One other thing to point out is that the RPLidar A1 is by far the cheapest lidar I was able to find (some people can get them for USD $100!), so you need to expect that the quality of its scans won't be as good as the USD $30k+ lidars used on all the university research robots.
Thank you for your reply; your mount works well, I appreciate it. You are right, my lidar is mounted farther forward than @mjstn2011's position, which really makes sense. I will give it a go, and I may need to re-map.
Meanwhile, whilst I was waiting for replies here, I did more tests and finally saved a rough map (below)
and then tried to give it a 2D pose of where it is supposed to be, but couldn't do it. Can you please let me know how to do that?
I think I have the wrong starting point, or the wrong angle the lidar should be facing. When I give a goal on the map, it moves in the opposite direction. Please advise.
And I'm also seeing these errors next to each "done rotation" message; please advise as to what I have done incorrectly. It kind of spins around, looking confused or something.
I also drive super slowly and even stop every foot or two to let gmapping settle. Rotations must be very slow. I am glad johnnyv mentioned that; I need to explain it in the demo readme.
I mentioned that Z is not too important, BUT the x and y translations MUST be very, very close to reality. As a first estimate, assume that where I put the lidar has X and Y at zero for practical purposes. That makes measuring easier because it is on the line connecting the centers of the holes in the plate. That is not exact, but it gets you started; later, for better accuracy, you can worry about the small X error.
Also, if gmapping gets an "odd idea" about a feature, you have to sit for a while in a place where it can fix that "odd idea", because gmapping will eventually average out the odd feature, but it is not at all fast.
Basically take a lot of time in making the map and watch it carefully as it grows and forms.
Hi, I'm having a bit of trouble using the RPLidar; can you help me?
I'm using an XV-11 Neato LIDAR, and I've been able to run it with LaserScan and everything just fine using the XV-11's own library. However, when I run the roslaunch for magni_lidar, not only does it give me a timeout error, but I also can't run the rosrun for the XV-11 packages anymore; I get an EOF error.
The demo was written with the assumption that you are on a real Magni platform. Is that the case? If not, there are many other things that must be running on your robot.
If you are running an XV-11 (which I have run on two of my own robots), be sure the LaserScan comes out on the topic /scan. Also, I am assuming you have taken out the startup of the RPLidar, because that of course will not work without the RPLidar driver and so on. Make sure the orientation of the lidar is correct in the static transform in the launch files.
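If you want a quick sanity check that the XV-11 scan is really arriving where the demo expects it, a tiny script like this (just a sketch, not part of the demo) will tell you right away whether data is on /scan and which frame it is stamped with:

```python
#!/usr/bin/env python
# Minimal sanity check: confirm a LaserScan is actually arriving on /scan,
# which is the topic the demo expects the lidar data on.
import rospy
from sensor_msgs.msg import LaserScan

rospy.init_node('scan_check')
try:
    msg = rospy.wait_for_message('/scan', LaserScan, timeout=5.0)
    rospy.loginfo('Got a scan with %d ranges in frame "%s"',
                  len(msg.ranges), msg.header.frame_id)
except rospy.ROSException:
    rospy.logwarn('No LaserScan on /scan within 5 seconds; check your remapping')
```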
@johnnyv, I tried your suggestion of correcting the X, and map creation is working much better. I also tried moving it more slowly, and it is more accurate now, thank you.
@mjstn2011, I tried going slower as well, and that seems to help, as @johnnyv also mentioned. Thanks.
But I think you have not replied about the rotation errors that keep happening in my screenshot above. Do you know why that is happening, or what I did incorrectly?
Also, I'm having issues with the map not matching where Magni is. For the 2D goal to work, do I need to put Magni in the original position (where it started when creating the map) and always make that the starting point? I find that if I put Magni in a different location in the map (and then power it up), RViz always shows Magni in the same spot no matter where I actually start it (which is the incorrect spot in the map), so it ends up trying to go through walls.
Also, I found that the laser is facing downwards (at the starting point), is that correct?
If you do not start the maprunner script with Magni at the same starting point where you made the map, then you must tell Magni where it is before you start moving.
So you must pay attention to the section in the readme titled "Tell The Robot Where It Is Initially Located In The Map".
If you skip this, then in general you MUST start Magni in the same spot every time. These things are explained, but I know it is a lot of information.
I see a small problem in the directions and will make them clearer, but they are OK.
After you do the "Start Up move_basic" step, which allows path planning and autonomous movement, you go right to "Tell The Robot Where It Is Initially Located In The Map".
What I did not explain well is that every time you stop a launch file, it is best to stop RViz, and when you then run the maprunner script, you start RViz again. This is a tricky thing about RViz. I only supplied one RViz config file, called lidar_mapmaker.rviz, so don't worry that it says "mapmaker"; it is just the same RViz config used when you were making the map.
So is this step automatically detecting where Magni is in the map (before it moves)?
But you also mentioned that "you must first use RViz…", so does that mean I need to click on 2D Pose Estimate and drag the green arrow to where Magni is first, and then run the amcl.launch file?
Please advise.
(I'm very excited, by the way; I got Magni moving!)
I am glad the demo was perhaps worth my time. Its intent is to help people get going with industry-standard ROS navigation based on lidar scans.
AMCL tries to estimate the robot pose (position and rotation) right from the beginning with a stopped robot. But that is extremely hard in most normal maps, so that is why you tell it at the start your best guess of where the robot is in the map. I have a very simple, large rectangular area made with long pieces of cardboard for my tests. The area has one corner at an angle to make it easier for AMCL. Here is the demo, but with sonars showing in the picture.
@mjstn2011, it was definitely worth your time. I'm quite sure this will benefit a lot of people who would want this mapping function added on top of Magni. Thanks.
Now I have tried your suggestion for telling it where Magni is located (2D Pose Estimate), but somehow it doesn't work. I set a pose, then I set a goal, and it moves in the opposite direction or a totally different direction. Please advise.
I also tried to launch maprunner_amcl.launch after I ran the maprunner and move_base (AMCL gave an error saying it couldn't retrieve the map), so I tried launching maprunner_amcl first and then move_base. The map shows as below, far, far away from where Magni is, and RViz shows a map error. Please advise.
Also, I notice that Magni stops when it gets close to a wall, but when there is, say, a chair in front of it, with one leg or four legs, it ignores it, keeps going, crashes into it, and does not stop. Please advise on this also.
If I always put Magni back at the starting point where I created the map, then giving it goals works location-wise, which I'm impressed by, but it would be much better to get the above working as well.
Also, another question: do you have a demo with move_base, so it can avoid obstacles and then recalculate the path? That would be super! I would love to see that happening.
You need to set Fixed Frame to "map" in the RViz global options; there will be a pulldown. The frame of reference will then be the map. That likely explains many of your issues.
Next, sonars: move_basic is, as it says, "basic". I think it only looks at the one forward sensor, which has a 25-degree field of detection, just to stop. So any chair leg not in that center cone is missed.
The most complex thing, once you get the frame set to "map", is setting the robot pose while in map runner mode: place the mouse where the point directly under the center of the two big wheels would be. Then left-click and HOLD the mouse button until the large green arrow that shows up has been rotated to point in the direction Magni is pointing, and only then release the mouse button. The front of Magni is the end with the big wheels, which I hope you know by now.
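One more note in case clicking in RViz keeps going wrong: the 2D Pose Estimate tool is just publishing a PoseWithCovarianceStamped message on the /initialpose topic that AMCL listens to, so you can also set the starting pose from a small script. Here is a rough sketch; the x, y, and yaw numbers are placeholders you would replace with where the robot really is in your map:

```python
#!/usr/bin/env python
# Rough sketch: set AMCL's initial pose programmatically instead of with the
# RViz 2D Pose Estimate tool. The x, y and yaw values are placeholders; put in
# where the point under the center of the big wheels really is in the map.
import math
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import quaternion_from_euler

rospy.init_node('set_initial_pose')
pub = rospy.Publisher('/initialpose', PoseWithCovarianceStamped,
                      queue_size=1, latch=True)
rospy.sleep(1.0)  # give the publisher a moment to connect

msg = PoseWithCovarianceStamped()
msg.header.frame_id = 'map'
msg.header.stamp = rospy.Time.now()
msg.pose.pose.position.x = 1.0          # meters in the map frame (placeholder)
msg.pose.pose.position.y = 0.5          # placeholder
q = quaternion_from_euler(0.0, 0.0, math.radians(90.0))  # yaw the robot is facing
msg.pose.pose.orientation.x = q[0]
msg.pose.pose.orientation.y = q[1]
msg.pose.pose.orientation.z = q[2]
msg.pose.pose.orientation.w = q[3]
# Loose covariance so AMCL can still refine the guess
msg.pose.covariance[0] = 0.25                     # x variance
msg.pose.covariance[7] = 0.25                     # y variance
msg.pose.covariance[35] = math.radians(15.0)**2   # yaw variance
pub.publish(msg)
rospy.sleep(1.0)  # keep the node alive long enough for AMCL to receive it
```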
At some future point I'll get around to putting move_base in there, but it takes a lot of tweaking and I really don't have the time right now. So no promises on timing, but… someday.
@mjstn2011, 2D Pose Estimate is still not working for me. I have set the fixed frame to "map" and also set the 2D pose (see screenshot below). When I give it the goal, it keeps going to the corner. Very confused, please advise. Thanks.
Try a goal where there is absolutely nothing in the way.
Try a goal where it has to turn 90 degrees to the left and then move half a meter, for example.
move_basic can ONLY move in straight lines and cannot plan around anything.
You said things worked if you put the robot where it was at the start of map making. You have not shown where that place is in this map. Please show where the robot was, and the direction it was pointing, at the start of the initial map making.
@mjstn2011, thank you. Below is the picture, with another arrow I added showing where the robot pointed at the start of the initial map making (and things did work better that way, but ONLY in a straight line with nothing else in the way).
But that seems unrealistic; in my real-life scenario there would never be nothing in the way (I have shelves and boxes around), and path planning around an obstacle is a must. (On the left-hand side is an outdoor area that I still need to map; this area I just use for testing.)
So if there has to be absolutely nothing in the way (which is impossible in my scenario) and it can ONLY move in a straight LINE with no planning around anything at all, this will put a big HALT on me again, because I need path planning for sure (and not just a straight line with no planning around anything, or just a "stop"). For example, I need Magni to deliver something, and it is just going to stop at an obstacle (in my case, a shelf rack) and won't go any further.
I understand you are busy, but these limitations (just a straight line and no path planning around anything) are going to HALT my business use case. I would appreciate it if you could please provide a version that can do path planning around obstacles rather than just stopping, or at least, for the moment, point me in a direction where I can get Magni to start path planning (around anything) with the lidar, which is crucial for my business use case moving forward.
What you are asking for is how to set up and configure move_base, including a local costmap based on the sonar units of the Magni.
move_base is a well-used and respected package that, although complex, is used by hundreds of real ROS robots.
Your general approach would be to take the demo and focus on the map_runner mode. Take that and make a new launch file that uses move_base instead of move_basic. Leave out the costmap as your first step and get move_base to be able to go around corners of the map, for example to a different room.
I suggest you get move_base working first with no costmap, and only then move on to using a costmap, which is there to account for objects that were not in the initial map, like your cat, a person walking by, or something somebody moved into the robot's way.
These are advanced topics, and there are many people using these things. I have given you a HUGE head start; all the map making and lidar work is laid out for you in my starter example.
I will do move_base "someday" in an expansion of this demo, but I just cannot commit to that right now. I am extremely busy with many things, and doing full move_base in a demo that is suitable for any Magni user to try is a ton of work and documentation.
So I suggest you start by understanding the role of move_base in ROS navigation, using the links below. You will have a far more successful product if you know how to make move_base work best for your needs.
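To give you a feel for what you will be driving once a planner is up: the 2D Nav Goal button in RViz ultimately feeds a goal into the navigation stack, and from a script the usual way is the move_base action interface (move_basic is, as far as I know, a drop-in for that same interface). A minimal sketch with a placeholder goal position:

```python
#!/usr/bin/env python
# Minimal sketch of a navigation goal client using the move_base action
# interface. The target position here is a placeholder.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('send_one_goal')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 2.0     # placeholder goal, meters in the map
goal.target_pose.pose.position.y = 0.0
goal.target_pose.pose.orientation.w = 1.0  # face along the map x axis

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo('Goal finished with state %d', client.get_state())
```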
Always keep in mind that Ubiquity Robotics wants customers to become robot experts; we offer a platform for development of ROS-based robots to those who are driven enough to really learn about robots. If we hand everything to everyone on a plate, then true understanding is not obtained. That sounds a bit "Zen", but in fact it is for your own good, especially if you are going to make a product. "Never ship a product that you don't understand" is my motto.
Good luck, and I'll get to what you are requesting, but it will be a few months or so.
@mjstn2011, I will try to understand the articles you linked, but then I will wait for your extension with the move_base implementation. I'm not a professional programmer myself, so I don't think I will get to that high level.
Having said that, I still think this should be prioritized, as I believe it is a hugely beneficial step and worthwhile for Magni's navigation capability.
Right now, I have to click the 2D goal button many times to get Magni to a goal (a destination). Is there a way to record a path to a destination (goal) and then stop recording at the end? Next time, I could just press a button, say "Destination A", and Magni would just go to it.
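Nothing like that ships with the demo as far as I know, but the pieces are there: AMCL publishes the robot's current map pose on /amcl_pose, and goals go to the move_base-style action server, so a small script can remember named spots and send the robot back to them later. A rough sketch of the idea (the names, timing, and topics here are assumptions, not part of the demo):

```python
#!/usr/bin/env python
# Rough sketch of "named destinations": remember where the robot is standing
# under a name, then later send it back there as a navigation goal. Assumes
# AMCL is running (it publishes the pose on /amcl_pose) and a move_base-style
# action server is up. None of this is part of the demo itself.
import rospy
import actionlib
from geometry_msgs.msg import PoseWithCovarianceStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

destinations = {}  # name -> geometry_msgs/Pose, kept in memory only


def remember(name):
    """Drive the robot to a spot first, then call this to save that spot."""
    msg = rospy.wait_for_message('/amcl_pose', PoseWithCovarianceStamped)
    destinations[name] = msg.pose.pose
    rospy.loginfo('Saved destination "%s"', name)


def go_to(name, client):
    """Send the robot back to a previously saved spot."""
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose = destinations[name]
    client.send_goal(goal)
    client.wait_for_result()


if __name__ == '__main__':
    rospy.init_node('named_destinations')
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    remember('destination_a')       # save the current spot as "Destination A"
    rospy.sleep(30.0)               # drive somewhere else in the meantime
    go_to('destination_a', client)  # then send the robot back
```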