How do I capture the motion coordinates of the Magni robot for autonomous navigation?

I would like to create an application for autonomous navigation of the robot. I have created a map. I would like to move the Magni robot using my mobile phone and the Robot Commander application, and then save some motion points for autonomous navigation. I see that move_basic is used for moving the robot. Do I need a lidar to move the robot within the map, or can I use the lidar only for map creation? I have tried to implement path planning with the maprunner and maprunner_amcl applications, but so far without success. I can control the robot using rviz and 2D Nav Goal. I see that when I use 2D Pose Estimate I can see the current position of the robot. Is it possible to take these positions and build a navigation path from them? I also see that the error during linear motion is smaller than during rotational motion. How can I overcome this?

I started to create an application using the scripts from Use A Script To Control Robot Navigation | Learn Ubiquity Robots and ROS and roslaunch magni_demos simple_navigation.launch. Is this correct, or is there another solution? I don't want to use fiducials. Thank you!

A couple of answers here.

  • Robot Commander is not a supported tool, but it has some value in showing people one way to set goals for navigation. We offer it as a starting point.

  • You must use the lidar for map creation and then later for any navigation within the already created map. The lidar is used during navigation so the robot can get a fix on where it is within the map. Determining the robot's location (pose) is a key part of lidar usage in the navigation stage.

  • You do not need to use fiducials. You can use just the lidar, as long as you have walls that the lidar can effectively see with its IR scans.

  • You can send goals to the robot with a script like the one you found, as long as move_basic is already fully running and AMCL is running on the nav stack, as described near the end of Lidar-Based Localization | Learn Ubiquity Robots and ROS.
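To make that concrete, here is a minimal sketch of sending one goal from a script. It assumes move_basic accepts goals through the standard move_base action interface (as in the tutorial mentioned above) and that AMCL is publishing the map frame; the coordinates and node name are purely illustrative. The ROS imports are deferred into the function so the small quaternion helper can be read and tested on its own.

```python
#!/usr/bin/env python
# Sketch: send a single navigation goal to move_basic via the
# standard move_base action interface. Requires a running ROS core,
# move_basic, and AMCL; topic/frame names are assumptions.
import math

def yaw_to_quaternion(yaw):
    """Convert a planar heading (radians) to the (z, w) quaternion pair."""
    return (math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def send_goal(x, y, yaw, frame="map"):
    """Send one goal pose in the map frame and wait for the result."""
    import rospy
    import actionlib
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    qz, qw = yaw_to_quaternion(yaw)
    goal.target_pose.pose.orientation.z = qz
    goal.target_pose.pose.orientation.w = qw

    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

# Typical usage on the robot (requires ROS):
#   rospy.init_node("send_single_goal")
#   send_goal(1.0, 0.5, math.pi / 2)   # illustrative coordinates
```

Note that only the (z, w) quaternion components are set; for a ground robot the roll and pitch components stay zero, so a full quaternion library is not needed.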

  • Yes, you can read your pose from rviz as you describe and later send those 'waypoints' (sometimes called 'goals') to the Magni while it is running the navigation stack described near the end of the lidar navigation page linked above.
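One way to capture those waypoints programmatically, rather than reading them off rviz by hand, is to listen to AMCL's pose estimate and save each pose to a file for later replay as goals. This is only a sketch: the /amcl_pose topic name is the AMCL default, but the JSON file format and the record-every-update behavior are assumptions you would adapt (e.g. trigger recording from a keypress or a phone button instead).

```python
#!/usr/bin/env python
# Sketch: record the robot's AMCL pose estimates as named waypoints
# and save them to a JSON file for later use as navigation goals.
# The /amcl_pose topic is AMCL's default; file format is illustrative.
import json
import math

def quaternion_to_yaw(qz, qw):
    """Recover the planar heading (radians) from the (z, w) quaternion pair."""
    return 2.0 * math.atan2(qz, qw)

def make_waypoint(name, x, y, qz, qw):
    """Build a plain-dict waypoint that serializes cleanly to JSON."""
    return {"name": name, "x": x, "y": y, "yaw": quaternion_to_yaw(qz, qw)}

def record_waypoints(outfile="waypoints.json"):
    """Append a waypoint for every AMCL pose update until shutdown."""
    import rospy
    from geometry_msgs.msg import PoseWithCovarianceStamped

    waypoints = []

    def on_pose(msg):
        p = msg.pose.pose
        wp = make_waypoint("wp%d" % len(waypoints),
                           p.position.x, p.position.y,
                           p.orientation.z, p.orientation.w)
        waypoints.append(wp)
        rospy.loginfo("recorded %s", wp)

    rospy.init_node("waypoint_recorder")
    # AMCL republishes its pose estimate here after each update.
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, on_pose)
    rospy.spin()

    with open(outfile, "w") as f:
        json.dump(waypoints, f, indent=2)

# Typical usage on the robot (requires ROS and a running AMCL):
#   record_waypoints("my_route.json")
```

The saved (x, y, yaw) triples can then be fed back through a goal-sending script one at a time to replay the route.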

  • So you are on the right path to developing an application like the one you describe, but I am not sure how it would be done with Robot Commander.

What you are describing is a very involved task, but from many of your other comments I can see you have been working to understand these things.

We are working on another, more recent approach to navigation and map making, and it will work fine with lidar as well. It is not ready to be shared right now, but we hope to have something ready perhaps by early next year.