How to get the position of the Magni faster when using fiducial_follow.launch

Hello

I would like to realize the following task:

  1. Using fiducial_follow.launch, I would like the Magni robot to follow me.
  2. Save the trajectory of the motion.
  3. Have the robot execute the saved trajectory on its own.

For the task mentioned above I mounted the camera facing forward. I started the fiducial_follow.launch application for testing. The Magni robot follows me quite well. It is a little bit difficult after turning to the right or to the left, so I have to be more careful.

To save the trajectory of motion I have to record the consecutive positions of the robot during the movement. Therefore I created the following test application. It simply reads the new positions of the robot:

#!/usr/bin/env python
import rospy
from nav_msgs.msg import Odometry
import time

# Last known position of the robot in the odom frame
x = 0.0
y = 0.0

def poseCallback(pose_message):
    global x
    global y
    x = pose_message.pose.pose.position.x
    y = pose_message.pose.pose.position.y

def pose_subscribe():
    global x, y
    # Prints the latest position as fast as the loop can run
    while not rospy.is_shutdown():
        print('X, Y: {} {}'.format(x, y))

if __name__ == '__main__':
    rospy.init_node('follow', anonymous=True)
    position_topic = '/odom'
    pose_subscriber = rospy.Subscriber(position_topic, Odometry, poseCallback)

    time.sleep(2)
    pose_subscribe()  # blocks until shutdown, so rospy.spin() is not needed here

After testing I saw that the display of the new X and Y positions updates very slowly. My goal is that during the follow-me procedure the new positions are saved quickly and correctly. How can I improve this? Could you please advise me?
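To make it concrete, the kind of logging I have in mind is to write each new position to a file directly in the odometry callback instead of printing in a loop. This is only a sketch; the file name trajectory.csv is just an example:

#!/usr/bin/env python
# Sketch: log every /odom position to a CSV file from within the callback.
# The file name 'trajectory.csv' is only an example.
import rospy
from nav_msgs.msg import Odometry

outfile = None

def poseCallback(pose_message):
    # Write a timestamped x,y sample for every odometry message received
    p = pose_message.pose.pose.position
    t = pose_message.header.stamp.to_sec()
    outfile.write('%.3f,%.3f,%.3f\n' % (t, p.x, p.y))

if __name__ == '__main__':
    rospy.init_node('odom_logger', anonymous=True)
    outfile = open('trajectory.csv', 'w')
    rospy.Subscriber('/odom', Odometry, poseCallback)
    rospy.spin()  # the callback does all the work; no print loop needed
    outfile.close()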

The practical problem you will face when you get to step 3 is that you will be without any form of navigation based on lidar or localized fiducials. For now, let us assume you do get the full path in some way yet to be determined.

The problem with step 3 is that it sounds like you will be expecting odometry alone to get the robot accurately from the starting point to the ending point. The simple odometry frame is not likely to reproduce the assorted turns and target positions with much accuracy, because wheel odometry accumulates error over distance and especially through turns.

What you are asking for in step 2 is some form of precise path, but step 2 will only give you samples in time of the points where fiducial follow was making corrections to the path.

It would be a fun project, but just realize that your final navigation, based on spotty data as well as the inherent errors of odometry, is not likely to be very precise.

Anyway, back to your question about step 2: I don't know of a way to obtain the path followed, as we do not 'record' that in our software so far. To see something reasonably close to the odometry path, you could capture the live odom-to-base_link transform while you do your test, for example with the command below. Later you could extract the x and y positions and plot them. That would show you the approximate path, with the inherent errors of wheel odometry, but you may find it to be a good step 2.

rosrun tf tf_echo odom base_link  >  myrouteInOdomFrame.txt
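To extract and plot the positions afterwards, something along these lines should work. It is only a sketch that assumes the usual tf_echo output lines of the form '- Translation: [x, y, z]' and that matplotlib is installed:

#!/usr/bin/env python
# Sketch: extract x,y from the tf_echo capture and plot the approximate path.
# Assumes tf_echo output lines of the form "- Translation: [x, y, z]".
import matplotlib.pyplot as plt

xs, ys = [], []
with open('myrouteInOdomFrame.txt') as f:
    for line in f:
        line = line.strip()
        if line.startswith('- Translation:'):
            # e.g. "- Translation: [1.234, 0.567, 0.000]"
            values = line.split('[')[1].rstrip(']').split(',')
            xs.append(float(values[0]))
            ys.append(float(values[1]))

plt.plot(xs, ys, '.-')
plt.xlabel('x (m)')
plt.ylabel('y (m)')
plt.title('Approximate path in the odom frame')
plt.axis('equal')
plt.show()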

As for step 3, that will be really difficult to do with accuracy without real navigation references.

Hello

Thank you for your answer. I completely agree with you regarding step 3. I have an RPLidar for navigation. In step 2 I can record the X and Y positions of the robot, but this information may not be enough for the correct realization of step 3. I think I have to store the RPLidar data every 20 cm, for example. From the stored RPLidar data I could then calculate the displacement of each point, which would give the robot the possibility to execute the same trajectory on its own. I am worried about the orientation of the robot. I think it would be better to use only part of the RPLidar's range. The RPLidar covers 360 degrees; 90 degrees in front of the robot could be enough. I am not sure this information will be sufficient for the rotation of the robot. I have also realized that using the range in front of the robot will not be possible, because the fiducial will be in front of the Magni. Could you please give me advice on how to realize this task using the RPLidar?
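To make my idea more concrete, here is a rough sketch of the recorder I have in mind. The topic names /scan and /odom, the 20 cm step and the choice of sector indices are only guesses that I would still have to adapt to my setup:

#!/usr/bin/env python
# Sketch: store a slice of the RPLidar scan every 20 cm of odometric travel.
# Topic names (/scan, /odom), the 20 cm step and the sector indices are
# assumptions that have to be adapted to the real setup.
import math
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import LaserScan

SAMPLE_DISTANCE = 0.20        # metres travelled between stored samples
last_x, last_y = None, None
latest_scan = None
samples = []                  # list of (x, y, sector_ranges); would be saved to disk in a real node

def scanCallback(scan):
    global latest_scan
    latest_scan = scan

def odomCallback(odom):
    global last_x, last_y
    p = odom.pose.pose.position
    if last_x is None:
        last_x, last_y = p.x, p.y
        return
    if math.hypot(p.x - last_x, p.y - last_y) >= SAMPLE_DISTANCE and latest_scan is not None:
        # Keep only a quarter of the ranges array as the stored sector.
        # Which physical angles these indices cover depends on the scan's
        # angle_min and on how the lidar is mounted.
        n = len(latest_scan.ranges)
        sector = list(latest_scan.ranges[3 * n // 8 : 5 * n // 8])
        samples.append((p.x, p.y, sector))
        last_x, last_y = p.x, p.y

if __name__ == '__main__':
    rospy.init_node('scan_recorder', anonymous=True)
    rospy.Subscriber('/scan', LaserScan, scanCallback)
    rospy.Subscriber('/odom', Odometry, odomCallback)
    rospy.spin()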

If you have the lidar, your goals are far more likely to be achievable. I did not know you had that. I have an RP-Lidar and I like it a lot; it is a very good price and works well. I have tied my RP-Lidar directly into a full 'navigation stack' with great results, but I have also done other lidar things with my own code rather than a ROS navigation stack.

Yes, as you say, determining the full pose, which is X, Y and angular Z rotation, is the problem here in your case. I have used AMCL to determine pose, but that requires full up-front 'mapping' of an area, so it may be more than you want to do for this project.

We are also working on a yet-unreleased type of mapping that I find very promising, so watch for us to announce it in the next three months or so (my guess). This new form of navigation will be very useful for a new application we are also working on. This should be an exciting time in something like two to three months from now. Stay tuned to this forum for any announcement!

With a lidar your goals seem ‘doable’. Thanks for explaining that key point.