In short: No.
In long: Yes but it would take some work.
I think that to understand the what, why, and how, you’ll need to understand how ROS navigation stacks generally work.
Sensors (lidar, sonars, stereo camera images, etc.)
- output: raw data, usually distances to detected points
Robot transforms (tf2)
- output: location of sensors and other parts in relation to the robot center, usually called base_link, since it links all the parts of the robot base
Localization and mapping (SLAM) algorithms
- input: sensor data and transforms
- output: a map, a map transform frame, and the relation of the aforementioned base_link to the map
So a SLAM algorithm will essentially tell you where the base_link is in relation to all the sensor data it has seen thus far.
Movement planners
- input: map, transforms (including map -> base_link), desired location on the map
- output: robot velocity commands in meters per second (linear) and radians per second (angular) (usually called cmd_vel, which goes directly to the motor controller)
These usually consist of various subnodes: global and local planners and so on. This is the most complicated part.
So the parts of this that fiducial “nav” actually covers are these three things:
- sensors: 3D markers located in world space around the base_link (raspicam_node, aruco_detect)
- transforms: published by the robot by default; the only one we need is camera -> base_link (robot_state_publisher)
- SLAM: taking the found markers, building a map out of them, and publishing the robot’s transform relative to them (fiducial_slam)
So what you’re essentially left with is the pose of the robot in relation to the found markers. That’s all it does. As you can see, planning the robot’s movement isn’t really the job of fiducial_slam.
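To make that “pose in relation to the found markers” concrete, here is a toy 2D version of the transform chain involved. The numbers are made up, and real fiducial_slam works in 3D with quaternions via tf2, but the idea is the same: map -> base_link is the known map -> marker pose chained with the inverse of the observed base_link -> camera -> marker chain.

```python
import math

def compose(a, b):
    """Chain two 2D transforms (x, y, theta): parent->mid then mid->child."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + math.cos(ath) * bx - math.sin(ath) * by,
            ay + math.sin(ath) * bx + math.cos(ath) * by,
            ath + bth)

def invert(a):
    """Invert a 2D transform: child->parent from parent->child."""
    x, y, th = a
    return (-(math.cos(th) * x + math.sin(th) * y),
            math.sin(th) * x - math.cos(th) * y,
            -th)

# Known from the fiducial map: the marker's pose in the map frame.
map_to_marker = (2.0, 0.0, 0.0)
# From robot_state_publisher: camera mounted 0.1 m ahead of base_link.
base_to_camera = (0.1, 0.0, 0.0)
# From aruco_detect: marker currently seen 1.0 m in front of the camera.
camera_to_marker = (1.0, 0.0, 0.0)

# map->base_link = map->marker * inv(base_link->camera * camera->marker)
map_to_base = compose(map_to_marker,
                      invert(compose(base_to_camera, camera_to_marker)))
print(map_to_base)  # the robot sits at x = 0.9 in the map frame
```

That resulting map -> base_link pose is exactly what fiducial_slam publishes on tf; everything downstream (the planner) only consumes that.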
Now, as for the movement-planner part, what the Magni ships with is the so-called move_basic. It takes a goal in the map (the collection of all found markers), turns the robot towards it, and drives straight forward until it reaches it.
It also watches the sonars so as not to hit anything, and simply stops if it detects an obstacle.
There are more advanced ways of planning movement, of course; move_base and the standard ROS navigation stack are the usual choice, but they only support 2D lidar data as input.
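That turn-then-drive behaviour can be sketched in a few lines of plain Python. This is only an illustration of the control logic, not move_basic’s actual code; the tolerances and speed limits below are made-up, and a real node would publish the returned pair as a geometry_msgs/Twist on cmd_vel.

```python
import math

def plan_step(pose, goal, yaw_tol=0.05, dist_tol=0.05,
              max_ang=0.5, max_lin=0.3):
    """Return one (linear, angular) velocity pair, move_basic style:
    rotate in place towards the goal first, then drive straight at it."""
    x, y, yaw = pose
    gx, gy = goal
    dist = math.hypot(gx - x, gy - y)
    if dist < dist_tol:
        return (0.0, 0.0)                 # goal reached: stop
    heading = math.atan2(gy - y, gx - x)
    # wrap the heading error into [-pi, pi]
    err = math.atan2(math.sin(heading - yaw), math.cos(heading - yaw))
    if abs(err) > yaw_tol:
        return (0.0, max(-max_ang, min(max_ang, err)))  # turn in place
    return (max_lin, 0.0)                 # aligned: drive forward

# Facing +x with the goal off to the left: first command is pure rotation.
print(plan_step((0.0, 0.0, 0.0), (1.0, 1.0)))   # (0.0, 0.5)
# Already aligned with the goal: drive straight ahead.
print(plan_step((0.0, 0.0, 0.0), (2.0, 0.0)))   # (0.3, 0.0)
```

The sonar behaviour would just be an extra check before returning: if any range reading is under a threshold, return (0.0, 0.0).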
So that’s what you’re working with.
Possible solutions
To have such a system you’d need to use move_basic and send it goals, which it will move to in a straight line.
What I’d do is get the robot into the starting position, then write and launch a Python node that records the goals as they are sent.
Then you could send the needed goals using rviz (which can send simple goals to move_basic and show the fiducials, so you know where the robot is in relation to them).
Once the script has recorded the goals, you’d then set it into a “repeat” mode, where it goes over all of the recorded goals and re-sends them to move_basic. That would be best done using an action client.
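The record/repeat logic itself is simple enough to sketch without any ROS dependencies. This is a hypothetical minimal version: in a real node, record() would be a subscriber callback on whatever topic rviz publishes the goals to, and replay() would hand each goal to an actionlib client and wait for the result before sending the next one.

```python
class GoalRecorder:
    """Core record/repeat logic for the proposed Python node."""

    def __init__(self):
        self.goals = []
        self.recording = True

    def record(self, pose):
        """Store a goal as it comes in (in a real node: subscriber callback)."""
        if self.recording:
            self.goals.append(pose)

    def replay(self, send):
        """Switch to repeat mode and re-send every recorded goal in order.
        `send` stands in for the action client's send-goal-and-wait call."""
        self.recording = False
        for pose in self.goals:
            send(pose)

rec = GoalRecorder()
for p in [(1.0, 0.0), (1.0, 1.0), (0.0, 0.0)]:
    rec.record(p)           # goals arriving while you drive it via rviz
sent = []
rec.replay(sent.append)     # "repeat" mode: re-send them to move_basic
print(sent)
```

The reason an action client is the right tool for replay() is exactly the waiting part: it tells you when move_basic has finished (or aborted) a goal, so the node knows when it is safe to send the next one.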