How does fiducial navigation work, exactly?

Okay, so I just want to understand: in the case of fiducial mapping navigation, would I basically run the Magni through the environment with specific instructions as to how to navigate?
i.e. let’s say I make it go in a straight line, then turn right (by a specific angle), and then go a few steps further. Can I run this in a loop if I make it start again from the same starting position? So, in a way, I would have a preplanned path for it to follow. Is that part of fiducial navigation?

I have already read the fiducials documentation, but I’m still not sure if that is what is happening.

In short: No.

In long: Yes, but it would take some work.

I think to understand the what, why, and how, you’ll need to understand how ROS navigation stacks generally work.

Sensors (lidar, sonars, stereo camera image, etc.)

  • output: raw data, usually distances to detected points

Robot transforms (tf2)

  • output: location of sensors and other parts in relation to the robot center, usually called base_link, since it links all the parts of the robot base

Localization and mapping (SLAM) algorithms

  • input: sensor data and transforms

  • output: a map, a map transform frame, and the relation of aforementioned base_link to the map

So a SLAM algorithm will essentially tell you where base_link is in relation to all the sensor data it has seen thus far.
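As a concrete example, you can query that relationship from tf2 yourself. A minimal Python sketch, assuming the conventional map and base_link frame names:

```python
#!/usr/bin/env python
import rospy
import tf2_ros

rospy.init_node("where_am_i")

# The buffer collects transforms published by the SLAM node
# and robot_state_publisher.
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # Pose of base_link expressed in the map frame.
        t = buf.lookup_transform("map", "base_link", rospy.Time(0))
        rospy.loginfo("robot at x=%.2f y=%.2f",
                      t.transform.translation.x,
                      t.transform.translation.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        pass  # transform not published yet
    rate.sleep()
```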

Movement planners

  • input: map, transforms (including map->base_link), desired location on map

  • output: a velocity command for the robot (usually called cmd_vel, with linear speed in m/s and angular speed in rad/s) that goes directly to the motors; see the sketch after this list

These usually consist of various subnodes, global and local planners and so on. This is the most complicated part.
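To make that output concrete, cmd_vel is just a geometry_msgs/Twist message; here is a minimal hand-rolled publisher (a sketch, assuming the standard cmd_vel topic name):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import Twist

rospy.init_node("drive_forward")
pub = rospy.Publisher("cmd_vel", Twist, queue_size=1)

cmd = Twist()
cmd.linear.x = 0.2   # forward speed in m/s
cmd.angular.z = 0.0  # rotation speed in rad/s

rate = rospy.Rate(10)  # most bases expect a steady stream of commands
while not rospy.is_shutdown():
    pub.publish(cmd)
    rate.sleep()
```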


So the part of this that fiducial “nav” actually covers is these three things:

  • sensors: 3D markers located in world space around the base_link (raspicam_node, aruco_detect)

  • transforms: published by the robot by default; the only thing we need is camera -> base_link (robot_state_publisher)

  • SLAM: taking the found markers, building a map of them, and publishing the robot’s transform relative to that map (fiducial_slam)

So what you’re essentially left with is the pose of the robot in relation to the found markers. That’s all it does. As you can see, planning the robot’s movement isn’t really the job of fiducial_slam.

Now, what the Magni has for the movement planner part is the so-called move_basic. What it does is take a goal in the map (the collection of all found markers), turn the robot towards it, and drive directly forward until it reaches it.

It also watches the sonars so as not to hit anything, and simply stops if it detects an obstacle.
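For a concrete picture, the goal itself is just a geometry_msgs/PoseStamped. Here is a sketch of publishing one by hand, assuming move_basic listens on the usual /move_base_simple/goal topic (the same one rviz’s “2D Nav Goal” tool publishes to):

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

rospy.init_node("send_one_goal")
pub = rospy.Publisher("/move_base_simple/goal", PoseStamped,
                      queue_size=1, latch=True)

goal = PoseStamped()
goal.header.frame_id = "map"   # expressed in the fiducial map frame
goal.header.stamp = rospy.Time.now()
goal.pose.position.x = 1.0     # 1 m along the map x axis
goal.pose.orientation.w = 1.0  # identity rotation: face along x

pub.publish(goal)
rospy.sleep(1.0)  # give the latched message time to go out
```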

There are more advanced ways of planning movement, of course; move_base and the standard ROS navigation stack are the usual choice, but that one only supports 2D lidar data as its input.

So that’s what you’re working with.


Possible solutions

To have such a system you’d need to use move_basic and send it goals, to each of which it will move in a straight line.

What I’d do is get the robot into the starting position, then write and launch a Python node that records the goals as they are sent.

Then we could send the needed goals using rviz (which can send simple goals to move_basic and show the fiducials, so you know where the robot is in relation to them).

Once the script has recorded the goals, you’d then switch it into a “repeat” mode, where it goes over all of the recorded goals and re-sends them to move_basic. That would be best done using an action client, as sketched below.
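A rough sketch of such a record-and-replay node, assuming move_basic exposes the standard move_base action interface (the phase switch is simplified to a fixed timer here; a real node would use a trigger service or parameter):

```python
#!/usr/bin/env python
import rospy
import actionlib
from geometry_msgs.msg import PoseStamped
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

recorded = []

def record(msg):
    recorded.append(msg)
    rospy.loginfo("recorded goal #%d", len(recorded))

rospy.init_node("goal_recorder")

# Phase 1: record every goal clicked in rviz
# ("2D Nav Goal" publishes PoseStamped here).
sub = rospy.Subscriber("/move_base_simple/goal", PoseStamped, record)
rospy.sleep(60.0)  # record for one minute
sub.unregister()

# Phase 2: replay through the action client so we can wait for each
# goal to finish before sending the next one.
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

while not rospy.is_shutdown() and recorded:
    for pose in recorded:
        goal = MoveBaseGoal()
        goal.target_pose = pose
        goal.target_pose.header.stamp = rospy.Time.now()
        client.send_goal(goal)
        client.wait_for_result()
```

Using the action client for the replay (rather than just re-publishing the topic) is what lets the script wait for each goal to complete before sending the next.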


So, this form of solution would be highly impractical for outdoor navigation. Too much work required :confused:

Well how would you suppose one could do fiducial ceiling nav outside without a ceiling? Sticking fiducials onto trees and power lines? :joy:


yeah :rofl: :rofl: but in all seriousness, I was thinking of using fiducial markers kind of like visual checkpoints. What I had in mind is that the Magni would follow the lane (a bicycle path) using OpenCV, and once it reached a pedestrian crossing (traffic light), there would be a marker there telling it that it should cross the path, but that would be with forward-positioned fiducials.

Again, realistically I imagine it wouldn’t be very practical or even feasible given different weather conditions and other factors.
I am just kind of thinking out loud here; I obviously don’t have enough practical experience with this.

And again, thank you honestly for your responses, they have been very insightful!

Oh I think that should be possible too, but hard to make really reliable.

Let’s say the robot is moving along the bike lane and then spots a marker on a stop-sign-like sign on the left or right. aruco_detect will then tell you what the marker’s ID is, and you can then check a premade list as to what it means.
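As a sketch, aruco_detect publishes its detections on /fiducial_transforms, so the lookup could be as small as this (the IDs and their meanings here are made up):

```python
#!/usr/bin/env python
import rospy
from fiducial_msgs.msg import FiducialTransformArray

# Hypothetical premade list: marker ID -> meaning.
MARKER_MEANINGS = {
    49: "turn_left",
    50: "crossing_ahead",
}

def on_fiducials(msg):
    for ft in msg.transforms:
        meaning = MARKER_MEANINGS.get(ft.fiducial_id)
        if meaning:
            rospy.loginfo("marker %d seen: %s", ft.fiducial_id, meaning)

rospy.init_node("marker_lookup")
rospy.Subscriber("/fiducial_transforms", FiducialTransformArray, on_fiducials)
rospy.spin()
```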

Once you have the meaning (say, turn left), the robot can send the required movement command to move_basic (this can be done in the robot’s base_link frame, telling it to go, say, 1 m forward and turn 90° left), which overrides the lane follower and executes the rotation.
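Such a relative goal might be built like this (a sketch, assuming move_basic accepts goals expressed in the base_link frame through the move_base action interface):

```python
#!/usr/bin/env python
import math
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal
from tf.transformations import quaternion_from_euler

rospy.init_node("relative_goal")
client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "base_link"  # relative to the robot's current pose
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0          # 1 m straight ahead

# End up facing 90 degrees to the left.
q = quaternion_from_euler(0, 0, math.pi / 2)
goal.target_pose.pose.orientation.x = q[0]
goal.target_pose.pose.orientation.y = q[1]
goal.target_pose.pose.orientation.z = q[2]
goal.target_pose.pose.orientation.w = q[3]

client.send_goal(goal)
client.wait_for_result()
```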


yesssssssssssssssss, but like you said, reliability is difficult… and the application would require quite a lot of work. It’s just an idea I am exploring for my senior design. I am still unsure about how to implement navigation, as the GPS module is not very accurate…

If you can get a fiducial within the field of view of the camera in a reliable way, then aruco_detect gives you not only the distance to the center of that fiducial but also its angles off of the camera’s center normal vector.

After that you must use the transforms describing where the camera is and how it is pointed relative to the base_link axes of the robot.

If you have properly set up the camera pose in the Magni URDF file, then you need to work through the transforms and determine where this fiducial marker is relative to the Magni’s base_link. Only then can you decide how to move to place yourself in some pose relative to the fiducial marker.
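A tf2 sketch of that last step, assuming a tf frame is being published for the marker (the fiducial_49 frame name here is hypothetical):

```python
#!/usr/bin/env python
import rospy
import tf2_ros

rospy.init_node("marker_in_base_link")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rospy.sleep(1.0)  # let the buffer fill up

try:
    # Where is the marker relative to the robot base?
    t = buf.lookup_transform("base_link", "fiducial_49", rospy.Time(0))
    rospy.loginfo("marker at x=%.2f y=%.2f z=%.2f in base_link",
                  t.transform.translation.x,
                  t.transform.translation.y,
                  t.transform.translation.z)
except tf2_ros.TransformException as e:
    rospy.logwarn("marker transform not available: %s", e)
```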

Our fiducial marker usage so far (up to 2020) has focused on navigation by first making a map and then moving about within that map.

We have given a LOT of thought recently to navigating between fiducials in a much more interactive way, but that is not introduced or available yet until we get it all working, so it is not really supportable. We hope to be able to handle that sort of navigation, relative to fiducials we can see, sometime before the end of 2021.
