Hi,
I have an RPLidar A1 and would like to attach it to my Magni Silver. How could I do so?
Thanks!
As for myself, I have not used the RPLidar product, but I have connected a Neato XV-11 lidar both to the navigation stack and to code that uses the lidar directly for object detection by looking at its output topic.
Your question is rather open-ended so I will try to answer based on different needs you may have.
So the basic steps, which I think apply to the RPLidar as well, are these:
Have a compatible hardware interface to your lidar. In my case this is a USB-to-serial dongle set up for the proper baud rate of the lidar I use; my XV-11 connects through a little lidar-to-serial interface board (specific to the XV-11, so not applicable in this case).
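Before involving ROS at all, you can sanity-check that interface with a few lines of Python. This is only a sketch: /dev/ttyUSB0 is an assumed device name, 115200 is the A1/A2 baud rate, and note that some lidars stream as soon as they spin (the XV-11 does) while others, like the RPLidar, wait for a start command from the driver, so an empty read does not necessarily mean a wiring problem.

#!/usr/bin/env python
# Sanity check: can we open the lidar's serial port and see any bytes?
# Assumptions: the adapter enumerates as /dev/ttyUSB0 and runs at
# 115200 baud (RPLidar A1/A2); adjust both for your hardware.
import serial

port = serial.Serial('/dev/ttyUSB0', 115200, timeout=1)
data = port.read(64)  # read up to 64 bytes, or give up after 1 second
print('read %d bytes' % len(data))
port.close()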
Get the ROS node that supports your specific lidar, something like this:
git clone https://github.com/Slamtec/rplidar_ros.git
Configure this node to use the serial port you are using, or whatever other interface the lidar's hardware requires. Whatever guide you follow, stick to ROS Kinetic, as that is what our images use for Magni; again, I have not used the RPLidar, nor has the team as best I know.
Step 3 can be 'easy' if you just want to see objects, or very complex if you want full mapping of your environment. If you just want the array of lidar points (typically 360, one per degree, in my case, but your lidar will have its own specs) for object avoidance or seeking rather than full navigation, you set up a ROS node that subscribes to the topic the driver publishes, usually named scan with messages of type LaserScan. You then pull in the ranges array and inspect the distance at each degree, or whatever resolution that lidar supports; there is a minimal sketch of such a subscriber below.
I found this page that discusses the RPLidar; I cannot vouch for it, as I use a different lidar, but it seems believable: https://medium.com/robotics-with-ros/installing-the-rplidar-lidar-sensor-on-the-raspberry-pi-32047cde9588
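Here is the kind of subscriber I mean, a minimal sketch rather than tested code; the topic name scan and the sensor_msgs/LaserScan type are the common driver defaults, so verify with 'rostopic list' what your driver actually publishes.

#!/usr/bin/env python
# Minimal sketch: watch the lidar topic and report the nearest return.
# Assumes the driver publishes sensor_msgs/LaserScan on 'scan'.
import rospy
from sensor_msgs.msg import LaserScan

def on_scan(msg):
    # msg.ranges holds one distance per angular step, starting at
    # msg.angle_min and spaced msg.angle_increment radians apart.
    valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
    if valid:
        rospy.loginfo('nearest object: %.2f m', min(valid))

rospy.init_node('scan_watcher')
rospy.Subscriber('scan', LaserScan, on_scan)
rospy.spin()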
Let me know if you are asking about the full ROS navigation stack using the lidar.
Thanks,
Mark
I'm using an RPLidar A3. It runs well and publishes on topic /scan, but it does not appear on the transform tree. How do I get it on the tf tree so I can use it? I figured I could add the transform to the URDF file, but I can't find the file for some reason. Can you drop a hint? Thanks!
The URDF file is in our magni_robot repository. You can git clone that into catkin_ws/src, or you can modify the installed magni.urdf.xacro directly as root.
Be aware that if the syntax gets messed up, your main clue will be that nodes don't start correctly, so make a copy before editing.
Repository:
cd ~/catkin_ws/src
git clone https://github.com/UbiquityRobotics/magni_robot
cd ~/catkin_ws
catkin_make -j 1
URDF in the repository: magni_robot/magni_description/urdf/magni.urdf.xacro
URDF on our image (I recommend you make the mods in your cloned magni_robot repository instead):
/opt/ros/kinetic/share/magni_description/urdf/magni.urdf.xacro
In the URDF file will be the location of your lidar relative to your robot's base_link in some way.
I have a Neato lidar in my own URDF (not a Magni product but my own robot, so just treat this as ideas you will need to adapt in one way or another).
No matter how many levels of links you go through from one thing to another, there has to be some eventual route back to base_link so the ROS nav stack can know how your lidar is positioned.
In my case it is very indirect: 'platform' hangs off base_link, then 'upper_platform' has 'platform' as its parent. upper_platform is a second level where I place my lidar, and finally neato_lidar has 'upper_platform' as its parent.
So below is just the FINAL segment, where the neato_lidar link joins 'upper_platform'. This can be a bit tricky to think about, but once you are used to it, what I said will make sense. For what it is worth, here is the final linkage for my case; you will somehow have to fix your lidar to base_link, be it directly or through different physical parts of your model.
(Edited the text to be preformatted so it is visible.)
<link name="neato_lidar">
<visual>
<geometry>
<cylinder length="0.06" radius="0.038"/>
</geometry>
<origin rpy="0 0 0" xyz="0 0 0"/>
<material name="near_black"/>
</visual>
</link>
<joint name="up_plat_to_neato_lidar" type="fixed">
<parent link="upper_platform"/>
<child link="neato_lidar"/>
<origin xyz="0.01 0 .0305"/>
</joint>
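As an aside, while experimenting you can also publish a fixed frame like this from a small node instead of editing the URDF. Below is a sketch using tf2's static broadcaster, reusing the frame names and offsets from my snippet above; this is not how the Magni URDF does it, just a quick way to test a placement before committing it to the model.

#!/usr/bin/env python
# Sketch: publish the upper_platform -> neato_lidar transform once,
# latched, instead of defining it in the URDF. Frame names and offsets
# are taken from the URDF fragment above.
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('lidar_static_tf')
br = tf2_ros.StaticTransformBroadcaster()

t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'upper_platform'  # parent link
t.child_frame_id = 'neato_lidar'      # child link
t.transform.translation.x = 0.01
t.transform.translation.z = 0.0305
t.transform.rotation.w = 1.0          # identity rotation

br.sendTransform(t)
rospy.spin()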
I've had a go at this, and am progressing.
I set my lidar on the top plate on some foam, so the section I put into the URDF looks like this:
<!-- rplidar -->
<link name="rplidar">
  <visual>
    <geometry>
      <cylinder length="0.06" radius="0.038"/>
    </geometry>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <material name="near_black"/>
  </visual>
</link>

<joint name="rplidar_joint" type="fixed">
  <parent link="base_link"/>
  <child link="rplidar"/>
  <origin xyz="-0.047 0 0.3" rpy="0 0 0"/>
</joint>
Roughly speaking, the lidar is 30 cm above base_link, and the scan's origin is 4.7 cm behind it.
I then had to modify the rplidar.launch file from the rplidar_ros package to use rplidar as the frame it publishes in, like so:
<launch>
  <node name="rplidarNode" pkg="rplidar_ros" type="rplidarNode" output="screen">
    <param name="serial_port" type="string" value="/dev/ttyUSB0"/>
    <param name="serial_baudrate" type="int" value="115200"/><!-- A1/A2 -->
    <!--param name="serial_baudrate" type="int" value="256000"--><!-- A3 -->
    <param name="frame_id" type="string" value="rplidar"/>
    <param name="inverted" type="bool" value="false"/>
    <param name="angle_compensate" type="bool" value="true"/>
  </node>
</launch>
This resulted in being able to see the lidar scans in RViz along with the fiducials.
The next step is to figure out how to get maps and navigation working, at least with a local path planner.
Your setup relative to base_link seems OK based on your description of the distances in X and Z.
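If you want to double-check that the chain is actually wired up, here is a small sketch using tf2 that looks up the transform from base_link to your rplidar frame; with the URDF values you posted it should print roughly x=-0.047, z=0.300. The node name is made up; the frame names come from your snippet.

#!/usr/bin/env python
# Sketch: confirm the lidar is on the tf tree by looking up
# base_link -> rplidar and printing the offset.
import rospy
import tf2_ros

rospy.init_node('check_lidar_tf')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)  # fills the buffer in the background

# Wait up to 3 seconds for the transform to become available.
t = buf.lookup_transform('base_link', 'rplidar', rospy.Time(0), rospy.Duration(3.0))
tr = t.transform.translation
print('lidar offset from base_link: x=%.3f y=%.3f z=%.3f' % (tr.x, tr.y, tr.z))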
You have motivated me to offer up some very high-level notes and my launch files for one of my robots with the XV lidar. These leave huge holes you would have to fill in, BUT they offer an overall way to run the system, both for map making and for later navigation within the map.
I am sorry I am not 'writing a book' or a full tutorial, as that would take days. Please consider these as coming from my own personal files; they are not associated with Ubiquity Robotics at all, but are perhaps helpful to people willing to dig in and use this high-level ROS nav info as pointers or tips while researching ROS navigation, which you may have already done.
So for what it is worth: https://github.com/mjstn/ros_bits/tree/master/launch/droidBotNavExample
Cheers,
Mark
Thanks Mark, I’ll check out your repo.
No need to write a book mate, these comments are enough to keep going.
While we're trading GitHubs, here is the repo I'm currently trying to collect all my stuff in; maybe with our powers combined we can become Captain Planet!
John
Looks like ‘Captain Planet’ has already harnessed the power of the very early TurtleBot proto 0.1