KRYTN the coffee robot

Thanks @mjstn2011
I’ve gotten a camera calibration; however, even though I have configured the raspicam node to use it, the image is still quite distorted. I’d like to do rectification, but I can’t seem to figure out how to configure raspicam so that it will rectify the image. How have you handled this with your arduino lens?

The reason I think rectification isn’t happening is that I looked at the output of the camera_info topic, which shows the “do_rectify” flag is false:

header: 
  seq: 2042
  stamp: 
    secs: 1599664091
    nsecs: 225015062
  frame_id: "raspicam"
height: 960
width: 1280
distortion_model: "plumb_bob"
D: [-0.517013, 0.306511, -0.003678, 0.001355, 0.0]
K: [1227.718126, 0.0, 641.891697, 0.0, 1225.758794, 523.811004, 0.0, 0.0, 1.0]
R: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
P: [1041.5625, 0.0, 645.045935, 0.0, 0.0, 1121.064575, 529.235029, 0.0, 0.0, 0.0, 1.0, 0.0]
binning_x: 0
binning_y: 0
roi: 
  x_offset: 0
  y_offset: 0
  height: 0
  width: 0
  do_rectify: False

I don’t believe you can see the flattened image that the navigation software uses for the fiducial math without extra steps in the image path. I have asked our expert to reply to your question. I use the corrected data in the code, but I have not looked into how to get a processed ‘flattened’ frame to view as a ‘human’ myself.

Thanks Mark!

Ah ok, so you’re saying that as long as the calibration reported by camera_info is correct, the aruco_detect software will use it when localising the ArUco markers?

John

Hi @johnnyv,

Short answer up front: We don’t rectify the image, but aruco_detect will work correctly as long as the camera_info is correct.

Longer answer: Rectification isn’t really necessary for most things that you might want to do in a single camera system. It is significantly more computationally efficient to just transform points/lines individually through the camera matrix, which is what camera_info provides. Of course you can use these same parameters to rectify the image, getting rid of many of the distortions a lens creates. But this process still leaves out parameters like the focal length. So even with a rectified image you will need to have a camera_info that correctly models the “lens” parameters of the camera that could have taken that rectified image.
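As a minimal sketch of what I mean (my illustration here, not aruco_detect’s actual code), assuming OpenCV and using the K and D values from the camera_info you posted:

import numpy as np
import cv2

# K and D copied from the camera_info message above
K = np.array([[1227.718126, 0.0, 641.891697],
              [0.0, 1225.758794, 523.811004],
              [0.0, 0.0, 1.0]])
D = np.array([-0.517013, 0.306511, -0.003678, 0.001355, 0.0])

# Map a detected (distorted) pixel to its ideal, undistorted location.
# Passing P=K keeps the result in pixel coordinates.
pixel = np.array([[[800.0, 600.0]]], dtype=np.float64)
undistorted = cv2.undistortPoints(pixel, K, D, P=K)
print(undistorted)  # where that pixel would land in a rectified image

This is the cheap direction: only the handful of detected points get transformed, rather than every pixel of every frame.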

Now if you really want a rectified image for some other reason, maybe to make remote teleop easier, or maybe to make certain types of computer vision simpler, ROS has nodes that can do this for you given the original image and the camera_info. More info here: http://wiki.ros.org/image_proc
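The usual pattern is to run image_proc inside the camera’s namespace (something like ROS_NAMESPACE=raspicam rosrun image_proc image_proc) and then view image_rect_color. If you’d rather see what it is doing, here is a rough Python sketch of the same undistortion step; the topic names /raspicam/image_raw and /raspicam/camera_info are my assumptions, so adjust to match your launch files:

#!/usr/bin/env python
import numpy as np
import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image, CameraInfo

bridge = CvBridge()
calib = {}  # filled in once camera_info arrives

def info_cb(msg):
    calib['K'] = np.array(msg.K).reshape(3, 3)
    calib['D'] = np.array(msg.D)

def image_cb(msg):
    if 'K' not in calib:
        return  # no calibration yet
    raw = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    rect = cv2.undistort(raw, calib['K'], calib['D'])
    pub.publish(bridge.cv2_to_imgmsg(rect, encoding='bgr8'))

rospy.init_node('simple_rectifier')
pub = rospy.Publisher('image_rect', Image, queue_size=1)
rospy.Subscriber('/raspicam/camera_info', CameraInfo, info_cb)
rospy.Subscriber('/raspicam/image_raw', Image, image_cb)
rospy.spin()

Note this undistorts every pixel of every frame, which is exactly the per-frame cost I mentioned avoiding above.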

Hope that helps! Looks like you are making good progress on this project, I love seeing it come together.

Rohan

Thanks @rohbotics! Cool, ok, I won’t worry about trying to get the rectification to work; I’ll just check that the calibration reported by camera_info is the correct one.

Cheers,

John

A bit off track from the main fiducial issues, but I wanted to ask about lidar integration.
What is the current state of the rplidar-based mapping and amcl nav modes?

Does KRYTN just use the RPLidar A1M8 type units from Slamtec, as seen on Amazon?
I see you have forked Slamtec/rplidar_ros; is that to have stable code, or has it been customized?

Thanks,
Mark

Hi @mjstn2011,

No, I didn’t really need to fork the Slamtec/rplidar_ros package. I think all I did in the end was copy the launch file.

Currently it works pretty well: I’ve got the fiducial slam doing localisation, while the RPLidar A1M8 unit handles local path planning and obstacle detection. My setup is a bit fiddly; my repo instructions describe how to set things up so that the lidar map and the fiducial slam map share the same origin, which makes the maps align reasonably well for visual debugging purposes.
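A minimal sketch of the map-alignment idea (the frame names map and fiducial_map here are just placeholders, not necessarily what the repo uses): if both maps are built from the same origin, a single identity static transform is enough to overlay them in RViz:

#!/usr/bin/env python
import rospy
import tf2_ros
from geometry_msgs.msg import TransformStamped

rospy.init_node('map_aligner')
t = TransformStamped()
t.header.stamp = rospy.Time.now()
t.header.frame_id = 'map'           # lidar map frame
t.child_frame_id = 'fiducial_map'   # fiducial_slam map frame
t.transform.rotation.w = 1.0        # identity: same origin and orientation

broadcaster = tf2_ros.StaticTransformBroadcaster()
broadcaster.sendTransform(t)
rospy.spin()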

I also just posted the latest on KRYTN, this time a call for people to interview, as I’m going to focus on the human side of this project for a bit.

Thanks for the reply. As an FYI, I have ‘pulled the trigger’ and am buying an RPLidar for my own robots, as I am one lidar short and it is also a better-supported lidar than what I use now. I use the Neato XV-11 lidar, which can be had for half the price on eBay and is extremely common.

Here at Ubiquity Robotics our lidar of choice at this time is the Sick TIM551, but it costs ‘a little bit’ more :rofl:. The Sick is an industry standard with certifications, is waterproof for outdoor use, and so on, so it is the one we have to recommend for companies doing serious robots that use our Magni base.

Back on the topic of the RPLidar. I see from your picture that the pulley faces forward. So for your object avoidance, the lidar value halfway in, around index 180, is ‘forward’. This indeed makes the math easier for figuring out whether something is to the left, to the right, or straight ahead (no crossover point up front).

For my Neato lidar, and I confirmed this today, the rotation is CCW, while the RPLidar rotates CW as seen from the top; both have that crossover point from index 0 to index 359 on the side opposite the pulley. For a prior bot of mine I set the rotation in the URDF to 0,0,0 when the pulley was towards the back of the robot. So I believe for nav and so on, +x is at the crossover point, the end opposite the pulley.

You said you are using the lidar basically for the ‘costmap’, i.e. object avoidance. So for KRYTN I think lidar value 180 is straight ahead, and as mentioned that makes sense: no crossover point in front.

I mention this just as an observation: if at some later point you do nav on the lidar, keep in mind your lidar Z rotation is 180 degrees. A little sketch of what I mean is below.
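Here is a quick sketch of the index math (just my illustration, not code from the demo). It assumes the driver reports a full 360-sample scan with angle_min = 0; with a mounting yaw of pi (pulley forward), sample 180 comes out pointing along the robot’s +x, i.e. straight ahead:

import math

def scan_index_to_robot_bearing(i, angle_min, angle_increment, mount_yaw):
    """Bearing (rad) of scan sample i in the robot frame, wrapped to [-pi, pi)."""
    bearing = angle_min + i * angle_increment + mount_yaw
    return (bearing + math.pi) % (2.0 * math.pi) - math.pi

# 360-sample scan starting at angle 0, lidar mounted pulley-forward (yaw = pi)
print(scan_index_to_robot_bearing(180, 0.0, 2.0 * math.pi / 360.0, math.pi))
# -> ~0.0 rad: sample 180 is straight ahead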

I am working on a demo based on the RPLidar going from full map making to nav, and FYI I will share that with you later. It is not quite ready, as it has some issues with move_base in final nav and 2D goal setting, so I will only release it once it fully runs from map making all the way to autonomous pose goal acquisition.

Cheers,
Mark

Hi @mjstn2011, interesting, I think that the 0-degree setting for the rplidar is with the pulley facing forward. My local path planning seems to work, and if you look at my modification to the xacro file below, you can see that I don’t do any rotation in the z-axis:

<!-- rplidar -->
<link name="rplidar">
  <visual>
    <geometry>
      <cylinder length="0.06" radius="0.038"/>
    </geometry>
    <origin rpy="0 0 0" xyz="0 0 0"/>
    <material name="near_black"/>
  </visual>
</link>
<joint name="rplidar_joint" type="fixed">
  <parent link="base_footprint"/>
  <child link="rplidar"/>
  <!-- <origin xyz="-0.047 0 0.3" rpy="0 0 0"/> -->
  <!-- moved in x by 2cm, up in z by 2.5cm -->
  <origin xyz="-0.027 0 0.325" rpy="0 0 0"/>
</joint>

You can check out my repo for a currently working example, although note that I am fusing the fiducial pose estimation with an amcl node driven by the lidar for global path planning.

Great stuff Johnny. I will indeed be looking. I’ll keep my lidar as it is now; it is all sorted and mounted. I found that the rplidar actually rotates backwards compared to my Neato lidar, but the scan points correlate once the Z rotation is proper.

Hey robot people,

The latest KRYTN update is up:

Nicely done! You had all your hardware ready (only a small issue, I guess, with what you called ‘wrong screws’, but that is so classic). The video moves along nicely, and good choice of music.

The raspicam HQ will of course require translation and rotation changes to the robot model, but that is to be expected. I would have been tempted to print the mount in 3 pieces:

  • Main box to hold the lidar. I would print it upside down so the top ends up smooth.
  • Since the printer has a limited work area, I would print each of the flat arms separately and attach them to the box. They would print right side up, because you want the top surface that holds the camera, with its holes, to grow upward from the flat base that sits on the Magni top plate.

Hi @johnnyv, @mjstn2011

Looks really awesome!

do you mind sharing the STL of the 3D-printed mount as well? I would like to mount that onto my Magni for testing. I tried searching on GitHub but couldn’t find it… thanks

Be aware, inmoov, that my new demo code assumes the lidar is mounted with the pulley facing the back of the robot. What this means is that the Z rotation will be off by 3.1415 if you use the print discussed in this thread. There is also a little bit of translation offset to fix in X, but that may be neglected for a ‘rough’ test.

@mjstn2011
ok, thanks for letting me know this important information. So where can I get the STL for this mount, please?
thanks

thanks @inmoov and @mjstn2011!

This was my first attempt at a video with music, so I think it went well, although there are a few things I would change if I were doing it again.

You are right, I had to modify the orientation.

@inmoov, the mount is designed to also have the rpcam HD mounted on it. I think it blocks the upward-facing hole for the camera, so depending on your setup you might need to modify it. If you do modify it, and take @mjstn2011’s suggestion of separating it into 3 pieces, it would be cool if you could share your changes back to me!

I’ve uploaded the STL file to the repo; you can find it here: https://github.com/johnny555/coffee_robot/blob/master/urdf/Magni%20Mount%20V3%20-%20finali.stl

Hi @johnnyv,

Thank you so much for sharing the STL; the mount looks awesome.

I still haven’t decided on my final setup yet; I am still trying things out, and your project really inspires me, keep it up! :+1: I will definitely share if I end up with anything different. (Besides, I’m using a Kinect as well, and am still trying to get my head around it.)

Also out of curiosity: can your KRYTN coffee robot avoid obstacles (with the RPLidar, I guess) and instantly work out a new path towards the destination? (I have seen the costmap being mentioned.)

Thank you,
cheers

@mjstn2011, you mentioned that your “new demo code assumes the lidar is with the pulley facing the back of the robot”. Is there a reason the lidar is mounted with the pulley facing the back of the robot and not the front? And would you have a diagram or a photo showing how the RPLidar A1 is installed on top of the Magni, and the exact angle it is facing? I just want it installed correctly so it will work with your demo smoothly. thanks

re obstacles: It’s “ok” at avoiding obstacles… I think I need to tune the path planner better.

Basically, if the obstacle is static and in the global map, it seems to be able to path around it. But if the obstacle is moving or not in the global map, then at best it will get stuck and wait for it to move out of the way, or at worst strangely turn into the obstacle and get even more stuck.