Create Global Map and SLAM

Hi

I would like to test the navigation application.

  1. Set up my raspicam:
raspicam:
    position: 'upward'
  2. Affix the fiducials - the camera can see one fiducial.
  3. Clear the global map.
  4. Place the robot under the first fiducial (ID 100).
  5. Terminal 1 - Start the application: roslaunch magni_demos simple_navigation.launch
  6. Terminal 2 - Start keyboard teleop: rosrun teleop_twist_keyboard teleop_twist_keyboard.py
  7. Drive the robot so that it observes all 4 fiducials.
  8. Create the global map!
  9. Stop the application (roslaunch magni_demos simple_navigation.launch).
  10. Move the robot under the first fiducial and start the application again.

The robot doesn’t move. I can’t understand how to navigate it using the fiducials. Could you please explain more? Thank you!

You should learn how to set up RViz and run it on a laptop, with the ROS master being the Magni.

Use RViz to verify that a map really was created with fiducials in it.
The console output on the Magni running simple_navigation.launch should also have shown lines indicating it had found and recognized your fiducials.

I think you may have found it already, but just in case: have you seen this page? https://learn.ubiquityrobotics.com/fiducials

Once you have run mapping you should see that ~/.ros/slam/map.txt contains your fiducials. If not, mapping is the issue.
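A quick, hedged way to check this from Python (the path comes from the post above; nothing is assumed about the file's format, only that a non-empty file means something was written):

```python
from pathlib import Path

def map_status(map_file: Path) -> str:
    """Report whether the fiducial map file exists and is non-empty."""
    if map_file.is_file() and map_file.stat().st_size > 0:
        return f"map found: {map_file}"
    return "no map yet - mapping is the issue"

# Path used by fiducial_slam, per the post above.
print(map_status(Path.home() / ".ros" / "slam" / "map.txt"))
```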

When I am ready to run navigation only, I set up and reboot the Magni right where I first started the initial mapping. I use tape on the floor to mark where my wheels first started.

Again, as you start navigation, verify that you can see the fiducial locations relative to the robot using RViz. RViz is powerful and really required for any navigation, especially when learning.
See this page: https://learn.ubiquityrobotics.com/rviz - it has pointers to RViz tutorials as well, which are important to work through if you have not used RViz before.

Trying to navigate and learn at the same time is basically impossible without getting RViz going.

Hi

I wanted to start visualization with RViz. Therefore I ran this command first:

sudo apt install ros-kinetic-magni-robot

Unfortunately there was the following error:
E: Unable to locate package ros-kinetic-magni-robot

Then I tried to install the package again:
sudo apt install ros-kinetic-magni-robot

The error was the same:
E: Unable to locate package ros-kinetic-magni-robot

What is your advice to overcome this problem? Please help!

RViz is run on a Linux laptop, not on the Raspberry Pi.
We have a virtual machine image that is pre-set up; it is also possible to fully set up an Ubuntu Linux 16.04 LTS installation on something like a Linux laptop. It can be argued which is 'easier'; both are involved. Most people find the virtual machine easiest, especially those not very familiar with configuring Linux.

Our page that explains this, with pointers to both methods, is here:

If you did already have Ubuntu 16.04 set up with ROS Kinetic, and that is where your 'apt install' commands failed, it may be because you did not run 'sudo apt update' prior to those commands.

If you did not even have ROS setup that too would explain the errors you saw.

Hi

I removed my ROS and installed it again, and now everything seems to be working fine. I have created a map, and using RViz and the navigation application I can follow the fiducials. I'm sending a screenshot of my test.

I have a question. The fiducials have to be placed very close to each other - up to 1 meter apart. If I have 100 square meters, I have to place a great many fiducials. Is there any way to navigate the robot more easily when the working area is that big?

The other question is related to the sonars: they don't work. When I start the navigation application using my map and place an obstacle in the way, the Magni doesn't detect it. I made the necessary configuration in my robot.yaml file:

raspicam:
    position: 'upward'

but unfortunately I think the sonars don't work.

Thank you!

Let's start with the easier one first: we found a bug, and have a fix, for use of the Pi4 with sonars.
It is covered in the forum thread: Sonar sensors not functioning

I paste it below:
Start with Pi4 and the 2020-11-07 image freshly burned.
sudo systemctl stop magni-base

cd catkin_ws/src
git clone https://github.com/UbiquityRobotics/pi_sonar
cd pi_sonar
git checkout daemon_pigpio
cd ~/catkin_ws
catkin_make

sudo systemctl enable pigpiod

Use sudo vi /etc/ubiquity/robot.yaml and, as discussed on our sonar setup page on learn, enable the sonars by commenting out the sonars: none line and uncommenting the good sonar line.
Beware: never use tabs, only spaces.
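For reference, the edit looks roughly like this (a sketch only - the exact comment text in your shipped robot.yaml may differ; 'pi_sonar_v1' is the value used later in this thread):

```yaml
# /etc/ubiquity/robot.yaml  (indent with spaces only, never tabs)

# sonars: 'none'           <- comment out this line
sonars: 'pi_sonar_v1'      # <- uncomment this line to enable the board
```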

Next, you ask about the spacing of ceiling fiducials. This is a topic I have spent a lot of time on.
It boils down to having a supported camera with a wider field of view.
Here are 2 choices; the Arducam one dates from before the Pi HQ camera was introduced:

  • 1st choice: RaspiCamHQ, 12.3MPxl, Sony IMX477 chip at 4056x3040 pixels. A 4mm C-mount lens on this yields a 95deg HFOV. You don't want to go much wider, or optical effects come into play at unacceptable levels. For reference, the standard RaspiCam and lens we ship is only 62deg HFOV (Horizontal Field Of View). I used this 4mm lens from Uctronics: SKU# U6057, 1/3 CS-mount 4mm camera lens, LS-2718CS. This camera is way better than the next choice and more 'standard'.

  • 2nd choice, which we shy away from (only if money is tight): an Arducam camera made for the Pi with an M12 lens mount. I then use a 2.3mm lens. If you feel you must use this one, contact me and I'll dig up more details.

In BOTH cases you will need a custom mounting plate adapter, and you will have to make room for the lens to stick through the Magni top plate. I have 3D-printed adapters that connect either camera to the Magni mounting holes and can get you the STL files if you have 3D printing ability.

Also in BOTH cases you would have to create a camera calibration file for the camera you choose, and you would have to modify a few launch files to configure it.
So if you are going this route, contact me again, BUT PLEASE USE A NEW FORUM THREAD and call it something like 'What higher quality camera can I use for fiducial nav'.

I have today written a new section in our learn docs, and I explain this in a new forum topic on using a different camera.

Hi

Thank you for your answer. I've installed a fresh ROS (the latest version) on the Pi4. Then I executed these lines after connecting to the Magni robot:

sudo systemctl stop magni-base

cd catkin_ws/src

Then I tried to execute this line, but I couldn't:

git clone https://github.com/UbiquityRobotics/pi_sonar

I think I need a WiFi connection, but when I'm connected to the Magni robot there is no Internet.

Could you please help?

Thank you!

You must have an internet connection to do any ‘git clone’.

Connect to the Magni using the Magni access point, whose SSID name starts with ubiquityrobot.

sudo pifi add <your-SSID> <your-password>

Then reboot. Once you have rebooted, the Magni will be on your network.

To find the Magni if you have proper network setup use ‘ssh ubiquityrobot.local’.

See this page, but the above is basically what you need. After connecting, you may need some form of network scanner to find the IP address of the Magni.
If you have already gotten and installed the OLED display, it will tell you the IP, but we don't yet have that built into our images by default.

See this: https://learn.ubiquityrobotics.com/connect_network

Hi

I followed all your instructions above and now I have the pi_sonar folder; I think it's working now. But I have the following problem: every time I try to connect to the ubiquity network, it is not possible. I keep connecting instead to the wifi network I added earlier, following your instructions. I can't connect with the phone application either, because I can't find the ubiquity network. Please help!
Thank you!

I believe, from what you have said, that your issue is you are not aware of how to connect to your Magni once you have told it to join your own wifi.

So let's say that you used pifi to add YOUR wifi network, with an SSID of 'mynetwork', so I can talk about what to do.
If the Magni connected to 'mynetwork' it will no longer present an access point for its OWN network, because there is no need. The Magni either uses the Pi wifi to present its OWN wifi Access Point (AP), OR, if it is told to join your network, it simply joins and gets an IP address from YOUR network access point (normally some wifi router that connects to the outside Internet).

So IF the Magni connects to your wifi with your unique SSID (network name), the Magni will be on your own network.

You must then find what IP address the Magni was assigned. A fairly recent change is to show this on the little OLED display, but we are not fully shipping that display yet, and our software does not automatically run it, so I would not rely on that for now.

What you then must do is take some OTHER device, such as a laptop, a Windows machine, or an Android phone, and use a tool on that device to SCAN the network and show you the IP address of the Magni.

On Windows I install and use an app called 'Advanced IP Scanner'. Once it is installed, IF your Windows machine is also on 'mynetwork', you just hit 'Scan' and look for any Raspberry Pi, or for a hostname like ubiquityrobot, to find the IP address your own router assigned to it.

If you have an Android phone, connect it to 'mynetwork' and use an IP scanner such as my favorite, 'Fing', available on the Google Play Store. This too will scan your network, IF the phone is connected to 'mynetwork'. Look for a Raspberry Pi with a name starting with 'ubiquity'.

If you connect a Linux laptop to 'mynetwork', you can use 'iwconfig' to look up the first three numbers of your IP address and then use 'arp-scan'. This is the most unfriendly method, but it does work. Say you connect your Linux laptop to the same network SSID you told pifi to join, and 'iwconfig' tells you your wifi IP starts with 192.168.1.
You would then, on that Linux box, type: sudo arp-scan 192.168.1.0/24 (I know this is MAJOR cryptic, but that in fact is how Linux is in general; geeks LOVE cryptic command-line stuff!)
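The subnet argument for arp-scan can be derived mechanically from any IP on the network. Here is a small sketch using Python's standard ipaddress module (the addresses are just example values):

```python
import ipaddress

def scan_range(local_ip: str, prefix: int = 24) -> str:
    """Return the CIDR block to hand to arp-scan, given any address
    on the network (e.g. the laptop's own wifi IP from iwconfig)."""
    net = ipaddress.ip_network(f"{local_ip}/{prefix}", strict=False)
    return str(net)

# A laptop at 192.168.1.57 would sweep the whole /24:
print("sudo arp-scan " + scan_range("192.168.1.57"))
```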

So in SOME way you find the IP address, and after that you use ssh like this (example IP):

ssh ubuntu@192.168.1.123

Hope this helps. I think I will add the above to the Learn pages for others in the future!

What I have just posted above is now added to our learn pages, in the last half of our network setup page here: https://learn.ubiquityrobotics.com/connect_network

Hi

Thank you for your assistance. I can control the Magni robot via my local wifi. My next problem is related to the sonar board: I don't know how to test it. I followed your instructions:

sudo systemctl stop magni-base

cd catkin_ws/src
git clone https://github.com/UbiquityRobotics/pi_sonar
cd pi_sonar
git checkout daemon_pigpio
cd ~/catkin_ws
catkin_make

sudo systemctl enable pigpiod

I used sudo nano /etc/ubiquity/robot.yaml and enabled the sonars by commenting out the sonars: none line and uncommenting the good sonar line. I left only this line: sonars: 'pi_sonar_v1'

Then I found the pi_sonar folder. Now I need more information on how to check whether the sonar board is working. Help me, please!

Hi

I would like to ask about the cameras:
Could you please send me the price of the RaspiCamHQ and of the Arducam camera with an M12 lens mount?
In your opinion, which camera is better for navigating the Magni robot in a wide space of more than 100 square meters? And what fiducial spacing does each camera allow?

Thank you!

I strongly suggest you browse and study our documents here: https://learn.ubiquityrobotics.com/
Almost everything you are asking about is covered there. Please make use of our documents.

The sonar board setup is here: https://learn.ubiquityrobotics.com/sonar_sensors

Verification of sonar is section 4.2 of: https://learn.ubiquityrobotics.com/verification

I do not have a fully specified camera and lens that I can just point you at and ‘guarantee’ at this time. I can share with you some thoughts and you would have to experiment.

Here is the approach I would suggest if you would like to experiment with a wider field of view.
Get a camera that we recommend, such as the Raspberry Pi HQ camera or the Arducam.

On Amazon.com, search for the C-mount camera using this string: 'raspberry pi hq camera'

For the Arducam with M12 lens mount, it is more complicated to find the specific camera. It is inferior to the C-mount one above, but it has been shown to function and allows use of cheaper M12 lenses. I suggest searching for this complex string:

Lens Board for Raspberry Pi Camera, Arducam Adjustable and Interchangeable Lens M12 Module, Focus and Angle Enhancement for Raspberry Pi 4/3/3 B+

The rather complex issue is finding a lens. The lens must, of course, screw into the C-mount or the M12 mount, depending on which camera you select.

Then the lens must be able to illuminate the entire chip; the RaspiCam HQ has a larger chip.

Then the focal length of the lens, together with the size of the chip, determines the field of view, which you want to keep under 100 degrees to avoid distortion issues.
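As a rough guide, this relation is the pinhole estimate HFOV = 2·atan(w / 2f), where w is the sensor's active width and f the focal length. This is only a geometric sketch: real wide-angle lenses distort, so measured or vendor-quoted HFOV figures (like the ones in this thread) can differ noticeably. The ~6.287 mm IMX477 sensor width below is my assumption, not a figure from this thread:

```python
import math

def hfov_deg(sensor_width_mm: float, focal_mm: float) -> float:
    """Pinhole-model horizontal field of view, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# Assumed IMX477 active width of ~6.287 mm; shorter focal length -> wider HFOV.
for f_mm in (4.0, 6.0, 12.0):
    print(f"{f_mm} mm lens -> {hfov_deg(6.287, f_mm):.1f} deg HFOV (pinhole)")
```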

For the Arducam you need what is called an M12 1/4" lens (it lights up chips of that size). You would also want a focal length of about 2.8mm; that way you get roughly 96 degrees HFOV (horizontal field of view).

I have not been able to find a lens for the RaspiCam HQ that puts light on its full, much larger chip AND gives such a wide field of view.

So if you want field of view more than low-light sensitivity and so on, your choice is the Arducam I have discussed.

Hi

I found the standard lenses offered for the RaspiCamHQ 12.3MPxl, Sony IMX477:
a 6mm CS-mount lens and a 12mm C-mount lens.
Which of the two is better in your experience?

Thank you!

Better depends on your requirements. I believe you have said you are trying to get a really wide field of view, approaching 100 deg HFOV. If that is true, I know of no lens that will get the RaspiCamHQ there, because its sensor chip is REALLY BIG. I did at one point try a 4mm lens, but it clipped the corners of the image: the light cone from the lens did not fully illuminate the large chip used for the HQ camera. A 4mm lens gets you 95 deg HFOV but clips the corners a lot; you could try that. The 6mm C-mount lens gave me only 65 deg HFOV, so it is not even better than the v2.1 RaspiCam, although it is better in lower light.

So if you do require the largest HFOV, to capture the most area and thus minimize the total number of fiducials covering your 'roaming space', you will be forced to use the Arducam camera with M12 mount and something like the 2.3mm lens, which gives you about 86 deg HFOV with a fully illuminated chip.
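To see why HFOV drives fiducial spacing: an upward-facing camera a distance d below the ceiling images a strip of ceiling roughly 2·d·tan(HFOV/2) wide. A hedged sketch (the 2 m camera-to-ceiling distance is only an illustrative assumption, and in practice a fiducial must stay well inside the frame, so the usable spacing is smaller than this width):

```python
import math

def ceiling_coverage_m(hfov_deg: float, dist_m: float) -> float:
    """Width of ceiling visible to an upward camera dist_m below it."""
    return 2.0 * dist_m * math.tan(math.radians(hfov_deg) / 2.0)

# Illustrative: camera 2 m below the ceiling, HFOVs quoted in this thread
# (stock RaspiCam, Arducam with 2.3mm lens, HQ camera with 4mm lens).
for hfov in (62, 86, 95):
    print(f"{hfov} deg HFOV -> {ceiling_coverage_m(hfov, 2.0):.2f} m of ceiling")
```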

The Arducam is an 'inferior' camera, but it offers the ability to get a wider field of view because the physical sensor area of its chip is much smaller.

My experience is there is never an ideal choice, only trade-offs; there is no 'perfect' solution here.

Hope this helps.

Hi

Thank you for your answer. I would like to ask whether this camera is suitable for large spaces:

https://www.uctronics.com/camera-modules/camera-for-raspberry-pi/arducam-imx477-12mp-ptz-camera-for-raspberry-pi-4-3b-3-and-jetson-nano-xavier-nx-ir-cut-switchable-camera-with-metal-base-and-2-digital-servos-b0167b12.html

Can I control this camera with ROS? Thank you!

I am sorry, but I am not familiar with that camera. We try to stick to cameras that have drivers for the Raspberry Pi CSI thin flat cable, which is naturally a great fit for Pi-type CPU boards.