Is it possible to run the Magni on the cloud?

I was wondering if I could use the Magni’s Raspberry Pi to collect sensor data and then do the processing and programming on a cloud platform?
If so, how can I do that?

Thanks

There are bits of things well suited for the cloud, but the Magni robot base itself is a ROS system and is intended to be a base platform for developers.

We have no specific cloud ‘solutions’ documented and supported in code for out-of-the-box use on the Magni platform at this time.

There are services for sending speech or images up to cloud servers, to search for verbal commands or to recognize things in images. Google had a service a while ago, which may still exist, where you could submit an image and it would send back text labels for what the image ‘may’ contain, rated by how likely each match is.

As far as your overall question, it is too broad to comment on in any specific way.

There were some other cool cloud-based things, like sending your camera feed to the cloud and letting a cloud app relay commands from a remote human operator to your robot. There was a clever platform a few years ago at letsrobot.tv, for example, though it may have changed its name since.

So again, I cannot help you specifically with your question myself; perhaps somebody else will chime in.


So, what I am attempting to do is make the Magni run autonomously in an outdoor environment, relying on a stereo camera and radar for object detection. For navigation I am using a GPS module to build a map of the environment (the robot runs on a sidewalk). So I wondered whether it would be possible to collect the sensor data from the camera, radar, and GPS module, and process it on a cloud platform instead of on the Raspberry Pi that is on the Magni.

Thanks again!

Hello again. I myself have no specific solution to point you towards for the stereo camera processing or the other complex processing your project requires. We do not offer that, as we focus mostly on robot-centric processing at this time. What you are discussing is quite complex.

Perhaps, by your posting here, somebody with expertise in cloud image processing will chime in with leads and pointers to assist your efforts.

Below I will go into a few things that may interest you as parts of your solution. Perhaps you know these things already, but if not there may be some value in this post.

  • For outdoors, away from wifi, you would likely want a wifi hotspot from a cell phone provider such as Verizon. The major providers generally have these devices available. You would set up your Magni to connect to this hotspot wifi. From the Magni side of things, this page is of value for connecting to a hotspot: https://learn.ubiquityrobotics.com/connect_network

To undertake what you are looking at, you must be very familiar with C++ development using catkin, and you must be versed in many basic ROS ideas and mechanisms. Do not take on this task unless you really want to learn ROS and become proficient in standard ROS robotic systems. Perhaps you already have this sort of background; if so, you can focus on the new pieces.

Robot-building projects are often about building on our prior knowledge, or the knowledge offered by others on assorted web pages, to take our next steps faster. Do not expect all of it to just happen and be learned in a week. It will take a great deal longer, but the result will be a wonderful learning experience that will prove valuable on this and perhaps other projects you undertake.

If your needs are simple enough, ROS also supports developing ROS nodes in Python.

That being said:
At a high level, you will need to write the software that does all the data passing and processing on the cloud. Where Magni comes in is that when you write all that code, you end up with a new ROS node of your own making (I normally call my main node ‘main_brain’ or something similar).

I must assume you know about working in ROS for this discussion. If not, you can learn a great deal starting here: http://wiki.ros.org/ROS/Tutorials You don’t need all of it, but you do need a great deal of this information to understand what I say next.

  • Your main node should be able to publish to the ROS topic called /cmd_vel. The Magni, as it sits, listens on /cmd_vel and takes those messages all the way to driving the wheels. One key thing we do is work out all the motor details and present the standard ROS /cmd_vel topic as the interface.
    There should be enough keywords here for web searches to find you more info on how publishing messages to /cmd_vel is supposed to work to control a ROS robot such as Magni.
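As a sketch only (this is illustrative Python using the standard rospy and geometry_msgs APIs, not Ubiquity-supplied code), publishing to /cmd_vel looks roughly like this:

```python
#!/usr/bin/env python
"""Minimal sketch of a node that publishes velocity commands to /cmd_vel.
Assumes a ROS 1 install (rospy, geometry_msgs)."""

def twist_fields(linear_x, angular_z):
    # Pure helper: the values we intend to place in a Twist message.
    # Magni is a differential-drive base, so only linear.x (m/s) and
    # angular.z (rad/s) matter; the other four Twist fields stay zero.
    return {"linear_x": linear_x, "angular_z": angular_z}

def main():
    # ROS imports are kept inside main() so the helper above can be
    # used on machines without a ROS install.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("cmd_vel_demo")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)  # a steady command stream, roughly 10 Hz
    fields = twist_fields(0.1, 0.0)  # creep forward at 0.1 m/s
    while not rospy.is_shutdown():
        msg = Twist()
        msg.linear.x = fields["linear_x"]
        msg.angular.z = fields["angular_z"]
        pub.publish(msg)
        rate.sleep()

# Call main() on a machine with ROS installed and the Magni base running.
```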

  • Another key thing this main node (or some other node you write) will likely need to do is subscribe to the standard ROS topic we supply called /odom. This topic continually updates the position Magni thinks it is at, based on tracking the wheels and the general mechanical dimensions of the Magni robot. Be aware that /odom gets you close but drifts over time due to many factors, so /odom is only useful when you combine it with GPS and map navigation.
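A hedged sketch of an /odom listener (again illustrative Python against the standard rospy and nav_msgs APIs, not Ubiquity-supplied code) that pulls a 2-D pose out of the odometry message:

```python
#!/usr/bin/env python
"""Sketch of subscribing to /odom and extracting a planar pose.
Assumes a ROS 1 install (rospy, nav_msgs)."""
import math

def yaw_from_quaternion(x, y, z, w):
    # A planar robot only needs yaw; this is the standard
    # quaternion-to-yaw conversion.
    return math.atan2(2.0 * (w * z + x * y), 1.0 - 2.0 * (y * y + z * z))

def odom_callback(msg):
    # msg is a nav_msgs/Odometry: pose plus twist, both with covariance.
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    yaw = yaw_from_quaternion(q.x, q.y, q.z, q.w)
    print("x=%.2f m  y=%.2f m  yaw=%.1f deg" % (p.x, p.y, math.degrees(yaw)))

def main():
    # ROS imports kept inside main() so the math helper runs without ROS.
    import rospy
    from nav_msgs.msg import Odometry
    rospy.init_node("odom_listener")
    rospy.Subscriber("/odom", Odometry, odom_callback)
    rospy.spin()

# Call main() on a machine with ROS installed and the Magni base running.
```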

  • The GPS is the easiest part of your requirements, as long as the GPS has NMEA serial output. What I have typically done is use a USB-to-serial adapter to connect the GPS. There are examples of ROS nodes that do some of the work, such as here: http://wiki.ros.org/nmea_navsat_driver
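For reference, a typical invocation of that driver looks something like the following; the device path and baud rate here are assumptions that depend on your particular GPS unit and adapter:

```shell
# Hypothetical invocation of the nmea_navsat_driver serial node.
# /dev/ttyUSB0 and 4800 baud are placeholders for your hardware.
rosrun nmea_navsat_driver nmea_serial_driver _port:=/dev/ttyUSB0 _baud:=4800
# The driver then publishes sensor_msgs/NavSatFix messages on the fix topic.
```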

  • The more complex software will be the node passing images to the cloud. Here you will have to consider the high bandwidth required by whatever frame rate your system needs. This could prove a bottleneck as the resolution of the stereo images and the frames per second increase.

I have no specific suggestions. The project sounds exciting and will most certainly be highly involved.

Let us know how you are doing as you progress.

Mark


Thank you so much for your response, I really appreciate it, that was actually very insightful!!

So I have actually another question related to this, so I have decided to drop the method of publishing messages to the cloud, the reason is as you mentioned is complexity and impracticality of this. However, is there a way where instead of publishing to the cloud, I could publish it to my workstation? Is this setup in the magni? or would I be required to set this up?

From what I understand, the computation and processing are currently done on the Magni itself. Is there a way I can collect the information from the nodes and publish it to the workstation, and in turn subscribe from the workstation to collect the readings from the camera and the other sensors?

Maybe there are examples from people who have done this? I am mainly concerned about the computational power the Magni has, in case I decide to use a SLAM-based navigation method (still looking into that).

Thanks again!

Yes, you can do the ‘heavy lifting’ for complex computations on a laptop, and in fact that is one of the strengths of using ROS on this robot.

I guarantee you people do exactly what you are discussing. I don’t have a pointer to a specific package offhand, so I offer some general tips below.

At a super high level, here is what is required:

  1. Set up your laptop to run ROS itself and to treat the Magni robot as the ‘ROS MASTER’.
  2. After step 1, your laptop can subscribe to any ROS topic on which the Magni publishes messages. So step 2 is to publish the images to a ROS topic from the Magni, where the camera is located.
  3. On the laptop, you need to develop and run a ROS node that subscribes to the ROS topic with the images. This ROS node on the laptop can then process the images.
  4. Once your node on the laptop decides how it wants to control the Magni, it can publish speed-control messages to the ROS topic /cmd_vel.
  5. The Magni by default is always subscribing to the ROS topic /cmd_vel, so any messages on that topic will control the Magni through our built-in software.

So the above is the high-level idea of what can take place. Actually making all that happen will still be quite a bit of work and, much more importantly, LEARNING.

I will offer a few tips to get going, but be very aware this will be quite a project. After you are done, though, you will in fact understand quite a bit about ROS, which is day by day becoming more of an industry-standard way to work with robots.

So I offer some pieces of help here, but far from any sort of cookbook; just tips.

  1. To set up a laptop on ROS with the Magni as the ROS master, see this page in our documentation:
    https://learn.ubiquityrobotics.com/workstation_setup
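The core of that setup, sketched here under the assumption that your robot still uses the default `ubiquityrobot.local` hostname (substitute your robot’s hostname or IP if you changed it), is pointing the workstation’s ROS environment at the Magni’s master:

```shell
# On the workstation: use the Magni as the ROS master (port 11311 is the
# standard ROS master port). Hostname below is the Ubiquity default.
export ROS_MASTER_URI=http://ubiquityrobot.local:11311
# Also advertise an address the robot can reach this workstation at,
# so topic connections work in both directions (Linux-specific command).
export ROS_IP=$(hostname -I | awk '{print $1}')
```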

  2. If your Magni has a camera, then these links are of value:
    A) Set up the camera to run on a Magni: https://learn.ubiquityrobotics.com/camera_sensors
    B) To publish images from OUR simple camera, everything needed is already on the robot once you have enabled the camera per line ‘A’. See this page: https://github.com/UbiquityRobotics/raspicam_node But because this is already on our images, just skip ahead to ‘Running the Node’, where you find this magic line to run in an SSH session into the Magni robot:
    roslaunch raspicam_node camerav2_1280x960.launch

  3. So far, steps 1 and 2 are fairly easy; in step 3 things get very complex. In this step you have to develop a node on the laptop itself and run it there. That node must read in images on the ROS topic, which in this precise case is /raspicam_node/image/compressed
    Your own camera would publish its images on an entirely different ROS topic.
    An EXTREMELY simple example showing a ROS node that subscribes to an image topic is this tutorial: http://wiki.ros.org/image_transport/Tutorials/SubscribingToImages
    Pay very close attention to the top of that page, where it explains you should have gone through quite a few ROS tutorials, especially the ones on images, before you get to this point. So again, this is VERY INVOLVED, but if you really want to learn you just have to roll up your sleeves and work at it. It will not magically fall together; you will have frustrations and successes.
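A hedged sketch of the laptop-side subscriber (illustrative Python; it assumes ROS 1 plus OpenCV and numpy are installed on the workstation, and it is not Ubiquity-supplied code):

```python
#!/usr/bin/env python
"""Sketch of a laptop-side node subscribing to the Magni camera's
compressed image topic. Assumes ROS 1 (rospy, sensor_msgs) plus
OpenCV and numpy on the workstation."""

def expected_bandwidth_bytes(avg_frame_bytes, fps):
    # Pure helper: rough per-second network load of streaming frames to
    # the laptop, relevant to the bandwidth concern discussed earlier.
    return avg_frame_bytes * fps

def image_callback(msg):
    # msg is a sensor_msgs/CompressedImage: a format string plus JPEG bytes.
    import cv2
    import numpy as np
    buf = np.frombuffer(msg.data, dtype=np.uint8)
    frame = cv2.imdecode(buf, cv2.IMREAD_COLOR)  # BGR image as an ndarray
    if frame is not None:
        print("got %dx%d frame (%d compressed bytes)"
              % (frame.shape[1], frame.shape[0], len(msg.data)))

def main():
    # ROS imports kept inside main() so the file loads without ROS.
    import rospy
    from sensor_msgs.msg import CompressedImage
    rospy.init_node("image_listener")
    rospy.Subscriber("/raspicam_node/image/compressed",
                     CompressedImage, image_callback, queue_size=1)
    rospy.spin()

# Call main() on a workstation configured with the Magni as ROS master.
```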

  4. So, assuming you live through the above, this step is where you decide in some way that you need to tell the robot to move. For that, you must publish to /cmd_vel from the laptop in this discussion, and the Magni will be listening and will move. Magni is a strong robot, so it is advised to put it up on blocks and get good at just making the wheels move before you set it on the floor to really drive around. I only looked a little, but at the end of this thread is some code that shows how to format a ‘Twist’ message for movement and then publish it to /cmd_vel. This is just some person’s code; look at it and learn from it. I am sure there is a much more ‘official’ example somewhere, maybe in the ROS tutorials I have already linked for you.
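As one sketch of that ‘up on blocks’ exercise (illustrative Python against the standard rospy and geometry_msgs APIs; the node name and the 3-second duration are my own choices), rotate the wheels in place briefly and then send an explicit stop:

```python
#!/usr/bin/env python
"""Sketch of an 'up on blocks' wheel test: rotate in place for a few
seconds, then send an explicit all-zero stop command.
Assumes a ROS 1 install (rospy, geometry_msgs)."""

def spin_command(stopped=False):
    # Pure helper: rotate-in-place command values (angular.z only),
    # or all zeros for the stop message.
    return {"linear_x": 0.0, "angular_z": 0.0 if stopped else 0.5}

def main():
    # ROS imports inside main() so the helper is usable without ROS.
    import rospy
    from geometry_msgs.msg import Twist

    rospy.init_node("blocks_test")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)
    start = rospy.Time.now()
    while not rospy.is_shutdown():
        # After 3 seconds, publish one stop command and exit.
        done = (rospy.Time.now() - start).to_sec() > 3.0
        cmd = spin_command(stopped=done)
        msg = Twist()
        msg.linear.x = cmd["linear_x"]
        msg.angular.z = cmd["angular_z"]
        pub.publish(msg)
        if done:
            break
        rate.sleep()

# Run with the Magni up on blocks and the workstation set up as in step 1.
```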


God bless you for your reply!! Much appreciated!!

One other thought for you is to use something like

this can be substituted for the ROS master on the Magni.

David
