
Collaborative Navigation for Flying and Walking Robots

An overview of a collaborative navigation system designed to work for flying and walking robots

Flying and walking robots can use their complementary capabilities in terms of viewpoint and payload to the fullest in a heterogeneous team. To this end, we present our online collaborative navigation framework for unknown and challenging terrain. The method leverages the flying robot’s onboard monocular camera to create both a map of visual features for simultaneous localization and mapping and a dense representation of the environment as an elevation map. This shared knowledge from the flying platform enables the walking robot to localize itself against the global map, and to plan a global path to the goal by interpreting the elevation map in terms of traversability. While following the planned path, the absolute pose corrections are fused with the legged state estimation and the elevation map is continuously updated with distance measurements from an onboard laser range sensor. This allows the legged robot to safely navigate towards the goal while taking into account any changes in the environment. The presented methods are fully integrated, and we demonstrate their capabilities in an experiment with a hexacopter and a quadrupedal robot.
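The traversability step can be sketched in a few lines: estimate the local slope in each cell of a grid elevation map and mark the cells the walking robot can handle. This is only a minimal illustration of the idea, not the paper's method; the grid resolution and slope limit are made-up numbers.

```python
import numpy as np

def traversability(elevation, cell_size=0.1, max_slope=0.3):
    """Classify each cell of a grid elevation map as traversable.

    elevation: 2D array of heights in meters
    cell_size: grid resolution in meters
    max_slope: maximum slope (rise/run) the walking robot can handle
    """
    # Finite-difference gradients give the local slope in x and y.
    gy, gx = np.gradient(elevation, cell_size)
    slope = np.hypot(gx, gy)
    return slope <= max_slope  # True where the terrain is walkable

# A flat map with one steep 0.5 m ledge down the middle:
emap = np.zeros((5, 5))
emap[:, 3:] = 0.5
print(traversability(emap))  # the ledge columns come out False
```

A real planner would also penalize roughness and step height, but slope alone already separates the ledge from the flat ground in this toy map.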

This robot uses a stereo vision camera to explore the environment

Instead of paying hundreds of dollars for a LiDAR sensor, you can use a much cheaper stereo vision camera for outdoor exploration.

The robot is called MyzharBot and features a StereoLabs ZED stereo camera that captures the images. Algorithms based on OpenCV and ROS run on an NVIDIA Jetson TX1 to detect obstacles and calculate the best path.

Here is how the fourth version of MyzharBot works with stereo vision to explore the environment.


Hi! I’m HEXA.


Who is HEXA?

I’m an all-terrain hexapod robot. I fit in a backpack. I have a variety of sensors, an open operating system, and an easy-to-use SDK. I’m the starting point of an artificial life. It’s nice to meet you.

HEXA is a fearless explorer that dreams of surveying volcanoes on Mars.
HEXA is a super hero that protects and saves human lives from earthquakes and fires.
HEXA is anything and everything we can dream of, because the human spirit is alive and well inside this little robot.

About its structure:

HEXA has six legs to go wherever it wants, and sensors to see and to know which way is up. Plus, it comes with exciting hardware interfaces, an open operating system, and a well-designed SDK with easy-to-follow documentation. The rest is up to you.

Infinite Rotation Structure: HEXA’s six insect-like legs allow it to seamlessly rotate in any direction, and its round body design allows its head to rotate around a center shaft. The result: no matter which way HEXA faces or goes, that’s forward.

About compute and think:

The dual-core ARM Cortex-A9 processor, plus multiple co-processors, delivers outstanding computing power. It supports image, video, face, and voice recognition, as well as machine learning algorithms that give it the ability to intelligently adapt to almost any situation.

About its eyes:

HEXA’s eyes (the camera) support image recognition in order to identify humans, pets, or other objects. It can even distinguish between its owner and strangers. The image it sees is transmitted directly to the display of your mobile phone, VR headset, etc. So what it sees, you see, allowing you to explore places you would not normally be able to go. Night vision is also supported.

The built-in distance-measuring sensor lets HEXA measure distances between 4–60 in (10–150 cm). This function not only gives it a sense of depth of field, it lets it sense the space around it. In other words, you never have to worry about HEXA hitting a wall in automatic mode.

About HEXA on your mobile:

Through the HEXA app, you can remotely control your HEXA from your mobile phone and explore the world through HEXA’s eyes. You can even connect to it from another continent via the Internet.


What Makes Roboticists Buy The RoboClaw 2x30A? Five Reasons Why.

Several months ago I worked for the first time with an expensive and advanced motor controller: the Sabertooth 2x25A, priced at $299.98 on Amazon.

The Sabertooth motor controller does its job well, but it was hard to set up, and it took me some time to write the final version of the Arduino sketch that controls four brushed DC motors.

For my next project, I want to try something new: an all-terrain robot with big wheels, large DC motors, and considerable weight. I already have the DC motors, battery, and chassis in mind.

RoboClaw 2x30A Motor Controller (image source)


The RoboClaw 2x30A by Ion Motion Control can supply two brushed DC motors with 30 A continuous (60 A peak), or four brushed DC motors with 15 A continuous each. For my project, I plan to use four brushed DC motors that draw at most 13 A each at 24 V.
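A quick sanity check on those numbers: wiring two motors per channel keeps the worst-case continuous draw under the 30 A rating.

```python
# Worst case with my motors: four motors at up to 13 A each,
# wired two per RoboClaw channel.
motors, amps_each, channels = 4, 13, 2
per_channel = motors // channels * amps_each
print(per_channel)  # 26 A continuous per channel, under the 30 A rating
```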

And because I’m quite convinced that this motor controller is the best fit for my project, below are five of the reasons why I chose it over other motor controllers.

  1. Inputs
    I work with Arduino and Raspberry Pi. These two development boards meet all the requirements for what I plan to do next. For them, I need a motor controller compatible with both a Linux PC and a microcontroller.

    The RoboClaw 2x30A supports a wide range of inputs, which is good for me since I have to work with both a mini-computer and a microcontroller: analog voltage, USB serial, I2C, PWM, RC input, and UART (serial).

    I can drive the RoboClaw over its USB interface from the Raspberry Pi, or over UART from the Arduino.

  2. Built-in over-current and thermal protection
    The final project will be a 50 kg robot able to climb slopes. At maximum load, the DC motors will draw considerably more current from the batteries.

    To protect the electronics, batteries, and other components, I need a motor controller with built-in over-current and thermal protection. The RoboClaw 2x30A has both.

  3. Regenerative braking
    The RoboClaw will charge my batteries during slowdown or braking. This makes me happy and increases the time between battery recharges.
  4. Libraries
    The RoboClaw 2x30A is smart. A library gives me quick access to all controller functions. With a few lines of code I can control the robot, use the built-in PID routine for closed-loop speed control to maintain motor speeds even when the load varies, and more.
  5. Documentation
    Comprehensive documentation makes things clear for anyone working with the motor controller. Again, this motor controller comes with good documentation, including datasheets, a manual, and examples.
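The closed-loop speed control from point 4 is worth a closer look. Below is a toy Python simulation of a velocity PID holding a setpoint through a load change. This is only a sketch of the idea; the gains and the crude motor model are made up, and the real loop runs in the RoboClaw firmware.

```python
class SpeedPID:
    """Toy velocity PID of the kind the RoboClaw runs on board."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, target, measured, dt):
        err = target - measured
        self.integral += err * dt
        deriv = (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Crude motor model: acceleration = drive - load - friction.
pid = SpeedPID(kp=2.0, ki=5.0, kd=0.0)
speed, dt = 0.0, 0.01
for step in range(1000):
    load = 20.0 if step > 500 else 0.0    # extra load halfway through
    drive = pid.update(100.0, speed, dt)  # hold a setpoint of 100
    speed += (drive - load - 0.1 * speed) * dt
print(round(speed, 1))  # back near the setpoint despite the load
```

The integral term is what pulls the speed back to the setpoint after the load step; with ki = 0 the speed would settle below 100 under load.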

These five reasons made me buy the RoboClaw 2x30A. I’m sure it’s worth the price of $124.95 + $14.99 shipping.

Project MARS. Research in the field of agricultural robotics. Precision Farming – Thinking ahead.

Cloud solution for location-independent robot control.
With the MARS research project, AGCO/Fendt has developed small robot units that, with the assistance of a cloud-based solution, can be controlled independently of location during sowing operations. Sowing operations can be planned and monitored at any time using the MARS app. The exact placement of each individual seed can be documented and saved in the cloud. Subsequent cultivation work can then be executed precisely, using fewer inputs.

I Built This Autonomous Robot to Detect and Avoid Obstacles. The Code is Included.

This is a simple autonomous robot able to detect and avoid obstacles. I use a cheap 4WD robot platform (you can use any of these platforms), an Arduino UNO ($18.59 on Amazon), and a cheap HC-SR04 ultrasonic sensor (2 pieces at $2.83 on Amazon).

The robot is programmed to drive forward until an obstacle is detected. It then turns the sensor left and right, compares the values returned by the ultrasonic sensor, and decides which way to go.
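That decision rule fits in a few lines. The following Python sketch mirrors the logic described above; the 25 cm clearance threshold is my own illustrative choice, not taken from the original Arduino sketch.

```python
def choose_turn(left_cm, right_cm, min_clearance=25):
    """Pick a direction after an obstacle stops the robot.

    left_cm / right_cm are HC-SR04 readings taken after turning the
    sensor each way; min_clearance is how much room we need to drive.
    """
    if max(left_cm, right_cm) < min_clearance:
        return "reverse"  # boxed in on both sides: back up
    return "left" if left_cm > right_cm else "right"

print(choose_turn(80, 30))  # left
print(choose_turn(10, 15))  # reverse
```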

This is the Arduino code:

And this is how the robot navigates autonomously in my kitchen:

The First DIY Autonomous Robot That Learns How To Drive. TensorFlow Really Works.

TensorFlow is an open-source library from Google. Yes, sometimes they give out nice gifts and release free software for makers and hackers.

This robot is capable of learning from humans how to work in certain conditions. This is also my dream, to build an autonomous robot able to learn how to do things.

In the learning process, the human operates the robot manually. In short, the i5 computer attached to the robot uses the Suiron software to record video frames. Each time a frame is recorded, Suiron asks the Arduino software what the human operator is doing. The Arduino software sends data such as the current steering and motor values back to Suiron, which stores these values along with a processed version of the frame, 30 times per second.
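As a rough illustration, the recording loop pairs each frame with the operator's current commands. The function names below are stand-ins for the camera capture and the serial query to the Arduino, not Suiron's actual API:

```python
def record_frames(get_frame, get_controls, n_frames):
    """Collect (frame, steering, motor) training samples, Suiron-style.

    get_frame and get_controls are stand-ins for the camera capture and
    the serial query to the Arduino; the real loop runs at 30 Hz.
    """
    dataset = []
    for _ in range(n_frames):
        frame = get_frame()               # processed video frame
        steering, motor = get_controls()  # what the human is doing now
        dataset.append((frame, steering, motor))
    return dataset

# Fake sources for a dry run:
samples = record_frames(lambda: "frame", lambda: (0.1, 0.8), 3)
print(len(samples))  # 3
```

The resulting (frame, steering, motor) tuples are exactly the supervised pairs a TensorFlow model needs to learn to imitate the driver.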

It takes around 20 minutes for this robot to learn how to drive on a particular circuit.

How to make an autonomous car (code included)

How do you check on the robot via an LCD screen when it’s sunny most of the time?

When you have a remote-controlled robot that sends real-time images to an LCD screen, one of the problems is checking on the robot on the spot when it is sunny most of the time.

An LCD display becomes less viewable in bright ambient light. No LCD screen will be bright enough to compete with the sun and the glare off the front of the screen. The simple solution is to cup your hands around the screen. But how will you control the robot if your hands are around the screen?

  1. The first solution is to set the brightness of the LCD screen accordingly. If you use a smartphone or a tablet to remote control the robot, this is easy. But what if the LCD screen is attached to a development board such as an Arduino or a Raspberry Pi?
  2. Polarized sunglasses: A pair of polarized sunglasses is one handy solution because they reduce the glare or reflected light.
  3. Another solution is to use a monitor sunshade hood like this one. This accessory is designed to keep hot sun glare off your phone or tablet screen while controlling the robot. But it also works if the LCD screen is attached to a Pi or an Arduino: you just attach the screen inside the sunshade hood and control the robot.
  4. Another way is to use a virtual reality headset. This remote controlled robot with Oculus Rift is a good example. The disadvantage is that you have to stand still while using the virtual reality headset.

Can We Build a Cheap 360-Degree Camera for Outdoor Navigation for Under $100?

The Dyson 360 Eye robot vacuum has a 360-degree lens, so it knows where it is in a room and where it has already cleaned. The indoor navigation system uses a panoramic camera lens on top of the machine to map its way around the house.

So, now the question is whether we can also build at home, with less money, a 360-degree vision system that can work outside in sunlight.

I searched the Internet and found two solutions for DIY projects: a cheap one and an expensive one. The expensive one is an Eye Mirror at a price of $453. That’s too expensive for me, and probably for many other makers; the price is miles away from the $100 range.

The cheapest solution is a Kogeto Panoramic Accessory for iPhone 4. It has a price of $14 on Amazon, and someone has already used it to build a 360-degree video camera system.

You can mount this panoramic lens on a 3D-printed mount and connect the camera to a Raspberry Pi. The SimpleCV framework provides the algorithms to unwrap the frames and process the images.

For good reason, the camera with the panoramic lens should be mounted on top of the robot.

Don’t expect high-quality 360-degree images from this panoramic lens, but at least you can try something new in robotics navigation for under $100.

What do you think? How big is the impact of sunlight on the images since the panoramic lens is aimed directly at the sun?