In this tutorial, we will learn how to build a 4WD robot chassis using aluminum profiles and 3D printed parts.
Several weeks ago, I built an obstacle-avoiding robot on a 2WD plastic platform. The platform works well, but it is limited in terms of space and capabilities.
A 4WD chassis is easy to design and build, robust, and offers many options for attaching different components and parts.
There are many possible designs for such a platform, but as an engineer I start from a specific set of requirements: a platform capable of hosting different sensors, microcontrollers, and computers. The 3D printed parts also let me experiment with different designs to produce the ultimate 4WD robot chassis.
Weight distribution was a crucial factor in the design of the chassis structure. The weight is split equally between the front and the back of the chassis, so the distribution will not hurt the robot's handling, traction, or acceleration.
Inside the remainder of this tutorial you will:
1. How to build a chassis frame with aluminum profiles
2. How to build the drive system
3. How to test the platform
4. Next steps
When I first became interested in LIDAR technology, I had no idea where to start. I didn’t know which sensor to use. I didn’t know that the LIDAR sensor that I found could be used indoors or outdoors.
I wish there had been a list like this, detailing the sensors to use in robot navigation for simultaneous localization and mapping (SLAM) or obstacle detection.
If you think I’ve left an important one out, please leave me a note in the comments or send me an email.
1. Single Point Ranging LIDAR Sensors
The LIDAR (Light Detection and Ranging) sensor is used in robotics to determine how far objects are from the sensor. The sensor uses a laser beam to detect and track obstacles.
In this part of the article, you will find small-size sensors, great for attaching to a servo motor to scan back and forth for obstacle avoidance.
These are single-point LIDAR sensors working on the time-of-flight principle. The measured distance may be affected by ambient light intensity and the reflectivity of the detected object.
Qwiic LIDAR-Lite v4
Measured distance [indoor]: from 5 cm up to 10 meters;
In this tutorial, you will learn how to use the rosserial communication to publish the ranges of three HC-SR04 sensors, including:
1. Three ultrasonic sensors and Arduino.
2. How to write the ROS node on Arduino and publish the ranges of the sensors.
3. How to identify the Arduino board on Raspberry Pi and run the ROS node via rosserial.
4. How to display the ranges using the Linux Terminal.
Data-flow diagram between sensors, Arduino and Raspberry Pi
Different projects may have different requirements. At the end of this tutorial, you will have a flexible structure that makes it possible to add more sensors, or use only one sensor, or use another type of sensor (for example, infrared sensor).
You don’t need to have strong knowledge about ROS to understand this tutorial. What you need is to have a computer like Raspberry Pi 4 running ROS, sensors, an Arduino and time to learn and build.
We go further and learn how to write the ROS node on Arduino and publish the ranges of the sensors using the sensor_msgs/Range message type from ROS. Read more →
In this tutorial, you will learn how to get started with your Garmin (Qwiic) LIDAR-Lite v4 LED, including:
1. How to connect the LIDAR-Lite v4 to Arduino.
2. How to use parameters of the sensor.
3. The operational detection range of the sensor.
Garmin (Qwiic) LIDAR-Lite v4 LED is a LIDAR (Light Detection and Ranging) system that uses the time of flight of light to detect objects. The sensor offers high accuracy and low power consumption in a tiny package.
Like all of the LIDAR-Lite line, this sensor provides an alternative to expensive laser LIDAR sensors and to the very cheap, but noisy and short-range, infrared and ultrasonic sensors. Since I plan to use a laser-based sensor in the future for indoor and outdoor robots, I first want to find out whether this cheapest version, which uses an LED and optics instead of a laser, can be used indoors to build a 2D obstacle detection system.
Garmin_LIDAR-Lite_v4 + Arduino + Battery + Step Down DC-DC Converter
The sensor measures the time of flight of light using an LED, optics, and a receptor. It emits near-infrared light and looks for its reflection off a target object. The sensor calculates the distance from the time delay between transmission and reception, using the known speed of light. Read more →
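The time-of-flight arithmetic can be sketched in a few lines (the function name and the sample round-trip time below are my own illustration, not Garmin's library code):

```python
# Time-of-flight distance: the light travels to the target and back,
# so the one-way distance is half the round-trip time times the speed of light.
SPEED_OF_LIGHT_M_S = 299_792_458  # meters per second

def tof_distance_m(round_trip_s: float) -> float:
    """Return the one-way distance in meters for a measured round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A target 10 m away produces a round trip of roughly 66.7 nanoseconds:
print(round(tof_distance_m(66.7e-9), 1))  # → 10.0
```

Notice how short the delay is at these distances; this is why the sensor needs dedicated timing hardware rather than software polling.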
In this tutorial, you will learn how to build an obstacle detection and avoidance robot with Arduino and three HC-SR04 ultrasonic sensors. The robot is a low-cost mobile platform with two drive wheels and a rear caster. It includes three sensors making it aware of the obstacles in the environment.
The robot navigates without knowing a detailed map of the surroundings. If an obstacle is detected in its path, the robot adapts its velocity in order to avoid the collision. If the surrounding environment is free of obstructions, the robot simply moves forward until an obstacle is detected in the range of the sensors.
First, we need to define the inputs to know from where the robot takes the information into its system. This detection and avoidance robot will have two types of inputs.
1. The most straightforward input is the switch that turns the motor driver ON and OFF, plus a power on/off button on the battery bank that powers the Arduino board and the sensors.
2. The robot sees the world through sensors, which are how information enters the control system. The sensors are the second input of the robot. They let the robot detect and respond to the surrounding environment, operate safely, automatically detect obstacles, and dynamically change its route.
The robot uses sensors to measure the distance between itself and the obstacle. As you may know, a home contains objects of different sizes, built from many types of materials. The robot can detect objects made from different materials, like a wooden chair or a sofa bed.
Once the robot detects an obstacle, the algorithm calculates an alternative path based on the latest outputs of the sensors. If the object is on the left side of the platform, the robot dynamically changes its direction to the right until the sensors no longer detect the obstacle. The same behavior applies when the sensor detects a barrier on the right side.
If the sensor detects an obstacle in the middle of the robot’s path, then the algorithm randomly changes its direction to the left or right until the sensors no longer detect an obstacle.
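The left/center/right behavior described above can be sketched as a small decision function (the 30 cm threshold and the string return values are my own illustration, not the tutorial's code):

```python
import random

OBSTACLE_CM = 30  # hypothetical detection threshold in centimeters

def steer(left_cm, center_cm, right_cm, rng=random.Random()):
    """Pick a turn direction from the three sensor ranges (in cm)."""
    if center_cm < OBSTACLE_CM:
        # Obstacle straight ahead: pick left or right at random.
        return rng.choice(["left", "right"])
    if left_cm < OBSTACLE_CM:
        return "right"   # obstacle on the left: steer right
    if right_cm < OBSTACLE_CM:
        return "left"    # obstacle on the right: steer left
    return "forward"     # path is clear
```

For example, `steer(10, 100, 100)` returns `"right"`, while `steer(100, 10, 100)` randomly returns `"left"` or `"right"`, matching the behavior described above.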
We finished defining the types of input for the autonomous robot. Inputs are essential, and so are the outputs.
The obstacle detection and avoidance robot has one type of output.
3. The mobile robot uses two DC motors, one for each side of the platform. Each motor is controlled individually, so the robot can move in any direction, turning and pivoting included.
To have complete control over the DC motors, we have to control both the speed and the direction of rotation. This can be achieved by combining the PWM method for controlling the speed with an H-bridge electronic circuit for controlling the direction of rotation.
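As a sketch of how speed and direction combine, here is the mapping from a signed speed to a PWM duty cycle and the two H-bridge direction inputs (the IN1/IN2 naming follows common driver boards like the L298N; on a real Arduino the duty would go to analogWrite and the direction bits to two digital pins):

```python
def motor_command(speed: float):
    """Map a signed speed in [-1.0, 1.0] to an 8-bit PWM duty cycle
    and the two H-bridge direction inputs (IN1, IN2)."""
    speed = max(-1.0, min(1.0, speed))  # clamp to the valid range
    duty = int(abs(speed) * 255)        # PWM duty cycle, 0..255
    if speed > 0:
        in1, in2 = 1, 0                 # forward
    elif speed < 0:
        in1, in2 = 0, 1                 # reverse
    else:
        in1, in2 = 0, 0                 # coast/stop
    return duty, in1, in2

print(motor_command(1.0))   # → (255, 1, 0)
print(motor_command(-0.5))  # → (127, 0, 1)
```

Driving the two side motors with different commands is what produces turning and pivoting.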
Once we finish defining the inputs and output, we go further and break up the obstacle detection and avoidance robot into simple pieces and work on them one by one. Read more →
Obstacle detection is applicable to any robot that moves from an initial position to a goal position avoiding any obstacle in its path. The process of detecting obstacles is applied for a variety of robots, including a mobile robot and a robot arm. In this tutorial, you will learn how to use the HC-SR04 sensor with an Arduino board and determine the detection range of the sensor in certain conditions.
Different projects may have different requirements. At the end of this tutorial, you will have a flexible structure that can be used in different robots and make it possible to add more sensors, or use only one sensor, or use another type of sensor (for example, infrared sensor).
If you plan to build advanced robots in a productive and professional manner, this is the point where you can start.
Before starting to connect the sensor and write the first line of code, let’s be sure that we have all the hardware parts. Here is the list of hardware that I use to write the tutorial:
In this part of the article, I will show you how to connect one HC-SR04 sensor to Arduino and write the Arduino sketch that reads and transforms the sensor’s output.
For the moment, I use a USB cable to power the Arduino UNO. The 5V USB port of a personal computer or laptop provides enough power to run a 5V Arduino and the three ultrasonic sensors.
The setup above is just for connecting and testing the ultrasonic detection system. When the sensor and the Arduino board are mounted on a mobile robot, the entire detection system will run on batteries.
1.1 Connect the sensor to the Arduino
First, let’s have a look at the HC-SR04 specifications:
Operating Voltage: DC 5 V
Operating Current: 15 mA
Operating Frequency: 40 kHz
Range: from 2 cm to 4 m
Ranging Accuracy: 3 mm
Trigger Input Signal: 10 µs TTL pulse
The operating voltage of the sensor matches the Arduino 5V output pin: DC 5 V. The operating current of one ultrasonic sensor is 15 mA, which Arduino can supply even when connected to a computer’s USB port. The 5V output of the Arduino UNO can deliver roughly 400 mA on USB power and roughly 900 mA with an external power adapter. At this step, we only take these two specifications into consideration and start connecting the sensor to the board.
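The current budget works out comfortably: three sensors draw 3 × 15 mA = 45 mA, well inside the ~400 mA available on USB. A quick sanity check (the figures are the approximate ones quoted above):

```python
SENSOR_CURRENT_MA = 15  # per HC-SR04, from the datasheet
USB_BUDGET_MA = 400     # approximate 5 V budget on USB power

def current_headroom_ma(n_sensors: int) -> int:
    """Remaining current budget after powering n ultrasonic sensors."""
    return USB_BUDGET_MA - n_sensors * SENSOR_CURRENT_MA

print(current_headroom_ma(3))  # → 355
```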
For connections, I use female-to-male jumper wires, a breadboard, one HC-SR04, and an Arduino UNO board.
Vcc -> breadboard -> 5V
Trig -> pin 3 (digital pin)
Echo -> pin 2 (digital pin)
GND -> breadboard -> GND
The HC-SR04 sensor connected to the Arduino UNO
1.2 Write the code sample for the sensor
Once the sensor is connected to the Arduino board, we can start writing the sketch to read the output and transform the reading from the sensor. For writing the sketch, I use the Arduino IDE. I like to work with simple tools and don’t spend time on customizations and accessories. For the moment, the Arduino IDE satisfies my needs in terms of programming a microcontroller.
Before writing the first line of code, let’s recapitulate how an ultrasonic sensor works.
It emits a sound pulse (at least 10 µs long, according to the specifications) that travels through the air; if an object reflects the sound wave, the sensor measures the time the pulse takes to return to the receiver. To calculate the distance between the sensor and the detected object, we use the travel time and the speed of sound. Read more →
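The calculation can be sketched in Python (343 m/s is a typical speed of sound at room temperature; in the Arduino sketch the same arithmetic would run on the microsecond reading from pulseIn()):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at ~20 °C, in cm per microsecond

def echo_to_cm(echo_us: float) -> float:
    """Convert the echo pulse width (microseconds) to distance in cm.
    The pulse covers the round trip, so we divide by 2."""
    return echo_us * SPEED_OF_SOUND_CM_PER_US / 2.0

# An object at about 1 m gives an echo of roughly 5830 µs:
print(round(echo_to_cm(5830), 1))  # → 100.0
```

The division by two is the step most often forgotten: the echo time covers the trip to the object and back.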
The process of building an autonomous robot starts before the first sensor or actuator is mounted on the chassis. The build process requires hardware and software setup, which is the topic of this article.
The first step is to set up and install the computer that will manage hardware components and run algorithms to control the robot. For the robotic projects that you will find on this website (at least on the beginning), I will use Raspberry Pi as the main computer on top of the robot.
Different projects might have different requirements. For this reason, I chose to work with the most powerful version of the Pi: the Raspberry Pi 4 Model B with 4GB of RAM. This version is capable of running ROS, object detection algorithms, and even deep learning algorithms.
Raspberry Pi 4 kit
At the end of this post, you’ll have a Raspberry Pi ready to run Python algorithms, send and receive data from single-board microcontrollers, and with remote access services installed.
I split the post into six parts so you can jump from one topic to another, or skip the installation of a particular one. I started with the required step of installing the operating system and continued with the installation of the frameworks and tools.
I believe that everyone should be able to build robots. Whether you are a beginner or an advanced user, you will be able to get all the software and frameworks running on the Pi. All you need to do is read carefully and follow all the steps in this tutorial.
My goal with this tutorial is to give you a reliable computer that you will use to build autonomous robots and that keeps running as long as needed without any downtime.
If you think this post is long and will keep you busy for several hours, maybe even a whole day, you have an alternative. You can find a list of prebuilt images that run ROS. The significant advantage is that you will have an up-and-running Pi in minutes, at the cost of not having all the frameworks and tools covered in this post.
We are very close to starting our installations, but first we have to define all the needed hardware resources and specifications. I already mentioned the version of the Pi that I use in this post. Below is the complete list of all the hardware components and accessories used.
If you already have a Raspberry Pi 4 (any version with 1, 2, or 4GB of RAM), a microSD card, and a power supply, please check the specifications and make sure the power supply is Pi 4 compatible and the microSD card is on this list. If not, you may hit various errors during installation that are not covered in this tutorial.
Once we have the list of hardware components and accessories, we can start the setup and installation process.
Mecanum wheels are very useful for increasing the maneuverability of a robot. These wheels come with 45º rollers that move independently and allow the robot to move forward, backward, sideways, diagonally, or spin in place. But please don’t confuse them with omnidirectional wheels. Mecanum wheels are different from omnidirectional wheels, although the result is almost the same. Depending on which wheels rotate in which direction, the robot changes its heading or spins in place.
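How the four wheel directions combine into robot motion follows the standard mecanum mixing formula; here is a sketch (the wheel ordering and sign conventions are my assumption, so check them against your driver):

```python
def mecanum_wheel_speeds(vx: float, vy: float, wz: float):
    """Standard mecanum mixing: vx is forward, vy is strafe (left positive),
    wz is rotation. Returns (front_left, front_right, rear_left, rear_right)."""
    fl = vx - vy - wz
    fr = vx + vy + wz
    rl = vx + vy - wz
    rr = vx - vy + wz
    return fl, fr, rl, rr

# Pure strafe to the left: front-left and rear-right spin backward,
# front-right and rear-left spin forward.
print(mecanum_wheel_speeds(0.0, 1.0, 0.0))  # → (-1.0, 1.0, 1.0, -1.0)
```

Driving all four wheels forward (`vx` only) gives straight-line motion, while a pure `wz` command makes the two sides counter-rotate to spin in place.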
This kit includes an aluminum chassis with four motors and mecanum wheels. The chassis provides enough space to add sensors like LiDAR or camera for computer vision, as well as a computer like Raspberry Pi or Nvidia Jetson and batteries.
The motors have encoders attached. With encoders, you can add an IMU sensor and create a map of the room. This makes it easier to turn this chassis into an autonomous robot able to navigate on smooth surfaces in an apartment, building, or factory floor.
Mecanum wheels have some disadvantages compared with the “normal” wheels we use on cars or shopping carts. They tend to wander side-to-side when the robot negotiates an inclined floor, and they are known for losing traction.
The kit has a reasonable price ($75.99 in the U.S).
Nvidia DeepStream Integration with Azure IoT Central [image credit]
Nvidia DeepStream is a set of tools capable of analyzing video/image streams and multi-sensor data in real time. Azure IoT Central is a cloud computing platform from Microsoft that provides access to servers, storage, networking, and software over the internet. In the tutorial Nvidia DeepStream Integration with Azure IoT Central, Paul DeCarlo combines these two technologies and shows us how to enable remote interaction and telemetry for DeepStream on Nvidia Jetson devices using Microsoft’s cloud.
This combination of software and hardware can be useful if you’re running a robot on Nvidia Jetson. We can apply the learnings from the tutorial and build a monitoring application for a robot running AI software and ROS.
I try to optimize my work, whether it is about CAD designs for 3D printed parts, learning new things, or writing software. I’m new to writing Python scripts, and sometimes the syntax gives me headaches. The video below explains common mistakes programmers make when writing Python scripts. It helped me understand why I get so many indentation errors.
One of the mistakes: using the Tab key and spaces inconsistently for indentation. Corey’s recommendation is to use an IDE to handle indentation.
Using an IDE to write Python scripts is the easier way. However, I usually write ROS nodes in Python via SSH; all these nodes run on a Raspberry Pi. I use nano to create and edit the Python files over the SSH connection. I cannot use a Python IDE over SSH because the connection doesn’t provide GUI resources.
When you’re working on robots and don’t have many hardware and software resources at your disposal, you have to find solutions. Here are one recommendation and one idea for programmers writing Python scripts via SSH:
1. Don’t mix tab and space indentation in the nano editor:
This is the usual mistake I make by reflex. Sometimes I get lost while writing the program and mix the Tab and space keys for indentation. I changed nano’s tab size to 4, and everything works without syntax errors as long as I use only the Tab key for indentation.
Step 1: Go to your home directory and type the command: sudo nano /etc/nanorc
Step 2: Navigate into the configuration file until the line with #set tabsize 8
Step 3: Remove the # and put 4 instead of 8
Step 4: Ctrl + O to save the file, then Ctrl + X to close it
2. Replace the nano editor with an IDE and use git:
This is just an idea: write the Python script on my Windows PC, commit it to git, and then connect via SSH and clone the git package. This way, I would reduce syntax mistakes. At the same time, if I deliver, for example, only a single new line of code, it will take a bit longer to check whether it works. I haven’t tested this method. If someone uses it on a Raspberry Pi, please leave a comment with its advantages and disadvantages.