Alternatives To Raspberry Pi 3 (B/B+) For Running ROS

More Powerful Alternatives To Raspberry Pi 3 (B/B+)

I’m quite involved in developing an autonomous robot platform for outdoor work. The robot will work outside for a few hours a day during the summer, which results in high temperatures for all its components. My main concern is the electronics. I plan to install a cooling system for them, but I’m not sure I can keep the temperatures below an acceptable threshold.

In addition, I want to reduce the heat produced by the electronics by using components that do not run under heavy load; lower load means less heat generated by each component. Thus, I’ve started looking for Raspberry Pi 3 alternatives to run ROS and communicate with the robot.

I use the Raspberry Pi 3 to run ROS Kinetic and to monitor and control the robot. The Raspberry Pi is a pretty stable platform and can work continuously for weeks without a problem. Any alternative must offer the same stability in operation, community support, and resources.

The budget for this change is no greater than €150. My research led to the following list (it will be updated as new alternatives appear).

A Jetson TK1 / TX1 / TX2 or an Intel NUC is out of the question for the moment. Any of the three Jetson variants or the Intel computer costs a few hundred euros. Investing that much in a board makes sense if it runs ROS together with computer vision applications; otherwise, I see no reason to spend hundreds of euros just to run ROS nodes for sensors and navigation algorithms.

  1. Libre Renegade

    Added on 19.December.2018
    Renegade has a form factor compatible with the Raspberry Pi 3 and can run ROS without much effort thanks to its 1.4GHz ARM Cortex-A53 processor and 4GB of DDR4. The version with 4GB of DDR4 costs $80.00, but the final price will increase by a few dollars because the board has no built-in WiFi: you need a USB dongle if you want wireless connectivity.


  2. ODROID-XU4

    The XU4 has a competitive price of about €63, and its specifications list a Cortex-A15 processor running at up to 2GHz and 2GB of LPDDR3. The only downside of this board is the lack of a built-in WiFi module; I need an extra €5 for a WiFi adapter to get an Internet connection.

  3. ASUS Tinker

    The ASUS Tinker Board runs ROS nodes on a quad-core 1.8GHz ARM SoC with 2GB of RAM. It has a built-in WiFi module and a price of around €49. I have some doubts about the operating system: in Amazon reviews, users report stability issues with the Android and Debian images. This makes me think twice before making a decision. It is very important to me to run a single board computer for months without interruption, and an unstable operating system can lead to a large number of reboots and downtime.

  4. Rock64

    Rock64 comes in several variants; the strongest one has 4GB of RAM and a 64-bit ARM Cortex-A53 processor. The board can run a full version of Ubuntu or Debian Linux. The price is also good considering the performance: around €38. The only thing that concerns me is community support; an active community could save me many hours of fixing issues.

How To Setup ROS Kinetic To Communicate Between Raspberry Pi 3 and a remote Linux PC

I have a Raspberry Pi 3 running ROS Kinetic, and I use it to control an autonomous robot. The plan is to improve my robot by adding computer vision capabilities. The Raspberry Pi has the resources to build intelligent robots, and its community helps me fix a lot of problems. These are the reasons why, for the moment, I’m not replacing it with another single board computer with more advanced hardware. The downside is that the Pi 3 has limited capabilities for graphics applications such as rviz and for running computer vision applications.

My idea (I hope it’s a good one and will work) is to use the Pi 3 and a Linux computer on the same network to exchange data with each other while I run rviz and the Gazebo simulator on the PC.

The first step in implementing my idea is to set up ROS Kinetic to communicate between the Pi 3 and the remote Linux computer. Below are the steps I followed to make both computers communicate with each other.

  • Step 1: I checked that everything was okay with ROS Kinetic on my Linux PC. I installed it some time ago using the steps described here.
  • Step 2: I re-installed a Linux image and ROS Kinetic on the Raspberry Pi 3 board. It took some time since I chose to install ROS on the Raspbian Stretch Lite. This operating system is what I need for my robot: it doesn’t have desktop applications or a GUI of any kind.
  • Step 3: At this step, I set the IP address for the ROS master node (the Pi) and the IP address for the other ROS node (the Linux PC).
    1. On the Raspberry Pi 3, type the following command:
      sudo nano .bashrc

      Navigate to the end of the file and add these two lines (replace the placeholder with the Pi’s actual IP address):

      #The IP address for the Master node = the IP address of Raspberry Pi
      export ROS_MASTER_URI=http://<raspberry_pi_ip>:11311
      #The IP address of this machine = the IP address of Raspberry Pi
      export ROS_HOSTNAME=<raspberry_pi_ip>
    2. On the Linux PC, type the following command:
      sudo nano .bashrc

      Navigate to the end of the file and add these lines (replace the placeholders with the actual IP addresses):

      #The IP address for the Master node = the IP address of Raspberry Pi
      export ROS_MASTER_URI=http://<raspberry_pi_ip>:11311
      #The IP address for your Linux PC
      export ROS_HOSTNAME=<linux_pc_ip>

These are all the steps to make two Linux computers communicate and share nodes, topics, and services.
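To avoid typos when editing the two .bashrc files, the four export lines can be generated with a short script. This is a minimal sketch; the helper function and the example IP addresses are my own assumptions, not part of ROS:

```python
# Sketch: build the ~/.bashrc export lines for a two-machine ROS network.
# The function name and the example IPs below are illustrative assumptions.

def ros_network_exports(master_ip, own_ip):
    """Return the export lines for one machine.

    master_ip: IP of the machine running roscore (the Raspberry Pi)
    own_ip:    IP of the machine whose .bashrc is being edited
    """
    return [
        # 11311 is the default port used by roscore
        "export ROS_MASTER_URI=http://%s:11311" % master_ip,
        "export ROS_HOSTNAME=%s" % own_ip,
    ]

# On the Pi, both values are the Pi's own address.
pi_lines = ros_network_exports("192.168.1.10", "192.168.1.10")
# On the PC, the master URI still points at the Pi; the hostname is the PC's.
pc_lines = ros_network_exports("192.168.1.10", "192.168.1.20")

print("\n".join(pi_lines))
print("\n".join(pc_lines))
```

If the two machines can’t see each other’s topics afterwards, the usual suspects are mismatched ROS_HOSTNAME values or a firewall blocking port 11311.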

How to Install ROS Kinetic on Raspberry Pi 3 running Raspbian Stretch Lite

I want to control an autonomous robot with a Raspberry Pi 3 board and ROS Kinetic. The Pi 3 will be connected to another Linux PC used for monitoring and control settings. The setup for both computers is described in this article.

Because the Pi has limited processor and memory resources, I’m forced to use them more efficiently. The first step is to install on the Pi an operating system like Raspbian Stretch Lite. You interact with this system entirely through typed commands: it has no GUI and none of the other software included with the Desktop version. Theoretically speaking, it is the perfect operating system if you don’t want to stress the Pi board with tasks you don’t need anyway.

The system will use the Raspberry Pi board as the master while the PC is the slave. This configuration means running roscore on the robot instead of on the remote PC. The PC is used to view the messages coming from the Pi or to send manual corrections back to the robot.

I installed the ROS Kinetic version with no GUI tools on Raspbian Stretch Lite, and I’ve listed all the steps below.

  • Step 1: Download and install Raspbian Stretch Lite
    The installation steps for Raspbian Stretch Lite are described here.
  • Step 2: Connect via SSH to the Pi and run the commands below:
    sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
    wget https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc -O - | sudo apt-key add -
    sudo apt-get update
    sudo apt-get install -y python-rosdep python-rosinstall-generator python-wstool python-rosinstall build-essential cmake
    sudo apt install dirmngr
    sudo rosdep init
    rosdep update
    rosinstall_generator ros_comm --rosdistro kinetic --deps --wet-only --tar > kinetic-ros_comm-wet.rosinstall
    wstool init src kinetic-ros_comm-wet.rosinstall
    rosdep install -y --from-paths src --ignore-src --rosdistro kinetic -r --os=debian:stretch
    sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic -j2
    # if the build fails because the Pi runs out of memory, rerun it with -j1:
    sudo ./src/catkin/bin/catkin_make_isolated --install -DCMAKE_BUILD_TYPE=Release --install-space /opt/ros/kinetic -j1

    (thanks CaJU and Bruce W)

    source /opt/ros/kinetic/setup.bash
    echo 'source /opt/ros/kinetic/setup.bash' >> ~/.bashrc
    mkdir -p ~/catkin_workspace/src
    cd ~/catkin_workspace/
    # build the (still empty) workspace so that devel/setup.bash is created
    catkin_make
    source ~/catkin_workspace/devel/setup.bash
    echo 'source ~/catkin_workspace/devel/setup.bash' >> ~/.bashrc
    export | grep ROS
  • Step 3 (optional): The installation process takes several hours and, honestly, I don’t want to repeat it too soon. I decided to clone the memory card as soon as I finished the initial installation. Here are the steps needed to clone the memory card.

You can find additional information here and here.

A cheap system for detecting curved lanes (OpenCV, Raspberry Pi 3 and Nvidia Jetson TX2)

A relatively inexpensive method with good results for a robot capable of detecting the straight and curved lane lines of a road is described in this project. Kemal Ficici used OpenCV for computer vision, a Raspberry Pi 3, and an NVIDIA Jetson TX2. Even adding the video camera, wires, and other accessories, the hardware for the project does not exceed 1000 euros. A small price for a system capable of detecting straight and curved lane lines.

Detecting curved lanes in camera space is not very easy. – says Kemal Ficici

The system uses the contrast between the lane lines and the road surface to detect the driving path.

Here’s the current image processing pipeline:

  • Distortion Correction
  • Perspective Warp
  • Sobel Filtering
  • Histogram Peak Detection
  • Sliding Window Search
  • Curve Fitting
  • Overlay Detected Lane
  • Apply to Video

The system also has weaknesses. It is affected by shadows, drastic changes in road texture, rain and snow.
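The histogram-peak step from the pipeline above is simple to sketch. After the perspective warp and Sobel filtering produce a binary bird’s-eye image, summing the white pixels per column in the lower half of the image gives a histogram whose two peaks mark the base x-positions of the lane lines. A NumPy-only illustration (the synthetic image and the function are mine; Kemal’s project works on real OpenCV images):

```python
import numpy as np

def lane_base_positions(binary_warped):
    """Find the base x-positions of the two lane lines in a binary bird's-eye image.

    Sums the white pixels per column over the lower half of the image, then
    takes the strongest peak in each half of the resulting histogram.
    """
    h, w = binary_warped.shape
    histogram = binary_warped[h // 2:, :].sum(axis=0)  # column sums, lower half
    midpoint = w // 2
    left_base = int(np.argmax(histogram[:midpoint]))              # left lane peak
    right_base = int(midpoint + np.argmax(histogram[midpoint:]))  # right lane peak
    return left_base, right_base

# Synthetic 100x200 binary image with two vertical "lane lines" near x=40 and x=160.
img = np.zeros((100, 200), dtype=np.uint8)
img[:, 38:42] = 1
img[:, 158:162] = 1
print(lane_base_positions(img))  # two x positions, near 40 and 160
```

The sliding-window search then starts from these two positions and follows each line up the image, re-centering the window on the detected pixels.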

Which Are The Best ROS-Ready Robot Arms Under $1000?

I have this 6-axis robotic arm with very cheap servo motors. When I bought it, I didn’t think about what applications I would use it for; I bought it because it was cheap and I can control it with an Arduino Mega and ROS.

The problem is that I want to build a robotic arm capable of identifying objects (like an apple or a pear), picking the fruit from a tree, and putting it in a basket near the arm. For these operations, I need an arm with a relatively solid structure and some powerful servo motors. In conclusion, the kit I have doesn’t help me.

The second plan is to buy a ROS-ready robotic arm with servo motors that can handle a weight of around 200 grams. Because this is a personal project with a limited budget (a maximum of $1000), I searched for and found the following three robotic arms that would fit my project.

The first option is a PhantomX robotic arm with 4 degrees of freedom and a gripper with a rated holding strength of up to 500g, while the wrist itself can lift up to 250g horizontally.

The second option is also a PhantomX arm that can handle the same weight as the first one, but it adds a fifth degree of freedom, a greater range of action, and up to 300 degrees of motion.

The third option is the most expensive, and I may be surprised not to receive it within the next few months, when I need it. It’s the Niryo One; for now, the arm can only be pre-ordered. The combination of Raspberry Pi, Arduino, ROS, and parts that can be printed with a 3D printer attracts me a lot.

Robots in Agriculture: Present and Future

This is a guest post by Jack Simmer (a writer for DO Supply)

Robots are gradually changing every industry and agriculture isn’t an exception. The use of robotics in this field isn’t widespread yet. However, it’s expected to grow significantly by 2020-2028.

Why There Aren’t Many Robots in Agriculture Now (But Will Be More in the Future)

The most important reason that prevents the use of robots in agriculture today is the fact that the technology isn’t ready yet. The majority of automation solutions are either in the testing or development stage and have far to go before they can be commercialized.

This technology progresses slowly because of the high costs and complexities involved. At the moment, the level of vision technology isn’t high enough to create an efficient robot for harvesting or weeding. The trick is that the robot’s vision system must be taught not only to identify different objects but also to analyze them and determine which should be removed.

There are a few machines capable of doing this, such as a tomato-harvesting robot from Panasonic and a cucumber harvester currently being developed by scientists in Germany. However, neither is commercially available yet.

The other issue that makes the development of agricultural robots so difficult is the fact that they must be taught to perform their tasks with extreme gentleness and accuracy. While surgical robotics has proven that accuracy isn’t an issue for a machine, automating a fresh-fruit-harvesting robot proves to be a much greater challenge. One can certainly build a well-calibrated teleoperated robot to pick even as soft a fruit as grapes. However, yet again, the technology isn’t at the level where it can be mass-produced and used by agricultural businesses.

There’s also a debate going on whether one should create one big multifunctional robot or multiple small robots that will fulfill various tasks. The second solution seems like a more efficient option at the moment. The main concerns are the weight of the device and its maneuverability. Lighter and smaller robots have a lower risk of damaging the crops and soil.

Money is also a challenge for the implementation of robots in agriculture. The tech available now is too expensive to make these solutions viable for most businesses. However, as this field develops rather fast, we can expect to see commercially available agricultural robots within 1-2 years. After all, some types of them exist and are used even today.

Types of Robots in Agriculture

Robots can perform a variety of tasks and make the business of growing crops much less taxing for humans. The main areas for the implementation of robotics in agriculture are harvesting, weeding, mowing, pruning, seeding, spraying, sorting, and packing.

Some types of robotics that are already used include drones (for monitoring and spraying) and automated tractors. Note that today’s tractors still require a lot of human input at the controls. However, these machines are getting more advanced and are expected to become fully autonomous by the late 2020s.

At the moment, drones are the leading robots in agriculture. They are extremely cost-efficient and are widely used by small farms. The reason for this is undoubtedly the fact that drone technology has become extremely commoditized and therefore affordable.

Harvesters are also getting out there as these machines are in the highest demand due to the inefficiency of picking fresh fruit by hand. The harvesting of seeds is nearly 100% motorized these days. However, only a few strawberry-picking machines are available commercially and even those require the redesign of strawberry farms to function efficiently.

In the coming years, we can expect to see much more work invested into the creation of robot-harvesters that will completely eliminate the need for back-breaking labor inherent to this low-paying and extremely difficult task.

The demand for robots in agriculture grows by the day and scientists respond to it by creating more and more advanced robotic solutions.

4 and 6 Axis Arduino Robot Arm Kits

A robotic arm may seem complicated to build and control: it involves learning how to program a microcontroller to drive servo motors through repetitive tasks. But you can learn to do it quickly using robotic arm kits.

I’ve seen a lot of robotic arm kits around the web in the last year, but the ones below are my favorites today. The robotic arms in this article have 4 or 6 degrees of freedom to suit any project.

  • 4DOF Robot Arm with Remote Control PS2

    The robotic arm kit from Banggood is controlled with two PS2 joysticks. It’s a simple way to control the arm and does not involve running advanced code on the Arduino board.

    The range of applications for such a kit is small compared to a programmable kit, but for the price of $39.99 it is a good start for school students. It comes with a manual and a guide for installing the code on the Arduino board.

  • LewanSoul LeArm 6DOF

    This robot arm is made entirely of metal (aluminum) and can lift a weight of about 250 grams.

    A Bluetooth module is added to the main board of the robot so you can control the arm with a smartphone or tablet. There is also an application that simulates all the joints of the robot arm, so you can move it with a touch of the screen. This is handy if you do not want to use wires to control your arm; otherwise, you can use a wired remote control to move the arm on all 6 axes.

    It is neither the cheapest 6-axis robotic arm nor the most expensive. It has a price of $129.99 on Amazon. The price does not include transport costs.

  • 6-Axis Desktop Robotic Arm

    At a price of $174.99, SainSmart offers a robotic arm made from simple components available to anyone, such as PVC pipe. Such an approach is very good for DIY users, who can easily change the structure of the arm. The arm can be used for applications like pick and place, palletizing, and more.

    The robotic arm requires an external power supply, other than the 5V DC from Arduino Uno.

    Another good part of this kit is the documentation. Besides the wiki, you can find a lot of projects that use the robotic arm for different applications like pick and place an object or object detection.

  • uArm Swift Pro

    The range of applications for the uArm Swift Pro is large compared to the other kits. With a repeatability of 0.2mm and a maximum payload of 500g, the arm is suitable for everything from pick-and-place applications to 3D printing.

    This is not a cheap kit. It has a price of $1,129.95 on Sparkfun. The arm is open-source and controlled by an Arduino Mega 2560 board. For documentation, you can access this link.

What Robotics vs. Artificial Intelligence Means for Developers

This is a guest post by Josephine Perry.

Despite technically referring to two separate fields and ideas, many people often use the terms “robotics” and “artificial intelligence” interchangeably. That’s understandable considering that people who work in robotics often implement artificial intelligence, and vice versa.

However, the fields are not exactly the same. While there is often some overlap between them, by understanding their key differences, you’ll be better-equipped to comprehend the latest developments in both industries.

Many people confuse artificial intelligence and robotics because science-fiction TV shows and movies often depict robots as being equipped with AI.

In real life, a robot doesn’t need to be able to “think” to still qualify as a robot. Essentially, a robot is simply a machine that can perform tasks autonomously, or nearly autonomously. Robots are also programmable: the people who create them develop or use programs that determine their functions.

Granted, some could (and do) argue that because a robot must at least be able to operate semi-autonomously, it technically is “thinking” to a degree when it’s in operation. That type of thinking isn’t always very sophisticated, though.

A machine that doesn’t solve problems or acquire new knowledge could still qualify as a robot if it’s able to complete a task it’s been programmed for. In other words, while robots often do possess a form of artificial intelligence, they don’t have to.

Artificial Intelligence
Artificial intelligence is a branch of computer science. One of the key differences between artificial intelligence and robotics is simple: an AI doesn’t need to necessarily interact with the physical world.

Artificial intelligence algorithms solve the kinds of problems that usually require some degree of human insight or reflection. Thus, an AI could be used for customer service purposes, as is the case with most forms of chatbot technology utilized by brands.

That doesn’t mean the same AI would qualify as a robot. If it’s just a computer program playing a game in a virtual environment, it doesn’t have the essential physical-world presence that robots must have.

Yes, an AI could be part of a robot; this is becoming much more common as both technologies continue to develop and improve. However, it’s only one part of a much larger system. A robot isn’t a robot without sensors, actuators, and other components that work together to ensure the machine performs tasks as intended.

Programming Options
To further understand how robotics and artificial intelligence differ, it helps to consider an example of a robot that would use both AI and non-AI programming.

Imagine a robot that could pick up objects and identify them. The programming that allows the machine to pick up an object wouldn’t require an artificial intelligence algorithm to do so. To identify the object, though, the robot would have to “see” it with a camera, then use machine learning principles to determine what it is; this does require an AI program.

That’s why more specialists in both fields are beginning to work together. Robotics gives AI the chance to interact more directly with the real world, while AI expands on the existing capabilities of robots. Together, they may soon make those famous sci-fi movie robots a reality.

ROS 2 Ardent Apalone was officially released. I made a list of 5 reasons why you should use it for your robot.

Coincidentally or not, after 10 years of ROS 1, the Open Source Robotics Foundation has launched a new version called ROS 2. ROS 2 (code name “Ardent Apalone”; Apalone is a genus of turtles in the family Trionychidae) was officially released at the end of 2017. The release has gone a little unnoticed by regular ROS users, which is understandable since there are few articles online about it.

So in this article, I will try to describe why Ardent Apalone appeared and which gaps left by ROS 1 are covered by the new ROS 2 version.

Before going into the subject, I will remind you that ROS (the Robot Operating System) is not an operating system as we usually understand the term. ROS (or ROS 1) is a solution designed to be hosted by an operating system like Linux; or, as most people call it, a meta-operating system. And of course, it’s designed for robots.

Like ROS 1, ROS 2 is a network of nodes that allows the components used in a robot to communicate and exchange information. So far, nothing new; everything is the same as we know it today.

One of the reasons behind launching a completely new version (ROS 2) rather than improving ROS 1 is the scale of the changes to the framework. The team that developed ROS 2 chose to implement the new features safely in a new framework; they did not want to alter ROS 1 and risk the performance and stability of its current versions. From my point of view, it’s a wise decision, especially because there is a plan to allow ROS 1 nodes and ROS 2 nodes to work together on the same robot. So there will not be significant changes for systems that will work with both ROS variants.

Below I made a list of the new features of ROS 2.

  1. Three compatible operating systems
    One of the new things is that besides Linux, ROS 2 is compatible with Windows 10 and Mac OS X 10.12 (Sierra). While support for OS X is not new (officially, ROS 1 was compatible with OS X as an experimental platform), support for Windows is something new for ROS.
  2. Real-time support
    ROS 1 was not designed for real-time applications. The goal of ROS 1 was to create a simple system that could be re-used on various platforms; in other words, the use of ROS led to a significant reduction in robot development time.

    A real-time system must update periodically to meet deadlines. The tolerance to errors is very low for these systems.

    The example below is used by the ROS team to describe a situation when a system needs real-time support.

    A classic example of a controls problem commonly solved by real-time computing is balancing an inverted pendulum. If the controller blocked for an unexpectedly long amount of time, the pendulum would fall down or go unstable. But if the controller reliably updates at a rate faster than the motor controlling the pendulum can operate, the pendulum will successfully react to sensor data to stay balanced. [source]

    In other words, real-time support is about computation delivered at the correct time, not about raw performance. A system that fails to send a response in time is as bad as one that gives a wrong response. This new feature is very useful in safety- and mission-critical applications such as autonomous robots and space systems.
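The deadline idea can be illustrated with a toy control loop (plain Python, not real ROS 2 code; the 50 ms period is an arbitrary assumption): every iteration must finish within its period, and each overrun counts as a missed deadline.

```python
import time

PERIOD = 0.05  # 50 ms control period (an assumed value for this sketch)

def run_control_loop(iterations, work):
    """Run `work()` once per period and count missed deadlines."""
    missed = 0
    next_deadline = time.monotonic() + PERIOD
    for _ in range(iterations):
        work()  # e.g. read sensors, compute a motor command
        now = time.monotonic()
        if now > next_deadline:
            missed += 1  # in the pendulum example, the pendulum starts to fall here
            next_deadline = now + PERIOD
        else:
            time.sleep(next_deadline - now)  # wait out the rest of the period
            next_deadline += PERIOD
    return missed

fast = run_control_loop(10, lambda: None)            # trivial work: meets every deadline
slow = run_control_loop(3, lambda: time.sleep(0.2))  # 200 ms of work: misses every deadline
print(fast, slow)
```

A real-time system is one where the `missed` counter must provably stay at zero, which is exactly what ROS 1 could not guarantee.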

  3. Distributed discovery
    This new feature simplifies communication between nodes: in ROS 2, nodes no longer need the master node to exchange messages. If you run a node written in C++ and another in Python (a talker and a listener), the nodes will identify each other and start communicating automatically. You may be wondering how nodes find each other if there is no master node to handle registration. In ROS 2, the role of the master node was taken over by the ROS_DOMAIN_ID environment variable: when a ROS 2 node is launched, it makes its presence known on the network to the other nodes that share the same ROS domain.
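A toy model of this domain-scoped discovery (plain Python; the class and names are my illustration, this is not how DDS actually works on the wire):

```python
# Toy model of domain-scoped discovery: nodes "announce" themselves on a
# shared medium and discover only the peers that share their ROS_DOMAIN_ID.

class DiscoveryMedium:
    """Stands in for the local network segment."""
    def __init__(self):
        self.announcements = []  # list of (domain_id, node_name)

    def announce(self, domain_id, node_name):
        self.announcements.append((domain_id, node_name))

    def discover(self, domain_id, node_name):
        """Peers visible to `node_name`: same domain, excluding itself."""
        return sorted(name for dom, name in self.announcements
                      if dom == domain_id and name != node_name)

net = DiscoveryMedium()
net.announce(0, "talker")       # C++ talker launched with ROS_DOMAIN_ID=0
net.announce(0, "listener")     # Python listener, same domain
net.announce(7, "other_robot")  # a node in a different domain

print(net.discover(0, "talker"))  # the listener is visible; other_robot is not
```

No central registry is consulted: each node filters the announcements it hears by domain, which is the essence of distributed discovery.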
  4. Node lifecycle management

    Managed nodes are scoped within a state machine of a finite amount of states. These states can be changed by invoking a transition id which indicates the succeeding consecutive state. [source]

    The most important thing is that a managed node presents a known interface and is executed according to a known life cycle state machine. This means the developer can choose how to handle the node’s life cycle.
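The life cycle of a managed node can be sketched as a small finite state machine. The primary states and transitions below follow the ROS 2 lifecycle design (unconfigured, inactive, active, finalized); the Python class itself is just an illustration:

```python
class ManagedNode:
    """Toy managed node: its state changes only via known transitions."""

    # (current_state, transition) -> next_state, per the ROS 2 lifecycle design
    TRANSITIONS = {
        ("unconfigured", "configure"): "inactive",
        ("inactive", "activate"): "active",
        ("active", "deactivate"): "inactive",
        ("inactive", "cleanup"): "unconfigured",
        ("unconfigured", "shutdown"): "finalized",
        ("inactive", "shutdown"): "finalized",
        ("active", "shutdown"): "finalized",
    }

    def __init__(self):
        self.state = "unconfigured"

    def trigger(self, transition):
        """Apply a transition; invalid ones are rejected, leaving the state unchanged."""
        key = (self.state, transition)
        if key not in self.TRANSITIONS:
            return False
        self.state = self.TRANSITIONS[key]
        return True

node = ManagedNode()
node.trigger("configure")         # unconfigured -> inactive
node.trigger("activate")          # inactive -> active
print(node.state)                 # active
print(node.trigger("configure"))  # False: not a valid transition from "active"
```

Because every managed node exposes the same states and transitions, a supervisor can bring up, pause, or tear down any node without knowing what it does internally.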

  5. Security
    ROS 1 had no security layer at all, so with ROS 2 we can finally talk about security. ROS 2 replaces the ROS 1 transport layer with an industry-standard transport layer that includes security: the Data Distribution Service (DDS).