I have at home several boxes full of sensors, shields, DC motors, a 3D printer that doesn’t work (so tons of stepper motors), six robot kits, and many more accessories. Also, I have a Samsung Galaxy S4, which still works but I don’t use it. So, what should I do with all of these?
This robot rover project opened my eyes and gave me a good idea. I have an Arduino UNO (actually, I have five), an ESP8266 that I hope still works, and several 4WD robot kits that can be used to build a rover-like robot controlled from an Internet browser through an HTML interface.
Also, I can use the old Galaxy S4 to broadcast video and audio from the robot to the web page.
As the designer of this project says, there are a lot of robotics kits available in stores at different prices and with various features. As a roboticist, I like to try something new and use what I have to build different things.
The Spirit Rover is the next big step in advanced robots for learning, teaching, and all-around hacking fun. The robot is outfitted with up to three different computing processors, built with high-quality components, and fit into an iconic form factor that any tech nerd can appreciate!
Spirit is a perfect starting point for students and hobbyists looking for an expandable and full featured robot platform. Whether you’re new to coding or involved in serious robotics research, the Spirit Rover has something for you.
Learn and expand your Python coding knowledge
Learn and expand your C/C++ Arduino skills
Learn and apply computer vision
Design your own autonomous rover missions
Learn and expand advanced Linux skills
Programmed with Python and Arduino
Want to learn to code in Python and/or Arduino? Whether you’re new to programming or a pro, the capabilities of the Spirit Rover hardware will allow you to grow and apply your skills. Many combinations of programming are possible. Write your code using Python and C/C++ on the Raspberry Pi, or write your code in C/C++ using the free and open source Arduino environment. Our easy to use functions allow seamless communication between the two boards.
Three Computer Boards in One Robot
The Spirit Rover robot includes three different computers, just like many other advanced robots you’ll find in the real world. You’ll learn how these more advanced systems really work at the low level.
A Raspberry Pi computer will handle most of your processing. Though it is optional, the Pi is a powerful computer capable of doing many things at once, similar to the computer inside a tablet or small laptop.
An Arduino compatible processor can be used alone or together with the Pi. This is the same processor as found on the popular Arduino UNO board. It is also the same processor (and runs the same code!) as the processor on our Ringo, Wink, and Plumduino boards.
A Microchip PIC processor handles the low level processing on the robot. It does things like sending pulse signals to the servos, reading light sensors, and managing the power system. It is pre-loaded with code. Normally you won’t play with this code on your own, but it is still open and hackable if you want to customize it.
These guys, the MegaBots team, are preparing for the world’s first Giant Robot Duel between MegaBots and Suidobashi Heavy Industry of Japan. How do they prepare for the duel? By destroying a giant robot.
This is not just a robot ready to get dirty. It carries two WiFi repeater boxes that, once released, create a WiFi network and keep the connection with the robot alive. This is helpful in situations where the WiFi signal is weak or blocked, like in tunnels.
In this tutorial, Lukas Biewald (a former Stanford Robotics Lab engineer) shows us how he built two robots that can run deep learning for object recognition.
Both robots use the open source software library TensorFlow. The library comes with a prebuilt model called “inception” that performs object recognition.
Deep learning and a large public training data set called ImageNet have made an impressive amount of progress toward object recognition. TensorFlow is a well-known framework that makes it very easy to implement deep learning algorithms on a variety of architectures. TensorFlow is especially good at taking advantage of GPUs, which in turn are also very good at running deep learning algorithms.
Today I’m reminded of Let’s Make Robots, a community for roboticists where some time ago I could find exciting projects. The community has been almost dormant since Let’s Make Robots became part of RobotShop, and this is reflected in the number of interesting projects published by community members.
Today is my lucky day. I found this rover project, equipped with sensors and smart enough to navigate autonomously around a particular area.
The list of components is here:
1x Self-made chassis
6x Chinese 12V gear motors, 120RPM
6x Modified brass hubs with 4mm shaft hole
6x RC car wheels
6x Tamiya RC offroad rims CC01
1x Raspberry Pi 2
1x PicoBorg Reverse
1x Enviro pHAT
4x HC-SR04 ultrasonic modules
1x 12V 3A UBEC
1x 12V LiPo 12000mAh, modified charger
1x Self-made LED setup with HD44780 shift register
Ray Kurzweil is one of the greatest thinkers of our time, and almost all the time I agree with what he says about robots and AI. This time I’ll make a shift, and I’ll tell you why I disagree with what he said about how we can control AI.
By creating safeguards and standards in advance, we can better defend against negative consequences. As an example, Kurzweil points to the 1975 Asilomar Conference, a meeting that sought to define the ethical boundaries of biotech research before it reached its full potential. He believes a similar approach might work for AI and other exponential technologies.
Kurzweil brings safeguards and standards into the discussion, something like “the ethical boundaries of biotech research.” Well, how many of you have the means to do biotech research in a garage or at home? Probably few, if any, of you have the resources to do that. So this category of researchers is not a significant danger, and the ethical boundaries can work.
Instead, any of us can build AI at home. And any of us can apply AI to a robot at low cost.
So, “creating safeguards and standards in advance” is just a beginning, an addition to the three laws of robotics. We are still far from being able to protect humans from artificial intelligence.
When you use the delay() function in your sketch, the program stops: it waits out the delay before moving on to the next line of code. During this dead time you can’t process the input data from sensors, and you can’t update the outputs either.
The delay() function is easy to use, but it’s only good if nothing else needs to happen during the delay. Otherwise, you have to use millis().
millis() can seriously improve your project when you have to run multiple actions simultaneously. It’s the function that lets you do multitasking on an Arduino.
Working with delay() is pretty simple: it accepts a single number as an argument, representing a time in milliseconds. Using millis() takes a little extra work by comparison.
Calling millis() in an Arduino sketch returns the number of milliseconds that have elapsed since the program started running.
Below is an example of the millis() function used with the HC-SR04 ultrasonic sensor to run an autonomous robot that detects and avoids obstacles.
//get the sensor distance for every 200 millis
And here is how the same routine looks when the delay() function is used instead: