Robots in Agriculture: Present and Future

This is a guest post by Jack Simmer (a writer for DO Supply)

Robots are gradually changing every industry, and agriculture is no exception. The use of robotics in this field isn’t widespread yet, but it’s expected to grow significantly between 2020 and 2028.

Why There Aren’t Many Robots in Agriculture Now (But Will Be More in the Future)

The most important reason robots aren’t used in agriculture today is that the technology simply hasn’t been created yet. The majority of automation solutions are still in the testing or development stage and have far to go before they can be commercialized.

This technology progresses slowly because of the high costs and complexities involved. At the moment, vision technology isn’t advanced enough to build an efficient harvesting or weeding robot. The trick is that the robot’s vision system must be trained not only to identify different objects but also to analyze them and determine which should be removed.

There are a few machines capable of doing this, such as a tomato-harvesting robot from Panasonic and a cucumber harvester currently being developed by scientists in Germany. However, neither is commercially available yet.

The other issue that makes the development of agricultural robots so difficult is that they must be taught to perform their tasks with extreme gentleness and accuracy. While surgical robotics has proven that accuracy isn’t an issue for a machine, automating a fresh-fruit-harvesting robot is a much greater challenge. One can certainly build a well-calibrated teleoperated robot that picks even a fruit as soft as grapes. However, yet again, the technology isn’t at the level where it can be mass-produced and used by agricultural businesses.

There’s also an ongoing debate about whether to build one big multifunctional robot or multiple small robots that each fulfill different tasks. The second option seems more efficient at the moment: the main concerns are the weight of the device and its maneuverability, and lighter, smaller robots have a lower risk of damaging crops and soil.

Money is also a challenge for the adoption of robots in agriculture. The tech available now is too expensive to make these solutions viable for most farming businesses. However, as this field develops quickly, we can expect to see commercially available agricultural robots within one to two years. After all, some types exist and are in use even today.

Types of Robots in Agriculture

Robots can perform a variety of tasks and make the business of growing crops much less taxing for humans. The main areas for the implementation of robotics in agriculture are harvesting, weeding, mowing, pruning, seeding, spraying, sorting, and packing.

Some types of robots already in use include drones (for monitoring and spraying) and automated tractors. Note that today’s tractors still require a lot of human input at the controls. However, these machines are becoming more advanced and are expected to be fully autonomous by the late 2020s.

At the moment, drones lead the way among agricultural robots. They are extremely cost-efficient and widely used by small farms, undoubtedly because drone technology has become commoditized and therefore affordable.

Harvesters are also starting to appear, as these machines are in the highest demand due to the inefficiency of picking fresh fruit by hand. The harvesting of seed crops is nearly 100% mechanized these days. However, only a few strawberry-picking machines are commercially available, and even those require strawberry farms to be redesigned to function efficiently.

In the coming years, we can expect to see much more work invested into the creation of robot-harvesters that will completely eliminate the need for back-breaking labor inherent to this low-paying and extremely difficult task.

The demand for robots in agriculture grows by the day and scientists respond to it by creating more and more advanced robotic solutions.

4 and 6 Axis Arduino Robot Arm Kits

A robotic arm may seem complicated to build and control: it involves learning to program a microcontroller to drive servo motors for repetitive tasks. But you can pick it up quickly using robotic arm kits.

I’ve seen a lot of robotic arm kits around the web in the last year, but the ones below are my current favorites. The robotic arms in this article have 4 or 6 degrees of freedom to suit almost any project.

  • 4DOF Robot Arm with Remote Control PS2

    The robotic arm kit from Banggood is controlled with two PS2 joysticks. It’s a simple way to control the arm and doesn’t require writing advanced code for the Arduino board.

    The range of applications for such a kit is small compared to a programmable kit, but at a price of $39.99, it is a good start for school students. It comes with a manual and a guide for installing the code on the Arduino board.

  • LewanSoul LeArm 6DOF

    This robot arm has an all-metal aluminum construction and can lift about 250 grams.

    A Bluetooth module on the main board lets you control the arm from a smartphone or tablet. The companion app simulates all the joints of the arm, so you can move it with a tap on the touchscreen, which is handy if you don’t want to control the arm over wires. Otherwise, you can use a wired remote control to move the arm on all six axes.

    It is neither the cheapest 6-axis robotic arm nor the most expensive: it sells for $129.99 on Amazon, shipping not included.

  • 6-Axis Desktop Robotic Arm

    At a price of $174.99, SainSmart offers a robotic arm made from simple components available to anyone, such as PVC pipe. This approach is great for DIY users, who can easily modify the structure of the arm. The arm can be used for applications like pick and place, palletizing, and more.

    The robotic arm requires an external power supply, beyond the 5 V DC that the Arduino Uno provides.

    Another good part of this kit is the documentation. Besides the wiki, you can find a lot of projects that use the robotic arm for applications like pick-and-place or object detection.

  • uArm Swift Pro

    The range of applications for the uArm Swift Pro is large compared to the other kits. With a repeatability of 0.2 mm and a maximum payload of 500 g, the arm is suitable for everything from pick-and-place applications to 3D printing.

    This is not a cheap kit: it sells for $1,129.95 on SparkFun. The arm is open source and controlled by an Arduino Mega 2560 board. For documentation, you can access this link.

Guide to Buy a Robot Lawn Mower In 2018

If you are looking for a new robot to mow your lawn, or to upgrade an old one, in this article I’ve made a list of robot lawn mowers that match the gardening trends of 2018, along with trends in technology and ecology.

Well, after many hours of searching for the perfect garden robot of 2018, I found that garden robot manufacturers have left out almost all of the current technology trends, such as GPS navigation and computer vision. They have also neglected to differentiate themselves from competitors: most models available on the market offer roughly the same features.

Under these circumstances, I have made a collection of robots that have one or more specifications that fit the needs of a garden in 2018.

Let’s take them one at a time:

Trends in gardens

There are some trends in 2018, but I will refer strictly to the trends in the area of the house garden where such a robot can be used – that is, the lawn area.

The key here is ‘balancing nature and nurture’, so you’re advised to ‘sit back, relax and reflect on the beauty of your garden’s natural imperfections’. Overgrown perennials, moss-covered stones, rusty iron gates and weathered pots are in keeping with this trend. [source]

The word “imperfection” is the key for the garden robot of 2018. As a challenge, the robot must know how to navigate autonomously among stones and perennial plants. In addition, it must be able to cut the grass as close as possible to an irregular surface, such as a stone measuring anywhere from a few centimeters to tens of centimeters. In this case, it is not enough for the robot to navigate randomly within a bounded perimeter. It also has to use sensors to avoid obstacles while cutting the grass around the small islets of stones and perennials.

Outdoor entertaining and kitchen areas will be a key trend for Spring/Summer 2018 – perfect for those of us who lack space in our kitchens or dining rooms, as we can move entertaining friends and family outside. Create a dedicated area with comfy furniture and mood lighting, complete with a sunken fire pit, barbecue or pizza oven. [source]

Another trend is a garden area dedicated to a table, chairs and a barbecue. Usually, such areas are near the house and share one or more edges with the lawn. In such areas, a lawn perimeter delimited by wires can become inconvenient if it is not well mounted. The convenient solution would be a grass-cutting robot capable of distinguishing between a lawn area and a relaxation area.

“Food gardening continues to be incredibly popular with gardeners of all ages, and for me, a big part of the fun in food gardening is trying new edibles; from quirky cucamelons and burr gherkins to super-sweet ground cherries. From chickpeas and edamame to heat-tolerant, exotic greens like magenta spreen, sweet potato leaves, and amaranth,” says author Niki Jabbour [source]

Even though a blade of grass looks very different from, say, a ground cherry, we need a robot that can distinguish between what needs to be cut and any other plant that is part of the garden decoration. The technological solution would be a camera to capture images and an artificial intelligence algorithm to tell grass apart from other plants. Such a computer vision system is not very difficult to implement and does not add a very high cost.

So far I’ve gone through three important trends for a robot lawn mower. To be as clear as possible, let’s look at the ideal specifications for a robot capable of maintaining the lawn in a 2018 garden:

  1. the robot can avoid obstacles and work alongside irregular surfaces;
  2. the robot can identify the lawn area and the relaxation areas in the garden;
  3. the robot can distinguish grass from other plants.

I’ve looked for the above features in over 15 newer and older models of lawn mower robots. The results were not what I expected. I realized that robot manufacturers have mostly mixed together technologies already in large-scale use and released new models that don’t add anything significant over the old ones. I didn’t find anything revolutionary in this area.

The list below includes lawn mowing robots that meet only some of the features identified above. For each robot, I made a list of the specifications worth mentioning. This information can help you identify the model that suits your garden.

But let’s not forget that, in addition to the 2018-specific features, there is a list of very important basic features. Fortunately, all lawn mower robots include these as standard. One of them, and perhaps the most important, is the grass cutting system: such a robot must produce a fine mulch that acts as a lawn fertilizer and eliminates the need for raking. The robot must also be able to recharge autonomously, without human intervention.

WORX WR105SI 20 V S450

  • The uncut fringe area of the garden has been reduced from 20 cm to 2.5 cm!

    Placing the cutting blade at the edge of the robot allows it to cut the grass very close to the edge of an irregular surface.

  • Shock sensor system. You do not have to remove items such as garden furniture or toys from your garden: the mower detects obstacles and moves around them.

    This model can detect and avoid objects while still cutting the grass a few centimeters away from them. It is probably the best robot for lawn maintenance in 2018.

Husqvarna 967622505 Automower 430X

  • Smart technology adapts the amount of mowing to the lawn’s growth rate, enables spot mowing for areas of longer grass, and guides the mower through narrow passages.

    This robot can detect and avoid obstacles, but the manufacturer does not specify how close to obstacles the grass is cut.

Bosch Indego 400 Connect

  • Multi-sensors detect obstacles and manoeuvre around them before continuing cutting on the calculated route

    Just like the Husqvarna model, this robot detects obstacles and maneuvers around them, without specifying the cutting distance.

Ambrogio Robot 4.0 BASIC

  • This robot has a soft bumper in front to detect and avoid the obstacles.

Technology trends

In this category, I was expecting a wider range of robots that include technology trends such as solar charging systems (the robot works outdoors and can recharge the batteries using solar energy), GPS systems, and computer vision.

GPS systems are usually only accurate to a few meters, which is why wire is used to delimit the robot’s perimeter. In the meantime, though, things have evolved, and GPS systems with centimeter-level accuracy have begun to appear. This is such a system. Such a system can indeed raise the cost of the robot by several hundred euros, and maybe that is also why most robot manufacturers avoid it. In conclusion, 2018 is unlikely to bring a revolution in the autonomous navigation of garden robots without a perimeter delimitation system such as the well-known wired one.

I found two robots that integrate technologies or systems from the list above. One of them is planned for release this year: the Automower Solar Hybrid, built by Husqvarna and equipped with a solar recharging system.

The second model is also built by Husqvarna; it’s called the Automower 430X and integrates a GPS navigation system.

Recyclable materials

There is a tendency to use recyclable materials in all industries. Broadly speaking, robots made from recyclable materials should not be just a marketing concept; it should become the rule for all manufacturers. In this category, lawn mower robots are represented by a single model built of recyclable materials. Yes, only one manufacturer has thought of producing a robot made of recyclable materials, or at least that’s all I found. It is the Automower Solar Hybrid by Husqvarna.

A short conclusion

It’s a lot of work to keep a garden maintained all year round, and a lawn mower robot is not enough to replace the gardener. It solves only a part of what maintaining a garden means, but for many of us, it takes over the most boring part of gardening. In addition, a lawn mower robot is an effective way to keep a lawn looking good throughout the year.

We are still far from having a robot capable of differentiating grass from other plants or navigating without the classic wire-based solution. For all this, we need a change in vision from the companies that build such robots.

What Robotics vs. Artificial Intelligence Means for Developers

This is a guest post by Josephine Perry.

Despite technically referring to two separate fields and ideas, many people often use the terms “robotics” and “artificial intelligence” interchangeably. That’s understandable considering that people who work in robotics often implement artificial intelligence, and vice versa.

However, the fields are not exactly the same. While there is often some overlap between them, by understanding their key differences, you’ll be better-equipped to comprehend the latest developments in both industries.

Robotics
Many people confuse artificial intelligence and robotics because science-fiction TV shows and movies often depict robots as being equipped with AI.

In real life, a robot doesn’t need to be able to “think” to still qualify as a robot. Essentially, a robot is simply a machine that is able to perform tasks autonomously, or nearly autonomously. They’re also programmable. People who create them develop or use programs to determine their functions.

Granted, some could (and do) argue that because a robot must at least be able to operate semi-autonomously, it technically is “thinking” to a degree when it’s in operation. That type of thinking isn’t always very sophisticated, though.

A machine that doesn’t solve problems or acquire new knowledge could still qualify as a robot if it’s able to complete a task it’s been programmed for. In other words, while robots often do possess a form of artificial intelligence, they don’t have to.

Artificial Intelligence
Artificial intelligence is a branch of computer science. One of the key differences between artificial intelligence and robotics is simple: an AI doesn’t need to necessarily interact with the physical world.

Artificial intelligence algorithms solve the kinds of problems that usually require some degree of human insight or reflection. Thus, an AI could be used for customer service purposes, as is the case with most forms of chatbot technology utilized by brands.

That doesn’t mean the same AI would qualify as a robot. If it’s just a computer program operating in a virtual environment (playing a game, for instance), it doesn’t have the essential physical-world presence that robots must have.

Yes, an AI could be part of a robot; this is becoming much more common as both technologies continue to develop and improve. However, it’s only one part of a much larger system. A robot isn’t a robot without sensors, actuators, and other components that work together to ensure the machine performs tasks as intended.

Programming Options
To further understand how robotics and artificial intelligence differ, it helps to consider an example of a robot that would use both AI and non-AI programming.

Imagine a robot that could pick up objects and identify them. The programming that allows the machine to pick up an object wouldn’t require an artificial intelligence algorithm. To identify the object, though, the robot would have to “see” it with a camera, then use machine learning principles to determine what it is; this does require an AI program.

That’s why more specialists in both fields are beginning to work together. Robotics gives AI the chance to interact more directly with the real world, while AI expands on the existing capabilities of robots. Together, they may soon make those famous sci-fi movie robots a reality.

Please help. What do you use as your third hand?

Some time ago I bought a third-hand kit from SparkFun. Through my own mistakes I broke one of the arms, but I’ve tried to use the other one as much as possible. So far, the unbroken arm has accomplished its mission when I use it to solder thin wires.

I can’t say I’ve been very pleased with my choice. This kind of kit can really only hold wires; if you want to solder a pin to a sensor, it becomes a nightmare. So, I need a third hand that is stable and can fix in place larger components like sensors, controllers, and even small DC motors.

In this video I saw a small vise, similar to this. I think that is exactly what I need.

Has anyone used such a vise who can help me with an opinion? Any other solution is welcome, too.

Home Vise

Few Ideas About Tank Chassis for Robotics Applications

Over the years I’ve played with a series of robotic kits, including some tracked robots [one, two, three]. The experience of driving a robot tank versus a robot on wheels is, for the most part, quite different.

I like these platforms because they can overcome obstacles much more easily than a wheeled robot. For me, this was the most exciting thing about them.

The downside is that the driving speed of a robot tank is lower than that of a wheeled robot with the same DC motor specifications. Another disadvantage is the plastic tracks, which are not very durable. The good news is that they can be replaced very easily.

Since last year, a series of metal platforms with tank tracks has caught my attention. These platforms usually come with a metal chassis, two DC motors, and that’s all. The small number of included components has the advantage of letting the user choose which motor driver and controller to use. Since most platforms have two 6 V or 9 V motors, choosing the motor driver is not difficult.

Choosing the motor driver also guides the choice of controller. The options include the most widely used DIY boards: Arduino and Raspberry Pi.

Another feature of these robot platforms is space. The metal chassis and the tank tracks, which distribute the robot’s weight over its entire surface, allow the platform to be loaded with more accessories and high-capacity batteries. Whether you want to attach a robotic arm, a series of sensors, a large battery, or a webcam, all of these can be mounted on the chassis, one at a time or all at once.

Another feature of these platforms is how easily the user can assemble them. A wheeled chassis kit has many components, so assembly errors can creep in and a manual is needed to reach the final product. With a tracked chassis kit, things are simpler: there are fewer components, so the room for assembly errors is much smaller. Usually there is no need for a manual to assemble such kits.

Below I made a list of five such platforms. They are easy to assemble and to control with an Arduino board if you want a remote-controlled or autonomous robot. For a remote-controlled, Internet-controlled, or autonomous robot, the Raspberry Pi 3 (with WiFi and Bluetooth) is probably the best solution.

All the platforms are available on Amazon.

  • XiaoR Geek Big track Robot Car Chassis Smart Tank Platform | Price: $84.90
  • KOOKYE Robot Car Chassis Smart Tank Platform Metal | Price: $79.99
  • Mountain ark Tracked Robot Smart Car Platform Aluminum alloy Chassis | Price: $49.99
  • Devastator Tank Mobile Platform | Price: $108.90
  • Tracked Robot Smart Car Platform Metal Aluminium Alloy Tank Chassis | Price: $89.99

Honda Unveiled an Autonomous All-Terrain Robot Platform – 3E (Empower, Experience, Empathy)

Some interesting details:

  • the platform is fully electric;
  • the tires are airless (you do not have to worry about punctures or finding a tire repair shop);
  • the user can send commands to the robot via a smartphone/tablet or a smartwatch application;
  • the robot chassis is designed based on an ATV chassis (which leads me to think that the robot should hold up quite well against rust and dirt, and move heavy loads);
  • the batteries can be charged directly on the platform or can be detached and charged separately;
  • the platform can host a number of modules used in various fields of activity, for:
    • moving tools or materials;
    • search and rescue;
    • lawn mowing;
    • harvesting.

As a first impression, I really like Honda’s vision for an all-terrain robot. It’s based on an ATV chassis, which means it’s rugged from every point of view. Being electric, the platform can be recharged using solar panels, which is sometimes the best solution when you are isolated in the wilderness and the nearest gas station is hundreds of kilometers away.

ROS 2 Ardent Apalone was officially released. I made a list of 5 reasons why you should use it for your robot.

Coincidentally or not, after 10 years of ROS 1, the Open Source Robotics Foundation has launched a new version called ROS 2. ROS 2 (code name “Ardent Apalone”; Apalone is a genus of turtles in the family Trionychidae) was officially released at the end of 2017. The release has gone a little unnoticed by the usual ROS users, which is understandable since there are few articles online about it.

So in this article, I will try to describe why Ardent Apalone appeared and which gaps left by ROS 1 are covered by the new ROS 2 version.

Before going into the subject, I will remind you that ROS (the Robot Operating System) is not an operating system as we usually understand the term. ROS (or ROS 1) is designed to be hosted by an operating system such as Linux; as most people call it, it is a meta-operating system. And of course, it’s designed for robots.

Like ROS 1, ROS 2 is a network of nodes that allows communication and the exchange of information between the components of a robot. So far, nothing new; everything is the same as we know it today.

One of the reasons behind launching a completely new version (ROS 2), rather than improving ROS 1, is the scale of the changes to the framework. The team that developed ROS 2 chose to implement the new changes safely in a new framework: they did not want to alter ROS 1 and risk the performance and stability of its current versions. From my point of view, it’s a wise decision, especially because there is a plan to let ROS 1 nodes and ROS 2 nodes work together on the same robot. So there will not be significant changes for systems that will run both ROS variants.

Below I made a list of the new features of ROS 2.

  1. Three compatible operating systems
    One piece of news is that, besides Linux, ROS 2 is compatible with Windows 10 and OS X 10.12 (Sierra). While OS X support is not new (ROS 1 officially supported OS X as an experimental platform), Windows support is something new for ROS.
  2. Real-time support
    ROS 1 was not designed for real-time applications. The goal of ROS 1 was to create a simple system that could be reused on various platforms; in other words, using ROS led to a significant reduction in the time needed to develop a robot.

    A real-time system must update periodically to meet deadlines. The tolerance to errors is very low for these systems.

    The example below is used by the ROS team to describe a situation when a system needs real-time support.

    A classic example of a controls problem commonly solved by real-time computing is balancing an inverted pendulum. If the controller blocked for an unexpectedly long amount of time, the pendulum would fall down or go unstable. But if the controller reliably updates at a rate faster than the motor controlling the pendulum can operate, the pendulum will successfully react to sensor data to stay balanced. [source]

    In other words, real-time support is more about delivering computation at the correct time than about raw performance. A system that fails to respond in time is as bad as one that gives a wrong response. This new feature is very useful in safety- and mission-critical applications such as autonomous robots and space systems.

  3. Distributed discovery
    This new feature simplifies communication between nodes: in ROS 2, nodes no longer need a master node to exchange messages. If you run a node written in C++ and another in Python (a talker and a listener), the nodes will identify each other and start communicating automatically. You may be wondering how nodes find each other if there is no master node. In ROS 2, the role of the master node has been taken over by the ROS_DOMAIN_ID environment variable: when a ROS 2 node is launched, it announces its presence on the network to other nodes that share the same ROS domain.
  4. Node lifecycle management

    Managed nodes are scoped within a state machine of a finite amount of states. These states can be changed by invoking a transition id which indicates the succeeding consecutive state. [source]

    The most important thing is that a managed node presents a known interface and executes according to a known life cycle state machine. This means the developer can choose how the life cycle functionality is managed.

  5. Security
    ROS 1 had no security issues for a simple reason: it had no security at all. With ROS 2, we can finally talk about security. ROS 2 replaces the ROS 1 transport with an industry-standard transport layer that includes security, called the Data Distribution Service (DDS).

NVIDIA DRIVE Xavier is not bad

DRIVE Xavier is a board designed for the AI systems used in self-driving and semi-autonomous cars, so it must support a wide range of sensors plus the artificial intelligence algorithms.

According to the NVIDIA boss, the board should consume only 30 watts of power, which is not much for the number of calculations it can perform. Also, don’t forget that this board will be used mostly in electric cars, where every watt consumed counts.

The most important features:

  • includes a deep-learning accelerator;
  • includes new computer-vision accelerators;
  • can process 30 trillion operations per second;
  • 8-core CPU;
  • 512-core Volta GPU;
  • 8K HDR video processor;
  • supports the NVIDIA DRIVE software stack (AI software for autonomous driving).

PS: I’m almost sure that the price will have many zeros.

I made a police light application with a tower light and Arduino

A few days ago I bought a three-color tower light with a buzzer for visual and audible alerts. And because I’m a big kid, I thought I’d build, as a first application, a light show in the style of a police car. You can see the result at the bottom of this article.

Before getting into wiring and programming, here are some details about the product. The tower light has three colors and a buzzer, and it can be easily controlled by an Arduino board together with four N-channel MOSFETs or NPN transistors and four resistors.

The light tower is branded Adafruit and produced in China; at least, that’s what is written on the product I use in this tutorial. There is also a schematic somewhere on it, in Chinese. Thank you, Adafruit!

Everywhere I looked for information about how it works and how to control it, I was pointed to this tutorial. The tutorial is dedicated to RGB LED strips rather than a light tower. Adafruit generally produces good tutorials, so I thought I wouldn’t have to make a great effort to turn the lights on. But this time I was misled a bit by the connection schematic in the tutorial. For the NPN bipolar transistors (PN2222), I recommend you look carefully at how the three pins of the transistor are arranged, or use the schematic from this article. When I connected the light tower exactly as in Adafruit’s tutorial (the NPN schematic), the result was a tower light that just made some noise and had two lights on; obviously, the transistor pins were connected wrongly.

Let’s get to the practical side.

Components:

  • 1 X Tower Light – Red Yellow Green Alert Light & Buzzer (I bought it from here, but you can also buy it from Amazon)
  • 4 X NPN Bipolar Transistors (PN2222) (link on Amazon)
  • 4 X 100-220 Ohm resistors (link on Amazon)
  • 1 X Arduino board (I think you already have one, but in case I’m wrong, you can get one from here)
  • some wires (link on Amazon)

The schematic:

Tower light and Arduino schematic

The Arduino code:

//Constants
#define REDPIN 9
#define YELLOWPIN 10
#define GREENPIN 11
#define BUZZ 12

//Variables
int ledDelay = 50;                      // delay between ON and OFF states (ms)
unsigned long previousMillisLights = 0; // last time the light sequence ran
unsigned long intervalLights = 500;     // time between light sequences (ms)

void setup()
{
    Serial.begin(9600);
    pinMode(REDPIN, OUTPUT);
    pinMode(YELLOWPIN, OUTPUT);
    pinMode(GREENPIN, OUTPUT);
    pinMode(BUZZ, OUTPUT);
}

void loop()
{
    unsigned long currentMillis = millis();

    if ((unsigned long)(currentMillis - previousMillisLights) >= intervalLights) {
        policeLights(REDPIN);
        policeLights(YELLOWPIN);
        policeLights(GREENPIN);
        //uncomment the line below if you want to turn on the buzzer
        //digitalWrite(BUZZ, HIGH);
        previousMillisLights = currentMillis; // restart the timer
    }
}

//turn one light ON and OFF three times
void policeLights(int pin)
{
    for (int i = 0; i < 3; i++) {
        digitalWrite(pin, HIGH); // full ON (analogWrite with HIGH, i.e. a
        delay(ledDelay);         // duty cycle of 1/255, would be almost OFF)
        digitalWrite(pin, LOW);
        delay(ledDelay);
    }
}

Tower Light Demo: