
Build an Autonomous Robot

Autonomous Robots Overview

The primary components of a mobile autonomous robot are positioning, obstacle detection, navigation, and control. In order for all these components to work together effectively in an autonomous project, pre-planning is necessary. Choosing the correct sensors to supply positioning or obstacle detection, given the platform's environment, will make navigation and control implementation a much smoother process. Below we have provided a step-by-step overview of how to bring your project to different levels of autonomy, followed by corresponding item lists. Of course, if this seems a little overwhelming, our services are available! All you need to do is fill out our custom request form to start the conversation.

The Microcontroller

Oftentimes referred to as the brains of a robot, the microcontroller unit (MCU) is where all the sensor data comes together to be evaluated, converted, calibrated, and processed. The programming languages used most often are C and C++, and sometimes Python. The code uploaded to this device will decide every action and reaction that the robot executes. Therefore, it's very important to know the limitations and strengths of your MCU before an abundance of time is spent programming. This is a very important decision that must be made during the pre-planning process.

Depending on the computational power and I/O diversity required, common MCUs range from the Arduino and BeagleBone to more sophisticated standalone computers such as the Nvidia Jetson TK1 and Nvidia Jetson TX1. You can narrow down the choices by first looking at the sensor configuration required to achieve your goals. If a high-resolution 3D vision system is required, the BeagleBone is the wrong choice. Go with the TK1 or TX1 instead. If you only need a few static range sensors with encoder feedback, a TX1 would be overkill. Go with an Arduino or BeagleBone. If you're just getting started, be sure to check out our GitHub for sample code!



Using an oscillator for timing feedback

One of the simplest autonomous actions a robot can perform is driving or turning for a set amount of time. An oscillator integrated into the microcontroller can keep track of time with high accuracy. Anything from an Arduino to a custom PIC MCU is going to have multiple hardware timers waiting to be accessed and implemented. Although this form of feedback proves to be necessary in some scenarios, it is not recommended as the only return data in a closed-loop system. A robot relying on timing alone will not only be blind to potential collisions, but lost on its position as well.
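For example, a short Arduino-style sketch can use the MCU's timer (exposed through millis()) to drive forward for a fixed duration and then stop. The pin numbers and duty cycle below are placeholders for illustration and assume a simple PWM-driven motor driver, so treat this as a sketch rather than drop-in code.

// Drive forward for a fixed time using the MCU timer, then stop.
// Pin numbers and the single-direction PWM motor interface are assumptions.
const int LEFT_PWM_PIN  = 5;
const int RIGHT_PWM_PIN = 6;
const unsigned long DRIVE_TIME_MS = 3000;   // drive forward for 3 seconds

unsigned long startTime;

void setup() {
  pinMode(LEFT_PWM_PIN, OUTPUT);
  pinMode(RIGHT_PWM_PIN, OUTPUT);
  startTime = millis();                     // timestamp from the MCU's internal timer
}

void loop() {
  if (millis() - startTime < DRIVE_TIME_MS) {
    analogWrite(LEFT_PWM_PIN, 128);         // roughly 50% duty cycle forward
    analogWrite(RIGHT_PWM_PIN, 128);
  } else {
    analogWrite(LEFT_PWM_PIN, 0);           // time expired: stop both motors
    analogWrite(RIGHT_PWM_PIN, 0);
  }
}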



Adding encoders for 2D pose estimates

Adding encoders to the mix will provide the robot with knowledge of position and yaw relative to where it started. Combined with the MCU timer, you will now have access to reliable linear and angular velocity data. Of course, since encoders output raw count values, a calibration must be performed to convert these values into a more useful standard of measurement.

However, be wary of trusting the position and yaw values when relying solely on encoders. An unstable or inconsistent surface beneath the robot can lead to wheel slips and turns that won't match your initial calibration, which in turn will lead to erroneous data. The result will be a sporadic separation between the calculated pose and the actual pose. For this reason, it is recommended that most applications take advantage of the velocity data fused with other sensors instead.
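As a rough sketch of the calibration step described above, the snippet below converts accumulated encoder counts into linear and angular velocity for a differential-drive robot. The counts-per-revolution, wheel radius, and wheel base values are placeholders; substitute the numbers from your own wheel, gearbox, and encoder datasheets.

// Convert raw encoder counts into linear and angular velocity for a
// differential-drive robot. All constants are assumed example values.
const double COUNTS_PER_REV = 1024.0;   // encoder counts per wheel revolution (assumed)
const double WHEEL_RADIUS_M = 0.075;    // wheel radius in meters (assumed)
const double WHEEL_BASE_M   = 0.40;     // distance between left and right wheels (assumed)
const double PI_CONST       = 3.14159265358979;

// deltaLeft/deltaRight: counts accumulated since the last call; dt: elapsed seconds
void computeVelocity(long deltaLeft, long deltaRight, double dt,
                     double &linear, double &angular) {
  double leftDist  = 2.0 * PI_CONST * WHEEL_RADIUS_M * (deltaLeft  / COUNTS_PER_REV);
  double rightDist = 2.0 * PI_CONST * WHEEL_RADIUS_M * (deltaRight / COUNTS_PER_REV);
  linear  = (rightDist + leftDist) / (2.0 * dt);            // m/s along the robot's heading
  angular = (rightDist - leftDist) / (WHEEL_BASE_M * dt);   // rad/s about the vertical axis
}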



Line Following

Following a line is a very effective way to achieve autonomy in navigation. If all you need is a predetermined path to travel, this form of guidance might be the simplest solution. Some methods use colored tape paired with light sensors, but we recommend the magnetic variant. Adhesive-backed magnetic tape and the RoboteQ MGS1600GY Magnetic Guide Sensor (product links provided below) are a great pair to implement for a line following application. Not only does this pairing provide near-millimeter resolution, but it also offers dynamic configuration options. Check out this tech guide for an example of a standard configuration.

As shown in the video below, this sensor has the ability to identify waypoints and forks in the magnetic path. This gives you the power to choose which fork to follow and how you want to interact with waypoints. Furthermore, if a second magnetic sensor is added to the rear, the robot's angle relative to the tape can be reported back to the user. For our vectoring robots, a strafe correction factor can be applied to increase orientation accuracy.
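As a simple illustration of the control side of line following, the sketch below applies a proportional steering correction based on the sensor's reported lateral offset from the tape. The readLineOffsetMM() and setMotorSpeeds() functions are placeholders for your own sensor and motor driver interfaces, and the sign of the correction depends on how the sensor is mounted and wired.

// Proportional steering from a line sensor's lateral offset reading.
double readLineOffsetMM();                  // placeholder: tape offset, + when tape is left of center
void setMotorSpeeds(int left, int right);   // placeholder: implement for your motor driver

double Kp = 2.0;                            // steering gain, tuned experimentally
int baseSpeed = 100;                        // nominal wheel command

void followLine() {
  double offsetMM = readLineOffsetMM();
  double correction = Kp * offsetMM;        // steer back toward the tape center
  int leftCmd  = baseSpeed - (int)correction;   // tape to the left: slow the left wheel...
  int rightCmd = baseSpeed + (int)correction;   // ...and speed up the right wheel
  setMotorSpeeds(leftCmd, rightCmd);
}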





Introducing the IMU

An inertial measurement unit (IMU) is an electronic device that measures and reports a body's specific force, angular rate, and the magnetic field surrounding the body. These systems typically use a combination of accelerometers, gyroscopes, and magnetometers. This addition comes with access to linear acceleration, angular velocity, and orientation feedback. Some units will even have software support for full attitude estimates while still providing access to individual component data.

The IMU serves an important role in the autonomous robot. Few systems will ignore its ability to overcome the shortcomings of other sensors. For example, the yaw estimate derived from angular velocity and magnetic field readings can be used to counter the encoder errors discussed earlier. The accelerometer values prove to be useful when the platform is experiencing jumps in position data from sensors such as a GPS. This isn't a simple process, however. To accurately use this data to your advantage, complex algorithms such as the Kalman filter must be implemented for a complete state estimate.
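A full Kalman filter is beyond the scope of this overview, but the complementary filter below gives a feel for the idea: blend the gyro's short-term accuracy with the accelerometer's long-term stability to estimate pitch. Axis conventions depend on how the IMU is mounted, so treat this as a sketch rather than a drop-in implementation.

#include <math.h>

// Complementary filter: a lightweight alternative to a Kalman filter for fusing
// gyro and accelerometer data into a pitch estimate. ALPHA weights the gyro
// integration (good short-term) against the accelerometer angle (good long-term).
const double ALPHA = 0.98;
double pitch = 0.0;                         // filtered pitch estimate in radians

// gyroRate: pitch rate from the gyro (rad/s); ax, az: accelerations (m/s^2); dt: seconds
void updatePitch(double gyroRate, double ax, double az, double dt) {
  double accelPitch = atan2(-ax, az);       // pitch implied by the gravity vector (mounting-dependent)
  pitch = ALPHA * (pitch + gyroRate * dt)   // integrate the gyro for fast response
        + (1.0 - ALPHA) * accelPitch;       // slowly pull toward the accel angle to cancel drift
}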

Despite what the IMU brings to the table, it still has shortcomings of its own. Like encoders, it will need to be calibrated before use. The higher quality units will offer preset gyroscope, accelerometer, and magnetometer calibration functions. Most will just require the magnetometer offsets to counteract the magnetic interference of the platform it's mounted on. Regardless, if the IMU is constantly traveling through magnetically perturbed areas, the magnetometer will most likely be useless. This could be fine if the remaining components still offer reliable data.

With roll and pitch feedback comes your robot's first introduction to 3D space! It now has the capability of knowing a rough estimate of its relative 2D position and 3D orientation. However, it's still blind, so obstacles will still be an issue. A solution is discussed below.



Simple obstacle detection

The most basic form of obstacle sensor is the contact sensor, such as a limit switch. This allows your robot to know when it encounters an object by physically depressing a switch. The downsides to using a sensor like this are that it can be mechanically difficult to align the sensor correctly, and by the time it triggers you have already collided with the object. Because of this, you're much more limited in speed if you want to prevent your robot from causing damage to itself or its surroundings.

Although contact sensors are a viable failsafe, we need to be able to detect obstacles from a distance. This can be achieved with simple, static range sensors that emit infrared light or ultrasonic waves. Depending on the application, one method can serve a better purpose than the other. If your robot needs to travel outdoors in the sunlight, then ultrasonic range finders will likely be a better fit. If you require a lot of range with a defined and focused field of view, then infrared distance sensors may be more suitable. Both report back a value that represents the distance of an object in the direction it’s facing. If a robot needs to perform a simple stop action when an obstacle is detected, then these sensors will prove to be very useful.

Multiple range finders can be placed at different positions and angles to get a better view of the robot's surroundings. This is recommended if vectoring movements and tight turns are necessary. If navigating around obstacles is the goal, however, there are more advanced range sensors that are better equipped for the task.
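As an example of the simple stop action mentioned above, the Arduino-style sketch below polls an HC-SR04-style ultrasonic module (trigger/echo pins and the one-direction PWM motor wiring are assumptions) and halts the motors when an obstacle comes within a threshold distance.

// Stop the drive motors when an ultrasonic range finder sees a close obstacle.
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;
const int LEFT_PWM_PIN = 5;
const int RIGHT_PWM_PIN = 6;
const double STOP_DISTANCE_CM = 30.0;

void setMotorSpeeds(int left, int right) {
  analogWrite(LEFT_PWM_PIN, constrain(left, 0, 255));
  analogWrite(RIGHT_PWM_PIN, constrain(right, 0, 255));
}

double readDistanceCM() {
  digitalWrite(TRIG_PIN, LOW);
  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH);             // 10 microsecond pulse starts a measurement
  delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  unsigned long echoUs = pulseIn(ECHO_PIN, HIGH, 30000UL);  // round-trip time of flight
  return echoUs * 0.0343 / 2.0;             // speed of sound ~343 m/s, out and back
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
  pinMode(LEFT_PWM_PIN, OUTPUT);
  pinMode(RIGHT_PWM_PIN, OUTPUT);
}

void loop() {
  double d = readDistanceCM();
  if (d > 0 && d < STOP_DISTANCE_CM) {
    setMotorSpeeds(0, 0);                   // obstacle detected: stop
  } else {
    setMotorSpeeds(100, 100);               // path clear: continue forward
  }
}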



2D LIDAR Capabilities

A LIDAR is a very powerful tool that can be responsible for identifying both fixed and moving obstacles in the environment and their locations. A huge part of what makes LIDAR systems so effective is the software that complements them. Static obstacles can be detected and remembered using the popular Simultaneous Localization and Mapping (SLAM) algorithm. As the name suggests, SLAM can simultaneously create a map of an unknown environment using sensor feedback and position the robot within the map on the fly. Implementing this system will not only provide a robot with obstacle avoidance but accurate positioning as well.

If a project demands this level of autonomy, it is highly recommended that the Robot Operating System (ROS) be used. LIDARs produce a large volume of data that needs to be processed, and multiple software packages and tutorials are readily available in the ROS community to get SLAM up and running.

Using ROS, we have developed robots that can drive around an unknown location, generate a map, position itself on the map, and autonomously plot a course and travel between waypoints while avoiding obstacles. Here is a video of a robot using simultaneous localization and mapping (SLAM).





Adding global positioning

If an autonomous robot dares to explore the outdoors, a GPS is something to strongly consider. We discussed how LIDAR systems are great for autonomous robots, but they can have some limitations outdoors. Direct sunlight can heavily reduce accuracy and valid data points, and outdoor-rated LIDARs are very expensive. Furthermore, in vast outdoor areas, GPS modules can provide position feedback where LIDARs can't. Ideally, you'll want a system that can rely on multiple sensors. This way there's data to fall back on when a sensor fails to provide valid information.

There are multiple options to consider when looking to add a GPS into the mix of your sensor array. For autonomy, you'll want to focus on satellite-based augmentation systems (SBAS) and ground-based augmentation systems (GBAS). These systems offer the accuracy needed to be suitable for autonomy. Normal GPS modules only offer 7-10 meter accuracy, whereas SBAS offers 3-meter accuracy and GBAS can support accuracy down to 1 cm. These different systems are explained in our sensor support page.

Three meters of accuracy may sound unusable, but this isn't the case when implementing a sensor fusion algorithm. Encoder, IMU, and possibly LIDAR data should be rock solid in order to navigate efficiently; GPS data can be finicky and often unreliable, so the other sensors should be ready to compensate. Remember that accelerometer feedback is your friend here.
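Before GPS readings can be fused with encoder and IMU data, they usually need to be converted from latitude/longitude into the robot's local coordinate frame. The sketch below uses a flat-earth (equirectangular) approximation, which is generally adequate over the few hundred meters most mobile robots cover; the constants and frame conventions are assumptions for illustration.

#include <math.h>

// Convert GPS latitude/longitude (degrees) into local x/y meters relative to a
// fixed origin point, using a flat-earth approximation.
const double EARTH_RADIUS_M = 6371000.0;

void gpsToLocalXY(double lat, double lon, double originLat, double originLon,
                  double &x, double &y) {
  double latRad = lat * M_PI / 180.0;
  double dLat   = (lat - originLat) * M_PI / 180.0;
  double dLon   = (lon - originLon) * M_PI / 180.0;
  x = EARTH_RADIUS_M * dLon * cos(latRad);   // meters east of the origin
  y = EARTH_RADIUS_M * dLat;                 // meters north of the origin
}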

Besides navigation, a GPS also enables geo-fencing, a way to set hard boundaries when exploring the outdoors. When navigation paths are being planned, the robot can then treat off-limits geographic areas as obstacles.
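A geo-fence can be implemented as a simple point-in-polygon test on those local coordinates. The function below is a sketch using the classic ray-casting method; the fence vertices would come from whatever boundary you define for your site.

// Ray-casting point-in-polygon test: returns true if (x, y) lies inside the
// fence polygon described by n vertices in fenceX/fenceY (local meters).
bool insideGeofence(double x, double y,
                    const double fenceX[], const double fenceY[], int n) {
  bool inside = false;
  for (int i = 0, j = n - 1; i < n; j = i++) {
    bool crossesY = (fenceY[i] > y) != (fenceY[j] > y);   // edge spans the test point's y
    if (crossesY) {
      double xCross = (fenceX[j] - fenceX[i]) * (y - fenceY[i]) /
                      (fenceY[j] - fenceY[i]) + fenceX[i];
      if (x < xCross) inside = !inside;                   // toggle for each crossing to the right
    }
  }
  return inside;
}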



Vision System

Even if every sensor up to this point is being used, the robot could still collide with obstacles that elude the LIDAR's 2D plane. 3D vision systems, such as the Xbox Kinect and ZED stereo camera, are used to prevent such occurrences. They usually don't have the horizontal viewing angle LIDARs can provide, but they allow the navigation layer to be aware of additional hazards high and low. Please note that the Xbox Kinect shouldn't be used for outdoor applications since it relies so heavily on infrared light. The ZED is a well-qualified alternative but doesn't work well in the dark.

Vision systems are also used to identify specific objects, patterns, or colors and report back their location or change in intensity. For example, the Microscan MV-40 can be used to save the pixel count and orientation of a pattern in order to zero a robotic arm with millimeter accuracy. This would be seen in a very high precision project where every few millimeters count. Other applications include automated air hockey tables and “follow me” systems.
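The snippet below is a minimal OpenCV example (not the API of any specific product mentioned above) that finds the centroid of a brightly colored target by thresholding in HSV space. The HSV bounds are placeholder values for a roughly red/orange target and would need tuning for your target and lighting.

#include <opencv2/opencv.hpp>
#include <iostream>

// Threshold a camera frame in HSV space and report the centroid of the
// matching pixels, a basic building block for color/pattern tracking.
int main() {
  cv::VideoCapture cap(0);                       // first attached camera
  cv::Mat frame, hsv, mask;
  while (cap.read(frame)) {
    cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
    cv::inRange(hsv, cv::Scalar(0, 120, 120), cv::Scalar(15, 255, 255), mask);
    cv::Moments m = cv::moments(mask, true);
    if (m.m00 > 500) {                           // enough matching pixels to trust the result
      double cx = m.m10 / m.m00;                 // target centroid in pixel coordinates
      double cy = m.m01 / m.m00;
      std::cout << "Target at " << cx << ", " << cy << std::endl;
    }
    cv::imshow("mask", mask);
    if (cv::waitKey(30) >= 0) break;             // exit on any key press
  }
  return 0;
}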

Adding such systems to your robot will most likely affect other aspects of the design. Positioning the vision system should be thought out beforehand to minimize blind spots and optimize the field of view. Also, the computational requirements of these devices are usually very demanding due to the sheer amount of data they output for CPU and GPU rendering. As a result, a standalone computer such as the Nvidia Jetson TK1 or TX1 should be used.



Navigation

If in response to a 'Go here' command the positioning system solves the 'Where am I?' problem, then the navigation system solves the 'How do I get there?' problem. This often involves maintaining some representation of the environment such as a map or floor plan and planning a path from the robot's current location to the desired location that avoids obstacles. Sophisticated algorithms are also able to detour around moving or unexpected obstacles not present on the map and still reach the destination.

The nice thing about this layer is that it is not really hardware or sensor dependent. If the positioning and obstacle detection systems are well implemented then the navigation system is purely a software challenge, and multiple path planning algorithms are readily available.
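If you are using ROS as recommended earlier, the navigation layer is typically handled by the move_base node from the ROS navigation stack. The sketch below (ROS 1, C++) sends it a single goal pose and waits for its planners to drive the robot there; the goal coordinates are arbitrary example values.

#include <ros/ros.h>
#include <actionlib/client/simple_action_client.h>
#include <move_base_msgs/MoveBaseAction.h>

// Hand a goal pose to the move_base action server and wait for the result.
typedef actionlib::SimpleActionClient<move_base_msgs::MoveBaseAction> MoveBaseClient;

int main(int argc, char** argv) {
  ros::init(argc, argv, "send_waypoint");
  MoveBaseClient client("move_base", true);      // true: spin a thread for the client
  client.waitForServer();

  move_base_msgs::MoveBaseGoal goal;
  goal.target_pose.header.frame_id = "map";      // goal expressed in the map frame
  goal.target_pose.header.stamp = ros::Time::now();
  goal.target_pose.pose.position.x = 2.0;        // example: 2 m along the map's x axis
  goal.target_pose.pose.orientation.w = 1.0;     // facing along +x

  client.sendGoal(goal);
  client.waitForResult();
  if (client.getState() == actionlib::SimpleClientGoalState::SUCCEEDED)
    ROS_INFO("Reached the waypoint");
  else
    ROS_WARN("Failed to reach the waypoint");
  return 0;
}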





Control

Control theory is used to determine the necessary inputs to a system in order to achieve a desired result or behavior. Control is different from artificial intelligence; they are two distinct fields. Control theory operates at a lower level, closer to the hardware and actuators. Control is more rigid and mathematically defined than AI, which is comparatively open-ended and freeform. AI is used to make decisions and give the control system goals to achieve.

In robotics, the control inputs are typically actuator commands and the desired result is a specific motion. For mobile autonomous robots, control is used to make the robot follow the path generated by the navigation system. With access to the robot's position at each time step and the path layout, the control algorithms will continuously adjust the left and right wheel speeds to keep the robot on top of the path. An analogy for this is a human controlling the movement of their legs (motion about the hip, knee, and ankle joints) to walk down a sidewalk.

The most common control method is the PID (Proportional-Integral-Derivative) controller, which is well-suited for path following. PID control is a major building block of a robotic system. It provides a straightforward method to precisely control a motor to perform a pre-determined action without the need for direct human control to adjust the machine. PID control is an algorithm that uses the measured error between the desired and actual state to calculate the necessary motor response to achieve your task. For a detailed description of how a PID algorithm is implemented, check out our tech post.
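A bare-bones PID controller is only a few lines of code. The sketch below computes a correction from an error term such as the robot's cross-track distance from the planned path; the gains shown in the usage comment are arbitrary examples and must be tuned for your platform.

// Basic PID controller: computes a correction from the error between the
// desired and measured value (e.g. cross-track distance from the planned path).
struct PID {
  double kp, ki, kd;          // gains, tuned for the specific robot
  double integral = 0.0;
  double prevError = 0.0;

  double update(double error, double dt) {
    integral += error * dt;                         // accumulate steady-state error
    double derivative = (error - prevError) / dt;   // react to how fast the error is changing
    prevError = error;
    return kp * error + ki * integral + kd * derivative;
  }
};

// Example use for path following (placeholder gains and motor interface):
//   PID steer{1.2, 0.0, 0.3};
//   double correction = steer.update(crossTrackError, dt);
//   setMotorSpeeds(baseSpeed - correction, baseSpeed + correction);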



User Interface

Although they're not always necessary, some type of user interface (UI) is usually paired with an autonomous robot. The simplest forms are real-time data logging and a manual override feature. This is made possible by small, low-powered radios such as the XBee. For a more advanced option, a graphical user interface (GUI) can display camera feeds, LIDAR scans, and other real-time information from the robot. This requires more bandwidth, which WiFi communication and high-powered IP radios can provide.
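For the simple data-logging case, an XBee in transparent mode behaves like a wireless serial link, so telemetry can be as simple as printing comma-separated values. The Arduino-style function below is a minimal sketch of that idea; the message format is an arbitrary example, and Serial.begin() must be called in setup() first.

// Stream pose estimates over a serial port (e.g. an XBee in transparent mode).
// The receiving computer can log or plot whatever arrives on its serial port.
void reportTelemetry(double x, double y, double yaw) {
  Serial.print("POS,");
  Serial.print(x, 3);          // meters, 3 decimal places
  Serial.print(",");
  Serial.print(y, 3);
  Serial.print(",");
  Serial.println(yaw, 3);      // heading in radians
}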



Conclusion

Hopefully it's clear that designing an autonomous system is not to be taken lightly. By default, a machine is not aware of its surroundings, and teaching it otherwise takes a lot of work. Every action or reaction must be programmed, tuned, and tested. There are many cases to be considered which could cause drastic instabilities. We have experience in this area and we're ready to tackle your requests head on. Some companies have spent billions developing mass marketed autonomous systems, but don't fret. We won't charge you that much. Submit a custom request form and let's talk!

Additional Information

Our sensor support page provides more in-depth information about sensors that are suitable for autonomous projects. The necessity of the following sensors depends on many factors, such as operating environments and the autonomous actions the robot will be performing. Our goal is to provide detailed information on their strengths and weaknesses so you can make informed decisions moving forward. Of course, if any questions or concerns arise during your research, don't hesitate to post on our forums for help!




Have questions? Need additional sample code to get you going? Please visit our forums to post about your project or to request additional sample code.


