The primary components of a mobile autonomous robot are positioning, obstacle detection, navigation, and control.
The purpose of the positioning system (PS) is to maintain an estimate of the robot's position and orientation in space. In our opinion, the PS is the single most important factor in a mobile autonomous robot's success. All other components of the robot's intelligence depend on a reliable position estimate, and if the PS is too inaccurate, noisy, or unstable then you will spend the remainder of the project struggling to compensate for it at higher levels. It is absolutely worthwhile to invest the time and effort to get this component working correctly.
The PS will include code to communicate with sensors and the necessary algorithms to determine position state based on the sensor data. People are often surprised by how difficult it is to use sensors effectively. All sensors are subject to their own unique drawbacks and limitations, and for this reason one type of sensor alone will almost never be sufficient for a mobile autonomous robot to adequately position itself in a variety of operating conditions (though LiDAR certainly comes close). As a result, autonomous robots will generally rely on a combination of sensors and use filtering or sensor fusion algorithms such as the Kalman Filter to generate a position estimate based on the observations of all sensors.
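To make the fusion idea concrete, here is a minimal Python sketch of a one-dimensional Kalman filter smoothing a stream of noisy position readings. The function names, noise values, and measurements are all illustrative, not tied to any particular sensor; real robot filters track multi-dimensional state (position, heading, velocity) with matrix versions of the same predict/update cycle.

```python
# Minimal 1-D Kalman filter: fuse noisy position measurements into a
# smoothed estimate. The state is a single position value; q and r are
# made-up tuning values for illustration.

def kalman_step(x, p, z, q=0.01, r=4.0):
    """One predict/update cycle for a scalar Kalman filter.

    x: current state estimate (position)
    p: current estimate variance
    z: new noisy measurement
    q: process noise variance (how much the state may drift per step)
    r: measurement noise variance (how noisy the sensor is)
    """
    # Predict: the state model is "stationary", so only uncertainty grows
    p = p + q
    # Update: blend prediction and measurement using the Kalman gain
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

# Feed in jittery readings around a true position of 10.0, starting
# from a poor initial guess with high uncertainty
x, p = 0.0, 100.0
for z in [9.2, 10.7, 10.1, 9.6, 10.4, 9.9]:
    x, p = kalman_step(x, p, z)
```

After a handful of updates the estimate converges near the true position and the variance shrinks, which is exactly the behavior that makes filtered estimates so much more usable than raw sensor readings.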
A good example of sensor limitations can be seen with GPS. GPS is of course very powerful since it can provide position, orientation, and velocity data to any receiver on Earth. So why not just stick one of these on your robot and be done with it? GPS is only accurate to somewhere in the range of 8-10 meters, which is not nearly good enough for a ground robot. In addition, GPS data only updates once per second and has a 400 millisecond delay. The data is also subject to a fair amount of noise, with the position estimate sometimes jumping wildly on subsequent updates. The robot has to be moving to get an orientation and velocity estimate from the GPS, which means your initial heading from a stationary position is unknown. And finally, GPS can only position things that are outdoors, which renders it unusable for indoor robots. GPS absolutely still has its place in robotics, but other sensors are necessary to cover its weaknesses.
The obstacle detection layer is responsible for identifying both fixed and moving obstacles in the environment and their locations. Static obstacles can be detected and remembered using the popular Simultaneous Localization and Mapping (SLAM) algorithm. As the name suggests, SLAM is able to simultaneously create a map of an unknown environment using sensor feedback and position the robot within that map on the fly. This makes it useful for both the positioning and the obstacle detection systems.
The robot should always be watching for moving or unexpected obstacles that cross its path. This can be accomplished using a wide variety of sensors including distance sensors such as IR, ultrasonic, and LiDAR and depth cameras such as the Kinect.
If in response to a 'Go here' command the PS solves the 'Where am I?' problem, then the navigation system solves the 'How do I get there?' problem. This often involves maintaining some representation of the environment such as a map or floor plan and planning a path from the robot's current location to the desired location that avoids obstacles. Sophisticated algorithms are also able to detour around moving or unexpected obstacles not present on the map and still reach the destination.
The nice thing about this layer is that it is not really hardware or sensor dependent. If the positioning and obstacle detection systems are well implemented then the navigation system is purely a software challenge, and multiple path planning algorithms are readily available.
Control theory is used to determine the necessary inputs to a system in order to achieve a desired result or behavior. Control is not the same as artificial intelligence; they are two distinct fields. Control theory operates at a lower level, closer to the hardware and actuators, and is more rigid and mathematically defined than AI, which is comparatively open-ended and freeform. AI is used to make decisions and give the control system goals to achieve.
In robotics, the control inputs are typically actuator commands and the desired result is a particular motion. For mobile autonomous robots, control is used to make the robot follow the path generated by the navigation system. With access to the robot's position on each timestep and the path layout, the control algorithms continuously adjust the left and right wheel speeds to keep the robot on top of the path. An analogy for this is a human controlling the movement of their legs (motion about the hip, knee, and ankle joints) in order to walk down a sidewalk.
The most common control method is the PID controller, which is well-suited for path following.
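The following Python sketch shows a PID controller applied to path following for a differential-drive robot. The error here is the robot's signed cross-track distance from the path, and the output is a steering correction split across the two wheel speeds; the gains and the `wheel_speeds` helper are illustrative choices, not a specific product's tuning.

```python
# Minimal PID sketch for differential-drive path following. Positive
# error means the robot is right of the path, so a positive correction
# slows the left wheel and speeds up the right wheel to steer back.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        if self.prev_error is None:
            derivative = 0.0            # no slope on the first sample
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def wheel_speeds(base_speed, correction):
    """Turn a steering correction into left/right wheel speeds."""
    return base_speed - correction, base_speed + correction

pid = PID(kp=2.0, ki=0.1, kd=0.5)
# Robot is 0.3 m to the right of the path; 50 Hz control loop
correction = pid.update(error=0.3, dt=0.02)
left, right = wheel_speeds(1.0, correction)
```

In practice the proportional gain does most of the work for path following; the integral term trims out steady-state offsets (such as one motor being slightly weaker) and the derivative term damps oscillation around the path.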
To achieve any level of autonomy, the robot must have some form of feedback in order to navigate the outside world. Otherwise you're stumbling in the dark and guessing at what to do next.
The most basic form of sensor is the contact sensor, such as a limit switch. This allows your robot to know when it comes into contact with an object by physically depressing a switch. The downsides to a sensor like this are that it can be mechanically difficult to align correctly, and by the time it triggers you have already collided with the object. Because of this, you're much more limited in speed if you want to prevent your robot from causing damage to itself or its surroundings.
Moving on from contact sensors, we need to be able to detect obstacles from a distance. This can be achieved using a sonic sensor like Max Sonar's ES4 Ultrasonic Range Finder or an infrared sensor like Sharp's IR Analog Distance Sensors. Both types of sensors have their benefits, but IR sensors tend to have a greater range of operation and can offer a more focused and defined field of view.
Moving forward with IR sensors, the emitted light can be focused and then swept across a field of view. When sampled properly, this field of view can provide more detailed data about our surroundings without needing to move the robot. This kind of sensor is called LiDAR, a portmanteau of "light" and "radar". An example is Hokuyo's URG-04LX laser range finder. A downside to using a more advanced sensor like LiDAR is that it requires a much more sophisticated back end to process the data. A sensor like this typically requires a full computer, single-board computer, or an FPGA-based solution to handle the sheer amount of data it produces.
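A typical first step in that back-end processing is converting the scan, which arrives as (angle, range) pairs swept across the field of view, into Cartesian points in the robot's frame. A minimal Python sketch, with made-up sample beams rather than a real sensor's output:

```python
# Convert a polar LiDAR scan into (x, y) points in the robot frame,
# where x points straight ahead and positive angles sweep to the left.

import math

def scan_to_points(angles_deg, ranges_m):
    """Convert polar scan samples to Cartesian (x, y) points."""
    points = []
    for a, r in zip(angles_deg, ranges_m):
        theta = math.radians(a)
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points

# Three sample beams: 45 degrees left, straight ahead, 45 degrees right
points = scan_to_points([45, 0, -45], [1.0, 2.0, 1.0])
```

Once the scan is in Cartesian form, obstacle detection and SLAM algorithms can cluster the points, match them between scans, and build up a map.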
Just detecting the objects near the robot is not enough to achieve a sophisticated autonomous solution. We also need a way to verify that our wheels have actually turned; otherwise a motor could be stalled and non-operational and the robot wouldn't know that anything was wrong. The basic go-to sensor for this is an encoder. An encoder watches the shaft of the motor or the teeth of a gear and produces ticks for each revolution or portion of a revolution. A simple way to achieve this is to use one of our encoder-equipped motors. Using an encoder can be tricky: it takes a relatively fast processor, and even then missed counts WILL happen, especially whenever you perform time-consuming actions like a Serial.print() on an Arduino. A solution is to use one of our encoder buffers, which let a secondary chip take care of all of the encoder monitoring so we can simply query the current position over SPI at our leisure. That way we can keep an accurate count of a fast motor with a basic Arduino Uno.
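For a sense of what the encoder-counting chip (or a fast interrupt routine) actually does, here is a Python sketch of standard quadrature decoding. A two-channel encoder outputs A/B signals 90 degrees out of phase, and each valid state transition adds or subtracts one tick depending on rotation direction. The state sequence at the bottom is invented sample data.

```python
# Quadrature decoding sketch. Each encoder state is the two channel
# levels packed as (A << 1) | B. The table maps each valid
# (previous_state, current_state) transition to a signed tick;
# anything not in the table (no change, or an invalid jump) counts 0.

QUAD_TABLE = {
    (0, 1): +1, (1, 3): +1, (3, 2): +1, (2, 0): +1,   # forward
    (0, 2): -1, (2, 3): -1, (3, 1): -1, (1, 0): -1,   # reverse
}

def count_ticks(states):
    """Accumulate signed ticks from a sequence of sampled A/B states."""
    ticks = 0
    for prev, cur in zip(states, states[1:]):
        ticks += QUAD_TABLE.get((prev, cur), 0)
    return ticks

# One full forward cycle (4 ticks) followed by one reverse step
ticks = count_ticks([0, 1, 3, 2, 0, 2])
```

The catch on a microcontroller is that this table lookup must run on every transition; miss a transition while busy printing to serial and the count silently drifts, which is exactly why offloading the counting to dedicated hardware pays off.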
Now that we have methods to know if there are objects near us, and we know that our wheels have actually turned, we STILL need some way to verify that the robot itself has moved. Without this our wheels could be spinning in place while the robot thinks everything is running fine. To do this we need a positioning sensor. Positioning sensors come in various shapes and sizes; the generic options are accelerometers, gyroscopes, compasses, and GPS. Accelerometers tell us how much we're accelerating (as per the name). Gyroscopes can provide incredibly accurate information about our orientation, an absolute must when operating on anything but a two-dimensional plane. Compasses and GPS provide a global frame of reference so we can orient ourselves properly.
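These sensors complement each other: a gyroscope gives a fast but slowly drifting heading rate, while a compass gives an absolute but noisy heading. One lightweight way to blend them, simpler than the Kalman filter mentioned earlier, is a complementary filter. The sketch below is a generic illustration; the blend factor and sample readings are made up.

```python
# Complementary filter sketch: trust the integrated gyro rate for
# short-term changes and lean on the absolute compass reading to
# correct long-term drift. alpha close to 1 favors the gyro.

def fuse_heading(heading, gyro_rate, compass_heading, dt, alpha=0.98):
    """Blend integrated gyro rate with an absolute compass heading."""
    gyro_estimate = heading + gyro_rate * dt   # integrate angular rate
    return alpha * gyro_estimate + (1 - alpha) * compass_heading

# Start at 0 degrees; gyro reports a 10 deg/s turn over a 0.1 s step,
# while the compass (noisier, absolute) reads 2 degrees
h = fuse_heading(0.0, gyro_rate=10.0, compass_heading=2.0, dt=0.1)
```

The result stays close to the gyro's short-term estimate while being nudged toward the compass, so gyro drift is bled off over time instead of accumulating.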
Now that we have our methods to sense the outside world and to detect and verify movement, we need a logical and effective way to coordinate the movement of our robot. This logical coordination can be achieved using something known as a state machine.
A state machine provides an organized chart, or flow diagram, that determines what actions are to be performed and when. Take, for example, the state machine below for a garage door opener. Say we're in a stationary CLOSED state; we sit in this state until we're commanded by the remote, as indicated by the outward arrow labeled pushButton. When commanded, we change our state from CLOSED to OPENING. While opening, we continue to open unless we are commanded again or we complete the operation. This is a fundamental model for developing an autonomous solution. If you do not have a clearly defined structure and plan set before you touch the keyboard, you will be stumbling in the dark. Develop a plan, then implement it.
Now that you have your plan, you know what you want your robot to do. The next step is to turn the conceptual drawing into code. For a basic robot, the structure can be set up as a case statement in your main program loop that calls a function associated with each position in the state machine. To handle changes in state, we already have our logical flags and conditions laid out in the state machine above; these are then implemented within our state functions.
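The garage-door example above can be sketched in Python as a dispatch loop: one function per state, each returning the next state based on the current inputs. The state and input names follow the description above; the exact transitions on a second button press (reversing direction) are one reasonable design choice, not the only one.

```python
# Garage-door state machine sketch. step() advances one cycle: it looks
# at the current state and the inputs, and returns the next state. In an
# Arduino-style program this would be a switch statement in loop().

CLOSED, OPENING, OPEN, CLOSING = "CLOSED", "OPENING", "OPEN", "CLOSING"

def step(state, push_button=False, limit_reached=False):
    """Advance the state machine one cycle given the current inputs."""
    if state == CLOSED:
        return OPENING if push_button else CLOSED
    if state == OPENING:
        if push_button:
            return CLOSING                      # reverse on command
        return OPEN if limit_reached else OPENING
    if state == OPEN:
        return CLOSING if push_button else OPEN
    if state == CLOSING:
        if push_button:
            return OPENING                      # reverse on command
        return CLOSED if limit_reached else CLOSING
    raise ValueError(f"unknown state: {state}")

# Closed door: a button press starts opening, the travel limit finishes it
s = step(CLOSED, push_button=True)   # CLOSED -> OPENING
s = step(s, limit_reached=True)      # OPENING -> OPEN
```

Because each handler only reads inputs and returns the next state, adding a new behavior means adding one state and its transitions, with no changes scattered across the rest of the program.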
SDR has produced many autonomous robots over the years. You can read about some of them here. We started out writing all of the code for our autonomous projects from scratch, but we have recently switched to ROS (Robot Operating System). Advantages of using ROS include:
Access to a wealth of powerful libraries such as SLAM implementations, depth camera and LiDAR polling, path planners, and coordinate frame transformations
Abstraction of interface medium between devices - ROS doesn't care if packages are run on a host PC or on the robot. This spares us the trouble of having to build and maintain networking interfaces and packet structures
Useful debugging tools such as the ability to tap into messages passed between ROS nodes and to visualize information using RViz
Note that ROS is only supported on Linux, so getting a ROS project up and running requires at least one Linux computer (as a remote PC and/or onboard the robot) and a seasoned Linux programmer to go with it. There are multiple good choices for single-board Linux computers depending on your project's complexity, such as the BeagleBone Black, Nvidia Jetson TK1, and Nvidia Jetson TX1.
Using ROS, we have developed a prototype robot that is able to drive around an unknown location, generate a map, position itself on the map, and autonomously plot a course and travel between waypoints while avoiding obstacles. Here is a video of a robot using simultaneous localization and mapping (SLAM).
The squirrel chaser is a fun project to practice autonomy, one we plan to document every step of the way. We're making all of our code and schematics open and available so any hobbyist, enthusiast, and/or student has complete access to how to build and program an autonomous robot. The idea is to create a fun project to track moving objects and to incorporate as many of our sensors as possible. The robot features an array of scanning sonic sensors for object detection, CO sensors, GPS monitoring, and an on-board computer with a touch screen interface. Feature creep is welcome!
Keep up to date with our progress by visiting the Squirrel Chaser's product page here.
Recently we ran a promotion to give away an Arduino powered WiFi Robot. The robot features a four wheel drive Mecanum wheel vectoring chassis to better enable autonomous wall following and WiFi control using an on-board router and IP camera. In the spirit of open source, we've made the entire project available to download on our GitHub. We have a dedicated support page where we provide information on the design process, design decisions, and how we built the robot.
This autonomous Arduino Mega powered Mecanum wheel robot platform is designed and fabricated in North Carolina, USA and fully supported by SuperDroid Robots, an industry leader in robotics.
Here we will provide in-depth information about sensors that are suitable for autonomous projects. Whether you need the following sensors depends on many factors, such as the operating environment and the autonomous actions the robot will be performing. Our goal is to provide detailed information on their strengths and weaknesses so you can make informed decisions moving forward. Of course, if any questions or concerns arise during your research, don't hesitate to post on our forums for help!
High Precision RTK GNSS Receiver
When it comes to obtaining accurate location feedback outdoors, an RTK GNSS is hard to beat. Let's go over what this setup can do and how it does it. RTK is a global navigation satellite system (GNSS) configuration that provides positioning accuracy down to 1 cm. Compared to the roughly 10 meter accuracy of normal GPS setups, RTK systems greatly improve the viability of integrating satellite navigation on autonomous robots.
So, how does this work? The current and most popular method of obtaining a real-time kinematic (RTK) lock is to have one GNSS module act as the designated "base" station and another as a "rover" station. The base receiver is responsible for calculating error offsets and keeping the rover station updated. The rover station applies this offset to its own position readings and provides the user with very precise position feedback. The base station is more interested in the phase of the satellite signal than in its content, because Earth's atmosphere (the ionosphere and troposphere in particular) is an error source in the form of signal delays and phase changes. Since the base station has to make these precise calculations, it must remain stationary during operation.
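The core differential-correction idea can be sketched in a few lines of Python. The base station sits at a precisely surveyed position, so the difference between its measured fix and its known position is the shared error, which the rover then subtracts from its own reading. Note this is a deliberate simplification: real RTK works on carrier phase measurements, as described above, not on subtracting raw position fixes, and all coordinates below are invented.

```python
# Simplified differential GNSS correction. The base's known surveyed
# position lets it measure the shared error (atmospheric delay, clock
# error, etc.); the rover removes that same error from its own fix.

def differential_correct(base_known, base_measured, rover_measured):
    """Apply the base station's observed error to the rover's reading."""
    error = tuple(m - k for m, k in zip(base_measured, base_known))
    return tuple(r - e for r, e in zip(rover_measured, error))

# Base is surveyed at (100.0, 200.0) but currently reads (101.2, 198.9),
# so the shared error is (+1.2, -1.1); the rover removes it from its fix
fix = differential_correct((100.0, 200.0), (101.2, 198.9), (151.2, 248.9))
```

This also makes clear why the base must stay put: if the base moved, its "error" measurement would mix real motion with atmospheric error and the correction sent to the rover would be wrong.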
Along with the base station remaining stationary, two conditions need to be met to achieve an RTK lock: both GNSS receivers need a 30 degree view of the horizon, and they need to be locked onto at least four of the same satellites with signal-to-noise ratios (SNR) of 40 or higher. The second condition is the more critical of the two, which should make sense given how the error is calculated.
In areas with tall buildings or dense vegetation, it can be very difficult to obtain and keep an RTK lock. However, some modifications will improve overall performance. Antenna placement is critical for reducing multipath errors. Multipathing occurs when the satellite signal is reflected before it reaches the receiver, causing the signal to take multiple paths and increasing the delay since the distance traveled is longer. This is usually the culprit behind the massive outliers in position data that we see all too often. Mounting the antennas on ground planes can also help with this issue. A ground plane is typically a flat, symmetrical metallic plate under the antenna that creates a more consistent reception pattern and filters out satellite signals close to the horizon. The optimal size of the ground plane usually depends on the antenna design itself. Luckily, most RTK GNSS manufacturers make setup rather simple by selling the two modules and corresponding antennas as a package along with sufficient how-to documentation.
Garmin 18x WAAS Enabled GPS
More often than not, 1 cm accuracy is overkill for applications that require global positioning feedback. Fortunately, SuperDroid Robots now stocks the Garmin 18x GPS modules in 1 Hz and 5 Hz variants, offering a cheaper and easier-to-use alternative to RTK systems that is still viable for autonomous development platforms. The driving force behind the 18x series, compared to a normal GPS, is the Wide Area Augmentation System, or WAAS. With its reasonable cost and less than 3 meters of error, this technology is a must-have on outdoor autonomous robots.
The underlying methodology of WAAS is very similar to RTK in that the onboard receiver uses correctional data for improved positioning. Multiple ground reference stations positioned across the U.S. monitor GPS satellite data. Two base stations, located on either coast of the U.S., collect data from the reference stations and create a GPS correction message. The differential correction message is then broadcast through one of two geostationary satellites, i.e. satellites with a fixed position over the equator. The information is compatible with the basic GPS signal structure, which means any WAAS-enabled GPS receiver can read the signal.