Autonomous robots are robots which can perform desired tasks in unstructured environments without continuous human guidance. Many kinds of robots have some degree of autonomy. Different robots can be autonomous in different ways. A high degree of autonomy is particularly desirable in fields such as space exploration, where communication delays and interruptions are unavoidable. Other more mundane uses benefit from having some level of autonomy, like cleaning floors, mowing lawns, and waste water treatment.
Some modern factory robots are "autonomous" within the strict confines of their direct environment. Not every degree of freedom exists in their surroundings, but the factory robot's workplace is challenging and can often be unpredictable or even chaotic. The exact orientation and position of the next object of work and (in the more advanced factories) even the type of object and the required task must be determined. These can vary unpredictably (at least from the robot's point of view).
One important area of robotics research is to enable the robot to cope with its environment whether this be on land, underwater, in the air, underground, or in space.
A fully autonomous robot has the ability to:
- Gain information about the environment.
- Work for an extended period without human intervention.
- Move either all or part of itself throughout its operating environment without human assistance.
- Avoid situations that are harmful to people, property, or itself.
An autonomous robot may also learn or gain new capabilities like adjusting strategies for accomplishing its task(s) or adapting to changing surroundings.
Autonomous robots still require regular maintenance, as do other machines.
Examples of progress towards commercial autonomous robots
The first requirement for physical autonomy is the ability of a robot to take care of itself. Many of the battery-powered robots on the market today can find and connect to a charging station, and some toys, like Sony's Aibo, are capable of self-docking to charge their batteries.
Self-maintenance is based on "proprioception", or sensing one's own internal status. In the battery-charging example, the robot can tell proprioceptively that its batteries are low, and it then seeks the charger. Another common proprioceptive sensor monitors heat. Increased proprioception will be required for robots to work autonomously near people and in harsh environments.
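As a minimal sketch of how such proprioceptive self-maintenance can be organised, the battery-charging behaviour above can be written as a small state machine. The mode names and thresholds here are illustrative assumptions, not values from any actual robot:

```python
from enum import Enum, auto

class Mode(Enum):
    WORKING = auto()
    SEEKING_CHARGER = auto()
    CHARGING = auto()

LOW_BATTERY = 0.2    # assumed threshold: start seeking the charger
FULL_BATTERY = 0.95  # assumed threshold: resume work

def next_mode(mode: Mode, battery_level: float, at_charger: bool) -> Mode:
    """Choose the robot's next mode from its proprioceptive readings."""
    if mode == Mode.WORKING and battery_level < LOW_BATTERY:
        return Mode.SEEKING_CHARGER
    if mode == Mode.SEEKING_CHARGER and at_charger:
        return Mode.CHARGING
    if mode == Mode.CHARGING and battery_level >= FULL_BATTERY:
        return Mode.WORKING
    return mode  # otherwise, keep doing what we were doing
```

The same skeleton extends to other internal sensors, for example switching to a cool-down mode when a temperature reading crosses a limit.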
Common proprioceptive sensors are:
- Hall Effect
Sensing the Environment
Exteroception is sensing things about the environment. Autonomous robots must have a range of environmental sensors to perform their task and stay out of trouble.
Common exteroceptive sensors are:
- Electromagnetic spectrum
- Smell, odor
- Range to things in the environment
- Attitude (Inclination)
Some robotic lawn mowers adapt their programming by detecting the speed at which grass grows, as needed to maintain a perfectly cut lawn, and some vacuum-cleaning robots have dirt detectors that sense how much dirt is being picked up and use this information to stay in one area longer.
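The dirt-detector adaptation can be sketched as a simple dwell-time rule: the more dirt the sensor reports, the longer the robot lingers in that spot. The function name, gains, and the normalized sensor reading are assumptions for illustration, not any vendor's actual values:

```python
def dwell_time(dirt_rate: float, base_seconds: float = 5.0,
               gain_seconds: float = 30.0, max_seconds: float = 60.0) -> float:
    """Return how long to linger in one area, given a normalized
    dirt-detector reading in [0, 1]. Higher readings mean a dirtier
    patch, so the robot stays longer, up to a fixed cap."""
    dirt_rate = max(0.0, min(1.0, dirt_rate))  # clamp noisy sensor values
    return min(base_seconds + gain_seconds * dirt_rate, max_seconds)
```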
The next step in autonomous behavior is to actually perform a physical task. A new area showing commercial promise is domestic robots, with a flood of small vacuuming robots beginning with iRobot and Electrolux in 2002. While the level of intelligence is not high in these systems, they navigate over wide areas and pilot in tight situations around homes using contact and non-contact sensors. Both of these robots use proprietary algorithms to increase coverage over simple random bounce.
The next level of autonomous task performance requires a robot to perform conditional tasks. For instance, security robots can be programmed to detect intruders and respond in a particular way depending upon where the intruder is.
Indoor position sensing and navigation
For a robot to associate behaviors with a place (localization) requires it to know where it is and to be able to navigate point-to-point. Such navigation began with wire-guidance in the 1970s and progressed in the early 2000s to beacon-based triangulation. Current commercial robots autonomously navigate based on sensing natural features. The first commercial robots to achieve this were Pyxus' HelpMate hospital robot and the CyberMotion guard robot, both designed by robotics pioneers in the 1980s. These robots originally used manually created CAD floor plans, sonar sensing and wall-following variations to navigate buildings. The next generation, such as MobileRobots' PatrolBot and autonomous wheelchair both introduced in 2004, have the ability to create their own laser-based maps of a building and to navigate open areas as well as corridors. Their control system changes its path on-the-fly if something blocks the way. Rather than climb stairs, which requires highly specialized hardware, most indoor robots navigate handicapped-accessible areas, controlling elevators and electronic doors. With such electronic access-control interfaces, robots can now freely navigate indoors. Autonomously climbing stairs and opening doors manually are topics of research at the current time.
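The point-to-point navigation described above can be approximated, in miniature, by path search over an occupancy-grid map. The sketch below uses plain breadth-first search on a hand-made grid; real systems build their maps from laser data and replan continuously when something blocks the way:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = blocked).

    Returns a list of (row, col) cells from start to goal, or None when
    no route exists. A toy stand-in for map-based point-to-point planners.
    """
    rows, cols = len(grid), len(grid[0])
    parent = {start: None}      # remembers how each cell was reached
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:        # reconstruct the path by walking parents back
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # the goal is unreachable from here
```

Re-running the search whenever a sensor marks a new cell as blocked gives the on-the-fly rerouting behaviour the text describes.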
As these indoor techniques continue to develop, vacuuming robots will gain the ability to clean a specific user-specified room or a whole floor. Security robots will be able to cooperatively surround intruders and cut off exits. These advances also bring concomitant protections: robots' internal maps typically permit "forbidden areas" to be defined, preventing robots from autonomously entering certain regions.
Outdoor autonomous position-sensing and navigation
Outdoor autonomy is most easily achieved in the air, since obstacles are rare. Cruise missiles are rather dangerous highly autonomous robots. Pilotless drone aircraft are increasingly used for reconnaissance. Some of these unmanned aerial vehicles (UAVs) are capable of flying their entire mission without any human interaction at all except possibly for the landing where a person intervenes using radio remote control. But some drone aircraft are capable of a safe, automatic landing also.
Outdoor autonomy is the most difficult for ground vehicles, due to: a) 3-dimensional terrain; b) great disparities in surface density; c) weather exigencies and d) instability of the sensed environment.
In the US, the MDARS project, which defined and built a prototype outdoor surveillance robot in the 1990s, is now moving into production and will be implemented in 2006. The General Dynamics MDARS robot can navigate semi-autonomously and detect intruders, using the MRHA software architecture planned for all unmanned military vehicles. The Seekur robot was the first commercially available robot to demonstrate MDARS-like capabilities for general use by airports, utility plants, corrections facilities and Homeland Security.
The Mars rovers MER-A and MER-B can find the position of the sun and navigate their own routes to destinations on the fly by:
- mapping the surface with 3-D vision;
- computing safe and unsafe areas on the surface within that field of vision;
- computing optimal paths across the safe area towards the desired destination;
- driving along the calculated route;
- repeating this cycle until either the destination is reached or there is no known path to it.
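The cycle listed above can be sketched as a loop over placeholder subsystems. Every callable here is a hypothetical stand-in for the rover's 3-D mapping, traversability analysis, path planning, and locomotion stages; only the control flow reflects the description in the text:

```python
def drive_to(goal, perceive, compute_safe, plan, drive_segment, max_cycles=100):
    """Repeat the perceive/plan/drive cycle until the destination is
    reached or no known path to it remains. Returns True on arrival."""
    for _ in range(max_cycles):
        terrain = perceive()             # map the surface with 3-D vision
        safe = compute_safe(terrain)     # classify safe vs unsafe areas
        path = plan(safe, goal)          # optimal path across the safe area
        if path is None:
            return False                 # no known route to the destination
        position = drive_segment(path)   # drive along the calculated route
        if position == goal:
            return True                  # destination reached
    return False                         # gave up after max_cycles
```

Replanning on every cycle is what lets the rover react when newly sensed terrain invalidates the previous route.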
The DARPA Grand Challenge and DARPA Urban Challenge have encouraged development of even more autonomous capabilities for ground vehicles, while this has been the demonstrated goal for aerial robots since 1990 as part of the AUVSI International Aerial Robotics Competition.
Autonomous Vehicle (Driverless Car)
The driverless car concept embraces an emerging family of highly automated cognitive and control technologies, ultimately aimed at a full "taxi-like" experience for car users, but without a human driver. Together with alternative propulsion, it is seen by some as the main technological advance in car technology by 2020.
Driverless passenger programs include the 800 million ECU EUREKA Prometheus Project on autonomous vehicles (1987-1995), the 2getthere passenger vehicles (using the FROG-navigation technology) from the Netherlands, the ARGO research project from Italy, and the DARPA Grand Challenge from the USA. For the wider application of artificial intelligence to automobiles see smart cars.
The history of autonomous vehicles starts in 1977 with the Tsukuba Mechanical Engineering Lab in Japan. On a dedicated, clearly marked course it achieved speeds of up to 30 km/h (20 miles per hour), by tracking white street markers (special hardware was necessary, since commercial computers were much slower than they are today).
In the 1980s a vision-guided Mercedes-Benz robot van, designed by Ernst Dickmanns and his team at the Universität der Bundeswehr in Munich, Germany, achieved 100 km/h on streets without traffic. Subsequently, the European Commission began funding the 800 million Euro EUREKA Prometheus Project on autonomous vehicles (1987-1995).
Also in the 1980s the DARPA-funded Autonomous Land Vehicle (ALV) in the United States achieved the first road-following demonstration that used laser radar (Environmental Research Institute of Michigan), computer vision (Carnegie Mellon University and SRI), and autonomous robotic control (Carnegie Mellon and Martin Marietta) to control a driverless vehicle at up to 30 km/h.
In 1994, the twin robot vehicles VaMP and Vita-2 of Daimler-Benz and Ernst Dickmanns of UniBwM drove more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h, albeit semi-autonomously with human interventions. They demonstrated autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
In 1995, Dickmanns' re-engineered autonomous S-Class Mercedes-Benz took a 1600 km trip from Munich in Bavaria to Copenhagen in Denmark and back, using saccadic computer vision and transputers to react in real time. The robot achieved speeds exceeding 175 km/h on the German Autobahn, with a mean time between human interventions of 9 km, or 95% autonomous driving. Again it drove in traffic, executing manoeuvres to pass other cars. Despite being a research system without emphasis on long-distance reliability, it drove up to 158 km without human intervention.
In 1995, the Carnegie Mellon University Navlab project achieved 98.2% autonomous driving on a 5000 km (3000-mile) "No hands across America" trip. This car, however, was semi-autonomous by nature: it used neural networks to control the steering wheel, but throttle and brakes were human-controlled.
From 1996-2001, the Italian government funded the ARGO Project at University of Parma and Pavia University (coordinated by Prof. Alberto Broggi), which worked on enabling a modified Lancia Thema to follow the normal (painted) lane marks in an unmodified highway. The culmination of the project was a journey of 2,000 km over six days on the motorways of northern Italy dubbed MilleMiglia in Automatico, with an average speed of 90 km/h. 94% of the time the car was in fully automatic mode, with the longest automatic stretch being 54 km. The vehicle had only two black-and-white low-cost video cameras on board, and used stereoscopic vision algorithms to understand its environment, as opposed to the "laser, radar - whatever you need" approach taken by other efforts in the field.
Three US government-funded military efforts were known as Demo I (US Army), Demo II (DARPA), and Demo III (US Army). Demo III (2001) demonstrated the ability of unmanned ground vehicles to navigate miles of difficult off-road terrain, avoiding obstacles such as rocks and trees.
In 2002, the DARPA Grand Challenge competitions were announced. The competitions allowed international teams to compete in fully autonomous vehicle races over rough unpaved terrain and in a non-populated suburban setting.
The challenges can broadly be divided into the technical and the social. The technical problems are the design of the sensors and control systems required to make such a car work. The social challenge is in getting people to trust the car, getting legislators to permit the car onto the public roads, and untangling the legal issues of liability for any mishaps with no person in charge.
However, any solution can be broken down to four sub-systems:
- sensors: the car knows where an obstacle is and what is around it;
- navigation: how to get to the target location from the present location;
- motion planning: getting through the next few meters, steering, and avoiding obstacles while also abiding by rules of the road and avoiding harm to the vehicle and others;
- control of the vehicle itself: actuating the system's decisions.
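One way to make the division of labour between these four sub-systems concrete is a single control tick that chains them together. The interfaces below are assumptions chosen for clarity, not any production architecture:

```python
def autonomy_tick(sense, navigate, plan_motion, actuate, destination):
    """Run one cycle of the four sub-systems: sensing, navigation,
    motion planning, and vehicle control. Each callable is a
    placeholder for a real subsystem."""
    pose, obstacles = sense()                # sensors: where am I, what is nearby
    route = navigate(pose, destination)      # navigation: coarse route to target
    command = plan_motion(route, obstacles)  # motion planning: the next few meters
    actuate(command)                         # control: steer, throttle, brake
    return command
```

Real systems run such a loop many times per second, and each stage is itself a substantial engineering problem.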
In examining every proposed solution, one should look at the following questions:
- Is this truly a complete system? Does it drive itself door-to-door?
- To what degree is the proposed solution a step towards the complete vision, or is it just a trick?
- Is the car 'autonomous', or would it need changes to the infrastructure?
- How feasible (technically, economically, and politically) would it be to deploy the entire solution?
- Can the system allow for and include existing vehicles driven by humans, or does it need an open field?
- How would it cope with unexpected circumstances?
Some have argued that the problem is AI-complete: a safe and reliable driverless car would need to use all the skills of an ordinary human being, including commonsense reasoning and affective computing. The concern is that driverless cars will perform worse than human beings in emergency situations that require judgement and the ability to communicate with other drivers and police. For example, how should a driverless car react to a man waving a flare in the middle of the road?
Automated highway systems
Automated highway systems (AHS) are an effort to construct special lanes on existing highways that would be equipped with magnets or other infrastructure to allow vehicles to stay in the center of the lane, while communicating with other vehicles (and with a central system) to avoid collision and manage traffic. Like the dual-mode monorail, the idea is that cars remain private and independent, and just use the AHS system as a quick way to move along designated routes. AHS allows specially equipped cars to join the system using special 'acceleration lanes' and to leave through 'deceleration lanes'. When leaving the system each car verifies that its driver is ready to take control of the vehicle, and if that is not the case, the system parks the car safely in a predesignated area.
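The driver-readiness check on leaving the system amounts to a two-way decision, sketched here with hypothetical names:

```python
from enum import Enum

class ExitAction(Enum):
    HANDOVER = "hand control back to the driver"
    SAFE_PARK = "park in a predesignated area"

def on_deceleration_lane(driver_ready: bool) -> ExitAction:
    """Decide what the AHS does when a car reaches the deceleration lane:
    the system hands over only if the driver confirms readiness;
    otherwise it parks the car safely."""
    return ExitAction.HANDOVER if driver_ready else ExitAction.SAFE_PARK
```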
Some implementations use radar to avoid collisions and coordinate speed.
One example that uses this implementation is the AHS demo of 1997 near San Diego, sponsored by the US government, in coordination with the State of California and Carnegie Mellon University. The test site is a 12-kilometer, high-occupancy-vehicle (HOV) segment of Interstate 15, 16 kilometers north of downtown San Diego. The event generated much press coverage. The technology is the subject of a book.
This concerted effort by the US government appears to have been largely abandoned because of social and political forces, above all the desire to create a less futuristic and more marketable solution.
As of 2007, a three-year project is underway to allow robot-controlled vehicles, including buses and trucks, to use a special lane along Interstate 805. The intention is to allow the vehicles to travel at shorter following distances and thereby allow more vehicles to use the lanes. The vehicles will still have drivers, since they need to enter and exit the special lanes. The system is being designed by Swoop Technology, based in San Diego County.
Source: Wikipedia (All text is available under the terms of the GNU Free Documentation License and Creative Commons Attribution-ShareAlike License.)