MIT researchers have created a driverless car navigation system that uses only simple maps and visual data to allow vehicles to navigate routes in new and complex environments, as they seek to bring more human-like reasoning to autonomous vehicles.
Human drivers are ‘exceptionally good’ at navigating roads they haven’t driven on before, using only observation and simple tools, say the researchers, but driverless cars struggle with this basic reasoning. As a result, a car must map and analyse roads in every new area, while also relying on complex maps that require lots of computing power.
The MIT team have published a paper describing an autonomous control system that can ‘learn’ the steering patterns of human drivers as they navigate roads in a small area, using data from video camera feeds and a simple GPS-like map. The trained system can then be used to control a driverless car along a planned route in a new area, by imitating the human driver.
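The idea of ‘learning’ steering by imitation can be illustrated with a toy sketch. The code below is purely hypothetical and is not the MIT system: it fits a model that maps a camera-derived lane offset plus a coarse map command (turn left, go straight, turn right) to a steering angle, using recorded human driving examples. All variable names, the synthetic data, and the linear model are illustrative assumptions.

```python
# Hypothetical sketch of imitation learning for steering.
# Not the MIT system: synthetic data, illustrative linear model.
import random

random.seed(0)

def make_demo(n=200):
    """Synthetic 'human driving' log: a lane offset (as vision might
    supply), a coarse map command (-1 left, 0 straight, +1 right),
    and the steering the human applied."""
    data = []
    for _ in range(n):
        lane_offset = random.uniform(-1, 1)       # from the camera
        map_command = random.choice([-1, 0, 1])   # from a simple map
        steer = -0.8 * lane_offset + 0.5 * map_command  # human policy
        data.append((lane_offset, map_command, steer))
    return data

def train(data, lr=0.05, epochs=300):
    """Fit steering = w1*offset + w2*command by gradient descent."""
    w1 = w2 = 0.0
    for _ in range(epochs):
        for offset, cmd, target in data:
            err = (w1 * offset + w2 * cmd) - target
            w1 -= lr * err * offset
            w2 -= lr * err * cmd
    return w1, w2

w1, w2 = train(make_demo())
# Once trained, the same learned mapping steers on roads never seen
# during training -- no detailed prior map of those roads is needed.
print(w1, w2)  # close to the human's -0.8 and 0.5
```

The point of the sketch is the deployment step: because the model conditions only on live camera input and a simple map command, it can be handed a new map and follow a route it was never trained on, which mirrors the behaviour the paper describes.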
“With our system, you don’t need to train on every road beforehand,” said author Alexander Amini, an MIT graduate student. “You can download a new map for the car to navigate through roads it has never seen before.”
The team used a human operator in an automated Toyota Prius to train the system initially, collecting data from local suburban streets. When deployed autonomously, the system navigated the car along a preplanned path in a different forested area, designated for autonomous vehicle tests.
Co-author Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, said: “Our objective is to achieve autonomous navigation that is robust for driving in new environments.
“For example, if we train an autonomous vehicle to drive in an urban setting such as the streets of Cambridge, the system should also be able to drive smoothly in the woods, even if that is an environment it has never seen before.”