Since it would be impossible to program a driverless car explicitly for every conceivable driving scenario, its driving computer uses deep neural networks that learn through practice. Using preloaded data on roads, vehicles, pedestrians, and so on, the networks learn to identify objects during test drives and virtual simulations.
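To make that concrete, here is a minimal sketch of the learn-by-example idea in Python using PyTorch: a tiny convolutional network trained to classify road objects from labelled camera frames. The class list, image size, and random stand-in data are illustrative assumptions, not any real vendor’s perception stack.

```python
# Minimal sketch of "learning through practice": a small convolutional
# network trained to classify road objects from labelled camera frames.
# Classes, shapes, and the dummy data below are illustrative assumptions.
import torch
import torch.nn as nn

CLASSES = ["vehicle", "pedestrian", "cyclist", "traffic_sign"]  # assumed labels

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for the preloaded training data: random images and labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, len(CLASSES), (8,))

for _ in range(3):                      # a few practice passes
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                     # adjust weights to reduce mistakes
    optimizer.step()
```

Real systems train for far longer, on millions of labelled frames, but the loop has the same shape: predict, compare against the label, correct.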
The cars also use sensors to build a detailed, dynamic representation of their environment. Lidar, an array of lasers that continuously spins through 360 degrees, builds up a real-time 3D image of the surroundings. Radar measures the distance and velocity of nearby objects. These sensors feed into the driving computer and, together with camera data, give the car a thorough understanding of what’s going on around it.
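How do those separate readings become one picture of the scene? Production systems fuse them with probabilistic trackers (Kalman filters and the like); the toy sketch below simply merges detections that fall close together, to show the shape of the idea. All names and values are illustrative.

```python
# A toy sketch of fusing lidar, radar, and camera detections into one
# list of scene objects. Real fusion is probabilistic; here we just
# merge detections by rough position. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float         # metres ahead of the car
    y: float         # metres left (+) / right (-)
    velocity: float  # closing speed in m/s (radar only; 0.0 if unknown)
    source: str

def fuse(detections, radius=1.0):
    """Merge detections from different sensors that refer to the same object."""
    objects = []
    for d in detections:
        for obj in objects:
            if abs(obj["x"] - d.x) < radius and abs(obj["y"] - d.y) < radius:
                obj["sources"].add(d.source)
                obj["velocity"] = max(obj["velocity"], d.velocity, key=abs)
                break
        else:
            objects.append({"x": d.x, "y": d.y,
                            "velocity": d.velocity, "sources": {d.source}})
    return objects

scene = fuse([
    Detection(12.0, 0.2, 0.0, "lidar"),    # shape/position from lidar
    Detection(12.3, 0.1, -4.5, "radar"),   # closing speed from radar
    Detection(30.0, -3.5, 0.0, "camera"),  # pedestrian spotted by camera
])
print(scene)  # one fused object at ~12 m carrying the radar velocity
```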
Environmental data doesn’t tell the car where it is, however. Upgrades to the Global Positioning System (GPS) will make geolocation accurate to around 30 cm, an improvement on today’s one to two meters, but still not sufficient for a car to drive safely on the road. Therefore, as well as GPS, a driverless car uses triangulation algorithms (using road signs, traffic lights, and other landmarks) and high-definition maps to pinpoint its location to within a few centimeters.
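The sketch below shows one way such a landmark fix could work: a least-squares trilateration from measured distances to landmarks whose exact positions come from the HD map, blended with a coarse GPS fix. The coordinates, ranges, and weighting are made-up illustrative values, not a real localisation pipeline.

```python
# A toy localisation sketch: refine a coarse GPS fix using ranges to
# landmarks whose positions are known from a high-definition map.
# All coordinates, ranges, and weights are illustrative assumptions.
import numpy as np

def trilaterate(landmarks, ranges):
    """Least-squares 2D position from distances to 3+ known landmarks."""
    (x0, y0), r0 = landmarks[0], ranges[0]
    A, b = [], []
    for (xi, yi), ri in zip(landmarks[1:], ranges[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

gps_fix = np.array([104.2, 51.7])  # metres; ~1-2 m of uncertainty
landmarks = [(100.0, 50.0), (110.0, 55.0), (105.0, 40.0)]  # from the HD map
ranges = [4.0, 7.3, 11.3]          # distances measured by lidar/camera

landmark_fix = trilaterate(landmarks, ranges)
# Weight the precise landmark fix far more heavily than raw GPS.
position = 0.9 * landmark_fix + 0.1 * gps_fix
print(position)  # close to the true position, (103.8, 51.2) in this example
```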
Finally, once the driving computer knows where the car is and what’s around it, it can plan a route to the desired destination. When driving, the car responds to other objects, both moving and stationary, and modifies its behavior accordingly (slowing down at roundabouts, giving way to other vehicles when obliged to, and so on).
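A toy version of that respond-and-adapt step might look like the function below, which picks a target speed from the fused scene. The thresholds and rules are illustrative assumptions, not real traffic law or any production planner’s logic.

```python
# A toy behaviour rule: choose a target speed from the fused scene.
# Thresholds and speeds are illustrative assumptions only.
def target_speed(limit_kph, scene, approaching_roundabout, must_yield):
    speed = limit_kph
    if approaching_roundabout:
        speed = min(speed, 25)             # slow down for the roundabout
    if must_yield:
        speed = 0                          # give way when obliged to
    for obj in scene:
        if 0 < obj["x"] < 15 and obj["velocity"] < 0:
            speed = min(speed, 10)         # object closing in ahead, ease off
    return speed

print(target_speed(50, [{"x": 12.0, "velocity": -4.5}],
                   approaching_roundabout=True, must_yield=False))  # -> 10
```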
Ideally, the car’s projected trajectory and its actual one would be identical, but because other objects can interfere and precise maneuvering is difficult, they sometimes differ slightly. Human drivers deviate from their intended paths constantly; for driverless cars, engineers are steadily shrinking the gap.
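One simple way to quantify that gap, sketched below with made-up waypoints, is the mean distance between matched points on the planned and actual paths; the car’s controllers work to drive this number toward zero.

```python
# A toy measure of the gap between the planned path and the driven one:
# mean distance between matched waypoints. Coordinates are illustrative.
import math

planned = [(0.0, 0.0), (5.0, 0.1), (10.0, 0.5), (15.0, 1.2)]
actual  = [(0.0, 0.1), (5.1, 0.3), (9.8, 0.4), (15.2, 1.5)]

error = sum(math.dist(p, a) for p, a in zip(planned, actual)) / len(planned)
print(f"mean trajectory deviation: {error:.2f} m")
```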
Driverless cars are the future. By replacing human drivers with sensors, data, and machine learning, we remove human error (behind the wheel, at least). This might not resolve every ethical dilemma, but if every vehicle on the road were driverless, our roads should be much safer for everyone.