The future of autonomous vehicles (AVs) relies on simulation. Equally important is knowing which scenarios to simulate. Edge cases are high-risk scenarios that are individually unlikely but collectively account for much of the risk of daily driving. An edge case can be anything from a child chasing a ball into the street to a head-on collision with a car that suddenly veers into your lane.
A dRISK edge case detailing a dangerous passing maneuver by an oncoming vehicle, recorded from an exclusive dRISK source and replicated in NVIDIA DRIVE Sim with some domain-specific modifications.
AV developers will soon be able to test and retrain their AV stacks on these and many more edge cases using the NVIDIA DRIVE Sim platform. DRIVE Sim, built on NVIDIA Omniverse, is an end-to-end platform architected from the ground up to run large-scale, physically accurate multi-sensor simulation, delivering a safe, scalable, and cost-effective way to bring self-driving vehicles to the roads. DRIVE Sim, already shown to closely match real-life scenarios, will now also recreate scenarios from dRISK's knowledge graph of edge cases. Developers testing their AV stacks with DRIVE Sim will be able to validate and retrain their systems on the world's trickiest scenarios.
Following a major grant from the UK government to build the ultimate driver’s test for self-driving cars, dRISK has spent years mapping out the landscape of edge cases from a huge heterogeneity of sources – from CCTV to accident reporting, to first-principles failure mode analysis led by NASA engineers.
Testing, validating and retraining on edge cases before deployment can prevent most of the accidents autonomous vehicles are experiencing right now. As dRISK demonstrated at NVIDIA GTC 2021, developers can achieve a 6x performance improvement on detecting high-risk scenarios by retraining on edge cases, while continuing to get the “center cases” (normal driving) for free.
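The retraining idea above can be sketched in a few lines: edge cases are a tiny fraction of raw driving data, so a retraining mix typically oversamples them while center cases continue to dominate. This is a minimal illustrative sketch, not dRISK's actual pipeline; the `edge_fraction` knob and the scenario names are assumptions.

```python
import random

def build_training_mix(center_cases, edge_cases, edge_fraction=0.3, n=1000, seed=42):
    """Assemble a retraining set that over-represents rare edge cases.

    edge_fraction is a hypothetical knob: edge cases are oversampled to a
    fixed share of the mix, while center (normal driving) cases still make
    up the majority and continue to be learned "for free".
    """
    rng = random.Random(seed)
    n_edge = int(n * edge_fraction)
    mix = [rng.choice(edge_cases) for _ in range(n_edge)]
    mix += [rng.choice(center_cases) for _ in range(n - n_edge)]
    rng.shuffle(mix)
    return mix

# Hypothetical scenario labels, standing in for real clips/scenes.
center = [f"center_{i}" for i in range(10000)]
edge = ["child_follows_ball", "oncoming_pass", "t_bone_occluded"]
mix = build_training_mix(center, edge, edge_fraction=0.3, n=1000)
```

Here 30% of the 1,000-sample mix is drawn from only three edge-case types, which is what lets a detector see enough high-risk examples to improve on them without degrading on ordinary driving.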
dRISK's edge cases, aggregated from real-life data and clustered into a knowledge graph: a data structure flexible enough to capture the heterogeneity of everything that can go wrong on the road.
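To make the knowledge-graph idea concrete, here is a minimal sketch of how edge cases might cluster around shared risk factors. The node names and schema are purely illustrative assumptions (dRISK's actual graph schema is not public): scenarios link to the risk factors they contain, so scenarios that share a factor become neighbors in the graph.

```python
from collections import defaultdict

class EdgeCaseGraph:
    """Toy bipartite knowledge graph: scenario nodes linked to risk-factor nodes."""

    def __init__(self):
        self.neighbors = defaultdict(set)  # adjacency: node -> linked nodes

    def add_scenario(self, scenario, risk_factors):
        # Link the scenario to each of its risk factors (edges are undirected).
        for factor in risk_factors:
            self.neighbors[scenario].add(factor)
            self.neighbors[factor].add(scenario)

    def related(self, scenario):
        """Scenarios sharing at least one risk factor with `scenario`."""
        out = set()
        for factor in self.neighbors[scenario]:
            out |= self.neighbors[factor]
        out.discard(scenario)
        return out

# Hypothetical edge cases and risk factors.
g = EdgeCaseGraph()
g.add_scenario("t_bone_occluded", {"occlusion", "cross_traffic"})
g.add_scenario("oncoming_pass", {"oncoming_lane_incursion"})
g.add_scenario("hidden_cyclist", {"occlusion"})
```

Traversing from one scenario through its shared factors (`g.related("t_bone_occluded")` surfaces `hidden_cyclist` via the common `occlusion` factor) is one way such a structure supports clustering heterogeneous failures around common causes.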
NVIDIA DRIVE Sim has the ingredients to achieve a major goal in AV development—to fully test, validate and even retrain and improve these vehicles in simulation. Chess Stetson, CEO of dRISK, says, “NVIDIA DRIVE Sim is the simulation platform we’ve all been waiting for, which can fully resolve the kind of depth we need to do physically accurate simulations in real time.”
NVIDIA is delivering the next generation of software-defined autonomous systems to OEMs.
One of the most important reasons for physically accurate simulation is to understand the AV perception failures that could cause an oncoming collision to be missed. In the following clip, DRIVE Sim replicates a T-bone scenario from dRISK's knowledge graph, complete with partial occlusions. DRIVE Sim makes the hazardous vehicle obvious to the human eye, as it would be in real life, but it is hard for a standard computer-based object detection system (in this case, Mask R-CNN) to pick up.
Stetson notes, “The human visual system, evolved over millions of years, is highly adapted to avoid things that are going to hurt or kill you, which it does using extensive motion cues.”
This is in stark contrast to current object detection, which is overwhelmingly based on single-frame object localization and classification rather than the kind of motion detection that enables the quick reactions needed to avoid this kind of T-bone scenario. But as AVs gain more exposure to edge cases in a fully sensor-accurate simulation platform, they will be able to develop the capabilities to handle these high-risk, fast-reaction-time scenarios.
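The contrast between per-frame detection and motion cues can be illustrated with a toy example. The thresholds and numbers below are hypothetical stand-ins (real perception stacks are far more complex): a partially occluded vehicle may never clear a single-frame confidence threshold, yet its tracked motion across frames is still a strong hazard signal.

```python
def single_frame_detect(confidence, threshold=0.5):
    """Per-frame classifier: fires only if this frame's score clears the bar."""
    return confidence >= threshold

def motion_cue_detect(positions, speed_threshold=2.0):
    """Flag a track whose frame-to-frame displacement indicates a fast approach."""
    speeds = [abs(b - a) for a, b in zip(positions, positions[1:])]
    return max(speeds) >= speed_threshold if speeds else False

# Hypothetical occluded oncoming car: per-frame confidence stays low...
confidences = [0.32, 0.41, 0.38]
# ...but its tracked distance (metres, per frame) closes in rapidly.
positions = [12.0, 8.5, 4.0]

missed_by_detector = not any(single_frame_detect(c) for c in confidences)
caught_by_motion = motion_cue_detect(positions)
```

In this sketch the single-frame detector never fires while the motion cue does, which is the asymmetry the paragraph above describes: the hazard lives in the trajectory, not in any individual frame.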
An imminent transverse collision, replicated in high fidelity in NVIDIA DRIVE Sim. This kind of edge case is critical for testing AV reactions to high-risk scenarios in DRIVE Sim's real-time sensor environment. Note that in this case a common object detection framework (Mask R-CNN) misses the behavior most critical to risk awareness in the scene, while DRIVE Sim makes it possible to retrain and improve AV perception.
dRISK’s mission is to provide the largest, most diverse and realistic scenario database for testing autonomous vehicles. dRISK has been building this testing suite from a massive set of real-world data. Test sets are optimized for specific tasks, such as failure mode identification and curriculum learning for efficient AI training. dRISK and its customers have shown 6-20x improvements in the ability to recognize and contend with high-risk scenarios, though never yet in a simulation environment as capable as NVIDIA DRIVE Sim.
NVIDIA has licensed dRISK’s scenario data derived from real life, which is organized to be evenly distributed across the landscape of risk, for NVIDIA’s AV development uses and to complement its scenario provision efforts.
As a DRIVE Sim ecosystem partner, dRISK is making its scenario knowledge graph and tools for exploring the space of center and edge cases available to any DRIVE Sim customer.
Capabilities and Goals:
dRISK with DRIVE Sim can efficiently:
– Replicate real-life edge cases, sourced from real-life data, at large scale
– Find failure modes in simulation before vehicles are on the road
– Retrain object detection to have improved performance detecting sources of high risk
In addition to delivering high-risk, true-to-life scenarios for NVIDIA’s use in validating AV stacks against edge cases, dRISK is also exploring the capacity to deliver these capabilities to the wider AV space via a combined use of dRISK’s edge case manifold and NVIDIA AV tools.
Authors: dRISK Team (Chess Stetson, Ph.D., Federico Arenas Lopez, Lorenzo Niccolini, Nils Goldbeck, Ph.D.)
Other contributors: Philipp Hermann, Wael Elhaddad, Carène Kamel