Once you have the edge cases, there are still a couple more hurdles to clear before you can make an AV perform well on all of them. First, you will likely have to use at least some amount of simulation in order to replay the edge cases for the AV. While it is increasingly recognised that the only practical way to develop functional AVs is in simulation, you still need to decide what to simulate. Certainly, you can't just simulate billions of miles of uneventful driving as has been done in the past – you need to focus on edge cases. But the edge cases themselves are too numerous to test exhaustively. You can't expect an AV to run through potentially trillions of edge cases during a nightly build test as you develop it toward better performance. Instead, you need a way of always exposing it to the next best test at every moment. These are tricky points, but dRISK has an approach to them which we'll share in a later posting.
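One way to picture "the next best test at every moment" is as a priority queue over the scenario library, ordered by how likely the AV is to fail each scenario and how bad the failure would be. The sketch below is purely illustrative – the class, scenario names, and scoring rule are invented for this example and are not dRISK's actual method.

```python
import heapq

class ScenarioQueue:
    """Toy priority queue that always surfaces the riskiest untested scenario."""

    def __init__(self):
        self._heap = []  # entries are (negated priority, scenario id)

    def add(self, scenario_id, failure_rate, severity):
        # Hypothetical scoring: expected risk = chance of failing the
        # scenario times the severity of that failure. Higher risk is
        # tested sooner, so we negate for Python's min-heap.
        priority = failure_rate * severity
        heapq.heappush(self._heap, (-priority, scenario_id))

    def next_best_test(self):
        # Pop the scenario the AV is currently most likely to fail badly.
        _, scenario_id = heapq.heappop(self._heap)
        return scenario_id

q = ScenarioQueue()
q.add("occluded_pedestrian", failure_rate=0.4, severity=0.9)   # risk 0.36
q.add("straight_road_clear", failure_rate=0.01, severity=0.2)  # risk 0.002
q.add("cones_blocking_lane", failure_rate=0.6, severity=0.7)   # risk 0.42
print(q.next_best_test())  # prints "cones_blocking_lane"
```

In a real pipeline the failure rates would be re-estimated after every nightly run, so the ordering shifts as the AV improves; boring straight-road driving naturally sinks to the bottom of the queue.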
The good news is that with the edge cases and a method for traversing them, we have all the ingredients for the ultimate AV driver's test. In the domain of AI, where testing and training are very closely related, a true test for self-driving cars nearly guarantees truly safe self-driving cars. dRISK has been given a mandate by the UK government to make this happen by taking a systematic approach to AV regulation. We're well into this effort and you can expect to see a lot more on it from us in the coming months, with final delivery planned for Spring/Summer 2022.
Autonomous vehicles do not have instincts or common sense. So how can they contend with situations that require them? The short answer is that they must be trained to make the safest possible decision in response to everything that could possibly happen on the road. The ideal outcome is that they then make the safest decision every time, just as a skilled human driver would. At a large enough scale, strange and unforeseen events will occur. Recently, an AV operating in East Valley, Phoenix was confused by traffic cones, got itself stuck, and eventually froze in the middle of a moving lane of traffic. Simple but catastrophic failure modes like these lead people to distrust AVs.
AV developers themselves are now starting to recognise that to achieve commercial viability, AVs will need to be trained, tested and validated on a huge number of edge cases. Testing an AV on simple, easy-to-pass tests, such as driving on a straight road with little traffic in good weather, has little benefit, as this is not a situation of real-world risk.
To solve this disconnect with the real world, dRISK has collected, and continues to collect, the world's most complete taxonomy of edge cases. We use CCTV from all over the world, front-facing cameras, accident and insurance claim reports, and information supplied by the public to recreate accidents and near misses in the simulation domain. These can be supplied to AV developers, transport authorities and insurers, who use our products to assess and reduce risk in AV (and non-AV) deployments, and to test and retrain AVs to perform better on edge cases. And we don't just supply data to help test AVs: we can help to train and validate them, ensuring reliable and consistent safety against anything they may encounter on the roads.
Aggregating, storing and actually making this variety of data useful for retraining is a daunting task. How different, for instance, is a scenario where an occluded pedestrian runs across the road from one where two pedestrians do the same? dRISK is solving this problem by developing the first ever knowledge graph of risk. Simply put, the knowledge graph is a cloud of individual edge cases, clustered into neighbourhoods of similar scenario types. This application of knowledge graph technology is the ideal computational environment for storing this degree of complexity, ensuring that generated scenarios provide an effective sampling across the space of risk.
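The clustering idea can be sketched in a few lines. In this toy version – the scenarios, tags, and similarity threshold are all invented for illustration, not drawn from dRISK's graph – edge cases are linked when they share enough descriptive tags, connected components of the resulting graph play the role of "neighbourhoods", and picking one representative per neighbourhood spreads testing evenly across the space of risk.

```python
from collections import defaultdict
from itertools import combinations

# Toy edge-case library: scenario id -> set of descriptive tags.
scenarios = {
    "ped_occluded_1": {"pedestrian", "occlusion", "crossing"},
    "ped_occluded_2": {"pedestrian", "occlusion", "crossing", "pair"},
    "cones_in_lane":  {"construction", "cones", "blocked_lane"},
    "debris_in_lane": {"debris", "blocked_lane"},
}

# Link scenarios that share at least two tags (threshold is arbitrary).
graph = defaultdict(set)
for a, b in combinations(scenarios, 2):
    if len(scenarios[a] & scenarios[b]) >= 2:
        graph[a].add(b)
        graph[b].add(a)

def neighbourhoods(nodes, graph):
    """Connected components of the similarity graph, via depth-first search."""
    seen, clusters = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, cluster = [start], set()
        while stack:
            n = stack.pop()
            if n not in cluster:
                cluster.add(n)
                stack.extend(graph[n] - cluster)
        seen |= cluster
        clusters.append(cluster)
    return clusters

clusters = neighbourhoods(scenarios, graph)
# One representative per neighbourhood gives even coverage of risk types,
# instead of over-sampling near-duplicate pedestrian scenarios.
sample = [sorted(c)[0] for c in clusters]
```

Here the two occluded-pedestrian scenarios collapse into one neighbourhood, while the cone and debris scenarios stay distinct; a production graph would use far richer similarity measures than shared tags, but the sampling principle is the same.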