Autonomous vehicle simulation poses two challenges: creating a world detailed and realistic enough that the AI driver perceives the simulation as real, and generating simulations at a large enough scale to cover all the cases an AI driver needs to be fully trained and tested on.
To address these challenges, NVIDIA researchers have created new AI-based tools to build simulations directly from real-world data. Jensen Huang, founder and CEO of NVIDIA, previewed the technology during his GTC keynote.
The research includes award-winning work first presented at SIGGRAPH, a computer graphics conference held last month.
Neural Reconstruction Engine
The Neural Reconstruction Engine is a new AI toolkit for NVIDIA DRIVE Sim, a simulation platform, that uses multiple AI networks to turn recorded video data into simulation.
The new pipeline uses AI to automatically extract key components needed for the simulation, including the environment, 3D assets, and scenarios. These pieces are then reconstructed into simulated scenes that have the realism of data recordings, but are fully interactive and can be manipulated as needed. Achieving this level of detail and versatility manually is expensive, time-consuming, and not scalable.
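Conceptually, the pipeline splits a recorded drive into three products: an environment, a set of assets, and a scenario. The sketch below illustrates that decomposition with plain Python; all names (`SimScene`, `reconstruct`, the frame format) are hypothetical stand-ins, not the actual DRIVE Sim API, and the real system would run neural networks where this placeholder only aggregates annotations.

```python
from dataclasses import dataclass, field

@dataclass
class SimScene:
    """Hypothetical container for the three extracted components."""
    environment: str          # reconstructed 3D background
    assets: list = field(default_factory=list)    # reusable 3D objects
    scenario: list = field(default_factory=list)  # timed events

def reconstruct(video_frames):
    # Placeholder extraction: a real pipeline would run neural networks
    # over the video; here we just aggregate per-frame object annotations.
    environment = f"mesh_from_{len(video_frames)}_frames"
    assets = sorted({obj for frame in video_frames for obj in frame["objects"]})
    scenario = [(frame["t"], obj) for frame in video_frames
                for obj in frame["objects"]]
    return SimScene(environment, assets, scenario)

frames = [
    {"t": 0.0, "objects": ["car_12", "pedestrian_3"]},
    {"t": 0.1, "objects": ["car_12"]},
]
scene = reconstruct(frames)
print(scene.assets)  # the unique assets harvested from the recording
```

The point of the decomposition is that each product is independently reusable: the environment can host other scenarios, and the harvested assets can appear in entirely different scenes.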
Environments and assets
The simulation needs an environment in which to run. The AI pipeline converts 2D video data recorded in the real world into a dynamic 3D digital twin environment that can be loaded into DRIVE Sim.
The DRIVE Sim AI pipeline follows a similar process to reconstruct other 3D assets. Engineers can use these assets to rebuild the recorded scene or place them in a larger asset library for use in any simulation.
This asset-harvesting pipeline is fundamental to growing the DRIVE Sim asset library and ensuring it matches real-world diversity and distribution.
Scenarios

Scenarios are the events that occur during a simulation, in an environment populated with the assets.
The Neural Reconstruction Engine assigns AI-based behavior models to the actors in a scene, so that when presented with the original events, they behave exactly as they did in the real world. But because they have an AI behavior model, the actors in the simulation can also respond and react to changes induced by the AV or other scene elements.
Since these scenarios all occur in the simulation, they can also be manipulated to add new situations. The timing and location of events can be changed. Developers can incorporate entirely new elements, artificial or real, to make the scenario more challenging, such as adding a kid chasing a ball to the scene below.
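The manipulations described above, shifting an event in time and injecting a new actor, can be sketched as simple transformations over a scenario's event list. This is a minimal illustration with hypothetical names and data shapes, not the DRIVE Sim scenario format.

```python
from copy import deepcopy

def shift_events(scenario, actor_id, dt):
    """Delay every event belonging to one actor by dt seconds."""
    out = deepcopy(scenario)
    for event in out:
        if event["actor"] == actor_id:
            event["t"] += dt
    return out

def inject_actor(scenario, actor_id, t, position):
    """Add a new actor's event, e.g. a child chasing a ball into the road."""
    return scenario + [{"actor": actor_id, "t": t, "pos": position}]

# Recorded scenario: one car passing at t = 1.0 s (hypothetical data).
base = [{"actor": "car_12", "t": 1.0, "pos": (0.0, 0.0)}]

# Variant: the car arrives half a second later, and a child enters at t = 1.2 s.
variant = inject_actor(shift_events(base, "car_12", 0.5),
                       "child_1", t=1.2, position=(3.0, 1.0))
```

Because the functions return new lists rather than mutating the recording, the original scenario stays intact and many challenging variants can be generated from one real-world capture.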
Integration into DRIVE Sim
Once the environment, assets, and scenario are extracted, they are reassembled in DRIVE Sim to create a 3D simulation of the recorded scene or mixed with other assets to create an entirely new scene.
DRIVE Sim provides tools for developers to adjust dynamic and static objects, the vehicle's trajectory, and the location, orientation, and parameters of the vehicle's sensors.
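A sensor adjustment of the kind mentioned above usually boils down to a small set of mounting and optics parameters. The dataclass below is a hypothetical description for illustration only; DRIVE Sim's actual sensor configuration interface differs.

```python
from dataclasses import dataclass

@dataclass
class CameraSensor:
    # Hypothetical camera description, not the DRIVE Sim API.
    position_m: tuple   # (x, y, z) offset from the vehicle origin, meters
    yaw_deg: float      # heading of the optical axis, degrees
    fov_deg: float      # horizontal field of view, degrees
    resolution: tuple   # (width, height) in pixels

# A forward-facing camera mounted near the windshield (assumed values).
front_cam = CameraSensor(position_m=(1.8, 0.0, 1.4), yaw_deg=0.0,
                         fov_deg=120.0, resolution=(1920, 1080))
```

Sweeping parameters like these lets engineers test how the same reconstructed scene looks through different sensor rigs.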
The same scenes in DRIVE Sim are also used to generate labeled synthetic data to train perception systems. Randomizations are applied over the recreated scenes to add variety to the training data. Creating scenes from real-world data greatly reduces the sim-to-real gap.
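The randomization step can be sketched as a function that perturbs a reconstructed scene's parameters, lighting, actor appearance, traffic density, to produce varied training samples. The parameter names here are assumptions for illustration; a production system would randomize far more dimensions.

```python
import random

def randomize(scene_params, rng):
    """Apply simple domain randomization to a reconstructed scene (sketch)."""
    out = dict(scene_params)  # leave the original scene untouched
    out["sun_elevation_deg"] = rng.uniform(5.0, 85.0)
    out["vehicle_color"] = rng.choice(["red", "white", "black", "blue"])
    out["pedestrian_count"] = scene_params["pedestrian_count"] + rng.randint(0, 3)
    return out

rng = random.Random(0)           # seeded for reproducible variants
base = {"pedestrian_count": 2}   # hypothetical reconstructed-scene parameters
variants = [randomize(base, rng) for _ in range(4)]
```

Because the variants are grounded in a real recorded scene, the randomized data stays close to the real-world distribution, which is what narrows the sim-to-real gap.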
The ability to mix and match simulation components is an important advantage for testing and validating autonomous vehicles at scale. Engineers can manipulate events in a world that responds exactly to their needs.
The Neural Reconstruction Engine is the result of the work of the research team at NVIDIA, and it will be integrated into future versions of DRIVE Sim. This technology will enable developers to take advantage of both physics-based simulation and neural simulation on the same cloud-based platform.