
Lucid, Jaguar Land Rover and Uber are exploring NVIDIA’s newly launched Alpamayo family of open-source AI models, simulation tools and datasets as the industry looks to advance safer, reasoning-based autonomous vehicles (AVs).
Unveiled at CES, Alpamayo is designed to help developers tackle complex and rare driving scenarios by enabling vehicles to reason through situations step by step rather than relying solely on pre-trained patterns.
Autonomous vehicles must operate safely in countless driving situations, including rare and unpredictable scenarios often referred to as the industry’s “long tail.”
These unusual events, such as unexpected road behavior or complex traffic conditions, remain difficult for existing systems to manage. Many traditional AV systems rely on separate components for perception and planning, which can struggle when faced with unfamiliar situations.
NVIDIA’s Alpamayo family takes a different approach by introducing reasoning-based vision-language-action (VLA) models. These models are designed to think through situations step by step, similar to how humans assess cause and effect while driving.
This allows autonomous systems to better handle unfamiliar or complex conditions and provide clearer explanations for their decisions, an important factor for safety and trust.
According to NVIDIA founder and CEO Jensen Huang, the industry is entering a new phase where machines can understand and act in the physical world. He said Alpamayo enables autonomous vehicles, including robotaxis, to reason through rare scenarios, operate safely in complex environments, and explain why certain driving decisions are made.
An Open Ecosystem for Autonomous Driving Development
The Alpamayo family brings together three key components into a single, open ecosystem: open AI models, simulation tools, and large-scale datasets.
Rather than running directly inside vehicles, Alpamayo models act as large “teacher” models. Developers can adapt and compress these models into smaller versions that fit within their own autonomous driving systems.
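Compressing a large teacher model into a smaller student is typically done through knowledge distillation, where the student is trained to match the teacher's softened output distribution. The sketch below illustrates the core objective in plain Python; it is a generic, minimal example of the technique, not NVIDIA's actual distillation pipeline, and all names in it are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: higher temperature softens the
    # distribution, exposing more of the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions: the standard objective for transferring a large
    # teacher's behavior into a smaller student model.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that matches the teacher incurs zero loss; a diverging
# student is penalized.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, [2.0, 1.0, 0.1]))  # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # > 0
```

In practice the student is a smaller network trained on this loss (often mixed with a task loss), which is how a compact in-vehicle policy can inherit behavior from a much larger teacher.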
As part of the initial release, NVIDIA introduced Alpamayo 1, the first open chain-of-thought reasoning VLA model built specifically for autonomous vehicle research.
The model uses video input to produce driving trajectories along with reasoning traces that explain each decision. With 10 billion parameters, Alpamayo 1 is available on Hugging Face with open model weights and open-source inference tools.
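Pairing a planned trajectory with an ordered reasoning trace might be represented along the following lines. This is a hypothetical sketch of such an output structure, not Alpamayo 1's actual interface or schema, and the field names and sample values are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Waypoint:
    t: float  # seconds into the future
    x: float  # meters forward, vehicle frame
    y: float  # meters left, vehicle frame

@dataclass
class DrivingDecision:
    trajectory: list  # planned Waypoints the vehicle should follow
    reasoning: list   # ordered chain-of-thought steps behind the plan

# Illustrative decision: the trajectory is paired with a human-readable
# explanation of why it was chosen.
decision = DrivingDecision(
    trajectory=[Waypoint(0.5, 4.2, 0.0), Waypoint(1.0, 8.1, -0.3)],
    reasoning=[
        "Cyclist ahead in the right lane is drifting left.",
        "Reduce speed and shift slightly left to increase clearance.",
    ],
)
print(len(decision.trajectory), len(decision.reasoning))  # 2 2
```

Coupling each trajectory with an explicit reasoning trace is what lets developers audit why a driving decision was made, the transparency benefit the article describes.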
NVIDIA also released AlpaSim, a fully open-source simulation framework available on GitHub. AlpaSim allows developers to test autonomous vehicles in realistic environments that include detailed sensor models and configurable traffic conditions. This makes it easier to validate driving behavior and refine policies before real-world deployment.
Industry leaders note that future autonomous vehicles will need more than data processing; they will need the ability to reason about real-world behavior. Open access to models, simulation tools, and datasets is seen as a key factor in accelerating progress while maintaining safety and transparency.
Shahriena Shukri is a journalist covering business and economic news in Malaysia, providing insights on market trends, corporate developments, and financial policies.


