
Uber Turns Human Drivers’ Cars Into Sensor Grid for Self-Driving Data Engines
Key Takeaways
- Uber plans to outfit drivers' cars with sensors to collect real-world data for autonomous-vehicle training.
- It creates a distributed sensor grid from millions of vehicles for AV AI training.
- Regulatory and privacy hurdles accompany data licensing to monetize driving data.
From rides to road data
Uber is exploring a long-term strategy to turn its human drivers’ cars into sensor-equipped platforms that can collect real-world data for autonomous vehicle (AV) companies and other firms training AI models on physical-world scenarios.
The plan was described by Uber Chief Technology Officer Praveen Neppalli Naga during an interview at TechCrunch’s StrictlyVC event in San Francisco, where he said, “That is the direction we want to go eventually,” about equipping human drivers’ vehicles.
TechCrunch reports that Naga framed the effort as “a natural extension of a nascent program the company announced in late January called AV Labs,” and said the company must first understand “the sensor kits and how they all work.”
In the meantime, Uber’s AV Labs relies on “a small, dedicated fleet of sensor-equipped cars that Uber operates itself, separate from its driver network,” according to TechCrunch.
The Indian outlet NewsBytes similarly describes the initiative as equipping human drivers’ cars with sensors to collect real-world data for AV companies, while noting that AV Labs currently uses “a small fleet of sensor-equipped cars owned by Uber itself, not its driver network.”
India Today adds that Uber wants to “move in that direction” but first “needs to better understand sensor systems and sort out rules around privacy, data sharing, and state regulations.”
Across the coverage, the central premise is that the limiting factor for AV development is not the underlying technology but access to data, with TechCrunch quoting Naga: “The bottleneck is data.”
AV cloud and shadow mode
Uber’s data strategy is not limited to collecting sensor inputs; it also includes how partners can use that data to train and test their models.
TechCrunch says Uber is building what Naga described as an “AV cloud”: “a library of labeled sensor data that partner companies can query and use to train their models.”

The Hans India similarly describes the “AV cloud” as “a large, searchable repository of labelled sensor data,” and says it would allow partner companies to access “specific datasets required for training their models.”
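As described, the “AV cloud” would let partners pull exactly the scenarios their models need from a searchable, labeled repository. The following sketch illustrates that idea in miniature; every name and structure here is hypothetical, and nothing reflects Uber’s actual system design.

```python
# Toy model of a queryable library of labeled sensor data:
# partners filter recorded scenes by label to retrieve the
# specific datasets required for training. All names are illustrative.
scenes = [
    {"id": 1, "labels": {"night", "rain", "pedestrian"}},
    {"id": 2, "labels": {"day", "highway"}},
    {"id": 3, "labels": {"night", "construction"}},
]

def query(required: set[str]) -> list[int]:
    """Return IDs of scenes carrying every requested label."""
    return [s["id"] for s in scenes if required <= s["labels"]]

print(query({"night"}))          # all night-time scenes
print(query({"night", "rain"}))  # narrower query: night AND rain
```

A real system would operate over petabytes of camera, lidar, and radar logs rather than a Python list, but the access pattern, filtering a labeled corpus down to targeted training scenarios, is the one the coverage describes.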
Both outlets connect the cloud to a testing method called “shadow mode,” where AI systems can simulate decision-making during real trips without taking control of a vehicle.
TechCrunch states that partners “can also use the system to run their trained models in ‘shadow mode’ against real Uber trips, simulating how an AV would have performed without actually putting one on the road.”
NewsBytes likewise explains that “shadow mode” lets partners run trained models “against real Uber trips,” simulating performance without deploying an autonomous car.
India Today describes the same concept in more plain language, saying “a trained self-driving model can run virtually during a real Uber trip to see how it would behave, without an autonomous car actually being on the road.”
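The shadow-mode idea the outlets describe, replaying a real trip through a trained model and scoring its decisions without ever actuating the vehicle, can be sketched minimally as follows. This is an illustrative stand-in, not Uber’s or any partner’s actual pipeline; all names and the toy policy are assumptions.

```python
# Minimal sketch of "shadow mode" evaluation: a candidate driving model
# receives the observations a recorded trip produced, its decisions are
# logged, and they are compared against what the human driver actually
# did. The model never controls the car.
from dataclasses import dataclass

@dataclass
class Frame:
    speed_mph: float        # vehicle speed at this moment
    obstacle_ahead: bool    # simplified sensor reading
    driver_action: str      # what the human driver actually did

def model_decide(frame: Frame) -> str:
    """Toy stand-in for a trained AV policy."""
    return "brake" if frame.obstacle_ahead else "maintain"

def shadow_run(trip: list[Frame]) -> float:
    """Replay a recorded trip through the model and return the
    fraction of frames where it agreed with the human driver."""
    agreements = sum(model_decide(f) == f.driver_action for f in trip)
    return agreements / len(trip)

trip = [
    Frame(35.0, False, "maintain"),
    Frame(34.0, True, "brake"),
    Frame(20.0, False, "maintain"),
]
print(shadow_run(trip))  # agreement rate over the recorded trip
```

Because the comparison happens offline against logged trips, this kind of evaluation carries none of the on-road risk of deploying an autonomous vehicle, which is the appeal the outlets highlight.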
Regulation, privacy, and scale
A recurring theme across the reporting is that Uber’s plan depends on navigating sensor-kit behavior and the rules governing data sharing across different jurisdictions.
TechCrunch quotes Naga saying, “But first we need to get the understanding of the sensor kits and how they all work,” and adds that “There are some regulations — we have to make sure every state has [clarity on] what sensors mean, and what sharing it means.”
India Today similarly says Uber wants to use driver cars as “rolling data machines” but first “needs to better understand sensor systems and sort out rules around privacy, data sharing, and state regulations.”
The Hans India frames the challenge for autonomous vehicle companies as shifting from technology development to acquiring “high-quality, real-world data,” and says training AI requires exposure to “diverse and unpredictable scenarios.”
It also describes the practical obstacle for AV companies: gathering targeted data is “expensive and time-consuming,” which Uber treats as an opportunity.
NewsBytes adds that Uber’s ultimate goal is to equip human drivers’ vehicles with sensor kits, but that Uber needs to understand “what they mean in terms of data sharing regulations in different states.”
The Ukrainian outlet mezha.net quotes Naga directly about the regulatory hurdle, saying, “There are regulatory requirements – we must ensure that each state has clarity on what the sensors mean, and what their data transmission entails.”
Uber’s AI coding shift
While Uber’s external plan targets AV training data, the sources also describe a parallel transformation inside Uber’s engineering organization driven by AI coding tools.
The Hans India reports that “artificial intelligence is rapidly reshaping Uber’s internal operations,” and says the company is seeing “a dramatic rise in the use of AI-powered coding tools, including Claude Code.”

It quotes Naga saying, “I’m back to the drawing board, because the budget I thought I would need is blown away already,” describing how quickly AI integration is evolving.
India Today likewise reports that Naga told The Information that Uber’s “original AI budget estimates have already been surpassed because of the fast adoption of advanced coding tools such as Anthropic’s Claude Code,” and repeats the same “I’m back to the drawing board” quote.
Both outlets connect the spending shift to a change in how software is produced, with The Hans India describing “agentic software engineering,” where “AI systems independently generate code and complete tasks with minimal human input.”
The Hans India provides specific metrics: “Around 1,800 code changes each week are now fully generated by Uber’s internal AI tools,” and “Nearly 95 per cent of engineers use AI in their work monthly, and about 70 per cent of all committed code involves AI assistance.”
It adds that “In just a few months, the company’s internal AI agent has grown from contributing less than 1 per cent of code changes to roughly 8 per cent.”
How outlets frame the pivot
The reporting portrays Uber’s move through different lenses, even when describing the same core idea of turning drivers into sensor infrastructure for AV training.
TechCrunch emphasizes the technical and regulatory prerequisites, quoting Naga on the need for “understanding of the sensor kits” and “clarity on what sensors mean, and what sharing it means,” and it highlights the data bottleneck with Naga’s line, “The bottleneck is data.”

The Hans India focuses on the strategic framing of Uber as a data provider, saying Naga’s goal is “not to make money out of this data” and “We want to democratise it,” while also detailing Uber’s “AV cloud” and “shadow mode” capabilities.
NewsBytes presents the same plan as a way to help autonomous vehicle companies, describing Uber’s “AV cloud” as “a library of labeled sensor data” and explaining that partners can run models in “shadow mode” during real Uber trips.
India Today foregrounds the “rolling data machines” language and ties the initiative to privacy, data sharing, and “state regulations,” while also repeating the internal AI coding metrics and the “agentic software engineering” description.
mezha.net, meanwhile, quotes Naga’s regulatory explanation about each state’s clarity on “what the sensors mean” and “what their data transmission entails,” and it frames Uber’s stance as not quickly monetizing data but creating an “open, democratic data base.”
Finally, one outlet, identified in the source material only as a West Asian publication, advances a different, more expansive narrative about “Project Sentinel,” including figures such as “7.1 million active drivers” and “$2.5 billion” in annual licensing revenue by 2028, claims not echoed by the other outlets.