The National Highway Traffic Safety Administration (NHTSA) has launched an investigation into Avride, a self-driving technology partner of Uber, after identifying more than a dozen crashes, one of which resulted in a minor injury. The probe renews questions about the safety of autonomous vehicles and whether current regulations are strict enough to prevent similar incidents.
Key Takeaways
- Avride has been involved in over a dozen self-driving crashes, with one resulting in a minor injury.
- NHTSA has opened an investigation into Avride’s safety record.
- The investigation highlights the need for stricter regulations on autonomous vehicles.
- Avride’s partnership with Uber raises questions about safety oversight on both sides of the deal.
- The incident serves as a reminder of the risks associated with autonomous vehicle technology.
Investigation into Avride
Avride, a San Francisco-based company, has been at the forefront of self-driving technology, partnering with Uber and other major players in the industry. However, the recent string of crashes has raised serious concerns about the safety of its vehicles.
The company began testing its autonomous systems in 2021, initially in controlled environments before expanding to public roads in select cities. By 2024, Avride had secured permits to operate driverless vehicles in California and Texas, positioning itself as a key player in the race toward full autonomy. Its collaboration with Uber was seen as a strategic move to integrate self-driving cars into ride-hailing fleets, reducing operational costs and increasing scalability.
Despite these ambitions, Avride’s safety record has drawn scrutiny. The NHTSA’s preliminary findings point to recurring issues in vehicle behavior during complex urban driving scenarios—intersections with obscured signage, sudden pedestrian movements, and high-traffic zones where split-second decisions are critical. In several instances, Avride vehicles failed to yield properly or misjudged the speed of oncoming traffic, leading to low-speed collisions.
The agency’s investigation will focus on Avride’s decision-making algorithms, sensor reliability, and fallback systems designed to take over when autonomy fails. It will also examine whether the company’s internal safety reviews flagged these risks before deployment.
Crashes Involving Avride
According to the NHTSA report, Avride has been involved in over a dozen self-driving crashes, with one resulting in a minor injury. The report notes that the crashes were caused by a combination of factors, including software glitches and environmental conditions.
Most incidents occurred in San Francisco, where unpredictable traffic patterns and dense urban infrastructure challenge even the most advanced systems. In one case, an Avride vehicle attempted to merge into traffic but misread the gap, colliding with a motorcycle. The rider sustained a minor arm injury but was able to walk away. In another, the vehicle failed to recognize a temporary construction barrier and drove into it during a rainstorm, likely due to sensor degradation in wet conditions.
Software bugs appear to be a consistent theme. One crash was traced to a delay in the perception stack’s ability to classify a stopped vehicle, causing the Avride car to continue forward at 15 mph. A patch was issued after the incident, but questions remain about how thoroughly such issues are tested in simulation before deployment.
Environmental factors alone don’t explain the pattern. Other autonomous operators in the same cities—using similar hardware—have reported fewer incidents under comparable conditions. That discrepancy suggests the problem may lie in Avride’s specific implementation rather than a universal shortcoming of current AV technology.
All of the crashes occurred while the vehicles were operating in fully autonomous mode, without safety drivers behind the wheel. Avride had removed human operators from its fleet in early 2025 as part of a cost-cutting initiative aimed at commercial viability. That decision now appears riskier in hindsight, especially given the absence of real-time human intervention during failures.
NHTSA Investigation
The NHTSA investigation into Avride marks a major development for the autonomous vehicle industry. The agency will work to determine the cause of the crashes and identify potential safety risks.
This isn’t the first time NHTSA has stepped in on autonomous vehicle matters. In 2021, the agency opened a probe into Tesla’s Autopilot system after a series of crashes involving stationary emergency vehicles. That investigation ultimately led to a recall and software updates. In 2023, NHTSA also reviewed Waymo’s performance in Phoenix after public concern over erratic braking behavior. Each of these cases set precedents for how federal regulators respond to emerging AV risks.
With Avride, the stakes are higher. Unlike Tesla, where drivers are expected to remain engaged, Avride’s vehicles are designed to operate without human oversight. That shifts the entire burden of safety onto the software and hardware systems. If flaws are found, NHTSA has the authority to demand recalls, suspend testing permits, or impose fines.
The investigation will likely include a close look at Avride’s safety validation process. How many miles were driven in simulation versus real-world conditions? What failure modes were modeled? How often are over-the-air updates pushed, and are they adequately stress-tested?
Regulators will also look at Avride’s reporting practices. Federal guidelines require companies to self-report crashes involving autonomous systems. The fact that over a dozen incidents were tallied suggests either a higher-than-average failure rate or improved transparency—possibly both. Still, transparency doesn’t replace safety.
The timeline for the investigation is unclear, but past NHTSA probes have taken anywhere from six months to two years, depending on complexity. Given the public attention and Avride’s ties to a major brand like Uber, pressure will be on to deliver findings quickly.
Regulatory Concerns
The investigation highlights the need for stricter regulations on autonomous vehicles. While the industry has called for more lenient regulations, the NHTSA has emphasized the importance of prioritizing safety.
Current federal rules don’t require pre-market approval for autonomous driving systems. Instead, companies file voluntary safety assessment letters outlining their approach. That self-certification model worked when AVs were limited to test fleets, but it’s showing strain as vehicles scale into commercial operations.
NHTSA has floated the idea of mandatory safety benchmarks—standardized tests for perception, decision-making, and emergency response—similar to crash tests for traditional cars. There’s also talk of requiring real-time data sharing from AVs so regulators can monitor performance across fleets.
Some in the industry resist these moves, arguing that rigid standards could slow innovation. But the Avride case shows what happens when oversight lags behind deployment. A single injury might seem minor, but it could foreshadow larger systemic failures.
States have tried to fill the gap. California’s DMV publishes annual disengagement reports, tracking how often human drivers must take over. Texas, where Avride also operates, has looser rules, focusing more on registration than ongoing monitoring. This patchwork makes it hard to compare performance across regions or hold companies accountable consistently.
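The California disengagement metric mentioned above boils down to simple arithmetic: autonomous miles driven divided by the number of human takeovers. A small sketch, using made-up fleet figures for illustration:

```python
# Sketch of the miles-per-disengagement metric reported in California DMV
# annual filings. All fleet numbers below are hypothetical.

def miles_per_disengagement(autonomous_miles: float, disengagements: int) -> float:
    """Average autonomous miles driven between human takeovers."""
    if disengagements == 0:
        return float("inf")  # no takeovers recorded in the reporting period
    return autonomous_miles / disengagements


fleet_reports = {
    "operator_a": (500_000, 25),  # hypothetical: 500k miles, 25 takeovers
    "operator_b": (120_000, 60),
}
for name, (miles, events) in fleet_reports.items():
    print(name, miles_per_disengagement(miles, events))
# operator_a 20000.0
# operator_b 2000.0
```

The metric's weakness, and part of why the state-by-state patchwork matters, is that operators self-define what counts as a disengagement, so cross-company comparisons are only as good as the reporting conventions behind them.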
Federal lawmakers have introduced bills to give NHTSA more authority over AVs, but none have passed. Until they do, the agency is left reacting to incidents instead of preventing them.
What This Means For You
The investigation into Avride serves as a reminder of the risks associated with autonomous vehicle technology. As the industry continues to push the boundaries of innovation, it is crucial that regulations keep pace to ensure public safety.
For developers and builders, the incident underscores the importance of prioritizing safety in autonomous vehicle design, including rigorous testing protocols and disciplined handling of software defects before they reach the road.
Consider a startup building a last-mile delivery robot. If it cuts corners on edge-case validation—say, failing to simulate low-light scenarios with reflective puddles—it could end up causing property damage or injuries. Insurers may refuse coverage. Cities might revoke permits. Public trust, once lost, is hard to regain.
Now imagine a mid-sized AV company preparing to expand from Arizona to New York. The Avride probe means they’ll face tougher scrutiny from regulators and the public. They’ll need to show not just that their system works, but that they’ve rigorously tested failure modes, logged disengagements, and responded to incidents transparently. Their safety documentation could make or break their expansion plans.
For founders seeking venture funding, the stakes are equally high. Investors are already wary of AV startups after several high-profile shutdowns in the early 2020s. A single crash linked to poor engineering can kill a funding round. The Avride case gives VCs more reason to dig into technical due diligence—asking for test logs, redundancy plans, and incident response playbooks before writing checks.
Safety isn’t just ethical. It’s a business imperative. Companies that treat it as a checkbox will struggle. Those that bake it into their culture from day one stand a better chance of surviving regulatory waves and earning public trust.
Competitive Landscape
Avride operates in a crowded and unforgiving market. Major players like Waymo, Cruise, and Motional have spent over a decade refining their systems, racking up millions of real-world and simulated miles. Waymo, for example, has driven over 20 million miles on public roads and billions more in simulation. Their vehicles have been involved in far fewer incidents per mile, giving them a credibility edge.
Cruise, despite its own setbacks—including a high-profile suspension of operations in 2023—still benefits from deep pockets and GM’s manufacturing infrastructure. Motional, backed by Hyundai and Aptiv, has focused on geofenced robotaxi services with strong safety records in cities like Las Vegas and Austin.
Avride, by comparison, has moved faster but with less public data to back its claims. It launched commercial services in 2024 with big promises but hasn’t released detailed safety metrics. That lack of transparency stands in contrast to peers who publish annual safety reports.
Uber’s partnership was meant to level the playing field. By integrating Avride’s tech into its ride-hailing network, the company hoped to leapfrog slower competitors. But now, that same association could amplify reputational damage. Uber has worked hard to rebuild trust after past controversies; an AV scandal could undo years of progress.
Other ride-hailing platforms are watching closely. Lyft exited its in-house AV development in 2022 and now partners with third-party operators. The Avride probe may make them even more cautious about which fleets they allow on their app.
Automakers are also reassessing their AV strategies. Ford and Volkswagen scaled back their self-driving investments in 2023 after realizing the timeline to profitability was longer than expected. The Avride investigation could further dampen enthusiasm, pushing more companies toward driver-assist features rather than full autonomy.
What Happens Next
One question remains: can the industry find a balance between innovation and safety?
The answer depends on multiple factors. Will NHTSA come out with enforceable standards, or stick to advisory guidance? Will Avride issue a public report detailing what went wrong and how it plans to fix it? And will Uber maintain its partnership, or distance itself to protect its brand?
Short term, expect Avride to pause expansions and focus on internal reviews. They may temporarily reinstate safety drivers or limit operations to less complex routes. Software updates will likely roll out to address the specific failure modes identified in crashes.
Long term, the AV industry may see consolidation. Smaller players without deep reserves or proven safety records could be acquired or forced out. Regulatory hurdles will rise, favoring companies that can afford rigorous compliance.
Public perception will play a decisive role. A series of minor crashes might not make headlines, but they chip away at trust. If people start seeing AVs as unreliable or dangerous, cities may restrict access, insurers may hike premiums, and investors may pull back.
The dream of fully autonomous vehicles isn’t dead. But the path forward is narrowing. Speed once defined success in this space. Now, it’s safety, transparency, and accountability.
Sources: TechCrunch, The Verge
A self-driving Uber vehicle with Avride technology is seen on the streets of San Francisco. The vehicle is equipped with a range of sensors and cameras, designed to detect and respond to its surroundings.


