Waymo has issued a recall for more than 600 of its self-driving vehicles after a serious incident in which a robotaxi veered off course and plunged into a creek in San Francisco. The crash, which occurred during a routine test with no passengers aboard, highlights a growing gap between US and UK regulatory frameworks for autonomous vehicles. While the UK's rigorous safety protocols have so far prevented similar failures, the US's patchwork of state-level rules leaves room for disaster.
The incident unfolded when a Waymo Jaguar I-Pace, operating without a human driver, misinterpreted road markings near a construction zone. The vehicle's sensors failed to distinguish between a temporary barrier and the edge of a flooded drainage channel, and the car drove straight into the water, causing extensive damage to the vehicle but no injuries. Waymo's internal investigation traced the error to a software bug in the perception module, which misclassified the reflective surface of the creek as a continuation of the asphalt.
The recall involves a fleet-wide software update to all 672 vehicles in Waymo's commercial operations in San Francisco and Phoenix. The company says it has fixed the issue by improving the algorithms that fuse lidar and camera data, so that anomalous surfaces are detected more reliably. This is not the first such setback: in 2023, the company recalled hundreds of vehicles after a similar misidentification, this time of a low-hanging tree branch, caused a collision.
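Waymo has not published technical details of the patch, but the failure it describes maps onto a well-known fusion problem: a camera can be fooled by a reflective surface that a lidar, for exactly the same physical reason, barely registers. The Python sketch below is purely illustrative, with every function name and threshold invented for this article; it shows how cross-checking a camera's 'drivable' label against lidar return statistics can flag a flooded patch as anomalous.

```python
import numpy as np

# Illustrative sketch only: Waymo's perception stack is proprietary, and
# every name and threshold here is invented. The idea is the general
# fusion cross-check described above: distrust a camera "drivable" label
# when the lidar barely sees the surface, as happens over still water,
# which reflects the beam away from the sensor.

DRIVABLE = 1  # hypothetical camera-classifier label for "road surface"

def lidar_cell_stats(points):
    """Summarise lidar returns in one ground-grid cell.

    points: (N, 4) array of x, y, z, intensity. Asphalt typically gives
    a dense cloud of moderate-intensity returns; still water gives
    sparse, weak ones because specular reflection steers energy away.
    """
    if len(points) == 0:
        return {"density": 0.0, "mean_intensity": 0.0}
    return {
        "density": float(len(points)),
        "mean_intensity": float(points[:, 3].mean()),
    }

def is_anomalous_surface(camera_label, stats,
                         min_density=20, min_intensity=0.15):
    """Flag cells the camera calls drivable but the lidar barely sees."""
    if camera_label != DRIVABLE:
        return False
    return (stats["density"] < min_density
            or stats["mean_intensity"] < min_intensity)

# Toy flooded cell: five faint, near-planar returns at height ~0.
rng = np.random.default_rng(0)
water = np.column_stack([
    rng.uniform(0, 1, 5),       # x
    rng.uniform(0, 1, 5),       # y
    rng.normal(0, 0.005, 5),    # z: water is almost perfectly flat
    rng.uniform(0.0, 0.05, 5),  # intensity: weak specular returns
])
print(is_anomalous_surface(DRIVABLE, lidar_cell_stats(water)))  # True
```

A production system would learn such thresholds rather than hard-code them, and would fuse many more cues, but the cross-check principle is the same: when two sensor modalities disagree about a surface, treat it as not drivable.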
The contrast with the UK's approach is stark. Britain's Centre for Connected and Autonomous Vehicles (CCAV) requires every self-driving system to pass a rigorous 'Safety Case' review before deployment, including scenario-based testing for edge cases such as flooded roads, construction zones, and reflective surfaces. It is hard to see how Waymo's software would have survived that process. The UK also imposes 'Operational Design Domain' restrictions that keep autonomous vehicles out of areas with unpredictable topography or poor sensor visibility; by that standard, San Francisco's hilly terrain and waterfront make it a high-risk environment.
Furthermore, the UK's Automated and Electric Vehicles Act 2018 channels liability for crashes caused by automated vehicles to insurers, who can in turn recover from manufacturers, a structure that incentivises caution. In contrast, US liability rules vary by state and often rest on tort doctrines that remain murky for AI-driven vehicles. Waymo's recall is voluntary; the National Highway Traffic Safety Administration (NHTSA) has so far relied on such voluntary actions for similar defects, and it lacks the punitive powers of UK regulators.
The data speaks volumes. The UK has logged more than 1.5 million miles of autonomous vehicle testing without a single reported serious incident, while NHTSA data records at least 35 crashes involving self-driving cars in the US since 2022. The disparity stems from a fundamental difference in philosophy: the UK treats autonomous driving as a public utility requiring centralised oversight, while the US treats it as an innovation frontier where companies largely self-regulate.
This is not an argument against progress. Self-driving technology could help cut the roughly 1.35 million traffic deaths recorded worldwide each year, but only if we learn from failures. Waymo's creek crash is a stark reminder that AI-driven systems are only as safe as the worst-case scenario they are tested against. The UK's slower, more methodical approach may frustrate Silicon Valley, but it yields a safer product. As the planet warms and infrastructure strains under extreme weather, we cannot afford to trade safety for speed. The creek incident is a warning: unless the US adopts UK-style regulations, more robotaxis will end up in waterways. And next time, passengers might be inside.
