The need for high-integrity AI Governance
A cautionary tale of how far things can go wrong when AI governance and safety systems break down.
Cruising for a Fall
In the crisp evening air of San Francisco on October 2, 2023, a woman's life changed forever at the intersection of Market Street and 5th Street. Cruise's autonomous vehicle "Panini" sat waiting at a red light, its sophisticated sensors scanning the scene, while next to it, a Nissan Sentra driver waited impatiently for the light to turn green. In the next few moments, a cascade of failures would seriously injure a pedestrian and ultimately bring down one of America's most promising autonomous vehicle companies.
I find myself returning to this story regularly in my own work. It is more than just another story of corporate crisis – it's an instructive anti-pattern of what not to do when governing intelligent systems that can cause real-world harm. The fall of Cruise stands as a stark, cautionary tale about the critical importance of high-integrity governance and, I believe, demonstrates why we need genuine, robust governance and management systems within every AI company.
The Technical Cascade
The incident began not with the autonomous vehicle, but with human error. The Nissan's driver struck a pedestrian who had entered the crosswalk against the signal. That collision launched the pedestrian into the path of Panini. What happened next exposed critical flaws in both Cruise's technical systems and its organisational culture.
Panini's automated driving system (ADS) made a series of critical misjudgments [1]. First, while it detected both the pedestrian and the Nissan, and even anticipated the possibility of a collision between them, it could not adequately foresee the likely follow-on consequences. The system had been designed to avoid direct collisions but not to predict and respond to secondary impacts – a critical oversight in urban environments where chain-reaction accidents are common. When the pedestrian was thrown into its path, Panini's response was inadequate. Its systems lost track of the pedestrian in the crucial second before impact, with classification and tracking becoming unreliable. The vehicle slowed only slightly – far short of the emergency stop the situation demanded.
But the most serious technical failure came moments after the initial impact. Due to limited sensor visibility of the pedestrian at the moment of collision, Panini's systems misclassified the incident as a side impact rather than a frontal collision. This seemingly small technical error had devastating consequences. The vehicle did not register that the pedestrian was trapped partially underneath it, and so instead of coming to an immediate stop, it initiated a "pullover manoeuvre", dragging the trapped pedestrian for 20 feet while searching for what its programming considered a safe stopping place.
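To make the failure mode concrete, here is a minimal, purely illustrative sketch in Python. It is not Cruise's software (whose internals are not public); the types, signals and policy names are assumptions invented for illustration. It shows how a post-collision policy that branches only on impact type can select a pullover manoeuvre in exactly the situation where an entrapment check would have forced an immediate stop.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ImpactType(Enum):
    FRONTAL = auto()
    SIDE = auto()
    REAR = auto()


class PostCollisionAction(Enum):
    EMERGENCY_STOP = auto()  # stop in place and call for assistance
    PULLOVER = auto()        # creep to the nearest "safe" stopping spot


@dataclass
class CollisionReport:
    impact_type: ImpactType
    # Hypothetical signal: tracking of a vulnerable road user (VRU) was lost
    # close to the vehicle in the moments around impact.
    vru_track_lost_near_vehicle: bool


def choose_action_naive(report: CollisionReport) -> PostCollisionAction:
    """Naive policy: only a frontal impact forces a stop in place.

    If a frontal strike is misread as a side impact (e.g. because sensor
    visibility of the person was poor), the vehicle pulls over and may drag
    anything caught underneath it.
    """
    if report.impact_type is ImpactType.FRONTAL:
        return PostCollisionAction.EMERGENCY_STOP
    return PostCollisionAction.PULLOVER


def choose_action_safer(report: CollisionReport) -> PostCollisionAction:
    """Safer policy: treat any unresolved person near the vehicle as a
    possible entrapment and stop in place, regardless of impact type."""
    if report.vru_track_lost_near_vehicle:
        return PostCollisionAction.EMERGENCY_STOP
    if report.impact_type is ImpactType.FRONTAL:
        return PostCollisionAction.EMERGENCY_STOP
    return PostCollisionAction.PULLOVER


# A frontal strike misread as a side impact, with the pedestrian's track lost:
report = CollisionReport(ImpactType.SIDE, vru_track_lost_near_vehicle=True)
print(choose_action_naive(report))   # PULLOVER  <- the failure mode
print(choose_action_safer(report))   # EMERGENCY_STOP
```

The point is not this specific check, but that post-incident behaviour needs to be designed around the worst credible interpretation of ambiguous sensor data, not the most convenient one.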
The human toll of these technical failures was severe. Emergency responders had to use heavy rescue tools to free the pedestrian, whose injuries were serious, though thankfully not fatal. The driver of the Nissan Sentra fled the scene and has never been identified.
Organisational Breakdown
The incident marked the first pedestrian injury in Cruise's history, breaking an operational record spanning over five million miles of autonomous driving. And technically, the root cause was human error by the Nissan driver. But what transformed this serious but potentially survivable incident into an existential crisis for Cruise was the company's subsequent response.
In the crucial early hours after the accident, Cruise's leadership became myopically focused on correcting media narratives about who caused the initial collision, rather than transparently disclosing all aspects of the incident. The company's CEO Kyle Vogt and Communications VP Aaron McLear heavily edited press statements to emphasise that the Nissan had caused the initial collision, while omitting any mention of their vehicle's subsequent dragging of the pedestrian.
The company's interactions with regulators revealed deep-seated cultural problems. During a key meeting with the California Department of Motor Vehicles (DMV) on October 3, Cruise attempted to share video of the incident but relied on an engineer's home computer with poor internet connectivity. When technical glitches prevented clear viewing of the full video, Cruise employees failed to verbally explain the critical fact that their vehicle had dragged the pedestrian.
Media coverage in the following days revealed the full extent of what had really occurred. Through witness accounts and access to video footage, journalists uncovered the critical detail of how the pedestrian had been dragged under the car. This revelation dramatically shifted public perception of the incident and raised serious questions about Cruise's commitment to transparency.
The consequences were swift. By October 24, the California DMV suspended Cruise's permit to operate driverless vehicles in the state [2]. The suspension order was damning, stating that Cruise had failed in its "obligations of accountability and transparency to the government and the public." The National Highway Traffic Safety Administration (NHTSA) opened an investigation, and Cruise was forced to recall all 950 of its vehicles nationwide [3].
The Fall
The financial and organisational impact was devastating. By mid-December, the company had laid off 24% of its workforce [4]. The senior leadership team that had overseen the response to the incident, including key figures in legal, regulatory, and communications roles, departed the company. Most significantly, Kyle Vogt, who had led Cruise since 2013 and been central to its vision of autonomous vehicle technology, resigned as CEO.
Before the incident, Cruise was valued at $30 billion and on track to generate $1 billion in revenue by 2025. The company had been expanding operations to multiple cities and was seen as a leader in autonomous vehicle technology. In the aftermath of the October 2 incident and its mishandling, these ambitions crumbled. What began as a technical failure cascaded into an organisational crisis that fundamentally changed the trajectory of not just Cruise, but potentially the entire autonomous vehicle industry.
In August 2024, GM Cruise agreed to pay a $1.5M fine to the NHTSA and accept increased reporting requirements for two years [5]. A settlement reported to be worth more than $8M was reached with the injured pedestrian. In November 2024, the US Department of Justice disclosed that Cruise had admitted to submitting a false report about the accident in order to influence a federal investigation, and the company agreed to pay a $500,000 fine [6]. Less than a month later, General Motors announced it would cease funding for Cruise, having previously invested some $10bn in the venture. No Cruise vehicles are in operation today, and it appears none ever will be again.
The Lessons
The fall of Cruise offers crucial lessons for our time of increasingly autonomous, embodied intelligent systems. It demonstrates how, when AI systems sense, decide, and act in the real world, technical excellence alone is not enough. Cruise's automated driving system, while sophisticated enough to safely navigate millions of miles, was undone by edge cases and unexpected scenarios. More critically, the company's organisational culture – marked by poor leadership, mistakes in judgment, lack of coordination, and an adversarial stance toward regulators – proved to be its ultimate undoing.
I encourage you to read the full report from Quinn Emanuel (see the references below) and the various findings and orders from the regulators involved. It is rare to have the opportunity to read in such exhaustive detail how an accident happened and then spiralled out of control inside an organisation. It seems clear that within Cruise there were many dedicated engineering, safety and assurance professionals committed to safety, and even the leaders of the organisation clearly understood its critical importance. Yet ultimately, despite exceptional technical talent and diligent work, their safety management system failed.
The failure shows that companies developing safety-critical AI systems need to do more than build robust technical systems; they also need to foster organisational cultures that prioritise transparency, accountability, and public safety above all else. They need high-integrity, comprehensive mechanisms that ensure proper oversight, with clear accountability and rapid, honest responses when things go wrong.
The Need for High-Integrity AI Governance
The Cruise incident crystallises why AI governance isn't just another layer of bureaucracy, and why checkbox compliance is never acceptable: the management mechanisms of AI governance are the essential infrastructure that prevents technical excellence from derailing into organisational failure. In the crucial hours after the accident, Cruise's response revealed the absence of robust management systems, despite the company's remarkable technological sophistication. Their senior leadership spent precious hours crafting media statements while critical safety information remained uncommunicated. Their regulatory team showed regulators only partial video, leaving them without a full understanding of the incident's implications. Their incident response process fragmented across different teams, without clear mechanisms to ensure that all material facts reached the right decision-makers.
These weren't just isolated mistakes – they were systemic failures that well-designed AI governance, implemented as an AI management system, would have prevented. Such a system would have automatic triggers ensuring senior leadership received complete briefings about safety-critical incidents. It would mandate clear protocols for regulatory disclosure, preventing the "let the video speak for itself" approach that proved so inadequate. It would establish clear chains of responsibility and communication channels that couldn't be short-circuited by organisational silos.
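As a sketch of what one such trigger might look like in practice, here is a small, hypothetical example in Python. The incident fields, thresholds and required artefacts are assumptions for illustration, not a prescription from any standard or from Cruise's own processes; the point is that escalation and disclosure decisions are encoded as explicit, auditable rules rather than improvised under pressure.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Incident:
    description: str
    injury_occurred: bool
    vehicle_behaviour_contributed: bool  # did our system's actions worsen the outcome?
    regulator_inquiry_open: bool


@dataclass
class EscalationPlan:
    brief_executives_within_hours: int
    full_disclosure_to_regulators: bool
    required_artifacts: List[str] = field(default_factory=list)


def escalation_plan(incident: Incident) -> EscalationPlan:
    """Toy escalation rule: any injury, or any incident where the system's own
    behaviour contributed to harm, triggers a complete executive briefing and
    full (not partial) disclosure to regulators, with the supporting evidence
    listed explicitly rather than left to 'the video speaks for itself'."""
    if incident.injury_occurred or incident.vehicle_behaviour_contributed:
        return EscalationPlan(
            brief_executives_within_hours=2,
            full_disclosure_to_regulators=True,
            required_artifacts=[
                "complete sensor and video record of the entire incident",
                "written timeline covering all post-impact vehicle behaviour",
                "a single named owner for all regulator communications",
            ],
        )
    return EscalationPlan(
        brief_executives_within_hours=24,
        full_disclosure_to_regulators=incident.regulator_inquiry_open,
    )


# Example: an incident like the one described above would escalate fully.
plan = escalation_plan(Incident(
    description="Pedestrian dragged after a secondary impact",
    injury_occurred=True,
    vehicle_behaviour_contributed=True,
    regulator_inquiry_open=False,
))
print(plan.full_disclosure_to_regulators)  # True
```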
What makes the Cruise case particularly instructive is how it demonstrates the interplay between technical and organisational systems. Their autonomous vehicle's technical response to the accident – misclassifying it as a side impact and initiating a pullover manoeuvre while dragging a pedestrian – was concerning and likely avoidable. But it was the organisational failure to quickly recognise, communicate, and address this technical failure that proved catastrophic. A robust AI Management System (AIMS), as the embodiment of AI governance, creates the organisational capacity to detect, understand, and respond to technical failures before they cascade into crisis.
The Quinn Emanuel report reveals how Cruise had many of the individual pieces needed for safe AI deployment – dedicated safety professionals, technical expertise, and formal compliance processes. What they lacked was the cohesive management system that would bind these elements together into a resilient whole. Their technical teams couldn't effectively communicate with their regulatory teams. Their leadership lacked systematic processes for handling safety-critical information. Their incident response fragmented across organisational boundaries precisely when it needed to be most coordinated.
This is why AI Management Systems must be more than just documentation or processes – they must be living systems that actively shape how organisations deploy and manage AI technologies. They need to create "organisational slack" – the excess capacity and clear protocols that allow organisations to handle unexpected situations effectively. Without this infrastructure, even the most sophisticated AI systems become vulnerable to technical, human and organisational failures.
For anyone building or deploying AI systems, Cruise's story demonstrates that technical excellence and good intentions aren't enough. You need management systems that can bridge the gap between technical capability and organisational responsibility, between individual expertise and collective action. These systems aren't about constraining innovation – they're about ensuring that innovation doesn't outpace our ability to deploy it safely and responsibly.
The incident has become a watershed moment for the autonomous vehicle industry and, arguably, for all companies deploying AI systems in safety-critical applications and other systems that make consequential decisions. For those of us working in AI governance, Cruise's story serves as a powerful reminder that our job isn't just to oversee the technology, and it most certainly is not to simply perform checkbox compliance – it's to build organisations capable of sustained, responsible, and trustworthy AI. The technical challenges of AI are considerable, but as Cruise demonstrates, it's often the human and organisational elements that determine success or failure in the end.
If you're ever tempted to build the amazing innovation first and defer safety, security, privacy or resilience until later, think again. Good intentions were not enough for Cruise, and they won't be enough for your company. I think about the events at Cruise almost every week. I'd love to know what stories motivate you in your work.
In my next article, I’ll go through more specifically what I mean by a high-integrity AI Management System, and contrast that to what some call ‘checkbox compliance’ and even worse forms of compliance behaviour.
[1] https://assets.ctfassets.net/95kuvdv8zn1v/1mb55pLYkkXVn0nXxEXz7w/9fb0e4938a89dc5cc09bf39e86ce5b9c/2024.01.24_Quinn_Emanuel_Report_re_Cruise.pdf
[2] https://www.dmv.ca.gov/portal/news-and-media/dmv-statement-on-cruise-llc-suspension/
[3] https://www.nhtsa.gov/press-releases/consent-order-cruise-crash-reporting
[4] https://edition.cnn.com/2023/12/14/tech/gm-cruise-layoffs/index.html
[5] https://www.reuters.com/business/autos-transportation/gm-self-driving-unit-cruise-pay-15-million-fine-over-crash-disclosure-2024-09-30/
[6] https://www.reuters.com/business/autos-transportation/gm-self-driving-unit-cruise-admits-submitting-false-report-will-pay-500000-fine-2024-11-15/