Autobrains Chief Warns Autonomous Cars Lack Common Sense

At the Financial Times Future of the Car conference, Igal Raichelgauz, chief executive of Autobrains, warned that current self-driving systems lack the "common sense" to handle unexpected road situations, according to reporting by DimSumDaily and The Times. The comments followed a San Antonio incident in which a Waymo vehicle entered floodwater and was recovered downstream, and a subsequent recall of almost 3,800 Waymo vehicles in the United States, as reported by The Times. A US Department of Transportation notice quoted by The Times said the software "may allow the vehicle to slow and then drive into standing water on higher-speed roadways." DimSumDaily also notes that Waymo is testing its service in London with Jaguar Land Rover vehicles.
What happened
According to reporting by DimSumDaily, Igal Raichelgauz, chief executive of Autobrains, told the Financial Times Future of the Car conference: "One of the biggest gaps in autonomous driving AI today is common sense." The Times carried the same quote.
The Times reports that the remarks followed a San Antonio incident in which a Waymo robotaxi entered floodwater and was swept downstream, and that Waymo subsequently recalled almost 3,800 vehicles in the United States. A US Department of Transportation notice quoted in The Times said the software "may allow the vehicle to slow and then drive into standing water on higher-speed roadways." DimSumDaily reports that Waymo is testing its autonomous ride-hailing service in London with Jaguar Land Rover vehicles, in trials that include a human safety driver.
Editorial analysis - technical context
Industry-pattern observations: Closed-loop, example-driven machine learning - the dominant approach in production autonomy stacks - performs well on frequently encountered scenarios but struggles with low-probability, out-of-distribution events. For practitioners, that explains why edge cases such as unexpected flooding, unusual road geometry, or atypical agent behaviour continue to surface as failure modes in fielded systems.
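To ground that observation, the sketch below (in Python) shows one cheap, widely discussed proxy for flagging out-of-distribution inputs at runtime: thresholding the predictive entropy of a perception classifier's output. The model outputs and the threshold here are invented for illustration, not drawn from any production autonomy stack.

import numpy as np

def predictive_entropy(class_probs: np.ndarray) -> float:
    """Shannon entropy of a classifier's output distribution.
    High entropy means the model is uncertain, which is one cheap
    (if imperfect) signal that an input is out of distribution."""
    p = np.clip(class_probs, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

# Hypothetical threshold; a real system would calibrate this offline
# against held-out nominal driving data.
OOD_ENTROPY_THRESHOLD = 1.0

def is_out_of_distribution(class_probs: np.ndarray) -> bool:
    """Flag scenes the perception model is likely unfamiliar with."""
    return predictive_entropy(class_probs) > OOD_ENTROPY_THRESHOLD

# A confident, in-distribution prediction vs. a diffuse, uncertain one.
print(is_out_of_distribution(np.array([0.97, 0.01, 0.01, 0.01])))  # False
print(is_out_of_distribution(np.array([0.28, 0.26, 0.24, 0.22])))  # True

Entropy thresholds are only a first-line monitor: a model can be confidently wrong on a novel scene, which is why fielded systems typically combine several uncertainty signals rather than relying on one.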
Industry-pattern observations: Regulatory filings and safety notices, like the US Department of Transportation text cited by The Times, tend to expose specific boundary conditions where perception or decision modules misclassify or misprioritise hazards. These public documents are often the most actionable signals for engineers auditing system robustness and deployment envelopes.
Context and significance
Editorial analysis: Public incidents that result in recalls or regulator notices concentrate attention on the operational limits of autonomy and accelerate scrutiny from cities and transport authorities. Reporting about the San Antonio incident and the DOT wording highlights two practitioner-facing issues: the difficulty of covering rare events through scale-only data collection, and the need for system-level constraints that govern behaviour in high-risk conditions (for example, explicit rules about water or uncertain terrain).
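As a concrete illustration of such a system-level constraint, here is a minimal sketch of an explicit rule layered above a learned planner. Every name and constant (SceneEstimate, safety_override, the thresholds) is hypothetical and invented for this example; it is not any vendor's implementation.

from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    SLOW_AND_REROUTE = auto()
    MINIMAL_RISK_STOP = auto()

@dataclass
class SceneEstimate:
    """Hypothetical perception summary for one planning cycle."""
    standing_water_prob: float   # learned detector's belief in [0, 1]
    road_speed_limit_mps: float  # from the map layer
    perception_confidence: float # overall scene confidence in [0, 1]

# Hand-set constants for illustration; a real deployment would derive
# these from validation data and regulatory requirements.
WATER_PROB_LIMIT = 0.3
HIGH_SPEED_MPS = 20.0   # roughly 45 mph
MIN_SCENE_CONFIDENCE = 0.7

def safety_override(scene: SceneEstimate) -> Action:
    """Explicit rule above the learned planner: do not rely on the
    pattern-recognition stack alone when water is plausible on a
    higher-speed road, or when the scene is poorly understood."""
    if (scene.standing_water_prob > WATER_PROB_LIMIT
            and scene.road_speed_limit_mps >= HIGH_SPEED_MPS):
        return Action.MINIMAL_RISK_STOP
    if scene.perception_confidence < MIN_SCENE_CONFIDENCE:
        return Action.SLOW_AND_REROUTE
    return Action.CONTINUE

The design point is that the rule is auditable: a regulator-cited condition like "standing water on higher-speed roadways" maps to a single, testable branch rather than an opaque learned behaviour.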
Editorial analysis: For research and engineering teams, the story reinforces the distinction between pattern recognition (what modern perception stacks do well) and commonsense-style reasoning (what stakeholders and commentators identify as missing). That distinction informs priorities for simulation fidelity, scenario generation, verification tests, and conservative fallback behaviours in stacks used for on-road trials.
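One way that distinction shows up in practice is in scenario-style verification tests that pin conservative fallback behaviour to rare, high-risk inputs. The sketch below reuses the hypothetical SceneEstimate, Action, and safety_override definitions from the previous example and is illustrative only.

import unittest

class TestSafetyOverride(unittest.TestCase):
    """Scenario-style checks on the hypothetical guard above:
    rare, high-risk conditions must map to conservative actions."""

    def test_water_on_fast_road_forces_stop(self):
        scene = SceneEstimate(standing_water_prob=0.6,
                              road_speed_limit_mps=25.0,
                              perception_confidence=0.9)
        self.assertIs(safety_override(scene), Action.MINIMAL_RISK_STOP)

    def test_low_confidence_degrades_gracefully(self):
        scene = SceneEstimate(standing_water_prob=0.05,
                              road_speed_limit_mps=15.0,
                              perception_confidence=0.4)
        self.assertIs(safety_override(scene), Action.SLOW_AND_REROUTE)

if __name__ == "__main__":
    unittest.main()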
What to watch
Editorial analysis: Observers should track regulator filings and recall notices for concrete failure-mode descriptions, since those often list the exact conditions that led to misbehaviour. Reported partnerships and trials, such as Waymo's work with Jaguar Land Rover in London, are also worth monitoring for how operational constraints (human safety drivers, geofencing, access restrictions) are applied during expansion.
Editorial analysis: From an engineering perspective, watch for follow-up technical disclosures, simulation datasets, or third-party audits that describe how teams remedied the specific perception or decision shortcomings described in DOT or company notices. These artefacts are the most reliable indicators of whether and how a system's handling of infrequent but high-risk scenarios improves.
Scoring rationale
The story highlights practical safety limits in deployed autonomy and a regulator-cited failure mode, which matters to engineers and researchers focused on robustness and deployment. It is notable but not paradigm-shifting.