Hobbling-Induced Innovation
November 02, 2025
- Rather famously, Tesla refuses to use LIDAR: Autopilot takes only 2D video as input. Autopilot is also the only production-ready, end-to-end self-driving model. Waymo currently relies on a modular architecture built around LIDAR, but is pivoting to end-to-end as well. Tesla seems to have made the correct long-term technical bet (end-to-end models for self-driving), but at the cost of a prima facie nonsensical constraint (strictly less sensory input).
- AlphaGo Zero was the first of its kind to be trained purely on self-play, with no reliance on human game data. It beat the Lee Sedol-conquering AlphaGo 100 games to 0, and the rest is history.
- At Softmax, we made the Cogs face in their chosen direction before taking a step. This made the agents harder to train and led to less consistent behavioral patterns. However, we made progress on our goal-conditioning agenda.
- Apple refused to support Flash on iOS, publicly committing to a solely HTML5-based stack in 2010 (Jobs' "Thoughts on Flash"). Adobe stopped developing Flash for mobile in 2011 and discontinued Flash entirely in 2020. Apple lost some market share in the short term but clearly won (Flash was not a good product).
- Rust's borrow checker forbids shared mutable aliasing: a value may have many immutable references or one mutable reference at a time, never both. As a result, the memory-safety errors endemic to C/C++ (use-after-free, data races) are ruled out at compile time, reaching a level of security those languages never could.
All five of these share the property of "removing functionality to hopefully raise the long-term ceiling of performance." It is unclear whether all of these modifications did raise the ceiling! Hindsight informs us that supervised learning on human data for two-player, zero-sum, perfect-information games is indeed a crutch. But it seems relatively straightforward to integrate LIDAR or radar data into an E2E self-driving training stack, and both grant visibility in environments (fog, darkness, glare) where video-only data is differentially disadvantaged.
Picking at the Tesla case more: it is true that LIDAR sensors ran ~$8,000 per unit in 2019. Integrating that would have killed any chance of making an affordable FSD consumer vehicle. Today, Luminar has brought the price down to ~$500 in the US, and the Chinese manufacturer Hesai sells sensors for $200 a pop. Prices will continue to fall, LIDAR will no longer be price-prohibitive, and Rivian plans to take advantage of the full sensor array when developing its FSD model. What gives?
Google X has the mindset that one must kill good things to make way for truly great ones. "Necessity is the mother of invention." Making a 10x breakthrough is often only 2x as hard as a 10% improvement. And for sure, constraining the problem to only its essential inputs can yield more scalable and successful solutions (SpaceX's radically simplified Raptor 3 is a case in point). But was it fundamentally necessary for Tesla to ban LIDAR?
Argument for: LIDAR was prohibitively expensive, and Tesla would have failed to get the distribution necessary for data collection had it used LIDAR. Counter: fair, but this doesn't address the lack of radar (very useful in low-visibility scenarios, cheap, and it would have improved safety).
Argument for: Elon-culture is a package deal, Elon-culture was the determinative factor in the development of Autopilot, and Elon-culture takes hardcore minimalism and runs with it. Counter: I can believe this (Casey Handmer argues as much), but once the 0-to-1 is achieved, it still seems obviously optimal to shift toward making the best possible product. Human eyes are not optimized for terrestrial vision; there's no point sticking to the human form factor!
Moving away from Tesla: I think we can construct a typology of reasons why one would intentionally hobble development (via restriction) for the sake of innovation. First, because the removed crutch bakes in a fundamental limitation (AlphaGo Zero is like this; Tesla's original argument arguably is too). Second, because restriction forces better design (as with Rust and Apple's refusal of Flash), and better design creates a healthier ecosystem (this seems mostly applicable to platform-based products). Third, because adopting the stance of doing a Hard thing is useful, and artificially increasing the Hardness of the task has better consequences (I think of Elon like this, within limits: push up to the boundaries set by physics and no farther).
It takes skill to see the directions in which a problem can be made harder productively. Facebook failed miserably at pivoting to HTML5 at the same time Apple did. Tesla's removal of radar ruffled feathers within its own engineering team. Survivorship bias rules all, and given product-market fit it's probably easier to make development too hard than too easy (following customer incentive-gradients sets a floor and provides a strong signal).
It's probably good to implement a kind of regularization in research-heavy, 0-to-1 product development: strip out all the assumptions, solve the core task, then add configuration on top of a good foundation. But I don't think it's necessary to keep hobbling oneself once the restriction is proven unnecessary. That is masochism, and your competitors will beat you.