Buried in Design
On structural harm and the illusion of safety
Every system tells you what it values most by how it fails.
The 737 Max didn’t just fail mechanically—it exposed what Boeing was willing to compromise to win.
It’s one of the clearest case studies of corporate structural failure in recent history. Not a mechanical failure but a systemic one: the kind that happens not because of one bad part, but because the system is so layered in performance pressure and market logic that it erodes the very architecture it depends on to make sound decisions.
Boeing had a problem. They were getting undercut by Airbus, whose A320neo promised significantly better fuel efficiency. Instead of designing a new plane from scratch, Boeing decided to modify the existing 737 to accommodate larger, more efficient engines. But the bigger engines had to be mounted farther forward and higher on the wing, which altered the plane’s aerodynamics and made the nose more likely to pitch up at high angles of attack.
To compensate, Boeing introduced MCAS—an automated system designed to push the nose back down in certain flight conditions. But they downplayed its significance, even within their own documentation. Pilots weren’t told how essential it was. In some cases, they weren’t told it existed at all.
Why? Because acknowledging it would mean acknowledging that the plane didn’t really fly like the old one—which would require more training, more certification, and more cost. And that would undermine the whole point of modifying the existing 737 instead of designing something new. So they kept it quiet.
The aircraft had two angle‑of‑attack sensors installed. But MCAS drew data from only one of them—a single‑point‑of‑failure decision that contradicted basic engineering principles. Redundancy is a foundational safeguard in aviation for a reason: it acknowledges that systems—and humans—can fail. Ignoring that redundancy didn’t simplify the plane; it stripped away its margin of safety.
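To make that concrete, here is a minimal sketch of what cross-checked redundancy buys a system. It is not Boeing’s actual MCAS logic; the function names, thresholds, and fallback behavior are illustrative assumptions. The structural point is that two sensors let a system detect disagreement and admit uncertainty, while one sensor turns a single bad reading into the system’s entire picture of reality.

```python
# Illustrative sketch of cross-checked redundancy (not actual MCAS code).
# All names and thresholds here are hypothetical.

DISAGREEMENT_LIMIT_DEG = 5.0  # max tolerated spread between the two sensors

def pitch_command(aoa_left: float, aoa_right: float, stall_threshold: float = 14.0) -> str:
    """Decide whether to trim nose-down, do nothing, or disengage and alert the crew."""
    # Cross-check: if the sensors disagree, the system cannot trust either reading.
    if abs(aoa_left - aoa_right) > DISAGREEMENT_LIMIT_DEG:
        return "DISENGAGE_AND_ALERT"   # admit uncertainty instead of acting on one sensor

    aoa = (aoa_left + aoa_right) / 2.0
    if aoa > stall_threshold:
        return "TRIM_NOSE_DOWN"
    return "NO_ACTION"

def pitch_command_single(aoa: float, stall_threshold: float = 14.0) -> str:
    # A single-sensor design has no way to ask the question at all:
    # one faulty vane reading high on a normal climb-out looks
    # identical to a real approach to stall.
    return "TRIM_NOSE_DOWN" if aoa > stall_threshold else "NO_ACTION"
```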
Two crashes later, 346 people dead, and we know exactly what happened.
Not just the technical failure. The cognitive one. The ethical one. The willingness to prioritize appearances over architecture. To rely on workarounds instead of redesign. To treat perception as an output problem, not a structural one. To strip down the system in the name of efficiency and assume it will still hold.
Now let’s swap that runway for a highway.
Tesla’s original self‑driving system included radar. But in 2021, they removed it entirely, choosing to rely only on cameras for their driver assistance features. It wasn’t just a controversial move—it was a fundamental structural gamble. One that abandoned the core principles of redundancy, epistemic reliability, and bounded error.
Radar isn’t perfect. But it’s real‑world data—direct measurements of distance and speed. Its flaws are bounded, predictable, and structurally understood. Like any scientific instrument, its limitations can be planned around.
Cameras, by contrast, don’t measure distance at all. They offer raw visuals that must be interpreted by neural networks trained on vast datasets. The result isn’t reference; it’s mimicry: pattern recognition based on what the system has seen before.
And when that system fails, it doesn’t know. There’s no internal mechanism for recognizing error. No epistemic integrity. Just statistical confidence in patterns that may or may not apply to the moment at hand.
When Tesla removed radar, they weren’t left with a simpler system. They were left with an opaque one—unable to identify its own blind spots. Incapable, by design, of detecting a failure of perception.
And they called it safer.
Elon Musk claimed that adding sensors like radar or LIDAR would reduce safety. He argued that a camera‑only system better emulates how humans drive, and that fewer inputs make for a cleaner design. He framed it as elegant engineering.
But that’s the same logic Boeing used to justify relying on a single sensor. The same mindset that turns safety into a liability the moment it requires acknowledgment.
And it’s not just about hardware. It’s about how safety itself gets defined.
Tesla is using machine learning not to enhance an already stable foundation, but to compensate for the absence of one. They’re building probabilistic error correction for harms that could have been prevented in the first place. Every incident of harm caused by radar‑less Full Self‑Driving (FSD) is a direct result of the choice to risk actual lives on an unproven, and fundamentally unprovable, system. It’s not innovation. It’s human experimentation without consent.
To be clear: this isn’t a rejection of machine learning. A self‑driving system with real structural integrity can absolutely use neural networks. But those systems still require hard‑coded logic—explicit checks with measurable data, structured reference points, bounded fallbacks that allow the system to know when it doesn’t know. There must be scaffolds for reasoning, not just prediction.
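As a rough illustration of what “knowing when it doesn’t know” looks like in practice, the sketch below cross-checks a learned distance estimate against a direct range measurement and falls back to a conservative action when the two disagree or the measurement is missing. This is not Tesla’s or Waymo’s implementation; every name, threshold, and fallback here is a hypothetical chosen to show the structure, not the product.

```python
# Illustrative sketch of a bounded fallback wrapped around a learned estimator.
# All names and thresholds are hypothetical, not any vendor's actual stack.
from typing import Optional

AGREEMENT_MARGIN_M = 3.0  # how far the two estimates may diverge before we stop trusting them

def planned_action(camera_distance_m: float,
                   radar_distance_m: Optional[float],
                   braking_distance_m: float) -> str:
    """Act on the learned estimate only when an independent measurement corroborates it."""
    if radar_distance_m is None:
        # No independent measurement: treat the learned estimate as unverified.
        return "SLOW_AND_INCREASE_FOLLOWING_DISTANCE"

    if abs(camera_distance_m - radar_distance_m) > AGREEMENT_MARGIN_M:
        # The estimator and the measurement disagree: the system knows it doesn't know.
        return "HAND_BACK_TO_DRIVER"

    # Corroborated estimate: act on the more conservative of the two readings.
    distance = min(camera_distance_m, radar_distance_m)
    return "BRAKE" if distance < braking_distance_m else "MAINTAIN"
```

The neural network still does the perceptual heavy lifting; the hard‑coded checks simply give it something external to be wrong against.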
Waymo’s architecture demonstrates this principle far better. It incorporates LIDAR, radar, and redundant logic layers. But even there—beneath the technical improvements—remains a central truth: we are using people to test systems that perform safety instead of understanding it.
Because when the metric becomes the goal, the structure stops mattering. And that’s how you build a system that gets smarter without ever getting wiser.
And that brings us to autism. Specifically, to ABA: Applied Behavior Analysis, the most widely used behavioral intervention for autistic people in the world.
ABA is often described as evidence-based, safe, and effective. It’s marketed as a way to help autistic children develop communication skills, reduce “problem behaviors,” and better integrate into school and society. It’s framed as a supportive program designed to help people succeed.
But like MCAS, Tesla Vision, and any system that prioritizes observable performance over underlying structure, ABA doesn’t fully account for what it’s acting on. It doesn’t understand why we do what we do. It doesn’t understand what our behaviors are for. It doesn’t recognize that stimming, shutdowns, meltdowns, gaze aversion, echolalia, routine adherence—these are not flaws in need of fixing. They are functional outputs of a deeply distinct neurological system.
ABA doesn’t account for that difference because it was never designed to. It was created in the 1960s by psychologist Ole Ivar Lovaas, who incidentally also pioneered gay conversion therapy. And the core idea behind both was the same: that visible deviation from the norm could—and should—be extinguished through behavioral conditioning. ABA was built to correct deviation. To suppress traits that don’t match neurotypical expectations. To align appearance with performance. And to do so using reward, punishment, repetition, and compliance as its foundational mechanisms.
Of course, the landscape of therapy has changed since then. Many ABA practitioners today are better informed about autism, use trauma-informed language, and reject the more dehumanizing aspects of past practices. Some work hard to preserve the autonomy of the children they support. There are autistic adults who say they benefited from ABA, or who practice it in ways they believe are affirming and responsive. Those experiences are real, and they matter. But they don’t erase the structure.
Because a compassionate practitioner working within a flawed system is still constrained by that system’s architecture. A system designed to extinguish difference cannot be made safe by good intent. And while some children may receive support that feels individualized and respectful, others are still being taught that their instincts are wrong, their behaviors inappropriate, their very way of processing the world a problem to be solved.
Even in its modern, softened forms, the goal of ABA is still overwhelmingly to produce visible behavioral compliance. And when compliance is the benchmark for progress, even well-meaning interventions can lead to long-term harm. It treats visible difference as dysfunction. Adaptive behaviors as errors. And in doing so, it teaches autistic children not how to function more effectively—but how to look more acceptable.
Which is exactly the problem. Because safety isn’t about how you look. It’s about what your system can withstand.
You can’t measure autistic wellbeing by how indistinguishable we become. You can’t measure developmental progress by how well we mask, suppress, or camouflage. You can’t remove our radar and call it growth.
Autism has been pathologized for decades through the lens of deficit-based psychology. Framed as disorder rather than divergence. Treated as a problem of presentation instead of a structural difference in how perception, regulation, and communication occur.
ABA inherits that framing. It reinforces it. And in many cases, it becomes the method by which autistic people are taught to override our own perceptions, suppress our signals, and disconnect from the very systems that allow us to regulate, relate, and recover.
It is not a neutral tool. It is not a cultural exception. It is the Boeing problem in therapeutic form.
And the cost isn’t abstract. It’s trauma. It’s developmental fracture. It’s a lifetime of trying to function inside a system that punishes you for needing the supports it removed.
We don’t need sleeker outputs. We need deeper architecture. Because at every level—engineering, cognition, culture—safety isn’t about how polished a system looks when it works. It’s about how well it holds together when it doesn’t.
Whether we’re talking about planes, cars, or people, we cannot keep pretending that clarity of performance is the same thing as clarity of function. We cannot keep mistaking normalcy for safety, or silence for stability.
True safety isn’t seamless. It’s explicit. It’s designed with the expectation that things will go wrong—and with the integrity to reveal why, so the system can learn and adapt. But if you erase the signals, remove the redundancies, and punish the outliers, you’re not preventing failure. You’re just hiding it.
That’s not safety. That’s concealment. And no amount of polish can protect the people who get buried underneath. ∞








