Most trading systems are built for demo environments.
They assume clean data, orderly timestamps, and predictable timing. Under those conditions, everything works. Then the system touches real markets.
Bursts of data arrive where there used to be quiet. Functions that "always finish in time" don't. Providers hiccup silently. Corrections arrive after logic has already fired. The system doesn't crash.
It just stops being correct.
The most dangerous failures are the ones that don't announce themselves. Async drift between components that should be synchronised. Race conditions that occur intermittently. Events that arrive out of order and break assumptions that were never made explicit.
Logs look normal. Metrics stay green. Behaviour quietly diverges.
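One defence is to make the ordering assumption explicit and count every violation. A minimal sketch, assuming the provider stamps each event with a monotonically increasing sequence number (the `Event` and `OrderedFeed` names are illustrative, not from any particular feed API):

```python
from dataclasses import dataclass

@dataclass
class Event:
    seq: int        # provider-assigned sequence number (assumed monotonic)
    payload: dict

class OrderedFeed:
    """Makes the ordering assumption explicit instead of implicit."""

    def __init__(self):
        self.last_seq = None
        self.gaps = 0        # missing sequence numbers seen so far
        self.reordered = 0   # late or duplicate events seen so far

    def accept(self, event: Event) -> bool:
        """Return True only if the event respects the expected ordering."""
        if self.last_seq is None:
            self.last_seq = event.seq
            return True
        if event.seq <= self.last_seq:
            # Late or duplicate event: surface it, don't silently process it.
            self.reordered += 1
            return False
        if event.seq > self.last_seq + 1:
            # Missing sequence numbers: the feed has a gap.
            self.gaps += 1
        self.last_seq = event.seq
        return True
```

Anything the check rejects is counted and surfaced instead of silently processed, so the divergence shows up in metrics before it shows up in behaviour.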
Teams respond by optimising.
They tune parameters. They chase latency. Benchmarks improve. The benchmarks are lies.
Real conditions include garbage collection pauses, memory pressure, network jitter, and scheduler unpredictability. Benchmarks exclude all of these by design. Worse, many "optimisations" trade determinism for speed. A non-blocking read shaves milliseconds but introduces ordering ambiguity.
Fast and wrong is worse than slow and right.
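To make the trade-off concrete, here is a minimal sketch of draining two market-data sources, assuming one queue per source and a `ts` timestamp on each event (both are illustrative choices, not a prescribed design). The non-blocking variant handles whatever happens to be ready, so processing order depends on scheduler timing; the blocking variant pays a bounded wait, processes by timestamp, and turns a stalled source into a visible error.

```python
import queue

def drain_nonblocking(sources, handle):
    """Fast but ambiguous: handle whatever happens to be ready right now.

    An event stuck behind a momentarily slow source gets handled after
    newer events from faster sources, so processing order depends on
    scheduler timing, not on the data.
    """
    for src in sources:
        try:
            handle(src.get_nowait())
        except queue.Empty:
            pass  # a momentarily quiet source is silently skipped

def drain_deterministic(sources, handle, timeout_s=0.05):
    """Slower but ordered: wait (bounded) for every source, then sort.

    The bounded get() makes a stalled provider an explicit, visible
    failure instead of a silent gap in the merge.
    """
    batch = []
    for src in sources:
        batch.append(src.get(timeout=timeout_s))  # raises queue.Empty on a stall
    for event in sorted(batch, key=lambda e: e["ts"]):
        handle(event)
```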
Then there is intelligence.
Adaptive systems can learn the wrong lesson. But the real danger is not learning — it's overriding. When intelligence is allowed to bypass a safety check because it "knows better," authority transfers from logic to inference.
Rules are auditable. Models are not. Rules fail predictably. Models fail in ways that cannot be anticipated.
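One way to keep authority with the rules, sketched under assumptions (the `Signal` shape and the limit values are illustrative): the model proposes, a plain rule layer disposes, and there is no parameter through which confidence can waive a check.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    symbol: str
    qty: int
    confidence: float   # the model's own estimate, deliberately ignored by the checks

MAX_ORDER_QTY = 1_000          # illustrative hard limits
MAX_GROSS_EXPOSURE = 100_000

def approve(signal: Signal, current_exposure: float, price: float) -> bool:
    """Plain, auditable rules. There is no override parameter:
    a high-confidence signal fails these checks exactly like any other."""
    if signal.qty <= 0 or signal.qty > MAX_ORDER_QTY:
        return False
    if current_exposure + signal.qty * price > MAX_GROSS_EXPOSURE:
        return False
    return True

def submit(signal: Signal, current_exposure: float, price: float, send_order) -> bool:
    # The only route to the gateway runs through approve(); the model
    # never talks to send_order directly.
    if not approve(signal, current_exposure, price):
        return False
    send_order(signal.symbol, signal.qty)
    return True
```

The point is structural: the model's output never reaches the gateway except through `approve()`, and `approve()` does not know how confident the model was.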
Most systems are built for the happy path. They assume success and treat failure as an exception.
This is backwards.
Systems should assume failure and prove otherwise. Every execution path should answer one question:
What happens when this doesn't work?
If the answer is "it probably will," the design is incomplete.
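In code, that question becomes an explicit outcome on every path. A minimal sketch, assuming a hypothetical `provider` client and injected `place_order` and `halt` hooks: every branch either succeeds with data or fails closed, and "it probably will" is not a representable state.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Outcome:
    ok: bool
    value: Optional[dict] = None
    reason: str = ""

def fetch_position(provider, symbol: str, timeout_s: float = 0.2) -> Outcome:
    """Assume failure; prove otherwise. Every exit answers the question."""
    try:
        snapshot = provider.position(symbol, timeout=timeout_s)  # hypothetical client call
    except TimeoutError:
        return Outcome(ok=False, reason="provider timed out")     # slow -> fail closed
    except ConnectionError:
        return Outcome(ok=False, reason="provider unreachable")   # down -> fail closed
    if snapshot is None or "qty" not in snapshot:
        return Outcome(ok=False, reason="malformed snapshot")     # wrong -> fail closed
    return Outcome(ok=True, value=snapshot)

def act(provider, symbol: str, place_order, halt) -> None:
    outcome = fetch_position(provider, symbol)
    if not outcome.ok:
        # The failure path is the default path: stop, and say why.
        halt(outcome.reason)
        return
    place_order(symbol, outcome.value["qty"])
```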