Every so often, technology reaches a point where the ground shifts faster than people can process it.
We’re in one of those moments now.
AI doesn’t just change how we work — it changes what we see, what we believe, and how decisions are made.
Interfaces are no longer static. Screens redraw themselves based on model output, hidden state, or probabilistic behavior. The result is simple: people are making real decisions through systems that provide no guarantees and no traceability.
In moments like this, the real question isn’t whether AI is “good” or “bad.”
It’s who builds it — and what values shape the defaults.
I’ve learned that people rally behind technologies that keep users safe, respected, and in control.
And they follow leaders who treat those principles as non-negotiable.
Ruth Bader Ginsburg framed it perfectly:
“Fight for the things you care about — but do it in a way that will lead others to join you.”
⸻
The Invisible Risk in AI-Generated User Experiences
We’re entering a phase of AI where interfaces, decisions, and outcomes are generated on the fly — often without guarantees, without traceability, and without regard for the humans on the receiving end.
This isn’t a doom prediction. It’s an engineering reality.
AI systems today can:
- Personalize what each user sees in real time.
- Generate dashboards, alerts, and controls dynamically.
- Rewrite the layout or content of a screen based on context, models, or hidden state.
- Summarize, filter, or even invent information that people then use to make decisions.
But most of these interfaces are probabilistic and opaque. They redraw. They shift. They hallucinate. They give no guarantees about what was shown, when, why, or whether it was accurate.
In critical environments — hospitals, trading floors, transportation systems, industrial control rooms — that’s not just a UX issue. It’s a safety issue.
This is why I built my own graphics engine and a method for rendering AI output: most modern dashboards cannot prove what was displayed at the moment a human made a decision. They:
- Can’t synchronize humans, AI, and devices on a single verifiable timeline or reality.
- Can’t reconstruct the exact state of an AI output at a given point in time.
- Can’t generate adaptive UI that’s also deterministic and auditable.
I believe this is the core ethical challenge of the AI era.
⸻
What I'm Building at SideSpin
The work I’ve been building at SideSpin is more than a graphics engine or UI framework. It’s a statement:
- If people rely on an AI interface, it must be deterministic, not purely probabilistic.
- If humans make decisions through it, it must be provable.
- If it mediates safety‑critical or high‑stakes work, it must be trustworthy by design.
The Animation and Transitions Engine embodies this philosophy:
Declarative manifests
Interfaces are described as data, not buried in imperative code paths or opaque model behavior. There is a clear, versioned definition of what should be on screen.

Deterministic rendering
Given a specific manifest and a specific stream of events, the system always produces the same visual output. No hidden randomness in what the user sees.

Unified timeline
Every change — user action, device event, AI decision, UI update — is recorded on a single, ordered timeline. You can replay it, inspect it, and prove what happened.

Auditable to the frame
At any moment in time, you can answer: What exactly was this person seeing?
Not “approximately,” not “what we think the system did” — but the precise state, down to the frame.
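The manifest-plus-timeline idea above can be sketched as a pure reducer over an ordered event log. This is a minimal illustration, not SideSpin's actual API: the `Manifest`, `TimelineEvent`, and `stateAt` names, and the event shapes, are all hypothetical.

```typescript
// Hypothetical sketch of deterministic replay: a versioned manifest,
// an ordered event timeline, and a pure reducer. None of these names
// come from the SideSpin engine itself.
type Manifest = { version: string; widgets: string[] };

type TimelineEvent =
  | { seq: number; kind: "user"; action: string }
  | { seq: number; kind: "ai"; summary: string };

type ScreenState = { visible: string[]; banner: string | null };

// Pure reducer: same manifest + same ordered events => same state.
function reduce(state: ScreenState, ev: TimelineEvent): ScreenState {
  switch (ev.kind) {
    case "user":
      return { ...state, visible: [...state.visible, ev.action] };
    case "ai":
      return { ...state, banner: ev.summary };
  }
}

// Replay the timeline up to a sequence number to reconstruct exactly
// what was on screen at that moment. No hidden state, no randomness.
function stateAt(
  manifest: Manifest,
  timeline: TimelineEvent[],
  seq: number
): ScreenState {
  const initial: ScreenState = { visible: manifest.widgets, banner: null };
  return timeline
    .filter((e) => e.seq <= seq)
    .sort((a, b) => a.seq - b.seq)
    .reduce(reduce, initial);
}
```

Because the reducer is pure and the timeline is totally ordered, replaying to any sequence number reconstructs the exact screen state at that point, which is what makes frame-level audit possible.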
In other words:
If an interface matters, it should be explainable down to the frame: what was shown, and why. I should be able to cryptographically hash that event and retrieve it later as validation of trust.
This is the philosophy I want others to adopt, and my hope is that it will keep AI safe to use.
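The hashing idea can be sketched with Node's built-in `crypto` module. The `FrameRecord` shape and its field names are assumptions for illustration only:

```typescript
import { createHash } from "crypto";

// Hypothetical audit record: what was shown, when, and under which
// manifest version. Not a real SideSpin data structure.
interface FrameRecord {
  seq: number;
  timestamp: string; // ISO 8601
  manifestVersion: string;
  renderedState: unknown; // the exact screen state at this frame
}

function hashFrame(record: FrameRecord): string {
  // NOTE: a production system would need canonical serialization
  // (stable key ordering); plain JSON.stringify is enough to
  // illustrate the idea.
  const payload = JSON.stringify(record);
  return createHash("sha256").update(payload).digest("hex");
}
```

Storing the hash alongside the timeline lets an auditor later recompute it from the replayed frame and confirm that the record of what the user saw has not been altered.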
⸻
Why This Matters to Me
I’m a founder, but I’m also still the kid staring at a motherboard, wondering what technology should do instead of what it can do.
I believe:
- Technology should expand human capability without undermining human dignity.
- Data should be handled with transparency, fairness, and respect for privacy.
- Systems that influence critical decisions must be held to a higher standard than “it usually works.”
I talk openly about deterministic UX because it gives builders a clear mental model they can act on today.
Regulation and culture won't catch up in time this round. You can choose to make your interfaces provable, auditable, and safe now, simply by baking those properties into your code.
Everything I build — LLM-generated data, my runtime application services, how I render AI output — will embody that choice, and I hope you make the same one.
I’m not fighting against AI.
I’m fighting for the safety, clarity, and stability that AI‑powered experiences will require for the next century.
Because if we’re going to let machines help mediate our choices, our work, and our lives, then we owe it to ourselves to build systems worthy of the trust we’re placing in them.
Thank you for reading.