
SideSpin Is My First Try at a Startup

Innovation rarely runs straight. It runs through fog: partial maps, conflicting landmarks, long stretches where progress and self-justification look the same.

The instinct is to kill the fog with plans and confidence. That’s backwards. The fog is the environment. It’s where learning happens. Being a little lost—if you’re paying attention—is part of the job.

Most of my career has lived inside complex systems. They don’t fail because people aim too high. They fail because assumptions pile up faster than understanding. Complexity creeps. Intent blurs. Eventually the system does something no one meant.

The last eighteen months have been a deliberate step back into that fog. Not a pause. Not a reinvention. Just getting closer to the work—building end to end, feeling tradeoffs directly: speed vs. safety, abstraction vs. control, novelty vs. trust.



Ambiguity, On Purpose

SideSpin started as an iOS app to spotlight creators displaced by AI. The idea changed; the lesson stuck. The hardest problems weren’t features—they were systems: platform limits, inconsistent models, timing failures, hidden “happy-path” assumptions.

That’s the catch with AI in the loop: you don’t know what you don’t know, and the hard part isn’t the feature—it’s the scaffolding. Every door you open reveals several more, and navigating that only comes with knowledge, exploration, and curiosity. Turning LLM output into dependable behavior is a quiet nightmare most teams label a brittle system and abandon. Too few people are mixing AI with the messy stack it actually lives in—cloud, hardware, runtimes, language syntax, language servers—and it shows. This is an under-served craft, and I was happy to take time away and learn it.

So the work shifted from shipping screens to engineering guarantees—wrapping nondeterminism in discipline:

  • Deterministic envelopes around probabilistic models (seed control, temperature hygiene, post-hoc normalization)
  • Stateless, idempotent pipelines with content-addressable artifacts and repeatable builds
  • Gated progression and quorum checks (automated + human-in-the-loop when stakes demand it)
  • Cross-layer observability and forensic logs you can trust under pressure
  • Portable execution paths across cloud and hardware so behavior survives environment drift
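As a concrete illustration of the first two bullets, here is a minimal Python sketch (all names hypothetical, not the actual SideSpin code): a deterministic envelope that pins sampling parameters, normalizes model output post hoc, and derives a content address so identical artifacts dedupe and replays verify.

```python
import hashlib

def normalize(output: str) -> str:
    """Post-hoc normalization: collapse whitespace jitter and casing so
    equivalent model outputs map to one canonical form."""
    return " ".join(output.split()).lower()

def envelope(prompt: str, model_output: str, seed: int = 0) -> dict:
    """Wrap a probabilistic model call in a deterministic envelope:
    pinned sampling parameters plus a content address for the artifact."""
    canonical = normalize(model_output)
    # Content-addressable artifact: the id derives only from the canonical
    # content, so byte-identical outputs share one address.
    artifact_id = hashlib.sha256(canonical.encode()).hexdigest()
    return {
        "prompt": prompt,
        "seed": seed,        # pinned seed for reproducible sampling
        "temperature": 0.0,  # temperature hygiene: no stochastic drift
        "content": canonical,
        "artifact_id": artifact_id,
    }

a = envelope("greet", "Hello,   World!")
b = envelope("greet", "hello, world!")
assert a["artifact_id"] == b["artifact_id"]  # same canonical content, same address
```

The point of the sketch is the shape, not the specifics: the envelope is what makes a probabilistic call replayable and comparable downstream.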

This isn’t about the thing I made; it’s about making things that behave when everything around them is probabilistic.

Searching for knowledge, then opportunity

Thriving in ambiguity doesn’t mean drifting. It means holding direction lightly, testing fast, and letting results redraw the map. Each iteration reduced uncertainty in the right places.

This work exists because I stayed in that uncomfortable middle long enough to see the real problem—not the one I pitched on day one, but the one revealed by friction and constraint.

Being lost isn’t always a bad thing. Sometimes it’s the most honest way forward.




1. Starting point: LLM schema → iOS pamphlet renderer for AI-displaced voices

AI's disruptive impact inspired me: I felt empowered to make an impact of my own, maybe even a social one, and decided I should try. I envisioned a "tar pit" iOS app: a dynamic pamphlet aggregating listings and search for people who wanted to connect, aimed at displaced service workers—actors, illustrators, creators. It felt urgent, a digital sanctuary amid automation's rise. The core: an LLM generates a schema from user intent, which the client renders as interactive visuals on iOS.

Loop: Intent → LLM schema → client-side dynamic component.

Early insight: Schemas enable rapid generation but demand safe, platform-compliant execution. This seeded the separation of description from runtime control, as unchecked outputs risked instability.
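One way that separation can look in practice—a hypothetical Python sketch, not the shipped validator—is an allowlist gate between the LLM's schema description and the renderer, so unchecked outputs never reach platform code.

```python
# Hypothetical platform-compliant vocabulary the renderer accepts.
ALLOWED_COMPONENTS = {"text", "image", "list", "button"}

def validate_schema(schema: dict) -> list:
    """Reject anything outside the allowed vocabulary before it ever
    reaches the runtime; the schema describes, the runtime controls."""
    errors = []
    for i, node in enumerate(schema.get("components", [])):
        kind = node.get("type")
        if kind not in ALLOWED_COMPONENTS:
            errors.append(f"component {i}: unknown type {kind!r}")
    return errors

safe = {"components": [{"type": "text", "value": "hi"}]}
unsafe = {"components": [{"type": "script", "value": "alert(1)"}]}
assert validate_schema(safe) == []
assert validate_schema(unsafe) != []
```

A rejected schema fails loudly at the boundary instead of destabilizing the client mid-render.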


2. Mobile native over OSS, out of love—but iOS constraints forced separation and backend reliance

iOS's rigid ecosystem hit hard: no state mutations in views without native Swift hurdles; compiling non-native logic triggered build failures via Metal or legacy modules—technical walls or Xcode enforcements. The pamphlet's dynamic updates edged toward "app-making-app" territory, violating App Store policies.

Design pivot: Client focuses on presentation and input; backend owns mutations, data prep, and LLM integrations. This HPC-like transcoder enriched schemas, sharing state with iOS for a View-ViewModel-Model architecture. Why? To bypass client-side risks, ensure compliance, and enable seamless mutations without device-side brittleness.


3. HPC for everyone and my first working transcoder: controlled, declarative transformation runtime

Backend evolved into a fast, stateless interpreter: ingests minimal declarative schema, expands parameters into full UI/behavior—colors, views, sequences, transitions, animations, icons, interactions—all via multi-registry primitives. No templates; pure composition for flexibility.

Real-time, platform-independent outputs addressed iOS bottlenecks. Registries versioned capabilities, ensuring auditability and safety. This crystallized the first patent:
Systems and Methods for Real-Time, Stateless, Template-Free Transformation of Human Intent into Platform-Independent Outputs via Declarative Multi-Registry Processing.

Motivation: Transform raw intent into reliable artifacts without smuggling unsafe code, balancing creativity with control.
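A toy Python sketch of registry-driven expansion (registry names and primitives invented for illustration): a minimal declarative node is composed into a full UI description by versioned registries, with no templates and an audit trail on every expansion.

```python
# Hypothetical registries mapping primitive names to expansions;
# versioning the registry makes every expansion auditable.
COLOR_REGISTRY = {"primary": "#1A73E8", "danger": "#D93025"}
VIEW_REGISTRY = {
    "card": lambda p: {
        "view": "card",
        "background": COLOR_REGISTRY[p.get("color", "primary")],
        "children": p.get("children", []),
    },
}

def expand(schema: list, registry_version: str = "v1") -> list:
    """Stateless, template-free expansion: walk minimal declarative nodes
    and compose full UI descriptions from registry primitives."""
    out = []
    for node in schema:
        expanded = VIEW_REGISTRY[node["type"]](node)
        expanded["registry_version"] = registry_version  # audit trail
        out.append(expanded)
    return out

minimal = [{"type": "card", "color": "danger", "children": ["Hello"]}]
assert expand(minimal)[0]["background"] == "#D93025"
```

Because expansion is a pure function of (schema, registry version), the same input always yields the same artifact—reproducibility for free.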


4. OSS became better than commercial software: accelerated iteration, graph-based UX maker with prompt chaining

iOS dev slowdown and policy risks prompted a web shift. Weeks later: a graph-represented UX builder, prompt-driven layouts, UI elements, forms, sequences, alerts, backend jobs/actions. Prompt chaining refined outputs, but exposed flaws—slow, tedious, lacking tight iterative loops for real-time engagement, collective learning, or team dynamics.

Other prompt-based tools shared this "boring ceiling," limiting just-in-time LLM adaptation. Success here validated the transcoder patterns but demanded better scalability. Why web? Faster prototyping exposed LLMs' non-atomic outputs (malformed fragments, token-versus-atom mismatches) and the difficulty of abstract prompts like "build me a UX."
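One way to tame non-atomic token streams, assuming a JSON-emitting model, is to buffer tokens until a complete value parses and yield only whole atoms; a minimal Python sketch (function name hypothetical):

```python
import json

def atomic_chunks(token_stream):
    """Buffer streamed tokens until the buffer parses as complete JSON;
    yield only whole, well-formed values, never partial renders.
    Assumes the stream emits one JSON value at a time."""
    buf = ""
    for tok in token_stream:
        buf += tok
        try:
            obj = json.loads(buf)
        except json.JSONDecodeError:
            continue  # not a complete atom yet, keep buffering
        yield obj
        buf = ""

tokens = ['{"view"', ': "card"', ', "title": "hi"}']
assert list(atomic_chunks(tokens)) == [{"view": "card", "title": "hi"}]
```

Downstream consumers then see a stream of valid schemas rather than a stream of tokens, which is the difference between a renderer and a crash log.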


5. The world cloud middleware server: universal integrations for real-world empowerment

To elevate from visuals to actions, I built an "ultimate cloud server": universal IAM, roles, and access controls on object patterns; integrations with Stripe Connect (payments), Google Cloud (infra), Twilio (comms), and Calendly (scheduling). Prompt chaining expanded frontend schemas—e.g., prompt-to-vector images, menus, tables from strings.

This orchestrated upstream platforms, binding real-world tools to user experiences. Design choice: Keep presentation declarative, actions permissioned/auditable to prevent side-effect chaos. Latency from integrations underscored decoupling UX from compute timing, fostering stability in variable environments.
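The "permissioned and auditable" design choice can be sketched in a few lines of Python (roles and action names invented for illustration): every side-effecting action passes an IAM check first, and every attempt—allowed or denied—lands in the audit log.

```python
AUDIT_LOG = []
# Hypothetical role grants: user -> set of permitted actions.
ROLES = {"alice": {"payments:charge"}, "bob": set()}

def perform(user: str, action: str) -> str:
    """Gate a side-effecting action behind an IAM check and record the
    attempt either way, so the log tells the whole story."""
    allowed = action in ROLES.get(user, set())
    AUDIT_LOG.append({"user": user, "action": action, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{user} may not {action}")
    return "ok"

assert perform("alice", "payments:charge") == "ok"
denied = False
try:
    perform("bob", "payments:charge")
except PermissionError:
    denied = True
assert denied and AUDIT_LOG[-1]["allowed"] is False
```

Logging before the check resolves means denials are just as forensically visible as successes—the property you want when integrations misbehave.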


6. Native Hardware GPU FTW: Cross-platform compositor exploration with Rust and advanced rendering

Pushed boundaries: animation/transition logic via WebGL/OpenGL on Linux; cross-platform bundles for desktop/web/iOS/Android/Linux/macOS. Settled on React frontend + Rust backend—Rust handled filesystem/memory/CPU/GPU logic, feeding React adapters to cloud for IAM/user features.

Video/voice conferencing via Realtime APIs transformed interactivity. Dropped static UI libraries/React for pure WebGL: shifted from storyboards to unit-level emotional expressions, separating perceived vs. wall-clock time.

LLM challenges intensified—unreliable for app work, hard abstract prompts. Deterministic scaffolding emerged as essential. This inspired the second patent:
System and Method for Interactive Experience Execution with Asynchronous Content Preparation and Deterministic State Progression.

Rationale: Asynchronous prep without timing races; gates ensure stable, frame-rate-consistent UX.
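The gate-and-promote idea can be sketched in Python (a toy model, not the patented mechanism): content is prepared asynchronously in any order, but the visible state only advances when every gate up to the next step reports ready, decoupling wall-clock preparation from perceived progression.

```python
class Gate:
    """Steps become ready asynchronously; promotion to visible state is
    deterministic and in-order, so arrival order never causes races."""

    def __init__(self, steps):
        self.steps = steps   # ordered step names
        self.ready = set()   # steps whose async prep has finished
        self.visible = 0     # index one past the last promoted step

    def mark_ready(self, step):
        self.ready.add(step)

    def advance(self):
        # Promote strictly in order: a late-arriving early step holds
        # back everything after it, eliminating partial renders.
        while self.visible < len(self.steps) and self.steps[self.visible] in self.ready:
            self.visible += 1
        return self.visible

g = Gate(["intro", "menu", "detail"])
g.mark_ready("detail")   # finished early: prepared, but not shown
assert g.advance() == 0  # intro not ready, nothing promotes
g.mark_ready("intro")
g.mark_ready("menu")
assert g.advance() == 3  # gates open in order; detail is now visible
```

Whatever the real implementation does, this is the invariant: readiness is asynchronous, visibility is deterministic.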


7. Learning the hard way: Deterministic scaffolding becomes mandatory amid LLM realities

Exploration revealed LLMs' pitfalls: token streaming, malformed outputs, and prompt ambiguity make building dependable apps on them genuinely hard. Determinism isn't optional—it's the substrate for reliability, separating perception from computation.

Feedback loops and async prep allowed iterative refinement, but visibility required strict controls. Why? To eliminate UX instability from races, enable replay, and handle real-time without constant LLM crutches.


8. Moonshot with a slingshot: distributed LLM streaming to a client GPU, bringing it all together for the best performance-reliability trade-off possible

Budget maxed, no clear end—pushed harder. Bottleneck: declarative specs + live LLM streams are costly and non-scalable. Client-side hacks (backward load balancers, stateful inspection) fall short.

Key insight: determinism unlocks massive performance and a flawless UX. Controlled ticks and commits, plus immutable cryptographic hash chains built at runtime, organize every stream and its overhead and kill flicker and partial renders.

Distributed nodes produce keyed verification artifacts. Consensus via lightweight receipts enables tamper-resistant, hardware-enforced promotion to visible state.
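A toy Python sketch of the two pieces together (invented names, and an ordinary SHA-256 chain standing in for the patented mechanism): each tick commits a frame into a hash chain, and a commit is promoted to visible state only when enough independent nodes produced the same hash.

```python
import hashlib

def commit(prev_hash: str, frame: dict) -> str:
    """Chain each tick's frame to its predecessor; tampering with any
    earlier frame breaks every later link."""
    data = prev_hash + repr(sorted(frame.items()))
    return hashlib.sha256(data.encode()).hexdigest()

def quorum(receipts: list, threshold: int = 2):
    """Lightweight consensus: promote only the hash that enough
    independent nodes independently computed."""
    counts = {}
    for r in receipts:
        counts[r] = counts.get(r, 0) + 1
    best = max(counts, key=counts.get)
    return best if counts[best] >= threshold else None

genesis = "0" * 64
c1 = commit(genesis, {"tick": 1, "view": "menu"})
c2 = commit(c1, {"tick": 2, "view": "detail"})
# Three nodes compiled tick 1 independently; two agree, so it promotes.
assert quorum([c1, c1, "f" * 64]) == c1
# The same frame at a different chain position hashes differently.
assert c2 != commit(genesis, {"tick": 2, "view": "detail"})
```

The receipts are cheap to compare, and the chain makes the whole presentation history replayable and provable after the fact.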

This became the third patent:
System and Method for Distributed Compilation with Consensus Verification and Hardware-Enforced Temporal Presentation State Progression

Core win: it combines perceptual stability, distributed readiness under partial failures, code execution and rendering unified into a single rendering experience, and cryptographically provable integrity, at true planetary scale, without ever compromising trust. It's the closest I could bring security and performance together. And the journey isn't over: I'm still making, testing, and optimizing this, and looking forward to learning more.




The unified architecture and inferred lessons

It might look like a random sequence of activity: iOS prototype → web builder → cloud orchestrator → Rust runtime → compositor shift → deterministic execution → verified commits.

But in retrospect, the friction kept pointing at the same boundaries:

  1. Intent must arrive as structured, non-vague input
  2. Transformation must be stateless and template-free for reproducibility
  3. Composition should be registry-driven for controlled expansion
  4. Preparation may be asynchronous but must be gated
  5. Progression should be deterministic to decouple timing
  6. Presentation must be commit-based, verifiable, and distributed to scale code execution and data processing

These aren't arbitrary; they structurally eliminate failure modes like divergence, races, and vulnerabilities, and they open development up to a place where we focus on the hearts and minds of our users instead of chasing bugs.


What’s next

I spent 18 months chasing one idea all the way down: stop cutting the corners that make systems fragile, and rebuild from first principles. That work finally landed somewhere I'm happy with: a solid execution infrastructure where human intent turns into reliable, auditable reality, even when things get messy.

The patents are filed. The prototypes work. What started as hopeless frustration with brittle UI generators turned into something more fundamental: plumbing for the next phase of computing.

  1. Intent that executes cleanly.
  2. Interactions you can prove, not just demo.
  3. Systems that hold up in medicine, finance, and regulated environments.
  4. AI agents that learn together without losing accountability.
  5. Real-time experiences that scale without falling apart.

This isn’t a solo prototype experiment anymore. It’s a foundation.

I’m ready to be in a bigger room — to build with people who swing hard, ship fast, and care about what happens after launch. After pushing this far alone, I’m genuinely excited to make the next chapter a shared one.

I expected this journey to have stopped or ended by now, but it seems it just keeps leveling up.
