Jason Sonderman, UXMC - Director of UX

Redefining the Handoff: How UX Became the Front-End Delivery Layer at ARCOS

How I led cross-functional alignment to restructure the design-to-dev handoff at ARCOS — building an AI-enabled model where UX transmits design intent as production-adjacent code, cutting kickoff time from two weeks to three days.

  • 3 days: kickoff & handoff, down from a 2-week investigation sprint
  • ~30% faster: time to market, across piloted delivery cycles
  • Nearly eliminated: revision cycles, the see-build-review-change loops

The moment it clicked

The deskcheck at ARCOS wasn’t a meeting. It was a Slack channel — and sometimes a long, unwieldy thread trying to get a single component right. A developer would post a Loom video of the work. UX would watch it, trying to work out what was actually in the code. Was the correct token used, or just a value that looked right on screen? Was that spacing coming from the design system or hardcoded? You couldn’t inspect it through a video. You could only see the surface.

That’s what stuck with me. Not that the developer had done something wrong — they hadn’t. Not that the spec was incomplete — it was thorough. But a comprehensive spec is still interpreted differently by each person who reads it. And when the review mechanism is a Loom video in a Slack thread, UX has no way to verify intent against implementation.

As AI started entering the delivery pipeline — developers feeding specs into models to accelerate their builds — the translation problem didn’t go away. It just moved. Now an AI was misreading the spec instead of a human. We’d automated the ambiguity.

That realization reframed the question. Instead of “how do we write better specs,” I started asking: what if we gave AI better source material to begin with?

The diagnosis

The traditional design-to-dev handoff has always had a lossy translation layer. Designers produce Figma files, annotations, and Zeplin specs. Developers interpret them — making judgment calls about tokens, spacing, component variants, interaction states. Every judgment call is a potential drift from intent.

At ARCOS, that translation tax was being paid in two ways: in time, through investigation sprints and spikes at the start of every delivery cycle, and in quality, through the see-build-review-change loops that consumed the back half. A two-week sprint just to ingest design intent. Then cycles of correction after that. The spec was comprehensive. It just wasn’t executable.

The vision — and why it required cross-functional buy-in from the start

I started working toward a different model — one where the UX team transmits design intention in a form that’s closer to code than documentation. Not as a replacement for engineering, but as a cleaner separation of concerns.

The idea was to shift the work left. If front-end visual builds could originate within the UX team — component-correct, pattern-consistent, already informed by the design system — then development could focus on what it does best: backend architecture, data modeling, API integration. Not burning cycles interpreting whether a button should have 8px or 12px of padding.

At the same time, this opened a different opportunity for Product. Simple experiences — low-complexity UI updates that currently require full production cycles — could be built with UX oversight rather than waiting in a delivery queue. Three roles, each elevated to their highest use. That was the vision.

But I recognized early that this wasn’t a UX decision to make alone. Changing where front-end code originates touches professional identity, codebase ownership, and delivery accountability. I needed Product leadership and engineering leadership in the room before I built anything.

Building the coalition — where alignment held and where it didn’t

I brought this vision to three audiences, and the responses were instructive.

Individual developers were largely on board. Many of them were already frustrated with the investigation overhead — they wanted to be solving harder problems. The model gave them a clear argument for why that was now possible.

Product leadership saw it quickly too. The prospect of moving simple experiences through delivery faster, without waiting in an engineering queue, aligned directly with roadmap pressures they were already feeling.

Technology leaders and managers were a different conversation. Their concern was specific and legitimate: code quality. If UX-generated code was going to enter the production codebase, who owned it? How was it reviewed? What happened when it drifted from engineering standards or introduced technical debt? These weren’t abstract objections — they were the right questions from people who are accountable for what ships.

  • Individual developers — Buy-in secured. Aligned on reducing investigation overhead.
  • Product leadership — Buy-in secured. Saw faster delivery on simple experiences.
  • Tech leaders & managers — Partial; pilot running. Core concern: code quality and codebase ownership.

There was one more layer to the Tech leader objection that took time to surface. Early in the exploration, the team used AI-assisted prototyping tools — Lovable, Bolt.new, Figma Make — to move abstract product and UX thinking into something tangible fast. That was the right use of those tools. But the output they produce is inherently prototype-shaped: AI-chosen component libraries, no relationship to the production codebase, built to make something visible rather than mergeable. When Tech leaders pushed back on “UX-generated code entering the codebase,” that’s the artifact they had in their heads. Understanding that distinction changed how I responded to the resistance — and what we built next.

How the model matured — from prototype stigma to production-adjacent components

The AI-assisted prototyping tools weren’t wrong as an exploration layer. They helped the team move fast and get abstract concepts onto a screen. But they created a lasting perception problem with engineering leadership: design output equals throwaway code. That association was the real obstacle, more than any specific technical concern.

Responding to that feedback meant making a meaningful technical distinction the early exploration had blurred. AI-assisted prototyping tools reach for whatever component library the model selects. They produce something that looks right on screen but has no relationship to the production codebase — different dependencies, different token structures, different component patterns. Engineers are right to treat that as untrusted input.

The question wasn’t how to convince Tech leaders that prototype output was acceptable. It was how to build a model where the output was genuinely different — production-adjacent rather than prototype-shaped.

The answer was to move UX designers into VS Code, working with AI agents and skills against the actual front-end codebase. Instead of an AI choosing its own component library, the AI operates within the same dependency tree, the same token structure, the same patterns that engineering already owns. The output is higher-order components (HOCs) — not pixel-perfect mockups, not throwaway scaffolding, but wrappers around real production components with design intent baked in at the right level of abstraction.
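To make the shape of that output concrete, here is a minimal sketch of an HOC carrying design intent. The names (`Button`, `withConfirmStyling`, the `tokens` object) are hypothetical stand-ins — the real components and token structure live in the ARCOS design system, and a real implementation would target its actual framework.

```typescript
// Hypothetical production component and token set -- stand-ins for the
// real design-system exports, which this sketch does not have access to.
type ButtonProps = { label: string; padding?: string; variant?: string };
const tokens = { spacing: { md: "12px" }, variant: { primary: "primary" } };

// A plain function component: props in, markup out (framework-agnostic here).
const Button = (p: ButtonProps): string =>
  `<button class="${p.variant ?? "default"}" style="padding:${p.padding ?? "0"}">${p.label}</button>`;

// The layer UX produces: a higher-order component that wraps the production
// component and bakes in design intent (which token, which variant) at the
// point of wrapping, so engineering never guesses between 8px and 12px.
function withConfirmStyling(
  Component: (p: ButtonProps) => string,
): (p: Pick<ButtonProps, "label">) => string {
  return (p) =>
    Component({ ...p, padding: tokens.spacing.md, variant: tokens.variant.primary });
}

const ConfirmButton = withConfirmStyling(Button);
```

Because the wrapper imports the same component and token modules engineering already owns, a reviewer can verify intent by reading the wrapper — no Loom video required.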

  • AI-assisted prototyping tools (Lovable, Bolt.new, Figma Make): fast, generative, great for moving abstract thinking onto a screen. AI-chosen dependencies, no relationship to the production codebase. Correctly perceived by engineering as throwaway.
  • VS Code + AI agents + design system: UX working in VS Code against the actual front-end codebase. Same tokens, same components, same patterns engineering already owns. HOC output that is reviewable, trustable, and mergeable.

This distinction matters for the trust conversation with Tech leaders. The objection to “UX code entering the codebase” was always really an objection to foreign code entering the codebase — code that didn’t speak the same language as what was already there. A HOC built against the actual front-end stack isn’t foreign. It’s design intent expressed in the team’s own vocabulary.

This model is currently in pilot. Its most meaningful outcome so far isn’t a metric — it’s that the conversation with Tech leadership has shifted from “should UX be producing code at all” to “what does the review and merge process look like.” That’s a different, more tractable problem.

What we built

To make this real, I collaborated with our dev managers to build an AI-enabled design system npm package. The goal was to give AI models deep, structured context on our components, patterns, and tokens — not just a visual reference, but the kind of context that enables high-fidelity interpretation.

Paired with the Figma MCP, this meant an AI working in that context could read a design file and produce front-end code that was already on-system. The right tokens. The right components. The right spacing. Not because a developer made good judgment calls, but because the AI had the full design system as its frame of reference.
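A sketch of what "deep, structured context" could look like as a package export — machine-readable component metadata an AI tool (or an MCP server wrapping the package) can ground itself in. The interface and field names here are illustrative assumptions, not the actual ARCOS package API.

```typescript
// Hypothetical shape of the machine-readable context the package exports.
// Field names are illustrative, not the real package's API surface.
interface ComponentContext {
  name: string;
  tokens: Record<string, string>; // token name -> resolved value
  variants: string[];             // allowed variant names
  usage: string;                  // guidance an AI can condition on
}

const designSystemContext: ComponentContext[] = [
  {
    name: "Button",
    tokens: { "spacing.inline": "12px", "color.action": "#0055cc" },
    variants: ["primary", "secondary", "ghost"],
    usage: "Primary actions only; never more than one primary per view.",
  },
];

// A lookup the AI tooling could call while reading a Figma file, so output
// uses real token values instead of plausible-looking approximations.
function contextFor(name: string): ComponentContext | undefined {
  return designSystemContext.find((c) => c.name === name);
}
```

The design choice worth noting: the context is structured data, not prose documentation, so the model's frame of reference is the same source of truth the build consumes.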

We also worked with Product to sharpen how Milestones are defined in our Shape Up process. Clearer scope upstream made the code output more accurate downstream — because if the intent is vague at the milestone level, it’s still vague when the AI tries to interpret it. That upstream clarity work was a direct output of the cross-functional alignment process, not a technical fix.

The result: a 3-day kickoff and handoff, replacing what had been a 2-week investigation sprint. The output was working front-end code — agnostic of data and business logic, ready to be wired to APIs by engineering — but already reflecting design intent with enough fidelity that revision cycles were nearly eliminated.

The architectural argument underneath it all

The principle underneath this model pushes toward an MVC separation — visual layer decoupled from model and controller. UX owns the V. Dev owns the M and C. That’s not just a workflow change. It’s a structural argument for how product teams should be organized in an AI-assisted delivery environment.
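That boundary can be sketched as a typed data contract: the view on one side, the data wiring on the other. Everything here — `CrewStatus`, `crewCard`, `fetchCrewStatus` — is a hypothetical illustration of the split, not ARCOS code.

```typescript
// Hypothetical data contract: the seam between the V (UX-owned) and the
// M/C (engineering-owned). Names are illustrative.
interface CrewStatus { crewName: string; available: number; assigned: number }

// The V: a UX-owned view, agnostic of where its data comes from.
const crewCard = (s: CrewStatus): string =>
  `<section><h2>${s.crewName}</h2><p>${s.available} available / ${s.assigned} assigned</p></section>`;

// The M/C: engineering wires the view to a real source (API, store, etc.).
// A stub stands in for that layer here.
async function fetchCrewStatus(): Promise<CrewStatus> {
  return { crewName: "Line Crew 4", available: 6, assigned: 2 };
}

async function render(): Promise<string> {
  return crewCard(await fetchCrewStatus());
}
```

The handoff disappears because neither side interprets the other: the contract is the interface, and the compiler enforces it.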

The teams that will move fastest aren’t the ones with the most developers — they’re the ones where each role is doing the work that requires their specific expertise, and AI is handling the translation between them. The goal was never to make UX do development. The goal was to make the handoff disappear.