Skip to main content
Jason Sonderman, UXMC, CPACC

From Delivery to Direction

What the efficiency revealed — and what it means when your documented process becomes a training set.

Rebrand + light mode

1 wk

Down from a full 2-week sprint
Sprint recovery

½–1 sprint

Per milestone cycle within a 6-week cadence
Story point shift

5–8 → 1–3

Fibonacci UI estimates after design system hardening
Kickoff compression

2 wks → 3 days

Iteration kickoff and handoff
  • UX Leadership
  • UX Team Scaling
  • Design Systems
  • AI-Assisted Delivery
  • Design Tokens
  • Cross-functional Collaboration
  • Enterprise SaaS

The terrain

Lighthouse runs on two stacks, serves two kinds of users, and opens every six-week cycle the same way — four days of planning sessions where product, design, and a development pod walk every defined outcome together before a line of code is written. Three senior designers cover that work across three pods. I carry the connective tissue role: coherence between the two surfaces, continuity into the legacy tools underneath, and the thread that keeps what gets built on one side from quietly drifting away from the other. The team is small by design. Constraints tend to produce clarity, and clarity, as it turns out, became the whole story.

For most of the time I’ve been here, those planning sessions were where the friction lived. This is the story of what changed when the friction went away — and what it revealed when it did. For the technical story of how the delivery infrastructure was built, see The Handoff Is the Bug. This study picks up where that one ends.


What the efficiency revealed

The pipeline worked. Half a sprint to a full sprint recovered per six-week cycle. UI stories that pointed at 5s and 8s came back as 1s and 2s. A rebrand and light mode implementation that would have taken two weeks took one. The work was understood before it began.

Working inside it closely enough revealed something the original hypothesis hadn’t accounted for. The ambiguity that remained wasn’t in the handoff. It was earlier — in whether the purpose behind a layout decision had ever been made explicit, in whether the shape of the data matched what a user actually needed to see, in the distance between what the interface asked someone to do and what they were actually trying to accomplish. Those questions don’t come from the design file. They come from being close enough to real users — in research, in testing, in the accumulated evidence of watching people work — that the intent behind a decision is grounded in something observed rather than assumed. The smarter the agent got, the more plainly it showed us how much that grounding mattered. Speed was compressing the cost of skipping it, not eliminating it.

We came looking for a better bridge. We found that the other side needed work first.

Then the mobile pod showed us what happens when the infrastructure spreads beyond the people who built it.

The mobile surface had always been the harder one to plan for — there is no clean way to prototype in React Native, and the pod had never been part of the original delivery model rollout. During a planning session, we learned they had connected the Zeplin MCP to pull design context and user flows directly into a Claude Code workflow, using it to visualize shaping outcomes mid-discussion before the formal handoff. They had also built a UX Designer agent — trained on the ways-of-working documentation our team had published. Not a tool handed to them. Not a process they’d been asked to follow. Our own documented values and process norms, used as a behavioral frame for a model working alongside their developers. They built it themselves because the documentation was precise enough to make it possible.

That moment clarified something about what it means to lead a design team well. The documentation, the process guidelines, the ways-of-working artifacts — those had always been written for human readers. The mobile pod turned them into the training set for an AI that now operates inside their delivery workflow. When your team’s institutional knowledge becomes machine-legible, it doesn’t just scale across people. It scales across time and surfaces you never anticipated. The infrastructure outlives any single project. If the process guidelines aren’t worth training a model on, they probably weren’t worth writing.


Clarity of Purpose

Oracle’s Redwood Design System named two things that have stayed with me. Clarity of Purpose — every design decision grounded in a specific outcome before execution begins. And the shape of the data — not what the system holds, but what form a user needs it to take to act with confidence. Both principles are older than any library. Both became more urgent when the time between a decision and its built expression collapsed to days.

AI is a domain I’ve worked in long enough and closely enough that other leaders inside the organization began bringing me into the room when strategic decisions about it were being made — not as a title, but as a point of view that had been tested against real work. Those conversations kept returning to the same reframe: the efficiency story is the smaller story. What AI-assisted delivery actually creates, if the recovered time goes somewhere intentional, is the conditions for design to do the work it was always supposed to be doing. Not generating UI. Holding purpose accountable. Defining the right shape for the right data. Staying close enough to what users actually need that the thing being built is worth building.

AI doesn’t make UX faster. It makes UX more valuable — if the recovered capacity goes back into the questions that determine whether velocity creates anything worth having.


Early, and honest about it

One sprint in. The volume of UX updates to screens has dropped. More tellingly, the nature of the work has shifted — designers are no longer the primary producers of UI in Figma, then advocates for its faithful translation into code. They are reviewers: informed, principled, checking that what has been built holds to the UX decisions that were made upstream, that accessibility hasn’t been traded away in the handoff, that the experience arriving at user acceptance testing is one the design team can stand behind. The production work moved to the agents. The judgment work stayed with the people.

Whether that holds, and what it looks like at scale across all three pods, is still being learned. What is visible so far is a design function that moved upstream without shrinking — AI handling the translation layer, developers owning architecture and integration, UX with its attention on the questions that don’t resolve themselves.

What the user is actually trying to accomplish. Whether the thing being built gets them there. Whether the purpose behind the decision was ever made clear enough to survive the distance between intent and code.

That last question turns out to be the one that matters most. It was there before the AI. It will be there after whatever comes next. The tools changed what it costs to ignore it.