Jason Sonderman, UXMC, CPACC

Lighthouse Platform

How I led 2.5 months of embedded field research across 14 utility sites to reframe a greenfield product direction — then held the research-backed vision through two rounds of MVP scope cuts to ship convoy tracking in August 2025.

Featured image: Building a Research-Backed Vision for Emergency Preparedness
  • UX Leadership
  • Field Research
  • Product Vision
  • AI-Assisted Prototyping
  • Utility Industry
  • Enterprise SaaS

I joined Arcos with a plan already in motion: modernize the UI of our legacy emergency preparedness tools and make the experience more consistent. The assumption was that the work was a UI refresh. The harder question — what do these users actually need to do their jobs during a major power outage — hadn’t been asked yet.

I wasn’t brought in to validate that assumption. I was brought in to lead UX. So we started by going out to the field.


What we found in the field

14 customer sites visited across AEP operating companies
120+ emergency managers and field workers observed and interviewed
2.5 months of embedded field research

We spent two and a half months on-site with American Electric Power, visiting 14 of their operating company locations and spending time with both command center staff and field workers. We weren’t doing interviews from a conference room — we were watching storm mode happen. We saw what the actual workflow looks like under pressure.

What we found was that users weren’t struggling because the UI was outdated. They were struggling because software — ours and everyone else’s — had been built as a collection of individual tools, not as a system that matched how storm restoration actually works. Command center operators were running 4 monitors with 8 different applications open simultaneously. Field workers kept a paper notebook — their “bible” — because it was more reliable than any digital tool when pressure was high and connectivity was spotty.

The insight that reframed everything: users didn’t need a better bag of tools. They needed a system that understood the storm restoration process and gave them what they needed next — instead of making them hunt for the wrench in a bag of screwdrivers.

That reframe changed what Lighthouse needed to be. Not a modernized UI. A workflow-native platform — one that follows the actual sequence of a storm response and surfaces the right information and actions at the right moment in that process.
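
To make "workflow-native" concrete, here is a minimal sketch of the idea in TypeScript, written purely for illustration. The phase names, surfaces, and actions are invented, not taken from the actual Lighthouse model. The point is the shape: the restoration process is an ordered set of phases, each declaring the information and actions that matter at that moment, so the platform can answer "what comes next" instead of presenting a flat toolbox.

// Illustrative sketch only. Phases, surfaces, and actions below are
// hypothetical examples, not the real Lighthouse data model.
interface StormPhase {
  id: string;
  label: string;
  surfaces: string[]; // information to bring forward during this phase
  actions: string[];  // operations a user is likely to take next
}

const stormResponse: StormPhase[] = [
  {
    id: "pre-storm",
    label: "Pre-storm planning",
    surfaces: ["forecast impact", "resource availability"],
    actions: ["request external crews", "open staging sites"],
  },
  {
    id: "mobilization",
    label: "Crew mobilization",
    surfaces: ["inbound convoy status", "staging site capacity"],
    actions: ["track convoys", "prepare check-in"],
  },
  {
    id: "restoration",
    label: "Active restoration",
    surfaces: ["outage map", "crew assignments"],
    actions: ["assign work", "report progress"],
  },
];

// Given where the event currently stands, hand the user the next step
// rather than making them hunt through separate tools.
function nextPhase(currentPhaseId: string): StormPhase | undefined {
  const index = stormResponse.findIndex((phase) => phase.id === currentPhaseId);
  return index >= 0 ? stormResponse[index + 1] : undefined;
}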


Shaping the vision

With that research foundation in place, my job became translating what we’d learned into a product direction that the broader team — product managers, engineering leads, company leadership — could see and believe in. Evidence alone doesn’t create alignment. You have to make the future tangible.

I developed the concept of user lenses — a framework for how the platform should adapt its experience based on a person’s role in the storm process. A field crew supervisor needs fundamentally different information surfaces than a command center coordinator, even when they’re working the same event. Lenses became the conceptual spine of the Lighthouse vision.
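
As a rough sketch of how a lens might be expressed (again in TypeScript for illustration only; the role names and surface keys below are hypothetical, not the shipped lens definitions), the same event data gets a different leading edge per role:

// Illustrative sketch only. Roles and surface keys are hypothetical,
// not the actual Lighthouse lens definitions.
type Role = "fieldCrewSupervisor" | "commandCenterCoordinator";

interface Lens {
  role: Role;
  primarySurfaces: string[];   // what this role needs front and center
  secondarySurfaces: string[]; // available on demand, but not in the way
}

const lenses: Record<Role, Lens> = {
  fieldCrewSupervisor: {
    role: "fieldCrewSupervisor",
    primarySurfaces: ["my crew assignments", "site safety notes", "check-in status"],
    secondarySurfaces: ["regional outage summary"],
  },
  commandCenterCoordinator: {
    role: "commandCenterCoordinator",
    primarySurfaces: ["regional outage summary", "inbound convoy status", "staging capacity"],
    secondarySurfaces: ["individual crew assignments"],
  },
};

// Same storm, same underlying data: the lens decides what leads the experience.
function surfacesFor(role: Role): string[] {
  const lens = lenses[role];
  return [...lens.primarySurfaces, ...lens.secondarySurfaces];
}

The value isn't the code; it's that one definition per role lets the platform stay a single product while feeling purpose-built for each person working the storm.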

To make that vision concrete, I built a working prototype in Lovable. This was an intentional choice to use AI-assisted prototyping as a leadership tool: get to something tangible fast, and use that tangibility to drive real conversation instead of abstract deck review. I worked in constant dialogue with the product managers and design team throughout, so the prototype reflected shared understanding, not just my interpretation of the research.

AI-assisted prototyping wasn’t about speed for its own sake. It was about compressing the distance between research insight and stakeholder alignment — making the vision real enough to pressure-test.


What shipped, and what didn't

The most important problem we’d identified in the field was crew check-in. Before a storm restoration can begin, external crews have to be checked in at a staging site — and that process was slow, error-prone, and still largely paper-based. We designed a full workflow to fix it, grounded directly in what we’d learned from the users who live that process.

Then scope got cut. Twice. The reality of delivery capacity and organizational agility meant the August 2025 MVP shipped as convoy tracking — knowing where inbound external resources are and helping command center teams prepare for their arrival — rather than the full check-in overhaul we’d designed.

The check-in process is still manual and off-system. We didn’t ship the thing users needed most. That’s a real limitation, and I own it as part of the story.

But convoy tracking isn’t nothing. It starts migrating customers from the legacy solution onto the new platform — a deliberate, low-disruption first step that builds familiarity without overwhelming users with a full feature suite on day one. It’s the right phase one. And crucially, the vision hasn’t changed. The research is still true. The user lenses concept is still the north star. What shipped in August is the beginning of that journey, not a detour from it.


What this taught me

The most important thing I did on Lighthouse wasn’t the research, the prototype, or the design work — though all of those mattered. It was maintaining a clear and credible vision of where this platform needs to go, even when organizational constraints meant we couldn’t get there all at once.

The AI-assisted prototyping work we piloted during this project has since become part of how the team operates — product managers and designers are now using that approach to translate product needs into tangible examples for leadership review faster than before. That capability didn’t exist at Arcos when I arrived. Building it was part of the job.


Quick Read

Team

  • Strategist + Designer (consultant)
  • Design Ops + Designer (consultant)