
How a small POC saved a big plan
The plan looked solid – until it wasn’t
It started like many projects do: a clear need, a familiar domain, and a rough plan that should have worked.
We were going to automate client onboarding to the encryption service. Straightforward, right?
But almost immediately, the plan started to fray. Edge cases multiplied. Coordinating migrations across domains, syncing per-account configurations, handling state propagation across distributed systems – it all got complicated fast.
We spent hours diagramming workflows, debating orchestrators, and defining responsibilities – only to generate more open questions.
We found ourselves stuck in a loop of hard trade-offs. Supporting new migrations meant potentially rewriting large chunks of logic before they were even stable. We couldn’t agree whether orchestration should live locally or in a shared service. The build vs. buy debate kept surfacing, especially with deadlines looming. And with every new “what if” – from day-one scale to backward compatibility – the plan became less certain, not more. Each question opened another. We weren’t getting closer to a plan – we were spiraling.
So we paused, shifted gears, and built a POC.
And no, it wasn’t a toy for engineers. It was a tool to cut through the fog and anchor real planning.
Let the POC lead
Here’s what we learned by using a POC not just to validate a direction, but to shape the plan itself.
Shrink the scope to expose the essence
We weren’t trying to validate every edge case; we were trying to find the shape of the problem. So we picked one slice: a single account, one service, one data model. We wanted the full flow, just compressed, so we included everything we knew would matter down the line (sketched in code after the list):
- Sequentially enabling data encryption and decryption
- Encrypting historical data
- Creating backups
- Waiting for manual approval
- Removing unencrypted data and, subsequently, its backups
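Here’s roughly what that compressed flow looked like as code. Every name below is an illustrative placeholder, not our actual service API; this is a sketch of the sequence, not the implementation.

```python
# The compressed onboarding slice: one account, one table, every step
# we knew would matter, run strictly in order. All bodies are placeholders.

def enable_encryption_and_decryption(account: str, table: str) -> None:
    print(f"[{account}/{table}] encryption and decryption enabled")

def encrypt_historical_data(account: str, table: str) -> None:
    print(f"[{account}/{table}] historical rows encrypted")

def create_backup(account: str, table: str) -> None:
    print(f"[{account}/{table}] backup created")

def wait_for_manual_approval(account: str) -> None:
    input(f"[{account}] approve cleanup and press Enter: ")  # the human gate

def remove_unencrypted_data(account: str, table: str) -> None:
    print(f"[{account}/{table}] unencrypted data removed")

def remove_stale_backups(account: str, table: str) -> None:
    print(f"[{account}/{table}] pre-encryption backups removed")

def run_onboarding_slice(account: str, table: str) -> None:
    enable_encryption_and_decryption(account, table)
    encrypt_historical_data(account, table)
    create_backup(account, table)
    wait_for_manual_approval(account)
    remove_unencrypted_data(account, table)
    remove_stale_backups(account, table)

if __name__ == "__main__":
    run_onboarding_slice("acct-001", "payments")
```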
We intentionally scoped the POC to a single domain, starting with a single database table. That let us reduce variables and focus on where the pain actually was: orchestration, state control, and error boundaries.
When implementation surfaced additional complexity, we also decided to switch from async to synchronous batch processing for the POC. That gave us simpler control over concurrency and throughput. It wasn’t meant to be scalable – it was meant to be understandable.
To move faster, we skipped detailed error handling and defaulted to basic retries. That tradeoff let us focus on fundamentals, like discovering that every step in the process had to be idempotent, regardless of which orchestrator we’d eventually use.
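To make those two simplifications concrete, here’s a rough sketch of the pattern: synchronous batches in a plain loop, with each row-level operation written to survive a blind retry. The batch size, `encrypt()` helper, and `"encrypted"` marker are illustrative assumptions, not our production schema.

```python
# Synchronous batch processing with idempotent row-level steps, as in the POC.
# BATCH_SIZE, encrypt(), and the "encrypted" marker are illustrative only.
BATCH_SIZE = 500

def encrypt(value: str) -> str:
    return f"enc({value})"  # stand-in for the real encryption call

def encrypt_row(row: dict) -> None:
    if row.get("encrypted"):   # already encrypted: a retry is a safe no-op
        return
    row["value"] = encrypt(row["value"])
    row["encrypted"] = True    # recorded with the write, so retries see it

def encrypt_in_batches(rows: list[dict]) -> None:
    # One batch at a time, one process: concurrency and throughput
    # reduce to a single loop and a constant, which was the point.
    for start in range(0, len(rows), BATCH_SIZE):
        for row in rows[start:start + BATCH_SIZE]:
            encrypt_row(row)

encrypt_in_batches([{"value": "card-1234"}, {"value": "card-5678"}])
```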
This discovery shaped a design that was:
- Prototype-ready with Temporal (an open-source microservices orchestration platform)
- Flexible enough to swap orchestrators later without rewriting core logic
- Designed to evolve from sync to async when needed
The goal was to simplify, not shortcut. We deliberately made early decisions reversible; the sketch below shows roughly what that shape looked like.
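The core idea: keep business logic in plain, orchestrator-agnostic step objects, and treat the orchestrator as a replaceable loop. This is a minimal sketch under our assumptions; the names are ours, and the Temporal integration is only implied (each step would become an activity), not shown with real Temporal APIs.

```python
# Core logic as plain, orchestrator-agnostic steps.
# Any orchestrator (an in-code runner now, Temporal later) just sequences them.
from typing import Protocol

class Step(Protocol):
    name: str
    def run(self, ctx: dict) -> None: ...  # every step must be idempotent

class EncryptHistoricalData:
    name = "encrypt_historical_data"
    def run(self, ctx: dict) -> None:
        print(f"encrypting {ctx['table']} for {ctx['account']}")

class CreateBackup:
    name = "create_backup"
    def run(self, ctx: dict) -> None:
        print(f"backing up {ctx['table']}")

def run_pipeline(steps: list[Step], ctx: dict) -> None:
    # The simplest possible orchestrator: run steps in order, synchronously.
    # A Temporal workflow (or an async runner) would replace only this loop.
    for step in steps:
        step.run(ctx)

run_pipeline([EncryptHistoricalData(), CreateBackup()],
             {"account": "acct-001", "table": "payments"})
```

Swapping orchestrators then means replacing `run_pipeline`, and moving from sync to async changes the runner, not the steps.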
Make it real to see the edges
An underrated outcome: the POC clarified the boundaries of responsibility between components. Instead of abstract discussions, we had working flows that made contract boundaries obvious. That enabled:
- A meaningful design doc with clearly scoped responsibilities – enough to delegate work confidently and prevent endless rework
- Flow diagrams grounded in real decisions
- Deliberate abstraction points where we knew things might change
Deliver early. Learn fast. Unlock options.
One of the biggest wins came early, almost by accident. Before the full process was even ready, we saw how we could apply the new approach to existing migrations (and the ones we’d been working on in parallel with the onboarding automation). One of the first things we did post-POC was refactor another data migration process to follow the new design, plugging it into a simple in-code runner.
That early step gave us something tangible. We delivered real business value with just a partial rollout – proof that we didn’t need to wait for “finished” to be useful. More importantly, it confirmed that our assumptions held up once we hit real data and real edge cases. And because the framework was already modular, the next migration could be plugged in easily. No reinvention. No guesswork.
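For illustration, this is roughly the shape of that in-code runner. The step list and retry policy are simplified stand-ins for the real migration, not its actual code.

```python
# A simple in-code runner: execute registered migration steps in order,
# with POC-level error handling (blind retries and crude backoff, nothing more).
import time
from typing import Callable

def run_with_retries(steps: list[Callable[[], None]], attempts: int = 3) -> None:
    for step in steps:
        for attempt in range(1, attempts + 1):
            try:
                step()
                break
            except Exception:
                if attempt == attempts:
                    raise
                time.sleep(2 ** attempt)  # good enough for a POC

# An existing migration, refactored into steps and plugged in as-is:
legacy_backfill = [
    lambda: print("snapshot source table"),
    lambda: print("copy rows to new schema"),
    lambda: print("verify row counts"),
]

run_with_retries(legacy_backfill)
```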
Show, don’t tell
We didn’t expect the POC to attract so much attention. But once it was real – once people could see it working – it changed the conversation entirely. That surfaced critical feedback we didn’t get during the design review:
- “Can this scale to 10+ domains?”
- “Is there overlap with X team’s data backfill?”
- “Can we use it for other data sources?”
It also led to broader interest: other teams saw the approach, understood the modularity, and started exploring ways to reuse the architecture for different workloads.
The POC is the plan now
The best part? The POC didn’t just clarify what to build; it reshaped how we’d build it. Our new rollout plan focuses on:
- Delivering centralized state handling early
- Letting batch processes trigger manually while the full flow evolves
- Rewriting each migration component independently over time
- Unblocking other migration use cases before the full onboarding system is live
Hidden speed
A POC is not a detour from delivery – it’s how you speed it up responsibly. If you’re trying to justify a POC, ask this:
- What will it cost us if we’re wrong about this assumption?
- What could we learn with just 10% of the effort?
Those two questions often turn “maybe later” into “let’s do it now.”
That’s how you make the case: a small bet today to avoid a very expensive mistake tomorrow.