I Shipped a Single Button
Automating PowerPoint Generation at Axiamatic
There's a version of this case study where I walk you through an elaborate design system, a multi-step configuration flow, template libraries, drag-and-drop mapping interfaces. That's what the project looked like on paper when it landed on my desk.
Instead, I shipped a single button.
This is a case study about why that was the harder decision, and what senior design work actually looks like when you're building for learning instead of building for scale.
When customers and strategy collide
Our CTO had just announced a pivot: simplify dashboards, make AI the primary entry point, stop competing with BI tools. Two weeks later, our most strategic customer said they wouldn't adopt without one thing: automated PowerPoint generation from dashboards.
The tension was immediate. We'd committed to moving away from dashboard-centric design, and now we were being asked to build a feature that exports dashboards to PowerPoint.
I raised the question that mattered: were we building for the deliverable (a PowerPoint file) or for the generation (AI reasoning and synthesis)? The first felt like traditional BI tooling. The second would advance our direction.
The response was clear: this customer wouldn't adopt without it. Customer commitments override strategic purity.
That's when the real design work started.
Aligning on what we're actually building
The product doc reflected an early-stage tension. The title said "[redacted] Steering Committee Reporting," singular and specific. But the user interaction diagram showed template selection, dashboard configuration, and mapping interfaces. Plural, generic, platform-shaped.
Those two framings required completely different designs. A customer-specific solution means hardcoded templates, zero configuration UI, and a timeline measured in weeks. A platform feature means selection interfaces, mapping logic, and months of work. We needed to resolve which one we were building.
I talked through the spec with a few colleagues. They saw the same tension: one questioned whether it would scale beyond this customer, another suggested keeping scope to a POC/MVP. Both asked whether in-product editing belonged in the first version.
That helped me frame the right questions for my PM. Not open-ended "can you clarify the scope" questions, but specific ones: "Is this a platform feature or a customer-specific solution?"
We aligned on a hardcoded MVP for this one customer. One button, zero configuration. If it validated the pattern, we'd generalize later.
Now I had something I could actually design.
The case for doing less
With scope locked to a single customer, I had a choice. I could still build configuration UI, preview screens, and in-product editing, treating this customer as the first user of a platform feature. Or I could strip the design down to the minimum that would validate whether anyone wanted this thing at all.
I argued for the minimum, and I had to make the case for each thing I wanted to cut.
**Preview before download.** The spec floated this as an obvious inclusion. But we had no evidence users wanted it. Adding preview would mean more UI, more edge cases, and a longer timeline, all to validate an assumption we could test more cheaply by shipping without it. If users complain about not having preview, we'll have learned something. If they don't, we'll have saved weeks.
**In-product editing.** The spec mentioned this as a "future consideration," which is how scope creep enters a project without anyone noticing. I pushed to remove it from the conversation entirely. If users need to edit, they'll do it in PowerPoint, where they already know the interface and where we don't have to build anything.
**Discoverability.** The instinct is always to promote new features, to put them in front of users and see what happens. I argued for the opposite. The feature would live behind a single button on a single dashboard, with no promotion and no AI assistant suggestions. We're validating the pattern with one customer. If it works, we'll scale discoverability. If it doesn't, we'll have kept our failure cheap.
What remained was the smallest thing that could validate the hypothesis: a button that generates a four-slide PowerPoint deck from hardcoded templates and AI-written summaries.
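The shape of that MVP is small enough to sketch in a few lines. This is an illustration only: the slide titles, dashboard names, and `write_summary` stub are invented here, and the real AI synthesis step would be an LLM call over the dashboard's underlying data.

```python
from dataclasses import dataclass

# Hardcoded slide templates for the single customer. The titles and the
# dashboard each slide pulls from are illustrative, not the real deck.
SLIDE_TEMPLATES = [
    ("Executive Summary", "health_overview"),
    ("Adoption Trends", "adoption_dashboard"),
    ("Risks & Escalations", "risk_dashboard"),
    ("Next Steps", "action_items"),
]

@dataclass
class Slide:
    title: str
    chart_source: str  # which dashboard the slide's visual is exported from
    summary: str       # AI-written narrative for the slide

def write_summary(dashboard_id: str) -> str:
    """Stand-in for the AI synthesis step."""
    return f"Auto-generated summary of {dashboard_id}."

def generate_deck() -> list[Slide]:
    """The entire feature behind the button: no configuration,
    no preview, just four slides from fixed templates."""
    return [
        Slide(title=t, chart_source=d, summary=write_summary(d))
        for t, d in SLIDE_TEMPLATES
    ]

deck = generate_deck()
assert len(deck) == 4  # always exactly the four hardcoded slides
```

Everything that would normally be configurable, the template, the slide order, the dashboard sources, is a constant. That constraint is the design.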
Why less is harder
Cutting scope sounds like the easy path. It's not.
Every feature I cut was a conversation I had to have. Preview felt obvious to stakeholders, so I had to explain why obvious isn't the same as validated. In-product editing had already been mentioned in the spec, so removing it meant pushing back on documented assumptions. Minimal discoverability goes against every instinct product teams have about launching features.
Minimal design requires you to articulate what you're trading away and why the trade is worth it. You have to be precise about what you're learning and how you'll know if you're wrong. You have to resist the pull of "while we're at it" thinking, which is how small projects become big ones.
Building more is often the path of least resistance. Building less requires conviction.
What happens next
The MVP is scoped and designed. Now we find out if the bet pays off.
**What success looks like:** The customer generates their weekly deck on their own, prep time drops below 15 minutes, and support involvement stays at zero for four consecutive weeks.
**What failure looks like:** Abandonment after two attempts. Decks that require 30+ minutes of editing. Users reverting to manual creation.
**What we're learning either way:** Do users want all four slides or just a subset? Do they edit text more than visuals? What do they add manually after generation? Each answer shapes what the generalized version looks like, if we build one.
The version I'm not building yet
If the MVP validates, we'll need to generalize. I've already explored what that looks like, because part of designing a minimal first step is knowing what the second step might be.
The core challenge is mapping: how do customers specify which dashboards feed into which slides when both sides are flexible?
| Approach | Mechanism | Trade-off |
|---|---|---|
| Template-driven | Templates define required dashboard types upfront, and users pick from compatible options | Simple for users but rigid for us, requiring a maintained taxonomy of template-to-dashboard relationships |
| AI-assisted | Users select dashboards and templates independently, then the AI proposes a mapping for review | Flexible and aligned with our product direction, but complex to build and dependent on AI quality |
| Manual mapping | Users drag and drop dashboards onto specific slides | Maximum control but high cognitive load, and doesn't leverage the AI that's supposed to differentiate us |
My recommendation is AI-assisted mapping. It's the only approach that advances our assistant-first bet rather than contradicting it. But that recommendation stays on the shelf until we've validated that the core pattern works.
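One way the AI-assisted flow could work is worth sketching: the system proposes a slot-to-dashboard mapping and the user reviews it before generation. Everything here is hypothetical; a naive token-overlap heuristic stands in for the model, and the slot and dashboard names are invented.

```python
from typing import Optional

def propose_mapping(slots: list[str],
                    dashboards: list[str]) -> dict[str, Optional[str]]:
    """Propose a slide-slot -> dashboard mapping for user review.

    Token overlap stands in for the model here; in a real build,
    an LLM would score the candidates. Each proposal is only a
    suggestion the user can override before generating the deck.
    """
    def score(a: str, b: str) -> int:
        return len(set(a.lower().split()) & set(b.lower().split()))

    proposal: dict[str, Optional[str]] = {}
    remaining = list(dashboards)
    for slot in slots:
        best = max(remaining, key=lambda d: score(slot, d), default=None)
        if best is not None and score(slot, best) > 0:
            proposal[slot] = best
            remaining.remove(best)
        else:
            proposal[slot] = None  # surfaced to the user as "needs a choice"
    return proposal

mapping = propose_mapping(
    ["Adoption Trends", "Risk Overview"],
    ["Risk Overview Dashboard", "Product Adoption Dashboard"],
)
```

The key design choice is that unmapped slots come back as `None` rather than being force-filled: the AI proposes, the user decides.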
Key learnings
I started this project skeptical that it should exist. The strategy said we were moving away from dashboards, and here we were building dashboard export. It felt like backsliding.
What I've learned is that strategic alignment isn't binary. This feature contradicts our stated direction and unblocks a critical customer. Both are true. The skill is holding that tension long enough to find the version that serves the customer without derailing the strategy.
For this project, that version turned out to be very small. No configuration, no preview, no editing, no promotion. Just enough to learn whether the pattern has legs.
Senior design work isn't always about building sophisticated systems. Sometimes it's about having the conviction to build less, the clarity to explain why, and the patience to let the results tell you what to do next.
I shipped a single button. That was the design.