
Product Thinking for Engineers

Why the best engineers think like product managers — and why most product managers think like project managers

Sathyan · 19 min read

A product manager walks into sprint planning with a request: "We need a dashboard."

The engineers estimate two weeks. Six months later, the dashboard has gone through four redesigns. The scope has ballooned to include twelve custom chart types, three export formats, and a filtering system that rivals Elasticsearch. The team has burned through two frontend developers and a backend engineer who quietly transferred to another team mid-project.

The original problem — the VP of Sales couldn't see which accounts were at risk — got buried somewhere around week three, when the conversation shifted from "what does the VP need?" to "what should the dashboard do?"

Nobody went back to the VP and asked.

Meanwhile, a junior engineer on the team got tired of hearing about the dashboard in standup. One afternoon, she wrote a SQL query that pulled at-risk accounts based on three signals — declining usage, open support tickets, and contract renewal date — piped the results into a Slack message, and scheduled it to run every Monday morning. Took her about two hours.

The dashboard project was killed in a reorg eight months later. The SQL-to-Slack script is still running. The VP of Sales still calls it "the Monday report" and refers to it in quarterly reviews. Three years and counting.
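A sketch of what that two-hour script might look like. Everything here is illustrative, not the engineer's actual code: the `accounts` schema, the three-signal thresholds, and the Slack webhook are all assumptions.

```python
# Sketch of the "Monday report": pull at-risk accounts from three signals
# and format them for Slack. The accounts table, column names, and
# thresholds are hypothetical.
import sqlite3


def at_risk_accounts(conn):
    # Three signals: declining usage, open support tickets,
    # and a contract renewal coming up within 90 days.
    return conn.execute("""
        SELECT name FROM accounts
        WHERE usage_trend < 0
          AND open_tickets > 0
          AND julianday(renewal_date) - julianday('now') BETWEEN 0 AND 90
        ORDER BY renewal_date
    """).fetchall()


def monday_report(conn):
    rows = at_risk_accounts(conn)
    lines = [f"- {name}" for (name,) in rows]
    return "At-risk accounts this week:\n" + "\n".join(lines)

# Delivery is one HTTP POST to a Slack incoming webhook (URL hypothetical),
# scheduled with cron for Monday morning:
#   requests.post(SLACK_WEBHOOK_URL, json={"text": monday_report(conn)})
```

The point isn't the code; it's that the whole solution fits in one screen because it answers one question.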

Twenty-two years of building software, and the most expensive bugs I've seen were never in the code. They were in the assumptions about what to build.

The Requirement Factory

Most engineering organizations operate as feature factories, and the uncomfortable truth is that everyone involved prefers it that way.

Product managers get to feel productive — look at this roadmap, look at these Jira tickets, look at all these features shipping. Engineers get clarity — just build what's on the list, no ambiguity, no hard conversations about strategy. Leadership gets progress reports that fit on a slide: forty-seven features shipped this quarter. The system is comfortable for everyone except the user, who gets forty-seven features and uses four.

I've worked in organizations where nobody on the leadership team could tell you which of those forty-seven features accounted for ninety percent of user engagement. Nobody had measured. The number was the accomplishment. The engineers knew, though — they always know. They could feel the codebase getting heavier with every sprint, the test suite running slower, the deploy pipeline stretching longer. The velocity trap in action: teams optimizing for motion instead of impact.

Steve Jobs was famous for asking a different question. When the Macintosh team presented feature ideas, he didn't ask "what should we add?" He asked "what problem are we solving, and what's the simplest path to solving it?" The difference sounds philosophical. In practice, it's the difference between a product with two hundred features nobody uses and a product with twenty features everyone depends on.

The deeper problem with the feature factory is the one engineers feel in their bones but rarely articulate: every feature shipped is a feature maintained. The factory's real cost is the engineering time spent maintaining things that shouldn't exist — and the opportunity cost of what that time could have built instead.

That dashboard from the opening? If it had shipped, it would still be in the codebase today. Twelve chart types. Three export formats. The filtering system. Test coverage. Bug fixes. Performance tuning as data grew. A new hire would need to understand it. A migration would need to account for it. Every quarter, someone would ask "do we still need this?" and nobody would be sure enough to delete it.

Ask your team: of the last twenty features shipped, how many are actively used? If nobody knows the answer, you're in a feature factory. The first step out is measuring outcomes, not output.

The difference between the two mindsets shows up in every decision a team makes:

| | Feature Factory | Product Thinking |
|---|---|---|
| Measures | Features shipped | Outcomes achieved |
| Asks | What should we build next? | What problem are we solving? |
| Treats roadmap as | List of commitments | Portfolio of hypotheses |
| Defines success as | Shipped on time | Users changed behavior |
| Engineering role | Implement the spec | Understand the job |

The maintenance burden compounds in exactly the way I wrote about in Systems That Last — designing for deletion is as important as designing for creation. The feature factory makes deletion almost impossible, because nobody tracks which features matter and which don't. They all look the same in the codebase: deployed, tested, maintained, permanent.

The Question Nobody Asks

There's a concept from Clayton Christensen that changed how I think about product decisions. He studied why people buy milkshakes.

The fast-food chain in his study had done the obvious things — surveyed customers, asked about flavor preferences, tested thicker and thinner versions, added mix-ins. Sales didn't move. Then Christensen's team did something different. They stood in the restaurant at 7 AM and watched who was buying milkshakes and when.

Turns out, nearly half of milkshake sales happened before 8 AM. The buyers were commuters. They weren't buying a milkshake because they wanted a milkshake. They were buying something that kept them occupied during a boring drive, could be consumed with one hand, fit in a cup holder, and lasted the whole commute. The milkshake was competing against bagels and bananas, not against other milkshakes.

Once you see that, the "product improvements" change completely. Thicker consistency so it lasts longer. Smaller straw. Chunks of fruit for surprise. None of which show up if you ask "what flavors do customers want?"

Christensen called this Jobs-to-Be-Done: people don't buy products — they hire them to do a job in their lives.

The same thinking applies to engineering, and it's where the gap between a good engineer and a great one often lives. When a product manager asks for an export button, the obvious response is "what format?" The better response is "what's the user actually trying to accomplish?" The job might not be "export data." The job might be "show my manager that the project is on track" — which means a shareable dashboard link solves the problem better than a CSV export. Or the job might be "get this data into another system" — which means an API integration is the right answer, and the export button is a workaround for a missing feature.

In 2006, I was building an electronic health records system — a multi-tenant SaaS platform on SQL Server, serving clinics across the United States. Doctors were spending absurd amounts of time scrolling through thousands of diagnosis and procedure codes to find the ones they used every single day. The same cardiologist, the same ten ICD codes, the same search ritual, every patient visit.

The engineering question in the room was whether to use a faster search widget or a better autocomplete dropdown. The product question was different: how do I save the doctor's time?

That reframe changed everything. Instead of optimizing the search, we eliminated the need to search. We built pre-configured templates — diagnosis plans, procedure plans, medication plans — organized by specialty, by disease, by patient age. A diabetes monitoring visit pre-populated the common diagnosis codes, lab orders, and medication adjustments. An annual physical loaded age-appropriate screenings. The doctor's most frequently used codes learned from their patterns and floated to the top.
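The "most frequently used codes float to the top" behavior is, at its core, a frequency ranking merged with a specialty template. A minimal sketch, with hypothetical function and field names rather than the original system's:

```python
from collections import Counter


def suggest_codes(template_codes, usage_history, top_n=10):
    """Rank a doctor's codes by how often they actually use them.

    template_codes: specialty defaults (e.g. a cardiology template).
    usage_history: codes the doctor has picked in past visits.
    The doctor's personal favorites float to the top; the template
    fills in the rest. All names here are illustrative.
    """
    freq = Counter(usage_history)
    personal = [code for code, _ in freq.most_common(top_n)]
    rest = [code for code in template_codes if code not in personal]
    return personal + rest
```

The win isn't algorithmic sophistication; it's that the doctor stops searching at all for the visits that make up most of their day.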

The same instinct showed up on the operations side. Every time we onboarded a new clinic, someone sat overnight running SQL scripts to provision the system — seeding the database, configuring specialties, mapping procedure codes. I built a simple Windows Forms application: one screen, pick the new clinic's name, choose which existing clinic to copy configuration from, click a button. Two hours instead of an overnight shift. The question was the same both times — not "how do I optimize the process?" but "how do I make the pain disappear?"

Even the largest EHR systems at the time had nothing like our templates. Our competitive advantage — at a small company, with a small team — came from asking the right question.

The engineer who understands the job builds a different system than the engineer who implements the spec. Different data model. Different API surface. Different architecture. And usually simpler, because understanding the actual problem tends to eliminate the requirements that were really guesses.

The junior engineer from the opening understood the job instinctively. The VP of Sales needed to know which accounts to worry about on Monday morning — not a dashboard. A Slack message answered that job. Twelve chart types did not.

Next time you pick up a ticket, try asking: "What happens if we don't build this?" If nobody can articulate the cost of inaction, the feature probably shouldn't exist.

Roadmaps Are Hypotheses

Every engineer I know has been demoralized by a roadmap that changed three times in a quarter. Features get killed, priorities shuffle, six weeks of work gets shelved. It feels like chaos.

Sometimes it is. But sometimes the roadmap changed because reality changed — and that's the system working, not failing. The problem is that most teams frame roadmaps as commitments. They're not. They're bets.

| Myth | Reality |
|---|---|
| The roadmap is a promise | It's a portfolio of hypotheses, ranked by confidence |
| It reflects what users want | It reflects what the team believes users want — untested until shipped |
| Completing it means success | Shipping features nobody uses is the most expensive kind of failure |

The honest alternative is Now / Next / Later. "Now" is committed — high confidence, production-grade. "Next" is high-confidence bets — the solution might change. "Later" is exploration — the team is still validating whether the problem is worth solving.

Knowing which category you're working on changes how you build. A "Now" feature gets production-grade architecture. A "Later" exploration gets a prototype and a measurement plan. Treating everything as "Now" is how teams over-engineer experiments and under-invest in foundations.

The more useful habit is asking "what assumption are we testing with this feature?" Every item on a roadmap sits on a stack of assumptions — that the problem exists, that users care, that this solution addresses it. If the PM can't name the assumption, the feature is a guess wearing a deadline. Treat the roadmap like a venture portfolio: some bets fail, and that's the system working. The waste happens when nobody tracks the returns.

What Great Products Actually Got Right

The most interesting thing about iconic products is that the product decisions and the engineering decisions were usually the same decision.

The iPhone wasn't a feature list. When Apple built the first iPhone, Steve Jobs made a decision that looked insane from a conventional product perspective: build the entire stack. Custom ARM processor. Custom operating system. Custom touch interface. Custom distribution through a single app store. Every major technology company at the time was building phones on existing platforms — Symbian, Windows Mobile, licensed hardware. Apple built everything from scratch.

That decision only makes sense if you understand the product philosophy: seamless integration was the product. The goal wasn't to build a phone with good features. The goal was to control every pixel of the experience, from the moment you press the home button to the moment you close an app. You can't do that on someone else's operating system. The engineering decision to build a custom OS was the product strategy, expressed in code.

Slack was an accident that understood its job. Stewart Butterfield didn't set out to build a messaging app. He was building a game called Glitch, and the internal communication tool his team built during development turned out to be more valuable than the game itself. Butterfield killed the game and extracted the tool.

What made Slack different from the enterprise messaging tools that already existed — HipChat, Campfire, even email — was an obsessive focus on reducing friction. Every engineering decision in Slack's early days prioritized "make it easier" over "add more." The onboarding flow. The message input. The notification controls. The search. The integrations. Each one was engineered to remove one more reason not to use it. In an era of bloated enterprise software, Slack felt lightweight because the engineering team treated simplicity as a constraint, not a compromise.

Stripe made payments invisible. Stripe's product philosophy was radical in 2011: make payments disappear from the developer's consciousness. The engineering expression of that philosophy was a seven-line integration. That sounds like a marketing number, but it's an architectural decision. Every choice — the API shape, the error messages, the client libraries in every major language, the documentation that reads like a product in itself — served one goal: developer adoption through simplicity. Patrick Collison didn't just want payments to be easy. He wanted them to be a solved problem that developers never thought about again. The engineering is the product.

Figma bet the product on the browser. Building a high-performance design tool in the browser was technically ambitious and commercially strategic at the same time. Dylan Field's team chose WebGL and browser-based rendering when every other design tool was a native desktop application. That engineering decision — which required solving hard problems in real-time collaboration, rendering performance, and offline support — was simultaneously the product strategy. Browser-based meant no installation friction, which meant faster adoption. Real-time multiplayer meant designers could collaborate the way engineers collaborate in Google Docs. The technical choice created the distribution advantage and the collaboration moat that made Figma worth $20 billion.

The team that chose browser-based rendering for Figma made one decision that settled the technology, the distribution strategy, and the competitive moat — all at once. The best engineering decisions work like that.

The throughline across all four: the engineers who built these products weren't implementing specs handed to them by product managers. They were making product decisions through their technical choices. The architecture was the strategy.

The Engineer's Product Toolkit

None of these require permission from a product manager. All of them change what you do on Monday morning.

| Technique | The Idea | Try This Monday |
|---|---|---|
| Product Observability | Most teams track system health (p99, error rates) but not feature health (usage, adoption, drop-off). You're flying half-blind. | Add product observability alongside system metrics. Instrument feature usage, user flows, and time-to-value. |
| Painted Doors (Atlassian) | Put a button in the UI that tracks clicks but doesn't do anything yet. If nobody clicks, you just saved three sprints. | Before your next feature build, find the cheapest way to test demand — a fake button, a landing page, a manual process behind an automated interface. |
| PR/FAQ (Amazon) | Write the press release announcing the feature before building it. If the press release is hard to write, the value proposition isn't clear. | Before your next architecture doc, write one paragraph: who benefits and why they care. If you can't, stop and think. |
| Reverse Demo | Demo the user's current workflow without your feature. Watch the friction, the workarounds, the tab-switching. Build only what addresses those pain points. | In your next sprint review, show the problem before the solution. Anchor the team in what hurts. |
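A painted door needs almost no code, which is the point. A minimal sketch, assuming an in-memory event sink (a real team would route the click event to their analytics pipeline; the feature name is hypothetical):

```python
# Painted-door sketch: the button exists, the click is recorded,
# and the user sees "coming soon". No feature has been built yet.
clicks = []


def handle_click(user_id, feature="bulk-export"):
    # Each click is a demand signal, captured before any build cost.
    clicks.append((user_id, feature))
    return "Thanks for your interest! Bulk export is coming soon."


def demand_count(feature="bulk-export"):
    # The number that decides whether the feature gets built at all.
    return sum(1 for _, f in clicks if f == feature)
```

If `demand_count` stays near zero after a few weeks of traffic, the three sprints the feature would have cost go somewhere else.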

Your monitoring dashboard should answer two questions: "Is the feature working?" (system) and "Is anyone using it?" (product). If you only have the first, you're flying half-blind.
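One way to answer both questions from the same code path is to instrument features for usage alongside latency. A minimal sketch with an in-memory store; a real system would emit these as metrics or analytics events, and the names here are illustrative:

```python
import time
from collections import defaultdict

# Product signal: how often is each feature invoked?
feature_usage = defaultdict(int)
# System signal: how long does each invocation take?
feature_latency = defaultdict(list)


def instrument(feature_name):
    """Decorator that records both usage and latency per feature."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                # Count every invocation, even ones that raise.
                feature_usage[feature_name] += 1
                feature_latency[feature_name].append(
                    time.perf_counter() - start)
        return inner
    return wrap


@instrument("csv_export")
def export_csv(rows):
    # Stand-in feature: serialize rows of tuples to CSV text.
    return "\n".join(",".join(map(str, r)) for r in rows)
```

With both series on the same dashboard, "is it working?" and "is anyone using it?" become one chart instead of two arguments.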

When Engineers Push Back

Every engineer has sat in a planning meeting thinking "this is the wrong thing to build" and said nothing. The reasons are familiar: it's not my call, the PM owns the roadmap, I don't have the context, I'll just be seen as difficult.

The instinct to push back is usually right. The execution is usually wrong.

Pushback without alternatives is just complaining. "That's a bad idea" doesn't move anything forward, and it earns you a reputation as an obstructionist rather than a strategic partner. The engineers who earn product influence come with something better: "I think this feature won't solve the user problem, and here's why. Here's what I think will work instead, and here's how we could test both approaches cheaply before committing three months of engineering time."

"That's a bad idea" gets you labeled as difficult. "Here's a cheaper way to test whether it's the right idea" gets you invited to the next strategy meeting. The difference is the alternative.

The structural answer to the feature factory is shared ownership. The Spotify squad model — autonomous teams with both product and engineering accountability — works because when the same team owns both the "what" and the "how," the incentive structure changes. Nobody benefits from shipping features that don't move metrics when your team is measured on outcomes. The feature factory dissolves because building the wrong thing is everyone's problem, not just the PM's problem.

Even without structural change, there's one question that creates leverage in any planning session: "If we can only build one thing this quarter, what's the one thing that changes the trajectory?" I've used this in sessions where the initial list had fifteen features for four engineers. The math alone should have killed ten of them, but without the forcing function, the team would have tried all fifteen badly rather than three well. The trajectory question reframes "what can we fit?" into "what actually matters?" — and that shift is where engineers start influencing the product direction rather than just executing it.

Early in my career, I watched bad product decisions from the sideline and felt powerless. The shift happened when I realized that engineers who understand product thinking don't just build better systems — they get invited into the rooms where the decisions are made. Product thinking is career leverage. The engineer who can say "here's why this architecture supports the product strategy" is more valuable than the engineer who can only say "here's how I implemented the spec." Both matter. One gets you a seat at the table.

Architecture Is Product Strategy

Every architecture decision is a product bet, whether the team thinks about it that way or not.

Choosing a monolith is a bet that your team is small enough to move fast without service boundaries. Choosing microservices is a bet that you need organizational autonomy more than you need deployment simplicity. Choosing your data model is a bet about what questions the business will need to answer in two years. Choosing your API contract is a bet about who your consumers are and what they'll need.

These are product decisions wearing engineering clothes. The team that recognizes this builds systems that enable the right product. The team that doesn't builds systems that accidentally constrain the product — and then spends the next three years working around their own architecture.

The boring system from Systems That Last — the one that ran on SQL Server and a message queue for over a decade — lasted because the engineering decisions were aligned with the product reality. Small team, clear domain, stable requirements, no need for distributed complexity. The exciting system that got decommissioned was built by engineers optimizing for technical interest rather than product fit. The architecture was impressive. The product needed something simpler.

The same principle showed up in AI Agents in Production. The agent teams that actually shipped asked a product question before making any architecture decision: "Does this task actually require autonomous decision-making, or would a database query and a well-designed form handle it better?" That question — does the problem justify the complexity? — is product thinking applied to technical architecture. Most of the time, the answer was "use the simpler tool." The teams that shipped were the ones willing to hear that answer.

Conway's Law connects all of this. Your organization's communication structure shapes your architecture, which shapes your product. If three separate teams build three separate backends behind a shared gateway, you get three products wearing a trenchcoat — I've seen this happen twice, and both times the unified "platform" was really three independent systems with a shared login page.

If you want an integrated product, you need integrated teams. If you want autonomous services, you need autonomous teams. The architecture follows the org chart whether you plan for it or not. Changing the technology without changing the org structure just produces the same architecture in a newer framework. I watched a company attempt a complete rewrite — new language, new database, new everything — while keeping the same three-team structure that produced the original system. Eighteen months later, they had the same architecture in Python instead of .NET. Same boundaries, same coupling, same pain points. Different syntax.

Your architecture is your product strategy, whether you designed it that way or not. Every technical decision constrains what the product can become. The engineers who understand this build systems that enable the right product — not just the requested features.

This is where senior engineers and engineering leaders create the most impact. Not in writing code, but in making the structural decisions — team topology, data model, service boundaries, API contracts — that determine what the product can and can't become. Product-aware architecture is the highest-leverage skill a senior engineer can develop, and it's the one that almost nobody teaches explicitly.

The Script That's Still Running

That SQL-to-Slack script from the opening? Still running. Three years and counting. The dashboard that was supposed to replace it was killed in a reorg, and nobody noticed.

The whole skill is right there. Asking "why this?" before "how this?" Every engineer already does it at the technical level — you wouldn't build a distributed system without first asking why the monolith isn't enough. Product thinking is the same rigor, applied one layer up. Why this feature? Why now? Why this solution instead of a simpler one? What happens if we don't build it at all?

The engineers who build the systems that last — the boring ones that outlive the exciting ones — are the ones who understood the product, not just the spec. They knew what problem the system was solving, which meant they could make the hundreds of small decisions that accumulate into a system that works, without checking with a product manager every time.

Twenty-two years in, the code I'm proudest of was always the simplest — the code that solved the right problem and left everything else unbuilt. Everything else is maintenance.
