There's a system at a company I worked with that runs on SQL Server, a message queue, and a handful of C# services behind a load balancer. It was built in 2014. Nobody has given a conference talk about it. Nobody wrote a blog post. The on-call rotation for it is the quietest in the entire organization.
It processes millions of transactions a month. The last major incident was a disk space alert that resolved itself after a cron job kicked in. The team that built it has mostly moved on to other companies. The system hasn't noticed.
Down the hall — metaphorically, because this was all remote — another team built a system around the same time. Microservices from day one. Event sourcing for what was essentially CRUD. A graph database for data that turned out to be relational. A message broker that required a dedicated engineer just to keep the cluster healthy.
The architecture diagram looked impressive on a whiteboard. In production, it looked like a distributed system designed by a team that wanted to build a distributed system. Every feature took three times longer than expected because it touched four services and two message formats. Debugging meant correlating logs across eight containers. The graph database's query language had a learning curve that new hires never fully climbed.
It was decommissioned in 2021, rewritten as a monolith with Postgres, and now runs on a fraction of the infrastructure.
The boring system outlived the interesting one by years. It will probably outlive the rewrite too.
Here's what I find interesting about the boring system. The engineers who built it weren't less talented. Some of them were the strongest engineers in the organization. They chose SQL Server because they knew SQL Server. They chose a message queue because they'd operated message queues before. They chose a monolith-first architecture because the team was small and they wanted to move fast without coordinating deploys across eight repositories.
Every decision was the least interesting option that solved the problem. Nobody was excited about any of it. Every decision was also the one that aged the best.
Twenty-two years of building software has taught me one thing more reliably than any other: the systems that endure are built on decisions that feel unremarkable at the time. The exciting choices — the ones that generate blog posts and conference talks — tend to be the ones you'll undo later.
## The Shelf Life of a Decision
Technology decisions decay at different rates.
Your data model will outlive almost everything else in the system. The schema you design in the first six months will still be shaping query patterns, API responses, and migration headaches a decade later. Frameworks come and go on a three-to-five year cycle — long enough to feel permanent, short enough that you'll rewrite the frontend at least twice. CSS libraries barely last eighteen months before something newer and shinier shows up.
Most teams spend their deliberation time backwards.
They'll debate a frontend framework for weeks — a decision they'll revisit in three years regardless — while the data model evolves through a series of ad hoc decisions that nobody treats as architecture.
That's the part people miss. The data model isn't designed once in an afternoon and then locked in. It's designed once in an afternoon and then mutated every week — a new column here, a new table there, a JSON blob because the sprint is ending Friday and there's no time to model it properly. Core tables get widened. Relationships get bolted on. Storage patterns shift because someone needed a quick fix and ALTER TABLE was faster than thinking it through. Each change is small. The accumulation is a schema that reflects eighteen months of on-the-fly decisions rather than any coherent view of the domain.
I've watched teams spend a full sprint evaluating whether to use Tailwind or styled-components, then let the database schema evolve through hundreds of unplanned migrations — each one reasonable in isolation, collectively incoherent. The CSS decision was reversible in a week. The schema decisions compounded into a structure that nobody designed and everybody depended on.
Jeff Bezos's framing is useful here: one-way doors and two-way doors. A one-way door is a decision that's expensive or impossible to reverse. A two-way door is one you can walk back through without much pain. The amount of time you spend deciding should be proportional to how hard the decision is to undo.
Most technical decisions are two-way doors. Pick a library, try it, swap it out if it doesn't work. But a handful of decisions — your data model, your primary database, your service boundaries, your public API contracts — are one-way doors. Once you walk through, the cost of going back is measured in months and rewrites.
Those are the decisions that deserve the architecture review, the whiteboard session, the "let's sleep on it." And they're exactly the ones that most teams rush through because the project is new and everything feels reversible.
| Decision | Approximate Half-Life | Reversibility |
|---|---|---|
| Data model / schema | 7-10+ years | Very hard — everything downstream depends on it |
| Primary database | 5-10 years | Hard — data migration is always painful |
| Service boundaries | 5-7 years | Hard — APIs, teams, and deploys are coupled to them |
| Programming language | 5-10 years | Hard — rewrite territory |
| API design (public) | 5+ years | Hard — external consumers depend on contracts |
| Backend framework | 3-5 years | Medium — contained within services |
| Frontend framework | 2-4 years | Medium — usually isolated to one layer |
| CSS / UI library | 1-2 years | Easy — mostly cosmetic |
| Build tooling | 1-3 years | Easy — no runtime impact |
| Package dependencies | Months to years | Easy — usually swappable |
The decisions hardest to reverse deserve the most deliberation. The decisions easiest to reverse deserve the least. Simple in principle, almost universally ignored in practice.
Here's a practical test: before your next architecture discussion, ask "how would we undo this decision in two years?" If the answer involves a rewrite, you're looking at a one-way door. Treat it accordingly. If the answer is "swap out the library," that's a two-way door. Make a decision and move on — perfectionism on reversible choices is just procrastination.
## Boring Technology Wins
Every few years, a new database shows up to kill Postgres. Mongo was going to kill it. CouchDB was going to kill it. Riak was going to kill it. Cassandra-for-everything was going to kill it.
Meanwhile, Postgres kept adding features — JSONB when people wanted document storage, full-text search when people reached for Elasticsearch, table partitioning when people said it couldn't scale, logical replication when people needed multi-region. Every "Postgres killer" targeted a gap that Postgres eventually filled, while Postgres never broke the things that already worked.
Postgres didn't win by being exciting. It won by being boring on purpose — stable, well-documented, broadly understood, and relentlessly improved without breaking what already worked.
You can find a Postgres expert in any city. You can find a Stack Overflow answer for any Postgres error. You can find a managed hosting option from any cloud provider. That ecosystem — the hiring pool, the documentation, the community knowledge — is the real moat, and it's one that exciting newcomers almost never match.
The same pattern plays out across the stack.
REST over GraphQL for most teams. GraphQL solves real problems — deeply nested data graphs, multiple client types with different data needs, bandwidth-constrained mobile applications. Facebook built it because Facebook needed it.
For a team of five to twenty people with one frontend and one backend, REST is simpler to operate, cache, monitor, and debug. HTTP caching works out of the box. Rate limiting is straightforward. Every developer who's ever built a web application understands how REST endpoints work.
The operational overhead of GraphQL — schema stitching, N+1 query prevention, caching complexity, error handling that actually helps — is a tax that only pays off at a specific scale. Most teams aren't at that scale and never will be. A well-designed REST API with clear resource naming and consistent pagination will serve you for years without requiring every new hire to learn a query language.
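The "consistent pagination" point is concrete enough to sketch. Here's a minimal, hypothetical envelope — the `items`/`next`/`prev` shape and the `/users` path are illustrative conventions, not a standard — showing how little machinery a predictable REST listing needs:

```python
# A sketch of a consistent pagination envelope for a REST API.
# The field names and URL shape are assumptions, not a spec.

def paginate(items, base_path, offset=0, limit=25):
    """Build a pagination envelope with stable next/prev links."""
    page = items[offset:offset + limit]

    def link(new_offset):
        return f"{base_path}?offset={new_offset}&limit={limit}"

    return {
        "items": page,
        "total": len(items),
        "next": link(offset + limit) if offset + limit < len(items) else None,
        "prev": link(max(offset - limit, 0)) if offset > 0 else None,
    }

print(paginate(list(range(60)), "/users", offset=25))
```

Because the envelope is the same on every endpoint, clients write their paging logic once — which is exactly the kind of boring consistency the section is arguing for.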
Monoliths over premature microservices. Microservices solve a real problem: organizational scaling. When you have fifty engineers who can't all work on the same codebase without stepping on each other, service boundaries give teams autonomy. That's the benefit.
The cost is everything else — network failures between services, version skew, distributed tracing, contract testing, deployment orchestration, shared library management, and the cognitive overhead of understanding how ten services interact.
One team working on a system split into eight microservices has all of that operational overhead with none of the organizational benefit. You've bought the complexity without the payoff.
Worse, you've turned every simple change into a cross-service coordination problem. "Add a field to the user profile" becomes a four-service, three-PR, two-deployment affair. I've worked on teams where deploying a one-line change required updating three services, running integration tests across all of them, and coordinating a sequential rollout. A monolith would have handled the same change in a single commit and a single deploy.
There's an even more common variant, and it's arguably worse. I know teams that run "microservices" where every service shares the same database schema. No bounded contexts. No data ownership. No cross-service API calls — because why bother when every service can just query the same tables directly? They've split the codebase into separate deployables but left the data model monolithic. The result is the worst of both worlds: the operational overhead of distributed services — separate repos, separate CI pipelines, separate deployments, separate on-call rotations — with none of the independence that makes microservices worth the cost. Every service is still coupled to every other service through the shared schema. A migration on one table can break five services at once.
If your services all share a database, they aren't microservices. They're a monolith with extra network hops. A modular monolith — one deployable with clear internal module boundaries, explicit interfaces between domains, and enforced separation at the code level — gives you the organizational clarity of bounded contexts without the operational tax of distributed systems. You can always extract a module into a service later, when the team size or traffic patterns actually demand it. Going the other direction — collapsing pseudo-microservices back into a monolith — is a painful migration that nobody wants to fund.
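"Enforced separation at the code level" can be as lightweight as a CI check that inspects imports. A sketch, under assumed conventions — the `billing`/`accounts` domain names, the allowed-dependency map, and the idea that only a domain's `api` package is public are all hypothetical:

```python
# A sketch of module-boundary enforcement for a modular monolith:
# scan a module's imports and flag any that reach into another
# domain's internals instead of its declared public interface.
import ast

# Hypothetical rules: billing may use accounts' public API; nothing
# else crosses domain lines.
ALLOWED = {
    "billing": {"accounts.api"},
    "accounts": set(),
}

def boundary_violations(module, source):
    """Return imports in `source` that cross a module boundary illegally."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        targets = []
        if isinstance(node, ast.Import):
            targets = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            targets = [node.module]
        for target in targets:
            domain = target.split(".")[0]
            # Only police imports of known domains other than our own.
            if domain in ALLOWED and domain != module:
                if target not in ALLOWED.get(module, set()):
                    violations.append(target)
    return violations

# Reaching into accounts' internals is flagged; its public API is not.
print(boundary_violations(
    "billing",
    "from accounts.internal import db\nimport accounts.api",
))
```

Run against every module in CI, a check like this keeps the "modular" in modular monolith honest — the boundaries hold even when a deadline tempts someone to shortcut them.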
To be clear — microservices done well are genuinely powerful. The argument here is against premature adoption, not against the pattern itself.
When microservices work, they work because the prerequisites are in place. Amazon's famous Bezos mandate — every team exposes its data through service interfaces, no exceptions — succeeded because it was paired with autonomous two-pizza teams, each owning their service end to end, from schema to deployment to on-call. Netflix runs over a thousand microservices, but they also built an entire internal platform (Zuul, Eureka, Hystrix) to manage the complexity, and they had the engineering headcount to staff it. Uber moved from a monolith to a domain-oriented microservice architecture as they scaled past hundreds of engineers — but they also wrote extensively about the pain of their early service sprawl and the subsequent effort to reorganize services around domain boundaries rather than arbitrary technical splits.
The pattern is consistent: microservices succeed at organizations that have the team size to give each service a true owner, the platform investment to handle cross-cutting concerns (discovery, observability, deployment), and — critically — bounded contexts that reflect real domain boundaries rather than arbitrary code splits. Each service owns its data, defines its API contract, and can be deployed independently without coordinating with five other teams.
That's a high bar. Most teams below fifty engineers don't clear it. And there's no shame in that — a well-structured monolith with clean module boundaries will outperform a poorly implemented microservice architecture every time, and it's a far better starting point for the day you do need to extract services.
Message queues over Kafka for most workloads. SQS, RabbitMQ, Redis Streams — these do "process this job later" without offset management, consumer group rebalancing, partition strategies, or — in older Kafka deployments — a ZooKeeper cluster to keep healthy.
Kafka is exceptional for event streaming, log aggregation, and systems where ordering and replay matter. If you're building a financial ledger or a real-time analytics pipeline, Kafka earns its complexity.
For "send an email after the user signs up," it's a forklift where you needed a hand truck.
I've seen teams adopt Kafka because they anticipated needing event replay and stream processing. Two years later, they had a Kafka cluster consuming $4,000 a month in infrastructure, a dedicated Kafka engineer on the team, and exactly zero use cases that required replay or ordering guarantees. They migrated to SQS in a weekend.
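The "process this job later" workload really is this small. A toy sketch of the semantics a simple queue gives you — at-least-once delivery with retries and a dead-letter list — with all names illustrative:

```python
# A minimal in-memory sketch of "process this job later" with retry
# and dead-letter semantics. Illustrative only -- a real system would
# use SQS, RabbitMQ, or similar rather than this toy.
from collections import deque

class JobQueue:
    def __init__(self, max_attempts=3):
        self.jobs = deque()
        self.dead_letter = []          # jobs that exhausted their retries
        self.max_attempts = max_attempts

    def enqueue(self, job):
        self.jobs.append({"job": job, "attempts": 0})

    def work(self, handler):
        """Drain the queue, retrying failed jobs up to max_attempts."""
        while self.jobs:
            entry = self.jobs.popleft()
            entry["attempts"] += 1
            try:
                handler(entry["job"])
            except Exception:
                if entry["attempts"] < self.max_attempts:
                    self.jobs.append(entry)         # retry later
                else:
                    self.dead_letter.append(entry)  # keep for inspection

sent = []
q = JobQueue()
q.enqueue({"type": "welcome_email", "user": "ada@example.com"})
q.work(lambda job: sent.append(job["user"]))
print(sent)
```

Notice what's absent: no partitions, no offsets, no ordering guarantees — because "send an email after signup" doesn't need any of them.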
The pattern behind all of these choices is the same. Boring technologies have been in production long enough that the failure modes are known, the documentation is thorough, the talent pool is deep, and the ecosystem of supporting tools is mature. Exciting technologies trade all of that institutional knowledge for a feature advantage that usually narrows over time.
Dan McKinley wrote about this as "innovation tokens" — every organization has a limited budget for novelty, and you should spend yours on the things that actually differentiate your product. If your business advantage is in your recommendation algorithm, spend your innovation token there. Use Postgres for the database. Use REST for the API. Use a boring message queue for async work. Save the interesting choices for the places where interesting actually matters.
Most teams burn their innovation tokens on infrastructure. They pick a novel database, an exotic message bus, and a bleeding-edge framework, then build a completely generic CRUD application on top of it.
The novelty is in all the wrong places — in the parts that should be invisible, not in the parts that face the customer.
Before adopting a technology, search for "[technology] production issues" and "[technology] migration away from." The results will tell you more about the real operational cost than any getting-started guide ever will.
## Your Data Model Is Your Architecture
Of all the decisions that compound over time, the data model compounds the hardest.
The schema shapes everything downstream. API shape, query patterns, join complexity, migration difficulty, reporting capabilities, performance characteristics — all of it bends around the data model.
Change the framework and you rewrite a service. Painful, but scoped. Change the data model and you rewrite the system — every service, every query, every report, every integration that touches that data. The blast radius is total.
The cruel irony: teams design the data model when their domain understanding is at its weakest.
Week two of a project, everyone's still figuring out the nouns and verbs of the domain, and that's when the tables get created. Six months later, when the team actually understands the business logic, the schema is already load-bearing. Every API endpoint, every report, every integration — all of it is shaped by decisions made when the team knew the least.
This is why designing for evolution matters more than designing for correctness. You won't get the model right the first time. Accept that. The goal is to make it changeable.
A good data model isn't one that perfectly captures the domain on day one. It's one that can absorb what you learn about the domain on day ninety without a migration that takes a weekend and three engineers.
Normalize more aggressively than you think you should — denormalization is easy to add later, re-normalization is surgery. Prefer narrow tables over wide ones. Use junction tables instead of arrays or comma-separated strings.
Put created_at and updated_at on everything — you will always, eventually, need to know when something changed. Version important entities so you can track their history without archeological digs through log files. Add soft deletes where business logic might need to recover data. These columns cost nothing today and save migrations later.
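In SQL, those cheap-now, valuable-later columns look something like this — sketched here in SQLite, with table and column names that are illustrative rather than prescriptive:

```python
# A sketch of "audit columns on everything" plus soft deletes,
# using SQLite for portability. Names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE accounts (
        id          INTEGER PRIMARY KEY,
        email       TEXT NOT NULL,
        created_at  TEXT NOT NULL DEFAULT (datetime('now')),
        updated_at  TEXT NOT NULL DEFAULT (datetime('now')),
        deleted_at  TEXT  -- NULL means live; a timestamp means soft-deleted
    );
""")
conn.execute("INSERT INTO accounts (email) VALUES ('ada@example.com')")
conn.execute("INSERT INTO accounts (email) VALUES ('alan@example.com')")

# A "delete" just stamps deleted_at, so the row stays recoverable.
conn.execute(
    "UPDATE accounts SET deleted_at = datetime('now') "
    "WHERE email = 'alan@example.com'"
)

live = conn.execute(
    "SELECT email FROM accounts WHERE deleted_at IS NULL"
).fetchall()
print(live)
```

The `deleted_at IS NULL` filter is the whole cost of soft deletes at query time — a small tax that buys you recoverability and an audit trail you'll otherwise wish you had.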
Here's a pattern I've seen play out half a dozen times. A team stores user preferences as a denormalized JSON blob on the users table. Works great for the first year. The code is simple. Reads are fast. Everyone's happy.
Then marketing wants to query "which users have notifications enabled." Querying inside a JSON blob is possible but ugly and slow without the right indexes. Then product wants A/B testing on default settings — now you need to update specific keys inside the blob without clobbering the rest. Then compliance needs an audit trail of preference changes, and you realize you have no history at all, just the current state.
Each of these requests is painful against a JSON blob and trivial against a normalized user_preferences table with columns for user_id, preference_key, preference_value, and updated_at. Marketing gets a simple WHERE clause. Product gets atomic updates. Compliance gets a history table with one trigger.
The JSON blob was faster to build. The normalized table was faster to live with.
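The `user_preferences` shape described above, sketched in SQLite: marketing's "which users have notifications enabled" becomes a plain WHERE clause instead of JSON spelunking. The seed data is obviously made up.

```python
# The normalized user_preferences table from the text, sketched in
# SQLite. Data is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE user_preferences (
        user_id          INTEGER NOT NULL,
        preference_key   TEXT NOT NULL,
        preference_value TEXT NOT NULL,
        updated_at       TEXT NOT NULL DEFAULT (datetime('now')),
        PRIMARY KEY (user_id, preference_key)
    );
""")
conn.executemany(
    "INSERT INTO user_preferences (user_id, preference_key, preference_value) "
    "VALUES (?, ?, ?)",
    [
        (1, "notifications", "on"),
        (1, "theme", "dark"),
        (2, "notifications", "off"),
        (3, "notifications", "on"),
    ],
)

# Marketing's question: no JSON functions, no special indexes required.
enabled = conn.execute(
    "SELECT user_id FROM user_preferences "
    "WHERE preference_key = 'notifications' AND preference_value = 'on' "
    "ORDER BY user_id"
).fetchall()
print([row[0] for row in enabled])  # [1, 3]
```

Product's atomic updates fall out of the composite primary key (one row per user per key), and compliance's audit trail is a history table plus a trigger on this one — each later requirement lands on structure that's already there.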
The delta between "faster to build" and "faster to live with" is where most architectural debt accumulates. The people making the schema decision are optimizing for the first week. The people paying the cost are working in month eighteen. They're rarely the same people, which is why the lesson never sticks without institutional memory.
If you get the data model right, you can change everything else. If you get it wrong, changing everything else won't save you.
## Conway's Law Is Not a Suggestion
In 1967, Melvin Conway made an observation that remains the most accurate predictor of system architecture ever published: organizations produce designs that mirror their communication structures.
Nearly sixty years later, it's still the most underappreciated idea in software architecture. It sounds obvious. It's also almost impossible to fight.
If you have four teams, you'll get a four-service architecture — regardless of what the whiteboard says. If the frontend team and the backend team don't talk, you'll get an API that serves the backend's mental model and a frontend that works around it. If the database team operates independently, you'll get a data model that optimizes for DBA convenience rather than application access patterns.
The architecture on the whiteboard converges toward the org chart whether you planned for it or not.
I watched a company attempt to build a unified platform while organized into three product teams, each with their own backend engineers. The target architecture called for a single API gateway serving all three products. What they shipped was three separate APIs behind a gateway that was really just a router. The architecture perfectly mirrored the org chart — three independent teams produced three independent systems wearing a trenchcoat.
The target architecture was sound. The org structure made it unreachable. No amount of architecture review could overcome the fact that three separate teams, with three separate backlogs and three separate managers, had no incentive or mechanism to build a unified system.
The fix wasn't technical. No amount of API design workshops or architecture review boards would have changed the outcome, because the force shaping the architecture was organizational, not technical.
They reorganized into a platform team that owned the shared API layer and three product teams that consumed it. Within six months, the architecture started converging toward the original vision — not because anyone mandated it, but because the communication structure now supported it. Same engineers, same codebase, different org chart, different architecture.
The inverse Conway maneuver — sometimes called "reverse Conway" — is the deliberate act of designing your org structure to produce the architecture you actually want. Want a well-integrated platform? Put the teams in a structure that forces integration. Want autonomous services with clear boundaries? Give teams full ownership of their service, from database to deployment, and minimize cross-team dependencies.
This sounds like management consulting, and it is. But it's also the most impactful architectural decision you can make, because it determines the shape of every decision that follows. You can choose the perfect database, the perfect framework, the perfect message broker — and if the org structure doesn't match, the architecture will drift toward the org chart anyway.
Before redesigning a system, look at the org chart. If the organization doesn't support the target architecture, the rewrite will reproduce the old system's shape in new technology. The boundaries will fall in the same places. The coupling will reappear at the same joints. The pain points will move, but they won't disappear.
This is, incidentally, why most rewrites fail.
Every failed rewrite I've watched shared the same flaw: it rebuilt the technology but kept the org structure that produced the original architecture. Same system, newer language.
## Design for Deletion
Most architecture discussions fixate on extensibility. How do we add features? How do we scale? How do we support the next use case?
The more important question gets asked far less often: how do we remove things?
A system that can only grow eventually collapses under its own weight. Every feature that can't be removed becomes permanent cognitive load — on the codebase, on the test suite, on deploy times, on the mental model every new engineer has to build.
Two years of features that nobody uses but everyone maintains is a ratchet problem. The system only moves in one direction, and it gets harder to turn with every click.
You see this in mature codebases all the time. A feature that three customers used, built four years ago, now maintained by nobody who remembers the original requirements. It has tests that take forty seconds to run. It has edge cases that break during migrations. It has a database table with thirty million rows that no query touches anymore.
Nobody will delete it because nobody is sure what depends on it, and finding out would take longer than just maintaining it. So it stays, accumulating maintenance cost forever. Multiply this by fifty features across a five-year-old codebase and you start to understand why mature systems feel heavy — not because of their core functionality, but because of everything around it that nobody can remove.
Practical patterns that make deletion possible:
- Feature flags with expiration dates. A feature flag without an expiration is a permanent branch in your codebase. Set a review date. When the date arrives, the flag either becomes permanent (remove the flag, keep the code) or gets cleaned up (remove the flag and the code). The third option — leaving it indefinitely — is the one that accumulates.
- Service boundaries that allow decommissioning. If shutting down a service requires coordinated changes across five other services, the boundary isn't real. A well-drawn boundary means you can turn off a service and the rest of the system continues functioning, maybe with reduced capability, but functioning.
- Reversible migrations. Every migration should have a rollback path, tested before deployment. "We'll figure out rollback if we need it" means you won't have one when you need it most.
- API versioning with sunset paths. Every API version should have a published end-of-life date. Clients get time to migrate. The old version gets shut down. Without a sunset path, you're supporting every API version you've ever shipped, forever.
- Modular code where a module can be deleted and the system compiles. This sounds basic. Try it on your codebase. Delete a feature's directory. Does the project still compile? If the answer is no, your modules aren't as independent as your architecture diagram suggests.
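The first pattern on that list is easy to make mechanical. A sketch of a flag registry where every flag carries a review date and reading an expired flag fails loudly — the registry shape, the flag name, and the dates are all hypothetical:

```python
# A sketch of a feature flag that cannot silently live forever:
# every flag carries a review-by date, and reading an expired flag
# raises instead of returning a stale answer. Names are illustrative.
from datetime import date

FLAGS = {
    # flag name: (enabled, review-by date)
    "new_checkout": (True, date(2026, 3, 1)),
}

def flag_enabled(name, today=None):
    today = today or date.today()
    enabled, review_by = FLAGS[name]
    if today > review_by:
        # Force the conversation: keep the code and drop the flag,
        # or drop both. Ignoring it is no longer an option.
        raise RuntimeError(f"flag '{name}' passed its review date {review_by}")
    return enabled

print(flag_enabled("new_checkout", today=date(2026, 1, 15)))  # True
```

A failing build is annoying, but it's the point: the flag's expiration becomes someone's problem on a known date instead of nobody's problem forever.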
The mindset shift is significant. Most engineers are trained to think about building. Architecture reviews focus on "how will we add this?" Design documents describe new capabilities. Promotions reward launching features.
Almost nobody gets rewarded for deleting things. But the teams that build systems capable of lasting a decade are the ones that treat removal as a first-class operation — as important as creation, and often harder.
The litmus test is simple: can you delete a feature in a day? If the answer is yes, your architecture is healthy. If the answer is "it depends on which feature," you already know which ones are the problem.
Complexity is a one-way door. Adding it costs once; maintaining it costs every day, forever.
## Why Most Rewrites Fail
The pitch for a rewrite is always the same, and it's always compelling. The old system is unmaintainable. The tech stack is outdated. We've learned so much since then. Half the team wasn't here when the original system was built. We'll build it right this time.
The success rate is dismal. Joel Spolsky called it "the single worst strategic mistake that any software company can make." That was in 2000. The success rate hasn't improved since.
Rewrites fail because they replicate the old system's assumptions in new syntax. The domain model carries over — same entities, same relationships, same edge cases that took years to discover and handle. The org structure carries over — same team boundaries, same communication patterns, same Conway's Law effects producing the same architectural shape.
What actually changes is the technology, which means you've traded familiar complexity for unfamiliar complexity.
Familiar complexity is annoying but manageable — the team knows where the landmines are. They've built mental models of which queries are slow, which endpoints have edge cases, which data paths have implicit dependencies. Unfamiliar complexity is strictly worse: same number of landmines, nobody has a map.
Meanwhile, the old system keeps evolving while the rewrite is in progress. New features get added to the legacy system because the business can't pause for eighteen months. The rewrite team now has a moving target. Feature parity recedes like a horizon — always six months away, for two years straight.
I've seen three rewrites die this way. The third one was particularly painful because the team had explicitly planned for feature parity drift and still underestimated it by a factor of three. They allocated six months of buffer. The legacy system added eighteen months of features during the rewrite window. The gap never closed.
There's a deeper problem too, one that rarely makes it into the rewrite proposal.
A rewrite team inherits the old system's requirements, but not the old team's understanding of why those requirements exist. Edge cases, special handling, workarounds for upstream bugs — these look like technical debt in the old codebase. In reality, they're domain knowledge encoded in code. The rewrite team removes them as "cleanup," ships the clean new version, and then spends six months rediscovering why those edge cases existed. The technical debt was load-bearing.
The alternative is incremental migration.
Martin Fowler named the pattern after the strangler fig — a tree that grows around its host, gradually replacing it while both coexist. The same principle applies to software. Extract a module. Replace a service boundary. Route new traffic through the new system while the old system handles everything else. Each step is small, reversible, and delivers value independently.
The strangler approach works because it respects the reality of production systems: they can't stop. You can't pause the business while you rebuild.
But you can replace pieces, one at a time, validating each step against real traffic before moving to the next. If a replacement doesn't work, you route traffic back to the old module. The blast radius of any single step is contained. The risk is distributed across dozens of small changes instead of concentrated in one terrifying switchover.
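The routing layer that makes this possible can be very small. A sketch — the function names and the percentage-based rollout are illustrative choices, not a prescribed design:

```python
# A sketch of strangler-fig routing: send a configurable slice of
# traffic to the new implementation, fall back to the legacy path on
# failure. Hash bucketing keeps each user on a consistent side.
import hashlib

def route(user_id, new_impl, legacy_impl, rollout_percent=10):
    """Route one request; degrade to legacy on any new-path failure."""
    digest = hashlib.sha256(str(user_id).encode()).hexdigest()
    bucket = int(digest, 16) % 100
    if bucket < rollout_percent:
        try:
            return new_impl(user_id)
        except Exception:
            # Contained blast radius: a broken new module means a
            # fallback, not an outage.
            return legacy_impl(user_id)
    return legacy_impl(user_id)

print(route(42, lambda u: f"new:{u}", lambda u: f"legacy:{u}",
            rollout_percent=100))
```

Turning `rollout_percent` up is how the fig grows; turning it down is the reversibility the rewrite never offers.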
It's slower than a rewrite. It's also dramatically more likely to succeed.
The rewrite promises a big-bang transformation and usually delivers a big-bang failure. The strangler fig promises steady, boring progress — a module at a time, a boundary at a time, a year of quiet improvement that adds up to a transformed system without anyone having to bet the company on a single switchover weekend.
There's a psychological benefit too. A rewrite means months of work with no visible progress — the new system doesn't ship until it's done, and "done" keeps moving. Incremental migration delivers wins early and often. The team replaces a painful module in week three, and the improvement is immediate. That early momentum sustains the effort through the harder migrations that come later. Rewrite projects lose organizational support because nobody sees progress. Strangler projects keep it because progress is continuous and visible.
## The Compound Interest of Good Defaults
Every section of this article comes back to the same idea. Boring technology, careful data modeling, org-aware architecture, deletable features, incremental migration — these are all expressions of the same underlying principle.
Good defaults compound the way good habits do — quietly, invisibly, and then suddenly you notice the cumulative effect.
A well-designed schema leads to simpler queries, which leads to fewer bugs, which leads to faster feature development. A well-chosen database leads to quieter on-call rotations, which leads to happier engineers, which leads to less turnover. A well-drawn service boundary leads to more autonomous teams, which leads to faster shipping, which leads to competitive advantage that's hard to trace back to any single decision.
Bad defaults compound too. And they compound faster, because bad defaults create friction that slows down the very work that might fix them.
A denormalized schema means every new feature requires a more creative query. A trendy database means a smaller hiring pool and fewer Stack Overflow answers when something breaks at 2 AM. Premature microservices mean every debugging session starts with distributed tracing and ends with someone asking "which service actually owns this?"
The compounding is invisible in any individual sprint, which is what makes it so dangerous. You don't see the cost of a bad schema decision in week three. You feel it in month eighteen, when a feature that should take two days takes two weeks because the data model fights you at every step. By then, nobody remembers the original decision. It just feels like the system is slow to work with — a fact of life rather than a consequence of a choice made years earlier.
Good defaults also compound in a way that's harder to measure: they create space. When the database just works, the team spends its energy on product problems instead of infrastructure problems. When the service boundaries are clean, new features slot into natural homes instead of sprawling across three services. When the data model is flexible, the quarterly pivot from product leadership is a migration script instead of a month-long redesign.
The most productive teams I've worked with didn't have better engineers. They had better defaults — accumulated over years, through dozens of decisions that each seemed small at the time. They picked SQL Server in 2014 and never thought about their database again. They drew service boundaries that matched their team structure and never fought Conway's Law. They designed their data model for evolution and absorbed three product pivots without a rewrite.
None of these decisions were celebrated. All of them were essential.
After twenty-two years of building and maintaining systems, the lesson that holds up better than any framework or pattern is this: durability comes from hundreds of unremarkable decisions, each one making the next decision slightly easier. No single choice is dramatic. The accumulation is.
Nobody ships a press release about choosing SQL Server. Nobody gets promoted for picking a message queue over Kafka. Nobody gives a conference talk about the service boundary they drew correctly in 2019. These decisions are invisible precisely because they worked. And that invisibility is the proof.
That SQL Server-and-queue system from the opening? Still running. Still boring. Still the most valuable piece of infrastructure the company owns.
Nobody talks about it at conferences. The engineers who built it have moved on to other projects, other companies. The system doesn't care. It processes transactions, serves requests, and generates revenue — the same way it did twelve years ago, on the same boring stack, with the same unremarkable architecture.
New engineers onboard onto it in days because the technology is familiar. New features ship in hours because the data model is flexible and the service boundaries are clean. The on-call rotation remains the quietest in the organization because boring technology has boring failure modes — the kind you can debug at 2 AM without calling a specialist.
The compound interest of simplicity is that it frees you to spend your complexity budget on the problems that actually matter.
The systems that last are built on the kind of decisions that never make it into a conference talk. They compound quietly, in the background, for years. That's the whole point.