How I design an architecture to grow without rewriting everything after a year


If you’ve worked in a startup, an SME, or a team that’s “always in emergency mode”, you already know how this goes:
at the beginning you move fast. It works. You ship. You close tickets.
Then after 6 months (sometimes 3) you realize every new feature costs double.
After a year, every change feels like open-heart surgery.

And it often ends with the sentence nobody wants to say out loud:

“Okay, we need to rewrite everything.”

This article is a pragmatic antidote to that situation.

I’m not speaking as a spotless hero.
I’m the first one who has turned code into spaghetti. More than once.
And precisely because I’ve paid that technical debt with loan-shark interest, today I’m telling you:
you don’t need perfect architectures — you need architectures that hold up.

With a few simple habits and an extra hour every now and then, you can save yourself hundreds of hours later.


The problem: when you don’t have time, you “spaghetti-code” (and you do it for good reasons)

Spaghetti code doesn’t happen because we’re incompetent.
It happens because we’re under pressure.

  • you had to deliver yesterday
  • you don’t have time to clean things up
  • you don’t know if this feature will even be used
  • the business changes priorities every 3 days
  • and maybe you’re understaffed too

So you do the most human thing in the world:

  • you put logic “wherever it fits”
  • you duplicate a piece of code “because it only takes 5 minutes”
  • you take a shortcut “we’ll fix it later”
  • tests? “later”

The problem is that “later” always arrives when the system is bigger, more fragile, and more expensive to change.


The goal isn’t scaling: it’s avoiding a rewrite

When people talk about architecture “to grow”, they often immediately think of:

  • microservices
  • event-driven systems
  • CQRS
  • Kubernetes
  • service mesh

But most projects don’t die because they lack Kubernetes.

They die because:

  • nobody understands where the logic lives
  • every change breaks something unpredictable
  • code reuse is impossible
  • refactoring is too expensive
  • onboarding new devs = weeks of panic

So the real focus is:

making the system changeable without fear.


The golden rule: logical decoupling, not necessarily “distributed”

There’s a concept here that saves entire projects.

Decoupling doesn’t mean “splitting everything into 10 services”.
It first and foremost means something much simpler:

✅ Separating responsibilities at the logical level

A very common example:

  • Controller / Handler: input parsing and response
  • Use case / Service: application logic
  • Repository / Gateway: access to DB and external APIs
  • Domain: core rules and structures

This isn’t “textbook architecture”.
It’s a way to prevent:

  • SQL queries ending up inside controllers
  • business logic being scattered across 12 files
  • external calls being mixed with validation and calculations

This kind of decoupling is what allows you, after a year, to:

  • refactor without rewriting
  • test without going insane
  • ship new features without breaking everything else
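The four layers above fit in a few lines of code. Here's a minimal sketch in Python; all the names (`Order`, `OrderRepository`, `CreateOrder`, `handle_create_order`) are illustrative, and the in-memory list stands in for whatever database you actually use:

```python
from dataclasses import dataclass

# Domain: core rules and structures -- no I/O, no framework imports.
@dataclass
class Order:
    total: float

    def is_valid(self) -> bool:
        return self.total > 0

# Repository: the only place that knows where data lives.
class OrderRepository:
    def __init__(self):
        self._orders = []  # stands in for a real database table

    def save(self, order: Order) -> None:
        self._orders.append(order)

# Use case: application logic; talks to the repository, knows nothing about HTTP.
class CreateOrder:
    def __init__(self, repo: OrderRepository):
        self.repo = repo

    def execute(self, total: float) -> Order:
        order = Order(total=total)
        if not order.is_valid():
            raise ValueError("order total must be positive")
        self.repo.save(order)
        return order

# Controller/handler: parses raw input, calls the use case, shapes the response.
def handle_create_order(raw_input: dict, use_case: CreateOrder) -> dict:
    order = use_case.execute(float(raw_input["total"]))
    return {"status": "created", "total": order.total}
```

Notice that the dependency arrows only point inward: the handler knows about the use case, the use case knows about the repository, and the domain knows about nothing. That's the whole trick.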

The trick that costs little time: design the boundaries, not the whole castle

You don’t need to design a complete architecture.

You need to set clear boundaries where spaghetti code tends to explode.

The 3 “project-saving” boundaries are:

1) Boundary between logic and infrastructure

Application logic shouldn’t care:

  • whether data comes from MySQL or Postgres
  • whether you’re calling REST or gRPC
  • whether it’s sync or async
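One cheap way to draw this boundary (in Python, as a sketch; `UserGateway` and `send_reminder` are made-up names) is to have the logic depend on an abstract gateway rather than a concrete driver:

```python
from typing import Protocol

# The boundary: logic depends on this shape, not on a DB driver.
class UserGateway(Protocol):
    def find_email(self, user_id: int) -> str: ...

class InMemoryUsers:
    """Test double -- the logic can't tell it apart from MySQL or Postgres."""
    def __init__(self, data: dict):
        self._data = data

    def find_email(self, user_id: int) -> str:
        return self._data[user_id]

def send_reminder(users: UserGateway, user_id: int) -> str:
    # Application logic: no idea whether 'users' is SQL, REST, or a dict.
    email = users.find_email(user_id)
    return f"reminder sent to {email}"
```

Swapping Postgres for MySQL, or REST for gRPC, now means writing one new gateway class. The logic doesn't move.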

2) Boundary between domain and UI/API

HTTP must not become your mental architecture. Entities and use cases should still make sense without REST.
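A quick way to check this boundary: can you call the same rule from an HTTP handler and from a batch job? A sketch, with made-up names (`apply_discount`, `http_handler`, `nightly_repricing`):

```python
# Domain rule: 5% off per loyalty year, capped at 30%. No HTTP vocabulary here.
def apply_discount(price: float, loyalty_years: int) -> float:
    rate = min(loyalty_years * 0.05, 0.30)
    return round(price * (1 - rate), 2)

# The same rule reached from an HTTP handler...
def http_handler(query_params: dict) -> dict:
    price = apply_discount(float(query_params["price"]), int(query_params["years"]))
    return {"price": price}

# ...and from a CLI/batch job, with zero changes to the rule itself.
def nightly_repricing(prices: list[float], years: int) -> list[float]:
    return [apply_discount(p, years) for p in prices]
```

If deleting REST from your head makes the entity meaningless, the boundary is missing.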

3) Boundary between modules that evolve at different speeds

Example: invoicing vs shipment tracking.
Both are part of the “product”, but they change at different frequencies and carry different risks.


Code reuse: don’t copy/paste — extract the concept

Reuse is a boring topic.
It sounds like a sermon from the early-2000s tech community.

Yet the hidden cost of copy/paste is brutal, because:

  • you fix a bug in one place and leave it in 3 others
  • you change a flow and break its “twin” somewhere else
  • every feature becomes a sightseeing tour through the repo

The pragmatic solution isn’t “building a perfect internal library”.

It’s:

✅ creating very simple reuse points

For example:

  • a common package/module only for stable utilities
  • lib/ components for shared code across modules
  • “pure” functions (no I/O) that are testable and reusable
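Concretely, "extract the concept" often just means a pure function that every caller shares. A sketch (the money-formatting example and its callers are invented for illustration):

```python
# The extracted concept: pure, no I/O, same input -> same output.
def format_money(amount_cents: int, currency: str = "EUR") -> str:
    return f"{amount_cents / 100:.2f} {currency}"

# Two callers that would otherwise each carry a copy/pasted version.
# A bug fix in format_money now fixes every caller at once.
def invoice_line(description: str, amount_cents: int) -> str:
    return f"{description}: {format_money(amount_cents)}"

def email_summary(total_cents: int) -> str:
    return f"Your total is {format_money(total_cents)}"
```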

A good indicator is this:

If you write the same concept twice, chances are you’ll write it five times within a month.


Testing “as much as possible”: you don’t need to cover everything, you need to cover what hurts when it breaks

Another ultra-worn topic: “you need tests”.

Sure, but said like that it doesn’t help anyone. Because when you’re under pressure, “write tests” competes with “I need to ship”.

So the real version is:

✅ test where it truly reduces risk

Practical priorities:

  1. Tests for pure functions
    stuff that takes input → produces output
    (fast, reliable, cheap)

  2. Tests for the main use cases
    the flows that generate revenue or disasters if they fail

  3. Tests for bugs you already paid for
    if a bug cost you 6 hours once, add a test and never see it again
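The three priorities fit in a handful of plain assertions. A sketch, where `shipping_cost` is an invented pure function standing in for your real logic:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 + weight_kg * 1.5
    return base * 2 if express else base

# 1) Pure-function test: fast, no setup, no mocks.
def test_standard_shipping():
    assert shipping_cost(2.0, express=False) == 8.0

# 2) Main-flow test: the path that costs money if it breaks.
def test_express_doubles_the_price():
    assert shipping_cost(2.0, express=True) == 16.0

# 3) Regression test for a bug already paid for:
#    zero-weight orders once slipped through and shipped for free.
def test_zero_weight_is_rejected():
    try:
        shipping_cost(0.0, express=False)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Three tests, a few seconds to write, and the most expensive failure modes are now guarded.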

And then a sentence that saved my life:

“A test isn’t meant to prove it works.
It’s meant to let you change code without fear.”


Single repo vs multi repo: monorepo is often better (especially early on)

A sensitive topic, because there’s religion around it.

But in practice, for many small and mid-sized teams:

✅ a single repository is more efficient

Because it gives you:

  • immediate visibility of the system
  • easier cross-module refactoring
  • one place to search
  • faster onboarding
  • simpler versioning

Multi-repo makes sense when you truly have:

  • independent teams
  • independent releases
  • separate ownership
  • different access policies

But multi-repo too early often leads to:

  • duplicated code
  • unmanageable cross-dependencies
  • incompatible versions
  • “I can’t change X because it breaks Y living somewhere else”

Monorepo isn’t “better looking”.
It’s just more pragmatic until the organizational cost of splitting becomes justified.
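As a rough sketch of what "one repo, clear module boundaries" can look like (directory names are purely illustrative):

```
repo/
├── invoicing/   # changes rarely, high risk when it breaks
├── tracking/    # changes weekly, lower risk
├── lib/         # stable shared utilities only -- no business logic
└── tools/       # CI config and scripts, one place to search
```

The boundaries live in the directory structure and import rules, not in network calls. When a team genuinely needs independent ownership of `tracking/`, splitting it out later is far easier than merging two diverged repos back together.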


The most important thing: invest one hour now to avoid spending a hundred later

I know: everything I wrote sounds obvious. It sounds like stuff we’ve heard for decades.

But I keep seeing entire teams (even strong ones) struggling to:

  • stop
  • breathe
  • say “okay, let’s invest one extra hour”

And that’s the real problem: it’s not that we don’t know what’s right,
it’s that we don’t allow ourselves the time to do it.

I was the worst offender myself.
And I ended up paying weeks of pain:

  • features taking days instead of hours
  • unpredictable bugs
  • fear of touching sensitive areas
  • refactoring postponed until the rewrite

So if I can leave you with 3 final rules, they are:

  1. Separating logic from I/O is the cheapest form of architecture
  2. Reuse = fewer bugs and less invisible work
  3. Small, targeted tests beat “no tests” — and they also beat “perfect tests that never get finished”

You don’t need to be a purist.
You need choices that let you breathe six months from now.

Because growing is great…
but growing while rewriting everything every year is a curse.