The CTO and the Zen of Waiting

How to Wait for People to Truly Learn

In a tech world used to moving at the speed of deploys, we tend to assume that people should evolve at the same pace: new skills in a few weeks, new languages “learned” in a two-day course, new processes absorbed after a single meeting.

Reality is different: people grow at their own pace, built from exposure, reflection, attempts, mistakes, setbacks, and fresh restarts.
This article proposes a resilient, modern, and self-aware managerial view of intellectual absorption time, in which the CTO and technical leaders learn not only to teach, but to wait – even in an era of AI assistants, aggressive autocomplete, and vibe-coding.


Knowledge Transfer Is Not Enough

As a leader and trainer you must be able to wait not only for people to learn, but also for them to understand and apply what you teach.

Knowledge transfer alone is not enough:

  • a workshop is not enough,
  • well-written documentation is not enough,
  • a brilliant demo is not enough,
  • and not even a well-crafted prompt to an AI is enough.

Real understanding shows up when:

  • someone can apply a concept in a context different from the one in which they first saw it;
  • they can explain it to someone else in their own words;
  • they integrate it naturally into their way of working;
  • they can challenge an AI suggestion because they understand its limits and risks.

The leader’s role, therefore, is not just to “emit content”, but to accompany people through the time it takes to digest it – even when the environment pushes towards fast learning boosted by smart tools.


What It Means to Wait for Learning

Learning Is Not “Downloading a File”

Too often we treat training like a download:

  • you take the course → you’ve learned;
  • you read the wiki → you know how to do it;
  • you asked ChatGPT → you’re autonomous.

In reality, true learning is a process:

  1. Exposure: listening, reading, observing (human or AI-mediated).
  2. Initial understanding: “I think I get what this is about.”
  3. Guided application: trying with someone close who can correct you (or with AI as support, not as pilot).
  4. Autonomous application: trying on your own, making mistakes and adjusting.
  5. Internalization: it becomes natural; you no longer have to think hard about it, and you’re no longer dependent on the automatic suggestion.

Waiting for learning means accepting and designing for these steps, without confusing step 1 (I’ve seen an AI answer) with step 5 (I truly know what I’m doing).

Respecting Different Paces

People don’t all learn at the same speed:

  • some grasp things quickly, but need time to put them into practice;
  • some seem slower at first, but then consolidate in a stable way;
  • some need to see things several times and in several forms (theory, practice, discussion… and yes, AI support too).

A leader who knows how to wait:

  • doesn’t label as “weak” those who don’t get it on the first attempt;
  • asks: “Did I explain this in the right way for this person, at this moment?”;
  • accepts that the learning curve is irregular, in jumps, not a straight line;
  • doesn’t use AI to “cover” gaps, but to make them visible and work on them.

How to Manage These Timelines

Waiting doesn’t mean being passive. Managing learning timelines is an intentional act.

Setting Realistic Expectations

Problems often arise from wrong expectations:

  • “You should be autonomous after a week.”
  • “I’ve already explained this, I shouldn’t have to repeat it.”
  • “You’ve read the documentation, what’s missing?”
  • “You have AI, you shouldn’t be struggling.”

Managing timelines means:

  • making explicit which level you expect:
    • basic: I can execute by following a guide or an AI suggestion;
    • intermediate: I can adapt a procedure to similar cases and recognize when the AI is wrong;
    • advanced: I can improve what exists on my own, even going against the AI’s “advice”.
  • planning intermediate steps, not a single big leap:
    • first shadowing/mentoring,
    • then semi-autonomy,
    • then full ownership.

Creating Space to Experiment Without Anxiety

Learning slows down when there is:

  • fear of making mistakes,
  • fear of being judged as slow,
  • constant pressure on “results now” (especially because “we have AI anyway”).

A leader who manages timelines well:

  • creates safe spaces for trying (pair programming, sandboxes, internal exercises, coding sessions with AI where what is allowed and what isn’t is clearly defined);
  • clearly distinguishes the context where people are experimenting from the context where they must perform;
  • uses mistakes as moments of clarification, not humiliation.

Recognizing Micro-Progress

We often only see “they’re not autonomous yet”, and we miss that:

  • today they ask fewer questions than yesterday;
  • today they’re making more refined mistakes (a sign that some aspects are understood);
  • today they can say “the AI suggested X here, but I don’t trust it for these reasons”.

Managing timelines means:

  • recognizing and verbalizing small improvements;
  • using micro-progress to motivate, not just the final milestone;
  • valuing when someone rejects an AI suggestion because they consider it unsafe or out of context: that’s a sign of control, not “slowness”.

The CTO as a Lighthouse in the Fog

If line managers are the daily guides, the CTO is the lighthouse in the fog: they don’t tell every ship where to steer, but provide a clear and steady direction.

Defending Learning Time

The CTO has the duty to defend:

  • time dedicated to training and onboarding;
  • time dedicated to deliberate practice, not only to delivery at all costs;
  • the quality of learning against the temptation of “throw someone random at the problem, we’ve got AI anyway”.

Being a lighthouse means:

  • reminding the business that apparent speed (copying from AI without understanding) creates human and technical debt;
  • explaining that real people growth is a strategic asset, not a side cost;
  • making it clear that vibe-coding is not a magic shortcut, but a tool that demands maturity.

Leading by Example

A CTO who preaches learning but:

  • doesn’t update themselves,
  • doesn’t study,
  • never admits not knowing something,
  • just pastes AI output,

is sending an implicit message: “Learning is important… but only for you, not for me.”

Being a lighthouse means:

  • showing up as a learner as well (“I don’t know, let’s figure it out together – even using AI, but carefully”);
  • showing that even senior people need time to study and validate what AI proposes;
  • making it normal that “not knowing yet” is a phase, not a fault.

Giving Meaning to the Journey, Not Just to Tasks

People cope better with long learning timelines when:

  • they understand why they’re learning something;
  • they see how it connects to their professional growth;
  • they feel that their leader has a plan for them, not just a stack of tickets.

The CTO can:

  • make growth trajectories explicit (junior → mid → senior), including in terms of responsible AI usage;
  • connect each learning effort to a broader vision (“This will help us reduce risk, be less dependent on tool X, and truly understand what we put into production”);
  • remind everyone that we’re not just “closing tasks”, we’re building judgment, especially in a context where machines “write code”.

Tools and Methods to Accelerate the Process (Without Forcing It)

Waiting doesn’t mean resigning ourselves to slowness. We can accelerate learning in a healthy way, without crushing people.

Structured Mentoring, Not Improvised Help

The “senior–junior” pairing works only if:

  • it’s not just “when you have time, check their work”;
  • there are clear goals (“in the next two months we’ll work on domain logic, debugging, and critical reading of AI-generated code”);
  • there’s a routine (dedicated reviews, design sessions together, quick retros on difficulties).

Well-designed mentoring:

  • shortens the learning curve,
  • avoids dispersion,
  • creates trust relationships that reduce the fear of asking questions (including “why did the AI suggest such a weird thing here?”).

Learning by Doing With a Safety Net

People really learn when they’re the ones doing the work:

  • assigning small pieces of real responsibility;
  • letting them lead part of a feature, a demo, a spike;
  • staying close without taking the wheel away.

The important thing is to build:

  • progressive safeguards (reviews, shadowing, automated tests);
  • limited scope: mistakes don’t blow up the whole system;
  • clear communication:

    “This is a space where I expect you to try and learn. It’s okay if it’s not perfect, but I want you to be able to explain the choices you make, including when you follow or reject an AI suggestion.”

Living Documentation and Guided Paths

Documentation doesn’t mean “abandoned wiki”:

  • build guided onboarding paths (step 1, 2, 3…);
  • include concrete examples, snippets, real cases;
  • update documentation based on newcomers’ questions.

Structuring knowledge reduces:

  • time spent repeating the same explanations,
  • frustration for newcomers,
  • cognitive load for seniors.

And it also allows you to clearly specify:

“You can safely delegate this part to a vibe-coding tool, but here you really need to understand what you’re doing.”

Vibe-Coding: Powerful Accelerator, Handle With Care

By vibe-coding we mean that way of programming where:

  • you write “by feel”, driven by continuous suggestions from the IDE or AI;
  • you complete entire chunks of code letting the tool propose solutions;
  • you rely more on the flow of guided coding than on deliberate design.

This approach can dramatically shorten apparent development time:

  • it helps get past the blank-page block;
  • it speeds up boilerplate and repetitive patterns;
  • it offers examples to learn from, if used carefully.

But there’s one principle a leader cannot forget:

What you don’t understand, you cannot control.

If someone:

  • doesn’t really know what the generated code is doing,
  • doesn’t grasp the implications for performance, security, maintenance,
  • cannot modify what was produced without AI,

then we haven’t accelerated learning: we’ve only created dependency.

To use vibe-coding safely:

  • define when it’s allowed (e.g., boilerplate, simple tests, repetitive migrations);
  • define when it’s forbidden (core domain, security-related code, critical business logic);
  • always ask:
    • “Can you explain what this piece of code does?”
    • “If the tool disappeared tomorrow, could you maintain this?”
  • value those who challenge AI suggestions, not those who accept them blindly.
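Rules like these become much easier to apply consistently when they are written down in executable form. As a purely illustrative sketch (the path patterns, zone names, and the `classify` helper are hypothetical examples, not something prescribed by this article), a team could encode its AI-usage policy as a tiny check that classifies changed files, runnable locally or in CI:

```python
# Hypothetical sketch of an AI-usage policy check.
# The zones and path patterns below are illustrative: each team should
# define its own "allowed" and "forbidden" areas for vibe-coding.
from fnmatch import fnmatch

# Where AI-generated code is acceptable (boilerplate, tests, migrations).
AI_ALLOWED = ["tests/*", "migrations/*", "scripts/boilerplate/*"]

# Where a human must fully own and understand the code.
AI_FORBIDDEN = ["core/domain/*", "security/*", "billing/*"]

def classify(path: str) -> str:
    """Return the policy zone for a changed file path."""
    if any(fnmatch(path, pattern) for pattern in AI_FORBIDDEN):
        return "forbidden"  # core/critical code: AI as reference only
    if any(fnmatch(path, pattern) for pattern in AI_ALLOWED):
        return "allowed"    # vibe-coding is fine; review still applies
    return "review"         # everything else: AI allowed, choices must be explainable

if __name__ == "__main__":
    for changed_file in ["tests/test_login.py", "security/token.py", "api/handlers.py"]:
        print(changed_file, "->", classify(changed_file))
```

The point is not the script itself, but that the policy stops being tribal knowledge: a newcomer can see at a glance where experimenting with AI is safe and where understanding is non-negotiable.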

The message to teams should be:

Use AI to go faster if you want, but you stay the pilot. Code shouldn’t just “feel right”, it must be understood and governable.

Developmental Feedback, Not Just Evaluation

Feedback that accelerates learning:

  • is specific (“here you accepted the AI’s suggestion, but it introduced this unnecessary complexity”);
  • is timely (close to when the event happened);
  • is oriented toward improvement, not judgment.

A good leader:

  • doesn’t just say “this is wrong”;
  • shows a better alternative;
  • and above all gives people time to try again, instead of labeling them.

Conclusion: Building a Feedback Culture Around Time

At the heart of the Zen of waiting lies something simple and difficult: talking openly about learning timelines, even when the ecosystem is pushing “everything, now” thanks to AI.

Normalizing “I Haven’t Fully Understood Yet”

In a real feedback culture:

  • it’s legitimate to say “I haven’t fully understood yet”;
  • it’s not a fault to ask for something to be explained again;
  • you can discuss your difficulties without fear of being labeled.

The leader helps normalize this by saying things like:

  • “It’s normal to struggle with this, it’s complex.”
  • “I’d rather you ask me one more question today than cause an incident tomorrow.”
  • “If something doesn’t make sense, you’re not the problem: it means we need to explain it better.”
  • “If AI-generated code feels ‘magical’, let’s stop: either we understand it, or we don’t put it into production.”

Measuring Growth, Not Just Output

If you only measure:

  • tickets closed,
  • features shipped,
  • incidents resolved,

people will learn that learning matters less than delivering – and they’ll use AI only to go faster, not to understand.

A healthy culture also includes:

  • recognition for those who have made a quality leap, not just those who produced more;
  • time in retros and 1:1s to talk about how people are learning, including in their use of vibe-coding tools;
  • personal goals tied to skill and judgment growth, not only to output.

In Summary

Being a CTO or technical leader in the Zen of waiting means accepting that:

  • people don’t learn on command, but we can create the best possible conditions for them to do so;
  • waiting is not a waste of time, it’s an investment in the team’s human capital;
  • accelerating learning doesn’t mean compressing time, but removing obstacles, fear, and chaos – and using tools like vibe-coding with clarity, not as a permanent prosthesis.

The leader’s job isn’t just to explain once and then judge who “got it” and who didn’t.
The leader’s job is to stand in the middle, between knowledge and practice, between ambition and reality, between hurry and depth, between AI power and human responsibility.

Learning to wait for people to truly learn, even when we have tools that promise to make us run faster, is perhaps the most modern and conscious gesture a CTO can make today.