Retrospective

Embracing a New Era of Coding

Anonymous
7 min read

If you thought each new tool only brought a small bump in productivity, it's time to change the question. What is happening now is not simply automation. It is changing where developers spend their time. This post is a record of how I have felt that shift over the past few months and what criteria led me to accept it.

The issue is not speed, but the shift in role.

Me Three Months Ago

Until just three months ago, I thought of generative AI as little more than a very smart autocomplete tool.

Even when I first used GitHub Copilot, I was impressed. If I wrote a function name or left a comment, it could produce a fairly convincing implementation, and it clearly reduced repetitive work. But that was as far as it went. I thought it could improve productivity a little, but not change the role of the developer itself.

Then I tried Gemini CLI. The experience of creating and editing code by talking to the terminal felt quite fresh. Its interactions felt more natural than Copilot's, and it seemed better at understanding context. Even so, my view was largely the same. In the end, I believed it was just a more advanced tool, and that real design and important decisions were still my responsibility.

Then I encountered Cursor. The experience of having AI integrated into the entire IDE was clearly different from what came before. Watching it move across files, edit code, grasp the project structure to a degree, and even suggest refactorings, I felt it had gone another step further. Even then, I thought complex business logic and architectural design ultimately had to stay in human hands until the end. To be honest, I believed my place would not be shaken that much either.

Looking back now, I was taking the speed of change far too lightly. The real inflection point began in late November 2025, when Opus 4.5 was released.

Getting Blindsided

The reason my thinking changed was not dramatic. The development and operation of this very blog is the proof.

The blog service itself is not the hardest thing in the world to build. Even so, in the past I would have expected it to take anywhere from a few days to a few weeks once planning, building the screens, setting up deployment, and all the small integration work were included. Now that kind of work can be finished in a short window, roughly within a day.

The experience of pushing design, implementation, cleanup, and release prep all the way to the publishing stage with a few prompts felt far more radical than I expected.

What coding agents make me feel right now is fear.

That is because I could feel that the expertise I had built up over a long time could be compressed into a different form faster than I had imagined. Until then, I had understood the progress of AI coding tools in linear terms. From Copilot to Gemini CLI, and from Gemini CLI to Cursor, I thought they were simply getting a little better over time. Then at some point, the change hit me all at once, as if a gap that had felt distant and abstract had suddenly closed.

For people who have not properly experienced the combination of Claude Code and Opus 4.5, this may sound somewhat exaggerated. I would have thought the same a few months ago. But looking at it now, this is less about making code a little easier to write and more about redefining where developers should spend their time.

From Fear to Criteria

I do not think there is any need to hide the fact that fear was the first emotion that hit me. As implementation gets faster, it can easily feel as if the value of developers is shrinking.

In fact, the industry keeps talking about an era in which individuals, especially people in roles close to application building, can create products on their own, and teams can produce bigger results with fewer people. In the past, that sounded somewhat exaggerated to me. But after feeling it firsthand, those words carried a different weight.

That did not lead me to a simple conclusion. I do not think the human role is disappearing. If anything, the opposite seems closer to the truth. As smaller groups of people become able to do bigger work, each person's judgment and review criteria matter far more.

In the past, implementation speed was often the biggest constraint for a team. Now the more important questions are what to build, in what order to validate it, and where to control risk.

At that point, these were the three criteria I ended up holding onto.

  • Defining the problem clearly
  • Designing context that AI can understand
  • Having a human take responsibility for the final result

The faster the technology moves, the heavier these three become.

The Evolution of Tools, and Claude Code

If I simplify the progression as I experienced it, it looks like this.

  • Copilot was close to autocomplete. Once I set the direction, it was a tool that quickly carried the next step forward.
  • Gemini CLI was closer to an interactive assistant. It was a way to ask questions, receive answers, and gradually build a result.
  • Cursor felt like a smart pair programmer. It tried to understand project context and had the sense of pushing implementation forward alongside me.

By contrast, Claude Code felt closer to an agent that had gone one step further. Give it a goal, and it inspects the relevant files, understands the structure, connects the changes it needs to make, and keeps thinking through the points that need to be checked.

Of course, not every result is perfect all the time. But the important difference is that my role shifts quickly from writing every line myself to setting direction, defining criteria, and reviewing the outcome.

So now my role feels less like just an implementer and more like an architect and reviewer. More important than how fast I can type code is deciding how to split goals into manageable units, what to automate, and what I need to verify by hand all the way to the end.

Me Now

I now develop by actively using AI. Saying that productivity has increased is not enough to describe how much my way of working has changed. It is no longer unusual for a feature that once would have taken several days to take shape within a single day.

That does not mean I feel the human role has diminished. If anything, security, exception handling, data integrity, structural consistency, and product judgment have all come back as heavier responsibilities.

The interesting change is that development speed itself is no longer the bottleneck. In the past, the key constraint was how many days it would take to implement a feature. Now it matters far more whether the feature is truly needed, what criteria we will use to experiment and validate it, and where to draw the line for the first version of the product. The quality of judgment now has more influence on the result than technical limits do.

So these days, before I start working, I first write down the feature's purpose, success conditions, and excluded scope in plain sentences. Then I break the work into small units, and at the end I separate the items that must be reviewed directly by a human. I have come to feel that using AI well is not about vaguely throwing more requests at it, but about setting clearer criteria.
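To make that habit concrete, here is a minimal sketch of what such a pre-work brief might look like as a data structure. This is purely an illustration; the `FeatureBrief` class, its field names, and the example feature are all hypothetical, not a tool I actually use.

```python
from dataclasses import dataclass, field

@dataclass
class FeatureBrief:
    """A pre-work brief, written down before any prompt goes to the agent.

    Hypothetical structure mirroring the habit described above: purpose,
    success conditions, excluded scope, small work units, and the items
    that a human must review directly.
    """
    purpose: str                      # why the feature exists, in one sentence
    success_conditions: list[str]     # how we will know it worked
    excluded_scope: list[str]         # explicitly out of bounds for v1
    work_units: list[str] = field(default_factory=list)    # small, reviewable chunks
    human_review: list[str] = field(default_factory=list)  # must be checked by hand

    def checklist(self) -> str:
        """Render the brief as a plain-text checklist to keep beside the work."""
        lines = [f"Purpose: {self.purpose}", "Success conditions:"]
        lines += [f"  - {c}" for c in self.success_conditions]
        lines.append("Out of scope:")
        lines += [f"  - {s}" for s in self.excluded_scope]
        lines.append("Work units:")
        lines += [f"  - [ ] {u}" for u in self.work_units]
        lines.append("Human review required:")
        lines += [f"  - [ ] {h}" for h in self.human_review]
        return "\n".join(lines)

# Example usage with an invented feature, just to show the shape of a brief.
brief = FeatureBrief(
    purpose="Let readers subscribe to new posts by email.",
    success_conditions=["Signup form stores a verified address",
                        "Unsubscribe link works in every email"],
    excluded_scope=["Digest scheduling", "Analytics"],
    work_units=["Signup endpoint", "Verification email", "Unsubscribe flow"],
    human_review=["Email deliverability settings", "Data retention policy"],
)
print(brief.checklist())
```

The point of the structure is not the code itself but the forcing function: the fields that are hardest to fill in (success conditions, excluded scope) are exactly the criteria that matter more now than typing speed does.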

Going Forward

My goal is clear. I want to build production-level services in a short amount of time and turn that process into something repeatable.

For me, development is no longer just about writing code. It has become about pulling a product up to a level that can survive in a real operating environment. I want to see for myself how far this change can go.

That is why I plan to keep writing this blog. It will not just be a place to summarize results. It will be a record of how I received these tools, where my perspective changed, and what criteria I am using to adapt. Whether I succeed or fail, I believe the thoughts and trial and error of this period are worth preserving.

The new era of coding is not erasing the developer's role so much as compressing it and sharpening it. The center of gravity is moving from an era in which we wrote every line ourselves to one in which we define the right problems and validate the results.

I think the difference ahead will depend less on who produces more code, and more on who can repeat better judgment faster and generate outcomes that are ready for real operation.