For most of my career as a software engineer, I have worked in environments where resourcefulness was a necessity, not a nice-to-have. That experience has shaped how I approach software delivery today.

From that perspective, advancements in code generation are welcome. More than anything, they are forcing the industry to reconsider what software delivery should look like, where effort is truly valuable, and whether some of the practices we once treated as essential have become largely ceremonial.

Building enterprise applications has always required more than just writing code. There are baseline expectations around compliance, documentation, maintainability, and long-term system integrity. When resources are constrained, especially skilled engineers, delivering a robust product requires effort that often extends beyond what traditional project planning accounts for. That is why projects often overrun: not necessarily because of poor execution, but because robustness takes work. The alternative is to meet deadlines through trade-offs, knowing that some shortcomings will only become visible later as the client’s needs evolve.

What consistently produced the best outcomes for us was investing more heavily in discovery and making prototyping part of that process. Rather than spending excessive time on multiple design iterations, we moved quickly from an initial screen design into prototyping alongside requirements gathering. The purpose was to give the client a tangible representation of the system early, one that reflected most of its core functions and interactions.

That prototype became the basis for scope, prioritisation, and alignment. It gave everyone a clearer understanding of the effort involved, made trade-offs easier to discuss, and created stronger alignment around expected outcomes. As a result, implementation became smoother, delegation became easier, and reviews became more effective. The process may have leaned more waterfall than agile, but the structure worked because it reduced ambiguity. And ultimately, clients care about outcomes more than process or technology. Effective scoping helps keep focus on the parts of the system that actually create value.

AI has not changed my approach to building software as much as it has changed the efficiency of execution. It reduces the need for coordination and accelerates the more repetitive parts of development, but the underlying effort remains. It is still guided engineering, not autonomous problem-solving. In that sense, it feels less like a replacement for software engineering and more like an advanced extension of it.

What has changed is how intentionally I now think about efficiency, and how much time I can devote to the problem we are actually solving. What is the fastest path to a robust outcome? That question keeps leading me back to fundamentals: architecture, constraints, and system design. The clearer those are, the less code needs to be written, and the more likely the system is to meet its performance and design requirements without unnecessary complexity.

It has also made me think more seriously about post-deployment operations, especially how agents might support monitoring, log analysis, bug detection, and proactive performance management. That shift alone is forcing me to rethink how I approach logging, observability, and the quality of system feedback loops.
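One concrete implication of that shift is making logs machine-readable rather than free-form, so that automated tooling (or an agent) can filter and aggregate them reliably. Below is a minimal sketch using Python's standard logging module; the logger name and field names such as `order_id` and `latency_ms` are illustrative, not part of any particular system.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object per line, so automated
    tooling can parse logs without brittle regex matching."""
    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # Merge any structured context attached via `extra=`.
        payload.update(getattr(record, "context", {}))
        return json.dumps(payload)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders")  # hypothetical logger name
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Structured fields travel with the message instead of being
# interpolated into it, which keeps them queryable downstream.
logger.info("order processed",
            extra={"context": {"order_id": "A-123", "latency_ms": 42}})
```

The design choice here is small but deliberate: once every log line is a self-describing record, feedback loops such as anomaly detection or proactive performance checks become a matter of querying data rather than reverse-engineering prose.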

Software engineering is not dying; it is evolving. What is changing is not the need for it, but our perspective on what it now demands and where its real value lies.