Is AI the end of coding as we know it, or just another tool?
The rapid advancement of AI tools has left many developers worried about the future of their careers. AI is either coming to take your job, or it's going to make you 10 times more productive — if you learn to use it effectively. Nobody wants to be in the first category. But I've found surprisingly little discussion about what effective AI-assisted development actually looks like.
From skeptic to advocate
I was an AI skeptic. I knew it was significant, but I just didn't think it was ready to create and understand real production code. I found it interesting whenever our AI expert-in-residence Anton shared his latest findings, but I still couldn't see how any of it would help my day-to-day work.
The turning point came when I tried using a modern AI-first editor to run some experiments with WebAssembly, a technology I'm not fluent in. I was curious about the performance benefit of offloading some expensive computations to compiled WebAssembly code in our whiteboard codebase. I thought the agent might do the setup and help me learn. Instead, it set up my project and environment, wrote my first experimental case, and just kept on going.
Where I'd previously have spent hours getting up and running and more hours learning language details, I progressed quickly to the question I was really interested in: Should I rewrite part of our app in Rust? (Answer: No, or at least not yet!)
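To make that concrete, the experiment had roughly the shape of the sketch below: a hot path written in plain JavaScript, compared against the same computation in a Rust function compiled to WebAssembly with wasm-pack. This is a minimal sketch rather than our real code; the module path, the `transform_points` function, and the math itself are illustrative assumptions.

```typescript
// Minimal sketch of the WebAssembly experiment. Assumes a Rust crate built
// with wasm-pack that exports `transform_points(points: &[f64]) -> Vec<f64>`.
// The module path, function name, and math below are placeholders.
import { transform_points } from "./pkg/whiteboard_wasm";

// Stand-in for the expensive per-point computation in the whiteboard code
function transformPointsJs(points: Float64Array): Float64Array {
  const out = new Float64Array(points.length);
  for (let i = 0; i < points.length; i += 2) {
    out[i] = Math.cos(points[i]) * points[i + 1];
    out[i + 1] = Math.sin(points[i + 1]) * points[i];
  }
  return out;
}

const points = new Float64Array(1_000_000).map(() => Math.random());

console.time("js");
transformPointsJs(points);
console.timeEnd("js");

console.time("wasm");
// Crossing the JS/WASM boundary copies the array both ways, which is part
// of what the benchmark needs to capture
transform_points(points);
console.timeEnd("wasm");
```

Measuring the full call through the JS/WASM boundary, rather than the Rust function in isolation, matters here: copying data across that boundary can eat the gains for smaller workloads.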
From that point, I could see how AI tooling could make a real impact on my work.
Your personal 'office of the CTO'
In larger companies, the CTO often employs research engineers to experiment with new technologies and help guide technical direction. The CTO doesn't have time to do all this work themselves, so they delegate. Every engineer can now do this with AI agents.
As in my experiment above, this could mean spinning up a new project to test out some different approach or just kicking off a refactoring task that you're not sure is worthwhile. Or you might use a research-focused AI tool (like Perplexity) to get tailored answers to your specific questions.
A tireless helper who knows every detail of every language
Our engineering team works with a 12-year-old Rails monolith with extensive front-end JavaScript. That means we're often working with unfamiliar code and sometimes unfamiliar languages. As a primarily front-end developer, having an assistant that knows every idiom has improved my Ruby code dramatically. And in areas I haven't worked on before, the agent identifies patterns and approaches much quicker than I can.
It's also brilliant at writing or updating standalone scripts and helpers. In a smaller context with lower stakes, the agent will often complete a task in one shot, and all you need to do is verify.
It's worth noting that you should use the best models and tools available — don't skimp. LLM costs can add up, but they're peanuts compared to the cost of your own time as an engineer. At Aha! we run the latest models on our own infrastructure, and we use tools like Claude Code, Roo Code, and llm-sidekick for development.
A rapid 'coding assistant' for mundane tasks
[Image from Thomas Ptacek's "My AI Skeptic Friends Are All Nuts"]
Increasingly, AI agents can take care of the mundane work, freeing the engineer up for creative challenges and to spend more time in the top-right quadrant above. Critically, agents are great at the low-impact, easy wins that are so tempting to sink time into. Thomas Ptacek puts it nicely:
Sometimes, gnarly stuff needs doing. But you don't wanna do it. So you refactor unit tests, soothing yourself with the lie that you're doing real work. But an LLM can be told to go refactor all your unit tests. An agent can occupy itself for hours putzing with your tests in a VM and come back later with a PR. If you listen to me, you'll know that. You'll feel worse yak-shaving. You'll end up doing … real work.
The key here is to constrain the scope, and to verify — you still own the output!
- Define clear boundaries: Be explicit about what part of the system you want the AI to work on.
- 'Sandbox' the work: Provide enough context for the AI to work effectively, but constrain its scope.
- Review thoroughly: Validate the output like it's your own code (because it is).
This is pretty similar to how we've always broken down work for other humans, so it's not a new skill.
For bigger tasks, forcing the agent to write up a technical design before it starts coding is a great way to keep things on track. Most tools offer this as a built-in workflow.
What about junior developers?
Some folks are concerned about junior developers getting squeezed out, leaving a dearth of senior engineers a few years from now. The current job market is challenging, but for a variety of reasons, and it isn't just challenging for juniors. Ultimately, I don't think junior developers are going anywhere. What will go away is the apprenticeship model where a junior cuts their teeth on the tedious work that AI agents excel at.
A friend who builds houses told me recently that he doesn't just delegate the mundane tasks on the site to his apprentices, because they won't learn the skills they need that way. Instead, they work on every part of the project with his guidance — and that's exactly how it should be in software too.
A junior should work much like a senior does: leveraging AI and spending more time thinking about systems, interfaces between modules, data flows, and product/UX considerations. What makes them junior is simply that they need guidance throughout the process.
I think this is a good thing. New engineers can start engaging with the interesting problems sooner and learn faster as a result.
Echoes of past technological shifts
We've seen similar transitions before. The industrial revolution transformed human labor rather than eliminating it. Website builders like Squarespace and Wix didn't replace web developers; they freed them to tackle more complex problems while democratizing simple site creation. And even frameworks like Ruby on Rails ultimately allow engineers to deliver more value in less time.
Likewise, an LLM agent is just another layer of abstraction on top of all the others we've added over the past 70 years. To the best engineers, it's just another tool for writing software.
If you're more focused on the means than the end, you'll likely struggle in this new era. But that's always been the case. Just ask anyone who knows CSS specificity backwards how useful that skill is nowadays. Deep knowledge of languages, frameworks, or libraries is less useful than before. Instead, you should focus on:
- Strong fundamentals in a few key languages, systems thinking, and architecture design
- Problem decomposition and clear writing (which, of course, helps both humans and AI understand what you want)
- And perhaps most importantly, understanding your users and your product — this is what really matters
Is this the future we were promised?
The automation promised by futurists in the 1950s led to predictions of drastically reduced working hours that never materialized. Will the latest paradigm shift be different? I'm cautiously optimistic.
By offloading the repetitive parts of our work, we can focus on the creative challenges that make software development rewarding. We can tackle more ambitious projects that create value for our customers. And we can spend more time understanding user needs and less time wrestling with implementation details.
All this should add up to more interesting and fulfilling work, but it's up to us to embrace that.
Join us
The Aha! engineering team has gone all-in on integrating AI into our daily workflows. The result isn't fewer developers — it's developers who can accomplish more than ever before. If you're curious, adaptable, and focused on creating value (regardless of which tools help you get there), you should check out our open roles.