AI is not at fault for your application's failures... you are
Posted on 23rd March 2026
If you started to learn to code before 2022, you pretty much had to grind through the basics of software principles like all engineers had done for decades prior.
You'd read docs, learn through mistakes, watch some video tutorials, read blogs and scour StackOverflow. You were very good at Googling (the dinosaur version of prompting), and if you couldn't find an answer, you'd probably have to lean on other humans for help.
At some point you became a domain expert for a particular website, web app, mobile app, microservice - whatever it was that you were working on. You held that title because you were living and breathing it, the code and the project, every day. You were always problem solving and thinking strategically about how to make that next thing work.
It didn't matter how complex the work was, or what languages or frameworks you were working in; what mattered was that you got the work done to a decent enough standard that you could explain it and ship it to customers.
A new dev in 2026
If I were to start "coding" or "developing" in 2026 with absolutely no prior experience, I don't think I'd be starting with ANY of that. Instead, I'd learn how to work with Claude or GitHub Copilot - gear up a few agents, write a few paragraphs of prompts and let AI do 95% of the heavy lifting.
There's a problem with this, though. You, the responsible human, will never become a solid developer this way. If that's your aspiration, this isn't the route to it.
You'll become pretty good at prompting, giving instructions and thinking up ideas, but you'll never know whether your changes and additions are actually the right fit.
My predictions for 2030
- Anyone who has been in the tech industry for half a decade or more, with a bundle of experience, good references, an impressive CV and solid problem-solving skills, will be very useful.
- Agencies and consultancies will be able to accept at least four times as much work as they do now (I'm saying 4x because I can now roughly build out and release a relatively simple feature with tests in ~15 minutes, compared to at least ~60 minutes previously).
- Developing in the traditional sense will be less than 10% of what you do (if it isn't already).
- Developers will be applying for jobs like "Senior Prompt Engineer", or the more LinkedIn-ified "Senior AI Consultant", as the norm.
- You'll be responsible for working on ~4 projects/customers/features a day, every day, likely at the same time. And those problems are going to become more and more complex.
- You will need to become an expert in prompting, and even better at reading and understanding code that's being written.
- Junior devs who haven't spent a few years writing code and learning software foundations are at high risk of being left behind.
Don't become lazy
I've reviewed thousands of PRs in my time as a dev, before the AI era and during it.
- I'd say around 50% of those are usually spot on, or are simple and small enough to merge straight away.
- Of the remaining 50%, around 45% will have minor mistakes, UI issues, missing acceptance criteria, or opinion-driven comments that are non-blocking. These PRs may need changes before merging, but it's not as if the dev has completely gone astray without thinking about the issue at hand.
- Then you've got the 5% (1 in 20 PRs by my count), which have glaring issues like N+1 query introductions, unnecessarily bloated queries, incredibly complex logic, test failures, and breaking changes that sometimes nobody catches until they hit production.
Point #3 is the potential downfall of the agentic era for inexperienced developers, or indeed experienced engineers who have become too lazy to think, architect and problem-solve WITH the agent instead of just throwing a problem at it.
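To make that last category concrete, here's the shape of an N+1 an agent (or a rushed human) can quietly introduce, alongside the single-round-trip fix. This is a minimal sketch in Python against a hypothetical `db.query(sql, params)` helper, not any specific ORM or driver:

```python
def get_orders_n_plus_one(db, customer_ids):
    # N+1: one query per customer. Fine with 10 customers in dev,
    # painful with 10,000 in production.
    orders = {}
    for cid in customer_ids:
        orders[cid] = db.query(
            "SELECT * FROM orders WHERE customer_id = %s", (cid,)
        )
    return orders


def get_orders_single_query(db, customer_ids):
    # Fix: fetch every customer's orders in one round trip,
    # then group the rows in memory.
    rows = db.query(
        "SELECT * FROM orders WHERE customer_id = ANY(%s)",
        (list(customer_ids),),
    )
    orders = {cid: [] for cid in customer_ids}
    for row in rows:
        orders[row["customer_id"]].append(row)
    return orders
```

Both functions return the same data and both pass the same tests; the difference only shows up under load, which is exactly why this class of issue sails through a lazy review and lands in production.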
Being able to review your own code before giving it to someone else has always been a given.
We're now living in an era where you likely haven't written all (or most) of the code yourself, and you simply must understand what the agent has written and why.
Guess what happens when there's an issue? You and your team are to blame and you will be the one in the trenches digging in to fix the problems.
You are still responsible, remember
Imagine a scenario where you've built an entire application: you've had an agent do 90% of the graft and you've picked up the loose ends. The customer is happy because they've got what they wanted a month early, and they've got their own customers signing up and paying for the product. Then a few weeks in, something goes completely wrong with the payment provider and nobody knows why at first glance.
They contact you and say, "fix this, it's really urgent". You don't have any deep domain knowledge; you have a high-level understanding of the business and what the code generally does. You've pretty much let the agent take control of the majority of the database design, service architecture, third-party integrations and UI/UX.
You throw the problem at the agent(s), because you can't exactly pin down where the problem starts and ends. Then the agent starts "hallucinating" - basically making up nonsense that is completely irrelevant, and adding to the blazing fire.
A couple of hours later you breathe a sigh of relief because payments are now working locally, you deploy, and it's working again in production. You tell the customer and everyone is happy.
And then... a few weeks on, a similar outage happens, but this time on a 10x bigger scale. Customers can't pay for the product, and the intricate third-party integrations that depend on each other in your business logic aren't working as expected.
Your product manager says "I thought this was fixed?", and you say "yeah, so did I 😅".
Now you're in deep sh!t
You've landed in a situation where the agent has plastered at least two layers over the root issue, which was never understood in the first place because you didn't have the knowledge (or couldn't be bothered) to intervene during the initial development stage.
Now there's a big decision to make. Every minute the app is down, your client is losing money on the SaaS you've shipped. Do we...?
- Plaster over the issue again with manual intervention and an agent, then pray it works
- Spend a few days re-thinking and then re-building the feature, but take down production temporarily
- Propose a completely different solution altogether and build that out instead
Each option means a difficult conversation with your customer, and trust will deplete as a result. And it's not just the customer's trust: your team will start to think of you negatively if you often ship bugs and problems.
Blaming AI for any of these problems is not good enough.
If you go with #1, it might hold forever and you're done with it. But if there's an issue again, the customer will seriously be thinking of upping and leaving, or spreading bad noise about you as a team. Your relationship breaks down, and it's just miserable to work together.
#2 and #3 are sensible, but they take time and the customer is not going to be happy to be losing business after paying thousands for marketing and development. It's actually what should have been done on day 1, but for various reasons, you didn't do it or it was a genuine oversight.
What should I be doing?
Unless you are still architecting and designing solutions (with or without the help of AI), you are going to be in a mess. Until you really understand the business logic, the potential problems and how to translate the ideas efficiently into scalable software, you're going to have bugs that you don't know how to fix for the long-term.
If you are building an important feature that is critical to an application, such as a complex payment system, do it the right way. And the right way for you might be writing 70-80% of it yourself by hand, because you want to understand every single step and be the domain expert when a question or tricky situation comes out of nowhere.
Just because we're in a booming agentic AI era where everyone is deploying way quicker than before doesn't mean it's right all the time. Take two months to build it by hand if you think a one-month back-and-forth with an agent will give you less confidence in the long run. At the end of the day, you will be held accountable. I'm saying all of this from very recent experience, by the way.
The agent is only as good as what you give it. If you understand the problem and know roughly what a fix would look like before prompting, you will catch the less obvious obstacles when they pop up.
We've all been guilty of this, but what you should never do is blindly prompt over the top of something you already know is pretty flaky. Plonking a mansion on the beach will look amazing until the tide comes in.
Understand the feature, design the solution, and build the context to know roughly what the outcome should be, all before sending descriptions for an agent to build from. Prompting well and spotting issues at scale is a skill that's already desirable, but it will become absolutely essential in the next few years, likely across multiple industries outside of technology too.
Don't be the person who blind-prompts and expects everything to stay healthy long-term. Always build with the future and scale in mind.
AI is not at fault for your application's failures, you are.