Akshay Sura - Partner
26 Jan 2026
AI is changing how software gets built.
Teams are moving faster. Costs are dropping. Small groups are doing work that once required entire engineering departments.
That part is real.
What is far less discussed is the risk created when AI is used without oversight, especially when non-developers are shipping production integrations into enterprise systems.
That shift is already happening. Quietly. And it should worry anyone responsible for digital platforms, data integrity, or long-term operational risk.
A few weeks ago, I asked a contact at a large enterprise how many developers they had on staff.
"One," he said.
One. But here's where it gets interesting. Everyone else on the team is building with AI anyway. Data entry people. Interns. They're integrating ERP systems with Shopify, wiring up backend protocols, and pushing code to production. No engineering oversight. No code review. No one is checking whether any of it actually works the way it should.
I told him that sounded risky. AI hallucinates. It writes confident code that breaks in ways you won't catch until something expensive goes wrong.
His response: "Yeah, I know. But nobody's bothering us over here."
That phrase has been rattling around my head ever since.
I wish I could say this was unusual. It's not.
AI has made it trivially easy to generate code, schemas, integrations, and entire workflows. The barrier to building stuff has collapsed. That's genuinely exciting, until you realize that the barrier to building broken stuff has collapsed, too.
The dangerous part is that everything looks fine at first. Data flows. Pages load. The dashboard shows green. And then six months later, you're staring at corrupted records or a security hole that's been quietly leaking data since launch.
AI doesn't understand your business rules. It doesn't know your compliance requirements. It makes assumptions and moves on. Without someone who can actually read the output and ask hard questions, you're flying blind.
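To make that concrete, here's a hypothetical sketch of the kind of code an AI assistant will happily generate for that ERP-to-Shopify price sync. Every name in it (unit_price, shopify_variant_id, sync_price) is invented for illustration; this isn't anyone's real integration or any vendor's actual API. It's the shape of the problem, not a specific incident.

```python
# Hypothetical AI-generated integration code. Looks clean, runs without
# errors, and buries two business-rule assumptions nobody signed off on.

def sync_price(erp_record: dict) -> dict:
    """Map an ERP price record to a Shopify-style variant update."""
    # Silent assumption #1: ERP prices are stored in dollars.
    # If the ERP actually stores minor units (1999 meaning $19.99),
    # every price ships 100x too high and the dashboard stays green.
    price = float(erp_record["unit_price"])

    # Silent assumption #2: every ERP SKU already exists in Shopify.
    # Unmapped records are skipped with no log entry, so the catalog
    # quietly drifts out of sync instead of failing loudly.
    if not erp_record.get("shopify_variant_id"):
        return {}

    return {
        "variant": {
            "id": erp_record["shopify_variant_id"],
            # No currency code, no tax handling, no rounding policy.
            "price": f"{price:.2f}",
        }
    }

if __name__ == "__main__":
    # Demos fine. Wrong for any ERP that stores prices in cents.
    print(sync_price({"unit_price": "1999", "shopify_variant_id": 42}))
```

It runs. It returns data. Nothing throws an error. And if the ERP stores prices in cents, everything on the storefront just got a hundred times more expensive, and no monitor will tell you.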
Speed without clarity is just accelerated technical debt.
The enterprises we work with fall into two camps right now.
Camp one treats AI like a power tool. Useful, but requires training and supervision. They've rolled out internal guidelines. Employees acknowledge risks before they start using AI for anything substantive. Human review is mandatory before anything hits production.
Camp two is the "nobody's bothering us" crowd. They're shipping fast, costs are down, and everything seems fine. For now.
What made that conversation especially concerning was learning that multiple divisions inside the same parent company were independently building AI-generated systems. No coordination. No shared standards. No one is comparing notes on what's actually being deployed.
That's how you end up with five incompatible integrations, three security vulnerabilities, and a remediation project that costs more than doing it right would have.
This same dynamic plays out in platform migrations. Maybe even more so.
The pitch is seductive: "We'll use AI to migrate your legacy CMS to the cloud for a fraction of what the other guys quoted."
And technically, it works. You can use AI to lift components, templates, content models, and the whole stack. Automate the grunt work and cut your timeline in half.
But here's the question nobody asks: what are you actually getting?
If your current system is a mess (outdated content models, rigid components, workflows that fight your marketers at every turn) then a like-for-like migration just moves the mess somewhere new. Congratulations, you're in the cloud now. You still can't personalize. Your authors still hate the CMS. Nothing's actually better.
You didn't transform. You relocated.
AI works brilliantly when your foundation is solid. When your architecture is clean, and your processes make sense, AI accelerates everything. But AI amplifies whatever's already there. It doesn't fix it.
We've seen migrations quoted at $900K get undercut by shops promising $100K using "AI accelerators."
On paper, that's a no-brainer. In practice, it's worth asking what you're actually buying.
Are they rearchitecting anything? Modernizing your content model? Aligning the platform with how your business actually operates today?
Or are they just copying your problems to a new server, faster?
Most buyers don't know what questions to ask. They hear "AI-powered" and "accelerator" and assume the outcome is guaranteed. It's not. I've seen the aftermath: teams stuck with the same limitations they had before, except now they've also burned through their budget and goodwill.
Here's what really keeps me up at night: when AI does the work, clients often have no idea what's actually being built.
Traditional delivery has a paper trail. Code reviews. Architecture docs. Sprint demos where someone explains what's changing and why. You can see the trade-offs being made.
With AI-heavy delivery, especially from shops that aren't transparent about their process, you're signing a six-figure contract without knowing whether anyone's actually reviewing what gets generated. Is it maintainable? Secure? Compliant with your industry regs?
Most enterprises can't answer those questions. They're trusting that someone, somewhere, is checking the work.
A nail gun is faster than a hammer. You still don't hand it to someone who's never framed a wall.
I want to be clear: I'm not against accelerators. Starter kits, frameworks, shared component libraries. These save time and reduce risk when they're built well and you own the code at the end.
Most companies aren't as unique as they think they are. Standard patterns exist because they work.
The problem is that AI is often positioned as a substitute for judgment rather than as a tool that amplifies it. When "we use AI" becomes the answer to every question about how work gets done, that's a red flag.
If you're evaluating partners for a migration or implementation, here's what I'd want to know:
What exactly does AI do in your process? "We use AI" isn't specific enough. What's automated? What's reviewed by humans?
Who checks the output before it ships? If no one does, or if "the AI validates itself," walk away.
What happens when AI gets it wrong? Because it will. How do you catch it? How do you fix it?
Do we own the code? Some accelerator models lock you into subscriptions. Make sure you're not trading one dependency for another.
How is this different from a lift-and-shift? If they can't explain what's actually improving, they're just moving your problems.
What's the review process for offshore or distributed teams? AI makes it easy to generate code fast. Fast doesn't mean good.
Where do humans take over? What are the guardrails? Who decides what risks are acceptable?
Vendors who can answer these clearly are rare. They're also the ones worth hiring.
AI is a powerful tool. We use it. Our clients use it. It's changed what's possible.
But tools require judgment. The organizations that come out ahead won't be the ones who moved fastest. They'll be the ones who moved deliberately. Who asked hard questions. Who invested in their foundations instead of papering over problems with automation.
Transformation isn't about copying the past into the future faster. It's about actually rethinking how the work gets done.
"Nobody's bothering us" might feel like freedom. In enterprise software, it's usually the quiet before an expensive storm.
Konabos helps organizations navigate digital transformation with senior architectural judgment. Sitecore, Kentico, headless CMS, and the hard conversations about when AI helps and when it doesn't. If you're facing a platform decision and want a partner who asks the right questions, let's talk.

Akshay is a nine-time Sitecore MVP and a two-time Kontent.ai MVP. In addition to his work as a solution architect, Akshay is also one of the founders of SUGCON North America 2015, SUGCON India 2018 & 2019, Unofficial Sitecore Training, and Sitecore Slack.
Akshay founded and continues to run the Sitecore Hackathon. As one of the founding partners of Konabos Consulting, Akshay will continue to work with clients to lead projects and mentor their existing teams.