Akshay Sura - Partner
2 Mar 2026
Last month, I watched a developer write a prompt instead of writing code.
Not pseudocode. Not a technical design document. A prompt.
"Build a Sitecore rendering that handles conditional personalization with fallback logic."
Within seconds, the model generated hundreds of lines. It compiled. It passed QA. It was serialized and deployed before lunch.
Here is what stuck with me: six months from now, if that rendering breaks, who understands it? Did we document the prompt that built it? And what happens when the same prompt produces different output the next time?
That scenario is not rare anymore. Pretty much every developer in our space is using AI. It is one of the most rapidly adopted technologies in the history of the industry. And that creates an underlying set of problems that we all need to recognize.
For years, Sitecore implementations were predictable. We defined templates. We authored renderings. We configured pipelines. We built personalization rules. The platform did exactly what we told it to do.
Were there bugs? Sure. Did we find and fix them? For the most part, yes. We knew who built it. We could look at the code and trace exactly what someone did. If it broke, we knew who to call.
Today we are operating in a very different mode. AI scaffolds components. It builds personalization logic. It generates content. It suggests patterns based on traffic. And that changes something fundamental.
Authorship becomes shared: it could be a human, it could be AI. Responsibility becomes distributed: no single person or group is accountable for the output. When authorship and responsibility are shared and ownership is not clearly defined, things get risky. Over time, it gets worse, because more of what is running in production was touched by AI and less of it was directly built by a person who fully understood it.
There is a term circulating in the industry right now: vibe coding.
You specify a prompt in plain English. It spits back code. You tweak it. It generates documentation. You do not even need to know the language anymore. You do not need to know React or Next.js. You just keep talking to the agent until it compiles and looks right. Then you deploy it.
In some organizations, non-technical users are deploying directly to production. People from marketing, from data analytics, are building sites and pushing them live. And this is where it gets both interesting and scary.
The bar has quietly moved from "I understand this implementation" to "it works."
And the response I keep hearing is: if it looks like what I want and acts like what I want, what is the problem?
Here is the problem. Over time, you accumulate compounding technical debt because you do not understand what has been built. AI-generated configurations enter your pipeline without review and become part of your architecture. Partial scripts get committed without anyone checking whether they are sound. Integration logic that nobody understands sits in production because it is 3,000 lines of generated code. Security assumptions and authentication models ship without ever being deliberately designed.
I recently wrote about a situation where individuals from a division of a large enterprise were using AI to integrate an e-commerce system into their ERP. One developer. No QA. Their reasoning was the same: it works. The company is letting us do it. When I asked what happens six months from now, the answer was "it does not bother me because it works."
I have seen teams deploy AI-generated serialized items that looked harmless. They passed local testing. They deployed cleanly through Sitecore CLI serialization or Unicorn without raising flags. Weeks later, publishing failed due to subtle conflicts in the content tree. We spent two days tracing the issue and nobody knew what prompt was used to build it. Nobody knew who did it. We only knew who checked it in.
That is the new shape of technical debt in Sitecore. It arrives faster. It hides better. It costs the same or more to fix.
Now look at this from the marketing side, where the game completely changes.
AI tools built into CMS platforms enable marketers to generate content at scale. Campaign structures, landing pages for multiple regions, translations, content variants, all in minutes. I have watched teams light up the moment they realize they do not need a developer to build a page or create content variations.
But here is what happens. Say you generate twelve regional landing pages. Three of them reference a feature that is not available in those markets. Nobody catches it because the output looks polished. The pages go live. The content is clean and wrong.
If AI increases your content output five times but your review process stays the same, something has to break. Workflow states, approvals, and publishing constraints still matter. Localization review still matters. We have not changed our governance models to match the velocity that AI enables.
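The velocity mismatch is easy to quantify. Here is a toy model, using invented weekly numbers, of how an unreviewed backlog grows when output scales five times but review capacity stays flat:

```python
# Minimal sketch with hypothetical numbers: what happens to a review
# queue when AI multiplies content output but review capacity does not.

def review_backlog(weeks, produced_per_week, reviewed_per_week):
    """Return the unreviewed-item backlog at the end of each week."""
    backlog = 0
    history = []
    for _ in range(weeks):
        backlog += produced_per_week - reviewed_per_week
        backlog = max(backlog, 0)  # reviewers can't go below zero
        history.append(backlog)
    return history

# Before AI: 20 items produced and 20 reviewed per week -- backlog stays flat.
print(review_backlog(4, 20, 20))   # [0, 0, 0, 0]

# With a 5x output increase and unchanged review capacity:
print(review_backlog(4, 100, 20))  # [80, 160, 240, 320]
```

The backlog is not a one-time cost; it grows linearly every week the mismatch persists, which is exactly why the governance model has to scale with the output.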
This one is particularly interesting. I was recently working on a demo where I was trying to regenerate a homepage using an AI-driven CMS I built. No matter how many guardrails and guidelines I provided, it hallucinated content. It pulled in information from other agencies, other people's credentials. I kept telling it to use only what exists on the site, but it required constant correction.
Now think about AI-suggested personalization rules inside Sitecore Personalize. "If a visitor views the pricing page twice, show a 15 percent discount." Sounds like a normal rule any of us would create. But if that pattern was not validated against real CDP segmentation data, if the model just decided it seemed plausible, it is making decisions on its own.
Imagine that rule runs for an entire weekend. 40,000 visitors see a discount nobody approved. If 100 people buy, your company just lost 15 percent margin on 100 orders. That is not a personalization bug. That is a revenue event.
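The math behind that weekend is simple. A back-of-the-envelope sketch, where the average order value is an assumption for illustration:

```python
# Illustrative impact of the unapproved weekend discount described above.
# The average order value is an assumed figure, not real data.

average_order_value = 200.00   # assumed AOV in dollars
orders = 100                   # orders placed while the rule was live
discount = 0.15                # the unapproved 15 percent discount

revenue_given_away = orders * average_order_value * discount
print(f"Margin given away: ${revenue_given_away:,.2f}")  # $3,000.00
```

Three thousand dollars over one weekend, from one rule nobody approved, on one site. Multiply by every AI-suggested rule running unreviewed and the exposure stops being hypothetical.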
AI-assisted personalization logic has to be reviewed with the same seriousness we apply to financial systems. Every time.
Everything comes down to governance. In every enterprise Sitecore implementation, you have at least three groups: engineering cares about architecture, marketing cares about velocity, and legal cares about risk.
AI can accelerate all three. It can accelerate architecture, marketing velocity, and the risk that comes from automated decisions. The problem is AI moves faster than teams can align with each other.
If engineering adopts AI-assisted development without legal knowing about it, compliance gaps form. If marketing scales content without architectural oversight, template sprawl and content debt accumulate. If personalization logic runs automatically without executive visibility, revenue and privacy risk build silently.
Governance does not collapse dramatically. It erodes quietly. And by the time you have an incident, the risk is already embedded in the architecture. If current patterns continue unchecked, the compounding effects will surface sooner than most teams expect.
The pushback is predictable. "AI makes us faster." "Our competitors are using it." "It is good enough." But faster does not mean better. Some competitors may be accumulating risk they have not measured yet. And AI is good enough to generate, but it is not good enough to validate itself. The SDLC exists for a reason. AI does not replace that discipline. It raises the stakes.
This is not about stopping anyone from using AI. This is about stopping unsupervised AI.
Human review is non-negotiable. If your answer is "I cannot review this much code," then do not generate that much code. No AI-generated artifact ships without validation. Treat AI-generated pull requests the same way you treat human-authored ones.
If AI touched it, someone owns it. A senior developer, a lead, it does not matter who. Owning it means you understand the component, you reviewed it, and you signed off. If it cannot be explained, it does not ship.
Governance must scale with output. If velocity increases five times, review must scale proportionally. If the volume exceeds your capacity to review, reduce the volume. This applies equally to engineering, marketing, and legal.
Verify before you trust. Trust is earned through validation. The more you validate, the more you can trust. The more you trust, the more you can scale. But you never stop validating. Run penetration tests. Maintain code coverage thresholds. Enforce serialization discipline.
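One way to make "if AI touched it, someone owns it" enforceable is a commit-message gate in CI. This is a hypothetical sketch: the AI-Assisted and Reviewed-By trailer names are conventions invented for this example, not part of any git or Sitecore standard.

```python
# Sketch of a pre-merge check: any commit that declares itself
# AI-assisted must also carry a named human reviewer. The trailer
# names (AI-Assisted, Reviewed-By) are assumed team conventions.

def validate_commit(message: str) -> bool:
    """Reject AI-assisted commits that lack a named human reviewer."""
    trailers = {}
    for line in message.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            trailers[key.strip().lower()] = value.strip()
    ai_assisted = trailers.get("ai-assisted", "no").lower() == "yes"
    has_reviewer = bool(trailers.get("reviewed-by"))
    return (not ai_assisted) or has_reviewer

ok = "Add pricing rendering\n\nAI-Assisted: yes\nReviewed-By: Jane Doe"
bad = "Add pricing rendering\n\nAI-Assisted: yes"
print(validate_commit(ok), validate_commit(bad))  # True False
```

The point is not this particular script; it is that ownership becomes a checkable fact in the pipeline rather than a verbal agreement that erodes under deadline pressure.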
Through working with clients and talking to community members, I see consistent patterns in how Sitecore teams adopt AI. The difference between chaos and leverage is not the tooling. We all have access to the same tools. The difference is maturity.
Here is a five-level framework that reflects what I am actually seeing in the field.
Level 1: Isolated experimentation. Developers and marketers use AI individually. There are no formal policies or shared documentation of AI usage. AI feels like a personal productivity shortcut. Risk is invisible because nobody is measuring it.
Level 2: Embedded usage without governance. AI becomes part of daily work. Developers rely on it for scaffolding. Marketers generate content variants at scale. But governance has not changed. Review processes are the same as they were before AI. Serialization discipline is assumed, not enforced. Most Sitecore teams are here today.
Level 3: Defined guardrails. This is the inflection point. Organizations define AI usage guidelines, named ownership for AI-assisted artifacts, review gates in CI pipelines, personalization validation standards, and workflow discipline for AI-generated content. If AI touches a component, someone owns it. If AI drafts content, it passes through workflow. If AI suggests a rule, it is validated against real data. This is where stability returns.
Level 4: Measured governance. Governance becomes observable. Teams track AI-assisted commits. They audit personalization performance. They monitor incident correlation with AI-generated artifacts. Governance policies are reviewed and updated regularly. AI becomes leverage, not liability.
Level 5: Institutionalized responsible AI. Responsible AI becomes part of the operating model. Engineering, marketing, and legal are aligned. Training is mandatory. Sensitive data boundaries are defined and enforced. Some organizations deploy private AI instances inside their network. One of our pharma customers operates at this level. Every team member completes three to four hours of responsible AI training, sensitive information is restricted from public AI tools, documentation is labeled as public or restricted, and every build still goes through penetration testing, intrusion testing, full test case execution, and code coverage above 80 percent.
Few organizations are at Level 5 today. But every enterprise Sitecore team should be aiming for Level 3 as a minimum.
AI is reshaping how we operate around Sitecore. But Sitecore did not become less important because of AI. It became more important.
Architecture matters more when component generation is effortless. Serialization discipline matters more when configuration can be scaffolded instantly. Workflow governance matters more when content scales automatically.
What surprised me most this past year is what people with zero technical knowledge are able to produce. I have had customers come in and say "I built this site, can you decommission the old one?" People who sit in spreadsheets all day are building landing pages using Replit, Claude Code, and Lovable. It took them two days and they did not have to ask anyone.
That is incredible. And it is exactly why governance matters. AI is not a threat. Unsupervised AI is. And that makes our role as Sitecore practitioners more critical than ever.
As someone who started the Sitecore Hackathon, helped build the Sitecore Slack community, and was part of launching SUGCON North America, I believe this conversation belongs to all of us.
We cannot treat AI adoption as just another feature announcement. Every major architectural shift in the Sitecore ecosystem required community discipline. Helix patterns. Headless transitions. Composable architecture. DevOps maturity. AI governance is the next one, and this time the risk accumulates faster.
We need this discussion in user groups. We need shared patterns for AI governance in Sitecore. We need to define what responsible AI looks like in our ecosystem, not from the vendor, but from the practitioners who implement it every day.
Sitecore is providing the right tools. The agentic capabilities, the marketing APIs, the agent marketplace. That is a huge step forward, but it is not Sitecore's job to set governance for every company. That responsibility sits with us. With the teams, the architects, the MVPs, and the community members who shape how these tools get used.
The teams that thrive will not be the ones that adopt AI the fastest. They will be the ones that adopt it deliberately. And that deliberate maturity is something we can build together.
What is vibe coding in Sitecore development? Vibe coding refers to the practice of using AI prompts to generate code without fully understanding the output. In Sitecore, this means developers or non-technical users can scaffold renderings, templates, and integration logic through natural language prompts. The risk is that AI-generated artifacts enter Sitecore CLI serialization and CI/CD pipelines without meaningful review, creating technical debt that is difficult to trace when issues surface later.
How does AI-generated content create risk in Sitecore? AI tools can generate content variants, landing pages, and translations at scale. The risk emerges when output volume exceeds a team's capacity to review. Regional content may reference features unavailable in certain markets, translations may contain inaccuracies, and personalization rules may run without validation against real CDP segmentation data. Without adjusting workflow states, approvals, and publishing constraints to match AI-driven velocity, governance gaps form quickly.
What is an AI maturity model for Sitecore teams? An AI maturity model is a framework for assessing how deliberately a Sitecore team governs its use of AI. It ranges from Level 1 (isolated experimentation with no policies) through Level 3 (defined guardrails with named ownership, review gates, and validation standards) to Level 5 (institutionalized responsible AI with mandatory training, data boundaries, and full alignment between engineering, marketing, and legal). Most Sitecore teams are currently at Level 2, where AI is embedded in daily work but governance has not changed.
Can AI replace Sitecore developers? AI can accelerate development, content creation, and personalization, but it cannot validate its own output. Security assumptions, authentication models, serialization conflicts, and architectural decisions still require experienced practitioners. Teams that reduce senior expertise in favor of AI-driven velocity risk accumulating compounding technical debt that surfaces months later.
Who is responsible for AI governance in a Sitecore implementation? Governance responsibility sits with the implementing organization, not the platform vendor. Sitecore provides the tools and capabilities, but individual companies must define AI usage guidelines, ownership models for AI-assisted artifacts, review processes, and data boundaries. This requires alignment across engineering, marketing, and legal teams.
What are the minimum guardrails for using AI responsibly in Sitecore? At minimum, teams should enforce human review for all AI-generated artifacts before production deployment, assign named ownership for any AI-assisted component or content, scale review processes proportionally to AI-driven output volume, and validate AI-suggested personalization rules against real data before activation. These guardrails do not slow teams down. They prevent the accumulation of unreviewed risk.
This article is based on a presentation I gave at the SUGCON Brazil User Group. If your team is navigating AI adoption in Sitecore and you want to talk through governance frameworks, reach out on LinkedIn or find me on the Sitecore Slack.

Akshay is a ten-time Sitecore MVP and a two-time Kontent.ai MVP. In addition to his work as a solution architect, Akshay is also one of the founders of SUGCON North America 2015, SUGCON India 2018 & 2019, Unofficial Sitecore Training, and Sitecore Slack.
Akshay founded and continues to run the Sitecore Hackathon. As one of the founding partners of Konabos Consulting, Akshay will continue to work with clients, leading projects and mentoring their existing teams.