
AI That Actually Works with Sitecore: Introducing the MCP Server

Konabos Inc. - Konabos

10 Sep 2025

Note: The following is the transcription of the video produced by an automated transcription system.

Hey everyone, thank you for joining the webinar today. It's about the AI that actually works with Sitecore: introducing the MCP server. I'm here with Anton, and it's going to be demo fashion. What's happened is that Anthropic is down this morning, so part of the demo might not really work; obviously we're dependent on AI agents to get the work done. As you can see, the Anthropic status page says it's down: it's giving errors intermittently and not letting us do things. So we'll let Anton explain the concepts as much as he can, and when we get to the demo parts, it might or might not work. With that, take it away.

Anton: Yeah, okay. In this case I will start with the presentation, and we will hope that the outage gets fixed, because the Anthropic part of the demo is about 30 minutes away, so probably it will be resolved by then. For now I can't even get to my Anthropic API keys, and that's why part two of the demo may not be available. But I prepared a lot of slides and a lot of content, so even if the demo fails, I will still be able to tell you a lot of interesting things.

So let's start. Today we will talk about artificial intelligence, large language models, Model Context Protocol, Sitecore, and the Sitecore Model Context Protocol server: the AI that really works with Sitecore. By the way, the previous Konabos webinar with Marcelo finished with a question about MCP servers in Cursor; today I will partially answer that question.

Let me introduce myself. My name is Anton Tishchenko, co-founder of the boutique Sitecore development company xdst. I have been in Sitecore development for 12 years already, and I started as a Sitecore employee. I have been recognized as a Sitecore MVP seven times in a row, starting from 2019, and you have probably heard about me if you have read something about Sitecore and Astro, or about the Sitecore MCP server, which we will talk about today.

Everything started in April this year. I was a user of Visual Studio Code and GitHub Copilot at that time, and they introduced support for Model Context Protocol. I tried it with databases, got this wow moment, and immediately wrote this message. At that time no one was working on Sitecore MCP support, neither Sitecore nor the community, so I decided to do it myself.

However, everything really started much earlier. I am an early adopter of AI techniques for working with code. I used VS Code with different AI tools (GitHub Copilot, Continue, Cline, Roo Code), and I mostly used them as advanced autocomplete. They were already awesome in 2024, but awesome only for generic software development; they were useless for Sitecore. They either didn't know anything about Sitecore or hallucinated a lot.

So I decided to fine-tune an existing open-weight large language model. I took Qwen Coder and prepared a dataset; the dataset contained questions and answers from Sitecore Stack Exchange, plus Sitecore documentation and some blog posts. I equipped myself with a powerful GPU and ran fine-tuning. The results were mediocre. The model became more Sitecore-aware, but there were problems. Local open-weight models are always worse than the large language models provided as services; Qwen Coder could not compete with Anthropic Claude or ChatGPT. Small models are not smart enough, and hosting and fine-tuning big models is unreasonably expensive and makes sense only for big companies. Another problem is the training data cut-off: you need to retrain the model if new data appears, and you need to retrain again if you move to a new base model. Knowing all these problems and the level of effort, I stopped that experiment, but you can still find the datasets and these models on Hugging Face.

So what about the large language models provided as services? They were very good at generic software development in 2024, much better than open-weight models. But what about Sitecore? The same story: they were useless for Sitecore, very proactive, but tending to hallucinate. And as I already had the dataset from my Qwen Coder training, I used it to create a custom ChatGPT.
I provided prompts forcing less hallucination and the use of information from the dataset, and prepared a custom GPT. It's some kind of retrieval-augmented generation. The large language model became better, much better, with Sitecore tasks. But even when the large language model was good in some situation, it felt wrong. You want an assistant; you get an AI assistant, but you feel like the slave of the AI assistant. It doesn't do anything; it tells you what to do. That felt wrong, and it was obvious that the next stage for large language models would be tools.

But at that time it wasn't so easy with tools. If you wanted to integrate Sitecore, you were able to integrate Sitecore, but you had to write the integration. If you needed to integrate Jira, you were able to integrate Jira, but you had to write the integration. And so on: for design, Figma; for messaging, Slack; for browsers, Firefox; for databases. By the way, that was the reason why the LLMs didn't have hands on the first few slides. Still, it was totally possible to integrate any external system as tools. Even before Model Context Protocol you were able to add Sitecore tools to your agents (although I haven't seen any examples of how to do it), and you were even able to combine them with other tools like Jira. But once you pick another large language model, let's say Anthropic Claude, you need to repeat it all again: you need to write a separate integration for Sitecore and an additional integration for Jira. You probably already see where I'm going. It's the M times N problem: we have M large language models and N tools, and in order to integrate everything with everything, we need to write M times N integrations.
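To put rough numbers on that scaling (the counts below are just an illustration, not figures from the talk):

```latex
\underbrace{M \times N}_{\text{point-to-point integrations}}
\quad\longrightarrow\quad
\underbrace{M + N}_{\text{with MCP: one server per tool, one client per model}}
\qquad
\text{e.g. } 5 \times 6 = 30 \text{ integrations without MCP, versus } 5 + 6 = 11 \text{ with it.}
```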


That's a huge amount of work, and it is no surprise that tools didn't become popular earlier: no one wanted to do that amount of work. Model Context Protocol solves this issue. It standardizes the way large language models work with the external world. Now you write your integration just once, and it works the same way with any large language model; in some sense, the large language models' hands are now standardized.

Model Context Protocol describes resources, tools, and prompts. Resources are something static: something that doesn't require any computation and doesn't change application state. Prompts are advanced queries; when you don't want to type long messages telling the large language model what to do, you should use prompts. And tools are things that either require computation or change application state. Now we can write an integration for our system and use it with all large language models. However, that is the standard; reality differs. In reality, the major part of Model Context Protocol clients doesn't support prompts and resources. That's why we decided to make everything a tool. Even things that should be resources, for example documentation, are tools in our case, because this way our Model Context Protocol server is more universal and works with the major part of clients.

So everything started in April this year, and a lot has been done since then. There are more than 100 tools, actually 146. We cover the whole Item Service API, the GraphQL API, and all PowerShell commands. We have two documentation tools, for Sitecore PowerShell and Sitecore CLI. Everything is built using GitHub Actions and delivered as an NPM package and as Linux and Windows Docker containers. The Sitecore MCP server supports both XM/XP and XM Cloud; all you need is to enable the APIs so the Sitecore MCP server can access your Sitecore, so it works with pretty much all Sitecore versions. By the way, implementing the Sitecore MCP server was the best way to learn all the Sitecore PowerShell commands. I literally tried every Sitecore PowerShell command, and how many devs have done that before? So, taking the chance, I want to say thanks to the Sitecore PowerShell module and all the people who worked on it, especially Adam and Michael.

One hundred plus tools is a big number, but what's included? There are a lot of tools to get items: by ID, by path, by query, by GraphQL query, by search. You can create, update, and delete items. You can perform advanced operations with items: publish them, assign workflows, run workflow actions, find references and referrers, assign templates, and modify templates by adding, changing, or removing base templates. You can create and update language versions and numbered versions. You can work with presentation: add a rendering to a page, change a rendering's data source, change rendering parameters, and create, read, update, and delete layouts, renderings, and placeholders. Large language models also get advanced tools to work with security: they can create, read, update, and delete domains, roles, and users, and they can change item access rules to configure who can read and who can write items. In order to troubleshoot, you get access to the Sitecore logs, so the large language model can read the logs, see that something is wrong, and suggest a fix.
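To make the word "tool" concrete, here is a minimal sketch of how a single tool can be exposed with the official TypeScript MCP SDK. This is not the actual Sitecore MCP server code; the tool name, the Item Service route, and the environment variables are illustrative assumptions only.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// A minimal MCP server exposing one hypothetical Sitecore tool.
const server = new McpServer({ name: "sitecore-sketch", version: "0.0.1" });

server.tool(
  "get-item-by-path", // illustrative tool name, not the real server's
  { path: z.string(), language: z.string().optional() },
  async ({ path, language }) => {
    // SITECORE_HOST and SITECORE_AUTH_COOKIE are placeholder settings; the
    // Item Service route below may differ depending on your Sitecore setup.
    const url =
      `${process.env.SITECORE_HOST}/sitecore/api/ssc/item/?path=${encodeURIComponent(path)}` +
      (language ? `&language=${language}` : "");
    const res = await fetch(url, {
      headers: { Cookie: process.env.SITECORE_AUTH_COOKIE ?? "" },
    });
    const item = await res.json();
    // MCP tools return "content" blocks that the client feeds back to the model.
    return { content: [{ type: "text", text: JSON.stringify(item, null, 2) }] };
  }
);

await server.connect(new StdioServerTransport());
```

The real server registers 146 tools of this kind; the point is only that each tool is a named, typed function that the client advertises to the model.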
And there are a few tools for documentation: one for Sitecore CLI and one for Sitecore PowerShell. For Sitecore CLI we decided to make it documentation, because clients are currently quite good with the terminal and can call Sitecore CLI without tools; they just need guidance, rules for how to call the CLI. For PowerShell, sometimes when you need bulk processing it can be more efficient to run a script and then pass the result through the Sitecore MCP server, rather than calling multiple Sitecore MCP tools one by one. That's why we also have a tool for the Sitecore PowerShell documentation.

So there are a lot of things you can use. And who can use it? Of course, the first group is developers, because AI adoption among developers is higher than in other groups, but other groups can use it too: for testing, for translation, for content creation. Let's look at a few use cases. The most obvious one is translations. I think the major part of translation in the world is already done by AI. But you may ask: why do you need Model Context Protocol for it? There are a few reasons. The first is that you don't need to write a single line of code. You just add a large language model and configure the MCP server, and a large language model equipped with the Sitecore MCP server is smart enough to find all the data sources for your page and translate them.


Another advantage of using Model Context Protocol is the freedom to choose your large language model; in some sense it's compatibility, remember that buzzword that was very popular a few years ago. With MCP you can choose the large language model that works better for you: Anthropic Claude Opus or Sonnet, OpenAI GPT-4 or GPT-5, or Google Gemini. If you're concerned about privacy, you can use self-hosted models like Qwen Coder or gpt-oss. And you are not stuck with your choice: a new model appears, and you can switch to it the same day. Now you choose how to translate, not your translation service provider. And if you are not happy with the large language model's translation quality, you can add another MCP server that provides translation services, for example a DeepL MCP server. All of this you get at no extra cost: you pay only for the large language models, with no additional fees to any service providers.

Another area where I expect Sitecore MCP adoption is software development. The first example: you can easily scaffold components. There is a great article from Jeroen Breuer (it will be in the links at the end of the presentation) where he wrote just one prompt, and with that one prompt he was able to create the rendering itself, the rendering item, the data source template item, and the rendering parameters item, then assign the rendering to the page and prepare test content for it. And you can improve on this: you can add two Model Context Protocol servers, the Sitecore Model Context Protocol server and the Figma Model Context Protocol server, and specify which frame in Figma should be used as the design. In this case you will get not just some abstract design that follows a specification; you will get your design. You can repeat that for all your components, and that's how you can scaffold a whole website in a few days. That's incredible speed for Sitecore websites, and this technique allows fast prototyping. Probably you wanted to try something before but didn't dare, because it might take a lot of time; now you can easily try it, and you will not burn the whole project budget.

Another way to use Sitecore MCP is rubber duck debugging, but now your rubber duck isn't silent. It can check logs, check items, check code, and maybe provide you with fixes or some ideas. And another way to use the Sitecore MCP server is everything around content: authors can create content, search for content, and modify existing content. There is another great article from Jeroen Breuer about content migration. He used two Model Context Protocol servers, one for Sitecore and a second for Umbraco, and he was able to migrate content from Sitecore to Umbraco and from Umbraco to Sitecore without writing a single line of code. And it doesn't have to be Umbraco; it can be any external data source. Another case: if you are a quality assurance engineer, you can easily create a lot of test content to test different cases, different renderings, different languages, different amounts of content on the page. These are just a few examples; you are limited only by your imagination and your needs. But Model Context Protocol isn't a magic wand that solves everything, so let's talk about a few cases where it doesn't work well.
The first example is Sitecore GraphQL. If your website is using Sitecore SXA (and most probably it is, because it has been the recommended way to build Sitecore websites for the last few years), you will get a huge GraphQL schema. The schema will be bigger than 200k tokens, and it doesn't fit into the context window of many large language models. I haven't found an easy way to split the GraphQL schema into parts for your templates and system templates. It's possible, but it requires modifying Sitecore, and I didn't want that, because I wanted to keep the Model Context Protocol server Sitecore-agnostic and working with any Sitecore. So if anyone from Sitecore is watching this webinar, please register this as a feature request: it would be nice to have the full schema as it is now, plus a partial schema related only to your project templates. It would open up additional possibilities to make Sitecore even more AI-ready. In the meantime, you can use large language models that have a larger context window, for example Google Gemini with its 1-million-token context window, but that still may not be very efficient and could be relatively expensive.

Another problem we introduced ourselves: I decided to write too many tools. The idea was to cover the whole Sitecore item API, the GraphQL API, and all Sitecore PowerShell commands, and we ended up with too many tools. For example, the large language model can get an item via GraphQL, via the Item Service, via PowerShell, by ID, by search criteria, by query, by path, and it starts to use all the tools at once. I call it AI procrastination, so be aware of this problem. In a second phase we plan to split the Sitecore MCP server into a full variant and a basic variant. But it's not a blocker for you, because you can configure which tools to use. I think all Model Context Protocol clients support this; at least, I haven't come across a client that doesn't allow it. So if you work with content, leave just the Item Service API tools; if you don't use Google Gemini, disable GraphQL; if you work with presentation, enable the presentation tools; if you work with security, enable the security tools; if you are working on bug fixing, enable the log tools. Use only what you need, and you will get really great results.

Another challenge is Sitecore complexity. Sitecore is complex. For example, let's take presentation. You can configure presentation using page designs and partial designs, you can configure it on standard values, you can configure it on branch templates, and part of the renderings can sit on the final layout while part sits on the shared layout. It's not easy, and in order to get more from Sitecore and AI right now, do not overcomplicate your website. If you want AI to be efficient, make your website as simple as possible. Or you can wait: I'm an AI optimist, and I think we will eventually get to the point where large language models are good even with very complex tasks.

So how can you run it? You have multiple options. If you want to run it locally, the best way is to use the NPM package. If you host your Sitecore in containerized environments, there are Windows and Linux images to start your container. And if you want to change or tune something for your needs, everything is available as source code on GitHub: you can fork it and change it for yourself, and if you think you did something valuable, I will be glad to receive pull requests.

Now, what client should you use with Sitecore MCP? The easiest start, of course, is for developers, because if you are a developer you have probably already tried Cursor or Visual Studio Code with GitHub Copilot, or at least you have heard about them. If you are an advanced vibe coder, you can use Claude Code; and if you are not a developer, you can also use Claude Code. So my personal recommendation: use Anthropic models as your large language model (Claude Opus or Claude Sonnet at this time), use Cursor if you are a developer, and use Claude Code if you are not a developer. But remember that these recommendations are current as of September this year; everything changes too quickly, and next month there will probably be something better. And my favorite usage of Model Context Protocol is not inside the integrated development environment; it's integrating MCP into an agent, and that is one of the use cases I wanted to show you: an n8n integration. We will try it; hopefully Anthropic will be up, and if it's not, I will just describe how it works.

So it's time for the demo. Let me check the Anthropic status. Bad news, bad news, but let's move on. Since Anthropic is still down, let me start with Cursor. As I already said, the easiest way to start with the Sitecore MCP server is to use Cursor. To get started, I recommend using our GitHub repository. It's basically a fork of the official Sitecore demo; the only difference is that it contains a demo site based on Next.js and on Astro, because, you know, I'm a fan of Astro. There is a branch where everything is configured for you to start with Model Context Protocol. You need to fork it, run init.ps1, and then run up.ps1, which I actually did before this demo. So I started Sitecore locally from this repository, and the repository already has a sample for Model Context Protocol. Here you can see that we have configured the MCP server with access to GraphQL, to the Item Service, and to PowerShell, and we have a .cursor folder with the configuration for this MCP server; it points to the MCP server that was started in Docker. I started Sitecore before this webinar, and you can see that the MCP server is up and running.
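For orientation, a Cursor MCP configuration of this kind normally lives in a .cursor/mcp.json file. The snippet below shows the typical shape as a TypeScript object for readability; the real file is JSON, and the server name, URL, port, and the commented-out package name are placeholders, not the repository's actual settings.

```typescript
// Rough shape of a .cursor/mcp.json entry (illustrative only).
const cursorMcpConfig = {
  mcpServers: {
    sitecore: {
      // Points at an MCP server already running in Docker, as in the demo;
      // the host, port, and path are placeholders.
      url: "http://localhost:3001/mcp",
    },
    // Alternative: launch the server over stdio from the NPM package
    // (the package name below is a placeholder).
    // "sitecore-local": { command: "npx", args: ["-y", "<sitecore-mcp-package>"] },
  },
};
```

Cursor also lets you enable or disable individual tools from its MCP settings UI, which is what the next step of the demo relies on.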
Now, if I go to Cursor settings and to the MCP integration, I can see the Sitecore MCP tools. Here I can enable or disable them, and here I can select the tools I want to work with. I selected just the basic tools related to the Item Service. Let's try them in action. Let me start a new chat and ask the large language model something, for example: what Sitecore sites are available? It may try to use the source code; if it does, we will stop it and say: please use tools for this question. And you can see that it started to call the Sitecore MCP tools. It gets an item by path, then it gets another item, then children. From time to time it calls tools that are not really required, but eventually it gets there. The result: we have the Basic, Financial, and Services websites, with some details, languages, and key features. Let's go to Sitecore and check. Here I have Sitecore; let me start the Content Editor. Yes, we have three websites: Basic, Financial, and Services. Let me open the home page of the Services website. There is one on Astro and a second on Next.js, and they are the same.

Now let's say I want to change this text on this page. It is saved somewhere in items, but I don't want to look for the item. I can just go to my chat with the large language model and say: please change the text on the home page of the Services Sitecore website, from "dream project" to, let's say, "your next project". Let's check: it started to get items. Let's see whether it can find this item. As you can see, I haven't specified paths, I haven't specified data sources, I haven't specified anything; I just specified the page and the text that I want. It's configured somewhere, but for this case you don't care how it is configured.

While it's working, let's check the Anthropic status. There is another message: API, Claude AI, and Console services impacted. The bad one for us is the API. Let's hope for the best. And what is our Cursor doing? It's getting items; it still hasn't found the right one. By the way, Anthropic is most probably used here too, and that could also be a reason for some degradation. Here you have the ability to switch the agent from Auto to the model you want, and it probably switched to some worse model; that's why it's taking so long. It should be just a few prompts, but instead it takes a lot of time. Okay, at least, what does it write? It writes "hero banner", and it again started to get children. Well, that's the non-deterministic logic of large language models, so let's just try again in a new chat. Something went wrong, so let's stop this one and run it again. It started to use items, so let me say: please use Sitecore tools. Now it should start using the Sitecore tools, because we don't want it to use the CLI. And now, finally, it should do it. It ran edit item, so let's check: you can see that here we have the text "let's build your next project", and here you can see that the text was changed.
This demo wasn't ideal, but at least it finally changed the text. Now let's try the Anthropic demo. Anthropic still has problems, but at least the status is orange, not red. So, what is the interface where everyone works? It's Jira.

That's why we decided to make AI agents able to work in Jira. In this case, to show you XM Cloud, I will use an XM Cloud instance with basically the same three websites, Basic, Financial, and Services, and I have a website running on Vercel where we will be able to see our changes. Let's find the Financial website and choose some page. Retirement Planning? For AI it would be too scary to let it work on that page, so let's select something else, for example Personal Borrowing, and let's translate this page from English to Spanish. As you can see, I have versions for English, French, and Japanese, but not for Spanish.

Let me copy the item path and create the ticket. Let me name it "Borrowing Spanish", and I need to add a description: translate the Borrowing page of the Financial Sitecore website from English to Spanish. Let's also specify what kind of Spanish, because there is Mexican Spanish and other variants. And let's specify: translate the page itself, with the path to this page, and translate the page data sources. If we do not specify the path to the page, most probably it will still work, but sometimes it finds the page and sometimes it doesn't, so it's better to write your ticket properly, with the path to the item. We will not specify all the data sources, though; we will leave it to the large language model to iterate through them and update all of them. Let me save this task and assign it to the AI editor.

Who is this mysterious AI editor? It's our agent running in the background, and it's powered by n8n, an automation tool. Actually, the AI agent doesn't have to run in n8n, but just look at it: it's great for demos, you can visualize your workflow and show it. Let's cross our fingers (the API still doesn't work); we will start this workflow, but most probably it will fail because of the external services. So I started the workflow, and I will go through the nodes one by one. At the beginning, this workflow is executed every five minutes. We take all tickets that are assigned to the AI editor and are in the AI column, which is what I just did: I created a task for the AI editor. Then we loop over the items, and before taking an item into work, we double-check that it's still assigned to the AI editor and still in the AI column, because if you have many items, it can take five minutes or more before the loop gets to a particular issue. Then we move the issue to the AI Blocked state. Let's check our board: we should see the Borrowing Spanish task in the AI Blocked state, which for us means that the work on this ticket is in progress.

Now there is the interesting part, and hopefully it's working; I hope it will finish, because the Anthropic services are down, but let me describe how it works. We have an AI agent with a prompt, and there is actually nothing special in this prompt. It just says that you are an AI agent running in the background, so you can't ask to clarify every step: please do everything at once, or ask all your clarifying questions at once; and once the ticket is done, please move it to the AI QA state and write a comment on it. The AI agent also gets the issue ID, issue name, and description. Now, what else is present here?
It has a large language model; in our case it's an Anthropic Claude Sonnet model, Claude Sonnet 3.7. It's not the most advanced one (the most advanced are Opus 4 and Sonnet 4), but I wanted to show that this can work even with models that aren't top of the line. Here you could specify ChatGPT instead, or your local model if you have one. Another node is simple memory. In our case it's just memory for the ticket, but it can have more useful usages: for example, your AI agent can remember the tickets it worked on a month ago, a week ago, or a day ago, and this simple memory can even be shared between agents. And we have two Model Context Protocol servers. One is for Sitecore, with the tools that are available for this server; here we selected tools that are useful for a content editor, and if we had an agent that is a developer, we would select different tools. The other Model Context Protocol server is the Atlassian server: it allows us to move an issue from one column to another, or add comments to issues.

It's still working, but we can see that it has already made some requests to the Sitecore tools, so we can check the Borrowing item and see whether something was translated. And we can see that there is now a Spanish version, and I think there will even be Spanish versions for some data sources. Let me copy the path for Borrowing and switch to this page to see the Spanish content. Yes, we can see that the page is in Spanish, and that's how it should work. Now we can see that it also executed an Atlassian tool, just once, so it probably either moved the ticket or wrote a comment. As the ticket is still in place, there should be a comment, and here we have the comment it wrote 42 seconds ago: that the Borrowing page and all data sources were translated from English to Spanish, with the list of content that was translated. The Markdown isn't ideal, but it's still much better than what the major part of humans write. Oh, it called the tool a second time, so let me refresh the page, and now you can see that it moved this ticket to the AI QA state.

Now it's time for our next AI agent, our QA engineer. It's basically the same: let me execute the workflow, and everything works the same way. We have an even cheaper model, Sonnet 3.5, and we have even fewer Sitecore tools for this case, only tools to read content, because that is all QA needs. And as you can see, with the cheaper model, which probably isn't experiencing problems right now, it finished quickly. We can check our board: our ticket was moved to the AI Done state, and our AI QA wrote a comment that everything was translated. Now it's time for the human in the loop: time to assign this ticket to a human to either check everything on the page and confirm it was translated, or, if you trust your new team members, move it straight to Done. And if the task failed for some reason, the AI QA will move it to the AI Blocked column. I prepared a sample where I intentionally broke the translation, and the AI QA was able to find the problem. That's how it should work, and it doesn't have to be a translation task. It can be any task: you can create a page about blockchain or any other topic, you can write more content for a page, you can rewrite content. It can be anything related to content, but it's not limited to content only.
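Stepping back to the very first node of that workflow, the five-minute poll for tickets assigned to the AI editor: outside of n8n, that step essentially boils down to a JQL query against the Jira REST API. Here is a rough TypeScript sketch of it; the base URL, credentials, assignee name, and status value are assumptions for illustration, not the workflow's actual configuration.

```typescript
// Poll Jira for the tickets the background agent should pick up.
// Uses the standard Jira Cloud REST API search endpoint; the JQL values
// ("AI Editor", "AI") mirror the board columns described in the talk.
const JIRA_BASE = "https://your-site.atlassian.net"; // placeholder
const AUTH = Buffer.from("user@example.com:API_TOKEN").toString("base64"); // placeholder

async function ticketsForAiEditor() {
  const jql = encodeURIComponent('assignee = "AI Editor" AND status = "AI"');
  const res = await fetch(`${JIRA_BASE}/rest/api/3/search?jql=${jql}`, {
    headers: { Authorization: `Basic ${AUTH}`, Accept: "application/json" },
  });
  if (!res.ok) throw new Error(`Jira search failed: ${res.status}`);
  const data = await res.json();
  return data.issues ?? []; // each issue is then re-checked before being worked on
}
```

In the n8n workflow this is just a Jira node plus a schedule trigger; the sketch only shows what that node does under the hood.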
You can also equip your AI agents with additional Model Context Protocol servers that allow them to develop some code. Why not? So let me move back to the slides and to the conclusions. If you had told me two years ago that generative AI large language models would be able to add renderings to a page, I would not have believed you. But here we are: AI is already here, and you probably guessed that it even helped me prepare some parts of this presentation. Model Context Protocol made a breakthrough in AI. Large language models are no longer only machines you can talk to; now these machines are capable of performing actions, and they are capable of performing actions with Sitecore if you give them access to the Sitecore MCP server. I still, from time to time, get that wow moment when I see what it is capable of doing. So give it a try; it's very easy, especially if you are a GitHub Copilot with Visual Studio Code user or a Cursor user. Find something that you can optimize and delegate to an AI agent. And feel free to contact me about AI and Sitecore, about Model Context Protocol, and about AI automation in general, not necessarily only for Sitecore. Thanks to everyone who helped me with the Sitecore MCP server, to everyone who tried it, who left feedback, who wrote articles, who wrote some code, and special thanks to my colleagues Vadim and Stas. And the final slide is links: you can scan this artistic QR code, hopefully it is scannable on any device. Now I'm ready for questions.

It took longer because we had the Anthropic outage, and I'm not sure if we still have time for questions. We do. Thank you, Anton, for going through all of this. It was interesting, and thankfully it worked. I really liked the Jira integration part of it. So this is part of the question I wanted to ask, but I'll ask anyway. I know you've done quite a bit of work; what do you think is missing, and what is the next set of features? And I'll add one more question on top of that: do you know if Sitecore is working on their own, official Sitecore MCP server?

I don't know exactly. I just heard some rumors from different people that they are working on an official Sitecore MCP server. What it will be and when it will be, I have no idea. And you don't necessarily have to wait for it; you can use my MCP server. Let me... I see that the slide was cut off a little bit, so let me move my browser up a little to make sure the QR code is scannable. About my plans for the Sitecore MCP server: we have a lot of tools, and we need to try all of them. Some of the tools will be removed. For example, we have a tool for the Item Service to run stored queries, and it's hard even for a developer to explain that you need to create stored queries somewhere and only then execute them; it's just impossible to explain to large language models. That kind of tool will be removed. Also, we will move the most useful tools into a basic package, so you don't need to select which tools you want to use. For example, in Claude Desktop you add Sitecore and you need to select the tools you want, and that's not a cool thing, because you need to click a hundred times for tools you don't want. There will be a basic version that you can just use, and that's it.
Another plan is to experiment with GraphQL. There is big potential in GraphQL, and probably I will be able to find a way to split the schema myself, without Sitecore's help. Does that answer the question?

Yes, yes, it does. So the next question is: is there a way to know the items affected by a prompt? How can you be sure it won't affect more items, or the wrong item, by mistake?

In our case, for what I showed on the XM Cloud portal, you can see which user an item was updated by. So you can control it: you can create users with limited rights and assign roles to them so that they only have the access they should have. And in n8n you have access to everything that was executed. For example, if we look at the executions and open the last one, we can check all the executions and see that the tools were called 14 times, and we can see all the data that was sent to the Sitecore Model Context Protocol server and everything that was successfully changed. It's a similar thing for Cursor: when I ran it, you could see that it called get item by path, then another get, then edit item, and that it updated the item with that data. In Cursor, in my configuration, everything is allowed because that's local, that's my sandbox, and I allow everything. But in the default Cursor configuration it will ask you about each tool call every time, so even if it just wants to get items, you can allow or disallow it. For example, you can allow Cursor to use all read tools without confirmation and configure it to ask for confirmation before running edit tools. That's how you can make sure that only the required items were affected. And at this stage you should probably start with local, QA, and dev instances, not with production. Once you are confident in your new AI colleagues, you can start to use them in production. That's the answer.

Makes sense. And one question from me, out of curiosity, because I'm actually struggling with this. For the XM Cloud part, Anton, how does the MCP server handle the Management API key, or the API calls for things like publishing or using the Management API? Does it handle all of that by itself?

So, if you're talking about the GraphQL Management API keys, we don't use them. That's actually a great idea, though. Initially we concentrated on tools that work on both XM/XP and XM Cloud; that's why there is only the Item Service API and GraphQL (not the GraphQL Management API, just the GraphQL Edge schema) and the PowerShell commands. All of these APIs are configured just once, and you do not need to manage keys in this case. But it is a great idea, and I will think about adding it in some future version. Awesome.

Yeah, I have a need for an external service in Azure to be able to publish an item using the Management API key for XM Cloud. But the thing is, the key needs to be generated pretty much on demand each time, so I was just curious. Maybe I can use the MCP server and a PowerShell command to do that for now; I will check. Anyway, this was really, really useful. Thank you so much, and thank you for sharing the QR code. This presentation will be available on demand on both YouTube and LinkedIn shortly after this finishes. So thanks again, Anton, for all your effort putting this together. Thankfully, Anthropic worked at the end.

Yeah, thank you. Thank you for giving me the chance to present on your channel, on your webinar. I was happy to present it here. Unfortunately, the topic wasn't selected for Symposium, but this is also a decent place for it.

Yeah, we had quite a few people register, so I'm sure this will be watched again and again. But thank you so much for your time. Yeah, thank you for organizing this session. Bye.


Konabos Inc.

Yay to Konabosing in style! Content tagged with the Konabos handle is produced by two or more Konabos team members.

