Konabos

Choosing an AI Development Framework: A Practical Guide for Individuals, Teams, and Organizations

Akshay Sura - Partner

30 Apr 2026


Over the last six months, I have been getting the same question from clients, friends, and our own team:

"Should we be using BMAD, Spec Kit, or one of these other AI development frameworks? Which one is right for us?"

The short version of my answer is this: pick one framework, use it for 90 days, measure, then adjust. The teams getting real results from AI development right now are not the ones running three frameworks in parallel. They are the ones with the discipline to stick with one long enough to build muscle memory.

This post walks through the four frameworks worth knowing as of April 2026, what each one actually does, and how to pick one without falling into tool sprawl. I am not affiliated with any of these projects. Everything is sourced from primary documentation and GitHub repositories. Links are included so you can verify and dig deeper.

Why these frameworks exist in the first place

AI coding assistants like Claude Code, GitHub Copilot, and Cursor are extraordinarily good at writing code. They are not as good at deciding what code should exist.

GitHub framed the problem this way in their Spec Kit launch announcement: "We treat coding agents like search engines when we should be treating them more like literal-minded pair programmers. They excel at pattern recognition but still need unambiguous instructions." [1]

That gap between intent and implementation is what every framework on this list is trying to close. The mechanism varies. Some use specifications. Some use multi-agent personas. Some are minimalist conventions you can adopt in an afternoon. They all share a common premise: structure the inputs to the AI and the outputs get more predictable.

The frameworks worth knowing

I am focusing on the four that have meaningful adoption and active development. There are dozens of smaller projects, but these are the ones that show up in real conversations.

GitHub Spec Kit

Repository: https://github.com/github/spec-kit
License: MIT
Stars: 88,000+ (as of April 2026)
Releases: 129, with the latest (0.7.0) shipped April 14, 2026

Spec Kit is GitHub's open-source toolkit for spec-driven development. It launched on September 2, 2025 as an experiment to formalize a workflow that GitHub had been observing internally. [1]

The methodology is built around four phases delivered as slash commands in your AI coding agent: /speckit.specify, /speckit.plan, /speckit.tasks, and /speckit.implement. There is also a /speckit.constitution command for setting non-negotiable project principles that all other phases must respect. [2]
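Inside a supported agent session, a full Spec Kit cycle runs those commands in sequence. The command names come from the Spec Kit documentation; the feature description and stack note below are made-up placeholders:

```
/speckit.constitution                                  # one-time: set non-negotiable project principles
/speckit.specify Build a CSV import screen with validation   # the what and the why
/speckit.plan Use the existing React front end and REST API  # the how
/speckit.tasks                                         # break the plan into ordered tasks
/speckit.implement                                     # execute the tasks against the codebase
```

Each phase writes its artifact (spec, plan, task list) to the repository before the next phase runs, which is where the audit trail comes from.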

Spec Kit is agent-agnostic. The supported agent list includes Claude Code, GitHub Copilot, Cursor, Gemini CLI, Codex CLI, Windsurf, Kiro CLI, Roo Code, Qwen Code, and more than 20 others. There is also a generic mode for any agent that supports custom slash commands or skills.

Worth noting for Windows teams: Spec Kit ships PowerShell scripts as a first-class option. You pass --script ps during init and the entire workflow uses PowerShell instead of bash.
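For reference, a typical one-time setup looks like the following. These commands follow the Spec Kit README at the time of writing; the project name and agent choice are placeholders, so verify the flags against the current docs before running.

```shell
# Run Spec Kit's init directly from the repo (requires uv)
uvx --from git+https://github.com/github/spec-kit.git specify init my-project --ai claude

# Windows teams: generate PowerShell automation scripts instead of bash
uvx --from git+https://github.com/github/spec-kit.git specify init my-project --ai copilot --script ps
```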

Pros:

  • Backed by GitHub, which matters for enterprise credibility
  • Works with the AI coding tools your team probably already uses
  • Phase-gated workflow produces audit-trail artifacts (specs, plans, tasks committed alongside code)
  • Active community with 60+ third-party extensions for things like Jira sync, Azure DevOps integration, V-Model traceability, and post-implementation review

Cons:

  • Phase gates can feel rigid for small bug fixes or quick prototypes
  • The branch-per-spec model creates more markdown artifacts than some teams want to maintain
  • Still labeled experimental by GitHub itself

BMAD Method

Repository: https://github.com/bmad-code-org/BMAD-METHOD
License: MIT
Stars: 42,100 (as of April 2026)
Releases: 27, with the latest (v6.2.1) shipped March 24, 2026

BMAD stands for Breakthrough Method for Agile AI-Driven Development. It is structured around the metaphor of a virtual development team: you get specialized agents for Product Manager, Architect, Developer, UX, Scrum Master, and other roles, and they collaborate on your project through structured workflows. [3]

The repository describes BMAD as having "scale-adaptive intelligence that adjusts from bug fixes to enterprise systems," with 12+ domain expert agents and a "Party Mode" feature where multiple personas can collaborate in a single session. 

BMAD is installed via npm (npx bmad-method install), which means it works on any platform that runs Node.js v20 or higher. It integrates with Claude Code, Cursor, Windsurf, and other AI IDEs. 
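The install is a single npx command, per the BMAD README; the version check is just a sanity step I have added:

```shell
# Sanity check: BMAD requires Node.js v20 or higher
node --version

# Install BMAD into the current project (interactive installer)
npx bmad-method install
```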

The framework has expanded into a module ecosystem. Beyond the core BMM module, there are official modules for Test Architecture, Game Development, Creative Intelligence, and a BMad Builder for creating your own custom agents and workflows. 

Pros:

  • Deepest planning model among these frameworks; the Analyst, PM, and Architect personas produce notably detailed PRDs
  • Module system lets you extend into specific domains
  • Active development with frequent releases and a sizable Discord community
  • Works across a range of AI tools, not locked to any single vendor

Cons:

  • Heavier learning curve than Spec Kit or OpenSpec; more agents and concepts to understand
  • The persona model can feel like overhead for simple work
  • Smaller mindshare than Spec Kit (42,100 vs 88,000 stars as of April 2026), which matters for hiring and ecosystem effects

OpenSpec

Repository: https://github.com/Fission-AI/OpenSpec
License: Open source
Maintainer: Fission AI

OpenSpec takes a deliberately lighter approach. The project description is "Spec-driven development (SDD) for AI coding assistants" and the philosophy is explicit: "fluid not rigid, iterative not waterfall, easy not complex, brownfield-first." [4]

The defining technical concept is the delta spec. Instead of restating an entire system specification every time you make a change, OpenSpec tracks only what is being added, modified, or removed. That makes it particularly suited for working on existing codebases rather than greenfield projects.

OpenSpec compares itself directly against the alternatives in its own documentation: "vs. Spec Kit (GitHub) — Thorough but heavyweight. Rigid phase gates, lots of Markdown, Python setup. OpenSpec is lighter and lets you iterate freely. vs. Kiro (AWS) — Powerful but you're locked into their IDE and limited to Claude models."

It supports 25+ AI tools including Claude, Cursor, GitHub Copilot, Codex, Gemini, Kiro, Windsurf, and others. Installation requires Node.js 20.19.0 or higher. 
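As a sketch of getting started: the package name and `init` subcommand below reflect my reading of the OpenSpec README and may have changed, so verify them against the repository first.

```shell
# Requires Node.js 20.19.0 or higher (per the OpenSpec docs)
npm install -g @fission-ai/openspec@latest

# Initialize OpenSpec in an existing (brownfield) codebase
openspec init
```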

Pros:

  • Fast to adopt; minimal setup and ceremony
  • Delta spec model is genuinely good for brownfield work
  • Action-based workflow rather than rigid phase gates
  • Wide tool support

Cons:

  • Smaller community than Spec Kit or BMAD
  • Less structure means more reliance on developer discipline
  • Newer and less battle-tested in enterprise contexts
  • The lightweight approach may not satisfy clients who expect heavy artifacts

AWS Kiro

Website: https://kiro.dev
Type: Proprietary IDE (not just a layer on top of existing tools)

Kiro is AWS's entry into this space. It launched in preview in July 2025 and reached general availability in November 2025. [5]

Unlike the other three frameworks, Kiro is a full IDE, not a methodology you bolt onto your existing workflow. It uses a three-phase model (Requirements, Design, Tasks) and supports MCP integration, steering files for project conventions, and property-based testing tied to specifications.

The technical approach is sophisticated. Property-based testing extracts behavioral expectations from natural-language specifications and generates large-sample test cases to validate that implementations meet stated intent. AWS calls this "spec correctness."

There is a real consideration for organizations under NDA or strict data agreements. AWS has documented opt-outs for telemetry and content collection, but the default behavior during preview included content potentially being used for service improvement. If you operate in regulated environments, verify the current data posture against your contractual obligations before adopting Kiro for client work. [6]

Pros:

  • Production IDE experience, not just markdown artifacts
  • Property-based testing tied to specs is genuinely novel
  • Strong fit for AWS-heavy stacks
  • Backed by AWS

Cons:

  • Vendor lock-in (you adopt the IDE, not just a methodology)
  • Data posture concerns require explicit review for regulated work
  • Less flexible than agent-agnostic alternatives like Spec Kit or OpenSpec

A simpler alternative worth mentioning

Before you adopt any framework, consider whether you actually need one.

Many developers and small teams get most of the value of spec-driven development from a single markdown file at the root of their repository: CLAUDE.md, AGENTS.md, .cursorrules, or copilot-instructions.md depending on the tool. The AI agent reads it on every session and uses it as a standing context. [7]

This is not a framework. It is a convention. But for individuals or small teams, it often does the job. You write down your project conventions, your stack preferences, your "don't do these things" rules, and your AI assistant references them automatically. No CLI to install. No phase gates. No markdown artifacts beyond the one file.
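To make this concrete, here is a made-up sketch of what such a file might contain; the stack and rules are invented examples, not recommendations:

```markdown
# Project conventions (read by the AI agent at the start of every session)

## Stack
- .NET 8 API, React 18 front end, SQL Server

## Rules
- Prefer existing utilities in src/lib before adding dependencies
- All new endpoints require integration tests
- Never commit secrets; configuration comes from environment variables

## Don'ts
- Do not modify the database schema without an approved migration
```

The whole "framework" is one file your team edits in code review like anything else.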

If you are reading this and feeling overwhelmed, start here. You can graduate to a real framework later if and when you outgrow it.

What I would actually do

Before the matrix of options, here is the call I would make if you asked me at a dinner table.

For most teams, pick Spec Kit. It has the largest community, the most active development, the broadest tool support, and the lowest friction for teams already on Claude Code, Copilot, or Cursor. The phase-gated workflow is rigid in a way that will occasionally annoy experienced developers, but that rigidity is exactly what most teams need when they are trying to get AI-generated work under control. The PowerShell support also matters more than people admit if any of your developers run Windows.

Pick BMAD only if you are doing genuine greenfield product work where the planning depth pays off. The Analyst, PM, and Architect persona collaboration produces better PRDs than Spec Kit. But if your team is doing client implementations, brownfield modernization, or work where the architectural patterns are already known, BMAD's depth becomes overhead.

Pick OpenSpec if you are an individual or a very small team that values speed over ceremony. The delta spec model is genuinely good for brownfield work. Just understand that less structure means more reliance on developer discipline.

Avoid Kiro for any work under NDA until you have personally read the data protection documentation and confirmed it meets your contractual obligations. The IDE itself is solid. The data posture is the question, not the technology.

Do not run two frameworks at once. This is the most common mistake I see. Teams that adopt Spec Kit for client work and BMAD for internal products end up with two sets of conventions, two training paths, and a constant low-grade debate about which one to use for any new project. Pick one. The marginal value of having a "better fit" framework for some subset of work is almost never worth the cost of running parallel methodologies.

If you are a single developer reading this on a Saturday afternoon and want to try something today: install Spec Kit, run specify init on a small project, and walk through the four phases on something contained. You will know within an afternoon whether the workflow fits how you think.

How to choose for your specific context

The decision still has nuance based on your specific situation. Here is the breakdown by team size and shape.

If you are an individual developer

Start with CLAUDE.md or AGENTS.md. Add OpenSpec if you want a lightweight workflow with delta tracking. Skip the heavier frameworks unless you are specifically working on a large, multi-month personal project where planning depth pays off.

If you are a small team (3-10 people)

Spec Kit is the safest first pick. It works with whatever AI tools your team already uses, the phase-gated workflow scales to multiple developers without creating chaos, and the GitHub backing means it will likely still be around in two years. OpenSpec is a strong second choice if your team values lighter ceremony.

If you are a product team building greenfield software

BMAD has the deepest planning model. The PM, Architect, and Developer persona collaboration genuinely produces better PRDs and architectural decisions on net-new builds. The tradeoff is a steeper learning curve and more artifacts to maintain, but for greenfield work where the architecture is not predetermined, that overhead pays off.

If you are an agency or consultancy delivering client work

Spec Kit. The four-phase workflow maps cleanly to how most consultancies already structure discovery-first engagements, and the artifacts produce a defensible audit trail when clients ask "why did this take three months." Pair it with a custom constitution.md template that encodes your firm's standards, technology preferences, and quality gates. That constitution becomes IP that travels with every engagement.

If you are an enterprise with regulated work

Spec Kit with extension packs for compliance and governance. The community has built extensions for V-Model paired generation of dev specs and test specs with regulatory traceability, security review gates, plan review gates that block task generation until specs are merged, and CI/CD compliance enforcement. Avoid Kiro until you have explicitly verified the data posture meets your obligations.

If you are a regulated industry team (healthcare, finance, defense)

Be cautious with all of these. Verify data handling for any AI-assisted tooling against your specific regulatory requirements before rollout. The frameworks themselves are mostly metadata layers, but the AI agents underneath them will see your code. That is a procurement and legal review, not a framework choice.

Common mistakes I see

A few patterns worth flagging.

Adopting multiple frameworks simultaneously. I have seen teams try to run Spec Kit for client work, BMAD for internal products, and OpenSpec for experiments. This sounds clever but creates real friction. Every framework requires training, documentation, and consistent application. Three frameworks means three sources of confusion. Pick one and stick with it for at least 90 days before evaluating.

Treating frameworks as a substitute for code review. None of these tools validate that the implementation actually matches the spec. Human review gates remain essential, especially for AI-generated code. The framework organizes the work; it does not guarantee quality.

Skipping the constitution. Both Spec Kit and BMAD support a project-level rules document that sets non-negotiable principles. Most teams skip this in their first project because they want to move fast. They then spend the next three months relitigating decisions that should have been settled upfront. Write the constitution first. It pays off immediately.

Productizing your methodology before proving it internally. It is tempting to package your framework choice as a sellable methodology to clients. Resist this for at least six months. Methodologies that have not survived contact with multiple real projects do not travel well.

Getting started

Whichever framework you choose, the rollout pattern is similar.

  1. Pick one contained pilot project. Not your most important client work, not your throwaway hackathon. Something in the middle where the stakes are real but recoverable.
  2. Install the framework on one developer's machine first. Validate it works in your environment before rolling out to the team.
  3. Draft a v0 constitution or rules document. Do not perfect it. Get something usable in front of the team within a week.
  4. Run one full workflow cycle end-to-end. Document the gotchas you hit. Those become the team onboarding doc.
  5. Run for 60 to 90 days. Measure planning time, rework rate, PR review quality, and team adoption. Decide whether to roll out further or pivot.

The first project will feel slower than usual. That is expected. By the third project, the constitution template is reusable, the team has muscle memory, and the artifacts start carrying their weight in code review and client conversations.

The bottom line

If you skip everything else in this post, this is the part that matters.

The AI framework space is moving fast. The tools will change. The names will change. The GitHub star counts will definitely change.

What will not change is this:

The teams winning right now are not the ones using the most advanced framework. They are the ones using one consistently. They define problems clearly. They apply structure across projects. And they take ownership of what AI produces instead of assuming the AI is correct.

Pick a framework. Use it for 90 days. Measure planning time, rework rate, PR review quality, and team adoption. Then adjust.

That discipline matters more than which tool you pick.

References

[1]: Delimarsky, Den. "Spec-driven development with AI: Get started with a new open source toolkit." The GitHub Blog, September 2, 2025. https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/

[2]: GitHub. "github/spec-kit: Toolkit to help you get started with Spec-Driven Development." GitHub repository, accessed April 30, 2026. https://github.com/github/spec-kit

[3]: BMad Code Org. "bmad-code-org/BMAD-METHOD: Breakthrough Method for Agile AI Driven Development." GitHub repository, accessed April 30, 2026. https://github.com/bmad-code-org/BMAD-METHOD

[4]: Fission AI. "Fission-AI/OpenSpec: Spec-driven development (SDD) for AI coding assistants." GitHub repository, accessed April 30, 2026. https://github.com/Fission-AI/OpenSpec

[5]: Kiro. "Kiro: Agentic AI development from prototype to production." Product website, accessed April 30, 2026. https://kiro.dev

[6]: Kiro Documentation. "Data protection." Kiro Docs, accessed April 30, 2026. https://kiro.dev/docs/privacy-and-security/data-protection/

[7]: Böckeler, Birgitta. "Understanding Spec-Driven-Development: Kiro, spec-kit, and Tessl." martinfowler.com. https://martinfowler.com/articles/exploring-gen-ai/sdd-3-tools.html


Akshay Sura


Akshay is a ten-time Sitecore MVP and a two-time Kontent.ai MVP. In addition to his work as a solution architect, Akshay is also one of the founders of SUGCON North America 2015, SUGCON India 2018 & 2019, Unofficial Sitecore Training, and Sitecore Slack.

Akshay founded and continues to run the Sitecore Hackathon. As one of the founding partners of Konabos Consulting, Akshay will continue to work with clients, leading projects and mentoring their existing teams.

