The Covenant Foundation builds rules that help AI agents govern themselves. These rules are based on organizational patterns that have worked for thousands of years. Open to anyone building agents. No belief required. Only an interest in what works.
When multiple AI agents work together, they need ground rules. Who's in charge? How do they talk to each other? When should they stop and check in?
Religions, governments, and militaries solved these exact problems centuries ago, without any central controller. They developed constitutions, chains of command, formal letters between peers, mandatory rest days, and graduated responses to failure. The Covenant Framework borrows these patterns and applies them to AI. The biblical naming is a memory aid, not a belief system. These are engineering patterns, not theology.
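The text doesn't spell out how "graduated responses to failure" work in the framework, but the general pattern is an escalation ladder: retry transient failures, hand repeated failures to a supervisor, and halt on persistent ones. A minimal sketch (the thresholds and names below are illustrative, not Covenant's actual API):

```python
from enum import Enum

class Response(Enum):
    RETRY = "retry"          # transient failure: try the task again
    ESCALATE = "escalate"    # repeated failure: hand off to a supervising agent
    HALT = "halt"            # persistent failure: stop and wait for a human

def graduated_response(failure_count: int) -> Response:
    """Map consecutive failures to an escalating response, rather than
    retrying forever or aborting on the first error."""
    if failure_count <= 2:
        return Response.RETRY
    if failure_count <= 4:
        return Response.ESCALATE
    return Response.HALT
```

The point of the ladder is that no single failure is fatal and no failure loops silently: each repeat moves the problem one rung closer to human attention.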
Twenty-eight organizational problems mapped to governance solutions.
Agents following Covenant rules scored 25 percentage points higher than an ad-hoc prompted baseline running the same AI model.
Full findings →

| Agent | Model | Score |
|---|---|---|
| Codex CLI | GPT-5.5 | 82.0% |
| ForgeCode | GPT-5.4 | 81.8% |
| TongAgents | Gemini 3.1 Pro | 80.2% |
| Covenant Agent | Claude Opus 4.7 | 67.4% |
| Claude Code (vanilla) | Claude Opus 4.6 | 58.0% |
| Ad-hoc prompted baseline | Claude Opus 4.7 | 42.0% |
Your first message is not a complete spec. The system learns about you over time: what you keep, what you discard, what matters to you.
When you come back after days or weeks, the system picks up where you left off. It remembers what was learned and still knows you.
Most frameworks give agents one layer of instructions. Covenant has four: hard rules, rules of thumb, worked examples, and default personality traits.
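The four layers can be pictured as a single structured policy object. A sketch of that shape, assuming nothing about Covenant's real format (the field names follow the text; the example contents are invented):

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Illustrative four-layer instruction stack. Layer names mirror the
    text; everything else here is a made-up example."""
    hard_rules: list[str] = field(default_factory=list)       # inviolable constraints
    rules_of_thumb: list[str] = field(default_factory=list)   # heuristics, overridable
    worked_examples: list[str] = field(default_factory=list)  # concrete demonstrations
    default_traits: list[str] = field(default_factory=list)   # personality defaults

policy = AgentPolicy(
    hard_rules=["Never act outside the delegated scope."],
    rules_of_thumb=["Prefer asking over guessing when a request is ambiguous."],
    worked_examples=["A transcript of escalating a failed task to a supervisor."],
    default_traits=["Concise", "Deferential to human overrides"],
)
```

Separating the layers matters because they fail differently: a hard rule must never be overridden, a rule of thumb can yield to context, and examples and traits only shape defaults.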
The framework is open source. The research is public. The network is selective.