
a compiled DSL that enforces workflow contracts
on tool-using AI agents — at runtime
the problem
AI agents with tool access can do anything — and that's the problem. A prompt says "only search then summarize", but nothing enforces it. The agent can call any tool, in any order, at any time. You only find out it went wrong after the damage is done.
"Search the web for the topic,
then summarize the results.
Do NOT send any emails."# searched web ✓
# summarized ✓
send_email(to="boss@work.com")
# ...oopsnatural language directives are suggestions — not guarantees
the solution
Write a contract in complier's DSL. It compiles into a runtime graph that sits alongside your agent. Every tool call is checked against the graph — allowed calls pass through, disallowed calls are blocked with structured remediation.
the language
A purpose-built DSL for defining agent workflows. Supports tool calls, branching, loops, parallel execution, contract checks, and reusable guarantees — all compiled into a directed runtime graph.
guarantee safe [no_harmful_content:halt]
workflow "research" @always safe
| @human "What topic?"
| search_web
| summarize style=([relevant:3] && [concise:halt])
| @branch
-when "technical"
| @llm "Write detailed analysis"
-else
| @llm "Write brief summary"
| @call send_report{human:policy} human-approved
#{learned:policy} memory-backed
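The compiled runtime graph can be pictured as a small state machine over tool names: each node allows a specific next call, and anything else is rejected. A minimal, hypothetical sketch of that idea (this is an illustration, not complier's actual internals; the class and method names here are invented):

```python
# Hypothetical sketch of graph-based enforcement, NOT complier's real code.
# A linear workflow compiles to an ordered list of tool names; each node in
# the graph permits exactly one next tool, and any other call is rejected.

class WorkflowGraph:
    def __init__(self, steps):
        self.steps = steps      # ordered tool names, e.g. ["search_web", "summarize"]
        self.position = 0       # current node in the graph

    def check(self, tool_name):
        """Return True and advance if tool_name is allowed at the current node."""
        if self.position < len(self.steps) and self.steps[self.position] == tool_name:
            self.position += 1
            return True
        return False            # out-of-order call: blocked, graph does not advance

graph = WorkflowGraph(["search_web", "summarize"])
graph.check("send_email")   # → False: not permitted at this node
graph.check("search_web")   # → True: matches, graph advances
```

Real contracts also need branches, loops, and parallel paths, which is why the compiled form is a directed graph rather than a flat list, but the enforcement step is the same membership check at the current node.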
how it works
Wrap your existing tools. The session checks every call against the compiled graph and either lets it through or returns remediation info.
from complier import Contract, wrap_function
# compile the contract
contract = Contract.from_file("workflow.cpl")
session = contract.create_session()
# wrap your tools — enforcement is transparent
safe_search = wrap_function(session, search_web)
safe_summarize = wrap_function(session, summarize)
# if the agent calls a tool out of order:
# → BlockedToolResponse with remediation info
# if the call is allowed:
# → executes normally, session advances
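The agent loop then needs one branch for the blocked case. A hedged sketch, assuming `BlockedToolResponse` carries the offending tool name and a remediation message (the stub class below stands in for complier's actual response type, whose fields may differ):

```python
# Agent-side handling of a blocked call. BlockedToolResponse here is a
# stand-in stub; complier's real response object may have a different shape.

from dataclasses import dataclass

@dataclass
class BlockedToolResponse:      # assumed fields, for illustration only
    tool: str
    remediation: str            # e.g. "expected search_web next"

def run_tool(wrapped_tool, *args, **kwargs):
    """Call a wrapped tool; turn a block into feedback the agent can act on."""
    result = wrapped_tool(*args, **kwargs)
    if isinstance(result, BlockedToolResponse):
        # Surface the violation as text instead of raising, so the agent can
        # read the remediation and self-correct on its next step.
        return f"Tool '{result.tool}' blocked: {result.remediation}"
    return result
```

Returning the remediation as an ordinary tool result keeps the agent loop unchanged: the model sees why the call was refused and can retry in the allowed order.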