Works with any LLM. You bring your own API keys.
Three steps from idea to running API
Drag nodes onto the canvas and connect them. Each node does one thing: call an LLM, run a tool, check a condition, or wait for human approval.
Configure your own API keys for OpenAI, Gemini, Claude, Groq, Mistral, or any OpenAI-compatible endpoint including self-hosted Ollama.
Every agent is instantly available as a REST API. One POST request and your agent runs, tools fire, LLMs respond, results stream back.
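As a rough sketch of what that single POST might look like: the base URL, the `/api/agents/<id>/run` path, and the `{"input": ...}` payload shape below are illustrative assumptions, not the product's documented API — check your deployment's API reference for the real endpoint and schema.

```python
import json
import urllib.request

# Assumed local deployment URL -- substitute your own host.
BASE_URL = "http://localhost:3000"

def build_run_request(agent_id: str, message: str, api_key: str) -> urllib.request.Request:
    """Build a POST request that triggers one agent run (hypothetical endpoint)."""
    payload = json.dumps({"input": message}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/api/agents/{agent_id}/run",  # assumed path, not documented
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Your own key is sent per-request; auth scheme is assumed here.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Sending it is one call: urllib.request.urlopen(build_run_request(...))
```

Executing the returned request with `urllib.request.urlopen` fires the run; the response body would carry the agent's streamed or final result.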
Every building block you need to model any AI workflow
Built for engineers who want to ship AI fast
Have a question, a feature request, or a bug to report?
We'd love to hear from you.