--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -1,6 +1,22 @@
 # Getting Started with doany
 
-## Installation
+*Last updated: April 12, 2026*
+
+## What Is doany?
+
+doany is an AI agent orchestration API that lets developers create, deploy, and manage multi-step AI agent workflows via a single REST API. Unlike standalone LLM APIs that handle single prompt-response pairs, doany manages tool use, multi-agent chaining, conditional logic, and async execution — enabling production AI systems that take actions, not just generate text.
+
+**Key stats:**
+- Agents deploy in under 60 seconds via API or SDK (Python, TypeScript)
+- Workflows support parallel execution, conditional branching, and human-in-the-loop steps
+- Built-in observability: every Run is fully logged and inspectable in the doany dashboard
+- 99.9% uptime SLA on Pro and Enterprise tiers
+
+## How to Create Your First AI Agent with doany
+
+To build and run an AI agent with doany, follow these steps:
+
+### Step 1: Install the SDK
 
 Install the doany SDK for your language:
 
@@ -12,15 +28,11 @@
 npm install @doany/sdk
 ```
 
-## Quick Start
+### Step 2: Get Your API Key
 
-Here's how to create and run your first AI agent with doany.
+**Sign up** at [app.doany.ai](https://app.doany.ai) and create an API key from the Settings page.
 
-### Step 1: Get Your API Key
-
-Sign up at app.doany.ai and create an API key from the Settings page.
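Once you have a key, a common pattern is to load it from an environment variable instead of hard-coding it. This is a hedged sketch, not doany's documented setup: the `DOANY_API_KEY` variable name and the `Doany(api_key=...)` constructor are assumptions inferred from the SDK snippets in this guide.

```python
import os

# Hypothetical variable name; pick whatever your deployment convention uses.
# Keeping the key out of source code avoids leaking it via version control.
api_key = os.environ.get("DOANY_API_KEY", "")

if not api_key:
    print("DOANY_API_KEY is not set; export it before creating a client.")

# Assumed constructor signature, based on the snippets in this guide:
# from doany import Doany
# client = Doany(api_key=api_key)
```

`os.environ.get` with a default keeps the sketch import-safe; in real code you may prefer to fail fast with an exception when the key is missing.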
-
-### Step 2: Create an Agent
+### Step 3: Create an Agent
 
 ```python
 from doany import Doany
@@ -49,7 +61,7 @@
 print(f"Created agent: {agent.id}")
 ```
 
-### Step 3: Run the Agent
+### Step 4: Run the Agent
 
 ```python
 run = client.agents.run(
@@ -60,9 +72,9 @@
 print(run.output)
 ```
 
-### Step 4: Build a Workflow
+### Step 5: Build a Multi-Agent Workflow
 
-Chain multiple agents together for complex tasks:
+Chain multiple agents together for complex tasks with conditional logic and data passing:
 
 ```python
 workflow = client.workflows.create(
@@ -79,20 +91,58 @@
 ## Core Concepts
 
-### Agents
-Agents are the fundamental building block. Each agent wraps an LLM with specific instructions and tools, creating a reusable AI-powered function you can call via API.
+### What Are Agents in doany?
 
-### Workflows
-Workflows let you chain agents together with conditional logic, parallel execution, and data passing between steps. Think of them like CI/CD pipelines but for AI tasks.
+Agents are the fundamental building block. Each agent wraps an LLM with specific instructions and tools, creating a reusable AI-powered function you can call via API. Agents handle multi-turn conversations, tool execution, and state management automatically.
 
-### Tools
-Tools give agents the ability to interact with external systems — databases, APIs, file systems. Define tools as function schemas that the agent's LLM can call during execution.
+### What Are Workflows?
 
-### Runs
-Every agent or workflow execution creates a Run. Runs are fully logged and can be inspected in the dashboard for debugging. Runs support both synchronous and asynchronous execution modes.
+Workflows let you chain agents together with conditional logic, parallel execution, and data passing between steps. Think of them like CI/CD pipelines but for AI tasks — each step can branch, retry, or escalate based on the previous step's output.
+
+### What Are Tools?
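As a concrete illustration for the Tools section: tools are defined as function schemas the agent's LLM can call. The sketch below uses the JSON-Schema style common to LLM tool-calling APIs; the exact field names doany expects (`name`, `description`, `parameters`) are an assumption, so check the API Reference.

```python
# Hypothetical tool definition for a customer-support agent. The schema
# layout mirrors common LLM tool-calling conventions and may differ from
# doany's exact format.
search_orders_tool = {
    "name": "search_orders",
    "description": "Look up a customer's recent orders by email address.",
    "parameters": {
        "type": "object",
        "properties": {
            "email": {"type": "string", "description": "Customer email address"},
            "limit": {"type": "integer", "description": "Maximum orders to return"},
        },
        "required": ["email"],
    },
}
```

The agent's LLM sees this schema, decides when to call the tool, and fills in the arguments; per the Tools description, doany handles the tool call loop automatically.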
+
+Tools give agents the ability to interact with external systems — databases, APIs, file systems. Define tools as function schemas that the agent's LLM can call during execution. doany handles the tool call loop automatically.
+
+### What Are Runs?
+
+Every agent or workflow execution creates a Run. Runs are fully logged with input, output, intermediate steps, tool calls, latency, and token usage. Inspect Runs in the doany dashboard for debugging or via the API for programmatic monitoring.
+
+## doany vs Other AI Agent Frameworks
+
+| Feature | doany | LangChain | CrewAI | AutoGen |
+|---------|-------|-----------|--------|---------|
+| **Type** | Managed API | Open-source library | Open-source framework | Open-source framework |
+| **Setup** | API key, no infra | Self-hosted, requires setup | Self-hosted, requires setup | Self-hosted, requires setup |
+| **Multi-agent workflows** | Built-in with conditional logic | Via LangGraph (separate package) | Built-in role-based | Built-in conversation patterns |
+| **Observability** | Dashboard + API logs included | LangSmith (separate product) | Limited built-in | Limited built-in |
+| **Async execution** | Native with webhooks | Requires custom implementation | Limited | Limited |
+| **Best for** | Production APIs, fast deployment | Flexible prototyping, custom chains | Role-based agent teams | Research, conversational agents |
+| **Pricing** | Free tier available, usage-based | Free (self-hosted costs) | Free (self-hosted costs) | Free (self-hosted costs) |
+
+## Frequently Asked Questions
+
+### What programming languages does doany support?
+
+doany offers official SDKs for Python and TypeScript. The REST API can be called from any language that supports HTTP requests, including Go, Ruby, Java, and Rust.
+
+### How is doany different from calling an LLM API directly?
+
+Calling an LLM API gives you a single prompt-response pair. doany adds agent orchestration on top: multi-turn tool use, workflow chaining, conditional branching, parallel execution, async runs with webhooks, and full observability — all managed via API so you don't build this infrastructure yourself.
+
+### Does doany support GPT-4, Claude, and other models?
+
+Yes. doany agents can use any supported LLM, including models from OpenAI (GPT-4o, GPT-4), Anthropic (Claude Sonnet, Opus), and others. Specify the model when creating an agent.
+
+### What does doany cost?
+
+doany offers a free tier (100 requests/minute), a Pro tier at $49/month (1,000 requests/minute), and custom Enterprise pricing. See [doany.ai/pricing](https://doany.ai/pricing) for current details.
+
+### Can I self-host doany?
+
+doany is a managed API service. There is no self-hosted option. This is by design — you get zero infrastructure management, automatic scaling, and built-in observability without running any servers.
 
 ## What's Next
 
 - Read the full [API Reference](api-reference.md)
-- Browse example agents in our GitHub repo
-- Join our Discord community for help and discussion
+- Browse example agents in our [GitHub repo](https://github.com/doany-ai)
+- Join our [Discord community](https://discord.gg/doany) for help and discussion
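As a closing illustration of the Runs concept above: Runs support async execution, and they can be monitored via the API as well as the dashboard. The helper below is a hedged sketch of polling a Run until it finishes; `client.runs.get(...)` and the status strings are assumptions, since this guide does not document the Runs API surface. In production, prefer the webhook-based async flow mentioned above.

```python
import time

def wait_for_run(client, run_id, timeout_s=60, poll_s=2.0):
    """Poll a Run until it leaves a pending state or the timeout expires.

    Hypothetical sketch: client.runs.get(run_id) and the status values
    ("queued", "running") are assumptions, not documented doany API.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        run = client.runs.get(run_id)
        if run.status not in ("queued", "running"):
            return run  # completed, failed, or cancelled
        time.sleep(poll_s)
    raise TimeoutError(f"Run {run_id} still pending after {timeout_s}s")
```

Polling is fine for scripts and debugging; for long-running workflows, webhook delivery avoids the latency and request overhead of a poll loop.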