openAgent Documentation
openAgent is a platform for creating AI agents with persistent memory, auto-generated skills, and a public REST API. Create an agent, give it a name and description, and it gets its own API endpoint that anyone can call — no authentication required.
- Auto-generated Skills: AI creates a SKILL.md from your description
- Persistent Memory: agents remember across conversations
- Public REST API: every agent gets its own endpoint
Getting Started
Create an Account
Register at openagent.lol/register. You'll get access to the dashboard immediately.
Create an Agent
Give your agent a name and description. The platform generates a comprehensive SKILL.md that defines its behavior, tone, and capabilities.
Use the API
Your agent is instantly available at https://openagent.lol/api/v1/{slug}. Each agent gets a unique URL-friendly slug (e.g. my-agent) generated from its name. Send a message, get a response. No API key needed.
Try it now
```bash
curl -X POST https://openagent.lol/api/v1/my-agent \
  -H "Content-Type: application/json" \
  -d '{"message":"Hello, what can you do?"}'
```

Replace my-agent with any agent's slug.
How Agents Work
Each agent is a self-contained AI entity with its own personality, skills, memory, and knowledge base. When someone calls the API, the agent loads all its context, generates a response, and then autonomously updates its own memory and knowledge based on the conversation.
What happens when the API is called
1. Your POST request arrives at the agent's endpoint.
2. The agent loads its SKILL.md, memory, and knowledge base.
3. The model generates a response using that context.
4. The response is returned to the caller.
5. An evaluator reviews the conversation and updates the agent's memory if needed.
6. New reference material is extracted into the knowledge base.

Steps 5 and 6 happen asynchronously after the response is sent. The agent learns from every interaction.
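As a rough illustration only, the flow above could be sketched like this; every name in the snippet is hypothetical and stands in for openAgent's internal machinery, not its actual code.

```python
import threading, time

# Hypothetical helpers, stubbed only to make the sketch self-contained.
def load_context(slug):               return {"slug": slug, "skill": "...", "memory": "...", "knowledge": []}
def generate_reply(ctx, message):     return f"[{ctx['slug']}] reply to: {message}"
def update_memory(ctx, msg, rep):     time.sleep(0.1)  # step 5: evaluator updates MEMORY.md
def extract_knowledge(ctx, msg, rep): time.sleep(0.1)  # step 6: knowledge entries extracted

def handle_api_call(slug, message):
    ctx = load_context(slug)                  # steps 1-2: load skills, memory, knowledge
    reply = generate_reply(ctx, message)      # step 3: generate the response in context
    # Steps 5-6 run in the background, after the caller already has the reply.
    threading.Thread(target=update_memory, args=(ctx, message, reply)).start()
    threading.Thread(target=extract_knowledge, args=(ctx, message, reply)).start()
    return reply                              # step 4: response returned immediately

print(handle_api_call("my-agent", "Hello, what can you do?"))
```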
Developer API
Every agent gets a REST endpoint at:
https://openagent.lol/api/v1/{slug}

Each agent gets a unique URL-friendly slug (e.g. my-agent) generated from its name. No API key, no authentication, no setup. Just send a POST request with a message and get a response.
- GET /api/v1/{slug}: returns basic agent info (name, slug, description, avatar, status).
- POST /api/v1/{slug}: sends a message to the agent and returns its response. Memory and knowledge are automatically updated.
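For example, fetching an agent's public info with the GET endpoint might look like this; a minimal sketch where my-agent is a placeholder slug and the JSON field names are assumed to mirror the list above.

```python
import requests

# Fetch basic public info about an agent.
# Assumes the response JSON uses keys matching the documented fields
# (name, slug, description, avatar, status).
info = requests.get("https://openagent.lol/api/v1/my-agent").json()
print(info["name"], "-", info["status"])
```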
{ "message": "Hello, what can you do?" }{
"messages": [
{ "role": "user", "content": "What is 2 + 2?" },
{ "role": "assistant", "content": "2 + 2 equals 4." },
{ "role": "user", "content": "And if you multiply that by 3?" }
]
}{
"response": "Hello! I'm here to help you with...",
"agent": { "id": "abc123", "name": "My Agent", "slug": "my-agent" }
}Options
"stream": true — SSE streaming response instead of JSON
"message" — Single message string (simple)
"messages" — Array of messages (multi-turn)
Code Examples
JavaScript / TypeScript
```javascript
const response = await fetch('https://openagent.lol/api/v1/my-agent', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    message: 'Hello, what can you help me with?'
  })
});

const data = await response.json();
console.log(data.response);
// => "Hello! I can help you with..."
```

Python
```python
import requests

response = requests.post(
    'https://openagent.lol/api/v1/my-agent',
    json={'message': 'What can you help me with?'}
)
data = response.json()
print(data['response'])
```
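The multi-turn messages format documented above works the same way; a minimal sketch, with my-agent as a placeholder slug:

```python
import requests

# Multi-turn request: pass the prior exchange plus the new user message.
payload = {
    "messages": [
        {"role": "user", "content": "What is 2 + 2?"},
        {"role": "assistant", "content": "2 + 2 equals 4."},
        {"role": "user", "content": "And if you multiply that by 3?"},
    ]
}

response = requests.post(
    "https://openagent.lol/api/v1/my-agent",  # replace my-agent with your agent's slug
    json=payload,
)
print(response.json()["response"])
```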
cURL

```bash
curl -X POST https://openagent.lol/api/v1/my-agent \
  -H "Content-Type: application/json" \
  -d '{"message":"Hello, what can you do?"}'
```

Streaming:

```bash
curl -X POST https://openagent.lol/api/v1/my-agent \
  -H "Content-Type: application/json" \
  -d '{"message":"Tell me a story","stream":true}'
```
Memory System
Each agent has a persistent memory (MEMORY.md) that automatically evolves from conversations. After every API call, an AI evaluator analyzes the dialogue and decides whether to update the agent's memory.
What gets remembered?
- User preferences and patterns
- Important facts and decisions
- Corrections to the agent's behavior
- Recurring topics and contexts
Memory is evaluated asynchronously after each response. You can also manually edit the memory via the agent's Overview tab in the dashboard.
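As an illustration of this behaviour, you can state a preference in one call and see it reflected later. This is only a sketch: my-agent is a placeholder slug, and because memory is evaluated asynchronously the required delay before the second call isn't specified.

```python
import time
import requests

AGENT_URL = "https://openagent.lol/api/v1/my-agent"  # placeholder slug

# First conversation: state a preference the evaluator may store in MEMORY.md.
requests.post(AGENT_URL, json={"message": "Please always answer in short bullet points."})

# Memory is updated asynchronously after the response; the wait here is arbitrary.
time.sleep(30)

# Later conversation: the agent should apply the remembered preference.
reply = requests.post(AGENT_URL, json={"message": "Explain what you can do."})
print(reply.json()["response"])
```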
Knowledge Base
The knowledge base stores structured reference material that the agent uses in responses. Unlike memory (which stores context about interactions), knowledge entries are factual reference documents that are auto-extracted from conversations.
Knowledge Categories
Knowledge is auto-extracted after each API call and can also be managed manually via the Knowledge tab on each agent.
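Because knowledge is extracted from conversations, one way to seed it is to send reference material through the normal message API. This is a hedged sketch: my-agent is a placeholder slug, and whether a particular message becomes a knowledge entry is up to the platform's extractor.

```python
import requests

AGENT_URL = "https://openagent.lol/api/v1/my-agent"  # placeholder slug

# Send factual reference material in a normal message; the platform may
# extract it into the agent's knowledge base after the call completes.
facts = (
    "Reference: support hours are 9:00-17:00 CET, Monday to Friday. "
    "The escalation contact is the on-call engineer."
)
requests.post(AGENT_URL, json={"message": facts})

# In later conversations the agent can draw on that stored knowledge.
reply = requests.post(AGENT_URL, json={"message": "When is support available?"})
print(reply.json()["response"])
```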
Models
openAgent uses OpenRouter as the LLM provider, giving access to a wide range of models. The platform includes automatic fallback: if one model is rate-limited, it tries the next available model.
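Conceptually, the fallback works like the sketch below: try a preferred model through OpenRouter's chat completions API and move down the list when a request is rate-limited. The model list, environment variable, and retry policy here are illustrative assumptions, not openAgent's actual configuration.

```python
import os
import requests

# Illustrative model preference list; not openAgent's actual configuration.
MODELS = [
    "openai/gpt-4o",
    "anthropic/claude-3.5-sonnet",
    "meta-llama/llama-3.1-70b-instruct",
]

def complete_with_fallback(messages):
    """Try each model in order, falling back when OpenRouter rate-limits (HTTP 429)."""
    for model in MODELS:
        resp = requests.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},  # assumed env var
            json={"model": model, "messages": messages},
        )
        if resp.status_code == 429:
            continue  # rate-limited: try the next model in the list
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    raise RuntimeError("All models are currently rate-limited")
```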