
Agents that actually learn

Fine-Tune Your Agents With Reinforcement Learning

Get Started

Hooks into your production system

Intercepts all calls to your agent to collect training queries, so you don't have to provide explicit JSONL datasets.

agent.py
from langchain_openai import OpenAI

llm = OpenAI(api_key="sk-XXX",
             base_url="https://api.augento.com/v1")

Define a Reward Function

Define reward functions to train your agent on your specific task. Let it learn reasoning, find edge-cases in your codebase, play chess, or interact with your MCP tools.

reward.py
import compiler  # your project's compile-check module

def reward(completion):
    # Reward 1 if the completion compiles, 0 otherwise.
    try:
        compiler.compile(completion)
        return 1
    except Exception:
        return 0
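Reward functions are plain Python, so they can score completions however your task requires. As an illustrative sketch (not part of the snippet above, and with the expected `"answer"` key chosen purely as an example), a reward that gives partial credit for structured output might look like:

```python
import json

def reward(completion: str) -> float:
    """Score a completion that should be a JSON object with an 'answer' key.

    Illustrative shaping: 1.0 for valid JSON containing the key,
    0.5 for any other valid JSON, 0.0 for unparseable output.
    """
    try:
        data = json.loads(completion)
    except (json.JSONDecodeError, TypeError):
        return 0.0
    if isinstance(data, dict) and "answer" in data:
        return 1.0
    return 0.5
```

Graded rewards like this usually give the optimizer a smoother signal than a strict 0/1 check.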

RL Training

Fine-tune open-source models directly on our platform.
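To build intuition for what reward-driven training does, here is a deliberately simplified stand-in (not the platform's actual training code): score sampled completions with the reward function and keep only the high-reward ones as supervised fine-tuning data, i.e. rejection sampling. All names here are illustrative.

```python
def select_training_examples(samples, reward_fn, threshold=0.5):
    """Filter (prompt, completion) pairs by reward -- a simplified
    rejection-sampling stand-in for full RL fine-tuning."""
    kept = []
    for prompt, completion in samples:
        if reward_fn(completion) >= threshold:
            kept.append({"prompt": prompt, "completion": completion})
    return kept

samples = [
    ("2+2?", "4"),  # correct completion -> high reward
    ("2+2?", "5"),  # wrong completion   -> low reward
]
dataset = select_training_examples(samples,
                                   lambda c: 1.0 if c == "4" else 0.0)
# dataset now contains only the correct completion
```

Full RL methods additionally weight gradient updates by reward rather than hard-filtering, but the feedback loop is the same: the reward function decides what behavior gets reinforced.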


Hosting

Fine-tuned models are hosted on our infrastructure, so you can switch to them in one click.

terminal
curl https://api.augento.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-..." \
  -d '{
    "model": "finetuned-model",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
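The same request can be built from Python with the standard library alone; this sketch mirrors the curl command above (endpoint and model name taken from it, API key a placeholder) and constructs the request without sending it:

```python
import json
import urllib.request

def chat_request(api_key, model, messages):
    """Build the same POST the curl command sends (not executed here)."""
    payload = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        "https://api.augento.ai/v1/chat/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = chat_request("sk-...", "finetuned-model",
                   [{"role": "user", "content": "Hello!"}])
# send with: urllib.request.urlopen(req)
```

Because the endpoint is OpenAI-compatible, any OpenAI-style client pointed at this base URL should work as well.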

Coming Soon: RL from Feedback

Instead of providing a reward function, you can give high-level feedback on the agent's performance and train fine-tuned models that don't make the same mistakes again.
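One way such feedback could plug into the existing pipeline (a hypothetical sketch, not the announced implementation) is to map thumbs-up/thumbs-down signals onto the same scalar rewards that a reward function would produce:

```python
def feedback_to_reward(feedback):
    """Map thumbs feedback to a scalar reward (hypothetical mapping:
    'up' -> 1.0, 'down' -> 0.0, anything else -> neutral 0.5)."""
    return {"up": 1.0, "down": 0.0}.get(feedback, 0.5)

# A stream of user reactions becomes a reward signal for training.
rewards = [feedback_to_reward(f) for f in ["up", "down", "up"]]
```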
