# gptme
gptme is a personal AI agent that lives in your terminal and specializes in running code. It executes Python, shell commands, and browser automation, self-corrects on errors, and chains multiple scripts together — all without leaving the chat. The Model Context Protocol (MCP) extends its reach by connecting external services the agent can query or modify during execution.
## Self-Correcting Execution with External Context
gptme's distinguishing feature is its execution loop. When it writes a script that fails, it reads the error output, corrects the code, and retries — all automatically. Adding MCP tools lets the agent gather context before writing code (e.g., fetching a database schema) and validate results after execution (e.g., checking deployment status).
This feedback loop makes gptme well-suited for automation scripts, data analysis pipelines, and DevOps tasks where getting it right on the second try is good enough — and the agent needs live data to do so.
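The loop can be sketched in a few lines of Python. This is an illustrative simplification, not gptme's actual implementation: the `fix` callback stands in for the LLM deciding how to repair the failing code.

```python
# Minimal sketch of a self-correcting execution loop (illustrative only):
# run code, capture the error output, hand it to a "fix" step, and retry.
import subprocess
import sys

def run(code: str) -> tuple[bool, str]:
    """Execute a Python snippet in a subprocess; return (ok, combined output)."""
    proc = subprocess.run([sys.executable, "-c", code],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def self_correct(code: str, fix, max_retries: int = 3) -> str:
    """Retry `code`, letting `fix(code, error)` rewrite it after each failure."""
    for _ in range(max_retries):
        ok, output = run(code)
        if ok:
            return output
        code = fix(code, output)  # in gptme, the model performs this step
    raise RuntimeError("still failing after retries")

# Toy "fix" standing in for the model: repair a known typo, then succeed.
print(self_correct("prnt('hello')", lambda code, err: code.replace("prnt", "print")))
```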
What sets gptme apart:
- Self-correcting — automatic retry on execution errors
- Python + shell + browser — three execution environments in one agent
- Conversation persistence — resume past sessions with full context
- Pipe-friendly — accepts stdin, works in shell scripts
- Model-agnostic — Claude, GPT, local Ollama models
- Web UI — optional browser-based interface alongside the CLI
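For orientation, a few common invocation patterns. Treat the exact flags and model identifiers as assumptions and check `gptme --help` for your installed version.

```shell
# Ask a one-off question (prompt passed as an argument)
gptme "write a script that renames *.jpeg to *.jpg"

# Pipe data in as context (assumes piped stdin is added to the prompt)
git diff | gptme "summarize these changes"

# Select a model explicitly (flag and provider/model syntax assumed)
gptme -m anthropic/claude-3-5-sonnet "explain this repo"
```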
## Connecting Vinkius Cloud
### 1. Create a Token
In Vinkius Cloud, navigate to your server → Connection Tokens → Create. Copy the URL.
### 2. Add to gptme Config
Add the MCP server in your gptme configuration:
```yaml
mcp_servers:
  vinkius:
    url: "https://edge.vinkius.com/{TOKEN}/mcp"
```

### 3. Start a Session
```shell
gptme
```

MCP tools load alongside gptme's built-in execution capabilities. The agent calls them as part of its reasoning and execution loop.
## FAQ
**Can gptme run MCP tool calls and Python scripts in the same session?** Yes. gptme can query an MCP tool, use the result in a Python script, execute it, and iterate — all within a single conversation turn.
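The pattern looks roughly like this sketch, where `fetch_schema` is a hypothetical stand-in for an MCP tool call (not gptme's API) and the generated script uses the fetched result as context:

```python
# Illustrative sketch: an MCP tool result feeds a generated script
# that is then executed, all within one turn.
import json
import subprocess
import sys

def fetch_schema() -> dict:
    # Stand-in for an MCP tool call such as "get database schema".
    return {"users": ["id", "name", "email"]}

schema = fetch_schema()

# The model would write this script using the fetched schema as context;
# here it simply prints the columns of the `users` table.
script = f"print(', '.join({json.dumps(schema['users'])}))"

proc = subprocess.run([sys.executable, "-c", script],
                      capture_output=True, text=True)
print(proc.stdout.strip())  # → id, name, email
```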
**What happens if an MCP tool call fails?** gptme treats MCP failures like execution errors: it reads the error, adjusts its approach, and retries with a different strategy.
**Does gptme support local models?** Yes. It works with Ollama and any OpenAI-compatible local endpoint. MCP functions the same regardless of which model is active.
**Is gptme free?** Yes. gptme is open-source under the MIT license. Bring your own API key for cloud LLM providers, or use free local models.