
y-cli

y-cli is a minimal terminal AI assistant that treats simplicity as a feature. No TUI, no panels, no bloat — just a CLI tool that reads input, calls an LLM (and optionally MCP tools), and writes output. It's designed to fit into Unix pipelines: pipe text in, get AI-processed text out.


y-cli · Open-Source · github.com/luohy15

Transport: Streamable HTTP ✓
Platform: Windows · macOS · Linux
MCP via: config file

Unix Philosophy Meets AI and MCP

y-cli follows the Unix philosophy: do one thing well. It sends text to an LLM, optionally calls MCP tools if the model requests them, and returns the result. No session management, no history database, no plugins — just input → AI → output.

This makes y-cli composable. You can use it in shell scripts, cron jobs, Git hooks, or Makefiles. Pipe a log file in and get a summary out, pipe error messages in and get debugging suggestions with MCP tool data included.

Why y-cli works for MCP:

  • Pipe-friendly — reads stdin, writes stdout, composable with Unix tools
  • Zero overhead — no daemon, no TUI, no state file
  • Model-agnostic — any OpenAI-compatible API, including local Ollama
  • YAML config — MCP servers defined in a simple config file
  • Scriptable — use in shell scripts, CI pipelines, and automation
  • Minimal footprint — small binary, fast startup
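As a sketch of the scripting use cases above: a nightly cron job could pipe log errors through y-cli and save the summary. The log path, prompt wording, and output file are assumptions, and the snippet is guarded so it is a no-op on machines where y-cli isn't installed.

```shell
#!/bin/sh
# Hypothetical cron job: summarize yesterday's errors with y-cli.
# Assumes y-cli takes the prompt as an argument and the text on stdin,
# as in the pipeline examples below. Paths are illustrative.
summarize_errors() {
  grep -i 'error' /var/log/app.log | y-cli "summarize these errors in three bullets"
}

# Only run when y-cli is actually on PATH; otherwise do nothing.
if command -v y-cli >/dev/null 2>&1; then
  summarize_errors > /tmp/error-summary.txt
fi
```

A crontab entry like `0 6 * * * /usr/local/bin/summarize-errors.sh` would then deliver a fresh summary each morning, with no daemon or state involved.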

Configuration

1. Get Your Token

In Vinkius Cloud, go to your server → Connection Tokens → Create. Copy the URL.

2. Add to Config

Edit the y-cli configuration file:

```yaml
mcp_servers:
  vinkius:
    url: "https://edge.vinkius.com/{TOKEN}/mcp"
```
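If you connect more than one MCP server, additional entries presumably follow the same shape under `mcp_servers`. The second server name and local URL below are illustrative assumptions, not part of the Vinkius setup:

```yaml
# Sketch: multiple MCP servers in one config (second entry is hypothetical).
mcp_servers:
  vinkius:
    url: "https://edge.vinkius.com/{TOKEN}/mcp"
  local-tools:
    url: "http://localhost:8080/mcp"
```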

3. Use in Pipelines

```bash
# Interactive mode
y-cli "check my server status"

# Pipe mode
cat error.log | y-cli "diagnose this error and suggest a fix"

# Script mode
echo "deploy status?" | y-cli --mcp
```

MCP tools are invoked automatically when the model needs external data.
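The same pipe pattern works in Git hooks. Below is a minimal sketch of a `pre-push` hook that pipes the outgoing diff through y-cli for a quick review; the base branch, prompt, and advisory-only behavior are assumptions, and the hook is a no-op when y-cli isn't installed.

```shell
#!/bin/sh
# Hypothetical .git/hooks/pre-push sketch.
# Sends the outgoing diff to y-cli and prints its review; it never
# blocks the push (|| true), so it stays purely advisory.
review_push() {
  git diff origin/main...HEAD | y-cli "review this diff for obvious bugs"
}

if command -v y-cli >/dev/null 2>&1; then
  review_push || true
fi
```

Because y-cli is stateless, the hook needs no setup or teardown beyond making the file executable (`chmod +x .git/hooks/pre-push`).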


FAQ

Can I use y-cli in shell scripts? That is its primary use case. y-cli reads from stdin and writes to stdout, making it composable with any Unix tool.

Does y-cli maintain conversation history? No. Each invocation is stateless by design. For multi-turn conversations, use a more full-featured client.

Which models work with y-cli? Any OpenAI-compatible endpoint: GPT, Claude (via compatible proxy), Ollama, and others.

Is y-cli free? Yes, it is open-source; you provide your own LLM API key.