Run commands and scripts. Just like in a terminal.
## Documentation

### Tools
Tools are for LLMs to request. Claude 3.5 Sonnet uses `run_command` intelligently, and initial testing shows promising results with Groq Desktop with MCP and llama4 models.

Currently, just one tool to rule them all:

- `run_command` - run a command, e.g. `hostname`, `ls -al`, or `echo "hello world"`
  - Returns STDOUT and STDERR as text
  - An optional `stdin` parameter means your LLM can:
    - pass code via stdin to commands like `fish`, `bash`, `zsh`, `python`
    - create files with `cat >> foo/bar.txt` from the text in `stdin`

> [!WARNING]
> Be careful what you ask this server to run!
> In the Claude Desktop app, use Approve Once (not Allow for This Chat) so you can review each command; use Deny if you don't trust a command.
> Permissions are dictated by the user that runs the server.
> DO NOT run with `sudo`.
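For illustration, here is roughly what a `run_command` invocation looks like over MCP's JSON-RPC transport. The `tools/call` method is standard MCP; the exact argument names here (`command`, `stdin`) follow the description above and should be treated as a sketch, not the server's exact schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "run_command",
    "arguments": {
      "command": "cat >> foo/bar.txt",
      "stdin": "hello world\n"
    }
  }
}
```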
Video walkthrough

### Prompts
Prompts are for users to include in chat history, e.g. via Zed's slash commands (in its AI Chat panel).

- `run_command` - generate a prompt message with the command output
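In Zed's AI Chat panel, that might look something like the following; the exact slash-command syntax depends on Zed's MCP prompt integration, so treat this as illustrative:

```
/run_command ls -al
```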
## Development
Install dependencies:

```sh
npm install
```

Build the server:

```sh
npm run build
```

For development with auto-rebuild:

```sh
npm run watch
```
## Installation
To use with Claude Desktop, add the server config:

- On macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- On Windows: `%APPDATA%/Claude/claude_desktop_config.json`

Groq Desktop (beta, macOS) uses `~/Library/Application Support/groq-desktop-app/settings.json`.
The script can be used as the `command` directly because `build/index.js` starts with a shebang line (note that `claude_desktop_config.json` must be valid JSON, so comments are not allowed):

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "/path/to/mcp-server-commands/build/index.js"
    }
  }
}
```
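If your system can't execute the script directly (e.g. the execute bit isn't set), an equivalent config invokes `node` explicitly. This variant is a sketch, not from the project's docs:

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "node",
      "args": ["/path/to/mcp-server-commands/build/index.js"]
    }
  }
}
```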
## Local Models
Most models are trained such that they don't think they can run commands for you. Sometimes they use tools without hesitation; other times I have to coax them. Use a system prompt or prompt template to instruct them to follow user requests, including using `run_command` without double-checking. A sketch of such a prompt follows.
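The wording below is illustrative, not from this repo; adapt it to your model and client:

```
You have access to a run_command tool that executes shell commands on the
user's machine. When the user asks you to run a command, call run_command
directly. Do not ask for confirmation; the MCP client handles approval.
```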
Ollama is a great way to run a model locally (e.g. with Open-WebUI):
```sh
ollama pull https://huggingface.co/lmstudio-community/openhands-lm-32b-v0.1-GGUF

# https://ollama.com/library/devstral
ollama pull devstral

# Qwen2.5-Coder has tool use but you have to coax it
ollama pull qwen2.5-coder
```
## HTTP / OpenAPI
The server is implemented with the STDIO transport. For HTTP, use mcpo for an OpenAPI-compatible web server interface; this works with Open-WebUI.
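An mcpo invocation would look roughly like this, wrapping the STDIO server behind an HTTP port; check mcpo's docs for the exact flags:

```sh
uvx mcpo --port 8000 -- /path/to/mcp-server-commands/build/index.js
```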
> [!WARNING]
> I only briefly used mcpo with Open-WebUI; make sure to vet it for security concerns.
## Logging
The Claude Desktop app writes logs to `~/Library/Logs/Claude/mcp-server-mcp-server-commands.log`.

By default, only important messages (i.e. errors) are logged. If you want to see more messages, add `--verbose` to the args when configuring the server.

By the way, logs are written to STDERR because that is what the Claude Desktop app routes to the log files. In the future, I expect well-formatted log messages to be written over the STDIO transport to the MCP client (note: not the Claude Desktop app).
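For example, extending the Claude Desktop config from above (the `--verbose` flag is documented here; the `args` placement assumes the standard config shape):

```json
{
  "mcpServers": {
    "mcp-server-commands": {
      "command": "/path/to/mcp-server-commands/build/index.js",
      "args": ["--verbose"]
    }
  }
}
```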
## Debugging
Since MCP servers communicate over stdio, debugging can be challenging. We recommend using the MCP Inspector, which is available as a package script:
```sh
npm run inspector
```
The Inspector will provide a URL to access debugging tools in your browser.