# Multi-Model Advisor

A Model Context Protocol (MCP) server that orchestrates queries across multiple Ollama models, synthesizing their insights to deliver a comprehensive and multifaceted AI perspective on any given query.

## Features
- Query multiple Ollama models with a single question
## Configuration

Create a `.env` file in the project root with your desired configuration:
```
# Server configuration
SERVER_NAME=multi-model-advisor
SERVER_VERSION=1.0.0
DEBUG=true

# Ollama configuration
OLLAMA_API_URL=http://localhost:11434
DEFAULT_MODELS=gemma3:1b,llama3.2:1b,deepseek-r1:1.5b

# System prompts for each model
GEMMA_SYSTEM_PROMPT=You are a creative and innovative AI assistant. Think outside the box and offer novel perspectives.
LLAMA_SYSTEM_PROMPT=You are a supportive and empathetic AI assistant focused on human well-being. Provide considerate and balanced advice.
DEEPSEEK_SYSTEM_PROMPT=You are a logical and analytical AI assistant. Think step-by-step and explain your reasoning clearly.
```
## Connect to Claude for Desktop

1. Locate your Claude for Desktop configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
2. Add the server entry shown below, replacing `/absolute/path/to/` with the actual path to your project directory
3. Restart Claude for Desktop
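For reference, a minimal `mcpServers` entry might look like the following; the `command` and `args` values are illustrative and assume a Node build at `build/index.js`:

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/build/index.js"]
    }
  }
}
```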
## Usage

Once connected to Claude for Desktop, you can use the Multi-Model Advisor in several ways:

### List Available Models
You can see all available models on your system:
```
Show me which Ollama models are available on my system
```

### Basic Usage
Simply ask Claude to use the multi-model advisor:
```
what are the most important skills for success in today's job market, you can use gemma3:1b, llama3.2:1b, deepseek-r1:1.5b to help you
```

## How It Works
The MCP server exposes two tools:
- `list-available-models`: Shows all Ollama models on your system
- `query-models`: Queries multiple models with a question
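As a rough sketch, tools like these could be registered with the MCP TypeScript SDK (`@modelcontextprotocol/sdk`); the handler bodies and the `queryModels` helper are illustrative, not the project's actual code:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "multi-model-advisor", version: "1.0.0" });

// Tool 1: report the models Ollama has pulled locally
server.tool("list-available-models", {}, async () => {
  const res = await fetch("http://localhost:11434/api/tags"); // Ollama's model-list endpoint
  const { models } = await res.json();
  const names = models.map((m: { name: string }) => m.name).join("\n");
  return { content: [{ type: "text", text: names }] };
});

// Tool 2: fan the question out to several models
// (queryModels is a hypothetical helper, sketched in the next section)
server.tool("query-models", { question: z.string() }, async ({ question }) => {
  return { content: [{ type: "text", text: await queryModels(question) }] };
});

// Serve over stdio so Claude for Desktop can launch the server
await server.connect(new StdioServerTransport());
```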
When you ask Claude a question referring to the multi-model advisor:

1. Claude decides to use the `query-models` tool
2. The server sends your question to multiple Ollama models
3. Each model responds with its perspective
4. Claude receives all responses and synthesizes a comprehensive answer

Each model can have a different 'persona' or role assigned, encouraging diverse perspectives.
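A minimal sketch of that fan-out step, assuming Ollama's `/api/generate` endpoint and the `.env` values shown earlier (`queryModels` is the hypothetical helper referenced above):

```typescript
const OLLAMA_API_URL = process.env.OLLAMA_API_URL ?? "http://localhost:11434";

// Per-model personas, mirroring the system prompts in .env
const SYSTEM_PROMPTS: Record<string, string> = {
  "gemma3:1b": process.env.GEMMA_SYSTEM_PROMPT ?? "",
  "llama3.2:1b": process.env.LLAMA_SYSTEM_PROMPT ?? "",
  "deepseek-r1:1.5b": process.env.DEEPSEEK_SYSTEM_PROMPT ?? "",
};

async function queryModels(question: string): Promise<string> {
  const models = (process.env.DEFAULT_MODELS ?? "").split(",");
  // Query all models in parallel; each gets its own system prompt (persona)
  const answers = await Promise.all(
    models.map(async (model) => {
      const res = await fetch(`${OLLAMA_API_URL}/api/generate`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model,
          prompt: question,
          system: SYSTEM_PROMPTS[model] ?? "",
          stream: false, // one complete JSON response per model
        }),
      });
      const { response } = await res.json();
      return `### ${model}\n${response}`;
    })
  );
  // Claude receives this combined text and synthesizes the final answer
  return answers.join("\n\n");
}
```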
## Troubleshooting

### Ollama Connection Issues
If the server can't connect to Ollama:

- Ensure Ollama is running (`ollama serve`)
- Check that the `OLLAMA_API_URL` is correct in your `.env` file
- Try accessing http://localhost:11434 in your browser to verify Ollama is responding
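You can also check from the command line; Ollama's `/api/tags` endpoint lists the locally pulled models:

```bash
# Returns JSON with a "models" array when Ollama is responding
curl http://localhost:11434/api/tags
```

### Model Not Found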
If a model is reported as unavailable:

- Check that you've pulled the model using `ollama pull <model-name>`
- Verify the exact model name using `ollama list`
- Use the `list-available-models` tool to see all available models
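For example, using one of the default models from the `.env` above:

```bash
ollama pull gemma3:1b   # download the model if it's missing
ollama list             # confirm the exact name, including the tag
```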
### Claude Not Showing MCP Tools

If the tools don't appear in Claude:

- Ensure you've restarted Claude for Desktop after updating the configuration
- Check that the absolute path in `claude_desktop_config.json` is correct
- Look at Claude's logs for error messages

### Insufficient RAM
Some of the selected models may be too large for your machine's available memory to run. If a model fails to respond, try specifying smaller models (see Basic Usage above) or upgrading your RAM.