
Multi-Model Advisor


A Model Context Protocol (MCP) server that queries multiple Ollama models and combines their responses.


What is Multi-Model Advisor?

A Model Context Protocol (MCP) server that orchestrates queries across multiple Ollama models, synthesizing their insights to deliver a comprehensive and multifaceted AI perspective on any given query.

Documentation

Features

  • Query multiple Ollama models with a single question
  • Assign different roles/personas to each model
  • View all available Ollama models on your system
  • Customize system prompts for each model
  • Configure via environment variables
  • Integrate seamlessly with Claude for Desktop

Prerequisites

  • Node.js 16.x or higher
  • Ollama installed and running (see Ollama installation)
  • Claude for Desktop (for the complete advisory experience)
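A quick sanity check for the Node.js requirement (assuming `node` is on your PATH; the Ollama check is analogous with `ollama --version`):

```shell
# Fail fast if Node.js is older than 16.x, the minimum version above.
major=$(node --version | sed 's/^v\([0-9]*\).*/\1/')
if [ "$major" -lt 16 ]; then
  echo "Node.js 16+ required, found v$major" >&2
  exit 1
fi
echo "Node.js v$major OK"
```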

Installation

Installing via Smithery

To install multi-ai-advisor-mcp for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @YuChenSSR/multi-ai-advisor-mcp --client claude
```

Manual Installation

  1. Clone this repository:

```bash
git clone https://github.com/YuChenSSR/multi-ai-advisor-mcp.git
cd multi-ai-advisor-mcp
```

  2. Install dependencies:

```bash
npm install
```

  3. Build the project:

```bash
npm run build
```

  4. Install the required Ollama models:

```bash
ollama pull gemma3:1b
ollama pull llama3.2:1b
ollama pull deepseek-r1:1.5b
```

Configuration

Create a .env file in the project root with your desired configuration:

```
SERVER_NAME=multi-model-advisor
SERVER_VERSION=1.0.0
DEBUG=true

# Ollama configuration
OLLAMA_API_URL=http://localhost:11434
DEFAULT_MODELS=gemma3:1b,llama3.2:1b,deepseek-r1:1.5b

# System prompts for each model
GEMMA_SYSTEM_PROMPT=You are a creative and innovative AI assistant. Think outside the box and offer novel perspectives.
LLAMA_SYSTEM_PROMPT=You are a supportive and empathetic AI assistant focused on human well-being. Provide considerate and balanced advice.
DEEPSEEK_SYSTEM_PROMPT=You are a logical and analytical AI assistant. Think step-by-step and explain your reasoning clearly.
```
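The server reads these variables at startup. A minimal sketch of how the values above might be parsed — the function names are illustrative, not the project's actual API, and the model-to-prompt mapping rule is an assumption inferred from the env names:

```typescript
// Illustrative helpers for the .env values above; not the project's real loader.

/** Split the comma-separated DEFAULT_MODELS value into model names. */
function parseDefaultModels(raw: string): string[] {
  return raw
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
}

/**
 * Map a model name to its system-prompt env variable by taking the
 * leading letters of the model family (assumed convention).
 */
function promptEnvVar(model: string): string {
  const family = model.toLowerCase().match(/^[a-z]+/);
  return `${(family ? family[0] : model).toUpperCase()}_SYSTEM_PROMPT`;
}

const models = parseDefaultModels("gemma3:1b,llama3.2:1b,deepseek-r1:1.5b");
console.log(models);                  // ["gemma3:1b", "llama3.2:1b", "deepseek-r1:1.5b"]
console.log(promptEnvVar(models[0])); // "GEMMA_SYSTEM_PROMPT"
```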

Connect to Claude for Desktop

  1. Locate your Claude for Desktop configuration file:
  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  2. Edit the file to add the Multi-Model Advisor MCP server:

```json
{
  "mcpServers": {
    "multi-model-advisor": {
      "command": "node",
      "args": ["/absolute/path/to/multi-ai-advisor-mcp/build/index.js"]
    }
  }
}
```

  3. Replace /absolute/path/to/ with the actual path to your project directory.
  4. Restart Claude for Desktop.

Usage

Once connected to Claude for Desktop, you can use the Multi-Model Advisor in several ways:

List Available Models

You can see all available models on your system:

```
Show me which Ollama models are available on my system
```

Basic Usage

Simply ask Claude to use the multi-model advisor:

```
What are the most important skills for success in today's job market? You can use gemma3:1b, llama3.2:1b, and deepseek-r1:1.5b to help you.
```

How It Works

  1. The MCP server exposes two tools:
  • list-available-models: Shows all Ollama models on your system
  • query-models: Queries multiple models with a question
  2. When you ask Claude a question that refers to the multi-model advisor:
  • Claude decides to use the query-models tool
  • The server sends your question to multiple Ollama models
  • Each model responds with its own perspective
  • Claude receives all responses and synthesizes a comprehensive answer
  3. Each model can be assigned a different persona or role, encouraging diverse perspectives.
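Conceptually, the query step fans one question out to several Ollama chat calls, each carrying its model's persona as the system message. A simplified sketch — the payload shape follows Ollama's /api/chat API, while the helper names and the fan-out structure are illustrative, not the project's actual implementation:

```typescript
// One Ollama /api/chat request body per model.
type ChatPayload = {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
  stream: boolean;
};

/** Build a request per model, pairing it with its persona prompt. */
function buildPayloads(
  question: string,
  personas: Record<string, string>
): ChatPayload[] {
  return Object.entries(personas).map(([model, systemPrompt]) => ({
    model,
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: question },
    ],
    stream: false,
  }));
}

/** Fan the payloads out in parallel (assumes Ollama at localhost:11434). */
async function queryModels(question: string, personas: Record<string, string>) {
  const responses = await Promise.all(
    buildPayloads(question, personas).map((body) =>
      fetch("http://localhost:11434/api/chat", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      }).then((r) => r.json())
    )
  );
  return responses; // one perspective per model, for Claude to synthesize
}
```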

Troubleshooting

Ollama Connection Issues

If the server can't connect to Ollama:

  • Ensure Ollama is running (ollama serve)
  • Check that the OLLAMA_API_URL is correct in your .env file
  • Try accessing http://localhost:11434 in your browser to verify Ollama is responding

Model Not Found

If a model is reported as unavailable:

  • Check that you've pulled the model using ollama pull
  • Verify the exact model name using ollama list
  • Use the list-available-models tool to see all available models

Claude Not Showing MCP Tools

If the tools don't appear in Claude:

  • Ensure you've restarted Claude after updating the configuration
  • Check that the absolute path in claude_desktop_config.json is correct
  • Look at Claude's logs for error messages

Insufficient RAM

Some of the configured models may be too large to run in your available memory. If a model fails to load, try specifying smaller models (see Basic Usage) or upgrading your system's memory.

Server Config

{
  "mcpServers": {
    "multi-model-advisor-server": {
      "command": "npx",
      "args": [
        "multi-model-advisor"
      ]
    }
  }
}

Links & Status

Repository: github.com/YuChenSSR/multi-ai-advisor-mcp
Hosted: No
Global: No
Official: No

Project Info

Created At: May 23, 2025
Updated At: Aug 07, 2025
Author: Yu Chen
Category: community
License: MIT License
Tags: development, documentation, public