# Qwen_Max
A Model Context Protocol (MCP) server implementation for the Qwen Max language model.
## What is Qwen_Max?

A Model Context Protocol (MCP) server implementation for the Qwen models.
## Prerequisites
- Node.js (v18 or higher)
- npm
- Claude Desktop
- Dashscope API key
## Installation

### Installing via Smithery

To install Qwen Max MCP Server for Claude Desktop automatically via Smithery:

```bash
npx -y @smithery/cli install @66julienmartin/mcp-server-qwen_max --client claude
```

### Manual Installation

```bash
git clone https://github.com/66julienmartin/mcp-server-qwen-max.git
cd Qwen_Max
npm install
```
## Model Selection

By default, this server uses the Qwen-Max model. The Qwen series offers several commercial models with different capabilities:

### Qwen-Max

Provides the best inference performance, especially for complex and multi-step tasks.
- Context window: 32,768 tokens
- Max input: 30,720 tokens
- Max output: 8,192 tokens
- Pricing: $0.0016/1K tokens (input), $0.0064/1K tokens (output)
- Free quota: 1 million tokens
### Qwen-Plus
Balanced combination of performance, speed, and cost, ideal for moderately complex tasks.
- Context window: 131,072 tokens
- Max input: 129,024 tokens
- Max output: 8,192 tokens
- Pricing: $0.0004/1K tokens (input), $0.0012/1K tokens (output)
- Free quota: 1 million tokens
### Qwen-Turbo
Fast speed and low cost, suitable for simple tasks.
- Context window: 1,000,000 tokens
- Max input: 1,000,000 tokens
- Max output: 8,192 tokens
- Pricing: $0.00005/1K tokens (input), $0.0002/1K tokens (output)
- Free quota: 1 million tokens
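The token limits above can be kept in one lookup so requests are checked before they reach the API. A minimal sketch (the `MODEL_SPECS` and `clampMaxTokens` names are hypothetical, not part of this repo):

```typescript
type QwenModel = "qwen-max" | "qwen-plus" | "qwen-turbo";

interface ModelSpec {
  contextWindow: number; // total tokens
  maxInput: number;      // prompt tokens
  maxOutput: number;     // completion tokens
}

// Values taken from the tables above.
const MODEL_SPECS: Record<QwenModel, ModelSpec> = {
  "qwen-max":   { contextWindow: 32_768,    maxInput: 30_720,    maxOutput: 8_192 },
  "qwen-plus":  { contextWindow: 131_072,   maxInput: 129_024,   maxOutput: 8_192 },
  "qwen-turbo": { contextWindow: 1_000_000, maxInput: 1_000_000, maxOutput: 8_192 },
};

// Cap a requested max_tokens at what the chosen model can actually emit.
function clampMaxTokens(model: QwenModel, requested: number): number {
  return Math.min(requested, MODEL_SPECS[model].maxOutput);
}
```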
To modify the model, update the model name in `src/index.ts`:

```typescript
// For Qwen-Max (default)
model: "qwen-max"

// For Qwen-Plus
model: "qwen-plus"

// For Qwen-Turbo
model: "qwen-turbo"
```
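If editing the source feels heavy-handed, one alternative (purely illustrative, not a feature of this server) is to resolve the model name from an environment variable and fall back to the default:

```typescript
// Hypothetical helper: pick the model from an env var such as QWEN_MODEL,
// falling back to the repo's default "qwen-max" on anything unrecognized.
const VALID_MODELS = ["qwen-max", "qwen-plus", "qwen-turbo"] as const;
type QwenModel = (typeof VALID_MODELS)[number];

function resolveModel(raw: string | undefined): QwenModel {
  return VALID_MODELS.includes(raw as QwenModel) ? (raw as QwenModel) : "qwen-max";
}
```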
## Configuration

1. Create a `.env` file in the project root:

```
DASHSCOPE_API_KEY=your-api-key-here
```

2. Update Claude Desktop configuration:
```json
{
  "mcpServers": {
    "qwen_max": {
      "command": "node",
      "args": ["/path/to/Qwen_Max/build/index.js"],
      "env": {
        "DASHSCOPE_API_KEY": "your-api-key-here"
      }
    }
  }
}
```
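Since the key can arrive via either `.env` or the Claude Desktop config, a server like this typically checks for it once at startup and fails fast with a clear message. A sketch of that check (the `requireApiKey` helper is hypothetical):

```typescript
// Read DASHSCOPE_API_KEY from the environment, throwing a descriptive error
// if it is missing rather than failing later on the first API call.
function requireApiKey(env: Record<string, string | undefined>): string {
  const key = env.DASHSCOPE_API_KEY;
  if (!key) {
    throw new Error(
      "DASHSCOPE_API_KEY is not set. Add it to .env or to the Claude Desktop config."
    );
  }
  return key;
}
```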
## Development

```bash
npm run dev   # Watch mode
npm run build # Build
npm run start # Start server
```
## Features
- Text generation with Qwen models
- Configurable parameters (max_tokens, temperature)
- Error handling
- MCP protocol support
- Claude Desktop integration
- Support for all Qwen commercial models (Max, Plus, Turbo)
- Extensive token context windows
## API Usage

Example tool call:

```json
{
  "name": "qwen_max",
  "arguments": {
    "prompt": "Your prompt here",
    "max_tokens": 8192,
    "temperature": 0.7
  }
}
```
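The arguments above map naturally onto a small TypeScript interface, and the server can validate them before forwarding anything to Dashscope. A sketch under that assumption (the `QwenToolArgs` and `validateArgs` names are illustrative, not the repo's actual types):

```typescript
// Shape of the tool arguments shown in the example call above.
interface QwenToolArgs {
  prompt: string;
  max_tokens?: number;  // 1 – 8192 (the models' max output)
  temperature?: number; // 0.0 – 1.0
}

// Return a list of problems; an empty list means the call is well-formed.
function validateArgs(args: QwenToolArgs): string[] {
  const errors: string[] = [];
  if (!args.prompt || args.prompt.trim() === "")
    errors.push("prompt must be non-empty");
  if (args.max_tokens !== undefined && (args.max_tokens < 1 || args.max_tokens > 8192))
    errors.push("max_tokens must be between 1 and 8192");
  if (args.temperature !== undefined && (args.temperature < 0 || args.temperature > 1))
    errors.push("temperature must be between 0.0 and 1.0");
  return errors;
}
```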
## The Temperature Parameter
The temperature parameter controls the randomness of the model's output:
- Lower values (0.0-0.7): More focused and deterministic outputs
- Higher values (0.7-1.0): More creative and varied outputs
## Error Handling
The server provides detailed error messages for common issues:
- API authentication errors
- Invalid parameters
- Rate limiting
- Network issues
- Token limit exceeded
- Model availability issues
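One common way to produce such messages is to map the HTTP status of a failed Dashscope response onto the categories above. A hedged sketch (the mapping and wording are illustrative; the server's actual messages may differ):

```typescript
// Translate an upstream HTTP status code into a human-readable error category.
function describeError(status: number): string {
  switch (status) {
    case 401: return "API authentication error: check DASHSCOPE_API_KEY";
    case 400: return "Invalid parameters in the request";
    case 429: return "Rate limited: retry after a short delay";
    default:
      return status >= 500
        ? "Upstream model or service availability issue"
        : `Unexpected error (HTTP ${status})`;
  }
}
```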
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
## Server Config

```json
{
  "mcpServers": {
    "qwen_max-server": {
      "command": "npx",
      "args": [
        "qwen_max"
      ]
    }
  }
}
```