CCProxy

A high-performance proxy that enables Claude Code to work with multiple AI providers. Route requests to OpenAI, Google Gemini, DeepSeek, and more through a single interface.
Documentation | Issues | Discussions
Why CCProxy?
CCProxy enables Claude Code to work with multiple AI providers:
- Top-Ranked Model Access: Use Qwen3 235B (#1 on AIME with 70.3 score) via OpenRouter
- 100+ Models Through OpenRouter: Access Qwen3, Kimi K2, Grok, and many more
- Cost Optimization: Route requests to the most cost-effective provider
- Failover Protection: Automatic fallback when providers are unavailable
- Token-Based Routing: Intelligent routing based on context length
- Drop-in Replacement: Works seamlessly without code changes
Features
- Multi-Provider Support: Anthropic, OpenAI, Google Gemini, DeepSeek, OpenRouter (100+ models)
- Intelligent Routing: Automatic model selection based on context
- API Translation: Seamless format conversion between providers
- Tool Support: Function calling (required for Claude Code compatibility)
- Streaming Support: Real-time responses via SSE
- Cross Platform: Linux, macOS, and Windows (AMD64/ARM64)
- Process Management: Background service with auto-startup
- Health Monitoring: Built-in status tracking
- Security: API validation, access control, rate limiting
Model Selection Strategy
CCProxy intelligently routes requests based on:
- Token count (>60K tokens → longContext route)
- Model type (haiku models → background route)
- Thinking parameter (thinking: true → think route, if configured)
- Explicit selection (provider,model format)
Note: For Claude Code compatibility, models must support function calling (tool use).
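The selection order above can be sketched as a small Go function. This is an illustrative sketch of the documented rules, not CCProxy's actual implementation; the route names and the 60K-token threshold are taken from the list above.

```go
package main

import (
	"fmt"
	"strings"
)

// Request holds the fields the router inspects. Illustrative only:
// the real implementation differs, but the thresholds and route
// names mirror the selection rules documented above.
type Request struct {
	Model      string
	TokenCount int
	Thinking   bool
}

// pickRoute applies the documented priority order: explicit
// "provider,model" selection, direct model routes, long context,
// background (haiku) models, thinking mode, then the default route.
func pickRoute(req Request, routes map[string]bool) string {
	if strings.Contains(req.Model, ",") {
		return "explicit" // e.g. "openai,gpt-4.1-turbo" bypasses routing
	}
	if routes[req.Model] {
		return req.Model // a route named after the model, e.g. "gpt-4.1"
	}
	if req.TokenCount > 60000 && routes["longContext"] {
		return "longContext"
	}
	if strings.HasPrefix(req.Model, "claude-3-5-haiku") && routes["background"] {
		return "background"
	}
	if req.Thinking && routes["think"] {
		return "think"
	}
	return "default"
}

func main() {
	routes := map[string]bool{"gpt-4.1": true, "longContext": true, "background": true, "think": true}
	fmt.Println(pickRoute(Request{Model: "openai,gpt-4.1-turbo"}, routes))          // explicit
	fmt.Println(pickRoute(Request{Model: "some-model", TokenCount: 70000}, routes)) // longContext
	fmt.Println(pickRoute(Request{Model: "claude-3-5-haiku-20241022"}, routes))     // background
}
```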
Quick Start
Option 1: Download Pre-built Binary
- Download the latest binary for your platform from the releases page
- Create a configuration file:
cp example.config.json config.json
# Edit config.json to add your provider API keys
- Start CCProxy:
./ccproxy start
Option 2: Build from Source
- Prerequisites: Go 1.23 or later
- Clone and build:
git clone https://github.com/orchestre-dev/ccproxy.git
cd ccproxy
go build ./cmd/ccproxy
- Configure:
cp example.config.json config.json
# Edit config.json to add your provider API keys
- Start the service:
./ccproxy start
Option 3: Docker
docker build -t ccproxy .
docker run -d -p 3456:3456 -v $(pwd)/config.json:/home/ccproxy/.ccproxy/config.json ccproxy
Option 4: One-Command Setup (Recommended)
# Fastest setup - auto-configures everything
./ccproxy code
This command automatically:
- Starts the proxy server
- Configures environment variables
- Sets up the connection
- Enables access to all configured providers
Configuration Guide
Quick Start Configuration
Create a configuration file with your provider API keys:
- Create the configuration directory (if it doesn't exist):
mkdir -p ~/.ccproxy
- Create a config file with your provider API keys:
cat > ~/.ccproxy/config.json << 'EOF'
{
"providers": [
{
"name": "openai",
"api_base_url": "https://api.openai.com/v1",
"api_key": "your-openai-api-key",
"models": ["gpt-4.1", "gpt-4.1-mini"],
"enabled": true
}
],
"routes": {
"default": {
"provider": "openai",
"model": "gpt-4.1"
}
}
}
EOF
- Start CCProxy and configure Claude Code:
./ccproxy code
This single command will:
- Start the CCProxy service
- Set up environment variables for Claude Code
- Enable Claude Code to use your configured AI providers
Configuration Priority System
CCProxy uses a layered configuration system. Settings are applied in this order (highest priority first):
- Command-line flags - override any other setting:
ccproxy start --port 8080 --config /custom/config.json
- Environment variables - override config file values:
export CCPROXY_PORT=8080
export CCPROXY_API_KEY=my-secret-key
- Configuration file - your main configuration:
{ "port": 3456, "apikey": "configured-key" }
- Built-in defaults - used when nothing else is specified
Configuration File Locations
CCProxy searches for config.json in these locations (in order):
- Current directory:
./config.json
- User home directory:
~/.ccproxy/config.json
- System directory:
/etc/ccproxy/config.json
The first found file is used. You can also specify a custom path:
ccproxy start --config /path/to/my/config.json
Understanding the Two-Level API Key System
CCProxy uses two types of API keys:
1. CCProxy Authentication Key (Optional)
| What it does | Secures access to your CCProxy server |
| Who uses it | Claude Code when connecting to CCProxy |
| Configuration | "apikey": "your-secret-key" |
| When to use | When CCProxy is accessible from non-localhost addresses |
| Default behavior | If not set, only localhost connections are allowed |
Example scenario: You're running CCProxy on a server and Claude Code connects from your laptop:
{
"host": "0.0.0.0", // Accessible from network
"apikey": "my-secret-ccproxy-key" // Required for security
}
2. AI Provider API Keys (Required)
| What they do | Authenticate CCProxy to AI services (OpenAI, Anthropic, etc.) |
| Who uses them | CCProxy when forwarding your requests to AI providers |
| Configuration | Each provider has its own "api_key" |
| When to use | Always - you need valid keys from each AI provider you want to use |
| How to get them | Sign up at each provider's website (OpenAI, Anthropic, Google AI, etc.) |
Visual Flow:
Claude Code → (uses CCProxy API key) → CCProxy → (uses Provider API key) → OpenAI/Anthropic/etc.
Complete Configuration Example
Note: JSON does not support comments. The // annotations in the configuration examples throughout this README are for illustration only; remove them from a real config.json.
{
"host": "127.0.0.1",
"port": 3456,
"log": true,
"log_file": "~/.ccproxy/ccproxy.log",
"apikey": "my-secret-ccproxy-key", // Optional: for CCProxy authentication
"providers": [
{
"name": "anthropic",
"api_base_url": "https://api.anthropic.com",
"api_key": "sk-ant-...", // Required: your Anthropic API key
"models": ["claude-opus-4-20250720", "claude-sonnet-4-20250720"],
"enabled": true
},
{
"name": "openai",
"api_base_url": "https://api.openai.com/v1",
"api_key": "sk-...", // Required: your OpenAI API key
"models": ["gpt-4.1", "gpt-4.1-turbo", "gpt-4.1-mini"],
"enabled": true
},
{
"name": "gemini",
"api_base_url": "https://generativelanguage.googleapis.com/v1",
"api_key": "AIza...", // Required: your Google AI API key
"models": ["gemini-2.5-flash", "gemini-2.5-pro"],
"enabled": false
},
{
"name": "deepseek",
"api_base_url": "https://api.deepseek.com",
"api_key": "sk-...", // Required: your DeepSeek API key
"models": ["deepseek-chat", "deepseek-coder"],
"enabled": true
},
{
"name": "openrouter",
"api_base_url": "https://openrouter.ai/api/v1",
"api_key": "sk-or-...", // Required: your OpenRouter API key
"models": ["qwen/qwen-3-235b", "moonshotai/kimi-k2-instruct", "xai/grok-beta"],
"enabled": true
}
],
"routes": {
"default": {
"provider": "anthropic",
"model": "claude-sonnet-4-20250720"
},
"longContext": {
"provider": "anthropic",
"model": "claude-opus-4-20250720"
},
"background": {
"provider": "openai",
"model": "gpt-4.1-mini"
},
"think": { // Optional: route for thinking parameter
"provider": "anthropic",
"model": "claude-opus-4-20250720"
},
"gpt-4.1": {
"provider": "openai",
"model": "gpt-4.1-turbo"
},
"qwen3-235b": { // Top-ranked model via OpenRouter
"provider": "openrouter",
"model": "qwen/qwen-3-235b"
}
},
"performance": {
"request_timeout": "30s",
"max_request_body_size": 10485760,
"metrics_enabled": true,
"rate_limit_enabled": false,
"circuit_breaker_enabled": true
}
}
How Model Selection Works
CCProxy uses intelligent routing to select the appropriate model and provider:
1. Explicit Provider Selection (Highest Priority)
// Force a specific provider/model combination
{"model": "openai,gpt-4.1-turbo"}
2. Direct Model Routes
// If "gpt-4.1" is defined in routes, use that configuration
{"model": "gpt-4.1"}
3. Automatic Token-Based Routing
// Requests with >60K tokens automatically use longContext route
{"model": "claude-sonnet-4", "messages": [/* very long context */]}
4. Background Task Routing
// Models starting with "claude-3-5-haiku" use background route
{"model": "claude-3-5-haiku-20241022"}
5. Thinking Mode Routing
// Requests with thinking parameter use think route (if configured)
{"model": "claude-sonnet-4", "thinking": true}
6. Default Route
// Any unmatched model uses the default route
{"model": "some-model"}
Environment Variables
| Variable | Default | Description |
| --- | --- | --- |
| CCPROXY_PORT | 3456 | Port for CCProxy to listen on |
| CCPROXY_HOST | 127.0.0.1 | Host/IP for CCProxy to bind to |
| CCPROXY_API_KEY | | API key for CCProxy authentication |
| CCPROXY_CONFIG | | Path to configuration file |
| CCPROXY_LOG | false | Enable file logging |
| CCPROXY_LOG_FILE | ~/.ccproxy/ccproxy.log | Log file path |
| CCPROXY_PROVIDERS_0_API_KEY | | Override first provider's API key |
| CCPROXY_PROVIDERS_1_API_KEY | | Override second provider's API key |
| LOG | false | Alternative way to enable logging |
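For example, provider keys can be injected through environment variables instead of being stored in config.json. The values below are placeholders; substitute your real key and port.

```shell
# Placeholder values for illustration only.
export CCPROXY_PORT=8080
export CCPROXY_PROVIDERS_0_API_KEY="sk-your-first-provider-key"
# ./ccproxy start   # the exported values override config.json
echo "$CCPROXY_PORT"  # prints 8080
```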
Real-World Configuration Examples
Example 1: Simple Setup (Local Development)
{
"providers": [
{
"name": "openai",
"api_key": "sk-proj-...",
"enabled": true
}
],
"routes": {
"default": {
"provider": "openai",
"model": "gpt-4.1-turbo"
}
}
}
Example 2: Multi-Provider Setup with Smart Routing
{
"providers": [
{
"name": "anthropic",
"api_key": "sk-ant-...",
"enabled": true
},
{
"name": "openai",
"api_key": "sk-proj-...",
"enabled": true
},
{
"name": "deepseek",
"api_key": "sk-...",
"enabled": true
},
{
"name": "openrouter",
"api_base_url": "https://openrouter.ai/api/v1",
"api_key": "sk-or-...", // Required: your OpenRouter API key
"models": ["qwen/qwen-3-235b", "moonshotai/kimi-k2-instruct", "xai/grok-beta"],
"enabled": true
}
],
"routes": {
"default": {
"provider": "deepseek",
"model": "deepseek-chat"
},
"longContext": { // Auto-selected for >60K tokens
"provider": "anthropic",
"model": "claude-opus-4-20250720"
},
"gpt-4.1": { // Direct model mapping
"provider": "openai",
"model": "gpt-4.1-turbo"
},
"claude-opus-4": { // Another direct mapping
"provider": "anthropic",
"model": "claude-opus-4-20250720"
},
"qwen3-235b": { // Access top-ranked Qwen3 235B
"provider": "openrouter",
"model": "qwen/qwen-3-235b"
}
}
}
Example 3: Priority Override Demonstration
# Config file sets: port 8080
# Environment variable: CCPROXY_PORT=9090
# Command flag: ccproxy start --port 7070
# Result: CCProxy uses port 7070 (command flag wins)
Using CCProxy
Automatic Setup (Recommended)
# One command setup
./ccproxy code
What happens: Automatic configuration enabling access to all configured providers.
Manual Setup
- Start CCProxy:
./ccproxy start
- Configure Claude Code:
export ANTHROPIC_BASE_URL=http://localhost:3456
export ANTHROPIC_AUTH_TOKEN=test
- Use Claude Code:
claude "Help me with my code"
Supported AI Providers
CCProxy provides seamless integration with 5 major providers:
- Anthropic - Claude models with native support
- OpenAI - GPT-4.1, GPT-4.1-mini models
- Google Gemini - Advanced multimodal models
- DeepSeek - Cost-effective coding models
- OpenRouter - Gateway to 100+ models including:
- Qwen3 235B - Top-ranked model with 70.3 AIME score
- Kimi K2 - Ultra-fast inference from Moonshot AI
- Grok - Real-time data access from xAI
- And 100+ more models from various providers
Note: Additional providers like Groq, Mistral, XAI, and Ollama are accessible through OpenRouter, giving you the flexibility to use virtually any AI model through a single interface.
Configuration Example
Add providers to your config.json:
{
"providers": [
{
"name": "openai",
"api_base_url": "https://api.openai.com/v1",
"api_key": "your-api-key",
"models": ["gpt-4.1", "gpt-4.1-mini"],
"enabled": true
}
]
}
API Endpoints
POST /v1/messages - Anthropic-compatible messages endpoint
GET /health - Health check endpoint
GET /status - Service status information
GET /providers - List configured providers
POST /providers - Add new provider
PUT /providers/:name - Update provider
DELETE /providers/:name - Remove provider
Example Requests
Basic Request (uses default route)
curl -X POST http://localhost:3456/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key" \
-d '{
"model": "claude-sonnet-4-20250720",
"max_tokens": 1000,
"messages": [
{
"role": "user",
"content": "Hello, how are you?"
}
]
}'
Force Specific Provider
curl -X POST http://localhost:3456/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key" \
-d '{
"model": "openai,gpt-4.1-turbo",
"messages": [{"role": "user", "content": "Hello"}]
}'
Long Context (auto-routes to longContext route)
curl -X POST http://localhost:3456/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key" \
-d '{
"model": "any-model",
"messages": [{"role": "user", "content": "/* 100K+ token content */"}]
}'
Thinking Mode (auto-routes to think route if configured)
curl -X POST http://localhost:3456/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: your-api-key" \
-d '{
"model": "claude-sonnet-4",
"thinking": true,
"messages": [{"role": "user", "content": "Solve this complex problem"}]
}'
Development Guide
Building
# Build for current platform
make build
# Cross-platform builds
make build-all
# Run tests
make test
# Run tests with race detection
make test-race
Commands
ccproxy start - Start the service
ccproxy stop - Stop the service
ccproxy status - Check service status
ccproxy code - Configure Claude Code
ccproxy version - Show version
ccproxy env - Show environment variables
License
MIT License - see LICENSE file for details.
Contributing
Contributions are welcome! Please read our Contributing Guidelines first.
Acknowledgments
Built with ❤️ for the Claude Code community.
Inspired by the original Claude Code Router project.
Troubleshooting
Common Issues
- Service won't start
- Check if port 3456 is available:
lsof -i :3456
- Verify configuration file syntax
- Check logs:
tail -f ~/.ccproxy/ccproxy.log
- Authentication errors
- Verify API keys in config.json
- Check provider is enabled in configuration
- Ensure API key has correct permissions
- Connection refused
- Check service status:
./ccproxy status
- Verify firewall settings
- Ensure service is bound to correct interface
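A quick way to rule out configuration syntax problems is to run the file through a JSON parser. The sketch below creates a sample file and validates it with Python's json.tool; the path is illustrative, so point it at your real ~/.ccproxy/config.json in practice.

```shell
# Write a sample config for demonstration; use your real config path in practice.
cfg=./config.sample.json
cat > "$cfg" <<'EOF'
{ "port": 3456, "providers": [] }
EOF

# json.tool exits non-zero on a syntax error, making it a handy validity check.
python3 -m json.tool "$cfg" > /dev/null && echo "config OK"
```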
Debug Mode
Enable debug logging:
LOG=true ./ccproxy start
# Check logs at ~/.ccproxy/ccproxy.log
Support
Questions or problems? Open an issue or start a discussion on GitHub.
Summary
CCProxy provides seamless multi-provider integration for AI-powered development. Access top-ranked models like Qwen3 235B, switch easily among OpenAI, Google Gemini, Anthropic Claude, and DeepSeek, and leverage 100+ models through OpenRouter.
Key Benefits:
- ✅ Top-Ranked Models: Access Qwen3 235B (70.3 AIME score) and 100+ models via OpenRouter
- ✅ Multi-Provider Support: 5 major AI providers with full implementation
- ✅ Cost Optimization: Route to cost-effective providers
- ✅ High Performance: Minimal latency with Go
- ✅ Easy Setup: One-command configuration
- ✅ Enterprise Ready: Built-in security and monitoring
Start using CCProxy today to unlock multi-provider AI development!
If you find this project useful, please consider giving it a ⭐ on GitHub!