OpenAI Compatible
Connect NeurosLink AI to any OpenAI-compatible API and access 100+ models through OpenRouter, vLLM, LocalAI, and other compatible providers.
Overview
The OpenAI Compatible provider enables NeurosLink AI to work with any service that implements the OpenAI API specification. This includes third-party aggregators like OpenRouter, self-hosted solutions like vLLM, and custom OpenAI-compatible endpoints.
Key Benefits
🌐 Universal Compatibility: Works with any OpenAI-compatible endpoint
🔄 Provider Aggregation: Access multiple providers through one endpoint (OpenRouter)
🏠 Self-Hosted: Run your own models with vLLM, LocalAI
💰 Cost Optimization: Compare pricing across providers
🔧 Custom Endpoints: Integrate proprietary AI services
📊 Auto-Discovery: Automatic model detection via the /v1/models endpoint
Supported Services
OpenRouter: AI provider aggregator (100+ models). Best for multi-provider access.
vLLM: High-performance inference server. Best for self-hosted models.
LocalAI: Local OpenAI alternative. Best for privacy and offline usage.
Text Generation WebUI: Community inference server. Best for local LLMs.
Custom APIs: Your own OpenAI-compatible service. Best for proprietary models.
Quick Start
Option 1: OpenRouter (Recommended for Beginners)
OpenRouter provides access to 100+ models from multiple providers through a single API.
1. Get OpenRouter API Key
Visit OpenRouter.ai
Sign up for a free account
Go to Keys
Create a new key
Add credits ($5 minimum)
2. Configure NeurosLink AI
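NeurosLink AI's exact configuration keys are not reproduced here, so the sketch below only shows the shape of an OpenRouter setup using the generic openai Node client: point the client at OpenRouter's base URL and pass your key. The OPENROUTER_API_KEY variable name is an assumption, and the HTTP-Referer / X-Title values are optional attribution headers OpenRouter recommends.

```typescript
import OpenAI from "openai";

// Minimal OpenRouter setup sketch. OPENROUTER_API_KEY is an assumed
// environment variable name; substitute whatever your deployment uses.
const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  defaultHeaders: {
    // Optional attribution headers recommended by OpenRouter.
    "HTTP-Referer": "https://your-app.example.com",
    "X-Title": "NeurosLink AI",
  },
});
```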
3. Test Setup
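To confirm the key and endpoint work, send one small completion and print the reply. The model ID below is only an example of OpenRouter's provider-prefixed format; pick any ID from the live model listing.

```typescript
import OpenAI from "openai";

const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY, // assumed variable name
});

async function main() {
  const response = await openrouter.chat.completions.create({
    model: "openai/gpt-4o-mini", // example ID; check OpenRouter's model list
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  });
  console.log(response.choices[0]?.message?.content);
}

main().catch(console.error);
```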
Option 2: vLLM (Self-Hosted)
vLLM is a high-performance inference server for running models locally.
1. Install vLLM
2. Configure NeurosLink AI
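The sketch below assumes vLLM's OpenAI-compatible server is listening on its default port 8000. If the server was started without an API key, the key value is ignored, but the client still needs a non-empty placeholder string.

```typescript
import OpenAI from "openai";

// Local vLLM endpoint (default port 8000, OpenAI-compatible /v1 routes).
const vllm = new OpenAI({
  baseURL: "http://localhost:8000/v1",
  apiKey: "EMPTY", // placeholder; only matters if vLLM was started with --api-key
});
```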
3. Test Setup
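A quick way to verify the connection is to list the models the server reports; the ID returned should match the model you launched vLLM with.

```typescript
import OpenAI from "openai";

const vllm = new OpenAI({ baseURL: "http://localhost:8000/v1", apiKey: "EMPTY" });

async function main() {
  const models = await vllm.models.list();
  // Prints the model ID(s) the vLLM server is currently serving.
  for (const model of models.data) {
    console.log(model.id);
  }
}

main().catch(console.error);
```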
Option 3: LocalAI (Privacy-Focused)
LocalAI runs completely offline for maximum privacy.
1. Install LocalAI
2. Configure NeurosLink AI
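Assuming a default LocalAI install listening on port 8080, the client setup mirrors the vLLM example; LocalAI does not require an API key out of the box, so a placeholder string is enough.

```typescript
import OpenAI from "openai";

// Local, offline endpoint: LocalAI's OpenAI-compatible API (default port 8080).
const localai = new OpenAI({
  baseURL: "http://localhost:8080/v1",
  apiKey: "not-needed", // placeholder; LocalAI runs without a key by default
});
```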
Model Auto-Discovery
NeurosLink AI automatically discovers available models through the /v1/models endpoint.
Discover Available Models
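Because /v1/models is part of the OpenAI API specification, you can query it on any compatible endpoint with a plain HTTP request. The sketch below lists the model IDs a server advertises; the base URL and key are placeholders.

```typescript
// Query the standard /v1/models endpoint of any OpenAI-compatible server.
async function discoverModels(baseUrl: string, apiKey?: string): Promise<string[]> {
  const response = await fetch(`${baseUrl}/models`, {
    headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
  });
  if (!response.ok) {
    throw new Error(`Model discovery failed: ${response.status} ${response.statusText}`);
  }
  const body = (await response.json()) as { data: { id: string }[] };
  return body.data.map((model) => model.id);
}

// Example: list what an OpenRouter account can see.
discoverModels("https://openrouter.ai/api/v1", process.env.OPENROUTER_API_KEY)
  .then((ids) => console.log(`${ids.length} models available`, ids.slice(0, 10)))
  .catch(console.error);
```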
SDK Auto-Discovery
OpenRouter Integration
OpenRouter aggregates 100+ models from multiple providers.
Available Models on OpenRouter
Model Selection by Provider
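OpenRouter model IDs are prefixed with the upstream provider (provider/model), so switching providers is just a matter of changing the model string. The IDs in this sketch are illustrative; always confirm them against the live listing.

```typescript
import OpenAI from "openai";

const openrouter = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY, // assumed variable name
});

// Example provider-prefixed IDs; verify against OpenRouter's current catalog.
const candidates = [
  "openai/gpt-4o-mini",
  "anthropic/claude-3.5-sonnet",
  "meta-llama/llama-3.1-8b-instruct",
];

async function compareProviders(prompt: string) {
  for (const model of candidates) {
    const res = await openrouter.chat.completions.create({
      model,
      messages: [{ role: "user", content: prompt }],
    });
    console.log(`--- ${model} ---\n${res.choices[0]?.message?.content}\n`);
  }
}

compareProviders("Summarize what an OpenAI-compatible API is in one sentence.").catch(console.error);
```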
OpenRouter Features
vLLM Integration
vLLM provides high-performance inference for self-hosted models.
Starting vLLM Server
NeurosLink AI Configuration for vLLM
Multiple vLLM Instances
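When several vLLM servers each host a different model, one simple pattern is to keep a client per instance and route requests by a logical name. The ports and model IDs below are placeholders for your own deployment.

```typescript
import OpenAI from "openai";

// One client per vLLM instance; ports and model IDs are placeholders.
const instances = {
  chat: {
    client: new OpenAI({ baseURL: "http://localhost:8000/v1", apiKey: "EMPTY" }),
    model: "meta-llama/Llama-3.1-8B-Instruct",
  },
  code: {
    client: new OpenAI({ baseURL: "http://localhost:8001/v1", apiKey: "EMPTY" }),
    model: "Qwen/Qwen2.5-Coder-7B-Instruct",
  },
};

async function ask(target: keyof typeof instances, prompt: string) {
  const { client, model } = instances[target];
  const res = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
}

ask("code", "Write a TypeScript function that reverses a string.").then(console.log);
```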
SDK Integration
Basic Usage
With Model Selection
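This guide does not reproduce NeurosLink AI's exact SDK surface, so the sketches in this section use the generic openai client against whichever OpenAI-compatible base URL you configured; the OPENAI_COMPATIBLE_BASE_URL and OPENAI_COMPATIBLE_API_KEY variable names and the generate() helper are assumptions. Basic usage and model selection differ only in the model string you pass.

```typescript
import OpenAI from "openai";

// Generic OpenAI-compatible client; env variable names are assumptions.
const client = new OpenAI({
  baseURL: process.env.OPENAI_COMPATIBLE_BASE_URL, // e.g. https://openrouter.ai/api/v1
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY ?? "EMPTY",
});

// Hypothetical helper: basic usage with an explicit, swappable model ID.
async function generate(prompt: string, model = "openai/gpt-4o-mini") {
  const res = await client.chat.completions.create({
    model, // model selection: swap this ID for any model the endpoint serves
    messages: [{ role: "user", content: prompt }],
  });
  return res.choices[0]?.message?.content ?? "";
}

generate("Explain the /v1/models endpoint in one sentence.").then(console.log);
```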
Streaming
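Streaming works the same way against any compatible endpoint: pass stream: true and iterate the chunks as they arrive. This sketch reuses the same generic client setup as the basic-usage example.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.OPENAI_COMPATIBLE_BASE_URL, // assumed variable name
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY ?? "EMPTY",
});

async function streamCompletion(prompt: string, model: string) {
  const stream = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
    stream: true,
  });
  // Print tokens as they arrive instead of waiting for the full response.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
  process.stdout.write("\n");
}

streamCompletion("Write a haiku about local inference.", "openai/gpt-4o-mini").catch(console.error);
```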
Custom Headers
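Some gateways expect extra headers (attribution, tenant IDs, request tracing). With the openai client you can attach them once via defaultHeaders; the header names and values below are only examples.

```typescript
import OpenAI from "openai";

// Headers attached to every request; names shown are examples only.
const client = new OpenAI({
  baseURL: process.env.OPENAI_COMPATIBLE_BASE_URL,
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY ?? "EMPTY",
  defaultHeaders: {
    "HTTP-Referer": "https://your-app.example.com", // OpenRouter attribution
    "X-Title": "NeurosLink AI",
  },
});
```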
Error Handling
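Connection failures and HTTP errors surface differently: the openai client throws APIError for non-2xx responses, while an unreachable endpoint raises an ordinary network error. A minimal pattern:

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.OPENAI_COMPATIBLE_BASE_URL,
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY ?? "EMPTY",
});

async function safeGenerate(prompt: string, model: string) {
  try {
    const res = await client.chat.completions.create({
      model,
      messages: [{ role: "user", content: prompt }],
    });
    return res.choices[0]?.message?.content ?? "";
  } catch (err) {
    if (err instanceof OpenAI.APIError) {
      // HTTP-level failure: bad key (401), unknown model (404), rate limit (429), ...
      console.error(`API error ${err.status}: ${err.message}`);
    } else {
      // Network-level failure: endpoint down, DNS, timeout, ...
      console.error("Request failed:", err);
    }
    return null;
  }
}
```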
CLI Usage
Basic Commands
OpenRouter-Specific Commands
Configuration Options
Environment Variables
Programmatic Configuration
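One common programmatic pattern is selecting the endpoint by environment, for example a local vLLM server during development and OpenRouter in production. Everything in this sketch (variable names, URLs, defaults) is illustrative.

```typescript
import OpenAI from "openai";

// Pick an OpenAI-compatible endpoint based on the runtime environment.
// All names and URLs are illustrative; adapt them to your deployment.
function createClient(): OpenAI {
  if (process.env.NODE_ENV === "production") {
    return new OpenAI({
      baseURL: "https://openrouter.ai/api/v1",
      apiKey: process.env.OPENROUTER_API_KEY,
    });
  }
  // Development default: a local vLLM instance.
  return new OpenAI({
    baseURL: process.env.LOCAL_LLM_BASE_URL ?? "http://localhost:8000/v1",
    apiKey: "EMPTY",
  });
}

export const aiClient = createClient();
```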
Use Cases
1. Multi-Provider Access via OpenRouter
2. Self-Hosted Private Models
3. Cost Optimization
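OpenRouter's model listing also exposes per-token pricing, which makes simple price comparisons possible. The sketch below assumes the listing returns pricing.prompt / pricing.completion as per-token USD strings; treat those field names as an assumption and verify against the live response.

```typescript
// Rank OpenRouter models by advertised prompt price.
// Assumes the listing exposes pricing.prompt / pricing.completion as
// per-token USD strings; verify against the live /api/v1/models response.
interface OpenRouterModel {
  id: string;
  pricing?: { prompt?: string; completion?: string };
}

async function cheapestModels(limit = 10) {
  const res = await fetch("https://openrouter.ai/api/v1/models");
  const body = (await res.json()) as { data: OpenRouterModel[] };
  return body.data
    .filter((m) => m.pricing?.prompt !== undefined)
    .sort((a, b) => Number(a.pricing!.prompt) - Number(b.pricing!.prompt))
    .slice(0, limit)
    .map((m) => ({ id: m.id, promptUsdPerToken: m.pricing!.prompt }));
}

cheapestModels().then(console.table).catch(console.error);
```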
Troubleshooting
Common Issues
1. "Connection refused"
Problem: Endpoint is not accessible.
Solution: Verify the base URL and port in your configuration, confirm the server (vLLM, LocalAI, etc.) is actually running, and check that a firewall or proxy is not blocking the connection.
2. "Model not found"
Problem: Model ID is incorrect or not available.
Solution: List the endpoint's available models via /v1/models and copy the model ID exactly as reported (OpenRouter IDs are provider-prefixed, e.g. provider/model).
3. "Invalid API key"
Problem: API key format is incorrect (OpenRouter).
Solution: Re-copy the full key from the OpenRouter dashboard, make sure no whitespace was added, and confirm your account has credits.
Best Practices
1. Model Discovery
2. Endpoint Health Checks
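A lightweight health check is to hit /v1/models with a short timeout before routing traffic to an endpoint; the function name and timeout value here are just an example.

```typescript
// Returns true if the endpoint answers /v1/models within the timeout.
async function isEndpointHealthy(baseUrl: string, apiKey?: string, timeoutMs = 3000): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(`${baseUrl}/models`, {
      headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
      signal: controller.signal,
    });
    return res.ok;
  } catch {
    return false; // network error, timeout, DNS failure, ...
  } finally {
    clearTimeout(timer);
  }
}

isEndpointHealthy("http://localhost:8000/v1").then((ok) =>
  console.log(ok ? "vLLM endpoint is up" : "vLLM endpoint is unreachable"),
);
```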
3. Cost Tracking
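Every OpenAI-compatible chat completion response includes a usage object with token counts, which is enough for basic cost tracking; multiply the totals by your provider's per-token rates to estimate spend.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: process.env.OPENAI_COMPATIBLE_BASE_URL, // assumed variable name
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY ?? "EMPTY",
});

// Simple in-process token counter; persist these numbers in real deployments.
let totalPromptTokens = 0;
let totalCompletionTokens = 0;

async function trackedGenerate(prompt: string, model: string) {
  const res = await client.chat.completions.create({
    model,
    messages: [{ role: "user", content: prompt }],
  });
  totalPromptTokens += res.usage?.prompt_tokens ?? 0;
  totalCompletionTokens += res.usage?.completion_tokens ?? 0;
  console.log(`running total: ${totalPromptTokens} prompt / ${totalCompletionTokens} completion tokens`);
  return res.choices[0]?.message?.content ?? "";
}
```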
Related Documentation
Provider Setup Guide - General provider configuration
Cost Optimization - Reduce AI costs
Enterprise Multi-Region - Self-hosted and vLLM deployment
Additional Resources
OpenRouter - Multi-provider aggregator
vLLM Documentation - Self-hosted inference
LocalAI - Local OpenAI alternative
OpenAI API Spec - API standard
Need Help? Join our GitHub Discussions or open an issue.