OpenAI Compatible

Access 100+ models through OpenRouter, vLLM, LocalAI and other OpenAI-compatible providers


Overview

The OpenAI Compatible provider enables NeurosLink AI to work with any service that implements the OpenAI API specification. This includes third-party aggregators like OpenRouter, self-hosted solutions like vLLM, and custom OpenAI-compatible endpoints.
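
To make "OpenAI-compatible" concrete, the sketch below shows the request shape all of these services share. It is only a sketch: the base URL, the environment variable name, and the model ID are placeholders, not NeurosLink AI configuration.

```typescript
// Every OpenAI-compatible service accepts the same request shape at
// POST {baseURL}/chat/completions. All values below are placeholders.
const BASE_URL = "https://openrouter.ai/api/v1"; // or http://localhost:8000/v1 for vLLM, etc.
const API_KEY = process.env.OPENAI_COMPATIBLE_API_KEY ?? "";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o-mini", // model IDs depend on the provider
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```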

Key Benefits

  • 🌐 Universal Compatibility: Works with any OpenAI-compatible endpoint

  • 🔄 Provider Aggregation: Access multiple providers through one endpoint (OpenRouter)

  • 🏠 Self-Hosted: Run your own models with vLLM, LocalAI

  • 💰 Cost Optimization: Compare pricing across providers

  • 🔧 Custom Endpoints: Integrate proprietary AI services

  • 📊 Auto-Discovery: Automatic model detection via /v1/models endpoint

Supported Services

| Service | Description | Best For |
|---------|-------------|----------|
| OpenRouter | AI provider aggregator (100+ models) | Multi-provider access |
| vLLM | High-performance inference server | Self-hosted models |
| LocalAI | Local OpenAI alternative | Privacy, offline usage |
| Text Generation WebUI | Community inference server | Local LLMs |
| Custom APIs | Your own OpenAI-compatible service | Proprietary models |


Quick Start

Option 1: OpenRouter

OpenRouter provides access to 100+ models from multiple providers through a single API.

1. Get OpenRouter API Key

  1. Sign up for a free account at openrouter.ai

  2. Create a new API key

  3. Add credits ($5 minimum)

3. Test Setup
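
One way to confirm the key works before wiring it into NeurosLink AI is to call OpenRouter's OpenAI-compatible endpoint directly. A rough sketch, assuming the key is exported as OPENROUTER_API_KEY and using an illustrative model ID:

```typescript
// Smoke-test the key directly against OpenRouter's OpenAI-compatible API.
const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "openai/gpt-4o-mini", // illustrative ID; check the model list for current ones
    messages: [{ role: "user", content: "Say hello" }],
  }),
});
console.log(res.status, (await res.json()).choices?.[0]?.message?.content);
```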

Option 2: vLLM (Self-Hosted)

vLLM is a high-performance inference server for running models locally.

1. Install vLLM

vLLM is typically installed with pip (pip install vllm) and is designed to run on a GPU; see the vLLM documentation for hardware requirements.

3. Test Setup
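
A quick sanity check, assuming vLLM's default address of http://localhost:8000, is to ask the server which model it is serving:

```typescript
// Ask the local vLLM server which model(s) it is serving.
// Assumes vLLM's default address, http://localhost:8000.
const res = await fetch("http://localhost:8000/v1/models");
const body = await res.json();
console.log(body.data.map((m: { id: string }) => m.id));
```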

Option 3: LocalAI (Privacy-Focused)

LocalAI runs completely offline for maximum privacy.

1. Install LocalAI

LocalAI is typically run from its Docker image or prebuilt binaries; once running, it exposes OpenAI-style endpoints locally (port 8080 by default).


Model Auto-Discovery

NeurosLink AI automatically discovers available models through the /v1/models endpoint.

Discover Available Models
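
A sketch of what discovery looks like at the HTTP level; the environment variable names here are illustrative placeholders:

```typescript
// List the models an OpenAI-compatible endpoint advertises via GET /v1/models.
const baseUrl = process.env.OPENAI_COMPATIBLE_BASE_URL ?? "http://localhost:8000/v1";

const res = await fetch(`${baseUrl}/models`, {
  headers: { Authorization: `Bearer ${process.env.OPENAI_COMPATIBLE_API_KEY ?? ""}` },
});
const models: { id: string }[] = (await res.json()).data;
console.log(models.map((m) => m.id));
```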

SDK Auto-Discovery
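
NeurosLink AI's own discovery call isn't reproduced in this sketch; as a stand-in, the standard openai Node client can be pointed at any compatible base URL and asked for the same list:

```typescript
import OpenAI from "openai";

// Placeholders: point baseURL/apiKey at whichever compatible service you use.
const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY ?? "",
});

const page = await client.models.list();
for (const model of page.data) {
  console.log(model.id); // IDs listed here can be passed straight to chat completions
}
```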


OpenRouter Integration

OpenRouter aggregates 100+ models from multiple providers.

Available Models on OpenRouter

Model Selection by Provider
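
One practical pattern, sketched below, is to pull OpenRouter's model list and group the IDs by their provider prefix (OpenRouter IDs take the form provider/model):

```typescript
// OpenRouter model IDs are prefixed with the upstream provider
// ("openai/...", "anthropic/...", "meta-llama/..."), so grouping by that
// prefix gives a per-provider view of what is available.
const res = await fetch("https://openrouter.ai/api/v1/models");
const models: { id: string }[] = (await res.json()).data;

const byProvider = new Map<string, string[]>();
for (const { id } of models) {
  const provider = id.split("/")[0];
  byProvider.set(provider, [...(byProvider.get(provider) ?? []), id]);
}
console.log(byProvider.get("anthropic")); // all Anthropic-hosted IDs, for example
```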

OpenRouter Features


vLLM Integration

vLLM provides high-performance inference for self-hosted models.

Starting vLLM Server

vLLM's OpenAI-compatible server is typically launched with vllm serve <model-id> (or python -m vllm.entrypoints.openai.api_server --model <model-id>) and listens on port 8000 by default.

Multiple vLLM Instances

To serve several models at once, run one vLLM instance per model on separate ports (for example --port 8000 and --port 8001) and configure each endpoint separately.


SDK Integration

Basic Usage
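
The exact NeurosLink AI SDK calls are not shown in this sketch; the standard openai Node client against a compatible baseURL illustrates the underlying pattern, with placeholder environment variables:

```typescript
import OpenAI from "openai";

// Placeholder environment variables: substitute your provider's base URL and key.
const client = new OpenAI({
  baseURL: process.env.OPENAI_COMPATIBLE_BASE_URL, // e.g. https://openrouter.ai/api/v1
  apiKey: process.env.OPENAI_COMPATIBLE_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini", // example OpenRouter-style ID; use whatever your endpoint lists
  messages: [{ role: "user", content: "Explain OpenAI compatibility in one sentence." }],
});
console.log(completion.choices[0].message.content);
```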

With Model Selection
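
Continuing with the client above, the model is simply a per-request parameter, so one configured endpoint can serve any ID it advertises (the ID here is only an example):

```typescript
// Reusing the client from the Basic Usage sketch: the model is a per-request
// parameter, so any ID the endpoint lists under /v1/models can be used here.
const reply = await client.chat.completions.create({
  model: "anthropic/claude-3.5-sonnet", // illustrative ID; availability varies by endpoint
  messages: [{ role: "user", content: "Hello" }],
});
console.log(reply.choices[0].message.content);
```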

Streaming
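
Streaming works the same way as against OpenAI itself when the endpoint supports it; continuing with the client above:

```typescript
// Reusing the client from the Basic Usage sketch; stream: true switches the
// endpoint to server-sent events and the client yields incremental chunks.
const stream = await client.chat.completions.create({
  model: "openai/gpt-4o-mini", // example ID
  messages: [{ role: "user", content: "Write a haiku about local inference." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
```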

Custom Headers
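
Custom headers can be attached to every request; OpenRouter, for example, documents optional HTTP-Referer and X-Title attribution headers. A sketch using the openai client's defaultHeaders option (the values are placeholders):

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY ?? "",
  // defaultHeaders are sent with every request. The two below are the
  // optional attribution headers OpenRouter documents; values are placeholders.
  defaultHeaders: {
    "HTTP-Referer": "https://example.com",
    "X-Title": "My NeurosLink App",
  },
});
```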

Error Handling
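
A sketch of defensive handling around the same client; the status codes mentioned are the usual OpenAI-compatible meanings:

```typescript
import OpenAI from "openai";

// Reusing the client from the Basic Usage sketch.
try {
  const completion = await client.chat.completions.create({
    model: "openai/gpt-4o-mini", // example ID
    messages: [{ role: "user", content: "Hello" }],
  });
  console.log(completion.choices[0].message.content);
} catch (err) {
  if (err instanceof OpenAI.APIError) {
    // Typical meanings: 401 bad key, 404 unknown model, 429 rate limited.
    console.error(`API error ${err.status}: ${err.message}`);
  } else {
    // Network-level failure, e.g. a local server that is not running.
    console.error("Request failed:", err);
  }
}
```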


CLI Usage

Basic Commands

OpenRouter-Specific Commands


Configuration Options

Environment Variables

Programmatic Configuration


Use Cases

1. Multi-Provider Access via OpenRouter

2. Self-Hosted Private Models

3. Cost Optimization


Troubleshooting

Common Issues

1. "Connection refused"

Problem: Endpoint is not accessible.

Solution:

2. "Model not found"

Problem: Model ID is incorrect or not available.

Solution:

3. "Invalid API key"

Problem: API key format is incorrect (OpenRouter).

Solution:


Best Practices

1. Model Discovery

2. Endpoint Health Checks
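
A minimal health check, assuming the endpoint exposes the standard /v1/models route:

```typescript
// Gate traffic on a cheap endpoint check: /v1/models responds quickly on a
// healthy OpenAI-compatible server. The base URL argument is a placeholder.
async function isHealthy(baseUrl: string, timeoutMs = 3000): Promise<boolean> {
  try {
    const res = await fetch(`${baseUrl}/models`, {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    return false; // unreachable, refused, or timed out
  }
}

console.log(await isHealthy("http://localhost:8000/v1"));
```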

3. Cost Tracking
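
Compatible endpoints return token counts in the usage field of each response, which is enough to track spend against your own per-model price table. A sketch reusing the client from the SDK examples:

```typescript
// Every compatible completion response includes a usage block with token
// counts; multiply by your own per-model rates to track spend per provider.
const completion = await client.chat.completions.create({
  model: "openai/gpt-4o-mini", // example ID
  messages: [{ role: "user", content: "Hello" }],
});

const usage = completion.usage; // { prompt_tokens, completion_tokens, total_tokens }
console.log(
  `prompt=${usage?.prompt_tokens} completion=${usage?.completion_tokens} total=${usage?.total_tokens}`,
);
```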



Additional Resources


Need Help? Join our GitHub Discussions or open an issue.
