Code Patterns
Battle-tested patterns, anti-patterns, and best practices for production AI applications
Overview
This guide provides reusable code patterns for building production-ready AI applications with NeurosLink AI. Each pattern includes implementation code, use cases, and common pitfalls.
Table of Contents
Error Handling Patterns
Pattern 1: Comprehensive Error Handling
Pattern 2: Graceful Degradation
Retry & Backoff Strategies
Pattern 1: Exponential Backoff
Retry wrapper: Automatically retry failed AI requests with exponential backoff to handle transient failures.
Retry loop: Attempt up to maxRetries + 1 times (the initial attempt plus retries). Break early on success.
Success path: Return immediately on successful generation; no retries needed.
Check if retryable: Only retry transient errors (rate limits, server errors). Don't retry auth errors or invalid requests.
Exponential backoff: Wait 1s, 2s, 4s, 8s... between retries (capped at 10s) to give the service time to recover.
Wait before retry: Sleep to implement backoff delay. Prevents hammering a failing service.
All retries exhausted: If all attempts fail, throw the last error to the caller.
Retryable errors: Rate limits (429), server errors (5xx), and network errors are temporary and worth retrying.
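The steps above can be sketched as a generic retry wrapper. This is a minimal sketch, not the NeurosLink API itself: the wrapped function and the `status` error field are assumptions you would adapt to your client's actual error shape.

```typescript
// Retryable errors: rate limits (429) and server errors (5xx) are transient.
// Auth errors (401/403) and invalid requests (4xx) are not worth retrying.
function isRetryable(err: unknown): boolean {
  const status = (err as { status?: number }).status;
  return status === 429 || (status !== undefined && status >= 500);
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(fn: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  // maxRetries + 1 total attempts: the initial call plus the retries.
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn(); // success path: return immediately, no retries needed
    } catch (err) {
      lastError = err;
      if (!isRetryable(err) || attempt === maxRetries) break;
      // Exponential backoff: 1s, 2s, 4s, 8s... capped at 10s.
      const delay = Math.min(1000 * 2 ** attempt, 10_000);
      await sleep(delay); // wait before retrying; avoids hammering a failing service
    }
  }
  throw lastError; // all retries exhausted: surface the last error to the caller
}
```

Usage is `withRetry(() => ai.generate(prompt))`, or any other async call you want protected.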
Pattern 2: Exponential Backoff with Jitter
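The jitter variant randomizes each delay so that many clients retrying at once don't synchronize into retry storms. A sketch of the standard "full jitter" formula (the base and cap values here mirror the previous pattern, not a NeurosLink default):

```typescript
// Full jitter: pick a uniform random delay in [0, min(base * 2^attempt, cap)].
function backoffWithJitter(attempt: number, baseMs = 1000, capMs = 10_000): number {
  const exponential = Math.min(baseMs * 2 ** attempt, capMs);
  return Math.random() * exponential;
}
```

Swap this in for the fixed `Math.min(1000 * 2 ** attempt, 10_000)` delay in the retry wrapper.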
Streaming Patterns
Pattern 1: Server-Sent Events (SSE)
SSE content type: Set text/event-stream to enable Server-Sent Events streaming to the browser.
Disable caching: Prevent proxies and browsers from caching streaming responses.
Keep connection alive: Maintain long-lived HTTP connection for streaming (won't close after first response).
Stream from AI: Use ai.stream(), which returns an async iterator of content chunks as they arrive from the provider.
SSE message format: Each message starts with data: followed by JSON and ends with two newlines (\n\n).
Completion signal: Send [DONE] to notify the client that streaming is complete and the connection can be closed.
Error handling: Stream errors back to the client in the same SSE format so the UI can display them.
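The annotations above fit together as a single handler. This sketch abstracts the HTTP response behind a minimal interface so it works with Express-style `res` objects; the chunk iterator stands in for whatever `ai.stream()` returns in your setup.

```typescript
// Minimal shape of what we need from the HTTP response object.
interface SSEResponse {
  setHeader(name: string, value: string): void;
  write(chunk: string): void;
  end(): void;
}

// Stream AI content chunks to the browser as Server-Sent Events.
async function streamSSE(
  chunks: AsyncIterable<string>, // e.g. the iterator from ai.stream(...) (assumed API)
  res: SSEResponse,
): Promise<void> {
  res.setHeader("Content-Type", "text/event-stream"); // enable SSE
  res.setHeader("Cache-Control", "no-cache");         // disable proxy/browser caching
  res.setHeader("Connection", "keep-alive");          // long-lived connection
  try {
    for await (const chunk of chunks) {
      // SSE message format: "data: <json>" followed by two newlines.
      res.write(`data: ${JSON.stringify({ content: chunk })}\n\n`);
    }
    res.write("data: [DONE]\n\n"); // completion signal: client may close the connection
  } catch (err) {
    // Stream errors back in the same SSE format so the UI can display them.
    res.write(`data: ${JSON.stringify({ error: String(err) })}\n\n`);
  } finally {
    res.end();
  }
}
```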
Pattern 2: React Streaming UI
Rate Limiting Patterns
Pattern 1: Token Bucket
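A token bucket allows short bursts up to its capacity while enforcing a steady long-run rate. This is a sketch of the standard algorithm (not NeurosLink-specific); the clock is passed in explicitly so the refill logic is testable.

```typescript
// Token bucket: bursts up to `capacity`, refilling at `refillPerSec` tokens/second.
class TokenBucket {
  private tokens: number;
  private lastRefillMs: number;

  constructor(
    private readonly capacity: number,
    private readonly refillPerSec: number,
    nowMs: number = Date.now(),
  ) {
    this.tokens = capacity; // start full: allows an initial burst
    this.lastRefillMs = nowMs;
  }

  // Returns true if a request may proceed; false means "rate limited".
  tryRemove(nowMs: number = Date.now()): boolean {
    // Refill proportionally to the time elapsed since the last check.
    const elapsedSec = (nowMs - this.lastRefillMs) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSec * this.refillPerSec);
    this.lastRefillMs = nowMs;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

Call `tryRemove()` before each AI request and reject (or queue) when it returns false.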
Pattern 2: Sliding Window
Caching Patterns
Pattern 1: In-Memory Cache with TTL
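A minimal sketch of the TTL cache idea: entries expire after a fixed time-to-live and are lazily evicted on read. The explicit `nowMs` parameter is an illustration choice for testability, not part of any NeurosLink API.

```typescript
// In-memory cache (e.g. keyed by prompt) with a per-entry time-to-live.
class TTLCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(private readonly ttlMs: number) {}

  get(key: string, nowMs: number = Date.now()): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (nowMs >= entry.expiresAt) {
      this.store.delete(key); // lazy eviction: expired entries are removed on read
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V, nowMs: number = Date.now()): void {
    this.store.set(key, { value, expiresAt: nowMs + this.ttlMs });
  }
}
```

Check the cache before calling the model, and store the response afterward: a cache hit skips the API call entirely.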
Pattern 2: Redis Cache
Middleware Patterns
Pattern 1: Logging Middleware
Pattern 2: Metrics Middleware
Pattern 3: Composable Middleware Pipeline
Testing Patterns
Pattern 1: Mock AI Responses
Pattern 2: Integration Testing
Performance Optimization
Pattern 1: Parallel Requests
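When requests are independent, launching them concurrently makes total latency roughly equal to the slowest request rather than the sum of all of them. A sketch using Promise.all; the `generate` callback is a stand-in for your AI client call.

```typescript
// Run independent AI requests concurrently instead of awaiting them one by one.
async function generateAll(
  prompts: string[],
  generate: (prompt: string) => Promise<string>, // stand-in for the AI client call
): Promise<string[]> {
  // Promise.all starts every request immediately and preserves result order;
  // note it rejects as soon as any request fails (use Promise.allSettled to
  // collect partial results instead).
  return Promise.all(prompts.map((prompt) => generate(prompt)));
}
```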
Pattern 2: Batching with Queue
Security Patterns
Pattern 1: Input Sanitization
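Basic input hygiene before sending user text to a model: cap the length and strip control characters. This is a sketch of the hygiene step only; real prompt-injection defenses need more than string sanitization.

```typescript
// Strip control characters (keeping \n and \t) and cap the input length
// before it reaches the model.
function sanitizeInput(input: string, maxLength = 4000): string {
  return input
    .replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F\u007F]/g, "")
    .trim()
    .slice(0, maxLength);
}
```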
Pattern 2: API Key Rotation
Anti-Patterns to Avoid
❌ Anti-Pattern 1: No Error Handling
Why it's bad: Without error handling, any API failure crashes the application
✅ Better approach:
❌ Anti-Pattern 2: Hardcoded API Keys
Why it's bad: Security risk, keys in version control
✅ Better approach:
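The better approach is to load keys from the environment at startup and fail loudly if one is missing. A sketch; the variable name NEUROSLINK_API_KEY is a hypothetical example, not a documented setting.

```typescript
// Load the API key from the environment instead of hardcoding it in source.
// NEUROSLINK_API_KEY is an assumed variable name; adjust to your deployment.
function getApiKey(env: Record<string, string | undefined> = process.env): string {
  const key = env.NEUROSLINK_API_KEY;
  if (!key) {
    // Fail fast at startup rather than on the first request.
    throw new Error("NEUROSLINK_API_KEY is not set");
  }
  return key;
}
```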
❌ Anti-Pattern 3: No Rate Limiting
Why it's bad: Uncontrolled request bursts hit provider rate limits and waste money
✅ Better approach:
❌ Anti-Pattern 4: No Caching
Why it's bad: Wastes money on duplicate requests
✅ Better approach:
❌ Anti-Pattern 5: Blocking Sequential Requests
Why it's bad: Total latency becomes the sum of all requests instead of the slowest one
✅ Better approach:
❌ Anti-Pattern 6: No Timeouts
Why it's bad: Can hang indefinitely
✅ Better approach:
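The better approach is to bound every request with a timeout so a stalled call fails fast instead of hanging. A sketch using Promise.race; if your client supports AbortController, prefer passing a signal so the underlying request is actually cancelled.

```typescript
// Fail fast instead of hanging: race the request against a timeout.
async function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer); // don't leak the timer on success
  }
}
```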
❌ Anti-Pattern 7: Ignoring Token Limits
Why it's bad: Requests that exceed the model's context window fail outright
✅ Better approach:
Related Documentation
Use Cases - Real-world examples
Enterprise Features - Production patterns
Provider Setup - Provider configuration
Summary
You've learned production-ready patterns for:
✅ Error handling and graceful degradation
✅ Retry strategies with exponential backoff
✅ Streaming responses (SSE, React)
✅ Rate limiting (Token Bucket, Sliding Window)
✅ Caching (In-memory, Redis)
✅ Middleware pipelines
✅ Testing strategies
✅ Performance optimization
✅ Security best practices
✅ Anti-patterns to avoid
These patterns form the foundation of robust, production-ready AI applications.