Chat Application

Step-by-step tutorial for building a production-ready AI chat application with streaming, conversation history, and multi-provider support, using Next.js.


What You'll Build

A full-stack chat application featuring:

  • 💬 Real-time streaming responses

  • 📝 Conversation history with context awareness

  • 🔄 Multi-provider failover (OpenAI → Anthropic → Google AI)

  • 💰 Cost optimization with free tier prioritization

  • 🎨 Modern UI with React/Next.js

  • 🔐 Authentication with user sessions

  • 💾 Persistent storage with PostgreSQL

Tech Stack:

  • Next.js 14+ (App Router)

  • TypeScript

  • PostgreSQL

  • Prisma ORM

  • TailwindCSS

  • NeurosLink AI

Time to Complete: 45-60 minutes


Prerequisites

  • Node.js 18+

  • PostgreSQL installed

  • AI provider API keys (at least one):

    • OpenAI API key

    • Anthropic API key (optional)

    • Google AI Studio key (optional)


Step 1: Project Setup

Initialize Next.js Project

Options:

  • TypeScript: Yes

  • ESLint: Yes

  • Tailwind CSS: Yes

  • src/ directory: Yes

  • App Router: Yes

  • Import alias: No
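Assuming the answers above, the non-interactive equivalent is roughly the following (flag names can vary slightly between create-next-app versions):

```bash
npx create-next-app@latest ai-chat-app \
  --typescript --eslint --tailwind --src-dir --app --no-import-alias
cd ai-chat-app
```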

Install Dependencies
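Prisma provides the database layer. The package name for NeurosLink AI below is an assumption (the tutorial does not show it), so check the NeurosLink AI docs for the actual name:

```bash
npm install @prisma/client
npm install -D prisma
# Package name assumed for illustration -- verify against the NeurosLink AI docs
npm install neuroslink-ai
```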

Environment Setup

Create .env.local:
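A minimal `.env.local` along these lines covers the stack above. The values are placeholders, and the exact variable names for the AI keys depend on the SDK you use:

```bash
# .env.local -- placeholder values; never commit real keys
DATABASE_URL="postgresql://user:password@localhost:5432/aichat"
OPENAI_API_KEY="sk-..."
ANTHROPIC_API_KEY="sk-ant-..."   # optional
GOOGLE_AI_API_KEY="..."          # optional
```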


Step 2: Database Schema

Initialize Prisma
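Scaffold the Prisma setup (this creates `prisma/schema.prisma` and a `.env` file):

```bash
npx prisma init
```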

Define Schema

Edit prisma/schema.prisma:
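A schema along these lines supports the features described above (conversations, messages, and per-response metadata). Model and field names are illustrative, not prescribed by the tutorial:

```prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

model Conversation {
  id        String    @id @default(cuid())
  title     String    @default("New conversation")
  createdAt DateTime  @default(now())
  updatedAt DateTime  @updatedAt
  messages  Message[]
}

model Message {
  id             String       @id @default(cuid())
  conversationId String
  conversation   Conversation @relation(fields: [conversationId], references: [id], onDelete: Cascade)
  role           String       // "user" | "assistant"
  content        String
  provider       String?      // which AI provider produced the response
  model          String?
  latencyMs      Int?
  createdAt      DateTime     @default(now())
}
```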

Apply Schema
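Push the schema to the database and regenerate the Prisma client:

```bash
npx prisma db push
npx prisma generate
```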


Step 3: AI Provider Configuration

Create src/lib/ai.ts:

  1. Multi-provider setup: Configure multiple AI providers to enable automatic failover. The array is ordered by preference.

  2. Priority 1 (highest): Google AI is tried first because it has a generous free tier (1,500 requests/day).

  3. Quota tracking: NeurosLink AI automatically tracks daily and per-minute quotas to prevent hitting rate limits.

  4. Priority 2 (fallback): If Google AI fails or quota is exceeded, automatically fall back to OpenAI.

  5. Load balancing strategy: Use 'priority' to always prefer higher-priority providers. Other options: 'round-robin', 'latency-based'.

  6. Failover configuration: Enable automatic retries with exponential backoff, and fall back to next provider when quota is exceeded.
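NeurosLink AI's actual configuration API is not reproduced here, so the sketch below hand-rolls the priority-selection logic the list describes. `ProviderConfig` and `pickProvider` are hypothetical names for illustration, not the library's API:

```typescript
// Illustrative sketch of priority-based provider selection with quota
// tracking, as described in the steps above. Not the NeurosLink AI API.
interface ProviderConfig {
  name: string;
  priority: number;   // 1 = tried first
  dailyQuota: number; // max requests per day
  used: number;       // requests consumed today
}

const providers: ProviderConfig[] = [
  // Google AI first: generous free tier (1,500 requests/day)
  { name: "google-ai", priority: 1, dailyQuota: 1500, used: 0 },
  // OpenAI as the paid fallback
  { name: "openai", priority: 2, dailyQuota: 10000, used: 0 },
];

// Pick the highest-priority provider that still has quota left;
// this is the essence of the 'priority' load-balancing strategy.
function pickProvider(list: ProviderConfig[]): ProviderConfig | undefined {
  return [...list]
    .sort((a, b) => a.priority - b.priority)
    .find((p) => p.used < p.dailyQuota);
}
```

When Google AI's quota is exhausted (`used` reaches `dailyQuota`), `pickProvider` transparently returns OpenAI, which is the failover behavior the configuration enables.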


Step 4: Database Client

Create src/lib/db.ts:
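The standard Next.js pattern here is a singleton, so dev-mode hot reloads don't open a new database connection on every change. A stub stands in for `PrismaClient` to keep the sketch self-contained; in the real file you would `import { PrismaClient } from "@prisma/client"` and create that instead:

```typescript
// src/lib/db.ts -- singleton pattern for the Prisma client.
// PrismaClientStub is a placeholder so this sketch runs standalone.
class PrismaClientStub {
  connectedAt = new Date();
}

// Cache a value on globalThis so repeated module evaluation reuses it.
function singleton<T>(key: string, create: () => T): T {
  const g = globalThis as unknown as Record<string, T>;
  if (!(key in g)) g[key] = create();
  return g[key];
}

export const db = singleton("prisma", () => new PrismaClientStub());
```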


Step 5: API Routes

Chat API with Streaming

Create src/app/api/chat/route.ts:

  1. Node.js runtime required: Streaming requires the Node.js runtime in Next.js, not Edge runtime.

  2. Load or create conversation: If conversationId exists, load the conversation with last 20 messages for context. Otherwise, create new conversation.

  3. Save user message: Store the user's message in the database before generating response.

  4. Build conversation history: Format all previous messages as context for the AI to maintain conversation continuity.

  5. Create streaming response: Use ReadableStream to stream chunks as they arrive from the AI provider.

  6. Stream from NeurosLink AI: Call ai.stream() which returns an async iterator of content chunks. Automatically falls back to other providers on failure.

  7. Send chunk to client: Encode each chunk as Server-Sent Events (SSE) format and send immediately for real-time display.

  8. Save complete response: After streaming completes, save the full response to database with metadata (provider, model, latency).

  9. Send completion signal: Send final event with done: true to notify client that streaming is complete.

  10. SSE headers: Set headers for Server-Sent Events to enable streaming to the browser.
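The streaming core (steps 5-10 above) can be sketched with only standard Web APIs available in the Next.js Node runtime. Here `chunks` stands in for NeurosLink AI's `ai.stream()` iterator, and the database writes are elided:

```typescript
// Server-Sent Events framing: each event is "data: <json>" plus a blank line.
function sseEvent(data: unknown): string {
  return `data: ${JSON.stringify(data)}\n\n`;
}

// Wrap an async iterator of text chunks in a streaming SSE Response.
function streamResponse(chunks: AsyncIterable<string>): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      let full = "";
      for await (const chunk of chunks) {
        full += chunk;
        // Step 7: send each chunk immediately for real-time display
        controller.enqueue(encoder.encode(sseEvent({ content: chunk })));
      }
      // Step 8 would save `full` to the database here (with provider metadata).
      // Step 9: completion signal
      controller.enqueue(encoder.encode(sseEvent({ done: true })));
      controller.close();
    },
  });
  // Step 10: SSE headers
  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      Connection: "keep-alive",
    },
  });
}
```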

Conversations API

Create src/app/api/conversations/route.ts:

Get Conversation Messages

Create src/app/api/conversations/[id]/messages/route.ts:
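The core of this handler is fetching a conversation's messages in chronological order. An in-memory array stands in for the Prisma client below; the real route would use `db.message.findMany({ where: { conversationId }, orderBy: { createdAt: "asc" } })`:

```typescript
// Sketch of the messages query; StoredMessage mirrors the Message model.
interface StoredMessage {
  id: string;
  conversationId: string;
  role: "user" | "assistant";
  content: string;
  createdAt: Date;
}

// Return one conversation's messages, oldest first.
function getMessages(store: StoredMessage[], conversationId: string): StoredMessage[] {
  return store
    .filter((m) => m.conversationId === conversationId)
    .sort((a, b) => a.createdAt.getTime() - b.createdAt.getTime());
}
```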


Step 6: React Components

Chat Interface

Create src/components/ChatInterface.tsx:
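The key piece of the chat component is reading the SSE stream from `fetch()` and appending chunks as they arrive. The parsing and read loop are sketched below; the React state wiring around them is omitted:

```typescript
interface ChatEvent {
  content?: string;
  done?: boolean;
}

// Parse one SSE frame ("data: {...}") into an event; blank/other lines yield null.
function parseSSELine(line: string): ChatEvent | null {
  if (!line.startsWith("data: ")) return null;
  return JSON.parse(line.slice("data: ".length)) as ChatEvent;
}

// Drain the streaming response, invoking onChunk for each content fragment.
async function readChat(res: Response, onChunk: (text: string) => void): Promise<void> {
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next read
    for (const line of lines) {
      const event = parseSSELine(line);
      if (event?.content) onChunk(event.content);
    }
  }
}
```

In the component, `onChunk` would append text to the in-progress assistant message in state, producing the typewriter effect.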

Create src/components/Sidebar.tsx:


Step 7: Main Page

Create src/app/page.tsx:


Step 8: Run the Application

Start Development Server
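```bash
npm run dev
```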

Visit http://localhost:3000


Step 9: Testing

Test Basic Chat

  1. Type a message: "Hello, can you help me?"

  2. Verify streaming response appears

  3. Send follow-up: "What can you do?"

  4. Verify conversation context maintained

Test Multi-Provider Failover

Temporarily invalidate Google AI key to test failover:
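For example, overwrite the key in `.env.local` (variable name as assumed in the environment setup step) and restart the dev server:

```bash
# .env.local -- temporarily break the Google AI key to force failover
GOOGLE_AI_API_KEY="invalid-key-for-testing"
```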

Verify fallback to OpenAI works automatically.

Test Conversation History

  1. Create new conversation

  2. Send multiple messages

  3. Refresh page

  4. Verify conversations appear in sidebar

  5. Click conversation to reload messages


Step 10: Production Enhancements

Add Loading States

Add Error Handling

Add Message Timestamps


Next Steps

1. Add Authentication

Use NextAuth.js for user authentication:

2. Add User Preferences

Store user settings (model preference, temperature, etc.):

3. Add Analytics

Track usage, costs, and performance:

4. Deploy to Production

Deploy to Vercel:
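Using the Vercel CLI; remember to set `DATABASE_URL` and the AI provider keys in the Vercel project's environment variables first:

```bash
npm install -g vercel
vercel          # link the project and deploy a preview
vercel --prod   # deploy to production
```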


Troubleshooting

Database Connection Issues
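First confirm PostgreSQL is actually accepting connections and that the host/port match `DATABASE_URL` in `.env.local`:

```bash
# Check that PostgreSQL is up and listening
pg_isready -h localhost -p 5432
# Confirm DATABASE_URL is defined (prints the name only, not the secret)
grep '^DATABASE_URL=' .env.local | cut -d= -f1
```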

API Key Errors

Verify environment variables are set:
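One quick check (key names as assumed in the environment setup step; prints names only, not values):

```bash
grep -E '^(OPENAI|ANTHROPIC|GOOGLE_AI)_API_KEY=' .env.local | cut -d= -f1
```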

Streaming Not Working

Enable Node.js runtime in API route:
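Next.js route segment config makes this a one-line export at the top of the route file:

```typescript
// src/app/api/chat/route.ts
// Force the Node.js runtime; the Edge runtime lacks APIs this route relies on.
export const runtime = "nodejs";
```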




Summary

You've built a production-ready chat application with:

✅ Real-time streaming responses

✅ Persistent conversation history

✅ Multi-provider failover

✅ Cost optimization (free tier first)

✅ Modern React UI

✅ PostgreSQL storage

✅ Error handling

Next Tutorial: RAG Implementation - Build a knowledge base Q&A system
