API Routes
Server-side API routes for AI chat streaming.
AI API Routes
The AI chat uses server-side API routes for processing messages and streaming responses.
Main Chat Route
Located at /api/ai/chat, this route handles all chat interactions.
Request Format
interface ChatRequest {
  messages: Array<{
    role: 'user' | 'assistant' | 'system';
    content: string;
  }>;
  type?: 'message' | 'tool-approval';
  toolCallId?: string;
  approved?: boolean;
}
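For example, a client approving a pending tool call might send a payload like the following sketch (the toolCallId value is illustrative; it would come from an earlier tool-call event):

// Hypothetical client call approving a pending tool call.
// The payload shape follows the ChatRequest interface above.
await fetch('/api/ai/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Open the dashboard' }],
    type: 'tool-approval',
    toolCallId: 'tc-1', // id received in a prior tool-call event
    approved: true,
  }),
});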
Response Format
Responses use Server-Sent Events (SSE) for streaming:
data: {"type":"text","content":"Hello!"}\n\n
data: {"type":"tool-call","toolCallId":"tc-1","toolName":"navigateTo",...}\n\n
data: {"type":"tool-result","toolCallId":"tc-1","result":{...}}\n\n
data: [DONE]\n\n
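A minimal sketch of consuming this stream in the browser, assuming each event is a single data: line and events are separated by a blank line:

// Minimal SSE client sketch (assumes one `data:` line per event).
const res = await fetch('/api/ai/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ messages: [{ role: 'user', content: 'Hello' }] }),
});
const reader = res.body!.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // Events are delimited by a blank line ("\n\n")
  const frames = buffer.split('\n\n');
  buffer = frames.pop() ?? ''; // keep any incomplete trailing frame

  for (const frame of frames) {
    const data = frame.replace(/^data: /, '');
    if (data === '[DONE]') continue; // end-of-stream sentinel
    const event = JSON.parse(data);
    console.log(event.type, event);
  }
}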
Implementation
Basic Route Structure
// app/api/ai/chat/route.ts
import { type NextRequest, NextResponse } from 'next/server';

export async function POST(request: NextRequest) {
  const body = await request.json();
  const { messages, type } = body;

  // Handle tool approval separately from regular messages
  if (type === 'tool-approval') {
    return handleToolApproval(body);
  }

  // Create a streaming response
  const stream = new ReadableStream({
    async start(controller) {
      const encoder = new TextEncoder();

      // Generate SSE-formatted response chunks
      for await (const chunk of generateResponse(messages)) {
        controller.enqueue(encoder.encode(chunk));
      }

      // Signal end of stream (see response format above), then close
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
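handleToolApproval and generateResponse are not shown above. As a hypothetical sketch, generateResponse can be an async generator that yields SSE frames matching the documented response format; the placeholder body below stands in for a real model call:

// Hypothetical sketch: an async generator yielding SSE frames in the
// format documented above. Replace the body with your model call.
async function* generateResponse(
  messages: Array<{ role: string; content: string }>
): AsyncGenerator<string> {
  // Placeholder chunks standing in for real model output
  // (the `messages` argument is ignored here)
  const words = ['Hello', ' there', '!'];
  for (const word of words) {
    yield `data: ${JSON.stringify({ type: 'text', content: word })}\n\n`;
  }
}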
With AI SDK v6
import { type NextRequest } from 'next/server';
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, tool, type UIMessage } from 'ai';
import { z } from 'zod';

export async function POST(request: NextRequest) {
  const { messages }: { messages: UIMessage[] } = await request.json();

  const result = streamText({
    model: openai('gpt-4o'),
    // UI messages from the client must be converted to model messages
    messages: convertToModelMessages(messages),
    tools: {
      navigateTo: tool({
        description: 'Navigate to a page',
        // AI SDK v5+ renamed `parameters` to `inputSchema`
        inputSchema: z.object({
          path: z.string(),
        }),
        execute: async ({ path }) => {
          return { navigated: true, path };
        },
      }),
    },
  });

  // `toDataStreamResponse()` was replaced by `toUIMessageStreamResponse()` in v5+
  return result.toUIMessageStreamResponse();
}
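On the client, this route pairs with the useChat hook from @ai-sdk/react. A sketch assuming AI SDK v5+, where a transport must point at this route because the hook defaults to /api/chat:

'use client';

import { useChat } from '@ai-sdk/react';
import { DefaultChatTransport } from 'ai';

export function Chat() {
  // Point the transport at this route (the hook's default is /api/chat)
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({ api: '/api/ai/chat' }),
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.role}: {m.parts.map((p) => (p.type === 'text' ? p.text : '')).join('')}
        </div>
      ))}
      <button onClick={() => sendMessage({ text: 'Hello' })}>Send</button>
    </div>
  );
}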
Edge Runtime
Edge Runtime reduces cold-start latency, which matters for streaming endpoints:
export const runtime = 'edge';
export const maxDuration = 30;

export async function POST(request: NextRequest) {
  // Edge-compatible implementation
}
Error Handling
export async function POST(request: NextRequest) {
  try {
    const body = await request.json();
    // Process request...
  } catch (error) {
    console.error('Chat API error:', error);
    return NextResponse.json(
      {
        error: 'Failed to process request',
        code: 'CHAT_ERROR',
      },
      { status: 500 }
    );
  }
}
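Note that a try/catch around the handler only covers errors thrown before the stream starts; once headers are sent, a 500 can no longer be returned. A sketch of surfacing mid-stream failures as an SSE event instead (the error event shape is an assumption, not part of the documented format):

const stream = new ReadableStream({
  async start(controller) {
    const encoder = new TextEncoder();
    try {
      for await (const chunk of generateResponse(messages)) {
        controller.enqueue(encoder.encode(chunk));
      }
    } catch (error) {
      // Hypothetical error event; the client must recognize this type
      controller.enqueue(
        encoder.encode(
          `data: ${JSON.stringify({ type: 'error', message: 'Generation failed' })}\n\n`
        )
      );
    } finally {
      controller.enqueue(encoder.encode('data: [DONE]\n\n'));
      controller.close();
    }
  },
});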
Tool Execution
Tools can be executed server-side:
async function executeToolServer(
  toolName: string,
  args: Record<string, unknown>
) {
  switch (toolName) {
    case 'fetchData':
      return await fetchDataFromDB(args);
    case 'analyzeData':
      return await runAnalysis(args);
    default:
      return { error: 'Unknown tool' };
  }
}
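To connect this to the stream, the route can emit the documented tool-call and tool-result events around execution. A sketch, where controller and encoder come from the ReadableStream shown earlier and executeToolServer is the function above:

// Sketch: emit tool-call / tool-result SSE events around server-side
// execution, matching the response format documented above.
async function streamToolExecution(
  controller: ReadableStreamDefaultController,
  encoder: TextEncoder,
  toolCallId: string,
  toolName: string,
  args: Record<string, unknown>
) {
  controller.enqueue(
    encoder.encode(
      `data: ${JSON.stringify({ type: 'tool-call', toolCallId, toolName, args })}\n\n`
    )
  );

  const result = await executeToolServer(toolName, args);

  controller.enqueue(
    encoder.encode(
      `data: ${JSON.stringify({ type: 'tool-result', toolCallId, result })}\n\n`
    )
  );
}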
Rate Limiting
Implement rate limiting for production:
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '10 s'),
});

export async function POST(request: NextRequest) {
  // NextRequest.ip is unreliable across deployments (and removed in
  // Next.js 15); read the forwarded header instead
  const ip = request.headers.get('x-forwarded-for') ?? '127.0.0.1';
  const { success } = await ratelimit.limit(ip);

  if (!success) {
    return NextResponse.json(
      { error: 'Rate limit exceeded' },
      { status: 429 }
    );
  }

  // Continue processing...
}
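IP-based keys can over-throttle users behind shared NATs. Where requests are authenticated, a per-user key is usually fairer; a sketch, assuming a hypothetical x-user-id header set by your auth middleware:

// Assumption: upstream auth middleware sets `x-user-id` for signed-in users.
const userId = request.headers.get('x-user-id');
const key = userId ?? request.headers.get('x-forwarded-for') ?? '127.0.0.1';
const { success } = await ratelimit.limit(key);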
Environment Variables
The route reads the following environment variables (one provider key is required; the rest are optional):
# OpenAI
OPENAI_API_KEY=sk-...
# Or Anthropic
ANTHROPIC_API_KEY=sk-ant-...
# Optional
AI_MODEL=gpt-4o
AI_MAX_TOKENS=4096
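Since zod is already a dependency here, a sketch of validating these variables at module load so misconfiguration fails fast (assumes the OpenAI provider; swap the key for ANTHROPIC_API_KEY if using Anthropic):

import { z } from 'zod';

// Validate at module load so misconfiguration fails fast
const envSchema = z.object({
  OPENAI_API_KEY: z.string().min(1),
  AI_MODEL: z.string().default('gpt-4o'),
  AI_MAX_TOKENS: z.coerce.number().default(4096),
});

export const env = envSchema.parse(process.env);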
Testing
Test the API route:
import { NextRequest } from 'next/server';
import { POST } from './route';

describe('AI Chat API', () => {
  it('should stream a response', async () => {
    // The handler expects a NextRequest, not a plain Request
    const request = new NextRequest('http://localhost/api/ai/chat', {
      method: 'POST',
      body: JSON.stringify({
        messages: [{ role: 'user', content: 'Hello' }],
      }),
    });

    const response = await POST(request);
    expect(response.status).toBe(200);

    const text = await response.text();
    expect(text).toContain('data:');
  });
});
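A companion test for the error path, assuming the error handling shown earlier, where malformed JSON surfaces as a 500 with the CHAT_ERROR code:

it('should return 500 for malformed JSON', async () => {
  const request = new NextRequest('http://localhost/api/ai/chat', {
    method: 'POST',
    body: 'not-json', // request.json() will throw inside the handler
  });

  const response = await POST(request);
  expect(response.status).toBe(500);

  const body = await response.json();
  expect(body.code).toBe('CHAT_ERROR');
});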