# Build an AI App

Create an AI-powered web app with streaming chat and conversation history.
## What You'll Build

A deployed AI chat application with streaming responses, conversation memory, and user authentication.
- Streaming AI chat interface
- Conversation history stored in Supabase
- Multiple AI model support
- Deployed on Vercel with edge functions
## Prerequisites
- Node.js 18+ installed
- An OpenAI API key (or Anthropic, Google, etc.)
- Basic React and TypeScript knowledge
- A Supabase account
## Architecture
Next.js handles the frontend chat UI and API routes. Vercel AI SDK provides the streaming integration with AI providers (OpenAI, Anthropic, etc.). Supabase stores conversation history and handles user auth. Vercel deploys and runs everything on edge functions for low latency.
## Set up Next.js with the Vercel AI SDK

(~10 min) Create a new Next.js project and install the Vercel AI SDK.
- Create a Next.js project: `npx create-next-app@latest my-ai-app --typescript --tailwind --app`
- Install the AI SDK: `npm install ai @ai-sdk/openai`
- Add your AI provider API key to `.env.local`

```shell
npx create-next-app@latest my-ai-app --typescript --tailwind --app
cd my-ai-app
npm install ai @ai-sdk/openai
```

In `.env.local`:

```
OPENAI_API_KEY=sk-...
```
## Build the streaming chat API route

(~15 min) Create an API route that streams AI responses back to the client.
- Create `src/app/api/chat/route.ts`
- Use `streamText` from the AI SDK to stream responses
- Configure the system prompt to define your AI's personality and capabilities
- The SDK handles all the streaming complexity for you
```typescript
import { openai } from '@ai-sdk/openai'
import { streamText } from 'ai'

export async function POST(req: Request) {
  const { messages } = await req.json()

  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful AI assistant.',
    messages
  })

  return result.toDataStreamResponse()
}
```
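The route above hard-codes a single model. If you expose model choice to the client (the multi-model support listed earlier), it's worth validating the requested model id before handing it to the SDK, so clients can't select an arbitrary, expensive model. A minimal sketch; `pickModel` and the allowlist contents are my own names, not part of the AI SDK:

```typescript
// Hypothetical helper (not part of the AI SDK): validate a client-supplied
// model id against an allowlist of models this app is willing to run.
const ALLOWED_MODELS = ['gpt-4o', 'gpt-4o-mini']

// Fall back to a cheap default when the requested model is unknown.
function pickModel(requested?: string): string {
  return requested !== undefined && ALLOWED_MODELS.includes(requested)
    ? requested
    : 'gpt-4o-mini'
}

// In the route, something like: model: openai(pickModel(body.model))
```

The fallback-to-default choice (rather than returning a 400) keeps the chat working even when an outdated client sends a retired model id.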
Tip: use `gpt-4o-mini` during development to save API costs. Switch to `gpt-4o` or `claude-3.5-sonnet` for production.

## Build the chat interface
(~30 min) Create a streaming chat UI with the `useChat` hook from the AI SDK.
- Create a chat page at `src/app/page.tsx`
- Use the `useChat` hook from `ai/react` for automatic streaming state management
- Build a message list that displays user and assistant messages
- Add an input field with a send button
- Style with Tailwind: use different background colors for user and assistant messages
```tsx
'use client'

import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()

  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map(m => (
          <div key={m.id} className={`p-4 rounded-lg ${
            m.role === 'user' ? 'bg-blue-100 ml-12' : 'bg-gray-100 mr-12'
          }`}>
            {m.content}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input value={input} onChange={handleInputChange}
          placeholder="Ask me anything..."
          className="flex-1 p-3 border rounded-lg" />
        <button type="submit" disabled={isLoading}
          className="px-6 py-3 bg-black text-white rounded-lg">
          Send
        </button>
      </form>
    </div>
  )
}
```
The `useChat` hook handles streaming, loading states, error handling, and message history automatically, so you don't need to manage any of this yourself.

## Add conversation history with Supabase
(~30 min) Store conversations in Supabase so users can revisit past chats.
- Install Supabase: `npm install @supabase/supabase-js`
- Create a `conversations` table (id, user_id, title, created_at)
- Create a `messages` table (id, conversation_id, role, content, created_at)
- Add your Supabase credentials to `.env.local`
- Update your chat API route to save messages after each exchange
- Build a sidebar that lists past conversations
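One small piece worth sketching is the sidebar's conversation title: the `title` column needs something readable, and the first user message is a natural source. A hedged sketch; the `deriveTitle` name and the 40-character cap are my own choices:

```typescript
// Hypothetical helper: derive a short sidebar title from the first user
// message. The 40-character cap is an arbitrary choice for this sketch.
function deriveTitle(firstUserMessage: string, maxLength = 40): string {
  // Collapse whitespace so multi-line messages make one-line titles.
  const trimmed = firstUserMessage.trim().replace(/\s+/g, ' ')
  if (trimmed.length === 0) return 'New conversation'
  return trimmed.length <= maxLength
    ? trimmed
    : trimmed.slice(0, maxLength - 1).trimEnd() + '…'
}
```

The empty-string fallback matches the `DEFAULT 'New conversation'` in the schema below, so the UI shows the same placeholder whether or not the title was ever computed.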
```sql
CREATE TABLE conversations (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id TEXT NOT NULL,
  title TEXT DEFAULT 'New conversation',
  created_at TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE messages (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  conversation_id UUID REFERENCES conversations(id) ON DELETE CASCADE,
  role TEXT NOT NULL CHECK (role IN ('user', 'assistant', 'system')),
  content TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);
```
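To persist each exchange, the API route can map its in-memory chat messages into rows for the `messages` table above. A minimal sketch of that mapping, assuming the `{ role, content }` message shape used in the chat UI; `toMessageRows` and the interfaces are my own names, not a Supabase or AI SDK API:

```typescript
// Shapes assumed for this sketch, mirroring the SQL schema above.
interface ChatMessage {
  role: 'user' | 'assistant' | 'system'
  content: string
}

interface MessageRow {
  conversation_id: string
  role: ChatMessage['role']
  content: string
}

// Hypothetical helper: turn chat messages into insert payloads for the
// `messages` table. id and created_at are left to the column defaults.
function toMessageRows(conversationId: string, messages: ChatMessage[]): MessageRow[] {
  return messages.map(m => ({
    conversation_id: conversationId,
    role: m.role,
    content: m.content,
  }))
}

// In the route, something like:
// await supabase.from('messages').insert(toMessageRows(id, newMessages))
```

Keeping the mapping pure (no Supabase client inside) makes it easy to test, and leaves `id`/`created_at` to the database defaults defined in the schema.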
## Deploy to Vercel

(~10 min) Deploy your AI app with edge functions for low-latency streaming.
- Push to GitHub and import into Vercel
- Add all environment variables (AI provider key, Supabase credentials)
- Add `export const runtime = 'edge'` to your chat route for edge deployment
- Deploy and test the full flow: chat → stream → save → revisit
## 🎉 You're Done!

You now have a deployed AI chat application with streaming responses, conversation memory, and user authentication.
## Want this built for you?
Get a step-by-step checklist, setup order, and the exact config for every tool in this guide. Or let me build it for you.
Get the checklist →