Intermediate ⏱ 2-3 hours

Build an AI App

Create an AI-powered web app with streaming chat and conversation history

Next.js: Framework
Vercel: Hosting & AI SDK
Supabase: Database & Auth

What You'll Build

A deployed AI chat application with streaming responses, conversation memory, and user authentication.

Prerequisites

- Node.js 18 or later and npm
- An API key for an AI provider (OpenAI, Anthropic, etc.)
- Free Vercel and Supabase accounts
- Basic familiarity with React and TypeScript

Architecture

Next.js handles the frontend chat UI and API routes. Vercel AI SDK provides the streaming integration with AI providers (OpenAI, Anthropic, etc.). Supabase stores conversation history and handles user auth. Vercel deploys and runs everything on edge functions for low latency.

User → Chat UI (Next.js) → AI SDK Route Handler → AI Provider (OpenAI/Anthropic) → Streaming Response → Supabase (save history)

5 Steps

1
Next.js

Set up Next.js with Vercel AI SDK

~10 min

Create a new Next.js project and install the Vercel AI SDK.

  1. Create a Next.js project: npx create-next-app@latest my-ai-app --typescript --tailwind --app
  2. Install the AI SDK: npm install ai @ai-sdk/openai
  3. Add your AI provider API key to .env.local
Terminal
npx create-next-app@latest my-ai-app --typescript --tailwind --app
cd my-ai-app
npm install ai @ai-sdk/openai
.env.local
OPENAI_API_KEY=sk-...
💡
Tip: The Vercel AI SDK supports OpenAI, Anthropic, Google, Mistral, and more. You can switch providers by swapping the provider import and the model name.
2
Vercel

Build the streaming chat API route

~15 min

Create an API route that streams AI responses back to the client.

  1. Create src/app/api/chat/route.ts
  2. Use streamText from the AI SDK to stream responses
  3. Configure the system prompt to define your AI's personality and capabilities
  4. The SDK handles all the streaming complexity for you
src/app/api/chat/route.ts
import { openai } from '@ai-sdk/openai'
import { streamText } from 'ai'

export async function POST(req: Request) {
  // useChat on the client POSTs the full message history here
  const { messages } = await req.json()
  const result = streamText({
    model: openai('gpt-4o'),
    system: 'You are a helpful AI assistant.',
    messages
  })
  // Stream tokens back to the client as they are generated
  return result.toDataStreamResponse()
}
💡
Tip: Use gpt-4o-mini during development to save API costs. Switch to gpt-4o or claude-3.5-sonnet for production.
⚠️
Warning: Never expose your API key on the client side. The AI SDK route handler keeps it server-side automatically.
3
Next.js

Build the chat interface

~30 min

Create a streaming chat UI with the `useChat` hook from the AI SDK.

  1. Create a chat page at src/app/page.tsx
  2. Use the useChat hook from ai/react for automatic streaming state management
  3. Build a message list that displays user and assistant messages
  4. Add an input field with a send button
  5. Style with Tailwind - use different bg colors for user vs assistant messages
src/app/page.tsx
'use client'
import { useChat } from 'ai/react'

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat()
  return (
    <div className="max-w-2xl mx-auto p-4">
      <div className="space-y-4 mb-4">
        {messages.map(m => (
          <div key={m.id} className={`p-4 rounded-lg ${
            m.role === 'user' ? 'bg-blue-100 ml-12' : 'bg-gray-100 mr-12'
          }`}>
            {m.content}
          </div>
        ))}
      </div>
      <form onSubmit={handleSubmit} className="flex gap-2">
        <input value={input} onChange={handleInputChange}
          placeholder="Ask me anything..."
          className="flex-1 p-3 border rounded-lg" />
        <button type="submit" disabled={isLoading}
          className="px-6 py-3 bg-black text-white rounded-lg">
          Send
        </button>
      </form>
    </div>
  )
}
💡
Tip: The useChat hook handles streaming, loading states, error handling, and message history automatically. You don't need to manage any of this yourself.
4
Supabase

Add conversation history with Supabase

~30 min

Store conversations in Supabase so users can revisit past chats.

  1. Install Supabase: npm install @supabase/supabase-js
  2. Create a conversations table (id, user_id, title, created_at)
  3. Create a messages table (id, conversation_id, role, content, created_at)
  4. Add Supabase credentials to .env.local
  5. Update your chat API route to save messages after each exchange
  6. Build a sidebar that lists past conversations
Supabase SQL Editor
CREATE TABLE conversations (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  user_id TEXT NOT NULL,
  title TEXT DEFAULT 'New conversation',
  created_at TIMESTAMPTZ DEFAULT now()
);

CREATE TABLE messages (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  conversation_id UUID REFERENCES conversations(id) ON DELETE CASCADE,
  role TEXT NOT NULL CHECK (role IN ('user', 'assistant', 'system')),
  content TEXT NOT NULL,
  created_at TIMESTAMPTZ DEFAULT now()
);
💡
Tip: Auto-generate conversation titles from the first user message. It makes the sidebar much more useful.
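Saving messages after each exchange (step 5 in the list above) and auto-titling conversations (the tip above) can be sketched as two small pure helpers, with the actual Supabase insert shown in comments. This is a minimal sketch: `buildMessageRows` and `titleFromFirstMessage` are hypothetical helper names, not part of the AI SDK or Supabase APIs.

```typescript
// Hypothetical helpers for persisting a chat exchange to the tables above.
type ChatMessage = { role: 'user' | 'assistant' | 'system'; content: string }

// Map AI SDK messages to rows for the `messages` table.
function buildMessageRows(conversationId: string, msgs: ChatMessage[]) {
  return msgs.map(m => ({
    conversation_id: conversationId,
    role: m.role,
    content: m.content,
  }))
}

// Derive a sidebar title from the first user message, truncated for display.
function titleFromFirstMessage(msgs: ChatMessage[], maxLen = 40): string {
  const first = msgs.find(m => m.role === 'user')?.content ?? 'New conversation'
  return first.length > maxLen ? first.slice(0, maxLen) + '…' : first
}

// In the chat route, streamText's onFinish callback is one place to call these:
//   const result = streamText({
//     model: openai('gpt-4o'),
//     messages,
//     onFinish: async ({ text }) => {
//       await supabase.from('messages').insert(
//         buildMessageRows(conversationId, [
//           ...messages,
//           { role: 'assistant', content: text },
//         ])
//       )
//     },
//   })
```

Keeping the row-building pure makes it easy to test without touching the database; only the final `insert` call needs a live Supabase client.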
5
Vercel

Deploy to Vercel

~10 min

Deploy your AI app with edge functions for low-latency streaming.

  1. Push to GitHub and import into Vercel
  2. Add all environment variables (AI provider key, Supabase credentials)
  3. Add export const runtime = 'edge' to your chat route for edge deployment
  4. Deploy and test the full flow: chat → stream → save → revisit
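The edge runtime switch in step 3 is a one-line config export at the top level of the route file; a sketch of how the top of the chat route would look after the change:

```typescript
// src/app/api/chat/route.ts

// Opt this route into Vercel's Edge runtime for faster streaming start-up.
export const runtime = 'edge'
```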
💡
Tip: Edge runtime gives you ~50ms latency to start streaming. Standard serverless adds ~200-500ms cold start.
⚠️
Warning: Set rate limits on your chat API to prevent abuse. Even a simple per-IP limit helps.

🎉 You're Done!

You now have a deployed AI chat application with streaming responses, conversation memory, and user authentication.

Want this built for you?

Get a step-by-step checklist, setup order, and the exact config for every tool in this guide. Or let me build it for you.