What We're Building
In this guide, you'll build an AI-powered product recommendation engine that uses vector embeddings to find similar products and Claude to generate personalized recommendation descriptions. This is the same general approach that powers recommendations at companies like Amazon, Netflix, and Spotify.
By the end of this guide, you'll have a working e-commerce recommendation system that can:
- Generate embeddings for your product catalog using OpenAI's embedding model
- Store embeddings in Supabase with pgvector for fast similarity search
- Find semantically similar products based on descriptions, not just categories
- Generate personalized recommendation copy using Claude
- Display recommendations in a beautiful, responsive carousel UI
Prerequisites
Before starting this guide, make sure you have the following:
- Node.js 18+ installed on your machine
- Supabase account (free tier works perfectly)
- OpenAI API key for generating embeddings
- Anthropic API key for Claude-powered descriptions
- Basic familiarity with React/Next.js and TypeScript
- Understanding of SQL basics (we'll write some queries)
For a catalog of 1,000 products, embedding generation costs approximately $0.10 with OpenAI's text-embedding-3-small model. Claude API calls for descriptions will depend on usage but typically cost $0.01-0.05 per recommendation set.
Tech Stack Specification
Here's the technology stack we'll use for this build, chosen for production readiness and developer experience:
| Layer | Technology | Why This Choice |
|---|---|---|
| Frontend | Next.js 14, TypeScript, Tailwind CSS | SSR for SEO, type safety prevents bugs, rapid UI development |
| Backend | Next.js API Routes | Serverless functions, zero config deployment, same codebase |
| Database | Supabase with pgvector | Native vector similarity search, generous free tier, real-time subscriptions |
| AI - Embeddings | OpenAI text-embedding-3-small | Best price/performance for semantic search, 1536 dimensions |
| AI - Generation | Claude 3.5 Sonnet | Superior instruction following for natural recommendation copy |
| Hosting | Vercel | Optimized for Next.js, edge functions, automatic scaling |
AI Agent Workflow
Here's how to leverage AI tools throughout this build to maximize your productivity. Each tool has specific strengths that we'll exploit at different stages.
🤖 Claude Code
Use for project scaffolding, complex logic, and backend work:
- Project initialization and config
- Embedding pipeline logic
- Vector search SQL queries
- API route implementations
🎨 v0.dev
Use for rapid UI component generation:
- Product card components
- Recommendation carousel
- Loading skeletons
- Responsive layouts
⚡ Cursor
Use for debugging and optimization:
- Debugging vector queries
- Performance optimization
- Code refactoring
- Adding TypeScript types
Project Scaffolding with Claude Code
Start by asking Claude Code to scaffold your project with the correct dependencies and configuration. This saves significant setup time and ensures best practices from the start.
# Example prompt for Claude Code
Create a Next.js 14 project with TypeScript for an e-commerce
product recommendation engine. Include:
1. Supabase client setup with types
2. OpenAI client for embeddings
3. Anthropic client for Claude
4. A products table schema with pgvector embedding column
5. API routes for:
- POST /api/embeddings/generate (batch embed products)
- GET /api/recommendations/[productId] (get similar products)
6. Environment variable validation with zod
Use the App Router and server components where possible.
UI Generation with v0.dev
For the visual components, v0.dev excels at generating polished UI quickly. Use it for the product display components that need to look professional without spending hours on CSS.
When prompting v0.dev, specify "dark mode compatible" and "use Tailwind CSS" to get components that integrate seamlessly with your Next.js setup. Also include "include loading states" to get skeleton components automatically.
Development with Cursor
Cursor shines when you need to debug complex queries or optimize performance. Its inline code generation and ability to understand your entire codebase make it perfect for the integration work between your embedding pipeline and search functionality.
Step-by-Step Build Guide
Phase 1: Project Setup
Initialize your Next.js project and install the required dependencies. We're using the App Router for better server component support.
# Create Next.js project
npx create-next-app@latest ecommerce-recommender --typescript --tailwind --app --src-dir
cd ecommerce-recommender
# Install dependencies
npm install @supabase/supabase-js openai @anthropic-ai/sdk zod
# Install dev dependencies
npm install -D @types/node
Create your environment variables file with the required API keys:
# Supabase
NEXT_PUBLIC_SUPABASE_URL=your-project-url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your-anon-key
SUPABASE_SERVICE_ROLE_KEY=your-service-role-key
# OpenAI
OPENAI_API_KEY=sk-...
# Anthropic
ANTHROPIC_API_KEY=sk-ant-...
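The scaffold prompt earlier asks Claude Code for zod-based environment validation. As a dependency-free sketch of the same idea (the variable names match the .env file above; the helper name is ours), you can fail fast at startup if anything is missing:

```typescript
// Minimal sketch of env validation without zod: collect any missing
// required variables and throw one descriptive error at startup.
const REQUIRED_ENV_VARS = [
  'NEXT_PUBLIC_SUPABASE_URL',
  'NEXT_PUBLIC_SUPABASE_ANON_KEY',
  'SUPABASE_SERVICE_ROLE_KEY',
  'OPENAI_API_KEY',
  'ANTHROPIC_API_KEY',
] as const;

export function validateEnv(env: Record<string, string | undefined>): void {
  const missing = REQUIRED_ENV_VARS.filter((key) => !env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing required environment variables: ${missing.join(', ')}`);
  }
}
```

Call `validateEnv(process.env)` once at app startup so a missing key fails loudly instead of surfacing as a confusing runtime error deep in an API route.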
Phase 2: Database Schema with pgvector
Set up your Supabase database with the pgvector extension for vector similarity search. Run this SQL in your Supabase SQL editor:
-- Enable the pgvector extension
CREATE EXTENSION IF NOT EXISTS vector;
-- Create products table with embedding column
CREATE TABLE products (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
description TEXT NOT NULL,
price DECIMAL(10, 2) NOT NULL,
category TEXT NOT NULL,
image_url TEXT,
embedding vector(1536), -- OpenAI embedding dimension
created_at TIMESTAMPTZ DEFAULT now(),
updated_at TIMESTAMPTZ DEFAULT now()
);
-- Create index for fast similarity search
CREATE INDEX products_embedding_idx
ON products
USING ivfflat (embedding vector_cosine_ops)
WITH (lists = 100);
-- Function to search similar products
CREATE OR REPLACE FUNCTION match_products(
query_embedding vector(1536),
match_count INT DEFAULT 5,
match_threshold FLOAT DEFAULT 0.7
)
RETURNS TABLE (
id UUID,
name TEXT,
description TEXT,
price DECIMAL,
category TEXT,
image_url TEXT,
similarity FLOAT
)
LANGUAGE plpgsql
AS $$
BEGIN
RETURN QUERY
SELECT
p.id,
p.name,
p.description,
p.price,
p.category,
p.image_url,
1 - (p.embedding <=> query_embedding) AS similarity
FROM products p
WHERE 1 - (p.embedding <=> query_embedding) > match_threshold
ORDER BY p.embedding <=> query_embedding
LIMIT match_count;
END;
$$;
The IVFFlat index clusters existing rows, so build it after loading your data; it needs roughly 100+ rows to cluster properly with lists = 100. For development with fewer products, you can skip the index creation or use HNSW instead: USING hnsw (embedding vector_cosine_ops)
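To build intuition for what match_products returns: the <=> operator computes cosine distance, so 1 - distance is cosine similarity (1 for vectors pointing the same way, 0 for orthogonal vectors). A minimal TypeScript sketch of the same similarity measure:

```typescript
// Cosine similarity between two equal-length vectors:
// 1 for identical direction, 0 for orthogonal, -1 for opposite.
export function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error('Vectors must have the same length');
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

This is exactly why the SQL function both filters on `1 - (embedding <=> query)` and orders by `embedding <=> query`: smallest distance means highest similarity.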
Phase 3: Embedding Generation
Create the embedding service that converts product descriptions into vectors. This is the core of the recommendation engine.
import OpenAI from 'openai';
import { createClient } from '@supabase/supabase-js';
const openai = new OpenAI({
apiKey: process.env.OPENAI_API_KEY,
});
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!
);
interface Product {
id: string;
name: string;
description: string;
category: string;
}
/**
* Generate embedding for a single product
* Combines name, description, and category for richer semantics
*/
export async function generateEmbedding(product: Product): Promise<number[]> {
const textToEmbed = `
Product: ${product.name}
Category: ${product.category}
Description: ${product.description}
`.trim();
const response = await openai.embeddings.create({
model: 'text-embedding-3-small',
input: textToEmbed,
});
return response.data[0].embedding;
}
/**
* Batch generate embeddings for multiple products
* Includes rate limiting and error handling
*/
export async function batchGenerateEmbeddings(
products: Product[],
batchSize = 100
): Promise<{ success: number; failed: string[] }> {
const failed: string[] = [];
let success = 0;
// Process in batches to avoid rate limits
for (let i = 0; i < products.length; i += batchSize) {
const batch = products.slice(i, i + batchSize);
const embedPromises = batch.map(async (product) => {
try {
const embedding = await generateEmbedding(product);
// Update product with embedding
const { error } = await supabase
.from('products')
.update({ embedding })
.eq('id', product.id);
if (error) throw error;
return true;
} catch (err) {
failed.push(product.id);
return false;
}
});
const results = await Promise.all(embedPromises);
success += results.filter(Boolean).length;
// Rate limiting: wait between batches
if (i + batchSize < products.length) {
await new Promise(resolve => setTimeout(resolve, 1000));
}
}
return { success, failed };
}
Phase 4: Similarity Search
Build the API route that performs vector similarity search to find related products. This uses the pgvector function we created earlier.
import { NextRequest, NextResponse } from 'next/server';
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL!,
process.env.SUPABASE_SERVICE_ROLE_KEY!
);
export async function GET(
request: NextRequest,
{ params }: { params: { productId: string } }
) {
const { productId } = params;
const limit = parseInt(
request.nextUrl.searchParams.get('limit') || '5'
);
try {
// Get the source product's embedding
const { data: product, error: productError } = await supabase
.from('products')
.select('embedding')
.eq('id', productId)
.single();
if (productError || !product?.embedding) {
return NextResponse.json(
{ error: 'Product not found or has no embedding' },
{ status: 404 }
);
}
// Find similar products using pgvector
const { data: recommendations, error: searchError } = await supabase
.rpc('match_products', {
query_embedding: product.embedding,
match_count: limit + 1, // fetch one extra because the source product matches itself
match_threshold: 0.5,
});
if (searchError) {
throw searchError;
}
// Filter out the source product
const filtered = (recommendations ?? [])
.filter((p: any) => p.id !== productId)
.slice(0, limit);
return NextResponse.json({
recommendations: filtered,
count: filtered.length,
});
} catch (error) {
console.error('Recommendation error:', error);
return NextResponse.json(
{ error: 'Failed to get recommendations' },
{ status: 500 }
);
}
}
Phase 5: AI-Powered Descriptions
Use Claude to generate personalized recommendation descriptions that explain why products are related, making recommendations feel more natural and helpful.
import Anthropic from '@anthropic-ai/sdk';
const anthropic = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
});
interface Product {
name: string;
description: string;
category: string;
price: number;
}
interface RecommendationWithReason {
product: Product;
reason: string;
}
/**
* Generate personalized recommendation descriptions using Claude
*/
export async function generateRecommendationReasons(
sourceProduct: Product,
recommendations: Product[]
): Promise<RecommendationWithReason[]> {
const productList = recommendations
.map((p, i) => `${i + 1}. ${p.name} (${p.category}) - ${p.description}`)
.join('\n');
const response = await anthropic.messages.create({
model: 'claude-3-5-sonnet-20241022',
max_tokens: 1024,
messages: [
{
role: 'user',
content: `A customer is viewing: "${sourceProduct.name}"
(${sourceProduct.category}) - ${sourceProduct.description}
Based on this, we're recommending these products:
${productList}
For each recommended product, write a brief, friendly reason (1 sentence,
max 15 words) explaining why it pairs well with the viewed product.
Focus on complementary use cases, style matching, or bundle value.
Format your response as JSON array:
[{"index": 1, "reason": "..."}, {"index": 2, "reason": "..."}, ...]`,
},
],
});
// Parse Claude's response
const textContent = response.content[0];
if (textContent.type !== 'text') {
throw new Error('Unexpected response type');
}
const reasons = JSON.parse(textContent.text);
return recommendations.map((product, i) => ({
product,
reason: reasons.find((r: any) => r.index === i + 1)?.reason ||
'You might also like this',
}));
}
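The bare JSON.parse above will throw if Claude wraps the array in prose or a markdown code fence, which happens occasionally. A defensive sketch (the helper name is ours, not part of the guide's code) that extracts just the array portion:

```typescript
// Hypothetical helper: extract the first JSON array from a model response,
// tolerating markdown code fences or surrounding prose.
export function extractJsonArray(text: string): unknown[] {
  const start = text.indexOf('[');
  const end = text.lastIndexOf(']');
  if (start === -1 || end <= start) {
    throw new Error('No JSON array found in model response');
  }
  return JSON.parse(text.slice(start, end + 1));
}
```

Swapping this in for the direct JSON.parse call makes the parsing step far less brittle, and pairs well with the retry advice in the troubleshooting section.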
Phase 6: UI Components
Build the frontend components to display recommendations. Use v0.dev to generate the initial components, then customize them for your needs.
'use client';
import { useEffect, useState } from 'react';
import { ProductCard } from './ProductCard';
interface Product {
id: string;
name: string;
description: string;
price: number;
image_url: string;
similarity: number;
reason?: string;
}
interface Props {
productId: string;
title?: string;
}
export function RecommendationCarousel({ productId, title = 'You might also like' }: Props) {
const [recommendations, setRecommendations] = useState<Product[]>([]);
const [loading, setLoading] = useState(true);
const [error, setError] = useState<string | null>(null);
useEffect(() => {
async function fetchRecommendations() {
try {
const res = await fetch(`/api/recommendations/${productId}?limit=6`);
if (!res.ok) throw new Error('Failed to fetch');
const data = await res.json();
setRecommendations(data.recommendations);
} catch (err) {
setError('Failed to load recommendations');
} finally {
setLoading(false);
}
}
fetchRecommendations();
}, [productId]);
if (loading) {
return (
<div className="py-8">
<h2 className="text-2xl font-bold mb-6">{title}</h2>
<div className="grid grid-cols-2 md:grid-cols-3 lg:grid-cols-6 gap-4">
{[...Array(6)].map((_, i) => (
<div key={i} className="animate-pulse">
<div className="bg-gray-700 aspect-square rounded-lg" />
<div className="h-4 bg-gray-700 rounded mt-3 w-3/4" />
<div className="h-4 bg-gray-700 rounded mt-2 w-1/2" />
</div>
))}
</div>
</div>
);
}
if (error || recommendations.length === 0) {
return null;
}
return (
<section className="py-8">
<h2 className="text-2xl font-bold mb-6">{title}</h2>
<div className="grid grid-cols-2 md:grid-cols-3 lg:grid-cols-6 gap-4">
{recommendations.map((product) => (
<ProductCard
key={product.id}
product={product}
showSimilarity
/>
))}
</div>
</section>
);
}
Common Issues & Solutions
Here are the most common issues you might encounter and how to solve them:
If you see "expected 1536 dimensions but got X", ensure you're using text-embedding-3-small, which outputs 1536 dimensions. Other models differ: text-embedding-ada-002 is also 1536, but text-embedding-3-large outputs 3072.
If vector searches are slow, ensure your IVFFlat index is built (requires 100+ rows). For smaller datasets in development, use HNSW indexing instead or remove the index entirely.
The OpenAI embeddings API is rate limited. Implement exponential backoff and batch your requests (around 100 per batch, with a short delay between batches).
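A minimal sketch of exponential backoff (the helper name and defaults are ours, not part of the guide's code) that you could wrap around any flaky API call:

```typescript
// Retry a failing async call with doubling delays: 1s, 2s, 4s, ...
// Rethrows the last error once maxRetries is exhausted.
export async function withBackoff<T>(
  fn: () => Promise<T>,
  maxRetries = 5,
  baseDelayMs = 1000
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

For example, inside the batch loop from Phase 3 you could replace the direct call with `await withBackoff(() => generateEmbedding(product))` so transient 429s are retried instead of landing in the failed list.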
Other tips for debugging:
- Empty recommendations: Check that your match_threshold isn't too high. Start with 0.3 and increase gradually.
- Claude JSON parsing errors: Add retry logic with temperature variation. Sometimes Claude needs a second attempt.
- Supabase connection issues: Verify your service role key has permissions and isn't the anon key.
Next Steps
Congratulations! You now have a working AI-powered recommendation engine. Here are some ways to extend and improve it:
- Add user behavior tracking: Incorporate click-through rates and purchase history to improve recommendations over time
- Implement caching: Cache embeddings and popular recommendation queries with Redis to reduce API costs and latency
- A/B test Claude prompts: Experiment with different prompt styles for recommendation reasons to optimize click-through rates
- Add category filtering: Allow users to filter recommendations by category while maintaining semantic relevance
- Build a feedback loop: Let users rate recommendations and use that data to fine-tune similarity thresholds
For production deployments, consider using Supabase Edge Functions to run the embedding generation closer to the database, reducing latency and improving reliability.
Follow the Vibe Coding Enthusiast
Follow JD — product updates on LinkedIn, personal takes on X.