Changelog

What's New

All the latest updates, improvements, and new features.

v0.4.4

UI Components

Shared component library expanded from 22 to 50 shadcn/ui primitives plus 13 new application-level components for loading states, status display, confirmations, and page layouts.

Component library expansion

The shared component library in packages/ui/ has been expanded from 22 to 50 pre-configured shadcn/ui components. New additions include Accordion, Avatar, Calendar, Carousel, Checkbox, Collapsible, Command, Drawer, Input OTP, Menubar, Pagination, Radio Group, Resizable, Scroll Area, Separator, Slider, Switch, Table, Tabs, Toggle, Toggle Group, and Typography with consistent spacing, border radius, and color token usage.

The dashboard now includes an automatic breadcrumb navigation bar that reflects the current route hierarchy.

Extracted shared components

Seven components were extracted from repeated patterns across the dashboard into dedicated, reusable modules.

Three layout wrappers reduce boilerplate for common page patterns: LegalPage wraps the privacy, terms, and imprint pages with consistent structure. DashboardFeaturePage combines feature-gate checking, Suspense boundaries, and LoadingSpinner into a single wrapper used by the four AI feature pages. CenteredPage provides the centered layout shared by error, not-found, and logout pages. CollapsibleSection adds a toggle with item count and clear action, extracted from the content and image generation history components.

New utility components

Six new components extend the UI toolkit with common interaction patterns. CopyButton renders a copy-to-clipboard button that swaps from a Copy icon to a Check icon with a two-second timeout. StatusBadge displays semantic status indicators with five variants (success, warning, error, info, neutral). StatCard is a KPI display card that accepts an icon, optional badge, formatted value, and children content.

ConfirmDialog wraps the shadcn/ui AlertDialog with variant support for destructive and warning confirmation flows including a loading state. LoadingButton extends the shadcn/ui Button with an integrated loading spinner that auto-disables during async operations. CreditCost renders an inline Coins icon with cost text for displaying AI feature credit costs.
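As an illustration of the variant pattern behind a component like StatusBadge, the mapping might look like the sketch below — the Tailwind class strings are assumptions, not the shipped styles:

```typescript
// Illustrative variant-to-class mapping for a StatusBadge-style
// component. The five variants come from the text; the classes are
// hypothetical.
type BadgeVariant = "success" | "warning" | "error" | "info" | "neutral";

const VARIANT_CLASSES: Record<BadgeVariant, string> = {
  success: "bg-emerald-100 text-emerald-800",
  warning: "bg-amber-100 text-amber-800",
  error: "bg-red-100 text-red-800",
  info: "bg-blue-100 text-blue-800",
  neutral: "bg-muted text-muted-foreground",
};

function badgeClass(variant: BadgeVariant): string {
  return VARIANT_CLASSES[variant];
}
```

Centralizing the variant map keeps every badge consistent and makes adding a sixth variant a one-line change.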

Sascha Rahn
v0.4.3

Dashboard Polish & Responsive Audit

Complete responsive audit across all dashboard views. Credits page redesigned, transaction history refined, spacing system unified, and loading states optimized.

Dashboard Structure

The Showcase section has been removed from the dashboard. Sidebar navigation, quick-actions bar, account dropdown, and credit badge were updated to reflect the simplified structure. The dashboard landing page now features product-relevant quick-action cards covering AI Suite, Credits & Usage, Subscription Plan, and Quick Start.

Credits Page Redesign

The credit overview component received a full layout overhaul. Stat cards use a responsive grid that adapts from stacked mobile layout to a four-column desktop view. The bonus packages section switches between horizontal scroll on mobile and a fixed grid on larger screens. The transaction history table now uses a shared grid-column constant across headers, data rows, and skeleton placeholders to guarantee vertical alignment. Date formatting was changed to a compact MAR 09 | 11:00 style, and the column order was refined to Date, Description, Amount, Type, and Balance.

Sascha Rahn
v0.4.2

Blog & Changelog System

Integrated blog and changelog system built on MDX with Shiki syntax highlighting, Zod-validated frontmatter, and RSS 2.0 feeds.

Package architecture

The blog and changelog system ships as the @nextsaasai/blog package with a dual-export structure. Server-side operations — filesystem reading, MDX compilation, frontmatter parsing — export from @nextsaasai/blog. Client-side React components — timeline layouts, search, table of contents — export from @nextsaasai/blog/client. This separation keeps server-only dependencies (Node.js fs, MDX compilers) out of client bundles entirely.

Content files use MDX with YAML frontmatter validated by Zod schemas at build time. Invalid or missing fields produce clear error messages instead of silent rendering failures. Syntax highlighting runs through Shiki with dual-theme support, applying separate themes for light and dark mode without runtime theme switching overhead.

Blog features

Blog posts are organized by filesystem directories that map directly to navigation categories. Each category gets its own filtered view and RSS 2.0 feed. Detail pages include a social sharing dropdown with support for X/Twitter, LinkedIn, Facebook, WhatsApp, Email, and clipboard copying — plus native share API integration on supported devices.

Changelog features

The changelog uses a vertical timeline layout. Entries render their full content inline on the list page — no click-through required to read an update. Each entry carries a version number and date. A quick variant exists for minor updates that need only a paragraph or two, while the full variant supports headings, code blocks, and collapsible <Details> sections for longer release notes. Contributor avatars appear on each entry, linking updates to the team members who shipped them.

Sascha Rahn
v0.4.1

Documentation

Comprehensive product documentation built on Markdoc, covering every boilerplate feature from authentication to deployment.

Documentation architecture

The documentation system is built as a dedicated @nextsaasai/docs package using Markdoc for content authoring. Markdoc was chosen over MDX for documentation because it provides strict schema validation for custom components and keeps content files free of executable code. Documentation pages are hosted on the marketing site but document the boilerplate product — users access them at /docs alongside the marketing pages.

Content coverage

Every major feature of the boilerplate has a corresponding documentation section: authentication setup and configuration, payment integration with Lemon Squeezy, AI provider configuration, security hardening, database schema management, and deployment workflows. API references are auto-generated from the codebase to stay in sync with the implementation.

Custom Markdoc components handle recurring documentation patterns — callouts for warnings and tips, tabbed sections for multi-environment instructions, and code groups for showing related files side by side. These components render as React elements but are authored in plain Markdoc syntax, keeping the content accessible to non-developers.

Sascha Rahn
v0.4.0

Boilerplate Refactor

Comprehensive code quality refactor across the entire boilerplate — modular file splitting, barrel exports, and strict TypeScript compliance.

Why a dedicated refactor release

As the boilerplate grew through feature additions, several files exceeded 150 lines and accumulated mixed responsibilities. This release is a zero-feature, architecture-only pass that splits oversized modules into focused units, introduces barrel exports for cleaner import paths, and enforces consistent separation of concerns across the entire codebase. The minor version bump reflects the scope — nearly every directory was touched.

What changed

Files are now organized by responsibility: components, hooks, utilities, and types each live in dedicated files rather than being co-located in monolithic modules. Barrel exports (index.ts) provide stable public APIs for each directory, so internal restructuring does not break downstream imports. TypeScript strict mode compliance was tightened — remaining any types were replaced with proper generics or narrowed union types.

Import ordering follows a consistent convention throughout: React imports first, then Next.js, third-party libraries, @/ aliased paths, and finally relative imports. This ordering is enforced by ESLint and applies to every file in the project.

Sascha Rahn
v0.3.7

Content Generator

Template-based AI content creation with SSE streaming, configurable providers, and credit-based usage tracking.

The content generator provides structured templates for common writing tasks — blog posts, marketing copy, product descriptions, and more. Each template defines a system prompt and expected output format, so users get consistent results without prompt engineering. Output streams to the client via Server-Sent Events, rendering text in real time as the model generates it.

Credit deduction happens per generation based on the template category. The provider backing each generation is configurable at the application level, allowing operators to route requests to OpenAI, Anthropic, Google, or xAI depending on cost and quality requirements.

Sascha Rahn
v0.3.6

Image Generation

AI image generation via OpenAI GPT Image with configurable output settings and credit-based billing.

Image generation integrates OpenAI's GPT Image API as a first-class feature. Three model variants are available — standard (gpt-image-1), premium (gpt-image-1.5 with faster generation and better text rendering), and budget (gpt-image-1-mini). Users select a model, size, and quality preset, write a prompt, and receive a generated image within seconds. Each generation deducts credits based on the selected configuration, with higher resolutions and quality levels costing proportionally more.

Generated images are uploaded to Vercel Blob storage automatically, so they persist beyond the generation session and can be referenced later. The entire flow — prompt submission, API call, storage, and credit deduction — runs as a single server action with proper error handling at each step.

Sascha Rahn
v0.3.5

AI Chat UI

Polished chat interface with streaming SSE responses, two-phase loading states, and audio input for speech-to-text.

Streaming with a custom SSE parser

Chat responses stream via Server-Sent Events with a purpose-built parser that handles real-world network conditions. A dedicated SSEStreamError class distinguishes server errors from JSON parse failures, preventing the catch-all anti-pattern where legitimate error responses get silently swallowed. The SSELineBuffer class handles TCP packet boundary splitting — when a single SSE line arrives across multiple reader.read() calls, the buffer reassembles it before parsing.

Server errors embedded in the SSE stream (e.g., {"error": "rate limit exceeded"}) surface immediately in the UI instead of producing empty chat bubbles. Both string and object error formats are handled.
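A minimal sketch of the two classes described above — the names match the text, but the implementation details are illustrative rather than the shipped code:

```typescript
// Typed error so server errors in the stream are distinguishable from
// JSON parse failures, instead of both landing in one catch-all.
class SSEStreamError extends Error {
  constructor(message: string, readonly payload?: unknown) {
    super(message);
    this.name = "SSEStreamError";
  }
}

// Reassembles SSE lines that arrive split across TCP packet
// boundaries: complete lines are returned, a trailing partial line is
// held until the next chunk.
class SSELineBuffer {
  private partial = "";

  push(chunk: string): string[] {
    this.partial += chunk;
    const parts = this.partial.split("\n");
    this.partial = parts.pop() ?? "";
    return parts.filter((line) => line.length > 0);
  }
}

// A data line carrying a server error surfaces as a typed error
// instead of producing an empty chat bubble.
function parseDataLine(line: string): string {
  const payload = JSON.parse(line.replace(/^data: /, ""));
  if (payload && typeof payload === "object" && "error" in payload) {
    throw new SSEStreamError(String((payload as { error: unknown }).error), payload);
  }
  return String((payload as { token?: unknown }).token ?? "");
}
```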

Two-phase loading

The loading indicator operates in two distinct phases. Phase A covers the gap between the user pressing send and the server creating the assistant message — a standalone loading block appears below the user's message. Phase B begins once the assistant message exists and streaming starts, switching to an inline streaming indicator within the message bubble. This prevents the jarring flash of an empty bubble that single-phase loaders produce.
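The phase decision reduces to a small pure function. This is a sketch with assumed state names, not the actual component logic:

```typescript
// Derives the current loading phase from two booleans, mirroring the
// two-phase behavior described above. State field names are assumed.
type LoadingPhase = "idle" | "awaiting-assistant" | "streaming";

function resolvePhase(state: {
  requestPending: boolean;         // user pressed send, request in flight
  assistantMessageExists: boolean; // server has created the assistant message
}): LoadingPhase {
  if (!state.requestPending) return "idle";
  // Phase A: standalone loading block below the user's message.
  if (!state.assistantMessageExists) return "awaiting-assistant";
  // Phase B: inline streaming indicator inside the message bubble.
  return "streaming";
}
```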

Audio input

Speech-to-text input uses the MediaRecorder API to capture audio directly in the browser. Recordings are transcribed server-side and injected into the chat input. The implementation uses a session counter (monotonic useRef integer) instead of a boolean ref for async cancellation — this prevents race conditions when recording sessions overlap, where a stale promise from session N could incorrectly proceed in session N+1.
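The session-counter idea can be sketched independently of React — the class below is an illustration of the technique, with the useRef integer modeled as a plain field:

```typescript
// Monotonic session counter for async cancellation. Each new recording
// bumps the counter; callbacks from an older session check staleness
// before touching shared state.
class RecordingSessions {
  private current = 0;

  begin(): number {
    return ++this.current; // never reused, never reset
  }

  isStale(sessionId: number): boolean {
    return sessionId !== this.current;
  }
}
```

A boolean ref cannot distinguish "session N was cancelled" from "session N+1 is active", which is exactly the overlap race the counter avoids: a stale promise holding ID N sees `isStale(N) === true` once session N+1 begins, and bails out.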

Sascha Rahn
v0.3.4

Demo Mode

Zero-config demo environment with simulated authentication and mock data for showcasing every feature without external services.

Environment-aware switching

Demo mode activates based on a single environment variable. When enabled, the application replaces all external service calls with local simulations — no Clerk keys, no database connection, no payment provider needed. The entire product is explorable from a cold clone.

Authentication is simulated with a mock user session that mirrors the real Clerk user object. Protected routes, role checks, and session guards all work identically to production. Users see realistic data for subscriptions, credit balances, file uploads, and AI responses without any backend infrastructure.

MSW-powered API mocking

All API calls in demo mode are intercepted by MSW (Mock Service Worker) handlers running in the browser. Handlers cover the full API surface — subscription status, credit operations, file storage, and AI streaming responses. The mocking layer sits at the network level, so components and hooks behave exactly as they would against real endpoints.

This makes demo mode useful beyond sales presentations. During local development, contributors can work on UI features without configuring external services. Onboarding a new team member takes minutes instead of hours.

Sascha Rahn
v0.3.3

AI Vision & PDF Chat

Image analysis via multimodal models and PDF document chat with automatic content chunking.

Vision capabilities

Users can upload images directly into the chat and receive analysis from multimodal models. The image is sent as a base64-encoded payload alongside the text prompt, letting the model reason about visual content — diagrams, screenshots, charts, handwritten notes. No separate vision API or pre-processing pipeline required.

PDF document chat

PDF chat follows a three-step pipeline: upload, parse, and converse. Documents are parsed server-side using pdf-parse, then split into chunks sized for the model's context window. The chunked content is injected as context for subsequent chat messages, allowing users to ask questions about specific sections without re-uploading.

Credits are deducted after pre-processing (parsing and chunking) but before streaming begins. This ensures users are only charged for successfully processed documents while preventing abuse through repeated upload attempts.

Turbopack compatibility

The pdf-parse library depends on pdfjs-dist, which uses dynamic worker imports that break Turbopack's static analysis. A createRequire loader module pattern wraps the native require() call in a separate file that Turbopack ignores, avoiding runtime failures without falling back to Webpack.

Sascha Rahn
v0.3.2

AI Multi-Provider Integration

Switch between OpenAI, Anthropic, Google, and xAI with a single environment variable — no code changes required.

Provider abstraction

The AI layer is built on the Vercel AI SDK v4.3 and abstracts away provider differences behind a unified interface. Switching between OpenAI, Anthropic, Google, and xAI requires changing a single environment variable — the rest of the application remains untouched.

# Switch providers without code changes
AI_PROVIDER=openai    # GPT-5, GPT-4.1, o3, o4-mini
AI_PROVIDER=anthropic # Claude Opus 4.5, Claude Sonnet 4.5, Claude Haiku 4.5
AI_PROVIDER=google    # Gemini 2.5 Pro, Gemini 2.5 Flash, Gemini 3 Flash Preview
AI_PROVIDER=xai       # Grok 4.1 Fast Reasoning

Reasoning model support

Reasoning models like GPT-5 and the o-series require different parameter handling than standard models. The system detects reasoning models automatically and adjusts accordingly — temperature is skipped (these models ignore it) and maxTokens is increased to 16384 to leave room for both internal reasoning and visible output. Without this adjustment, reasoning tokens consume the entire budget and the response comes back empty.
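A hedged sketch of that adjustment — the detection patterns and the non-reasoning defaults are assumptions; only the 16384-token budget comes from the text:

```typescript
// Adjusts request parameters for reasoning models: temperature is
// omitted (these models ignore it) and the token budget is raised so
// internal reasoning does not starve the visible output.
interface ChatParams {
  model: string;
  temperature?: number;
  maxTokens: number;
}

// Assumed prefix patterns, e.g. gpt-5, o3, o4-mini.
const REASONING_MODEL_PATTERNS = [/^gpt-5/, /^o\d/];

function buildChatParams(model: string): ChatParams {
  const isReasoning = REASONING_MODEL_PATTERNS.some((re) => re.test(model));
  if (isReasoning) {
    return { model, maxTokens: 16384 };
  }
  // Illustrative defaults for standard models.
  return { model, temperature: 0.7, maxTokens: 4096 };
}
```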

Feature flags

Each AI capability — LLM chat, RAG chat, vision, PDF analysis — is controlled by an independent feature flag. This allows gradual rollout, A/B testing, or disabling specific features without affecting others. Flags are checked at both the UI and API layer to prevent unauthorized access to disabled features.

Sascha Rahn
v0.3.1

Payment System

Lemon Squeezy integration with webhook-driven subscription lifecycle management covering checkout, billing, and tier upgrades.

Subscription lifecycle via webhooks

Payments run through Lemon Squeezy. Checkout sessions are created server-side and redirect users to a hosted payment page. After purchase, subscription management happens entirely through the Lemon Squeezy Customer Portal — signed URLs are generated on demand and remain valid for 24 hours. No custom billing UI to maintain.

Ten webhook events cover the complete subscription lifecycle. Every incoming webhook is verified using SVIX signature verification before processing. Events flow through a single endpoint that maps each event type to the corresponding database update, ensuring subscription state stays consistent without polling.

const WEBHOOK_EVENTS = [
  'subscription_created',
  'subscription_updated',
  'subscription_cancelled',
  'subscription_resumed',
  'subscription_expired',
  'subscription_paused',
  'subscription_unpaused',
  'subscription_payment_failed',
  'subscription_payment_success',
  'subscription_payment_recovered',
] as const
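A sketch of how a single endpoint might map these events to a subscription status — the event names come from the list above, but the status values and the one-to-one mapping are illustrative (the real handlers perform database updates):

```typescript
// Illustrative event-name → internal-status mapping for the single
// webhook endpoint described above.
const EVENT_TO_STATUS: Record<string, string> = {
  subscription_created: "active",
  subscription_updated: "active",
  subscription_cancelled: "cancelled",
  subscription_resumed: "active",
  subscription_expired: "expired",
  subscription_paused: "paused",
  subscription_unpaused: "active",
  subscription_payment_failed: "past_due",
  subscription_payment_success: "active",
  subscription_payment_recovered: "active",
};

function handleWebhook(eventName: string): string {
  const status = EVENT_TO_STATUS[eventName];
  if (status === undefined) {
    // Unknown events are rejected rather than silently acknowledged.
    throw new Error(`unhandled webhook event: ${eventName}`);
  }
  return status;
}
```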

Tier-based pricing

Three tiers — Basic, Pro, and Enterprise — each available in monthly and yearly billing intervals. Tier configuration maps Lemon Squeezy variant IDs to internal plan identifiers, so pricing changes on the Lemon Squeezy dashboard propagate without code changes.
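The variant-ID lookup might be shaped like this — the numeric IDs below are placeholders, but the structure shows why pricing changes on the dashboard need no code changes as long as the IDs stay stable:

```typescript
// Hypothetical Lemon Squeezy variant-ID → internal plan mapping.
// The IDs are placeholders; the plan/interval shape mirrors the text.
type Plan = "basic" | "pro" | "enterprise";
type Interval = "monthly" | "yearly";

const VARIANT_TO_PLAN: Record<number, { plan: Plan; interval: Interval }> = {
  111111: { plan: "basic", interval: "monthly" },
  111112: { plan: "basic", interval: "yearly" },
  222221: { plan: "pro", interval: "monthly" },
  222222: { plan: "pro", interval: "yearly" },
  333331: { plan: "enterprise", interval: "monthly" },
  333332: { plan: "enterprise", interval: "yearly" },
};

function resolvePlan(variantId: number): { plan: Plan; interval: Interval } | null {
  return VARIANT_TO_PLAN[variantId] ?? null;
}
```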

Sascha Rahn
v0.3.0

Monorepo Architecture

Migration to Turborepo with pnpm workspaces and shared packages for consistent cross-app code reuse.

The project has been restructured from a single Next.js application into a Turborepo monorepo with pnpm workspaces. Before the migration, shared logic (utilities, UI components, email templates) was copy-pasted between applications. Changes had to be duplicated manually, and version drift was inevitable.

After the migration, shared code lives in dedicated packages with scoped exports. A utility function is written once and imported consistently across all applications:

// Before: copy-pasted into each app
import { cn } from '@/lib/utils'

// After: single source of truth
import { cn } from '@nextsaasai/utils'

The build system uses Turborepo's task graph to parallelize builds and cache results. Shared packages are built first, then applications consume them as workspace dependencies. Customer delivery preserves the full monorepo structure -- customers receive the same workspace layout used in development.

This is a minor version bump to 0.3.0 because the migration changes the project's fundamental structure, import paths, and build pipeline.

Sascha Rahn
v0.2.3

Color Themes

Nine color themes switchable via a single environment variable, with automatic dark mode support.

The boilerplate ships with nine built-in color themes: default, ocean, forest, sunset, midnight, coral, slate, aurora, and crimson. Switching themes requires setting the COLOR_THEME environment variable -- no code changes, no rebuild of component files.

Each theme defines its palette as CSS custom properties, and all components use semantic Tailwind classes (bg-primary, text-muted-foreground) instead of hardcoded color values. Dark mode is handled automatically: every theme includes both light and dark variants, applied via the prefers-color-scheme media query and Tailwind's dark: prefix.

Sascha Rahn
v0.2.2

Security Infrastructure

Multi-layered API security with rate limiting via Upstash Redis, CORS protection, input sanitization, and security headers.

The boilerplate now includes a comprehensive security layer that protects every API route. Rate limiting is handled by Upstash Redis with category-based limits, so upload-heavy endpoints have tighter thresholds than general API calls. Each category tracks requests independently using sliding-window counters.

Category    Limit
Upload      10 / hour
Email       5 / hour
Payments    20 / hour
API         100 / hour
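An in-memory stand-in for the Upstash-backed sliding-window counters — a sketch of the technique with the per-category limits above, not the production implementation:

```typescript
// Sliding-window rate limiter sketch. Production uses Upstash Redis;
// this in-memory version illustrates the independent per-category
// counters described above.
type Category = "upload" | "email" | "payments" | "api";

const HOURLY_LIMITS: Record<Category, number> = {
  upload: 10,
  email: 5,
  payments: 20,
  api: 100,
};

class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();
  constructor(private windowMs = 60 * 60 * 1000) {}

  allow(userId: string, category: Category, now = Date.now()): boolean {
    const key = `${userId}:${category}`;
    const cutoff = now - this.windowMs;
    // Drop timestamps that have slid out of the window.
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= HOURLY_LIMITS[category]) {
      this.hits.set(key, recent);
      return false;
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```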

CORS protection uses environment-aware origin allowlists. In development, localhost origins are permitted. In production, only the configured domain is accepted -- no wildcards. All API inputs pass through a server-side sanitization layer that strips potential XSS payloads using regex-based detection before any data reaches the database or is rendered in responses.
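The origin check can be sketched as a small pure function — the production domain below is a placeholder, since the real allowlist comes from configuration:

```typescript
// Environment-aware CORS origin check: localhost in development,
// exact-match only (no wildcards) in production.
function isAllowedOrigin(
  origin: string,
  env: "development" | "production",
  productionOrigin = "https://app.example.com", // placeholder domain
): boolean {
  if (env === "development") {
    // Any localhost port is acceptable during local development.
    return /^https?:\/\/localhost(:\d+)?$/.test(origin);
  }
  return origin === productionOrigin;
}
```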

Security headers are applied globally via Next.js middleware. The header set includes Content-Security-Policy, Strict-Transport-Security, X-Frame-Options, X-Content-Type-Options, and Referrer-Policy. All API route inputs are validated with Zod schemas before processing, providing runtime type safety at the boundary between client and server.

Sascha Rahn
v0.2.1

Credit System

Usage-based billing with tier-specific credit allocations and pre-deduction before AI streaming.

Every AI operation now costs a defined number of credits. Credits are deducted before the streaming response begins, ensuring users cannot exceed their allocation mid-request. If a user lacks sufficient credits, the request is rejected with a clear error before any provider call is made.

Each subscription tier receives a monthly credit allocation that resets on the billing cycle. Unused credits do not roll over. The allocation scales with the tier to match expected usage patterns, from lightweight exploration on the free plan to high-volume production workloads on enterprise.

Tier        Credits / Month
Free        20
Basic       100
Pro         1,000
Enterprise  10,000

The credit cost per operation varies by model and feature. RAG queries cost more than simple chat completions because they involve embedding generation and vector retrieval in addition to the LLM call.
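A sketch of the pre-deduction check — the per-feature costs below are invented for illustration, since the text does not state the actual numbers:

```typescript
// Illustrative per-feature credit costs and the pre-deduction check.
// Costs are assumptions; only the "RAG costs more than chat" ordering
// comes from the text.
const FEATURE_COSTS: Record<string, number> = {
  chat: 1,
  rag: 3, // embedding generation + vector retrieval + LLM call
  vision: 2,
  pdf: 2,
};

function deductCredits(
  balance: number,
  feature: string,
): { ok: boolean; balance: number } {
  const cost = FEATURE_COSTS[feature];
  if (cost === undefined) throw new Error(`unknown feature: ${feature}`);
  if (balance < cost) {
    // Rejected before any provider call is made.
    return { ok: false, balance };
  }
  return { ok: true, balance: balance - cost };
}
```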

Sascha Rahn
v0.2.0

RAG System

Document upload and context-aware chat powered by vector embeddings and retrieval-augmented generation.

The boilerplate now ships with a fully integrated RAG pipeline. Users can upload PDF and plain-text documents, which are automatically chunked, embedded, and stored in Pinecone. When a user asks a question in the AI chat, the system performs a semantic search against the vector index, retrieves the most relevant document fragments, and injects them as context into the LLM prompt before generating a response.

Document processing uses a sliding-window chunking strategy with configurable overlap to preserve context across chunk boundaries. Each chunk is embedded via the configured provider and stored alongside metadata (source filename, page number, chunk index) so that responses can cite their origin. The retrieval step ranks results by cosine similarity and applies a relevance threshold to avoid injecting noise.
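The sliding-window chunking strategy can be sketched as follows — character-based for illustration, where a production pipeline would chunk by tokens and attach the metadata described above to each chunk:

```typescript
// Sliding-window chunker with configurable overlap, so context is
// preserved across chunk boundaries.
function chunkText(text: string, size: number, overlap: number): string[] {
  if (overlap >= size) throw new Error("overlap must be smaller than chunk size");
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += size - overlap) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```

With size 4 and overlap 2, each chunk repeats the last two characters of its predecessor, so a sentence split mid-chunk still appears whole in at least one window.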

This is a minor version bump because RAG represents a major new capability surface. It touches the upload API, background processing, vector storage, and the chat completion pipeline. The architecture is provider-agnostic: swapping Pinecone for another vector store requires changing a single adapter.

Sascha Rahn
v0.1.4

File Upload

File storage via Vercel Blob with upload validation, rate limiting, and EU data residency support.

File uploads are handled through Vercel Blob, which provides edge-distributed object storage with a straightforward API. The upload endpoint validates MIME types against an allowlist and enforces file size limits before writing to storage. Rate limiting via Upstash Redis prevents abuse.
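The validation step might look like the sketch below — the allowlist entries and the 10 MB cap are illustrative values, not the configured limits:

```typescript
// Upload validation sketch: MIME allowlist plus a size cap, checked
// before anything is written to storage.
const ALLOWED_MIME = new Set([
  "image/png",
  "image/jpeg",
  "image/webp",
  "application/pdf",
]);
const MAX_UPLOAD_BYTES = 10 * 1024 * 1024; // assumed 10 MB cap

function validateUpload(file: { type: string; size: number }):
  | { ok: true }
  | { ok: false; reason: string } {
  if (!ALLOWED_MIME.has(file.type)) return { ok: false, reason: "unsupported type" };
  if (file.size > MAX_UPLOAD_BYTES) return { ok: false, reason: "file too large" };
  return { ok: true };
}
```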

Vercel Blob supports EU data residency configuration, keeping uploaded files within European infrastructure when compliance requires it. Stored files are served through Vercel's CDN with automatic cache headers, so repeated access does not hit the origin.

Sascha Rahn
v0.1.3

Email Service

Transactional email via Resend with React Email templates and webhook-based delivery tracking.

Email delivery uses Resend as the transactional provider. Templates are built with React Email, which means email markup is written as React components with full TypeScript support — no separate templating language or HTML string concatenation.

Delivery status is tracked via Resend webhooks, providing visibility into bounces, complaints, and successful deliveries. A local preview route renders email templates in the browser during development, so layout and content can be iterated without sending actual messages.

Sascha Rahn
v0.1.2

Database

PostgreSQL database layer via Supabase with Prisma ORM for type-safe schema-driven development.

The database layer combines Supabase-managed PostgreSQL with Prisma ORM. Prisma generates TypeScript types directly from the schema file, so every query is type-checked at compile time. No manual type definitions for database entities — the schema is the single source of truth.

Development follows a schema-first workflow: define models in schema.prisma, run migrations, and the generated client updates automatically. Server Components query the database directly without an intermediate API layer, reducing latency and eliminating redundant fetch calls. Connection pooling is handled by Supabase, keeping the application stateless.

Sascha Rahn
v0.1.1

Authentication

Clerk v6 integration with hash-based routing, middleware protection, and environment-aware demo mode.

Clerk v6 integration

Authentication uses Clerk v6 with hash-based routing for login and registration (/login#, /register#). This approach avoids dedicated route segments for auth pages and keeps the URL structure clean. The Clerk middleware runs on every request, using auth.protect() to gate protected routes before they reach the handler.

Routes are split into public and protected segments via the middleware matcher. API routes under /(api|trpc)(.*) are automatically protected — unauthenticated requests receive a 404 response (Clerk's rewrite behavior, not the route handler). Dashboard routes require a valid session. Public routes like the marketing pages and legal pages pass through without checks.

Environment-aware switching

The authentication layer switches between real and test mode based on environment variables. In development, a demo mode provides pre-configured test credentials so local iteration does not require a live Clerk instance. In production, all auth flows route through Clerk's hosted components with multi-provider support, MFA, and session management.

Sascha Rahn
v0.1.0

Project Kickoff

The first commit of nextsaas.ai — an AI-native SaaS boilerplate designed to be developed with AI coding tools and to ship production-ready AI features to end users.

Why another boilerplate

Every SaaS boilerplate on the market claims to be "AI-compatible." The problem is that every codebase is technically AI-compatible — that bar is meaningless. What none of them offered was the ecosystem: token tracking, credit-based billing for AI operations, multi-provider integration with automatic fallback, document processing pipelines, or a pricing model that accounts for variable AI costs.

nextsaas.ai was built to solve both sides of the AI equation. First, the boilerplate itself is designed to be developed with AI coding tools like Claude Code — with version-locked dependencies, comprehensive documentation that serves as AI context, and specialized commands and agents for SaaS development. Second, it ships a complete AI infrastructure for end users: multi-provider support, RAG, credit-based billing, and everything needed to run AI features in production.

Version-lock strategy

The stack choices are deliberate. Next.js 14 instead of 15. Tailwind CSS 3 instead of 4. ESLint 8 instead of 9. This is not conservatism — it is an AI development strategy. AI coding tools perform best when they have extensive training data coverage for the frameworks they work with. Stable, well-documented versions with large community adoption produce better AI-assisted code than bleeding-edge releases with sparse documentation.

Every dependency had to pass two tests: would you bet a paying product on it today, and does your AI coding tool know it well enough to write production-quality code with it?

Initial stack

Next.js 14 with the App Router provides the routing and rendering layer. TypeScript runs in strict mode. Tailwind CSS handles styling without runtime overhead. Prisma paired with Supabase PostgreSQL delivers type-safe database access with managed infrastructure. Clerk handles authentication. pnpm keeps dependency installation fast and disk-efficient.

Sascha Rahn