Wexts 4: the verified starter path

Build Next.js + NestJS apps in one production runtime.

Wexts gives you generated RPC, a single Fastify production server, Vercel Build Output support, and application-layer protection through Wexts Shield.

npx wexts create my-app
/rpc → Wexts RPC
/api → NestJS
/* → Next.js
Verified stack: Next.js 16 · NestJS 11 · React 19 · Fastify 5 · TypeScript 5.9 · Vercel Output API

Why Wexts

Stop stitching two frameworks together by hand.

Next.js and NestJS are powerful together, but production wiring often becomes a private framework. Wexts makes the boundary explicit.

Common friction

  • Separate frontend/backend wiring
  • Duplicated API clients and contracts
  • CORS, proxy, and port drift
  • Unclear Vercel vs VPS deployment paths
  • Runtime defaults that are hard to audit

Generated RPC

Explicit services produce a deterministic manifest and typed client.

One production server

Fastify serves Next, Nest, RPC, health checks, and Shield on one port.

CLI workflow

Create, generate, build, doctor, start, and Vercel output through one CLI.

Wexts Shield

Application-layer controls run before Next, Nest, and RPC.

Runtime

One Fastify entrypoint. Three clear route surfaces.

Production traffic enters one runtime. Wexts Shield runs first, then routes split to generated RPC, Nest, or Next.

Request path

Browser
  ↓
Fastify runtime
  /rpc → Wexts RPC (manifest-driven service calls)
  /api → NestJS (backend routes and modules)
  /* → Next.js (frontend routes and assets)
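The split above amounts to a prefix dispatch: the two specific surfaces are checked first, and everything else falls through to Next.js. A dependency-free sketch (illustrative only; the real runtime wires Fastify handlers, not this function):

```typescript
// Illustrative only: how one runtime splits traffic across three surfaces.
type Surface = 'rpc' | 'nest' | 'next';

// Order matters: specific prefixes are checked before the catch-all.
function routeSurface(path: string): Surface {
  if (path === '/rpc' || path.startsWith('/rpc/')) return 'rpc';  // Wexts RPC
  if (path === '/api' || path.startsWith('/api/')) return 'nest'; // NestJS
  return 'next';                                                  // Next.js catch-all
}

console.log(routeSurface('/rpc/hello.sayHello')); // rpc
console.log(routeSurface('/api/health'));         // nest
console.log(routeSurface('/about'));              // next
```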

RPC

Typed calls without pretending every controller is automatic.

Wexts uses explicit decorators and generated files. The runtime reads manifests; it does not scan source files at request time.
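The manifest format itself is internal to Wexts, so the following is only a hypothetical sketch of what a generated entry could look like for the service below; every field name here is an assumption, not the real schema:

```typescript
// Hypothetical shape of a generated RPC manifest (illustration only;
// the real Wexts manifest format is internal and may differ).
interface RpcMethodEntry {
  name: string;       // method name exposed on the typed client
  paramCount: number; // arity, usable for call validation
}

interface RpcServiceEntry {
  name: string;         // from @RpcService({ name })
  requireAuth: boolean; // auth flag carried over from the decorator
  methods: RpcMethodEntry[];
}

// What a generator could emit for the HelloService shown below.
const manifest: RpcServiceEntry[] = [
  { name: 'hello', requireAuth: false, methods: [{ name: 'sayHello', paramCount: 1 }] },
];

console.log(manifest[0].methods[0].name); // sayHello
```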

apps/api/src/hello.service.ts
import { Injectable } from '@nestjs/common';
import { RpcMethod, RpcService } from 'wexts/nest';

@Injectable()
@RpcService({ name: 'hello', requireAuth: false })
export class HelloService {
  @RpcMethod()
  async sayHello(name: string): Promise<string> {
    return `Hello, ${name}!`;
  }
}
apps/web/app/page.tsx
'use client';

import { useWexts } from '@/lib/wexts-provider';

export default function Page() {
  const wexts = useWexts();

  async function run() {
    await wexts.hello.sayHello('Bob');
  }

  return <button onClick={run}>Say hello</button>;
}
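Wexts generates the `wexts` client, so the code below is only a dependency-free sketch of how a manifest-driven typed client *could* turn `wexts.hello.sayHello('Bob')` into a call against the `/rpc` surface; the transport and wire format are assumptions, not Wexts API:

```typescript
// Sketch only: property access on two nested proxies becomes an RPC call.
type Transport = (service: string, method: string, args: unknown[]) => Promise<unknown>;

function createRpcClient(transport: Transport) {
  // client.<service>.<method>(...args) becomes transport(service, method, args).
  return new Proxy({} as Record<string, any>, {
    get: (_target, service) =>
      new Proxy({}, {
        get: (_inner, method) =>
          (...args: unknown[]) => transport(String(service), String(method), args),
      }),
  });
}

// Stand-in transport; a real one would POST to the /rpc surface.
const fakeTransport: Transport = async (service, method, args) =>
  `${service}.${method}(${JSON.stringify(args)})`;

const client = createRpcClient(fakeTransport);
client.hello.sayHello('Bob').then(console.log); // hello.sayHello(["Bob"])
```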

CLI

A starter path you can actually verify.

The default create command generates the clean Wexts starter. Legacy compatibility templates stay behind an explicit flag.

$ npx wexts create my-app
$ cd my-app
$ pnpm install
$ pnpm run dev

# release checks
$ pnpm run generate
$ pnpm run build
$ pnpm run doctor
$ pnpm run doctor:security

Deploy

Two supported production targets, named clearly.

Use the Node runtime where long-running processes matter. Use the Vercel Build Output path when the platform model fits your application.

VPS / Node

Single Fastify runtime with /health, /api/health, /rpc, /api, and Next routes.

wexts start -c ./wexts.runtime.js
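The `-c` flag points at the runtime config; its real schema ships with the generated starter. Purely as a hypothetical illustration, where every key is an assumption rather than documented Wexts configuration, such a file might look like:

```javascript
// wexts.runtime.js (hypothetical example)
// Every key below is an illustrative assumption, not documented Wexts
// configuration; check the generated starter for the real schema.
module.exports = {
  port: 3000, // one Fastify port for /rpc, /api, and Next routes
  shield: {
    rateLimit: { max: 100, windowMs: 60000 }, // per-window request cap
    bodyLimit: 1048576, // request body cap in bytes (1 MiB)
  },
};
```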

Vercel

Build Output API support for Vercel deployments. Validate serverless behavior per app.

wexts vercel-build

Wexts Shield

Application-layer protection before your routes run.

Shield is intentionally scoped. It improves application behavior but does not replace provider or network-level defenses.

  • Headers
  • CORS
  • CSRF
  • Rate limits
  • Body limits
  • Concurrency
  • Audit redaction

Application-layer protection. Network-level DDoS still requires provider/WAF protection.
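The "Shield runs first" ordering can be pictured as a small check chain that every request passes before any route handler. A dependency-free sketch; the check names and shapes are illustrative, not Shield's API:

```typescript
// Sketch: ordered application-layer checks run before routing.
type Check = (req: { path: string; bodyBytes: number }) => string | null; // null = pass

const bodyLimit = (maxBytes: number): Check => (req) =>
  req.bodyBytes > maxBytes ? 'payload too large' : null;

const blockPaths = (prefixes: string[]): Check => (req) =>
  prefixes.some((p) => req.path.startsWith(p)) ? 'forbidden' : null;

// The first failing check short-circuits the request.
function runShield(checks: Check[], req: { path: string; bodyBytes: number }): string | null {
  for (const check of checks) {
    const rejection = check(req);
    if (rejection) return rejection;
  }
  return null; // request continues to /rpc, /api, or Next routing
}

const checks = [bodyLimit(1048576), blockPaths(['/.env'])];
console.log(runShield(checks, { path: '/api/health', bodyBytes: 120 })); // null
```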

Honest limits

Production-focused does not mean magic.

Wexts is strongest when the runtime model is validated explicitly in your deployment environment.

  • Development mode can use separate web and API processes.
  • Vercel mode is serverless and should be validated per application.
  • WebSockets need a VPS or another long-running runtime.
  • Distributed rate limits need a shared store such as Redis.
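The last point can be made concrete: write the limiter against a small counter-store interface, default to in-memory for a single process, and swap in a Redis-backed implementation of the same interface (e.g. INCR plus EXPIRE) for multi-instance deployments. A dependency-free sketch; the interface is an assumption, not Wexts API:

```typescript
// Fixed-window rate limiter with a pluggable counter store.
interface CounterStore {
  incr(key: string): Promise<number>; // returns the new count for the window
}

// Per-process counts; a Redis store would share them across instances.
class MemoryStore implements CounterStore {
  private counts = new Map<string, number>();
  async incr(key: string): Promise<number> {
    const next = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, next);
    return next;
  }
}

async function allow(store: CounterStore, ip: string, windowMs: number, max: number): Promise<boolean> {
  // Key includes the window index so counts reset each window.
  const windowKey = `${ip}:${Math.floor(Date.now() / windowMs)}`;
  return (await store.incr(windowKey)) <= max;
}

const store = new MemoryStore();
allow(store, '203.0.113.9', 60000, 2).then(console.log); // true
```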

Start verified

Create the starter, inspect the runtime, ship with the checklist.