10 min read

Migrating My Blog from Nextra to TanStack Start

tanstack-start
nextra
nextjs
vercel
migration
seo

So recently I shifted my blog from Nextra (Next.js 16) to TanStack Start, and honestly, I should've done this months ago. This post covers why I moved, what TanStack Start gives me instead, and the two migration headaches that ate most of a weekend, so you don't have to learn them the same way 😤.

Let's get into it.


Why I moved off Nextra 🤔

Nextra is great if you're shipping a docs site for a project. For a personal blog where you want full control, it gets in your way. Here's what finally pushed me out.

1. Confusing docs and too much magic

The docs felt like they were written assuming you already knew Nextra. Things "just worked" until they didn't, and then you were left digging through the source to understand why. I lost more time to "wait, where does this setting actually come from?" than I did to writing posts.

2. Tailwind doesn't apply to the <Head> component

Even after you've wrestled Tailwind into working (which is its own adventure; I wrote about that part separately), it still doesn't apply to the site header. The header (the bar where your logo, nav, and search live) is rendered by Nextra's <Head> component, and it takes the theme color as numeric props:

jsx
<Head color={{ hue: 280, saturation: 100, lightness: { dark: 55, light: 45 } }} />

That's the actual API. Want a brand color on your header? Convert it to HSL components and pass them as numbers. Want different colors for light vs dark mode? Pass nested objects. Want to use the same Tailwind color token you're already using everywhere else? You can't. I mean seriously: the rest of your site is bg-brand-500 and the header is { hue: 280, saturation: 100, lightness: 45 }. They're not the same color, and now you've got two sources of truth for a brand color that's supposed to be one thing.

Fun fact: when I first set up this blog on Nextra, the Tailwind side of things hit me on day one. My very first blog post ever was literally the workaround for getting Tailwind to work at all in a Nextra app. That should tell you something.

3. Every Nextra blog looks the same

You can customize, but the cost is high. The theme is opinionated about layout, typography, and the sidebar/header structure. By the time you've overridden enough to make it yours, you're maintaining a fork of the theme. I wanted opinionated primitives, not an opinionated result.

4. Built-in components lock you in

Nextra ships components like <Tabs>, <Steps>, and <Callout>, and they're convenient, right up until you want to leave. Every MDX file that uses them is now something you have to rewrite. The more you lean on them, the harder it is to ever move off Nextra.


What TanStack Start fixed for me 🎁

The short version:

  • File routes with real type safety: createFileRoute('/blog/$slug') and TypeScript actually knows what params.slug is
  • A head() function per route: meta tags, OG tags, structured data, all in one place, computed from your loader data
  • Vite + Nitro for deploys: instant HMR locally, deploy to Vercel (or anywhere Nitro supports) without the next.config headache
  • Tailwind v4 with zero special wiring: the whole site, header included
  • Plain Markdown files: I import them with import.meta.glob('../content/posts/*.md', { query: '?raw' }) and parse with gray-matter. No MDX runtime, no component dependencies, just text
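That last bullet in practice: import.meta.glob hands you raw strings, and gray-matter splits the frontmatter from the body. The real code just calls matter(raw); purely for illustration, here's a dependency-free sketch of the same split (parseFrontmatter is my stand-in, and it only handles flat key: value frontmatter, not the full YAML gray-matter supports):

```typescript
// Minimal stand-in for what gray-matter does to each raw .md string:
// split a leading `---` frontmatter block from the Markdown body.
export function parseFrontmatter(raw: string): {
  data: Record<string, string>
  content: string
} {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n?/)
  if (!match) return { data: {}, content: raw }

  const data: Record<string, string> = {}
  for (const line of match[1].split('\n')) {
    const idx = line.indexOf(':')
    if (idx === -1) continue
    data[line.slice(0, idx).trim()] = line.slice(idx + 1).trim()
  }
  return { data, content: raw.slice(match[0].length) }
}

// In the blog itself, the raw strings come from Vite:
// const files = import.meta.glob('../content/posts/*.md', {
//   query: '?raw',
//   import: 'default',
//   eager: true,
// })
```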

Migrating the content was the easy part: .mdx to .md was mostly stripping the imports. The hard parts were the things Next.js used to give me for free.


The two real migration headaches

Headache 1: Caching and on-demand revalidation 🔄

This was the one I was most nervous about.

On Nextra/Next.js I was exposing two API endpoints:

  • /api/blogs - returns my post list as JSON. My portfolio site fetches this to show "recent writing".
  • /api/revalidate - a webhook I hit after publishing so the cached response on /api/blogs flushes and the new post shows up.

In Next.js this was a few lines: export const revalidate = 7776000 on the cached route, plus revalidateTag('posts') in the revalidate handler. Done.

In TanStack Start, there's no revalidate export. The server route is just a handler that returns a Response. So how do you get cached responses with on-demand invalidation?

The answer is @vercel/functions, a small npm package Vercel ships that exposes the same edge primitives Next.js uses internally, now available to any framework. Before showing the code, it's worth understanding how Vercel handles your app; the rest makes a lot more sense once you see the model.

How Vercel handles your TanStack Start app

When you deploy a TanStack Start app with the Nitro vercel preset, Nitro doesn't generate one serverless function per route file like Next.js does. It generates one catch-all function at .vercel/output/functions/__server.func/ that contains your full router. Vercel's edge then has a routing config that says:

plaintext
/assets/*    -> serve from static CDN
filesystem   -> any prerendered HTML file
/(.*)        -> invoke /__server (the catch-all)

So when a request comes in for /api/blogs, the edge first checks its cache. If there's a HIT, the response is served directly from the CDN and your function never runs. If it's a MISS, the edge invokes __server.func, which routes internally to your api.blogs.ts handler, returns a response, and Vercel caches it based on your Cache-Control headers. Next request? HIT, and the function stays cold.

That's the model. Your function only runs on cache miss. A 90-day s-maxage means the function runs roughly once per quarter (or once per invalidation).
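You can watch this model from the outside: Vercel reports the cache outcome for each request in an x-vercel-cache response header (HIT, MISS, STALE, and so on). A small helper for poking at it; the function name and descriptions are mine, only the header and its values come from Vercel:

```typescript
// Translate Vercel's x-vercel-cache header into what happened at the edge.
// The header values are Vercel's; the descriptions are paraphrased.
export function describeEdgeCache(headerValue: string | null): string {
  switch (headerValue?.toUpperCase()) {
    case 'HIT':
      return 'served from edge cache; your function never ran'
    case 'MISS':
      return 'not in cache; function invoked, response now cached'
    case 'STALE':
      return 'stale entry served; function regenerating in background'
    case 'PRERENDER':
      return 'served from a build-time prerendered file'
    case 'BYPASS':
      return 'cache intentionally skipped for this request'
    default:
      return 'no edge cache involved (or not deployed on Vercel)'
  }
}

// Usage against a deployed endpoint (URL is a placeholder):
// const res = await fetch('https://your-site.example/api/blogs')
// console.log(describeEdgeCache(res.headers.get('x-vercel-cache')))
```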

The cache-tag mechanism

Vercel-Cache-Tag is a response header. The edge reads it as metadata, stores it alongside the cached entry, and strips it before sending the response to the browser. So your visitors never see it; only Vercel's edge knows the tag exists. Later, when you call invalidateByTag('blog-posts'), Vercel scans its edge cache, marks every entry tagged blog-posts as stale, and the next request triggers a background revalidation.

There's a sister API too, addCacheTag('blog-posts') from @vercel/functions, which does the same thing as setting the header, but from inside the handler. Use whichever fits the code shape. I prefer the header because it keeps caching declarative.

Cache tag rules worth knowing (took me a while to find these):

  • Up to 128 tags per cached response
  • 256 bytes max per tag (UTF-8)
  • Tag names cannot contain commas (the header is comma-separated)

So you can tag aggressively. Common pattern: tag each post page with both a per-post tag and a global tag, so you can invalidate one OR all:

plaintext
Vercel-Cache-Tag: post-some-slug, blog-posts
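Those limits are easy to enforce in one place. A hypothetical helper (the names are mine, the limits are from the list above) that builds the header value for a post page and throws if a tag breaks the rules:

```typescript
// Vercel's documented cache-tag limits.
const MAX_TAGS = 128
const MAX_TAG_BYTES = 256

// Join tags into a Vercel-Cache-Tag header value, validating each tag.
export function cacheTagHeader(tags: string[]): string {
  if (tags.length > MAX_TAGS) {
    throw new Error(`too many tags: ${tags.length} > ${MAX_TAGS}`)
  }
  for (const tag of tags) {
    if (tag.includes(',')) {
      throw new Error(`tag "${tag}" contains a comma (the header is comma-separated)`)
    }
    if (new TextEncoder().encode(tag).length > MAX_TAG_BYTES) {
      throw new Error(`tag "${tag}" exceeds ${MAX_TAG_BYTES} bytes`)
    }
  }
  return tags.join(', ')
}

// Per-post tag plus the global tag, so one OR all can be invalidated.
export function postCacheTags(slug: string): string {
  return cacheTagHeader([`post-${slug}`, 'blog-posts'])
}
```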

invalidateByTag vs dangerouslyDeleteByTag

@vercel/functions exposes two invalidation APIs:

  • invalidateByTag(tag) - marks entries as stale. The next request serves stale-while-revalidate and regenerates in the background. Use this almost always.
  • dangerouslyDeleteByTag(tag) - deletes entries outright. The next request blocks until regeneration. On a high-traffic page this causes a cache stampede (every concurrent request hits your origin at once). The "dangerously" in the name is real; only reach for it if you know your traffic pattern.

OK, now the code.

The pattern is two pieces. First, on the cached endpoint, set a Cache-Control for the edge and tag the response:

src/routes/api.blogs.ts
// src/routes/api.blogs.ts
import { createFileRoute } from '@tanstack/react-router'
import { getAllPostMeta } from '@/lib/posts'
 
export const Route = createFileRoute('/api/blogs')({
  server: {
    handlers: {
      GET: () =>
        new Response(
          JSON.stringify({ status: 'success', data: getAllPostMeta() }),
          {
            status: 200,
            headers: {
              'Content-Type': 'application/json',
              'Cache-Control': 'public, s-maxage=7776000, stale-while-revalidate',
              'Vercel-Cache-Tag': 'blog-posts',
            },
          },
        ),
    },
  },
})

Then the revalidate endpoint calls invalidateByTag:

src/routes/api.revalidate.ts
// src/routes/api.revalidate.ts
import { createFileRoute } from '@tanstack/react-router'
import { invalidateByTag } from '@vercel/functions'
 
export const Route = createFileRoute('/api/revalidate')({
  server: {
    handlers: {
      POST: async ({ request }) => {
        const auth = request.headers.get('authorization')
        const secret = auth?.startsWith('Bearer ') ? auth.slice(7) : undefined
 
        if (secret !== process.env.REVALIDATION_SECRET) {
          return new Response(
            JSON.stringify({ message: 'Invalid token' }),
            { status: 401, headers: { 'Content-Type': 'application/json' } },
          )
        }
 
        await invalidateByTag('blog-posts')
 
        return new Response(
          JSON.stringify({ revalidated: true, tag: 'blog-posts' }),
          { status: 200, headers: { 'Content-Type': 'application/json' } },
        )
      },
    },
  },
})

Hit this with Authorization: Bearer <secret> and every edge-cached response tagged blog-posts is marked stale. The next request to /api/blogs runs the function again and the cache repopulates. Same UX as Next.js's revalidateTag, just spelled out via headers.
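The publish-side call is just an authenticated POST. A sketch of the script I'd run after publishing; SITE_URL and the env var names are placeholders, not part of the blog's actual setup:

```typescript
// Build the request for the revalidation webhook.
export function revalidateRequest(siteUrl: string, secret: string): Request {
  return new Request(`${siteUrl}/api/revalidate`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${secret}` },
  })
}

// Usage (Node 18+, which ships fetch/Request globally):
// const res = await fetch(
//   revalidateRequest(process.env.SITE_URL!, process.env.REVALIDATION_SECRET!),
// )
// A 200 with { revalidated: true } means the tagged entries were flushed.
```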

The full lifecycle, in case it helps:

plaintext
1. Publish post -> POST /api/revalidate with bearer
2. invalidateByTag('blog-posts') -> Vercel marks tagged entries stale
3. Next visitor hits /api/blogs:
     - Edge sees the entry is stale
     - Serves the stale response immediately (stale-while-revalidate)
     - Invokes __server.func in the background to regenerate
     - Updated response replaces the stale entry in cache
4. Subsequent visitors -> HIT on the fresh entry

The visitor who triggered the regeneration doesn't wait; they get the slightly stale response instantly. Background revalidation makes the cache eventually consistent without ever showing a loading state.

The infra gotcha that lost me an hour: Nitro's preset controls how your server gets bundled. My vite.config.ts had nitro({ preset: 'bun' }) and vercel.json was forcing outputDirectory: .output/public, which is a static-only deploy. Vercel never deployed a serverless function at all, so /api/blogs was returning 404 in production with no useful error. The page routes worked because they were prerendered.

The fix is one line: switch the preset.

vite.config.ts
// vite.config.ts
nitro({ preset: 'vercel' })

And drop the outputDirectory override from vercel.json so Vercel auto-detects Nitro's .vercel/output Build Output API layout. Then the function deploys, the tagging works, and you can invalidate from anywhere.

Honestly, if you hit 404s on your TanStack Start API routes after deploying to Vercel, check this first.

Headache 2: SEO from scratch 🔍

This one had me worried going in. Next.js gives you a metadata export, a sitemap generator, a robots.ts, and treats SEO as a first-class API. TanStack Start gives you a head() function and a friendly nod.

Turns out you have everything; you just wire it yourself. Each piece is small. Here's what I built:

Meta tags per route. Every route exports a head() that returns { meta, links, scripts }:

tsx
export const Route = createFileRoute('/blog/$slug')({
  loader: async ({ params }) => ({ post: await getPostBySlug(params.slug) }),
  head: ({ loaderData, params }) => ({
    ...seo({
      title: loaderData.post.title,
      description: loaderData.post.description,
      image: loaderData.post.cover,
      path: `/blog/${params.slug}`,
      type: 'article',
    }),
    scripts: [
      jsonLdScript(articleSchema({ ... })),
      jsonLdScript(breadcrumbSchema({ ... })),
    ],
  }),
})

The seo() helper is a small function in src/lib/seo.ts that returns the OG / Twitter / canonical meta tag array. Wrote it once, every route now gets full SEO with one call.
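For reference, a seo() helper in this spirit. It's a sketch rather than my exact file: the SITE constant and the exact meta set are assumptions, but the return shape matches what head() spreads in:

```typescript
// Sketch of src/lib/seo.ts: one call returns the meta/link arrays a
// route's head() spreads in. SITE is a placeholder for the real domain.
const SITE = 'https://example.com'

interface SeoInput {
  title: string
  description: string
  path: string
  image?: string
  type?: 'website' | 'article'
}

export function seo({ title, description, path, image, type = 'website' }: SeoInput) {
  const url = `${SITE}${path}`
  return {
    meta: [
      { title },
      { name: 'description', content: description },
      { property: 'og:title', content: title },
      { property: 'og:description', content: description },
      { property: 'og:url', content: url },
      { property: 'og:type', content: type },
      ...(image
        ? [
            { property: 'og:image', content: image },
            { name: 'twitter:card', content: 'summary_large_image' },
          ]
        : [{ name: 'twitter:card', content: 'summary' }]),
      { name: 'twitter:title', content: title },
      { name: 'twitter:description', content: description },
    ],
    links: [{ rel: 'canonical', href: url }],
  }
}
```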

Structured data. jsonLdScript() wraps a JSON-LD object in a <script type="application/ld+json">. I use it for Article, Breadcrumb, and Person schema. The Google rich-results test was happy after maybe 30 minutes of tweaking the schema shapes.
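jsonLdScript() itself is tiny. The sketch below assumes head() accepts inline scripts as { type, children } entries (that's the shape TanStack's head() takes for inline scripts), and the trimmed-down articleSchema() is my illustration, not the full schema I ship:

```typescript
// Wrap a JSON-LD object in the inline <script> entry that head() renders.
export function jsonLdScript(schema: Record<string, unknown>) {
  return {
    type: 'application/ld+json',
    children: JSON.stringify(schema),
  }
}

// A minimal Article schema builder, trimmed to a few core fields.
export function articleSchema(p: {
  title: string
  description: string
  url: string
  datePublished: string
}) {
  return {
    '@context': 'https://schema.org',
    '@type': 'Article',
    headline: p.title,
    description: p.description,
    url: p.url,
    datePublished: p.datePublished,
  }
}
```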

Sitemap, RSS, robots.txt, llms.txt. These are all just server routes that return text:

ts
// src/routes/sitemap[.]xml.ts
export const Route = createFileRoute('/sitemap.xml')({
  server: {
    handlers: {
      GET: () => {
        const posts = getAllPostMeta()
        const xml = buildSitemap(posts)
        return new Response(xml, {
          headers: { 'Content-Type': 'application/xml; charset=utf-8' },
        })
      },
    },
  },
})
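buildSitemap() is the only piece with any real logic, and it's still just string templating. A sketch, assuming a SITE constant and a minimal PostMeta shape (both are stand-ins for the real ones):

```typescript
const SITE = 'https://example.com'

interface PostMeta {
  slug: string
  date: string // ISO yyyy-mm-dd, used as <lastmod>
}

// Static pages plus one <url> entry per post.
export function buildSitemap(posts: PostMeta[]): string {
  const urls = [
    `  <url><loc>${SITE}/</loc></url>`,
    `  <url><loc>${SITE}/blog</loc></url>`,
    ...posts.map(
      (p) =>
        `  <url><loc>${SITE}/blog/${p.slug}</loc><lastmod>${p.date}</lastmod></url>`,
    ),
  ]
  return [
    '<?xml version="1.0" encoding="UTF-8"?>',
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">',
    ...urls,
    '</urlset>',
  ].join('\n')
}
```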

Same pattern for rss.xml, robots.txt, llms.txt. Each one is short. They get prerendered at build time via the TanStack Start config:

ts
tanstackStart({
  prerender: { enabled: true, crawlLinks: true },
  pages: [
    { path: '/rss.xml' },
    { path: '/llms.txt' },
    { path: '/robots.txt' },
    { path: '/sitemap.xml' },
  ],
}),

So they're served as static files in production โ€” no function invocation, no cost.

Pagefind. Full-text search. The one thing that survived the migration unchanged. I just pointed the indexer at the new build output:

bash
pagefind --site .vercel/output/static --output-subdir pagefind

The lesson here: I went into the SEO chunk expecting it to take weeks. It took an afternoon once I stopped looking for a single magic export and started treating each concern as its own small piece.


Conclusion 🎉

If you're sitting on a Nextra blog and wondering whether to make the move, and you want full control over your site without fighting the theme, TanStack Start is worth a weekend. The two real costs are figuring out @vercel/functions for caching and rebuilding your SEO bits yourself. Both are solved problems.

If you're stuck somewhere in this migration or you've found a better approach to one of these pieces, drop it in the comments. Always happy to learn.

Bye for now .....