In the React world, debates about frameworks usually orbit production performance, SEO, or deployment convenience. But inside many engineering teams, the daily pain is simpler and more visceral: how fast local development feels. How long it takes to boot the app, load the first route, and iterate without breaking flow.

That is where TanStack Start has begun to attract serious attention in 2026, positioning itself as a full-stack framework for React and Solid built on TanStack Router and Vite, with full-document SSR, streaming, and “server functions” as first-class concepts. It’s still labeled RC (Release Candidate), but the momentum is hard to ignore: the TanStack ecosystem highlights 5,879,508,080 npm downloads, 118,918 GitHub stars, 3,005 contributors, and 1,304,800 dependents across its libraries, while TanStack Start itself lists 51,866,525 npm downloads, 13,413 GitHub stars, and 676 contributors.

The conversation got louder after a notable case study from Inngest—a company building durability for serverless and event-driven workflows—shared why they migrated off Next.js and moved their UI to TanStack Start. Their headline number is the kind that makes engineering leaders stop scrolling: local dev time reduced by 83%.

When “Full-Stack React” Becomes a Workflow Tax

Inngest’s story is familiar to teams that adopted Next.js early and enthusiastically. They went “all in” on the App Router while it was still in beta, migrated from Vite quickly, and bought into React Server Components (RSC) as the future. On paper, the promise was compelling: fewer blank loading states, fewer SPA network waterfalls, nested layouts, streaming out of the box, and consolidation into one framework.

In practice, they found the tradeoffs weren’t evenly distributed across the team.

Next.js can shine when there is a dedicated front-end group living inside its conventions every day. For smaller teams—especially those where most engineers wear multiple hats—the cognitive overhead can compound. Inngest points to the friction around “use client” / “use server” directives, layered caching behaviors, and unclear boundaries between RSC and client components. Engineers who weren’t spending most of their week in front-end land felt like they were fighting the framework instead of shipping product.
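A rough sketch of that friction (hypothetical modules, not Inngest’s actual code): in the App Router, the server/client split hangs on a directive string at the top of a file, so nothing at the call site distinguishes code that only runs on the server from code shipped to the browser.

```typescript
// Module A: app/page.tsx — no directive, so in the App Router this whole
// file is a server component. The fetch below never runs in the browser,
// but nothing in the code itself says so.
async function DashboardPage(): Promise<string> {
  const data = await loadStats(); // server-only by convention, not by syntax
  return `dashboard: ${data.users} users`;
}

// Module B: components/chart.tsx — a single line, `"use client";`, at the
// top would flip that entire module into the client bundle. (Shown as a
// comment here because both "files" share one snippet.)

// Stand-in data source so the sketch runs on its own.
async function loadStats(): Promise<{ users: number }> {
  return { users: 42 };
}

DashboardPage().then((html) => console.log(html));
// prints: dashboard: 42 users
```

For an engineer who dips into the front end once a week, remembering which modules carry which directive is exactly the kind of ambient overhead Inngest describes.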

The Breaking Point: Local Dev Speed

Their first mitigation attempt was reasonable: step back from RSC and lean toward client components, keeping server components minimal. That helped for a while—but then local development got slow. Not “slightly annoying” slow, but workflow-breaking slow.

Inngest described initial local page load times in the 10–12 second range. The internal mood turned into a steady drumbeat of frustration.

At that point, they tried to rescue the existing stack. They upgraded Next.js, profiled the app using Vercel tooling, and experimented with Turbopack—twice. Each attempt required dependency upgrades and refactors, and created additional operational friction because local dev and production builds weren’t aligned (they noted a period where production still relied on Webpack).

The net result: only a modest improvement—“a couple seconds” shaved off—nowhere near the reset they needed. Turbopack, at least for their codebase and workflow, wasn’t the silver bullet.

The Alternative Roundup: Fresh, React Router v7, and TanStack Start

They prototyped three options:

  • TanStack Start
  • Deno Fresh
  • React Router v7 (effectively the modern branch of the Remix approach)

Each prototype cleared basic requirements and integrations. Fresh appealed because Deno is performant and TypeScript-first with opinionated tooling. React Router felt battle-tested. But Fresh’s long gap between version 1 and 2 raised concern, and Remix’s structural changes around React Router made them pause.

TanStack Start, meanwhile, was still in RC (and still is, per their post). That would normally be a red flag. Yet Inngest chose it anyway for a reason that many teams underestimate: developer excitement matters when developer experience is the goal. They were already using other TanStack libraries and felt confident in the ecosystem’s direction.

Why TanStack Start “Feels” Different: Explicitness Over Magic

A big part of TanStack Start’s appeal is philosophical. It trades convention-heavy behavior—sometimes “magical,” sometimes confusing—for explicit route configuration and a more prescriptive approach to data loading.

Inngest illustrated this with a simple contrast:

  • In Next.js App Router, server-side data fetching often sits right next to layout code, and the only clue it’s server-side is async/await.
  • In TanStack Router/Start, the route definition typically includes a loader, and data is consumed via useLoaderData, with server-only execution pushed into explicitly server-scoped functions.
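The loader side of that contrast can be sketched in a few lines. This is a simplified, self-contained model of the contract (plain objects standing in for `createFileRoute` and `useLoaderData` from `@tanstack/react-router`, so the snippet runs without the library); the real API differs, but the shape is the point: the route declares its data dependency up front, and the component consumes it explicitly.

```typescript
type Project = { id: string; name: string };

// Stand-in for a server-scoped function; in TanStack Start this would be
// wrapped in a server-function helper so it only executes on the server.
async function fetchProjects(): Promise<Project[]> {
  return [
    { id: "p1", name: "billing" },
    { id: "p2", name: "events" },
  ];
}

// The route owns its data requirement: the loader sits next to the path,
// not buried inside layout or component code.
const projectsRoute = {
  path: "/projects",
  loader: async () => ({ projects: await fetchProjects() }),
};

// A component would read this via useLoaderData(); calling the loader
// directly here just makes the explicit data flow visible.
async function renderProjects(): Promise<string> {
  const { projects } = await projectsRoute.loader();
  return projects.map((p) => p.name).join(", ");
}

renderProjects().then((names) => console.log(names));
// prints: billing, events
```

Because the loader is part of the route definition, anyone reading the file can see what data the route needs and where it comes from, without tracing directives or component trees.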

That’s not just style. It’s a different contract with the developer: less implicit behavior, more visible structure.

The Migration Strategy: Rip the Band-Aid Off

Once they committed, Inngest faced the classic migration choice: incremental conversion or a cutover.

Incremental migration would have meant conditional routing, conditional imports, and extra infrastructure because their shared component library leaned heavily on Next.js utilities. That approach can reduce risk, but it adds complexity and stretches timelines.

They chose the opposite: a brute-force cutover.

Inngest had two Next.js “app heads”—a dev server UI and a dashboard. They started with the dev server subset and found the conversion moved fast enough to keep going. The dev server conversion took about a week. The dashboard took longer because it had more routes and complexity, but the whole effort was still described as a couple weeks of engineering work for one engineer, with assistance from AI.

They acknowledged the downside: brute-force migration creates massive pull requests that are hard to review traditionally. Their tradeoff was to lean more heavily on user acceptance testing (UAT) to validate correctness.

The Result: From 10–12 Seconds Down to 2–3 Seconds

Post-migration, Inngest reports a dramatic improvement in their day-to-day workflow:

  • Initial local loads rarely exceed 2–3 seconds, and usually only the first route pays that cost.
  • Subsequent route loads are “almost always instant.”
  • The Slack sentiment flipped from frustration to disbelief at how “snappy” it felt.

That shift is the real story. Framework decisions often get framed as architecture debates, but when local feedback loops speed up, the payoff shows up everywhere: fewer context switches, faster debugging, more willingness to refactor, and less friction for engineers who aren’t front-end specialists.

AI as a Migration Accelerator (Not an Architect)

Inngest’s approach to AI was pragmatic and controlled. They didn’t outsource design decisions. They used AI for “grunt work” route conversion—once patterns were established—then reviewed and cleaned up the output.

AI also helped with TypeScript issues and obscure bugs. The key benefit wasn’t replacing engineers, but preventing deep-dive rabbit holes from eating the schedule. Inngest claims that without AI the migration would have taken longer and carried more risk.

Their final merge and UAT blocked feature development for only two or three days, and they reported exactly one production issue serious enough to require an immediate rollback (in a tricky integration flow hard to test outside production).

The RC Question: Is TanStack Start “Ready”?

TanStack Start’s RC status is the elephant in the room. For risk-averse organizations, that’s a real constraint. But for teams where developer velocity is suffering, the calculus can change—especially if they already depend on TanStack Router, Query, or Table, and have confidence in the ecosystem.

Inngest also published their migrated UI as open source in their monorepo, making the migration tangible for teams considering a similar move.

The broader takeaway is less about declaring a “winner” and more about what’s being optimized: TanStack Start is gaining attention because it treats developer experience—especially local iteration speed—as a core product feature, not a marketing bullet point.


FAQs

Is TanStack Start a practical replacement for Next.js App Router in real production apps?
It can be, especially for teams that value explicit routing and predictable data loading over heavy conventions. The main consideration is its RC status and how comfortable the organization is with that maturity level.

What’s the biggest conceptual shift when moving from Next.js to TanStack Start?
Many teams describe it as trading “framework magic” for explicit structure: route definitions and loaders become the central source of truth, and server-only logic tends to move into clearly-scoped server functions.

How hard is a migration from Next.js to TanStack Start?
It depends on route count and shared utilities. Some teams go incremental, but Inngest reports a successful brute-force cutover in a few weeks with one engineer, supported by AI and heavy UAT.

Can AI realistically speed up a framework migration without making the codebase messy?
Yes—if it’s used for repetitive conversion tasks after patterns are established, and if humans review the output. Inngest describes AI as a way to reduce grunt work and cap time spent on complex TypeScript issues.
