Description
Update
Root cause has now been confirmed in the upstream repository, and a fix PR is open.
The confirmed trigger is concurrent metadata writes to the same LMDB path with noSync=true in the compiler metadata pipeline. In local stress testing, that combination caused worker aborts (exit 134) and, in some runs, missing metadata entries without a crash.
Summary
Upgrading @lingo.dev/compiler from 0.1.4 to 0.4.0 introduces intermittent Turbopack production build panics in a Next.js app that uses the Next integration (withLingo(...)).
The failure is random but reproducible with repeated builds. The panic consistently points at the webpack-loader execution path inside Turbopack:
```
Error [TurbopackInternalError]: failed to receive message

Caused by:
- reading packet length
- unexpected end of file

Debug info:
- Execution of get_all_written_entrypoints_with_issues_operation failed
- Execution of PlainIssue::from_issue failed
- Execution of PlainSource::from_source failed
- Execution of <WebpackLoadersProcessedAsset as Asset>::content failed
- Execution of WebpackLoadersProcessedAsset::process failed
- Execution of evaluate_webpack_loader failed
```
Environment
- @lingo.dev/compiler: 0.4.0
- Next.js: 16.1.7
- React: 19.2.4
- Node.js: 24.14.0
- pnpm: 10.32.1
- macOS (Apple Silicon)
The app uses the standard Next integration:

```js
import { withLingo } from "@lingo.dev/compiler/next";

export default async function createNextConfig() {
  return await withLingo(nextConfig, {
    sourceRoot: "src",
    lingoDir: ".lingo",
    sourceLocale: "en",
    targetLocales: [...],
    buildMode: "cache-only",
    pluralization: { enabled: false, model: ... },
  });
}
```

Build command:

```sh
LINGO_BUILD_MODE=cache-only next build --turbo
```

What I verified
I isolated this in a real app by checking historical commits and then testing package combinations in a detached worktree.
Commit-level regression window
- pre-upgrade commit (fc5c5af): 0/8 Turbopack panics
- upgrade commit (0701760): 4/8 Turbopack panics
Package isolation
On top of the regressing commit:
- baseline (@lingo.dev/compiler 0.4.0, next 16.1.7): 3/6 panics
- only downgrade @lingo.dev/compiler to 0.1.4 while keeping newer Next/tooling: 0/6 panics
- only downgrade next to 16.1.1 while keeping @lingo.dev/compiler 0.4.0: 3/6 panics
So from my testing, Next is not the primary trigger. The regression follows the Lingo compiler upgrade.
Source-code isolation
I also reverted the business/source changes from the regressing commit while keeping the upgraded dependency set. The panic still reproduced, so this does not look like a specific app component bug.
Strong suspicion
I compared the package internals between 0.1.4 and 0.4.0.
The biggest implementation difference relevant to this failure is that 0.4.0 appears to have changed metadata handling substantially:
- 0.1.4 uses JSON metadata files (metadata-build.json / metadata-dev.json)
- 0.4.0 uses LMDB-backed metadata storage and a different metadata write path during loader execution
Because the Turbopack panic consistently happens in evaluate_webpack_loader, my current hypothesis is:
the 0.4.0 Next/Turbopack loader path is sometimes crashing its worker/process during metadata writes, and Turbopack only surfaces that as failed to receive message / EOF.
I cannot prove the exact crash line without instrumenting the package, but the package-level regression and the metadata implementation change line up very closely.
Additional observation
I have another smaller Next app using @lingo.dev/compiler 0.4.0 that does not reproduce this issue in repeated builds. My current interpretation is that this is workload-sensitive: the larger app has a bigger route graph and many more translatable entries, so it triggers the issue much more reliably.
For reference:
- app that reproduces: ~248 TSX files, ~216 app files, ~1625 translatable entries
- smaller app that does not reproduce: ~179 TSX files, ~118 app files, ~991 translatable entries
Expected behavior
Repeated next build --turbo runs should be stable when using withLingo(...).
Actual behavior
Production Turbopack builds intermittently panic with failed to receive message / evaluate_webpack_loader failed.
Request
Please investigate whether the 0.4.x Next/Turbopack integration is unsafe under concurrent loader execution, especially around metadata persistence and LMDB usage.
If helpful, I can also prepare a minimized reproduction derived from this app, but the matrix above already narrowed the regression to the Lingo compiler upgrade itself.