Description
Provide environment information
System:
OS: macOS 15.6
CPU: (16) arm64 Apple M4 Max
Memory: 57.43 GB / 128.00 GB
Shell: 5.9 - /bin/zsh
Binaries:
Node: 24.7.0 - /opt/homebrew/bin/node
npm: 11.5.1 - /opt/homebrew/bin/npm
pnpm: 10.27.0 - /Users/antonioamaralegydiomartins/Developer/repos/terac/platform/node_modules/.bin/pnpm
bun: 1.3.3 - /Users/antonioamaralegydiomartins/.bun/bin/bun
Describe the bug
When running trigger.dev dev and you need to restart the dev server (Ctrl+C or opening a new terminal), the trigger-dev-run-worker child processes from the previous session are not properly terminated. These workers become orphaned processes (reparented to PID 1 - not true zombies, since they keep executing) that continue consuming significant CPU (10-25% each), even though they are no longer connected to any active dev session.
What I expected: When the parent trigger.dev dev process is terminated, all child worker processes should be killed. Starting a new dev session should not leave orphaned workers from previous sessions.
What actually happens: Worker processes from old sessions persist indefinitely, accumulating with each restart. After several restarts, I had 34 orphaned workers consuming 450%+ combined CPU.
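For illustration, the expected cleanup can be sketched as a parent shell script that traps INT/TERM/EXIT and kills its tracked children. This is a stand-in, not trigger.dev's actual supervisor code, and the sleep processes stand in for run workers:

```shell
# Illustration only - NOT trigger.dev's code. A parent that traps
# INT/TERM/EXIT and kills tracked worker children, so Ctrl+C (or any
# other exit) never leaves them running under PID 1.
cleanup() { kill $WORKER_PIDS 2>/dev/null; }
trap cleanup INT TERM EXIT

# Stand-in "workers" (long sleeps in place of trigger-dev-run-worker).
sleep 30 & WORKER_PIDS="$!"
sleep 30 & WORKER_PIDS="$WORKER_PIDS $!"
echo "spawned workers: $WORKER_PIDS"
# When this script exits, the EXIT trap kills both sleeps. Without the
# trap they would be reparented to PID 1 and keep running - the bug
# described above.
```

Whatever mechanism the CLI uses internally, the observable contract is the same: no child should survive the parent's exit.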
Reproduction repo
Not applicable - this can be reproduced in any project using trigger.dev dev. The issue is in the CLI's process cleanup, not project-specific code.
To reproduce
1. Start the dev server:
   pnpm dlx trigger.dev@4.3.2 dev
2. Trigger some tasks (they can be running or completed).
3. Stop the dev server (Ctrl+C), or kill the terminal if it has become unresponsive.
4. Open a new terminal and start the dev server again:
   pnpm dlx trigger.dev@4.3.2 dev
5. Check the running processes:
   ps aux | grep trigger-dev-run-worker
6. Observe: worker processes from the PREVIOUS session are still running as orphans, alongside any new workers from the current session.
7. Repeat steps 3-6 multiple times and watch the orphaned workers accumulate.
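The check in step 5 can be narrowed to show only the orphaned workers by filtering on parent PID 1 (a hypothetical helper, not part of the trigger.dev CLI; the process name is taken from this report, and the [r] bracket keeps the pipeline from matching itself):

```shell
# List run-worker processes whose parent is init/launchd (PPID 1),
# i.e. workers orphaned by a dead dev session.
orphans=$(ps -axo pid=,ppid=,command= \
  | awk '$2 == 1 && /trigger-dev-run-worke[r]/ { print $1 }')
if [ -n "$orphans" ]; then
  echo "orphaned workers:" $orphans
else
  echo "no orphaned workers"
fi
```

Workers still attached to a live dev session have the CLI process as their parent, so they are excluded; only leftovers from dead sessions show up.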
Additional information
Evidence - Process List (orphaned workers from previous sessions)
PID: 69294 | CPU: 22.8% | MEM: 0.3% | TIME: 3:11.26 | STARTED: 4:54PM
PID: 70058 | CPU: 20.1% | MEM: 0.3% | TIME: 3:17.06 | STARTED: 4:54PM
PID: 69698 | CPU: 19.1% | MEM: 0.3% | TIME: 3:04.85 | STARTED: 4:54PM
PID: 69250 | CPU: 19.0% | MEM: 0.3% | TIME: 3:11.41 | STARTED: 4:54PM
PID: 76510 | CPU: 18.4% | MEM: 0.3% | TIME: 1:49.24 | STARTED: 4:58PM
PID: 70131 | CPU: 17.6% | MEM: 0.2% | TIME: 1:27.72 | STARTED: 4:54PM
PID: 76039 | CPU: 16.6% | MEM: 0.3% | TIME: 1:46.70 | STARTED: 4:58PM
PID: 68077 | CPU: 15.6% | MEM: 0.3% | TIME: 3:02.45 | STARTED: 4:53PM
PID: 68205 | CPU: 15.4% | MEM: 0.3% | TIME: 3:18.22 | STARTED: 4:53PM
PID: 69976 | CPU: 14.0% | MEM: 0.3% | TIME: 3:01.42 | STARTED: 4:54PM
... (24 more orphaned workers with similar CPU usage)
Note: Workers were started at different times (4:53PM, 4:54PM, 4:58PM) corresponding to different dev server sessions - they accumulated across multiple restarts.
Workaround
pkill -f "trigger-dev-run-worker"
Related Issues
- #1609 - Prevent multiple instances of the dev command running at the same time with a file lock. Related but different: that issue is about concurrent dev sessions, while this one is about orphaned workers after a restart.
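A slightly more defensive version of the pkill workaround above verifies that the workers actually died and escalates to SIGKILL if not (a sketch; the process name comes from this report, and the [r] bracket keeps pgrep/pkill from matching the script itself):

```shell
# Kill leftover run workers, then verify none survived.
pattern='trigger-dev-run-worke[r]'
pkill -f "$pattern" 2>/dev/null || true  # pkill exits 1 if none matched
sleep 1
if pgrep -f "$pattern" >/dev/null 2>&1; then
  echo "workers still running - sending SIGKILL"
  pkill -9 -f "$pattern"
else
  echo "all workers terminated"
fi
```

Starting with plain SIGTERM gives the workers a chance to shut down cleanly; SIGKILL is only a last resort since it skips any cleanup handlers.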
Trigger.dev Version
- CLI: 4.3.2
- Worker: 20260117.34