
Conversation


@gakonst gakonst commented Feb 12, 2026

Summary

Replace the `for` loop + `pool.spawn()` pattern with `pool.spawn_broadcast()` for spawning proof workers.

Changes

  • Use `spawn_broadcast` on the storage and account proof worker pools in `proof_task.rs`, getting the worker ID from `BroadcastContext::index()`
  • Add a `Sync` bound to the `Factory` generic on `ProofWorkerHandle::new` (required by `spawn_broadcast`)
  • Propagate the `Sync` bound to callers in `mod.rs` and `multiproof.rs`

Motivation

`spawn_broadcast` guarantees exactly one task per pool thread and provides a stable per-thread index via the broadcast context, eliminating the manual loop and per-iteration clones.

Prompted by: DaniPopes

Replace the `for` loop + `pool.spawn()` pattern with
`pool.spawn_broadcast()` for spawning proof workers. This ensures
exactly one worker per pool thread using the broadcast context index
as the worker ID, removing the manual loop and per-iteration clones.

Amp-Thread-ID: https://ampcode.com/threads/T-019c537c-6a5a-769d-a0ee-d37cd2ab072d
Co-authored-by: Amp <amp@ampcode.com>
@gakonst gakonst added the A-trie Related to Merkle Patricia Trie implementation label Feb 12, 2026
@github-project-automation github-project-automation bot moved this to Backlog in Reth Tracker Feb 12, 2026
@github-actions

⚠️ Changelog not found.

A changelog entry is required before merging. We've generated a suggested changelog based on your changes:

Preview
---
reth-engine-tree: patch
reth-trie-parallel: minor
---

Switched proof workers from `spawn` to `spawn_broadcast` for improved parallelism and added `Sync` bound to database provider factory traits.

Add a changelog entry and commit it to your branch.

