BLAST Sync
Blockchain Layered Address Sync Tree (BLAST) is a privacy-preserving synchronization algorithm used by the Dash Platform SDK. It allows wallets to discover which of their keys exist in a server-side Merkle tree without revealing the specific keys being queried.
BLAST is used for two distinct sync tasks:
- Address balance sync: Discovering which platform addresses have balances and what those balances are.
- Nullifier sync: Checking which nullifiers have been spent in the shielded pool.
Both follow the same trunk/branch tree-scan pattern, extracted into a shared generic algorithm.
The Problem
A wallet holds a set of keys (addresses or nullifiers) and needs to learn which ones exist in a Merkle tree stored by Platform nodes. The naive approach -- querying each key individually -- leaks the wallet's full key set to the server. Even batching the keys into a single request reveals the exact set.
BLAST solves this by querying subtrees of the Merkle tree rather than individual keys. The server returns a chunk of the tree that contains the target key along with many other keys, making it impossible for the server to determine which specific key the wallet cares about.
Algorithm Overview
The sync has two phases: a tree scan for bulk discovery, and incremental catch-up for staying current between scans.
Phase 1: Tree Scan (Trunk/Branch)
The Underlying Data Structure
Platform stores addresses and nullifiers in a Merk tree -- a balanced binary search tree (BST) where each node is keyed and ordered. Every internal node has a left child (keys less than this node) and a right child (keys greater than this node). Each node also carries a Merkle hash of its subtree, making the entire structure cryptographically verifiable.
The trunk query returns a partial view of this BST: the top N levels are fully expanded (you can see the actual keys and values), while deeper subtrees are truncated to hash placeholders. The boundary between "expanded" and "truncated" defines the leaf nodes of the trunk result.
            ┌──────┐
            │  30  │ ← Root node (key=30)
            └──┬───┘
       ┌───────┴───────┐
       ▼               ▼
    ┌──────┐        ┌──────┐
    │  15  │        │  45  │ ← Internal nodes (expanded)
    └──┬───┘        └──┬───┘
   ┌───┴───┐       ┌───┴───┐
   ▼       ▼       ▼       ▼
┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐
│  7  │ │ 22  │ │ 38  │ │ 55  │ ← Leaf nodes
│▓▓▓▓▓│ │▓▓▓▓▓│ │▓▓▓▓▓│ │▓▓▓▓▓│   (children are
└─────┘ └─────┘ └─────┘ └─────┘    hash placeholders)

▓▓▓ = truncated subtree (only hash known, not contents)
The trunk result contains three key pieces of data:
- `elements`: A `BTreeMap<Vec<u8>, Element>` of key-value pairs at expanded nodes. These are fully resolved -- the wallet can read their values directly.
- `leaf_keys`: A `BTreeMap<Vec<u8>, LeafInfo>` of nodes at the truncation boundary. Each `LeafInfo` has a `hash` (for verifying subsequent branch queries) and an optional `count` (number of elements in the truncated subtree).
- `tree`: The reconstructed BST structure from the proof, used for key tracing.
Step 1: The Trunk Query
The wallet sends a single trunk query to a Platform node. The request specifies a
max_depth (how many levels of the BST to expand). The server returns the trunk
elements, the leaf boundary information, and a Merkle proof covering the entire
result.
The proof is verified against the quorum-signed root hash, ensuring the server cannot lie about what the tree contains.
```rust
let (trunk_result, metadata) =
    PlatformAddressTrunkState::fetch_with_metadata(sdk, (), Some(settings)).await?;
```
Step 2: Classifying Target Keys via BST Traversal
After receiving the trunk, the wallet classifies each of its target keys by traversing
the BST structure. The trace_key_to_leaf method performs a standard binary search:
- Start at the root node.
- Compare the target key against the current node's key.
- If equal: the key is found in the trunk elements.
- If less: follow the left child.
- If greater: follow the right child.
- If the current node is a leaf (its children are hash placeholders): the target key is somewhere in this leaf's truncated subtree, but we can't resolve it yet.
- If there is no child to follow: the key is proven absent.
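The traversal above can be sketched with a simplified, hypothetical node type -- the real SDK's `trace_key_to_leaf` reconstructs the tree from the Merkle proof and operates on byte-vector keys rather than integers:

```rust
// Simplified, hypothetical node type; the real trunk result reconstructs
// this structure from the Merkle proof.
struct Node {
    key: u64,
    left: Option<usize>,   // index into the `nodes` arena
    right: Option<usize>,
    is_leaf: bool,         // children are hash placeholders
}

// Outcome of tracing one target key through the trunk's BST.
#[derive(Debug, PartialEq)]
enum Trace {
    Found,     // key appears among the expanded trunk elements
    Leaf(u64), // key lies somewhere in this leaf's truncated subtree
    Absent,    // no BST path exists: cryptographically proven absent
}

fn trace_key(nodes: &[Node], root: usize, target: u64) -> Trace {
    let mut idx = root;
    loop {
        let node = &nodes[idx];
        if target == node.key {
            return Trace::Found;
        }
        if node.is_leaf {
            // The key would live below this truncation boundary;
            // a branch query is needed to resolve it.
            return Trace::Leaf(node.key);
        }
        match if target < node.key { node.left } else { node.right } {
            Some(next) => idx = next,
            None => return Trace::Absent,
        }
    }
}

// The example tree from the diagram above: root 30, internals 15/45,
// truncated leaves 7/22/38/55.
fn example_tree() -> Vec<Node> {
    let n = |key, left, right, is_leaf| Node { key, left, right, is_leaf };
    vec![
        n(30, Some(1), Some(2), false),
        n(15, Some(3), Some(4), false),
        n(45, Some(5), Some(6), false),
        n(7, None, None, true),
        n(22, None, None, true),
        n(38, None, None, true),
        n(55, None, None, true),
    ]
}
```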
This produces exactly three outcomes for each target key:
| Outcome | What it means | Action |
|---|---|---|
| Found | Key exists in trunk elements | Record the value (balance, spent status) |
| Traced to leaf | Key is in a truncated subtree | Add to KeyLeafTracker for branch querying |
| Absent | No path exists in the BST | Key is cryptographically proven to not exist |
```rust
for key in target_keys {
    if trunk_result.elements.contains_key(&key) {
        // Found directly in trunk -- record it
        result.found.insert(key);
    } else if let Some((leaf_key, info)) = trunk_result.trace_key_to_leaf(&key) {
        // Traces to a leaf subtree -- need a branch query
        tracker.add_key(key, leaf_key, info);
    } else {
        // Proven absent from the tree
        result.absent.insert(key);
    }
}
```
A concrete example: suppose the wallet is looking for key 20 in the tree above.
The BST traversal goes: root 30 (20 < 30, go left) -> node 15 (20 > 15, go right)
-> leaf 22 (children are hash placeholders). Key 20 traces to leaf 22 because
it would be in leaf 22's left subtree. The wallet now knows it needs to query
leaf 22's subtree to determine whether key 20 actually exists.
Note that each target key traces to exactly one leaf -- the BST path is deterministic. Multiple target keys may trace to the same leaf if they are close together in the key space.
Step 3: Privacy Adjustment
Before querying leaf subtrees, the algorithm applies privacy adjustment to prevent the server from learning which specific keys the wallet cares about.
Each leaf in the trunk result has an optional count -- the number of elements in its
truncated subtree. If this count is small (below min_privacy_count, default 32), then
querying that specific leaf reveals too much: the server knows the wallet is interested
in one of only a few keys.
The fix is to query an ancestor higher in the tree that has enough elements to provide cover:
Suppose leaf "22" has count=5 (too small for privacy).
Its parent "15" has count=50 (enough).
Instead of asking: "give me the subtree rooted at 22"
The wallet asks: "give me the subtree rooted at 15"
Now the server sees a query for a 50-element subtree and cannot
tell whether the wallet wants key 20 (in 22's subtree) or
key 10 (in 7's subtree) or any other key under 15.
The get_ancestor method walks up the BST path from the leaf toward the root, stopping
at the first ancestor whose count meets min_privacy_count. It never returns the
root itself (that would be equivalent to re-fetching the entire trunk). If no ancestor
has a sufficient count, it falls back to the node one level below the root.
The query depth is adjusted when using an ancestor: since the ancestor is higher in the tree, its subtree is deeper, so the depth parameter is reduced by the number of levels climbed.
Deduplication ensures that if multiple target keys expand to the same ancestor, only one branch query is sent.
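The ancestor walk can be sketched as follows, assuming (for simplicity) that the path from root to leaf is already available as (key, element count) pairs -- the real `get_ancestor` walks the reconstructed BST directly:

```rust
// Hypothetical sketch: `path[0]` is the root, `path.last()` the leaf,
// each entry is (node key, element count of its subtree).
// Returns (node to query, adjusted query depth).
fn privacy_adjust(
    path: &[(u64, u64)],
    base_depth: u8,
    min_privacy_count: u64,
) -> (u64, u8) {
    let leaf_idx = path.len() - 1;
    let (leaf_key, leaf_count) = path[leaf_idx];
    if leaf_count >= min_privacy_count {
        return (leaf_key, base_depth); // leaf is large enough: query it directly
    }
    // Walk up toward the root, stopping at the first ancestor with enough
    // cover. Index 0 is the root itself and is never returned.
    for idx in (1..leaf_idx).rev() {
        let (key, count) = path[idx];
        if count >= min_privacy_count {
            // The ancestor's subtree is deeper, so reduce the query depth
            // by the number of levels climbed.
            let levels_up = (leaf_idx - idx) as u8;
            return (key, base_depth.saturating_sub(levels_up));
        }
    }
    // No suitable ancestor: fall back to the node one level below the root.
    let (key, _) = path[1];
    (key, base_depth.saturating_sub((leaf_idx - 1) as u8))
}
```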
Step 4: Iterative Branch Queries
For each leaf (or privacy-adjusted ancestor) with unresolved keys, the wallet sends a branch query specifying:
- The leaf's key (identifies which subtree to expand)
- The query depth (how many levels to expand)
- The expected root hash (the leaf's `hash` from the trunk, used for verification)
- The checkpoint height (ensures the branch matches the same tree snapshot as the trunk)
The server returns a GroveBranchQueryResult with the same structure as the trunk:
expanded elements, new leaf keys at the next truncation boundary, and a Merk proof.
The wallet verifies the proof against the expected hash from the parent query.
Each target key in the queried subtree is classified again:
- Found in the branch elements -- resolved.
- Traced to a deeper leaf -- the key is in an even deeper truncated subtree. The `KeyLeafTracker` is updated to point to the new, deeper leaf.
- Absent -- proven to not exist within this subtree.
Iteration 1: Trunk query
┌────────────────────────────────────────────────────┐
│ Key 20 traces to leaf 22                           │
│ Key 41 traces to leaf 38                           │
└────────────────────────────────────────────────────┘
                          │
                          ▼
Iteration 2: Branch queries for leaves 22 and 38
┌────────────────────────────────────────────────────┐
│ Key 20: found in leaf 22's subtree → RESOLVED      │
│ Key 41: traces to deeper leaf 40 → CONTINUE        │
└────────────────────────────────────────────────────┘
                          │
                          ▼
Iteration 3: Branch query for leaf 40
┌────────────────────────────────────────────────────┐
│ Key 41: proven absent in leaf 40's subtree → DONE  │
└────────────────────────────────────────────────────┘
Branch queries run in parallel using FuturesUnordered with configurable concurrency
(max_concurrent_requests, default: 10). The iteration loop continues until all keys
are resolved or max_iterations (default: 50) is reached. In practice, most keys
resolve within 2-3 iterations because each branch query expands several levels of the
tree.
Phase 2: Incremental Catch-Up
After the tree scan produces a snapshot at some checkpoint height, the wallet needs to catch up to the chain tip. This is done with two sub-phases:
- Compacted changes -- historical balance/nullifier changes aggregated across block ranges. These cover the gap between the checkpoint height and recent history. Each response covers a range of blocks and contains the net changes.
- Recent changes -- per-block changes for the most recent blocks. These provide granular updates from where compacted changes left off to the chain tip.
checkpoint_height                              chain_tip
      │                                            │
      ▼                                            ▼
──────┬────────────────────────────┬───────────────┤
      │     Compacted changes      │ Recent changes│
      │      (block ranges)        │  (per-block)  │
      └────────────────────────────┴───────────────┘
On subsequent syncs, if the elapsed time since the last sync is within
full_rescan_after_time_s (default: 7 days), the tree scan is skipped entirely and
only the incremental catch-up runs. This makes frequent re-syncs very fast.
The TrunkBranchSyncOps Trait
The shared algorithm is parameterized by the TrunkBranchSyncOps trait, defined in
packages/rs-sdk/src/platform/trunk_branch_sync/mod.rs. Each sync module implements
this trait to plug in its specific query construction, result processing, and
depth limits.
```rust
// Signatures abbreviated for readability; see mod.rs for full parameter types.
pub trait TrunkBranchSyncOps {
    /// Module-specific mutable state carried through the scan.
    type Context<'a>: Send where Self: 'a;
    /// Immutable config for parallel branch queries (cloned into each task).
    type BranchQueryConfig: Clone + Send + Sync + 'static;

    // Trunk
    async fn execute_trunk_query(sdk, settings, context)
        -> Result<(GroveTrunkQueryResult, u64, u64), Error>;
    fn process_trunk_result(trunk_result, context, tracker) -> Result<(), Error>;

    // Branch
    fn branch_query_config(context) -> Self::BranchQueryConfig;
    async fn execute_single_branch_query(sdk, config, key, depth, ...)
        -> Result<GroveBranchQueryResult, Error>;
    fn process_branch_result(branch_result, leaf_key, context, tracker) -> Result<(), Error>;

    // Limits and hooks
    fn depth_limits(platform_version) -> (u8, u8);
    fn after_branch_iteration(trunk_result, context, tracker) {}
    fn on_branch_query(context);
    fn on_branch_failure(context);
    fn on_elements_seen(context, count);
    fn on_iteration(context, iteration);
    fn set_checkpoint_height(context, height);
}
```
The two associated types deserve attention:
- `Context<'a>` is a GAT (generic associated type) that carries mutable state through the algorithm. For nullifiers, this holds the input keys and result sets. For addresses, it holds the address provider, key-to-index mapping, and result.
- `BranchQueryConfig` holds immutable parameters needed to construct branch queries that must be sent to async tasks. For nullifiers, this is `(pool_type, pool_identifier)`. For addresses, it is `()` since no extra parameters are needed.
The after_branch_iteration hook allows the address sync module to implement gap-limit
behavior: after each branch iteration, it checks if the provider has extended its
pending address list and adds newly pending keys to the tracker.
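As an illustration of what that hook enables, consider this hypothetical model of gap-limit extension -- the actual behavior lives in the wallet's AddressProvider implementation:

```rust
// Hypothetical gap-limit model: keep `gap_limit` consecutive unused address
// indices pending past the highest index seen with funds. Returns the new
// indices the provider would derive and hand to the tracker.
fn extend_gap(highest_used: Option<u32>, derived_through: u32, gap_limit: u32) -> Vec<u32> {
    let target = match highest_used {
        Some(i) => i + 1 + gap_limit, // keep a full window past the last used index
        None => gap_limit,            // nothing used yet: initial window only
    };
    (derived_through..target).collect()
}
```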
KeyLeafTracker
The KeyLeafTracker (in trunk_branch_sync/tracker.rs) maintains the mapping between
target keys and the leaf subtrees they reside in. It supports:
- Adding keys: When a key traces to a leaf during trunk processing
- Updating keys: When a branch query reveals the key is in a deeper subtree
- Removing keys: When a key is found or proven absent
- Reference counting: Multiple target keys can map to the same leaf; the leaf stays active until all its keys are resolved
```rust
let mut tracker = KeyLeafTracker::new();

// After trunk query: key traces to leaf subtree
tracker.add_key(target_key, leaf_boundary_key, leaf_info);

// After branch query: key found in subtree
tracker.key_found(&target_key);

// After branch query: key in even deeper subtree
tracker.update_leaf(&target_key, deeper_leaf_key, deeper_info);

// Check what still needs querying
let active = tracker.active_leaves(); // leaves with unresolved keys
let remaining = tracker.remaining_count();
```
Privacy-Adjusted Leaves (Detail)
The get_privacy_adjusted_leaves function (in trunk_branch_sync/mod.rs) implements
the privacy adjustment described in Phase 1, Step 3. The full logic for each active
leaf:
- Calculate the query depth from the leaf's element count using `calculate_max_tree_depth_from_count(count)`, clamped to platform-version bounds `[min_query_depth, max_query_depth]`.
- If `count >= min_privacy_count`: query this leaf directly at the calculated depth.
- If `count < min_privacy_count`: call `trunk_result.get_ancestor(&leaf_key, min_privacy_count)` to find a higher node. Reduce depth by `levels_up` (the number of tree levels climbed) so the total subtree size returned stays reasonable.
- If no suitable ancestor exists (rare -- means the entire tree is small): query the leaf anyway, accepting reduced privacy.
- Deduplicate: if two target keys expand to the same ancestor, only one branch query is emitted (tracked via a `BTreeSet<LeafBoundaryKey>`).
Concrete Implementations
Address Balance Sync
The address sync module (platform/address_sync/) implements TrunkBranchSyncOps
as AddressOps<P> where P: AddressProvider.
The AddressProvider trait is implemented by wallets to supply:
- The list of pending addresses to check
- Callbacks when addresses are found or proven absent
- Gap-limit extension (generating new addresses when prior ones are found)
- Current balances for incremental-only mode
```rust
// First sync -- full tree scan + incremental catch-up
let result = sdk.sync_address_balances(&mut wallet, None, None).await?;

// Store for next call
let height = result.new_sync_height;
let timestamp = result.new_sync_timestamp;

// Subsequent sync -- incremental only if within 7-day threshold
let result = sdk.sync_address_balances(&mut wallet, None, Some(timestamp)).await?;
```
Address balance sync uses ItemWithSumItem GroveDB elements where the item value
contains the nonce (4 bytes big-endian) and the sum value contains the credit balance.
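A sketch of decoding that element (a hypothetical helper -- the real code works with GroveDB `Element` values and richer error types):

```rust
// Hypothetical decoder: the item bytes carry a 4-byte big-endian nonce;
// the sum value is the credit balance in credits.
fn decode_address_item(item: &[u8], sum_value: i64) -> Option<(u32, i64)> {
    let nonce = u32::from_be_bytes(item.get(..4)?.try_into().ok()?);
    Some((nonce, sum_value))
}
```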
Nullifier Sync
The nullifier sync module (platform/nullifier_sync/) implements TrunkBranchSyncOps
as NullifierOps.
Nullifier sync differs from address sync in several ways:
- Target keys are fixed 32-byte arrays (`[u8; 32]`)
- Branch queries carry extra config: `(pool_type, pool_identifier)` to identify the shielded pool
- No gap-limit behavior (the `after_branch_iteration` hook is not overridden)
- Branch query failures are tracked in metrics
```rust
let nullifiers: Vec<[u8; 32]> = vec![/* ... */];

// First sync -- full tree scan + incremental catch-up
let result = sdk.sync_nullifiers(&nullifiers, None, None, None).await?;

// Store for next call
let height = result.new_sync_height;
let timestamp = result.new_sync_timestamp;

// Subsequent sync -- incremental only if within 7-day threshold
let result = sdk.sync_nullifiers(&nullifiers, None, Some(height), Some(timestamp)).await?;
```
Found nullifiers indicate spent notes; absent nullifiers indicate unspent notes.
Sync Mode Decision
Both sync modules use the same logic to decide between full scan and incremental-only:
| `last_sync_timestamp` | Elapsed time | Mode |
|---|---|---|
| `None` | -- | Full tree scan + catch-up |
| `Some(ts)` | < `full_rescan_after_time_s` | Incremental only |
| `Some(ts)` | >= `full_rescan_after_time_s` | Full tree scan + catch-up |
The default full_rescan_after_time_s is 604800 (7 days). Setting it to 0 forces a
full tree scan on every call.
Configuration
Both modules expose configuration structs with sensible defaults:
| Parameter | Default | Description |
|---|---|---|
| `min_privacy_count` | 32 | Minimum elements in a queried subtree |
| `max_concurrent_requests` | 10 | Parallel branch queries |
| `max_iterations` | 50 | Safety limit for branch iteration depth |
| `full_rescan_after_time_s` | 604800 | Seconds before forcing a full rescan |
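These defaults could be expressed as a struct like the following (illustrative only; the SDK's actual types are `AddressSyncConfig` and `NullifierSyncConfig`):

```rust
// Illustrative config struct mirroring the defaults in the table above.
#[derive(Clone)]
struct SyncConfig {
    min_privacy_count: u64,
    max_concurrent_requests: usize,
    max_iterations: u32,
    full_rescan_after_time_s: u64,
}

impl Default for SyncConfig {
    fn default() -> Self {
        Self {
            min_privacy_count: 32,
            max_concurrent_requests: 10,
            max_iterations: 50,
            full_rescan_after_time_s: 604_800, // 7 days
        }
    }
}
```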
Module Structure
packages/rs-sdk/src/platform/
├── trunk_branch_sync/
│ ├── mod.rs # TrunkBranchSyncOps trait, run_full_tree_scan(),
│ │ # get_privacy_adjusted_leaves(), parallel execution
│ └── tracker.rs # KeyLeafTracker with reference counting
├── address_sync/
│ ├── mod.rs # AddressOps<P> impl, sync_address_balances(),
│ │ # incremental_catch_up()
│ ├── provider.rs # AddressProvider trait
│ └── types.rs # AddressSyncConfig, AddressSyncResult, AddressFunds
└── nullifier_sync/
├── mod.rs # NullifierOps impl, sync_nullifiers(),
│ # incremental_catch_up()
├── provider.rs # NullifierProvider trait
└── types.rs # NullifierSyncConfig, NullifierSyncResult
Rules
Do:
- Use `sdk.sync_address_balances()` or `sdk.sync_nullifiers()` as the entry points.
- Persist `new_sync_height` and `new_sync_timestamp` from the result and pass them back on the next sync call. This enables incremental-only mode.
- Implement `AddressProvider` to integrate with your wallet's key derivation and storage.
- Set `min_privacy_count` high enough that individual key lookups cannot be distinguished. The default of 32 is a reasonable minimum.
Don't:
- Query individual keys directly via the trunk/branch RPCs -- use the sync functions which handle privacy adjustment, iteration, and proof verification.
- Set `max_iterations` too low -- complex trees may need many rounds. The default of 50 handles trees with millions of entries.
- Ignore the `full_rescan_after_time_s` threshold -- without periodic full rescans, the incremental phase could miss changes that occurred before the last known height.
- Skip the incremental catch-up phase -- the tree scan snapshot may be slightly stale (the trunk is captured at a specific block height), and the catch-up brings it current.