Introduction

Welcome to The Dash Platform Book. This is a guide to the design philosophy, architectural patterns, and engineering conventions that shape the Dash Platform Rust codebase. It is not a user manual or an API reference -- it is the book you read before you open your editor, so that the thousands of files in the monorepo start to make sense.

Who This Book Is For

  • Contributors who want to add a new state transition, query endpoint, or Drive operation and need to know where things go and why.
  • Auditors reviewing the codebase for correctness, who need a map of the security-critical paths (validation pipelines, fee calculations, proof verification).
  • Architects evaluating the system design, particularly the versioning strategy, the ABCI integration, and the GroveDB storage model.

If you have written Rust before and understand traits, enums, and feature flags, you are ready.

The Design Philosophy

Four principles recur throughout the codebase. Understanding them up front will save you hours of "why is it done this way?" confusion.

1. Version Everything

Every method that touches consensus has a version number. The version is not embedded in the code path -- it lives in a central PlatformVersion struct that is threaded through the entire call stack:

// From packages/rs-platform-version/src/version/protocol_version.rs
#[derive(Clone, Debug)]
pub struct PlatformVersion {
    pub protocol_version: ProtocolVersion,
    pub dpp: DPPVersion,
    pub drive: DriveVersion,
    pub drive_abci: DriveAbciVersion,
    pub consensus: ConsensusVersions,
    pub fee_version: FeeVersion,
    pub system_data_contracts: SystemDataContractVersions,
    pub system_limits: SystemLimits,
}

When you see a method that dispatches on a version number, this is the canonical pattern:

// From packages/rs-drive-abci/src/execution/engine/finalize_block_proposal/mod.rs
pub(crate) fn finalize_block_proposal(
    &self,
    request_finalize_block: FinalizeBlockCleanedRequest,
    block_execution_context: BlockExecutionContext,
    transaction: &Transaction,
    platform_version: &PlatformVersion,
) -> Result<block_execution_outcome::v0::BlockFinalizationOutcome, Error> {
    match platform_version
        .drive_abci
        .methods
        .engine
        .finalize_block_proposal
    {
        0 => self.finalize_block_proposal_v0(
            request_finalize_block,
            block_execution_context,
            transaction,
            platform_version,
        ),
        version => Err(Error::Execution(ExecutionError::UnknownVersionMismatch {
            method: "finalize_block_proposal".to_string(),
            known_versions: vec![0],
            received: version,
        })),
    }
}

This pattern allows the network to upgrade without hard forks. Every node on the network agrees on which version to run for each method at each block height, and the version table is the single source of truth. The Versioning section covers this in depth.

2. Explicit Costs

Platform is a metered system. Every operation -- inserting a document, updating a contract, creating an identity -- has a fee expressed in credits. Costs are not estimated after the fact; they are tracked as operations execute, through GroveDB's cost accounting layer. The FeeResult type flows alongside the data, and sum trees in GroveDB let the platform verify fee totals against the Merkle-proven state.

The fee version itself is part of PlatformVersion, which means the network can adjust fee schedules across protocol upgrades without any node disagreeing on how much an operation costs at a given block height.
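
The shape of that accounting can be sketched in a few lines. The block below is a self-contained illustration, not the real FeeResult (which lives in dpp and tracks more dimensions): a fee accumulator travels alongside the operations and is summed with overflow checks, because credits are consensus-critical.

```rust
/// Simplified stand-in for Drive's fee accumulator. The real
/// `FeeResult` type tracks more dimensions than these two.
#[derive(Debug, Default, Clone, Copy, PartialEq)]
pub struct FeeSketch {
    pub storage_fee: u64,
    pub processing_fee: u64,
}

impl FeeSketch {
    /// Add another fee result, failing on overflow instead of wrapping,
    /// since fee totals are consensus-critical.
    pub fn checked_add(self, other: FeeSketch) -> Option<FeeSketch> {
        Some(FeeSketch {
            storage_fee: self.storage_fee.checked_add(other.storage_fee)?,
            processing_fee: self.processing_fee.checked_add(other.processing_fee)?,
        })
    }

    pub fn total_credits(&self) -> u64 {
        self.storage_fee.saturating_add(self.processing_fee)
    }
}

/// Execute a batch of (storage, processing) operation costs, accumulating
/// fees as we go -- costs are tracked during execution, not estimated after.
pub fn apply_with_fees(op_costs: &[(u64, u64)]) -> Option<FeeSketch> {
    op_costs.iter().try_fold(FeeSketch::default(), |acc, &(s, p)| {
        acc.checked_add(FeeSketch { storage_fee: s, processing_fee: p })
    })
}
```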

3. Clear Transformation Stages

A state transition does not go from "bytes on the wire" to "committed to disk" in one step. It passes through well-defined stages:

  1. Decode -- raw bytes become a StateTransition enum.
  2. Structure validation -- syntactic checks (field lengths, required fields).
  3. State validation -- checks against current platform state (does this identity exist? does the nonce match?).
  4. Transform into action -- the validated transition becomes a StateTransitionAction, a representation of what to do rather than what was requested.
  5. Convert to operations -- the action becomes a list of DriveOperation values (GroveDB inserts, deletes, replacements).
  6. Apply -- the operations execute inside a GroveDB transaction.

Each stage is a separate module with its own versioned dispatch. This separation means you can audit validation independently from execution, and you can test each stage in isolation.
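
The staging can be modeled as a chain of typed functions. The sketch below is illustrative only -- every name is an invented stand-in for StateTransition, StateTransitionAction, and DriveOperation -- but it shows the key property: each stage consumes the previous stage's output type, so a stage cannot be skipped.

```rust
// Self-contained model of the staged pipeline. Each stage's output is a
// distinct type, so calling them out of order is a compile error.
pub struct RawBytes(pub Vec<u8>);
pub struct Decoded(pub String);         // stands in for StateTransition
pub struct Validated(pub String);       // structure + state checks passed
pub struct Action(pub String);          // stands in for StateTransitionAction
pub struct Operations(pub Vec<String>); // stands in for Vec<DriveOperation>

pub fn decode(raw: RawBytes) -> Result<Decoded, String> {
    String::from_utf8(raw.0).map(Decoded).map_err(|e| e.to_string())
}

pub fn validate(d: Decoded) -> Result<Validated, String> {
    // Placeholder for structure and state validation.
    if d.0.is_empty() { Err("empty transition".into()) } else { Ok(Validated(d.0)) }
}

pub fn transform(v: Validated) -> Action {
    // "What to do", derived from "what was requested".
    Action(format!("apply:{}", v.0))
}

pub fn to_operations(a: Action) -> Operations {
    Operations(vec![a.0])
}

pub fn run_pipeline(raw: RawBytes) -> Result<Operations, String> {
    Ok(to_operations(transform(validate(decode(raw)?)?)))
}
```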

4. Trait-Based Polymorphism

The codebase avoids dynamic dispatch where performance matters and embraces it where extensibility matters. ABCI handlers are defined against traits like PlatformApplication, TransactionalApplication, and BlockExecutionApplication:

// From packages/rs-drive-abci/src/abci/app/mod.rs
pub trait PlatformApplication<C = DefaultCoreRPC> {
    fn platform(&self) -> &Platform<C>;
}

pub trait TransactionalApplication<'a> {
    fn start_transaction(&self);
    fn transaction(&self) -> &RwLock<Option<Transaction<'a>>>;
    fn commit_transaction(&self, platform_version: &PlatformVersion)
        -> Result<(), Error>;
}

pub trait BlockExecutionApplication {
    fn block_execution_context(&self) -> &RwLock<Option<BlockExecutionContext>>;
}

This makes it possible to swap in mock implementations for testing while keeping the production path zero-cost.
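
A self-contained miniature of the same pattern (the trait and types here are simplified stand-ins, not the real PlatformApplication):

```rust
// Simplified stand-in for the real ABCI application traits.
pub trait PlatformApp {
    fn chain_id(&self) -> &str;
}

pub struct ProductionApp { pub chain_id: String }
impl PlatformApp for ProductionApp {
    fn chain_id(&self) -> &str { &self.chain_id }
}

/// Test double: same trait, canned data, no database behind it.
pub struct MockApp;
impl PlatformApp for MockApp {
    fn chain_id(&self) -> &str { "test-chain" }
}

/// Handlers are generic over the trait bound, so each concrete app gets
/// its own monomorphized copy -- no vtable on the production hot path.
pub fn info_handler<A: PlatformApp>(app: &A) -> String {
    format!("chain={}", app.chain_id())
}
```

Because info_handler is generic rather than taking a trait object, swapping in MockApp for tests costs nothing at runtime in production builds.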

How This Book Is Organized

The book follows the data as it flows through the system, from the outside in:

| Section | What You Will Learn |
| --- | --- |
| Architecture | The monorepo layout, crate responsibilities, and the request pipeline from client to GroveDB. |
| Versioning | How PlatformVersion controls every consensus-critical code path, and how upgrades propagate. |
| State Transitions | The lifecycle of a state transition: validation, transformation, operation generation, and application. |
| Error Handling | The split between consensus errors (returned to users) and execution errors (node-level panics). |
| Serialization | The platform-serialization crate and its derive macros for versioned binary encoding. |
| Data Model | Data contracts, documents, identities, and tokens as Rust types. |
| Drive | GroveDB operations, batch processing, cost tracking, and finalize tasks. |
| Testing | Unit test patterns, strategy tests for randomized multi-block scenarios, and test configuration. |
| SDK | The client-side dash-sdk crate: builder patterns, fetch traits, and proof verification. |
| WASM | Binding patterns for the browser-facing wasm-dpp and wasm-sdk crates. |

Each chapter follows the same arc: why the pattern exists (the problem it solves), what the pattern is (the types and modules involved), how it works (real code from the repository), and rules (the do's and don'ts that keep the codebase consistent).

Coding Conventions

A few conventions appear consistently across the codebase and are worth calling out early:

  • #![forbid(unsafe_code)] is set in both dpp and drive. The platform avoids unsafe Rust entirely in its core logic. Unsafe operations are confined to external dependencies (RocksDB, cryptographic libraries).
  • #![deny(missing_docs)] is enabled in drive, enforcing doc comments on every public item. DPP has this commented out but is moving toward it.
  • Feature-gated compilation is pervasive. A typical crate has 20-80 Cargo features controlling which modules, serialization formats, and integrations are compiled. This keeps binary sizes small and compile times manageable for downstream consumers that only need a subset of functionality.
  • Versioned module layout: When a function has multiple versions, they live in sibling directories named v0/, v1/, etc., with a parent mod.rs that dispatches based on PlatformVersion. This is the dominant structural pattern in Drive-ABCI.
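
In miniature, using inline modules in place of the v0/ and v1/ directories (the real dispatch reads the method version from the PlatformVersion table rather than taking a bare integer):

```rust
// Parent "mod.rs": owns the public entry point and the dispatch.
mod v0 {
    pub fn compute(input: u64) -> u64 { input + 1 }
}
mod v1 {
    // A later protocol version changed the formula; v0 stays untouched.
    pub fn compute(input: u64) -> u64 { input * 2 }
}

pub fn compute(input: u64, method_version: u16) -> Result<u64, String> {
    match method_version {
        0 => Ok(v0::compute(input)),
        1 => Ok(v1::compute(input)),
        v => Err(format!(
            "unknown version {v} for compute, known versions: [0, 1]"
        )),
    }
}
```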

A Note on Reading the Source

The codebase lives in a monorepo at packages/. The Rust crates are prefixed with rs- on disk but have shorter names in Cargo.toml:

| Disk path | Crate name |
| --- | --- |
| packages/rs-dpp | dpp |
| packages/rs-drive | drive |
| packages/rs-drive-abci | drive-abci |
| packages/rs-sdk | dash-sdk |
| packages/rs-platform-version | platform-version |
| packages/rs-platform-value | platform-value |
| packages/rs-platform-serialization | platform-serialization |
| packages/rs-drive-proof-verifier | drive-proof-verifier |

The workspace currently targets Rust 1.92 and protocol version 12 (as of v3.0.1). The workspace Cargo.toml lists 44 member crates, but the core platform logic lives in the first eight listed above.

Let's begin with the architecture.

Platform Comparison

How Dash Platform compares to other blockchain networks across architecture, features, and developer experience. Ratings range from - (not supported) through + (basic) and ++ (good) to +++ (best in class), reflecting relative strength in each dimension.

Overview

| | Bitcoin | Ethereum | Solana | Polkadot | NEAR | Cosmos SDK | Avalanche | Dash Platform |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Primary purpose | Payments | General-purpose smart contracts | High-throughput smart contracts | Multi-chain shared security | Sharded smart contracts | App-chain framework | Multi-chain smart contracts | Decentralized data storage and querying |
| Consensus | Nakamoto (PoW) | Gasper (PoS) | Tower BFT (PoS) | GRANDPA + BABE (PoS) | Nightshade (PoS) | CometBFT (PoS) | Snowman (PoS) | Tenderdash SBFT (masternode quorums, BLS threshold signatures) |
| Finality | - Probabilistic (~60 min) | + ~13 min (2 epochs) | +++ ~0.4s (optimistic) | + ~12-60s (2 rounds) | ++ ~1-2s | +++ Instant (1 block) | ++ ~1-2s | +++ Instant (1 block) |
| Throughput (simple tx) | - ~7 tx/s | + ~15-30 tx/s | +++ ~65,000 tx/s | +++ Scales per parachain | +++ ~100,000 tx/s (sharded) | +++ Per-chain | ++ ~4,500 tx/s | ++ ~1,000 tx/s |

Data and Querying

| | Bitcoin | Ethereum | Solana | Polkadot | NEAR | Cosmos SDK | Avalanche | Dash Platform |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Data model | - UTXOs | + Account / key-value | + Account / key-value | + Account / key-value | + Account / key-value | + App-defined | + Account / key-value | +++ Structured documents with secondary indexes |
| Decentralized querying | - Keys only (UTXO lookup) | + Keys only (no native indexing) | + Keys only (via RPC, no proofs) | + Keys only (per parachain) | + Keys only (via RPC, no proofs) | + Keys only (app-specific) | + Keys only (via RPC, no proofs) | +++ Rich queries with indexes, ordering, and ranges -- all with proofs |
| State proofs | + SPV (block headers) | ++ Merkle-Patricia proofs | - No native proofs | + Merkle proofs (per parachain) | ++ Merkle-Patricia proofs | + IAVL proofs | + Merkle proofs | +++ GroveDB Merkle proofs for every query |
| Light client trust | + Follows longest chain | + Needs sync committee | - Trusts RPC provider | + Trusts relay chain | - Trusts RPC provider | + Trusts IBC relayer | - Trusts RPC provider | +++ Cryptographic proof per response -- same security as a full node |

The standout difference is light client verification. Most chains either offer no state proofs (Solana, Avalanche), require trusting intermediaries (Polkadot's relay chain, Cosmos IBC relayers, NEAR's RPC providers), or give proofs that are expensive to verify (Ethereum's sync committee). Dash Platform serves a cryptographic proof with every query response, and a single BLS threshold signature is all a client needs to verify it. A mobile wallet gets the same security guarantees as a full node.

Smart Contracts and Programmability

| | Bitcoin | Ethereum | Solana | Polkadot | NEAR | Cosmos SDK | Avalanche | Dash Platform |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Smart contracts | - Limited Script opcodes | +++ Solidity / Vyper on EVM | +++ Rust / C on SVM | ++ Per-parachain, typically Wasm | ++ Rust / JS / AssemblyScript on Wasm VM | + App-specific (Go) | ++ Solidity on EVM, Rust on Wasm | - Coming in v4.0 |
| VM / execution | - Script interpreter | +++ EVM | +++ SVM (eBPF) | ++ Wasm (per parachain) | ++ Wasm VM | + No VM (compiled Go) | ++ EVM + Wasm subnets | - No VM (data contracts; VM planned for v4.0) |
| Developer languages | - Script | +++ Solidity, Vyper | ++ Rust, C | ++ Rust (Substrate) | ++ Rust, JS, AssemblyScript | + Go | ++ Solidity, Rust | + JSON Schema (data contracts), Rust/JS/Swift (SDKs) |
| Smart contract security | N/A | + Reentrancy, gas exploits | ++ No reentrancy, but complexity | ++ Sandboxed per parachain | ++ Wasm sandboxing | N/A | + Inherits EVM risks | N/A (data contracts are declarative) |

Dash Platform takes a fundamentally different approach: instead of a VM that executes arbitrary code, developers define data contracts -- JSON Schema-based specifications that describe the structure and validation rules for their application data. The network stores, indexes, and enforces these schemas directly. This eliminates entire classes of smart contract vulnerabilities (reentrancy, unchecked external calls, gas manipulation). Smart contract support is planned for Platform v4.0 (targeted for mainnet in 2027).

Token Support

| | Bitcoin | Ethereum | Solana | Polkadot | NEAR | Cosmos SDK | Avalanche | Dash Platform |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Native token standard | - BRC-20 via inscriptions | +++ ERC-20 / ERC-721 / ERC-1155 | ++ SPL Token | + Per-parachain | ++ NEP-141 / NEP-171 | + Per-chain | ++ ERC-20 (C-Chain) | +++ Protocol-native tokens with declarative rules |
| Token creation | - Requires third-party indexer | ++ Deploy smart contract | ++ Deploy program | + Deploy parachain | ++ Deploy smart contract | + Build app-chain | ++ Deploy smart contract | +++ Declare in data contract -- no code deployment |
| Freeze / pause | - No | + Only if contract implements it | ++ Mint authority can freeze | + Per-parachain | + Only if contract implements it | + Per-chain | + Only if contract implements it | +++ Protocol-level freeze, pause, and destroy |
| Minting authority | - N/A | + Contract owner / governance | + Mint authority | + Per-parachain | + Contract owner | + Per-chain | + Contract owner | +++ Individuals, groups with threshold signing, or pre-programmed schedules |
| Pre-programmed distributions | - No | + Requires contract logic | + Requires program logic | - No | + Requires contract logic | + Requires app logic | + Requires contract logic | +++ Native: time-based, epoch-based perpetual distributions |

Dash Platform tokens are first-class protocol objects rather than smart contract deployments. Token behavior (minting rules, supply caps, freeze authority, distribution schedules) is configured declaratively in data contracts and enforced by the protocol itself.

Project and Ecosystem

| | Bitcoin | Ethereum | Solana | Polkadot | NEAR | Cosmos SDK | Avalanche | Dash Platform |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| License | MIT | Various (GPL, Apache, MIT) | Apache 2.0 | GPL 3.0 | Apache 2.0 / MIT | Apache 2.0 | BSD 3-Clause | MIT |
| Open source | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Core language | C++ | Go, Rust | Rust | Rust | Rust | Go | Go | Rust |
| Client SDKs | + Multiple (community) | +++ web3.js, ethers.js, viem | ++ @solana/web3.js | + Polkadot.js | + near-api-js | + CosmJS | ++ ethers.js (C-Chain) | ++ Rust, JavaScript, Swift (iOS), Android (coming) |
| Launched | 2009 | 2015 | 2020 | 2020 | 2020 | 2019 (SDK) | 2020 | 2024 (v1.0 mainnet) |
| Ecosystem maturity | +++ Largest, most established | +++ Largest smart contract ecosystem | ++ Fast-growing DeFi ecosystem | + Growing parachain ecosystem | + Growing dApp ecosystem | ++ Many sovereign chains | ++ Growing subnet ecosystem | + Early stage, growing |
| Identity system | - Addresses only | + ENS (contract-based) | - No native identity | - No native identity | + Named accounts | - No native identity | - No native identity | +++ Protocol-native identities with hierarchical keys and DPNS usernames |
| Native token privacy | + UTXO model allows address rotation | - Account model, all activity linked to one address | - Account model, fully transparent | - Per-parachain, generally transparent | - Account model, fully transparent | - Generally transparent | - Account model, fully transparent | +++ Shielded pool with Orchard/Halo2 ZK proofs |

SDK Support

Dash Platform provides SDKs for multiple languages and environments so developers can build applications on whatever stack they prefer.

Available SDKs

| SDK | Language | Status | Package | Use case |
| --- | --- | --- | --- | --- |
| Rust SDK | Rust | Available now | rs-sdk | Server-side applications, full-node tooling, direct protocol access |
| JavaScript SDK | JavaScript / TypeScript | Available now | js-evo-sdk | Node.js backends, scripts, CLI tools |
| iOS SDK | Swift | Coming in v3.1 | swift-sdk | iOS and macOS applications |
| Android SDK | Kotlin | Coming in v3.2 | -- | Android applications |

Supporting packages

| Package | Purpose |
| --- | --- |
| rs-sdk-ffi | C FFI layer over the Rust SDK; used by the Swift SDK, the Android SDK, and any language that can call C |

Choosing an SDK

Building a server or CLI tool? Use the Rust SDK for maximum performance and direct access to all protocol features, or the JavaScript SDK if your stack is Node.js.

Building an iOS or macOS app? Use the Swift SDK (v3.1+), which wraps the Rust SDK through an FFI layer and provides native Swift types.

Building an Android app? The Android SDK (v3.2+) will wrap the same FFI layer with native Kotlin types.

Building for another language? The FFI layer (rs-sdk-ffi) exposes a C-compatible interface that can be called from Python, C#, or any language with C interop support.

What every SDK provides

All SDKs share the same underlying Rust implementation, so behavior is consistent across platforms:

  • Identity management -- create, top up, and manage identities with hierarchical key support
  • Data contract deployment -- define and publish JSON Schema-based data contracts
  • Document operations -- create, update, delete, and query documents with proof verification
  • Token operations -- query balances, supply, statuses, and pre-programmed distributions
  • Name registration -- register and resolve DPNS usernames
  • Proof verification -- every query response can be cryptographically verified against the platform state root

Getting Started

This guide covers prerequisites and local development setup for the Dash Platform monorepo.

Prerequisites

  • Node.js v20+

  • Docker v20.10+

  • Rust v1.92+, with the wasm32 target:

    rustup target add wasm32-unknown-unknown
    
  • protoc (Protocol Buffers compiler) v32.0+. If protoc is not on your PATH, set the PROTOC environment variable to the binary location.

  • wasm-bindgen-cli:

    cargo install wasm-bindgen-cli@0.2.103
    

Important: the wasm-bindgen-cli version must match the wasm-bindgen version in Cargo.lock. Check it with grep -A 1 'name = "wasm-bindgen"' Cargo.lock, which prints the pinned version on the following line.

    Depending on your system, you may need additional packages before wasm-bindgen-cli will compile (e.g. clang, llvm, libssl-dev).

  • wasm-pack:

    curl https://rustwasm.github.io/wasm-pack/installer/init.sh -sSf | sh
    
  • Build essentials (Debian / Ubuntu):

    apt install -y build-essential libssl-dev pkg-config clang cmake llvm
    

macOS-specific notes

The built-in Apple llvm toolchain does not work for WASM compilation. Install LLVM from Homebrew and put it on your PATH:

brew install llvm
echo 'export PATH="/opt/homebrew/opt/llvm/bin:$PATH"' >> ~/.zshrc
source ~/.zshrc

If you use Bash, replace ~/.zshrc with ~/.bash_profile. You can check your default shell with echo $SHELL.

Setup

# Enable corepack (ships with Node.js) to get the correct yarn version
corepack enable

# Install dependencies, configure, and build all packages
yarn setup

# Start the local development environment (runs a local Dash network in Docker)
yarn start

# Run the full test suite (requires a running local node)
yarn test

# Stop the local environment (frees system resources)
yarn stop

# Rebuild after making changes
yarn build

# If you need to restart services after a rebuild
yarn restart

# Complete reset of all local data and builds
yarn reset

Running package-level tests

You can run tests for a single package instead of the entire suite:

yarn workspace <package_name> test

For example:

yarn workspace @dashevo/dapi-client test

See the packages directory for the full list of available packages.

Rust development

# Run tests for a specific Rust crate
cargo test -p <crate_name>

# Run all Rust workspace tests
cargo test --workspace

# Check compilation without building
cargo check --workspace

# Run the clippy linter
cargo clippy --workspace

# Format all Rust code
cargo fmt --all

Monorepo Overview

Dash Platform ships as a single Git repository containing 44 Rust crates, a handful of JavaScript/TypeScript packages, and supporting tooling. This chapter maps the territory: what each crate owns, how they depend on one another, and where the boundaries are drawn.

The Workspace

The top-level Cargo.toml declares a workspace with resolver = "2". It pins a few critical external dependencies at the workspace level -- most notably dashcore (the Rust Dash Core library) and the GroveDB family of crates:

# From Cargo.toml (workspace root)
[workspace.dependencies]
dashcore = { git = "https://github.com/dashpay/rust-dashcore", rev = "53d699c..." }

The workspace version (3.0.1 at time of writing, Rust edition 2021, MSRV 1.92) is shared by all member crates through version.workspace = true.

The Core Dependency Chain

Four crates form the spine of the platform. Understanding their dependency order is the single most important thing for navigating the codebase:

    dpp (rs-dpp)
      |
      v
    drive (rs-drive)
      |
      v
    drive-abci (rs-drive-abci)
      |
      v
    dash-sdk (rs-sdk)

Each layer adds a concern:

dpp -- Dash Platform Protocol

Crate name: dpp
Path: packages/rs-dpp

DPP defines the data model of the platform. This is where you will find:

  • Identity, IdentityPublicKey, and identity state transitions
  • DataContract and the JSON Schema validation logic
  • Document and document state transitions
  • StateTransition -- the enum that unifies all transition types
  • Prelude types used everywhere: Identifier, BlockHeight, IdentityNonce
  • Token, voting, and withdrawal types

// From packages/rs-dpp/src/lib.rs
pub mod data_contract;
pub mod document;
pub mod identifier;
pub mod identity;
pub mod state_transition;
pub mod tokens;
pub mod voting;
pub mod fee;
pub mod validation;

DPP is deliberately storage-agnostic. It knows nothing about GroveDB, ABCI, or gRPC. It depends on platform-version, platform-value, and platform-serialization for versioned encoding, but it never opens a database.

A key design choice: DPP uses feature flags extensively. The Cargo.toml lists over 80 features that control which serialization formats, validation paths, and system contracts are compiled in. The abci feature, for instance, pulls in the subset needed by Drive-ABCI without dragging in client-side concerns:

# From packages/rs-dpp/Cargo.toml
[features]
abci = [
  "state-transitions",
  "state-transition-validation",
  "validation",
  "random-public-keys",
  "identity-serialization",
  "vote-serialization",
  "platform-value-cbor",
  "core-types",
  "core-types-serialization",
  "core-types-serde-conversion",
  "core_rpc_client",
]

Rule: If you are adding a new type that both the server and the SDK need, put it in DPP. If it needs GroveDB, it belongs in Drive.

drive -- Dash Drive

Crate name: drive
Path: packages/rs-drive

Drive is the storage engine. It wraps GroveDB (a Merkle-tree-based key-value store) and provides domain-specific operations: inserting documents, managing identity balances, tracking fee pools, building cryptographic proofs.

// From packages/rs-drive/src/lib.rs
#[cfg(any(feature = "server", feature = "verify"))]
pub mod drive;
#[cfg(any(feature = "server", feature = "verify"))]
pub mod query;
#[cfg(feature = "server")]
pub mod state_transition_action;
#[cfg(any(feature = "server", feature = "verify"))]
pub mod verify;
#[cfg(feature = "server")]
pub mod fees;

Notice the feature split: server includes everything needed to write to the database, while verify includes only what is needed to prove and verify queries. The SDK only enables verify, which means it can check GroveDB proofs without linking in the full storage engine.

Drive depends on DPP for type definitions and on the GroveDB family of crates for storage:

# From packages/rs-drive/Cargo.toml
dpp = { path = "../rs-dpp", features = ["state-transitions"], default-features = false }
grovedb = { git = "https://github.com/dashpay/grovedb", rev = "33dfd48...", optional = true }
grovedb-path = { git = "https://github.com/dashpay/grovedb", rev = "33dfd48..." }
grovedb-costs = { git = "https://github.com/dashpay/grovedb", rev = "33dfd48...", optional = true }

Rule: Never import drive with the server feature from a client-side crate. Use verify only.

drive-abci -- The ABCI Application

Crate name: drive-abci
Path: packages/rs-drive-abci

This is the application server. It implements the Tenderdash ABCI interface, orchestrates block processing, validates state transitions, manages platform state, and serves gRPC queries. It is the binary that masternodes run.

Drive-ABCI depends on both DPP and Drive, plus the Tenderdash ABCI library and the dapi-grpc protobuf definitions:

# From packages/rs-drive-abci/Cargo.toml
drive = { path = "../rs-drive", default-features = false, features = ["server"] }
dpp = { path = "../rs-dpp", default-features = false, features = ["abci"] }
tenderdash-abci = { git = "https://github.com/dashpay/rs-tenderdash-abci", tag = "v1.5.0" }
dapi-grpc = { path = "../dapi-grpc", default-features = false, features = ["server", "platform"] }

The crate is organized into three major subsystems:

  1. abci/ -- Tenderdash handler functions (prepare_proposal, process_proposal, finalize_block, check_tx, etc.)
  2. execution/ -- Block processing engine, state transition validation, platform events (epoch changes, withdrawals, voting)
  3. query/ -- gRPC query service implementing the Platform API

Rule: Business logic goes in execution/. The abci/ handlers should be thin wrappers that delegate to the execution engine.

dash-sdk -- The Client SDK

Crate name: dash-sdk
Path: packages/rs-sdk

The SDK is what application developers use. It provides high-level methods for fetching documents, creating identities, broadcasting state transitions, and verifying proofs. It depends on DPP for types, Drive (with verify only) for proof verification, and rs-dapi-client for network communication:

# From packages/rs-sdk/Cargo.toml
dpp = { path = "../rs-dpp", default-features = false, features = ["dash-sdk-features"] }
drive = { path = "../rs-drive", default-features = false, features = ["verify"] }
drive-proof-verifier = { path = "../rs-drive-proof-verifier", default-features = false }
rs-dapi-client = { path = "../rs-dapi-client", default-features = false }

Supporting Crates

Several smaller crates provide cross-cutting infrastructure:

platform-version

Path: packages/rs-platform-version

The versioning backbone. Defines PlatformVersion, ProtocolVersion, and the version tables for every consensus-critical method across DPP, Drive, and Drive-ABCI. Currently tracks 12 protocol versions (v1 through v12).

// From packages/rs-platform-version/src/version/mod.rs
pub type ProtocolVersion = u32;
pub const LATEST_VERSION: ProtocolVersion = PROTOCOL_VERSION_12;
pub const INITIAL_PROTOCOL_VERSION: ProtocolVersion = 1;

Every crate in the dependency chain depends on platform-version. It is the root of the version tree.
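
Conceptually, the crate is a static table keyed by protocol version. Below is a simplified sketch of the lookup pattern; the real entries carry the full sub-version structs shown in the Introduction, and the accessor name here is an assumption modeled on the common PlatformVersion::get idiom rather than a verbatim signature.

```rust
pub type ProtocolVersion = u32;

/// Simplified stand-in for a PlatformVersion table entry.
pub struct VersionEntry {
    pub protocol_version: ProtocolVersion,
    pub finalize_block_proposal: u16, // one method version, for illustration
}

// One entry per released protocol version; method versions only change
// when that method's behavior changes.
static VERSIONS: &[VersionEntry] = &[
    VersionEntry { protocol_version: 1, finalize_block_proposal: 0 },
    VersionEntry { protocol_version: 2, finalize_block_proposal: 0 },
];

/// Look up the table entry for a protocol version; an unknown version
/// is an error rather than a silent fallback.
pub fn get(version: ProtocolVersion) -> Result<&'static VersionEntry, String> {
    VERSIONS
        .iter()
        .find(|e| e.protocol_version == version)
        .ok_or_else(|| format!("unknown protocol version {version}"))
}
```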

platform-serialization

Path: packages/rs-platform-serialization

A thin wrapper around bincode that adds platform-version-aware serialization. Paired with platform-serialization-derive for derive macros that generate versioned Encode/Decode implementations.

platform-value

Path: packages/rs-platform-value

A dynamically-typed value type (think serde_json::Value but with binary support). Used as the interchange format when converting between JSON, CBOR, and Rust types, especially in data contract and document processing.
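
A minimal model of the idea -- a dynamic value enum with a first-class binary variant, which serde_json::Value lacks. Variant names are simplified here; the real platform_value::Value has many more.

```rust
use std::collections::BTreeMap;

/// Simplified dynamic value: like serde_json::Value, plus first-class
/// binary data, which fields such as identifiers need.
#[derive(Debug, Clone, PartialEq)]
pub enum Value {
    Null,
    Bool(bool),
    U64(u64),
    Text(String),
    Bytes(Vec<u8>), // the variant JSON has no native home for
    Map(BTreeMap<String, Value>),
}

impl Value {
    /// Fetch a nested field by key, as conversion code does when reading
    /// loosely-typed input before mapping it onto a concrete Rust type.
    pub fn get(&self, key: &str) -> Option<&Value> {
        match self {
            Value::Map(m) => m.get(key),
            _ => None,
        }
    }
}
```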

drive-proof-verifier

Path: packages/rs-drive-proof-verifier

Client-side proof verification. Takes a GroveDB proof returned by a platform query and verifies it against a known root hash. Used by the SDK and the WASM bindings.

dapi-grpc

Path: packages/dapi-grpc

Protobuf definitions and generated Rust code for the Platform gRPC API. Both server-side (Drive-ABCI) and client-side (SDK, DAPI client) depend on this crate, using the server and client features respectively.

What Is GroveDB?

GroveDB is an external dependency -- a Merkle-tree-based authenticated data structure built on RocksDB. It is not part of this repository but is central to understanding Drive.

Key properties:

  • Authenticated: Every read can produce a cryptographic proof that the data (or its absence) is consistent with the root hash stored in the block header.
  • Hierarchical: Data is organized into nested trees (subtrees), addressed by paths. A document lives at a path like [contract_id, document_type, document_id].
  • Sum trees: Some subtrees track the sum of their leaf values, used for balance accounting and fee verification.
  • Transactional: All writes happen inside a transaction that can be committed or rolled back atomically.
  • Cost-tracking: Every operation returns a CostResult that records storage and processing costs.

GroveDB is pinned to a specific Git revision in the workspace Cargo.toml and referenced by five sub-crates: grovedb, grovedb-path, grovedb-costs, grovedb-storage, and grovedb-version.
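
The sum-tree property is worth a concrete toy model. There is no Merkle hashing below, just the invariant: a subtree caches the sum of its leaves, so a total (say, all identity balances) is an O(1) read. In GroveDB the cached sum is part of the Merkle structure and therefore covered by the root hash.

```rust
use std::collections::BTreeMap;

/// Toy sum tree: leaves are (key -> value) and the node caches the sum.
/// GroveDB maintains the same invariant inside its authenticated trees.
pub struct SumTreeSketch {
    leaves: BTreeMap<Vec<u8>, i64>,
    sum: i64,
}

impl SumTreeSketch {
    pub fn new() -> Self {
        SumTreeSketch { leaves: BTreeMap::new(), sum: 0 }
    }

    /// Insert or replace a leaf, keeping the cached sum in step.
    pub fn insert(&mut self, key: Vec<u8>, value: i64) {
        if let Some(old) = self.leaves.insert(key, value) {
            self.sum -= old;
        }
        self.sum += value;
    }

    /// O(1) read of the total -- no walk over the leaves.
    pub fn sum(&self) -> i64 {
        self.sum
    }
}
```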

The Full Crate Map

Here is a simplified view of every Rust workspace member, grouped by role:

| Role | Crates |
| --- | --- |
| Protocol types | dpp, platform-value, platform-serialization, platform-serialization-derive, platform-versioning, platform-value-convertible |
| Storage | drive |
| Application server | drive-abci |
| Client SDK | dash-sdk, rs-dapi-client, dash-context-provider, rs-sdk-trusted-context-provider |
| Proof verification | drive-proof-verifier |
| gRPC definitions | dapi-grpc |
| WASM bindings | wasm-dpp, wasm-dpp2, wasm-sdk, wasm-drive-verify |
| iOS/FFI | rs-sdk-ffi |
| System contracts | dpns-contract, dashpay-contract, withdrawals-contract, masternode-reward-shares-contract, feature-flags-contract, wallet-utils-contract, token-history-contract, keyword-search-contract, data-contracts |
| Tooling | dashmate (JS), strategy-tests, simple-signer, check-features, json-schema-compatibility-validator |
| Other | dash-platform-macros, rs-dash-event-bus, rs-platform-wallet, dash-platform-balance-checker, rs-dapi |

Rules

Do:

  • Follow the dependency direction. DPP never imports Drive. Drive never imports Drive-ABCI.
  • Use feature flags to keep compilation lean. The SDK should never compile server-side code.
  • Put new domain types in DPP. Put new storage operations in Drive. Put new validation logic in Drive-ABCI.

Don't:

  • Add GroveDB as a dependency to DPP. If you need tree structure knowledge in a type definition, use a path abstraction.
  • Enable the server feature of Drive in client-facing crates. This pulls in RocksDB and doubles compile times.
  • Create new top-level crates without updating the workspace Cargo.toml and ensuring the dependency direction is maintained.

Component Pipeline

This chapter traces a request from the moment it leaves a client application to the moment its effects are committed to GroveDB. Understanding this pipeline is essential because every bug, every audit finding, and every performance issue lives somewhere along this path.

The Big Picture

Client App
  |
  | gRPC (protobuf)
  v
DAPI (rs-dapi) -----------> Drive-ABCI query service (reads)
  |
  | BroadcastStateTransition
  v
Tenderdash mempool
  |
  | ABCI (check_tx, prepare_proposal, process_proposal, finalize_block)
  v
Drive-ABCI (rs-drive-abci)
  |
  | DriveOperations
  v
Drive (rs-drive)
  |
  | GroveDB transaction
  v
GroveDB -> RocksDB

There are two fundamentally different paths through this pipeline:

  1. Reads (queries): Client sends a gRPC query, DAPI forwards it to Drive-ABCI's query service, which reads from GroveDB and returns data with a Merkle proof. No consensus involved.

  2. Writes (state transitions): Client broadcasts a state transition, it enters the Tenderdash mempool, goes through consensus, and is applied to GroveDB during block processing.

Let's trace the write path in detail -- it is where all the complexity lives.

DAPI: The Entry Point

DAPI (packages/rs-dapi) is the internet-facing gRPC server. It implements two roles:

  • Platform queries: Forwarded directly to Drive-ABCI's gRPC service (which runs as a separate listener within the same process).
  • State transition broadcast: Submitted to Tenderdash via its RPC interface.
// Simplified from packages/rs-dapi/src/services/platform_service/broadcast_state_transition.rs
Client --gRPC--> DAPI --Tenderdash RPC--> Tenderdash mempool

DAPI itself does minimal validation. It is a routing layer. The heavy lifting happens in Drive-ABCI.

The ABCI Handler Layer

When Tenderdash needs the application to do something -- check a transaction, build a block, validate a proposal, or finalize a block -- it calls an ABCI method. Drive-ABCI implements these in packages/rs-drive-abci/src/abci/handler/:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/handler/mod.rs
mod check_tx;
mod echo;
mod extend_vote;
mod finalize_block;
mod info;
mod init_chain;
mod prepare_proposal;
mod process_proposal;
mod verify_vote_extension;
}

These handlers are implemented against trait bounds, not concrete types:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/handler/finalize_block.rs
pub fn finalize_block<'a, A, C>(
    app: &A,
    request: proto::RequestFinalizeBlock,
) -> Result<proto::ResponseFinalizeBlock, Error>
where
    A: PlatformApplication<C> + TransactionalApplication<'a> + BlockExecutionApplication,
    C: CoreRPCLike,
{ ... }
}

The FullAbciApplication struct wires everything together. It holds a reference to Platform, a GroveDB transaction, and the current block execution context:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/app/full.rs
pub struct FullAbciApplication<'a, C> {
    pub platform: &'a Platform<C>,
    pub transaction: RwLock<Option<Transaction<'a>>>,
    pub block_execution_context: RwLock<Option<BlockExecutionContext>>,
}
}

It implements the tenderdash_abci::Application trait, delegating each method to the corresponding handler function.

Block Processing Lifecycle

Tenderdash uses a proposal-based consensus model. Here is the sequence of ABCI calls for a single block:

1. check_tx -- Mempool Gatekeeper

Before a state transition enters the mempool, Tenderdash calls check_tx. This is a lightweight validation that runs outside of consensus:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/handler/check_tx.rs
pub fn check_tx<C>(
    platform: &Platform<C>,
    core_rpc: &C,
    request: proto::RequestCheckTx,
) -> Result<proto::ResponseCheckTx, Error>
where
    C: CoreRPCLike,
{
    let platform_state = platform.state.load();
    let platform_version = platform_state.current_platform_version()?;

    // `tx`, `r#type`, and `platform_ref` are derived from the request (elided here)
    let validation_result = platform.check_tx(
        tx.as_slice(),
        r#type.try_into()?,
        &platform_ref,
        platform_version,
    );
    // ...
}
}

check_tx operates in two modes: mode 0 (new transaction) and mode 1 (re-check existing mempool transactions after a new block). It returns a fee estimate (gas_wanted), a priority for ordering, and a sender identifier for deduplication. Importantly, check_tx does not run inside a GroveDB transaction -- it reads committed state only.
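
The two modes can be sketched as a small dispatch. This is an illustrative stand-in, not the actual types in rs-drive-abci: the enum and function names here are hypothetical, and the real handler derives the mode from the protobuf request type.

```rust
// Hypothetical sketch of the two check_tx modes; names are stand-ins.
#[derive(Debug, PartialEq)]
enum CheckTxLevel {
    /// Mode 0: full validation of a transaction seen for the first time.
    FirstTimeCheck,
    /// Mode 1: cheaper re-validation of mempool transactions after a block.
    Recheck,
}

fn level_from_mode(mode: i32) -> Result<CheckTxLevel, String> {
    match mode {
        0 => Ok(CheckTxLevel::FirstTimeCheck),
        1 => Ok(CheckTxLevel::Recheck),
        other => Err(format!("unsupported check_tx mode {other}")),
    }
}

fn main() {
    assert_eq!(level_from_mode(0), Ok(CheckTxLevel::FirstTimeCheck));
    assert_eq!(level_from_mode(1), Ok(CheckTxLevel::Recheck));
    assert!(level_from_mode(7).is_err());
}
```

The distinction matters for performance: rechecks run against every mempool transaction after each block, so they must stay cheap.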

2. prepare_proposal -- The Proposer Builds a Block

The block proposer calls prepare_proposal with a list of candidate transactions. The handler starts a GroveDB transaction and runs the full block proposal:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/handler/prepare_proposal.rs
// Start a GroveDB transaction
app.start_transaction();

// Run the full proposal (validates + executes all state transitions)
let mut run_result = app.platform().run_block_proposal(
    block_proposal,
    true,         // known_from_us = true (we are the proposer)
    &platform_state,
    transaction,
    Some(&timer),
)?;
}

The response tells Tenderdash which transactions to keep, remove, or delay:

  • TxAction::Unmodified -- valid transition, include in block
  • TxAction::Removed -- unpaid error or internal error, strip from block
  • TxAction::Delayed -- exceeded max block size, try next block
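
The mapping from execution outcomes to tx actions can be pictured as a simple match. This is a simplified sketch -- the enum variants and the `action_for` function are illustrative stand-ins, not the real prepare_proposal types:

```rust
// Simplified stand-in: execution outcome -> tx action reported to Tenderdash.
#[derive(Debug, PartialEq)]
enum TxAction { Unmodified, Removed, Delayed }

enum ExecutionOutcome {
    SuccessfullyExecuted,
    UnpaidOrInternalError,
    ExceededMaxBlockSize,
}

fn action_for(outcome: &ExecutionOutcome) -> TxAction {
    match outcome {
        ExecutionOutcome::SuccessfullyExecuted => TxAction::Unmodified,
        ExecutionOutcome::UnpaidOrInternalError => TxAction::Removed,
        ExecutionOutcome::ExceededMaxBlockSize => TxAction::Delayed,
    }
}

fn main() {
    assert_eq!(action_for(&ExecutionOutcome::SuccessfullyExecuted), TxAction::Unmodified);
    assert_eq!(action_for(&ExecutionOutcome::ExceededMaxBlockSize), TxAction::Delayed);
}
```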

3. process_proposal -- Validators Verify the Block

Non-proposing validators receive the block and call process_proposal. This runs the same run_block_proposal logic but with known_from_us = false:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/handler/process_proposal.rs
let run_result = app.platform().run_block_proposal(
    (&request).try_into()?,
    false,        // known_from_us = false (we are validating someone else's proposal)
    &platform_state,
    transaction,
    None,
)?;
}

A key optimization: if the validator was also the proposer for this round (same height and round), the cached result from prepare_proposal is reused:

#![allow(unused)]
fn main() {
// From process_proposal.rs -- cache hit path
if let Some(proposal_info) = block_execution_context.proposer_results() {
    return Ok(proto::ResponseProcessProposal {
        status: proto::response_process_proposal::ProposalStatus::Accept.into(),
        app_hash: proposal_info.app_hash.clone(),
        tx_results: proposal_info.tx_results.clone(),
        // ...
    });
}
}

If the proposal contains failed or unpaid transitions, the validator rejects it.

4. finalize_block -- Commit

After consensus is reached, Tenderdash calls finalize_block. This is where the GroveDB transaction is committed to disk:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/abci/handler/finalize_block.rs
let block_finalization_outcome = app.platform().finalize_block_proposal(
    request_finalize_block,
    block_execution_context,
    transaction,
    platform_version,
)?;

// Commit the GroveDB transaction
let result = app.commit_transaction(platform_version);
}

After commit, the block height counter is updated and, if needed, a GroveDB checkpoint is created for crash recovery.

Inside run_block_proposal

The run_block_proposal method in packages/rs-drive-abci/src/execution/engine/run_block_proposal/v0/mod.rs is the heart of the block processing engine. It orchestrates everything that happens within a single block. Here is the sequence, in order:

1.  Verify protocol version matches expected version
2.  Validate block follows previous block (height, core height)
3.  Clear drive block cache
4.  Verify chain lock (if core chain lock update present)
5.  Update core info (masternode list, quorums)
6.  Update validator proposed app version
7.  Rebroadcast expired withdrawals
8.  Update broadcasted withdrawal statuses
9.  Dequeue and build unsigned withdrawal transactions
10. Run DAO platform events (vote tallying, contested documents)
11. Process raw state transitions  <-- the main work
12. Store address balance changes
13. Clean up expired address balance entries
14. Pool withdrawals into transaction queue
15. Clean up expired withdrawal amount locks
16. Process block fees and validate sum trees
17. Compute root hash (app_hash)
18. Determine validator set update

Steps 1-10 are "block-level housekeeping." Step 11 is where individual state transitions are decoded, validated, transformed into actions, and applied to GroveDB. Steps 12-18 finalize the block's effects.
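
As a compressed skeleton, the ordering constraint looks like this. The step groupings and names below are illustrative, not the real rs-drive-abci method names -- the point is only that the sequence is hard-coded, with state transition processing in the middle:

```rust
// Hypothetical skeleton of the fixed step ordering; names are illustrative.
fn run_block_proposal_skeleton() -> Vec<&'static str> {
    let mut executed = Vec::new();
    executed.push("housekeeping");                  // steps 1-10
    executed.push("process_raw_state_transitions"); // step 11: the main work
    executed.push("balances_withdrawals_and_fees"); // steps 12-16
    executed.push("app_hash_and_validator_update"); // steps 17-18
    executed
}

fn main() {
    let order = run_block_proposal_skeleton();
    assert_eq!(order[1], "process_raw_state_transitions");
    assert_eq!(order.last(), Some(&"app_hash_and_validator_update"));
}
```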

State Transition Processing

State transition processing (step 11 above) follows its own pipeline within packages/rs-drive-abci/src/execution/platform_events/state_transition_processing/:

decode_raw_state_transitions
       |
       v
process_raw_state_transitions
       |
       v
   For each transition:
       |
       +-> validate (structure, state, signatures)
       +-> transform_into_action
       +-> validate_fees_of_event
       +-> execute_event (apply DriveOperations to GroveDB)

The validation itself is split into modules per transition type:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/mod.rs
pub mod batch;
pub mod identity_create;
pub mod identity_credit_transfer;
pub mod identity_credit_withdrawal;
pub mod identity_top_up;
pub mod identity_update;
pub mod data_contract_create;
pub mod data_contract_update;
pub mod masternode_vote;
// ... and more
}

Each module provides versioned validation methods dispatched through PlatformVersion, following the same pattern shown in the Introduction.
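
The dispatch pattern, in miniature, looks like this. The struct and method below are simplified stand-ins (the real code reads a deeply nested field of PlatformVersion), but the match shape is the canonical one: route on the FeatureVersion, and return a version-mismatch error for anything unknown:

```rust
// Miniature of the versioned-dispatch pattern; types are stand-ins.
type FeatureVersion = u16;

struct ValidationVersions {
    identity_create: FeatureVersion,
}

fn validate_identity_create(versions: &ValidationVersions) -> Result<&'static str, String> {
    match versions.identity_create {
        0 => Ok("validated with v0 rules"),
        1 => Ok("validated with v1 rules"),
        version => Err(format!(
            "unknown version {version} for validate_identity_create"
        )),
    }
}

fn main() {
    let snapshot = ValidationVersions { identity_create: 0 };
    assert_eq!(validate_identity_create(&snapshot), Ok("validated with v0 rules"));
    let future = ValidationVersions { identity_create: 9 };
    assert!(validate_identity_create(&future).is_err());
}
```

The error arm is essential: a node running old software that encounters a version it does not know must fail loudly rather than guess.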

The Query Path

Queries bypass consensus entirely. Drive-ABCI implements the Platform gRPC service directly:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/query/service.rs
// Implements dapi_grpc::platform::v0::platform_server::Platform
// with methods like:
//   get_identity()
//   get_documents()
//   get_data_contract()
//   get_identity_balance()
//   ... (50+ query endpoints)
}

Each query reads from the committed GroveDB state (no transaction), generates a Merkle proof, and returns both the data and the proof. The client SDK uses drive-proof-verifier to independently verify that the returned data matches the proof against a known root hash.
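
The client-side check can be sketched with a toy hash in place of a real Merkle proof. This is purely illustrative: the real client re-executes a GroveDB proof via drive-proof-verifier to derive the root hash, rather than hashing data directly as done here:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for proof verification: derive a "root" from the returned
// data plus proof bytes and compare it to a trusted root hash.
fn toy_root(data: &[u8], proof: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    data.hash(&mut hasher);
    proof.hash(&mut hasher);
    hasher.finish()
}

fn verify(data: &[u8], proof: &[u8], trusted_root: u64) -> bool {
    toy_root(data, proof) == trusted_root
}

fn main() {
    let (data, proof) = (b"identity bytes".as_slice(), b"proof bytes".as_slice());
    let trusted = toy_root(data, proof);
    assert!(verify(data, proof, trusted));
    assert!(!verify(b"tampered", proof, trusted));
}
```

The key property is the same in both the toy and the real system: the server cannot return fabricated data without breaking the equality against a root hash the client already trusts.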

Protocol Version Upgrades During Block Processing

One subtlety worth highlighting: protocol version changes happen at epoch boundaries. The run_block_proposal method checks whether the current block is the first block of a new epoch and whether the locked-in next protocol version differs from the current one:

#![allow(unused)]
fn main() {
// From packages/rs-drive-abci/src/execution/engine/run_block_proposal/mod.rs
let block_platform_version = if epoch_info.is_epoch_change_but_not_genesis()
    && platform_state.next_epoch_protocol_version()
        != platform_state.current_protocol_version_in_consensus()
{
    let next_protocol_version = platform_state.next_epoch_protocol_version();
    let next_platform_version = PlatformVersion::get(next_protocol_version)?;

    // Perform structural changes for the new protocol version
    self.perform_events_on_first_block_of_protocol_change(
        platform_state,
        &block_info,
        transaction,
        old_protocol_version,
        next_platform_version,
    )?;

    next_platform_version
} else {
    last_committed_platform_version
};
}

This ensures that all nodes switch protocol versions at exactly the same block, and that any structural migrations (new GroveDB trees, schema changes) are applied atomically as part of that block's transaction.

Rules

Do:

  • Keep ABCI handlers thin. They should parse the request, delegate to the execution engine, and format the response. No business logic.
  • Always pass platform_version through the call stack. Never hard-code a version number in execution logic.
  • Run check_tx validation as a strict subset of proposal validation. If check_tx accepts a transition, process_proposal should not reject it (modulo state changes between the two calls).

Don't:

  • Read uncommitted state in check_tx. Because check_tx runs outside the block transaction, it sees committed state only.
  • Assume prepare_proposal and process_proposal see the same state. Another block may have been committed between the two calls if a round change occurs.
  • Add new block-level events without inserting them in the correct position in the run_block_proposal sequence. Order matters -- withdrawals must be processed before state transitions, fee accounting must happen after.
  • Panic in ABCI handlers except for truly unrecoverable situations (like app hash mismatches, which indicate data corruption).

Platform Version

The Problem: Deterministic Upgrades in a Distributed System

Dash Platform is a replicated state machine. Every masternode in the network processes the same transactions and must arrive at the exact same state. If even one node computes a fee differently, serializes a document with one extra byte, or validates a field that others skip, the chain forks.

Now imagine you need to ship a bug fix. In a normal application you deploy the new binary and move on. In a blockchain you have a harder constraint: not every node upgrades at the same moment. Some masternodes will still be running the old code when the new protocol version activates. The system needs a way to say "at protocol version N, use these exact behaviors" -- and it needs to be impossible for a developer to accidentally mix behaviors from different versions.

The answer is the PlatformVersion struct: a single, massive, immutable snapshot that pins every versioned behavior in the entire platform to a concrete value.

The PlatformVersion Struct

Open packages/rs-platform-version/src/version/protocol_version.rs and you will find the heart of the system:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug)]
pub struct PlatformVersion {
    pub protocol_version: ProtocolVersion,
    pub dpp: DPPVersion,
    pub drive: DriveVersion,
    pub drive_abci: DriveAbciVersion,
    pub consensus: ConsensusVersions,
    pub fee_version: FeeVersion,
    pub system_data_contracts: SystemDataContractVersions,
    pub system_limits: SystemLimits,
}
}

Where ProtocolVersion is simply:

#![allow(unused)]
fn main() {
pub type ProtocolVersion = u32;
}

Every field inside PlatformVersion is itself a version struct -- and those structs contain more version structs, all the way down to individual method version numbers. We will explore that nesting in the next chapter. For now, the key insight is that PlatformVersion is the root of a tree. Given a single protocol version number (like 7), you can resolve the exact version of every method, every fee parameter, every system limit, and every data contract schema across the entire platform.

Think of it like a lockfile in a package manager. Cargo.lock pins every transitive dependency to a specific version so that builds are reproducible. PlatformVersion does the same thing for runtime behavior: it pins every function version so that execution is deterministic.

The Version Array

Each protocol version gets its own constant, defined in a separate file. At the time of writing, the platform has twelve versions:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/version/mod.rs

pub type ProtocolVersion = u32;

pub const LATEST_VERSION: ProtocolVersion = PROTOCOL_VERSION_12;
pub const INITIAL_PROTOCOL_VERSION: ProtocolVersion = 1;
pub const ALL_VERSIONS: RangeInclusive<ProtocolVersion> = 1..=LATEST_VERSION;
}

These twelve snapshots are collected into a single static array in protocol_version.rs:

#![allow(unused)]
fn main() {
pub const PLATFORM_VERSIONS: &[PlatformVersion] = &[
    PLATFORM_V1,
    PLATFORM_V2,
    PLATFORM_V3,
    PLATFORM_V4,
    PLATFORM_V5,
    PLATFORM_V6,
    PLATFORM_V7,
    PLATFORM_V8,
    PLATFORM_V9,
    PLATFORM_V10,
    PLATFORM_V11,
    PLATFORM_V12,
];

pub const LATEST_PLATFORM_VERSION: &PlatformVersion = &PLATFORM_V12;
pub const DESIRED_PLATFORM_VERSION: &PlatformVersion = LATEST_PLATFORM_VERSION;
}

The array is indexed by protocol version number minus one (since versions are 1-indexed). PLATFORM_V1 sits at index 0, PLATFORM_V12 at index 11. This simple layout is what makes the get function so fast.

What a Version Snapshot Looks Like

Here is the very first version, PLATFORM_V1, slightly abbreviated:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/version/v1.rs

pub const PROTOCOL_VERSION_1: ProtocolVersion = 1;

pub const PLATFORM_V1: PlatformVersion = PlatformVersion {
    protocol_version: 1,
    drive: DRIVE_VERSION_V1,
    drive_abci: DriveAbciVersion {
        structs: DRIVE_ABCI_STRUCTURE_VERSIONS_V1,
        methods: DRIVE_ABCI_METHOD_VERSIONS_V1,
        validation_and_processing: DRIVE_ABCI_VALIDATION_VERSIONS_V1,
        withdrawal_constants: DRIVE_ABCI_WITHDRAWAL_CONSTANTS_V1,
        query: DRIVE_ABCI_QUERY_VERSIONS_V1,
        checkpoints: DRIVE_ABCI_CHECKPOINT_PARAMETERS_V1,
    },
    dpp: DPPVersion {
        costs: DPP_COSTS_VERSIONS_V1,
        validation: DPP_VALIDATION_VERSIONS_V1,
        state_transition_serialization_versions: STATE_TRANSITION_SERIALIZATION_VERSIONS_V1,
        state_transition_conversion_versions: STATE_TRANSITION_CONVERSION_VERSIONS_V1,
        state_transition_method_versions: STATE_TRANSITION_METHOD_VERSIONS_V1,
        state_transitions: STATE_TRANSITION_VERSIONS_V1,
        contract_versions: CONTRACT_VERSIONS_V1,
        document_versions: DOCUMENT_VERSIONS_V1,
        identity_versions: IDENTITY_VERSIONS_V1,
        voting_versions: VOTING_VERSION_V1,
        token_versions: TOKEN_VERSIONS_V1,
        asset_lock_versions: DPP_ASSET_LOCK_VERSIONS_V1,
        methods: DPP_METHOD_VERSIONS_V1,
        factory_versions: DPP_FACTORY_VERSIONS_V1,
    },
    system_data_contracts: SYSTEM_DATA_CONTRACT_VERSIONS_V1,
    fee_version: FEE_VERSION1,
    system_limits: SYSTEM_LIMITS_V1,
    consensus: ConsensusVersions {
        tenderdash_consensus_version: 0,
    },
};
}

Now compare with PLATFORM_V12, the latest at the time of writing:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/version/v12.rs

pub const PLATFORM_V12: PlatformVersion = PlatformVersion {
    protocol_version: PROTOCOL_VERSION_12,
    drive: DRIVE_VERSION_V6,          // was V1
    drive_abci: DriveAbciVersion {
        structs: DRIVE_ABCI_STRUCTURE_VERSIONS_V1,
        methods: DRIVE_ABCI_METHOD_VERSIONS_V7,   // was V1
        validation_and_processing: DRIVE_ABCI_VALIDATION_VERSIONS_V7, // was V1
        withdrawal_constants: DRIVE_ABCI_WITHDRAWAL_CONSTANTS_V2,     // was V1
        query: DRIVE_ABCI_QUERY_VERSIONS_V1,
        checkpoints: DRIVE_ABCI_CHECKPOINT_PARAMETERS_V1,
    },
    dpp: DPPVersion {
        costs: DPP_COSTS_VERSIONS_V1,
        validation: DPP_VALIDATION_VERSIONS_V2,  // was V1
        state_transitions: STATE_TRANSITION_VERSIONS_V3, // was V1
        contract_versions: CONTRACT_VERSIONS_V3,         // was V1
        document_versions: DOCUMENT_VERSIONS_V3,         // was V1
        // ... other fields, some still V1, some bumped
        methods: DPP_METHOD_VERSIONS_V2,                 // was V1
        factory_versions: DPP_FACTORY_VERSIONS_V1,
        // ...
    },
    fee_version: FEE_VERSION2,   // was VERSION1
    consensus: ConsensusVersions {
        tenderdash_consensus_version: 1,  // was 0
    },
    // ...
};
}

Notice how only some subsystem versions change between V1 and V12. The query versions stayed at V1 across all twelve protocol versions because the query logic never changed. The ABCI method versions, on the other hand, went from V1 all the way to V7 -- seven revisions of the block processing logic.

This is the power of the snapshot model: each subsystem version evolves at its own pace. A new protocol version does not require bumping everything. You only change the sub-constants that actually differ.

The Get Dispatch

The most important function on PlatformVersion is get:

#![allow(unused)]
fn main() {
impl PlatformVersion {
    pub fn get<'a>(version: ProtocolVersion) -> Result<&'a Self, PlatformVersionError> {
        if version > 0 {
            PLATFORM_VERSIONS.get(version as usize - 1).ok_or_else(|| {
                PlatformVersionError::UnknownVersionError(
                    format!("no platform version {version}")
                )
            })
        } else {
            Err(PlatformVersionError::UnknownVersionError(
                format!("no platform version {version}")
            ))
        }
    }
}
}

This is a simple array lookup. Protocol version 1 maps to index 0, version 12 to index 11. If the version number is out of range, you get a clear error. No hash maps, no runtime registration, no dynamic dispatch -- just a static array of compile-time constants.
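
A self-contained miniature of the same lookup makes the 1-indexing concrete. The two-element array below is a stand-in for PLATFORM_VERSIONS:

```rust
// Miniature of the 1-indexed lookup: version N lives at index N - 1.
#[derive(Debug)]
struct PlatformVersion { protocol_version: u32 }

const VERSIONS: &[PlatformVersion] = &[
    PlatformVersion { protocol_version: 1 },
    PlatformVersion { protocol_version: 2 },
];

fn get(version: u32) -> Result<&'static PlatformVersion, String> {
    if version > 0 {
        VERSIONS
            .get(version as usize - 1)
            .ok_or_else(|| format!("no platform version {version}"))
    } else {
        Err(format!("no platform version {version}"))
    }
}

fn main() {
    assert_eq!(get(2).unwrap().protocol_version, 2);
    assert!(get(0).is_err());  // version 0 never existed
    assert!(get(99).is_err()); // out of range: clear error, no panic
}
```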

There are also convenience methods:

#![allow(unused)]
fn main() {
impl PlatformVersion {
    pub fn first<'a>() -> &'a Self {
        PLATFORM_VERSIONS.first()
            .expect("expected to have a platform version")
    }

    pub fn latest<'a>() -> &'a Self {
        PLATFORM_VERSIONS.last()
            .expect("expected to have a platform version")
    }

    pub fn desired<'a>() -> &'a Self {
        DESIRED_PLATFORM_VERSION
    }
}
}

first() is used in tests that need to verify behavior under the initial protocol. latest() is the default for new code. desired() returns the version that nodes want to upgrade to -- it equals latest() during normal operation but could theoretically differ during a staged rollout.

Version-Aware Traits

The rs-platform-version crate also defines traits that thread the platform version through standard Rust conversion patterns:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/lib.rs

pub trait TryFromPlatformVersioned<T>: Sized {
    type Error;

    fn try_from_platform_versioned(
        value: T,
        platform_version: &PlatformVersion,
    ) -> Result<Self, Self::Error>;
}

pub trait DefaultForPlatformVersion: Sized {
    type Error;

    fn default_for_platform_version(
        platform_version: &PlatformVersion,
    ) -> Result<Self, Self::Error>;
}
}

These are the versioned equivalents of TryFrom and Default. When you convert a data structure, you pass the platform version so the implementation can pick the right serialization format, the right field set, or the right validation rules. There is also FromPlatformVersioned for infallible conversions, and blanket IntoPlatformVersioned implementations that mirror the standard library pattern.
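
A minimal illustrative implementation shows how the version snapshot drives the conversion. All of the types below (the flattened PlatformVersion, RawContract, Contract) are hypothetical stand-ins, not the real DPP types:

```rust
// Stand-in types demonstrating TryFromPlatformVersioned.
struct PlatformVersion { contract_structure_version: u16 }

trait TryFromPlatformVersioned<T>: Sized {
    type Error;
    fn try_from_platform_versioned(
        value: T,
        platform_version: &PlatformVersion,
    ) -> Result<Self, Self::Error>;
}

struct RawContract { id: u64 }

#[derive(Debug, PartialEq)]
enum Contract {
    V0 { id: u64 },
    V1 { id: u64, keywords: Vec<String> },
}

impl TryFromPlatformVersioned<RawContract> for Contract {
    type Error = String;
    fn try_from_platform_versioned(
        value: RawContract,
        platform_version: &PlatformVersion,
    ) -> Result<Self, String> {
        // The snapshot decides which structure version to build.
        match platform_version.contract_structure_version {
            0 => Ok(Contract::V0 { id: value.id }),
            1 => Ok(Contract::V1 { id: value.id, keywords: Vec::new() }),
            v => Err(format!("unknown contract structure version {v}")),
        }
    }
}

fn main() {
    let old = PlatformVersion { contract_structure_version: 0 };
    let contract = Contract::try_from_platform_versioned(RawContract { id: 7 }, &old).unwrap();
    assert_eq!(contract, Contract::V0 { id: 7 });
}
```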

Mock Versions for Testing

The version system supports a mock-versions feature flag for tests:

#![allow(unused)]
fn main() {
#[cfg(feature = "mock-versions")]
pub static PLATFORM_TEST_VERSIONS: OnceLock<Vec<PlatformVersion>> = OnceLock::new();
}

When this feature is enabled, PlatformVersion::get checks for a special bit in the version number. If set, it routes to the test version array instead of the production one. This lets tests create synthetic platform versions with specific behaviors without polluting the production constants:

#![allow(unused)]
fn main() {
#[cfg(feature = "mock-versions")]
{
    if version >> TEST_PROTOCOL_VERSION_SHIFT_BYTES > 0 {
        let test_version = version - (1 << TEST_PROTOCOL_VERSION_SHIFT_BYTES);
        let versions = PLATFORM_TEST_VERSIONS
            .get_or_init(|| vec![TEST_PLATFORM_V2, TEST_PLATFORM_V3]);
        return versions.get(test_version as usize - 2).ok_or(/* ... */);
    }
}
}

This is a clever design: tests can exercise version upgrade logic (like "what happens when we transition from test version 2 to test version 3?") without needing to create real protocol versions.
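
The bit-flag routing can be demonstrated in isolation. The shift value of 16 below is an assumption for illustration -- the real TEST_PROTOCOL_VERSION_SHIFT_BYTES constant may differ:

```rust
// Demonstration of the test-version bit flag; SHIFT = 16 is an assumption.
const SHIFT: u32 = 16;

fn is_test_version(version: u32) -> bool {
    version >> SHIFT > 0
}

fn test_version_number(version: u32) -> u32 {
    version - (1 << SHIFT)
}

fn main() {
    assert!(!is_test_version(12));     // a normal protocol version
    let synthetic = (1 << SHIFT) + 2;  // "test version 2" with the flag set
    assert!(is_test_version(synthetic));
    assert_eq!(test_version_number(synthetic), 2);
}
```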

Why Immutable Snapshots?

You might wonder: why not use a mutable configuration object? Why not a HashMap<&str, u16> that maps method names to versions?

Three reasons:

  1. Determinism. A const value is baked into the binary at compile time. There is no way to accidentally modify it at runtime. Every node running the same binary with the same protocol version will use the exact same values.

  2. Exhaustiveness. Because the version struct has named fields for every subsystem, adding a new versioned method forces you to set its version in every platform version constant. The compiler will refuse to compile if you forget one. A hash map cannot give you this guarantee.

  3. Performance. Looking up a version number is a struct field access -- zero overhead at runtime. The entire version tree lives in static memory. No allocations, no lookups, no indirection.

The cost is verbosity. Each new platform version file is large and repetitive. But this is a deliberate trade-off: the system favors correctness and auditability over conciseness. When you read PLATFORM_V12, you can see every single version number in one place. There is no mystery about what version 12 means.

Rules

Do:

  • Always pass &PlatformVersion (or &DriveVersion, etc.) to functions that have versioned behavior. Never hardcode a version number at a call site.
  • Use PlatformVersion::latest() for tests that do not care about a specific version. Use PlatformVersion::first() when you need to test the initial protocol behavior.
  • When creating a new platform version, copy the previous version file and change only the constants that differ. The compiler will catch any missing fields.

Don't:

  • Mutate platform version data at runtime. The constants are const for a reason.
  • Add a new field to PlatformVersion without also updating every PLATFORM_V* constant. The compiler will enforce this, but be aware that the fix is updating twelve files, not one.
  • Use PlatformVersion::latest() in consensus-critical code paths. Always use the version from the current platform state, obtained via platform_state.current_platform_version(). The "latest" version is what the binary supports; the active version is what the network has agreed upon -- and they may differ during an upgrade window.

Feature Versions

The Problem: Granularity

The previous chapter showed how PlatformVersion is an immutable snapshot of the entire platform's behavior at a given protocol version. But a snapshot is only useful if it can describe behavior at a fine enough granularity.

Consider the Drive storage layer. It has dozens of grove operations, hundreds of document methods, contract methods, identity methods, and more. When you fix a bug in update_contract, you need to bump that one method's version without affecting insert_contract or prove_contract. The system needs a way to assign a version number to individual methods and then compose those numbers into larger subsystem snapshots.

This is where FeatureVersion and the nested version structs come in.

The FeatureVersion Type

At the very bottom of the version tree is a single type, defined in the external versioned-feature-core crate:

#![allow(unused)]
fn main() {
// versioned-feature-core/src/lib.rs

pub type FeatureVersion = u16;
pub type OptionalFeatureVersion = Option<u16>;
}

That is it. A FeatureVersion is a u16 -- a number that says "use version N of this particular function." The value 0 means "use the v0 implementation," 1 means "use v1," and so on.

OptionalFeatureVersion is Option<u16>. It represents a feature that did not exist in earlier protocol versions. When the value is None, the feature is not active -- calling it returns a VersionNotActive error. When it is Some(0), the feature exists and should use its v0 implementation.

There is also a bounds type for serialization format versions:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct FeatureVersionBounds {
    pub min_version: FeatureVersion,
    pub max_version: FeatureVersion,
    pub default_current_version: FeatureVersion,
}
}

This is used when a field can accept a range of versions -- for example, a data contract serialization format where the system can read versions 0 through 2 but writes version 2 by default.
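
A sketch of how such bounds might be consulted during deserialization -- accept anything in [min, max], write default_current_version. The check_version helper here is written for illustration; consult the versioned-feature-core crate for the actual API:

```rust
// Illustrative bounds check: readable range vs. default write version.
struct FeatureVersionBounds {
    min_version: u16,
    max_version: u16,
    default_current_version: u16,
}

impl FeatureVersionBounds {
    fn check_version(&self, version: u16) -> bool {
        version >= self.min_version && version <= self.max_version
    }
}

fn main() {
    // e.g. a contract format where versions 0..=2 are readable, 2 is written
    let bounds = FeatureVersionBounds { min_version: 0, max_version: 2, default_current_version: 2 };
    assert!(bounds.check_version(1));
    assert!(!bounds.check_version(3));
    assert_eq!(bounds.default_current_version, 2);
}
```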

Version Structs: The Middle of the Tree

Between the top-level PlatformVersion and the leaf-level FeatureVersion numbers sit dozens of intermediate structs. These structs group related method versions together, forming a hierarchy that mirrors the codebase's module structure.

Let us trace a path from the top down.

Level 1: PlatformVersion

#![allow(unused)]
fn main() {
pub struct PlatformVersion {
    pub protocol_version: ProtocolVersion,
    pub dpp: DPPVersion,
    pub drive: DriveVersion,
    pub drive_abci: DriveAbciVersion,
    pub consensus: ConsensusVersions,
    pub fee_version: FeeVersion,
    pub system_data_contracts: SystemDataContractVersions,
    pub system_limits: SystemLimits,
}
}

Level 2: DriveVersion

The drive field contains DriveVersion, which groups all storage layer versions:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/version/drive_versions/mod.rs

#[derive(Clone, Debug, Default)]
pub struct DriveVersion {
    pub structure: DriveStructureVersion,
    pub methods: DriveMethodVersions,
    pub grove_methods: DriveGroveMethodVersions,
    pub grove_version: GroveVersion,
}
}

Level 3: DriveMethodVersions

The methods field expands into every category of Drive operation:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DriveMethodVersions {
    pub initialization: DriveInitializationMethodVersions,
    pub credit_pools: DriveCreditPoolMethodVersions,
    pub protocol_upgrade: DriveProtocolUpgradeVersions,
    pub prefunded_specialized_balances: DrivePrefundedSpecializedMethodVersions,
    pub balances: DriveBalancesMethodVersions,
    pub document: DriveDocumentMethodVersions,
    pub vote: DriveVoteMethodVersions,
    pub contract: DriveContractMethodVersions,
    pub fees: DriveFeesMethodVersions,
    pub estimated_costs: DriveEstimatedCostsMethodVersions,
    pub asset_lock: DriveAssetLockMethodVersions,
    pub verify: DriveVerifyMethodVersions,
    pub identity: DriveIdentityMethodVersions,
    pub token: DriveTokenMethodVersions,
    pub platform_system: DrivePlatformSystemMethodVersions,
    pub operations: DriveOperationsMethodVersion,
    pub batch_operations: DriveBatchOperationsMethodVersion,
    pub fetch: DriveFetchMethodVersions,
    pub prove: DriveProveMethodVersions,
    pub state_transitions: DriveStateTransitionMethodVersions,
    pub platform_state: DrivePlatformStateMethodVersions,
    pub group: DriveGroupMethodVersions,
    pub address_funds: DriveAddressFundsMethodVersions,
    pub saved_block_transactions: DriveSavedBlockTransactionsMethodVersions,
}
}

Level 4: Individual Method Categories

Each category struct contains FeatureVersion fields for individual methods. For example, the contract method versions:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DriveContractMethodVersions {
    pub prove: DriveContractProveMethodVersions,
    pub apply: DriveContractApplyMethodVersions,
    pub insert: DriveContractInsertMethodVersions,
    pub update: DriveContractUpdateMethodVersions,
    pub costs: DriveContractCostsMethodVersions,
    pub get: DriveContractGetMethodVersions,
}

#[derive(Clone, Debug, Default)]
pub struct DriveContractUpdateMethodVersions {
    pub update_contract: FeatureVersion,
    pub update_description: FeatureVersion,
    pub update_keywords: FeatureVersion,
}
}

So the full path to read "which version of update_contract should I use?" is:

#![allow(unused)]
fn main() {
platform_version.drive.methods.contract.update.update_contract
}

That is a field access five levels deep, and it resolves to a plain u16.

The Grove Methods Branch

Let us trace a different path. The grove_methods field on DriveVersion holds versions for low-level GroveDB operations:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DriveGroveMethodVersions {
    pub basic: DriveGroveBasicMethodVersions,
    pub batch: DriveGroveBatchMethodVersions,
    pub apply: DriveGroveApplyMethodVersions,
    pub costs: DriveGroveCostMethodVersions,
}
}

The basic struct is where individual grove operations live:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DriveGroveBasicMethodVersions {
    pub grove_insert: FeatureVersion,
    pub grove_insert_empty_tree: FeatureVersion,
    pub grove_insert_if_not_exists: FeatureVersion,
    pub grove_clear: FeatureVersion,
    pub grove_delete: FeatureVersion,
    pub grove_get_raw: FeatureVersion,
    pub grove_get_raw_optional: FeatureVersion,
    pub grove_get: FeatureVersion,
    pub grove_get_path_query: FeatureVersion,
    pub grove_get_proved_path_query: FeatureVersion,
    pub grove_get_sum_tree_total_value: FeatureVersion,
    pub grove_has_raw: FeatureVersion,
    // ... and many more
}
}

So the path for grove_get_raw is:

#![allow(unused)]
fn main() {
drive_version.grove_methods.basic.grove_get_raw
}

Notice something: grove operations take a &DriveVersion rather than &PlatformVersion. This is a minor optimization -- when you are deep in the Drive layer, you only need the drive-specific version numbers, not the entire platform snapshot. The caller extracts &platform_version.drive once and passes it down.
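
That narrowing can be sketched with simplified stand-in structs: the low-level helper takes only &DriveVersion, and the caller extracts that subtree once from the full snapshot:

```rust
// Stand-in structs showing the &DriveVersion narrowing.
type FeatureVersion = u16;

struct DriveGroveBasicMethodVersions { grove_get_raw: FeatureVersion }
struct DriveGroveMethodVersions { basic: DriveGroveBasicMethodVersions }
struct DriveVersion { grove_methods: DriveGroveMethodVersions }
struct PlatformVersion { drive: DriveVersion }

fn grove_get_raw(drive_version: &DriveVersion) -> Result<&'static str, String> {
    match drive_version.grove_methods.basic.grove_get_raw {
        0 => Ok("v0 read"),
        v => Err(format!("unknown grove_get_raw version {v}")),
    }
}

fn main() {
    let platform_version = PlatformVersion {
        drive: DriveVersion {
            grove_methods: DriveGroveMethodVersions {
                basic: DriveGroveBasicMethodVersions { grove_get_raw: 0 },
            },
        },
    };
    // Extract the drive subtree once, then pass only that down.
    assert_eq!(grove_get_raw(&platform_version.drive), Ok("v0 read"));
}
```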

The DPP Branch

The Dash Platform Protocol has its own deep tree. DPPVersion contains fourteen sub-version structs:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DPPVersion {
    pub costs: DPPCostsVersions,
    pub validation: DPPValidationVersions,
    pub state_transition_serialization_versions: DPPStateTransitionSerializationVersions,
    pub state_transition_conversion_versions: DPPStateTransitionConversionVersions,
    pub state_transition_method_versions: DPPStateTransitionMethodVersions,
    pub state_transitions: DPPStateTransitionVersions,
    pub contract_versions: DPPContractVersions,
    pub document_versions: DPPDocumentVersions,
    pub identity_versions: DPPIdentityVersions,
    pub voting_versions: DPPVotingVersions,
    pub token_versions: DPPTokenVersions,
    pub asset_lock_versions: DPPAssetLockVersions,
    pub methods: DPPMethodVersions,
    pub factory_versions: DPPFactoryVersions,
}
}

And those go deeper. For example, DPPContractVersions contains not just FeatureVersion values but also FeatureVersionBounds and further nesting:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DPPContractVersions {
    pub max_serialized_size: u32,
    pub contract_serialization_version: FeatureVersionBounds,
    pub contract_structure_version: FeatureVersion,
    pub created_data_contract_structure: FeatureVersion,
    pub config: FeatureVersionBounds,
    pub methods: DataContractMethodVersions,
    pub document_type_versions: DocumentTypeVersions,
    pub token_versions: TokenVersions,
}
}

Notice max_serialized_size: u32. Not every field is a FeatureVersion. Some are configuration values -- limits, thresholds, constants -- that change between protocol versions. The version struct is flexible enough to hold both "which implementation to use" and "what parameters to use."
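
A sketch of how such a mixed struct is consumed, with illustrative field names (not the real ones): the FeatureVersion field would drive dispatch elsewhere, while the u32 is read directly as a limit.

```rust
// Illustrative miniature mixing a dispatch number with a configuration
// value; the field names are hypothetical.
type FeatureVersion = u16;

struct ContractVersions {
    max_serialized_size: u32, // "what parameters to use"
    contract_structure_version: FeatureVersion, // "which implementation to use"
}

// The limit is read directly, never matched on.
fn check_size(serialized: &[u8], versions: &ContractVersions) -> Result<(), String> {
    if serialized.len() as u32 > versions.max_serialized_size {
        Err(format!(
            "serialized contract is {} bytes, limit is {}",
            serialized.len(),
            versions.max_serialized_size
        ))
    } else {
        Ok(())
    }
}

fn example() -> (bool, bool) {
    let versions = ContractVersions {
        max_serialized_size: 4,
        contract_structure_version: 0,
    };
    (
        check_size(&[1, 2, 3], &versions).is_ok(),
        check_size(&[0; 5], &versions).is_err(),
    )
}
```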

The Drive ABCI Branch

The DriveAbciVersion struct covers the application blockchain interface -- the layer that processes blocks, validates state transitions, and handles protocol upgrades:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DriveAbciVersion {
    pub structs: DriveAbciStructureVersions,
    pub methods: DriveAbciMethodVersions,
    pub validation_and_processing: DriveAbciValidationVersions,
    pub withdrawal_constants: DriveAbciWithdrawalConstants,
    pub query: DriveAbciQueryVersions,
    pub checkpoints: DriveAbciCheckpointParameters,
}
}

The validation_and_processing field is where state transition validation versions live. This is where OptionalFeatureVersion becomes important:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct DriveAbciStateTransitionValidationVersion {
    pub basic_structure: OptionalFeatureVersion,
    pub advanced_structure: OptionalFeatureVersion,
    pub identity_signatures: OptionalFeatureVersion,
    pub nonce: OptionalFeatureVersion,
    pub state: FeatureVersion,
    pub transform_into_action: FeatureVersion,
}
}

basic_structure is OptionalFeatureVersion -- in some protocol versions, basic structure validation may not exist for a particular state transition. The dispatch code handles this with a three-arm match:

#![allow(unused)]
fn main() {
match platform_version
    .drive_abci
    .validation_and_processing
    .state_transitions
    .identity_create_state_transition
    .basic_structure
{
    Some(0) => self.validate_basic_structure_v0(platform_version),
    Some(version) => Err(Error::Execution(
        ExecutionError::UnknownVersionMismatch { /* ... */ }
    )),
    None => Err(Error::Execution(
        ExecutionError::VersionNotActive { /* ... */ }
    )),
}
}

Compare this with state and transform_into_action, which are plain FeatureVersion fields -- those validations always exist, so there is no None arm.

Non-Method Version Fields

Some version structs contain values that are not method versions at all, but protocol parameters:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, Default)]
pub struct SystemLimits {
    pub estimated_contract_max_serialized_size: u16,
    pub max_field_value_size: u32,
    pub max_state_transition_size: u64,
    pub max_transitions_in_documents_batch: u16,
    pub withdrawal_transactions_per_block_limit: u16,
    pub max_withdrawal_amount: u64,
    pub max_contract_group_size: u16,
    pub max_token_redemption_cycles: u32,
    // ...
}
}

#![allow(unused)]
fn main() {
pub struct DriveAbciCoreChainLockMethodVersionsAndConstants {
    pub choose_quorum: FeatureVersion,
    pub verify_chain_lock: FeatureVersion,
    // ...
    pub recent_block_count_amount: u32,
}
}

The chain lock struct mixes method versions (choose_quorum: FeatureVersion) with protocol constants (recent_block_count_amount: u32). This is perfectly fine -- the version snapshot captures all protocol-specific values, whether they control dispatch or configure behavior.

How Subsystem Version Constants Compose

Each subsystem version constant (like DRIVE_VERSION_V1) is assembled from smaller constants:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/version/drive_versions/v1.rs

pub const DRIVE_VERSION_V1: DriveVersion = DriveVersion {
    structure: DRIVE_STRUCTURE_V1,
    methods: DriveMethodVersions {
        initialization: DriveInitializationMethodVersions {
            create_initial_state_structure: 0,
        },
        credit_pools: CREDIT_POOL_METHOD_VERSIONS_V1,
        protocol_upgrade: DriveProtocolUpgradeVersions {
            clear_version_information: 0,
            fetch_versions_with_counter: 0,
            // ...
        },
        balances: DriveBalancesMethodVersions {
            add_to_system_credits: 0,
            remove_from_system_credits: 0,
            calculate_total_credits_balance: 0,
            // ...
        },
        contract: DRIVE_CONTRACT_METHOD_VERSIONS_V1,
        // ...
    },
    grove_methods: DRIVE_GROVE_METHOD_VERSIONS_V1,
    grove_version: GROVE_V1,
};
}

Notice the mix of inline construction and named constants. Small structs like DriveProtocolUpgradeVersions are often written inline because all their fields are 0 in every version. Larger, frequently-changing structs like DRIVE_CONTRACT_METHOD_VERSIONS_V1 get their own named constant so they can be reused or overridden in later versions.

When DRIVE_VERSION_V6 (used in PLATFORM_V12) needs to change contract methods, it simply references DRIVE_CONTRACT_METHOD_VERSIONS_V2 instead of V1:

#![allow(unused)]
fn main() {
// packages/rs-platform-version/src/version/drive_versions/v6.rs

pub const DRIVE_VERSION_V6: DriveVersion = DriveVersion {
    structure: DRIVE_STRUCTURE_V1,
    methods: DriveMethodVersions {
        // ...
        contract: DRIVE_CONTRACT_METHOD_VERSIONS_V2,  // changed!
        // ...
        state_transitions: DRIVE_STATE_TRANSITION_METHOD_VERSIONS_V2, // also changed!
        // ...
    },
    grove_methods: DRIVE_GROVE_METHOD_VERSIONS_V1,  // unchanged
    grove_version: GROVE_V2,  // changed!
};
}

The unchanged parts reference the same V1 constants they always did. Only the parts that actually changed get new constants.

The File Layout

The version structs follow a consistent directory layout in packages/rs-platform-version/src/version/:

version/
  mod.rs                        # ProtocolVersion type alias, module declarations
  protocol_version.rs           # PlatformVersion struct, PLATFORM_VERSIONS array, get()
  v1.rs .. v12.rs               # PLATFORM_V* snapshot constants
  drive_versions/
    mod.rs                      # DriveVersion, DriveMethodVersions, etc.
    v1.rs .. v6.rs              # DRIVE_VERSION_V* constants
    drive_grove_method_versions/
      mod.rs                    # DriveGroveMethodVersions struct
      v1.rs                     # DRIVE_GROVE_METHOD_VERSIONS_V1
    drive_contract_method_versions/
      mod.rs                    # DriveContractMethodVersions struct
      v1.rs, v2.rs              # versioned constants
    ...
  drive_abci_versions/
    mod.rs                      # DriveAbciVersion struct
    drive_abci_method_versions/
      mod.rs                    # DriveAbciMethodVersions and sub-structs
      v1.rs .. v7.rs            # versioned constants
    drive_abci_validation_versions/
      mod.rs                    # DriveAbciValidationVersions struct
      v1.rs .. v7.rs
    ...
  dpp_versions/
    mod.rs                      # DPPVersion struct
    dpp_contract_versions/
      mod.rs                    # DPPContractVersions struct
      v1.rs, v2.rs, v3.rs
    ...
  fee/
    mod.rs                      # FeeVersion struct
    v1.rs, v2.rs
    storage.rs, signature.rs, ...
  system_limits/
    mod.rs                      # SystemLimits struct
    v1.rs

The pattern is: mod.rs defines the struct, and v*.rs files define the concrete constants. The struct definition is the schema. The version files are the data.

Rules

Do:

  • When adding a new method to Drive, DPP, or Drive ABCI, add a corresponding FeatureVersion field to the appropriate version struct. Then set its value in every v*.rs constant -- the compiler will force you.
  • Use OptionalFeatureVersion for features that are being introduced in a non-initial protocol version. Set them to None in earlier versions and Some(0) in the version that introduces the feature.
  • Group related methods into their own sub-struct when the parent struct grows too large. Follow the existing naming pattern: Drive<Category>MethodVersions.

Do not:

  • Never use a raw u16 where you mean FeatureVersion. The type alias exists for readability and future-proofing -- if we ever need to change the underlying type, the alias is the single point of change.
  • Never put runtime-computed values into a version struct. Every field must be a compile-time constant. This is what makes the snapshot deterministic.
  • Never reuse a version constant with different semantics. If DRIVE_CONTRACT_METHOD_VERSIONS_V1 means something, creating a V2 that changes one field is correct. Silently modifying V1 is not -- it would change the behavior of every platform version that references it.
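
The OptionalFeatureVersion rule above looks like this in practice -- a sketch with hypothetical constants, following the convention of None before the feature exists and Some(0) in the version that introduces it:

```rust
// Hypothetical constants showing the None -> Some(0) convention.
type FeatureVersion = u16;
type OptionalFeatureVersion = Option<FeatureVersion>;

struct ValidationVersions {
    basic_structure: OptionalFeatureVersion,
    state: FeatureVersion,
}

// Protocol versions that predate the feature set None ...
const VALIDATION_V1: ValidationVersions = ValidationVersions {
    basic_structure: None,
    state: 0,
};

// ... and the version that introduces it sets Some(0).
const VALIDATION_V2: ValidationVersions = ValidationVersions {
    basic_structure: Some(0),
    state: 0,
};
```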

Versioned Dispatch

The Problem: Running the Right Code

The previous two chapters explained what gets versioned (every method in the platform) and how version numbers are stored (nested structs inside an immutable PlatformVersion snapshot). This chapter covers the most important part: how those version numbers actually select which code runs.

The core idea is simple. Every versioned function has a dispatch method that reads a FeatureVersion value and calls the corresponding implementation. But the way this dispatch is organized across files, the error handling conventions, and the step-by-step process of adding a new version -- these are the details that make the pattern work at scale.

The Canonical Match Pattern

Here is the most common pattern in the codebase. This is from packages/rs-drive/src/util/grove_operations/grove_get_raw/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub fn grove_get_raw<B: AsRef<[u8]>>(
        &self,
        path: SubtreePath<'_, B>,
        key: &[u8],
        direct_query_type: DirectQueryType,
        transaction: TransactionArg,
        drive_operations: &mut Vec<LowLevelDriveOperation>,
        drive_version: &DriveVersion,
    ) -> Result<Option<Element>, Error> {
        match drive_version.grove_methods.basic.grove_get_raw {
            0 => self.grove_get_raw_v0(
                path,
                key,
                direct_query_type,
                transaction,
                drive_operations,
                drive_version,
            ),
            version => Err(Error::Drive(DriveError::UnknownVersionMismatch {
                method: "grove_get_raw".to_string(),
                known_versions: vec![0],
                received: version,
            })),
        }
    }
}
}

Let us break down what is happening:

  1. The public method (grove_get_raw) is the entry point. It takes all the business parameters plus a version reference (drive_version: &DriveVersion).

  2. The version lookup reads the specific FeatureVersion for this method: drive_version.grove_methods.basic.grove_get_raw. This resolves to a u16.

  3. The match dispatches to the right implementation. Version 0 calls grove_get_raw_v0. The catch-all arm (version =>) returns an error.

  4. The error (UnknownVersionMismatch) includes the method name, the list of known versions, and the version that was actually received. This makes debugging version mismatches trivial.

This pattern appears hundreds of times across the codebase. It is the fundamental building block of versioned execution.

Multiple Versions

When a method has been revised, the match grows. Here is update_contract from packages/rs-drive/src/drive/contract/update/update_contract/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub fn update_contract(
        &self,
        contract: &DataContract,
        block_info: BlockInfo,
        apply: bool,
        transaction: TransactionArg,
        platform_version: &PlatformVersion,
        previous_fee_versions: Option<&CachedEpochIndexFeeVersions>,
    ) -> Result<FeeResult, Error> {
        match platform_version
            .drive
            .methods
            .contract
            .update
            .update_contract
        {
            0 => self.update_contract_v0(
                contract, block_info, apply,
                transaction, platform_version, previous_fee_versions,
            ),
            1 => self.update_contract_v1(
                contract, block_info, apply,
                transaction, platform_version, previous_fee_versions,
            ),
            version => Err(Error::Drive(DriveError::UnknownVersionMismatch {
                method: "update_contract".to_string(),
                known_versions: vec![0, 1],
                received: version,
            })),
        }
    }
}
}

The structure is identical. The only differences are: there are now two known versions (0 and 1), and the known_versions vector in the error arm lists both. When a node running protocol version 1 processes a block, the version number is 0 and update_contract_v0 runs. When the network upgrades and the version number becomes 1, update_contract_v1 runs instead.

Both v0 and v1 implementations coexist in the binary. Old code is never deleted (at least not until a version is permanently retired from the network). This is critical for replaying historical blocks -- a node syncing from genesis needs to execute v0 for early blocks and v1 for later ones.
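
The replay argument can be condensed into a sketch (a hypothetical method with trivial bodies standing in for the real logic): the same dispatch serves both historical and current blocks, selecting by whichever version number was active for that block.

```rust
// Hypothetical method with two coexisting implementations.
type FeatureVersion = u16;

fn update_thing_v0() -> &'static str {
    "v0 behavior"
}
fn update_thing_v1() -> &'static str {
    "v1 behavior"
}

fn update_thing(version: FeatureVersion) -> Result<&'static str, String> {
    match version {
        0 => Ok(update_thing_v0()),
        1 => Ok(update_thing_v1()),
        version => Err(format!("unknown version {version}, known: [0, 1]")),
    }
}
```

A node syncing from genesis calls update_thing(0) for early blocks and update_thing(1) once the network upgrade height is reached -- both bodies must remain in the binary.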

OptionalFeatureVersion Dispatch

For features introduced after the initial protocol version, the dispatch handles a None case:

#![allow(unused)]
fn main() {
// From identity_create/mod.rs

match platform_version
    .drive_abci
    .validation_and_processing
    .state_transitions
    .identity_create_state_transition
    .basic_structure
{
    Some(0) => {
        self.validate_basic_structure_v0(platform_version)
    }
    Some(version) => Err(Error::Execution(
        ExecutionError::UnknownVersionMismatch {
            method: "identity create transition: validate_basic_structure"
                .to_string(),
            known_versions: vec![0],
            received: version,
        }
    )),
    None => Err(Error::Execution(
        ExecutionError::VersionNotActive {
            method: "identity create transition: validate_basic_structure"
                .to_string(),
            known_versions: vec![0],
        }
    )),
}
}

Three arms instead of two:

  • Some(0) -- the feature exists, use v0.
  • Some(version) -- the feature exists but the version is unrecognized.
  • None -- the feature does not exist in this protocol version.

The VersionNotActive error is different from UnknownVersionMismatch. It means "this feature is legitimately not available," not "something went wrong." This distinction matters for callers that need to handle graceful degradation.
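
A sketch of that distinction from the caller's side, using a hypothetical error type: VersionNotActive can be mapped to "skip this validation," while UnknownVersionMismatch propagates as a hard failure.

```rust
// Hypothetical error type mirroring the two variants.
#[derive(Debug, PartialEq)]
enum ExecutionError {
    UnknownVersionMismatch { received: u16 },
    VersionNotActive,
}

fn validate_basic_structure(version: Option<u16>) -> Result<(), ExecutionError> {
    match version {
        Some(0) => Ok(()),
        Some(received) => Err(ExecutionError::UnknownVersionMismatch { received }),
        None => Err(ExecutionError::VersionNotActive),
    }
}

// Graceful degradation: "feature absent" is skipped; a genuine
// version mismatch still fails.
fn maybe_validate(version: Option<u16>) -> Result<bool, ExecutionError> {
    match validate_basic_structure(version) {
        Ok(()) => Ok(true),                                 // validated
        Err(ExecutionError::VersionNotActive) => Ok(false), // skipped
        Err(e) => Err(e),
    }
}
```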

The Directory Convention

Versioned methods follow a strict directory layout. Let us use grove_get_raw as the example:

packages/rs-drive/src/util/grove_operations/
  grove_get_raw/
    mod.rs          # dispatch method (the match statement)
    v0/
      mod.rs        # grove_get_raw_v0 implementation

The dispatch method lives in grove_get_raw/mod.rs. Each implementation version gets its own subdirectory: v0/mod.rs, v1/mod.rs, etc. The dispatch file declares the version modules:

#![allow(unused)]
fn main() {
// grove_get_raw/mod.rs
mod v0;
}

And each version module provides the actual implementation as a method on Drive:

#![allow(unused)]
fn main() {
// grove_get_raw/v0/mod.rs

impl Drive {
    pub(super) fn grove_get_raw_v0<B: AsRef<[u8]>>(
        &self,
        path: SubtreePath<'_, B>,
        key: &[u8],
        direct_query_type: DirectQueryType,
        transaction: TransactionArg,
        drive_operations: &mut Vec<LowLevelDriveOperation>,
        drive_version: &DriveVersion,
    ) -> Result<Option<Element>, Error> {
        // actual implementation
        match direct_query_type {
            DirectQueryType::StatelessDirectQuery { /* ... */ } => {
                // estimate costs
            }
            DirectQueryType::StatefulDirectQuery => {
                let CostContext { value, cost } =
                    self.grove.get_raw(path, key, transaction,
                                       &drive_version.grove_version);
                drive_operations.push(CalculatedCostOperation(cost));
                Ok(Some(value.map_err(Error::from)?))
            }
        }
    }
}
}

Notice the visibility: pub(super). The v0 function is only visible to its parent module (the dispatch file). External code calls the public dispatch method, never the versioned implementation directly.
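
The visibility rule can be demonstrated in miniature (a hypothetical operation returning a dummy value):

```rust
// Hypothetical versioned operation demonstrating the pub(super) rule.
mod grove_op {
    mod v0 {
        // Visible only to the parent (dispatch) module.
        pub(super) fn run_v0() -> u32 {
            42
        }
    }

    // The public dispatch method is the only entry point.
    pub fn run(version: u16) -> Result<u32, String> {
        match version {
            0 => Ok(v0::run_v0()),
            version => Err(format!("unknown version {version}")),
        }
    }
}
// From outside, grove_op::v0::run_v0() would not compile: the v0 module
// is private and run_v0 is pub(super). grove_op::run(0) is the only way in.
```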

For state transitions in Drive ABCI, the same pattern applies but with trait implementations:

packages/rs-drive-abci/src/execution/validation/state_transition/
  state_transitions/
    identity_create/
      mod.rs                  # dispatch traits and match statements
      basic_structure/
        mod.rs                # just declares v0
        v0/
          mod.rs              # BasicStructureValidationV0 implementation
      advanced_structure/
        mod.rs
        v0/
          mod.rs
      state/
        mod.rs
        v0/
          mod.rs

The Error Types

There are two UnknownVersionMismatch error variants in the codebase -- one for Drive and one for Drive ABCI -- but they have the same shape:

#![allow(unused)]
fn main() {
// packages/rs-drive/src/error/drive.rs

#[derive(Debug, thiserror::Error)]
pub enum DriveError {
    #[error("drive unknown version on {method}, received: {received}")]
    UnknownVersionMismatch {
        method: String,
        known_versions: Vec<FeatureVersion>,
        received: FeatureVersion,
    },

    #[error("{method} not active for drive version")]
    VersionNotActive {
        method: String,
        known_versions: Vec<FeatureVersion>,
    },
    // ...
}
}

#![allow(unused)]
fn main() {
// packages/rs-drive-abci/src/error/execution.rs

#[derive(Debug, thiserror::Error)]
pub enum ExecutionError {
    #[error("platform unknown version on {method}, received: {received}")]
    UnknownVersionMismatch {
        method: String,
        known_versions: Vec<FeatureVersion>,
        received: FeatureVersion,
    },

    #[error("{method} not active for drive version")]
    VersionNotActive {
        method: String,
        known_versions: Vec<FeatureVersion>,
    },
    // ...
}
}

Both carry three pieces of information:

  • method: A human-readable name identifying which dispatch failed.
  • known_versions: The versions this binary knows how to handle.
  • received: The version number that was actually in the platform version.

This makes the error message self-diagnosing. If you see "drive unknown version on update_contract, received: 2, known versions: [0, 1]", you immediately know that the binary is too old to handle the active protocol version.

How to Add a New Version: Step by Step

Let us walk through the exact steps to add a v1 implementation of a method that currently only has v0. We will use a fictional example: my_grove_operation.

Step 1: Write the new implementation

Create the v1 module:

my_grove_operation/
  mod.rs          # existing dispatch
  v0/
    mod.rs        # existing v0
  v1/
    mod.rs        # NEW: v1 implementation

#![allow(unused)]
fn main() {
// my_grove_operation/v1/mod.rs

impl Drive {
    pub(super) fn my_grove_operation_v1(
        &self,
        // same signature as v0, or possibly different
    ) -> Result<SomeResult, Error> {
        // new implementation with bug fix or feature
    }
}
}

Step 2: Update the dispatch

In my_grove_operation/mod.rs, declare the new module and add the match arm:

#![allow(unused)]
fn main() {
mod v0;
mod v1;  // NEW

impl Drive {
    pub fn my_grove_operation(
        &self,
        // ...
        drive_version: &DriveVersion,
    ) -> Result<SomeResult, Error> {
        match drive_version.grove_methods.basic.my_grove_operation {
            0 => self.my_grove_operation_v0(/* ... */),
            1 => self.my_grove_operation_v1(/* ... */),  // NEW
            version => Err(Error::Drive(DriveError::UnknownVersionMismatch {
                method: "my_grove_operation".to_string(),
                known_versions: vec![0, 1],  // UPDATED
                received: version,
            })),
        }
    }
}
}

Step 3: Create a new subsystem version constant

If this is the first change in this subsystem version, create a new constant. For example, if grove method versions were at V1:

#![allow(unused)]
fn main() {
// drive_grove_method_versions/v2.rs  (NEW file)

pub const DRIVE_GROVE_METHOD_VERSIONS_V2: DriveGroveMethodVersions =
    DriveGroveMethodVersions {
        basic: DriveGroveBasicMethodVersions {
            my_grove_operation: 1,  // CHANGED from 0 to 1
            grove_get_raw: 0,       // unchanged
            grove_delete: 0,        // unchanged
            // ... all other fields unchanged
        },
        // ... rest unchanged
    };
}

Step 4: Create a new DriveVersion constant

Create a new DriveVersion that references the updated subsystem version:

#![allow(unused)]
fn main() {
// drive_versions/v7.rs  (NEW file)

pub const DRIVE_VERSION_V7: DriveVersion = DriveVersion {
    grove_methods: DRIVE_GROVE_METHOD_VERSIONS_V2,  // CHANGED
    // ... everything else unchanged from V6
};
}

Step 5: Create a new PlatformVersion

Create the new platform version snapshot that references the new drive version:

#![allow(unused)]
fn main() {
// version/v13.rs  (NEW file)

pub const PROTOCOL_VERSION_13: ProtocolVersion = 13;

pub const PLATFORM_V13: PlatformVersion = PlatformVersion {
    protocol_version: PROTOCOL_VERSION_13,
    drive: DRIVE_VERSION_V7,  // CHANGED
    // ... everything else unchanged from V12
};
}

Step 6: Register the new version

Add PLATFORM_V13 to the PLATFORM_VERSIONS array and update the version constants:

#![allow(unused)]
fn main() {
// version/mod.rs
pub mod v13;

// version/protocol_version.rs
pub const PLATFORM_VERSIONS: &[PlatformVersion] = &[
    PLATFORM_V1,
    // ...
    PLATFORM_V12,
    PLATFORM_V13,  // NEW
];

pub const LATEST_PLATFORM_VERSION: &PlatformVersion = &PLATFORM_V13;
}

Step 7: Write tests

Test both the old and new behavior:

#![allow(unused)]
fn main() {
#[test]
fn test_my_grove_operation_v0() {
    let platform_version = PlatformVersion::first();
    // ... assert v0 behavior
}

#[test]
fn test_my_grove_operation_v1() {
    let platform_version = PlatformVersion::latest();
    // ... assert v1 behavior
}
}

This is a lot of steps, but each one is mechanical and the compiler guides you through most of it. If you add a field to a version struct and forget to set it in one of the twelve (now thirteen) platform version constants, the build fails.

Passing Version References

A subtle but important convention is which version reference a function receives. There are three patterns:

&PlatformVersion -- used by high-level code that might need any part of the version tree. State transition processing, block execution, and similar entry points take this.

&DriveVersion -- used by mid-level Drive code that only needs drive-specific versions. The caller extracts &platform_version.drive once.

&GroveVersion -- used by the lowest-level GroveDB operations. Extracted from &drive_version.grove_version.

This layering avoids passing the entire PlatformVersion into the deepest functions. It also makes the dependency explicit: a function taking &DriveVersion cannot accidentally use a DPP version number.
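
The three signatures can be sketched side by side (stand-in types with hypothetical fields):

```rust
// Stand-in types for the three layers; field sets are hypothetical.
type FeatureVersion = u16;

struct GroveVersion;
struct DriveVersion {
    grove_version: GroveVersion,
    grove_get_raw: FeatureVersion,
}
struct PlatformVersion {
    drive: DriveVersion,
}

// Lowest level: only &GroveVersion.
fn grove_db_op(_grove: &GroveVersion) -> &'static str {
    "reached grovedb"
}

// Mid level: only &DriveVersion; extracts the grove slice for the call down.
fn drive_op(drive: &DriveVersion) -> &'static str {
    grove_db_op(&drive.grove_version)
}

// High level: &PlatformVersion; extracts the drive slice once.
fn abci_op(platform: &PlatformVersion) -> &'static str {
    drive_op(&platform.drive)
}

fn example() -> &'static str {
    let platform = PlatformVersion {
        drive: DriveVersion {
            grove_version: GroveVersion,
            grove_get_raw: 0,
        },
    };
    abci_op(&platform)
}
```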

The Version Flow in Block Processing

Here is how the version flows through a real execution path:

Block arrives from Tenderdash
    |
    v
PlatformState has the current protocol_version (e.g., 12)
    |
    v
PlatformVersion::get(12) -> &PLATFORM_V12
    |
    v
process_raw_state_transitions(&platform_version)
    |
    v
validate_state_for_identity_create_transition()
    reads: platform_version.drive_abci.validation_and_processing
           .state_transitions.identity_create_state_transition.state
    dispatches to: validate_state_v0()
    |
    v
drive.update_contract(&platform_version)
    reads: platform_version.drive.methods.contract.update.update_contract
    dispatches to: update_contract_v1()
    |
    v
drive.grove_get_raw(&platform_version.drive)
    reads: drive_version.grove_methods.basic.grove_get_raw
    dispatches to: grove_get_raw_v0()

The protocol version number enters at the top and the correct implementation is selected at every level. No function chooses its own version -- it is always determined by the version reference passed from above.

Rules

Do:

  • Always include the method name in the UnknownVersionMismatch error. Use the same string format you see in existing code: the plain method name for Drive methods ("grove_get_raw"), and a descriptive path for ABCI methods ("identity create transition: validate_basic_structure").
  • Keep the known_versions vector in the error arm up to date. When you add version 2, the vector should be vec![0, 1, 2].
  • Make versioned implementation methods pub(super) -- visible to the dispatch module but not to external code.
  • Keep v0 code intact when adding v1. Never modify an existing version's implementation. Copy it, rename it, and make your changes in the new version.

Do not:

  • Never call a versioned implementation directly (e.g., grove_get_raw_v0). Always go through the dispatch method. Direct calls bypass version control and break determinism.
  • Never add a version to the match without also adding the corresponding FeatureVersion field value in the version constants. The dispatch will never be reached if no platform version sets that number.
  • Never use _ => as the catch-all arm in a version dispatch. Always use version => so the variable is available for the error message. And never silently ignore unknown versions -- always return an error.
  • Never change the signature of an existing version's function after it has been released to the network. If v0 takes five parameters and v1 needs six, that is fine -- v0 keeps its original signature forever.
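
The last rule -- frozen signatures -- can be sketched with a hypothetical method whose v1 gained a parameter: the dispatch absorbs the difference, and v0's signature never changes.

```rust
// Hypothetical method: v1 gained a parameter, v0's signature is frozen.
type FeatureVersion = u16;

fn fetch_balance_v0(identity_id: u64) -> u64 {
    identity_id // stand-in for the real lookup
}

// v1 needs an extra parameter; v0 above keeps its original signature.
fn fetch_balance_v1(identity_id: u64, include_pending: bool) -> u64 {
    if include_pending {
        identity_id + 1
    } else {
        identity_id
    }
}

// The dispatch absorbs the difference: v0 simply never receives the
// parameter that only v1 understands.
fn fetch_balance(
    version: FeatureVersion,
    identity_id: u64,
    include_pending: bool,
) -> Result<u64, String> {
    match version {
        0 => Ok(fetch_balance_v0(identity_id)),
        1 => Ok(fetch_balance_v1(identity_id, include_pending)),
        version => Err(format!("unknown version {version}, known: [0, 1]")),
    }
}
```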

The State Transition Lifecycle

Every change to Dash Platform -- creating an identity, registering a data contract, storing a document, casting a masternode vote -- follows the same fundamental pattern: a state transition. If you understand this one concept, you understand the heartbeat of the entire platform.

What Is a State Transition?

A state transition is the atomic unit of state change on Dash Platform. Think of the platform's state as a database. You cannot write to that database directly. Instead, you construct a state transition object that describes what you want to change, sign it with your private key, serialize it to bytes, and broadcast it to the network. Validators receive it, validate it through a multi-stage pipeline, and -- if everything checks out -- apply it to their copy of the state.

This is fundamentally different from a smart contract model. There is no arbitrary code execution. Every possible mutation is one of a fixed set of state transition types, each with its own validation rules hardcoded into the platform. The benefit is predictability: you can reason about fees, security, and correctness without worrying about Turing-complete execution.

The StateTransition Enum

At the Rust level, every state transition is a variant of a single enum. This is defined in packages/rs-dpp/src/state_transition/mod.rs:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Encode, Decode, PlatformSerialize,
         PlatformDeserialize, PlatformSignable, From, PartialEq)]
#[platform_serialize(unversioned)]
#[platform_serialize(limit = 100000)]
pub enum StateTransition {
    DataContractCreate(DataContractCreateTransition),
    DataContractUpdate(DataContractUpdateTransition),
    Batch(BatchTransition),
    IdentityCreate(IdentityCreateTransition),
    IdentityTopUp(IdentityTopUpTransition),
    IdentityCreditWithdrawal(IdentityCreditWithdrawalTransition),
    IdentityUpdate(IdentityUpdateTransition),
    IdentityCreditTransfer(IdentityCreditTransferTransition),
    MasternodeVote(MasternodeVoteTransition),
    IdentityCreditTransferToAddresses(IdentityCreditTransferToAddressesTransition),
    IdentityCreateFromAddresses(IdentityCreateFromAddressesTransition),
    IdentityTopUpFromAddresses(IdentityTopUpFromAddressesTransition),
    AddressFundsTransfer(AddressFundsTransferTransition),
    AddressFundingFromAssetLock(AddressFundingFromAssetLockTransition),
    AddressCreditWithdrawal(AddressCreditWithdrawalTransition),
}
}

Each variant wraps a dedicated struct. Notice the derive macros: Encode and Decode for bincode serialization, PlatformSerialize and PlatformDeserialize for the platform's own serialization layer, and PlatformSignable for generating the "signable bytes" that get signed.

These variants fall into natural groups:

Identity lifecycle:

  • IdentityCreate -- Register a new identity (funded by an asset lock on the Dash core chain)
  • IdentityTopUp -- Add credits to an existing identity (also asset-lock funded)
  • IdentityUpdate -- Add or disable public keys on an identity
  • IdentityCreditWithdrawal -- Withdraw credits back to the core chain
  • IdentityCreditTransfer -- Transfer credits between identities

Data contracts and documents:

  • DataContractCreate -- Register a new data contract (schema)
  • DataContractUpdate -- Update an existing data contract
  • Batch -- Create, replace, delete, or transfer documents; mint, burn, transfer, or freeze tokens

Governance:

  • MasternodeVote -- Cast a vote in a contested resource election

Address-based (newer):

  • IdentityCreditTransferToAddresses, IdentityCreateFromAddresses, IdentityTopUpFromAddresses, AddressFundsTransfer, AddressFundingFromAssetLock, AddressCreditWithdrawal -- Operations that use platform addresses instead of (or in addition to) identity-based authentication

Each variant has its own numeric discriminant, defined in packages/rs-dpp/src/state_transition/state_transition_types.rs:

#![allow(unused)]
fn main() {
#[repr(u8)]
pub enum StateTransitionType {
    DataContractCreate = 0,
    Batch = 1,
    IdentityCreate = 2,
    IdentityTopUp = 3,
    DataContractUpdate = 4,
    IdentityUpdate = 5,
    IdentityCreditWithdrawal = 6,
    IdentityCreditTransfer = 7,
    MasternodeVote = 8,
    IdentityCreditTransferToAddresses = 9,
    IdentityCreateFromAddresses = 10,
    IdentityTopUpFromAddresses = 11,
    AddressFundsTransfer = 12,
    AddressFundingFromAssetLock = 13,
    AddressCreditWithdrawal = 14,
}
}

This type tag is what allows the platform to deserialize a raw byte buffer into the correct variant. When bytes arrive over the wire, the first byte tells the deserializer which struct to decode into.
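
A sketch of that tag-based selection over an illustrative subset of the variants (the real decoding goes through bincode, not hand-written matching):

```rust
// Illustrative tag-based selection; only the leading type byte is read.
#[derive(Debug, PartialEq)]
enum StateTransitionKind {
    DataContractCreate, // tag 0
    Batch,              // tag 1
    IdentityCreate,     // tag 2
}

fn kind_from_bytes(bytes: &[u8]) -> Result<StateTransitionKind, String> {
    match bytes.first().copied() {
        Some(0) => Ok(StateTransitionKind::DataContractCreate),
        Some(1) => Ok(StateTransitionKind::Batch),
        Some(2) => Ok(StateTransitionKind::IdentityCreate),
        Some(tag) => Err(format!("unknown state transition type {tag}")),
        None => Err("empty buffer".to_string()),
    }
}
```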

The Batch Transition: A Swiss Army Knife

The Batch variant deserves special attention because it is the most complex. A single BatchTransition can contain multiple sub-transitions, each operating on a different document or token. The sub-transitions include:

  • Document operations: Create, Replace, Delete, Transfer, UpdatePrice, Purchase
  • Token operations: Transfer, Mint, Burn, Freeze, Unfreeze, DestroyFrozenFunds, EmergencyAction, ConfigUpdate, Claim, DirectPurchase, SetPriceForDirectPurchase

This batching is important for atomicity: either all operations in the batch succeed or none of them do. It also means a single identity nonce covers the entire batch, preventing partial replay attacks.
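
Atomicity can be modeled as validate-then-apply over a single balance (an illustrative model, not the real Drive execution path):

```rust
// Illustrative model: one credit balance, a batch of signed deltas.
fn apply_batch(balance: &mut i64, deltas: &[i64]) -> Result<(), String> {
    // Dry run first: reject the whole batch if any step overdraws.
    let mut trial = *balance;
    for delta in deltas {
        trial += delta;
        if trial < 0 {
            return Err("batch rejected; no operation applied".to_string());
        }
    }
    // Only after every step validates does the real state change.
    *balance = trial;
    Ok(())
}

fn example() -> (i64, bool, i64) {
    let mut balance = 10;
    apply_batch(&mut balance, &[5, -3]).unwrap(); // all applied: 10 + 5 - 3 = 12
    let after_success = balance;
    // The first delta overdraws, so nothing applies -- not even the
    // recovering second delta.
    let rejected = apply_batch(&mut balance, &[-20, 100]).is_err();
    (after_success, rejected, balance)
}
```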

Signatures and Authentication

State transitions carry cryptographic signatures that prove authorization. There are two fundamentally different authentication models:

Identity-signed transitions -- The majority of transition types. The signer is an identity that already exists on the platform. The transition carries a signature_public_key_id referencing a key in the identity's key set, plus a signature over the signable bytes.

Asset-lock transitions -- IdentityCreate and IdentityTopUp are special because the identity may not exist yet (or the transition does not require an identity signature). Instead, they carry an AssetLockProof that references a transaction on the Dash core chain. The signature proves ownership of the funds being locked.

Address-based transitions -- Newer transition types like IdentityCreateFromAddresses use platform address inputs with their own nonces and balances, rather than identity-based authentication.

The sign method on StateTransition handles this:

#![allow(unused)]
fn main() {
pub fn sign(
    &mut self,
    identity_public_key: &IdentityPublicKey,
    private_key: &[u8],
    bls: &impl BlsModule,
) -> Result<(), ProtocolError> { ... }
}

It first verifies that the key's purpose and security level are appropriate for this transition type, then signs the signable_bytes() with the private key. Supported key types are ECDSA_SECP256K1, ECDSA_HASH160, and BLS12_381.

The signable bytes are produced by the PlatformSignable derive macro. It serializes the transition with the signature field zeroed out, producing a deterministic byte sequence that both client and platform can independently compute.
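The "zero the signature, then serialize" idea can be sketched with an invented `SignedPayload` type. The field layout and the length-prefixed encoding below are stand-ins for what the PlatformSignable macro actually derives; the point is only that signing must not change what gets signed.

```rust
// Hypothetical sketch of signable-bytes computation. The real macro
// derives this from the struct; the encoding here is a stand-in.
#[derive(Clone)]
struct SignedPayload {
    body: Vec<u8>,
    signature: Vec<u8>,
}

impl SignedPayload {
    // Deterministic bytes both client and platform can compute:
    // the payload with its signature field emptied.
    fn signable_bytes(&self) -> Vec<u8> {
        let mut unsigned = self.clone();
        unsigned.signature.clear();
        // Stand-in for bincode: length-prefixed concatenation.
        let mut out = Vec::new();
        out.extend((unsigned.body.len() as u32).to_le_bytes());
        out.extend(&unsigned.body);
        out.extend((unsigned.signature.len() as u32).to_le_bytes());
        out
    }
}

fn main() {
    let unsigned = SignedPayload { body: vec![1, 2, 3], signature: vec![] };
    let signed = SignedPayload { body: vec![1, 2, 3], signature: vec![0xAA; 64] };
    // Attaching a signature must not change the bytes that were signed.
    assert_eq!(unsigned.signable_bytes(), signed.signable_bytes());
}
```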

Serialization

The platform uses bincode for wire serialization of state transitions, wrapped in the PlatformSerialize / PlatformDeserialize layer. This gives us:

  • Compact binary encoding (no field names, no JSON overhead)
  • Deterministic output (critical for signature verification)
  • Size limits (#[platform_serialize(limit = 100000)] -- 100KB max)
  • Version awareness through the #[platform_serialize(unversioned)] annotation

Deserialization includes version-range checks:

#![allow(unused)]
fn main() {
pub fn deserialize_from_bytes_in_version(
    bytes: &[u8],
    platform_version: &PlatformVersion,
) -> Result<Self, ProtocolError> {
    let state_transition = StateTransition::deserialize_from_bytes(bytes)?;
    let active_version_range = state_transition.active_version_range();
    if active_version_range.contains(&platform_version.protocol_version) {
        Ok(state_transition)
    } else {
        Err(ProtocolError::StateTransitionError(
            StateTransitionIsNotActiveError { ... },
        ))
    }
}
}

This ensures that a transition type introduced in protocol version 9 cannot be submitted to a node running protocol version 8.

The Full Journey

Here is the end-to-end lifecycle of a state transition, from a client's perspective all the way through to state application:

1. Client constructs the transition. Using the Rust SDK (rs-sdk), JavaScript SDK (js-dash-sdk), or any client library, the application builds a concrete transition struct -- for example, a DataContractCreateTransition containing the new contract's schema.

2. Client signs the transition. The client calls sign() or sign_external(), which computes signable bytes, signs them with the appropriate private key, and attaches the signature and key ID to the transition struct.

3. Client serializes and broadcasts. The signed StateTransition is serialized to bytes via serialize_to_bytes() and sent to the network through DAPI (the Decentralized API).

4. Platform receives the bytes in check_tx. Before a state transition enters the mempool, it goes through a lighter validation pass. The platform deserializes the bytes, checks the signature, verifies the identity has sufficient balance, and validates basic structure. This is a gatekeeper -- it rejects obviously invalid transitions cheaply.

5. Platform processes during process_proposal. When a block is proposed, each state transition goes through the full validation pipeline: is_allowed, signature verification, nonce validation, basic structure, balance checks, advanced structure, transform_into_action, and state validation. (We will cover this pipeline in detail in the next chapter.)

6. Platform applies the action. If validation succeeds, the resulting StateTransitionAction is converted into DriveOperations, which are converted into LowLevelDriveOperations, which are applied atomically to GroveDB. Fees are calculated and deducted. The state has changed.

7. Result is returned. The client receives confirmation (or rejection) through DAPI.

The Versioning Pattern

You will notice that almost every method on StateTransition dispatches through a version table. For example:

#![allow(unused)]
fn main() {
match platform_version
    .drive_abci
    .validation_and_processing
    .process_state_transition
{
    0 => v0::process_state_transition_v0(...),
    version => Err(Error::Execution(ExecutionError::UnknownVersionMismatch { ... })),
}
}

This is the protocol versioning pattern. Every behavior that could conceivably change between protocol versions is gated behind a version number looked up from PlatformVersion. This means a node running protocol version 9 and a node running protocol version 10 can both validate the same block correctly, each using its own version's logic. It is how the platform achieves hard-fork-free upgrades.

The call_method Macro

Since StateTransition is an enum with 15 variants, dispatching a method call to the inner type would require writing out a 15-arm match statement every time. The codebase solves this with a family of macros:

#![allow(unused)]
fn main() {
macro_rules! call_method {
    ($state_transition:expr, $method:ident) => {
        match $state_transition {
            StateTransition::DataContractCreate(st) => st.$method(),
            StateTransition::DataContractUpdate(st) => st.$method(),
            StateTransition::Batch(st) => st.$method(),
            // ... all 15 variants
        }
    };
}
}

There are several variants: call_method for universal methods, call_getter_method_identity_signed for methods that return Option (returning None for non-identity-signed transitions like IdentityCreate), and call_errorable_method_identity_signed for methods that return Result (returning an error for inapplicable variants).
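The difference between the universal and identity-signed macro flavors can be shown on a toy two-variant enum. The real macros cover all fifteen variants; the structs and methods below are invented for illustration.

```rust
// Toy reconstruction of the macro family. call_method forwards to every
// variant; call_getter_method_identity_signed returns None for variants
// that are not identity-signed (here, IdentityCreate).
struct DataContractCreate;
struct IdentityCreate;

impl DataContractCreate {
    fn signature_public_key_id(&self) -> u32 { 7 }
    fn name(&self) -> &'static str { "dataContractCreate" }
}
impl IdentityCreate {
    fn name(&self) -> &'static str { "identityCreate" }
}

enum StateTransition {
    DataContractCreate(DataContractCreate),
    IdentityCreate(IdentityCreate),
}

macro_rules! call_method {
    ($st:expr, $method:ident) => {
        match $st {
            StateTransition::DataContractCreate(st) => st.$method(),
            StateTransition::IdentityCreate(st) => st.$method(),
        }
    };
}

macro_rules! call_getter_method_identity_signed {
    ($st:expr, $method:ident) => {
        match $st {
            StateTransition::DataContractCreate(st) => Some(st.$method()),
            // Asset-lock variants have no signing key ID to return.
            StateTransition::IdentityCreate(_) => None,
        }
    };
}

fn main() {
    let a = StateTransition::DataContractCreate(DataContractCreate);
    let b = StateTransition::IdentityCreate(IdentityCreate);
    assert_eq!(call_method!(&a, name), "dataContractCreate");
    assert_eq!(call_method!(&b, name), "identityCreate");
    assert_eq!(call_getter_method_identity_signed!(&a, signature_public_key_id), Some(7));
    assert_eq!(call_getter_method_identity_signed!(&b, signature_public_key_id), None);
}
```

The macro approach means adding a method to all variants is one line per macro, rather than fifteen match arms at every call site.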

Rules and Guidelines

Do:

  • Always deserialize with deserialize_from_bytes_in_version in production code, to enforce version-range checks.
  • Use signable_bytes() when computing what to sign or verify -- never serialize the full transition (which includes the signature itself).
  • Check active_version_range() before processing a transition to ensure it is valid for the current protocol version.

Do not:

  • Assume all transitions have signatures. IdentityCreateFromAddresses and AddressFundsTransfer return None from signature().
  • Assume all transitions have an owner_id. Address-based transitions do not.
  • Modify the StateTransitionType discriminant values -- they are part of the wire format and changing them would break all existing serialized data.
  • Add new variants without also updating every call_method macro and every match statement in the validation pipeline.

The Validation Pipeline

When a state transition arrives at a Dash Platform node, it does not get applied immediately. It must survive a gauntlet of validation stages, each designed to catch a different category of error. Understanding this pipeline is essential to understanding how the platform maintains security, prevents abuse, and ensures deterministic state across all validators.

Two Entry Points: check_tx and process_proposal

State transitions enter the pipeline through two different doors, depending on when they are being validated.

check_tx runs when a transition first arrives at a node (before entering the mempool) and again when rechecking mempool contents. It performs a lighter validation: signature verification, basic structure, and balance checks. The goal is to filter out obvious garbage cheaply, without doing expensive state lookups.

This is implemented in packages/rs-drive-abci/src/execution/validation/state_transition/check_tx_verification/mod.rs:

#![allow(unused)]
fn main() {
pub(in crate::execution) fn state_transition_to_execution_event_for_check_tx<'a, C: CoreRPCLike>(
    platform: &'a PlatformRef<C>,
    state_transition: StateTransition,
    check_tx_level: CheckTxLevel,
    platform_version: &PlatformVersion,
) -> Result<ConsensusValidationResult<Option<ExecutionEvent<'a>>>, Error> { ... }
}

process_proposal (also called the "Validator" path) runs during block execution. This is the full pipeline. It is implemented in packages/rs-drive-abci/src/execution/validation/state_transition/processor/v0/mod.rs:

#![allow(unused)]
fn main() {
pub(super) fn process_state_transition_v0<'a, C: CoreRPCLike>(
    platform: &'a PlatformRef<C>,
    block_info: &BlockInfo,
    state_transition: StateTransition,
    transaction: TransactionArg,
    platform_version: &PlatformVersion,
) -> Result<ConsensusValidationResult<ExecutionEvent<'a>>, Error> { ... }
}

Let us walk through the full pipeline, stage by stage.

Stage 1: Is Allowed

Some state transition types are only available starting from a certain protocol version. For example, address-based transitions like IdentityCreateFromAddresses require protocol version 11 or higher. The first check asks: is this transition type even permitted on the current network?

#![allow(unused)]
fn main() {
if state_transition.has_is_allowed_validation()? {
    let result = state_transition.validate_is_allowed(platform, platform_version)?;
    if !result.is_valid() {
        return Ok(ConsensusValidationResult::new_with_errors(result.errors));
    }
}
}

The trait is defined in packages/rs-drive-abci/src/execution/validation/state_transition/processor/traits/is_allowed.rs:

#![allow(unused)]
fn main() {
pub(crate) trait StateTransitionIsAllowedValidationV0 {
    fn has_is_allowed_validation(&self) -> Result<bool, Error>;
    fn validate_is_allowed<C: CoreRPCLike>(
        &self,
        platform: &PlatformRef<C>,
        platform_version: &PlatformVersion,
    ) -> Result<ConsensusValidationResult<()>, Error>;
}
}

Transitions like DataContractCreate and IdentityCreate skip this check entirely (they have always been allowed). The Batch transition has its own is_allowed logic because certain token operations may be gated.

Stage 2: Identity Signature Verification

For identity-signed transitions (everything except IdentityCreate, IdentityTopUp, and the address-based types), the platform fetches the signer's identity from state and verifies the signature against one of its registered public keys.

#![allow(unused)]
fn main() {
let mut maybe_identity = if state_transition.uses_identity_in_state() {
    let result = if state_transition.validates_signature_based_on_identity_info() {
        state_transition.validate_identity_signed_state_transition(
            platform.drive,
            transaction,
            &mut state_transition_execution_context,
            platform_version,
        )
    } else {
        state_transition.retrieve_identity_info(...)
    }?;
    if !result.is_valid() {
        return Ok(ConsensusValidationResult::new_with_errors(result.errors));
    }
    Some(result.into_data()?)
} else {
    None
};
}

If the signature does not match, the transition is rejected without charging a fee. This is critical: if someone forges a transition with your identity ID but a wrong signature, you should not pay for their garbage.

Stage 3: Address Witness Validation

For address-based transitions, instead of identity signatures, the platform validates witnesses (proofs of ownership) for the input addresses:

#![allow(unused)]
fn main() {
if state_transition.has_address_witness_validation(platform_version)? {
    let result = state_transition.validate_address_witnesses(
        &mut state_transition_execution_context,
        platform_version,
    )?;
    if !result.is_valid() {
        return Ok(ConsensusValidationResult::new_with_errors(result.errors));
    }
}
}

Stage 4: Address Balances and Nonces

For transitions that use platform addresses as inputs, the platform verifies that each input address has sufficient balance and that the provided nonces are correct (preventing replay):

#![allow(unused)]
fn main() {
let remaining_address_balances =
    if state_transition.has_addresses_balances_and_nonces_validation() {
        let result = state_transition.validate_address_balances_and_nonces(
            platform.drive,
            &mut state_transition_execution_context,
            transaction,
            platform_version,
        )?;
        if !result.is_valid() {
            return Ok(ConsensusValidationResult::new_with_errors(result.errors));
        }
        Some(result.into_data()?)
    } else {
        None
    };
}

Stage 5: Identity Nonce Validation

Nonces prevent replay attacks. Each identity-signed transition carries a nonce that must be strictly greater than the last used nonce for that identity (or identity-contract pair). The platform checks this against the stored nonce in state.

#![allow(unused)]
fn main() {
if state_transition.has_identity_nonce_validation(platform_version)? {
    let result = state_transition.validate_identity_nonces(
        &platform.into(),
        platform.state.last_block_info(),
        transaction,
        &mut state_transition_execution_context,
        platform_version,
    )?;
    if !result.is_valid() {
        return Ok(ConsensusValidationResult::new_with_errors(result.errors));
    }
}
}

IdentityCreate and IdentityTopUp skip this check -- they do not carry nonces, because the identity may not exist yet.

Stage 6: Basic Structure Validation

This stage checks that the transition's data is well-formed without looking at platform state. For example: are all required fields present? Are field values within allowed ranges? Is the data contract schema valid JSON Schema?

#![allow(unused)]
fn main() {
if state_transition.has_basic_structure_validation(platform_version) {
    let consensus_result = state_transition.validate_basic_structure(
        platform.config.network,
        platform_version,
    )?;
    if !consensus_result.is_valid() {
        return Ok(ConsensusValidationResult::new_with_errors(
            consensus_result.errors,
        ));
    }
}
}

The trait lives in processor/traits/basic_structure.rs:

#![allow(unused)]
fn main() {
pub(crate) trait StateTransitionBasicStructureValidationV0 {
    fn validate_basic_structure(
        &self,
        network_type: Network,
        platform_version: &PlatformVersion,
    ) -> Result<SimpleConsensusValidationResult, Error>;

    fn has_basic_structure_validation(&self, _platform_version: &PlatformVersion) -> bool {
        true
    }
}
}

MasternodeVote skips basic structure validation entirely. DataContractCreate and DataContractUpdate conditionally enable it based on the platform version.

Stage 7: Balance Pre-Check

Before doing expensive state validation, the platform checks that the identity has enough credits to plausibly pay for this transition. For credit transfers and withdrawals, this includes checking that the transfer amount plus estimated fees does not exceed the balance.

#![allow(unused)]
fn main() {
if state_transition.has_identity_minimum_balance_pre_check_validation() {
    let result = state_transition.validate_identity_minimum_balance_pre_check(
        identity, platform_version,
    )?;
    if !result.is_valid() {
        return Ok(ConsensusValidationResult::new_with_errors(result.errors));
    }
}
}

Stage 8: Advanced Structure Validation (without State)

Some transitions need structural validation that goes beyond basic checks but does not require reading from platform state. For example, IdentityUpdate verifies that added public keys have valid signatures. DataContractCreate validates the contract schema more deeply.

#![allow(unused)]
fn main() {
if state_transition.has_advanced_structure_validation_without_state() {
    let consensus_result = state_transition.validate_advanced_structure(
        identity, &mut state_transition_execution_context, platform_version,
    )?;
    if !consensus_result.is_valid() {
        // Note: this returns an action so the nonce gets bumped even on failure
        return consensus_result.map_result(|action| {
            ExecutionEvent::create_from_state_transition_action(...)
        });
    }
}
}

An important detail: if advanced structure validation fails, the identity's nonce is still bumped. This prevents an attacker from replaying a structurally invalid transition over and over without cost.

Stage 9: Transform Into Action + Advanced Structure (with State)

For certain transition types (Batch, IdentityCreate, MasternodeVote, AddressFundingFromAssetLock, IdentityCreateFromAddresses), the platform needs to read state before it can finish structure validation. For example, a document create transition needs to fetch the data contract to validate the document against its schema.

This stage first transforms the raw state transition into an action (reading state in the process), then validates the action's structure:

#![allow(unused)]
fn main() {
let action = if state_transition.has_advanced_structure_validation_with_state() {
    let state_transition_action_result = state_transition.transform_into_action(
        platform, block_info, &remaining_address_balances,
        ValidationMode::Validator, &mut state_transition_execution_context, transaction,
    )?;
    if !state_transition_action_result.is_valid_with_data() {
        return state_transition_action_result.map_result(|action| { ... });
    }
    let action = state_transition_action_result.into_data()?;

    let result = state_transition.validate_advanced_structure_from_state(
        block_info, platform.config.network, &action,
        maybe_identity.as_ref(), &mut state_transition_execution_context,
        platform_version,
    )?;
    if !result.is_valid() {
        return result.map_result(|action| { ... });
    }
    Some(action)
} else {
    None
};
}

We will cover transform_into_action in detail in the next chapter.

Stage 10: State Validation

The final validation stage checks for state-level conflicts. Does a data contract with this ID already exist? Is there already a document with this unique index value? Has this asset lock already been spent?

#![allow(unused)]
fn main() {
let result = if state_transition.has_state_validation() {
    state_transition.validate_state(
        action, platform, ValidationMode::Validator, block_info,
        &mut state_transition_execution_context, transaction,
    )?
} else if let Some(action) = action {
    ConsensusValidationResult::new_with_data(action)
} else {
    state_transition.transform_into_action(...)? // For transitions that skipped the earlier transform stage
};
}

Not all transitions need state validation. IdentityTopUp, IdentityCreditWithdrawal, AddressFundsTransfer, and several others skip it -- their validation is fully covered by the earlier stages.

ConsensusValidationResult: How Errors Accumulate

Throughout the pipeline, errors are communicated through ConsensusValidationResult, defined in packages/rs-dpp/src/validation/validation_result.rs:

#![allow(unused)]
fn main() {
pub type ConsensusValidationResult<TData> = ValidationResult<TData, ConsensusError>;
pub type SimpleConsensusValidationResult = ConsensusValidationResult<()>;

pub struct ValidationResult<TData: Clone, E: Debug> {
    pub errors: Vec<E>,
    pub data: Option<TData>,
}
}

Key properties of this type:

  • It can carry data alongside errors. When transform_into_action partially succeeds but has warnings, the action is in data and the warnings are in errors.
  • is_valid() returns true when errors is empty. Any error means failure.
  • is_valid_with_data() additionally requires that data is present: valid and carrying a result.
  • Errors can be merged from multiple validation steps using add_errors() or merge().
  • Chainable via and_then_validation() for pipeline-style composition.

The SimpleConsensusValidationResult alias (where TData = ()) is used by validation stages that just check pass/fail without producing a transformed object.
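A stripped-down reimplementation makes the semantics concrete. The real type lives in rs-dpp and has more helpers; this sketch only covers the methods discussed above.

```rust
// Simplified stand-in for rs-dpp's ValidationResult, showing the
// is_valid / is_valid_with_data / add_errors semantics.
#[derive(Debug)]
struct ValidationResult<T, E> {
    errors: Vec<E>,
    data: Option<T>,
}

impl<T, E> ValidationResult<T, E> {
    fn new_with_data(data: T) -> Self {
        Self { errors: Vec::new(), data: Some(data) }
    }
    fn new_with_errors(errors: Vec<E>) -> Self {
        Self { errors, data: None }
    }
    fn is_valid(&self) -> bool {
        self.errors.is_empty()
    }
    fn is_valid_with_data(&self) -> bool {
        self.is_valid() && self.data.is_some()
    }
    fn add_errors(&mut self, errors: Vec<E>) {
        self.errors.extend(errors);
    }
}

fn main() {
    let mut result: ValidationResult<u32, String> = ValidationResult::new_with_data(42);
    assert!(result.is_valid() && result.is_valid_with_data());

    // A later stage can append errors while the data (e.g. a produced
    // action) remains attached.
    result.add_errors(vec!["unique index conflict".to_string()]);
    assert!(!result.is_valid());

    let failed: ValidationResult<u32, String> =
        ValidationResult::new_with_errors(vec!["bad signature".into()]);
    assert!(!failed.is_valid_with_data());
}
```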

ValidationMode

The pipeline behavior varies based on context:

#![allow(unused)]
fn main() {
pub enum ValidationMode {
    CheckTx,       // Mempool admission -- lighter checks
    RecheckTx,     // Periodic mempool revalidation
    Validator,     // Full block execution
    NoValidation,  // Testing/tooling only
}
}

For example, should_fully_validate_contract_on_transform_into_action() returns true only in Validator mode. During CheckTx, the platform skips expensive contract validation to keep mempool admission fast.

The Early-Return Pattern

You will notice a consistent pattern throughout the pipeline: each stage checks is_valid() and returns early if validation failed. This is intentional.

When the pipeline returns early due to a signature failure or invalid nonce, the transition produces no execution event -- the user is not charged. This protects users from being billed for transitions they did not actually submit (forged signatures) or that are replays (invalid nonces).

But when the pipeline returns early after the nonce has been validated -- for example, during advanced structure validation or state validation -- it returns a nonce-bump action so the identity still pays a small fee. This prevents a different attack: submitting thousands of structurally invalid transitions to waste validator resources for free.
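The fee decision can be summarized as a function of where validation failed. The `FailedStage` and `Outcome` names below are invented for illustration; only the before/after-nonce split is taken from the pipeline described above.

```rust
// Sketch of the charge-or-not decision: failures before the nonce
// check are free (the user may not even have sent the transition);
// failures after it bump the nonce and charge a small fee.
#[derive(Debug, PartialEq)]
enum FailedStage {
    Signature,         // before nonce validation
    Nonce,             // the nonce check itself
    AdvancedStructure, // after nonce validation
    State,             // after nonce validation
}

#[derive(Debug, PartialEq)]
enum Outcome {
    RejectFree,         // no execution event, user not charged
    BumpNonceAndCharge, // system action: bump nonce, deduct a fee
}

fn outcome_for_failure(stage: FailedStage) -> Outcome {
    match stage {
        // Forged signatures and replays must never cost the identity.
        FailedStage::Signature | FailedStage::Nonce => Outcome::RejectFree,
        // Past the nonce check, invalid transitions still pay, so
        // spamming invalid data is not free for an attacker.
        FailedStage::AdvancedStructure | FailedStage::State => Outcome::BumpNonceAndCharge,
    }
}

fn main() {
    assert_eq!(outcome_for_failure(FailedStage::Signature), Outcome::RejectFree);
    assert_eq!(outcome_for_failure(FailedStage::State), Outcome::BumpNonceAndCharge);
}
```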

Rules and Guidelines

Do:

  • Run the full pipeline in Validator mode during block execution. Skipping stages can lead to consensus divergence.
  • Return early on signature/nonce failures without charging the user.
  • Bump the nonce (and charge) when structure or state validation fails after the nonce check has passed.
  • Use ConsensusValidationResult everywhere -- do not use Result<(), Error> for validation outcomes, because a validation failure is not a system error.

Do not:

  • Add expensive state reads to basic structure validation. It runs during check_tx and must be fast.
  • Skip is_allowed validation -- it is the mechanism that enables protocol upgrades to gate new transition types.
  • Assume the pipeline order is arbitrary. Each stage depends on information established by previous stages (e.g., the identity fetched during signature validation is used in balance checks).
  • Confuse ConsensusError (a validation failure that the user caused) with Error (a system error like a database failure). The former accumulates in ValidationResult::errors; the latter propagates via Result::Err.

Transform Into Action

After a state transition passes the early validation stages -- signature verification, nonce checks, basic structure -- the platform needs to translate it from a raw "what the client sent" representation into a "what the platform should do" representation. This translation step is called transform into action, and it is where the platform reads state, resolves references, and prepares a validated, self-contained instruction for the Drive storage layer.

Why Actions Exist

Consider a DataContractCreateTransition. The client sends the contract in its serialized form. But before the platform can store it, it needs to:

  1. Deserialize the contract from its wire format
  2. Validate the contract schema (JSON Schema validation, index rules, etc.)
  3. Compute the contract ID from the owner ID and nonce
  4. Check version compatibility

The transition is what the client sends. The action is what the platform will execute. The action is a validated, resolved, ready-to-apply object. It has already been through deserialization, its references have been resolved against current state, and it carries exactly the information needed to generate Drive operations.

This separation is a deliberate design choice. The transition type lives in rs-dpp (the protocol library, shared between client and platform). The action type lives in rs-drive (the storage layer, platform-only). By keeping them separate:

  • Clients never need to depend on storage internals
  • The platform can evolve its internal representation without changing the wire format
  • Validation logic lives close to state access, not scattered across the protocol layer

The StateTransitionActionTransformer Trait

The transformation is defined by a trait in packages/rs-drive-abci/src/execution/validation/state_transition/transformer/mod.rs:

#![allow(unused)]
fn main() {
pub trait StateTransitionActionTransformer {
    fn transform_into_action<C: CoreRPCLike>(
        &self,
        platform: &PlatformRef<C>,
        block_info: &BlockInfo,
        remaining_address_input_balances: &Option<
            BTreeMap<PlatformAddress, (AddressNonce, Credits)>,
        >,
        validation_mode: ValidationMode,
        execution_context: &mut StateTransitionExecutionContext,
        tx: TransactionArg,
    ) -> Result<ConsensusValidationResult<StateTransitionAction>, Error>;
}
}

The key parameters:

  • platform -- Access to the full platform state, including Drive (the storage layer), the current platform state, and configuration.
  • block_info -- The current block height, time, epoch. Some validations are time-dependent.
  • remaining_address_input_balances -- For address-based transitions, the balances remaining after earlier deductions.
  • validation_mode -- Controls how thorough the validation is (lighter for CheckTx, full for Validator).
  • execution_context -- Accumulates information about what operations were performed during validation (used for fee estimation).
  • tx -- The GroveDB transaction handle for consistent state reads.

The return type is ConsensusValidationResult<StateTransitionAction> -- it can carry both the resulting action and any validation errors encountered during transformation.

The Top-Level Dispatch

The StateTransition enum implements StateTransitionActionTransformer by dispatching to each variant's own implementation:

#![allow(unused)]
fn main() {
impl StateTransitionActionTransformer for StateTransition {
    fn transform_into_action<C: CoreRPCLike>(
        &self,
        platform: &PlatformRef<C>,
        block_info: &BlockInfo,
        remaining_address_input_balances: &Option<BTreeMap<PlatformAddress, (AddressNonce, Credits)>>,
        validation_mode: ValidationMode,
        execution_context: &mut StateTransitionExecutionContext,
        tx: TransactionArg,
    ) -> Result<ConsensusValidationResult<StateTransitionAction>, Error> {
        match self {
            StateTransition::DataContractCreate(st) => st.transform_into_action(
                platform, block_info, remaining_address_input_balances,
                validation_mode, execution_context, tx,
            ),
            StateTransition::Batch(st) => st.transform_into_action(
                platform, block_info, remaining_address_input_balances,
                validation_mode, execution_context, tx,
            ),
            StateTransition::IdentityCreate(st) => {
                let signable_bytes = self.signable_bytes()?;
                st.transform_into_action_for_identity_create_transition(
                    platform, signable_bytes, validation_mode,
                    execution_context, tx,
                )
            },
            StateTransition::IdentityTopUp(st) => {
                let signable_bytes = self.signable_bytes()?;
                st.transform_into_action_for_identity_top_up_transition(
                    platform, signable_bytes, validation_mode,
                    execution_context, tx,
                )
            },
            // ... remaining variants
        }
    }
}
}

Notice that IdentityCreate and IdentityTopUp use specialized method names (transform_into_action_for_identity_create_transition) and pass signable_bytes explicitly. This is because these transitions need the signable bytes to verify the asset lock proof signature, and computing them at the StateTransition level (before the inner struct is extracted) ensures the correct bytes are used.

The StateTransitionAction Enum

The output of transformation is a StateTransitionAction, defined in packages/rs-drive/src/state_transition_action/mod.rs:

#![allow(unused)]
fn main() {
pub enum StateTransitionAction {
    DataContractCreateAction(DataContractCreateTransitionAction),
    DataContractUpdateAction(DataContractUpdateTransitionAction),
    BatchAction(BatchTransitionAction),
    IdentityCreateAction(IdentityCreateTransitionAction),
    IdentityTopUpAction(IdentityTopUpTransitionAction),
    IdentityCreditWithdrawalAction(IdentityCreditWithdrawalTransitionAction),
    IdentityUpdateAction(IdentityUpdateTransitionAction),
    IdentityCreditTransferAction(IdentityCreditTransferTransitionAction),
    MasternodeVoteAction(MasternodeVoteTransitionAction),
    // ... address-based actions ...
    BumpIdentityNonceAction(BumpIdentityNonceAction),
    BumpIdentityDataContractNonceAction(BumpIdentityDataContractNonceAction),
    PartiallyUseAssetLockAction(PartiallyUseAssetLockAction),
    BumpAddressInputNoncesAction(BumpAddressInputNoncesAction),
}
}

Notice the system actions at the bottom: BumpIdentityNonceAction, BumpIdentityDataContractNonceAction, PartiallyUseAssetLockAction, and BumpAddressInputNoncesAction. These are not user-requested actions. They are generated by the platform when a transition fails validation after the nonce check. The platform still needs to bump the nonce (so the same invalid transition cannot be replayed) and charge a fee. These system actions handle that housekeeping.

Versioned Dispatch Within transform_into_action

Each transition type's transform_into_action implementation uses the standard versioned dispatch pattern. Here is the data contract create example from packages/rs-drive-abci/src/execution/validation/state_transition/state_transitions/data_contract_create/mod.rs:

#![allow(unused)]
fn main() {
impl StateTransitionActionTransformer for DataContractCreateTransition {
    fn transform_into_action<C: CoreRPCLike>(
        &self,
        platform: &PlatformRef<C>,
        block_info: &BlockInfo,
        _remaining_address_input_balances: &Option<
            BTreeMap<PlatformAddress, (AddressNonce, Credits)>,
        >,
        validation_mode: ValidationMode,
        execution_context: &mut StateTransitionExecutionContext,
        _tx: TransactionArg,
    ) -> Result<ConsensusValidationResult<StateTransitionAction>, Error> {
        let platform_version = platform.state.current_platform_version()?;

        match platform_version
            .drive_abci
            .validation_and_processing
            .state_transitions
            .contract_create_state_transition
            .transform_into_action
        {
            0 => self.transform_into_action_v0::<C>(
                block_info, validation_mode,
                execution_context, platform_version,
            ),
            version => Err(Error::Execution(ExecutionError::UnknownVersionMismatch {
                method: "data contract create transition: transform_into_action".to_string(),
                known_versions: vec![0],
                received: version,
            })),
        }
    }
}
}

The version number (0 here) is looked up from the platform version struct, allowing the logic to change in future protocol upgrades without modifying the dispatch layer.

What Happens Inside: Reading State

The core work inside transform_into_action varies by transition type, but the common thread is resolving references against current state. Here are the key patterns for different transition types:

DataContractCreate: Deserializes the contract from its wire format, validates the schema, and wraps it in a DataContractCreateTransitionAction. The validation depth depends on ValidationMode -- CheckTx mode skips full schema validation for speed.

DataContractUpdate: Fetches the existing contract from Drive to compare against the proposed update. Validates that schema changes are backward-compatible (e.g., you can add fields but not remove required ones).

Batch (Documents): This is the most complex transformation. For each document sub-transition in the batch, the platform must:

  1. Fetch the data contract that the document belongs to
  2. Look up the document type within that contract
  3. For creates: validate the document against the type's schema
  4. For replaces/deletes: fetch the existing document to verify ownership and revision
  5. For token operations: validate token configuration and authorization

IdentityCreate: Validates the asset lock proof against the core chain (via RPC), verifies the signature on the transition using keys embedded in the transition itself, and constructs the identity that will be created.

MasternodeVote: Fetches the contested resource being voted on, verifies that the masternode is eligible to vote, and constructs the vote action.

The Batch Transition: A Deeper Look

The BatchTransition is worth examining more closely because it demonstrates how transform_into_action handles multiple sub-operations. A single batch can contain dozens of document and token transitions targeting different contracts and document types.

During transformation, the platform:

  1. Groups transitions by contract. This allows fetching each contract only once.
  2. Fetches all needed contracts. Each contract is loaded from Drive (with caching).
  3. Resolves each sub-transition. Document creates get validated against their type's schema. Replaces fetch the existing document. Token operations check authorization rules.
  4. Produces a BatchTransitionAction containing individual BatchedTransitionAction items -- each of which is either a DocumentAction or a TokenAction.

If any sub-transition fails validation, the entire batch fails, and the platform produces a BumpIdentityDataContractNonceAction instead (bumping the nonce to prevent replay, while charging the user).
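
The grouping step can be sketched in miniature. This is not the real rs-dpp code -- `SubTransition` and the raw 32-byte contract id are simplified stand-ins -- but it shows why grouping lets each contract be fetched exactly once:

```rust
use std::collections::BTreeMap;

type ContractId = [u8; 32];

struct SubTransition {
    contract_id: ContractId,
    // ... document type, data, and so on in the real type
}

/// Group sub-transitions by contract so each contract is fetched only once.
fn group_by_contract(
    transitions: Vec<SubTransition>,
) -> BTreeMap<ContractId, Vec<SubTransition>> {
    let mut groups: BTreeMap<ContractId, Vec<SubTransition>> = BTreeMap::new();
    for t in transitions {
        groups.entry(t.contract_id).or_default().push(t);
    }
    groups
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    let groups = group_by_contract(vec![
        SubTransition { contract_id: a },
        SubTransition { contract_id: b },
        SubTransition { contract_id: a },
    ]);
    // Contract `a` is referenced twice but appears as one group,
    // so it would be loaded from Drive a single time.
    assert_eq!(groups[&a].len(), 2);
    assert_eq!(groups.len(), 2);
}
```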

When Transformation Fails

Transformation can fail in two ways:

System error (Result::Err): Something unexpected happened -- a failed database read, a version mismatch, corrupted state. These propagate up as Error and typically halt block processing.

Consensus error (ValidationResult with errors): The transition itself is invalid. The document does not match its schema, the contract does not exist, the asset lock is already spent. In this case, the result still carries data -- typically a nonce-bump action -- so the pipeline can charge the user and move on.

This distinction is reflected in the return type: Result<ConsensusValidationResult<StateTransitionAction>, Error>. The outer Result is for system errors. The inner ConsensusValidationResult is for user errors.
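
A minimal illustration of this two-layer error model, with a simplified stand-in for the real ConsensusValidationResult type (the actual type lives in rs-dpp and carries richer error data):

```rust
#[derive(Debug)]
enum SystemError {
    DatabaseRead, // would halt block processing
}

#[derive(Debug, PartialEq)]
enum ConsensusError {
    ContractNotFound, // user error: the transition itself is invalid
}

// Simplified stand-in: the real ConsensusValidationResult carries data
// alongside any consensus errors, so the pipeline can still act on it.
struct ConsensusValidationResult<T> {
    data: Option<T>,
    errors: Vec<ConsensusError>,
}

impl<T> ConsensusValidationResult<T> {
    fn new_with_data(data: T) -> Self {
        Self { data: Some(data), errors: vec![] }
    }
    fn new_with_errors(data: T, errors: Vec<ConsensusError>) -> Self {
        Self { data: Some(data), errors }
    }
    fn is_valid(&self) -> bool {
        self.errors.is_empty()
    }
}

// User errors come back as Ok(result-with-errors) plus a fallback action
// (the nonce bump), so the user can still be charged. Only unexpected
// failures use the outer Err.
fn transform(
    contract_exists: bool,
) -> Result<ConsensusValidationResult<&'static str>, SystemError> {
    if contract_exists {
        Ok(ConsensusValidationResult::new_with_data("CreateAction"))
    } else {
        Ok(ConsensusValidationResult::new_with_errors(
            "BumpIdentityDataContractNonceAction",
            vec![ConsensusError::ContractNotFound],
        ))
    }
}

fn main() {
    let ok = transform(true).unwrap();
    assert!(ok.is_valid());

    // A missing contract is a user error: the outer Result is still Ok.
    let user_err = transform(false).unwrap();
    assert!(!user_err.is_valid());
    assert_eq!(user_err.data, Some("BumpIdentityDataContractNonceAction"));
}
```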

Rules and Guidelines

Do:

  • Always respect ValidationMode. Skip expensive work during CheckTx but never skip it during Validator mode.
  • Accumulate validation costs in execution_context -- every state read, every schema validation -- so that fee calculation is accurate.
  • Return a nonce-bump action when transformation fails for user-caused reasons. Never silently swallow the transition.
  • Fetch contracts through Drive's caching layer. A batch transition might reference the same contract dozens of times; fetching it from disk each time would be prohibitively expensive.

Do not:

  • Perform state writes during transform_into_action. This is a read-only phase. State is only modified when the resulting action is applied later.
  • Mix up the transition and action types. The transition is DataContractCreateTransition (from rs-dpp). The action is DataContractCreateTransitionAction (from rs-drive). They live in different crates for good reason.
  • Forget to handle the remaining_address_input_balances parameter for address-based transitions. It is None for identity-based transitions but must be Some for address-based ones -- failing to provide it is treated as a corrupted code execution error.
  • Add new transition types without implementing both the transformer trait and adding the variant to the StateTransitionAction enum. These must stay in sync.

Drive Operations: From Action to Storage

By this point in the pipeline, a state transition has been validated and transformed into a StateTransitionAction. But the action is still an abstract description of what should change. The final step is translating it into concrete storage mutations that get applied atomically to GroveDB, the platform's Merkle tree database. This translation happens through a three-tier pipeline of progressively lower-level operation types.

The Three-Tier Pipeline

The pipeline looks like this:

StateTransitionAction
    |
    | DriveHighLevelOperationConverter::into_high_level_drive_operations()
    v
Vec<DriveOperation>
    |
    | DriveLowLevelOperationConverter::into_low_level_drive_operations()
    v
Vec<LowLevelDriveOperation>
    |
    | apply_batch_low_level_drive_operations()
    v
GroveDB (applied atomically)

Each tier exists for a reason:

  • StateTransitionAction speaks the language of the protocol: "create this identity," "store this document," "transfer these credits."
  • DriveOperation speaks the language of Drive's domain model: "insert a document into this contract's document type tree," "update an identity's balance."
  • LowLevelDriveOperation speaks the language of GroveDB: "insert this key-value pair at this path," "delete this element."

This layering allows each tier to be tested independently and evolved separately. A change to how documents are indexed in GroveDB only affects the DriveLowLevelOperationConverter implementation for DocumentOperationType -- it does not ripple up to the action layer.

Tier 1: DriveHighLevelOperationConverter

The first conversion step is defined in packages/rs-drive/src/state_transition_action/action_convert_to_operations/mod.rs:

#![allow(unused)]
fn main() {
pub trait DriveHighLevelOperationConverter {
    fn into_high_level_drive_operations<'a>(
        self,
        epoch: &Epoch,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<DriveOperation<'a>>, Error>;
}
}

The StateTransitionAction enum implements this trait by dispatching to each variant:

#![allow(unused)]
fn main() {
impl DriveHighLevelOperationConverter for StateTransitionAction {
    fn into_high_level_drive_operations<'a>(
        self,
        epoch: &Epoch,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<DriveOperation<'a>>, Error> {
        match self {
            StateTransitionAction::DataContractCreateAction(action) => {
                action.into_high_level_drive_operations(epoch, platform_version)
            }
            StateTransitionAction::DataContractUpdateAction(action) => {
                action.into_high_level_drive_operations(epoch, platform_version)
            }
            StateTransitionAction::BatchAction(action) => {
                action.into_high_level_drive_operations(epoch, platform_version)
            }
            StateTransitionAction::IdentityCreateAction(action) => {
                action.into_high_level_drive_operations(epoch, platform_version)
            }
            // ... all other variants
        }
    }
}
}

Each action type knows how to decompose itself into the appropriate DriveOperation variants. A single action often produces multiple drive operations. For example, IdentityCreateAction might produce:

  1. An IdentityOperation to insert the identity
  2. An IdentityOperation to set the initial balance
  3. Multiple IdentityOperations to add each public key
  4. A SystemOperation to update system credit tracking
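
The fan-out from one action to several operations can be sketched as follows. The enums here are toy stand-ins for Drive's real operation types, not the actual API:

```rust
// Toy stand-ins for Drive's operation types, to illustrate the fan-out.
#[derive(Debug)]
enum DriveOp {
    InsertIdentity,
    SetBalance(u64),
    AddPublicKey(u8),
    UpdateSystemCredits(u64),
}

struct IdentityCreateAction {
    balance: u64,
    public_key_ids: Vec<u8>,
}

impl IdentityCreateAction {
    // One action decomposes into several high-level drive operations:
    // insert the identity, set its balance, add each key, track credits.
    fn into_high_level_drive_operations(self) -> Vec<DriveOp> {
        let mut ops = vec![DriveOp::InsertIdentity, DriveOp::SetBalance(self.balance)];
        ops.extend(self.public_key_ids.into_iter().map(DriveOp::AddPublicKey));
        ops.push(DriveOp::UpdateSystemCredits(self.balance));
        ops
    }
}

fn main() {
    let action = IdentityCreateAction {
        balance: 1_000,
        public_key_ids: vec![0, 1, 2],
    };
    let ops = action.into_high_level_drive_operations();
    // One action, six operations: insert + balance + three keys + system credits.
    assert_eq!(ops.len(), 6);
}
```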

For the BatchTransitionAction, there is an additional layer of delegation. The batch contains multiple BatchedTransitionAction items, each of which implements DriveHighLevelBatchOperationConverter:

#![allow(unused)]
fn main() {
pub trait DriveHighLevelBatchOperationConverter {
    fn into_high_level_batch_drive_operations<'a>(
        self,
        epoch: &Epoch,
        owner_id: Identifier,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<DriveOperation<'a>>, Error>;
}
}

Notice the extra owner_id parameter -- batch operations need to know which identity owns the documents being created or modified.

Tier 2: The DriveOperation Enum

The DriveOperation enum represents domain-level storage operations. It is defined in packages/rs-drive/src/util/batch/drive_op_batch/mod.rs:

#![allow(unused)]
fn main() {
pub enum DriveOperation<'a> {
    DataContractOperation(DataContractOperationType<'a>),
    DocumentOperation(DocumentOperationType<'a>),
    TokenOperation(TokenOperationType),
    WithdrawalOperation(WithdrawalOperationType),
    IdentityOperation(IdentityOperationType),
    PrefundedSpecializedBalanceOperation(PrefundedSpecializedBalanceOperationType),
    SystemOperation(SystemOperationType),
    GroupOperation(GroupOperationType),
    AddressFundsOperation(AddressFundsOperationType),
    GroveDBOperation(QualifiedGroveDbOp),
    GroveDBOpBatch(GroveDbOpBatch),
}
}

Each variant wraps a type-specific operation enum. For example, DocumentOperationType includes operations like AddDocument, UpdateDocument, DeleteDocument -- each carrying the document data, contract reference, and storage flags needed for insertion.

The last two variants -- GroveDBOperation and GroveDBOpBatch -- are escape hatches for when higher-level abstractions are not needed. They wrap raw GroveDB operations directly.

The DriveOperation enum implements the DriveLowLevelOperationConverter trait to convert itself into the next tier:

#![allow(unused)]
fn main() {
pub trait DriveLowLevelOperationConverter {
    fn into_low_level_drive_operations(
        self,
        drive: &Drive,
        estimated_costs_only_with_layer_info: &mut Option<
            HashMap<KeyInfoPath, EstimatedLayerInformation>,
        >,
        block_info: &BlockInfo,
        transaction: TransactionArg,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<LowLevelDriveOperation>, Error>;
}
}

Two important parameters here:

  • estimated_costs_only_with_layer_info: When this is Some, the converter does not actually read from or write to GroveDB. Instead, it estimates the cost of the operations using layer information. This is used for fee estimation before execution.
  • transaction: The GroveDB transaction handle. All reads and writes within a single block happen within one transaction, ensuring atomicity.

Tier 3: LowLevelDriveOperation

The lowest tier is defined in packages/rs-drive/src/fees/op.rs:

#![allow(unused)]
fn main() {
pub enum LowLevelDriveOperation {
    GroveOperation(QualifiedGroveDbOp),
    FunctionOperation(FunctionOp),
    CalculatedCostOperation(OperationCost),
    PreCalculatedFeeResult(FeeResult),
}
}

At this level, there are only four kinds of things:

  • GroveOperation: A concrete GroveDB operation -- insert, delete, or update a key-value pair at a specific path in the Merkle tree.
  • FunctionOperation: A CPU-bound operation with a pre-defined cost (like hashing or signature verification). These do not touch storage but still cost processing fees.
  • CalculatedCostOperation: A pre-computed cost that gets folded into the fee calculation.
  • PreCalculatedFeeResult: An already-computed fee result, used when the fee for an operation was determined earlier in the pipeline.

The GroveOperation variant is where the rubber meets the road. QualifiedGroveDbOp is GroveDB's own batch operation type -- it specifies a path (a vector of byte-string segments navigating the tree), a key, and an operation (insert element, delete, replace, etc.).

Applying the Batch

The entire sequence -- from DriveOperation collection to GroveDB application -- is orchestrated by Drive::apply_drive_operations, defined in packages/rs-drive/src/util/batch/drive_op_batch/drive_methods/apply_drive_operations/v0/mod.rs:

#![allow(unused)]
fn main() {
pub(crate) fn apply_drive_operations_v0(
    &self,
    operations: Vec<DriveOperation>,
    apply: bool,
    block_info: &BlockInfo,
    transaction: TransactionArg,
    platform_version: &PlatformVersion,
    previous_fee_versions: Option<&CachedEpochIndexFeeVersions>,
) -> Result<FeeResult, Error> {
    if operations.is_empty() {
        return Ok(FeeResult::default());
    }
    let mut low_level_operations = vec![];
    let mut estimated_costs_only_with_layer_info = if apply {
        None
    } else {
        Some(HashMap::new())
    };

    let mut finalize_tasks: Vec<DriveOperationFinalizeTask> = Vec::new();

    for drive_op in operations {
        if let Some(tasks) = drive_op.finalization_tasks(platform_version)? {
            finalize_tasks.extend(tasks);
        }
        low_level_operations.append(
            &mut drive_op.into_low_level_drive_operations(
                self, &mut estimated_costs_only_with_layer_info,
                block_info, transaction, platform_version,
            )?
        );
    }

    let mut cost_operations = vec![];
    self.apply_batch_low_level_drive_operations(
        estimated_costs_only_with_layer_info, transaction,
        low_level_operations, &mut cost_operations, &platform_version.drive,
    )?;

    for task in finalize_tasks {
        task.execute(self, platform_version);
    }

    Drive::calculate_fee(
        None, Some(cost_operations), &block_info.epoch,
        self.config.epochs_per_era, platform_version, previous_fee_versions,
    )
}
}

Let us break this down:

  1. Collect finalization tasks. Some operations need post-processing. For example, DataContractOperation may produce a RecordShieldedAnchor finalization task that runs after the batch is committed. Tasks are collected first, executed last.

  2. Convert to low-level operations. Each DriveOperation is expanded into one or more LowLevelDriveOperations. A single document insertion might produce dozens of GroveDB operations (one for each index, plus the document itself, plus metadata).

  3. Apply the batch atomically. apply_batch_low_level_drive_operations collects all GroveOperation items into a single GroveDB batch and applies them in one atomic write. This is critical -- if the node crashes mid-application, either all operations succeed or none do.

  4. Execute finalization tasks. Post-commit callbacks run (e.g., updating caches).

  5. Calculate fees. The cost of every operation (storage bytes written, bytes read, processing time) is tallied into a FeeResult that determines how many credits the user pays.

The apply Parameter

Notice the apply: bool parameter. When apply is false, the entire pipeline runs in estimation mode: operations are not actually written to GroveDB. Instead, the estimated_costs_only_with_layer_info map is populated with what would be written, and fees are estimated from that.

This is used during check_tx and fee estimation. The platform needs to know approximately how much a transition will cost before actually applying it, both for balance pre-checks and for returning fee estimates to clients.

The Fee Calculation

At the end of apply_drive_operations, fees are calculated from the accumulated cost operations:

#![allow(unused)]
fn main() {
Drive::calculate_fee(
    None,
    Some(cost_operations),
    &block_info.epoch,
    self.config.epochs_per_era,
    platform_version,
    previous_fee_versions,
)
}

The fee has two components:

  • Storage fee: Proportional to the bytes written to disk. Stored bytes have an ongoing cost because they consume space in the state tree indefinitely (until deleted). When bytes are later removed, a portion of the storage fee is refunded.
  • Processing fee: Proportional to the CPU work performed -- hashing, signature verification, tree traversal. This is ephemeral and not refundable.

The user's user_fee_increase (a percentage multiplier) applies to the processing fee, allowing users to bid higher for priority.

Finalization Tasks

Some DriveOperation variants carry finalization tasks -- callbacks that run after the batch is committed. These are defined in packages/rs-drive/src/util/batch/drive_op_batch/finalize_task.rs:

#![allow(unused)]
fn main() {
pub(crate) trait DriveOperationFinalizationTasks {
    fn finalization_tasks(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<DriveOperationFinalizeTask>>, Error>;
}
}

The most common finalization task is RecordShieldedAnchor, used by shielded transaction operations to record Merkle tree anchors after the state changes are committed. Finalization tasks are intentionally limited -- they must be deterministic and must not fail, since they run after the batch is already committed.

Putting It All Together: A Document Create

Let us trace a document creation through the entire three-tier pipeline:

  1. Action: BatchTransitionAction containing a DocumentAction::CreateAction with the document data, its type, and the contract reference.

  2. DriveHighLevelOperationConverter: The document create action produces a DriveOperation::DocumentOperation(AddDocument { ... }) containing the owned document, contract info, document type info, and storage flags.

  3. DriveLowLevelOperationConverter: The AddDocument operation produces multiple LowLevelDriveOperation::GroveOperation items:

    • Insert the serialized document at its primary key path
    • Insert index entries for each indexed property
    • Update the document type's document count
    • Record storage flags for fee tracking
  4. GroveDB batch: All the GroveOperation items from all documents in the batch are collected into a single GroveDbOpBatch and applied atomically.

  5. Fee calculation: The total bytes written, bytes read, and processing operations are summed to produce the FeeResult.

Rules and Guidelines

Do:

  • Implement DriveHighLevelOperationConverter for new action types. This is the contract between the validation layer and the storage layer.
  • Keep into_low_level_drive_operations deterministic. Given the same inputs and state, it must always produce the same operations. Non-determinism causes consensus failures.
  • Use the estimation mode (apply = false) for fee pre-checks. Do not skip fee estimation -- users need accurate cost information before committing to a transition.
  • Test both the estimation path and the application path. They can diverge if the estimation layer info is stale or incomplete.

Do not:

  • Access GroveDB directly from action conversion code. Always go through Drive's methods, which handle versioning, caching, and error translation.
  • Produce side effects in into_high_level_drive_operations. This conversion must be pure -- it maps data, it does not read or write state.
  • Assume a 1:1 mapping between actions and GroveDB operations. A single document create can produce 10+ GroveDB operations (one per index). A batch with 50 documents can produce hundreds.
  • Forget finalization tasks when adding new operation types that require post-commit work. If your operation needs to update a cache or record an anchor, implement DriveOperationFinalizationTasks.
  • Mix up DriveOperation (high-level, domain-aware) with LowLevelDriveOperation (low-level, GroveDB-aware). The naming can be confusing, but the distinction is important: the former knows about documents and identities, the latter knows about tree paths and elements.

Fee System Overview

Every state transition on Dash Platform costs credits. Credits are the internal unit of account (1 Dash = 100,000,000,000 credits). The fee system ensures that validators are compensated for computation and storage, that spam is economically infeasible, and that the platform's state does not grow unboundedly without payment.

This section covers the three fee eras that the platform has gone through:

  1. Identity Credit Fees (protocol versions 1--9) — Fees paid from an identity's credit balance, funded by asset lock transactions on Core.
  2. Platform Address Fees (protocol versions 10--11) — Fees paid from platform address balances using a UTXO-like input/output model.
  3. Shielded Transaction Fees (protocol version 12) — Fees embedded in zero-knowledge proofs and cryptographically bound to the Orchard bundle.

Each era introduced new ExecutionEvent variants and fee validation logic, but the underlying cost accounting (storage fees, processing fees, epoch distribution) is shared across all three.

Credits and Denomination

Platform credits are the smallest unit of value:

| Unit     | Credits         |
|----------|-----------------|
| 1 credit | 1               |
| 1 mDash  | 100,000,000     |
| 1 Dash   | 100,000,000,000 |

All fee constants in the codebase are denominated in credits.
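
The denomination above, as arithmetic (constant names here are illustrative, not taken from the codebase):

```rust
// 1 Dash = 100,000,000,000 credits; 1 mDash = 100,000,000 credits.
// These constant names are illustrative, not the codebase's own.
const CREDITS_PER_DASH: u64 = 100_000_000_000;
const CREDITS_PER_MDASH: u64 = 100_000_000;

fn main() {
    // 1 Dash is 1,000 mDash.
    assert_eq!(CREDITS_PER_DASH / CREDITS_PER_MDASH, 1_000);
    // A 0.1 Dash fee (e.g. base contract registration), in credits:
    assert_eq!(CREDITS_PER_DASH / 10, 10_000_000_000);
}
```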

Cost Components

The platform distinguishes two fundamental kinds of cost:

Storage Fees

Storage fees pay for bytes that persist in GroveDB indefinitely. The rate is set in FeeStorageVersion:

| Parameter                          | Value  | Description                   |
|------------------------------------|--------|-------------------------------|
| storage_disk_usage_credit_per_byte | 27,000 | Permanent disk storage cost   |
| storage_processing_credit_per_byte | 400    | I/O cost to write the bytes   |
| storage_load_credit_per_byte       | 20     | I/O cost to read stored bytes |
| non_storage_load_credit_per_byte   | 10     | I/O cost for ephemeral reads  |
| storage_seek_cost                  | 2,000  | Cost of a single disk seek    |

Storage fees are refundable: when data is deleted, a portion of the original storage fee is returned to the identity that paid it (see Refunds below).

Processing Fees

Processing fees pay for computation that does not leave a permanent trace in storage: signature verification, hashing, tree traversal, and so on. These are non-refundable — the computation has already been performed.

Processing costs are built up from individual operations:

processing_fee =
    seek_count × storage_seek_cost
  + added_bytes × storage_processing_credit_per_byte
  + replaced_bytes × storage_processing_credit_per_byte
  + loaded_bytes × storage_load_credit_per_byte
  + hash_node_calls × (blake3_base + blake3_per_block)
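
A worked instance of this formula, plugging in the rates from the tables in this section (seek = 2,000; write = 400 per byte; load = 20 per byte; Blake3 base = 100, per block = 300); the operation counts are made up for illustration:

```rust
fn main() {
    // Illustrative operation counts for a single transition.
    let seek_count: u64 = 3;
    let added_bytes: u64 = 500;
    let replaced_bytes: u64 = 0;
    let loaded_bytes: u64 = 1_200;
    let hash_node_calls: u64 = 4;

    // Rates from the fee tables in this section.
    let processing_fee = seek_count * 2_000           // storage_seek_cost
        + added_bytes * 400                           // storage_processing_credit_per_byte
        + replaced_bytes * 400
        + loaded_bytes * 20                           // storage_load_credit_per_byte
        + hash_node_calls * (100 + 300);              // blake3_base + blake3_per_block

    // 6,000 + 200,000 + 0 + 24,000 + 1,600
    assert_eq!(processing_fee, 231_600);
}
```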

Signature verification adds a fixed cost per algorithm:

| Algorithm             | Cost (credits) |
|-----------------------|----------------|
| ECDSA secp256k1       | 15,000         |
| BLS12-381             | 300,000        |
| ECDSA hash160         | 15,500         |
| BIP13 script hash     | 300,000        |
| EdDSA ed25519 hash160 | 3,500          |

Hashing costs scale with the number of blocks processed:

| Hash Function        | Base  | Per Block |
|----------------------|-------|-----------|
| SHA-256              | 100   | 5,000     |
| Blake3               | 100   | 300       |
| SHA-256 + RIPEMD-160 | 6,000 | 5,000     |

Minimum Fees

Every state transition type has a minimum fee that must be met regardless of the actual computation cost. This prevents zero-cost spam. The minimums are defined in StateTransitionMinFees:

Identity-Based Transitions (protocol versions 1--9)

| Transition                          | Minimum Fee (credits) |
|-------------------------------------|-----------------------|
| Credit Transfer                     | 100,000               |
| Credit Transfer to Addresses        | 500,000               |
| Credit Withdrawal                   | 400,000,000           |
| Identity Update                     | 100,000               |
| Document Batch (per sub-transition) | 100,000               |
| Contract Create                     | 100,000               |
| Contract Update                     | 100,000               |
| Masternode Vote                     | 100,000               |

Address-Based Transitions (protocol versions 10--11)

| Transition                           | Minimum Fee (credits) |
|--------------------------------------|-----------------------|
| Address Funds Transfer (per input)   | 500,000               |
| Address Funds Transfer (per output)  | 6,000,000             |
| Address Credit Withdrawal            | 400,000,000           |
| Identity Create (base)               | 2,000,000             |
| Identity Create (per key)            | 6,500,000             |
| Identity Top-Up (base)               | 500,000               |

Data Contract Registration Fees (protocol version 9+)

Protocol version 9 introduced significant registration fees for data contracts to prevent namespace squatting:

| Component                  | Fee             | Dash Equivalent |
|----------------------------|-----------------|-----------------|
| Base contract registration | 10,000,000,000  | 0.1 Dash        |
| Document type registration | 2,000,000,000   | 0.02 Dash       |
| Non-unique index           | 1,000,000,000   | 0.01 Dash       |
| Unique index               | 1,000,000,000   | 0.01 Dash       |
| Contested index            | 100,000,000,000 | 1.0 Dash        |
| Token registration         | 10,000,000,000  | 0.1 Dash        |
| Search keyword             | 10,000,000,000  | 0.1 Dash        |

Before protocol version 9, all registration fees were zero.

User Fee Increase

Every state transition carries a user_fee_increase field (a UserFeeIncrease value). This allows the sender to voluntarily pay more than the base fee to prioritize their transition. The multiplier works as follows:

  • 0 = 100% of base fee (no increase)
  • 1 = 101% of base fee
  • 10 = 110% of base fee
  • 100 = 200% of base fee

The increase applies only to the processing fee component, not to storage fees. This is because storage fees are a direct function of bytes stored and should not be inflated.

#![allow(unused)]
fn main() {
fn apply_user_fee_increase(&mut self, user_fee_increase: UserFeeIncrease) {
    let increase = self.processing_fee * user_fee_increase as u64 / 100;
    self.processing_fee = self.processing_fee.saturating_add(increase);
}
}

ExecutionEvent Variants

The ExecutionEvent enum (in rs-drive-abci) determines how fees are collected for each state transition. There are six variants:

| Variant                          | Fee Source                   | Used By                                        |
|----------------------------------|------------------------------|------------------------------------------------|
| Paid                             | Identity credit balance      | Most identity-based transitions                |
| PaidFromAssetLock                | Asset lock transaction value | IdentityCreate, IdentityTopUp                  |
| PaidFromAssetLockWithoutIdentity | Asset lock (fixed amount)    | PartiallyUseAssetLock                          |
| PaidFromAddressInputs            | Platform address balances    | All address-based transitions                  |
| PaidFixedCost                    | Fixed fee to pool            | MasternodeVote                                 |
| PaidFromShieldedPool             | Shielded pool value_balance  | ShieldedTransfer, Unshield, ShieldedWithdrawal |

Each variant carries the operations to execute and enough context for the fee validation and execution pipeline to deduct the correct amount from the correct source.

FeeResult

All fee calculations produce a FeeResult:

#![allow(unused)]
fn main() {
pub struct FeeResult {
    pub storage_fee: Credits,
    pub processing_fee: Credits,
    pub fee_refunds: FeeRefunds,
    pub removed_bytes_from_system: u32,
}
}
  • storage_fee — credits for new bytes written to persistent storage
  • processing_fee — credits for computation and I/O
  • fee_refunds — credits returned because previously stored data was deleted
  • removed_bytes_from_system — bytes removed that were stored by the system (not by any identity), so no refund is issued

The total base fee is storage_fee + processing_fee. The FeeResult is produced by Drive::apply_drive_operations(), which executes the GroveDB operations and measures the actual cost of each insert, delete, and query.

Refunds

When data is removed from GroveDB (a document is deleted, a key is removed), the system calculates a refund of the original storage fee. Refunds are tracked per identity per epoch:

#![allow(unused)]
fn main() {
pub struct FeeRefunds(pub CreditsPerEpochByIdentifier);
// BTreeMap<IdentifierBytes32, BTreeMap<EpochIndex, Credits>>
}

Refunds are not 1:1 with the original fee because storage fees are distributed across future epochs (see below). The refund amount depends on how many epochs have elapsed since the data was stored — the longer the data has been stored, the smaller the refund, because more of the distributed fees have already been paid out to proposers.

There is a dust limit: refunds below 32 bytes' worth of storage credits are discarded to prevent micro-refund spam.
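
A hedged sketch of the refund idea: credits allocated to epochs that have not yet been paid out are returned, and anything below the dust limit is discarded. The per-epoch schedule and the function shape here are illustrative, not the real distribution logic:

```rust
/// Illustrative refund: sum the credits allocated to epochs that have not
/// yet been distributed to proposers, then apply a dust limit.
/// (The real logic uses the 50-era distribution table in rs-dpp.)
fn refund_for_deletion(
    per_epoch_fee: &[(u16, u64)], // (epoch index, credits allocated to that epoch)
    current_epoch: u16,
    dust_limit: u64,
) -> u64 {
    let refund: u64 = per_epoch_fee
        .iter()
        // Epochs already paid out keep their share; only future epochs refund.
        .filter(|(epoch, _)| *epoch > current_epoch)
        .map(|(_, credits)| credits)
        .sum();
    if refund < dust_limit { 0 } else { refund }
}

fn main() {
    // Fee was spread over epochs 0..=4; we are now in epoch 1.
    let schedule = [(0u16, 500), (1, 400), (2, 300), (3, 200), (4, 100)];
    // Epochs 2, 3, 4 are still unpaid: 300 + 200 + 100 credits come back.
    assert_eq!(refund_for_deletion(&schedule, 1, 64), 600);
    // Below the dust limit, no refund is issued.
    assert_eq!(refund_for_deletion(&schedule, 1, 1_000), 0);
}
```

The longer the data has lived, the more epochs have already been paid out, so the refund shrinks over time, which matches the behavior described above.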

Epoch-Based Fee Distribution

Fees do not go directly to the block proposer. Instead, they accumulate in epoch-specific pools and are distributed to proposers at epoch boundaries.

How Epochs Work

  • An epoch is a fixed window of blocks
  • An era consists of 40 epochs
  • Storage fees are distributed across 50 eras (2,000 epochs, roughly 50 years) using a declining schedule

The distribution table allocates percentages per era:

| Era | Percentage | Cumulative |
|-----|------------|------------|
| 0   | 5.000%     | 5.0%       |
| 1   | 4.800%     | 9.8%       |
| 2   | 4.600%     | 14.4%      |
| ... | ...        | ...        |
| 49  | 0.125%     | 100.0%     |

Within each era, the percentage is divided equally among the era's epochs. For example, a 1,000,000-credit storage fee allocates 50,000 credits (5%) to era 0, which splits evenly across 40 epochs into 1,250 credits per epoch.
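
The same worked example as arithmetic (the permille representation is just a convenient way to express 5.000% in integers):

```rust
fn main() {
    let storage_fee: u64 = 1_000_000;
    let era_0_permille: u64 = 50; // 5.000% expressed in permille
    let epochs_per_era: u64 = 40;

    // Era 0 receives 5% of the storage fee.
    let era_0_share = storage_fee * era_0_permille / 1_000;
    assert_eq!(era_0_share, 50_000);

    // Split evenly across the era's 40 epochs.
    let per_epoch = era_0_share / epochs_per_era;
    assert_eq!(per_epoch, 1_250);
}
```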

Fee Flow

Block execution
  └→ FeeResult (storage + processing)
       └→ End of block: add_distribute_block_fees_into_pools()
            ├→ Processing fees → current epoch pool
            └→ Storage fees → global distribution pool → spread across future epochs

Epoch change
  └→ add_distribute_fees_from_oldest_unpaid_epoch_pool_to_proposers()
       ├→ Calculate Core block rewards for the epoch
       ├→ Add Core rewards to system credits
       └→ Distribute (Platform fees + Core rewards) to proposers

Processing fees are paid to proposers at the end of the epoch in which they were collected. Storage fees are spread across 50 eras of future epochs, providing a long-term revenue stream for validators.

Fee Versioning

All fee parameters are versioned through FeeVersion, stored in PlatformVersion. This allows the protocol to adjust fee rates without a hard fork — a new protocol version simply references different fee constants.

The current fee version structure:

#![allow(unused)]
fn main() {
pub struct FeeVersion {
    pub fee_version_number: FeeVersionNumber,
    pub uses_version_fee_multiplier_permille: Option<u64>,
    pub storage: FeeStorageVersion,
    pub signature: FeeSignatureVersion,
    pub hashing: FeeHashingVersion,
    pub processing: FeeProcessingVersion,
    pub data_contract_validation: FeeDataContractValidationVersion,
    pub data_contract_registration: FeeDataContractRegistrationVersion,
    pub state_transition_min_fees: StateTransitionMinFees,
    pub vote_resolution_fund_fees: VoteResolutionFundFees,
}
}

Fee versions are stored in the FEE_VERSIONS array and looked up by number. The uses_version_fee_multiplier_permille field allows a global scaling factor (permille = divide by 1000; a value of 1000 means no change).

Key Source Files

| File | Contents |
|------|----------|
| rs-platform-version/src/version/fee/ | All fee version definitions |
| rs-platform-version/src/version/fee/storage/v1.rs | Storage fee rates |
| rs-platform-version/src/version/fee/signature/v1.rs | Signature verification costs |
| rs-platform-version/src/version/fee/state_transition_min_fees/v1.rs | Minimum fees per transition |
| rs-platform-version/src/version/fee/data_contract_registration/v2.rs | Contract registration fees |
| rs-drive/src/fees/op.rs | LowLevelDriveOperation and cost calculation |
| rs-dpp/src/fee/fee_result/mod.rs | FeeResult, BalanceChangeForIdentity |
| rs-dpp/src/fee/epoch/distribution.rs | Epoch distribution table and refund logic |
| rs-drive-abci/src/execution/types/execution_event/mod.rs | ExecutionEvent enum |
| rs-drive-abci/src/execution/platform_events/fee_pool_inwards_distribution/ | Block fee collection |
| rs-drive-abci/src/execution/platform_events/fee_pool_outwards_distribution/ | Proposer payout |

Platform Address Fees

Protocol versions 10 and 11 introduced a new class of state transitions that operate on platform addresses rather than identities. Platform addresses are derived from public keys (similar to Bitcoin addresses) and hold a credit balance directly, without requiring an identity to be registered.

This chapter explains how fees work for address-based transitions, how they differ from the identity credit model, and how the fee strategy mechanism gives clients control over fee deduction.

Background: Why Platform Addresses?

Before protocol version 10, every action on the platform required an identity — a registered entity funded by an asset lock transaction. Creating an identity required a Core transaction, waiting for confirmations, and then submitting a state transition. This was a multi-step process that created friction for simple operations like "send credits to an address."

Platform addresses solve this by allowing credits to exist at addresses without a full identity. Users can fund an address (via an asset lock or a transfer from another address) and then spend from it directly using a signature from the address's key pair.

Address-Based State Transitions

Protocol versions 10 and 11 added the following transition types:

| Transition                  | Protocol Version | Description                                              |
|-----------------------------|------------------|----------------------------------------------------------|
| IdentityCreateFromAddresses | 10               | Create an identity funded from platform address balances |
| IdentityTopUpFromAddresses  | 10               | Add credits to an existing identity from address balances |
| AddressFundsTransfer        | 11               | Transfer credits between platform addresses              |
| AddressFundingFromAssetLock | 11               | Fund an address directly from an asset lock              |
| AddressCreditWithdrawal     | 11               | Withdraw credits from an address back to Core            |

All of these use the PaidFromAddressInputs execution event variant.

The Input/Output Model

Address-based transitions follow a UTXO-inspired model with inputs and outputs:

Inputs

Each input specifies a platform address, the expected nonce, and the amount to consume from that address's balance:

inputs: [
    { address: A, nonce: 5, amount: 100_000_000 },
    { address: B, nonce: 3, amount:  50_000_000 },
]

The platform validates each input by:

  1. Checking the address exists in state
  2. Verifying the nonce is exactly current_nonce + 1 (replay protection)
  3. Confirming the address has sufficient balance for the requested amount

After validation, the remaining balance of each input is tracked:

remaining = actual_balance - requested_amount

This remaining balance is what is available to pay fees from (after the requested amount is committed to outputs).
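The per-input checks above can be sketched as follows. This is a simplified illustration with hypothetical types and names (`InputState`, `validate_input`); the real validation lives in rs-drive-abci's address balance and nonce traits:

```rust
type Credits = u64;
type AddressNonce = u32;

struct InputState {
    nonce: AddressNonce,
    balance: Credits,
}

#[derive(Debug, PartialEq)]
enum InputError {
    InvalidNonce { expected: AddressNonce, provided: AddressNonce },
    InsufficientBalance { requested: Credits, available: Credits },
}

/// Validate a single input against state; on success, return the remaining
/// balance that stays available for fee deduction.
fn validate_input(
    state: &InputState,
    provided_nonce: AddressNonce,
    requested_amount: Credits,
) -> Result<Credits, InputError> {
    // Replay protection: the provided nonce must be exactly current + 1.
    let expected = state.nonce.saturating_add(1);
    if provided_nonce != expected {
        return Err(InputError::InvalidNonce { expected, provided: provided_nonce });
    }
    // The address must cover the requested amount.
    if state.balance < requested_amount {
        return Err(InputError::InsufficientBalance {
            requested: requested_amount,
            available: state.balance,
        });
    }
    // remaining = actual_balance - requested_amount
    Ok(state.balance - requested_amount)
}
```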

Outputs

Outputs specify destination addresses and the credits to send:

outputs: [
    { address: C, amount: 80_000_000 },
    { address: D, amount: 60_000_000 },
]

Outputs are added to recipient balances. They can also be reduced to pay fees (see Fee Strategy below).

Balance Equation

The fundamental constraint is:

sum(input_amounts) >= sum(output_amounts) + fees

If the inputs cannot cover both the outputs and the fees, the transition is rejected with AddressesNotEnoughFundsError.
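A minimal sketch of this constraint check, with amounts as plain u64 credits. Checked addition is used so an overflowing sum is rejected rather than wrapping around and appearing covered:

```rust
/// Check the fundamental balance constraint:
/// sum(input_amounts) >= sum(output_amounts) + fees.
/// Treats arithmetic overflow as "not covered" and rejects conservatively.
fn inputs_cover_outputs_and_fees(inputs: &[u64], outputs: &[u64], fees: u64) -> bool {
    let total_in: Option<u64> = inputs.iter().try_fold(0u64, |acc, &x| acc.checked_add(x));
    let total_required: Option<u64> = outputs
        .iter()
        .try_fold(fees, |acc, &x| acc.checked_add(x));
    match (total_in, total_required) {
        (Some(input_sum), Some(required)) => input_sum >= required,
        _ => false, // overflow
    }
}
```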

Fee Strategy

Unlike identity-based transitions where fees are always deducted from the identity's balance, address-based transitions use an explicit fee strategy that the client includes in the transition. The fee strategy is an ordered sequence of steps that tells the platform where to find the fee credits:

#![allow(unused)]
fn main() {
pub enum AddressFundsFeeStrategyStep {
    /// Deduct fee from a specific input address's remaining balance.
    DeductFromInput(u16),
    /// Reduce a specific output's amount to cover the fee.
    ReduceOutput(u16),
}

pub type AddressFundsFeeStrategy = Vec<AddressFundsFeeStrategyStep>;
}

How It Works

The platform processes the steps in order. At each step, it deducts as much of the remaining fee as possible from the specified source:

fee_strategy: [DeductFromInput(0), ReduceOutput(0)]

Step 1: Try to deduct full fee from input 0's remaining balance
  - remaining_fee = 10,000,000
  - input_0_remaining = 25,000,000
  - deducted = 10,000,000
  - input_0_remaining = 15,000,000
  - remaining_fee = 0 → done

If the first source is insufficient, the algorithm moves to the next step:

fee_strategy: [DeductFromInput(0), ReduceOutput(1)]

Step 1: Try to deduct from input 0
  - remaining_fee = 10,000,000
  - input_0_remaining = 3,000,000
  - deducted = 3,000,000
  - input_0_remaining = 0 (removed)
  - remaining_fee = 7,000,000

Step 2: Try to reduce output 1
  - remaining_fee = 7,000,000
  - output_1_amount = 60,000,000
  - deducted = 7,000,000
  - output_1_amount = 53,000,000
  - remaining_fee = 0 → done
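The greedy walk through the steps can be sketched like this. It is simplified to slice-indexed balances (the real implementation operates on snapshotted BTreeMap keys, as described under Index Stability below), but the drain-then-move-on behavior is the same:

```rust
#[derive(Clone, Copy)]
enum FeeStep {
    DeductFromInput(u16),
    ReduceOutput(u16),
}

/// Greedy fee deduction sketch: each step drains as much of the remaining
/// fee as its source can cover, then the next step handles any remainder.
/// Returns true if the fee was fully covered (fee_fully_covered).
fn run_fee_strategy(
    input_remaining: &mut [u64],
    output_amounts: &mut [u64],
    strategy: &[FeeStep],
    fee: u64,
) -> bool {
    let mut remaining_fee = fee;
    for step in strategy {
        if remaining_fee == 0 {
            break;
        }
        let source = match step {
            FeeStep::DeductFromInput(i) => &mut input_remaining[*i as usize],
            FeeStep::ReduceOutput(i) => &mut output_amounts[*i as usize],
        };
        // Take as much as this source can provide.
        let deducted = remaining_fee.min(*source);
        *source -= deducted;
        remaining_fee -= deducted;
    }
    remaining_fee == 0
}
```

Running this on the second example above (input 0 holds 3,000,000, output 1 holds 60,000,000, fee 10,000,000) drains input 0 to zero and reduces output 1 to 53,000,000.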

Index Stability

The indices in the fee strategy refer to the original BTreeMap iteration order. The implementation snapshots the address lists before processing any steps, so removing a drained entry at step 1 does not shift the indices for step 2. This is critical for correctness:

#![allow(unused)]
fn main() {
let input_addresses: Vec<PlatformAddress> = inputs.keys().copied().collect();
let output_addresses: Vec<PlatformAddress> = outputs.keys().copied().collect();

for step in fee_strategy {
    match step {
        DeductFromInput(index) => {
            let address = input_addresses[*index as usize];
            // look up by address, not by index into the live BTreeMap
        }
        // ...
    }
}
}

FeeDeductionResult

The deduction algorithm produces:

#![allow(unused)]
fn main() {
pub struct FeeDeductionResult {
    pub remaining_input_balances: BTreeMap<PlatformAddress, (AddressNonce, Credits)>,
    pub adjusted_outputs: BTreeMap<PlatformAddress, Credits>,
    pub fee_fully_covered: bool,
}
}

If fee_fully_covered is false, the transition is rejected.

Nonce System (Replay Protection)

Every platform address has a nonce that starts at 0 and increments by 1 with each transition that uses the address as an input. The transition must specify the expected nonce, which must be exactly current_nonce + 1:

#![allow(unused)]
fn main() {
let expected_next_nonce = state_nonce.saturating_add(1);
if provided_nonce != expected_next_nonce {
    return Err(AddressInvalidNonceError {
        expected: expected_next_nonce,
        provided: provided_nonce,
    });
}
}

This prevents:

  • Replay attacks — resubmitting an old transition (wrong nonce)
  • Double-spending — using the same balance twice (nonce already consumed)
  • Ordering attacks — submitting transitions out of order (nonce gap)

If a nonce reaches u32::MAX, the address is exhausted and can no longer be used as an input.

Multiple addresses can be used as inputs in a single transition, each with its own nonce. The platform enforces a maximum number of inputs per transition (configured in platform_version.dpp.state_transitions.max_address_inputs).

The PaidFromAddressInputs Event

When an address-based transition passes validation, the processor creates a PaidFromAddressInputs execution event:

#![allow(unused)]
fn main() {
ExecutionEvent::PaidFromAddressInputs {
    input_current_balances: BTreeMap<PlatformAddress, (AddressNonce, Credits)>,
    added_to_balance_outputs: BTreeMap<PlatformAddress, Credits>,
    fee_strategy: AddressFundsFeeStrategy,
    operations: Vec<DriveOperation>,
    execution_operations: Vec<ValidationOperation>,
    additional_fixed_fee_cost: Option<Credits>,
    user_fee_increase: UserFeeIncrease,
}
}
  • input_current_balances — the remaining balance of each input after consuming the requested amounts, plus the validated nonce
  • added_to_balance_outputs — the output amounts before any fee deductions
  • fee_strategy — the client's ordered fee deduction instructions
  • operations — the GroveDB operations (balance updates, nonce bumps)
  • execution_operations — validation operations that also incur fees
  • additional_fixed_fee_cost — optional fixed costs (e.g., registration fees)
  • user_fee_increase — voluntary processing fee multiplier

Fee Validation Pipeline

Address-based fee validation runs in two phases:

Phase 1: Pre-Check (Minimum Balance)

Before expensive state reads, a quick estimate verifies that the inputs have enough credits to cover the minimum possible fee:

#![allow(unused)]
fn main() {
fn validate_addresses_minimum_balance_pre_check(
    &self,
    remaining_address_balances: &BTreeMap<PlatformAddress, (AddressNonce, Credits)>,
    platform_version: &PlatformVersion,
) -> Result<SimpleConsensusValidationResult, Error>
}

This catches obviously insufficient balances early. Only AddressFundsTransfer, AddressCreditWithdrawal, IdentityCreateFromAddresses, and IdentityTopUpFromAddresses run this pre-check. AddressFundingFromAssetLock skips it because its funds come from the asset lock, not existing address balances.

Phase 2: Full Fee Validation

After the operations are determined, the platform:

  1. Applies all drive operations (in estimation mode) to calculate the actual FeeResult
  2. Adds validation operation costs
  3. Applies the user_fee_increase multiplier
  4. Adds any additional_fixed_fee_cost
  5. Runs the fee strategy to deduct the total from inputs/outputs
  6. Checks fee_fully_covered
#![allow(unused)]
fn main() {
let fee_deduction_result = deduct_fee_from_outputs_or_remaining_balance_of_inputs(
    input_current_balances.clone(),
    added_to_balance_outputs.clone(),
    fee_strategy,
    required_balance,
    platform_version,
)?;

if !fee_deduction_result.fee_fully_covered {
    return Err(AddressesNotEnoughFundsError);
}
}

Fee Execution

When the transition is executed, the fee deduction runs a second time with the real (not estimated) fee amount, and the adjusted balances are written to state:

  1. Apply all drive operations — the core state changes (transfers, nonce bumps, etc.)
  2. Calculate actual fee — from the FeeResult of the applied operations
  3. Deduct fee — run the fee strategy against the actual fee amount
  4. Adjust outputs — if any output was reduced, call remove_balance_from_address for the difference
  5. Adjust inputs — if any input's remaining balance was reduced, call set_balance_to_address with the adjusted amount
  6. Apply adjustment operations — batch the balance corrections into GroveDB

The fee goes into the epoch pool just like identity-based fees, and is distributed to proposers at epoch end.

Minimum Fee Calculation

For AddressFundsTransfer, the minimum fee scales with the number of inputs and outputs:

min_fee = num_inputs × address_funds_transfer_input_cost
        + num_outputs × address_funds_transfer_output_cost

With current constants:

Inputs | Outputs | Minimum Fee
1      | 1       |  6,500,000
1      | 2       | 12,500,000
2      | 1       |  7,000,000
2      | 2       | 13,000,000

For IdentityCreateFromAddresses:

min_fee = identity_create_base_cost + num_keys × identity_key_in_creation_cost

Keys | Minimum Fee
1    |  8,500,000
2    | 15,000,000
3    | 21,500,000
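Both formulas can be checked numerically. The constant values below are the ones implied by the tables in this section (the per-input transfer cost of 500,000 and per-output cost of 6,000,000 also appear in the worked example later in this chapter); the authoritative values live in rs-platform-version's fee constants:

```rust
// Values implied by the fee tables above; the authoritative constants live
// in rs-platform-version (state_transition_min_fees).
const ADDRESS_FUNDS_TRANSFER_INPUT_COST: u64 = 500_000;
const ADDRESS_FUNDS_TRANSFER_OUTPUT_COST: u64 = 6_000_000;
const IDENTITY_CREATE_BASE_COST: u64 = 2_000_000;
const IDENTITY_KEY_IN_CREATION_COST: u64 = 6_500_000;

/// min_fee = num_inputs × input_cost + num_outputs × output_cost
fn address_funds_transfer_min_fee(num_inputs: u64, num_outputs: u64) -> u64 {
    num_inputs * ADDRESS_FUNDS_TRANSFER_INPUT_COST
        + num_outputs * ADDRESS_FUNDS_TRANSFER_OUTPUT_COST
}

/// min_fee = identity_create_base_cost + num_keys × identity_key_in_creation_cost
fn identity_create_from_addresses_min_fee(num_keys: u64) -> u64 {
    IDENTITY_CREATE_BASE_COST + num_keys * IDENTITY_KEY_IN_CREATION_COST
}
```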

Key Differences from Identity Fees

Aspect                | Identity Credit Fees                       | Platform Address Fees
Fee source            | Identity balance (single pool)             | Input addresses + output reduction
Fee deduction         | Automatic from identity                    | Explicit fee strategy
Debt allowed          | Yes (negative balance tracked)             | No — must have funds
Refunds               | Yes — deleted storage refunded to identity | No refund mechanism
Nonce                 | Per identity, per contract                 | Per address, monotonic u32
User fee increase     | Applied to processing fees                 | Applied to processing fees
Minimum fee           | Per transition type                        | Per input + per output
ExecutionEvent        | Paid / PaidFromAssetLock                   | PaidFromAddressInputs
Error on insufficient | BalanceIsNotEnoughError                    | AddressesNotEnoughFundsError

No Debt, No Refunds

The most significant difference is that address-based transitions have no debt mechanism and no refund mechanism:

  • Identity fees can create a negative balance when the processing fee cannot be fully covered. This debt is tracked and must be repaid before the identity can submit new transitions.

  • Address fees must be fully covered by available inputs and outputs. If the fee cannot be paid, the transition is rejected outright.

  • Identity refunds return credits to the identity when stored data is deleted. The refund is calculated per epoch based on when the data was originally stored.

  • Address-based transitions do not produce refundable storage. If an address-based transition stores data and that data is later deleted, no refund is issued to any address.

Example: AddressFundsTransfer

A concrete example of how fees flow for a two-input, two-output transfer:

Transition:
  inputs:  [{A, nonce: 5, amount: 1_000_000_000}, {B, nonce: 3, amount: 500_000_000}]
  outputs: [{C, amount: 800_000_000}, {D, amount: 600_000_000}]
  fee_strategy: [DeductFromInput(0), DeductFromInput(1)]
  user_fee_increase: 0

1. Validate inputs:
   A: state_nonce=4, balance=2_000_000_000 → nonce 5 ✓, balance ≥ 1B ✓
   B: state_nonce=2, balance=700_000_000  → nonce 3 ✓, balance ≥ 500M ✓

   Remaining: A=(5, 1_000_000_000), B=(3, 200_000_000)

2. Pre-check minimum fee:
   min = 2 × 500,000 + 2 × 6,000,000 = 13,000,000
   sum(remaining) = 1,200,000,000 >> 13M ✓

3. Create PaidFromAddressInputs event:
   input_current_balances: {A: (5, 1B), B: (3, 200M)}
   added_to_balance_outputs: {C: 800M, D: 600M}

4. Calculate actual fee:
   storage_fee = 14,000,000  (hypothetical)
   processing_fee = 2,500,000
   total = 16,500,000

5. Apply fee strategy:
   Step 1: DeductFromInput(0) → A: 1B - 16.5M = 983,500,000
   remaining_fee = 0 ✓

6. Execute:
   A.balance = 983,500,000, A.nonce = 5
   B.balance = 200,000,000, B.nonce = 3
   C.balance += 800,000,000
   D.balance += 600,000,000
   16,500,000 → epoch fee pool

Key Source Files

File | Contents
rs-dpp/src/address_funds/fee_strategy/mod.rs | AddressFundsFeeStrategy definition
rs-dpp/src/address_funds/fee_strategy/deduct_fee_from_inputs_and_outputs/ | Fee deduction algorithm
rs-drive-abci/src/execution/types/execution_event/mod.rs | PaidFromAddressInputs variant
rs-drive-abci/src/execution/validation/.../processor/traits/address_balances_and_nonces.rs | Nonce and balance validation
rs-drive-abci/src/execution/validation/.../processor/traits/addresses_minimum_balance.rs | Minimum balance pre-check
rs-drive-abci/src/execution/platform_events/.../validate_fees_of_event/ | Full fee validation
rs-drive-abci/src/execution/platform_events/.../execute_event/ | Fee execution with adjustments
rs-platform-version/src/version/fee/state_transition_min_fees/v1.rs | Address fee constants

Shielded Transaction Fees

Introduced in protocol version 12. For the general fee system overview, see Fee System Overview. For address-based fees (protocol versions 10--11), see Platform Address Fees.

Shielded transactions use the Orchard protocol's zero-knowledge proofs to hide transaction amounts. Because amounts are hidden, the platform cannot inspect the transaction to compute fees the way it does for transparent transitions. Instead, the fee is embedded into the cryptographic structure of the bundle itself, and the platform enforces a minimum.

This chapter explains the fee model, how it is validated, and how it differs from the transparent and address-based fee systems.

The Problem: Fees in a Privacy System

In a transparent state transition like AddressFundsTransfer, the platform can see the transfer amount, compute the cost of storage and processing, and deduct the fee from the sender's balance. The fee calculation happens after the transition is applied.

Shielded transitions break this model. The amounts inside the ZK proof are hidden. The platform cannot look inside the proof to determine how much was sent or received. It can only see two things from the public fields:

  1. value_balance — the net amount leaving the shielded pool (positive means credits flow out of the pool into the transparent world or to proposers as fees)
  2. num_actions — the number of spend+output pairs in the bundle

The fee must therefore be encoded into value_balance by the client and validated by the platform before execution.

Fee Extraction by Transition Type

The fee is derived differently depending on the shielded transition type:

Transition | Fee Formula | Explanation
Shield | Paid from transparent address inputs | Fee comes from the transparent side, not from value_balance. Skipped by shielded fee validation.
ShieldedTransfer | fee = value_balance | The entire value_balance is fee — nothing leaves the pool except the fee going to proposers.
Unshield | fee = value_balance − amount | amount goes to the output address; the remainder is fee.
ShieldedWithdrawal | fee = value_balance − amount | amount goes to the withdrawal document; the remainder is fee.
ShieldFromAssetLock | Paid from asset lock | Fee comes from the asset lock mechanism, not from value_balance.

For ShieldedTransfer, the client constructs the bundle so that total_spent − total_output = desired_fee. The Orchard circuit proves that value is conserved (inputs = outputs + value_balance), and the binding signature cryptographically commits to the value_balance. Mutating value_balance after signing will cause the binding signature to fail verification.
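The per-type extraction rules can be sketched as a simple match. This is an illustrative model only: the field set is hypothetical, value_balance is modeled here as an unsigned credit amount already validated as non-negative, and Shield / ShieldFromAssetLock are omitted because their fees come from outside the shielded pool:

```rust
/// Illustrative model of the shielded transitions whose fee is derived
/// from value_balance (Shield and ShieldFromAssetLock pay from elsewhere).
enum ShieldedTransitionKind {
    ShieldedTransfer { value_balance: u64 },
    Unshield { value_balance: u64, amount: u64 },
    ShieldedWithdrawal { value_balance: u64, amount: u64 },
}

/// Derive the fee from the public fields. Returns None when the stated
/// amount exceeds value_balance (a malformed bundle).
fn extract_fee(t: &ShieldedTransitionKind) -> Option<u64> {
    match t {
        // The entire value_balance is the fee.
        ShieldedTransitionKind::ShieldedTransfer { value_balance } => Some(*value_balance),
        // amount goes to the destination; the remainder is the fee.
        ShieldedTransitionKind::Unshield { value_balance, amount }
        | ShieldedTransitionKind::ShieldedWithdrawal { value_balance, amount } => {
            value_balance.checked_sub(*amount)
        }
    }
}
```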

The Three-Component Fee Model

The minimum shielded fee has three components:

min_fee = proof_verification_fee + num_actions × (processing_fee + storage_fee)

1. Proof Verification Fee (per bundle)

A single Halo 2 ZK proof covers the entire bundle regardless of action count. Verifying it is the most expensive operation — benchmarked at approximately 30× the cost of a per-action signature verification. This is a fixed cost per bundle.

Current value: 100,000,000 credits (100M)

2. Per-Action Processing Fee

Each action in the bundle requires:

  • RedPallas spend authorization signature verification
  • Nullifier duplicate check (hash + tree lookup)
  • Note commitment insertion into the Sinsemilla-based Merkle tree

The processing cost per action was calibrated at a 33:1 ratio against the proof verification cost, based on benchmarks of signature verification and tree operations.

Current value: 3,000,000 credits (3M)

3. Per-Action Storage Fee

Each action permanently stores data in two places:

Storage                          | Bytes | Contents
BulkAppendTree (commitment tree) | 280   | 32 cmx + 32 nullifier + 216 encrypted note
Nullifier tree                   | 32    | nullifier key (value is empty)
Total                            | 312   |

The storage fee is derived from the platform's existing per-byte storage rates:

storage_fee_per_action = 312 × (storage_disk_usage_credit_per_byte
                              + storage_processing_credit_per_byte)
                       = 312 × (27,000 + 400)
                       = 312 × 27,400
                       = 8,548,800

This is not a separate constant — it is computed dynamically from the storage fee version, ensuring shielded storage costs stay consistent with transparent storage costs as fee parameters evolve.

Fee Table

Combining all three components:

Actions | Proof Fee   | Processing | Storage    | Total Minimum Fee
2       | 100,000,000 |  6,000,000 | 17,097,600 | 123,097,600
3       | 100,000,000 |  9,000,000 | 25,646,400 | 134,646,400
4       | 100,000,000 | 12,000,000 | 34,195,200 | 146,195,200

Note: The Orchard protocol requires a minimum of 2 actions per bundle for privacy (even a single-input single-output transfer produces 2 actions with a dummy padding action). Bundles with 1 action are structurally invalid.
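The table rows can be reproduced directly from the formula and the constants given in this chapter. A sketch (in the codebase the first two values are version constants and the storage rates are read from the fee version at runtime, rather than hardcoded as here):

```rust
// Constants as given in this chapter; in the codebase the storage component
// is derived from fee_version.storage at runtime, not hardcoded.
const SHIELDED_PROOF_VERIFICATION_FEE: u64 = 100_000_000;
const SHIELDED_PER_ACTION_PROCESSING_FEE: u64 = 3_000_000;
const SHIELDED_STORAGE_BYTES_PER_ACTION: u64 = 312;
const STORAGE_DISK_USAGE_CREDIT_PER_BYTE: u64 = 27_000;
const STORAGE_PROCESSING_CREDIT_PER_BYTE: u64 = 400;

/// min_fee = proof_verification_fee + num_actions × (processing_fee + storage_fee)
fn min_shielded_fee(num_actions: u64) -> u64 {
    // storage_fee_per_action = 312 × (27,000 + 400) = 8,548,800
    let storage_per_action = SHIELDED_STORAGE_BYTES_PER_ACTION
        * (STORAGE_DISK_USAGE_CREDIT_PER_BYTE + STORAGE_PROCESSING_CREDIT_PER_BYTE);
    SHIELDED_PROOF_VERIFICATION_FEE
        + num_actions * (SHIELDED_PER_ACTION_PROCESSING_FEE + storage_per_action)
}
```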

Where Fee Validation Runs

Fee validation is integrated into the processor pipeline (see Validation Pipeline) between basic structure validation and ZK proof verification:

... → Basic Structure → Minimum Fee Check → ZK Proof Verification → ...

The ordering is deliberate. The fee check is stateless and cheap — it only reads value_balance and actions.len() from the transition, with no GroveDB lookups. Placing it before proof verification means that bundles with insufficient fees are rejected instantly, without spending ~100ms on Halo 2 verification.

The implementation lives in packages/rs-drive-abci/src/execution/validation/state_transition/processor/traits/shielded_proof.rs:

#![allow(unused)]
fn main() {
pub(crate) trait StateTransitionShieldedMinimumFeeValidationV0 {
    fn validate_minimum_shielded_fee(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<SimpleConsensusValidationResult, Error>;
}
}

If the fee is below the minimum, the transition is rejected with InsufficientShieldedFeeError — an unpaid consensus error. The sender is not charged (there is no identity to charge), and the transition produces no execution event.

Fee Constants in the Version System

The fee parameters are stored in the platform version under drive_abci.validation_and_processing.event_constants:

#![allow(unused)]
fn main() {
pub struct DriveAbciValidationConstants {
    pub maximum_vote_polls_to_process: u16,
    pub maximum_contenders_to_consider: u16,
    pub minimum_pool_notes_for_outgoing: u64,
    pub shielded_proof_verification_fee: u64,      // 100_000_000
    pub shielded_per_action_processing_fee: u64,    // 3_000_000
}
}

The storage component is not a separate constant — it is derived at runtime from fee_version.storage.storage_disk_usage_credit_per_byte and fee_version.storage.storage_processing_credit_per_byte, multiplied by the constant SHIELDED_STORAGE_BYTES_PER_ACTION = 312.

This design means:

  • Proof and processing fees can be tuned independently via version bumps
  • Storage fees automatically track changes to the platform-wide storage rates
  • No "magic number" for storage cost exists in the version constants

How Fees Flow After Validation

Once the fee check passes and the transition is fully validated and executed, the fee amount is deducted from the shielded pool's total balance and routed to block proposers via the PaidFromShieldedPool execution event:

ShieldedTransfer:   pool_balance -= fee_amount
Unshield:           pool_balance -= (amount + fee_amount)
ShieldedWithdrawal: pool_balance -= (amount + fee_amount)

For Unshield, the amount goes to the output platform address. For ShieldedWithdrawal, the amount goes to a Core withdrawal document. In both cases, the fee_amount goes to proposers.

For ShieldedTransfer, the total pool value decreases by exactly the fee amount. The rest of the value stays inside the pool (the sender's notes are spent and the recipient's notes are created, but the pool's aggregate balance only drops by the fee).

Cryptographic Binding

The fee is not just a field that the platform trusts. It is cryptographically bound to the ZK proof through two mechanisms:

  1. Value commitments (cv_net): Each action contains a Pedersen commitment to the note value. The sum of all value commitments must equal value_balance (modulo the blinding factors). The binding signature proves this relationship holds.

  2. Platform sighash: The bundle commitment (which includes value_balance) is hashed into the sighash that the spend authorization signatures sign over:

    sighash = SHA-256("DashPlatformSighash" || bundle_commitment || extra_data)
    

    Mutating value_balance after signing changes the sighash, invalidating all signatures. The BatchValidator checks both the Halo 2 proof and all signatures, so any tampering is caught.

This means a client cannot claim a lower fee than what the ZK proof actually commits to — the proof and signatures would fail verification.

Rules and Guidelines

Do:

  • Always set value_balance to at least the minimum fee when building a shielded bundle on the client side. Use min_fee = proof_verification_fee + num_actions × (processing_fee + storage_fee) with the current platform version constants.
  • Include the fee in the note arithmetic: total_spent = total_output + fee. The Orchard builder handles this when you set the output amount to spend_amount − desired_fee.
  • Remember that the minimum action count is 2 (Orchard privacy requirement).

Do not:

  • Assume the fee is free for Shield transitions — the fee comes from transparent address inputs and is validated through the address balance system, not here.
  • Mutate value_balance after building the bundle. The binding signature and sighash will be invalidated.
  • Hardcode fee amounts. Always read from PlatformVersion — the constants are versioned and will change as the protocol evolves.

Consensus Errors

When a Dash Platform node processes a state transition -- a document creation, an identity update, a credit withdrawal -- things can go wrong. The data contract might reference an invalid schema. The signature might not match. The identity might not have enough balance. These are not internal bugs; they are protocol-level validation failures that every node on the network must agree on. If one node rejects a transition for reason X and another rejects it for reason Y, the chain forks. If one node returns error code 40100 and another returns 40101, clients break.

This is why consensus errors in Dash Platform are not ordinary Rust errors. They are serializable, code-stable, network-transmitted data structures that must produce identical bytes on every node and remain decodable by every client version. The ConsensusError enum is the heart of this system.

The top-level enum

Open packages/rs-dpp/src/errors/consensus/consensus_error.rs and you will find the root of the hierarchy:

#![allow(unused)]
fn main() {
#[derive(
    thiserror::Error,
    Debug,
    Encode,
    Decode,
    PlatformSerialize,
    PlatformDeserialize,
    Clone,
    PartialEq,
)]
#[platform_serialize(limit = 2000)]
#[error(transparent)]
#[allow(clippy::large_enum_variant)]
pub enum ConsensusError {
    /*

    DO NOT CHANGE ORDER OF VARIANTS WITHOUT INTRODUCING OF NEW VERSION

    */
    #[error("default error")]
    DefaultError,

    #[error(transparent)]
    BasicError(BasicError),

    #[error(transparent)]
    StateError(StateError),

    #[error(transparent)]
    SignatureError(SignatureError),

    #[error(transparent)]
    FeeError(FeeError),

    #[cfg(test)]
    #[cfg_attr(test, error(transparent))]
    TestConsensusError(TestConsensusError),
}
}

There are five things worth understanding here.

The four categories

Every consensus error falls into one of four categories:

  • BasicError -- structural and syntactic validation failures. The state transition itself is malformed, references a nonexistent document type, has an invalid identifier, exceeds size limits, or fails schema validation. These are caught before the node ever checks persistent state. The BasicError enum in packages/rs-dpp/src/errors/consensus/basic/basic_error.rs contains over 130 variants organized into sub-groups: versioning errors, structure errors, data contract errors, group errors, document errors, token errors, identity errors, state transition errors, and address errors.

  • StateError -- the transition is structurally valid but conflicts with the current platform state. A document already exists, an identity nonce is wrong, a token account is frozen, a group action was already completed. The StateError enum in packages/rs-dpp/src/errors/consensus/state/state_error.rs contains roughly 80 variants covering data contracts, documents, identities, voting, tokens, groups, and address balances.

  • SignatureError -- the cryptographic signature on the transition is invalid. The identity was not found, the key type is wrong, the key is disabled, the security level is insufficient, or the raw signature verification failed.

  • FeeError -- the transition is valid in every other way, but the identity cannot pay for it. Currently this category has a single variant: BalanceIsNotEnoughError.

This layered structure mirrors the validation pipeline. Platform checks basic structure first, then state, then signatures, then fees. Each layer produces errors from its own category.

The DO NOT CHANGE ORDER rule

The comment at the top of every consensus error enum is not a suggestion:

#![allow(unused)]
fn main() {
/*

DO NOT CHANGE ORDER OF VARIANTS WITHOUT INTRODUCING OF NEW VERSION

*/
}

Why? Because ConsensusError derives Encode and Decode from bincode. Bincode serializes enum variants by their ordinal position -- the first variant is 0, the second is 1, and so on. If you reorder variants, the same bytes now decode to a different error on nodes running different code versions. This is a consensus failure.

The same rule applies to every nested error enum. Here is FeeError in packages/rs-dpp/src/errors/consensus/fee/fee_error.rs:

#![allow(unused)]
fn main() {
#[derive(
    Error, Debug, PartialEq, Encode, Decode, PlatformSerialize, PlatformDeserialize, Clone,
)]
pub enum FeeError {
    /*

    DO NOT CHANGE ORDER OF VARIANTS WITHOUT INTRODUCING OF NEW VERSION

    */
    #[error(transparent)]
    BalanceIsNotEnoughError(BalanceIsNotEnoughError),
}
}

And the same rule applies to individual error structs. Here is DocumentAlreadyPresentError in packages/rs-dpp/src/errors/consensus/state/document/document_already_present_error.rs:

#![allow(unused)]
fn main() {
#[derive(
    Error, Debug, Clone, PartialEq, Eq, Encode, Decode, PlatformSerialize, PlatformDeserialize,
)]
#[error("Document {document_id} is already present")]
#[platform_serialize(unversioned)]
pub struct DocumentAlreadyPresentError {
    /*

    DO NOT CHANGE ORDER OF FIELDS WITHOUT INTRODUCING OF NEW VERSION

    */
    document_id: Identifier,
}
}

Notice the comment says fields too, not just variants. Bincode serializes struct fields in declaration order. Swap two fields and the bytes change.
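To make the failure mode concrete, here is a hand-rolled illustration of index-based variant encoding. Bincode's wire format differs in its details, but variants are likewise identified by declaration order, not by name — so two nodes compiled from differently ordered sources disagree about what the same byte means:

```rust
#[derive(Debug, PartialEq)]
enum ErrorV1 {
    BasicError,     // encodes as ordinal 0
    StateError,     // encodes as ordinal 1
    SignatureError, // encodes as ordinal 2
}

fn encode_v1(e: &ErrorV1) -> u8 {
    match e {
        ErrorV1::BasicError => 0,
        ErrorV1::StateError => 1,
        ErrorV1::SignatureError => 2,
    }
}

// A node built from source where the first two variants were swapped
// assigns the same ordinals to different meanings:
#[derive(Debug, PartialEq)]
enum ErrorV2 {
    StateError,     // now decodes from ordinal 0
    BasicError,     // now decodes from ordinal 1
    SignatureError,
}

fn decode_v2(byte: u8) -> Option<ErrorV2> {
    match byte {
        0 => Some(ErrorV2::StateError),
        1 => Some(ErrorV2::BasicError),
        2 => Some(ErrorV2::SignatureError),
        _ => None,
    }
}
```

A node running the original code emits ordinal 0 for a basic error; a node with the reordered enum decodes that same byte as a state error. That is exactly the consensus split the warning comment guards against.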

The derive stack

Every consensus error type carries the same set of derives:

#![allow(unused)]
fn main() {
#[derive(
    Error,              // thiserror -- provides Display and Error trait
    Debug,              // standard Rust debug formatting
    Clone,              // errors need to be cloneable
    PartialEq,          // errors need to be comparable
    Encode,             // bincode encoding (field-level)
    Decode,             // bincode decoding (field-level)
    PlatformSerialize,  // platform wrapper around bincode (size limits, etc.)
    PlatformDeserialize,// platform wrapper around bincode
)]
}

The thiserror::Error derive gives each variant a Display implementation. For enums, #[error(transparent)] delegates to the inner type's Display. For leaf structs, you write the message directly:

#![allow(unused)]
fn main() {
#[error("Document {document_id} is already present")]
pub struct DocumentAlreadyPresentError {
    document_id: Identifier,
}
}

The Encode and Decode derives come from bincode and handle the actual byte-level serialization of each field. The PlatformSerialize and PlatformDeserialize derives are the platform's own wrappers that add size limits and error type conversion on top of bincode. We will cover those in the serialization chapters.

The #[platform_serialize] attribute

On the top-level ConsensusError, you will notice:

#![allow(unused)]
fn main() {
#[platform_serialize(limit = 2000)]
}

This sets a maximum serialized size of 2000 bytes. If a consensus error somehow serializes to more than 2000 bytes, the platform will return a MaxEncodedBytesReachedError instead. This is a defense against oversized error payloads being transmitted across the network.

On individual leaf error structs, you will see a different attribute:

#![allow(unused)]
fn main() {
#[platform_serialize(unversioned)]
}

The unversioned flag means the struct does not need a PlatformVersion parameter for serialization. Most leaf error structs are simple enough that they do not change between protocol versions -- their serialization format is stable. The top-level ConsensusError enum does not use unversioned because it may need version-aware behavior as the protocol evolves.

How errors nest

The nesting is three levels deep:

  1. ConsensusError -- the top level, with four category variants
  2. Category enums (BasicError, StateError, SignatureError, FeeError) -- each holds dozens of specific error variants
  3. Leaf error structs -- the actual error data (identifiers, messages, amounts)

Each level has From implementations to make conversion ergonomic:

#![allow(unused)]
fn main() {
// Leaf -> Category
impl From<DocumentAlreadyPresentError> for ConsensusError {
    fn from(err: DocumentAlreadyPresentError) -> Self {
        Self::StateError(StateError::DocumentAlreadyPresentError(err))
    }
}

// Category -> Top-level
impl From<StateError> for ConsensusError {
    fn from(error: StateError) -> Self {
        Self::StateError(error)
    }
}
}

This means you can use ? to propagate any leaf error up to a ConsensusError:

#![allow(unused)]
fn main() {
fn validate_document(doc: &Document) -> Result<(), ConsensusError> {
    if document_exists(doc.id()) {
        return Err(DocumentAlreadyPresentError::new(doc.id()).into());
    }
    Ok(())
}
}

The .into() call walks the From chain: DocumentAlreadyPresentError -> ConsensusError::StateError(StateError::DocumentAlreadyPresentError(...)).

Anatomy of a leaf error

Every leaf error follows the same pattern. Let us look at the full DocumentAlreadyPresentError:

#![allow(unused)]
fn main() {
use crate::consensus::state::state_error::StateError;
use crate::consensus::ConsensusError;
use crate::errors::ProtocolError;
use bincode::{Decode, Encode};
use platform_serialization_derive::{PlatformDeserialize, PlatformSerialize};
use platform_value::Identifier;
use thiserror::Error;

#[derive(
    Error, Debug, Clone, PartialEq, Eq, Encode, Decode, PlatformSerialize, PlatformDeserialize,
)]
#[error("Document {document_id} is already present")]
#[platform_serialize(unversioned)]
pub struct DocumentAlreadyPresentError {
    /*

    DO NOT CHANGE ORDER OF FIELDS WITHOUT INTRODUCING OF NEW VERSION

    */
    document_id: Identifier,
}

impl DocumentAlreadyPresentError {
    pub fn new(document_id: Identifier) -> Self {
        Self { document_id }
    }

    pub fn document_id(&self) -> &Identifier {
        &self.document_id
    }
}

impl From<DocumentAlreadyPresentError> for ConsensusError {
    fn from(err: DocumentAlreadyPresentError) -> Self {
        Self::StateError(StateError::DocumentAlreadyPresentError(err))
    }
}
}

The pattern is:

  1. Fields are private with getter methods
  2. A new() constructor
  3. A From impl that chains through the category enum to ConsensusError
  4. The full derive stack including PlatformSerialize and PlatformDeserialize
  5. The #[platform_serialize(unversioned)] attribute
  6. The ordering warning comment

The test-only variant

You may have noticed this in ConsensusError:

#![allow(unused)]
fn main() {
#[cfg(test)]
#[cfg_attr(test, error(transparent))]
TestConsensusError(TestConsensusError),
}

This variant only exists in test builds. It allows tests to create synthetic consensus errors without depending on real validation logic. The #[cfg(test)] attribute ensures it is completely stripped from production builds and does not affect the binary encoding of the other variants.

Rules

Do:

  • Always add new variants at the end of the enum
  • Always add new fields at the end of the struct
  • Always include the ordering warning comment in new error types
  • Always implement From<YourError> for ConsensusError through the appropriate category
  • Always derive the full stack: Error, Debug, Clone, PartialEq, Eq, Encode, Decode, PlatformSerialize, PlatformDeserialize
  • Use #[platform_serialize(unversioned)] on leaf error structs
  • Keep fields private with getter methods

Do not:

  • Reorder existing variants or fields -- this breaks network serialization
  • Remove existing variants -- old nodes might still produce them
  • Change the #[error(...)] message without considering client-side parsing
  • Add large fields to error structs -- the 2000-byte limit on ConsensusError applies to the entire serialized tree
  • Use #[platform_serialize(limit = ...)] on leaf structs -- the limit is enforced at the ConsensusError level

Error Codes

In the previous chapter we saw how ConsensusError organizes errors into a tree of enums and structs. But when a client -- whether a JavaScript SDK, a mobile app, or a third-party tool -- receives a validation failure from the platform, it does not receive a Rust enum. It receives bytes on the wire. To make those bytes useful, every consensus error maps to a stable numeric code through the ErrorWithCode trait.

These codes are the public API of the error system. They appear in gRPC responses, they are documented for client developers, and they must never change once assigned. A client that handles error code 40100 ("document already present") today must still be able to handle that same code a year from now, even after dozens of protocol upgrades.
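Because the category is encoded in the numeric range itself, a client can classify an error before it knows anything about the specific variant. A hedged sketch of client-side range matching -- the error_category helper is invented for illustration; the range boundaries are the ones documented in this chapter:

```rust
/// Illustrative client-side helper (not part of the platform API):
/// recover the error category from the documented code ranges.
fn error_category(code: u32) -> &'static str {
    match code {
        10000..=10899 => "basic",
        20000..=20999 => "signature",
        30000..=30999 => "fee",
        40000..=40899 => "state",
        _ => "unknown",
    }
}

fn main() {
    assert_eq!(error_category(40100), "state"); // DocumentAlreadyPresentError
    assert_eq!(error_category(10600), "basic"); // InvalidStateTransitionTypeError
}
```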

The ErrorWithCode trait

The trait lives in packages/rs-dpp/src/errors/consensus/codes.rs and is as simple as it gets:

#![allow(unused)]
fn main() {
pub trait ErrorWithCode {
    /// Returns the error code
    fn code(&self) -> u32;
}
}

One method, one number, no room for ambiguity. The trait is implemented for ConsensusError and for each of the four category enums.

How ConsensusError delegates

The top-level implementation just forwards to the appropriate category:

#![allow(unused)]
fn main() {
impl ErrorWithCode for ConsensusError {
    fn code(&self) -> u32 {
        match self {
            Self::BasicError(e) => e.code(),
            Self::SignatureError(e) => e.code(),
            Self::StateError(e) => e.code(),
            Self::FeeError(e) => e.code(),

            #[cfg(test)]
            ConsensusError::TestConsensusError(_) => 1000,
            ConsensusError::DefaultError => 1, // this should never happen
        }
    }
}
}

The DefaultError variant returns code 1 -- a sentinel value that should never appear in practice. The TestConsensusError returns 1000, safely outside any production range.

The code ranges

Error codes are organized into ranges that correspond to error categories and subcategories. Here is the complete map as it stands in the codebase:

BasicError codes (10000-10899)

Range         Category           Examples
10000-10099   Versioning         UnsupportedVersionError (10000), ProtocolVersionParsingError (10001), IncompatibleProtocolVersionError (10004)
10100-10199   Structure          JsonSchemaCompilationError (10100), InvalidIdentifierError (10102), ValueError (10103)
10200-10275   Data Contract      DataContractMaxDepthExceedError (10200), DuplicateIndexError (10201), InvalidDataContractIdError (10204)
10350-10359   Groups             GroupPositionDoesNotExistError (10350), GroupExceedsMaxMembersError (10354)
10400-10418   Documents          DataContractNotPresentError (10400), DuplicateDocumentTransitionsWithIdsError (10401)
10450-10460   Tokens             InvalidTokenIdError (10450), TokenTransferToOurselfError (10456)
10500-10533   Identity           DuplicatedIdentityPublicKeyBasicError (10500), InvalidIdentityPublicKeyDataError (10511)
10600-10603   State Transition   InvalidStateTransitionTypeError (10600), StateTransitionMaxSizeExceededError (10602)
10700-10700   General            OverflowError (10700)
10800-10818   Address            TransitionOverMaxInputsError (10800), WithdrawalBelowMinAmountError (10818)

SignatureError codes (20000-20012)

#![allow(unused)]
fn main() {
impl ErrorWithCode for SignatureError {
    fn code(&self) -> u32 {
        match self {
            Self::IdentityNotFoundError { .. } => 20000,
            Self::InvalidIdentityPublicKeyTypeError { .. } => 20001,
            Self::InvalidStateTransitionSignatureError { .. } => 20002,
            Self::MissingPublicKeyError { .. } => 20003,
            Self::InvalidSignaturePublicKeySecurityLevelError { .. } => 20004,
            Self::WrongPublicKeyPurposeError { .. } => 20005,
            Self::PublicKeyIsDisabledError { .. } => 20006,
            Self::PublicKeySecurityLevelNotMetError { .. } => 20007,
            Self::SignatureShouldNotBePresentError(_) => 20008,
            Self::BasicECDSAError(_) => 20009,
            Self::BasicBLSError(_) => 20010,
            Self::InvalidSignaturePublicKeyPurposeError(_) => 20011,
            Self::UncompressedPublicKeyNotAllowedError(_) => 20012,
        }
    }
}
}

Signature errors occupy the 20000 range. There are only 13 of them -- signature validation is largely pass/fail (a signature is either valid or it is not), so there are fewer failure modes to distinguish.

FeeError codes (30000)

#![allow(unused)]
fn main() {
impl ErrorWithCode for FeeError {
    fn code(&self) -> u32 {
        match self {
            Self::BalanceIsNotEnoughError { .. } => 30000,
        }
    }
}
}

The fee category currently has a single code. The 30000 range is reserved for future fee-related errors.

StateError codes (40000-40899)

Range         Category             Examples
40000-40009   Data Contract        DataContractAlreadyPresentError (40000), DataContractIsReadonlyError (40001), DataContractNotFoundError (40008)
40100-40117   Documents            DocumentAlreadyPresentError (40100), DocumentNotFoundError (40101), DuplicateUniqueIndexError (40105)
40200-40217   Identity             IdentityAlreadyExistsError (40200), InvalidIdentityRevisionError (40203), IdentityInsufficientBalanceError (40210)
40300-40306   Voting               MasternodeNotFoundError (40300), MasternodeVoteAlreadyPresentError (40304)
40400-40401   Prefunded Balances   PrefundedSpecializedBalanceInsufficientError (40400)
40500-40502   Data Triggers        DataTriggerConditionError (40500), DataTriggerExecutionError (40501)
40600-40603   Addresses            AddressDoesNotExistError (40600), AddressNotEnoughFundsError (40601)
40700-40721   Tokens               IdentityDoesNotHaveEnoughTokenBalanceError (40700), UnauthorizedTokenActionError (40701)
40800-40804   Groups               IdentityNotMemberOfGroupError (40800), GroupActionAlreadyCompletedError (40802)

Notice how the DataTriggerError sub-enum has its own ErrorWithCode implementation that the StateError delegates to:

#![allow(unused)]
fn main() {
impl ErrorWithCode for StateError {
    fn code(&self) -> u32 {
        match self {
            // ...
            // Data trigger errors: 40500-40699
            Self::DataTriggerError(ref e) => e.code(),
            // ...
        }
    }
}

impl ErrorWithCode for DataTriggerError {
    fn code(&self) -> u32 {
        match self {
            Self::DataTriggerConditionError { .. } => 40500,
            Self::DataTriggerExecutionError { .. } => 40501,
            Self::DataTriggerInvalidResultError { .. } => 40502,
        }
    }
}
}

This is the only case where StateError delegates to a sub-enum rather than mapping directly. All other state error variants map to their codes inline.

The pattern: one variant, one code, one match arm

Look at how codes are assigned inside BasicError:

#![allow(unused)]
fn main() {
impl ErrorWithCode for BasicError {
    fn code(&self) -> u32 {
        match self {
            // Versioning Errors: 10000-10099
            Self::UnsupportedVersionError(_) => 10000,
            Self::ProtocolVersionParsingError { .. } => 10001,
            Self::SerializedObjectParsingError { .. } => 10002,
            Self::UnsupportedProtocolVersionError(_) => 10003,
            Self::IncompatibleProtocolVersionError(_) => 10004,
            Self::VersionError(_) => 10005,
            Self::UnsupportedFeatureError(_) => 10006,

            // Structure Errors: 10100-10199
            Self::JsonSchemaCompilationError(..) => 10100,
            Self::JsonSchemaError(_) => 10101,
            // ...
        }
    }
}
}

Every single variant gets its own match arm and its own code. There is no default case, no wildcard, no range-based mapping. This is deliberate -- if you add a new variant to the enum and forget to add a code for it, the Rust compiler will refuse to compile because the match is non-exhaustive. The type system enforces completeness.
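A stripped-down sketch shows why the compiler catch works. The two-variant enum below is hypothetical, not the real BasicError: delete either match arm, or add a third variant without one, and rustc rejects the code with a non-exhaustive-match error (E0004).

```rust
// Hypothetical two-variant enum standing in for the real BasicError.
enum BasicError {
    UnsupportedVersionError,
    ProtocolVersionParsingError,
}

trait ErrorWithCode {
    fn code(&self) -> u32;
}

impl ErrorWithCode for BasicError {
    fn code(&self) -> u32 {
        // No wildcard arm: if a new variant is added to the enum above
        // without a line here, this match stops being exhaustive and
        // the program no longer compiles.
        match self {
            BasicError::UnsupportedVersionError => 10000,
            BasicError::ProtocolVersionParsingError => 10001,
        }
    }
}

fn main() {
    assert_eq!(BasicError::UnsupportedVersionError.code(), 10000);
    assert_eq!(BasicError::ProtocolVersionParsingError.code(), 10001);
}
```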

Nested ContractError patterns

One interesting detail is how BasicError handles DataContractError variants. The ContractError wrapper contains a nested enum, so the code mapping matches through two levels:

#![allow(unused)]
fn main() {
Self::ContractError(DataContractError::DocumentTypesAreMissingError { .. }) => 10214,
Self::ContractError(DataContractError::DecodingContractError { .. }) => 10222,
Self::ContractError(DataContractError::DecodingDocumentError { .. }) => 10223,
Self::ContractError(DataContractError::InvalidDocumentTypeError { .. }) => 10224,
Self::ContractError(DataContractError::MissingRequiredKey(_)) => 10225,
Self::ContractError(DataContractError::FieldRequirementUnmet(_)) => 10226,
// ... many more
}

Each specific DataContractError variant gets its own distinct code. This ensures that clients can distinguish between "the contract is missing a required key" (10225) and "the contract has a wrong type for a value" (10228) even though both come wrapped in BasicError::ContractError.

Adding a new error code

Here is the procedure for adding a new consensus error:

  1. Choose the category. Is it a structural validation issue (BasicError), a state conflict (StateError), a signature problem (SignatureError), or a fee issue (FeeError)?

  2. Choose the subcategory range. Look at the existing ranges within that category. If your error is about tokens in StateError, you would use the 40700-40799 range.

  3. Pick the next available code. Within the range, find the highest existing code and add 1. Never reuse a code, even if the original error was removed.

  4. Add the variant to the appropriate enum (at the end -- remember the ordering rule).

  5. Add the match arm to the ErrorWithCode implementation.

  6. Document the code so client developers know what it means.

Here is what the diff would look like for adding a hypothetical new token state error:

#![allow(unused)]
fn main() {
// In state_error.rs -- add at the end of the enum:
#[error(transparent)]
TokenNewError(TokenNewError),

// In codes.rs -- add to the StateError match:
Self::TokenNewError(_) => 40722,  // next available in the 40700 range
}

Why codes can never change

Consider what happens if code 40100 is changed from DocumentAlreadyPresentError to something else:

  • A client SDK has a handler for code 40100 that displays "This document already exists" to the user.
  • After the change, the platform sends 40100 for a completely different error.
  • The user sees "This document already exists" when actually their token balance is insufficient.
  • Trust in the platform erodes.

Error codes are part of the platform's wire protocol. They are as immutable as protobuf field numbers or HTTP status codes. Once assigned, they live forever.

Rules

Do:

  • Assign new codes at the end of the appropriate range
  • Use the compiler's exhaustive match checking to ensure every variant has a code
  • Keep codes within their designated range (do not put a document error in the identity range)
  • Leave gaps between ranges for future expansion (this is already done)

Do not:

  • Change an existing error code -- ever
  • Reuse a code from a removed error variant
  • Add a wildcard (_) match arm to ErrorWithCode implementations
  • Assign codes outside the established ranges without allocating a new range first
  • Skip codes within a range (assign sequentially within each subcategory)

Drive Errors

The previous two chapters covered consensus errors -- the carefully serialized, code-stable errors that get sent across the network. Drive errors are a different beast entirely. They are internal errors that arise from the storage layer, the database, and the logic that sits between state transitions and GroveDB. They never leave the node. They are not serialized. And they do not need stable numeric codes.

But they do need to be well-organized, because Drive is where most of the platform's complexity lives. When something goes wrong in Drive, you need to know immediately whether it is a corrupted database, a protocol-level validation failure, a fee calculation error, or a bug in your own code.

The Drive Error enum

The top-level error type lives in packages/rs-drive/src/error/mod.rs:

#![allow(unused)]
fn main() {
/// Errors
#[derive(Debug, thiserror::Error)]
pub enum Error {
    /// Query error
    #[error("query: {0}")]
    Query(#[from] QuerySyntaxError),
    /// Storage Flags error
    #[error("storage flags: {0}")]
    StorageFlags(#[from] StorageFlagsError),
    /// Drive error
    #[error("drive: {0}")]
    Drive(#[from] DriveError),
    /// Proof error
    #[error("proof: {0}")]
    Proof(#[from] ProofError),
    /// GroveDB error
    #[error("grovedb: {0}")]
    GroveDB(Box<grovedb::Error>),
    /// Protocol error
    #[error("protocol: {0}")]
    Protocol(Box<ProtocolError>),
    /// Identity error
    #[error("identity: {0}")]
    Identity(#[from] IdentityError),
    /// Fee error
    #[error("fee: {0}")]
    Fee(#[from] FeeError),
    /// Document error
    #[error("document: {0}")]
    Document(#[from] DocumentError),
    /// Value error
    #[error("value: {0}")]
    Value(#[from] ValueError),
    /// DataContract error
    #[error("contract: {0}")]
    DataContract(#[from] DataContractError),
    /// Cache error
    #[error("contract: {0}")]
    Cache(#[from] CacheError),
    /// Protocol error with info string
    #[error("protocol: {0} ({1})")]
    ProtocolWithInfoString(Box<ProtocolError>, String),
    /// IO error with info string
    #[error("io: {0} ({1})")]
    IOErrorWithInfoString(Box<io::Error>, String),
}
}

This enum is a classic Rust error aggregator. It collects errors from every subsystem that Drive interacts with -- the query parser, the storage flags system, GroveDB, the protocol layer, identities, fees, documents, data contracts, and the cache. Each variant wraps a specific error type from that subsystem.

Notice the organizational difference from ConsensusError:

  • ConsensusError is organized by validation phase (basic, state, signature, fee)
  • Drive's Error is organized by subsystem (query, storage, grovedb, protocol, identity, etc.)

This makes sense. Consensus errors are about "what rule was violated." Drive errors are about "what component failed."

Box<ProtocolError> -- avoiding large enum variants

Two variants wrap their inner error in a Box:

#![allow(unused)]
fn main() {
/// GroveDB error
#[error("grovedb: {0}")]
GroveDB(Box<grovedb::Error>),

/// Protocol error
#[error("protocol: {0}")]
Protocol(Box<ProtocolError>),
}

Why? Because ProtocolError and grovedb::Error are large types. Without the Box, the entire Error enum would be as large as its largest variant, which could be hundreds of bytes. Since most Drive operations return Result<T, Error>, you would be paying that size cost on every Ok path too -- the Result itself is as large as the larger of T and Error.

Boxing the large variants means the Error enum stores only a pointer (8 bytes on 64-bit targets) for those cases, keeping the overall size reasonable. This is a common Rust pattern, and Clippy's clippy::large_enum_variant lint flags enums that need it.
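The size effect can be observed directly with std::mem::size_of. A self-contained sketch, with a 300-byte struct standing in for a large type like ProtocolError:

```rust
use std::mem::size_of;

// Stand-in for a large error type such as ProtocolError.
struct LargePayload {
    _data: [u8; 300],
}

// Without boxing, every variant pays for the largest one.
enum Unboxed {
    Small(u8),
    Large(LargePayload),
}

// With boxing, the large variant stores only a pointer inline.
enum Boxed {
    Small(u8),
    Large(Box<LargePayload>),
}

fn main() {
    // The unboxed enum is at least as large as its largest variant...
    assert!(size_of::<Unboxed>() >= 300);
    // ...while the boxed enum is pointer-sized plus a discriminant.
    assert!(size_of::<Boxed>() <= 16);
    // Result<(), E> pays the same cost on the Ok path too.
    assert!(size_of::<Result<(), Unboxed>>() >= 300);
}
```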

The From trait chain

The #[from] attribute on most variants is a thiserror feature that automatically generates From implementations. For example, #[from] DriveError generates:

#![allow(unused)]
fn main() {
impl From<DriveError> for Error {
    fn from(value: DriveError) -> Self {
        Self::Drive(value)
    }
}
}

But look at the manually written From implementations at the bottom of the file:

#![allow(unused)]
fn main() {
impl From<ProtocolError> for Error {
    fn from(value: ProtocolError) -> Self {
        Self::Protocol(Box::new(value))
    }
}

impl From<grovedb::Error> for Error {
    fn from(value: grovedb::Error) -> Self {
        Self::GroveDB(Box::new(value))
    }
}

impl From<grovedb::element::error::ElementError> for Error {
    fn from(value: grovedb::element::error::ElementError) -> Self {
        Self::GroveDB(Box::new(grovedb::Error::ElementError(value)))
    }
}

impl From<ProtocolDataContractError> for Error {
    fn from(value: ProtocolDataContractError) -> Self {
        Self::Protocol(Box::new(ProtocolError::DataContractError(value)))
    }
}
}

These cannot use #[from] because they involve boxing or wrapping through an intermediate type. The ProtocolError conversion boxes the value. The grovedb::element::error::ElementError conversion wraps the error inside grovedb::Error::ElementError first, then boxes it. The ProtocolDataContractError conversion wraps through ProtocolError::DataContractError and then boxes.

These manual From implementations create multi-hop error conversion chains. When a function deep in Drive's GroveDB interaction layer returns a grovedb::element::error::ElementError, the ? operator can propagate it all the way up to a drive::Error in a single step, with the conversion chain handling the wrapping automatically.
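A minimal sketch of the multi-hop conversion, with simplified stand-in types replacing grovedb's and Drive's real errors:

```rust
// Stand-ins for grovedb's nested error types.
#[derive(Debug)]
struct ElementError(&'static str);

#[derive(Debug)]
enum GroveError {
    ElementError(ElementError),
}

// Stand-in for drive::Error.
#[derive(Debug)]
enum Error {
    GroveDB(Box<GroveError>),
}

// Manual impl: wrap through the intermediate enum, then box.
impl From<ElementError> for Error {
    fn from(value: ElementError) -> Self {
        Error::GroveDB(Box::new(GroveError::ElementError(value)))
    }
}

// Deep in the storage layer an ElementError surfaces...
fn fetch_element(missing: bool) -> Result<(), ElementError> {
    if missing {
        Err(ElementError("element not found"))
    } else {
        Ok(())
    }
}

// ...and `?` lifts it to the unified Error in a single step.
fn drive_operation(missing: bool) -> Result<(), Error> {
    fetch_element(missing)?; // ElementError -> Error::GroveDB(Box<...>)
    Ok(())
}

fn main() {
    assert!(matches!(drive_operation(true), Err(Error::GroveDB(_))));
    assert!(drive_operation(false).is_ok());
}
```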

The DriveError enum -- internal errors

The DriveError enum in packages/rs-drive/src/error/drive.rs is where Drive reports its own internal problems:

#![allow(unused)]
fn main() {
/// Drive errors
#[derive(Debug, thiserror::Error)]
pub enum DriveError {
    /// This error should never occur, it is the equivalent of a panic.
    #[error("corrupted code execution error: {0}")]
    CorruptedCodeExecution(&'static str),

    /// Platform expected some specific versions
    #[error("drive unknown version on {method}, received: {received}")]
    UnknownVersionMismatch {
        method: String,
        known_versions: Vec<FeatureVersion>,
        received: FeatureVersion,
    },

    /// A critical corrupted state should stall the chain.
    #[error("critical corrupted state error: {0}")]
    CriticalCorruptedState(&'static str),

    /// Error
    #[error("not supported error: {0}")]
    NotSupported(&'static str),

    // ... many more variants
}
}

Notice how DriveError uses &'static str for most of its error messages rather than String. This is deliberate -- these are internal error messages that should be known at compile time. A CorruptedCodeExecution("document tree missing expected root") is a fixed message that describes a specific bug or data corruption scenario. Using &'static str makes it clear that these are not user-facing messages and avoids allocations on error paths.

There are a few variants that do use String -- these are cases where the error message needs to include runtime data:

#![allow(unused)]
fn main() {
#[error("corrupted contract indexes error: {0}")]
CorruptedContractIndexes(String),

#[error("corrupted drive state error: {0}")]
CorruptedDriveState(String),
}

The severity spectrum

DriveError variants implicitly encode severity through naming conventions:

  • Corrupted* variants indicate data corruption. The database is in an unexpected state. These are serious problems that may indicate bugs or hardware failures:

    #![allow(unused)]
    fn main() {
    CorruptedCodeExecution(&'static str),
    CriticalCorruptedState(&'static str),
    CorruptedContractPath(&'static str),
    CorruptedDocumentPath(&'static str),
    CorruptedBalancePath(&'static str),
    CorruptedSerialization(String),
    CorruptedElementType(&'static str),
    CorruptedDriveState(String),
    }
  • Invalid* and NotSupported variants indicate logic errors -- the code is trying to do something that should not be possible:

    #![allow(unused)]
    fn main() {
    InvalidDeletionOfDocumentThatKeepsHistory(&'static str),
    InvalidContractHistoryFetchLimit(u16),
    NotSupported(&'static str),
    }
  • *NotFound and *DoesNotExist variants indicate missing data that was expected:

    #![allow(unused)]
    fn main() {
    DataContractNotFound(String),
    ElementNotFound(&'static str),
    PrefundedSpecializedBalanceDoesNotExist(String),
    }
  • Version mismatch variants indicate protocol version problems:

    #![allow(unused)]
    fn main() {
    UnknownVersionMismatch { method, known_versions, received },
    VersionNotActive { method, known_versions },
    }

When to use consensus errors vs drive errors

This is the key design decision you face when adding error handling to Drive code:

Use a consensus error when:

  • The error is caused by invalid user input (a malformed state transition)
  • The error needs to be communicated back to the client
  • The error needs a stable numeric code
  • Other nodes must produce the same error for the same input

Use a drive error when:

  • The error is caused by internal state (database corruption, missing paths)
  • The error indicates a bug in the platform code
  • The error is about version mismatches or unsupported features
  • The error involves the storage layer (GroveDB problems)

In practice, Drive functions often work with both. A typical pattern is a function that validates input (producing consensus errors) and then performs storage operations (which might produce drive errors). The return type is usually Result<T, Error> where Error is the Drive error; consensus errors travel inside ProtocolError, which is itself a variant of the Drive error.

The ? propagation pattern

Here is how error propagation typically works in Drive code:

#![allow(unused)]
fn main() {
fn apply_document_create(
    &self,
    document: &Document,
    contract: &DataContract,
    // ...
) -> Result<(), Error> {
    // This might return a grovedb::Error, which auto-converts via From
    let existing = self.grove_get(path, key, transaction)?;

    // This might return a DriveError
    if existing.is_some() {
        return Err(DriveError::CorruptedDocumentAlreadyExists(
            "document should not exist at this point"
        ).into());
    }

    // This might return a ProtocolError, which auto-converts via From + Box
    let serialized = document.serialize(platform_version)?;

    // This might return a grovedb::Error
    self.grove_insert(path, key, element, transaction)?;

    Ok(())
}
}

The ? operator handles all the conversions transparently. A grovedb::Error becomes Error::GroveDB(Box::new(...)). A DriveError becomes Error::Drive(...). A ProtocolError becomes Error::Protocol(Box::new(...)). The developer does not need to think about which From implementation is being invoked -- the type system figures it out.

Sub-module error types

Drive organizes its errors into submodules, each with its own error enum:

#![allow(unused)]
fn main() {
pub mod cache;      // CacheError
pub mod contract;   // DataContractError (Drive's own, distinct from DPP's)
pub mod document;   // DocumentError
pub mod drive;      // DriveError
pub mod fee;        // FeeError (Drive's own, distinct from consensus FeeError)
pub mod identity;   // IdentityError
pub mod proof;      // ProofError
pub mod query;      // QuerySyntaxError
}

Each of these has a #[from] conversion to the top-level Error, creating a clean hierarchy where subsystem-specific code can work with its own error type and callers can propagate through ? to the unified type.

Rules

Do:

  • Use Box for large error types (ProtocolError, grovedb::Error) to keep the enum small
  • Use &'static str for internal error messages that are known at compile time
  • Use String only when the message needs runtime data
  • Let #[from] generate From implementations where possible
  • Write manual From implementations when boxing or intermediate wrapping is needed
  • Follow the naming conventions: Corrupted* for data corruption, Invalid* for logic errors, *NotFound for missing data

Do not:

  • Use consensus errors for internal Drive problems
  • Use drive errors for user-facing validation failures
  • Forget to add a From implementation when introducing a new error type
  • Return a String error message when a typed error variant would be more informative
  • Panic in Drive code -- return a CorruptedCodeExecution or CriticalCorruptedState error instead

Platform Serialization

Dash Platform needs to serialize data structures -- state transitions, consensus errors, documents, identities -- into bytes and back. You might ask: why not just use bincode directly? Or serde? Or protobuf? The answer is that Platform has requirements that no off-the-shelf serialization library handles on its own:

  1. Version awareness. The serialization of a type may differ between protocol versions. A DataContract serialized under protocol version 3 might have different fields than under version 4. The serialization system needs to accept a PlatformVersion parameter and dispatch accordingly.

  2. Size limits. A malicious actor should not be able to send a 100 MB state transition that costs the node minutes to decode. Serialization must enforce configurable byte limits.

  3. Determinism. Every node must produce identical bytes for identical data. This rules out serialization formats that allow field reordering (like JSON) or that depend on hash map iteration order.

  4. Compatibility with bincode. Platform already uses bincode extensively for its compact binary format and deterministic encoding. The custom layer should wrap bincode, not replace it.

The rs-platform-serialization crate provides this layer. It sits between bincode and the rest of the platform, adding version-aware encoding/decoding while delegating the actual byte-level work to bincode.

The core traits

The crate defines its core traits in packages/rs-platform-serialization/src/enc/mod.rs and packages/rs-platform-serialization/src/de/mod.rs: one trait for encoding and two for decoding.

Encoding: PlatformVersionEncode

#![allow(unused)]
fn main() {
pub trait PlatformVersionEncode {
    /// Encode a given type.
    fn platform_encode<E: Encoder>(
        &self,
        encoder: &mut E,
        platform_version: &PlatformVersion,
    ) -> Result<(), EncodeError>;
}
}

Compare this to bincode's standard Encode trait:

#![allow(unused)]
fn main() {
// bincode's Encode (for reference)
pub trait Encode {
    fn encode<E: Encoder>(&self, encoder: &mut E) -> Result<(), EncodeError>;
}
}

The only difference is the platform_version parameter. For types whose serialization does not change between versions, the implementation simply ignores it and delegates to bincode's Encode:

#![allow(unused)]
fn main() {
impl PlatformVersionEncode for String {
    fn platform_encode<E: Encoder>(
        &self,
        encoder: &mut E,
        _: &PlatformVersion,  // ignored -- String encoding never changes
    ) -> Result<(), EncodeError> {
        Encode::encode(self, encoder)
    }
}
}

For types that do change between versions, the implementation can dispatch to different encoding logic based on the version.
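What that dispatch can look like is easy to sketch. Everything below is a simplified stand-in -- the real trait writes through a bincode Encoder, and the rule that "version 4 added an owner field" is invented purely for illustration:

```rust
// Simplified stand-ins for the real types: the real trait writes through
// a bincode Encoder, and PlatformVersion carries per-subsystem tables.
struct PlatformVersion {
    protocol_version: u32,
}

trait PlatformVersionEncode {
    fn platform_encode(&self, out: &mut Vec<u8>, platform_version: &PlatformVersion);
}

struct DataContract {
    id: u32,
    owner: u32,
}

impl PlatformVersionEncode for DataContract {
    fn platform_encode(&self, out: &mut Vec<u8>, platform_version: &PlatformVersion) {
        match platform_version.protocol_version {
            // Invented rule for illustration: early versions encoded
            // only the id...
            0..=3 => out.extend_from_slice(&self.id.to_be_bytes()),
            // ...and later versions also encode the owner.
            _ => {
                out.extend_from_slice(&self.id.to_be_bytes());
                out.extend_from_slice(&self.owner.to_be_bytes());
            }
        }
    }
}

fn encoded_len(protocol_version: u32) -> usize {
    let mut out = Vec::new();
    DataContract { id: 7, owner: 9 }
        .platform_encode(&mut out, &PlatformVersion { protocol_version });
    out.len()
}

fn main() {
    assert_eq!(encoded_len(3), 4); // id only
    assert_eq!(encoded_len(4), 8); // id + owner
}
```

The call sites never branch on the version themselves; they just thread the PlatformVersion through, and the type's own implementation decides.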

Decoding: PlatformVersionedDecode

#![allow(unused)]
fn main() {
pub trait PlatformVersionedDecode: Sized {
    fn platform_versioned_decode<D: Decoder<Context = crate::BincodeContext>>(
        decoder: &mut D,
        platform_version: &PlatformVersion,
    ) -> Result<Self, DecodeError>;
}
}

Again, this mirrors bincode's Decode but with the version parameter. There is also a borrowed variant for zero-copy decoding:

#![allow(unused)]
fn main() {
pub trait PlatformVersionedBorrowDecode<'de>: Sized {
    fn platform_versioned_borrow_decode<
        D: BorrowDecoder<'de, Context = crate::BincodeContext>,
    >(
        decoder: &mut D,
        platform_version: &PlatformVersion,
    ) -> Result<Self, DecodeError>;
}
}

And a convenience macro to implement PlatformVersionedBorrowDecode for any type that implements PlatformVersionedDecode:

#![allow(unused)]
fn main() {
#[macro_export]
macro_rules! impl_platform_versioned_borrow_decode {
    ($ty:ty) => {
        impl<'de> $crate::PlatformVersionedBorrowDecode<'de> for $ty {
            fn platform_versioned_borrow_decode<
                D: bincode::de::BorrowDecoder<'de, Context = $crate::BincodeContext>,
            >(
                decoder: &mut D,
                platform_version: &PlatformVersion,
            ) -> core::result::Result<Self, bincode::error::DecodeError> {
                <$ty as $crate::PlatformVersionedDecode>::platform_versioned_decode(
                    decoder,
                    platform_version,
                )
            }
        }
    };
}
}

The convenience functions

The crate provides top-level functions that mirror bincode's API but add version support. The most commonly used one is platform_encode_to_vec in packages/rs-platform-serialization/src/features/impl_alloc.rs:

#![allow(unused)]
fn main() {
pub fn platform_encode_to_vec<E: PlatformVersionEncode, C: Config>(
    val: E,
    config: C,
    platform_version: &PlatformVersion,
) -> Result<Vec<u8>, EncodeError> {
    let size = {
        let mut size_writer = enc::EncoderImpl::<_, C>::new(SizeWriter::default(), config);
        val.platform_encode(&mut size_writer, platform_version)?;
        size_writer.into_writer().bytes_written
    };
    let writer = VecWriter::with_capacity(size);
    let mut encoder = enc::EncoderImpl::<_, C>::new(writer, config);
    val.platform_encode(&mut encoder, platform_version)?;
    Ok(encoder.into_writer().inner)
}
}

This function does something clever: it encodes twice. The first pass uses a SizeWriter that counts bytes without allocating. The second pass uses a VecWriter pre-allocated to exactly the right size. This avoids reallocations during encoding.
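The two-pass scheme is straightforward to sketch with stand-in writers. SizeCounter below plays the role of bincode's SizeWriter, and the Writer trait is a simplification of bincode's:

```rust
// Stand-in writer abstraction; bincode's Writer trait is similar in spirit.
trait Writer {
    fn write(&mut self, bytes: &[u8]);
}

// Pass 1: count bytes without allocating anything.
#[derive(Default)]
struct SizeCounter {
    bytes_written: usize,
}

impl Writer for SizeCounter {
    fn write(&mut self, bytes: &[u8]) {
        self.bytes_written += bytes.len();
    }
}

// Pass 2: write into a Vec pre-allocated to the measured size.
struct VecWriter {
    inner: Vec<u8>,
}

impl Writer for VecWriter {
    fn write(&mut self, bytes: &[u8]) {
        self.inner.extend_from_slice(bytes);
    }
}

// The encoding logic that both passes share.
fn encode_payload<W: Writer>(payload: &[u32], w: &mut W) {
    for value in payload {
        w.write(&value.to_be_bytes());
    }
}

fn encode_to_vec(payload: &[u32]) -> Vec<u8> {
    // First pass: measure the exact output size.
    let mut counter = SizeCounter::default();
    encode_payload(payload, &mut counter);
    // Second pass: encode into a buffer with exactly that capacity,
    // so no reallocation happens while writing.
    let mut writer = VecWriter {
        inner: Vec::with_capacity(counter.bytes_written),
    };
    encode_payload(payload, &mut writer);
    writer.inner
}

fn main() {
    let bytes = encode_to_vec(&[1, 2, 3]);
    assert_eq!(bytes, vec![0, 0, 0, 1, 0, 0, 0, 2, 0, 0, 0, 3]);
}
```

Encoding twice trades a little CPU for zero reallocations, a good deal when the first pass is cheap arithmetic over the same structure.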

For decoding:

#![allow(unused)]
fn main() {
pub fn platform_versioned_decode_from_slice<D: PlatformVersionedDecode, C: Config>(
    src: &[u8],
    config: C,
    platform_version: &PlatformVersion,
) -> Result<D, error::DecodeError> {
    let reader = read::SliceReader::new(src);
    let mut decoder = DecoderImpl::<_, C, crate::BincodeContext>::new(reader, config, ());
    D::platform_versioned_decode(&mut decoder, platform_version)
}
}

There are also platform_encode_into_slice for encoding into a pre-allocated buffer, encode_into_writer for encoding into arbitrary writers, and platform_versioned_decode_from_reader for decoding from arbitrary readers.

How it wraps bincode

The wrapping is lightweight. Platform serialization does not add a version prefix to the bytes by default, and it does not change the wire format. For a type whose encoding does not vary by version, platform_encode_to_vec produces bytes identical to what bincode::encode_to_vec would produce -- the difference is that the encoding logic is allowed to vary based on the PlatformVersion parameter.

The bincode configuration used throughout the platform is:

#![allow(unused)]
fn main() {
let config = bincode::config::standard()
    .with_big_endian()  // network byte order for determinism
    .with_no_limit();   // or .with_limit::<{ N }>() for size-limited encoding
}

Big endian is used for deterministic cross-platform encoding. The limit is applied at the PlatformSerialize / PlatformDeserialize level (see the derive macros chapter) rather than at the raw bincode level.
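The determinism argument is easy to see with std's integer conversion helpers: big-endian bytes are the same on every host, while native-endian bytes depend on the target:

```rust
fn main() {
    let value: u32 = 0x0102_0304;

    // Big endian ("network byte order"): identical bytes on every host,
    // which is what deterministic consensus encoding requires.
    assert_eq!(value.to_be_bytes(), [0x01, 0x02, 0x03, 0x04]);

    // Native byte order varies by target; on little-endian hardware
    // (e.g. x86-64) the same value serializes reversed.
    if cfg!(target_endian = "little") {
        assert_eq!(value.to_ne_bytes(), [0x04, 0x03, 0x02, 0x01]);
    }
}
```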

Standard type implementations

The crate provides PlatformVersionEncode and PlatformVersionedDecode implementations for all standard Rust types. These live in packages/rs-platform-serialization/src/features/impl_alloc.rs and the other impl files. Here are the key patterns:

Types that ignore the version

Primitive types, strings, and simple wrappers just delegate to bincode:

#![allow(unused)]
fn main() {
impl PlatformVersionedDecode for String {
    fn platform_versioned_decode<D: Decoder<Context = crate::BincodeContext>>(
        decoder: &mut D,
        _: &PlatformVersion,
    ) -> Result<Self, DecodeError> {
        bincode::Decode::decode(decoder)
    }
}
}

Collections that propagate the version

Collections like Vec, BTreeMap, and BTreeSet encode their length, then encode each element with the platform version:

#![allow(unused)]
fn main() {
impl<T> PlatformVersionEncode for Vec<T>
where
    T: PlatformVersionEncode + 'static,
{
    fn platform_encode<E: Encoder>(
        &self,
        encoder: &mut E,
        platform_version: &PlatformVersion,
    ) -> Result<(), EncodeError> {
        crate::enc::encode_slice_len(encoder, self.len())?;
        // Optimization: byte slices are written directly
        if core::any::TypeId::of::<T>() == core::any::TypeId::of::<u8>() {
            let slice: &[u8] = unsafe { core::mem::transmute(self.as_slice()) };
            encoder.writer().write(slice)?;
            return Ok(());
        }
        for item in self.iter() {
            item.platform_encode(encoder, platform_version)?;
        }
        Ok(())
    }
}
}

Notice the Vec<u8> optimization -- byte vectors are written in bulk rather than element-by-element, which is significantly faster for large binary data.
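The same specialization can be expressed without `unsafe`. This sketch re-creates the fast path using `Any::downcast_ref` instead of a transmute -- the real implementation uses the transmute shown above; the safe version here is only to illustrate the idea:

```rust
use std::any::{Any, TypeId};

// Detect T == u8 via TypeId and write the whole slice at once instead of
// encoding element-by-element. `encode_one` stands in for the per-element
// platform_encode call.
fn encode_vec<T: Any>(
    items: &Vec<T>,
    out: &mut Vec<u8>,
    encode_one: impl Fn(&T, &mut Vec<u8>),
) {
    if TypeId::of::<T>() == TypeId::of::<u8>() {
        // Bulk path: T is provably u8 here, so the downcast cannot fail.
        let bytes = (items as &dyn Any).downcast_ref::<Vec<u8>>().unwrap();
        out.extend_from_slice(bytes);
        return;
    }
    for item in items {
        encode_one(item, out);
    }
}
```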

Smart pointers

Box<T>, Rc<T>, Arc<T>, and Cow<T> all delegate to their inner type:

#![allow(unused)]
fn main() {
impl<T> PlatformVersionEncode for Box<T>
where
    T: PlatformVersionEncode + ?Sized,
{
    fn platform_encode<E: Encoder>(
        &self,
        encoder: &mut E,
        platform_version: &PlatformVersion,
    ) -> Result<(), EncodeError> {
        T::platform_encode(self, encoder, platform_version)
    }
}
}

Size limits and why they matter

Without size limits, a malicious state transition could contain a Vec claiming to have 2^64 elements, causing the node to allocate unbounded memory during decoding. Or a deeply nested document structure could produce a multi-megabyte serialized form that consumes excessive bandwidth.

Size limits are enforced at two levels:

  1. At the PlatformSerialize trait level -- the #[platform_serialize(limit = N)] attribute on a type causes the derive macro to use bincode::config::standard().with_big_endian().with_limit::<{ N }>(). If encoding exceeds N bytes, it returns a MaxEncodedBytesReachedError.

  2. At the bincode decoder level -- bincode's claim_container_read mechanism prevents allocating excessive memory for containers:

#![allow(unused)]
fn main() {
fn platform_versioned_decode<D: Decoder<Context = crate::BincodeContext>>(
    decoder: &mut D,
    platform_version: &PlatformVersion,
) -> Result<Self, DecodeError> {
    let len = crate::de::decode_slice_len(decoder)?;
    decoder.claim_container_read::<T>(len)?;  // checks memory budget

    let mut vec = Vec::with_capacity(len);
    for _ in 0..len {
        decoder.unclaim_bytes_read(core::mem::size_of::<T>());
        vec.push(T::platform_versioned_decode(decoder, platform_version)?);
    }
    Ok(vec)
}
}

The claim_container_read call tells the decoder "I am about to read len elements of type T" and the decoder checks whether this fits within the remaining byte budget. If not, it returns an error before any allocation happens.
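The budget logic can be sketched in a few lines (the names here are illustrative, not bincode's actual API). The key invariant: a well-formed container can never need more memory than there are input bytes left to read, so the check can happen before any allocation:

```rust
// Minimal sketch of a byte-budget guard for container decoding.
struct DecodeBudget {
    remaining_bytes: usize, // input bytes not yet consumed
}

impl DecodeBudget {
    fn claim_container<T>(&mut self, len: usize) -> Result<(), String> {
        let needed = len
            .checked_mul(std::mem::size_of::<T>())
            .ok_or_else(|| "length overflows usize".to_string())?;
        if needed > self.remaining_bytes {
            // Reject before Vec::with_capacity(len) ever runs.
            return Err(format!(
                "container claims {} bytes but only {} remain",
                needed, self.remaining_bytes
            ));
        }
        self.remaining_bytes -= needed;
        Ok(())
    }
}
```

A decoder holding `DecodeBudget { remaining_bytes: 64 }` will reject a `Vec<u64>` claiming a million elements immediately, which is the denial-of-service defense the rules below insist on.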

The BincodeContext type alias

You will see crate::BincodeContext throughout the code:

#![allow(unused)]
fn main() {
pub type BincodeContext = ();
}

This is the bincode context type. Bincode supports contextual decoding where a context value is threaded through the decoder -- useful for things like string interning or reference resolution. Platform does not use this feature, so the context is the unit type (). The alias exists to make it easy to change later if needed.

Rules

Do:

  • Use PlatformVersionEncode / PlatformVersionedDecode for types whose serialization may change between protocol versions
  • Use standard bincode Encode / Decode for types whose format is permanently fixed
  • Use big-endian configuration everywhere for deterministic encoding
  • Apply size limits on types that are received from untrusted sources (state transitions, consensus errors)
  • Use the platform_encode_to_vec convenience function rather than constructing encoders manually

Do not:

  • Use little-endian or native-endian configuration -- this breaks determinism across architectures
  • Forget to propagate the platform_version parameter through collection types
  • Skip claim_container_read in custom collection decoders -- this is a denial-of-service defense
  • Add a version prefix to the bytes manually -- the platform version is a parameter, not part of the wire format
  • Use serde for consensus-critical serialization -- serde's flexibility (field names, human-readable formats) is a liability when you need deterministic bytes

Derive Macros

In the previous chapter we looked at the PlatformVersionEncode and PlatformVersionedDecode traits and the platform_encode_to_vec / platform_versioned_decode_from_slice functions. But you rarely implement those traits by hand. Instead, you use three derive macros from the rs-platform-serialization-derive crate:

  • PlatformSerialize -- generates a high-level serialize_to_bytes() method
  • PlatformDeserialize -- generates a high-level deserialize_from_bytes() method
  • PlatformSignable -- generates a signable_bytes() method that excludes signature fields

These macros live in packages/rs-platform-serialization-derive/src/lib.rs and are the glue that connects Rust struct definitions to the platform serialization system.

PlatformSerialize and PlatformDeserialize

These two macros work as a pair. Let us start with how they are used on the ConsensusError type:

#![allow(unused)]
fn main() {
#[derive(
    thiserror::Error,
    Debug,
    Encode,
    Decode,
    PlatformSerialize,
    PlatformDeserialize,
    Clone,
    PartialEq,
)]
#[platform_serialize(limit = 2000)]
#[error(transparent)]
pub enum ConsensusError {
    // ...
}
}

And on a leaf error struct:

#![allow(unused)]
fn main() {
#[derive(
    Error, Debug, Clone, PartialEq, Eq, Encode, Decode, PlatformSerialize, PlatformDeserialize,
)]
#[error("Document {document_id} is already present")]
#[platform_serialize(unversioned)]
pub struct DocumentAlreadyPresentError {
    document_id: Identifier,
}
}

And on the StateTransition enum:

#![allow(unused)]
fn main() {
#[derive(
    Debug, Clone, Encode, Decode,
    PlatformSerialize, PlatformDeserialize, PlatformSignable,
    From, PartialEq,
)]
#[platform_serialize(unversioned)]
#[platform_serialize(limit = 100000)]
pub enum StateTransition {
    DataContractCreate(DataContractCreateTransition),
    DataContractUpdate(DataContractUpdateTransition),
    Batch(BatchTransition),
    // ...
}
}

Notice that StateTransition uses two #[platform_serialize] attributes -- one for unversioned and one for limit. These are combined internally.

What the derives generate

PlatformSerialize generates an implementation of one of two traits, depending on whether unversioned is set:

With unversioned -- implements PlatformSerializable:

#![allow(unused)]
fn main() {
// Generated code (simplified):
impl PlatformSerializable for DocumentAlreadyPresentError {
    type Error = ProtocolError;

    fn serialize_to_bytes(&self) -> Result<Vec<u8>, Self::Error> {
        let config = bincode::config::standard()
            .with_big_endian()
            .with_no_limit();
        bincode::encode_to_vec(self, config)
            .map_err(|e| {
                ProtocolError::PlatformSerializationError(
                    format!("unable to serialize DocumentAlreadyPresentError: {}", e)
                )
            })
    }

    fn serialize_consume_to_bytes(self) -> Result<Vec<u8>, Self::Error> {
        // same as above, taking self by value
    }
}
}

Without unversioned -- implements PlatformSerializableWithPlatformVersion:

#![allow(unused)]
fn main() {
// Generated code (simplified):
impl PlatformSerializableWithPlatformVersion for ConsensusError {
    type Error = ProtocolError;

    fn serialize_to_bytes_with_platform_version(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<u8>, ProtocolError> {
        let config = bincode::config::standard()
            .with_big_endian()
            .with_limit::<{ 2000 }>();
        platform_serialization::platform_encode_to_vec(self, config, platform_version)
            .map_err(|e| match e {
                bincode::error::EncodeError::Io { inner, index } =>
                    ProtocolError::MaxEncodedBytesReachedError {
                        max_size_kbytes: 2000,
                        size_hit: index,
                    },
                _ => ProtocolError::PlatformSerializationError(
                    format!("unable to serialize ConsensusError: {}", e)
                ),
            })
    }
}
}

The key differences: the unversioned variant uses plain bincode::encode_to_vec, while the versioned variant uses platform_serialization::platform_encode_to_vec which threads the PlatformVersion through the encode chain. When a limit is set, the config uses with_limit::<{ N }>() and the error mapping converts bincode IO errors to MaxEncodedBytesReachedError.

In both cases, the derive also generates bincode Encode and Decode implementations by internally calling into derive_bincode -- this is why you still need Encode and Decode in the derive list alongside PlatformSerialize and PlatformDeserialize.

The #[platform_serialize] attributes

The #[platform_serialize(...)] attribute accepts several parameters. Here is the full list from the derive macro source in packages/rs-platform-serialization-derive/src/lib.rs:

limit = N

Sets the maximum serialized size in bytes:

#![allow(unused)]
fn main() {
#[platform_serialize(limit = 2000)]
}

When encoding exceeds this limit, the error is MaxEncodedBytesReachedError. This is critical for types received from the network.

unversioned

Generates PlatformSerializable instead of PlatformSerializableWithPlatformVersion:

#![allow(unused)]
fn main() {
#[platform_serialize(unversioned)]
}

Use this when the type's serialization format does not change between protocol versions. Most leaf types (individual error structs, simple data holders) use this.

passthrough

For enums, serializes by delegating directly to the inner variant's serialization method:

#![allow(unused)]
fn main() {
#[platform_serialize(passthrough)]
}

When MyEnum::Variant(inner) is serialized with passthrough, it calls inner.serialize() directly rather than encoding the enum tag + inner data. This means the enum variant information is lost in serialization -- deserialization must use platform_version_path to know which variant to decode into.

Cannot be combined with limit, untagged, or into.

untagged

For enums, serializes without the variant tag number:

#![allow(unused)]
fn main() {
#[platform_serialize(untagged)]
}

Similar to passthrough but still uses the enum's own encoding logic rather than delegating to the inner type. The variant index is omitted from the output. Like passthrough, deserialization requires knowing which variant to expect.

Cannot be combined with passthrough.

into = "TypePath"

For structs, converts the value to another type before serialization:

#![allow(unused)]
fn main() {
#[platform_serialize(into = "DataContractInSerializationFormat")]
}

The generated code calls .into() to convert to the target type, then serializes that type. This is useful for types that have a different in-memory representation than their serialization format.

Cannot be used on enums.

platform_version_path = "..."

Used with passthrough or untagged to specify how to determine the correct variant during deserialization:

#![allow(unused)]
fn main() {
#[platform_serialize(
    passthrough,
    platform_version_path = "dpp.contract_versions.contract_serialization_version.default_current_version"
)]
}

The deserialization code reads this field path from the PlatformVersion to determine which variant index to decode.

crate_name = "..."

Overrides the default crate path (defaults to crate):

#![allow(unused)]
fn main() {
#[platform_serialize(crate_name = "dpp")]
}

allow_prepend_version and force_prepend_version

These flags are defined in the code but currently not actively used in the main serialization path. They were designed for prepending version bytes to serialized output. Only one can be used at a time.

The #[platform_error_type] attribute

By default, the derives use ProtocolError as the error type. You can override this:

#![allow(unused)]
fn main() {
#[derive(PlatformSerialize, PlatformDeserialize)]
#[platform_error_type(MyCustomError)]
pub struct MyType { ... }
}

The error type must have PlatformSerializationError(String) and MaxEncodedBytesReachedError { max_size_kbytes, size_hit } variants (or equivalent conversion paths).

PlatformSignable -- the signature hash derive

The PlatformSignable derive solves a specific problem: when you sign a state transition, you need to hash all the fields except the signature itself. You cannot include the signature in the data that was signed -- that would be circular.

Here is how it is used on DataContractCreateTransitionV0 in packages/rs-dpp/src/state_transition/state_transitions/contract/data_contract_create_transition/v0/mod.rs:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, Encode, Decode, PartialEq, PlatformSignable)]
pub struct DataContractCreateTransitionV0 {
    pub data_contract: DataContractInSerializationFormat,
    pub identity_nonce: IdentityNonce,
    pub user_fee_increase: UserFeeIncrease,
    #[platform_signable(exclude_from_sig_hash)]
    pub signature_public_key_id: KeyID,
    #[platform_signable(exclude_from_sig_hash)]
    pub signature: BinaryData,
}
}

The #[platform_signable(exclude_from_sig_hash)] attribute marks fields that should be excluded from the signature hash. The derive generates:

  1. A new struct DataContractCreateTransitionV0Signable<'a> containing only the non-excluded fields as Cow references
  2. A From<&DataContractCreateTransitionV0> implementation for the signable struct
  3. An implementation of the Signable trait on the original struct

The generated code looks approximately like this:

#![allow(unused)]
fn main() {
// Generated by PlatformSignable derive:

#[derive(Debug, Clone, bincode::Encode)]
pub struct DataContractCreateTransitionV0Signable<'a> {
    data_contract: std::borrow::Cow<'a, DataContractInSerializationFormat>,
    identity_nonce: std::borrow::Cow<'a, IdentityNonce>,
    user_fee_increase: std::borrow::Cow<'a, UserFeeIncrease>,
    // signature_public_key_id -- excluded
    // signature -- excluded
}

impl<'a> From<&'a DataContractCreateTransitionV0>
    for DataContractCreateTransitionV0Signable<'a>
{
    fn from(original: &'a DataContractCreateTransitionV0) -> Self {
        DataContractCreateTransitionV0Signable {
            data_contract: std::borrow::Cow::Borrowed(&original.data_contract),
            identity_nonce: std::borrow::Cow::Borrowed(&original.identity_nonce),
            user_fee_increase: std::borrow::Cow::Borrowed(&original.user_fee_increase),
        }
    }
}

impl crate::serialization::Signable for DataContractCreateTransitionV0 {
    fn signable_bytes(&self) -> Result<Vec<u8>, ProtocolError> {
        let config = bincode::config::standard().with_big_endian();
        let intermediate: DataContractCreateTransitionV0Signable = self.into();
        bincode::encode_to_vec(intermediate, config).map_err(|e| {
            ProtocolError::PlatformSerializationError(
                format!("unable to serialize to produce sig hash \
                    DataContractCreateTransitionV0: {}", e)
            )
        })
    }
}
}

The Cow references avoid cloning the data just to produce a hash. The intermediate struct borrows from the original and only serializes the fields that matter for the signature.
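A hand-written analogue makes the shape of the generated code concrete. The types below are simplified stand-ins, not the real transition types:

```rust
use std::borrow::Cow;

#[allow(dead_code)]
struct Transition {
    payload: Vec<u8>,
    nonce: u64,
    signature: Vec<u8>, // excluded from the signable view
}

// Borrowed view containing only the fields that are hashed for signing.
struct TransitionSignable<'a> {
    payload: Cow<'a, [u8]>,
    nonce: Cow<'a, u64>,
}

impl<'a> From<&'a Transition> for TransitionSignable<'a> {
    fn from(t: &'a Transition) -> Self {
        TransitionSignable {
            // Cow::Borrowed: no clone, just a reference into the original.
            payload: Cow::Borrowed(t.payload.as_slice()),
            nonce: Cow::Borrowed(&t.nonce),
        }
    }
}
```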

PlatformSignable on enums

When used on an enum (like StateTransition), the derive generates a corresponding Signable enum where each variant wraps the signable version of its inner type:

#![allow(unused)]
fn main() {
// On StateTransition:
#[derive(PlatformSignable)]
pub enum StateTransition {
    DataContractCreate(DataContractCreateTransition),
    DataContractUpdate(DataContractUpdateTransition),
    // ...
}

// Generates:
#[derive(Debug, Clone, bincode::Encode, derive_more::From)]
pub enum StateTransitionSignable<'a> {
    DataContractCreate(DataContractCreateTransitionSignable<'a>),
    DataContractUpdate(DataContractUpdateTransitionSignable<'a>),
    // ...
}
}

The signable_bytes implementation on the enum encodes a variant index (as u16) followed by the inner type's signable bytes:

#![allow(unused)]
fn main() {
impl Signable for StateTransition {
    fn signable_bytes(&self) -> Result<Vec<u8>, ProtocolError> {
        let config = bincode::config::standard().with_big_endian();
        let signable_bytes = match self {
            StateTransition::DataContractCreate(ref inner) => {
                let mut buf = bincode::encode_to_vec(&(0u16), config).unwrap();
                let inner_signable_bytes = inner.signable_bytes()?;
                buf.extend(inner_signable_bytes);
                buf
            }
            StateTransition::DataContractUpdate(ref inner) => {
                let mut buf = bincode::encode_to_vec(&(1u16), config).unwrap();
                let inner_signable_bytes = inner.signable_bytes()?;
                buf.extend(inner_signable_bytes);
                buf
            }
            // ...
        };
        Ok(signable_bytes)
    }
}
}

PlatformSignable attributes

  • exclude_from_sig_hash -- marks a field to be excluded from the signable struct
  • into = "TypePath" -- converts a field to a different type in the signable struct (used for fields that need a different representation for hashing)
  • derive_into -- on enums, generates From conversions from the original enum to the signable enum
  • derive_bincode_with_borrowed_vec -- manually implements bincode::Encode for the signable struct instead of deriving it (needed when fields contain borrowed Vec types)

The relationship between Encode/Decode and PlatformSerialize/PlatformDeserialize

This is a common source of confusion. Here is how the pieces fit together:

  • Encode / Decode (from bincode) -- field-level encoding. These know how to write each field to bytes and read it back. They are the low-level building blocks.

  • PlatformVersionEncode / PlatformVersionedDecode (from rs-platform-serialization) -- version-aware field-level encoding. These wrap Encode/Decode with a PlatformVersion parameter.

  • PlatformSerialize / PlatformDeserialize (derive macros) -- high-level serialization. These generate the serialize_to_bytes() and deserialize_from_bytes() methods that configure bincode, enforce size limits, and convert errors.

When you derive all of them on a type, the call chain is:

your_type.serialize_to_bytes()           // PlatformSerialize-generated method
  -> platform_encode_to_vec(...)         // from rs-platform-serialization
    -> your_type.platform_encode(...)    // PlatformVersionEncode (auto-generated)
      -> field.encode(encoder)           // bincode Encode for each field

You need Encode and Decode in the derive list alongside PlatformSerialize and PlatformDeserialize because the platform derives internally delegate to bincode encoding.

Rules

Do:

  • Always derive Encode, Decode, PlatformSerialize, PlatformDeserialize together
  • Use #[platform_serialize(unversioned)] for types with a stable serialization format
  • Use #[platform_serialize(limit = N)] on types received from untrusted sources
  • Use #[platform_signable(exclude_from_sig_hash)] on signature and signature key ID fields
  • Keep the #[platform_error_type] attribute consistent with the crate's error type

Do not:

  • Use passthrough on structs (it is enum-only)
  • Use into on enums (it is struct-only)
  • Combine passthrough with limit, untagged, or into
  • Combine force_prepend_version with allow_prepend_version
  • Forget that PlatformSignable on enums requires each inner type to also implement Signable
  • Change the order of fields in a PlatformSignable struct -- this changes the signature hash, which invalidates existing signatures

Platform Addresses

Dash Platform has its own address system, independent of the legacy Base58Check addresses used on the Core chain. Platform addresses use Bech32m encoding (BIP-350), following the specification in DIP-0018. There are three address types: two transparent and one shielded.

If you are coming from Bitcoin or Dash Core, the key mental shift is this: platform addresses are not derived from a single private key via a single algorithm. They are typed containers for a hash or a public key, unified under a single encoding scheme with a shared human-readable prefix.

The Three Address Types

Type      Inner Data                  Bech32m Type Byte   Prefix (mainnet)   Prefix (testnet)
P2PKH     20-byte pubkey hash         0xb0                dash1k...          tdash1k...
P2SH      20-byte script hash         0x80                dash1s...          tdash1s...
Orchard   43-byte shielded address    0x10                dash1z...          tdash1z...

The type byte is the first byte of the Bech32m data payload. It determines how the rest of the payload is interpreted. The type bytes were chosen so that the first character after dash1 (or tdash1) is a memorable letter: k for keys (P2PKH), s for scripts (P2SH), z for zero-knowledge (Orchard).

PlatformAddress

The transparent address type is defined in packages/rs-dpp/src/address_funds/platform_address.rs:

#![allow(unused)]
fn main() {
pub enum PlatformAddress {
    P2pkh([u8; 20]),
    P2sh([u8; 20]),
}
}

Two variants, both wrapping a 20-byte hash. A P2pkh address contains a Hash160(compressed_pubkey), just like on Core. A P2sh address contains a Hash160(redeem_script), supporting standard multisig scripts.

Bech32m Encoding

The human-readable part (HRP) depends on the network:

#![allow(unused)]
fn main() {
const PLATFORM_HRP_MAINNET: &str = "dash";
const PLATFORM_HRP_TESTNET: &str = "tdash";  // also devnet, regtest
}

Encoding produces addresses like:

  • dash1krma5z3ttj75la4m93xcndna9ullamq9y5e9n5rs (P2PKH, mainnet)
  • tdash1sppl5xpu70aka8nacc4kj2htflydspzkc8jtru5 (P2SH, testnet)

The wire format is type_byte || hash -- 21 bytes total, then Bech32m-encoded with the appropriate HRP.
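Assembling the data payload is a simple concatenation. This sketch shows only the payload layout; producing the final address string would additionally require a Bech32m encoder. The type-byte constant is the one from the table above:

```rust
const P2PKH_TYPE_BYTE: u8 = 0xb0;

// Build the 21-byte Bech32m data payload: type_byte || hash160.
fn transparent_payload(type_byte: u8, hash160: &[u8; 20]) -> Vec<u8> {
    let mut payload = Vec::with_capacity(21);
    payload.push(type_byte);
    payload.extend_from_slice(hash160);
    payload
}
```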

Storage vs. Bech32m: Two Byte Schemes

There is an important distinction between how addresses are encoded for users and how they are serialized for storage. The Bech32m type bytes (0xb0, 0x80) are not the same as the bincode variant indices (0x00, 0x01) used in GroveDB keys:

Context                  P2PKH byte   P2SH byte
Bech32m (user-facing)    0xb0         0x80
Bincode (storage/wire)   0x00         0x01

This matters when you encounter raw bytes. If the leading byte is 0xb0 or 0x80, you are looking at a Bech32m payload. If it is 0x00 or 0x01, it is bincode-serialized. The to_bytes() method produces bincode format; to_bech32m_string() produces the user-facing string.
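A hypothetical discriminator (not part of the codebase) makes the distinction explicit; the Orchard Bech32m type byte 0x10 is included for completeness:

```rust
// Classify raw address bytes by their leading byte, per the two byte schemes.
fn classify_leading_byte(b: u8) -> &'static str {
    match b {
        0xb0 | 0x80 | 0x10 => "Bech32m payload (user-facing)",
        0x00 | 0x01 => "bincode-serialized PlatformAddress (storage/wire)",
        _ => "unknown",
    }
}
```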

Conversion to Core Addresses

PlatformAddress can be converted to and from dashcore::Address, the type used by the Core chain RPC client. This allows the platform layer to interoperate with Core's address format when needed -- for example, when processing withdrawals that ultimately create Core chain transactions.

OrchardAddress

The shielded address type is defined in packages/rs-dpp/src/address_funds/orchard_address.rs:

#![allow(unused)]
fn main() {
pub struct OrchardAddress(grovedb_commitment_tree::PaymentAddress);
}

It wraps the Orchard protocol's native PaymentAddress, which consists of two components:

Component     Size       Purpose
Diversifier   11 bytes   Entropy for deriving unlinkable addresses from a single spending key
pk_d          32 bytes   Diversified transmission key (Pallas curve point)

Total: 43 bytes. The Bech32m payload is 0x10 || diversifier || pk_d -- 44 bytes.

Key Constants

#![allow(unused)]
fn main() {
const ORCHARD_DIVERSIFIER_SIZE: usize = 11;
const ORCHARD_PKD_SIZE: usize = 32;
const ORCHARD_ADDRESS_SIZE: usize = 43;
const ORCHARD_TYPE: u8 = 0x10;
}
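Putting the constants together, the Orchard payload layout mirrors the transparent one -- a type byte followed by the raw address bytes (a sketch of the layout only; Bech32m string encoding is separate):

```rust
const ORCHARD_TYPE: u8 = 0x10;

// Orchard Bech32m data payload: 0x10 || diversifier (11) || pk_d (32),
// 44 bytes total, then Bech32m-encoded with the dash/tdash HRP.
fn orchard_payload(diversifier: &[u8; 11], pk_d: &[u8; 32]) -> Vec<u8> {
    let mut payload = Vec::with_capacity(44);
    payload.push(ORCHARD_TYPE);
    payload.extend_from_slice(diversifier);
    payload.extend_from_slice(pk_d);
    payload
}
```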

Diversifiers and Unlinkability

A single Orchard spending key can derive an unlimited number of addresses by varying the diversifier. Each address looks completely unrelated to any other address derived from the same key. Only the holder of the corresponding Incoming Viewing Key (IVK) can link them.

This is the fundamental privacy property: a merchant can give every customer a unique address, and no observer can determine that all those addresses belong to the same wallet. The diversifier is the mechanism that makes this possible.

Differences from Zcash Encoding

Dash's OrchardAddress uses the same 43-byte raw format as Zcash Orchard addresses (identical diversifier + pk_d structure). But the Bech32m encoding is Dash-specific:

  • No F4Jumble: Zcash applies an F4Jumble permutation before encoding; Dash does not.
  • No Unified Address wrapper: Zcash wraps Orchard addresses in a Unified Address (UA) envelope with typecodes and length prefixes; Dash uses a simple type-byte scheme.
  • Different HRP: Zcash uses u1; Dash uses dash/tdash.

The result is simpler, shorter addresses that are not interoperable with Zcash wallets.

Converting to Orchard's Native Type

The builder functions that construct shielded transactions need the Orchard library's native PaymentAddress, not our wrapper. Conversion is straightforward:

#![allow(unused)]
fn main() {
impl From<&OrchardAddress> for PaymentAddress {
    fn from(address: &OrchardAddress) -> Self {
        *address.inner()
    }
}
}

This is used in the shielded builder module (packages/rs-dpp/src/shielded/builder/) when adding outputs to an Orchard bundle.

Address Witnesses

When a transparent address is used as an input (for example, funding a shield transition), the sender must prove ownership. This is done through an AddressWitness, defined in packages/rs-dpp/src/address_funds/witness.rs:

#![allow(unused)]
fn main() {
pub enum AddressWitness {
    P2pkh {
        signature: BinaryData,
    },
    P2sh {
        signatures: Vec<BinaryData>,
        redeem_script: BinaryData,
    },
}
}

P2PKH Witnesses

A P2PKH witness contains a single 65-byte recoverable ECDSA signature. The public key is recovered from the signature rather than transmitted alongside it -- saving 33 bytes per witness. Verification recovers the public key, hashes it with Hash160, and checks the result against the address hash.

P2SH Witnesses

A P2SH witness contains the signatures and the redeem script. Only standard bare multisig scripts are supported:

OP_M <pubkey1> <pubkey2> ... <pubkeyN> OP_N OP_CHECKMULTISIG

Constraints:

  • Maximum 17 signature entries (MAX_P2SH_SIGNATURES) -- 16 keys plus 1 dummy for the CHECKMULTISIG bug
  • Only compressed public keys (33 bytes)
  • No timelocks, hash puzzles, or custom scripts
  • The redeem script must hash to the address: Hash160(script) == address_hash

Verification Cost Tracking

Every witness verification has a measurable cost. The AddressWitnessVerificationOperations struct tracks the work:

#![allow(unused)]
fn main() {
pub struct AddressWitnessVerificationOperations {
    pub ecdsa_signature_verifications: u16,
    pub message_hash_count: u16,
    pub pubkey_hash_verifications: u16,
    pub script_hash_verifications: u16,
    pub signable_bytes_len: usize,
}
}

These counts feed into the fee system. A P2PKH witness costs one ECDSA verification. An M-of-N multisig costs N verifications (all public keys are checked against all signatures, per Bitcoin's CHECKMULTISIG semantics).
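The cost accounting described above can be sketched as follows. The field names mirror AddressWitnessVerificationOperations, but the helper functions are hypothetical, not the codebase's API:

```rust
#[derive(Default, Debug, PartialEq)]
struct VerificationOps {
    ecdsa_signature_verifications: u16,
    pubkey_hash_verifications: u16,
    script_hash_verifications: u16,
}

// P2PKH: one signature recovery/verification plus one Hash160 comparison.
fn p2pkh_cost() -> VerificationOps {
    VerificationOps {
        ecdsa_signature_verifications: 1,
        pubkey_hash_verifications: 1, // Hash160(recovered pubkey) == address hash
        ..Default::default()
    }
}

// M-of-N multisig: up to N key checks under CHECKMULTISIG semantics,
// plus one Hash160(redeem_script) comparison.
fn p2sh_multisig_cost(n_keys: u16) -> VerificationOps {
    VerificationOps {
        ecdsa_signature_verifications: n_keys,
        script_hash_verifications: 1,
        ..Default::default()
    }
}
```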

How Addresses Are Used in Shielded Transitions

Each of the five shielded transition types uses addresses differently:

Shield (Transparent to Shielded)

The sender provides one or more PlatformAddress inputs, each with a nonce and a maximum contribution amount. An AddressWitness proves ownership of each input. The funds enter the shielded pool and are received by an OrchardAddress embedded in the Orchard bundle's encrypted outputs.

#![allow(unused)]
fn main() {
pub struct ShieldTransitionV0 {
    pub inputs: BTreeMap<PlatformAddress, (AddressNonce, Credits)>,
    pub input_witnesses: Vec<AddressWitness>,
    pub actions: Vec<SerializedAction>,  // contains encrypted OrchardAddress recipient
    pub amount: u64,
    // ...
}
}

Shielded Transfer (Shielded to Shielded)

No transparent addresses are involved. The sender spends notes from the shielded pool and creates new notes for the recipient's OrchardAddress and a change OrchardAddress. Both addresses are hidden inside the Orchard bundle -- only the respective recipients can decrypt them.

#![allow(unused)]
fn main() {
pub struct ShieldedTransferTransitionV0 {
    pub actions: Vec<SerializedAction>,  // spend + output pairs
    pub value_balance: u64,             // fee extracted from pool
    // ...
}
}

Unshield (Shielded to Transparent)

The sender spends shielded notes and sends the funds to a PlatformAddress visible on-chain. The transparent output address is explicitly included in the transition and bound into the Orchard sighash to prevent substitution.

#![allow(unused)]
fn main() {
pub struct UnshieldTransitionV0 {
    pub output_address: PlatformAddress,  // visible transparent recipient
    pub unshielding_amount: u64,
    pub actions: Vec<SerializedAction>,   // spend + change output
    // ...
}
}

Shielded Withdrawal (Shielded to Core Chain)

Similar to unshield, but the destination is a Core chain script rather than a platform address. The CoreScript holds a raw P2PKH or P2SH script that will be used in a Core chain withdrawal transaction.

#![allow(unused)]
fn main() {
pub struct ShieldedWithdrawalTransitionV0 {
    pub output_script: CoreScript,  // Core chain P2PKH or P2SH
    pub core_fee_per_byte: u32,
    pub pooling: Pooling,
    // ...
}
}

Shield From Asset Lock (Core Chain to Shielded)

An asset lock proof from the Core chain funds the shielded pool directly. The recipient is an OrchardAddress inside the Orchard bundle. No PlatformAddress inputs are needed -- the asset lock proof substitutes for them.

The Platform Sighash

When transparent fields need to be bound to an Orchard bundle's proof, the platform uses a custom sighash computation defined in packages/rs-dpp/src/shielded/mod.rs:

#![allow(unused)]
fn main() {
const SIGHASH_DOMAIN: &[u8] = b"DashPlatformSighash";

pub fn compute_platform_sighash(
    bundle_commitment: &[u8; 32],
    extra_data: &[u8],
) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(SIGHASH_DOMAIN);
    hasher.update(bundle_commitment);
    hasher.update(extra_data);
    hasher.finalize().into()
}
}

The bundle_commitment is a BLAKE2b-256 hash of the Orchard bundle (per ZIP-244). The extra_data binds transparent fields:

Transition               extra_data
Shield                   empty
Shielded Transfer        empty
Unshield                 output_address.to_bytes() || amount.to_le_bytes()
Shielded Withdrawal      output_script.as_bytes()
Shield From Asset Lock   empty

This binding is critical for security. Without it, an attacker who intercepts an unshield transition could substitute the output_address while reusing the valid Orchard proof and signatures. The sighash ensures the Orchard bundle's spend authorization signatures commit to the specific transparent recipient.

The same compute_platform_sighash function is used on both sides: the client uses it when signing the bundle, and the platform uses it when verifying.
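For the unshield case, assembling extra_data is a plain concatenation. In this sketch `address_bytes` stands in for the output of output_address.to_bytes():

```rust
// extra_data for an unshield: output_address.to_bytes() || amount.to_le_bytes()
fn unshield_extra_data(address_bytes: &[u8], amount: u64) -> Vec<u8> {
    let mut extra = Vec::with_capacity(address_bytes.len() + 8);
    extra.extend_from_slice(address_bytes);
    extra.extend_from_slice(&amount.to_le_bytes());
    extra
}
```

Because these bytes feed into compute_platform_sighash, any change to the recipient or the amount changes the sighash and invalidates the bundle's spend authorization signatures.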

Trial Decryption and Address Privacy

A shielded recipient does not appear anywhere in cleartext on-chain. To discover incoming payments, a wallet must attempt trial decryption of every new note using its Incoming Viewing Key (IVK).

The process works as follows:

  1. The wallet retrieves new note entries from the shielded pool (each entry contains a nullifier, cmx, and 216-byte encrypted_note).
  2. For each entry, the wallet constructs a CompactAction from the nullifier, commitment, ephemeral public key, and encrypted ciphertext.
  3. The wallet attempts trial decryption using try_note_decryption with its IVK.
  4. If decryption succeeds, the note was addressed to one of the wallet's OrchardAddress instances.

The nullifier stored alongside each note is essential -- it provides the Rho value needed for decryption. Without it, the Orchard protocol's forward secrecy mechanism would prevent the recipient from recovering the note.

Because a single spending key can generate unlimited OrchardAddress instances (via different diversifiers), the IVK-based scan catches all of them in a single pass. The diversifier that was used becomes apparent only after successful decryption.
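The scan loop above can be sketched as follows. Everything here is a hypothetical mock: the real code uses the orchard crate's CompactAction and try_note_decryption, while this stand-in "decryption" succeeds on an arbitrary byte match purely to show the control flow of a single-pass IVK scan.

```rust
// Mock types standing in for the orchard crate's real ones.
struct NoteEntry {
    nullifier: [u8; 32],      // provides the Rho value needed for decryption
    cmx: [u8; 32],            // note commitment
    encrypted_note: Vec<u8>,  // 216 bytes on-chain
}

struct IncomingViewingKey(u8); // stand-in for the real IVK

// Stand-in for orchard's try_note_decryption: "succeeds" when the first
// ciphertext byte matches the key. Purely illustrative.
fn try_note_decryption(ivk: &IncomingViewingKey, entry: &NoteEntry) -> Option<u64> {
    let _ = (&entry.nullifier, &entry.cmx); // real decryption needs both
    if entry.encrypted_note.first() == Some(&ivk.0) {
        Some(1_000) // decrypted note value
    } else {
        None
    }
}

fn scan(ivk: &IncomingViewingKey, entries: &[NoteEntry]) -> Vec<u64> {
    // One pass catches notes sent to ANY diversified address of this key.
    entries.iter().filter_map(|e| try_note_decryption(ivk, e)).collect()
}

fn main() {
    let ivk = IncomingViewingKey(7);
    let entries = vec![
        NoteEntry { nullifier: [0; 32], cmx: [0; 32], encrypted_note: vec![7; 216] },
        NoteEntry { nullifier: [1; 32], cmx: [1; 32], encrypted_note: vec![9; 216] },
    ];
    // Only the first entry was addressed to this wallet.
    assert_eq!(scan(&ivk, &entries).len(), 1);
}
```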

Note Encryption Structure

Each SerializedAction contains an encrypted_note field of 216 bytes:

Component       Size       Purpose
--------------  ---------  ---------------------------------------------------------
epk             32 bytes   Ephemeral public key for Diffie-Hellman key agreement
enc_ciphertext  104 bytes  Note plaintext encrypted to recipient (ChaCha20-Poly1305)
out_ciphertext  80 bytes   Note encrypted to sender for wallet recovery

The enc_ciphertext contains the note plaintext (52 bytes), a Dash-specific memo (36 bytes), and the AEAD authentication tag (16 bytes). The memo is much smaller than Zcash's: Dash uses the 36-byte DashMemo type rather than the 512-byte ZcashMemo, keeping encrypted notes compact.

Storage in GroveDB

Notes are stored in a BulkAppendTree within the shielded credit pool:

AddressBalances / "s" (shielded_credit_pool) /
    [1] notes         -- CommitmentTree (BulkAppendTree)
    [2] nullifiers    -- Tree (spent note markers)
    [5] total_balance -- SumItem
    [6] anchors       -- Tree (block_height -> anchor)

Each note entry stores:

cmx (32 bytes) || nullifier (32 bytes) || encrypted_note (216 bytes) = 280 bytes

The nullifier is stored alongside the note (rather than separately) specifically to support trial decryption. A scanning client needs both the encrypted ciphertext and the nullifier to attempt decryption.
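A minimal sketch of splitting a 280-byte entry into its fixed-width parts, using the offsets described in this chapter (the function name is invented, not taken from the codebase):

```rust
// Split a note entry into cmx || nullifier || encrypted_note.
fn split_note_entry(entry: &[u8; 280]) -> ([u8; 32], [u8; 32], &[u8]) {
    let mut cmx = [0u8; 32];
    let mut nullifier = [0u8; 32];
    cmx.copy_from_slice(&entry[0..32]);
    nullifier.copy_from_slice(&entry[32..64]);
    let encrypted_note = &entry[64..280]; // 216 bytes: epk || enc || out
    (cmx, nullifier, encrypted_note)
}

fn main() {
    let mut raw = [0u8; 280];
    raw[0] = 0xAA;  // first byte of cmx
    raw[32] = 0xBB; // first byte of nullifier
    raw[64] = 0xCC; // first byte of epk
    let (cmx, nullifier, note) = split_note_entry(&raw);
    assert_eq!(cmx[0], 0xAA);
    assert_eq!(nullifier[0], 0xBB);
    assert_eq!(note.len(), 216);
    assert_eq!(note[0], 0xCC);
}
```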

Rules and Guidelines

Do:

  • Use to_bech32m_string() for user-facing address display. Always include the network parameter.
  • Use to_bytes() (bincode format) for storage keys and wire serialization.
  • Generate a fresh diversifier for each new Orchard payment address to maximize unlinkability.
  • Always bind transparent fields into the platform sighash. Forgetting to include the output address in an unshield transition would be a critical vulnerability.

Do not:

  • Mix up Bech32m type bytes (0xb0, 0x80, 0x10) with bincode variant bytes (0x00, 0x01). They serve different purposes and are not interchangeable.
  • Assume Orchard addresses are interoperable with Zcash. The encoding differs (no F4Jumble, no Unified Address wrapper).
  • Use non-standard scripts in P2SH addresses. Only bare multisig (OP_M ... OP_N OP_CHECKMULTISIG) is supported.
  • Store or transmit the raw spending key. Use the Incoming Viewing Key for scanning and the Full Viewing Key for read-only wallet recovery.
  • Attempt trial decryption without the nullifier. The Orchard protocol requires the Rho derived from the nullifier to reconstruct the note.

Data Contracts

If you have ever built a traditional web application, you know the pattern: define a database schema, then write code that reads and writes data conforming to that schema. Dash Platform follows the same idea, but on a decentralized network. A data contract is the on-chain schema that defines what application data looks like, how it is indexed, and who can modify it.

Every application on Dash Platform -- whether it is DPNS (the naming service), DashPay (social payments), or your own custom dApp -- starts by registering a data contract. Once registered, users can create, update, and query documents that conform to that contract's schema. Think of the data contract as the CREATE TABLE statement and documents as the rows.

The DataContract Enum

Like almost every core type in Platform, DataContract is a versioned enum. You will find its definition in packages/rs-dpp/src/data_contract/mod.rs:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq, From, PlatformVersioned)]
pub enum DataContract {
    V0(DataContractV0),
    V1(DataContractV1),
}
}

This is a pattern you will see over and over: the top-level type is an enum whose variants are concrete struct versions. The PlatformVersioned derive macro wires it into the protocol versioning system, and the From derive gives you free .into() conversions from either variant.

The enum also provides direct-access helpers for when you know exactly which version you are dealing with:

#![allow(unused)]
fn main() {
impl DataContract {
    pub fn as_v0(&self) -> Option<&DataContractV0> { ... }
    pub fn as_v1(&self) -> Option<&DataContractV1> { ... }
    pub fn into_v0(self) -> Option<DataContractV0> { ... }
    pub fn into_v1(self) -> Option<DataContractV1> { ... }
}
}

In tests, there are convenience methods as_latest() and into_latest() that always return the most recent variant. These are gated behind #[cfg(test)] because production code should never assume which version is "latest" -- the protocol version determines that.

What Lives Inside a Data Contract

The V0 struct contains the essentials. From packages/rs-dpp/src/data_contract/v0/data_contract.rs:

#![allow(unused)]
fn main() {
pub struct DataContractV0 {
    pub(crate) id: Identifier,
    pub(crate) version: u32,
    pub(crate) owner_id: Identifier,
    pub document_types: BTreeMap<DocumentName, DocumentType>,
    pub(crate) metadata: Option<Metadata>,
    pub(crate) config: DataContractConfig,
    pub(crate) schema_defs: Option<BTreeMap<DefinitionName, Value>>,
}
}

The key fields are:

  • id: A 32-byte identifier derived from the contract creation transaction. Globally unique.
  • version: A monotonically increasing counter. Every time the contract owner updates the contract, this increments.
  • owner_id: The identity that created (and can update) this contract.
  • document_types: A map from document type names (like "contactRequest" or "domain") to their DocumentType definitions, which include the JSON Schema, indexes, and mutability rules.
  • config: Contract-level configuration such as whether documents can be deleted, encryption key requirements, and so on.
  • schema_defs: Shared JSON Schema $defs that document types can reference.

What V1 Added

DataContractV1 extends V0 with several important capabilities. From packages/rs-dpp/src/data_contract/v1/data_contract.rs:

#![allow(unused)]
fn main() {
pub struct DataContractV1 {
    // All V0 fields...
    pub id: Identifier,
    pub version: u32,
    pub owner_id: Identifier,
    pub document_types: BTreeMap<DocumentName, DocumentType>,
    pub config: DataContractConfig,
    pub schema_defs: Option<BTreeMap<DefinitionName, Value>>,

    // New in V1:
    pub created_at: Option<TimestampMillis>,
    pub updated_at: Option<TimestampMillis>,
    pub created_at_block_height: Option<BlockHeight>,
    pub updated_at_block_height: Option<BlockHeight>,
    pub created_at_epoch: Option<EpochIndex>,
    pub updated_at_epoch: Option<EpochIndex>,
    pub groups: BTreeMap<GroupContractPosition, Group>,
    pub tokens: BTreeMap<TokenContractPosition, TokenConfiguration>,
    pub keywords: Vec<String>,
    pub description: Option<String>,
}
}

The additions fall into four categories:

  1. Timestamps and block tracking -- created_at, updated_at, created_at_block_height, updated_at_block_height, created_at_epoch, updated_at_epoch. These provide an immutable audit trail of when the contract was created and last modified.

  2. Groups -- BTreeMap<GroupContractPosition, Group>. Groups enable multiparty governance. Each group has a set of member identities with associated voting power and a required power threshold for actions.

  3. Tokens -- BTreeMap<TokenContractPosition, TokenConfiguration>. Contracts can now define and manage tokens with configurable supply limits, minting/burning rules, and governance controls.

  4. Searchability -- keywords and description make contracts discoverable through the platform's search system contract.

The Versioned Accessors Pattern

Here is where it gets interesting. You do not access data contract fields directly in most code. Instead, you go through accessor traits. These live in packages/rs-dpp/src/data_contract/accessors/.

The V0 getter trait, defined in accessors/v0/mod.rs:

#![allow(unused)]
fn main() {
pub trait DataContractV0Getters {
    fn id(&self) -> Identifier;
    fn id_ref(&self) -> &Identifier;
    fn version(&self) -> u32;
    fn owner_id(&self) -> Identifier;
    fn document_type_for_name(&self, name: &str)
        -> Result<DocumentTypeRef<'_>, DataContractError>;
    fn document_types(&self) -> &BTreeMap<DocumentName, DocumentType>;
    fn config(&self) -> &DataContractConfig;
    // ... and more
}

pub trait DataContractV0Setters {
    fn set_id(&mut self, id: Identifier);
    fn set_version(&mut self, version: u32);
    fn increment_version(&mut self);
    fn set_owner_id(&mut self, owner_id: Identifier);
    fn set_config(&mut self, config: DataContractConfig);
}
}

And the V1 getter trait extends V0, defined in accessors/v1/mod.rs:

#![allow(unused)]
fn main() {
pub trait DataContractV1Getters: DataContractV0Getters {
    fn groups(&self) -> &BTreeMap<GroupContractPosition, Group>;
    fn tokens(&self) -> &BTreeMap<TokenContractPosition, TokenConfiguration>;
    fn created_at(&self) -> Option<TimestampMillis>;
    fn updated_at(&self) -> Option<TimestampMillis>;
    fn keywords(&self) -> &Vec<String>;
    fn description(&self) -> Option<&String>;
    // ... and more
}
}

Notice that DataContractV1Getters has a supertrait bound on DataContractV0Getters. This means anything that implements the V1 getters automatically provides the V0 getters too, so a V1-aware value can be passed anywhere the V0 getters are expected.

How the Enum Dispatches

The magic happens in packages/rs-dpp/src/data_contract/accessors/mod.rs, where the top-level DataContract enum implements both traits by dispatching to the inner variant:

#![allow(unused)]
fn main() {
impl DataContractV0Getters for DataContract {
    fn id(&self) -> Identifier {
        match self {
            DataContract::V0(v0) => v0.id(),
            DataContract::V1(v1) => v1.id(),
        }
    }

    fn version(&self) -> u32 {
        match self {
            DataContract::V0(v0) => v0.version(),
            DataContract::V1(v1) => v1.version(),
        }
    }
    // ... every method follows this pattern
}
}

For V1-only fields, the implementation gracefully handles V0 contracts:

#![allow(unused)]
fn main() {
impl DataContractV1Getters for DataContract {
    fn groups(&self) -> &BTreeMap<GroupContractPosition, Group> {
        match self {
            DataContract::V0(_) => &EMPTY_GROUPS,  // static empty map
            DataContract::V1(v1) => &v1.groups,
        }
    }

    fn tokens(&self) -> &BTreeMap<TokenContractPosition, TokenConfiguration> {
        match self {
            DataContract::V0(_) => &EMPTY_TOKENS,  // static empty map
            DataContract::V1(v1) => &v1.tokens,
        }
    }

    fn created_at(&self) -> Option<TimestampMillis> {
        match self {
            DataContract::V0(_) => None,
            DataContract::V1(v1) => v1.created_at,
        }
    }
}
}

This is a deliberate design choice. When code asks a V0 contract for its groups, it gets an empty map rather than an error. When it asks for a timestamp, it gets None. The calling code does not need to know or care which version it is working with -- it just checks whether the Option has a value or whether the map is empty.
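The whole pattern fits in a miniature, self-contained replica. The names here are simplified stand-ins, and the shared empty map uses std's OnceLock (an assumption; the real code may use a different lazy-static mechanism):

```rust
use std::collections::BTreeMap;
use std::sync::OnceLock;

struct ContractV0;
struct ContractV1 { groups: BTreeMap<u16, String> }

enum Contract { V0(ContractV0), V1(ContractV1) }

// One shared empty map, so the V0 arm can return a reference.
fn empty_groups() -> &'static BTreeMap<u16, String> {
    static EMPTY: OnceLock<BTreeMap<u16, String>> = OnceLock::new();
    EMPTY.get_or_init(BTreeMap::new)
}

impl Contract {
    fn groups(&self) -> &BTreeMap<u16, String> {
        match self {
            Contract::V0(_) => empty_groups(), // no error: just "no groups"
            Contract::V1(v1) => &v1.groups,
        }
    }
}

fn main() {
    // Callers never need to know which version they hold.
    let v0 = Contract::V0(ContractV0);
    assert!(v0.groups().is_empty());

    let mut groups = BTreeMap::new();
    groups.insert(0u16, "board".to_string());
    let v1 = Contract::V1(ContractV1 { groups });
    assert_eq!(v1.groups().len(), 1);
}
```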

Serialization Strategy

Data contracts use a two-step serialization approach. They are first converted to a DataContractInSerializationFormat (a common intermediate representation), then serialized to bytes using bincode with big-endian encoding:

#![allow(unused)]
fn main() {
impl PlatformSerializableWithPlatformVersion for DataContract {
    fn serialize_to_bytes_with_platform_version(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<u8>, ProtocolError> {
        let serialization_format: DataContractInSerializationFormat =
            self.try_into_platform_versioned(platform_version)?;
        let config = bincode::config::standard()
            .with_big_endian()
            .with_no_limit();
        bincode::encode_to_vec(serialization_format, config)
            .map_err(|e| PlatformSerializationError(
                format!("unable to serialize DataContract: {}", e)
            ))
    }
}
}

This intermediate format is important because serialization versions and code structure versions are independent. A contract serialized as V1 ten years ago must still be deserializable, even if the code structures have evolved to V5 by then. The serialization format acts as the bridge.
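A toy illustration of the bridge idea, under stated assumptions: the struct names and the hand-rolled encoding below are invented (the real code converts to DataContractInSerializationFormat and uses bincode with big-endian config). What matters is that only the append-only wire enum is ever serialized, so old variants remain decodable as the code-level structs evolve.

```rust
// Code-level struct: free to evolve across releases.
struct ContractV1 { version: u32, description: Option<String> }

// Wire-level intermediate: append-only, frozen once shipped.
enum WireFormat {
    V0 { version: u32 },
    V1 { version: u32, description: Option<String> },
}

fn encode(wire: &WireFormat) -> Vec<u8> {
    let mut out = Vec::new();
    match wire {
        WireFormat::V0 { version } => {
            out.push(0); // variant tag
            out.extend_from_slice(&version.to_be_bytes()); // big-endian
        }
        WireFormat::V1 { version, description } => {
            out.push(1);
            out.extend_from_slice(&version.to_be_bytes());
            match description {
                None => out.push(0),
                Some(d) => {
                    out.push(1);
                    out.extend_from_slice(&(d.len() as u32).to_be_bytes());
                    out.extend_from_slice(d.as_bytes());
                }
            }
        }
    }
    out
}

fn main() {
    let contract = ContractV1 { version: 3, description: None };
    let wire = WireFormat::V1 {
        version: contract.version,
        description: contract.description,
    };
    assert_eq!(encode(&wire), vec![1, 0, 0, 0, 3, 0]); // tag, be-u32, None marker
}
```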

There is also versioned_limit_deserialize, which imposes a size limit and always performs full validation -- this is used for data coming from untrusted sources (anything not from Drive's own storage).

Rules and Guidelines

Do:

  • Always access fields through the accessor traits (DataContractV0Getters, DataContractV1Getters), not by pattern-matching on the enum variant.
  • Handle V1-only fields being None or empty when the contract might be V0.
  • Use document_type_for_name() to retrieve document types -- it returns a proper error if the name does not exist.

Do not:

  • Use as_v0() / as_v1() in production code unless you genuinely need version-specific behavior. The trait accessors are the right abstraction.
  • Assume into_latest() exists outside of tests -- it is #[cfg(test)] only.
  • Forget that serialization versions persist forever. If you add a new field, old serialized contracts will not have it, and your deserialization must handle that gracefully.
  • Mutate a contract's id after creation -- it is derived from the creation transaction and must remain stable.

Documents

If data contracts are the tables, then documents are the rows. A document is an instance of a document type defined within a data contract. When a user creates a profile on DashPay, submits a domain name on DPNS, or stores any application data on the platform, they are creating a document.

Documents are the most fundamental unit of user data on Dash Platform. They are stored in GroveDB (through Drive), indexed for efficient querying, and cryptographically provable. Understanding how documents work at the Rust level is essential for working with the platform codebase.

The Document Enum

Like DataContract and Identity, Document is a versioned enum. From packages/rs-dpp/src/document/mod.rs:

#![allow(unused)]
fn main() {
#[derive(Clone, Debug, PartialEq, From)]
pub enum Document {
    V0(DocumentV0),
}
}

Currently there is only one variant, V0. But the enum wrapper is already in place so that future protocol versions can introduce a V1 variant without breaking existing code. All code that works with documents goes through the accessor traits, so adding a new variant is purely additive.

What Lives Inside a Document

The DocumentV0 struct is defined in packages/rs-dpp/src/document/v0/mod.rs:

#![allow(unused)]
fn main() {
pub struct DocumentV0 {
    pub id: Identifier,
    pub owner_id: Identifier,
    pub properties: BTreeMap<String, Value>,
    pub revision: Option<Revision>,
    pub created_at: Option<TimestampMillis>,
    pub updated_at: Option<TimestampMillis>,
    pub transferred_at: Option<TimestampMillis>,
    pub created_at_block_height: Option<BlockHeight>,
    pub updated_at_block_height: Option<BlockHeight>,
    pub transferred_at_block_height: Option<BlockHeight>,
    pub created_at_core_block_height: Option<CoreBlockHeight>,
    pub updated_at_core_block_height: Option<CoreBlockHeight>,
    pub transferred_at_core_block_height: Option<CoreBlockHeight>,
    pub creator_id: Option<Identifier>,
}
}

Let us walk through the key fields:

  • id: A 32-byte unique identifier. Unlike contract IDs, document IDs are derived from a combination of the contract ID, owner ID, document type name, and entropy. This makes them deterministic yet unique.

  • owner_id: The identity that currently owns this document. Ownership can change if the document type supports transfers.

  • properties: The actual application data, stored as a BTreeMap<String, Value>. The Value type comes from platform-value and can represent strings, integers, byte arrays, nested maps, and arrays. The keys correspond to the property names defined in the document type's JSON Schema.

  • revision: An Option<Revision> (which is a u64). Mutable documents track revisions -- each update increments the revision. Immutable document types will have None here.

  • Timestamps: Nine timestamp fields covering three events (creation, update, transfer) across three time references (wall-clock milliseconds, platform block height, core block height). Whether these are populated depends on the document type schema -- if the schema requires $createdAt, the platform fills it in when the document is created.

  • creator_id: The original creator of the document. This differs from owner_id when a document has been transferred to a new owner.

Document ID Generation

Document IDs are not random -- they are derived deterministically. From packages/rs-dpp/src/document/generate_document_id.rs:

#![allow(unused)]
fn main() {
impl Document {
    pub fn generate_document_id_v0(
        contract_id: &Identifier,
        owner_id: &Identifier,
        document_type_name: &str,
        entropy: &[u8],
    ) -> Identifier {
        let mut buf: Vec<u8> = vec![];
        buf.extend_from_slice(&contract_id.to_buffer());
        buf.extend_from_slice(&owner_id.to_buffer());
        buf.extend_from_slice(document_type_name.as_bytes());
        buf.extend_from_slice(entropy);

        Identifier::from_bytes(&hash_double_to_vec(&buf)).unwrap()
    }
}
}

The ID is a double SHA-256 hash of the concatenation of the contract ID, owner ID, document type name, and client-provided entropy. This means:

  • The same entropy in the same context always produces the same ID (deterministic).
  • Different entropy produces a different ID, barring a negligible chance of hash collision.
  • The ID commits to both the contract and document type, preventing cross-contract collisions.
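These properties can be demonstrated with a sketch of the derivation shape. The real ID is a double SHA-256 of the concatenation; std has no SHA-256, so DefaultHasher stands in here purely to show determinism and entropy-driven uniqueness:

```rust
// Stand-in for the real double-SHA-256 derivation.
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

fn toy_document_id(
    contract_id: &[u8; 32],
    owner_id: &[u8; 32],
    document_type_name: &str,
    entropy: &[u8],
) -> u64 {
    // Same concatenation order as generate_document_id_v0.
    let mut buf = Vec::new();
    buf.extend_from_slice(contract_id);
    buf.extend_from_slice(owner_id);
    buf.extend_from_slice(document_type_name.as_bytes());
    buf.extend_from_slice(entropy);
    let mut h = DefaultHasher::new();
    h.write(&buf);
    h.finish()
}

fn main() {
    let contract = [1u8; 32];
    let owner = [2u8; 32];

    // Same inputs -> same ID (deterministic).
    let a = toy_document_id(&contract, &owner, "domain", b"entropy-1");
    let b = toy_document_id(&contract, &owner, "domain", b"entropy-1");
    assert_eq!(a, b);

    // Different entropy -> different ID.
    let c = toy_document_id(&contract, &owner, "domain", b"entropy-2");
    assert_ne!(a, c);
}
```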

The Accessor Traits

Documents follow the same accessor-trait pattern as data contracts. The getter trait is defined in packages/rs-dpp/src/document/accessors/v0/mod.rs:

#![allow(unused)]
fn main() {
pub trait DocumentV0Getters {
    fn id(&self) -> Identifier;
    fn owner_id(&self) -> Identifier;
    fn properties(&self) -> &BTreeMap<String, Value>;
    fn properties_mut(&mut self) -> &mut BTreeMap<String, Value>;
    fn revision(&self) -> Option<Revision>;
    fn created_at(&self) -> Option<TimestampMillis>;
    fn updated_at(&self) -> Option<TimestampMillis>;
    fn transferred_at(&self) -> Option<TimestampMillis>;
    fn created_at_block_height(&self) -> Option<u64>;
    fn updated_at_block_height(&self) -> Option<u64>;
    fn creator_id(&self) -> Option<Identifier>;
    // ... and more
}
}

The setter trait extends it with mutation methods and also provides convenient typed setters:

#![allow(unused)]
fn main() {
pub trait DocumentV0Setters: DocumentV0Getters {
    fn set_id(&mut self, id: Identifier);
    fn set_owner_id(&mut self, owner_id: Identifier);
    fn set_properties(&mut self, properties: BTreeMap<String, Value>);
    fn set_revision(&mut self, revision: Option<Revision>);
    fn set_created_at(&mut self, created_at: Option<TimestampMillis>);
    fn set_updated_at(&mut self, updated_at: Option<TimestampMillis>);

    // Generic property access via path syntax
    fn set(&mut self, path: &str, value: Value) { ... }
    fn remove(&mut self, path: &str) -> Option<Value> { ... }

    // Typed setters for common types
    fn set_u8(&mut self, property_name: &str, value: u8);
    fn set_u64(&mut self, property_name: &str, value: u64);
    fn set_bytes(&mut self, property_name: &str, value: Vec<u8>);
    // ... and more
}
}

Notice the set() method provides lodash-style path syntax: "root.people[0].name". Parents are created automatically if they do not exist.
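To make the auto-creation behavior concrete, here is a hypothetical mini-implementation of dotted-path set() over nested maps. It is not the real code: the real set() also handles array indices like people[0], while this toy supports only dotted map keys over a two-variant stand-in Value.

```rust
use std::collections::BTreeMap;

#[derive(Debug, PartialEq)]
enum Value {
    Text(String),
    Map(BTreeMap<String, Value>),
}

// Recursively walk the dotted path, creating intermediate maps as needed.
fn set_path(map: &mut BTreeMap<String, Value>, path: &str, value: Value) {
    match path.split_once('.') {
        None => {
            map.insert(path.to_string(), value);
        }
        Some((head, rest)) => {
            let entry = map
                .entry(head.to_string())
                .or_insert_with(|| Value::Map(BTreeMap::new()));
            // A non-map in the way is replaced by a fresh map.
            if !matches!(entry, Value::Map(_)) {
                *entry = Value::Map(BTreeMap::new());
            }
            if let Value::Map(inner) = entry {
                set_path(inner, rest, value);
            }
        }
    }
}

fn main() {
    let mut doc = BTreeMap::new();
    // Neither "root" nor "person" exists yet; both are created automatically.
    set_path(&mut doc, "root.person.name", Value::Text("alice".into()));
    let Some(Value::Map(root)) = doc.get("root") else { panic!() };
    let Some(Value::Map(person)) = root.get("person") else { panic!() };
    assert_eq!(person.get("name"), Some(&Value::Text("alice".into())));
}
```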

The DocumentMethodsV0 Trait

Beyond simple field access, documents have behavior defined by the DocumentMethodsV0 trait in packages/rs-dpp/src/document/document_methods/mod.rs:

#![allow(unused)]
fn main() {
pub trait DocumentMethodsV0 {
    fn get_raw_for_contract(
        &self,
        key: &str,
        document_type_name: &str,
        contract: &DataContract,
        owner_id: Option<[u8; 32]>,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<u8>>, ProtocolError>;

    fn get_raw_for_document_type(
        &self,
        key_path: &str,
        document_type: DocumentTypeRef,
        owner_id: Option<[u8; 32]>,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<u8>>, ProtocolError>;

    fn hash(
        &self,
        contract: &DataContract,
        document_type: DocumentTypeRef,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<u8>, ProtocolError>;

    fn increment_revision(&mut self) -> Result<(), ProtocolError>;

    fn is_equal_ignoring_time_based_fields(
        &self,
        rhs: &Self,
        also_ignore_fields: Option<Vec<&str>>,
        platform_version: &PlatformVersion,
    ) -> Result<bool, ProtocolError>;
}
}

The get_raw_for_contract and get_raw_for_document_type methods retrieve a document property as raw bytes, using the document type schema to determine how to serialize the value. This is critical for building index keys and storage operations.

The is_equal_ignoring_time_based_fields method is particularly useful in validation. Since timestamps and block heights are set by the network (not the client), you often want to compare two documents while ignoring those fields -- for example, to verify that a client's update only changed the fields it was supposed to change.

Version Dispatching in Methods

Every method in the Document implementation dispatches through the platform version, following the standard pattern:

#![allow(unused)]
fn main() {
impl DocumentMethodsV0 for Document {
    fn get_raw_for_contract(
        &self,
        key: &str,
        document_type_name: &str,
        contract: &DataContract,
        owner_id: Option<[u8; 32]>,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<u8>>, ProtocolError> {
        match self {
            Document::V0(document_v0) => {
                match platform_version
                    .dpp
                    .document_versions
                    .document_method_versions
                    .get_raw_for_contract
                {
                    0 => document_v0.get_raw_for_contract_v0(
                        key, document_type_name, contract,
                        owner_id, platform_version,
                    ),
                    version => Err(ProtocolError::UnknownVersionMismatch {
                        method: "DocumentMethodV0::get_raw_for_contract".to_string(),
                        known_versions: vec![0],
                        received: version,
                    }),
                }
            }
        }
    }
}
}

This is a double dispatch: first on the document variant (V0), then on the method version from the platform version configuration. This allows the platform to evolve both the document structure and the behavior of document methods independently.
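The double dispatch also fits in a self-contained miniature. All names here are simplified stand-ins for the real types (PlatformVersion's nested version tables are collapsed to a single field):

```rust
// Stand-in for the platform version's method-version table.
struct PlatformVersion { get_raw_version: u16 }

#[derive(Debug, PartialEq)]
enum ProtocolError {
    UnknownVersionMismatch { method: String, known_versions: Vec<u16>, received: u16 },
}

struct DocumentV0 { payload: u8 }

impl DocumentV0 {
    fn get_raw_v0(&self) -> Vec<u8> { vec![self.payload] }
}

enum Document { V0(DocumentV0) }

impl Document {
    fn get_raw(&self, pv: &PlatformVersion) -> Result<Vec<u8>, ProtocolError> {
        match self {
            // First dispatch: the struct variant.
            Document::V0(v0) => match pv.get_raw_version {
                // Second dispatch: the method version.
                0 => Ok(v0.get_raw_v0()),
                version => Err(ProtocolError::UnknownVersionMismatch {
                    method: "Document::get_raw".to_string(),
                    known_versions: vec![0],
                    received: version,
                }),
            },
        }
    }
}

fn main() {
    let doc = Document::V0(DocumentV0 { payload: 42 });
    assert_eq!(doc.get_raw(&PlatformVersion { get_raw_version: 0 }), Ok(vec![42]));
    // An unknown method version is a protocol error, not a panic.
    assert!(doc.get_raw(&PlatformVersion { get_raw_version: 9 }).is_err());
}
```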

How Documents Reference Their Contract

Documents do not carry a reference to their contract inside the struct itself. Instead, the relationship is established through context -- the document type name and contract are passed alongside the document whenever they are needed (for serialization, validation, hashing, and storage).

When serializing, a document is always serialized relative to its document type:

#![allow(unused)]
fn main() {
let serialized = <Document as DocumentPlatformConversionMethodsV0>::serialize(
    &document,
    document_type,  // the schema determines field order and encoding
    &contract,
    platform_version,
)?;

let deserialized = Document::from_bytes(
    &serialized,
    document_type,  // same schema needed for decoding
    platform_version,
)?;
}

This means a document's binary representation is not self-describing. You need the document type definition to interpret the bytes. This is a deliberate design choice for storage efficiency -- field names are not repeated in every serialized document.

The INITIAL_REVISION Constant

When a new document is created, it starts at revision 1:

#![allow(unused)]
fn main() {
pub const INITIAL_REVISION: u64 = 1;
}

Revision 0 is never used for active documents. This allows 0 to serve as a sentinel value meaning "no revision" in some contexts.

Rules and Guidelines

Do:

  • Always serialize and deserialize documents using their document type definition. The type determines field layout.
  • Use is_equal_ignoring_time_based_fields() when comparing documents for validation purposes.
  • Use increment_revision() rather than manually manipulating the revision field -- it handles overflow checking.
  • Access properties through the accessor traits, not by reaching into the inner DocumentV0 struct.

Do not:

  • Assume a document carries its contract reference. The contract and document type are always passed as separate arguments.
  • Manually construct document IDs. Use generate_document_id_v0() with proper entropy.
  • Treat serialized document bytes as self-describing. Without the document type schema, the bytes are meaningless.
  • Set time-based fields from client code. The platform sets created_at, updated_at, block heights, and similar fields during state transition processing.

Identities

Before a user can do anything on Dash Platform -- register a name, send a contact request, create a data contract -- they need an identity. An identity is the platform-level representation of a user. It is the anchor for everything: documents are owned by identities, state transitions are signed by identity keys, and fees are paid from identity balances.

If you are coming from Ethereum, think of an identity as an account. But unlike Ethereum's single-key accounts, a Dash Platform identity can have multiple public keys with different purposes and security levels, making it more flexible and more secure.

The Identity Enum

Following the standard versioning pattern, Identity is defined in packages/rs-dpp/src/identity/identity.rs:

#![allow(unused)]
fn main() {
#[derive(Debug, Clone, PartialEq, From)]
pub enum Identity {
    V0(IdentityV0),
}
}

Currently only V0 exists. The IdentityV0 struct lives in packages/rs-dpp/src/identity/v0/mod.rs:

#![allow(unused)]
fn main() {
pub struct IdentityV0 {
    pub id: Identifier,
    pub public_keys: BTreeMap<KeyID, IdentityPublicKey>,
    pub balance: u64,
    pub revision: Revision,
}
}

Four fields. That is it. An identity is remarkably simple:

  • id: A 32-byte unique identifier. For identities created via asset locks, this is derived from the locking transaction. For identities created via address-based funding, it is derived from the input addresses and nonces.

  • public_keys: A BTreeMap mapping key IDs (simple integers) to IdentityPublicKey objects. Each public key has a purpose (authentication, encryption, decryption, transfer, voting, owner), a security level (master, critical, high, medium), and the actual key data. An identity can have many keys for different scenarios.

  • balance: The identity's credit balance, measured in platform credits. Credits are the unit of account for fee payment on the platform. Users convert Dash into credits through a process called "topping up."

  • revision: A monotonically increasing counter that increments with every identity update (adding keys, disabling keys, etc.). This prevents replay attacks -- each update must reference the current revision.

The Credit System

Platform credits are the fuel that powers everything on the network. Every state transition (creating a document, updating a contract, transferring tokens) costs credits. The credit system is how the platform measures and charges for computational and storage resources.

The balance field is a simple u64 representing the number of credits an identity holds. When an operation is performed, the fee system calculates the cost (as we will see in the Cost Tracking chapter) and deducts it from the identity's balance. If the balance is insufficient, the operation is rejected.
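The check-and-deduct step reduces to a checked subtraction. A minimal sketch (names invented; the real fee flow lives in Drive's fee processing):

```rust
#[derive(Debug, PartialEq)]
enum FeeError {
    InsufficientBalance { needed: u64, available: u64 },
}

// Deduct a fee from an identity balance, rejecting on underflow.
fn deduct_fee(balance: u64, fee: u64) -> Result<u64, FeeError> {
    balance
        .checked_sub(fee)
        .ok_or(FeeError::InsufficientBalance { needed: fee, available: balance })
}

fn main() {
    assert_eq!(deduct_fee(10_000, 2_500), Ok(7_500));
    // Insufficient balance rejects the operation instead of wrapping.
    assert!(deduct_fee(1_000, 2_500).is_err());
}
```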

Public Keys and Key Types

An identity's public keys are not just random cryptographic keys -- they are structured with purpose and security level. Each IdentityPublicKey carries:

  • Key ID (KeyID): A numeric identifier unique within the identity, used to reference the key.
  • Purpose: What the key is for -- authentication, encryption, decryption, transfer, voting, or owner operations.
  • Security Level: How sensitive operations signed by this key are -- master, critical, high, or medium.
  • Key Type: The cryptographic algorithm -- ECDSA_SECP256K1, BLS12_381, ECDSA_HASH160, BIP13_SCRIPT_HASH, or EDDSA_25519_HASH160.
  • Data: The actual public key bytes.
  • Disabled At: An optional timestamp marking when the key was disabled.

This multi-key design means an identity can have a master key stored in cold storage, a critical key for important operations, and a high-level key for everyday use -- all belonging to the same identity. If a day-to-day key is compromised, the master key can disable it and add a replacement without losing the identity.

The PartialIdentity Pattern

Here is where the codebase reveals a practical optimization. Loading a full identity from storage is expensive -- you need to fetch the balance, all the keys, the revision, and potentially more. But most operations do not need all of that. A balance transfer only needs the balance. A document creation only needs to verify one key.

Enter PartialIdentity, defined alongside Identity in packages/rs-dpp/src/identity/identity.rs:

#![allow(unused)]
fn main() {
pub struct PartialIdentity {
    pub id: Identifier,
    pub loaded_public_keys: BTreeMap<KeyID, IdentityPublicKey>,
    pub balance: Option<Credits>,
    pub revision: Option<Revision>,
    pub not_found_public_keys: BTreeSet<KeyID>,
}
}

A PartialIdentity is exactly what it sounds like -- a partially-loaded identity. Notice the differences from Identity:

  • balance is Option<Credits> rather than a bare u64. It might not have been loaded.
  • revision is Option<Revision>. Same story.
  • loaded_public_keys might only contain the specific keys that were requested.
  • not_found_public_keys tracks which keys were requested but did not exist on the identity.

You can convert a full Identity into a PartialIdentity:

#![allow(unused)]
fn main() {
impl IdentityV0 {
    pub fn into_partial_identity_info(self) -> PartialIdentity {
        let Self { id, public_keys, balance, revision, .. } = self;
        PartialIdentity {
            id,
            loaded_public_keys: public_keys,
            balance: Some(balance),
            revision: Some(revision),
            not_found_public_keys: Default::default(),
        }
    }

    pub fn into_partial_identity_info_no_balance(self) -> PartialIdentity {
        let Self { id, public_keys, revision, .. } = self;
        PartialIdentity {
            id,
            loaded_public_keys: public_keys,
            balance: None,  // explicitly not loaded
            revision: Some(revision),
            not_found_public_keys: Default::default(),
        }
    }
}
}

The PartialIdentity pattern is used extensively in Drive's query and validation code. When processing a state transition, Drive fetches only the identity fields it actually needs, wraps them in a PartialIdentity, and passes that through the validation pipeline. This avoids unnecessary storage reads and keeps things efficient.

Identity Nonces

Replay protection on Dash Platform uses nonces rather than sequential transaction counters. There are two kinds:

  1. Identity nonce: A per-identity counter used for identity-level operations (like key updates).
  2. Identity-contract nonce: A per-identity-per-contract counter used for document operations. This allows operations on different contracts to be submitted in parallel without conflicting.

The nonce system is defined in packages/rs-dpp/src/identity/identity_nonce.rs and is more sophisticated than a simple incrementing counter. The nonce value is actually a packed u64 that contains both the counter value and a bitfield tracking recently-used nonces:

#![allow(unused)]
fn main() {
pub const IDENTITY_NONCE_VALUE_FILTER: u64 = 0xFFFFFFFFFF;
pub const MISSING_IDENTITY_REVISIONS_FILTER: u64 = 0xFFFFFF0000000000;
pub const MAX_MISSING_IDENTITY_REVISIONS: u64 = 24;
}

The lower 40 bits hold the current nonce tip. The upper 24 bits form a bitfield that tracks which of the last 24 nonce values have been seen. This allows out-of-order submission within a window: if a user submits nonces 5, 7, and 6 in that order, all three are accepted. But nonce 5 cannot be submitted again because it is already marked in the bitfield.

The validation function checks several conditions:

#![allow(unused)]
fn main() {
pub fn validate_identity_nonce_update(
    existing_nonce: IdentityNonce,
    new_revision_nonce: IdentityNonce,
    identity_id: Identifier,
) -> SimpleConsensusValidationResult {
    let actual_existing_revision = existing_nonce & IDENTITY_NONCE_VALUE_FILTER;
    match actual_existing_revision.cmp(&new_revision_nonce) {
        std::cmp::Ordering::Equal => {
            // Nonce already used at the tip
            // -> NonceAlreadyPresentAtTip error
        }
        std::cmp::Ordering::Less => {
            // Nonce is in the future -- check it's within window
            // -> NonceTooFarInFuture if gap > 24
        }
        std::cmp::Ordering::Greater => {
            // Nonce is in the past -- check bitfield
            // -> NonceTooFarInPast if gap > 24
            // -> NonceAlreadyPresentInPast if bit is already set
        }
    }
}
}

This design balances several concerns:

  • Replay protection: A nonce cannot be reused.
  • Out-of-order tolerance: Within a 24-nonce window, transactions can arrive in any order.
  • Bounded storage: Only 8 bytes are needed to track the full nonce state (the packed u64).
  • Parallel submission: Identity-contract nonces let different contracts have independent nonce spaces.
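The three-way comparison in validate_identity_nonce_update can be modeled as a standalone sketch. Everything below is illustrative: the error names mirror the ones in the comments above, but the function, the exact bit-position convention for the past-nonce bitfield, and the test values are assumptions for demonstration, not the real implementation.

```rust
// Standalone model of the packed-nonce check (illustrative, not Drive's API).
const NONCE_VALUE_FILTER: u64 = 0xFF_FFFF_FFFF; // lower 40 bits: the nonce tip
const MAX_MISSING: u64 = 24; // size of the out-of-order window

#[derive(Debug, PartialEq)]
enum NonceError {
    AlreadyPresentAtTip,
    TooFarInFuture,
    TooFarInPast,
    AlreadyPresentInPast,
}

fn check_nonce(existing: u64, new_nonce: u64) -> Result<(), NonceError> {
    let tip = existing & NONCE_VALUE_FILTER;
    match tip.cmp(&new_nonce) {
        std::cmp::Ordering::Equal => Err(NonceError::AlreadyPresentAtTip),
        std::cmp::Ordering::Less if new_nonce - tip > MAX_MISSING => {
            Err(NonceError::TooFarInFuture)
        }
        std::cmp::Ordering::Less => Ok(()),
        std::cmp::Ordering::Greater => {
            let gap = tip - new_nonce;
            if gap > MAX_MISSING {
                return Err(NonceError::TooFarInPast);
            }
            // Assumed convention: bit (40 + gap - 1) marks nonce (tip - gap) as used.
            if existing & (1u64 << (40 + gap - 1)) != 0 {
                Err(NonceError::AlreadyPresentInPast)
            } else {
                Ok(())
            }
        }
    }
}

fn main() {
    // Tip is 7; nonce 5 (gap 2) is marked used in the bitfield, nonce 6 is not.
    let existing = 7u64 | (1u64 << 41);
    assert_eq!(check_nonce(existing, 7), Err(NonceError::AlreadyPresentAtTip));
    assert_eq!(check_nonce(existing, 5), Err(NonceError::AlreadyPresentInPast));
    assert_eq!(check_nonce(existing, 6), Ok(())); // in-window, never seen
    assert_eq!(check_nonce(existing, 8), Ok(())); // next nonce in the future
    assert_eq!(check_nonce(existing, 32), Err(NonceError::TooFarInFuture));
}
```

This is also why the "do not manually pack or unpack" rule below exists: the window arithmetic is easy to get off by one.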

Creating an Identity

The Identity enum provides versioned constructors:

#![allow(unused)]
fn main() {
impl Identity {
    pub fn new_with_id_and_keys(
        id: Identifier,
        public_keys: BTreeMap<KeyID, IdentityPublicKey>,
        platform_version: &PlatformVersion,
    ) -> Result<Identity, ProtocolError> {
        match platform_version
            .dpp
            .identity_versions
            .identity_structure_version
        {
            0 => {
                let identity_v0 = IdentityV0 {
                    id,
                    public_keys,
                    balance: 0,
                    revision: 0,
                };
                Ok(identity_v0.into())
            }
            version => Err(ProtocolError::UnknownVersionMismatch {
                method: "Identity::new_with_id_and_keys".to_string(),
                known_versions: vec![0],
                received: version,
            }),
        }
    }
}
}

New identities start with a balance of 0 and a revision of 0. The balance is filled by the identity creation state transition (which includes an asset lock or address-based funding), and the revision increments from there.

How Identity Differs from Other Types

One important thing to note: the identity is not stored as a single blob in Drive. Unlike documents and data contracts (which are serialized and stored as items), identity fields are stored in separate locations within GroveDB's tree structure. The balance is in one place, each key is in another, the revision somewhere else. This is because different operations need to update different parts of the identity independently and atomically.

The Identity struct is primarily used for:

  • Creating new identities (assembling all fields for the creation state transition)
  • Client-side representation (what the SDK returns when you query an identity)
  • Transport (serialized for gRPC responses)

Inside Drive and ABCI, you will more commonly see PartialIdentity or direct field access through Drive's identity methods.

Rules and Guidelines

Do:

  • Use PartialIdentity when you only need a subset of identity fields. It avoids unnecessary storage reads.
  • Validate nonces through the provided validate_identity_nonce_update function -- the bitfield logic is subtle.
  • Always go through PlatformVersion when constructing identities to ensure the correct structure version.

Do not:

  • Assume an identity has only one key. Identities commonly have multiple keys with different purposes and security levels.
  • Manually pack or unpack nonce bitfields. Use the provided constants and validation functions.
  • Store or cache full Identity objects when a PartialIdentity would suffice. The full identity can be large if it has many keys.
  • Treat identity balances as Dash amounts. Credits are the unit of account on the platform; conversion to and from Dash happens at the protocol level.

Grove Operations

Drive is the storage layer of Dash Platform, and GroveDB is the authenticated data structure (a Merkle tree of trees) that Drive uses under the hood. But Drive never talks to GroveDB directly in its business logic. Instead, every single GroveDB call is wrapped in a versioned Drive method that follows a consistent pattern. This chapter explains why that wrapper layer exists and how it works.

The Problem: Raw GroveDB Is Too Low-Level

If you were to call GroveDB directly throughout Drive's codebase, you would face several problems:

  1. No cost tracking. GroveDB operations return a CostContext that wraps both the result and an OperationCost. If you forget to capture that cost, the fee system breaks.
  2. No version dispatch. Different protocol versions might need different behavior for the same logical operation (like how to handle estimated costs vs. actual costs).
  3. No consistent API. Each caller would need to handle cost capture, error conversion, and version checking independently.

The grove operations layer solves all three problems by providing a single, consistent abstraction. Every grove operation is a method on Drive that takes a path, a key, some type information, a transaction, and the mutable drive_operations accumulator.

The Module Structure

The grove operations live in packages/rs-drive/src/util/grove_operations/. Each operation is its own submodule:

grove_operations/
    mod.rs                    -- shared types and helpers
    grove_insert/
        mod.rs                -- version dispatcher
        v0/mod.rs             -- v0 implementation
    grove_get_raw/
        mod.rs                -- version dispatcher
        v0/mod.rs             -- v0 implementation
    grove_delete/
    grove_get/
    grove_get_raw_optional/
    grove_has_raw/
    batch_insert/
    batch_insert_empty_tree/
    batch_delete/
    ... (30+ more)

Each submodule follows the same structure: a mod.rs that dispatches on the version, and a v0/mod.rs (and potentially v1/, v2/, etc.) with the actual implementation.

The drive_operations Accumulator Pattern

This is the most important pattern to understand. Almost every grove operation method accepts a mutable reference to a Vec<LowLevelDriveOperation>:

#![allow(unused)]
fn main() {
drive_operations: &mut Vec<LowLevelDriveOperation>
}

Instead of returning costs directly, the method pushes the cost of its GroveDB call onto this vector. The caller passes the same vector through multiple operations, accumulating all costs. Later, the batch application system processes this vector to calculate the total fee.

Why accumulate rather than execute immediately? Two reasons:

  1. Fee estimation. When apply is false, Drive needs to estimate costs without actually writing to the database. The operations still accumulate cost information, but no state changes occur.
  2. Atomic batching. Multiple operations can be collected and then applied as a single atomic batch. More on this in the Batch Operations chapter.
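The shape of the accumulator pattern can be shown with a toy. The types and cost numbers below are invented for illustration; they stand in for LowLevelDriveOperation and GroveDB's real cost tracking.

```rust
// Toy version of the cost-accumulator pattern: operations push their cost
// into a caller-owned vector instead of returning it.
struct OperationCost {
    seek_count: u32,
    loaded_bytes: u64,
}

// Hypothetical read operation: charges one seek plus the bytes it "loaded".
fn read_element(key: &[u8], drive_operations: &mut Vec<OperationCost>) -> Vec<u8> {
    drive_operations.push(OperationCost {
        seek_count: 1,
        loaded_bytes: key.len() as u64,
    });
    key.to_vec()
}

fn main() {
    let mut drive_operations = Vec::new();
    // The same vector threads through every call in the chain...
    read_element(b"balance", &mut drive_operations);
    read_element(b"revision", &mut drive_operations);
    // ...and the caller totals the accumulated costs at the end.
    let total_seeks: u32 = drive_operations.iter().map(|c| c.seek_count).sum();
    assert_eq!(total_seeks, 2);
}
```

The design choice to pass `&mut Vec<...>` down the stack (rather than returning costs up) is what lets a deep call chain contribute costs without every intermediate function having to merge return values.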

A Concrete Example: grove_get_raw

Let us trace through a complete grove operation. The version dispatcher is in packages/rs-drive/src/util/grove_operations/grove_get_raw/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub fn grove_get_raw<B: AsRef<[u8]>>(
        &self,
        path: SubtreePath<'_, B>,
        key: &[u8],
        direct_query_type: DirectQueryType,
        transaction: TransactionArg,
        drive_operations: &mut Vec<LowLevelDriveOperation>,
        drive_version: &DriveVersion,
    ) -> Result<Option<Element>, Error> {
        match drive_version.grove_methods.basic.grove_get_raw {
            0 => self.grove_get_raw_v0(
                path, key, direct_query_type,
                transaction, drive_operations, drive_version,
            ),
            version => Err(Error::Drive(DriveError::UnknownVersionMismatch {
                method: "grove_get_raw".to_string(),
                known_versions: vec![0],
                received: version,
            })),
        }
    }
}
}

The dispatcher consults drive_version.grove_methods.basic.grove_get_raw to determine which implementation version to call. If the version is unknown, it returns an error immediately.

Now the v0 implementation, from grove_get_raw/v0/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub(super) fn grove_get_raw_v0<B: AsRef<[u8]>>(
        &self,
        path: SubtreePath<'_, B>,
        key: &[u8],
        direct_query_type: DirectQueryType,
        transaction: TransactionArg,
        drive_operations: &mut Vec<LowLevelDriveOperation>,
        drive_version: &DriveVersion,
    ) -> Result<Option<Element>, Error> {
        match direct_query_type {
            DirectQueryType::StatelessDirectQuery {
                in_tree_type,
                query_target,
            } => {
                let key_info_path = KeyInfoPath::from_known_owned_path(path.to_vec());
                let key_info = KeyInfo::KnownKey(key.to_vec());
                let cost = match query_target {
                    QueryTarget::QueryTargetTree(flags_size, tree_type) => {
                        GroveDb::average_case_for_get_tree(
                            &key_info_path, &key_info, flags_size,
                            tree_type, in_tree_type,
                            &drive_version.grove_version,
                        )
                    }
                    QueryTarget::QueryTargetValue(estimated_value_size) => {
                        GroveDb::average_case_for_get_raw(
                            &key_info_path, &key_info,
                            estimated_value_size, in_tree_type,
                            &drive_version.grove_version,
                        )
                    }
                }?;
                drive_operations.push(CalculatedCostOperation(cost));
                Ok(None) // No actual data -- just cost estimation
            }
            DirectQueryType::StatefulDirectQuery => {
                let CostContext { value, cost } = self.grove.get_raw(
                    path, key, transaction,
                    &drive_version.grove_version,
                );
                drive_operations.push(CalculatedCostOperation(cost));
                Ok(Some(value.map_err(Error::from)?))
            }
        }
    }
}
}

This reveals the dual nature of every grove operation: it can operate in stateless mode (for cost estimation) or stateful mode (for actual execution). In stateless mode, it calculates the average-case cost without touching the database and returns None. In stateful mode, it performs the actual GroveDB read and returns the element.

The DirectQueryType Enum

The DirectQueryType enum, defined in packages/rs-drive/src/util/grove_operations/mod.rs, controls this dual behavior:

#![allow(unused)]
fn main() {
pub enum DirectQueryType {
    StatelessDirectQuery {
        in_tree_type: TreeType,
        query_target: QueryTarget,
    },
    StatefulDirectQuery,
}
}

  • StatelessDirectQuery: Used for fee estimation. Provides the tree type and query target so the system can calculate costs without reading from disk. The QueryTarget specifies whether we are querying for a tree (with flags) or a value (with an estimated size).

  • StatefulDirectQuery: Used for actual execution. The system reads from GroveDB and returns real data.

There is also a more general QueryType enum that adds reference size estimation:

#![allow(unused)]
fn main() {
pub enum QueryType {
    StatelessQuery {
        in_tree_type: TreeType,
        query_target: QueryTarget,
        estimated_reference_sizes: Vec<u32>,
    },
    StatefulQuery,
}
}

And a QueryTarget enum that specifies what kind of element we expect to find:

#![allow(unused)]
fn main() {
pub enum QueryTarget {
    QueryTargetTree(FlagsLen, TreeType),
    QueryTargetValue(u32),  // estimated value size in bytes
}
}

Another Example: grove_insert

Inserts follow the same pattern. From packages/rs-drive/src/util/grove_operations/grove_insert/v0/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub(super) fn grove_insert_v0<B: AsRef<[u8]>>(
        &self,
        path: SubtreePath<'_, B>,
        key: &[u8],
        element: Element,
        transaction: TransactionArg,
        options: Option<InsertOptions>,
        drive_operations: &mut Vec<LowLevelDriveOperation>,
        drive_version: &DriveVersion,
    ) -> Result<(), Error> {
        let cost_context = self.grove.insert(
            path, key, element, options, transaction,
            &drive_version.grove_version,
        );
        push_drive_operation_result(cost_context, drive_operations)
    }
}
}

This is simpler than the get because grove_insert is always stateful; stateless cost estimation for inserts goes through the batch apply types instead. The push_drive_operation_result helper extracts the cost from GroveDB's CostContext and pushes it onto the operations vector:

#![allow(unused)]
fn main() {
fn push_drive_operation_result<T>(
    cost_context: CostContext<Result<T, GroveError>>,
    drive_operations: &mut Vec<LowLevelDriveOperation>,
) -> Result<T, Error> {
    let CostContext { value, cost } = cost_context;
    if !cost.is_nothing() {
        drive_operations.push(CalculatedCostOperation(cost));
    }
    value.map_err(Error::from)
}
}

Notice the is_nothing() check -- if an operation has zero cost (which can happen), we skip pushing to avoid cluttering the vector.

Batch Apply Types

For operations that work in batch mode (building up a batch of operations to apply atomically), there are corresponding apply-type enums. For example:

#![allow(unused)]
fn main() {
pub enum BatchDeleteApplyType {
    StatelessBatchDelete {
        in_tree_type: TreeType,
        estimated_key_size: u32,
        estimated_value_size: u32,
    },
    StatefulBatchDelete {
        is_known_to_be_subtree_with_sum: Option<MaybeTree>,
    },
}

pub enum BatchInsertTreeApplyType {
    StatelessBatchInsertTree {
        in_tree_type: TreeType,
        tree_type: TreeType,
        flags_len: FlagsLen,
    },
    StatefulBatchInsertTree,
}
}

These follow the same stateless/stateful split. The stateless variants carry enough information to estimate costs without touching the database, while the stateful variants trigger actual operations. Each batch apply type can be converted to a DirectQueryType for use with the lower-level grove operations.

The GroveDBToUse Enum

A recent addition supports querying different GroveDB instances:

#![allow(unused)]
fn main() {
pub enum GroveDBToUse {
    Current,
    LatestCheckpoint,
    Checkpoint(u64),
}
}

This enables queries against historical checkpoints -- useful for proof generation and state verification at specific block heights.

Method Signature Conventions

Across all grove operations, you will notice a consistent parameter ordering:

&self, path, key, [element], [query_type], transaction, drive_operations, drive_version

  1. &self -- the Drive instance (which holds the GroveDB handle)
  2. Path -- where in the tree
  3. Key -- which element at that path
  4. Element -- the data to write (for inserts/replaces)
  5. Query type -- stateless vs. stateful
  6. Transaction -- the GroveDB transaction context
  7. drive_operations -- the mutable cost accumulator
  8. drive_version -- for version dispatching

This consistency makes the codebase navigable even though there are 30+ different grove operations.

Rules and Guidelines

Do:

  • Always use the grove operation wrappers on Drive. Never call self.grove.insert() or self.grove.get() directly in business logic.
  • Pass the drive_operations vector through every call chain. It is how costs propagate upward.
  • Use StatelessDirectQuery for fee estimation and StatefulDirectQuery for actual execution.

Do not:

  • Ignore the cost returned by GroveDB operations. The push_drive_operation_result helper exists for this reason.
  • Mix stateful and stateless queries in a single estimation pass. Pick one mode and stick with it.
  • Create new grove operations without following the mod.rs + v0/mod.rs dispatcher pattern. Consistency is critical.

Batch Operations

Individual grove operations are the atoms. Batch operations are the molecules. Dash Platform never applies a single database write in isolation -- every state transition (a document creation, a contract update, a balance transfer) results in a batch of operations that are applied atomically. Either they all succeed, or none of them do. This chapter covers how that batching works at every level of the stack.

The Three Levels of Abstraction

Drive has three layers of operation abstraction, each serving a different purpose:

  1. DriveOperation -- High-level, domain-aware operations like "add this document" or "apply this contract."
  2. LowLevelDriveOperation -- Individual grove operations, cost calculations, and function costs.
  3. GroveDbOpBatch -- The final flat list of QualifiedGroveDbOp items that GroveDB applies atomically.

The flow is always top-down: DriveOperation -> LowLevelDriveOperation -> GroveDbOpBatch.
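That top-down flattening can be sketched with a toy model. The enum names, the specific low-level ops each high-level op expands into, and the partition step are all illustrative stand-ins for DriveOperation, LowLevelDriveOperation, and the GroveDbOpBatch split covered later in this chapter.

```rust
// Toy model of the three-layer flattening (all names and op counts illustrative).
#[derive(Debug)]
enum HighLevelOp {
    ApplyContract,
    AddDocument,
}

#[derive(Debug)]
enum LowLevelOp {
    GroveInsert(&'static str), // would become a QualifiedGroveDbOp
    Cost(u64),                 // kept aside for fee calculation
}

// Each high-level op expands into several low-level ops.
fn expand(op: HighLevelOp) -> Vec<LowLevelOp> {
    match op {
        HighLevelOp::ApplyContract => {
            vec![LowLevelOp::GroveInsert("contract"), LowLevelOp::Cost(10)]
        }
        HighLevelOp::AddDocument => vec![
            LowLevelOp::GroveInsert("document"),
            LowLevelOp::GroveInsert("index entry"),
            LowLevelOp::Cost(25),
        ],
    }
}

fn main() {
    let high = vec![HighLevelOp::ApplyContract, HighLevelOp::AddDocument];
    let low: Vec<LowLevelOp> = high.into_iter().flat_map(expand).collect();
    // Grove ops form the atomic batch; the leftovers feed fee calculation.
    let (batch, leftovers): (Vec<_>, Vec<_>) = low
        .into_iter()
        .partition(|op| matches!(op, LowLevelOp::GroveInsert(_)));
    assert_eq!(batch.len(), 3);
    assert_eq!(leftovers.len(), 2);
}
```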

DriveOperation: The High-Level API

The DriveOperation enum lives in packages/rs-drive/src/util/batch/drive_op_batch/mod.rs and represents every kind of operation the platform can perform:

#![allow(unused)]
fn main() {
pub enum DriveOperation<'a> {
    DataContractOperation(DataContractOperationType<'a>),
    DocumentOperation(DocumentOperationType<'a>),
    TokenOperation(TokenOperationType),
    WithdrawalOperation(WithdrawalOperationType),
    IdentityOperation(IdentityOperationType),
    PrefundedSpecializedBalanceOperation(PrefundedSpecializedBalanceOperationType),
    SystemOperation(SystemOperationType),
    GroupOperation(GroupOperationType),
    AddressFundsOperation(AddressFundsOperationType),
    GroveDBOperation(QualifiedGroveDbOp),
    GroveDBOpBatch(GroveDbOpBatch),
}
}

Each variant wraps a domain-specific operation type. For example, a DocumentOperationType might be an AddDocument, UpdateDocument, or DeleteDocument. A DataContractOperationType might be ApplyContract. And so on.

The last two variants -- GroveDBOperation and GroveDBOpBatch -- are escape hatches for when code already has raw GroveDB operations and just wants to include them in the batch.

The DriveLowLevelOperationConverter Trait

Every DriveOperation variant knows how to convert itself into a list of low-level operations. This is defined by the DriveLowLevelOperationConverter trait:

#![allow(unused)]
fn main() {
pub trait DriveLowLevelOperationConverter {
    fn into_low_level_drive_operations(
        self,
        drive: &Drive,
        estimated_costs_only_with_layer_info: &mut Option<
            HashMap<KeyInfoPath, EstimatedLayerInformation>,
        >,
        block_info: &BlockInfo,
        transaction: TransactionArg,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<LowLevelDriveOperation>, Error>;
}
}

The estimated_costs_only_with_layer_info parameter is key. When it is None, the converter performs actual operations (stateful mode). When it is Some(HashMap), the converter only estimates costs and fills in layer information for GroveDB's cost estimation (stateless mode).
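The way a single Option doubles as a mode switch can be sketched in miniature. This is a simplified stand-in, not the real signature: the function name, the map's value type, and the "write 42" state change are invented for demonstration.

```rust
use std::collections::HashMap;

// Simplified stand-in for the estimated_costs_only_with_layer_info switch:
// `None` means execute for real, `Some(map)` means estimate only.
fn insert_element(
    key: &str,
    estimated_only: &mut Option<HashMap<String, u32>>,
    state: &mut HashMap<String, u32>,
) {
    match estimated_only {
        Some(layer_info) => {
            // Estimation pass: record (hypothetical) layer info, touch no state.
            layer_info.insert(key.to_string(), 1);
        }
        None => {
            // Stateful pass: actually perform the write.
            state.insert(key.to_string(), 42);
        }
    }
}

fn main() {
    let mut state = HashMap::new();

    // apply == false: estimation only, state stays untouched.
    let mut estimate = Some(HashMap::new());
    insert_element("document", &mut estimate, &mut state);
    assert!(state.is_empty());

    // apply == true: the write really happens.
    insert_element("document", &mut None, &mut state);
    assert_eq!(state.get("document"), Some(&42));
}
```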

The DriveOperation enum implements this trait by dispatching to each variant:

#![allow(unused)]
fn main() {
impl DriveLowLevelOperationConverter for DriveOperation<'_> {
    fn into_low_level_drive_operations(
        self, drive: &Drive,
        estimated_costs_only_with_layer_info: &mut Option<
            HashMap<KeyInfoPath, EstimatedLayerInformation>
        >,
        block_info: &BlockInfo,
        transaction: TransactionArg,
        platform_version: &PlatformVersion,
    ) -> Result<Vec<LowLevelDriveOperation>, Error> {
        match self {
            DriveOperation::DataContractOperation(op) =>
                op.into_low_level_drive_operations(
                    drive, estimated_costs_only_with_layer_info,
                    block_info, transaction, platform_version,
                ),
            DriveOperation::DocumentOperation(op) =>
                op.into_low_level_drive_operations(
                    drive, estimated_costs_only_with_layer_info,
                    block_info, transaction, platform_version,
                ),
            // ... each variant delegates to its own converter
            DriveOperation::GroveDBOperation(op) =>
                Ok(vec![GroveOperation(op)]),
            DriveOperation::GroveDBOpBatch(operations) =>
                Ok(operations.operations.into_iter()
                    .map(GroveOperation).collect()),
        }
    }
}
}

The apply_drive_operations Flow

The centerpiece of the batch system is Drive::apply_drive_operations. From packages/rs-drive/src/util/batch/drive_op_batch/drive_methods/apply_drive_operations/v0/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub(crate) fn apply_drive_operations_v0(
        &self,
        operations: Vec<DriveOperation>,
        apply: bool,
        block_info: &BlockInfo,
        transaction: TransactionArg,
        platform_version: &PlatformVersion,
        previous_fee_versions: Option<&CachedEpochIndexFeeVersions>,
    ) -> Result<FeeResult, Error> {
        if operations.is_empty() {
            return Ok(FeeResult::default());
        }

        let mut low_level_operations = vec![];
        let mut estimated_costs_only_with_layer_info = if apply {
            None::<HashMap<KeyInfoPath, EstimatedLayerInformation>>
        } else {
            Some(HashMap::new())
        };

        let mut finalize_tasks: Vec<DriveOperationFinalizeTask> = Vec::new();

        for drive_op in operations {
            // Collect finalize tasks before conversion
            if let Some(tasks) = drive_op.finalization_tasks(platform_version)? {
                finalize_tasks.extend(tasks);
            }

            // Convert high-level to low-level operations
            low_level_operations.append(
                &mut drive_op.into_low_level_drive_operations(
                    self,
                    &mut estimated_costs_only_with_layer_info,
                    block_info,
                    transaction,
                    platform_version,
                )?
            );
        }

        let mut cost_operations = vec![];

        // Apply the batch atomically
        self.apply_batch_low_level_drive_operations(
            estimated_costs_only_with_layer_info,
            transaction,
            low_level_operations,
            &mut cost_operations,
            &platform_version.drive,
        )?;

        // Execute post-commit finalize tasks
        for task in finalize_tasks {
            task.execute(self, platform_version);
        }

        // Calculate total fee from accumulated costs
        Drive::calculate_fee(
            None,
            Some(cost_operations),
            &block_info.epoch,
            self.config.epochs_per_era,
            platform_version,
            previous_fee_versions,
        )
    }
}
}

Let us trace the flow step by step:

  1. Mode selection. If apply is true, estimated_costs_only_with_layer_info is None, triggering stateful execution. If false, it is Some(HashMap), triggering cost estimation only.

  2. Finalize task collection. Before converting each operation, we collect any finalize tasks it declares. These are post-commit callbacks (covered in the Finalize Tasks chapter).

  3. Conversion. Each DriveOperation is converted into zero or more LowLevelDriveOperation items. A single high-level operation like "add document" might produce dozens of low-level operations (insert the document itself, update each index, update the contract's document count, etc.).

  4. Batch application. All low-level operations are applied as a single atomic batch through apply_batch_low_level_drive_operations.

  5. Finalization. Post-commit tasks execute (like invalidating caches).

  6. Fee calculation. The accumulated cost operations are converted into a FeeResult.

GroveDbOpBatch: The Final Layer

Before operations hit GroveDB, they are split into two categories. From packages/rs-drive/src/util/operations/apply_batch_low_level_drive_operations/v0/mod.rs:

#![allow(unused)]
fn main() {
impl Drive {
    pub(crate) fn apply_batch_low_level_drive_operations_v0(
        &self,
        estimated_costs_only_with_layer_info: Option<
            HashMap<KeyInfoPath, EstimatedLayerInformation>,
        >,
        transaction: TransactionArg,
        batch_operations: Vec<LowLevelDriveOperation>,
        drive_operations: &mut Vec<LowLevelDriveOperation>,
        drive_version: &DriveVersion,
    ) -> Result<(), Error> {
        let (grove_db_operations, mut other_operations) =
            LowLevelDriveOperation::grovedb_operations_batch_consume_with_leftovers(
                batch_operations,
            );
        if !grove_db_operations.is_empty() {
            self.apply_batch_grovedb_operations(
                estimated_costs_only_with_layer_info,
                transaction,
                grove_db_operations,
                drive_operations,
                drive_version,
            )?;
        }
        drive_operations.append(&mut other_operations);
        Ok(())
    }
}
}

The grovedb_operations_batch_consume_with_leftovers method partitions the operations:

  • GroveOperation variants become a GroveDbOpBatch that is applied atomically to GroveDB.
  • Everything else (CalculatedCostOperation, FunctionOperation, PreCalculatedFeeResult) is kept as-is for fee calculation.

The GroveDbOpBatch itself is defined in packages/rs-drive/src/util/batch/grovedb_op_batch/mod.rs:

#![allow(unused)]
fn main() {
pub struct GroveDbOpBatch {
    pub(crate) operations: Vec<QualifiedGroveDbOp>,
}
}

It is a thin wrapper around a vector of QualifiedGroveDbOp -- GroveDB's native batch operation type. The wrapper provides convenience methods for building batches:

#![allow(unused)]
fn main() {
pub trait GroveDbOpBatchV0Methods {
    fn new() -> Self;
    fn push(&mut self, op: QualifiedGroveDbOp);
    fn add_insert_empty_tree(&mut self, path: Vec<Vec<u8>>, key: Vec<u8>);
    fn add_insert_empty_sum_tree(&mut self, path: Vec<Vec<u8>>, key: Vec<u8>);
    fn add_delete(&mut self, path: Vec<Vec<u8>>, key: Vec<u8>);
    fn add_insert(&mut self, path: Vec<Vec<u8>>, key: Vec<u8>, element: Element);
    fn verify_consistency_of_operations(&self) -> GroveDbOpConsistencyResults;
    fn contains<'c, P>(&self, path: P, key: &[u8]) -> Option<&GroveOp>;
    fn remove<'c, P>(&mut self, path: P, key: &[u8]) -> Option<GroveOp>;
    fn remove_if_insert(&mut self, path: Vec<Vec<u8>>, key: &[u8]) -> Option<GroveOp>;
}
}

The verify_consistency_of_operations method is particularly important -- it checks that the batch does not contain conflicting operations (like inserting and deleting the same key).

Building a Batch: A Real Example

The test code in drive_op_batch/mod.rs shows how a typical batch is assembled:

#![allow(unused)]
fn main() {
let mut drive_operations = vec![];

// Step 1: Apply a contract
drive_operations.push(DataContractOperation(ApplyContract {
    contract: Cow::Borrowed(&contract),
    storage_flags: None,
}));

// Step 2: Add a document
drive_operations.push(DocumentOperation(AddDocument {
    owned_document_info: OwnedDocumentInfo {
        document_info: DocumentRefInfo((
            &document,
            StorageFlags::optional_default_as_cow(),
        )),
        owner_id: None,
    },
    contract_info: DataContractInfo::BorrowedDataContract(&contract),
    document_type_info: DocumentTypeInfo::DocumentTypeRef(document_type),
    override_document: false,
}));

// Step 3: Apply everything atomically
drive.apply_drive_operations(
    drive_operations,
    true,  // actually apply, not just estimate
    &BlockInfo::default(),
    Some(&db_transaction),
    platform_version,
    None,
)?;
}

The contract application and document insertion happen in the same atomic batch. If the document insert fails (perhaps due to a uniqueness constraint violation), the contract application is also rolled back. This all-or-nothing guarantee is fundamental to platform correctness.

The Display Implementation

The GroveDbOpBatch has a custom Display implementation that produces human-readable output, mapping raw byte paths to meaningful names:

#![allow(unused)]
fn main() {
impl fmt::Display for GroveDbOpBatch {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        for op in &self.operations {
            let (path_string, known_path) = readable_path(&op.path);
            let (key_string, _) = readable_key_info(known_path, &op.key);
            writeln!(f, "   Path: {}", path_string)?;
            writeln!(f, "   Key: {}", key_string)?;
            // ... operation details
        }
        Ok(())
    }
}
}

This translates paths like [0x03] into Identities(3) and keys like 32-byte arrays into IdentityId(bs58::...). Invaluable for debugging.

Rules and Guidelines

Do:

  • Always use apply_drive_operations for applying batches. It handles the full pipeline: conversion, application, finalization, and fee calculation.
  • Collect all operations for a state transition into a single Vec<DriveOperation> before applying.
  • Use the apply: false flag for dry-run fee estimation before committing.

Do not:

  • Apply operations one at a time. Always batch them for atomicity.
  • Mix stateful and stateless operations in the same batch application pass.
  • Forget to handle the FeeResult returned by apply_drive_operations. The fee system depends on it.
  • Manually construct GroveDbOpBatch objects unless you are working at the lowest level. Prefer DriveOperation for business logic.

Cost Tracking

Dash Platform is a fee-based system. Every operation -- reading a document, inserting a key, hashing a value -- has a cost measured in platform credits. This chapter explains how Drive tracks those costs from individual operations all the way through to the final fee result.

The Problem: Why Explicit Cost Tracking?

Some systems use gas metering: you start with a budget, and every opcode decrements it. Dash Platform takes a different approach. Instead of metering, it accumulates the costs of operations as they execute and then calculates the total fee at the end.

This matters because:

  1. Costs depend on what actually happened. A GroveDB insert into a deep tree costs more than one into a shallow tree because of the Merkle proof updates involved. You cannot know this in advance -- you have to do the insert and measure.
  2. Storage fees and processing fees are different. Storage fees are based on bytes added and removed. Processing fees cover computation: seeks, hash operations, byte loading. They need to be calculated separately.
  3. Refunds are possible. When data is removed from storage, the original storage fee can be partially refunded. This requires knowing when the data was originally stored (which epoch), making the calculation depend on historical state.
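The contrast with gas metering can be sketched in a few lines. This is an illustrative model only -- the enum and rate names below are hypothetical, not Platform types -- but it shows the shape: operations record what they cost as they actually run, and the total is computed once at the end:

```rust
// Illustrative accumulate-then-total model (hypothetical names,
// not Platform code): no upfront budget, just recorded costs.
enum RecordedCost {
    StorageBytesAdded(u64),
    Seeks(u64),
}

fn total_fee(costs: &[RecordedCost], credit_per_byte: u64, credit_per_seek: u64) -> u64 {
    costs.iter().fold(0u64, |acc, c| {
        let cost = match c {
            RecordedCost::StorageBytesAdded(b) => *b * credit_per_byte,
            RecordedCost::Seeks(n) => *n * credit_per_seek,
        };
        acc.saturating_add(cost)
    })
}

fn main() {
    // Costs are pushed as operations execute...
    let costs = vec![RecordedCost::Seeks(3), RecordedCost::StorageBytesAdded(100)];
    // ...and only totaled at the end.
    assert_eq!(total_fee(&costs, 27, 1000), 100 * 27 + 3 * 1000);
}
```

The point is that the recorded costs reflect what the insert actually did (how deep the tree was, how many seeks occurred), not a static price list consulted in advance.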

LowLevelDriveOperation: The Cost Carrier

The LowLevelDriveOperation enum is the vehicle that carries cost information through the system. Defined in packages/rs-drive/src/fees/op.rs:

#![allow(unused)]
fn main() {
pub enum LowLevelDriveOperation {
    GroveOperation(QualifiedGroveDbOp),
    FunctionOperation(FunctionOp),
    CalculatedCostOperation(OperationCost),
    PreCalculatedFeeResult(FeeResult),
}
}

Four variants, each representing a different kind of cost:

GroveOperation

A raw GroveDB operation (insert, delete, get, etc.) that has not yet been executed. When a batch is built up (as described in the Batch Operations chapter), individual grove operations accumulate as this variant. They carry no cost yet -- the cost is determined when the batch is applied to GroveDB.

CalculatedCostOperation

An OperationCost from a GroveDB operation that has already been executed (or estimated). This is what gets pushed onto the drive_operations vector by the grove operation wrappers:

#![allow(unused)]
fn main() {
fn push_drive_operation_result<T>(
    cost_context: CostContext<Result<T, GroveError>>,
    drive_operations: &mut Vec<LowLevelDriveOperation>,
) -> Result<T, Error> {
    let CostContext { value, cost } = cost_context;
    if !cost.is_nothing() {
        drive_operations.push(CalculatedCostOperation(cost));
    }
    value.map_err(Error::from)
}
}

The OperationCost (from grovedb_costs) tracks:

  • seek_count: Number of disk seeks performed
  • storage_cost: Bytes added, replaced, and removed (with per-epoch tracking for refunds)
  • storage_loaded_bytes: Bytes read from storage
  • hash_node_calls: Number of hash operations for Merkle tree updates

FunctionOperation

Represents the cost of a pure computation like hashing. Defined as:

#![allow(unused)]
fn main() {
pub struct FunctionOp {
    pub(crate) hash: HashFunction,
    pub(crate) rounds: u32,
}
}

With supported hash functions:

#![allow(unused)]
fn main() {
pub enum HashFunction {
    Sha256RipeMD160,
    Sha256,
    Sha256_2,  // Double SHA-256
    Blake3,
}
}

Each hash function has a base cost and a per-block cost. The total cost of a FunctionOp is:

#![allow(unused)]
fn main() {
impl FunctionOp {
    fn cost(&self, fee_version: &FeeVersion) -> Credits {
        let block_cost = (self.rounds as u64)
            .saturating_mul(self.hash.block_cost(fee_version));
        self.hash.base_cost(fee_version).saturating_add(block_cost)
    }
}
}

You can create a FunctionOp either by specifying the number of rounds directly or by providing the byte count (which calculates rounds based on the hash function's block size):

#![allow(unused)]
fn main() {
impl FunctionOp {
    pub fn new_with_round_count(hash: HashFunction, rounds: u32) -> Self {
        FunctionOp { hash, rounds }
    }

    pub fn new_with_byte_count(hash: HashFunction, byte_count: u16) -> Self {
        let blocks = byte_count / hash.block_size() + 1;
        let rounds = blocks + hash.rounds() - 1;
        FunctionOp { hash, rounds: rounds as u32 }
    }
}
}
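A worked example of the formula, with made-up rates (the real rates live in FeeVersion and vary by protocol version; the hash-specific extra rounds from hash.rounds() are ignored here for simplicity):

```rust
// Hypothetical rates for illustration -- real values come from FeeVersion.
fn rounds_for_bytes(byte_count: u16, block_size: u16) -> u32 {
    // Mirrors new_with_byte_count: integer division plus one block
    (byte_count / block_size + 1) as u32
}

fn function_op_cost(base_cost: u64, per_block_cost: u64, rounds: u32) -> u64 {
    // cost = base + rounds * per-block, as in FunctionOp::cost
    (rounds as u64)
        .saturating_mul(per_block_cost)
        .saturating_add(base_cost)
}

fn main() {
    // 150 bytes with a 64-byte block size -> 150 / 64 + 1 = 3 rounds
    let rounds = rounds_for_bytes(150, 64);
    assert_eq!(rounds, 3);
    // With a base of 400 and 100 per block: 400 + 3 * 100 = 700 credits
    assert_eq!(function_op_cost(400, 100, rounds), 700);
}
```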

PreCalculatedFeeResult

A fee result that was already computed elsewhere and just needs to be included in the total. This is a pass-through -- no further calculation needed.

BaseOp: Arithmetic Operation Costs

For simple computational operations (not storage-related), the BaseOp enum provides fixed costs:

#![allow(unused)]
fn main() {
pub enum BaseOp {
    Stop, Add, Mul, Sub, Div, Sdiv, Mod, Smod,
    Addmod, Mulmod, Signextend,
    Lt, Gt, Slt, Sgt, Eq, Iszero,
    And, Or, Xor, Not, Byte,
}

impl BaseOp {
    pub fn cost(&self) -> u64 {
        match self {
            BaseOp::Stop => 0,
            BaseOp::Add | BaseOp::Sub => 12,
            BaseOp::Mul | BaseOp::Div | BaseOp::Sdiv |
            BaseOp::Mod | BaseOp::Smod | BaseOp::Signextend => 20,
            BaseOp::Addmod | BaseOp::Mulmod => 32,
            BaseOp::Lt | BaseOp::Gt | BaseOp::Slt |
            BaseOp::Sgt | BaseOp::Eq | BaseOp::Iszero |
            BaseOp::And | BaseOp::Or | BaseOp::Xor |
            BaseOp::Not | BaseOp::Byte => 12,
        }
    }
}
}

These are EVM-inspired operation costs, adapted for the platform's fee model. Comparisons and bitwise operations cost 12 credits. Multiplication and division cost 20. Modular arithmetic costs 32.
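Costing a small computation with this table is just a sum. A sketch using a trimmed-down copy of the enum (only three variants, same costs as above):

```rust
// Trimmed copy of BaseOp for illustration; costs match the table above.
enum BaseOp {
    Add,
    Mul,
    Eq,
}

impl BaseOp {
    fn cost(&self) -> u64 {
        match self {
            BaseOp::Add | BaseOp::Eq => 12,
            BaseOp::Mul => 20,
        }
    }
}

fn main() {
    // Costing the expression (a + b) * c == d: one Add, one Mul, one Eq
    let ops = [BaseOp::Add, BaseOp::Mul, BaseOp::Eq];
    let total: u64 = ops.iter().map(|op| op.cost()).sum();
    assert_eq!(total, 12 + 20 + 12); // 44 credits
}
```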

The consume_to_fees_v0 Pipeline

When all operations are collected, they are converted into fee results through consume_to_fees_v0:

#![allow(unused)]
fn main() {
pub fn consume_to_fees_v0(
    drive_operations: Vec<LowLevelDriveOperation>,
    epoch: &Epoch,
    epochs_per_era: u16,
    fee_version: &FeeVersion,
    previous_fee_versions: Option<&CachedEpochIndexFeeVersions>,
) -> Result<Vec<FeeResult>, Error> {
    drive_operations.into_iter().map(|operation| match operation {
        PreCalculatedFeeResult(f) => Ok(f),

        FunctionOperation(op) => Ok(FeeResult {
            processing_fee: op.cost(fee_version),
            ..Default::default()
        }),

        _ => {
            let cost = operation.operation_cost()?;

            // Storage fee: bytes added * rate per byte
            let storage_fee = cost.storage_cost.added_bytes as u64
                * fee_version.storage.storage_disk_usage_credit_per_byte;

            // Processing fee: seeks + loaded bytes + hash calls + ...
            let processing_fee = cost.ephemeral_cost(fee_version)?;

            // Refunds from removed data
            let (fee_refunds, removed_bytes_from_system) =
                match cost.storage_cost.removed_bytes {
                    NoStorageRemoval => (FeeRefunds::default(), 0),
                    BasicStorageRemoval(amount) => (FeeRefunds::default(), amount),
                    SectionedStorageRemoval(removal_per_epoch_by_identifier) => {
                        // Bytes removed on behalf of the system carry no
                        // refund; extracting them from the removal map is
                        // elided in this excerpt
                        let system_amount = 0;
                        // Calculate epoch-aware refunds
                        (FeeRefunds::from_storage_removal(
                            removal_per_epoch_by_identifier,
                            epoch.index,
                            epochs_per_era,
                            previous_fee_versions,
                        )?, system_amount)
                    }
                };

            Ok(FeeResult {
                storage_fee,
                processing_fee,
                fee_refunds,
                removed_bytes_from_system,
            })
        }
    }).collect()
}
}

Each operation produces a FeeResult with four components:

  • storage_fee: The cost of new bytes written to persistent storage.
  • processing_fee: The ephemeral cost of computation and I/O.
  • fee_refunds: Credits returned because previously-stored data was removed.
  • removed_bytes_from_system: Bytes removed that were stored by the system (not any particular identity), so no refund is issued.
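Downstream, these per-operation results are summed into a total for the whole state transition. A sketch with a hypothetical two-field struct (the real FeeResult also carries refunds and system bytes), using the same checked arithmetic as the rest of the fee code:

```rust
// Hypothetical mini version of FeeResult aggregation, not Platform code.
#[derive(Default, Clone, Copy)]
struct MiniFeeResult {
    storage_fee: u64,
    processing_fee: u64,
}

fn aggregate(results: &[MiniFeeResult]) -> Option<MiniFeeResult> {
    results.iter().try_fold(MiniFeeResult::default(), |acc, r| {
        Some(MiniFeeResult {
            // Checked addition: overflow yields None instead of wrapping
            storage_fee: acc.storage_fee.checked_add(r.storage_fee)?,
            processing_fee: acc.processing_fee.checked_add(r.processing_fee)?,
        })
    })
}

fn main() {
    let results = [
        MiniFeeResult { storage_fee: 2700, processing_fee: 310 },
        MiniFeeResult { storage_fee: 0, processing_fee: 45 },
    ];
    let total = aggregate(&results).expect("no overflow");
    assert_eq!(total.storage_fee, 2700);
    assert_eq!(total.processing_fee, 355);
}
```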

Ephemeral Cost Calculation

The ephemeral_cost method on OperationCost computes the processing fee from the raw operation metrics:

#![allow(unused)]
fn main() {
impl DriveCost for OperationCost {
    fn ephemeral_cost(&self, fee_version: &FeeVersion) -> Result<Credits, Error> {
        let OperationCost {
            seek_count,
            storage_cost,
            storage_loaded_bytes,
            hash_node_calls,
        } = self;

        let seek_cost = (*seek_count as u64)
            .checked_mul(fee_version.storage.storage_seek_cost)?;

        let storage_added_bytes_ephemeral_cost = (storage_cost.added_bytes as u64)
            .checked_mul(fee_version.storage.storage_processing_credit_per_byte)?;

        let storage_replaced_bytes_ephemeral_cost = (storage_cost.replaced_bytes as u64)
            .checked_mul(fee_version.storage.storage_processing_credit_per_byte)?;

        let storage_loaded_bytes_cost = (*storage_loaded_bytes)
            .checked_mul(fee_version.storage.storage_load_credit_per_byte)?;

        let blake3_total = fee_version.hashing.blake3_base
            .checked_add(fee_version.hashing.blake3_per_block)?;
        let hash_node_cost = blake3_total
            .checked_mul(*hash_node_calls as u64)?;

        // Sum all costs with overflow checking
        seek_cost
            .checked_add(storage_added_bytes_ephemeral_cost)
            .and_then(|c| c.checked_add(storage_replaced_bytes_ephemeral_cost))
            .and_then(|c| c.checked_add(storage_loaded_bytes_cost))
            .and_then(|c| c.checked_add(hash_node_cost))
            .ok_or_else(|| get_overflow_error("ephemeral cost addition overflow"))
    }
}
}

Notice that every multiplication and addition uses checked arithmetic. In a fee system, silent overflow would be catastrophic -- a fee that wrapped around to a tiny value could let someone store unlimited data nearly for free.
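The failure mode is easy to demonstrate with plain u64 arithmetic:

```rust
// Wrapping vs. checked multiplication on u64.
fn main() {
    let bytes: u64 = u64::MAX / 2 + 1; // 2^63
    let rate: u64 = 2;

    // Wrapping multiplication silently wraps to 0 -- a free write.
    assert_eq!(bytes.wrapping_mul(rate), 0);

    // Checked multiplication surfaces the overflow instead.
    assert_eq!(bytes.checked_mul(rate), None);
}
```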

Helper Methods on LowLevelDriveOperation

The LowLevelDriveOperation type provides several methods for working with collections of operations:

#![allow(unused)]
fn main() {
impl LowLevelDriveOperation {
    // Combine all CalculatedCostOperation costs into one
    pub fn combine_cost_operations(
        operations: &[LowLevelDriveOperation]
    ) -> OperationCost { ... }

    // Extract GroveOperation variants into a batch
    pub fn grovedb_operations_batch(
        operations: &[LowLevelDriveOperation]
    ) -> GroveDbOpBatch { ... }

    // Same but consuming the vector
    pub fn grovedb_operations_batch_consume(
        operations: Vec<LowLevelDriveOperation>
    ) -> GroveDbOpBatch { ... }

    // Partition: grove ops go to batch, rest stays as leftovers
    pub fn grovedb_operations_batch_consume_with_leftovers(
        operations: Vec<LowLevelDriveOperation>,
    ) -> (GroveDbOpBatch, Vec<LowLevelDriveOperation>) { ... }
}
}

The grovedb_operations_batch_consume_with_leftovers method is particularly important -- it is used during batch application to separate the grove operations (which go to GroveDB) from the cost operations (which go to fee calculation).
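The partition itself is simple to picture. A sketch against a simplified two-variant enum (the stand-in names are hypothetical, not the real types):

```rust
// Simplified stand-in for the partition performed by
// grovedb_operations_batch_consume_with_leftovers.
enum Op {
    Grove(&'static str),  // stands in for GroveOperation
    CalculatedCost(u64),  // stands in for CalculatedCostOperation
}

fn partition(ops: Vec<Op>) -> (Vec<&'static str>, Vec<Op>) {
    let mut batch = Vec::new();
    let mut leftovers = Vec::new();
    for op in ops {
        match op {
            Op::Grove(g) => batch.push(g),   // goes to GroveDB
            other => leftovers.push(other),  // goes to fee calculation
        }
    }
    (batch, leftovers)
}

fn main() {
    let ops = vec![Op::Grove("insert"), Op::CalculatedCost(42), Op::Grove("delete")];
    let (batch, leftovers) = partition(ops);
    assert_eq!(batch, vec!["insert", "delete"]);
    assert_eq!(leftovers.len(), 1);
}
```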

Constructing Operations

LowLevelDriveOperation also has constructors for common operations:

#![allow(unused)]
fn main() {
impl LowLevelDriveOperation {
    pub fn for_known_path_key_empty_tree(
        path: Vec<Vec<u8>>, key: Vec<u8>,
        storage_flags: Option<&StorageFlags>,
    ) -> Self { ... }

    pub fn for_known_path_key_empty_sum_tree(
        path: Vec<Vec<u8>>, key: Vec<u8>,
        storage_flags: Option<&StorageFlags>,
    ) -> Self { ... }

    pub fn insert_for_known_path_key_element(
        path: Vec<Vec<u8>>, key: Vec<u8>, element: Element,
    ) -> Self {
        GroveOperation(QualifiedGroveDbOp::insert_or_replace_op(
            path, key, element
        ))
    }

    pub fn replace_for_known_path_key_element(
        path: Vec<Vec<u8>>, key: Vec<u8>, element: Element,
    ) -> Self {
        GroveOperation(QualifiedGroveDbOp::replace_op(
            path, key, element
        ))
    }
}
}

These provide a cleaner API than constructing QualifiedGroveDbOp directly, and they handle storage flags properly.

Rules and Guidelines

Do:

  • Use checked arithmetic everywhere in fee calculations. checked_mul, checked_add, and friends.
  • Construct FunctionOp with new_with_byte_count when you know the input size, and new_with_round_count when you know the rounds.
  • Let operations accumulate in the drive_operations vector throughout the call chain.

Do not:

  • Call operation_cost() on a GroveOperation -- it will return an error. Grove operations must be executed first; only CalculatedCostOperation carries a usable cost.
  • Forget that storage fees and processing fees are calculated differently. Storage fees are proportional to bytes. Processing fees are a complex function of seeks, loads, hashes, and byte movements.
  • Assume fee rates are constant. They are versioned through FeeVersion and can change between protocol versions.
  • Ignore removed_bytes_from_system. This tracks bytes removed that belong to the system rather than a specific identity, affecting the refund calculation.

Finalize Tasks

Most operations on Dash Platform follow a straightforward path: convert high-level operations to low-level ones, apply them atomically, calculate fees. But some operations need something to happen after the batch has been successfully committed. That is what finalize tasks are for.

The Problem: Post-Commit Side Effects

Consider what happens when a data contract is updated. The updated contract is written to GroveDB as part of the atomic batch. But Drive also caches contracts in memory for fast access. After the batch commits, that cache entry is stale -- it still holds the old version of the contract.

You cannot invalidate the cache before the commit, because the commit might fail (GroveDB could reject the batch due to a consistency error). And you cannot invalidate it during the commit, because the batch application is a single atomic operation on GroveDB. You need a post-commit callback: "if the batch succeeds, do this."

That is exactly what DriveOperationFinalizeTask provides.

The DriveOperationFinalizeTask Enum

Defined in packages/rs-drive/src/util/batch/drive_op_batch/finalize_task.rs:

#![allow(unused)]
fn main() {
pub enum DriveOperationFinalizeTask {
    RemoveDataContractFromCache { contract_id: Identifier },
}
}

Currently there is only one variant: RemoveDataContractFromCache. When a data contract is updated, this task is registered. After the batch commits successfully, it removes the stale contract from Drive's in-memory cache, forcing the next access to reload from GroveDB.

The execution is straightforward:

#![allow(unused)]
fn main() {
impl DriveOperationFinalizeTask {
    pub fn execute(self, drive: &Drive, _platform_version: &PlatformVersion) {
        match self {
            DriveOperationFinalizeTask::RemoveDataContractFromCache { contract_id } => {
                drive.cache.data_contracts.remove(contract_id.to_buffer());
            }
        }
    }
}
}

The DriveOperationFinalizationTasks Trait

Not every DriveOperation has finalize tasks. The trait that declares them is:

#![allow(unused)]
fn main() {
pub trait DriveOperationFinalizationTasks {
    fn finalization_tasks(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<DriveOperationFinalizeTask>>, Error>;
}
}

The return type is Option<Vec<...>> rather than just Vec<...>. This is deliberate -- since only one operation type currently has finalize tasks, returning None makes the common no-tasks case explicit and cheap for the vast majority of operations.

The implementation on DriveOperation dispatches through versioning:

#![allow(unused)]
fn main() {
impl DriveOperationFinalizationTasks for DriveOperation<'_> {
    fn finalization_tasks(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<DriveOperationFinalizeTask>>, Error> {
        match platform_version
            .drive
            .methods
            .state_transitions
            .operations
            .finalization_tasks
        {
            0 => self.finalization_tasks_v0(platform_version),
            version => Err(Error::Drive(DriveError::UnknownVersionMismatch {
                method: "DriveOperation.finalization_tasks".to_string(),
                known_versions: vec![0],
                received: version,
            })),
        }
    }
}
}

And the v0 implementation only checks data contract operations:

#![allow(unused)]
fn main() {
impl DriveOperation<'_> {
    fn finalization_tasks_v0(
        &self,
        platform_version: &PlatformVersion,
    ) -> Result<Option<Vec<DriveOperationFinalizeTask>>, Error> {
        match self {
            DriveOperation::DataContractOperation(o) =>
                o.finalization_tasks(platform_version),
            _ => Ok(None),
        }
    }
}
}

Every other operation variant -- documents, identities, tokens, withdrawals -- returns None. Only data contract operations can produce finalize tasks.

How Finalize Tasks Integrate with Batch Application

The integration point is in apply_drive_operations_v0, which we saw in the Batch Operations chapter. Here is the relevant excerpt:

#![allow(unused)]
fn main() {
pub(crate) fn apply_drive_operations_v0(
    &self,
    operations: Vec<DriveOperation>,
    apply: bool,
    block_info: &BlockInfo,
    transaction: TransactionArg,
    platform_version: &PlatformVersion,
    previous_fee_versions: Option<&CachedEpochIndexFeeVersions>,
) -> Result<FeeResult, Error> {
    // ...

    let mut finalize_tasks: Vec<DriveOperationFinalizeTask> = Vec::new();

    for drive_op in operations {
        // Step 1: Collect finalize tasks BEFORE converting the operation
        if let Some(tasks) = drive_op.finalization_tasks(platform_version)? {
            finalize_tasks.extend(tasks);
        }

        // Step 2: Convert to low-level operations (consumes drive_op)
        low_level_operations.append(
            &mut drive_op.into_low_level_drive_operations(/* ... */)?
        );
    }

    // Step 3: Apply the batch atomically
    self.apply_batch_low_level_drive_operations(/* ... */)?;

    // Step 4: Execute finalize tasks AFTER successful commit
    for task in finalize_tasks {
        task.execute(self, platform_version);
    }

    // Step 5: Calculate fees
    Drive::calculate_fee(/* ... */)
}
}

The ordering is critical:

  1. Collect finalize tasks first. This happens before into_low_level_drive_operations because that method consumes the DriveOperation (it takes self, not &self). After conversion, the original operation is gone.

  2. Apply the batch. If this fails, we return the error immediately. The finalize tasks never execute.

  3. Execute finalize tasks only on success. By the time we reach step 4, we know the batch committed successfully. Now it is safe to invalidate caches and perform other side effects.
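The ordering guarantee can be simulated in miniature. A sketch, not Platform code, where the apply step either succeeds or fails:

```rust
// Sketch: finalize tasks run only after a successful commit.
fn apply_batch(succeed: bool) -> Result<(), String> {
    if succeed { Ok(()) } else { Err("consistency error".into()) }
}

fn run(succeed: bool) -> (Result<(), String>, Vec<&'static str>) {
    let mut executed = Vec::new();
    // Step 1: collect tasks before the operations are consumed
    let finalize_tasks = vec!["invalidate_contract_cache"];
    // Step 2: apply the batch atomically
    match apply_batch(succeed) {
        Ok(()) => {
            // Step 3: only now is it safe to run side effects
            for task in finalize_tasks {
                executed.push(task);
            }
            (Ok(()), executed)
        }
        // On failure the tasks are simply dropped, never executed
        Err(e) => (Err(e), executed),
    }
}

fn main() {
    assert_eq!(run(true).1, vec!["invalidate_contract_cache"]);
    assert!(run(false).1.is_empty());
}
```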

The Cache Invalidation Pattern

Why cache invalidation specifically? Drive maintains an in-memory cache of frequently-accessed data contracts:

#![allow(unused)]
fn main() {
drive.cache.data_contracts.remove(contract_id.to_buffer());
}

Without this invalidation, here is what would go wrong:

  1. Block N: Contract "foo" is at version 3 in GroveDB and cached.
  2. Block N+1: A state transition updates "foo" to version 4 in GroveDB.
  3. Block N+1: Without cache invalidation, queries still return version 3 from the cache.
  4. Block N+1: Document validation uses the stale version 3 schema, potentially accepting invalid documents.

By removing the contract from the cache after a successful update, the next access will read version 4 from GroveDB and re-populate the cache.
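The self-healing behavior is worth seeing concretely. A sketch with a hypothetical mini store -- two HashMaps standing in for GroveDB and the contract cache:

```rust
use std::collections::HashMap;

// Hypothetical mini store illustrating invalidate-after-commit.
struct ContractStore {
    grovedb: HashMap<&'static str, u32>, // contract id -> version
    cache: HashMap<&'static str, u32>,
}

impl ContractStore {
    fn get(&mut self, id: &'static str) -> u32 {
        let stored = self.grovedb[id];
        // Repopulate the cache on a miss
        *self.cache.entry(id).or_insert(stored)
    }

    fn update(&mut self, id: &'static str, version: u32) {
        self.grovedb.insert(id, version); // committed in the batch
        self.cache.remove(id);            // finalize task: invalidate
    }
}

fn main() {
    let mut store = ContractStore {
        grovedb: HashMap::from([("foo", 3)]),
        cache: HashMap::new(),
    };
    assert_eq!(store.get("foo"), 3); // cached at version 3
    store.update("foo", 4);
    assert_eq!(store.get("foo"), 4); // stale entry gone, reloaded
}
```

Without the `remove` call in `update`, the second `get` would keep returning 3 from the cache -- exactly the stale-schema scenario described above.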

When to Use Finalize Tasks

Finalize tasks are the right tool when you need to perform side effects that:

  1. Must not happen if the batch fails. If you invalidate a cache before the commit and the commit fails, you have a warm-up penalty for no reason (and potentially incorrect behavior during the recovery window).

  2. Are not idempotent with respect to partial application. Cache invalidation is fine to do after commit because the cache will self-heal on the next access. But if your side effect were "send a network message," you would want to be very sure the batch actually committed.

  3. Operate on data outside GroveDB. GroveDB's atomic batch guarantees only cover GroveDB state. In-memory caches, external systems, and non-transactional state all need explicit post-commit handling.

Extending Finalize Tasks

To add a new finalize task:

  1. Add a variant to the DriveOperationFinalizeTask enum in finalize_task.rs.
  2. Implement its execution in the execute method's match block.
  3. In the relevant DriveOperation variant's finalization_tasks implementation, return the new task when appropriate.

The design is intentionally simple and extensible. The enum + trait pattern means new finalize tasks do not affect existing code paths.

Rules and Guidelines

Do:

  • Collect finalize tasks before consuming DriveOperation via into_low_level_drive_operations.
  • Execute finalize tasks only after confirming the batch committed successfully.
  • Keep finalize task execution fast. They run synchronously in the block processing pipeline.

Do not:

  • Put business logic in finalize tasks. They are for side effects like cache management, not for state mutations. State mutations belong in the batch itself.
  • Execute finalize tasks if the batch application returns an error. The whole point is that they only run on success.
  • Rely on finalize tasks for correctness-critical behavior that must be exactly-once. If the process crashes between batch commit and finalize task execution, the finalize tasks will not run. They should always be "nice to have" optimizations (like cache invalidation) rather than required for correctness.
  • Introduce finalize tasks with external side effects (like network calls) without careful consideration of failure modes. Keep them fast, local, and idempotent.

Unit Tests

If you have spent any time reading the Dash Platform codebase, you have probably noticed that test files are everywhere -- and they follow a very specific structure. This chapter walks through the patterns that Platform's unit tests use, why those patterns exist, and how to write your own tests that fit naturally into the codebase.

The Test Module Convention

Nearly every test file in Platform follows the same opening stanza:

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
    use super::*;
    // ... additional imports ...
}
}

This is standard Rust, but the consistency matters. The #[cfg(test)] attribute means the entire module is compiled only when running cargo test. The use super::*; import pulls in everything from the parent module, so tests can access the types and functions they are testing without repeating import paths.

In Platform, tests that validate state transitions live in dedicated tests.rs files:

packages/rs-drive-abci/src/execution/validation/
  state_transition/state_transitions/
    address_credit_withdrawal/
      tests.rs            <-- unit tests for withdrawal transitions
    address_funds_transfer/
      tests.rs            <-- unit tests for address-to-address transfers

Each tests.rs file is a self-contained test module for one state transition type. Inside, tests are further organized into sub-modules by category:

#![allow(unused)]
fn main() {
#[cfg(test)]
mod tests {
    // ... imports and helpers ...

    mod structure_validation {
        use super::*;

        #[test]
        fn test_no_inputs_returns_error() { /* ... */ }

        #[test]
        fn test_too_many_inputs_returns_error() { /* ... */ }
    }

    mod address_state_validation {
        use super::*;
        // ...
    }

    mod witness_validation {
        use super::*;
        // ...
    }
}
}

This sub-module approach groups related tests, making cargo test output scannable. When a test fails, you immediately see tests::structure_validation::test_no_inputs_returns_error instead of a flat list.

TestPlatformBuilder: Setting Up the World

Most unit tests need a running Platform instance with a database, genesis state, and configuration. The TestPlatformBuilder provides a fluent API for this:

#![allow(unused)]
fn main() {
// File: packages/rs-drive-abci/src/test/helpers/setup.rs

pub struct TestPlatformBuilder {
    config: Option<PlatformConfig>,
    initial_protocol_version: Option<ProtocolVersion>,
    tempdir: TempDir,
}
}

The builder creates a TempPlatform -- a Platform instance backed by a temporary directory that is automatically cleaned up when the test finishes:

#![allow(unused)]
fn main() {
pub struct TempPlatform<C> {
    pub platform: Platform<C>,
    pub tempdir: TempDir,
}
}

Here is the typical setup pattern:

#![allow(unused)]
fn main() {
let platform = TestPlatformBuilder::new()
    .with_config(platform_config)
    .with_latest_protocol_version()
    .build_with_mock_rpc()
    .set_genesis_state();
}

Let's break this down:

  • new() creates a builder with a fresh TempDir.
  • with_config() injects a PlatformConfig (including test-specific overrides).
  • with_latest_protocol_version() pins the platform to the current protocol version.
  • build_with_mock_rpc() constructs the Platform with a MockCoreRPCLike -- no real Dash Core node needed.
  • set_genesis_state() writes the initial state tree (system data contracts, etc.) into the database.

Because TempPlatform implements Deref<Target = Platform<C>>, you can call Platform methods directly on it:

#![allow(unused)]
fn main() {
impl<C> Deref for TempPlatform<C> {
    type Target = Platform<C>;

    fn deref(&self) -> &Self::Target {
        &self.platform
    }
}
}

This means platform.drive, platform.state, and platform.config all work directly.

Helper Functions: Encapsulating Test Patterns

Each test file defines local helper functions that encapsulate repeated setup logic. For example, the withdrawal tests define helpers for creating transitions:

#![allow(unused)]
fn main() {
fn create_signed_address_credit_withdrawal_transition(
    signer: &TestAddressSigner,
    inputs: BTreeMap<PlatformAddress, (AddressNonce, u64)>,
    output: Option<(PlatformAddress, u64)>,
    fee_strategy: Vec<AddressFundsFeeStrategyStep>,
    output_script: CoreScript,
) -> StateTransition {
    AddressCreditWithdrawalTransitionV0::try_from_inputs_with_signer(
        inputs,
        output,
        AddressFundsFeeStrategy::from(fee_strategy),
        1, // core_fee_per_byte
        Pooling::Never,
        output_script,
        signer,
        0, // user_fee_increase
        PlatformVersion::latest(),
    )
    .expect("should create signed transition")
}
}

And helpers for submitting them to the platform:

#![allow(unused)]
fn main() {
fn check_tx_is_valid(
    platform: &TempPlatform<MockCoreRPCLike>,
    raw_tx: &[u8],
    platform_version: &PlatformVersion,
) -> bool {
    let platform_state = platform.state.load();
    let platform_ref = PlatformRef {
        drive: &platform.drive,
        state: &platform_state,
        config: &platform.config,
        core_rpc: &platform.core_rpc,
    };

    let check_result = platform
        .check_tx(raw_tx, CheckTxLevel::FirstTimeCheck,
                   &platform_ref, platform_version)
        .expect("expected to check tx");

    check_result.is_valid()
}
}

The key insight is that helpers should be specific to the test file. A withdrawal test's helper knows about AddressCreditWithdrawalTransition; it does not try to be a generic state transition factory.

assert_matches! for Error Checking

Platform tests lean heavily on the assert_matches! macro from the assert_matches crate. This is the idiomatic way to verify error variants in a deeply nested enum hierarchy:

#![allow(unused)]
fn main() {
use assert_matches::assert_matches;

assert_matches!(
    processing_result.execution_results().as_slice(),
    [StateTransitionExecutionResult::UnpaidConsensusError(
        ConsensusError::BasicError(
            BasicError::TransitionNoInputsError(_)
        )
    )]
);
}

Without assert_matches!, you would need a verbose match block or a chain of if let statements. The macro makes the expected shape of the error immediately visible in the test.

For cases where you need to inspect error fields, combine matches! with additional assertions:

#![allow(unused)]
fn main() {
let error = result.first_error().unwrap();
assert!(
    matches!(
        error,
        ConsensusError::BasicError(
            BasicError::TransitionOverMaxInputsError(e)
        ) if e.actual_inputs() == 17 && e.max_inputs() == 16
    ),
    "Expected TransitionOverMaxInputsError with 17/16, got {:?}",
    error
);
}

The guard clause (if e.actual_inputs() == 17) lets you verify both the variant and its contents in a single expression.

Processing State Transitions in Tests

The standard way to submit a state transition in unit tests is through process_raw_state_transitions:

#![allow(unused)]
fn main() {
let raw_bytes = transition.serialize_to_bytes().unwrap();

let processing_result = platform
    .platform
    .process_raw_state_transitions(
        &vec![raw_bytes],
        &platform_state,
        &BlockInfo::default(),
        &transaction,
        platform_version,
        false,   // not dry run
        None,    // no extra data
    )
    .expect("expected to process state transition");
}

This is the same code path that runs in production -- your test transition goes through the same validation pipeline that a real block proposer executes.

Deterministic Randomness

Tests that need random data always use a seeded RNG:

#![allow(unused)]
fn main() {
let mut rng = StdRng::seed_from_u64(567);
let output_script = CoreScript::random_p2pkh(&mut rng);
}

The seed ensures the test produces identical results every time. If a test fails, you can reproduce the exact same inputs. Never use thread_rng() or entropy-seeded RNGs in unit tests.

Feature-Gated Test Compilation

Some tests require features that are expensive or only available in certain contexts. The testing-config feature gate controls test-specific configuration:

#![allow(unused)]
fn main() {
#[cfg(feature = "testing-config")]
impl PlatformTestConfig {
    pub fn default_minimal_verifications() -> Self {
        Self {
            block_signing: false,
            store_platform_state: false,
            block_commit_signature_verification: false,
            disable_instant_lock_signature_verification: true,
            disable_contested_documents_is_allowed_validation: true,
            disable_checkpoints: true,
        }
    }
}
}

Tests that need the full platform test infrastructure will not compile without --features testing-config, keeping the main build clean.

OnceLock for Expensive Resources

When a test suite needs an expensive-to-create resource (like a cryptographic key that takes 30 seconds to build), the OnceLock pattern avoids rebuilding it for every test:

#![allow(unused)]
fn main() {
use std::sync::OnceLock;

static STATE_TRANSITION_TYPE_COUNTER: OnceLock<Mutex<BTreeMap<String, usize>>>
    = OnceLock::new();

fn state_transition_counter() -> &'static Mutex<BTreeMap<String, usize>> {
    STATE_TRANSITION_TYPE_COUNTER.get_or_init(|| Mutex::new(BTreeMap::new()))
}
}

OnceLock is initialized at most once, the first time any test calls it. Because test threads share the same process, all tests in the binary reuse the same instance. This pattern is essential for resources like cryptographic proving keys that are expensive to construct but immutable once built.
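A minimal demonstration with std only -- the "expensive" resource here is a stand-in u64 rather than a proving key, and a counter verifies the initializer runs exactly once:

```rust
use std::sync::{Mutex, OnceLock};

// get_or_init runs its closure at most once per process,
// no matter how many tests touch the static.
static EXPENSIVE: OnceLock<u64> = OnceLock::new();
static INIT_COUNT: Mutex<u32> = Mutex::new(0);

fn expensive_resource() -> u64 {
    *EXPENSIVE.get_or_init(|| {
        *INIT_COUNT.lock().unwrap() += 1; // track how often we build
        42 // stand-in for e.g. a slow-to-construct proving key
    })
}

fn main() {
    assert_eq!(expensive_resource(), 42);
    assert_eq!(expensive_resource(), 42);
    // The initializer ran exactly once despite two accesses.
    assert_eq!(*INIT_COUNT.lock().unwrap(), 1);
}
```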

Rules

Do:

  • Follow the #[cfg(test)] mod tests { use super::*; } convention.
  • Group tests into sub-modules by validation category.
  • Use TestPlatformBuilder for any test that needs a platform instance.
  • Use StdRng::seed_from_u64() for deterministic randomness.
  • Use assert_matches! for checking error variants.
  • Use OnceLock for expensive, immutable test resources.
  • Process transitions through process_raw_state_transitions to test the real code path.

Don't:

  • Use thread_rng() or unseeded randomness in tests.
  • Create ad-hoc Platform instances without TestPlatformBuilder.
  • Write generic helper functions that try to handle all state transition types.
  • Skip set_genesis_state() unless you are specifically testing pre-genesis behavior.
  • Use unwrap() on validation results -- use assert_matches! to verify error shapes.

Strategy Tests

Unit tests verify that a single state transition behaves correctly. But what about testing an entire chain of blocks with hundreds of identities creating documents, transferring credits, and voting on contested resources -- all at the same time?

That is what strategy tests are for. They are Platform's integration-level simulation framework: you declare what should happen and let the framework simulate it across hundreds of blocks.

The Problem

Consider everything that happens in a real Dash Platform network over 100 blocks:

  • Masternodes join, leave, get banned, change IPs
  • Quorums rotate and sign blocks
  • Identities are created, topped up, and updated
  • Documents are inserted, replaced, deleted, and transferred
  • Contracts are deployed and updated
  • Withdrawals are processed and batched
  • Protocol upgrades happen mid-chain

Testing any of these in isolation is straightforward. Testing them together -- where the output of block 47 affects the input of block 48 -- requires something more powerful than a unit test.

Two-Layer Strategy Architecture

Strategy tests use a two-layer design:

Layer 1: Strategy (defined in packages/strategy-tests/src/lib.rs) describes what operations to perform:

#![allow(unused)]
fn main() {
pub struct Strategy {
    /// Identities to create on the first block.
    pub start_identities: StartIdentities,

    /// Platform addresses to fund on the first block.
    pub start_addresses: StartAddresses,

    /// Contracts to deploy on the second block,
    /// with optional scheduled updates.
    pub start_contracts: Vec<(
        CreatedDataContract,
        Option<BTreeMap<u64, CreatedDataContract>>,
    )>,

    /// Operations to execute each block.
    pub operations: Vec<Operation>,

    /// Configuration for ongoing identity creation.
    pub identity_inserts: IdentityInsertInfo,

    /// Optional nonce gaps for edge-case testing.
    pub identity_contract_nonce_gaps: Option<Frequency>,

    /// Key manager for signing state transitions.
    pub signer: Option<SimpleSigner>,
}
}

Layer 2: NetworkStrategy (defined in packages/rs-drive-abci/tests/strategy_tests/strategy.rs) wraps a Strategy with network-level configuration:

#![allow(unused)]
fn main() {
pub struct NetworkStrategy {
    pub strategy: Strategy,
    pub total_hpmns: u16,
    pub extra_normal_mns: u16,
    pub validator_quorum_count: u16,
    pub chain_lock_quorum_count: u16,
    pub instant_lock_quorum_count: u16,
    pub initial_core_height: u32,
    pub upgrading_info: Option<UpgradingInfo>,
    pub core_height_increase: CoreHeightIncrease,
    pub proposer_strategy: MasternodeListChangesStrategy,
    pub rotate_quorums: bool,
    pub failure_testing: Option<FailureStrategy>,
    pub query_testing: Option<QueryStrategy>,
    pub verify_state_transition_results: bool,
    pub max_tx_bytes_per_block: u64,
    pub independent_process_proposal_verification: bool,
    pub sign_chain_locks: bool,
    pub sign_instant_locks: bool,
    // ...
}
}

The separation is intentional. Strategy is about application-level behavior (documents, identities, contracts). NetworkStrategy is about network-level behavior (masternodes, quorums, block production). By composing them, you can test the same application strategy under different network conditions.

Operations and Frequency

Each operation in a strategy has a type and a frequency:

#![allow(unused)]
fn main() {
pub struct Operation {
    /// The type of operation to perform.
    pub op_type: OperationType,
    /// Configuration controlling how often this operation occurs.
    pub frequency: Frequency,
}
}

OperationType is an enum covering every kind of platform action:

#![allow(unused)]
fn main() {
pub enum OperationType {
    Document(DocumentOp),
    IdentityTopUp(AmountRange),
    IdentityUpdate(IdentityUpdateOp),
    IdentityWithdrawal(AmountRange),
    ContractCreate(RandomDocumentTypeParameters, DocumentTypeCount),
    ContractUpdate(DataContractUpdateOp),
    IdentityTransfer(Option<IdentityTransferInfo>),
    ResourceVote(ResourceVoteOp),
    // ... token operations, address operations, etc.
}
}

Frequency controls when and how many operations occur per block:

#![allow(unused)]
fn main() {
pub struct Frequency {
    /// Range for the number of events when a block is selected.
    pub times_per_block_range: Range<u16>,
    /// Probability (0.0 to 1.0) that events occur in a given block.
    pub chance_per_block: Option<f64>,
}
}

For example, Frequency { times_per_block_range: 1..4, chance_per_block: Some(0.5) } means: on each block, there is a 50% chance that 1-3 operations of this type will occur. This probabilistic scheduling creates realistic, varied block content.
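The two-roll semantics can be sketched in a few lines. This is illustrative only: the Frequency struct is mirrored locally, and a tiny LCG stands in for the framework's seeded StdRng -- the real sampling code in packages/strategy-tests may differ in detail:

```rust
use std::ops::Range;

/// Local mirror of the strategy-tests Frequency struct (illustration only).
pub struct Frequency {
    pub times_per_block_range: Range<u16>,
    pub chance_per_block: Option<f64>,
}

/// Tiny deterministic generator standing in for a seeded StdRng.
pub struct Lcg(pub u64);

impl Lcg {
    pub fn next_f64(&mut self) -> f64 {
        // Knuth's 64-bit LCG constants; top bits give a value in [0, 1).
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// How many events of this operation type occur in one block.
pub fn events_this_block(freq: &Frequency, rng: &mut Lcg) -> u16 {
    // First roll: does this operation fire at all this block?
    if let Some(chance) = freq.chance_per_block {
        if rng.next_f64() >= chance {
            return 0;
        }
    }
    // Second roll: how many events, uniform over times_per_block_range.
    let span = (freq.times_per_block_range.end - freq.times_per_block_range.start) as f64;
    freq.times_per_block_range.start + (rng.next_f64() * span) as u16
}

fn main() {
    let freq = Frequency { times_per_block_range: 1..4, chance_per_block: Some(0.5) };
    let mut rng = Lcg(15); // same seed => same block contents
    let counts: Vec<u16> = (0..10).map(|_| events_this_block(&freq, &mut rng)).collect();
    // Every non-zero count falls within 1..4.
    assert!(counts.iter().all(|&c| c == 0 || (1..4).contains(&c)));
}
```

Because the generator is seeded, rerunning with the same seed reproduces the exact same sequence of block contents -- the property strategy tests depend on for determinism.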

Running a Strategy: run_chain_for_strategy

The engine that drives everything is run_chain_for_strategy, defined in packages/rs-drive-abci/tests/strategy_tests/execution.rs:

#![allow(unused)]
fn main() {
pub(crate) fn run_chain_for_strategy<'a>(
    platform: &'a mut Platform<MockCoreRPCLike>,
    block_count: u64,
    strategy: NetworkStrategy,
    config: PlatformConfig,
    seed: u64,
    add_voting_keys_to_signer: &mut Option<SimpleSigner>,
    add_payout_keys_to_signer: &mut Option<SimpleSigner>,
) -> ChainExecutionOutcome<'a> {
    // ...
}
}

This function:

  1. Generates a deterministic RNG from seed.
  2. Creates the specified number of masternodes and quorums.
  3. For each block (up to block_count):
    • Determines core height increases.
    • Generates state transitions based on the strategy's operations and frequencies.
    • Simulates ABCI PrepareProposal / ProcessProposal / FinalizeBlock.
    • Applies masternode list changes (joins, leaves, bans).
    • Rotates quorums if configured.
  4. Returns a ChainExecutionOutcome containing the final state.

The outcome struct captures everything you need to verify:

#![allow(unused)]
fn main() {
pub struct ChainExecutionOutcome<'a> {
    pub abci_app: FullAbciApplication<'a, MockCoreRPCLike>,
    pub masternode_identity_balances: BTreeMap<[u8; 32], Credits>,
    pub identities: Vec<Identity>,
    pub proposers: Vec<MasternodeListItemWithUpdates>,
    pub validator_quorums: BTreeMap<QuorumHash, TestQuorumInfo>,
    pub identity_nonce_counter: BTreeMap<Identifier, IdentityNonce>,
    pub end_epoch_index: u16,
    pub end_time_ms: u64,
    pub state_transition_results_per_block:
        BTreeMap<u64, Vec<(StateTransition, ExecTxResult)>>,
    // ...
}
}

Masternode List Changes

The MasternodeListChangesStrategy allows simulating a dynamic validator set:

#![allow(unused)]
fn main() {
pub struct MasternodeListChangesStrategy {
    pub new_hpmns: Frequency,
    pub removed_hpmns: Frequency,
    pub updated_hpmns: Frequency,
    pub banned_hpmns: Frequency,
    pub unbanned_hpmns: Frequency,
    pub changed_ip_hpmns: Frequency,
    pub changed_p2p_port_hpmns: Frequency,
    pub changed_http_port_hpmns: Frequency,
    pub new_masternodes: Frequency,
    pub removed_masternodes: Frequency,
    pub updated_masternodes: Frequency,
    pub banned_masternodes: Frequency,
    pub unbanned_masternodes: Frequency,
    pub changed_ip_masternodes: Frequency,
}
}

Each field uses Frequency, so you can say "ban 1-2 HPMNs per block with 10% probability" naturally.

Writing a Strategy Test

Here is a minimal strategy test from the codebase (packages/rs-drive-abci/tests/strategy_tests/test_cases/basic_tests.rs):

#![allow(unused)]
fn main() {
#[test]
fn run_chain_nothing_happening() {
    let strategy = NetworkStrategy {
        strategy: Strategy {
            start_contracts: vec![],
            operations: vec![],
            start_identities: StartIdentities::default(),
            start_addresses: StartAddresses::default(),
            identity_inserts: IdentityInsertInfo::default(),
            identity_contract_nonce_gaps: None,
            signer: None,
        },
        total_hpmns: 100,
        extra_normal_mns: 0,
        validator_quorum_count: 24,
        chain_lock_quorum_count: 24,
        upgrading_info: None,
        proposer_strategy: Default::default(),
        rotate_quorums: false,
        failure_testing: None,
        query_testing: None,
        verify_state_transition_results: false,
        ..Default::default()
    };

    let config = PlatformConfig {
        validator_set: ValidatorSetConfig::default_100_67(),
        chain_lock: ChainLockConfig::default_100_67(),
        instant_lock: InstantLockConfig::default_100_67(),
        execution: ExecutionConfig {
            verify_sum_trees: true,
            ..ExecutionConfig::default()
        },
        block_spacing_ms: 3000,
        testing_configs: PlatformTestConfig::default_minimal_verifications(),
        ..Default::default()
    };

    let mut platform = TestPlatformBuilder::new()
        .with_config(config.clone())
        .build_with_mock_rpc();

    run_chain_for_strategy(
        &mut platform, 100, strategy, config,
        15, &mut None, &mut None,
    );
}
}

This test runs 100 empty blocks with 100 masternodes and verifies that the chain progresses without errors. It is the "smoke test" for the strategy framework itself.

Continuing a Chain

Strategy tests support pausing and resuming with continue_chain_for_strategy:

#![allow(unused)]
fn main() {
let outcome = run_chain_for_strategy(
    &mut platform, 50, strategy.clone(), config.clone(),
    13, &mut None, &mut None,
);

// Later...
let continued = continue_chain_for_strategy(
    outcome, strategy, config,
    50, // 50 more blocks
    &mut None, &mut None,
);
}

This is invaluable for testing restart scenarios: split a run into phases and verify that state carries over correctly from one phase to the next.

How Strategy Tests Differ from Unit Tests

Aspect        | Unit Tests            | Strategy Tests
--------------|-----------------------|----------------------------------
Scope         | One state transition  | Hundreds across many blocks
Setup         | TestPlatformBuilder   | run_chain_for_strategy
Randomness    | Seeded per-test       | Seeded once, flows through blocks
Masternodes   | Not involved          | Fully simulated
Quorums       | Not involved          | Rotated and signed
Determinism   | Yes                   | Yes (same seed = same outcome)
Speed         | Fast (seconds)        | Slow (minutes for large chains)

Rules

Do:

  • Use strategy tests for multi-block scenarios involving multiple participants.
  • Start with default_minimal_verifications() to speed up test execution.
  • Use small block counts (10-50) during development, increase for CI.
  • Check state_transition_results_per_block to verify specific block outcomes.
  • Use continue_chain_for_strategy for restart/persistence testing.

Don't:

  • Use strategy tests when a unit test would suffice -- they are much slower.
  • Forget to pass a deterministic seed -- non-deterministic strategy tests are useless.
  • Set verify_state_transition_results: true unless you need it; it adds overhead.
  • Create strategy tests with more than a few hundred blocks for regular CI runs.

Test Configuration

Platform tests need to run fast. A production node verifies block signatures, checks instant lock proofs, persists platform state to disk, and creates database checkpoints. All of that is essential for security -- and all of it makes tests slow.

This chapter covers the configuration system that lets tests disable expensive checks selectively, the builder that wires everything together, and the mock RPC layer that eliminates the need for a real Dash Core node.

PlatformTestConfig

The heart of test configuration is PlatformTestConfig, defined in packages/rs-drive-abci/src/config.rs:

#![allow(unused)]
fn main() {
#[cfg(feature = "testing-config")]
pub struct PlatformTestConfig {
    /// Whether to perform block signing.
    pub block_signing: bool,

    /// Whether to store platform state to disk.
    pub store_platform_state: bool,

    /// Whether to verify block commit signatures.
    pub block_commit_signature_verification: bool,

    /// Whether to disable instant lock signature verification.
    pub disable_instant_lock_signature_verification: bool,

    /// Whether to disable contested documents validation.
    pub disable_contested_documents_is_allowed_validation: bool,

    /// Whether to disable checkpoint creation during tests.
    pub disable_checkpoints: bool,
}
}

Notice the #[cfg(feature = "testing-config")] gate. This struct does not exist in production builds. You cannot accidentally ship code that disables signature verification.

Two Default Profiles

PlatformTestConfig provides two defaults, and choosing the right one matters:

Full defaults (Default::default()): all signing and verification checks enabled, so tests run like a production node, just with a mock RPC backend. (Note below that even this profile disables contested-document validation and checkpoints, which few tests need.)

#![allow(unused)]
fn main() {
#[cfg(feature = "testing-config")]
impl Default for PlatformTestConfig {
    fn default() -> Self {
        Self {
            block_signing: true,
            store_platform_state: true,
            block_commit_signature_verification: true,
            disable_instant_lock_signature_verification: false,
            disable_contested_documents_is_allowed_validation: true,
            disable_checkpoints: true,
        }
    }
}
}

Minimal verifications (default_minimal_verifications()): Disables everything that is not needed to test application logic:

#![allow(unused)]
fn main() {
impl PlatformTestConfig {
    pub fn default_minimal_verifications() -> Self {
        Self {
            block_signing: false,
            store_platform_state: false,
            block_commit_signature_verification: false,
            disable_instant_lock_signature_verification: true,
            disable_contested_documents_is_allowed_validation: true,
            disable_checkpoints: true,
        }
    }
}
}

Use default() when testing consensus-critical behavior (block signing, quorum verification). Use default_minimal_verifications() for everything else -- it is significantly faster because it skips cryptographic operations.

When to Override Individual Fields

Sometimes you need a custom combination. For example, testing withdrawal transitions requires disabling instant lock verification but keeping everything else:

#![allow(unused)]
fn main() {
let platform_config = PlatformConfig {
    testing_configs: PlatformTestConfig {
        disable_instant_lock_signature_verification: true,
        ..Default::default()
    },
    ..Default::default()
};
}

The ..Default::default() struct update syntax fills in the remaining fields with their defaults. This pattern lets you express "default with one override" clearly.
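The same mechanic works on any struct with a Default impl. A toy config (not the real PlatformTestConfig) makes the semantics explicit:

```rust
#[derive(Debug, PartialEq)]
struct TestConfig {
    block_signing: bool,
    store_state: bool,
    disable_instant_lock_verification: bool,
}

impl Default for TestConfig {
    fn default() -> Self {
        Self {
            block_signing: true,
            store_state: true,
            disable_instant_lock_verification: false,
        }
    }
}

fn main() {
    // "Default with one override": only the named field changes.
    let cfg = TestConfig {
        disable_instant_lock_verification: true,
        ..Default::default()
    };
    assert!(cfg.block_signing);
    assert!(cfg.store_state);
    assert!(cfg.disable_instant_lock_verification);
}
```

A side benefit: when new fields are added to the struct later, existing call sites keep compiling and pick up the new defaults automatically.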

TestPlatformBuilder

TestPlatformBuilder is the fluent API for constructing a test platform. It lives in packages/rs-drive-abci/src/test/helpers/setup.rs:

#![allow(unused)]
fn main() {
pub struct TestPlatformBuilder {
    config: Option<PlatformConfig>,
    initial_protocol_version: Option<ProtocolVersion>,
    tempdir: TempDir,
}
}

The Builder Chain

The builder supports three configuration methods:

#![allow(unused)]
fn main() {
impl TestPlatformBuilder {
    /// Create a new builder with a fresh temporary directory.
    pub fn new() -> Self { Self::default() }

    /// Override the platform configuration.
    pub fn with_config(mut self, config: PlatformConfig) -> Self {
        self.config = Some(config);
        self
    }

    /// Pin a specific protocol version.
    pub fn with_initial_protocol_version(
        mut self,
        initial_protocol_version: ProtocolVersion,
    ) -> Self {
        self.initial_protocol_version = Some(initial_protocol_version);
        self
    }

    /// Use the latest protocol version.
    pub fn with_latest_protocol_version(mut self) -> Self {
        self.initial_protocol_version =
            Some(PlatformVersion::latest().protocol_version);
        self
    }
}
}

Building

The builder has two build methods:

build_with_mock_rpc() creates a TempPlatform<MockCoreRPCLike> -- no real Dash Core node needed:

#![allow(unused)]
fn main() {
pub fn build_with_mock_rpc(self) -> TempPlatform<MockCoreRPCLike> {
    let config = self.config.map(|mut c| {
        c.db_path = self.tempdir.path().to_path_buf();
        c
    });

    let platform = Platform::<MockCoreRPCLike>::open(
        self.tempdir.path(),
        config,
        self.initial_protocol_version
            .or(Some(PlatformVersion::latest().protocol_version)),
    )
    .expect("should open Platform successfully");

    TempPlatform {
        platform,
        tempdir: self.tempdir,
    }
}
}

Notice how the builder automatically sets db_path to the temp directory -- you cannot accidentally write to a real database.

build_with_default_rpc() creates a TempPlatform<DefaultCoreRPC> for integration tests that need a real Dash Core connection.

Initializing State

After building, you choose what initial state to install:

#![allow(unused)]
fn main() {
// Minimal: just the GroveDB tree structure
let platform = TestPlatformBuilder::new()
    .build_with_mock_rpc()
    .set_initial_state_structure();

// Full: genesis state with system data contracts
let platform = TestPlatformBuilder::new()
    .with_latest_protocol_version()
    .build_with_mock_rpc()
    .set_genesis_state();

// Genesis with specific activation info
let platform = TestPlatformBuilder::new()
    .build_with_mock_rpc()
    .set_genesis_state_with_activation_info(
        genesis_time,
        start_core_block_height,
    );
}

Most tests want set_genesis_state(). Use set_initial_state_structure() only when testing the state structure itself.

Loading Test Data Contracts

TempPlatform provides convenience methods for loading test contracts:

#![allow(unused)]
fn main() {
let (platform, card_game_contract) = TestPlatformBuilder::new()
    .build_with_mock_rpc()
    .set_initial_state_structure()
    .with_crypto_card_game_transfer_only(Transferable::Always);
}

This loads a predefined "crypto card game" data contract from tests/supporting_files/contract/ and applies it to the platform. The returned DataContract can be used to create documents in subsequent test steps.

Mock RPC: Simulating Dash Core

The MockCoreRPCLike type (generated with the mockall crate) replaces the real Dash Core RPC client. It lets tests control exactly what Core "reports" -- which transactions are confirmed, what the current block height is, which asset locks exist, etc.

In strategy tests, the mock is configured automatically by run_chain_for_strategy. In unit tests, you typically let the default mock behavior handle things:

#![allow(unused)]
fn main() {
let platform = TestPlatformBuilder::new()
    .with_config(platform_config)
    .build_with_mock_rpc()  // <-- MockCoreRPCLike
    .set_genesis_state();
}

The mock RPC means unit tests require zero external services. They run in CI, on developer laptops, and in sandboxed environments with no network access.

PlatformConfig for Tests vs Production

PlatformConfig is a large struct with many subsections. Here is how tests typically configure it:

#![allow(unused)]
fn main() {
let config = PlatformConfig {
    // Validator set: 100 nodes, 67% threshold
    validator_set: ValidatorSetConfig::default_100_67(),

    // Chain lock quorum config
    chain_lock: ChainLockConfig::default_100_67(),

    // Instant lock quorum config
    instant_lock: InstantLockConfig::default_100_67(),

    // Execution settings
    execution: ExecutionConfig {
        verify_sum_trees: true,
        ..ExecutionConfig::default()
    },

    // Block timing
    block_spacing_ms: 3000,

    // Test-specific overrides
    testing_configs: PlatformTestConfig::default_minimal_verifications(),

    // Fill the rest with defaults
    ..Default::default()
};
}

The default_100_67() methods create configs for a 100-node network with a 67% signing threshold -- the standard test network size.

Platform Restart Testing

TempPlatform supports simulating a platform restart by reopening from the same temporary directory:

#![allow(unused)]
fn main() {
pub fn open_with_tempdir(
    tempdir: TempDir,
    mut config: PlatformConfig,
) -> Self {
    config.db_path = tempdir.path().to_path_buf();
    let platform = Platform::<MockCoreRPCLike>::open(
        tempdir.path(), Some(config), None,
    )
    .expect("should open Platform successfully");

    Self { platform, tempdir }
}
}

The pattern for restart testing:

#![allow(unused)]
fn main() {
// Run first phase
let outcome = run_chain_for_strategy(
    &mut platform, 50, strategy, config.clone(),
    seed, &mut None, &mut None,
);

// Extract tempdir (ownership transfer)
let tempdir = platform.tempdir;

// Reopen -- simulates restart
let platform = TempPlatform::open_with_tempdir(tempdir, config);

// Verify state survived the restart
}

Rules

Do:

  • Use default_minimal_verifications() for tests that do not need signature verification.
  • Use Default::default() for PlatformTestConfig when testing block signing or quorum logic.
  • Always build with build_with_mock_rpc() unless you specifically need Dash Core.
  • Let the builder manage db_path -- never set it manually in test configs.
  • Use set_genesis_state() for most tests; use set_initial_state_structure() only for low-level storage tests.

Don't:

  • Disable verifications in production code -- PlatformTestConfig is #[cfg(feature)] guarded.
  • Create Platform instances directly -- always use TestPlatformBuilder.
  • Share temporary directories between tests -- each test gets its own TempDir.
  • Forget with_latest_protocol_version() -- without it, the builder still defaults to latest, but being explicit prevents surprises during protocol upgrades.
  • Use build_with_default_rpc() in CI -- it requires a running Dash Core node.

Builder Pattern

The Dash Platform Rust SDK (packages/rs-sdk) is the primary way applications interact with Dash Platform. Before you can fetch identities, create documents, or broadcast state transitions, you need an Sdk instance. And to create an Sdk, you use the builder pattern.

This chapter covers SdkBuilder, the Sdk struct it produces, and the two modes of operation: normal (real network) and mock (testing).

The Problem

Creating an Sdk requires many pieces of configuration:

  • Network addresses (where are the DAPI nodes?)
  • Network type (mainnet, testnet, devnet, regtest?)
  • Request settings (timeouts, retries, ban policies)
  • Context provider (where do cached data contracts and quorum keys come from?)
  • Staleness checks (how old can metadata be before we reject it?)
  • Platform version (which protocol version should we use?)
  • Cancellation token (how do we abort pending requests?)
  • TLS certificates (for secure connections)

Most of these have sensible defaults. A constructor with 10 parameters would be unusable. The builder pattern lets you set only what you need.

SdkBuilder

SdkBuilder lives in packages/rs-sdk/src/sdk.rs:

#![allow(unused)]
fn main() {
pub struct SdkBuilder {
    addresses: Option<AddressList>,
    settings: Option<RequestSettings>,
    network: Network,
    core_ip: String,
    core_port: u16,
    core_user: String,
    core_password: Zeroizing<String>,
    proofs: bool,
    version: &'static PlatformVersion,
    context_provider: Option<Box<dyn ContextProvider>>,
    metadata_height_tolerance: Option<u64>,
    metadata_time_tolerance_ms: Option<u64>,
    cancel_token: CancellationToken,

    #[cfg(feature = "mocks")]
    data_contract_cache_size: NonZeroUsize,
    #[cfg(feature = "mocks")]
    token_config_cache_size: NonZeroUsize,
    #[cfg(feature = "mocks")]
    quorum_public_keys_cache_size: NonZeroUsize,
    #[cfg(feature = "mocks")]
    dump_dir: Option<PathBuf>,

    #[cfg(not(target_arch = "wasm32"))]
    ca_certificate: Option<Certificate>,
}
}

Constructor Methods

The builder offers several constructors for different scenarios:

#![allow(unused)]
fn main() {
// Normal mode: connect to specified DAPI nodes
let sdk = SdkBuilder::new(address_list)
    .with_network(Network::Testnet)
    .build()?;

// Mock mode: no network, useful for tests
let sdk = SdkBuilder::new_mock()
    .build()?;

// Convenience (not yet implemented):
let sdk = SdkBuilder::new_testnet().build()?;
let sdk = SdkBuilder::new_mainnet().build()?;
}

The key distinction: if addresses is Some, you get a real DapiClient that connects to DAPI nodes over gRPC. If addresses is None (the mock path), you get a MockDapiClient that responds with pre-programmed data.

Configuration Methods

Every builder method follows the same signature pattern: take mut self, modify a field, return self:

#![allow(unused)]
fn main() {
impl SdkBuilder {
    pub fn with_network(mut self, network: Network) -> Self {
        self.network = network;
        self
    }

    pub fn with_settings(mut self, settings: RequestSettings) -> Self {
        self.settings = Some(settings);
        self
    }

    pub fn with_version(mut self, version: &'static PlatformVersion) -> Self {
        self.version = version;
        self
    }

    pub fn with_context_provider<C: ContextProvider + 'static>(
        mut self,
        context_provider: C,
    ) -> Self {
        self.context_provider = Some(Box::new(context_provider));
        self
    }

    pub fn with_cancellation_token(
        mut self,
        cancel_token: CancellationToken,
    ) -> Self {
        self.cancel_token = cancel_token;
        self
    }
}
}

This lets you chain configuration fluently:

#![allow(unused)]
fn main() {
let sdk = SdkBuilder::new(addresses)
    .with_network(Network::Testnet)
    .with_version(PlatformVersion::latest())
    .with_settings(RequestSettings { retries: Some(5), ..Default::default() })
    .with_context_provider(my_provider)
    .with_cancellation_token(token)
    .build()?;
}

Staleness Configuration

The SDK protects against stale responses from out-of-date nodes:

#![allow(unused)]
fn main() {
// Reject responses whose height is behind by more than 1 block
let sdk = SdkBuilder::new(addresses)
    .with_height_tolerance(Some(1))     // default
    .with_time_tolerance(Some(360_000)) // 6 minutes
    .build()?;
}

Height tolerance defaults to Some(1) -- if a node returns metadata with a height more than 1 block behind the last seen height, the SDK considers it stale. Time tolerance defaults to None (disabled) because it requires synchronized clocks.
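The height check reduces to a small predicate. A sketch under our own names (the SDK's internal function is not shown in this chapter):

```rust
/// Returns true when a response's reported height is too far behind
/// the highest height this client has already seen.
fn is_stale(last_seen_height: u64, response_height: u64, tolerance: Option<u64>) -> bool {
    match tolerance {
        // None disables the check entirely.
        None => false,
        Some(t) => response_height.saturating_add(t) < last_seen_height,
    }
}

fn main() {
    // With the default tolerance of 1 block:
    assert!(!is_stale(100, 100, Some(1))); // current height: fine
    assert!(!is_stale(100, 99, Some(1)));  // one block behind: still fine
    assert!(is_stale(100, 98, Some(1)));   // two blocks behind: stale
    assert!(!is_stale(100, 50, None));     // check disabled
}
```

Note that "ahead" is never stale: a node reporting a newer height than last seen simply advances the client's high-water mark.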

Dash Core Integration

For development, the SDK can use Dash Core as a wallet and context provider:

#![allow(unused)]
fn main() {
let sdk = SdkBuilder::new(addresses)
    .with_core("127.0.0.1", 19998, "user", "password")
    .build()?;
}

This is a convenience method that internally creates a GrpcContextProvider backed by Core's RPC interface. For production, you should implement ContextProvider yourself.

Dump Directory

For debugging, the SDK can record all gRPC requests and responses to disk:

#![allow(unused)]
fn main() {
let sdk = SdkBuilder::new(addresses)
    .with_dump_dir(Path::new("./sdk-dumps"))
    .build()?;
}

This creates files like msg-*.json, quorum_pubkey-*.json, and data_contract-*.json that can be replayed in mock mode.

The Sdk Struct

The build() method produces an Sdk:

#![allow(unused)]
fn main() {
pub struct Sdk {
    pub network: Network,
    inner: SdkInstance,
    proofs: bool,
    internal_cache: Arc<InternalSdkCache>,
    context_provider: ArcSwapOption<Box<dyn ContextProvider>>,
    metadata_last_seen_height: Arc<atomic::AtomicU64>,
    metadata_height_tolerance: Option<u64>,
    metadata_time_tolerance_ms: Option<u64>,
    pub(crate) cancel_token: CancellationToken,
    pub(crate) dapi_client_settings: RequestSettings,
}
}

SdkInstance: Normal vs Mock

The inner field is an enum that holds either a real or mock client:

#![allow(unused)]
fn main() {
enum SdkInstance {
    Dapi {
        dapi: DapiClient,
        version: &'static PlatformVersion,
    },
    #[cfg(feature = "mocks")]
    Mock {
        dapi: Arc<Mutex<MockDapiClient>>,
        mock: Arc<Mutex<MockDashPlatformSdk>>,
        address_list: AddressList,
        version: &'static PlatformVersion,
    },
}
}

All public Sdk methods work identically in both modes. Code that uses the SDK does not know (or care) whether it is talking to a real network or a mock.
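The way an enum-backed facade keeps one public API over two transports can be sketched with toy types (none of these are the SDK's actual definitions; the real dispatch is async and proof-aware):

```rust
/// Toy stand-ins for the real and mock transports.
struct RealClient;
struct MockClient {
    canned: String,
}

enum Instance {
    Real(RealClient),
    Mock(MockClient),
}

struct Client {
    inner: Instance,
}

impl Client {
    /// One public method; callers never learn which transport answered.
    fn fetch_name(&self) -> String {
        match &self.inner {
            Instance::Real(_) => "from-network".to_string(),
            Instance::Mock(m) => m.canned.clone(),
        }
    }
}

fn main() {
    let real = Client { inner: Instance::Real(RealClient) };
    let mock = Client { inner: Instance::Mock(MockClient { canned: "expected".into() }) };
    assert_eq!(real.fetch_name(), "from-network");
    assert_eq!(mock.fetch_name(), "expected");
}
```

Because the variant is private, swapping a test from mock to real (or back) never changes call-site code -- only the builder invocation.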

Thread Safety

Sdk is Clone and thread-safe. It uses Arc for shared state and ArcSwapOption for the context provider (which allows lock-free reads). The mock mode uses tokio::Mutex for the mock client since mock state is modified in async contexts.

Nonce Management

The SDK maintains an internal cache of identity nonces to avoid querying the network on every state transition:

#![allow(unused)]
fn main() {
pub async fn get_identity_nonce(
    &self,
    identity_id: Identifier,
    bump_first: bool,
    settings: Option<PutSettings>,
) -> Result<IdentityNonce, Error> {
    // 1. Check cache
    // 2. If stale or absent, query Platform
    // 3. Optionally bump (increment) before returning
    // 4. Apply IDENTITY_NONCE_VALUE_FILTER mask
}
}

The cache has a staleness timeout (default: 20 minutes). When bump_first is true, the nonce is incremented before being returned -- this is used when creating new state transitions that need the next nonce value.
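A minimal model of that cache-then-query-then-bump flow, using a logical millisecond clock in place of real time (all names here are illustrative, not the SDK's):

```rust
use std::collections::BTreeMap;

/// Toy nonce cache keyed by identity id; `now_ms` is a logical clock.
struct NonceCache {
    entries: BTreeMap<[u8; 32], (u64 /* nonce */, u64 /* fetched_at_ms */)>,
    staleness_ms: u64,
}

impl NonceCache {
    fn new(staleness_ms: u64) -> Self {
        Self { entries: BTreeMap::new(), staleness_ms }
    }

    /// Returns the nonce to use, querying `fetch_from_platform` when the
    /// cached value is absent or stale, and bumping first when requested.
    fn get(
        &mut self,
        id: [u8; 32],
        bump_first: bool,
        now_ms: u64,
        fetch_from_platform: impl Fn() -> u64,
    ) -> u64 {
        let fresh = self
            .entries
            .get(&id)
            .filter(|(_, at)| now_ms.saturating_sub(*at) < self.staleness_ms)
            .map(|(n, _)| *n);
        let mut nonce = fresh.unwrap_or_else(|| fetch_from_platform());
        if bump_first {
            nonce += 1; // a new state transition needs the next value
        }
        self.entries.insert(id, (nonce, now_ms));
        nonce
    }
}

fn main() {
    let mut cache = NonceCache::new(20 * 60 * 1000); // 20-minute staleness
    let id = [0u8; 32];
    // First call misses the cache and queries "Platform".
    assert_eq!(cache.get(id, false, 0, || 5), 5);
    // Within the window, the cached value is reused (closure never called).
    assert_eq!(cache.get(id, true, 1_000, || unreachable!()), 6);
    // After the window, the cache is stale and Platform is queried again.
    assert_eq!(cache.get(id, false, 21 * 60 * 1000, || 9), 9);
}
```

The sketch omits the IDENTITY_NONCE_VALUE_FILTER masking step and concurrency control, both of which the real SDK handles.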

The Quick Mock Path

For tests that need a mock SDK immediately:

#![allow(unused)]
fn main() {
let sdk = Sdk::new_mock();
}

This is a shorthand for SdkBuilder::default().build().unwrap(). It creates an SDK in mock mode with all default settings. You can then configure expectations:

#![allow(unused)]
fn main() {
let mut sdk = Sdk::new_mock();
sdk.mock().expect_fetch(identity, None);
}

Request Settings

The SDK applies a chain of settings to every request:

#![allow(unused)]
fn main() {
const DEFAULT_REQUEST_SETTINGS: RequestSettings = RequestSettings {
    retries: Some(3),
    timeout: None,
    ban_failed_address: None,
    connect_timeout: None,
    max_decoding_message_size: None,
};
}

When building, user-provided settings override defaults:

#![allow(unused)]
fn main() {
let dapi_client_settings = match self.settings {
    Some(settings) => DEFAULT_REQUEST_SETTINGS.override_by(settings),
    None => DEFAULT_REQUEST_SETTINGS,
};
}

And when making individual requests, per-request settings override global settings:

#![allow(unused)]
fn main() {
let settings = sdk
    .dapi_client_settings
    .override_by(request_specific_settings);
}

This three-level cascade (defaults -> builder -> per-request) gives you control without verbosity.
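The cascade works because every field is an Option, so a layer only overrides what it explicitly sets. A toy mirror of RequestSettings (not the SDK's actual definition) shows the mechanic:

```rust
/// Toy settings struct: every field optional so layers override selectively.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Settings {
    retries: Option<u32>,
    timeout_ms: Option<u64>,
}

impl Settings {
    /// Fields set in `other` win; unset fields fall through to `self`.
    fn override_by(self, other: Settings) -> Settings {
        Settings {
            retries: other.retries.or(self.retries),
            timeout_ms: other.timeout_ms.or(self.timeout_ms),
        }
    }
}

const DEFAULTS: Settings = Settings { retries: Some(3), timeout_ms: None };

fn main() {
    // Builder layer: the user only cares about retries.
    let builder = Settings { retries: Some(5), timeout_ms: None };
    // Per-request layer: this one call needs a tight timeout.
    let per_request = Settings { retries: None, timeout_ms: Some(2_000) };

    let effective = DEFAULTS.override_by(builder).override_by(per_request);
    assert_eq!(effective, Settings { retries: Some(5), timeout_ms: Some(2_000) });
}
```

Each layer states only its own intent, and `Option::or` composes them left to right into the effective settings for one request.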

Rules

Do:

  • Use SdkBuilder::new(addresses) for production code with real DAPI connections.
  • Use Sdk::new_mock() for quick unit tests.
  • Set with_context_provider() in production -- the fallback to Core RPC is for development only.
  • Use with_height_tolerance() to detect stale nodes.
  • Clone the SDK freely -- it is designed for shared ownership via Arc.

Don't:

  • Use new_testnet() or new_mainnet() -- they are not implemented yet.
  • Disable proofs (with_proofs(false)) in production -- proofs are the security model.
  • Set metadata_time_tolerance_ms too low -- network delays and time skew can cause false positives.
  • Forget the cancellation token in long-running applications -- without it, you cannot gracefully shut down pending requests.
  • Construct Sdk directly -- always use the builder.

Fetch Traits

Reading data from Dash Platform is the most common SDK operation. You need to look up an identity by its identifier, retrieve a data contract, query documents, check a balance. The SDK provides a unified abstraction for all of these: the Fetch and FetchMany traits.

This chapter covers how these traits work, how queries are formed, how proofs are verified, and how different Platform types plug into the system.

The Problem

Platform stores many different types of data: identities, data contracts, documents, balances, epoch info, votes, token configurations, and more. Each requires a different gRPC request, returns a different response, and needs different proof verification logic.

Without an abstraction, every type would need its own fetch function with duplicated retry logic, proof parsing, metadata validation, and error handling. The Fetch trait eliminates that duplication.

The Fetch Trait

Fetch is defined in packages/rs-sdk/src/platform/fetch.rs:

#![allow(unused)]
fn main() {
#[async_trait::async_trait]
pub trait Fetch
where
    Self: Sized
        + Debug
        + MockResponse
        + FromProof<
            <Self as Fetch>::Request,
            Request = <Self as Fetch>::Request,
            Response = <<Self as Fetch>::Request as DapiRequest>::Response,
        >,
{
    /// The gRPC request type used to fetch this object.
    type Request: TransportRequest
        + Into<<Self as FromProof<<Self as Fetch>::Request>>::Request>;

    /// Fetch a single object from Platform.
    async fn fetch<Q: Query<<Self as Fetch>::Request>>(
        sdk: &Sdk,
        query: Q,
    ) -> Result<Option<Self>, Error> {
        Self::fetch_with_settings(sdk, query, RequestSettings::default()).await
    }

    /// Fetch with metadata (block height, time, etc.)
    async fn fetch_with_metadata<Q: Query<<Self as Fetch>::Request>>(
        sdk: &Sdk,
        query: Q,
        settings: Option<RequestSettings>,
    ) -> Result<(Option<Self>, ResponseMetadata), Error> { /* ... */ }

    /// Fetch with metadata and the raw proof.
    async fn fetch_with_metadata_and_proof<Q: Query<<Self as Fetch>::Request>>(
        sdk: &Sdk,
        query: Q,
        settings: Option<RequestSettings>,
    ) -> Result<(Option<Self>, ResponseMetadata, Proof), Error> { /* ... */ }

    /// Fetch with custom request settings.
    async fn fetch_with_settings<Q: Query<<Self as Fetch>::Request>>(
        sdk: &Sdk,
        query: Q,
        settings: RequestSettings,
    ) -> Result<Option<Self>, Error> { /* ... */ }

    /// Convenience: fetch by identifier.
    async fn fetch_by_identifier(
        sdk: &Sdk,
        id: Identifier,
    ) -> Result<Option<Self>, Error>
    where
        Identifier: Query<<Self as Fetch>::Request>,
    {
        Self::fetch(sdk, id).await
    }
}
}

The Key Insight: Option Semantics

Notice the return type: Result<Option<Self>, Error>.

  • Ok(Some(item)) -- the object was found and verified.
  • Ok(None) -- the object was proven to not exist. This is not an error; it is a cryptographic proof of absence.
  • Err(error) -- something went wrong (network failure, proof verification failure, etc.).

This design means "not found" is a normal, expected outcome. Code that uses Fetch does not need to handle "not found" as an error case.

Usage

#![allow(unused)]
fn main() {
use dash_sdk::platform::{Fetch, Identifier, Identity};

// Fetch an identity
let identity = Identity::fetch(&sdk, some_identifier).await?;

match identity {
    Some(id) => println!("Found identity with balance: {}", id.balance()),
    None => println!("Identity does not exist"),
}
}

Implementing Fetch for a Type

For most types, implementing Fetch is a one-liner:

#![allow(unused)]
fn main() {
impl Fetch for Identity {
    type Request = IdentityRequest;
}

impl Fetch for dpp::prelude::DataContract {
    type Request = platform_proto::GetDataContractRequest;
}

impl Fetch for drive_proof_verifier::types::IdentityBalance {
    type Request = platform_proto::GetIdentityBalanceRequest;
}

impl Fetch for drive_proof_verifier::types::IdentityNonceFetcher {
    type Request = platform_proto::GetIdentityNonceRequest;
}

impl Fetch for ExtendedEpochInfo {
    type Request = platform_proto::GetEpochsInfoRequest;
}

impl Fetch for Vote {
    type Request = platform_proto::GetContestedResourceIdentityVotesRequest;
}

impl Fetch for drive_proof_verifier::types::TotalCreditsInPlatform {
    type Request = platform_proto::GetTotalCreditsInPlatformRequest;
}
}

The type Request associates each fetchable type with its gRPC request message. The default method implementations handle everything else -- sending the request, parsing the proof, verifying metadata. All you need to provide is the request type.

Document: A Custom Override

Documents are special because they depend on a data contract schema for deserialization. If the cached contract is outdated, deserialization fails. The Document implementation overrides the default to add retry logic:

#![allow(unused)]
fn main() {
#[async_trait::async_trait]
impl Fetch for Document {
    type Request = DocumentQuery;

    async fn fetch_with_metadata_and_proof<Q: Query<<Self as Fetch>::Request>>(
        sdk: &Sdk,
        query: Q,
        settings: Option<RequestSettings>,
    ) -> Result<(Option<Self>, ResponseMetadata, Proof), Error> {
        let document_query: DocumentQuery = query.query(sdk.prove())?;

        // First attempt with current (possibly cached) contract
        match fetch_request(sdk, &document_query, settings).await {
            Ok(result) => Ok(result),
            Err(e) if is_document_deserialization_error(&e) => {
                // Contract schema might have changed -- refetch it
                let fresh_query =
                    refetch_contract_for_query(sdk, &document_query).await?;
                fetch_request(sdk, &fresh_query, settings).await
            }
            Err(e) => Err(e),
        }
    }
}
}

If deserialization fails with a CorruptedSerialization error, the SDK refetches the data contract from the network, updates the cache, and retries. This handles the case where a contract was updated but the local cache still has the old version.

The Query Trait

Query converts user-friendly search criteria into gRPC request messages:

#![allow(unused)]
fn main() {
pub trait Query<T: TransportRequest + Mockable>: Send + Debug + Clone {
    fn query(self, prove: bool) -> Result<T, Error>;
}
}

The simplest implementation: any TransportRequest is a query for itself:

#![allow(unused)]
fn main() {
impl<T> Query<T> for T
where
    T: TransportRequest + Sized + Send + Sync + Clone + Debug,
{
    fn query(self, prove: bool) -> Result<T, Error> {
        if !prove {
            unimplemented!("queries without proofs are not supported");
        }
        Ok(self)
    }
}
}

But you can also implement Query for more ergonomic types. For example, Identifier implements Query<GetIdentityRequest>, so you can write:

#![allow(unused)]
fn main() {
let identity = Identity::fetch(&sdk, my_identifier).await?;
}

instead of constructing a GetIdentityRequest proto message by hand.
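
To see the shape of such a conversion, here is a self-contained mirror of the pattern using stand-in types. The real trait in packages/rs-sdk carries additional bounds (TransportRequest, Mockable, Send), and the struct fields below are illustrative, not the actual proto message:

```rust
use std::fmt::Debug;

// Stand-in types for illustration only; the real trait lives in
// packages/rs-sdk and carries more bounds (TransportRequest, Mockable, Send).
#[derive(Clone, Debug, PartialEq)]
struct GetIdentityRequest {
    id: Vec<u8>,
    prove: bool,
}

trait Query<T>: Clone + Debug {
    fn query(self, prove: bool) -> Result<T, String>;
}

#[derive(Clone, Debug)]
struct Identifier([u8; 4]);

// The ergonomic conversion: an Identifier knows how to become a request.
impl Query<GetIdentityRequest> for Identifier {
    fn query(self, prove: bool) -> Result<GetIdentityRequest, String> {
        Ok(GetIdentityRequest { id: self.0.to_vec(), prove })
    }
}

fn main() {
    let request = Identifier([1, 2, 3, 4]).query(true).unwrap();
    assert!(request.prove);
    assert_eq!(request.id, vec![1, 2, 3, 4]);
}
```

The caller never sees the request type; it just passes the identifier.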

The FromProof Trait

Every Fetch implementation requires that the fetched type implements FromProof. This trait, defined in packages/rs-drive-proof-verifier/src/proof.rs, verifies the cryptographic proof returned by the Platform node:

#![allow(unused)]
fn main() {
pub trait FromProof<Req> {
    type Request;
    type Response;

    /// Parse and verify the proof, returning the requested object.
    ///
    /// Returns:
    /// - Ok(Some(object, metadata)) when found
    /// - Ok(None) when proven to not exist
    /// - Err when verification fails
    fn maybe_from_proof_with_metadata(
        request: Self::Request,
        response: Self::Response,
        network: Network,
        platform_version: &PlatformVersion,
        provider: &impl ContextProvider,
    ) -> Result<(Option<Self>, ResponseMetadata, Proof), Error>
    where
        Self: Sized;
}
}

The chain is: Query produces a request -> DAPI returns a response with a proof -> FromProof verifies the proof and extracts the object. Every step is type-safe and generic over the specific Platform type being fetched.

FetchMany: Retrieving Collections

FetchMany extends the pattern to collections:

#![allow(unused)]
fn main() {
pub trait FetchMany<K: Ord, O: FromIterator<(K, Option<Self>)>>
where
    Self: Sized,
    O: MockResponse
        + FromProof<Self::Request, ...>
        + Send
        + Default,
{
    type Request: TransportRequest;

    async fn fetch_many<Q: Query<Self::Request>>(
        sdk: &Sdk,
        query: Q,
    ) -> Result<O, Error> { /* ... */ }

    // ... with_settings, with_metadata, with_limit variants
}
}

The O type parameter is the output collection type. It must implement FromIterator<(K, Option<Self>)> -- a collection of key-value pairs where the value might be None (proven absent). This handles queries like "fetch documents matching these criteria" where some requested items might not exist.
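
The FromIterator<(K, Option<Self>)> bound can be demonstrated with standard-library types alone. In the SDK, K is typically an Identifier and the values are fetched objects; the types below are stand-ins:

```rust
use std::collections::BTreeMap;

// Collect (key, maybe-value) pairs the way FetchMany's O parameter does.
// Stand-in types: in the SDK, keys are Identifiers and values are fetched objects.
fn collect_results(
    results: Vec<([u8; 2], Option<String>)>,
) -> BTreeMap<[u8; 2], Option<String>> {
    // This is the FromIterator<(K, Option<Self>)> bound in action.
    results.into_iter().collect()
}

fn main() {
    let out = collect_results(vec![
        ([0, 1], Some("found".to_string())),
        ([0, 2], None), // proven absent, but still present in the output
    ]);
    assert_eq!(out.len(), 2);
    assert!(out.get(&[0, 2]).unwrap().is_none());
}
```

Note that absent items are not dropped: they appear in the output map as None, preserving the proof-of-absence semantics.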

The Internal Fetch Pipeline

When you call Identity::fetch(&sdk, id), here is what happens:

  1. Query conversion: id.query(true) produces a GetIdentityRequest with proofs enabled.

  2. Request execution with retry: The SDK sends the request to a DAPI node, with automatic retry logic:

    #![allow(unused)]
    fn main() {
    let fut = |settings: RequestSettings| async move {
        let response = request.clone().execute(sdk, settings).await?;
        let (object, metadata, proof) = sdk
            .parse_proof_with_metadata_and_proof(request, response)
            .await?;
        Ok((object, metadata, proof))
    };
    
    retry(sdk.address_list(), settings, fut).await
    }
  3. Proof verification: parse_proof_with_metadata_and_proof calls FromProof::maybe_from_proof_with_metadata, which verifies the GroveDB proof against quorum signatures.

  4. Metadata validation: The SDK checks that the response metadata (height, time) is fresh enough based on the configured tolerances.

  5. Result return: The verified object (or None) is returned to the caller.

Rules

Do:

  • Use Fetch::fetch() for single-object lookups by identifier.
  • Use FetchMany::fetch_many() for queries that return collections.
  • Handle Ok(None) as a normal case -- it means the object does not exist, proven cryptographically.
  • Implement Fetch for new types by specifying just type Request.
  • Override fetch_with_metadata_and_proof only when you need custom logic (like the Document retry pattern).

Don't:

  • Treat Ok(None) as an error -- "not found" is a valid, proven result.
  • Bypass the Fetch trait to make raw gRPC calls -- you would skip proof verification.
  • Forget to implement FromProof for new fetchable types -- without it, proofs cannot be verified.
  • Disable proofs in production -- query(prove: false) is not supported and will panic.
  • Implement Query conversions that lose information -- the query must fully specify what to fetch.

Put Operations

Reading data from Platform is handled by Fetch. Writing data -- creating documents, deploying contracts, registering identities -- is handled by the put operation traits. This chapter covers the write path through the SDK: how state transitions are built, signed, broadcast, and confirmed.

The Problem

Writing to Platform is fundamentally different from reading. A read is a simple request/response: send a query, get back a proof. A write involves multiple steps:

  1. Determine the correct nonce for the identity.
  2. Build a state transition from the data to be written.
  3. Sign the transition with the identity's private key.
  4. Broadcast the signed transition to the network.
  5. Wait for the transition to be included in a block.
  6. Verify the proof that the write was applied.

Each step can fail independently, and the SDK needs to handle all of them coherently.

The PutDocument Trait

The primary write trait for documents is PutDocument, defined in packages/rs-sdk/src/platform/transition/put_document.rs:

#![allow(unused)]
fn main() {
#[async_trait::async_trait]
pub trait PutDocument<S: Signer<IdentityPublicKey>>: Waitable {
    async fn put_to_platform(
        &self,
        sdk: &Sdk,
        document_type: DocumentType,
        document_state_transition_entropy: Option<[u8; 32]>,
        identity_public_key: IdentityPublicKey,
        token_payment_info: Option<TokenPaymentInfo>,
        signer: &S,
        settings: Option<PutSettings>,
    ) -> Result<StateTransition, Error>;

    async fn put_to_platform_and_wait_for_response(
        &self,
        sdk: &Sdk,
        document_type: DocumentType,
        document_state_transition_entropy: Option<[u8; 32]>,
        identity_public_key: IdentityPublicKey,
        token_payment_info: Option<TokenPaymentInfo>,
        signer: &S,
        settings: Option<PutSettings>,
    ) -> Result<Document, Error>;
}
}

There are two methods:

  • put_to_platform broadcasts the transition and returns immediately. You get back the StateTransition that was broadcast but no confirmation that it was applied.
  • put_to_platform_and_wait_for_response broadcasts and then waits for the platform to include the transition in a block, returning the confirmed Document.

The Nonce-Build-Broadcast-Wait Pipeline

Let's walk through put_to_platform step by step:

Step 1: Get the Nonce

#![allow(unused)]
fn main() {
let new_identity_contract_nonce = sdk
    .get_identity_contract_nonce(
        self.owner_id(),
        document_type.data_contract_id(),
        true,  // bump_first: increment the nonce
        settings,
    )
    .await?;
}

Every identity has a nonce that increments with each state transition targeting a specific contract. The SDK caches nonces internally and bumps them optimistically. bump_first: true means "give me the next unused nonce."

The SDK's nonce management is sophisticated:

  • Nonces are cached per (identity_id, contract_id) pair.
  • If the cache is older than the staleness timeout (default 20 minutes), the SDK re-fetches from Platform.
  • If Platform reports a higher nonce than the cache, the cache is updated.
  • A filter mask (IDENTITY_NONCE_VALUE_FILTER) is applied to keep the nonce in the valid range.
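
The filter-mask behavior can be sketched as follows. This is an illustrative reconstruction, not the SDK's code: the actual IDENTITY_NONCE_VALUE_FILTER constant and bump logic live in rs-dpp and the SDK's nonce manager, and the mask width below is assumed for the example:

```rust
// Hedged sketch of the optimistic bump. The real IDENTITY_NONCE_VALUE_FILTER
// is defined in rs-dpp; the 40-bit mask below is illustrative only.
const NONCE_VALUE_FILTER: u64 = 0xFF_FFFF_FFFF;

// Increment only the value bits, leaving any high (non-value) bits untouched.
fn bump_nonce(cached: u64) -> u64 {
    let next = (cached & NONCE_VALUE_FILTER).wrapping_add(1) & NONCE_VALUE_FILTER;
    (cached & !NONCE_VALUE_FILTER) | next
}

fn main() {
    assert_eq!(bump_nonce(5), 6);
    // Value bits wrap within the mask; high bits are preserved.
    assert_eq!(bump_nonce(NONCE_VALUE_FILTER), 0);
}
```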

Step 2: Build the Transition

The SDK decides whether to create a new document or replace an existing one based on the document's revision:

#![allow(unused)]
fn main() {
let transition = if self.revision().is_some()
    && self.revision().unwrap() != INITIAL_REVISION
{
    // This is an update -- create a replacement transition
    BatchTransition::new_document_replacement_transition_from_document(
        self.clone(),
        document_type.as_ref(),
        &identity_public_key,
        new_identity_contract_nonce,
        settings.user_fee_increase.unwrap_or_default(),
        token_payment_info,
        signer,
        sdk.version(),
        settings.state_transition_creation_options,
    )
} else {
    // This is a new document -- generate entropy and create
    let (document, entropy) = document_state_transition_entropy
        .map(|e| (self.clone(), e))
        .unwrap_or_else(|| {
            let mut rng = StdRng::from_entropy();
            let mut document = self.clone();
            let entropy = rng.gen::<[u8; 32]>();
            document.set_id(Document::generate_document_id_v0(
                &document_type.data_contract_id(),
                &document.owner_id(),
                document_type.name(),
                entropy.as_slice(),
            ));
            (document, entropy)
        });

    BatchTransition::new_document_creation_transition_from_document(
        document,
        document_type.as_ref(),
        entropy,
        &identity_public_key,
        new_identity_contract_nonce,
        settings.user_fee_increase.unwrap_or_default(),
        token_payment_info,
        signer,
        sdk.version(),
        settings.state_transition_creation_options,
    )
}?;
}

For new documents, the SDK generates 32 bytes of entropy (unless you provide your own) and uses it to deterministically generate the document ID. This ensures the same inputs always produce the same document ID.
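
The determinism property can be sketched with standard-library hashing. This is illustrative only: the real Document::generate_document_id_v0 uses a cryptographic hash over (contract_id, owner_id, document type name, entropy), not DefaultHasher:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative only: the real generate_document_id_v0 uses a cryptographic
// hash. DefaultHasher here merely demonstrates the determinism property.
fn sketch_document_id(
    contract_id: &[u8; 32],
    owner_id: &[u8; 32],
    document_type: &str,
    entropy: &[u8; 32],
) -> u64 {
    let mut hasher = DefaultHasher::new();
    contract_id.hash(&mut hasher);
    owner_id.hash(&mut hasher);
    document_type.hash(&mut hasher);
    entropy.hash(&mut hasher);
    hasher.finish()
}

fn main() {
    let (contract, owner, entropy) = ([1u8; 32], [2u8; 32], [3u8; 32]);
    // Same inputs always yield the same ID...
    assert_eq!(
        sketch_document_id(&contract, &owner, "note", &entropy),
        sketch_document_id(&contract, &owner, "note", &entropy),
    );
    // ...while different entropy yields a different ID.
    assert_ne!(
        sketch_document_id(&contract, &owner, "note", &entropy),
        sketch_document_id(&contract, &owner, "note", &[4u8; 32]),
    );
}
```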

Step 3: Validate Structure

Before broadcasting, the SDK validates the transition's basic structure:

#![allow(unused)]
fn main() {
ensure_valid_state_transition_structure(&transition, sdk.version())?;
}

This catches obvious errors (wrong field types, missing required fields) before the transition hits the network, saving a round-trip.

Step 4: Broadcast

#![allow(unused)]
fn main() {
transition.broadcast(sdk, Some(settings)).await?;
}

This sends the serialized transition to a DAPI node.

The BroadcastStateTransition Trait

Broadcasting is implemented as a trait on StateTransition:

#![allow(unused)]
fn main() {
#[async_trait::async_trait]
pub trait BroadcastStateTransition {
    async fn broadcast(
        &self,
        sdk: &Sdk,
        settings: Option<PutSettings>,
    ) -> Result<(), Error>;

    async fn wait_for_response<T: TryFrom<StateTransitionProofResult> + Send>(
        &self,
        sdk: &Sdk,
        settings: Option<PutSettings>,
    ) -> Result<T, Error>;

    async fn broadcast_and_wait<T: TryFrom<StateTransitionProofResult> + Send>(
        &self,
        sdk: &Sdk,
        settings: Option<PutSettings>,
    ) -> Result<T, Error>;
}
}

Three methods, three use cases:

  • broadcast: Fire-and-forget. Returns Ok(()) when the node accepts the transition. The response is always empty -- confirmation comes later.
  • wait_for_response: Poll until the transition is included in a block. Returns the proven result.
  • broadcast_and_wait: Combines both -- broadcast, then wait.

The Wait Mechanism

wait_for_response uses the WaitForStateTransitionResult gRPC endpoint. It sends the transition's hash and blocks until the platform includes it in a block:

#![allow(unused)]
fn main() {
async fn wait_for_response<T>(&self, sdk: &Sdk, settings: Option<PutSettings>)
    -> Result<T, Error>
{
    let factory = |request_settings: RequestSettings| async move {
        let request = self.wait_for_state_transition_result_request()?;
        let response = request.execute(sdk, request_settings).await?;

        // Check for broadcast errors
        if let Some(e) = state_transition_broadcast_error {
            return Err(Error::from(e));
        }

        // Extract and verify the proof
        let proof = grpc_response.proof()?;
        let (_, result) = Drive::verify_state_transition_was_executed_with_proof(
            self,
            &block_info,
            proof.grovedb_proof.as_slice(),
            &context_provider.as_contract_lookup_fn(sdk.version()),
            sdk.version(),
        )?;

        // Convert to the expected output type
        T::try_from(result)
    };

    retry(sdk.address_list(), retry_settings, factory).await
}
}

The wait includes full proof verification: the SDK verifies a GroveDB proof that the state transition was actually applied. This is not just checking a status flag -- it is cryptographic proof of inclusion.

Timeout Handling

wait_for_response supports an optional timeout:

#![allow(unused)]
fn main() {
match wait_timeout {
    Some(timeout) => {
        tokio::time::timeout(timeout, future)
            .await
            .map_err(|_| Error::TimeoutReached(timeout, details))?
    }
    None => future.await,
}
}

Without a timeout, the wait is unbounded. For production use, always set a timeout via PutSettings.
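
The optional-timeout pattern can be illustrated with a std-only analogue (the SDK itself uses tokio::time::timeout; the channel-based version below is a stand-in): Some(duration) bounds the wait, None blocks indefinitely.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// std-only analogue of the optional-timeout wrapper above.
fn wait_with_timeout<T>(
    rx: mpsc::Receiver<T>,
    timeout: Option<Duration>,
) -> Result<T, String> {
    match timeout {
        Some(t) => rx
            .recv_timeout(t)
            .map_err(|_| format!("timeout reached after {:?}", t)),
        None => rx.recv().map_err(|e| e.to_string()),
    }
}

fn main() {
    // A result arrives in time: the bounded wait succeeds.
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || tx.send(42u32));
    assert_eq!(wait_with_timeout(rx, Some(Duration::from_secs(1))), Ok(42));

    // A sender that never sends: the bounded wait returns an error.
    let (_tx, rx) = mpsc::channel::<u32>();
    assert!(wait_with_timeout(rx, Some(Duration::from_millis(10))).is_err());
}
```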

The Waitable Trait

Waitable provides type-specific post-processing after a broadcast:

#![allow(unused)]
fn main() {
#[async_trait::async_trait]
pub trait Waitable: Sized {
    async fn wait_for_response(
        sdk: &Sdk,
        state_transition: StateTransition,
        settings: Option<PutSettings>,
    ) -> Result<Self, Error>;
}
}

Each type implements this differently:

DataContract and Vote: straightforward delegation:

#![allow(unused)]
fn main() {
impl Waitable for DataContract {
    async fn wait_for_response(
        sdk: &Sdk,
        state_transition: StateTransition,
        settings: Option<PutSettings>,
    ) -> Result<DataContract, Error> {
        state_transition.wait_for_response(sdk, settings).await
    }
}
}

Document: extracts the single document from the batch transition result:

#![allow(unused)]
fn main() {
impl Waitable for Document {
    async fn wait_for_response(
        sdk: &Sdk,
        state_transition: StateTransition,
        settings: Option<PutSettings>,
    ) -> Result<Self, Error> {
        // Verify this is a batch transition with exactly one document
        let doc_id = /* extract from transition */;

        let mut documents: BTreeMap<Identifier, Option<Document>> =
            state_transition.wait_for_response(sdk, settings).await?;

        documents.remove(&doc_id)
            .ok_or(Error::InvalidProvedResponse(...))?
            .ok_or(Error::InvalidProvedResponse(...))
    }
}
}

Identity: handles the "already exists" case specially by falling back to a fetch:

#![allow(unused)]
fn main() {
impl Waitable for Identity {
    async fn wait_for_response(
        sdk: &Sdk,
        state_transition: StateTransition,
        settings: Option<PutSettings>,
    ) -> Result<Self, Error> {
        match state_transition.wait_for_response(sdk, settings).await {
            Ok(identity) => Ok(identity),
            Err(Error::AlreadyExists(_)) => {
                // Identity already exists -- fetch it instead
                let identity_id = /* extract from transition */;
                Identity::fetch(sdk, identity_id).await?
                    .ok_or(Error::Generic("proved to not exist but said to exist"))
            }
            Err(e) => Err(e),
        }
    }
}
}

Error Handling at the SDK Level

Errors during put operations fall into several categories:

  • Nonce errors: The cached nonce was stale. The SDK refreshes nonces on broadcast failure: sdk.refresh_identity_nonce(&owner_id).await
  • Broadcast errors: The network rejected the transition. Returned as StateTransitionBroadcastError.
  • Proof errors: The proof verification failed. Returned as DriveProofError with the raw proof bytes and block info for debugging.
  • Timeout errors: The transition was not included in time. Returned as TimeoutReached with the timeout duration and a description.
  • Conversion errors: The proof result could not be converted to the expected type. Returned as InvalidProvedResponse.

PutSettings

All put operations accept optional PutSettings:

#![allow(unused)]
fn main() {
pub struct PutSettings {
    pub request_settings: RequestSettings,
    pub identity_nonce_stale_time_s: Option<u64>,
    pub user_fee_increase: Option<u16>,
    pub wait_timeout: Option<Duration>,
    pub state_transition_creation_options: Option<...>,
}
}

The most important field is wait_timeout. In production, always set it to avoid hanging indefinitely.

Rules

Do:

  • Use put_to_platform_and_wait_for_response when you need confirmation.
  • Use put_to_platform when you want fire-and-forget semantics.
  • Always set wait_timeout in production.
  • Let the SDK manage nonces -- do not manually set them.
  • Handle Error::AlreadyExists gracefully, especially for identity creation.

Don't:

  • Call broadcast without eventually calling wait_for_response -- you will not know if the transition succeeded.
  • Retry a failed transition without refreshing the nonce -- the old nonce may be consumed.
  • Set user_fee_increase to zero in congested networks -- your transition may be deprioritized.
  • Provide custom entropy unless you need deterministic document IDs for testing.
  • Ignore TimeoutReached errors -- they may indicate network issues that affect subsequent operations.

Identity Keys Deep Dive

Every identity on Dash Platform is controlled by a set of identity public keys. These keys determine what the identity can do: sign state transitions, encrypt messages, transfer credits, vote, or prove masternode ownership. The key system is designed around three axes -- purpose, security level, and key type -- that together define what a key is for, how sensitive it is, and what cryptographic algorithm it uses.

This chapter covers the full key lifecycle: structure, creation, storage, validation, rotation, and the GroveDB tree layout that makes lookups efficient.

Key Structure

An identity public key is represented by IdentityPublicKeyV0:

#![allow(unused)]
fn main() {
pub struct IdentityPublicKeyV0 {
    pub id: KeyID,                            // u32, unique within this identity
    pub purpose: Purpose,                      // what the key is used for
    pub security_level: SecurityLevel,         // how sensitive the key is
    pub key_type: KeyType,                     // cryptographic algorithm
    pub read_only: bool,                       // if true, cannot sign state transitions
    pub data: BinaryData,                      // the public key bytes
    pub disabled_at: Option<TimestampMillis>,  // None = active, Some = disabled
    pub contract_bounds: Option<ContractBounds>, // restrict to specific contract
}
}

Each field serves a specific role:

  • id (KeyID = u32): Sequential identifier assigned at creation. Key IDs are unique within a single identity but not globally. The ID is used to reference the key in state transitions and storage.

  • data (BinaryData): The raw public key bytes. Size depends on key_type: 33 bytes for ECDSA, 48 bytes for BLS, 20 bytes for hash-based types.

  • disabled_at: When set, the key can no longer be used to sign anything. This is a timestamp (milliseconds since epoch), not a boolean, so you know exactly when the key was disabled.

  • read_only: A read-only key can verify signatures but cannot be used to sign new state transitions. This is enforced at validation time.

Purpose

The Purpose enum defines what a key is authorized to do:

Purpose          Value   Description
AUTHENTICATION   0       General-purpose signing. Every identity must have at least one MASTER-level authentication key.
ENCRYPTION       1       Encrypt data. Cannot sign documents or state transitions.
DECRYPTION       2       Decrypt data. Cannot sign documents or state transitions.
TRANSFER         3       Sign credit transfers, withdrawals, and token operations. Required at CRITICAL security level.
SYSTEM           4       System operations. Cannot sign documents.
VOTING           5       Cast masternode votes. Cannot sign documents.
OWNER            6       Prove ownership of a masternode or evonode.

Purposes are grouped by searchability in the storage layer:

  • Searchable: AUTHENTICATION, TRANSFER, VOTING -- these get indexed in the key reference tree so they can be looked up by purpose and security level.
  • Non-searchable: ENCRYPTION, DECRYPTION, SYSTEM, OWNER -- stored but not indexed for search.

The practical effect: if a Platform node needs to find "the TRANSFER key for identity X", it can do a direct tree lookup. But finding "the ENCRYPTION key for identity X" requires fetching all keys and filtering client-side.

Security Level

Security levels form a strict hierarchy:

MASTER (0)  >  CRITICAL (1)  >  HIGH (2)  >  MEDIUM (3)
  strongest                                    weakest

The numeric value is inverted from what you might expect: lower value = stronger security. This matters because many operations check key.security_level().stronger_or_equal_security_than(required_level).
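
The inverted ordering can be captured in a few lines. The enum below is a local stand-in (the real type lives in rs-dpp), but the method name mirrors the one used throughout the codebase:

```rust
// Sketch of the inverted ordering; local stand-in for the dpp type.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum SecurityLevel {
    Master = 0,
    Critical = 1,
    High = 2,
    Medium = 3,
}

impl SecurityLevel {
    // Lower discriminant = stronger, so "stronger or equal" is `<=`.
    fn stronger_or_equal_security_than(self, other: SecurityLevel) -> bool {
        (self as u8) <= (other as u8)
    }
}

fn main() {
    use SecurityLevel::*;
    assert!(Master.stronger_or_equal_security_than(Critical));
    assert!(Critical.stronger_or_equal_security_than(Critical));
    assert!(!Medium.stronger_or_equal_security_than(High));
}
```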

What Security Level Controls

  1. What the key can sign. A data contract can require that documents be signed with at least a certain security level. A key at MEDIUM cannot sign a document that requires HIGH or above.

  2. What operations the key can perform. Some state transitions require specific security levels:

    • Adding/disabling other keys requires MASTER
    • Credit transfers require CRITICAL (enforced via the TRANSFER purpose)
    • Document operations accept HIGH or MEDIUM depending on the contract
  3. Which purposes allow which levels. Not all combinations are valid for externally added keys (i.e., keys added via identity create/update transitions):

    Purpose          Allowed Security Levels
    AUTHENTICATION   MASTER, CRITICAL, HIGH, MEDIUM
    ENCRYPTION       MEDIUM only
    DECRYPTION       MEDIUM only
    TRANSFER         CRITICAL only
    SYSTEM           Not externally addable (platform-managed)
    VOTING           Not externally addable (platform-managed)
    OWNER            Not externally addable (platform-managed)

    SYSTEM, VOTING, and OWNER keys are created automatically by the platform (e.g., during masternode registration) and cannot be added through state transitions. Attempting to add a key with one of these purposes will fail validation. Similarly, attempting to create a TRANSFER key at HIGH security level will fail because only CRITICAL is allowed for that purpose.

The Master Key Requirement

Every identity must have exactly one MASTER-level AUTHENTICATION key at creation time. This key is the identity's root of trust -- it can add new keys, disable other keys, and perform any operation. Losing access to the master key means losing the ability to manage the identity's key set.

Key Type

The KeyType enum determines the cryptographic algorithm and key size:

Key Type              Value   Size       Unique   Description
ECDSA_SECP256K1       0       33 bytes   Yes      Standard Bitcoin/Dash curve. Default.
BLS12_381             1       48 bytes   Yes      BLS signatures, used by masternodes.
ECDSA_HASH160         2       20 bytes   No       RIPEMD160(SHA256) of an ECDSA public key. Core address type.
BIP13_SCRIPT_HASH     3       20 bytes   No       Script hash. Core address type.
EDDSA_25519_HASH160   4       20 bytes   No       RIPEMD160(SHA256) of an Ed25519 public key.

Unique vs Non-Unique Keys

This distinction is critical for understanding how keys are stored and enforced:

  • Unique key types (ECDSA_SECP256K1, BLS12_381): The full public key is stored, and Platform enforces that no two identities can register the same public key. This is checked in both the unique and non-unique hash tables during insertion. If identity A registers an ECDSA key, identity B cannot register the same key bytes.

  • Non-unique key types (ECDSA_HASH160, BIP13_SCRIPT_HASH, EDDSA_25519_HASH160): Only a 20-byte hash is stored. Multiple identities can share the same hash. This makes sense for address-based key types where the same Dash address might legitimately be associated with multiple identities (e.g., through asset lock transactions).

Core Address Key Types

ECDSA_HASH160 and BIP13_SCRIPT_HASH are specifically for linking Platform identities to Layer 1 (Core) Dash addresses. They store the same 20-byte hash used in Core addresses, enabling cross-layer identity verification without revealing the full public key.

Contract Bounds

A key can optionally be restricted to operations within a specific data contract:

#![allow(unused)]
fn main() {
pub enum ContractBounds {
    /// Key can only be used within a specific contract
    SingleContract { id: Identifier },

    /// Key can only be used within a specific contract and document type
    SingleContractDocumentType {
        id: Identifier,
        document_type_name: String,
    },
}
}

When contract_bounds is set:

  • The key can only sign state transitions that target the specified contract.
  • With SingleContractDocumentType, it is further restricted to a specific document type within that contract.
  • The key cannot be used for general-purpose operations outside the bound contract.

This enables fine-grained delegation: an identity owner can create a key that is only allowed to interact with one specific dApp, limiting exposure if that key is compromised.
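
A hedged sketch of how such a check might look (the enum mirrors the one above; the check itself is illustrative, not Drive's actual validation code):

```rust
// Stand-in types; the real validation lives in rs-drive-abci.
#[derive(PartialEq)]
struct Identifier([u8; 32]);

enum ContractBounds {
    SingleContract { id: Identifier },
    SingleContractDocumentType { id: Identifier, document_type_name: String },
}

fn key_may_sign_for(
    bounds: &Option<ContractBounds>,
    contract: &Identifier,
    document_type: &str,
) -> bool {
    match bounds {
        None => true, // unbounded key: usable anywhere
        Some(ContractBounds::SingleContract { id }) => id == contract,
        Some(ContractBounds::SingleContractDocumentType { id, document_type_name }) => {
            id == contract && document_type_name == document_type
        }
    }
}

fn main() {
    let dapp = Identifier([7u8; 32]);
    let bounds = Some(ContractBounds::SingleContractDocumentType {
        id: Identifier([7u8; 32]),
        document_type_name: "message".to_string(),
    });
    assert!(key_may_sign_for(&bounds, &dapp, "message"));
    assert!(!key_may_sign_for(&bounds, &dapp, "profile"));
    assert!(!key_may_sign_for(&bounds, &Identifier([8u8; 32]), "message"));
}
```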

Storage in GroveDB

Identity keys are stored across multiple trees in GroveDB for efficient access patterns.

Identity-Level Trees

Each identity has its own subtree under the root Identities tree:

Identities [RootTree::Identities]
└── {identity_id (32 bytes)}
    ├── IdentityTreeKeys [128]
    │   └── {key_id (varint)} → serialized IdentityPublicKey
    │
    ├── IdentityTreeKeyReferences [160]
    │   ├── AUTHENTICATION [0]
    │   │   ├── MASTER [0]
    │   │   │   └── {key_id} → reference to IdentityTreeKeys
    │   │   ├── CRITICAL [1]
    │   │   │   └── ...
    │   │   ├── HIGH [2]
    │   │   │   └── ...
    │   │   └── MEDIUM [3]    ← pre-created at identity creation
    │   │       └── ...
    │   ├── TRANSFER [3]
    │   │   └── {key_id} → reference
    │   └── VOTING [5]
    │       └── {key_id} → reference
    │
    ├── IdentityTreeRevision [192]
    ├── IdentityTreeNonce [64]
    └── IdentityContractInfo [32]

IdentityTreeKeys stores the actual serialized key data, keyed by the key ID encoded as a varint.

IdentityTreeKeyReferences provides a searchable index organized by purpose and (for AUTHENTICATION) security level. Each entry is a GroveDB reference pointing back to the actual key in IdentityTreeKeys.

The MEDIUM security level subtree under AUTHENTICATION is pre-created during identity initialization, even if no MEDIUM keys exist yet. Other security level subtrees are created on-demand when a key with that level is first added.

Global Key Hash Tables

Two root-level trees provide reverse lookups from key hashes to identity IDs:

UniquePublicKeyHashesToIdentities [24]
└── {key_hash (20 bytes)} → identity_id (32 bytes)

NonUniquePublicKeyKeyHashesToIdentities [8]
└── {key_hash (20 bytes)}
    └── {identity_id (32 bytes)} → empty item

The unique table is a flat mapping: one hash to one identity. Insertion fails if the hash already exists in either table.

The non-unique table uses a nested structure: each key hash has a subtree containing identity IDs as keys. This allows multiple identities to share the same key hash.

Key Hash Computation

All key types are hashed to 20 bytes for storage in the hash tables:

  • ECDSA_SECP256K1 (33 bytes): RIPEMD160(SHA256(pubkey))
  • BLS12_381 (48 bytes): RIPEMD160(SHA256(pubkey))
  • ECDSA_HASH160 (20 bytes): stored as-is (already a hash)
  • BIP13_SCRIPT_HASH (20 bytes): stored as-is
  • EDDSA_25519_HASH160 (20 bytes): stored as-is
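
Since only full public keys need hashing, the dispatch reduces to a check on the key type. A sketch with an illustrative enum (the real key types live in rs-dpp):

```rust
// Sketch: which key types are hashed before insertion into the 20-byte
// hash tables. The enum is illustrative; the real key types live in rs-dpp.
#[derive(Clone, Copy, PartialEq, Debug)]
enum KeyType {
    EcdsaSecp256k1,    // 33-byte full public key
    Bls12_381,         // 48-byte full public key
    EcdsaHash160,      // already a 20-byte hash
    Bip13ScriptHash,   // already a 20-byte hash
    Eddsa25519Hash160, // already a 20-byte hash
}

/// True when the raw key data must be run through RIPEMD160(SHA256(..))
/// to produce the 20-byte table key; hash-based types are stored as-is.
fn needs_hashing(key_type: KeyType) -> bool {
    matches!(key_type, KeyType::EcdsaSecp256k1 | KeyType::Bls12_381)
}
```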

Key Lifecycle

Creation

Keys are added to an identity either at identity creation or via an IdentityUpdate state transition.

At Identity Creation:

  • The state transition includes IdentityPublicKeyInCreation objects
  • Validation enforces exactly one MASTER-level AUTHENTICATION key
  • Each key also carries a signature field (proving the creator holds the private key)
  • After validation, keys are converted to IdentityPublicKey with disabled_at = None

Via IdentityUpdate:

  • The add_public_keys field carries new IdentityPublicKeyInCreation objects
  • Multiple keys can be added in a single transition
  • The transition must be signed by a key with sufficient security level
  • New key IDs must not collide with existing keys on the identity

The insertion process differs by key type:

  1. Unique keys: Check both hash tables for conflicts, insert into UniquePublicKeyHashesToIdentities, insert key data, create references.
  2. Non-unique keys: Create subtree under hash if needed, insert identity ID, insert key data, create references.
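
The two insertion paths can be modeled with toy tables (BTreeMaps standing in for the GroveDB trees; illustrative only, with references and key data omitted):

```rust
use std::collections::{BTreeMap, BTreeSet};

type KeyHash = [u8; 20];
type IdentityId = [u8; 32];

/// Toy model of the two global hash tables described above.
#[derive(Default)]
struct HashTables {
    unique: BTreeMap<KeyHash, IdentityId>,
    non_unique: BTreeMap<KeyHash, BTreeSet<IdentityId>>,
}

impl HashTables {
    /// Unique insertion: fails if the hash exists in either table.
    fn insert_unique(&mut self, hash: KeyHash, id: IdentityId) -> Result<(), &'static str> {
        if self.unique.contains_key(&hash) || self.non_unique.contains_key(&hash) {
            return Err("key hash already registered");
        }
        self.unique.insert(hash, id);
        Ok(())
    }

    /// Non-unique insertion: creates the subtree under the hash on demand
    /// and adds the identity ID as a key inside it.
    fn insert_non_unique(&mut self, hash: KeyHash, id: IdentityId) {
        self.non_unique.entry(hash).or_default().insert(id);
    }
}
```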

Disabling

Keys are disabled (not deleted) via the disable_public_keys field of IdentityUpdate:

#![allow(unused)]
fn main() {
// IdentityUpdateTransition fields (simplified)
pub struct IdentityUpdateTransition {
    pub add_public_keys: Vec<IdentityPublicKeyInCreation>,
    pub disable_public_keys: Vec<KeyID>,
    // ...remaining fields elided
}
}

When a key is disabled:

  1. The key is fetched from storage.
  2. disabled_at is set to the current block timestamp (milliseconds).
  3. The serialized key is replaced in IdentityTreeKeys.
  4. Key references in IdentityTreeKeyReferences are refreshed.

A disabled key:

  • Cannot be used to sign any state transition (checked during signature verification)
  • Remains in storage (can still be read)
  • Can be re-enabled in the future

Re-enabling

Keys can be re-enabled by clearing the disabled_at field:

  1. The key is fetched from storage.
  2. disabled_at is set to None.
  3. The serialized key is replaced.
  4. References are refreshed.
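
Both lifecycle transitions reduce to toggling a single field on the stored key. A minimal sketch with a toy type (not the real IdentityPublicKey):

```rust
/// Minimal sketch of the disable/re-enable lifecycle: the key is never
/// deleted, only its disabled_at timestamp is toggled.
struct StoredKey {
    disabled_at: Option<u64>, // block timestamp in milliseconds
}

impl StoredKey {
    fn disable(&mut self, block_time_ms: u64) {
        self.disabled_at = Some(block_time_ms);
    }

    fn re_enable(&mut self) {
        self.disabled_at = None;
    }

    /// A disabled key cannot sign (checked during signature verification).
    fn can_sign(&self) -> bool {
        self.disabled_at.is_none()
    }
}
```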

Masternode Keys

Masternode identities have a special rule: all their keys are registered as non-unique, regardless of key type. This allows the same BLS key to be used across multiple masternode identities (e.g., during key rotation or when the same operator runs multiple masternodes).

Signing and Verification

When a state transition is signed:

  1. The transition specifies which key ID it was signed with.
  2. The key is looked up on the signing identity.
  3. Validation checks:
    • The key exists and is not disabled (disabled_at must be None)
    • The key's purpose allows this type of state transition
    • The key's security level meets the minimum required
    • If the key has contract_bounds, the transition targets the bound contract
    • If the key is read_only, it cannot sign
  4. The signature is verified using the appropriate algorithm:
    • ECDSA_SECP256K1: standard secp256k1 signature verification
    • BLS12_381: BLS signature verification
    • ECDSA_HASH160: ECDSA verification (key data is a hash, so verification uses the hash comparison path)
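
The per-key checks in step 3 can be sketched as one predicate (illustrative types and names; the real validation is spread across the state transition pipeline):

```rust
/// Hedged sketch of the signing-time checks listed above.
#[derive(PartialEq, Clone, Copy)]
enum Purpose { Authentication, Transfer, Voting }

struct SigningKey {
    purpose: Purpose,
    security_level: u8, // 0 = MASTER .. 3 = MEDIUM; lower value = stronger
    disabled_at: Option<u64>,
    read_only: bool,
    contract_bounds: Option<[u8; 32]>,
}

/// `required_level` is the weakest security level the transition accepts;
/// the key must be at least that strong (numerically <=).
fn can_sign(
    key: &SigningKey,
    required_purpose: Purpose,
    required_level: u8,
    target_contract: Option<[u8; 32]>,
) -> bool {
    key.disabled_at.is_none()
        && key.purpose == required_purpose
        && key.security_level <= required_level
        && !key.read_only
        && match key.contract_bounds {
            Some(bound) => target_contract == Some(bound),
            None => true,
        }
}
```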

Validation Rules Summary

At identity creation:

  • Exactly 1 MASTER-level AUTHENTICATION key required
  • No duplicate key IDs in the transition
  • No duplicate key data for unique key types
  • Key count must not exceed max_public_keys_in_creation (platform config)
  • Each key's purpose/security level combination must be in the allowed set

At key addition (IdentityUpdate):

  • New key IDs must not conflict with existing keys
  • Unique key hashes must not exist in either the unique or non-unique global tables
  • The signing key must have sufficient security level to add keys
  • All purpose/security level constraints apply

At signing time:

  • Key must not be disabled
  • Key purpose must match the operation
  • Key security level must be at least as strong as the required level
  • Contract bounds must match (if set)
  • Key must not be read-only

Querying Keys

Find identity by public key hash:

#![allow(unused)]
fn main() {
// Returns the identity ID that owns this unique public key
let identity_id = drive.fetch_identity_id_by_unique_public_key_hash(
    key_hash, transaction, platform_version
)?;
}

Find all identities sharing a non-unique key hash:

#![allow(unused)]
fn main() {
// Returns all identity IDs registered under this non-unique key hash
let identity_ids = drive.fetch_identity_ids_by_non_unique_public_key_hash(
    key_hash, transaction, platform_version
)?;
}

Fetch all keys for an identity:

#![allow(unused)]
fn main() {
let keys = identity.public_keys();  // BTreeMap<KeyID, IdentityPublicKey>
}

Fetch by purpose and security level: Uses the IdentityTreeKeyReferences tree to efficiently look up keys without scanning all keys on the identity.

Design Rationale

Why separate purpose and security level? Purpose defines what a key can do; security level defines how sensitive it is. An AUTHENTICATION key at MEDIUM can sign low-sensitivity documents. An AUTHENTICATION key at MASTER can manage the identity itself. This separation lets identities create keys with exactly the right capabilities -- not too much, not too little.

Why disable instead of delete? Disabled keys remain in storage so that historical signatures can still be verified. If a key were deleted, past state transitions signed by that key would become unverifiable.

Why unique vs non-unique hash tables? Full public keys (ECDSA, BLS) must be globally unique to prevent impersonation. But hash-based keys (ECDSA_HASH160, BIP13_SCRIPT_HASH) represent Dash addresses that may legitimately appear in multiple identities -- for example, when the same address is used in multiple asset lock transactions.

Why pre-create the MEDIUM AUTHENTICATION subtree? This is the most commonly used security level for document signing. Pre-creating its tree at identity creation avoids the cost of creating it on the first document submission.

Why contract bounds? They enable the principle of least privilege. An identity can create a key specifically for interacting with one dApp. If that key is compromised, the damage is limited to that single contract -- the attacker cannot use it to transfer credits or interact with other contracts.

BLAST Sync

Blockchain Layered Address Sync Tree (BLAST) is a privacy-preserving synchronization algorithm used by the Dash Platform SDK. It allows wallets to discover which of their keys exist in a server-side Merkle tree without revealing the specific keys being queried.

BLAST is used for two distinct sync tasks:

  • Address balance sync: Discovering which platform addresses have balances and what those balances are.
  • Nullifier sync: Checking which nullifiers have been spent in the shielded pool.

Both follow the same trunk/branch tree-scan pattern, extracted into a shared generic algorithm.

The Problem

A wallet holds a set of keys (addresses or nullifiers) and needs to learn which ones exist in a Merkle tree stored by Platform nodes. The naive approach -- querying each key individually -- leaks the wallet's full key set to the server. Even batching the keys into a single request reveals the exact set.

BLAST solves this by querying subtrees of the Merkle tree rather than individual keys. The server returns a chunk of the tree that contains the target key along with many other keys, making it impossible for the server to determine which specific key the wallet cares about.

Algorithm Overview

The sync has two phases: a tree scan for bulk discovery, and incremental catch-up for staying current between scans.

Phase 1: Tree Scan (Trunk/Branch)

The Underlying Data Structure

Platform stores addresses and nullifiers in a Merk tree -- a balanced binary search tree (BST) where each node is keyed and ordered. Every internal node has a left child (keys less than this node) and a right child (keys greater than this node). Each node also carries a Merkle hash of its subtree, making the entire structure cryptographically verifiable.

The trunk query returns a partial view of this BST: the top N levels are fully expanded (you can see the actual keys and values), while deeper subtrees are truncated to hash placeholders. The boundary between "expanded" and "truncated" defines the leaf nodes of the trunk result.

                          ┌──────┐
                          │  30  │   ← Root node (key=30)
                          └──┬───┘
                      ┌──────┴──────┐
                      ▼             ▼
                   ┌──────┐     ┌──────┐
                   │  15  │     │  45  │   ← Internal nodes (expanded)
                   └──┬───┘     └──┬───┘
                 ┌────┴────┐  ┌────┴────┐
                 ▼         ▼  ▼         ▼
              ┌─────┐  ┌─────┐ ┌─────┐  ┌─────┐
              │  7  │  │ 22  │ │ 38  │  │ 55  │  ← Leaf nodes
              │▓▓▓▓▓│  │▓▓▓▓▓│ │▓▓▓▓▓│  │▓▓▓▓▓│     (children are
              └─────┘  └─────┘ └─────┘  └─────┘      hash placeholders)

              ▓▓▓ = truncated subtree (only hash known, not contents)

The trunk result contains three key pieces of data:

  • elements: A BTreeMap<Vec<u8>, Element> of key-value pairs at expanded nodes. These are fully resolved -- the wallet can read their values directly.
  • leaf_keys: A BTreeMap<Vec<u8>, LeafInfo> of nodes at the truncation boundary. Each LeafInfo has a hash (for verifying subsequent branch queries) and an optional count (number of elements in the truncated subtree).
  • tree: The reconstructed BST structure from the proof, used for key tracing.

Step 1: The Trunk Query

The wallet sends a single trunk query to a Platform node. The request specifies a max_depth (how many levels of the BST to expand). The server returns the trunk elements, the leaf boundary information, and a Merkle proof covering the entire result.

The proof is verified against the quorum-signed root hash, ensuring the server cannot lie about what the tree contains.

#![allow(unused)]
fn main() {
let (trunk_result, metadata) =
    PlatformAddressTrunkState::fetch_with_metadata(sdk, (), Some(settings)).await?;
}

Step 2: Classifying Target Keys via BST Traversal

After receiving the trunk, the wallet classifies each of its target keys by traversing the BST structure. The trace_key_to_leaf method performs a standard binary search:

  1. Start at the root node.
  2. Compare the target key against the current node's key.
  3. If equal: the key is found in the trunk elements.
  4. If less: follow the left child.
  5. If greater: follow the right child.
  6. If the current node is a leaf (its children are hash placeholders): the target key is somewhere in this leaf's truncated subtree, but we can't resolve it yet.
  7. If there is no child to follow: the key is proven absent.

This produces exactly three outcomes for each target key:

Outcome          What it means                   Action
Found            Key exists in trunk elements    Record the value (balance, spent status)
Traced to leaf   Key is in a truncated subtree   Add to KeyLeafTracker for branch querying
Absent           No path exists in the BST       Key is cryptographically proven to not exist

#![allow(unused)]
fn main() {
for key in target_keys {
    if trunk_result.elements.contains_key(&key) {
        // Found directly in trunk -- record it
        result.found.insert(key);
    } else if let Some((leaf_key, info)) = trunk_result.trace_key_to_leaf(&key) {
        // Traces to a leaf subtree -- need a branch query
        tracker.add_key(key, leaf_key, info);
    } else {
        // Proven absent from the tree
        result.absent.insert(key);
    }
}
}

A concrete example: suppose the wallet is looking for key 20 in the tree above. The BST traversal goes: root 30 (20 < 30, go left) -> node 15 (20 > 15, go right) -> leaf 22 (children are hash placeholders). Key 20 traces to leaf 22 because it would be in leaf 22's left subtree. The wallet now knows it needs to query leaf 22's subtree to determine whether key 20 actually exists.

Note that each target key traces to exactly one leaf -- the BST path is deterministic. Multiple target keys may trace to the same leaf if they are close together in the key space.
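
The traversal can be sketched with a toy tree mirroring the diagram above (illustrative types; the real implementation operates on the BST reconstructed from the proof):

```rust
/// Outcome of classifying one target key against a trunk result.
#[derive(Debug, PartialEq)]
enum Outcome {
    Found,             // key sits at an expanded node
    TracedToLeaf(u32), // key lies in this leaf's truncated subtree
    Absent,            // proven absent: the BST path ends with no child
}

/// Toy trunk node: expanded nodes carry children; truncation-boundary
/// leaves have their children replaced by hash placeholders (modeled
/// here as a flag with no children).
struct Node {
    key: u32,
    truncated: bool,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

fn trace_key_to_leaf(mut node: &Node, target: u32) -> Outcome {
    loop {
        if target == node.key {
            return Outcome::Found;
        }
        if node.truncated {
            // Target falls somewhere inside this leaf's hidden subtree.
            return Outcome::TracedToLeaf(node.key);
        }
        let child = if target < node.key { &node.left } else { &node.right };
        match child.as_deref() {
            Some(next) => node = next,
            None => return Outcome::Absent,
        }
    }
}

/// Builds the example tree from the diagram (leaves 7, 22, 38, 55 truncated).
fn diagram_tree() -> Node {
    let leaf = |key| Box::new(Node { key, truncated: true, left: None, right: None });
    Node {
        key: 30,
        truncated: false,
        left: Some(Box::new(Node { key: 15, truncated: false, left: Some(leaf(7)), right: Some(leaf(22)) })),
        right: Some(Box::new(Node { key: 45, truncated: false, left: Some(leaf(38)), right: Some(leaf(55)) })),
    }
}
```

Running the concrete example from above: key 20 walks 30 -> 15 -> 22 and stops at the truncated leaf, so it needs a branch query; key 15 matches an expanded node directly.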

Step 3: Privacy Adjustment

Before querying leaf subtrees, the algorithm applies privacy adjustment to prevent the server from learning which specific keys the wallet cares about.

Each leaf in the trunk result has an optional count -- the number of elements in its truncated subtree. If this count is small (below min_privacy_count, default 32), then querying that specific leaf reveals too much: the server knows the wallet is interested in one of only a few keys.

The fix is to query an ancestor higher in the tree that has enough elements to provide cover:

    Suppose leaf "22" has count=5 (too small for privacy).
    Its parent "15" has count=50 (enough).

    Instead of asking: "give me the subtree rooted at 22"
    The wallet asks:   "give me the subtree rooted at 15"

    Now the server sees a query for a 50-element subtree and cannot
    tell whether the wallet wants key 20 (in 22's subtree) or
    key 10 (in 7's subtree) or any other key under 15.

The get_ancestor method walks up the BST path from the leaf toward the root, stopping at the first ancestor whose count meets or exceeds min_privacy_count. It never returns the root itself (that would be equivalent to re-fetching the entire trunk). If no ancestor has a sufficient count, it falls back to the node one level below the root.

The query depth is adjusted when using an ancestor: since the ancestor is higher in the tree, its subtree is deeper, so the depth parameter is reduced by the number of levels climbed.

Deduplication ensures that if multiple target keys expand to the same ancestor, only one branch query is sent.
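
A minimal sketch of this ancestor selection, assuming the wallet knows the (key, count) pairs along the root-to-leaf path (a simplification of what get_ancestor reads from the trunk result):

```rust
/// Given the path from the root down to a leaf as (node key, subtree
/// element count) pairs, pick the node to actually query. Returns the
/// chosen node's key and how many levels were climbed from the leaf
/// (used to reduce the branch query depth). Illustrative only.
fn privacy_adjusted_target(
    path_root_to_leaf: &[(u32, u64)],
    min_privacy_count: u64,
) -> (u32, usize) {
    let leaf_index = path_root_to_leaf.len() - 1;
    // Walk upward starting at the leaf itself; never select the root
    // (index 0), which would amount to re-fetching the entire trunk.
    for i in (1..=leaf_index).rev() {
        if path_root_to_leaf[i].1 >= min_privacy_count {
            return (path_root_to_leaf[i].0, leaf_index - i);
        }
    }
    // No suitable ancestor: fall back to one level below the root.
    (path_root_to_leaf[1].0, leaf_index - 1)
}
```

With the example above (leaf 22 has count 5, its parent 15 has count 50, min_privacy_count 32), the walk skips the leaf and selects node 15, one level up.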

Step 4: Iterative Branch Queries

For each leaf (or privacy-adjusted ancestor) with unresolved keys, the wallet sends a branch query specifying:

  • The leaf's key (identifies which subtree to expand)
  • The query depth (how many levels to expand)
  • The expected root hash (the leaf's hash from the trunk, used for verification)
  • The checkpoint height (ensures the branch matches the same tree snapshot as the trunk)

The server returns a GroveBranchQueryResult with the same structure as the trunk: expanded elements, new leaf keys at the next truncation boundary, and a Merk proof. The wallet verifies the proof against the expected hash from the parent query.

Each target key in the queried subtree is classified again:

  • Found in the branch elements -- resolved.
  • Traced to a deeper leaf -- the key is in an even deeper truncated subtree. The KeyLeafTracker is updated to point to the new, deeper leaf.
  • Absent -- proven to not exist within this subtree.

    Iteration 1: Trunk query
    ┌────────────────────────────────────────────────────┐
    │  Key 20 traces to leaf 22                          │
    │  Key 41 traces to leaf 38                          │
    └────────────────────────────────────────────────────┘
                            │
                            ▼
    Iteration 2: Branch queries for leaves 22 and 38
    ┌────────────────────────────────────────────────────┐
    │  Key 20: found in leaf 22's subtree → RESOLVED     │
    │  Key 41: traces to deeper leaf 40 → CONTINUE       │
    └────────────────────────────────────────────────────┘
                            │
                            ▼
    Iteration 3: Branch query for leaf 40
    ┌────────────────────────────────────────────────────┐
    │  Key 41: proven absent in leaf 40's subtree → DONE │
    └────────────────────────────────────────────────────┘

Branch queries run in parallel using FuturesUnordered with configurable concurrency (max_concurrent_requests, default: 10). The iteration loop continues until all keys are resolved or max_iterations (default: 50) is reached. In practice, most keys resolve within 2-3 iterations because each branch query expands several levels of the tree.

Phase 2: Incremental Catch-Up

After the tree scan produces a snapshot at some checkpoint height, the wallet needs to catch up to the chain tip. This is done with two sub-phases:

Compacted changes -- Historical balance/nullifier changes aggregated across block ranges. These cover the gap between the checkpoint height and recent history. Each response covers a range of blocks and contains the net changes.

Recent changes -- Per-block changes for the most recent blocks. These provide granular updates from where compacted changes left off to the chain tip.

  checkpoint_height                              chain_tip
        │                                            │
        ▼                                            ▼
  ──────┬────────────────────────────┬───────────────┤
        │   Compacted changes        │ Recent changes│
        │   (block ranges)           │ (per-block)   │
        └────────────────────────────┴───────────────┘

On subsequent syncs, if the elapsed time since the last sync is within full_rescan_after_time_s (default: 7 days), the tree scan is skipped entirely and only the incremental catch-up runs. This makes frequent re-syncs very fast.

The TrunkBranchSyncOps Trait

The shared algorithm is parameterized by the TrunkBranchSyncOps trait, defined in packages/rs-sdk/src/platform/trunk_branch_sync/mod.rs. Each sync module implements this trait to plug in its specific query construction, result processing, and depth limits.

#![allow(unused)]
fn main() {
pub trait TrunkBranchSyncOps {
    /// Module-specific mutable state carried through the scan.
    type Context<'a>: Send where Self: 'a;

    /// Immutable config for parallel branch queries (cloned into each task).
    type BranchQueryConfig: Clone + Send + Sync + 'static;

    // Trunk
    async fn execute_trunk_query(sdk, settings, context)
        -> Result<(GroveTrunkQueryResult, u64, u64), Error>;
    fn process_trunk_result(trunk_result, context, tracker) -> Result<(), Error>;

    // Branch
    fn branch_query_config(context) -> Self::BranchQueryConfig;
    async fn execute_single_branch_query(sdk, config, key, depth, ...)
        -> Result<GroveBranchQueryResult, Error>;
    fn process_branch_result(branch_result, leaf_key, context, tracker)
        -> Result<(), Error>;

    // Limits and hooks
    fn depth_limits(platform_version) -> (u8, u8);
    fn after_branch_iteration(trunk_result, context, tracker) { }
    fn on_branch_query(context);
    fn on_branch_failure(context);
    fn on_elements_seen(context, count);
    fn on_iteration(context, iteration);
    fn set_checkpoint_height(context, height);
}
}

The two associated types deserve attention:

  • Context<'a> is a GAT (generic associated type) that carries mutable state through the algorithm. For nullifiers, this holds the input keys and result sets. For addresses, it holds the address provider, key-to-index mapping, and result.

  • BranchQueryConfig holds immutable parameters needed to construct branch queries that must be sent to async tasks. For nullifiers, this is (pool_type, pool_identifier). For addresses, it is () since no extra parameters are needed.

The after_branch_iteration hook allows the address sync module to implement gap-limit behavior: after each branch iteration, it checks if the provider has extended its pending address list and adds newly pending keys to the tracker.

KeyLeafTracker

The KeyLeafTracker (in trunk_branch_sync/tracker.rs) maintains the mapping between target keys and the leaf subtrees they reside in. It supports:

  • Adding keys: When a key traces to a leaf during trunk processing
  • Updating keys: When a branch query reveals the key is in a deeper subtree
  • Removing keys: When a key is found or proven absent
  • Reference counting: Multiple target keys can map to the same leaf; the leaf stays active until all its keys are resolved

#![allow(unused)]
fn main() {
let mut tracker = KeyLeafTracker::new();

// After trunk query: key traces to leaf subtree
tracker.add_key(target_key, leaf_boundary_key, leaf_info);

// After branch query: key found in subtree
tracker.key_found(&target_key);

// After branch query: key in even deeper subtree
tracker.update_leaf(&target_key, deeper_leaf_key, deeper_info);

// Check what still needs querying
let active = tracker.active_leaves(); // leaves with unresolved keys
let remaining = tracker.remaining_count();
}

Privacy-Adjusted Leaves (Detail)

The get_privacy_adjusted_leaves function (in trunk_branch_sync/mod.rs) implements the privacy adjustment described in Phase 1, Step 3. The full logic for each active leaf:

  1. Calculate the query depth from the leaf's element count using calculate_max_tree_depth_from_count(count), clamped to platform-version bounds [min_query_depth, max_query_depth].
  2. If count >= min_privacy_count: query this leaf directly at the calculated depth.
  3. If count < min_privacy_count: call trunk_result.get_ancestor(&leaf_key, min_privacy_count) to find a higher node. Reduce depth by levels_up (the number of tree levels climbed) so the total subtree size returned stays reasonable.
  4. If no suitable ancestor exists (rare -- means the entire tree is small): query the leaf anyway, accepting reduced privacy.
  5. Deduplicate: if two target keys expand to the same ancestor, only one branch query is emitted (tracked via a BTreeSet<LeafBoundaryKey>).
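
Step 1 can be sketched as follows. The depth formula here (roughly log2 of the count, as expected for a balanced BST) is a plausible stand-in, not the real calculate_max_tree_depth_from_count:

```rust
/// Derive a branch query depth from a leaf's element count and clamp it
/// to the platform-version bounds. The log2 estimate is an assumption
/// for illustration; the real formula lives in the SDK.
fn branch_query_depth(count: u64, min_depth: u8, max_depth: u8) -> u8 {
    // A balanced BST with `count` elements is about log2(count) levels deep.
    let estimated = (64 - count.max(1).leading_zeros()) as u8;
    estimated.clamp(min_depth, max_depth)
}
```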

Concrete Implementations

Address Balance Sync

The address sync module (platform/address_sync/) implements TrunkBranchSyncOps as AddressOps<P> where P: AddressProvider.

The AddressProvider trait is implemented by wallets to supply:

  • The list of pending addresses to check
  • Callbacks when addresses are found or proven absent
  • Gap-limit extension (generating new addresses when prior ones are found)
  • Current balances for incremental-only mode

#![allow(unused)]
fn main() {
// First sync -- full tree scan + incremental catch-up
let result = sdk.sync_address_balances(&mut wallet, None, None).await?;

// Store for next call
let height = result.new_sync_height;
let timestamp = result.new_sync_timestamp;

// Subsequent sync -- incremental only if within 7-day threshold
let result = sdk.sync_address_balances(&mut wallet, None, Some(timestamp)).await?;
}

Address balance sync uses ItemWithSumItem GroveDB elements where the item value contains the nonce (4 bytes big-endian) and the sum value contains the credit balance.

Nullifier Sync

The nullifier sync module (platform/nullifier_sync/) implements TrunkBranchSyncOps as NullifierOps.

Nullifier sync differs from address sync in several ways:

  • Target keys are fixed 32-byte arrays ([u8; 32])
  • Branch queries carry extra config: (pool_type, pool_identifier) to identify the shielded pool
  • No gap-limit behavior (the after_branch_iteration hook is not overridden)
  • Branch query failures are tracked in metrics

#![allow(unused)]
fn main() {
let nullifiers: Vec<[u8; 32]> = vec![/* ... */];

// First sync -- full tree scan + incremental catch-up
let result = sdk.sync_nullifiers(&nullifiers, None, None, None).await?;

// Store for next call
let height = result.new_sync_height;
let timestamp = result.new_sync_timestamp;

// Subsequent sync -- incremental only if within 7-day threshold
let result = sdk.sync_nullifiers(&nullifiers, None, Some(height), Some(timestamp)).await?;
}

Found nullifiers indicate spent notes; absent nullifiers indicate unspent notes.

Sync Mode Decision

Both sync modules use the same logic to decide between full scan and incremental-only:

last_sync_timestamp   Elapsed time                  Mode
None                  --                            Full tree scan + catch-up
Some(ts)              < full_rescan_after_time_s    Incremental only
Some(ts)              >= full_rescan_after_time_s   Full tree scan + catch-up

The default full_rescan_after_time_s is 604800 (7 days). Setting it to 0 forces a full tree scan on every call.
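
The decision amounts to a few lines (a sketch; type and field names are illustrative):

```rust
/// Sketch of the sync-mode decision table. A threshold of 0 makes
/// `elapsed >= threshold` always true, forcing a full scan every call.
#[derive(Debug, PartialEq)]
enum SyncMode {
    FullScanPlusCatchUp,
    IncrementalOnly,
}

fn choose_sync_mode(
    last_sync_timestamp: Option<u64>, // seconds
    now: u64,
    full_rescan_after_time_s: u64,
) -> SyncMode {
    match last_sync_timestamp {
        None => SyncMode::FullScanPlusCatchUp,
        Some(ts) if now.saturating_sub(ts) < full_rescan_after_time_s => {
            SyncMode::IncrementalOnly
        }
        Some(_) => SyncMode::FullScanPlusCatchUp,
    }
}
```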

Configuration

Both modules expose configuration structs with sensible defaults:

Parameter                  Default   Description
min_privacy_count          32        Minimum elements in a queried subtree
max_concurrent_requests    10        Parallel branch queries
max_iterations             50        Safety limit for branch iteration depth
full_rescan_after_time_s   604800    Seconds before forcing a full rescan

Module Structure

packages/rs-sdk/src/platform/
├── trunk_branch_sync/
│   ├── mod.rs        # TrunkBranchSyncOps trait, run_full_tree_scan(),
│   │                 #   get_privacy_adjusted_leaves(), parallel execution
│   └── tracker.rs    # KeyLeafTracker with reference counting
├── address_sync/
│   ├── mod.rs        # AddressOps<P> impl, sync_address_balances(),
│   │                 #   incremental_catch_up()
│   ├── provider.rs   # AddressProvider trait
│   └── types.rs      # AddressSyncConfig, AddressSyncResult, AddressFunds
└── nullifier_sync/
    ├── mod.rs        # NullifierOps impl, sync_nullifiers(),
    │                 #   incremental_catch_up()
    ├── provider.rs   # NullifierProvider trait
    └── types.rs      # NullifierSyncConfig, NullifierSyncResult

Rules

Do:

  • Use sdk.sync_address_balances() or sdk.sync_nullifiers() as the entry points.
  • Persist new_sync_height and new_sync_timestamp from the result and pass them back on the next sync call. This enables incremental-only mode.
  • Implement AddressProvider to integrate with your wallet's key derivation and storage.
  • Set min_privacy_count high enough that individual key lookups cannot be distinguished. The default of 32 is a reasonable minimum.

Don't:

  • Query individual keys directly via the trunk/branch RPCs -- use the sync functions which handle privacy adjustment, iteration, and proof verification.
  • Set max_iterations too low -- complex trees may need many rounds. The default of 50 handles trees with millions of entries.
  • Ignore the full_rescan_after_time_s threshold -- without periodic full rescans, the incremental phase could miss changes that occurred before the last known height.
  • Skip the incremental catch-up phase -- the tree scan snapshot may be slightly stale (the trunk is captured at a specific block height), and the catch-up brings it current.

Binding Patterns

Dash Platform's core logic is written in Rust. But many developers build applications in JavaScript -- browser-based wallets, Node.js services, React Native apps. The wasm-dpp package bridges these worlds by compiling Rust types to WebAssembly and exposing them as JavaScript classes.

This chapter covers the patterns used to create these bindings: the wrapper struct pattern, naming conventions, buffer handling at the boundary, getter/setter patterns, and the Inner trait.

The Problem

Rust and JavaScript have fundamentally different type systems. Rust has ownership, lifetimes, and zero-cost abstractions. JavaScript has garbage collection, prototype chains, and dynamic types. wasm-bindgen bridges the gap, but it requires careful manual work to make the resulting JavaScript API feel natural.

The core challenge: you have a Rust type like Identity with methods like id(), balance(), and public_keys(). You need to expose it to JavaScript as a class with methods like getId(), getBalance(), and getPublicKeys() -- following JavaScript naming conventions while maintaining Rust's safety guarantees.

The Wrapper Struct Pattern

Every Rust type exposed to JavaScript gets a wrapper struct. Here is IdentityWasm from packages/wasm-dpp/src/identity/identity.rs:

#![allow(unused)]
fn main() {
#[wasm_bindgen(js_name=Identity)]
#[derive(Clone)]
pub struct IdentityWasm {
    inner: Identity,
    metadata: Option<Metadata>,
}
}

The pattern has three parts:

  1. #[wasm_bindgen(js_name=Identity)] -- tells wasm_bindgen to expose this struct as Identity in JavaScript, not IdentityWasm.
  2. inner: Identity -- the real Rust type, hidden from JavaScript.
  3. Additional fields -- any extra state needed at the WASM boundary (like metadata here, which is managed separately in JS).

Why a Wrapper?

You cannot put #[wasm_bindgen] directly on Identity for several reasons:

  • Identity is defined in rs-dpp, a different crate. You cannot add attributes to types in other crates.
  • Identity may contain types that wasm_bindgen cannot handle (nested enums, complex generics, trait objects).
  • The JavaScript API should have camelCase methods (getId), not Rust-style snake_case (id).
  • Some conversions (like turning Identifier into a Buffer) only make sense at the WASM boundary.

From/Into Conversions

Every wrapper implements bidirectional conversion:

#![allow(unused)]
fn main() {
impl From<IdentityWasm> for Identity {
    fn from(identity: IdentityWasm) -> Self {
        identity.inner
    }
}

impl From<Identity> for IdentityWasm {
    fn from(identity: Identity) -> Self {
        Self {
            inner: identity,
            metadata: None,
        }
    }
}
}

This lets internal Rust code work with the real Identity type while the WASM boundary works with IdentityWasm. The conversion is zero-cost -- it just moves the inner value.

JavaScript Method Naming

Methods use #[wasm_bindgen(js_name=...)] to follow JavaScript conventions:

#![allow(unused)]
fn main() {
#[wasm_bindgen(js_class=Identity)]
impl IdentityWasm {
    #[wasm_bindgen(js_name=getId)]
    pub fn get_id(&self) -> IdentifierWrapper {
        self.inner.id().into()
    }

    #[wasm_bindgen(js_name=setId)]
    pub fn set_id(&mut self, id: IdentifierWrapper) {
        self.inner.set_id(id.into());
    }

    #[wasm_bindgen(js_name=getBalance)]
    pub fn get_balance(&self) -> u64 {
        self.inner.balance()
    }

    #[wasm_bindgen(js_name=setBalance)]
    pub fn set_balance(&mut self, balance: u64) {
        self.inner.set_balance(balance);
    }
}
}

Note the #[wasm_bindgen(js_class=Identity)] on the impl block -- this associates the methods with the Identity JavaScript class (the js_name from the struct).

In JavaScript, these become:

const identity = new Identity(platformVersion);
const id = identity.getId();
identity.setBalance(1000n);

Getter Properties

For simple values, you can use JavaScript getter syntax:

#![allow(unused)]
fn main() {
#[wasm_bindgen(getter)]
pub fn balance(&self) -> u64 {
    self.inner.balance()
}
}

In JavaScript, this becomes a property access:

const bal = identity.balance;  // no parentheses

Platform uses both patterns -- a getBalance() method and a balance getter -- for the same value. The getter is concise for property-style reads; the method form pairs symmetrically with setBalance() for callers that expect explicit getter/setter methods.

Constructors

The #[wasm_bindgen(constructor)] attribute creates a JavaScript constructor:

#![allow(unused)]
fn main() {
#[wasm_bindgen(constructor)]
pub fn new(platform_version: u32) -> Result<IdentityWasm, JsValue> {
    let platform_version = &PlatformVersion::get(platform_version)
        .map_err(|e| JsValue::from(e.to_string()))?;

    Identity::default_versioned(platform_version)
        .map(Into::into)
        .map_err(from_dpp_err)
}
}

Notice the error handling: Rust Result becomes a JavaScript throw. The from_dpp_err function converts Rust ProtocolError into JavaScript error objects.

Buffer Handling at the Boundary

Binary data crosses the WASM boundary as Buffer (a custom type that wraps Uint8Array):

#![allow(unused)]
fn main() {
#[wasm_bindgen(js_name=toBuffer)]
pub fn to_buffer(&self) -> Result<Buffer, JsValue> {
    let bytes = PlatformSerializable::serialize_to_bytes(
        &self.inner.clone()
    ).with_js_error()?;
    Ok(Buffer::from_bytes(&bytes))
}

#[wasm_bindgen(js_name=fromBuffer)]
pub fn from_buffer(buffer: Vec<u8>) -> Result<IdentityWasm, JsValue> {
    let identity: Identity =
        PlatformDeserializable::deserialize_from_bytes(buffer.as_slice())
            .with_js_error()?;
    Ok(identity.into())
}
}

The toBuffer/fromBuffer pair is the standard serialization interface. Every WASM type that needs to be stored or transmitted implements this pair.

Handling Complex Types: Arrays and Objects

JavaScript arrays and objects require special handling. For collections of public keys:

#![allow(unused)]
fn main() {
#[wasm_bindgen(js_name=getPublicKeys)]
pub fn get_public_keys(&self) -> Vec<JsValue> {
    self.inner
        .public_keys()
        .values()
        .cloned()
        .map(IdentityPublicKeyWasm::from)  // Rust -> Wrapper
        .map(JsValue::from)               // Wrapper -> JsValue
        .collect()
}

#[wasm_bindgen(js_name=setPublicKeys)]
pub fn set_public_keys(&mut self, public_keys: js_sys::Array)
    -> Result<usize, JsValue>
{
    if public_keys.length() == 0 {
        return Err("Must use array of PublicKeys".into());
    }

    let public_keys = public_keys
        .iter()
        .map(|key| {
            key.to_wasm::<IdentityPublicKeyWasm>("IdentityPublicKey")
                .map(|key| {
                    let key = IdentityPublicKey::from(key.to_owned());
                    (key.id(), key)
                })
        })
        .collect::<Result<_, _>>()?;

    self.inner.set_public_keys(public_keys);
    Ok(self.inner.public_keys().len())
}
}

The to_wasm::<T>("TypeName") helper extracts a Rust wrapper from a JsValue, validating that it is the correct type.
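The collect::<Result<_, _>>()? step in set_public_keys is worth pausing on: each element conversion can fail, and collecting into a Result short-circuits on the first error while otherwise building the keyed map. A std-only sketch of the same pattern (to_key here is a toy stand-in for the key-conversion closure):

```rust
use std::collections::BTreeMap;

// Toy fallible conversion, standing in for the to_wasm + From step:
// negative ids are rejected, valid ids become (id, key) pairs.
fn to_key(id: i32) -> Result<(i32, String), String> {
    if id < 0 {
        return Err(format!("bad key id {id}"));
    }
    Ok((id, format!("key-{id}")))
}

fn main() {
    // All conversions succeed: collect builds the whole map.
    let ok: Result<BTreeMap<i32, String>, String> =
        vec![1, 2, 3].into_iter().map(to_key).collect();
    assert_eq!(ok.unwrap().len(), 3);

    // One conversion fails: collect short-circuits with that error.
    let err: Result<BTreeMap<i32, String>, String> =
        vec![1, -2, 3].into_iter().map(to_key).collect();
    assert_eq!(err.unwrap_err(), "bad key id -2");
}
```

This is why a single malformed element in the JavaScript array rejects the whole setPublicKeys call rather than silently dropping the bad key.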

JSON Serialization

Most WASM types provide toJSON and toObject methods:

#![allow(unused)]
fn main() {
#[wasm_bindgen(js_name=toJSON)]
pub fn to_json(&self) -> Result<JsValue, JsValue> {
    let mut value = self.inner.to_object().with_js_error()?;

    // Convert identifiers to Base58 strings for readability
    value.replace_at_paths(
        dpp::identity::IDENTIFIER_FIELDS_RAW_OBJECT,
        ReplacementType::TextBase58,
    ).map_err(|e| e.to_string())?;

    // Convert binary key data to Base64
    let public_keys = value
        .get_array_mut_ref(dpp::identity::property_names::PUBLIC_KEYS)
        .map_err(|e| e.to_string())?;

    for key in public_keys.iter_mut() {
        key.replace_at_paths(
            dpp::identity::identity_public_key::BINARY_DATA_FIELDS,
            ReplacementType::TextBase64,
        ).map_err(|e| e.to_string())?;
    }

    let json = value.try_into_validating_json()
        .map_err(|e| e.to_string())?
        .to_string();

    js_sys::JSON::parse(&json)
}
}

The toJSON method applies human-readable encoding (Base58 for identifiers, Base64 for binary data) before converting to a JavaScript object. This is the format used in APIs and debugging tools.
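The core move is replacing binary-valued fields with text encodings before emitting JSON. A minimal std-only sketch of that idea, using hex in place of Base58/Base64 (std has no base encoders) and a flat map in place of replace_at_paths' path walking:

```rust
use std::collections::BTreeMap;

// Toy value type: a field is either raw bytes or already-encoded text.
#[derive(Debug, PartialEq)]
enum Value {
    Bytes(Vec<u8>),
    Text(String),
}

// Hex stands in for Base58/Base64 purely for illustration.
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{:02x}", b)).collect()
}

// Replace the listed binary fields with their text encoding,
// mimicking what replace_at_paths does over nested paths.
fn humanize(obj: &mut BTreeMap<&'static str, Value>, fields: &[&'static str]) {
    for field in fields {
        if let Some(Value::Bytes(bytes)) = obj.get(field) {
            let text = to_hex(bytes);
            obj.insert(*field, Value::Text(text));
        }
    }
}

fn main() {
    let mut obj = BTreeMap::from([("id", Value::Bytes(vec![0xde, 0xad]))]);
    humanize(&mut obj, &["id"]);
    assert_eq!(obj["id"], Value::Text("dead".into()));
}
```

The real implementation differs in the encoding and in walking nested paths, but the shape is the same: mutate the value tree in place, then serialize the now text-only structure to JSON.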

The Inner Trait

To standardize wrapper access, Platform defines an Inner trait:

#![allow(unused)]
fn main() {
impl Inner for IdentityWasm {
    type InnerItem = Identity;

    fn into_inner(self) -> Self::InnerItem {
        self.inner
    }

    fn inner(&self) -> &Self::InnerItem {
        &self.inner
    }

    fn inner_mut(&mut self) -> &mut Self::InnerItem {
        &mut self.inner
    }
}
}

This trait provides a consistent way for other Rust code in the WASM layer to access the unwrapped type without knowing the wrapper's internal structure.
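From the impl above, the trait declaration can be reconstructed; the following is a hypothetical sketch of its shape, exercised against a toy wrapper (Balance and BalanceWasm are illustrative names, not Platform types):

```rust
// Hypothetical shape of the Inner trait, reconstructed from the impl shown
// above: one associated type and three accessors with standard Rust
// ownership flavors (by value, shared, exclusive).
trait Inner {
    type InnerItem;
    fn into_inner(self) -> Self::InnerItem;
    fn inner(&self) -> &Self::InnerItem;
    fn inner_mut(&mut self) -> &mut Self::InnerItem;
}

struct Balance(u64);

struct BalanceWasm {
    inner: Balance,
}

impl Inner for BalanceWasm {
    type InnerItem = Balance;

    fn into_inner(self) -> Balance {
        self.inner
    }

    fn inner(&self) -> &Balance {
        &self.inner
    }

    fn inner_mut(&mut self) -> &mut Balance {
        &mut self.inner
    }
}

fn main() {
    let mut wrapped = BalanceWasm { inner: Balance(100) };
    wrapped.inner_mut().0 += 1;              // mutate through the wrapper
    assert_eq!(wrapped.inner().0, 101);      // read through the wrapper
    assert_eq!(wrapped.into_inner().0, 101); // consume the wrapper
}
```

The three methods mirror Rust's usual ownership triad, so callers can pick the cheapest access they need without caring how the wrapper stores its value.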

Rules

Do:

  • Follow the wrapper struct pattern: TypeWasm { inner: Type }.
  • Use js_name on the struct and js_class on the impl block.
  • Implement From<Type> for TypeWasm and From<TypeWasm> for Type.
  • Use Buffer::from_bytes for binary data crossing the boundary.
  • Implement toBuffer/fromBuffer for any type that needs serialization.
  • Use from_dpp_err or with_js_error() for error conversion.
  • Implement the Inner trait for consistent wrapper access.

Don't:

  • Expose Rust types directly to WASM -- always wrap them.
  • Use Rust naming conventions (get_id) in the JavaScript API -- use js_name=getId.
  • Return Result<T, ProtocolError> from WASM methods -- convert to Result<T, JsValue>.
  • Forget to handle empty arrays and invalid inputs with clear error messages.
  • Expose internal fields that do not make sense in JavaScript (like Arc or Mutex).
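Putting the first three Do rules together, the skeleton every new wrapper starts from looks roughly like this (std-only sketch with a toy Identity; the real wrappers add the wasm_bindgen attributes shown earlier):

```rust
// Sketch of the wrapper convention: TypeWasm { inner: Type } plus From
// conversions in both directions. Identity here is a toy, not the DPP type.
#[derive(Clone, Debug, PartialEq)]
struct Identity {
    balance: u64,
}

struct IdentityWasm {
    inner: Identity,
}

impl From<Identity> for IdentityWasm {
    fn from(inner: Identity) -> Self {
        Self { inner }
    }
}

impl From<IdentityWasm> for Identity {
    fn from(wrapper: IdentityWasm) -> Self {
        wrapper.inner
    }
}

fn main() {
    let id = Identity { balance: 5 };
    let wrapped = IdentityWasm::from(id.clone()); // Rust -> wrapper
    let unwrapped: Identity = wrapped.into();     // wrapper -> Rust
    assert_eq!(id, unwrapped);
}
```

With both From impls in place, the rest of the WASM layer can convert in either direction with .into() and never touch the inner field directly.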

Error Macros

Dash Platform defines hundreds of consensus errors. Each one needs a WASM binding so JavaScript code can inspect error codes, read messages, and serialize errors for transport. Writing a wrapper struct for every single error by hand would be tedious, error-prone, and a maintenance burden.

This chapter covers the generic_consensus_error! macro that generates WASM bindings automatically, the paste! crate that makes it work, the manual binding pattern for errors that need custom methods, and how the two approaches coexist.

The Problem

Consider Platform's error hierarchy. At the top level:

ConsensusError
  BasicError (dozens of variants)
  StateError (dozens of variants)
  SignatureError (handful of variants)
  FeeError (one variant)

Each variant wraps a specific error struct -- TransitionNoInputsError, DocumentNotFoundError, MasternodeNotFoundError, and so on. There are well over a hundred of these.

Every one needs a JavaScript class with:

  • A getCode() method returning the numeric error code.
  • A message getter returning the human-readable error string.
  • A serialize() method for wire encoding.
  • A From<&RustType> implementation for conversion.

Writing all of this for each error would mean hundreds of nearly identical files.

The generic_consensus_error! Macro

The macro lives in packages/wasm-dpp/src/errors/generic_consensus_error.rs:

#![allow(unused)]
fn main() {
#[macro_export]
macro_rules! generic_consensus_error {
    ($error_type:ident, $error_instance:expr) => {{
        use {
            dpp::{
                consensus::{codes::ErrorWithCode, ConsensusError},
                serialization::PlatformSerializableWithPlatformVersion,
                version::PlatformVersion,
            },
            paste::paste,
            wasm_bindgen::prelude::wasm_bindgen,
            $crate::buffer::Buffer,
        };

        paste! {
            #[derive(Debug)]
            #[wasm_bindgen(js_name=$error_type)]
            pub struct [<$error_type Wasm>] {
                inner: $error_type
            }

            impl From<&$error_type> for [<$error_type Wasm>] {
                fn from(e: &$error_type) -> Self {
                    Self {
                        inner: e.clone()
                    }
                }
            }

            #[wasm_bindgen(js_class=$error_type)]
            impl [<$error_type Wasm>] {
                #[wasm_bindgen(js_name=getCode)]
                pub fn get_code(&self) -> u32 {
                    ConsensusError::from(self.inner.clone()).code()
                }

                #[wasm_bindgen(getter)]
                pub fn message(&self) -> String {
                    self.inner.to_string()
                }

                pub fn serialize(&self) -> Result<Buffer, JsError> {
                    let bytes = ConsensusError::from(self.inner.clone())
                        .serialize_to_bytes_with_platform_version(
                            PlatformVersion::first(),
                        )
                        .map_err(JsError::from)?;

                    Ok(Buffer::from_bytes(bytes.as_slice()))
                }
            }

            [<$error_type Wasm>]::from($error_instance)
        }
    }};
}
}

What It Generates

For a call like generic_consensus_error!(MasternodeNotFoundError, e), the macro generates:

  1. A wrapper struct: MasternodeNotFoundErrorWasm with inner: MasternodeNotFoundError
  2. A From impl: From<&MasternodeNotFoundError> for MasternodeNotFoundErrorWasm
  3. Three methods:
    • get_code() -- returns the numeric error code
    • message -- returns the Display string
    • serialize() -- encodes the error to bytes
  4. An instantiation: Creates a MasternodeNotFoundErrorWasm from the error reference

The paste! Crate

The magic of [<$error_type Wasm>] comes from the paste crate. Standard Rust macros cannot concatenate identifiers -- there is no Rust equivalent of C's ## token-pasting operator, so you cannot write $error_type ## Wasm to form a new name. The paste! macro fills this gap:

#![allow(unused)]
fn main() {
paste! {
    pub struct [<$error_type Wasm>] { ... }
    //         ^^^^^^^^^^^^^^^^^
    //         pastes to: MasternodeNotFoundErrorWasm
}
}

Inside paste! { ... }, [<token1 token2>] concatenates tokens into a single identifier. This is what allows the macro to generate both the JavaScript name (MasternodeNotFoundError via js_name) and the Rust struct name (MasternodeNotFoundErrorWasm via paste).
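To see what paste! buys you, consider the alternative. Without it, a plain macro_rules! macro must be handed the concatenated name explicitly, as in this std-only sketch (the error type and message field are toys, and the generated methods are trimmed to one):

```rust
// Without paste!, the caller must spell out both the error type AND the
// wrapper name, because macro_rules! alone cannot build `FooWasm` from `Foo`.
macro_rules! make_wrapper {
    ($error_type:ident, $wrapper:ident) => {
        struct $wrapper {
            inner: $error_type,
        }

        impl $wrapper {
            fn message(&self) -> String {
                self.inner.message.clone()
            }
        }
    };
}

struct MasternodeNotFoundError {
    message: String,
}

// Both names written out -- exactly the duplication paste! eliminates.
make_wrapper!(MasternodeNotFoundError, MasternodeNotFoundErrorWasm);

fn main() {
    let wrapped = MasternodeNotFoundErrorWasm {
        inner: MasternodeNotFoundError {
            message: "masternode not found".into(),
        },
    };
    assert_eq!(wrapped.message(), "masternode not found");
}
```

Multiply that duplicated second argument across a hundred-plus error types and the value of deriving the wrapper name inside the macro becomes obvious.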

How It Is Used

The macro is used inline within the from_consensus_error_ref function and its helpers in packages/wasm-dpp/src/errors/consensus/consensus_error.rs. Here is a representative excerpt:

#![allow(unused)]
fn main() {
pub fn from_state_error(state_error: &StateError) -> JsValue {
    match state_error {
        // Manual wrappers (have custom methods)
        StateError::DocumentAlreadyPresentError(e) => {
            DocumentAlreadyPresentErrorWasm::from(e).into()
        }
        StateError::DocumentNotFoundError(e) => {
            DocumentNotFoundErrorWasm::from(e).into()
        }

        // Macro-generated wrappers (standard interface only)
        StateError::MasternodeNotFoundError(e) => {
            generic_consensus_error!(MasternodeNotFoundError, e).into()
        }
        StateError::DocumentContestCurrentlyLockedError(e) => {
            generic_consensus_error!(
                DocumentContestCurrentlyLockedError, e
            ).into()
        }
        StateError::TokenIsPausedError(e) => {
            generic_consensus_error!(TokenIsPausedError, e).into()
        }
        // ... dozens more
    }
}
}

The pattern is clear: use the macro for errors that need only getCode(), message, and serialize(). Use manual wrappers for errors that need custom accessors.

Manual Wrappers: When You Need More

Some errors expose domain-specific data that JavaScript code needs to access. For these, you write a full wrapper by hand. Here is DataContractMaxDepthExceedError from packages/wasm-dpp/src/errors/consensus/basic/data_contract/:

#![allow(unused)]
fn main() {
#[wasm_bindgen(js_name=DataContractMaxDepthExceedError)]
pub struct DataContractMaxDepthExceedErrorWasm {
    inner: DataContractMaxDepthExceedError,
}

impl From<&DataContractMaxDepthExceedError>
    for DataContractMaxDepthExceedErrorWasm
{
    fn from(e: &DataContractMaxDepthExceedError) -> Self {
        Self { inner: e.clone() }
    }
}

#[wasm_bindgen(js_class=DataContractMaxDepthExceedError)]
impl DataContractMaxDepthExceedErrorWasm {
    #[wasm_bindgen(js_name=getMaxDepth)]
    pub fn get_max_depth(&self) -> usize {
        self.inner.max_depth()
    }

    #[wasm_bindgen(js_name=getCode)]
    pub fn get_code(&self) -> u32 {
        ConsensusError::from(self.inner.clone()).code()
    }

    #[wasm_bindgen(getter)]
    pub fn message(&self) -> String {
        self.inner.to_string()
    }
}
}

The custom get_max_depth() method lets JavaScript inspect the specific limit that was exceeded. The macro cannot generate these domain-specific accessors -- it only knows about the three standard methods.

The from_dpp_err Pattern

At the top level, Rust's ProtocolError is converted to a JsValue through from_dpp_err in packages/wasm-dpp/src/errors/from.rs:

#![allow(unused)]
fn main() {
pub fn from_dpp_err(pe: ProtocolError) -> JsValue {
    match pe {
        ProtocolError::ConsensusError(consensus_error) => {
            from_consensus_error(*consensus_error)
        }
        ProtocolError::DataContractError(e) => {
            from_data_contract_to_js_error(e)
        }
        ProtocolError::Document(e) => {
            from_document_to_js_error(*e)
        }
        ProtocolError::DataContractNotPresentError(err) => {
            DataContractNotPresentNotConsensusErrorWasm::new(
                err.data_contract_id()
            ).into()
        }
        ProtocolError::ValueError(value_error) => {
            PlatformValueErrorWasm::from(value_error).into()
        }
        _ => JsValue::from_str(
            &format!("Error conversion not implemented: {pe:#}")
        ),
    }
}
}

This is the entry point for error conversion. It dispatches to the appropriate conversion function based on the error variant. The fallback case converts unhandled errors to a string -- not ideal, but it ensures no errors are silently swallowed.

The Consensus Error Dispatch

The from_consensus_error_ref function dispatches across the entire consensus error hierarchy:

#![allow(unused)]
fn main() {
pub fn from_consensus_error_ref(e: &DPPConsensusError) -> JsValue {
    match e {
        DPPConsensusError::FeeError(e) => match e {
            FeeError::BalanceIsNotEnoughError(e) =>
                BalanceIsNotEnoughErrorWasm::from(e).into(),
        },
        DPPConsensusError::SignatureError(e) =>
            from_signature_error(e),
        DPPConsensusError::StateError(state_error) =>
            from_state_error(state_error),
        DPPConsensusError::BasicError(basic_error) =>
            from_basic_error(basic_error),
        DPPConsensusError::DefaultError =>
            JsError::new("DefaultError").into(),
    }
}
}

Each sub-function (from_state_error, from_basic_error, from_signature_error) handles its category, using either manual wrappers or the macro as appropriate.

Adding a New WASM Error Binding

When a new consensus error is added to rs-dpp, you need to add its WASM binding. Here is the checklist:

If the error only needs getCode(), message, and serialize():

  1. Import the error type in consensus_error.rs.
  2. Add a match arm using the macro:
#![allow(unused)]
fn main() {
StateError::YourNewError(e) => {
    generic_consensus_error!(YourNewError, e).into()
}
}

That is it. The macro handles everything else.

If the error needs custom accessors:

  1. Create a new file in the appropriate subdirectory under packages/wasm-dpp/src/errors/consensus/.
  2. Define the wrapper struct, From impl, and methods following the manual pattern.
  3. Add the wrapper to the mod.rs file.
  4. Import it in consensus_error.rs.
  5. Add a match arm using the manual wrapper:
#![allow(unused)]
fn main() {
StateError::YourNewError(e) => {
    YourNewErrorWasm::from(e).into()
}
}

Rules

Do:

  • Use generic_consensus_error! for errors that need only the standard three methods.
  • Use manual wrappers when JavaScript needs to access error-specific fields.
  • Follow the existing file organization: one file per manual error, grouped by category.
  • Always provide getCode(), message, and serialize() -- these are the standard interface.
  • Test that new errors convert correctly in both directions.

Don't:

  • Write manual wrappers when the macro would suffice -- it just creates maintenance debt.
  • Forget to add the match arm in consensus_error.rs -- unhandled errors will fall through to the DefaultError case or an unreachable_patterns guard.
  • Use js_name values that differ from the Rust error type name -- JavaScript developers should see the same name they find in documentation.
  • Skip the serialize() method -- it is needed for error transport across process boundaries.
  • Modify generic_consensus_error! without understanding that every call site will be affected -- it generates code in every match arm that uses it.

API Reference

Auto-generated API documentation for the Dash Platform developer ecosystem. Each section is built from source and updated by CI when relevant source files change on v*-dev branches.

Rust Crate Docs

Full rustdoc for the Dash Platform workspace — every public type, trait, function, and module across all crates.

Key crates:

  • dash-sdk — High-level client SDK with builder pattern and fetch traits
  • dpp — Dash Platform Protocol: data contracts, documents, identities, state transitions
  • drive — Decentralized storage engine built on GroveDB
  • drive-abci — ABCI application connecting Tenderdash to Drive
  • dapi-grpc — Rust types generated from the gRPC protocol definitions
  • rs-dapi-client — Low-level DAPI client with retries and load balancing
  • platform-value — Cross-language value representation
  • platform-version — Protocol versioning and feature version dispatch

gRPC API

Protocol Buffer service definitions for the three gRPC endpoints:

  • Platform — identity, data contract, document, and token operations
  • Core — block, transaction, and masternode queries
  • Drive — internal node-to-node replication

Documents every RPC method, request/response message, and field type.

JavaScript / TypeScript API

TypeDoc for the JavaScript and TypeScript client libraries:

  • dash (js-dash-sdk) — main SDK for Node.js and browser applications
  • wasm-dpp — WebAssembly bindings for Dash Platform Protocol
  • wasm-sdk — Browser-facing WASM SDK with TypeScript type definitions