90 Commits

Author SHA1 Message Date
Bobbin Threadbare
b5eb68e46c Merge pull request #120 from 0xPolygonMiden/next
Tracking PR for v0.3 release
2023-04-07 23:55:43 -07:00
Bobbin Threadbare
61db888b2c chore: update crate version to v0.3 2023-04-07 23:44:27 -07:00
Bobbin Threadbare
051167f2e5 Merge pull request #76 from 0xPolygonMiden/bobbin-blake3-opt
BLAKE3 hash_elements() optimization
2023-04-07 23:12:41 -07:00
Victor Lopes
498bc93c15 Merge pull request #125 from 0xPolygonMiden/vlopes11-store-get-leaf-depth
feat: add `MerkleStore::get_leaf_depth`
2023-04-06 23:13:54 +02:00
Victor Lopez
00ffc1568a feat: add MerkleStore::get_leaf_depth
This commit introduces `get_leaf_depth`, a tiered SMT helper that retrieves
the depth of a leaf for a given root, capped at `64`.

closes #119
2023-04-06 23:01:38 +02:00
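A hedged sketch of how the new helper might be called. The exact signature of `get_leaf_depth` is not shown in this log, so the argument list below (a root plus a leaf index) is an assumption for illustration only.

```rust
use miden_crypto::merkle::MerkleStore;
use miden_crypto::Word;

// Hypothetical call shape; the real parameters of `get_leaf_depth` may differ.
fn leaf_depth_example(store: &MerkleStore, root: Word, index: u64) {
    match store.get_leaf_depth(root, index) {
        // the returned depth is capped at 64, per the commit message
        Ok(depth) => println!("leaf depth: {depth}"),
        Err(err) => println!("unknown root or index: {err:?}"),
    }
}
```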
Augusto Hack
cbf51dd3e2 Merge pull request #127 from 0xPolygonMiden/hacka-optimized-peak-hash
mmr: optimized peak hash for Miden VM
2023-04-06 19:38:48 +02:00
Augusto F. Hack
ab903a2229 mmr: optimized peak hash for Miden VM 2023-04-06 18:22:01 +02:00
Bobbin Threadbare
86dba195b4 Merge pull request #124 from 0xPolygonMiden/bobbin-merkle-fixes
Merkle fixes
2023-04-05 12:20:41 -07:00
Bobbin Threadbare
bd557bc68c fix: add validation to NodeIndex constructor and remove BitIterator 2023-04-05 12:08:00 -07:00
Augusto Hack
cf94ac07b7 Merge pull request #121 from 0xPolygonMiden/hacka-simple-smt-parent-node-iterator
feat: add parent node iterator for SimpleSMT
2023-04-05 00:46:32 +02:00
Augusto Hack
d873866f52 Merge pull request #118 from 0xPolygonMiden/hacka-support-mmr-in-the-merkle-store
feat: add support for MMR to the MerkleStore
2023-04-04 23:13:43 +02:00
Augusto F. Hack
9275dd00ad feat: add parent node iterator for SimpleSMT 2023-04-04 22:33:26 +02:00
Augusto F. Hack
429d3bab6f feat: add support for MMR to the MerkleStore 2023-04-04 22:33:01 +02:00
Augusto Hack
f19fe6e739 Merge pull request #117 from 0xPolygonMiden/hacka-simplify-consuming-merkle-tree
feat: add node iterator to MerkleTree
2023-04-04 22:14:38 +02:00
Augusto F. Hack
1df4318399 feat: add node iterator to MerkleTree 2023-04-04 22:11:21 +02:00
Bobbin Threadbare
433b467953 feat: optimized hash_elements for blake3 hasher 2023-04-04 01:06:51 -07:00
Augusto Hack
f46d913b20 Merge pull request #116 from 0xPolygonMiden/hacka-remove-merke-store
Remove SimpleSmt store
2023-03-31 03:12:09 +02:00
Augusto F. Hack
f8a62dae76 chore: remove simple_smt::Store 2023-03-31 03:10:01 +02:00
Victor Lopes
49b9029b46 Merge pull request #115 from 0xPolygonMiden/vlopes11-store-smt-depth
feat: Add `depth` as store SMT argument
2023-03-30 01:19:30 +02:00
Victor Lopez
d37f3f5e84 feat: Add depth as store SMT argument
Prior to this commit, MerkleStore allowed the creation of Sparse Merkle
trees only with the maximum depth of 63. However, this doesn't fit the
Tiered Sparse Merkle tree requirements, as it contains trees of depth 16.

This commit adds the `depth` argument to the MerkleStore methods that
will create Sparse Merkle trees.
2023-03-30 01:13:05 +02:00
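A minimal sketch of the new constructor shape, assuming the `(depth, entries)` argument order used by the benchmark code later on this page; depth 16 here matches the tiered-SMT case mentioned above.

```rust
use miden_crypto::merkle::MerkleStore;
use miden_crypto::{Felt, Word};

fn build_depth_16_smt_store() -> MerkleStore {
    let leaf = |v: u64| -> Word { [Felt::new(v), Felt::new(0), Felt::new(0), Felt::new(0)] };
    // two leaves at indexes 0 and 1 of a depth-16 sparse Merkle tree
    let entries: Vec<(u64, Word)> = vec![(0, leaf(1)), (1, leaf(2))];

    // prior to #115 the store always built SMTs at the maximum depth; the depth is now explicit
    MerkleStore::new().with_sparse_merkle_tree(16, entries).unwrap()
}
```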
Bobbin Threadbare
9389f2fb40 Merge pull request #80 from 0xPolygonMiden/next
v0.2 tracking PR
2023-03-25 01:28:40 -07:00
Bobbin Threadbare
703692553d chore: add winterfell dependency update to changelog 2023-03-25 00:45:17 -07:00
Bobbin Threadbare
d68be83bc4 chore: add Mmr to readme and changelog 2023-03-25 00:00:24 -07:00
Bobbin Threadbare
80171af872 Merge pull request #114 from 0xPolygonMiden/v0.2.0-release-prep
Prepare v0.2 release
2023-03-24 23:50:41 -07:00
Augusto Hack
75af3d474b Merge pull request #113 from 0xPolygonMiden/hacka-merkle-store-fix-empty-roots
bugfix: fix internal nodes for empty leaves of an SMT
2023-03-24 23:26:48 +01:00
Augusto F. Hack
9e6c8ff700 bugfix: fix internal nodes for empty leaves of an SMT
The path returned by `EmptySubtreeRoots` starts at the root, and goes to
the leaf. The MerkleStore constructor assumed the other direction, so
the parent/child hashes were reversed.

This fixes the bug and adds a test.
2023-03-24 23:22:31 +01:00
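A check in the spirit of the added test, mirroring the `get_empty_leaf_simplesmt` benchmark later on this page (the calls below are the ones shown there): with the fix, an empty `MerkleStore` and an empty `SimpleSmt` agree on every node under the empty root, which is exactly what the reversed parent/child hashes broke.

```rust
use miden_crypto::merkle::{MerkleStore, NodeIndex, SimpleSmt};

fn empty_nodes_agree() {
    let depth = SimpleSmt::MAX_DEPTH;
    let smt = SimpleSmt::new(depth).unwrap();
    let store = MerkleStore::new();
    let root = smt.root();

    // both backends are pre-populated with the empty-subtree hashes; after the fix they
    // must return the same node at every depth under the empty root
    for d in 1..=depth {
        let index = NodeIndex::new(d, 0).unwrap();
        assert_eq!(
            smt.get_node(index).unwrap(),
            store.get_node(root, index).unwrap()
        );
    }
}
```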
Bobbin Threadbare
a58922756a chore: update crate versions, dependencies, and CHANGELOG 2023-03-24 14:58:19 -07:00
Augusto Hack
bf15e1331a Merge pull request #112 from 0xPolygonMiden/hacka-add-serde-to-merklestore
Add serde to merklestore
2023-03-24 21:49:50 +01:00
Augusto F. Hack
7957cc929a feat: added MerkleStore serde 2023-03-24 21:44:36 +01:00
Victor Lopes
854892ba9d Merge pull request #111 from 0xPolygonMiden/vlopes11-increase-empty-subtrees
feat: add empty subtree constants to cover u8::MAX depth
2023-03-23 22:50:37 +01:00
Bobbin Threadbare
ce38ee388d Merge pull request #104 from 0xPolygonMiden/hacka-store-docs
Store docs
2023-03-23 13:11:04 -07:00
Bobbin Threadbare
4d1b3628d3 Merge pull request #110 from 0xPolygonMiden/bobbin-pathset-fixes
Fix MerklePathSet issues
2023-03-23 13:10:21 -07:00
Augusto F. Hack
2d1bc3ba34 store: added user documentation on usage and purpose 2023-03-23 14:19:37 +01:00
Victor Lopez
2ff96f40cb feat: add empty subtree constants to cover u8::MAX depth
Prior to this commit, we limited the constants count to 64 for the empty
subtrees depth computation. This is a hard assumption that every tree of
Miden will have a depth of at most 64, and it will cause undefined behavior
if it doesn't.

With the introduction of `MerkleStore::merge_roots` and the deprecation
of the `mtree_cwm` instruction in the VM, this assumption is broken and
the user might end up with trees of depth greater than 64. This broken
assumption could lead to attack vectors.

We can easily fix that by extending the pre-computed hashes list to the
maximum of `u8` (i.e. 255). This has zero impact on functionality and is
completely safe to use without hard assumptions.
2023-03-23 12:59:47 +01:00
Bobbin Threadbare
9531d2bd34 fix: to paths reduction of MerklePathSet 2023-03-23 01:12:02 -07:00
Bobbin Threadbare
c79351be99 Merge pull request #107 from 0xPolygonMiden/hacka-store-add-merkle-paths
store: added with_merkle_paths constructor
2023-03-22 16:14:45 -07:00
Bobbin Threadbare
b7678619b0 Merge pull request #103 from 0xPolygonMiden/hacka-format-merkle-tree
Format merkle tree
2023-03-22 15:40:16 -07:00
Augusto F. Hack
0375f31035 feat: added utility to format MerkleTree and MerklePath to hex
Example formatted MerkleTree:

```
880abe452320966617646e7740b014954300f19a28780a0889d62ff33f4b0534
  1ade1369091efa31201e9b60c9c28874d0ddce5362b335135a6bb4c917285983
  3e60a9c843b4bb19f7a0572102e6507195f5240767a396335fd21981b048b807
    0100000000000000000000000000000000000000000000000000000000000000
    0200000000000000000000000000000000000000000000000000000000000000
    0300000000000000000000000000000000000000000000000000000000000000
    0400000000000000000000000000000000000000000000000000000000000000
```

Example formatted MerklePath:

```
[0400000000000000000000000000000000000000000000000000000000000000, 1ade1369091efa31201e9b60c9c28874d0ddce5362b335135a6bb4c917285983]
```
2023-03-22 21:53:05 +01:00
Augusto Hack
c96047af9d Merge pull request #102 from 0xPolygonMiden/hacka-merkle-tree-assert-message
chore: clarified assert message
2023-03-22 17:54:54 +01:00
Augusto F. Hack
b250752883 store: added with_merkle_paths constructor
And unit tests for each constructor type.
2023-03-22 14:17:12 +01:00
Augusto Hack
482dab94c5 Merge pull request #101 from 0xPolygonMiden/hacka-fix-benchmark-code
Fix benchmark code
2023-03-22 13:46:22 +01:00
Augusto F. Hack
d6cbd178e1 chore: clarified assert message 2023-03-22 11:30:19 +01:00
Augusto F. Hack
ef342cec23 bugfix: fix store benchmark 2023-03-22 10:53:12 +01:00
Victor Lopes
7305a72295 Merge pull request #99 from 0xPolygonMiden/vlopes11-merkle-store-containers
feat: add merkle path containers and return them on tree update
2023-03-21 20:54:36 +01:00
Victor Lopez
84086bdb95 feat: add merkle path containers and return them on tree update
Returning tuples is often confusing as they don't convey meaning, and they
should be used only when there is no possible ambiguity.

For `MerkleStore`, we had a couple of tuples being returned, and reading
the implementation was required in order to distinguish whether they were
leaf values or computed roots.

This commit introduces two containers that self-document these
returns: `RootPath` and `ValuePath`. It also updates `set_node` to
return both the new root and the new path, so we can prevent duplicated
traversals downstream when updating a node (one to update, the second to
fetch the new path/root).
2023-03-21 20:45:01 +01:00
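A sketch of the updated return value. The `root` field is taken from the benchmark code later on this page; the `path` field name below is an assumption based on the description above.

```rust
use miden_crypto::merkle::{MerkleStore, MerkleTree, NodeIndex};
use miden_crypto::{Felt, Word};

fn update_without_a_second_traversal() {
    let leaf = |v: u64| -> Word { [Felt::new(v), Felt::new(0), Felt::new(0), Felt::new(0)] };
    let leaves: Vec<Word> = vec![leaf(1), leaf(2), leaf(3), leaf(4)];

    let tree = MerkleTree::new(leaves.clone()).unwrap();
    let mut store = MerkleStore::new().with_merkle_tree(leaves).unwrap();

    // set_node now returns a self-documenting container instead of a bare tuple
    let index = NodeIndex::new(tree.depth(), 0).unwrap();
    let updated = store.set_node(tree.root(), index, leaf(42)).unwrap();
    let _new_root = updated.root;
    // let _new_path = updated.path; // field name assumed, see note above
}
```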
Bobbin Threadbare
a681952982 Merge pull request #97 from 0xPolygonMiden/hacka-storage-benchmark
Storage benchmark
2023-03-21 11:43:12 -07:00
Augusto F. Hack
78e82f2ee6 feat: add benchmark for storages 2023-03-21 14:29:18 +01:00
Victor Lopes
f07ed69d2f Merge pull request #95 from 0xPolygonMiden/vlopes11-fix-merkle-store-bounds
fix: merkle store panics on bounds
2023-03-21 09:51:48 +01:00
Augusto F. Hack
17eb8d78d3 chore: storage -> store 2023-03-21 09:45:36 +01:00
Victor Lopez
8cb245dc1f bugfix: reverse merkle path to match other structures
The store builds the path from root to leaf; this updates the code to
return a path from leaf to root, as is done by the other structures.

This also adds a custom error for a missing root.
2023-03-21 09:45:29 +01:00
Victor Lopez
867b772d9a fix: merkle store panics on bounds
Prior to this commit, the MerkleStore panicked under certain bounds. This
commit prevents such panics by using checked operations.

`ilog2`, for instance, panics when the operand is zero. However, there is a
documented rule requiring the Merkle tree to have a size of at least 2.
If this rule is checked, then the panic is impossible.
2023-03-18 02:20:11 +01:00
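This is not the store's code, just an illustration of the pattern using the standard library's checked integer operations (available since Rust 1.67, the MSRV declared in the Cargo.toml diff below): the checked variant returns `None` instead of panicking on a zero operand.

```rust
/// Depth of a fully balanced binary tree with `num_leaves` leaves.
///
/// `u64::ilog2` panics when its operand is zero, so the checked variant is used and the
/// "fewer than two leaves" case is rejected explicitly instead of panicking.
fn tree_depth(num_leaves: u64) -> Option<u32> {
    if num_leaves < 2 || !num_leaves.is_power_of_two() {
        return None;
    }
    num_leaves.checked_ilog2()
}
```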
Bobbin Threadbare
33d37d82e2 Merge pull request #79 from 0xPolygonMiden/hacka-ignore-pre-commit-rev
ignore pre commit rev
2023-03-17 00:11:13 -07:00
Augusto Hack
5703fef226 Merge pull request #96 from 0xPolygonMiden/hacka-check-root-in-storage
bugfix: check if the requested root is in the storage
2023-03-16 23:30:56 +01:00
Augusto F. Hack
669ebb49fb bugfix: check if the requested root is in the storage 2023-03-16 23:26:02 +01:00
Victor Lopes
931bcc3cc3 Merge pull request #94 from 0xPolygonMiden/vlopes11-merkle-store-derive
refactor: add derive proc macros to merkle store
2023-03-16 19:13:02 +01:00
Victor Lopez
91667fd7de refactor: add derive proc macros to merkle store
This commit introduces common derive proc macros to MerkleStore. These
are required downstream, as the in-memory store can be cloned.

It also introduces constructors, common to the other types of the crate,
that help build a Merkle store using a builder pattern.
2023-03-16 10:28:45 +01:00
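A minimal sketch of the builder-style construction and the `Clone` derive the commit refers to, using the constructors that appear in the benchmark code later on this page.

```rust
use miden_crypto::merkle::{MerkleStore, SimpleSmt};
use miden_crypto::{Felt, Word};

fn clone_a_store() {
    let word = |v: u64| -> Word { [Felt::new(v), Felt::new(0), Felt::new(0), Felt::new(0)] };

    // builder-style construction, consistent with the other types of the crate
    let store = MerkleStore::new()
        .with_merkle_tree(vec![word(1), word(2), word(3), word(4)])
        .unwrap()
        .with_sparse_merkle_tree(SimpleSmt::MAX_DEPTH, vec![(0u64, word(5))])
        .unwrap();

    // the derived Clone lets downstream code duplicate the in-memory store
    let _copy = store.clone();
}
```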
Augusto Hack
e4ddf6ffaf Merge pull request #93 from 0xPolygonMiden/hacka-add-merkle-store
Add merkle store
2023-03-15 18:13:48 +01:00
Augusto F. Hack
88a646031f feat: add merkle store 2023-03-15 17:34:42 +01:00
Bobbin Threadbare
2871e4eb27 Merge pull request #87 from 0xPolygonMiden/vlopes11-36-simple-smt-prepare
feat: refactor simple smt to use empty subtree constants
2023-03-07 16:10:24 -08:00
Victor Lopez
3a6a4fcce6 feat: refactor simple smt to use empty subtree constants
Prior to this commit, each Merkle tree had an internal procedure to
compute empty subtrees for arbitrary depths.

However, this isn't ideal, as this code can be reused by any Merkle
implementation that uses RPO as its backend.

This commit introduces a structure that generates these empty
subtree values.
2023-03-07 20:44:42 +01:00
Augusto Hack
7ffa0cd97d Merge pull request #67 from 0xPolygonMiden/hacka-merkle-mountain-range-memory-implementation
feat: merkle mountain range
2023-03-02 22:27:13 +01:00
Augusto F. Hack
32d37f1591 feat: merkle mountain range 2023-03-02 13:07:55 +01:00
Augusto F. Hack
bc12fcafe9 chore: ignore pre-commit rev 2023-03-01 18:32:24 +01:00
Augusto Hack
8c08243f7a Merge pull request #78 from 0xPolygonMiden/hacka-pre-commit
Add pre commit
2023-03-01 18:31:08 +01:00
Augusto F. Hack
956e4c6fad chore: initial run pre-commit 2023-03-01 17:45:57 +01:00
Augusto F. Hack
efa39e5ce0 feat: added pre-commit hook config 2023-03-01 17:45:33 +01:00
Bobbin Threadbare
ae3f14e0ff Merge pull request #74 from 0xPolygonMiden/hacka-node-index-docs
docs: mention tree form order of NodeIndex docs
2023-02-22 12:19:45 -08:00
Bobbin Threadbare
962a07292f Merge pull request #75 from 0xPolygonMiden/next
v0.1.4 release
2023-02-22 09:32:44 -08:00
Augusto F. Hack
dfb073f784 docs: mention tree form order of NodeIndex docs 2023-02-22 17:23:03 +01:00
Bobbin Threadbare
41c38b4b5d chore: changed version to v0.1.4 in Cargo.toml 2023-02-22 08:22:25 -08:00
Bobbin Threadbare
c4eb4a6b98 Merge pull request #73 from 0xPolygonMiden/vlopes11-72-add-winter-hasher
feat: re-export winter-crypto Hasher, Digest & ElementHasher
2023-02-22 08:15:58 -08:00
Victor Lopez
35b255b5eb feat: re-export winter-crypto Hasher, Digest & ElementHasher
This commit introduces the re-export of the listed primitives.

They will be used inside Miden to report the security level of the
picked primitive, as well as other functionality.

closes #72
2023-02-22 16:56:14 +01:00
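A sketch of what the re-export enables, assuming the module layout and the `COLLISION_RESISTANCE` constants shown in the hash module diffs later on this page (the constant itself arrived with the later Winterfell 0.6 upgrade).

```rust
use miden_crypto::hash::{blake::Blake3_256, rpo::Rpo256, Hasher};

// with Hasher re-exported, the primitives can be reasoned about generically,
// e.g. to report the security level of the picked primitive
fn collision_resistance<H: Hasher>() -> u32 {
    H::COLLISION_RESISTANCE
}

fn report() {
    // values taken from the COLLISION_RESISTANCE constants added in the hash module diffs
    assert_eq!(collision_resistance::<Rpo256>(), 128);
    assert_eq!(collision_resistance::<Blake3_256>(), 128);
}
```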
Bobbin Threadbare
e94b0c70a9 Merge pull request #71 from 0xPolygonMiden/bobbin-dep-updates
Dependency updates
2023-02-20 23:55:43 -08:00
Bobbin Threadbare
e6bf497500 chore: update dependencies 2023-02-20 23:46:21 -08:00
Bobbin Threadbare
835142d432 Merge pull request #70 from 0xPolygonMiden/next
v0.1.3 release
2023-02-20 23:13:07 -08:00
Bobbin Threadbare
85ba3f1a34 chore: update changelog for v0.1.3 release 2023-02-20 16:21:15 -08:00
Bobbin Threadbare
6aa226e9bb Merge pull request #68 from 0xPolygonMiden/vlopes11-update-winterfell-to-0.5
feat: upgrade to winterfell 0.5
2023-02-20 16:16:11 -08:00
Victor Lopez
0af45b75f4 feat: upgrade to winterfell 0.5 2023-02-20 23:57:41 +01:00
Bobbin Threadbare
822c52a1d2 Merge pull request #63 from 0xPolygonMiden/next
v0.1.2 release
2023-02-17 12:09:49 -08:00
Bobbin Threadbare
3c9a5235a0 docs: fix typos in doc comments 2023-02-17 11:58:23 -08:00
Bobbin Threadbare
2d97153fd0 Merge pull request #64 from 0xPolygonMiden/vlopes11-chore-release-v.0.1.2
chore: prepare for `v0.1.2` release
2023-02-17 10:59:04 -08:00
Victor Lopez
325b3abf8b chore: prepare for v0.1.2 release 2023-02-17 18:03:25 +01:00
Victor Lopes
b1a5ed6b5d Merge pull request #62 from 0xPolygonMiden/vlopes11-feat-add-node-index-from-felt
feat: add `from_elements` to `NodeIndex`
2023-02-16 21:53:36 +01:00
Victor Lopez
9307178873 feat: add from_elements to NodeIndex 2023-02-16 21:14:07 +01:00
Victor Lopes
3af53e63cf Merge pull request #54 from 0xPolygonMiden/vlopes11-36-feat-add-merkle-index
feat: add merkle node index
2023-02-16 00:39:18 +01:00
Victor Lopez
0799b1bb9d feat: add merkle node index
This commit introduces a wrapper structure to encapsulate the merkle
tree traversal.

related issue: #36
2023-02-15 23:53:01 +01:00
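A short sketch of the traversal wrapper, based on the `src/merkle/index.rs` file added later on this page.

```rust
use miden_crypto::merkle::NodeIndex;

fn traverse() {
    // (depth, value) in level-order form: the root is (0, 0), its children (1, 0) and (1, 1)
    let mut index = NodeIndex::new(3, 5).unwrap();
    assert_eq!(index.sibling().value(), 4);

    // walk towards the root, halving the value at each level
    index.move_up();
    assert_eq!((index.depth(), index.value()), (2, 2));

    // out-of-range values are rejected: 8 does not fit at depth 3
    assert!(NodeIndex::new(3, 8).is_err());
}
```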
Victor Lopes
0c242d2c51 Merge pull request #53 from 0xPolygonMiden/vlopes11-36-feat-add-merkle-types
feat: add merkle path wrapper
2023-02-13 22:46:59 +01:00
Victor Lopez
21a8cbcb45 feat: add merkle path wrapper
A Merkle path is a vector of nodes, regardless of the Merkle tree
implementation.

This commit introduces an encapsulation for such a vector, and provides
functionality that is common across different algorithms, such as
opening verification.

related issue: #36
2023-02-13 22:43:13 +01:00
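A small sketch of obtaining an opening as the new wrapper type, using the `MerkleTree` API shown in the benchmark code later on this page; the wrapper's own helpers (e.g. opening verification) are not shown in this log, so they are only referenced in the comment.

```rust
use miden_crypto::merkle::{MerkleTree, NodeIndex};
use miden_crypto::{Felt, Word};

fn open_a_leaf() {
    let leaf = |v: u64| -> Word { [Felt::new(v), Felt::new(0), Felt::new(0), Felt::new(0)] };
    let tree = MerkleTree::new(vec![leaf(1), leaf(2), leaf(3), leaf(4)]).unwrap();

    // the opening comes back wrapped in MerklePath rather than as a bare Vec<Word>;
    // the wrapper is where algorithm-agnostic helpers such as opening verification live
    let index = NodeIndex::new(tree.depth(), 1).unwrap();
    let _path = tree.get_path(index).unwrap();
}
```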
Bobbin Threadbare
66da469ec4 Merge pull request #46 from 0xPolygonMiden/vlopes11-44-fix-rpo256-sponge-pad
fix: sponge pad panics on input
2023-02-09 11:43:42 -08:00
Victor Lopez
ed36ebc542 fix: sponge pad panics on input
closes #44
2023-02-09 13:06:06 +01:00
37 changed files with 6160 additions and 717 deletions

.git-blame-ignore-revs (new file, +2 lines)

@@ -0,0 +1,2 @@
# initial run of pre-commit
956e4c6fad779ef15eaa27702b26f05f65d31494


@@ -6,4 +6,4 @@
- Commit messages and codestyle follow [conventions](./CONTRIBUTING.md).
- Relevant issues are linked in the PR description.
- Tests added for new functionality.
- Documentation/comments updated according to changes.
- Documentation/comments updated according to changes.

.pre-commit-config.yaml (new file, +43 lines)

@@ -0,0 +1,43 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: trailing-whitespace
- id: end-of-file-fixer
- id: check-yaml
- id: check-json
- id: check-toml
- id: pretty-format-json
- id: check-added-large-files
- id: check-case-conflict
- id: check-executables-have-shebangs
- id: check-merge-conflict
- id: detect-private-key
- repo: https://github.com/hackaugusto/pre-commit-cargo
rev: v1.0.0
hooks:
# Allows cargo fmt to modify the source code prior to the commit
- id: cargo
name: Cargo fmt
args: ["+stable", "fmt", "--all"]
stages: [commit]
# Requires code to be properly formatted prior to pushing upstream
- id: cargo
name: Cargo fmt --check
args: ["+stable", "fmt", "--all", "--check"]
stages: [push, manual]
- id: cargo
name: Cargo check --all-targets
args: ["+stable", "check", "--all-targets"]
- id: cargo
name: Cargo check --all-targets --no-default-features
args: ["+stable", "check", "--all-targets", "--no-default-features"]
- id: cargo
name: Cargo check --all-targets --all-features
args: ["+stable", "check", "--all-targets", "--all-features"]
# Unlike fmt, clippy will not be automatically applied
- id: cargo
name: Cargo clippy
args: ["+nightly", "clippy", "--workspace", "--", "--deny", "clippy::all", "--deny", "warnings"]


@@ -1,3 +1,33 @@
## 0.3.0 (2023-04-08)
- Added `depth` parameter to SMT constructors in `MerkleStore` (#115).
- Optimized MMR peak hashing for Miden VM (#120).
- Added `get_leaf_depth` method to `MerkleStore` (#119).
- Added inner node iterators to `MerkleTree`, `SimpleSmt`, and `Mmr` (#117, #118, #121).
## 0.2.0 (2023-03-24)
- Implemented `Mmr` and related structs (#67).
- Implemented `MerkleStore` (#93, #94, #95, #107, #112).
- Added benchmarks for `MerkleStore` vs. other structs (#97).
- Added Merkle path containers (#99).
- Fixed depth handling in `MerklePathSet` (#110).
- Updated Winterfell dependency to v0.6.
## 0.1.4 (2023-02-22)
- Re-export winter-crypto Hasher, Digest & ElementHasher (#72)
## 0.1.3 (2023-02-20)
- Updated Winterfell dependency to v0.5.1 (#68)
## 0.1.2 (2023-02-17)
- Fixed `Rpo256::hash` pad that was panicking on input (#44)
- Added `MerklePath` wrapper to encapsulate Merkle opening verification and root computation (#53)
- Added `NodeIndex` Merkle wrapper to encapsulate Merkle tree traversal and mappings (#54)
## 0.1.1 (2023-02-06)
- Introduced `merge_in_domain` for the RPO hash function, to allow using a specified domain value in the second capacity register when hashing two digests together.
@@ -8,6 +38,6 @@
- Initial release on crates.io containing the cryptographic primitives used in Miden VM and the Miden Rollup.
- Hash module with the BLAKE3 and Rescue Prime Optimized hash functions.
- BLAKE3 is implemented with 256-bit, 192-bit, or 160-bit output.
- BLAKE3 is implemented with 256-bit, 192-bit, or 160-bit output.
- RPO is implemented with 256-bit output.
- Merkle module, with a set of data structures related to Merkle trees, implemented using the RPO hash function.


@@ -17,7 +17,7 @@ We are using [Github Flow](https://docs.github.com/en/get-started/quickstart/git
### Branching
- The current active branch is `next`. Every branch with a fix/feature must be forked from `next`.
- The branch name should contain a short issue/feature description separated with hyphens [(kebab-case)](https://en.wikipedia.org/wiki/Letter_case#Kebab_case).
- The branch name should contain a short issue/feature description separated with hyphens [(kebab-case)](https://en.wikipedia.org/wiki/Letter_case#Kebab_case).
For example, if the issue title is `Fix functionality X in component Y` then the branch name will be something like: `fix-x-in-y`.


@@ -1,14 +1,16 @@
[package]
name = "miden-crypto"
version = "0.1.1"
description="Miden Cryptographic primitives"
version = "0.3.0"
description = "Miden Cryptographic primitives"
authors = ["miden contributors"]
readme="README.md"
readme = "README.md"
license = "MIT"
repository = "https://github.com/0xPolygonMiden/crypto"
documentation = "https://docs.rs/miden-crypto/0.3.0"
categories = ["cryptography", "no-std"]
keywords = ["miden", "crypto", "hash", "merkle"]
edition = "2021"
rust-version = "1.67"
[[bench]]
name = "hash"
@@ -18,17 +20,21 @@ harness = false
name = "smt"
harness = false
[[bench]]
name = "store"
harness = false
[features]
default = ["blake3/default", "std", "winter_crypto/default", "winter_math/default", "winter_utils/default"]
std = ["blake3/std", "winter_crypto/std", "winter_math/std", "winter_utils/std"]
[dependencies]
blake3 = { version = "1.0", default-features = false }
winter_crypto = { version = "0.4.1", package = "winter-crypto", default-features = false }
winter_math = { version = "0.4.1", package = "winter-math", default-features = false }
winter_utils = { version = "0.4.1", package = "winter-utils", default-features = false }
blake3 = { version = "1.3", default-features = false }
winter_crypto = { version = "0.6", package = "winter-crypto", default-features = false }
winter_math = { version = "0.6", package = "winter-math", default-features = false }
winter_utils = { version = "0.6", package = "winter-utils", default-features = false }
[dev-dependencies]
criterion = { version = "0.4", features = ["html_reports"] }
proptest = "1.0.0"
rand_utils = { version = "0.4", package = "winter-rand-utils" }
proptest = "1.1.0"
rand_utils = { version = "0.6", package = "winter-rand-utils" }


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022 Polygon Miden
Copyright (c) 2023 Polygon Miden
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -13,7 +13,15 @@ For performance benchmarks of these hash functions and their comparison to other
[Merkle module](./src/merkle/) provides a set of data structures related to Merkle trees. All these data structures are implemented using the RPO hash function described above. The data structures are:
* `MerkleTree`: a regular fully-balanced binary Merkle tree. The depth of this tree can be at most 64.
* `SimpleSmt`: a Sparse Merkle Tree, mapping 64-bit keys to 4-element leaf values.
* `MerklePathSet`: a collection of Merkle authentication paths all resolving to the same root. The length of the paths can be at most 64.
* `MerkleStore`: a collection of Merkle trees of different heights designed to efficiently store trees with common subtrees.
* `Mmr`: a Merkle mountain range structure designed to function as an append-only log.
The module also contains additional supporting components such as `NodeIndex`, `MerklePath`, and `MerkleError` to assist with tree indexation, opening proofs, and reporting inconsistent arguments/state.
## Extra
[Root module](./src/lib.rs) provides a set of constants, types, aliases, and utils required to use the primitives of this library.
## Crate features
This crate can be compiled with the following features:
@@ -25,5 +33,21 @@ Both of these features imply the use of [alloc](https://doc.rust-lang.org/alloc/
To compile with `no_std`, disable default features via `--no-default-features` flag.
## Testing
You can use cargo defaults to test the library:
```shell
cargo test
```
However, some of the functions are heavy and might take a while for the tests to complete. In order to test in release mode, we have to replicate the test conditions of the development mode so all debug assertions can be verified.
We do that by enabling some special [flags](https://doc.rust-lang.org/cargo/reference/profiles.html) for the compilation.
```shell
RUSTFLAGS="-C debug-assertions -C overflow-checks -C debuginfo=2" cargo test --release
```
## License
This project is [MIT licensed](./LICENSE).


@@ -1,4 +1,4 @@
# Miden VM Hash Functions
# Miden VM Hash Functions
In the Miden VM, we make use of different hash functions. Some of these are "traditional" hash functions, like `BLAKE3`, which are optimized for out-of-STARK performance, while others are algebraic hash functions, like `Rescue Prime`, and are more optimized for a better performance inside the STARK. In what follows, we benchmark several such hash functions and compare against other constructions that are used by other proving systems. More precisely, we benchmark:
* **BLAKE3** as specified [here](https://github.com/BLAKE3-team/BLAKE3-specs/blob/master/blake3.pdf) and implemented [here](https://github.com/BLAKE3-team/BLAKE3) (with a wrapper exposed via this crate).
@@ -13,7 +13,7 @@ In the Miden VM, we make use of different hash functions. Some of these are "tra
We benchmark the above hash functions using two scenarios. The first is a 2-to-1 $(a,b)\mapsto h(a,b)$ hashing where both $a$, $b$ and $h(a,b)$ are the digests corresponding to each of the hash functions.
The second scenario is that of sequential hashing where we take a sequence of length $100$ field elements and hash these to produce a single digest. The digests are $4$ field elements in a prime field with modulus $2^{64} - 2^{32} + 1$ (i.e., 32 bytes) for Poseidon, Rescue Prime and RPO, and an array `[u8; 32]` for SHA3 and BLAKE3.
#### Scenario 1: 2-to-1 hashing `h(a,b)`
#### Scenario 1: 2-to-1 hashing `h(a,b)`
| Function | BLAKE3 | SHA3 | Poseidon | Rp64_256 | RPO_256 |
| ------------------- | ------ | --------| --------- | --------- | ------- |
@@ -28,7 +28,7 @@ The second scenario is that of sequential hashing where we take a sequence of le
| Function | BLAKE3 | SHA3 | Poseidon | Rp64_256 | RPO_256 |
| ------------------- | -------| ------- | --------- | --------- | ------- |
| Apple M1 Pro | 1.1 us | 1.5 us | 19.4 us | 118 us | 70 us |
| Apple M1 Pro | 1.0 us | 1.5 us | 19.4 us | 118 us | 70 us |
| Apple M2 | 1.0 us | 1.5 us | 17.4 us | 103 us | 65 us |
| Amazon Graviton 3 | 1.4 us | | | | 114 us |
| AMD Ryzen 9 5950X | 0.8 us | 1.7 us | 15.7 us | 120 us | 72 us |
@@ -46,4 +46,4 @@ To run the benchmarks for Rescue Prime, Poseidon and SHA3, clone the following [
```
cargo bench hash
```
```


@@ -18,7 +18,7 @@ fn smt_rpo(c: &mut Criterion) {
(i, word)
})
.collect();
let tree = SimpleSmt::new(entries, depth).unwrap();
let tree = SimpleSmt::new(depth).unwrap().with_leaves(entries).unwrap();
trees.push(tree);
}
}

benches/store.rs (new file, +505 lines)

@@ -0,0 +1,505 @@
use criterion::{black_box, criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion};
use miden_crypto::merkle::{MerkleStore, MerkleTree, NodeIndex, SimpleSmt};
use miden_crypto::Word;
use miden_crypto::{hash::rpo::RpoDigest, Felt};
use rand_utils::{rand_array, rand_value};
/// Since MerkleTree can only be created when a power-of-two number of elements is used, the sample
/// sizes are limited to that.
static BATCH_SIZES: [usize; 3] = [2usize.pow(4), 2usize.pow(7), 2usize.pow(10)];
/// Generates a random `RpoDigest`.
fn random_rpo_digest() -> RpoDigest {
rand_array::<Felt, 4>().into()
}
/// Generates a random `Word`.
fn random_word() -> Word {
rand_array::<Felt, 4>().into()
}
/// Generates an index at the specified depth in `0..range`.
fn random_index(range: u64, depth: u8) -> NodeIndex {
let value = rand_value::<u64>() % range;
NodeIndex::new(depth, value).unwrap()
}
/// Benchmarks getting an empty leaf from the SMT and MerkleStore backends.
fn get_empty_leaf_simplesmt(c: &mut Criterion) {
let mut group = c.benchmark_group("get_empty_leaf_simplesmt");
let depth = SimpleSmt::MAX_DEPTH;
let size = u64::MAX;
// both SMT and the store are pre-populated with empty hashes, accessing these values is what is
// being benchmarked here, so no values are inserted into the backends
let smt = SimpleSmt::new(depth).unwrap();
let store = MerkleStore::new();
let root = smt.root();
group.bench_function(BenchmarkId::new("SimpleSmt", depth), |b| {
b.iter_batched(
|| random_index(size, depth),
|index| black_box(smt.get_node(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", depth), |b| {
b.iter_batched(
|| random_index(size, depth),
|index| black_box(store.get_node(root, index)),
BatchSize::SmallInput,
)
});
}
/// Benchmarks getting a leaf on Merkle trees and Merkle stores of varying power-of-two sizes.
fn get_leaf_merkletree(c: &mut Criterion) {
let mut group = c.benchmark_group("get_leaf_merkletree");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let mtree_leaves: Vec<Word> = leaves.iter().map(|v| v.into()).collect();
let mtree = MerkleTree::new(mtree_leaves.clone()).unwrap();
let store = MerkleStore::new().with_merkle_tree(mtree_leaves).unwrap();
let depth = mtree.depth();
let root = mtree.root();
let size_u64 = size as u64;
group.bench_function(BenchmarkId::new("MerkleTree", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(mtree.get_node(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(store.get_node(root, index)),
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks getting a leaf on SMT and Merkle stores of varying power-of-two sizes.
fn get_leaf_simplesmt(c: &mut Criterion) {
let mut group = c.benchmark_group("get_leaf_simplesmt");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let smt_leaves = leaves
.iter()
.enumerate()
.map(|(c, v)| (c.try_into().unwrap(), v.into()))
.collect::<Vec<(u64, Word)>>();
let smt = SimpleSmt::new(SimpleSmt::MAX_DEPTH)
.unwrap()
.with_leaves(smt_leaves.clone())
.unwrap();
let store = MerkleStore::new()
.with_sparse_merkle_tree(SimpleSmt::MAX_DEPTH, smt_leaves)
.unwrap();
let depth = smt.depth();
let root = smt.root();
let size_u64 = size as u64;
group.bench_function(BenchmarkId::new("SimpleSmt", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(smt.get_node(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(store.get_node(root, index)),
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks getting a node at half of the depth of an empty SMT and an empty Merkle store.
fn get_node_of_empty_simplesmt(c: &mut Criterion) {
let mut group = c.benchmark_group("get_node_of_empty_simplesmt");
let depth = SimpleSmt::MAX_DEPTH;
// both SMT and the store are pre-populated with the empty hashes, accessing the internal nodes
// of these values is what is being benchmarked here, so no values are inserted into the
// backends.
let smt = SimpleSmt::new(depth).unwrap();
let store = MerkleStore::new();
let root = smt.root();
let half_depth = depth / 2;
let half_size = 2_u64.pow(half_depth as u32);
group.bench_function(BenchmarkId::new("SimpleSmt", depth), |b| {
b.iter_batched(
|| random_index(half_size, half_depth),
|index| black_box(smt.get_node(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", depth), |b| {
b.iter_batched(
|| random_index(half_size, half_depth),
|index| black_box(store.get_node(root, index)),
BatchSize::SmallInput,
)
});
}
/// Benchmarks getting a node at half of the depth of a Merkle tree and Merkle store of varying
/// power-of-two sizes.
fn get_node_merkletree(c: &mut Criterion) {
let mut group = c.benchmark_group("get_node_merkletree");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let mtree_leaves: Vec<Word> = leaves.iter().map(|v| v.into()).collect();
let mtree = MerkleTree::new(mtree_leaves.clone()).unwrap();
let store = MerkleStore::new().with_merkle_tree(mtree_leaves).unwrap();
let root = mtree.root();
let half_depth = mtree.depth() / 2;
let half_size = 2_u64.pow(half_depth as u32);
group.bench_function(BenchmarkId::new("MerkleTree", size), |b| {
b.iter_batched(
|| random_index(half_size, half_depth),
|index| black_box(mtree.get_node(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| random_index(half_size, half_depth),
|index| black_box(store.get_node(root, index)),
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks getting a node at half the depth on SMT and Merkle stores of varying power-of-two
/// sizes.
fn get_node_simplesmt(c: &mut Criterion) {
let mut group = c.benchmark_group("get_node_simplesmt");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let smt_leaves = leaves
.iter()
.enumerate()
.map(|(c, v)| (c.try_into().unwrap(), v.into()))
.collect::<Vec<(u64, Word)>>();
let smt = SimpleSmt::new(SimpleSmt::MAX_DEPTH)
.unwrap()
.with_leaves(smt_leaves.clone())
.unwrap();
let store = MerkleStore::new()
.with_sparse_merkle_tree(SimpleSmt::MAX_DEPTH, smt_leaves)
.unwrap();
let root = smt.root();
let half_depth = smt.depth() / 2;
let half_size = 2_u64.pow(half_depth as u32);
group.bench_function(BenchmarkId::new("SimpleSmt", size), |b| {
b.iter_batched(
|| random_index(half_size, half_depth),
|index| black_box(smt.get_node(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| random_index(half_size, half_depth),
|index| black_box(store.get_node(root, index)),
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks getting a path of a leaf on the Merkle tree and Merkle store backends.
fn get_leaf_path_merkletree(c: &mut Criterion) {
let mut group = c.benchmark_group("get_leaf_path_merkletree");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let mtree_leaves: Vec<Word> = leaves.iter().map(|v| v.into()).collect();
let mtree = MerkleTree::new(mtree_leaves.clone()).unwrap();
let store = MerkleStore::new().with_merkle_tree(mtree_leaves).unwrap();
let depth = mtree.depth();
let root = mtree.root();
let size_u64 = size as u64;
group.bench_function(BenchmarkId::new("MerkleTree", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(mtree.get_path(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(store.get_path(root, index)),
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks getting a path of a leaf on the SMT and Merkle store backends.
fn get_leaf_path_simplesmt(c: &mut Criterion) {
let mut group = c.benchmark_group("get_leaf_path_simplesmt");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let smt_leaves = leaves
.iter()
.enumerate()
.map(|(c, v)| (c.try_into().unwrap(), v.into()))
.collect::<Vec<(u64, Word)>>();
let smt = SimpleSmt::new(SimpleSmt::MAX_DEPTH)
.unwrap()
.with_leaves(smt_leaves.clone())
.unwrap();
let store = MerkleStore::new()
.with_sparse_merkle_tree(SimpleSmt::MAX_DEPTH, smt_leaves)
.unwrap();
let depth = smt.depth();
let root = smt.root();
let size_u64 = size as u64;
group.bench_function(BenchmarkId::new("SimpleSmt", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(smt.get_path(index)),
BatchSize::SmallInput,
)
});
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| random_index(size_u64, depth),
|index| black_box(store.get_path(root, index)),
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks creation of the different storage backends
fn new(c: &mut Criterion) {
let mut group = c.benchmark_group("new");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
// MerkleTree constructor is optimized to work with vectors. Create a new copy of the data
// and pass it to the benchmark function
group.bench_function(BenchmarkId::new("MerkleTree::new", size), |b| {
b.iter_batched(
|| leaves.iter().map(|v| v.into()).collect::<Vec<Word>>(),
|l| black_box(MerkleTree::new(l)),
BatchSize::SmallInput,
)
});
// This could be done with `bench_with_input`, however to remove variables while comparing
// with MerkleTree it is using `iter_batched`
group.bench_function(
BenchmarkId::new("MerkleStore::with_merkle_tree", size),
|b| {
b.iter_batched(
|| leaves.iter().map(|v| v.into()).collect::<Vec<Word>>(),
|l| black_box(MerkleStore::new().with_merkle_tree(l)),
BatchSize::SmallInput,
)
},
);
group.bench_function(BenchmarkId::new("SimpleSmt::new", size), |b| {
b.iter_batched(
|| {
leaves
.iter()
.enumerate()
.map(|(c, v)| (c.try_into().unwrap(), v.into()))
.collect::<Vec<(u64, Word)>>()
},
|l| black_box(SimpleSmt::new(SimpleSmt::MAX_DEPTH).unwrap().with_leaves(l)),
BatchSize::SmallInput,
)
});
group.bench_function(
BenchmarkId::new("MerkleStore::with_sparse_merkle_tree", size),
|b| {
b.iter_batched(
|| {
leaves
.iter()
.enumerate()
.map(|(c, v)| (c.try_into().unwrap(), v.into()))
.collect::<Vec<(u64, Word)>>()
},
|l| {
black_box(
MerkleStore::new().with_sparse_merkle_tree(SimpleSmt::MAX_DEPTH, l),
)
},
BatchSize::SmallInput,
)
},
);
}
}
/// Benchmarks updating a leaf on MerkleTree and MerkleStore backends.
fn update_leaf_merkletree(c: &mut Criterion) {
let mut group = c.benchmark_group("update_leaf_merkletree");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let mtree_leaves: Vec<Word> = leaves.iter().map(|v| v.into()).collect();
let mut mtree = MerkleTree::new(mtree_leaves.clone()).unwrap();
let mut store = MerkleStore::new().with_merkle_tree(mtree_leaves).unwrap();
let depth = mtree.depth();
let root = mtree.root();
let size_u64 = size as u64;
group.bench_function(BenchmarkId::new("MerkleTree", size), |b| {
b.iter_batched(
|| (rand_value::<u64>() % size_u64, random_word()),
|(index, value)| black_box(mtree.update_leaf(index, value)),
BatchSize::SmallInput,
)
});
let mut store_root = root;
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| (random_index(size_u64, depth), random_word()),
|(index, value)| {
// The MerkleTree automatically updates its internal root, the Store maintains
// the old root and adds the new one. Here we update the root to have a fair
// comparison
store_root = store.set_node(root, index, value).unwrap().root;
black_box(store_root)
},
BatchSize::SmallInput,
)
});
}
}
/// Benchmarks updating a leaf on SMT and MerkleStore backends.
fn update_leaf_simplesmt(c: &mut Criterion) {
let mut group = c.benchmark_group("update_leaf_simplesmt");
let random_data_size = BATCH_SIZES.into_iter().max().unwrap();
let random_data: Vec<RpoDigest> = (0..random_data_size).map(|_| random_rpo_digest()).collect();
for size in BATCH_SIZES {
let leaves = &random_data[..size];
let smt_leaves = leaves
.iter()
.enumerate()
.map(|(c, v)| (c.try_into().unwrap(), v.into()))
.collect::<Vec<(u64, Word)>>();
let mut smt = SimpleSmt::new(SimpleSmt::MAX_DEPTH)
.unwrap()
.with_leaves(smt_leaves.clone())
.unwrap();
let mut store = MerkleStore::new()
.with_sparse_merkle_tree(SimpleSmt::MAX_DEPTH, smt_leaves)
.unwrap();
let depth = smt.depth();
let root = smt.root();
let size_u64 = size as u64;
group.bench_function(BenchmarkId::new("SimpleSMT", size), |b| {
b.iter_batched(
|| (rand_value::<u64>() % size_u64, random_word()),
|(index, value)| black_box(smt.update_leaf(index, value)),
BatchSize::SmallInput,
)
});
let mut store_root = root;
group.bench_function(BenchmarkId::new("MerkleStore", size), |b| {
b.iter_batched(
|| (random_index(size_u64, depth), random_word()),
|(index, value)| {
// The MerkleTree automatically updates its internal root, the Store maintains
// the old root and adds the new one. Here we update the root to have a fair
// comparison
store_root = store.set_node(root, index, value).unwrap().root;
black_box(store_root)
},
BatchSize::SmallInput,
)
});
}
}
criterion_group!(
store_group,
get_empty_leaf_simplesmt,
get_leaf_merkletree,
get_leaf_path_merkletree,
get_leaf_path_simplesmt,
get_leaf_simplesmt,
get_node_merkletree,
get_node_of_empty_simplesmt,
get_node_simplesmt,
new,
update_leaf_merkletree,
update_leaf_simplesmt,
);
criterion_main!(store_group);


@@ -1,7 +1,5 @@
use super::{Digest, ElementHasher, Felt, FieldElement, Hasher, StarkField};
use crate::utils::{
uninit_vector, ByteReader, ByteWriter, Deserializable, DeserializationError, Serializable,
};
use crate::utils::{ByteReader, ByteWriter, Deserializable, DeserializationError, Serializable};
use core::{
mem::{size_of, transmute, transmute_copy},
ops::Deref,
@@ -56,13 +54,13 @@ impl<const N: usize> From<[u8; N]> for Blake3Digest<N> {
impl<const N: usize> Serializable for Blake3Digest<N> {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
target.write_u8_slice(&self.0);
target.write_bytes(&self.0);
}
}
impl<const N: usize> Deserializable for Blake3Digest<N> {
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
source.read_u8_array().map(Self)
source.read_array().map(Self)
}
}
@@ -78,9 +76,13 @@ impl<const N: usize> Digest for Blake3Digest<N> {
// ================================================================================================
/// 256-bit output blake3 hasher.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct Blake3_256;
impl Hasher for Blake3_256 {
/// Blake3 collision resistance is 128-bits for 32-bytes output.
const COLLISION_RESISTANCE: u32 = 128;
type Digest = Blake3Digest<32>;
fn hash(bytes: &[u8]) -> Self::Digest {
@@ -138,9 +140,13 @@ impl Blake3_256 {
// ================================================================================================
/// 192-bit output blake3 hasher.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct Blake3_192;
impl Hasher for Blake3_192 {
/// Blake3 collision resistance is 96-bits for 24-bytes output.
const COLLISION_RESISTANCE: u32 = 96;
type Digest = Blake3Digest<24>;
fn hash(bytes: &[u8]) -> Self::Digest {
@@ -198,9 +204,13 @@ impl Blake3_192 {
// ================================================================================================
/// 160-bit output blake3 hasher.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct Blake3_160;
impl Hasher for Blake3_160 {
/// Blake3 collision resistance is 80-bits for 20-bytes output.
const COLLISION_RESISTANCE: u32 = 80;
type Digest = Blake3Digest<20>;
fn hash(bytes: &[u8]) -> Self::Digest {
@@ -278,15 +288,25 @@ where
let digest = if Felt::IS_CANONICAL {
blake3::hash(E::elements_as_bytes(elements))
} else {
let base_elements = E::as_base_elements(elements);
let blen = base_elements.len() << 3;
let mut hasher = blake3::Hasher::new();
let mut bytes = unsafe { uninit_vector(blen) };
for (idx, element) in base_elements.iter().enumerate() {
bytes[idx * 8..(idx + 1) * 8].copy_from_slice(&element.as_int().to_le_bytes());
// BLAKE3 state is 64 bytes - so, we can absorb 64 bytes into the state in a single
// permutation. we move the elements into the hasher via the buffer to give the CPU
// a chance to process multiple element-to-byte conversions in parallel
let mut buf = [0_u8; 64];
let mut chunk_iter = E::slice_as_base_elements(elements).chunks_exact(8);
for chunk in chunk_iter.by_ref() {
for i in 0..8 {
buf[i * 8..(i + 1) * 8].copy_from_slice(&chunk[i].as_int().to_le_bytes());
}
hasher.update(&buf);
}
blake3::hash(&bytes)
for element in chunk_iter.remainder() {
hasher.update(&element.as_int().to_le_bytes());
}
hasher.finalize()
};
*shrink_bytes(&digest.into())
}


@@ -1,6 +1,22 @@
use super::*;
use crate::utils::collections::Vec;
use proptest::prelude::*;
use rand_utils::rand_vector;
#[test]
fn blake3_hash_elements() {
// test multiple of 8
let elements = rand_vector::<Felt>(16);
let expected = compute_expected_element_hash(&elements);
let actual: [u8; 32] = hash_elements(&elements);
assert_eq!(&expected, &actual);
// test not multiple of 8
let elements = rand_vector::<Felt>(17);
let expected = compute_expected_element_hash(&elements);
let actual: [u8; 32] = hash_elements(&elements);
assert_eq!(&expected, &actual);
}
proptest! {
#[test]
@@ -18,3 +34,14 @@ proptest! {
Blake3_256::hash(vec);
}
}
// HELPER FUNCTIONS
// ================================================================================================
fn compute_expected_element_hash(elements: &[Felt]) -> blake3::Hash {
let mut bytes = Vec::new();
for element in elements.iter() {
bytes.extend_from_slice(&element.as_int().to_le_bytes());
}
blake3::hash(&bytes)
}


@@ -1,5 +1,9 @@
use super::{Felt, FieldElement, StarkField, ONE, ZERO};
use winter_crypto::{Digest, ElementHasher, Hasher};
pub mod blake;
pub mod rpo;
// RE-EXPORTS
// ================================================================================================
pub use winter_crypto::{Digest, ElementHasher, Hasher};


@@ -11,7 +11,7 @@ use core::{cmp::Ordering, ops::Deref};
pub struct RpoDigest([Felt; DIGEST_SIZE]);
impl RpoDigest {
pub fn new(value: [Felt; DIGEST_SIZE]) -> Self {
pub const fn new(value: [Felt; DIGEST_SIZE]) -> Self {
Self(value)
}
@@ -46,7 +46,7 @@ impl Digest for RpoDigest {
impl Serializable for RpoDigest {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
target.write_u8_slice(&self.as_bytes());
target.write_bytes(&self.as_bytes());
}
}
@@ -73,12 +73,24 @@ impl From<[Felt; DIGEST_SIZE]> for RpoDigest {
}
}
impl From<&RpoDigest> for [Felt; DIGEST_SIZE] {
fn from(value: &RpoDigest) -> Self {
value.0
}
}
impl From<RpoDigest> for [Felt; DIGEST_SIZE] {
fn from(value: RpoDigest) -> Self {
value.0
}
}
impl From<&RpoDigest> for [u8; 32] {
fn from(value: &RpoDigest) -> Self {
value.as_bytes()
}
}
impl From<RpoDigest> for [u8; 32] {
fn from(value: RpoDigest) -> Self {
value.as_bytes()


@@ -88,67 +88,80 @@ const INV_ALPHA: u64 = 10540996611094048183;
/// to deserialize them into field elements and then hash them using
/// [hash_elements()](Rpo256::hash_elements) function rather than hashing the serialized bytes
/// using [hash()](Rpo256::hash) function.
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct Rpo256();
impl Hasher for Rpo256 {
/// Rpo256 collision resistance is the same as the security level, that is 128-bits.
///
/// #### Collision resistance
///
/// However, our setup of the capacity registers might drop it to 126.
///
/// Related issue: [#69](https://github.com/0xPolygonMiden/crypto/issues/69)
const COLLISION_RESISTANCE: u32 = 128;
type Digest = RpoDigest;
fn hash(bytes: &[u8]) -> Self::Digest {
// compute the number of elements required to represent the string; we will be processing
// the string in BINARY_CHUNK_SIZE-byte chunks, thus the number of elements will be equal
// to the number of such chunks (including a potential partial chunk at the end).
let num_elements = if bytes.len() % BINARY_CHUNK_SIZE == 0 {
bytes.len() / BINARY_CHUNK_SIZE
} else {
bytes.len() / BINARY_CHUNK_SIZE + 1
};
// initialize state to all zeros, except for the first element of the capacity part, which
// is set to the number of elements to be hashed. this is done so that adding zero elements
// at the end of the list always results in a different hash.
// initialize the state with zeroes
let mut state = [ZERO; STATE_WIDTH];
state[CAPACITY_RANGE.start] = Felt::new(num_elements as u64);
// break the string into BINARY_CHUNK_SIZE-byte chunks, convert each chunk into a field
// element, and absorb the element into the rate portion of the state. we use
// BINARY_CHUNK_SIZE-byte chunks because every BINARY_CHUNK_SIZE-byte chunk is guaranteed
// to map to some field element.
let mut i = 0;
let mut buf = [0_u8; 8];
for chunk in bytes.chunks(BINARY_CHUNK_SIZE) {
if i < num_elements - 1 {
buf[..BINARY_CHUNK_SIZE].copy_from_slice(chunk);
} else {
// if we are dealing with the last chunk, it may be smaller than BINARY_CHUNK_SIZE
// bytes long, so we need to handle it slightly differently. We also append a byte
// with value 1 to the end of the string; this pads the string in such a way that
// adding trailing zeros results in different hash
let chunk_len = chunk.len();
buf = [0_u8; 8];
buf[..chunk_len].copy_from_slice(chunk);
buf[chunk_len] = 1;
}
// convert the bytes into a field element and absorb it into the rate portion of the
// state; if the rate is filled up, apply the Rescue permutation and start absorbing
// again from zero index.
state[RATE_RANGE.start + i] = Felt::new(u64::from_le_bytes(buf));
i += 1;
if i % RATE_WIDTH == 0 {
Self::apply_permutation(&mut state);
i = 0;
}
// set the capacity (first element) to a flag on whether or not the input length is evenly
// divided by the rate. this will prevent collisions between padded and non-padded inputs,
// and will rule out the need to perform an extra permutation in case of evenly divided
// inputs.
let is_rate_multiple = bytes.len() % RATE_WIDTH == 0;
if !is_rate_multiple {
state[CAPACITY_RANGE.start] = ONE;
}
// initialize a buffer to receive the little-endian elements.
let mut buf = [0_u8; 8];
// iterate the chunks of bytes, creating a field element from each chunk and copying it
// into the state.
//
// every time the rate range is filled, a permutation is performed. if the final value of
// `i` is not zero, then the chunks count wasn't enough to fill the state range, and an
// additional permutation must be performed.
let i = bytes.chunks(BINARY_CHUNK_SIZE).fold(0, |i, chunk| {
// the last element of the iteration may or may not be a full chunk. if it's not, then
// we need to pad the remainder bytes of the chunk with zeroes, separated by a `1`.
// this will avoid collisions.
if chunk.len() == BINARY_CHUNK_SIZE {
buf[..BINARY_CHUNK_SIZE].copy_from_slice(chunk);
} else {
buf.fill(0);
buf[..chunk.len()].copy_from_slice(chunk);
buf[chunk.len()] = 1;
}
// set the current rate element to the input. since we take at most 7 bytes, we are
// guaranteed that the inputs data will fit into a single field element.
state[RATE_RANGE.start + i] = Felt::new(u64::from_le_bytes(buf));
// proceed filling the range. if it's full, then we apply a permutation and reset the
// counter to the beginning of the range.
if i == RATE_WIDTH - 1 {
Self::apply_permutation(&mut state);
0
} else {
i + 1
}
});
// if we absorbed some elements but didn't apply a permutation to them (would happen when
// the number of elements is not a multiple of RATE_WIDTH), apply the RPO permutation.
// we don't need to apply any extra padding because we injected total number of elements
// in the input list into the capacity portion of the state during initialization.
if i > 0 {
// the number of elements is not a multiple of RATE_WIDTH), apply the RPO permutation. we
// don't need to apply any extra padding because the first capacity element contains a
// flag indicating whether the input is evenly divisible by the rate.
if i != 0 {
state[RATE_RANGE.start + i..RATE_RANGE.end].fill(ZERO);
state[RATE_RANGE.start + i] = ONE;
Self::apply_permutation(&mut state);
}
// return the first 4 elements of the state as hash result
// return the first 4 elements of the rate as hash result.
RpoDigest::new(state[DIGEST_RANGE].try_into().unwrap())
}
@@ -199,7 +212,7 @@ impl ElementHasher for Rpo256 {
fn hash_elements<E: FieldElement<BaseField = Self::BaseField>>(elements: &[E]) -> Self::Digest {
// convert the elements into a list of base field elements
let elements = E::as_base_elements(elements);
let elements = E::slice_as_base_elements(elements);
// initialize state to all zeros, except for the first element of the capacity part, which
// is set to 1 if the number of elements is not a multiple of RATE_WIDTH.


@@ -2,7 +2,9 @@ use super::{
Felt, FieldElement, Hasher, Rpo256, RpoDigest, StarkField, ALPHA, INV_ALPHA, ONE, STATE_WIDTH,
ZERO,
};
use crate::utils::collections::{BTreeSet, Vec};
use core::convert::TryInto;
use proptest::prelude::*;
use rand_utils::rand_value;
#[test]
@@ -193,6 +195,43 @@ fn hash_test_vectors() {
}
}
#[test]
fn sponge_bytes_with_remainder_length_wont_panic() {
// this test targets to assert that no panic will happen with the edge case of having an inputs
// with length that is not divisible by the used binary chunk size. 113 is a non-negligible
// input length that is prime; hence guaranteed to not be divisible by any choice of chunk
// size.
//
// this is a preliminary test to the fuzzy-stress of proptest.
Rpo256::hash(&vec![0; 113]);
}
#[test]
fn sponge_collision_for_wrapped_field_element() {
let a = Rpo256::hash(&[0; 8]);
let b = Rpo256::hash(&Felt::MODULUS.to_le_bytes());
assert_ne!(a, b);
}
#[test]
fn sponge_zeroes_collision() {
let mut zeroes = Vec::with_capacity(255);
let mut set = BTreeSet::new();
(0..255).for_each(|_| {
let hash = Rpo256::hash(&zeroes);
zeroes.push(0);
// panic if a collision was found
assert!(set.insert(hash));
});
}
proptest! {
#[test]
fn rpo256_wont_panic_with_arbitrary_input(ref vec in any::<Vec<u8>>()) {
Rpo256::hash(&vec);
}
}
const EXPECTED: [[Felt; 4]; 19] = [
[
Felt::new(1502364727743950833),


@@ -6,21 +6,14 @@ extern crate alloc;
pub mod hash;
pub mod merkle;
pub mod utils;
// RE-EXPORTS
// ================================================================================================
pub use winter_crypto::{RandomCoin, RandomCoinError};
pub use winter_math::{fields::f64::BaseElement as Felt, FieldElement, StarkField};
pub mod utils {
pub use winter_utils::{
collections, string, uninit_vector, ByteReader, ByteWriter, Deserializable,
DeserializationError, Serializable, SliceReader,
};
}
// TYPE ALIASES
// ================================================================================================
@@ -38,3 +31,32 @@ pub const ZERO: Felt = Felt::ZERO;
/// Field element representing ONE in the Miden base field.
pub const ONE: Felt = Felt::ONE;
// TESTS
// ================================================================================================
#[test]
#[should_panic]
fn debug_assert_is_checked() {
// enforce the release checks to always have `RUSTFLAGS="-C debug-assertions"`.
//
// some upstream tests are performed with `debug_assert`, and we want to assert its correctness
// downstream.
//
// for reference, check
// https://github.com/0xPolygonMiden/miden-vm/issues/433
debug_assert!(false);
}
#[test]
#[should_panic]
#[allow(arithmetic_overflow)]
fn overflow_panics_for_test() {
// overflows might be disabled if tests are performed in release mode. these are critical,
// mandatory checks as overflows might be attack vectors.
//
// to enable overflow checks in release mode, ensure `RUSTFLAGS="-C overflow-checks"`
let a = 1_u64;
let b = 64;
assert_ne!(a << b, 0);
}

src/merkle/empty_roots.rs (new file, +1585 lines)

File diff suppressed because it is too large.

src/merkle/index.rs (new file, +175 lines)

@@ -0,0 +1,175 @@
use super::{Felt, MerkleError, RpoDigest, StarkField};
// NODE INDEX
// ================================================================================================
/// Address to an arbitrary node in a binary tree using level order form.
///
/// The position is represented by the pair `(depth, pos)`, where for a given depth `d` elements
/// are numbered from $0..(2^d)-1$. Example:
///
/// ```ignore
/// depth
/// 0 0
/// 1 0 1
/// 2 0 1 2 3
/// 3 0 1 2 3 4 5 6 7
/// ```
///
/// The root is represented by the pair $(0, 0)$, its left child is $(1, 0)$ and its right child
/// $(1, 1)$.
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq, PartialOrd, Ord, Hash)]
pub struct NodeIndex {
depth: u8,
value: u64,
}
impl NodeIndex {
// CONSTRUCTORS
// --------------------------------------------------------------------------------------------
/// Creates a new node index.
///
/// # Errors
/// Returns an error if the `value` is greater than or equal to 2^{depth}.
pub const fn new(depth: u8, value: u64) -> Result<Self, MerkleError> {
if (64 - value.leading_zeros()) > depth as u32 {
Err(MerkleError::InvalidIndex { depth, value })
} else {
Ok(Self { depth, value })
}
}
/// Creates a new node index for testing purposes.
///
/// # Panics
/// Panics if the `value` is greater than or equal to 2^{depth}.
#[cfg(test)]
pub fn make(depth: u8, value: u64) -> Self {
Self::new(depth, value).unwrap()
}
/// Creates a node index from a pair of field elements representing the depth and value.
///
/// # Errors
/// Returns an error if:
/// - `depth` doesn't fit in a `u8`.
/// - `value` is greater than or equal to 2^{depth}.
pub fn from_elements(depth: &Felt, value: &Felt) -> Result<Self, MerkleError> {
let depth = depth.as_int();
let depth = u8::try_from(depth).map_err(|_| MerkleError::DepthTooBig(depth))?;
let value = value.as_int();
Self::new(depth, value)
}
/// Creates a new node index pointing to the root of the tree.
pub const fn root() -> Self {
Self { depth: 0, value: 0 }
}
/// Returns the index of the sibling of the current node.
pub fn sibling(mut self) -> Self {
self.value ^= 1;
self
}
// PROVIDERS
// --------------------------------------------------------------------------------------------
/// Builds the pair of nodes used as input to a hash function when computing a Merkle path.
///
/// The parity of the current index determines whether `slf` is placed on the left or on the
/// right of its sibling.
pub const fn build_node(&self, slf: RpoDigest, sibling: RpoDigest) -> [RpoDigest; 2] {
if self.is_value_odd() {
[sibling, slf]
} else {
[slf, sibling]
}
}
/// Returns the scalar representation of the depth/value pair.
///
/// It is computed as `2^depth + value`.
pub const fn to_scalar_index(&self) -> u64 {
(1 << self.depth as u64) + self.value
}
/// Returns the depth of the current instance.
pub const fn depth(&self) -> u8 {
self.depth
}
/// Returns the value of this index.
pub const fn value(&self) -> u64 {
self.value
}
/// Returns true if the current instance points to a right sibling node.
pub const fn is_value_odd(&self) -> bool {
(self.value & 1) == 1
}
/// Returns `true` if the depth is `0`.
pub const fn is_root(&self) -> bool {
self.depth == 0
}
// STATE MUTATORS
// --------------------------------------------------------------------------------------------
/// Traverses one level towards the root, decrementing the depth by `1` and halving the value.
pub fn move_up(&mut self) -> &mut Self {
self.depth = self.depth.saturating_sub(1);
self.value >>= 1;
self
}
}
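To make the level-order addressing and the `2^depth + value` scalar index described above concrete, here is a minimal, self-contained sketch of the same arithmetic (an illustrative aside, not part of the diff; it re-derives the math with plain functions and does not depend on the `NodeIndex` type):

```rust
/// Level-order (scalar) index of the node at (depth, value): 2^depth + value.
fn to_scalar_index(depth: u8, value: u64) -> u64 {
    (1u64 << depth) + value
}

/// A value is valid for a given depth when it fits in `depth` bits, i.e. when
/// `value < 2^depth`; this mirrors the check performed by `NodeIndex::new`.
fn is_valid(depth: u8, value: u64) -> bool {
    (64 - value.leading_zeros()) <= depth as u32
}

fn main() {
    assert_eq!(to_scalar_index(0, 0), 1); // the root
    assert_eq!(to_scalar_index(1, 0), 2); // left child of the root
    assert_eq!(to_scalar_index(1, 1), 3); // right child of the root
    assert_eq!(to_scalar_index(2, 3), 7); // right-most node at depth 2

    // the sibling flips the lowest bit of the value; moving up halves it
    assert_eq!(3u64 ^ 1, 2);
    assert_eq!(3u64 >> 1, 1);

    // the validation performed by the constructor
    assert!(is_valid(2, 3));
    assert!(!is_valid(0, 1));
}
```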
#[cfg(test)]
mod tests {
use super::*;
use proptest::prelude::*;
#[test]
fn test_node_index_value_too_high() {
assert_eq!(
NodeIndex::new(0, 0).unwrap(),
NodeIndex { depth: 0, value: 0 }
);
match NodeIndex::new(0, 1) {
Err(MerkleError::InvalidIndex { depth, value }) => {
assert_eq!(depth, 0);
assert_eq!(value, 1);
}
_ => unreachable!(),
}
}
#[test]
fn test_node_index_can_represent_depth_64() {
assert!(NodeIndex::new(64, u64::MAX).is_ok());
}
prop_compose! {
fn node_index()(value in 0..2u64.pow(u64::BITS - 1)) -> NodeIndex {
// the unwrap below never panics: `depth` is the bit length of `value`, so
// `value < 2^depth` always holds (and `value == 0` maps to the root index)
let depth = (u64::BITS - value.leading_zeros()) as u8;
NodeIndex::new(depth, value).unwrap()
}
}
proptest! {
#[test]
fn arbitrary_index_wont_panic_on_move_up(
mut index in node_index(),
count in prop::num::u8::ANY,
) {
for _ in 0..count {
index.move_up();
}
}
}
}


@@ -1,344 +0,0 @@
use super::{BTreeMap, MerkleError, Rpo256, Vec, Word, ZERO};
// MERKLE PATH SET
// ================================================================================================
/// A set of Merkle paths.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct MerklePathSet {
root: Word,
total_depth: u32,
paths: BTreeMap<u64, Vec<Word>>,
}
impl MerklePathSet {
// CONSTRUCTOR
// --------------------------------------------------------------------------------------------
/// Returns an empty MerklePathSet.
pub fn new(depth: u32) -> Result<Self, MerkleError> {
let root = [ZERO; 4];
let paths = BTreeMap::<u64, Vec<Word>>::new();
Ok(Self {
root,
total_depth: depth,
paths,
})
}
// PUBLIC ACCESSORS
// --------------------------------------------------------------------------------------------
/// Adds the specified Merkle path to this [MerklePathSet]. The `index` and `value` parameters
/// specify the leaf node at which the path starts.
///
/// # Errors
/// Returns an error if:
/// - The specified index is not valid in the context of this Merkle path set (i.e., the index
/// implies a greater depth than is specified for this set).
/// - The specified path is not consistent with other paths in the set (i.e., resolves to a
/// different root).
pub fn add_path(
&mut self,
index: u64,
value: Word,
path: Vec<Word>,
) -> Result<(), MerkleError> {
let depth = (path.len() + 1) as u32;
if depth != self.total_depth {
return Err(MerkleError::InvalidDepth(self.total_depth, depth));
}
// Level-order position of the leaf node in the tree
let pos = 2u64.pow(self.total_depth) + index;
// Index of the leaf path in map. Paths of neighboring leaves are stored in one key-value pair
let half_pos = pos / 2;
let mut extended_path = path;
if is_even(pos) {
extended_path.insert(0, value);
} else {
extended_path.insert(1, value);
}
let root_of_current_path = compute_path_root(&extended_path, depth, index);
if self.root == [ZERO; 4] {
self.root = root_of_current_path;
} else if self.root != root_of_current_path {
return Err(MerkleError::InvalidPath(extended_path));
}
self.paths.insert(half_pos, extended_path);
Ok(())
}
/// Returns the root to which all paths in this set resolve.
pub fn root(&self) -> Word {
self.root
}
/// Returns the depth of the Merkle tree implied by the paths stored in this set.
///
/// Merkle tree of depth 1 has two leaves, depth 2 has four leaves etc.
pub fn depth(&self) -> u32 {
self.total_depth
}
/// Returns a node at the specified index.
///
/// # Errors
/// Returns an error if:
/// * The specified index is not valid for the depth of the structure.
/// * Requested node does not exist in the set.
pub fn get_node(&self, depth: u32, index: u64) -> Result<Word, MerkleError> {
if index >= 2u64.pow(self.total_depth) {
return Err(MerkleError::InvalidIndex(self.total_depth, index));
}
if depth != self.total_depth {
return Err(MerkleError::InvalidDepth(self.total_depth, depth));
}
let pos = 2u64.pow(depth) + index;
let index = pos / 2;
match self.paths.get(&index) {
None => Err(MerkleError::NodeNotInSet(index)),
Some(path) => {
if is_even(pos) {
Ok(path[0])
} else {
Ok(path[1])
}
}
}
}
/// Returns a Merkle path to the node at the specified index. The node itself is
/// not included in the path.
///
/// # Errors
/// Returns an error if:
/// * The specified index is not valid for the depth of the structure.
/// * Node of the requested path does not exist in the set.
pub fn get_path(&self, depth: u32, index: u64) -> Result<Vec<Word>, MerkleError> {
if index >= 2u64.pow(self.total_depth) {
return Err(MerkleError::InvalidIndex(self.total_depth, index));
}
if depth != self.total_depth {
return Err(MerkleError::InvalidDepth(self.total_depth, depth));
}
let pos = 2u64.pow(depth) + index;
let index = pos / 2;
match self.paths.get(&index) {
None => Err(MerkleError::NodeNotInSet(index)),
Some(path) => {
let mut local_path = path.clone();
if is_even(pos) {
local_path.remove(0);
Ok(local_path)
} else {
local_path.remove(1);
Ok(local_path)
}
}
}
}
/// Replaces the leaf at the specified index with the provided value.
///
/// # Errors
/// Returns an error if:
/// * Requested node does not exist in the set.
pub fn update_leaf(&mut self, index: u64, value: Word) -> Result<(), MerkleError> {
let depth = self.depth();
if index >= 2u64.pow(depth) {
return Err(MerkleError::InvalidIndex(depth, index));
}
let pos = 2u64.pow(depth) + index;
let path = match self.paths.get_mut(&(pos / 2)) {
None => return Err(MerkleError::NodeNotInSet(index)),
Some(path) => path,
};
// Fill old_hashes vector -----------------------------------------------------------------
let (old_hashes, _) = compute_path_trace(path, depth, index);
// Fill new_hashes vector -----------------------------------------------------------------
if is_even(pos) {
path[0] = value;
} else {
path[1] = value;
}
let (new_hashes, new_root) = compute_path_trace(path, depth, index);
self.root = new_root;
// update paths ---------------------------------------------------------------------------
for path in self.paths.values_mut() {
for i in (0..old_hashes.len()).rev() {
if path[i + 2] == old_hashes[i] {
path[i + 2] = new_hashes[i];
break;
}
}
}
Ok(())
}
}
// HELPER FUNCTIONS
// --------------------------------------------------------------------------------------------
fn is_even(pos: u64) -> bool {
pos & 1 == 0
}
/// Calculates the hash of the parent node from two sibling nodes
/// - node — the current node
/// - node_pos — the position of the current node
/// - sibling — the current node's neighbor in the tree
fn calculate_parent_hash(node: Word, node_pos: u64, sibling: Word) -> Word {
if is_even(node_pos) {
Rpo256::merge(&[node.into(), sibling.into()]).into()
} else {
Rpo256::merge(&[sibling.into(), node.into()]).into()
}
}
/// Returns the vector of hashes from the current node to the root
fn compute_path_trace(path: &[Word], depth: u32, index: u64) -> (Vec<Word>, Word) {
let mut pos = 2u64.pow(depth) + index;
let mut computed_hashes = Vec::<Word>::new();
let mut comp_hash = Rpo256::merge(&[path[0].into(), path[1].into()]).into();
if path.len() != 2 {
for path_hash in path.iter().skip(2) {
computed_hashes.push(comp_hash);
pos /= 2;
comp_hash = calculate_parent_hash(comp_hash, pos, *path_hash);
}
}
(computed_hashes, comp_hash)
}
/// Returns the hash of the root
fn compute_path_root(path: &[Word], depth: u32, index: u64) -> Word {
let mut pos = 2u64.pow(depth) + index;
// hash that is obtained after calculating the current hash and path hash
let mut comp_hash = Rpo256::merge(&[path[0].into(), path[1].into()]).into();
for path_hash in path.iter().skip(2) {
pos /= 2;
comp_hash = calculate_parent_hash(comp_hash, pos, *path_hash);
}
comp_hash
}
// TESTS
// ================================================================================================
#[cfg(test)]
mod tests {
use super::calculate_parent_hash;
use crate::merkle::int_to_node;
#[test]
fn get_root() {
let leaf0 = int_to_node(0);
let leaf1 = int_to_node(1);
let leaf2 = int_to_node(2);
let leaf3 = int_to_node(3);
let parent0 = calculate_parent_hash(leaf0, 0, leaf1);
let parent1 = calculate_parent_hash(leaf2, 2, leaf3);
let root_exp = calculate_parent_hash(parent0, 0, parent1);
let mut set = super::MerklePathSet::new(3).unwrap();
set.add_path(0, leaf0, vec![leaf1, parent1]).unwrap();
assert_eq!(set.root(), root_exp);
}
#[test]
fn add_and_get_path() {
let path_6 = vec![int_to_node(7), int_to_node(45), int_to_node(123)];
let hash_6 = int_to_node(6);
let index = 6u64;
let depth = 4u32;
let mut set = super::MerklePathSet::new(depth).unwrap();
set.add_path(index, hash_6, path_6.clone()).unwrap();
let stored_path_6 = set.get_path(depth, index).unwrap();
assert_eq!(path_6, stored_path_6);
assert!(set.get_path(depth, 15u64).is_err())
}
#[test]
fn get_node() {
let path_6 = vec![int_to_node(7), int_to_node(45), int_to_node(123)];
let hash_6 = int_to_node(6);
let index = 6u64;
let depth = 4u32;
let mut set = super::MerklePathSet::new(depth).unwrap();
set.add_path(index, hash_6, path_6).unwrap();
assert_eq!(int_to_node(6u64), set.get_node(depth, index).unwrap());
assert!(set.get_node(depth, 15u64).is_err());
}
#[test]
fn update_leaf() {
let hash_4 = int_to_node(4);
let hash_5 = int_to_node(5);
let hash_6 = int_to_node(6);
let hash_7 = int_to_node(7);
let hash_45 = calculate_parent_hash(hash_4, 12u64, hash_5);
let hash_67 = calculate_parent_hash(hash_6, 14u64, hash_7);
let hash_0123 = int_to_node(123);
let path_6 = vec![hash_7, hash_45, hash_0123];
let path_5 = vec![hash_4, hash_67, hash_0123];
let path_4 = vec![hash_5, hash_67, hash_0123];
let index_6 = 6u64;
let index_5 = 5u64;
let index_4 = 4u64;
let depth = 4u32;
let mut set = super::MerklePathSet::new(depth).unwrap();
set.add_path(index_6, hash_6, path_6).unwrap();
set.add_path(index_5, hash_5, path_5).unwrap();
set.add_path(index_4, hash_4, path_4).unwrap();
let new_hash_6 = int_to_node(100);
let new_hash_5 = int_to_node(55);
set.update_leaf(index_6, new_hash_6).unwrap();
let new_path_4 = set.get_path(depth, index_4).unwrap();
let new_hash_67 = calculate_parent_hash(new_hash_6, 14u64, hash_7);
assert_eq!(new_hash_67, new_path_4[1]);
set.update_leaf(index_5, new_hash_5).unwrap();
let new_path_4 = set.get_path(depth, index_4).unwrap();
let new_path_6 = set.get_path(depth, index_6).unwrap();
let new_hash_45 = calculate_parent_hash(new_hash_5, 13u64, hash_4);
assert_eq!(new_hash_45, new_path_6[1]);
assert_eq!(new_hash_5, new_path_4[0]);
}
}


@@ -1,6 +1,11 @@
use super::{Felt, MerkleError, Rpo256, RpoDigest, Vec, Word};
use crate::{utils::uninit_vector, FieldElement};
use core::slice;
use super::{
Felt, InnerNodeInfo, MerkleError, MerklePath, NodeIndex, Rpo256, RpoDigest, Vec, Word,
};
use crate::{
utils::{string::String, uninit_vector, word_to_hex},
FieldElement,
};
use core::{fmt, slice};
use winter_math::log2;
// MERKLE TREE
@@ -22,7 +27,7 @@ impl MerkleTree {
pub fn new(leaves: Vec<Word>) -> Result<Self, MerkleError> {
let n = leaves.len();
if n <= 1 {
return Err(MerkleError::DepthTooSmall(n as u32));
return Err(MerkleError::DepthTooSmall(n as u8));
} else if !n.is_power_of_two() {
return Err(MerkleError::NumLeavesNotPowerOfTwo(n));
}
@@ -35,12 +40,14 @@ impl MerkleTree {
nodes[n..].copy_from_slice(&leaves);
// re-interpret nodes as an array of two nodes fused together
let two_nodes =
unsafe { slice::from_raw_parts(nodes.as_ptr() as *const [RpoDigest; 2], n) };
// Safety: `nodes` will never move here as it is not bound to an external lifetime (i.e.
// `self`).
let ptr = nodes.as_ptr() as *const [RpoDigest; 2];
let pairs = unsafe { slice::from_raw_parts(ptr, n) };
// calculate all internal tree nodes
for i in (1..n).rev() {
nodes[i] = Rpo256::merge(&two_nodes[i]).into();
nodes[i] = Rpo256::merge(&pairs[i]).into();
}
Ok(Self { nodes })
@@ -57,82 +64,178 @@ impl MerkleTree {
/// Returns the depth of this Merkle tree.
///
/// Merkle tree of depth 1 has two leaves, depth 2 has four leaves etc.
pub fn depth(&self) -> u32 {
log2(self.nodes.len() / 2)
pub fn depth(&self) -> u8 {
log2(self.nodes.len() / 2) as u8
}
/// Returns a node at the specified depth and index.
/// Returns a node at the specified depth and index value.
///
/// # Errors
/// Returns an error if:
/// * The specified depth is greater than the depth of the tree.
/// * The specified index not valid for the specified depth.
pub fn get_node(&self, depth: u32, index: u64) -> Result<Word, MerkleError> {
if depth == 0 {
return Err(MerkleError::DepthTooSmall(depth));
} else if depth > self.depth() {
return Err(MerkleError::DepthTooBig(depth));
}
if index >= 2u64.pow(depth) {
return Err(MerkleError::InvalidIndex(depth, index));
/// * The specified index is not valid for the specified depth.
pub fn get_node(&self, index: NodeIndex) -> Result<Word, MerkleError> {
if index.is_root() {
return Err(MerkleError::DepthTooSmall(index.depth()));
} else if index.depth() > self.depth() {
return Err(MerkleError::DepthTooBig(index.depth() as u64));
}
let pos = 2_usize.pow(depth) + (index as usize);
let pos = index.to_scalar_index() as usize;
Ok(self.nodes[pos])
}
/// Returns a Merkle path to the node at the specified depth and index. The node itself is
/// not included in the path.
/// Returns a Merkle path to the node at the specified depth and index value. The node itself
/// is not included in the path.
///
/// # Errors
/// Returns an error if:
/// * The specified depth is greater than the depth of the tree.
/// * The specified index not valid for the specified depth.
pub fn get_path(&self, depth: u32, index: u64) -> Result<Vec<Word>, MerkleError> {
if depth == 0 {
return Err(MerkleError::DepthTooSmall(depth));
} else if depth > self.depth() {
return Err(MerkleError::DepthTooBig(depth));
}
if index >= 2u64.pow(depth) {
return Err(MerkleError::InvalidIndex(depth, index));
/// * The specified value is not valid for the specified depth.
pub fn get_path(&self, mut index: NodeIndex) -> Result<MerklePath, MerkleError> {
if index.is_root() {
return Err(MerkleError::DepthTooSmall(index.depth()));
} else if index.depth() > self.depth() {
return Err(MerkleError::DepthTooBig(index.depth() as u64));
}
let mut path = Vec::with_capacity(depth as usize);
let mut pos = 2_usize.pow(depth) + (index as usize);
while pos > 1 {
path.push(self.nodes[pos ^ 1]);
pos >>= 1;
// TODO should we create a helper in `NodeIndex` that will encapsulate traversal to root so
// we always use inlined `for` instead of `while`? the reason to use `for` is that it's
// easier for the compiler to vectorize.
let mut path = Vec::with_capacity(index.depth() as usize);
for _ in 0..index.depth() {
let sibling = index.sibling().to_scalar_index() as usize;
path.push(self.nodes[sibling]);
index.move_up();
}
Ok(path)
debug_assert!(
index.is_root(),
"the path walk must go all the way to the root"
);
Ok(path.into())
}
/// Replaces the leaf at the specified index with the provided value.
///
/// # Errors
/// Returns an error if the specified index is not a valid leaf index for this tree.
pub fn update_leaf(&mut self, index: u64, value: Word) -> Result<(), MerkleError> {
let depth = self.depth();
if index >= 2u64.pow(depth) {
return Err(MerkleError::InvalidIndex(depth, index));
}
let mut index = 2usize.pow(depth) + index as usize;
self.nodes[index] = value;
/// Returns an error if the specified index value is not a valid leaf value for this tree.
pub fn update_leaf<'a>(&'a mut self, index_value: u64, value: Word) -> Result<(), MerkleError> {
let mut index = NodeIndex::new(self.depth(), index_value)?;
// we don't need to copy the pairs into a new address as we are logically guaranteed to not
// overlap write instructions. however, it's important to bind the lifetime of pairs to
// `self.nodes` so the compiler will never move one without moving the other.
debug_assert_eq!(self.nodes.len() & 1, 0);
let n = self.nodes.len() / 2;
let two_nodes =
unsafe { slice::from_raw_parts(self.nodes.as_ptr() as *const [RpoDigest; 2], n) };
for _ in 0..depth {
index /= 2;
self.nodes[index] = Rpo256::merge(&two_nodes[index]).into();
// Safety: the length of nodes is guaranteed to contain pairs of words; hence, pairs of
// digests. we explicitly bind the lifetime here so we add an extra layer of guarantee that
// `self.nodes` will be moved only if `pairs` is moved as well. also, the algorithm is
// logically guaranteed to not overlap write positions as the write index is always half
// the index from which we read the digest input.
let ptr = self.nodes.as_ptr() as *const [RpoDigest; 2];
let pairs: &'a [[RpoDigest; 2]] = unsafe { slice::from_raw_parts(ptr, n) };
// update the current node
let pos = index.to_scalar_index() as usize;
self.nodes[pos] = value;
// traverse to the root, updating each node with the merged values of its children
for _ in 0..index.depth() {
index.move_up();
let pos = index.to_scalar_index() as usize;
let value = Rpo256::merge(&pairs[pos]).into();
self.nodes[pos] = value;
}
Ok(())
}
/// An iterator over every inner node in the tree. The iterator order is unspecified.
pub fn inner_nodes(&self) -> MerkleTreeNodes<'_> {
MerkleTreeNodes {
nodes: &self.nodes,
index: 1, // index 0 is just padding, start at 1
}
}
}
// ITERATORS
// ================================================================================================
/// An iterator over every inner node of the [MerkleTree].
///
/// Use this to extract the data of the tree; there is no guarantee on the order of the elements.
pub struct MerkleTreeNodes<'a> {
nodes: &'a Vec<Word>,
index: usize,
}
impl<'a> Iterator for MerkleTreeNodes<'a> {
type Item = InnerNodeInfo;
fn next(&mut self) -> Option<Self::Item> {
if self.index < self.nodes.len() / 2 {
let value = self.index;
let left = self.index * 2;
let right = left + 1;
self.index += 1;
Some(InnerNodeInfo {
value: self.nodes[value],
left: self.nodes[left],
right: self.nodes[right],
})
} else {
None
}
}
}
/// Utility to visualize a [MerkleTree] in text.
pub fn tree_to_text(tree: &MerkleTree) -> Result<String, fmt::Error> {
let indent = " ";
let mut s = String::new();
s.push_str(&word_to_hex(&tree.root())?);
s.push('\n');
for d in 1..=tree.depth() {
let entries = 2u64.pow(d.into());
for i in 0..entries {
let index = NodeIndex::new(d, i).expect("The index must always be valid");
let node = tree.get_node(index).expect("The node must always be found");
for _ in 0..d {
s.push_str(indent);
}
s.push_str(&word_to_hex(&node)?);
s.push('\n');
}
}
Ok(s)
}
/// Utility to visualize a [MerklePath] in text.
pub fn path_to_text(path: &MerklePath) -> Result<String, fmt::Error> {
let mut s = String::new();
s.push('[');
for el in path.iter() {
s.push_str(&word_to_hex(el)?);
s.push_str(", ");
}
// remove the last ", "
if path.len() != 0 {
s.pop();
s.pop();
}
s.push(']');
Ok(s)
}
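A usage sketch for the two text utilities above (illustrative only; it assumes the crate is available as `miden_crypto`, with the `merkle` re-exports shown in `src/merkle/mod.rs` and with `Felt`/`Word` available at the crate root):

```rust
use miden_crypto::{
    merkle::{path_to_text, tree_to_text, MerkleTree, NodeIndex},
    Felt, Word,
};

fn main() {
    // four leaves give a depth-2 tree
    let leaves: Vec<Word> = (1u64..=4)
        .map(|i| [Felt::new(i), Felt::new(0), Felt::new(0), Felt::new(0)])
        .collect();
    let tree = MerkleTree::new(leaves).expect("4 is a power of two");

    // root followed by every level, indented by depth
    println!("{}", tree_to_text(&tree).expect("formatting should not fail"));

    // the opening of the leaf at (depth 2, value 0), rendered as a list of hex words
    let index = NodeIndex::new(2, 0).expect("0 is a valid value at depth 2");
    let path = tree.get_path(index).expect("the node exists");
    println!("{}", path_to_text(&path).expect("formatting should not fail"));
}
```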
// TESTS
@@ -140,10 +243,10 @@ impl MerkleTree {
#[cfg(test)]
mod tests {
use super::{
super::{int_to_node, Rpo256},
Word,
};
use super::*;
use crate::merkle::{int_to_node, InnerNodeInfo};
use core::mem::size_of;
use proptest::prelude::*;
const LEAVES4: [Word; 4] = [
int_to_node(1),
@@ -187,16 +290,16 @@ mod tests {
let tree = super::MerkleTree::new(LEAVES4.to_vec()).unwrap();
// check depth 2
assert_eq!(LEAVES4[0], tree.get_node(2, 0).unwrap());
assert_eq!(LEAVES4[1], tree.get_node(2, 1).unwrap());
assert_eq!(LEAVES4[2], tree.get_node(2, 2).unwrap());
assert_eq!(LEAVES4[3], tree.get_node(2, 3).unwrap());
assert_eq!(LEAVES4[0], tree.get_node(NodeIndex::make(2, 0)).unwrap());
assert_eq!(LEAVES4[1], tree.get_node(NodeIndex::make(2, 1)).unwrap());
assert_eq!(LEAVES4[2], tree.get_node(NodeIndex::make(2, 2)).unwrap());
assert_eq!(LEAVES4[3], tree.get_node(NodeIndex::make(2, 3)).unwrap());
// check depth 1
let (_, node2, node3) = compute_internal_nodes();
assert_eq!(node2, tree.get_node(1, 0).unwrap());
assert_eq!(node3, tree.get_node(1, 1).unwrap());
assert_eq!(node2, tree.get_node(NodeIndex::make(1, 0)).unwrap());
assert_eq!(node3, tree.get_node(NodeIndex::make(1, 1)).unwrap());
}
#[test]
@@ -206,14 +309,26 @@ mod tests {
let (_, node2, node3) = compute_internal_nodes();
// check depth 2
assert_eq!(vec![LEAVES4[1], node3], tree.get_path(2, 0).unwrap());
assert_eq!(vec![LEAVES4[0], node3], tree.get_path(2, 1).unwrap());
assert_eq!(vec![LEAVES4[3], node2], tree.get_path(2, 2).unwrap());
assert_eq!(vec![LEAVES4[2], node2], tree.get_path(2, 3).unwrap());
assert_eq!(
vec![LEAVES4[1], node3],
*tree.get_path(NodeIndex::make(2, 0)).unwrap()
);
assert_eq!(
vec![LEAVES4[0], node3],
*tree.get_path(NodeIndex::make(2, 1)).unwrap()
);
assert_eq!(
vec![LEAVES4[3], node2],
*tree.get_path(NodeIndex::make(2, 2)).unwrap()
);
assert_eq!(
vec![LEAVES4[2], node2],
*tree.get_path(NodeIndex::make(2, 3)).unwrap()
);
// check depth 1
assert_eq!(vec![node3], tree.get_path(1, 0).unwrap());
assert_eq!(vec![node2], tree.get_path(1, 1).unwrap());
assert_eq!(vec![node3], *tree.get_path(NodeIndex::make(1, 0)).unwrap());
assert_eq!(vec![node2], *tree.get_path(NodeIndex::make(1, 1)).unwrap());
}
#[test]
@@ -221,25 +336,87 @@ mod tests {
let mut tree = super::MerkleTree::new(LEAVES8.to_vec()).unwrap();
// update one leaf
let index = 3;
let value = 3;
let new_node = int_to_node(9);
let mut expected_leaves = LEAVES8.to_vec();
expected_leaves[index as usize] = new_node;
expected_leaves[value as usize] = new_node;
let expected_tree = super::MerkleTree::new(expected_leaves.clone()).unwrap();
tree.update_leaf(index, new_node).unwrap();
tree.update_leaf(value, new_node).unwrap();
assert_eq!(expected_tree.nodes, tree.nodes);
// update another leaf
let index = 6;
let value = 6;
let new_node = int_to_node(10);
expected_leaves[index as usize] = new_node;
expected_leaves[value as usize] = new_node;
let expected_tree = super::MerkleTree::new(expected_leaves.clone()).unwrap();
tree.update_leaf(index, new_node).unwrap();
tree.update_leaf(value, new_node).unwrap();
assert_eq!(expected_tree.nodes, tree.nodes);
}
#[test]
fn nodes() -> Result<(), MerkleError> {
let tree = super::MerkleTree::new(LEAVES4.to_vec()).unwrap();
let root = tree.root();
let l1n0 = tree.get_node(NodeIndex::make(1, 0))?;
let l1n1 = tree.get_node(NodeIndex::make(1, 1))?;
let l2n0 = tree.get_node(NodeIndex::make(2, 0))?;
let l2n1 = tree.get_node(NodeIndex::make(2, 1))?;
let l2n2 = tree.get_node(NodeIndex::make(2, 2))?;
let l2n3 = tree.get_node(NodeIndex::make(2, 3))?;
let nodes: Vec<InnerNodeInfo> = tree.inner_nodes().collect();
let expected = vec![
InnerNodeInfo {
value: root,
left: l1n0,
right: l1n1,
},
InnerNodeInfo {
value: l1n0,
left: l2n0,
right: l2n1,
},
InnerNodeInfo {
value: l1n1,
left: l2n2,
right: l2n3,
},
];
assert_eq!(nodes, expected);
Ok(())
}
proptest! {
#[test]
fn arbitrary_word_can_be_represented_as_digest(
a in prop::num::u64::ANY,
b in prop::num::u64::ANY,
c in prop::num::u64::ANY,
d in prop::num::u64::ANY,
) {
// this test will assert the memory equivalence between word and digest.
// it is used to safeguard the `[MerkleTree::update_leaf]` implementation
// that assumes this equivalence.
// build a word and copy it to another address as digest
let word = [Felt::new(a), Felt::new(b), Felt::new(c), Felt::new(d)];
let digest = RpoDigest::from(word);
// assert the addresses are different
let word_ptr = (&word).as_ptr() as *const u8;
let digest_ptr = (&digest).as_ptr() as *const u8;
assert_ne!(word_ptr, digest_ptr);
// compare the bytes representation
let word_bytes = unsafe { slice::from_raw_parts(word_ptr, size_of::<Word>()) };
let digest_bytes = unsafe { slice::from_raw_parts(digest_ptr, size_of::<RpoDigest>()) };
assert_eq!(word_bytes, digest_bytes);
}
}
// HELPER FUNCTIONS
// --------------------------------------------------------------------------------------------


@@ -0,0 +1,61 @@
use super::{
super::Vec,
super::{WORD_SIZE, ZERO},
MmrProof, Rpo256, Word,
};
#[derive(Debug, Clone, PartialEq)]
pub struct MmrPeaks {
/// The number of leaves is used to differentiate accumulators that have the same number of
/// peaks. This happens because the number of peaks goes up and down as the structure is used,
/// causing existing trees to be merged and new ones to be created. As an example, every time
/// the MMR has a power-of-two number of leaves, there is a single peak.
///
/// Every tree in the MMR forest has a distinct power-of-two size, which means only the
/// rightmost tree can have an odd number of elements (i.e. `1`). Additionally, this means that
/// the bits in `num_leaves` conveniently encode the size of each individual tree.
///
/// Examples:
///
/// - With 5 leaves, the binary representation is `0b101`. The number of set bits is equal to
/// the number of peaks; in this case there are 2 peaks. The 0-indexed position of each set
/// bit (counted from the least-significant end) determines the number of elements of a tree,
/// so the rightmost tree has `2**0` elements and the leftmost has `2**2`.
/// - With 12 leaves, the binary is `0b1100`. This case also has 2 peaks: the leftmost tree
/// has `2**3=8` elements, and the rightmost has `2**2=4` elements.
pub num_leaves: usize,
/// All the peaks of every tree in the MMR forest. The peaks are always ordered by number of
/// leaves, starting from the peak of the tree with the most leaves down to the one with the least.
///
/// Invariant: The length of `peaks` must be equal to the number of true bits in `num_leaves`.
pub peaks: Vec<Word>,
}
impl MmrPeaks {
/// Hashes the peaks.
///
/// The hashing is optimized to work with the Miden VM; the procedure will:
///
/// - Pad the peaks with ZERO to an even number of words; this removes the need to handle RPO padding.
/// - Pad the peaks to a minimum length of 16 words, which reduces the constant cost of
/// hashing.
pub fn hash_peaks(&self) -> Word {
let mut copy = self.peaks.clone();
if copy.len() < 16 {
copy.resize(16, [ZERO; WORD_SIZE])
} else if copy.len() % 2 == 1 {
copy.push([ZERO; WORD_SIZE])
}
Rpo256::hash_elements(&copy.as_slice().concat()).into()
}
pub fn verify(&self, value: Word, opening: MmrProof) -> bool {
let root = &self.peaks[opening.peak_index()];
opening
.merkle_path
.verify(opening.relative_pos() as u64, value, root)
}
}
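As a self-contained illustration of the `num_leaves` bit encoding and of the padding applied by `hash_peaks` (an illustrative sketch that re-implements only the bit arithmetic, independent of the `MmrPeaks` type):

```rust
/// Decodes `num_leaves` into the per-tree leaf counts, largest tree first.
fn peak_sizes(num_leaves: usize) -> Vec<usize> {
    (0..usize::BITS)
        .rev()
        .filter(|&bit| num_leaves & (1usize << bit) != 0)
        .map(|bit| 1usize << bit)
        .collect()
}

/// Number of words fed to the hasher by `hash_peaks`: at least 16, always even.
fn padded_len(peak_count: usize) -> usize {
    if peak_count < 16 {
        16
    } else if peak_count % 2 == 1 {
        peak_count + 1
    } else {
        peak_count
    }
}

fn main() {
    // 5 leaves = 0b101: two peaks, for trees of 4 and 1 leaves
    assert_eq!(peak_sizes(0b101), vec![4, 1]);
    // 12 leaves = 0b1100: two peaks, for trees of 8 and 4 leaves
    assert_eq!(peak_sizes(0b1100), vec![8, 4]);
    // 2 peaks are padded up to 16 words; 17 peaks are padded up to 18
    assert_eq!(padded_len(2), 16);
    assert_eq!(padded_len(17), 18);
}
```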

src/merkle/mmr/bit.rs (new file, 46 lines)

@@ -0,0 +1,46 @@
/// Iterate over the bits of a `usize` and yields the bit positions for the true bits.
pub struct TrueBitPositionIterator {
value: usize,
}
impl TrueBitPositionIterator {
pub fn new(value: usize) -> TrueBitPositionIterator {
TrueBitPositionIterator { value }
}
}
impl Iterator for TrueBitPositionIterator {
type Item = u32;
fn next(&mut self) -> Option<<Self as Iterator>::Item> {
// trailing_zeros is computed with the intrinsic cttz. [Rust 1.67.0] x86 uses the `bsf`
// instruction. AArch64 uses the `rbit clz` instructions.
let zeros = self.value.trailing_zeros();
if zeros == usize::BITS {
None
} else {
let bit_position = zeros;
let mask = 1 << bit_position;
self.value ^= mask;
Some(bit_position)
}
}
}
impl DoubleEndedIterator for TrueBitPositionIterator {
fn next_back(&mut self) -> Option<<Self as Iterator>::Item> {
// leading_zeros is computed with the intrinsic ctlz. [Rust 1.67.0] x86 uses the `bsr`
// instruction. AArch64 uses the `clz` instruction.
let zeros = self.value.leading_zeros();
if zeros == usize::BITS {
None
} else {
let bit_position = usize::BITS - zeros - 1;
let mask = 1 << bit_position;
self.value ^= mask;
Some(bit_position)
}
}
}

src/merkle/mmr/full.rs (new file, 392 lines)

@@ -0,0 +1,392 @@
//! A fully materialized Merkle mountain range (MMR).
//!
//! An MMR is a forest structure, i.e. it is an ordered set of disjoint rooted trees. The trees
//! are ordered by size, from the largest to the smallest number of leaves. Every tree is a
//! perfect binary tree, meaning a tree has all its leaves at the same depth, and every inner
//! node has a branching factor of 2 with both children set.
//!
//! Additionally, the structure only supports adding leaves to the right-most tree, the one with
//! the least number of leaves. The structure preserves the invariant that every tree has a
//! different depth, i.e. as part of adding a new element to the forest, trees with the same
//! depth are merged, creating a new tree with depth d+1; this process continues until the
//! property is re-established.
use super::bit::TrueBitPositionIterator;
use super::{
super::{InnerNodeInfo, MerklePath, Vec},
MmrPeaks, MmrProof, Rpo256, Word,
};
use core::fmt::{Display, Formatter};
#[cfg(feature = "std")]
use std::error::Error;
// MMR
// ===============================================================================================
/// A fully materialized Merkle Mountain Range, with every tree in the forest and all their
/// elements.
///
/// Since this is a full representation of the MMR, elements are never removed and the MMR will
/// grow to roughly `2n` nodes for `n` leaf elements.
pub struct Mmr {
/// Refer to the `forest` method documentation for details of the semantics of this value.
pub(super) forest: usize,
/// Contains every element of the forest.
///
/// The trees are in postorder sequential representation. This representation allows for all
/// the elements of every tree in the forest to be stored in the same sequential buffer. It
/// also means new elements can be added to the forest, and merging of trees is very cheap with
/// no need to copy elements.
pub(super) nodes: Vec<Word>,
}
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub enum MmrError {
InvalidPosition(usize),
}
impl Display for MmrError {
fn fmt(&self, fmt: &mut Formatter<'_>) -> Result<(), core::fmt::Error> {
match self {
MmrError::InvalidPosition(pos) => write!(fmt, "Mmr does not contain position {pos}"),
}
}
}
#[cfg(feature = "std")]
impl Error for MmrError {}
impl Default for Mmr {
fn default() -> Self {
Self::new()
}
}
impl Mmr {
// CONSTRUCTORS
// ============================================================================================
/// Constructor for an empty `Mmr`.
pub fn new() -> Mmr {
Mmr {
forest: 0,
nodes: Vec::new(),
}
}
// ACCESSORS
// ============================================================================================
/// Returns the MMR forest representation.
///
/// The forest value has the following interpretations:
/// - its value is the number of elements in the forest
/// - bit count corresponds to the number of trees in the forest
/// - each true bit position determines the depth of a tree in the forest
pub const fn forest(&self) -> usize {
self.forest
}
// FUNCTIONALITY
// ============================================================================================
/// Given a leaf position, returns the Merkle path to its corresponding peak. If the position
/// is greater than or equal to the tree size, an error is returned.
///
/// Note: The leaf position is the 0-indexed number corresponding to the order the leaves were
/// added; this corresponds to the MMR size _prior_ to adding the element. So the 1st element
/// has position 0, the second position 1, and so on.
pub fn open(&self, pos: usize) -> Result<MmrProof, MmrError> {
// find the target tree responsible for the MMR position
let tree_bit =
leaf_to_corresponding_tree(pos, self.forest).ok_or(MmrError::InvalidPosition(pos))?;
let forest_target = 1usize << tree_bit;
// isolate the trees before the target
let forest_before = self.forest & high_bitmask(tree_bit + 1);
let index_offset = nodes_in_forest(forest_before);
// find the root
let index = nodes_in_forest(forest_target) - 1;
// update the value position from global to the target tree
let relative_pos = pos - forest_before;
// collect the path and the final index of the target value
let (_, path) =
self.collect_merkle_path_and_value(tree_bit, relative_pos, index_offset, index);
Ok(MmrProof {
forest: self.forest,
position: pos,
merkle_path: MerklePath::new(path),
})
}
/// Returns the leaf value at position `pos`.
///
/// Note: The leaf position is the 0-indexed number corresponding to the order the leaves were
/// added; this corresponds to the MMR size _prior_ to adding the element. So the 1st element
/// has position 0, the second position 1, and so on.
pub fn get(&self, pos: usize) -> Result<Word, MmrError> {
// find the target tree responsible for the MMR position
let tree_bit =
leaf_to_corresponding_tree(pos, self.forest).ok_or(MmrError::InvalidPosition(pos))?;
let forest_target = 1usize << tree_bit;
// isolate the trees before the target
let forest_before = self.forest & high_bitmask(tree_bit + 1);
let index_offset = nodes_in_forest(forest_before);
// find the root
let index = nodes_in_forest(forest_target) - 1;
// update the value position from global to the target tree
let relative_pos = pos - forest_before;
// collect the path and the final index of the target value
let (value, _) =
self.collect_merkle_path_and_value(tree_bit, relative_pos, index_offset, index);
Ok(value)
}
/// Adds a new element to the MMR.
pub fn add(&mut self, el: Word) {
// Note: every node is also a tree of size 1, adding an element to the forest creates a new
// rooted-tree of size 1. This may temporarily break the invariant that every tree in the
// forest has different sizes; the loop below will eagerly merge trees of the same size and
// restore the invariant.
self.nodes.push(el);
let mut left_offset = self.nodes.len().saturating_sub(2);
let mut right = el;
let mut left_tree = 1;
while self.forest & left_tree != 0 {
right = *Rpo256::merge(&[self.nodes[left_offset].into(), right.into()]);
self.nodes.push(right);
left_offset = left_offset.saturating_sub(nodes_in_forest(left_tree));
left_tree <<= 1;
}
self.forest += 1;
}
/// Returns an accumulator representing the current state of the MMR.
pub fn accumulator(&self) -> MmrPeaks {
let peaks: Vec<Word> = TrueBitPositionIterator::new(self.forest)
.rev()
.map(|bit| nodes_in_forest(1 << bit))
.scan(0, |offset, el| {
*offset += el;
Some(*offset)
})
.map(|offset| self.nodes[offset - 1])
.collect();
MmrPeaks {
num_leaves: self.forest,
peaks,
}
}
/// An iterator over inner nodes in the MMR. The order of iteration is unspecified.
pub fn inner_nodes(&self) -> MmrNodes {
MmrNodes {
mmr: self,
forest: 0,
last_right: 0,
index: 0,
}
}
// UTILITIES
// ============================================================================================
/// Internal function used to collect the Merkle path of a value.
fn collect_merkle_path_and_value(
&self,
tree_bit: u32,
relative_pos: usize,
index_offset: usize,
mut index: usize,
) -> (Word, Vec<Word>) {
// collect the Merkle path
let mut tree_depth = tree_bit as usize;
let mut path = Vec::with_capacity(tree_depth + 1);
while tree_depth > 0 {
let bit = relative_pos & tree_depth;
let right_offset = index - 1;
let left_offset = right_offset - nodes_in_forest(tree_depth);
// Elements to the right have a higher position because they were
// added later. Therefore when the bit is true the node's path is
// to the right, and its sibling to the left.
let sibling = if bit != 0 {
index = right_offset;
self.nodes[index_offset + left_offset]
} else {
index = left_offset;
self.nodes[index_offset + right_offset]
};
tree_depth >>= 1;
path.push(sibling);
}
// the rest of the codebase has the elements going from leaf to root; adjust it here for
// ease of use / consistency's sake
path.reverse();
let value = self.nodes[index_offset + index];
(value, path)
}
}
impl<T> From<T> for Mmr
where
T: IntoIterator<Item = Word>,
{
fn from(values: T) -> Self {
let mut mmr = Mmr::new();
for v in values {
mmr.add(v)
}
mmr
}
}
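A small end-to-end usage sketch of the API above (illustrative only; it assumes the crate is available as `miden_crypto`, with `Mmr` re-exported under `merkle` as the `src/merkle/mmr/mod.rs` and `src/merkle/mod.rs` diffs suggest, and `Felt`/`Word` at the crate root):

```rust
use miden_crypto::{merkle::Mmr, Felt, Word};

fn main() {
    // three leaves produce forest = 0b11: a 2-leaf tree and a 1-leaf tree
    let leaves: Vec<Word> = (0u64..3)
        .map(|i| [Felt::new(i), Felt::new(0), Felt::new(0), Felt::new(0)])
        .collect();
    let mmr: Mmr = leaves.clone().into();
    assert_eq!(mmr.forest(), 3);

    // the accumulator has one peak per set bit of the forest
    let peaks = mmr.accumulator();
    assert_eq!(peaks.num_leaves, 3);
    assert_eq!(peaks.peaks.len(), 2);

    // open leaf 0 and verify it against the current peaks
    let proof = mmr.open(0).expect("position 0 is in the forest");
    assert!(peaks.verify(leaves[0], proof));
}
```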
// ITERATOR
// ===============================================================================================
/// Yields inner nodes of the [Mmr].
pub struct MmrNodes<'a> {
/// The [Mmr] being yielded; when its `forest` value is matched, the iteration is finished.
mmr: &'a Mmr,
/// Keeps track of the left nodes yielded so far that are waiting for a right pair; this matches
/// the semantics of the [Mmr]'s forest attribute, since that too works as a buffer of left nodes
/// waiting for a pair to be hashed together.
forest: usize,
/// Keeps track of the last right node yielded; after this value is set, the next iteration
/// will yield its parent together with the corresponding left node that has been yielded already.
last_right: usize,
/// The current index in the `nodes` vector.
index: usize,
}
impl<'a> Iterator for MmrNodes<'a> {
type Item = InnerNodeInfo;
fn next(&mut self) -> Option<Self::Item> {
debug_assert!(
self.last_right.count_ones() <= 1,
"last_right tracks zero or one element"
);
// only parent nodes are emitted, remove the single node tree from the forest
let target = self.mmr.forest & (usize::MAX << 1);
if self.forest < target {
if self.last_right == 0 {
// yield the left leaf
debug_assert!(self.last_right == 0, "left must be before right");
self.forest |= 1;
self.index += 1;
// yield the right leaf
debug_assert!((self.forest & 1) == 1, "right must be after left");
self.last_right |= 1;
self.index += 1;
};
debug_assert!(
self.forest & self.last_right != 0,
"parent requires both a left and right",
);
// compute the number of nodes in the right tree, this is the offset to the
// previous left parent
let right_nodes = nodes_in_forest(self.last_right);
// the next parent position is one above the position of the pair
let parent = self.last_right << 1;
// the left node has been paired and the current parent yielded, remove it from the forest
self.forest ^= self.last_right;
if self.forest & parent == 0 {
// this iteration yielded the left parent node
debug_assert!(self.forest & 1 == 0, "next iteration yields a left leaf");
self.last_right = 0;
self.forest ^= parent;
} else {
// the left node of the parent level has been yielded already, this iteration
// was the right parent. Next iteration yields their parent.
self.last_right = parent;
}
// yields a parent
let value = self.mmr.nodes[self.index];
let right = self.mmr.nodes[self.index - 1];
let left = self.mmr.nodes[self.index - 1 - right_nodes];
self.index += 1;
let node = InnerNodeInfo { value, left, right };
Some(node)
} else {
None
}
}
}
// UTILITIES
// ===============================================================================================
/// Given a 0-indexed leaf position and the current forest, return the tree number responsible for
/// the position.
///
/// Note:
/// The result is a tree position `p`, which has the following interpretations: $p+1$ is the depth
/// of the tree, which corresponds to the size of a Merkle proof for that tree; $2^p$ is equal to
/// the number of leaves in this particular tree; and $2^{p+1}-1$ corresponds to the size of the tree.
pub(crate) const fn leaf_to_corresponding_tree(pos: usize, forest: usize) -> Option<u32> {
if pos >= forest {
None
} else {
// - each bit in the forest is a unique tree and the bit position its power-of-two size
// - each tree owns a consecutive range of positions equal to its size from left-to-right
// - this means the first tree owns the first `2^k_0` positions, from `0` up to `2^k_0 - 1`,
//   where `k_0` is the highest true bit position; the second tree owns the next `2^k_1`
//   positions, where `k_1` is the second highest bit, and so on
// - this means the highest bits work as a category marker, and the position is owned by
// the first tree which doesn't share a high bit with the position
let before = forest & pos;
let after = forest ^ before;
let tree = after.ilog2();
Some(tree)
}
}
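A worked example of the ownership rule described in the comments above (an illustrative sketch that restates the function so it can run on its own):

```rust
/// Restatement of `leaf_to_corresponding_tree` from above, for illustration only.
fn leaf_to_corresponding_tree(pos: usize, forest: usize) -> Option<u32> {
    if pos >= forest {
        None
    } else {
        let before = forest & pos;
        let after = forest ^ before;
        Some(after.ilog2())
    }
}

fn main() {
    // forest 0b1011 holds trees of 8, 2 and 1 leaves (11 leaves in total):
    // positions 0..8 belong to the tree at bit 3, 8..10 to bit 1, and 10 to bit 0
    assert_eq!(leaf_to_corresponding_tree(0, 0b1011), Some(3));
    assert_eq!(leaf_to_corresponding_tree(7, 0b1011), Some(3));
    assert_eq!(leaf_to_corresponding_tree(8, 0b1011), Some(1));
    assert_eq!(leaf_to_corresponding_tree(10, 0b1011), Some(0));
    // positions at or past the leaf count are not contained
    assert_eq!(leaf_to_corresponding_tree(11, 0b1011), None);
}
```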
/// Return a bitmask for the bits including and above the given position.
pub(crate) const fn high_bitmask(bit: u32) -> usize {
if bit > usize::BITS - 1 {
0
} else {
usize::MAX << bit
}
}
/// Return the total number of nodes of a given forest
///
/// Panics:
///
/// This will panic if the forest has size greater than `usize::MAX / 2`
pub(crate) const fn nodes_in_forest(forest: usize) -> usize {
// - the size of a perfect binary tree is $2^{k+1}-1$ or $2*2^k-1$
// - the forest represents the sum of $2^k$ so a single multiplication is necessary
// - the number of `-1` is the same as the number of trees, which is the same as the number
// bits set
let tree_count = forest.count_ones() as usize;
forest * 2 - tree_count
}
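For instance, forest `0b0110` contains a 4-leaf tree (7 nodes) and a 2-leaf tree (3 nodes); a quick check of the formula above (illustrative sketch):

```rust
fn main() {
    let forest = 0b0110usize;
    // per-tree node counts: 2 * 4 - 1 = 7 and 2 * 2 - 1 = 3
    let by_hand = 7 + 3;
    // the formula: twice the total number of leaves minus the number of trees
    assert_eq!(forest * 2 - forest.count_ones() as usize, by_hand);
}
```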

src/merkle/mmr/mod.rs (new file, 15 lines)

@@ -0,0 +1,15 @@
mod accumulator;
mod bit;
mod full;
mod proof;
#[cfg(test)]
mod tests;
use super::{Rpo256, Word};
// REEXPORTS
// ================================================================================================
pub use accumulator::MmrPeaks;
pub use full::Mmr;
pub use proof::MmrProof;

src/merkle/mmr/proof.rs (new file, 33 lines)

@@ -0,0 +1,33 @@
/// The representation of a single Merkle path.
use super::super::MerklePath;
use super::full::{high_bitmask, leaf_to_corresponding_tree};
#[derive(Debug, Clone, PartialEq)]
pub struct MmrProof {
/// The state of the MMR when the MmrProof was created.
pub forest: usize,
/// The position of the leaf value on this MmrProof.
pub position: usize,
/// The Merkle opening, starting from the value's sibling up to and excluding the root of the
/// responsible tree.
pub merkle_path: MerklePath,
}
impl MmrProof {
/// Converts the leaf global position into a local position that can be used to verify the
/// merkle_path.
pub fn relative_pos(&self) -> usize {
let tree_bit = leaf_to_corresponding_tree(self.position, self.forest)
.expect("position must be part of the forest");
let forest_before = self.forest & high_bitmask(tree_bit + 1);
self.position - forest_before
}
pub fn peak_index(&self) -> usize {
let root = leaf_to_corresponding_tree(self.position, self.forest)
.expect("position must be part of the forest");
(self.forest.count_ones() - root - 1) as usize
}
}
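A worked example of the two conversions above, using the 7-leaf forest (`0b111`) exercised by the tests below (illustrative sketch, plain integer arithmetic only):

```rust
fn main() {
    // forest = 0b111 (7 leaves): peaks for trees of 4, 2 and 1 leaves
    let forest = 0b111usize;

    // global position 5 belongs to the tree at bit 1 (the 2-leaf tree);
    // the trees before it (bit 2) hold 4 leaves, so the local position is 1
    let tree_bit = 1u32;
    let leaves_before = forest & (usize::MAX << (tree_bit + 1)); // 0b100 = 4
    assert_eq!(5 - leaves_before, 1); // relative_pos

    // peaks are ordered largest tree first, so the 2-leaf tree's peak sits at
    // index count_ones - tree_bit - 1 = 3 - 1 - 1 = 1
    assert_eq!(forest.count_ones() - tree_bit - 1, 1); // peak_index
}
```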

src/merkle/mmr/tests.rs (new file, 538 lines)

@@ -0,0 +1,538 @@
use super::bit::TrueBitPositionIterator;
use super::full::{high_bitmask, leaf_to_corresponding_tree, nodes_in_forest};
use super::{
super::{InnerNodeInfo, Vec, WORD_SIZE, ZERO},
Mmr, MmrPeaks, Rpo256, Word,
};
use crate::merkle::{int_to_node, MerklePath};
#[test]
fn test_position_equal_or_higher_than_leafs_is_never_contained() {
let empty_forest = 0;
for pos in 1..1024 {
// pos is index, 0 based
// tree is a length counter, 1 based
// so a valid pos is always smaller, not equal, to tree
assert_eq!(leaf_to_corresponding_tree(pos, pos), None);
assert_eq!(leaf_to_corresponding_tree(pos, pos - 1), None);
// and empty forest has no trees, so no position is valid
assert_eq!(leaf_to_corresponding_tree(pos, empty_forest), None);
}
}
#[test]
fn test_position_zero_is_always_contained_by_the_highest_tree() {
for leaves in 1..1024usize {
let tree = leaves.ilog2();
assert_eq!(leaf_to_corresponding_tree(0, leaves), Some(tree));
}
}
#[test]
fn test_leaf_to_corresponding_tree() {
assert_eq!(leaf_to_corresponding_tree(0, 0b0001), Some(0));
assert_eq!(leaf_to_corresponding_tree(0, 0b0010), Some(1));
assert_eq!(leaf_to_corresponding_tree(0, 0b0011), Some(1));
assert_eq!(leaf_to_corresponding_tree(0, 0b1011), Some(3));
// position one is always owned by the left-most tree
assert_eq!(leaf_to_corresponding_tree(1, 0b0010), Some(1));
assert_eq!(leaf_to_corresponding_tree(1, 0b0011), Some(1));
assert_eq!(leaf_to_corresponding_tree(1, 0b1011), Some(3));
// position two starts as its own root, and then it is merged with the left-most tree
assert_eq!(leaf_to_corresponding_tree(2, 0b0011), Some(0));
assert_eq!(leaf_to_corresponding_tree(2, 0b0100), Some(2));
assert_eq!(leaf_to_corresponding_tree(2, 0b1011), Some(3));
// position three is merged on the left-most tree
assert_eq!(leaf_to_corresponding_tree(3, 0b0011), None);
assert_eq!(leaf_to_corresponding_tree(3, 0b0100), Some(2));
assert_eq!(leaf_to_corresponding_tree(3, 0b1011), Some(3));
assert_eq!(leaf_to_corresponding_tree(4, 0b0101), Some(0));
assert_eq!(leaf_to_corresponding_tree(4, 0b0110), Some(1));
assert_eq!(leaf_to_corresponding_tree(4, 0b0111), Some(1));
assert_eq!(leaf_to_corresponding_tree(4, 0b1000), Some(3));
assert_eq!(leaf_to_corresponding_tree(12, 0b01101), Some(0));
assert_eq!(leaf_to_corresponding_tree(12, 0b01110), Some(1));
assert_eq!(leaf_to_corresponding_tree(12, 0b01111), Some(1));
assert_eq!(leaf_to_corresponding_tree(12, 0b10000), Some(4));
}
#[test]
fn test_high_bitmask() {
assert_eq!(high_bitmask(0), usize::MAX);
assert_eq!(high_bitmask(1), usize::MAX << 1);
assert_eq!(high_bitmask(usize::BITS - 2), 0b11usize.rotate_right(2));
assert_eq!(high_bitmask(usize::BITS - 1), 0b1usize.rotate_right(1));
assert_eq!(high_bitmask(usize::BITS), 0, "overflow should be handled");
}
#[test]
fn test_nodes_in_forest() {
assert_eq!(nodes_in_forest(0b0000), 0);
assert_eq!(nodes_in_forest(0b0001), 1);
assert_eq!(nodes_in_forest(0b0010), 3);
assert_eq!(nodes_in_forest(0b0011), 4);
assert_eq!(nodes_in_forest(0b0100), 7);
assert_eq!(nodes_in_forest(0b0101), 8);
assert_eq!(nodes_in_forest(0b0110), 10);
assert_eq!(nodes_in_forest(0b0111), 11);
assert_eq!(nodes_in_forest(0b1000), 15);
assert_eq!(nodes_in_forest(0b1001), 16);
assert_eq!(nodes_in_forest(0b1010), 18);
assert_eq!(nodes_in_forest(0b1011), 19);
}
#[test]
fn test_nodes_in_forest_single_bit() {
assert_eq!(nodes_in_forest(2usize.pow(0)), 2usize.pow(1) - 1);
assert_eq!(nodes_in_forest(2usize.pow(1)), 2usize.pow(2) - 1);
assert_eq!(nodes_in_forest(2usize.pow(2)), 2usize.pow(3) - 1);
assert_eq!(nodes_in_forest(2usize.pow(3)), 2usize.pow(4) - 1);
for bit in 0..(usize::BITS - 1) {
let size = 2usize.pow(bit + 1) - 1;
assert_eq!(nodes_in_forest(1usize << bit), size);
}
}
const LEAVES: [Word; 7] = [
int_to_node(0),
int_to_node(1),
int_to_node(2),
int_to_node(3),
int_to_node(4),
int_to_node(5),
int_to_node(6),
];
#[test]
fn test_mmr_simple() {
let mut postorder = Vec::new();
postorder.push(LEAVES[0]);
postorder.push(LEAVES[1]);
postorder.push(*Rpo256::hash_elements(&[LEAVES[0], LEAVES[1]].concat()));
postorder.push(LEAVES[2]);
postorder.push(LEAVES[3]);
postorder.push(*Rpo256::hash_elements(&[LEAVES[2], LEAVES[3]].concat()));
postorder.push(*Rpo256::hash_elements(
&[postorder[2], postorder[5]].concat(),
));
postorder.push(LEAVES[4]);
postorder.push(LEAVES[5]);
postorder.push(*Rpo256::hash_elements(&[LEAVES[4], LEAVES[5]].concat()));
postorder.push(LEAVES[6]);
let mut mmr = Mmr::new();
assert_eq!(mmr.forest(), 0);
assert_eq!(mmr.nodes.len(), 0);
mmr.add(LEAVES[0]);
assert_eq!(mmr.forest(), 1);
assert_eq!(mmr.nodes.len(), 1);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 1);
assert_eq!(acc.peaks, &[postorder[0]]);
mmr.add(LEAVES[1]);
assert_eq!(mmr.forest(), 2);
assert_eq!(mmr.nodes.len(), 3);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 2);
assert_eq!(acc.peaks, &[postorder[2]]);
mmr.add(LEAVES[2]);
assert_eq!(mmr.forest(), 3);
assert_eq!(mmr.nodes.len(), 4);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 3);
assert_eq!(acc.peaks, &[postorder[2], postorder[3]]);
mmr.add(LEAVES[3]);
assert_eq!(mmr.forest(), 4);
assert_eq!(mmr.nodes.len(), 7);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 4);
assert_eq!(acc.peaks, &[postorder[6]]);
mmr.add(LEAVES[4]);
assert_eq!(mmr.forest(), 5);
assert_eq!(mmr.nodes.len(), 8);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 5);
assert_eq!(acc.peaks, &[postorder[6], postorder[7]]);
mmr.add(LEAVES[5]);
assert_eq!(mmr.forest(), 6);
assert_eq!(mmr.nodes.len(), 10);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 6);
assert_eq!(acc.peaks, &[postorder[6], postorder[9]]);
mmr.add(LEAVES[6]);
assert_eq!(mmr.forest(), 7);
assert_eq!(mmr.nodes.len(), 11);
assert_eq!(mmr.nodes.as_slice(), &postorder[0..mmr.nodes.len()]);
let acc = mmr.accumulator();
assert_eq!(acc.num_leaves, 7);
assert_eq!(acc.peaks, &[postorder[6], postorder[9], postorder[10]]);
}
#[test]
fn test_mmr_open() {
let mmr: Mmr = LEAVES.into();
let h01: Word = Rpo256::hash_elements(&LEAVES[0..2].concat()).into();
let h23: Word = Rpo256::hash_elements(&LEAVES[2..4].concat()).into();
// pos 7 is past the last leaf: with 7 leaves only positions 0..=6 exist
assert!(
mmr.open(7).is_err(),
"Element 7 is not in the tree, result should be None"
);
// node at pos 6 is the root
let empty: MerklePath = MerklePath::new(vec![]);
let opening = mmr
.open(6)
.expect("Element 6 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, empty);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 6);
assert!(
mmr.accumulator().verify(LEAVES[6], opening),
"MmrProof should be valid for the current accumulator."
);
// nodes 4,5 are at depth 1
let root_to_path = MerklePath::new(vec![LEAVES[4]]);
let opening = mmr
.open(5)
.expect("Element 5 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, root_to_path);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 5);
assert!(
mmr.accumulator().verify(LEAVES[5], opening),
"MmrProof should be valid for the current accumulator."
);
let root_to_path = MerklePath::new(vec![LEAVES[5]]);
let opening = mmr
.open(4)
.expect("Element 4 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, root_to_path);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 4);
assert!(
mmr.accumulator().verify(LEAVES[4], opening),
"MmrProof should be valid for the current accumulator."
);
// nodes 0,1,2,3 are at depth 2
let root_to_path = MerklePath::new(vec![LEAVES[2], h01]);
let opening = mmr
.open(3)
.expect("Element 3 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, root_to_path);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 3);
assert!(
mmr.accumulator().verify(LEAVES[3], opening),
"MmrProof should be valid for the current accumulator."
);
let root_to_path = MerklePath::new(vec![LEAVES[3], h01]);
let opening = mmr
.open(2)
.expect("Element 2 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, root_to_path);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 2);
assert!(
mmr.accumulator().verify(LEAVES[2], opening),
"MmrProof should be valid for the current accumulator."
);
let root_to_path = MerklePath::new(vec![LEAVES[0], h23]);
let opening = mmr
.open(1)
.expect("Element 1 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, root_to_path);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 1);
assert!(
mmr.accumulator().verify(LEAVES[1], opening),
"MmrProof should be valid for the current accumulator."
);
let root_to_path = MerklePath::new(vec![LEAVES[1], h23]);
let opening = mmr
.open(0)
.expect("Element 0 is contained in the tree, expected an opening result.");
assert_eq!(opening.merkle_path, root_to_path);
assert_eq!(opening.forest, mmr.forest);
assert_eq!(opening.position, 0);
assert!(
mmr.accumulator().verify(LEAVES[0], opening),
"MmrProof should be valid for the current accumulator."
);
}
#[test]
fn test_mmr_get() {
let mmr: Mmr = LEAVES.into();
assert_eq!(
mmr.get(0).unwrap(),
LEAVES[0],
"value at pos 0 must correspond"
);
assert_eq!(
mmr.get(1).unwrap(),
LEAVES[1],
"value at pos 1 must correspond"
);
assert_eq!(
mmr.get(2).unwrap(),
LEAVES[2],
"value at pos 2 must correspond"
);
assert_eq!(
mmr.get(3).unwrap(),
LEAVES[3],
"value at pos 3 must correspond"
);
assert_eq!(
mmr.get(4).unwrap(),
LEAVES[4],
"value at pos 4 must correspond"
);
assert_eq!(
mmr.get(5).unwrap(),
LEAVES[5],
"value at pos 5 must correspond"
);
assert_eq!(
mmr.get(6).unwrap(),
LEAVES[6],
"value at pos 6 must correspond"
);
assert!(mmr.get(7).is_err());
}
#[test]
fn test_mmr_invariants() {
let mut mmr = Mmr::new();
for v in 1..=1028 {
mmr.add(int_to_node(v));
let accumulator = mmr.accumulator();
assert_eq!(
v as usize,
mmr.forest(),
"MMR leaf count must increase by one on every add"
);
assert_eq!(
v as usize, accumulator.num_leaves,
"MMR and its accumulator must match leaves count"
);
assert_eq!(
accumulator.num_leaves.count_ones() as usize,
accumulator.peaks.len(),
"bits on leaves must match the number of peaks"
);
let expected_nodes: usize = TrueBitPositionIterator::new(mmr.forest())
.map(|bit_pos| nodes_in_forest(1 << bit_pos))
.sum();
assert_eq!(
expected_nodes,
mmr.nodes.len(),
"the sum of every tree size must be equal to the number of nodes in the MMR (forest: {:b})",
mmr.forest(),
);
}
}
#[test]
fn test_bit_position_iterator() {
assert_eq!(TrueBitPositionIterator::new(0).count(), 0);
assert_eq!(TrueBitPositionIterator::new(0).rev().count(), 0);
assert_eq!(
TrueBitPositionIterator::new(1).collect::<Vec<u32>>(),
vec![0]
);
assert_eq!(
TrueBitPositionIterator::new(1).rev().collect::<Vec<u32>>(),
vec![0],
);
assert_eq!(
TrueBitPositionIterator::new(2).collect::<Vec<u32>>(),
vec![1]
);
assert_eq!(
TrueBitPositionIterator::new(2).rev().collect::<Vec<u32>>(),
vec![1],
);
assert_eq!(
TrueBitPositionIterator::new(3).collect::<Vec<u32>>(),
vec![0, 1],
);
assert_eq!(
TrueBitPositionIterator::new(3).rev().collect::<Vec<u32>>(),
vec![1, 0],
);
assert_eq!(
TrueBitPositionIterator::new(0b11010101).collect::<Vec<u32>>(),
vec![0, 2, 4, 6, 7],
);
assert_eq!(
TrueBitPositionIterator::new(0b11010101)
.rev()
.collect::<Vec<u32>>(),
vec![7, 6, 4, 2, 0],
);
}
#[test]
fn test_mmr_inner_nodes() {
let mmr: Mmr = LEAVES.into();
let nodes: Vec<InnerNodeInfo> = mmr.inner_nodes().collect();
let h01 = *Rpo256::hash_elements(&[LEAVES[0], LEAVES[1]].concat());
let h23 = *Rpo256::hash_elements(&[LEAVES[2], LEAVES[3]].concat());
let h0123 = *Rpo256::hash_elements(&[h01, h23].concat());
let h45 = *Rpo256::hash_elements(&[LEAVES[4], LEAVES[5]].concat());
let postorder = vec![
InnerNodeInfo {
value: h01,
left: LEAVES[0],
right: LEAVES[1],
},
InnerNodeInfo {
value: h23,
left: LEAVES[2],
right: LEAVES[3],
},
InnerNodeInfo {
value: h0123,
left: h01,
right: h23,
},
InnerNodeInfo {
value: h45,
left: LEAVES[4],
right: LEAVES[5],
},
];
assert_eq!(postorder, nodes);
}
#[test]
fn test_mmr_hash_peaks() {
let mmr: Mmr = LEAVES.into();
let peaks = mmr.accumulator();
let first_peak = *Rpo256::merge(&[
Rpo256::hash_elements(&[LEAVES[0], LEAVES[1]].concat()),
Rpo256::hash_elements(&[LEAVES[2], LEAVES[3]].concat()),
]);
let second_peak = *Rpo256::hash_elements(&[LEAVES[4], LEAVES[5]].concat());
let third_peak = LEAVES[6];
// minimum length is 16
let mut expected_peaks = [first_peak, second_peak, third_peak].to_vec();
expected_peaks.resize(16, [ZERO; WORD_SIZE]);
assert_eq!(
peaks.hash_peaks(),
*Rpo256::hash_elements(&expected_peaks.as_slice().concat())
);
}
#[test]
fn test_mmr_peaks_hash_less_than_16() {
let mut peaks = Vec::new();
for i in 0..16 {
peaks.push(int_to_node(i));
let accumulator = MmrPeaks {
num_leaves: (1 << peaks.len()) - 1,
peaks: peaks.clone(),
};
// minimum length is 16
let mut expected_peaks = peaks.clone();
expected_peaks.resize(16, [ZERO; WORD_SIZE]);
assert_eq!(
accumulator.hash_peaks(),
*Rpo256::hash_elements(&expected_peaks.as_slice().concat())
);
}
}
#[test]
fn test_mmr_peaks_hash_odd() {
let peaks: Vec<_> = (0..=17).map(|i| int_to_node(i)).collect();
let accumulator = MmrPeaks {
num_leaves: (1 << peaks.len()) - 1,
peaks: peaks.clone(),
};
// odd length bigger than 16 is padded to the next even number
let mut expected_peaks = peaks.clone();
expected_peaks.resize(18, [ZERO; WORD_SIZE]);
assert_eq!(
accumulator.hash_peaks(),
*Rpo256::hash_elements(&expected_peaks.as_slice().concat())
);
}
mod property_tests {
use super::leaf_to_corresponding_tree;
use proptest::prelude::*;
proptest! {
#[test]
fn test_last_position_is_always_contained_in_the_last_tree(leaves in any::<usize>().prop_filter("can't have an empty tree", |v| *v != 0)) {
let last_pos = leaves - 1;
let lowest_bit = leaves.trailing_zeros();
assert_eq!(
leaf_to_corresponding_tree(last_pos, leaves),
Some(lowest_bit),
);
}
}
proptest! {
#[test]
fn test_contained_tree_is_always_power_of_two((leaves, pos) in any::<usize>().prop_flat_map(|v| (Just(v), 0..v))) {
let tree = leaf_to_corresponding_tree(pos, leaves).expect("pos is smaller than leaves, there should always be a corresponding tree");
let mask = 1usize << tree;
assert!(tree < usize::BITS, "the result must be a bit in usize");
assert!(mask & leaves != 0, "the result should be a tree in leaves");
}
}
}
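A brief illustrative sketch, not part of the diff: it mirrors the tests' int_to_node helper and the invariant exercised by test_mmr_invariants, namely that the number of peaks equals the number of set bits in the leaf count. The test name is illustrative, and the MmrPeaks field access mirrors the tests above (assumed to be usable here).
use miden_crypto::merkle::Mmr;
use miden_crypto::{Felt, Word, ZERO};
// helper mirroring the tests' int_to_node
const fn int_to_node(value: u64) -> Word {
    [Felt::new(value), ZERO, ZERO, ZERO]
}
#[test]
fn example_peaks_follow_popcount() {
    let mut mmr = Mmr::new();
    for v in 1..=11u64 {
        mmr.add(int_to_node(v));
    }
    // positions are 0-based, so the last added leaf sits at position 10
    assert_eq!(mmr.get(10).unwrap(), int_to_node(11));
    assert!(mmr.get(11).is_err());
    // 11 = 0b1011 leaves decompose into perfect trees of 8, 2, and 1 leaves,
    // so the accumulator carries exactly popcount(11) = 3 peaks
    let accumulator = mmr.accumulator();
    assert_eq!(accumulator.num_leaves, 11);
    assert_eq!(accumulator.peaks.len(), 3);
}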

src/merkle/mod.rs

@@ -1,54 +1,81 @@
use super::{
hash::rpo::{Rpo256, RpoDigest},
utils::collections::{BTreeMap, Vec},
Felt, Word, ZERO,
utils::collections::{vec, BTreeMap, Vec},
Felt, StarkField, Word, WORD_SIZE, ZERO,
};
use core::fmt;
mod merkle_tree;
pub use merkle_tree::MerkleTree;
// REEXPORTS
// ================================================================================================
mod merkle_path_set;
pub use merkle_path_set::MerklePathSet;
mod empty_roots;
pub use empty_roots::EmptySubtreeRoots;
mod index;
pub use index::NodeIndex;
mod merkle_tree;
pub use merkle_tree::{path_to_text, tree_to_text, MerkleTree};
mod path;
pub use path::{MerklePath, RootPath, ValuePath};
mod path_set;
pub use path_set::MerklePathSet;
mod simple_smt;
pub use simple_smt::SimpleSmt;
mod mmr;
pub use mmr::{Mmr, MmrPeaks};
mod store;
pub use store::MerkleStore;
mod node;
pub use node::InnerNodeInfo;
// ERRORS
// ================================================================================================
#[derive(Clone, Debug)]
#[derive(Clone, Debug, PartialEq, Eq)]
pub enum MerkleError {
DepthTooSmall(u32),
DepthTooBig(u32),
ConflictingRoots(Vec<Word>),
DepthTooSmall(u8),
DepthTooBig(u64),
NodeNotInStore(Word, NodeIndex),
NumLeavesNotPowerOfTwo(usize),
InvalidIndex(u32, u64),
InvalidDepth(u32, u32),
InvalidPath(Vec<Word>),
InvalidIndex { depth: u8, value: u64 },
InvalidDepth { expected: u8, provided: u8 },
InvalidPath(MerklePath),
InvalidEntriesCount(usize, usize),
NodeNotInSet(u64),
RootNotInStore(Word),
}
impl fmt::Display for MerkleError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
use MerkleError::*;
match self {
ConflictingRoots(roots) => write!(f, "the merkle paths roots do not match {roots:?}"),
DepthTooSmall(depth) => write!(f, "the provided depth {depth} is too small"),
DepthTooBig(depth) => write!(f, "the provided depth {depth} is too big"),
NumLeavesNotPowerOfTwo(leaves) => {
write!(f, "the leaves count {leaves} is not a power of 2")
}
InvalidIndex(depth, index) => write!(
InvalidIndex{ depth, value} => write!(
f,
"the leaf index {index} is not valid for the depth {depth}"
"the index value {value} is not valid for the depth {depth}"
),
InvalidDepth(expected, provided) => write!(
InvalidDepth { expected, provided } => write!(
f,
"the provided depth {provided} is not valid for {expected}"
),
InvalidPath(_path) => write!(f, "the provided path is not valid"),
InvalidEntriesCount(max, provided) => write!(f, "the provided number of entries is {provided}, but the maximum for the given depth is {max}"),
NodeNotInSet(index) => write!(f, "the node indexed by {index} is not in the set"),
NodeNotInStore(hash, index) => write!(f, "the node {:?} indexed by {} and depth {} is not in the store", hash, index.value(), index.depth(),),
RootNotInStore(root) => write!(f, "the root {:?} is not in the store", root),
}
}
}
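As a quick illustration of the new structured error variants (not part of the diff), the Display impl above renders them as shown below; the test name is illustrative and the expected strings are copied from the match arms above.
use miden_crypto::merkle::MerkleError;
#[test]
fn example_error_messages() {
    let err = MerkleError::InvalidIndex { depth: 3, value: 8 };
    assert_eq!(err.to_string(), "the index value 8 is not valid for the depth 3");
    let err = MerkleError::InvalidDepth { expected: 32, provided: 64 };
    assert_eq!(err.to_string(), "the provided depth 64 is not valid for 32");
}
Because the enum now derives PartialEq and Eq, tests can also compare whole Result values directly, as the store tests further below do.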

src/merkle/node.rs Normal file

@@ -0,0 +1,9 @@
use super::Word;
/// Representation of a node with two children used for iterating over containers.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct InnerNodeInfo {
pub value: Word,
pub left: Word,
pub right: Word,
}
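A small sketch (not part of the diff) of the contract this type carries when yielded by the new inner-node iterators: each reported value is the RPO merge of its two children. The test name and helper are illustrative; MerkleTree::inner_nodes is the iterator added in this changeset.
use miden_crypto::hash::rpo::Rpo256;
use miden_crypto::merkle::MerkleTree;
use miden_crypto::{Felt, Word, ZERO};
// helper mirroring the tests' int_to_node
const fn int_to_node(value: u64) -> Word {
    [Felt::new(value), ZERO, ZERO, ZERO]
}
#[test]
fn example_inner_node_contract() {
    let leaves = vec![int_to_node(1), int_to_node(2), int_to_node(3), int_to_node(4)];
    let tree = MerkleTree::new(leaves).unwrap();
    for node in tree.inner_nodes() {
        // every inner node hashes to the merge of its two children
        let expected: Word = Rpo256::merge(&[node.left.into(), node.right.into()]).into();
        assert_eq!(node.value, expected);
    }
}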

src/merkle/path.rs Normal file

@@ -0,0 +1,112 @@
use super::{vec, MerkleError, NodeIndex, Rpo256, Vec, Word};
use core::ops::{Deref, DerefMut};
// MERKLE PATH
// ================================================================================================
/// A merkle path container, composed of a sequence of nodes of a Merkle tree.
#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct MerklePath {
nodes: Vec<Word>,
}
impl MerklePath {
// CONSTRUCTORS
// --------------------------------------------------------------------------------------------
/// Creates a new Merkle path from a list of nodes.
pub fn new(nodes: Vec<Word>) -> Self {
Self { nodes }
}
// PROVIDERS
// --------------------------------------------------------------------------------------------
/// Computes the merkle root for this opening.
pub fn compute_root(&self, index: u64, node: Word) -> Result<Word, MerkleError> {
let mut index = NodeIndex::new(self.depth(), index)?;
let root = self.nodes.iter().copied().fold(node, |node, sibling| {
// compute the node and move to the next iteration.
let input = index.build_node(node.into(), sibling.into());
index.move_up();
Rpo256::merge(&input).into()
});
Ok(root)
}
/// Returns the depth in which this Merkle path proof is valid.
pub fn depth(&self) -> u8 {
self.nodes.len() as u8
}
/// Verifies the Merkle opening proof towards the provided root.
///
/// Returns `true` if `node` exists at `index` in a Merkle tree with `root`.
pub fn verify(&self, index: u64, node: Word, root: &Word) -> bool {
match self.compute_root(index, node) {
Ok(computed_root) => root == &computed_root,
Err(_) => false,
}
}
}
impl From<Vec<Word>> for MerklePath {
fn from(path: Vec<Word>) -> Self {
Self::new(path)
}
}
impl Deref for MerklePath {
// we use `Vec` here instead of slice so we can call vector mutation methods directly from the
// merkle path (example: `Vec::remove`).
type Target = Vec<Word>;
fn deref(&self) -> &Self::Target {
&self.nodes
}
}
impl DerefMut for MerklePath {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.nodes
}
}
impl FromIterator<Word> for MerklePath {
fn from_iter<T: IntoIterator<Item = Word>>(iter: T) -> Self {
Self::new(iter.into_iter().collect())
}
}
impl IntoIterator for MerklePath {
type Item = Word;
type IntoIter = vec::IntoIter<Word>;
fn into_iter(self) -> Self::IntoIter {
self.nodes.into_iter()
}
}
// MERKLE PATH CONTAINERS
// ================================================================================================
/// A container for a [Word] value and its [MerklePath] opening.
#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct ValuePath {
/// The node value opening for `path`.
pub value: Word,
/// The path from `value` to `root` (exclusive).
pub path: MerklePath,
}
/// A container for a [MerklePath] and its [Word] root.
///
/// This structure does not provide any guarantees regarding the correctness of the path to the
/// root. For more information, check [MerklePath::verify].
#[derive(Clone, Debug, Default, PartialEq, Eq)]
pub struct RootPath {
/// The node value opening for `path`.
pub root: Word,
/// The path from `value` to `root` (exclusive).
pub path: MerklePath,
}
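A hedged usage sketch for the new MerklePath type (not part of the diff), using only the API shown above; the helper and test name are illustrative.
use miden_crypto::hash::rpo::Rpo256;
use miden_crypto::merkle::MerklePath;
use miden_crypto::{Felt, Word, ZERO};
// helper mirroring the tests' int_to_node
const fn int_to_node(value: u64) -> Word {
    [Felt::new(value), ZERO, ZERO, ZERO]
}
#[test]
fn example_open_leaf_zero() {
    // leaves of a depth-2 tree
    let leaves = [int_to_node(1), int_to_node(2), int_to_node(3), int_to_node(4)];
    // opening for leaf 0: its sibling leaf, then the root of the right subtree
    let right_subtree: Word = Rpo256::merge(&[leaves[2].into(), leaves[3].into()]).into();
    let path = MerklePath::new(vec![leaves[1], right_subtree]);
    assert_eq!(path.depth(), 2);
    // compute_root folds the opening bottom-up; verify recomputes it and compares
    let root = path.compute_root(0, leaves[0]).unwrap();
    assert!(path.verify(0, leaves[0], &root));
    assert!(!path.verify(1, leaves[0], &root));
}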

src/merkle/path_set.rs Normal file

@@ -0,0 +1,417 @@
use super::{BTreeMap, MerkleError, MerklePath, NodeIndex, Rpo256, ValuePath, Vec, Word, ZERO};
// MERKLE PATH SET
// ================================================================================================
/// A set of Merkle paths.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct MerklePathSet {
root: Word,
total_depth: u8,
paths: BTreeMap<u64, MerklePath>,
}
impl MerklePathSet {
// CONSTRUCTOR
// --------------------------------------------------------------------------------------------
/// Returns an empty MerklePathSet.
pub fn new(depth: u8) -> Self {
let root = [ZERO; 4];
let paths = BTreeMap::new();
Self {
root,
total_depth: depth,
paths,
}
}
/// Appends the paths from the provided iterator to the set.
///
/// Analogous to [Self::add_path].
pub fn with_paths<I>(self, paths: I) -> Result<Self, MerkleError>
where
I: IntoIterator<Item = (u64, Word, MerklePath)>,
{
paths
.into_iter()
.try_fold(self, |mut set, (index, value, path)| {
set.add_path(index, value, path)?;
Ok(set)
})
}
// PUBLIC ACCESSORS
// --------------------------------------------------------------------------------------------
/// Returns the root to which all paths in this set resolve.
pub const fn root(&self) -> Word {
self.root
}
/// Returns the depth of the Merkle tree implied by the paths stored in this set.
///
/// Merkle tree of depth 1 has two leaves, depth 2 has four leaves etc.
pub const fn depth(&self) -> u8 {
self.total_depth
}
/// Returns a node at the specified index.
///
/// # Errors
/// Returns an error if:
/// * The specified index is not valid for the depth of structure.
/// * Requested node does not exist in the set.
pub fn get_node(&self, index: NodeIndex) -> Result<Word, MerkleError> {
if index.depth() != self.total_depth {
return Err(MerkleError::InvalidDepth {
expected: self.total_depth,
provided: index.depth(),
});
}
let parity = index.value() & 1;
let path_key = index.value() - parity;
self.paths
.get(&path_key)
.ok_or(MerkleError::NodeNotInSet(path_key))
.map(|path| path[parity as usize])
}
/// Returns a leaf at the specified index.
///
/// # Errors
/// * The specified index is not valid for the depth of the structure.
/// * Leaf with the requested path does not exist in the set.
pub fn get_leaf(&self, index: u64) -> Result<Word, MerkleError> {
let index = NodeIndex::new(self.depth(), index)?;
self.get_node(index)
}
/// Returns a Merkle path to the node at the specified index. The node itself is
/// not included in the path.
///
/// # Errors
/// Returns an error if:
/// * The specified index is not valid for the depth of structure.
/// * Node of the requested path does not exist in the set.
pub fn get_path(&self, index: NodeIndex) -> Result<MerklePath, MerkleError> {
if index.depth() != self.total_depth {
return Err(MerkleError::InvalidDepth {
expected: self.total_depth,
provided: index.depth(),
});
}
let parity = index.value() & 1;
let path_key = index.value() - parity;
let mut path = self
.paths
.get(&path_key)
.cloned()
.ok_or(MerkleError::NodeNotInSet(index.value()))?;
path.remove(parity as usize);
Ok(path)
}
/// Returns all paths in this path set together with their indexes.
pub fn to_paths(&self) -> Vec<(u64, ValuePath)> {
let mut result = Vec::with_capacity(self.paths.len() * 2);
for (&index, path) in self.paths.iter() {
// push path for the even index into the result
let path1 = ValuePath {
value: path[0],
path: MerklePath::new(path[1..].to_vec()),
};
result.push((index, path1));
// push path for the odd index into the result
let mut path2 = path.clone();
let leaf2 = path2.remove(1);
let path2 = ValuePath {
value: leaf2,
path: path2,
};
result.push((index + 1, path2));
}
result
}
// STATE MUTATORS
// --------------------------------------------------------------------------------------------
/// Adds the specified Merkle path to this [MerklePathSet]. The `index` and `value` parameters
/// specify the leaf node at which the path starts.
///
/// # Errors
/// Returns an error if:
/// - The specified index is not valid in the context of this Merkle path set (i.e., the
/// index implies a greater depth than is specified for this set).
/// - The specified path is not consistent with other paths in the set (i.e., resolves to a
/// different root).
pub fn add_path(
&mut self,
index_value: u64,
value: Word,
mut path: MerklePath,
) -> Result<(), MerkleError> {
let mut index = NodeIndex::new(path.len() as u8, index_value)?;
if index.depth() != self.total_depth {
return Err(MerkleError::InvalidDepth {
expected: self.total_depth,
provided: index.depth(),
});
}
// update the current path
let parity = index_value & 1;
path.insert(parity as usize, value);
// traverse to the root, updating the nodes
let root: Word = Rpo256::merge(&[path[0].into(), path[1].into()]).into();
let root = path.iter().skip(2).copied().fold(root, |root, hash| {
index.move_up();
Rpo256::merge(&index.build_node(root.into(), hash.into())).into()
});
// if the path set is empty (the root is all ZEROs), set the root to the root of the added
// path; otherwise, the root of the added path must be identical to the current root
if self.root == [ZERO; 4] {
self.root = root;
} else if self.root != root {
return Err(MerkleError::ConflictingRoots([self.root, root].to_vec()));
}
// finish updating the path
let path_key = index_value - parity;
self.paths.insert(path_key, path);
Ok(())
}
/// Replaces the leaf at the specified index with the provided value.
///
/// # Errors
/// Returns an error if:
/// * Requested node does not exist in the set.
pub fn update_leaf(&mut self, base_index_value: u64, value: Word) -> Result<(), MerkleError> {
let mut index = NodeIndex::new(self.depth(), base_index_value)?;
let parity = index.value() & 1;
let path_key = index.value() - parity;
let path = match self.paths.get_mut(&path_key) {
Some(path) => path,
None => return Err(MerkleError::NodeNotInSet(base_index_value)),
};
// Fill old_hashes vector -----------------------------------------------------------------
let mut current_index = index;
let mut old_hashes = Vec::with_capacity(path.len().saturating_sub(2));
let mut root: Word = Rpo256::merge(&[path[0].into(), path[1].into()]).into();
for hash in path.iter().skip(2).copied() {
old_hashes.push(root);
current_index.move_up();
let input = current_index.build_node(hash.into(), root.into());
root = Rpo256::merge(&input).into();
}
// Fill new_hashes vector -----------------------------------------------------------------
path[index.is_value_odd() as usize] = value;
let mut new_hashes = Vec::with_capacity(path.len().saturating_sub(2));
let mut new_root: Word = Rpo256::merge(&[path[0].into(), path[1].into()]).into();
for path_hash in path.iter().skip(2).copied() {
new_hashes.push(new_root);
index.move_up();
let input = current_index.build_node(path_hash.into(), new_root.into());
new_root = Rpo256::merge(&input).into();
}
self.root = new_root;
// update paths ---------------------------------------------------------------------------
for path in self.paths.values_mut() {
for i in (0..old_hashes.len()).rev() {
if path[i + 2] == old_hashes[i] {
path[i + 2] = new_hashes[i];
break;
}
}
}
Ok(())
}
}
// TESTS
// ================================================================================================
#[cfg(test)]
mod tests {
use super::*;
use crate::merkle::int_to_node;
#[test]
fn get_root() {
let leaf0 = int_to_node(0);
let leaf1 = int_to_node(1);
let leaf2 = int_to_node(2);
let leaf3 = int_to_node(3);
let parent0 = calculate_parent_hash(leaf0, 0, leaf1);
let parent1 = calculate_parent_hash(leaf2, 2, leaf3);
let root_exp = calculate_parent_hash(parent0, 0, parent1);
let set = super::MerklePathSet::new(2)
.with_paths([(0, leaf0, vec![leaf1, parent1].into())])
.unwrap();
assert_eq!(set.root(), root_exp);
}
#[test]
fn add_and_get_path() {
let path_6 = vec![int_to_node(7), int_to_node(45), int_to_node(123)];
let hash_6 = int_to_node(6);
let index = 6_u64;
let depth = 3_u8;
let set = super::MerklePathSet::new(depth)
.with_paths([(index, hash_6, path_6.clone().into())])
.unwrap();
let stored_path_6 = set.get_path(NodeIndex::make(depth, index)).unwrap();
assert_eq!(path_6, *stored_path_6);
}
#[test]
fn get_node() {
let path_6 = vec![int_to_node(7), int_to_node(45), int_to_node(123)];
let hash_6 = int_to_node(6);
let index = 6_u64;
let depth = 3_u8;
let set = MerklePathSet::new(depth)
.with_paths([(index, hash_6, path_6.into())])
.unwrap();
assert_eq!(
int_to_node(6u64),
set.get_node(NodeIndex::make(depth, index)).unwrap()
);
}
#[test]
fn update_leaf() {
let hash_4 = int_to_node(4);
let hash_5 = int_to_node(5);
let hash_6 = int_to_node(6);
let hash_7 = int_to_node(7);
let hash_45 = calculate_parent_hash(hash_4, 12u64, hash_5);
let hash_67 = calculate_parent_hash(hash_6, 14u64, hash_7);
let hash_0123 = int_to_node(123);
let path_6 = vec![hash_7, hash_45, hash_0123];
let path_5 = vec![hash_4, hash_67, hash_0123];
let path_4 = vec![hash_5, hash_67, hash_0123];
let index_6 = 6_u64;
let index_5 = 5_u64;
let index_4 = 4_u64;
let depth = 3_u8;
let mut set = MerklePathSet::new(depth)
.with_paths([
(index_6, hash_6, path_6.into()),
(index_5, hash_5, path_5.into()),
(index_4, hash_4, path_4.into()),
])
.unwrap();
let new_hash_6 = int_to_node(100);
let new_hash_5 = int_to_node(55);
set.update_leaf(index_6, new_hash_6).unwrap();
let new_path_4 = set.get_path(NodeIndex::make(depth, index_4)).unwrap();
let new_hash_67 = calculate_parent_hash(new_hash_6, 14_u64, hash_7);
assert_eq!(new_hash_67, new_path_4[1]);
set.update_leaf(index_5, new_hash_5).unwrap();
let new_path_4 = set.get_path(NodeIndex::make(depth, index_4)).unwrap();
let new_path_6 = set.get_path(NodeIndex::make(depth, index_6)).unwrap();
let new_hash_45 = calculate_parent_hash(new_hash_5, 13_u64, hash_4);
assert_eq!(new_hash_45, new_path_6[1]);
assert_eq!(new_hash_5, new_path_4[0]);
}
#[test]
fn depth_3_is_correct() {
let a = int_to_node(1);
let b = int_to_node(2);
let c = int_to_node(3);
let d = int_to_node(4);
let e = int_to_node(5);
let f = int_to_node(6);
let g = int_to_node(7);
let h = int_to_node(8);
let i = Rpo256::merge(&[a.into(), b.into()]);
let j = Rpo256::merge(&[c.into(), d.into()]);
let k = Rpo256::merge(&[e.into(), f.into()]);
let l = Rpo256::merge(&[g.into(), h.into()]);
let m = Rpo256::merge(&[i.into(), j.into()]);
let n = Rpo256::merge(&[k.into(), l.into()]);
let root = Rpo256::merge(&[m.into(), n.into()]);
let mut set = MerklePathSet::new(3);
let value = b;
let index = 1;
let path = MerklePath::new([a.into(), j.into(), n.into()].to_vec());
set.add_path(index, value, path.clone()).unwrap();
assert_eq!(value, set.get_leaf(index).unwrap());
assert_eq!(Word::from(root), set.root());
let value = e;
let index = 4;
let path = MerklePath::new([f.into(), l.into(), m.into()].to_vec());
set.add_path(index, value, path.clone()).unwrap();
assert_eq!(value, set.get_leaf(index).unwrap());
assert_eq!(Word::from(root), set.root());
let value = a;
let index = 0;
let path = MerklePath::new([b.into(), j.into(), n.into()].to_vec());
set.add_path(index, value, path.clone()).unwrap();
assert_eq!(value, set.get_leaf(index).unwrap());
assert_eq!(Word::from(root), set.root());
let value = h;
let index = 7;
let path = MerklePath::new([g.into(), k.into(), m.into()].to_vec());
set.add_path(index, value, path.clone()).unwrap();
assert_eq!(value, set.get_leaf(index).unwrap());
assert_eq!(Word::from(root), set.root());
}
// HELPER FUNCTIONS
// --------------------------------------------------------------------------------------------
const fn is_even(pos: u64) -> bool {
pos & 1 == 0
}
/// Calculates the hash of the parent node by two sibling ones
/// - node — current node
/// - node_pos — position of the current node
/// - sibling — neighboring vertex in the tree
fn calculate_parent_hash(node: Word, node_pos: u64, sibling: Word) -> Word {
if is_even(node_pos) {
Rpo256::merge(&[node.into(), sibling.into()]).into()
} else {
Rpo256::merge(&[sibling.into(), node.into()]).into()
}
}
}
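A cross-cutting sketch (not part of the diff) showing how a MerklePathSet can be ingested by the MerkleStore introduced in src/merkle/store/mod.rs further below; only the API shown in this document is used, and the helper and test name are illustrative.
use miden_crypto::merkle::{MerklePathSet, MerkleStore, NodeIndex};
use miden_crypto::{Felt, Word, ZERO};
// helper mirroring the tests' int_to_node
const fn int_to_node(value: u64) -> Word {
    [Felt::new(value), ZERO, ZERO, ZERO]
}
#[test]
fn example_path_set_into_store() {
    // the same opening used by the tests above: leaf 6 of a depth-3 tree
    let path_6 = vec![int_to_node(7), int_to_node(45), int_to_node(123)];
    let set = MerklePathSet::new(3)
        .with_paths([(6, int_to_node(6), path_6.into())])
        .unwrap();
    // ingesting the set yields the same root the set resolves to
    let mut store = MerkleStore::new();
    let root = store.add_merkle_path_set(&set).unwrap();
    assert_eq!(root, set.root());
    // the leaf is now also addressable through the store
    let index = NodeIndex::new(3, 6).unwrap();
    assert_eq!(store.get_node(root, index).unwrap(), int_to_node(6));
}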

View File

@@ -1,4 +1,7 @@
use super::{BTreeMap, MerkleError, Rpo256, RpoDigest, Vec, Word};
use super::{
BTreeMap, EmptySubtreeRoots, InnerNodeInfo, MerkleError, MerklePath, NodeIndex, Rpo256,
RpoDigest, Vec, Word,
};
#[cfg(test)]
mod tests;
@@ -6,14 +9,27 @@ mod tests;
// SPARSE MERKLE TREE
// ================================================================================================
/// A sparse Merkle tree with 63-bit keys and 4-element leaf values, without compaction.
/// Manipulation and retrieval of leaves and internal nodes is provided by its internal `Store`.
/// A sparse Merkle tree with 64-bit keys and 4-element leaf values, without compaction.
/// The root of the tree is recomputed on each new leaf update.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SimpleSmt {
depth: u8,
root: Word,
depth: u32,
store: Store,
leaves: BTreeMap<u64, Word>,
branches: BTreeMap<NodeIndex, BranchNode>,
empty_hashes: Vec<RpoDigest>,
}
#[derive(Debug, Default, Clone, PartialEq, Eq)]
struct BranchNode {
left: RpoDigest,
right: RpoDigest,
}
impl BranchNode {
fn parent(&self) -> RpoDigest {
Rpo256::merge(&[self.left, self.right])
}
}
impl SimpleSmt {
@@ -21,76 +37,109 @@ impl SimpleSmt {
// --------------------------------------------------------------------------------------------
/// Minimum supported depth.
pub const MIN_DEPTH: u32 = 1;
pub const MIN_DEPTH: u8 = 1;
/// Maximum supported depth.
pub const MAX_DEPTH: u32 = 63;
pub const MAX_DEPTH: u8 = 64;
// CONSTRUCTORS
// --------------------------------------------------------------------------------------------
/// Creates a new simple SMT.
///
/// The provided entries will be tuples of the leaves and their corresponding keys.
/// Creates a new simple SMT with the provided depth.
pub fn new(depth: u8) -> Result<Self, MerkleError> {
// validate the range of the depth.
if depth < Self::MIN_DEPTH {
return Err(MerkleError::DepthTooSmall(depth));
} else if Self::MAX_DEPTH < depth {
return Err(MerkleError::DepthTooBig(depth as u64));
}
let empty_hashes = EmptySubtreeRoots::empty_hashes(depth).to_vec();
let root = empty_hashes[0].into();
Ok(Self {
root,
depth,
empty_hashes,
leaves: BTreeMap::new(),
branches: BTreeMap::new(),
})
}
/// Appends the provided entries as leaves of the tree.
///
/// # Errors
///
/// The function will fail if the provided entries count exceeds the maximum tree capacity, that
/// is `2^{depth}`.
pub fn new<R, I>(entries: R, depth: u32) -> Result<Self, MerkleError>
pub fn with_leaves<R, I>(mut self, entries: R) -> Result<Self, MerkleError>
where
R: IntoIterator<IntoIter = I>,
I: Iterator<Item = (u64, Word)> + ExactSizeIterator,
{
// check if the leaves count will fit the depth setup
let mut entries = entries.into_iter();
// validate the range of the depth.
let max = 1 << depth;
if depth < Self::MIN_DEPTH {
return Err(MerkleError::DepthTooSmall(depth));
} else if Self::MAX_DEPTH < depth {
return Err(MerkleError::DepthTooBig(depth));
} else if entries.len() > max {
let max = 1 << self.depth.min(63);
if entries.len() > max {
return Err(MerkleError::InvalidEntriesCount(max, entries.len()));
}
let (store, root) = Store::new(depth);
let mut tree = Self { root, depth, store };
entries.try_for_each(|(key, leaf)| tree.insert_leaf(key, leaf))?;
Ok(tree)
// append leaves and return
entries.try_for_each(|(key, leaf)| self.insert_leaf(key, leaf))?;
Ok(self)
}
/// Replaces the internal empty digests used when a given depth doesn't contain a node.
pub fn with_empty_subtrees<I>(mut self, hashes: I) -> Self
where
I: IntoIterator<Item = RpoDigest>,
{
self.replace_empty_subtrees(hashes.into_iter().collect());
self
}
// PUBLIC ACCESSORS
// --------------------------------------------------------------------------------------------
/// Returns the root of this Merkle tree.
pub const fn root(&self) -> Word {
self.root
}
/// Returns the depth of this Merkle tree.
pub const fn depth(&self) -> u32 {
pub const fn depth(&self) -> u8 {
self.depth
}
// PROVIDERS
// --------------------------------------------------------------------------------------------
/// Returns the set count of the keys of the leaves.
pub fn leaves_count(&self) -> usize {
self.store.leaves_count()
self.leaves.len()
}
/// Returns a node at the specified key
/// Returns a node at the specified index.
///
/// # Errors
/// Returns an error if:
/// * The specified depth is greater than the depth of the tree.
/// * The specified key does not exist
pub fn get_node(&self, depth: u32, key: u64) -> Result<Word, MerkleError> {
if depth == 0 {
Err(MerkleError::DepthTooSmall(depth))
} else if depth > self.depth() {
Err(MerkleError::DepthTooBig(depth))
} else if depth == self.depth() {
self.store.get_leaf_node(key)
pub fn get_node(&self, index: NodeIndex) -> Result<Word, MerkleError> {
if index.is_root() {
Err(MerkleError::DepthTooSmall(index.depth()))
} else if index.depth() > self.depth() {
Err(MerkleError::DepthTooBig(index.depth() as u64))
} else if index.depth() == self.depth() {
self.get_leaf_node(index.value())
.or_else(|| {
self.empty_hashes
.get(index.depth() as usize)
.copied()
.map(Word::from)
})
.ok_or(MerkleError::NodeNotInSet(index.value()))
} else {
let branch_node = self.store.get_branch_node(key, depth)?;
let branch_node = self.get_branch_node(&index);
Ok(Rpo256::merge(&[branch_node.left, branch_node.right]).into())
}
}
@@ -100,31 +149,23 @@ impl SimpleSmt {
///
/// # Errors
/// Returns an error if:
/// * The specified key does not exist as a branch or leaf node
/// * The specified depth is greater than the depth of the tree.
pub fn get_path(&self, depth: u32, key: u64) -> Result<Vec<Word>, MerkleError> {
if depth == 0 {
return Err(MerkleError::DepthTooSmall(depth));
} else if depth > self.depth() {
return Err(MerkleError::DepthTooBig(depth));
} else if depth == self.depth() && !self.store.check_leaf_node_exists(key) {
return Err(MerkleError::InvalidIndex(self.depth(), key));
pub fn get_path(&self, mut index: NodeIndex) -> Result<MerklePath, MerkleError> {
if index.is_root() {
return Err(MerkleError::DepthTooSmall(index.depth()));
} else if index.depth() > self.depth() {
return Err(MerkleError::DepthTooBig(index.depth() as u64));
}
let mut path = Vec::with_capacity(depth as usize);
let mut curr_key = key;
for n in (0..depth).rev() {
let parent_key = curr_key >> 1;
let parent_node = self.store.get_branch_node(parent_key, n)?;
let sibling_node = if curr_key & 1 == 1 {
parent_node.left
} else {
parent_node.right
};
path.push(sibling_node.into());
curr_key >>= 1;
let mut path = Vec::with_capacity(index.depth() as usize);
for _ in 0..index.depth() {
let is_right = index.is_value_odd();
index.move_up();
let BranchNode { left, right } = self.get_branch_node(&index);
let value = if is_right { left } else { right };
path.push(*value);
}
Ok(path)
Ok(path.into())
}
/// Return a Merkle path from the leaf at the specified key to the root. The leaf itself is not
@@ -133,17 +174,32 @@ impl SimpleSmt {
/// # Errors
/// Returns an error if:
/// * The specified key does not exist as a leaf node.
pub fn get_leaf_path(&self, key: u64) -> Result<Vec<Word>, MerkleError> {
self.get_path(self.depth(), key)
pub fn get_leaf_path(&self, key: u64) -> Result<MerklePath, MerkleError> {
let index = NodeIndex::new(self.depth(), key)?;
self.get_path(index)
}
/// Replaces the leaf located at the specified key, and recomputes hashes by walking up the tree
/// Iterator over the inner nodes of the [SimpleSmt].
pub fn inner_nodes(&self) -> impl Iterator<Item = InnerNodeInfo> + '_ {
self.branches.values().map(|e| InnerNodeInfo {
value: e.parent().into(),
left: e.left.into(),
right: e.right.into(),
})
}
// STATE MUTATORS
// --------------------------------------------------------------------------------------------
/// Replaces the leaf located at the specified key, and recomputes hashes by walking up the
/// tree.
///
/// # Errors
/// Returns an error if the specified key is not a valid leaf index for this tree.
pub fn update_leaf(&mut self, key: u64, value: Word) -> Result<(), MerkleError> {
if !self.store.check_leaf_node_exists(key) {
return Err(MerkleError::InvalidIndex(self.depth(), key));
let index = NodeIndex::new(self.depth(), key)?;
if !self.check_leaf_node_exists(key) {
return Err(MerkleError::NodeNotInSet(index.value()));
}
self.insert_leaf(key, value)?;
@@ -152,118 +208,58 @@ impl SimpleSmt {
/// Inserts a leaf located at the specified key, and recomputes hashes by walking up the tree
pub fn insert_leaf(&mut self, key: u64, value: Word) -> Result<(), MerkleError> {
self.store.insert_leaf_node(key, value);
self.insert_leaf_node(key, value);
let depth = self.depth();
let mut curr_key = key;
let mut curr_node: RpoDigest = value.into();
for n in (0..depth).rev() {
let parent_key = curr_key >> 1;
let parent_node = self
.store
.get_branch_node(parent_key, n)
.unwrap_or_else(|_| self.store.get_empty_node((n + 1) as usize));
let (left, right) = if curr_key & 1 == 1 {
(parent_node.left, curr_node)
// TODO consider using a map `index |-> word` instead of `index |-> (word, word)`
let mut index = NodeIndex::new(self.depth(), key)?;
let mut value = RpoDigest::from(value);
for _ in 0..index.depth() {
let is_right = index.is_value_odd();
index.move_up();
let BranchNode { left, right } = self.get_branch_node(&index);
let (left, right) = if is_right {
(left, value)
} else {
(curr_node, parent_node.right)
(value, right)
};
self.store.insert_branch_node(parent_key, n, left, right);
curr_key = parent_key;
curr_node = Rpo256::merge(&[left, right]);
self.insert_branch_node(index, left, right);
value = Rpo256::merge(&[left, right]);
}
self.root = curr_node.into();
self.root = value.into();
Ok(())
}
}
// STORE
// ================================================================================================
// HELPER METHODS
// --------------------------------------------------------------------------------------------
/// A data store for sparse Merkle tree key-value pairs.
/// Leaves and branch nodes are stored separately in B-tree maps, indexed by key and (key, depth)
/// respectively. Hashes for blank subtrees at each layer are stored in `empty_hashes`, beginning
/// with the root hash of an empty tree, and ending with the zero value of a leaf node.
#[derive(Debug, Clone, PartialEq, Eq)]
struct Store {
branches: BTreeMap<(u64, u32), BranchNode>,
leaves: BTreeMap<u64, Word>,
empty_hashes: Vec<RpoDigest>,
depth: u32,
}
#[derive(Debug, Default, Clone, PartialEq, Eq)]
struct BranchNode {
left: RpoDigest,
right: RpoDigest,
}
impl Store {
fn new(depth: u32) -> (Self, Word) {
let branches = BTreeMap::new();
let leaves = BTreeMap::new();
// Construct empty node digests for each layer of the tree
let empty_hashes: Vec<RpoDigest> = (0..depth + 1)
.scan(Word::default().into(), |state, _| {
let value = *state;
*state = Rpo256::merge(&[value, value]);
Some(value)
})
.collect::<Vec<_>>()
.into_iter()
.rev()
.collect();
let root = empty_hashes[0].into();
let store = Self {
branches,
leaves,
empty_hashes,
depth,
};
(store, root)
}
fn get_empty_node(&self, depth: usize) -> BranchNode {
let digest = self.empty_hashes[depth];
BranchNode {
left: digest,
right: digest,
}
fn replace_empty_subtrees(&mut self, hashes: Vec<RpoDigest>) {
self.empty_hashes = hashes;
}
fn check_leaf_node_exists(&self, key: u64) -> bool {
self.leaves.contains_key(&key)
}
fn get_leaf_node(&self, key: u64) -> Result<Word, MerkleError> {
self.leaves
.get(&key)
.cloned()
.ok_or(MerkleError::InvalidIndex(self.depth, key))
fn get_leaf_node(&self, key: u64) -> Option<Word> {
self.leaves.get(&key).copied()
}
fn insert_leaf_node(&mut self, key: u64, node: Word) {
self.leaves.insert(key, node);
}
fn get_branch_node(&self, key: u64, depth: u32) -> Result<BranchNode, MerkleError> {
self.branches
.get(&(key, depth))
.cloned()
.ok_or(MerkleError::InvalidIndex(depth, key))
fn get_branch_node(&self, index: &NodeIndex) -> BranchNode {
self.branches.get(index).cloned().unwrap_or_else(|| {
let node = self.empty_hashes[index.depth() as usize + 1];
BranchNode {
left: node,
right: node,
}
})
}
fn insert_branch_node(&mut self, key: u64, depth: u32, left: RpoDigest, right: RpoDigest) {
let node = BranchNode { left, right };
self.branches.insert((key, depth), node);
}
fn leaves_count(&self) -> usize {
self.leaves.len()
fn insert_branch_node(&mut self, index: NodeIndex, left: RpoDigest, right: RpoDigest) {
let branch = BranchNode { left, right };
self.branches.insert(index, branch);
}
}
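A hedged sketch (not part of the diff) of the builder-style construction that replaces the old SimpleSmt::new(entries, depth) signature; only the API shown above is used, and the helper and test name are illustrative.
use miden_crypto::merkle::{NodeIndex, SimpleSmt};
use miden_crypto::{Felt, Word, ZERO};
// helper mirroring the tests' int_to_node
const fn int_to_node(value: u64) -> Word {
    [Felt::new(value), ZERO, ZERO, ZERO]
}
#[test]
fn example_builder_and_update() {
    // the depth is validated first, then the leaves are appended
    let mut smt = SimpleSmt::new(2)
        .unwrap()
        .with_leaves([(0_u64, int_to_node(1)), (3_u64, int_to_node(4))])
        .unwrap();
    // a populated leaf reads back directly
    assert_eq!(smt.get_node(NodeIndex::new(2, 0).unwrap()).unwrap(), int_to_node(1));
    // updating a leaf recomputes the hashes along its path, changing the root
    let old_root = smt.root();
    smt.update_leaf(0, int_to_node(9)).unwrap();
    assert_ne!(old_root, smt.root());
    // openings are now returned as MerklePath values
    let opening = smt.get_leaf_path(3).unwrap();
    assert_eq!(opening.depth(), 2);
}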

View File

@@ -1,9 +1,7 @@
use super::{
super::{MerkleTree, RpoDigest, SimpleSmt},
Rpo256, Vec, Word,
super::{int_to_node, InnerNodeInfo, MerkleError, MerkleTree, RpoDigest, SimpleSmt},
NodeIndex, Rpo256, Vec, Word,
};
use crate::{Felt, FieldElement};
use core::iter;
use proptest::prelude::*;
use rand_utils::prng_array;
@@ -32,7 +30,7 @@ const ZERO_VALUES8: [Word; 8] = [int_to_node(0); 8];
#[test]
fn build_empty_tree() {
let smt = SimpleSmt::new(iter::empty(), 3).unwrap();
let smt = SimpleSmt::new(3).unwrap();
let mt = MerkleTree::new(ZERO_VALUES8.to_vec()).unwrap();
assert_eq!(mt.root(), smt.root());
}
@@ -40,7 +38,7 @@ fn build_empty_tree() {
#[test]
fn empty_digests_are_consistent() {
let depth = 5;
let root = SimpleSmt::new(iter::empty(), depth).unwrap().root();
let root = SimpleSmt::new(depth).unwrap().root();
let computed: [RpoDigest; 2] = (0..depth).fold([Default::default(); 2], |state, _| {
let digest = Rpo256::merge(&state);
[digest; 2]
@@ -51,7 +49,7 @@ fn empty_digests_are_consistent() {
#[test]
fn build_sparse_tree() {
let mut smt = SimpleSmt::new(iter::empty(), 3).unwrap();
let mut smt = SimpleSmt::new(3).unwrap();
let mut values = ZERO_VALUES8.to_vec();
// insert single value
@@ -62,7 +60,10 @@ fn build_sparse_tree() {
.expect("Failed to insert leaf");
let mt2 = MerkleTree::new(values.clone()).unwrap();
assert_eq!(mt2.root(), smt.root());
assert_eq!(mt2.get_path(3, 6).unwrap(), smt.get_path(3, 6).unwrap());
assert_eq!(
mt2.get_path(NodeIndex::make(3, 6)).unwrap(),
smt.get_path(NodeIndex::make(3, 6)).unwrap()
);
// insert second value at distinct leaf branch
let key = 2;
@@ -72,61 +73,132 @@ fn build_sparse_tree() {
.expect("Failed to insert leaf");
let mt3 = MerkleTree::new(values).unwrap();
assert_eq!(mt3.root(), smt.root());
assert_eq!(mt3.get_path(3, 2).unwrap(), smt.get_path(3, 2).unwrap());
assert_eq!(
mt3.get_path(NodeIndex::make(3, 2)).unwrap(),
smt.get_path(NodeIndex::make(3, 2)).unwrap()
);
}
#[test]
fn build_full_tree() {
let tree = SimpleSmt::new(KEYS4.into_iter().zip(VALUES4.into_iter()), 2).unwrap();
let tree = SimpleSmt::new(2)
.unwrap()
.with_leaves(KEYS4.into_iter().zip(VALUES4.into_iter()))
.unwrap();
let (root, node2, node3) = compute_internal_nodes();
assert_eq!(root, tree.root());
assert_eq!(node2, tree.get_node(1, 0).unwrap());
assert_eq!(node3, tree.get_node(1, 1).unwrap());
assert_eq!(node2, tree.get_node(NodeIndex::make(1, 0)).unwrap());
assert_eq!(node3, tree.get_node(NodeIndex::make(1, 1)).unwrap());
}
#[test]
fn get_values() {
let tree = SimpleSmt::new(KEYS4.into_iter().zip(VALUES4.into_iter()), 2).unwrap();
let tree = SimpleSmt::new(2)
.unwrap()
.with_leaves(KEYS4.into_iter().zip(VALUES4.into_iter()))
.unwrap();
// check depth 2
assert_eq!(VALUES4[0], tree.get_node(2, 0).unwrap());
assert_eq!(VALUES4[1], tree.get_node(2, 1).unwrap());
assert_eq!(VALUES4[2], tree.get_node(2, 2).unwrap());
assert_eq!(VALUES4[3], tree.get_node(2, 3).unwrap());
assert_eq!(VALUES4[0], tree.get_node(NodeIndex::make(2, 0)).unwrap());
assert_eq!(VALUES4[1], tree.get_node(NodeIndex::make(2, 1)).unwrap());
assert_eq!(VALUES4[2], tree.get_node(NodeIndex::make(2, 2)).unwrap());
assert_eq!(VALUES4[3], tree.get_node(NodeIndex::make(2, 3)).unwrap());
}
#[test]
fn get_path() {
let tree = SimpleSmt::new(KEYS4.into_iter().zip(VALUES4.into_iter()), 2).unwrap();
let tree = SimpleSmt::new(2)
.unwrap()
.with_leaves(KEYS4.into_iter().zip(VALUES4.into_iter()))
.unwrap();
let (_, node2, node3) = compute_internal_nodes();
// check depth 2
assert_eq!(vec![VALUES4[1], node3], tree.get_path(2, 0).unwrap());
assert_eq!(vec![VALUES4[0], node3], tree.get_path(2, 1).unwrap());
assert_eq!(vec![VALUES4[3], node2], tree.get_path(2, 2).unwrap());
assert_eq!(vec![VALUES4[2], node2], tree.get_path(2, 3).unwrap());
assert_eq!(
vec![VALUES4[1], node3],
*tree.get_path(NodeIndex::make(2, 0)).unwrap()
);
assert_eq!(
vec![VALUES4[0], node3],
*tree.get_path(NodeIndex::make(2, 1)).unwrap()
);
assert_eq!(
vec![VALUES4[3], node2],
*tree.get_path(NodeIndex::make(2, 2)).unwrap()
);
assert_eq!(
vec![VALUES4[2], node2],
*tree.get_path(NodeIndex::make(2, 3)).unwrap()
);
// check depth 1
assert_eq!(vec![node3], tree.get_path(1, 0).unwrap());
assert_eq!(vec![node2], tree.get_path(1, 1).unwrap());
assert_eq!(vec![node3], *tree.get_path(NodeIndex::make(1, 0)).unwrap());
assert_eq!(vec![node2], *tree.get_path(NodeIndex::make(1, 1)).unwrap());
}
#[test]
fn test_parent_node_iterator() -> Result<(), MerkleError> {
let tree = SimpleSmt::new(2)
.unwrap()
.with_leaves(KEYS4.into_iter().zip(VALUES4.into_iter()))
.unwrap();
// check depth 2
assert_eq!(VALUES4[0], tree.get_node(NodeIndex::make(2, 0)).unwrap());
assert_eq!(VALUES4[1], tree.get_node(NodeIndex::make(2, 1)).unwrap());
assert_eq!(VALUES4[2], tree.get_node(NodeIndex::make(2, 2)).unwrap());
assert_eq!(VALUES4[3], tree.get_node(NodeIndex::make(2, 3)).unwrap());
// get parent nodes
let root = tree.root();
let l1n0 = tree.get_node(NodeIndex::make(1, 0))?;
let l1n1 = tree.get_node(NodeIndex::make(1, 1))?;
let l2n0 = tree.get_node(NodeIndex::make(2, 0))?;
let l2n1 = tree.get_node(NodeIndex::make(2, 1))?;
let l2n2 = tree.get_node(NodeIndex::make(2, 2))?;
let l2n3 = tree.get_node(NodeIndex::make(2, 3))?;
let nodes: Vec<InnerNodeInfo> = tree.inner_nodes().collect();
let expected = vec![
InnerNodeInfo {
value: root.into(),
left: l1n0.into(),
right: l1n1.into(),
},
InnerNodeInfo {
value: l1n0.into(),
left: l2n0.into(),
right: l2n1.into(),
},
InnerNodeInfo {
value: l1n1.into(),
left: l2n2.into(),
right: l2n3.into(),
},
];
assert_eq!(nodes, expected);
Ok(())
}
#[test]
fn update_leaf() {
let mut tree = SimpleSmt::new(KEYS8.into_iter().zip(VALUES8.into_iter()), 3).unwrap();
let mut tree = SimpleSmt::new(3)
.unwrap()
.with_leaves(KEYS8.into_iter().zip(VALUES8.into_iter()))
.unwrap();
// update one value
let key = 3;
let new_node = int_to_node(9);
let mut expected_values = VALUES8.to_vec();
expected_values[key] = new_node;
let expected_tree = SimpleSmt::new(
KEYS8.into_iter().zip(expected_values.clone().into_iter()),
3,
)
.unwrap();
let expected_tree = SimpleSmt::new(3)
.unwrap()
.with_leaves(KEYS8.into_iter().zip(expected_values.clone().into_iter()))
.unwrap();
tree.update_leaf(key as u64, new_node).unwrap();
assert_eq!(expected_tree.root, tree.root);
@@ -135,8 +207,10 @@ fn update_leaf() {
let key = 6;
let new_node = int_to_node(10);
expected_values[key] = new_node;
let expected_tree =
SimpleSmt::new(KEYS8.into_iter().zip(expected_values.into_iter()), 3).unwrap();
let expected_tree = SimpleSmt::new(3)
.unwrap()
.with_leaves(KEYS8.into_iter().zip(expected_values.into_iter()))
.unwrap();
tree.update_leaf(key as u64, new_node).unwrap();
assert_eq!(expected_tree.root, tree.root);
@@ -171,11 +245,11 @@ fn small_tree_opening_is_consistent() {
let depth = 3;
let entries = vec![(0, a), (1, b), (4, c), (7, d)];
let tree = SimpleSmt::new(entries, depth).unwrap();
let tree = SimpleSmt::new(depth).unwrap().with_leaves(entries).unwrap();
assert_eq!(tree.root(), Word::from(k));
let cases: Vec<(u32, u64, Vec<Word>)> = vec![
let cases: Vec<(u8, u64, Vec<Word>)> = vec![
(3, 0, vec![b, f, j]),
(3, 1, vec![a, f, j]),
(3, 4, vec![z, h, i]),
@@ -189,9 +263,9 @@ fn small_tree_opening_is_consistent() {
];
for (depth, key, path) in cases {
let opening = tree.get_path(depth, key).unwrap();
let opening = tree.get_path(NodeIndex::make(depth, key)).unwrap();
assert_eq!(path, opening);
assert_eq!(path, *opening);
}
}
@@ -202,7 +276,7 @@ proptest! {
key in prop::num::u64::ANY,
leaf in prop::num::u64::ANY,
) {
let mut tree = SimpleSmt::new(iter::empty(), depth).unwrap();
let mut tree = SimpleSmt::new(depth).unwrap();
let key = key % (1 << depth as u64);
let leaf = int_to_node(leaf);
@@ -213,7 +287,7 @@ proptest! {
// traverse to root, fetching all paths
for d in 1..depth {
let k = key >> (depth - d);
tree.get_path(d, k).unwrap();
tree.get_path(NodeIndex::make(d, k)).unwrap();
}
}
@@ -223,7 +297,7 @@ proptest! {
count in 2u8..10u8,
ref seed in any::<[u8; 32]>()
) {
let mut tree = SimpleSmt::new(iter::empty(), depth).unwrap();
let mut tree = SimpleSmt::new(depth).unwrap();
let mut seed = *seed;
let leaves = (1 << depth) - 1;
@@ -257,7 +331,3 @@ fn compute_internal_nodes() -> (Word, Word, Word) {
(root.into(), node2.into(), node3.into())
}
const fn int_to_node(value: u64) -> Word {
[Felt::new(value), Felt::ZERO, Felt::ZERO, Felt::ZERO]
}

src/merkle/store/mod.rs Normal file

@@ -0,0 +1,558 @@
use super::mmr::{Mmr, MmrPeaks};
use super::{
BTreeMap, EmptySubtreeRoots, MerkleError, MerklePath, MerklePathSet, MerkleTree, NodeIndex,
RootPath, Rpo256, RpoDigest, SimpleSmt, ValuePath, Vec, Word,
};
use crate::utils::{ByteReader, ByteWriter, Deserializable, DeserializationError, Serializable};
#[cfg(test)]
mod tests;
#[derive(Debug, Default, Copy, Clone, Eq, PartialEq)]
pub struct Node {
left: RpoDigest,
right: RpoDigest,
}
/// An in-memory data store for Merkleized data.
///
/// This is an in-memory data store for Merkle trees: it allows all the nodes of multiple trees to
/// live as long as necessary and without duplication, which enables the implementation of
/// space-efficient persistent data structures.
///
/// Example usage:
///
/// ```rust
/// # use miden_crypto::{ZERO, Felt, Word};
/// # use miden_crypto::merkle::{NodeIndex, MerkleStore, MerkleTree};
/// # use miden_crypto::hash::rpo::Rpo256;
/// # const fn int_to_node(value: u64) -> Word {
/// # [Felt::new(value), ZERO, ZERO, ZERO]
/// # }
/// # let A = int_to_node(1);
/// # let B = int_to_node(2);
/// # let C = int_to_node(3);
/// # let D = int_to_node(4);
/// # let E = int_to_node(5);
/// # let F = int_to_node(6);
/// # let G = int_to_node(7);
/// # let H0 = int_to_node(8);
/// # let H1 = int_to_node(9);
/// # let T0 = MerkleTree::new([A, B, C, D, E, F, G, H0].to_vec()).expect("even number of leaves provided");
/// # let T1 = MerkleTree::new([A, B, C, D, E, F, G, H1].to_vec()).expect("even number of leaves provided");
/// # let ROOT0 = T0.root();
/// # let ROOT1 = T1.root();
/// let mut store = MerkleStore::new();
///
/// // the store is initialized with the SMT empty nodes
/// assert_eq!(store.num_internal_nodes(), 255);
///
/// // populates the store with two merkle trees, common nodes are shared
/// store.add_merkle_tree([A, B, C, D, E, F, G, H0]);
/// store.add_merkle_tree([A, B, C, D, E, F, G, H1]);
///
/// // every leaf except the last are the same
/// for i in 0..7 {
/// let idx0 = NodeIndex::new(3, i).unwrap();
/// let d0 = store.get_node(ROOT0, idx0).unwrap();
/// let idx1 = NodeIndex::new(3, i).unwrap();
/// let d1 = store.get_node(ROOT1, idx1).unwrap();
/// assert_eq!(d0, d1, "Both trees have the same leaf at pos {i}");
/// }
///
/// // The leaves A-B-C-D are the same for both trees, so are their 2 immediate parents
/// for i in 0..4 {
/// let idx0 = NodeIndex::new(3, i).unwrap();
/// let d0 = store.get_path(ROOT0, idx0).unwrap();
/// let idx1 = NodeIndex::new(3, i).unwrap();
/// let d1 = store.get_path(ROOT1, idx1).unwrap();
/// assert_eq!(d0.path[0..2], d1.path[0..2], "Both sub-trees are equal up to two levels");
/// }
///
/// // Common internal nodes are shared, the two added trees have a total of 30, but the store has
/// // only 10 new entries, corresponding to the 10 unique internal nodes of these trees.
/// assert_eq!(store.num_internal_nodes() - 255, 10);
/// ```
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct MerkleStore {
nodes: BTreeMap<RpoDigest, Node>,
}
impl Default for MerkleStore {
fn default() -> Self {
Self::new()
}
}
impl MerkleStore {
// CONSTRUCTORS
// --------------------------------------------------------------------------------------------
/// Creates an empty `MerkleStore` instance.
pub fn new() -> MerkleStore {
// pre-populate the store with the empty hashes
let subtrees = EmptySubtreeRoots::empty_hashes(255);
let nodes = subtrees
.iter()
.rev()
.copied()
.zip(subtrees.iter().rev().skip(1).copied())
.map(|(child, parent)| {
(
parent,
Node {
left: child,
right: child,
},
)
})
.collect();
MerkleStore { nodes }
}
/// Appends the provided merkle tree represented by its `leaves` to the set.
pub fn with_merkle_tree<I>(mut self, leaves: I) -> Result<Self, MerkleError>
where
I: IntoIterator<Item = Word>,
{
self.add_merkle_tree(leaves)?;
Ok(self)
}
/// Appends the provided Sparse Merkle tree represented by its `entries` to the set.
///
/// For more information, check [MerkleStore::add_sparse_merkle_tree].
pub fn with_sparse_merkle_tree<R, I>(
mut self,
depth: u8,
entries: R,
) -> Result<Self, MerkleError>
where
R: IntoIterator<IntoIter = I>,
I: Iterator<Item = (u64, Word)> + ExactSizeIterator,
{
self.add_sparse_merkle_tree(depth, entries)?;
Ok(self)
}
/// Appends the provided merkle path set.
pub fn with_merkle_path(
mut self,
index_value: u64,
node: Word,
path: MerklePath,
) -> Result<Self, MerkleError> {
self.add_merkle_path(index_value, node, path)?;
Ok(self)
}
/// Appends the provided merkle path set.
pub fn with_merkle_paths<I>(mut self, paths: I) -> Result<Self, MerkleError>
where
I: IntoIterator<Item = (u64, Word, MerklePath)>,
{
self.add_merkle_paths(paths)?;
Ok(self)
}
/// Appends the provided [Mmr] represented by its `leaves` to the set.
pub fn with_mmr<I>(mut self, leaves: I) -> Result<Self, MerkleError>
where
I: IntoIterator<Item = Word>,
{
self.add_mmr(leaves)?;
Ok(self)
}
// PUBLIC ACCESSORS
// --------------------------------------------------------------------------------------------
/// Return a count of the non-leaf nodes in the store.
pub fn num_internal_nodes(&self) -> usize {
self.nodes.len()
}
/// Returns the node at `index` rooted on the tree `root`.
///
/// # Errors
///
/// This method can return the following errors:
/// - `RootNotInStore` if the `root` is not present in the store.
/// - `NodeNotInStore` if a node needed to traverse from `root` to `index` is not present in the store.
pub fn get_node(&self, root: Word, index: NodeIndex) -> Result<Word, MerkleError> {
let mut hash: RpoDigest = root.into();
// corner case: check the root is in the store when called with index `NodeIndex::root()`
self.nodes
.get(&hash)
.ok_or(MerkleError::RootNotInStore(hash.into()))?;
for i in (0..index.depth()).rev() {
let node = self
.nodes
.get(&hash)
.ok_or(MerkleError::NodeNotInStore(hash.into(), index))?;
let bit = (index.value() >> i) & 1;
hash = if bit == 0 { node.left } else { node.right }
}
Ok(hash.into())
}
/// Returns the node at the specified `index` and its opening to the `root`.
///
/// The path starts at the sibling of the target leaf.
///
/// # Errors
///
/// This method can return the following errors:
/// - `RootNotInStore` if the `root` is not present in the store.
/// - `NodeNotInStore` if a node needed to traverse from `root` to `index` is not present in the store.
pub fn get_path(&self, root: Word, index: NodeIndex) -> Result<ValuePath, MerkleError> {
let mut hash: RpoDigest = root.into();
let mut path = Vec::with_capacity(index.depth().into());
// corner case: check the root is in the store when called with index `NodeIndex::root()`
self.nodes
.get(&hash)
.ok_or(MerkleError::RootNotInStore(hash.into()))?;
for i in (0..index.depth()).rev() {
let node = self
.nodes
.get(&hash)
.ok_or(MerkleError::NodeNotInStore(hash.into(), index))?;
let bit = (index.value() >> i) & 1;
hash = if bit == 0 {
path.push(node.right.into());
node.left
} else {
path.push(node.left.into());
node.right
}
}
// the path is computed from root to leaf, so it must be reversed
path.reverse();
Ok(ValuePath {
value: hash.into(),
path: MerklePath::new(path),
})
}
/// Reconstructs a path from the root until a leaf or empty node and returns its depth.
///
/// The `tree_depth` parameter defines up to which depth the tree will be traversed, starting
/// from `root`. The maximum value the argument accepts is [u64::BITS].
///
/// The traversed path from leaf to root will start at the least significant bit of `index`,
/// and will be executed for `tree_depth` bits.
///
/// # Errors
/// Will return an error if:
/// - The provided root is not found.
/// - The path from the root continues to a depth greater than `tree_depth`.
/// - The provided `tree_depth` is greater than `64`.
/// - The provided `index` is not valid for a depth equivalent to `tree_depth`. For more
/// information, check [NodeIndex::new].
pub fn get_leaf_depth(
&self,
root: Word,
tree_depth: u8,
index: u64,
) -> Result<u8, MerkleError> {
// validate depth and index
if tree_depth > 64 {
return Err(MerkleError::DepthTooBig(tree_depth as u64));
}
NodeIndex::new(tree_depth, index)?;
// a maximum depth of `0` is legal; in that case we simply return the root. Returning early
// here also simplifies the implementation, since the bit shift below would overflow for a
// depth of `0`.
if tree_depth == 0 {
return Ok(0);
}
// check if the root exists, providing the proper error report if it doesn't
let empty = EmptySubtreeRoots::empty_hashes(tree_depth);
let mut hash: RpoDigest = root.into();
if !self.nodes.contains_key(&hash) {
return Err(MerkleError::RootNotInStore(hash.into()));
}
// we traverse from root to leaf, so the path is reversed
let mut path = (index << (64 - tree_depth)).reverse_bits();
// iterate every depth and reconstruct the path from root to leaf
for depth in 0..tree_depth {
// we short-circuit if an empty node has been found
if hash == empty[depth as usize] {
return Ok(depth);
}
// fetch the children pair, mapped by its parent hash
let children = match self.nodes.get(&hash) {
Some(node) => node,
None => return Ok(depth),
};
// traverse down
hash = if path & 1 == 0 {
children.left
} else {
children.right
};
path >>= 1;
}
// at max depth assert it doesn't have sub-trees
if self.nodes.contains_key(&hash) {
return Err(MerkleError::DepthTooBig(tree_depth as u64 + 1));
}
// depleted bits; return max depth
Ok(tree_depth)
}
// STATE MUTATORS
// --------------------------------------------------------------------------------------------
/// Adds all the nodes of a Merkle tree represented by `leaves`.
///
/// This will instantiate a Merkle tree using `leaves` and include all the nodes into the
/// store.
///
/// # Errors
///
/// This method may return the following errors:
/// - `DepthTooSmall` if leaves is empty or contains only 1 element
/// - `NumLeavesNotPowerOfTwo` if the number of leaves is not a power-of-two
pub fn add_merkle_tree<I>(&mut self, leaves: I) -> Result<Word, MerkleError>
where
I: IntoIterator<Item = Word>,
{
let leaves: Vec<_> = leaves.into_iter().collect();
if leaves.len() < 2 {
return Err(MerkleError::DepthTooSmall(leaves.len() as u8));
}
let tree = MerkleTree::new(leaves)?;
for node in tree.inner_nodes() {
self.nodes.insert(
node.value.into(),
Node {
left: node.left.into(),
right: node.right.into(),
},
);
}
Ok(tree.root())
}
/// Adds a Sparse Merkle tree defined by the specified `entries` to the store, and returns the
/// root of the added tree.
///
/// The entries are expected to contain tuples of `(index, node)` describing nodes in the tree
/// at `depth`.
///
/// # Errors
/// Returns an error if the provided `depth` is greater than [SimpleSmt::MAX_DEPTH].
pub fn add_sparse_merkle_tree<R, I>(
&mut self,
depth: u8,
entries: R,
) -> Result<Word, MerkleError>
where
R: IntoIterator<IntoIter = I>,
I: Iterator<Item = (u64, Word)> + ExactSizeIterator,
{
let smt = SimpleSmt::new(depth)?.with_leaves(entries)?;
for node in smt.inner_nodes() {
self.nodes.insert(
node.value.into(),
Node {
left: node.left.into(),
right: node.right.into(),
},
);
}
Ok(smt.root())
}
/// Adds all the nodes of a Merkle path represented by `path`, opening to `node`. Returns the
/// new root.
///
/// This will compute the sibling elements determined by the Merkle `path` and `node`, and
/// include all the nodes into the store.
pub fn add_merkle_path(
&mut self,
index_value: u64,
mut node: Word,
path: MerklePath,
) -> Result<Word, MerkleError> {
let mut index = NodeIndex::new(path.len() as u8, index_value)?;
for sibling in path {
let (left, right) = match index.is_value_odd() {
true => (sibling, node),
false => (node, sibling),
};
let parent = Rpo256::merge(&[left.into(), right.into()]);
self.nodes.insert(
parent,
Node {
left: left.into(),
right: right.into(),
},
);
index.move_up();
node = parent.into();
}
Ok(node)
}
/// Adds all the nodes of multiple Merkle paths into the store.
///
/// This will compute the sibling elements for each Merkle `path` and include all the nodes
/// into the store.
///
/// For further reference, check [MerkleStore::add_merkle_path].
pub fn add_merkle_paths<I>(&mut self, paths: I) -> Result<(), MerkleError>
where
I: IntoIterator<Item = (u64, Word, MerklePath)>,
{
for (index_value, node, path) in paths.into_iter() {
self.add_merkle_path(index_value, node, path)?;
}
Ok(())
}
/// Appends the provided [MerklePathSet] into the store.
///
/// For further reference, check [MerkleStore::add_merkle_path].
pub fn add_merkle_path_set(&mut self, path_set: &MerklePathSet) -> Result<Word, MerkleError> {
let root = path_set.root();
for (index, path) in path_set.to_paths() {
self.add_merkle_path(index, path.value, path.path)?;
}
Ok(root)
}
/// Appends the provided [Mmr] into the store.
pub fn add_mmr<I>(&mut self, leaves: I) -> Result<MmrPeaks, MerkleError>
where
I: IntoIterator<Item = Word>,
{
let mmr = Mmr::from(leaves);
for node in mmr.inner_nodes() {
self.nodes.insert(
node.value.into(),
Node {
left: node.left.into(),
right: node.right.into(),
},
);
}
Ok(mmr.accumulator())
}
/// Sets a node to `value`.
///
/// # Errors
///
/// This method can return the following errors:
/// - `RootNotInStore` if the `root` is not present in the store.
/// - `NodeNotInStore` if a node needed to traverse from `root` to `index` is not present in the store.
pub fn set_node(
&mut self,
mut root: Word,
index: NodeIndex,
value: Word,
) -> Result<RootPath, MerkleError> {
let node = value;
let ValuePath { value, path } = self.get_path(root, index)?;
// performs the update only if the node value differs from the opening
if node != value {
root = self.add_merkle_path(index.value(), node, path.clone())?;
}
Ok(RootPath { root, path })
}
pub fn merge_roots(&mut self, root1: Word, root2: Word) -> Result<Word, MerkleError> {
let root1: RpoDigest = root1.into();
let root2: RpoDigest = root2.into();
if !self.nodes.contains_key(&root1) {
Err(MerkleError::NodeNotInStore(root1.into(), NodeIndex::root()))
} else if !self.nodes.contains_key(&root2) {
Err(MerkleError::NodeNotInStore(root2.into(), NodeIndex::root()))
} else {
let parent: Word = Rpo256::merge(&[root1, root2]).into();
self.nodes.insert(
parent.into(),
Node {
left: root1,
right: root2,
},
);
Ok(parent)
}
}
}
// SERIALIZATION
// ================================================================================================
impl Serializable for Node {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
self.left.write_into(target);
self.right.write_into(target);
}
}
impl Deserializable for Node {
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
let left = RpoDigest::read_from(source)?;
let right = RpoDigest::read_from(source)?;
Ok(Node { left, right })
}
}
impl Serializable for MerkleStore {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
target.write_u64(self.nodes.len() as u64);
for (k, v) in self.nodes.iter() {
k.write_into(target);
v.write_into(target);
}
}
}
impl Deserializable for MerkleStore {
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
let len = source.read_u64()?;
let mut nodes: BTreeMap<RpoDigest, Node> = BTreeMap::new();
for _ in 0..len {
let key = RpoDigest::read_from(source)?;
let value = Node::read_from(source)?;
nodes.insert(key, value);
}
Ok(MerkleStore { nodes })
}
}
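To close out the new store, a hedged sketch (not part of the diff) of the copy-on-write behaviour of set_node: the updated tree gets a new root while the original tree remains addressable by its old root. Only the API shown above is used; the helper and test name are illustrative.
use miden_crypto::merkle::{MerkleStore, MerkleTree, NodeIndex};
use miden_crypto::{Felt, Word, ZERO};
// helper mirroring the tests' int_to_node
const fn int_to_node(value: u64) -> Word {
    [Felt::new(value), ZERO, ZERO, ZERO]
}
#[test]
fn example_persistent_update() {
    let leaves = [int_to_node(1), int_to_node(2), int_to_node(3), int_to_node(4)];
    let tree = MerkleTree::new(leaves.to_vec()).unwrap();
    let mut store = MerkleStore::new().with_merkle_tree(leaves).unwrap();
    // replace leaf 0; set_node returns the new root together with the opening it used
    let index = NodeIndex::new(2, 0).unwrap();
    let root_path = store.set_node(tree.root(), index, int_to_node(9)).unwrap();
    // the updated tree is reachable from the new root...
    assert_eq!(store.get_node(root_path.root, index).unwrap(), int_to_node(9));
    // ...while the original tree is still reachable from the old root
    assert_eq!(store.get_node(tree.root(), index).unwrap(), leaves[0]);
}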

src/merkle/store/tests.rs Normal file

@@ -0,0 +1,808 @@
use super::*;
use crate::{
hash::rpo::Rpo256,
merkle::{int_to_node, MerklePathSet},
Felt, Word, WORD_SIZE, ZERO,
};
#[cfg(feature = "std")]
use std::error::Error;
const KEYS4: [u64; 4] = [0, 1, 2, 3];
const LEAVES4: [Word; 4] = [
int_to_node(1),
int_to_node(2),
int_to_node(3),
int_to_node(4),
];
const EMPTY: Word = [ZERO; WORD_SIZE];
#[test]
fn test_root_not_in_store() -> Result<(), MerkleError> {
let mtree = MerkleTree::new(LEAVES4.to_vec())?;
let store = MerkleStore::default().with_merkle_tree(LEAVES4)?;
assert_eq!(
store.get_node(LEAVES4[0], NodeIndex::make(mtree.depth(), 0)),
Err(MerkleError::RootNotInStore(LEAVES4[0])),
"Leaf 0 is not a root"
);
assert_eq!(
store.get_path(LEAVES4[0], NodeIndex::make(mtree.depth(), 0)),
Err(MerkleError::RootNotInStore(LEAVES4[0])),
"Leaf 0 is not a root"
);
Ok(())
}
#[test]
fn test_merkle_tree() -> Result<(), MerkleError> {
let mut store = MerkleStore::default();
let mtree = MerkleTree::new(LEAVES4.to_vec())?;
store.add_merkle_tree(LEAVES4.to_vec())?;
// STORE LEAVES ARE CORRECT ==============================================================
// checks the leaves in the store corresponds to the expected values
assert_eq!(
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 0)),
Ok(LEAVES4[0]),
"node 0 must be in the tree"
);
assert_eq!(
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 1)),
Ok(LEAVES4[1]),
"node 1 must be in the tree"
);
assert_eq!(
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 2)),
Ok(LEAVES4[2]),
"node 2 must be in the tree"
);
assert_eq!(
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 3)),
Ok(LEAVES4[3]),
"node 3 must be in the tree"
);
// STORE LEAVES MATCH TREE ===============================================================
// sanity check the values returned by the store and the tree
assert_eq!(
mtree.get_node(NodeIndex::make(mtree.depth(), 0)),
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 0)),
"node 0 must be the same for both MerkleTree and MerkleStore"
);
assert_eq!(
mtree.get_node(NodeIndex::make(mtree.depth(), 1)),
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 1)),
"node 1 must be the same for both MerkleTree and MerkleStore"
);
assert_eq!(
mtree.get_node(NodeIndex::make(mtree.depth(), 2)),
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 2)),
"node 2 must be the same for both MerkleTree and MerkleStore"
);
assert_eq!(
mtree.get_node(NodeIndex::make(mtree.depth(), 3)),
store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 3)),
"node 3 must be the same for both MerkleTree and MerkleStore"
);
// STORE MERKLE PATH MATCHES =============================================================
// assert the merkle path returned by the store is the same as the one in the tree
let result = store
.get_path(mtree.root(), NodeIndex::make(mtree.depth(), 0))
.unwrap();
assert_eq!(
LEAVES4[0], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
mtree.get_path(NodeIndex::make(mtree.depth(), 0)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(mtree.root(), NodeIndex::make(mtree.depth(), 1))
.unwrap();
assert_eq!(
LEAVES4[1], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
mtree.get_path(NodeIndex::make(mtree.depth(), 1)),
Ok(result.path),
"merkle path for index 1 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(mtree.root(), NodeIndex::make(mtree.depth(), 2))
.unwrap();
assert_eq!(
LEAVES4[2], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
mtree.get_path(NodeIndex::make(mtree.depth(), 2)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(mtree.root(), NodeIndex::make(mtree.depth(), 3))
.unwrap();
assert_eq!(
LEAVES4[3], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
mtree.get_path(NodeIndex::make(mtree.depth(), 3)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
Ok(())
}
#[test]
fn test_empty_roots() {
let store = MerkleStore::default();
let mut root = RpoDigest::new(EMPTY);
for depth in 0..255 {
root = Rpo256::merge(&[root; 2]);
assert!(
store.get_node(root.into(), NodeIndex::make(0, 0)).is_ok(),
"The root of the empty tree of depth {depth} must be registered"
);
}
}
#[test]
fn test_leaf_paths_for_empty_trees() -> Result<(), MerkleError> {
let store = MerkleStore::default();
// Starts at 1 because leaves are not included in the store.
// Stops before 64 because a u64 is used to index the leaf, so indices at depths
// greater than 64 cannot be represented.
for depth in 1..64 {
let smt = SimpleSmt::new(depth)?;
let index = NodeIndex::make(depth, 0);
let store_path = store.get_path(smt.root(), index)?;
let smt_path = smt.get_path(index)?;
assert_eq!(
store_path.value, EMPTY,
"the leaf of an empty tree is always ZERO"
);
assert_eq!(
store_path.path, smt_path,
"the returned merkle path does not match the computed values"
);
assert_eq!(
store_path.path.compute_root(depth.into(), EMPTY).unwrap(),
smt.root(),
"computed root from the path must match the empty tree root"
);
}
Ok(())
}
#[test]
fn test_get_invalid_node() {
let mut store = MerkleStore::default();
let mtree = MerkleTree::new(LEAVES4.to_vec()).expect("creating a merkle tree must work");
store
.add_merkle_tree(LEAVES4.to_vec())
.expect("adding a merkle tree to the store must work");
let _ = store.get_node(mtree.root(), NodeIndex::make(mtree.depth(), 3));
}
#[test]
fn test_add_sparse_merkle_tree_one_level() -> Result<(), MerkleError> {
let mut store = MerkleStore::default();
let keys2: [u64; 2] = [0, 1];
let leaves2: [Word; 2] = [int_to_node(1), int_to_node(2)];
store.add_sparse_merkle_tree(1, keys2.into_iter().zip(leaves2.into_iter()))?;
let smt = SimpleSmt::new(1)
.unwrap()
.with_leaves(keys2.into_iter().zip(leaves2.into_iter()))
.unwrap();
let idx = NodeIndex::make(1, 0);
assert_eq!(smt.get_node(idx).unwrap(), leaves2[0]);
assert_eq!(
store.get_node(smt.root(), idx).unwrap(),
smt.get_node(idx).unwrap()
);
let idx = NodeIndex::make(1, 1);
assert_eq!(smt.get_node(idx).unwrap(), leaves2[1]);
assert_eq!(
store.get_node(smt.root(), idx).unwrap(),
smt.get_node(idx).unwrap()
);
Ok(())
}
#[test]
fn test_sparse_merkle_tree() -> Result<(), MerkleError> {
let mut store = MerkleStore::default();
store.add_sparse_merkle_tree(
SimpleSmt::MAX_DEPTH,
KEYS4.into_iter().zip(LEAVES4.into_iter()),
)?;
let smt = SimpleSmt::new(SimpleSmt::MAX_DEPTH)
.unwrap()
.with_leaves(KEYS4.into_iter().zip(LEAVES4.into_iter()))
.unwrap();
// STORE LEAVES ARE CORRECT ==============================================================
// checks the leaves in the store correspond to the expected values
assert_eq!(
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 0)),
Ok(LEAVES4[0]),
"node 0 must be in the tree"
);
assert_eq!(
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 1)),
Ok(LEAVES4[1]),
"node 1 must be in the tree"
);
assert_eq!(
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 2)),
Ok(LEAVES4[2]),
"node 2 must be in the tree"
);
assert_eq!(
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 3)),
Ok(LEAVES4[3]),
"node 3 must be in the tree"
);
assert_eq!(
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 4)),
Ok(EMPTY),
"unmodified node 4 must be ZERO"
);
// STORE LEAVES MATCH TREE ===============================================================
// sanity check the values returned by the store and the tree
assert_eq!(
smt.get_node(NodeIndex::make(smt.depth(), 0)),
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 0)),
"node 0 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
smt.get_node(NodeIndex::make(smt.depth(), 1)),
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 1)),
"node 1 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
smt.get_node(NodeIndex::make(smt.depth(), 2)),
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 2)),
"node 2 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
smt.get_node(NodeIndex::make(smt.depth(), 3)),
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 3)),
"node 3 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
smt.get_node(NodeIndex::make(smt.depth(), 4)),
store.get_node(smt.root(), NodeIndex::make(smt.depth(), 4)),
"node 4 must be the same for both SparseMerkleTree and MerkleStore"
);
// STORE MERKLE PATH MATCHES =============================================================
// assert the merkle path returned by the store is the same as the one in the tree
let result = store
.get_path(smt.root(), NodeIndex::make(smt.depth(), 0))
.unwrap();
assert_eq!(
LEAVES4[0], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
smt.get_path(NodeIndex::make(smt.depth(), 0)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(smt.root(), NodeIndex::make(smt.depth(), 1))
.unwrap();
assert_eq!(
LEAVES4[1], result.value,
"Value for merkle path at index 1 must match leaf value"
);
assert_eq!(
smt.get_path(NodeIndex::make(smt.depth(), 1)),
Ok(result.path),
"merkle path for index 1 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(smt.root(), NodeIndex::make(smt.depth(), 2))
.unwrap();
assert_eq!(
LEAVES4[2], result.value,
"Value for merkle path at index 2 must match leaf value"
);
assert_eq!(
smt.get_path(NodeIndex::make(smt.depth(), 2)),
Ok(result.path),
"merkle path for index 2 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(smt.root(), NodeIndex::make(smt.depth(), 3))
.unwrap();
assert_eq!(
LEAVES4[3], result.value,
"Value for merkle path at index 3 must match leaf value"
);
assert_eq!(
smt.get_path(NodeIndex::make(smt.depth(), 3)),
Ok(result.path),
"merkle path for index 3 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(smt.root(), NodeIndex::make(smt.depth(), 4))
.unwrap();
assert_eq!(
EMPTY, result.value,
"Value for merkle path at index 4 must match leaf value"
);
assert_eq!(
smt.get_path(NodeIndex::make(smt.depth(), 4)),
Ok(result.path),
"merkle path for index 4 must be the same for the MerkleTree and MerkleStore"
);
Ok(())
}
#[test]
fn test_add_merkle_paths() -> Result<(), MerkleError> {
let mtree = MerkleTree::new(LEAVES4.to_vec())?;
let i0 = 0;
let p0 = mtree.get_path(NodeIndex::make(2, i0)).unwrap();
let i1 = 1;
let p1 = mtree.get_path(NodeIndex::make(2, i1)).unwrap();
let i2 = 2;
let p2 = mtree.get_path(NodeIndex::make(2, i2)).unwrap();
let i3 = 3;
let p3 = mtree.get_path(NodeIndex::make(2, i3)).unwrap();
let paths = [
(i0, LEAVES4[i0 as usize], p0),
(i1, LEAVES4[i1 as usize], p1),
(i2, LEAVES4[i2 as usize], p2),
(i3, LEAVES4[i3 as usize], p3),
];
let mut store = MerkleStore::default();
store
.add_merkle_paths(paths.clone())
.expect("the valid paths must work");
let depth = 2;
let set = MerklePathSet::new(depth).with_paths(paths).unwrap();
// STORE LEAVES ARE CORRECT ==============================================================
// checks the leaves in the store correspond to the expected values
assert_eq!(
store.get_node(set.root(), NodeIndex::make(set.depth(), 0)),
Ok(LEAVES4[0]),
"node 0 must be in the set"
);
assert_eq!(
store.get_node(set.root(), NodeIndex::make(set.depth(), 1)),
Ok(LEAVES4[1]),
"node 1 must be in the set"
);
assert_eq!(
store.get_node(set.root(), NodeIndex::make(set.depth(), 2)),
Ok(LEAVES4[2]),
"node 2 must be in the set"
);
assert_eq!(
store.get_node(set.root(), NodeIndex::make(set.depth(), 3)),
Ok(LEAVES4[3]),
"node 3 must be in the set"
);
// STORE LEAVES MATCH SET ================================================================
// sanity check the values returned by the store and the set
assert_eq!(
set.get_node(NodeIndex::make(set.depth(), 0)),
store.get_node(set.root(), NodeIndex::make(set.depth(), 0)),
"node 0 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
set.get_node(NodeIndex::make(set.depth(), 1)),
store.get_node(set.root(), NodeIndex::make(set.depth(), 1)),
"node 1 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
set.get_node(NodeIndex::make(set.depth(), 2)),
store.get_node(set.root(), NodeIndex::make(set.depth(), 2)),
"node 2 must be the same for both SparseMerkleTree and MerkleStore"
);
assert_eq!(
set.get_node(NodeIndex::make(set.depth(), 3)),
store.get_node(set.root(), NodeIndex::make(set.depth(), 3)),
"node 3 must be the same for both SparseMerkleTree and MerkleStore"
);
// STORE MERKLE PATH MATCHES =============================================================
// assert the merkle path returned by the store is the same as the one in the set
let result = store
.get_path(set.root(), NodeIndex::make(set.depth(), 0))
.unwrap();
assert_eq!(
LEAVES4[0], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
set.get_path(NodeIndex::make(set.depth(), 0)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(set.root(), NodeIndex::make(set.depth(), 1))
.unwrap();
assert_eq!(
LEAVES4[1], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
set.get_path(NodeIndex::make(set.depth(), 1)),
Ok(result.path),
"merkle path for index 1 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(set.root(), NodeIndex::make(set.depth(), 2))
.unwrap();
assert_eq!(
LEAVES4[2], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
set.get_path(NodeIndex::make(set.depth(), 2)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
let result = store
.get_path(set.root(), NodeIndex::make(set.depth(), 3))
.unwrap();
assert_eq!(
LEAVES4[3], result.value,
"Value for merkle path at index 0 must match leaf value"
);
assert_eq!(
set.get_path(NodeIndex::make(set.depth(), 3)),
Ok(result.path),
"merkle path for index 0 must be the same for the MerkleTree and MerkleStore"
);
Ok(())
}
#[test]
fn wont_open_to_different_depth_root() {
let empty = EmptySubtreeRoots::empty_hashes(64);
let a = [Felt::new(1); 4];
let b = [Felt::new(2); 4];
// Compute the root for a different depth. We cherry-pick this specific depth to guard
// against a regression of a past bug which allowed fetching a node at a depth lower than
// the depth of the inserted Merkle tree path.
let mut root = Rpo256::merge(&[a.into(), b.into()]);
for depth in (1..=63).rev() {
root = Rpo256::merge(&[root, empty[depth]]);
}
let root = Word::from(root);
// For this example the depth of the Merkle tree is 1, as we have only two leaves. Here we
// attempt to open the root computed for the maximum depth, which must fail because that
// root was never inserted into the store.
let store = MerkleStore::default().with_merkle_tree([a, b]).unwrap();
let index = NodeIndex::root();
let err = store.get_node(root, index).err().unwrap();
assert_eq!(err, MerkleError::RootNotInStore(root));
}
#[test]
fn store_path_opens_from_leaf() {
let a = [Felt::new(1); 4];
let b = [Felt::new(2); 4];
let c = [Felt::new(3); 4];
let d = [Felt::new(4); 4];
let e = [Felt::new(5); 4];
let f = [Felt::new(6); 4];
let g = [Felt::new(7); 4];
let h = [Felt::new(8); 4];
let i = Rpo256::merge(&[a.into(), b.into()]);
let j = Rpo256::merge(&[c.into(), d.into()]);
let k = Rpo256::merge(&[e.into(), f.into()]);
let l = Rpo256::merge(&[g.into(), h.into()]);
let m = Rpo256::merge(&[i.into(), j.into()]);
let n = Rpo256::merge(&[k.into(), l.into()]);
let root = Rpo256::merge(&[m.into(), n.into()]);
let store = MerkleStore::default()
.with_merkle_tree([a, b, c, d, e, f, g, h])
.unwrap();
let path = store
.get_path(root.into(), NodeIndex::make(3, 1))
.unwrap()
.path;
let expected = MerklePath::new([a.into(), j.into(), n.into()].to_vec());
assert_eq!(path, expected);
}
#[test]
fn test_set_node() -> Result<(), MerkleError> {
let mtree = MerkleTree::new(LEAVES4.to_vec())?;
let mut store = MerkleStore::default().with_merkle_tree(LEAVES4)?;
let value = int_to_node(42);
let index = NodeIndex::make(mtree.depth(), 0);
let new_root = store.set_node(mtree.root(), index, value)?.root;
assert_eq!(
store.get_node(new_root, index),
Ok(value),
"Value must have changed"
);
Ok(())
}
#[test]
fn test_constructors() -> Result<(), MerkleError> {
let store = MerkleStore::new().with_merkle_tree(LEAVES4)?;
let mtree = MerkleTree::new(LEAVES4.to_vec())?;
let depth = mtree.depth();
let leaves = 2u64.pow(depth.into());
for index in 0..leaves {
let index = NodeIndex::make(depth, index);
let value_path = store.get_path(mtree.root(), index)?;
assert_eq!(mtree.get_path(index)?, value_path.path);
}
let depth = 32;
let store = MerkleStore::default()
.with_sparse_merkle_tree(depth, KEYS4.into_iter().zip(LEAVES4.into_iter()))?;
let smt = SimpleSmt::new(depth)
.unwrap()
.with_leaves(KEYS4.into_iter().zip(LEAVES4.into_iter()))
.unwrap();
let depth = smt.depth();
for key in KEYS4 {
let index = NodeIndex::make(depth, key);
let value_path = store.get_path(smt.root(), index)?;
assert_eq!(smt.get_path(index)?, value_path.path);
}
let d = 2;
let paths = [
(
0,
LEAVES4[0],
mtree.get_path(NodeIndex::make(d, 0)).unwrap(),
),
(
1,
LEAVES4[1],
mtree.get_path(NodeIndex::make(d, 1)).unwrap(),
),
(
2,
LEAVES4[2],
mtree.get_path(NodeIndex::make(d, 2)).unwrap(),
),
(
3,
LEAVES4[3],
mtree.get_path(NodeIndex::make(d, 3)).unwrap(),
),
];
let store1 = MerkleStore::default().with_merkle_paths(paths.clone())?;
let store2 = MerkleStore::default()
.with_merkle_path(0, LEAVES4[0], mtree.get_path(NodeIndex::make(d, 0))?)?
.with_merkle_path(1, LEAVES4[1], mtree.get_path(NodeIndex::make(d, 1))?)?
.with_merkle_path(2, LEAVES4[2], mtree.get_path(NodeIndex::make(d, 2))?)?
.with_merkle_path(3, LEAVES4[3], mtree.get_path(NodeIndex::make(d, 3))?)?;
let set = MerklePathSet::new(d).with_paths(paths).unwrap();
for key in [0, 1, 2, 3] {
let index = NodeIndex::make(d, key);
let value_path1 = store1.get_path(set.root(), index)?;
let value_path2 = store2.get_path(set.root(), index)?;
assert_eq!(value_path1, value_path2);
let index = NodeIndex::make(d, key);
assert_eq!(set.get_path(index)?, value_path1.path);
}
Ok(())
}
#[test]
fn node_path_should_be_truncated_by_midtier_insert() {
let key = 0b11010010_11001100_11001100_11001100_11001100_11001100_11001100_11001100_u64;
let mut store = MerkleStore::new();
let root: Word = EmptySubtreeRoots::empty_hashes(64)[0].into();
// insert first node - works as expected
let depth = 64;
let node = [Felt::new(key); WORD_SIZE];
let index = NodeIndex::new(depth, key).unwrap();
let root = store.set_node(root, index, node).unwrap().root;
let result = store.get_node(root, index).unwrap();
let path = store.get_path(root, index).unwrap().path;
assert_eq!(node, result);
assert_eq!(path.depth(), depth);
assert!(path.verify(index.value(), result, &root));
// flip the first bit of the key and insert the second node on a different depth
let key = key ^ (1 << 63);
let key = key >> 8;
let depth = 56;
let node = [Felt::new(key); WORD_SIZE];
let index = NodeIndex::new(depth, key).unwrap();
let root = store.set_node(root, index, node).unwrap().root;
let result = store.get_node(root, index).unwrap();
let path = store.get_path(root, index).unwrap().path;
assert_eq!(node, result);
assert_eq!(path.depth(), depth);
assert!(path.verify(index.value(), result, &root));
// attempt to fetch a path for the second node down to depth 64
// should fail because the node inserted at depth 56 removed its sub-tree from the store
let key = key << 8;
let index = NodeIndex::new(64, key).unwrap();
assert!(store.get_node(root, index).is_err());
}
#[test]
fn get_leaf_depth_works_depth_64() {
let mut store = MerkleStore::new();
let mut root: Word = EmptySubtreeRoots::empty_hashes(64)[0].into();
let key = u64::MAX;
// this will create a rainbow tree and test all openings down to depth 64
for d in 0..64 {
let k = key & (u64::MAX >> d);
let node = [Felt::new(k); WORD_SIZE];
let index = NodeIndex::new(64, k).unwrap();
// assert the leaf doesn't exist before the insert. the returned depth should always
// increment with the number of inserted paths, as they intersect one another up to
// the first bits of the used key.
assert_eq!(d, store.get_leaf_depth(root, 64, k).unwrap());
// insert and assert the correct depth
root = store.set_node(root, index, node).unwrap().root;
assert_eq!(64, store.get_leaf_depth(root, 64, k).unwrap());
}
}
#[test]
fn get_leaf_depth_works_with_incremental_depth() {
let mut store = MerkleStore::new();
let mut root: Word = EmptySubtreeRoots::empty_hashes(64)[0].into();
// insert some path to the left of the root and assert it
let key = 0b01001011_10110110_00001101_01110100_00111011_10101101_00000100_01000001_u64;
assert_eq!(0, store.get_leaf_depth(root, 64, key).unwrap());
let depth = 64;
let index = NodeIndex::new(depth, key).unwrap();
let node = [Felt::new(key); WORD_SIZE];
root = store.set_node(root, index, node).unwrap().root;
assert_eq!(depth, store.get_leaf_depth(root, 64, key).unwrap());
// flip the first bit so the key falls to the right of the root, and insert some content at depth 16
let key = 0b11001011_10110110_00000000_00000000_00000000_00000000_00000000_00000000_u64;
assert_eq!(1, store.get_leaf_depth(root, 64, key).unwrap());
let depth = 16;
let index = NodeIndex::new(depth, key >> (64 - depth)).unwrap();
let node = [Felt::new(key); WORD_SIZE];
root = store.set_node(root, index, node).unwrap().root;
assert_eq!(depth, store.get_leaf_depth(root, 64, key).unwrap());
// attempt the sibling of the previous leaf
let key = 0b11001011_10110111_00000000_00000000_00000000_00000000_00000000_00000000_u64;
assert_eq!(16, store.get_leaf_depth(root, 64, key).unwrap());
let index = NodeIndex::new(depth, key >> (64 - depth)).unwrap();
let node = [Felt::new(key); WORD_SIZE];
root = store.set_node(root, index, node).unwrap().root;
assert_eq!(depth, store.get_leaf_depth(root, 64, key).unwrap());
// move down to the next depth and assert correct behavior
let key = 0b11001011_10110100_00000000_00000000_00000000_00000000_00000000_00000000_u64;
assert_eq!(15, store.get_leaf_depth(root, 64, key).unwrap());
let depth = 17;
let index = NodeIndex::new(depth, key >> (64 - depth)).unwrap();
let node = [Felt::new(key); WORD_SIZE];
root = store.set_node(root, index, node).unwrap().root;
assert_eq!(depth, store.get_leaf_depth(root, 64, key).unwrap());
}
#[test]
fn get_leaf_depth_works_with_depth_8() {
let mut store = MerkleStore::new();
let mut root: Word = EmptySubtreeRoots::empty_hashes(8)[0].into();
// insert some random depth-8 keys; `a` diverges from the others at the first bit
let a = 0b01101001_u64;
let b = 0b10011001_u64;
let c = 0b10010110_u64;
let d = 0b11110110_u64;
for k in [a, b, c, d] {
let index = NodeIndex::new(8, k).unwrap();
let node = [Felt::new(k); WORD_SIZE];
root = store.set_node(root, index, node).unwrap().root;
}
// assert all leaves return the inserted depth
for k in [a, b, c, d] {
assert_eq!(8, store.get_leaf_depth(root, 8, k).unwrap());
}
// flip last bit of a and expect it to return the same depth, but for an empty node
assert_eq!(8, store.get_leaf_depth(root, 8, 0b01101000_u64).unwrap());
// flip fourth bit of a and expect an empty node on depth 4
assert_eq!(4, store.get_leaf_depth(root, 8, 0b01111001_u64).unwrap());
// flip third bit of a and expect an empty node on depth 3
assert_eq!(3, store.get_leaf_depth(root, 8, 0b01001001_u64).unwrap());
// flip second bit of a and expect an empty node on depth 2
assert_eq!(2, store.get_leaf_depth(root, 8, 0b00101001_u64).unwrap());
// flip fourth bit of c and expect an empty node on depth 4
assert_eq!(4, store.get_leaf_depth(root, 8, 0b10000110_u64).unwrap());
// flip second bit of d and expect an empty node on depth 3 as depth 2 conflicts with b and c
assert_eq!(3, store.get_leaf_depth(root, 8, 0b10110110_u64).unwrap());
// duplicate the tree under `a` and assert the depth lookup is short-circuited by that sub-tree
let index = NodeIndex::new(8, a).unwrap();
root = store.set_node(root, index, root).unwrap().root;
assert_eq!(
Err(MerkleError::DepthTooBig(9)),
store.get_leaf_depth(root, 8, a)
);
}
#[cfg(feature = "std")]
#[test]
fn test_serialization() -> Result<(), Box<dyn Error>> {
let original = MerkleStore::new().with_merkle_tree(LEAVES4)?;
let decoded = MerkleStore::read_from_bytes(&original.to_bytes())?;
assert_eq!(original, decoded);
Ok(())
}

src/utils.rs Normal file

@@ -0,0 +1,21 @@
use super::Word;
use crate::utils::string::String;
use core::fmt::{self, Write};
// RE-EXPORTS
// ================================================================================================
pub use winter_utils::{
collections, string, uninit_vector, ByteReader, ByteWriter, Deserializable,
DeserializationError, Serializable, SliceReader,
};
/// Converts a [Word] into hex.
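///
/// Illustrative sketch (editor's addition, not compiled): each field element is expected to
/// serialize to 8 bytes, so a `Word` should render as 64 hex digits.
///
/// ```ignore
/// // assuming `Felt` is in scope
/// let w: Word = [Felt::new(1); 4];
/// assert_eq!(word_to_hex(&w)?.len(), 64);
/// ```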
pub fn word_to_hex(w: &Word) -> Result<String, fmt::Error> {
let mut s = String::new();
for byte in w.iter().flat_map(|e| e.to_bytes()) {
write!(s, "{byte:02x}")?;
}
Ok(s)
}