22 Commits

Author SHA1 Message Date
38148bd09c remove duplicated check in falcon verification
Minor change removing a duplicated check of `h_digest==pubkey_com` at
`src/dsa/rpo_falcon512/signature.rs#L100`, which is already done at
`src/dsa/rpo_falcon512/signature.rs#L95`.
2025-01-26 09:11:36 +01:00
Bobbin Threadbare
a424652ba7 Merge branch 'main' into next 2025-01-24 17:34:50 -08:00
polydez
2a5b8ffb21 feat: implement functionality needed for computing openings for recent blocks (#367)
* refactor: make `InnerNode` and `NodeMutation` public
* feat: implement serialization for `LeafIndex`
2025-01-24 17:32:30 -08:00
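For illustration, a minimal self-contained sketch of what round-tripping a leaf index through bytes could look like. The `LeafIndex` below is a hypothetical stand-in, not the crate's API: the real miden-crypto type is generic over tree depth and uses the winter-utils serialization traits, so the names and layout here are assumptions.

// Hypothetical stand-in for miden-crypto's LeafIndex (illustration only).
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct LeafIndex {
    depth: u8,
    value: u64,
}

impl LeafIndex {
    // Serialize as one depth byte followed by the 8-byte little-endian value.
    fn to_bytes(self) -> [u8; 9] {
        let mut out = [0u8; 9];
        out[0] = self.depth;
        out[1..].copy_from_slice(&self.value.to_le_bytes());
        out
    }

    fn from_bytes(bytes: [u8; 9]) -> Self {
        let mut value = [0u8; 8];
        value.copy_from_slice(&bytes[1..]);
        Self { depth: bytes[0], value: u64::from_le_bytes(value) }
    }
}

fn main() {
    let idx = LeafIndex { depth: 64, value: 12_345 };
    let restored = LeafIndex::from_bytes(idx.to_bytes());
    assert_eq!(idx, restored); // the round trip preserves the index
}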
Bobbin Threadbare
0e85398732 chore: update crate version to 0.14.0 and MSRV to 1.84 2025-01-23 00:11:36 -08:00
Bobbin Threadbare
a75dced6e9 chore: fix typo 2025-01-19 14:10:59 -08:00
Bobbin Threadbare
6da2a62b2b docs: add Graviton 4 to hash benchmarks 2025-01-04 12:18:29 -08:00
Grzegorz Świrski
f825c23415 feat: add support for graviton4 (#364) 2025-01-04 12:01:42 -08:00
polydez
7ee6d7fb93 feat: add support for hashmaps in Smt and SimpleSmt (#363) 2025-01-02 10:23:12 -08:00
Bobbin Threadbare
e4373e54c9 chore: update lockfile 2024-12-28 16:38:54 -08:00
Bobbin Threadbare
d470a5087b chore: fix lints 2024-12-26 23:12:49 -08:00
Bobbin Threadbare
43b2954d60 Merge branch 'main' into next 2024-12-26 23:10:26 -08:00
polydez
589839fef1 feat: reverse mutations generation, mutations serialization (#355)
* feat: revert mutations generation, mutations serialization
* tests: check both `apply_mutations` and `apply_mutations_with_reversion`
* feat: add `num_leaves` method for `Smt`
* refactor: improve ad-hoc benchmarks
* chore: update crate version to v0.13.1
2024-12-26 18:16:38 -08:00
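As a rough illustration of the reverse-mutations idea in the commit above (not the actual `Smt`/`MutationSet` API; the store and types below are hypothetical), applying a batch of updates can at the same time produce the set of changes needed to undo it:

use std::collections::HashMap;

// Toy key-value store standing in for an SMT's leaves (illustration only).
type Store = HashMap<u64, u64>;
// A mutation records a key and the value to write (None means remove).
type Mutations = Vec<(u64, Option<u64>)>;

/// Apply `mutations` and return the reverse set that restores the prior state.
fn apply_with_reversion(store: &mut Store, mutations: &Mutations) -> Mutations {
    let mut reversion = Vec::with_capacity(mutations.len());
    for &(key, new_value) in mutations {
        let old_value = match new_value {
            Some(v) => store.insert(key, v),
            None => store.remove(&key),
        };
        // Remembering the previous value is what makes the batch reversible.
        reversion.push((key, old_value));
    }
    reversion
}

fn main() {
    let mut store = Store::from([(1, 10), (2, 20)]);
    let mutations = vec![(1, Some(11)), (3, Some(30)), (2, None)];
    let reversion = apply_with_reversion(&mut store, &mutations);
    // Applying the reversion set rolls the store back to its original state.
    apply_with_reversion(&mut store, &reversion);
    assert_eq!(store, Store::from([(1, 10), (2, 20)]));
}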
Bobbin Threadbare
ef3183fc0b chore: minor benchmark fixes 2024-12-24 01:12:51 -08:00
Bobbin Threadbare
8db71b66d9 chore: fix typo 2024-12-22 00:25:58 -08:00
crStiv
1444bbc0f2 fix: typos of different importance (#359) 2024-12-16 10:27:51 -08:00
RiceChuan
d2181f44c9 docs: remove repetitive words (#352) 2024-12-10 15:34:08 -08:00
Qyriad
b151773b0d feat: implement concurrent Smt construction (#341)
* merkle: add parent() helper function on NodeIndex
* smt: add pairs_to_leaf() to trait
* smt: add sorted_pairs_to_leaves() and test for it
* smt: implement single subtree-8 hashing, w/ benchmarks & tests

This will be composed into depth-8-subtree-based computation of entire
sparse Merkle trees.

* merkle: add a benchmark for constructing 256-balanced trees

This is intended for comparison with the benchmarks from the previous
commit. This benchmark represents the theoretical perfect-efficiency
performance we could possibly (but impractically) get for computing
depth-8 sparse Merkle subtrees.

* smt: test that SparseMerkleTree::build_subtree() is composable

* smt: test that subtree logic can correctly construct an entire tree

This commit ensures that `SparseMerkleTree::build_subtree()` can
correctly compose into building an entire sparse Merkle tree, without
yet getting into potential complications concurrency introduces.

* smt: implement test for basic parallelized subtree computation w/ rayon

Building on the previous commit, this commit implements a test proving
that `SparseMerkleTree::build_subtree()` can be composed into itself not
just concurrently, but in parallel, without issue.

* smt: add from_raw_parts() to trait interface

This commit adds a new required method to the SparseMerkleTree trait,
to allow generic construction from pre-computed parts.

This will be used to add a generic version of `with_entries()` in a
later commit.

* smt: add parallel constructors to Smt and SimpleSmt

What the previous few commits have been leading up to: SparseMerkleTree
now has a function to construct the tree from existing data in parallel.
This is significantly faster than the single-threaded equivalent.
Benchmarks incoming!

---------

Co-authored-by: krushimir <krushimir@reilabs.co>
Co-authored-by: krushimir <kresimir.grofelnik@reilabs.io>
2024-12-04 10:54:41 -08:00
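The subtree approach described in this commit can be sketched with a small self-contained example (assuming rayon as a dependency and a toy hash in place of RPO; this is not the miden-crypto implementation): sorted leaves are split into 256-leaf (depth-8) chunks, each chunk is hashed to its subtree root in parallel, and the resulting roots become the leaves of the next stage up.

use rayon::prelude::*;

// Toy stand-in for the real RPO merge of two child digests (hypothetical).
fn hash_pair(a: u64, b: u64) -> u64 {
    a.rotate_left(13) ^ b.wrapping_mul(0x9E37_79B9_7F4A_7C15)
}

// Hash one fully populated depth-8 subtree (256 leaves) down to its root.
fn build_subtree(leaves: &[u64]) -> u64 {
    let mut level = leaves.to_vec();
    while level.len() > 1 {
        level = level.chunks(2).map(|pair| hash_pair(pair[0], pair[1])).collect();
    }
    level[0]
}

fn main() {
    // 2^16 leaves, already sorted into column order.
    let leaves: Vec<u64> = (0..(1u64 << 16)).collect();

    // Each depth-8 subtree is independent, so the chunks can be hashed in parallel.
    let subtree_roots: Vec<u64> = leaves.par_chunks(256).map(build_subtree).collect();

    // The subtree roots are the leaves of the next stage; repeating the same step
    // per 256-root chunk would compose the full tree level by level.
    let root = build_subtree(&subtree_roots);
    println!("root = {root:#018x}");
}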
Bobbin Threadbare
c64f43b262 chore: merge v0.13.0 release 2024-11-24 22:36:08 -08:00
Bobbin Threadbare
1867f842d3 chore: update changelog 2024-11-24 22:26:51 -08:00
Al-Kindi-0
e1072ecc7f chore: update winterfell dependencies to 0.11 (#346) 2024-11-24 22:20:19 -08:00
Bobbin Threadbare
3909b01993 chore: merge v0.12.0 release from 0xPolygonMiden/next 2024-10-30 15:25:34 -07:00
Bobbin Threadbare
d74e746a7f chore: merge v0.11.0 release 2024-10-17 23:26:04 -07:00
32 changed files with 2358 additions and 340 deletions


@@ -17,7 +17,7 @@ jobs:
matrix:
toolchain: [stable, nightly]
os: [ubuntu]
args: [default, no-std]
args: [default, smt-hashmaps, no-std]
timeout-minutes: 30
steps:
- uses: actions/checkout@main


@@ -1,11 +1,29 @@
## 0.13.0 (TBD)
## 0.14.0 (TBD)
- [BREAKING] Increment minimum supported Rust version to 1.84.
- Removed duplicated check in RpoFalcon512 verification (#368).
## 0.13.2 (2025-01-24)
- Made `InnerNode` and `NodeMutation` public. Implemented (de)serialization of `LeafIndex` (#367).
## 0.13.1 (2024-12-26)
- Generate reverse mutations set on applying of mutations set, implemented serialization of `MutationsSet` (#355).
## 0.13.0 (2024-11-24)
- Fixed a bug in the implementation of `draw_integers` for `RpoRandomCoin` (#343).
- [BREAKING] Refactor error messages and use `thiserror` to derive errors (#344).
- [BREAKING] Updated Winterfell dependency to v0.11 (#346).
- Added support for hashmaps in `Smt` and `SimpleSmt` which gives up to 10x boost in some operations (#363).
## 0.12.0 (2024-10-30)
- [BREAKING] Updated Winterfell dependency to v0.10 (#338).
- Added parallel implementation of `Smt::with_entries()` with significantly better performance when the `concurrent` feature is enabled (#341).
## 0.11.0 (2024-10-17)

Cargo.lock (generated)

@@ -11,6 +11,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "allocator-api2"
version = "0.2.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "683d7910e743518b0e34f1186f92494becacb047c7b6bf616c96772180fef923"
[[package]]
name = "anes"
version = "0.1.6"
@@ -53,17 +59,18 @@ version = "1.1.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79947af37f4177cfead1110013d678905c37501914fba0efea834c3fe9a8d60c"
dependencies = [
"windows-sys 0.59.0",
"windows-sys",
]
[[package]]
name = "anstyle-wincon"
version = "3.0.6"
version = "3.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2109dbce0e72be3ec00bed26e6a7479ca384ad226efdd66db8fa2e3a38c83125"
checksum = "ca3534e77181a9cc07539ad51f2141fe32f6c3ffd4df76db8ad92346b003ae4e"
dependencies = [
"anstyle",
"windows-sys 0.59.0",
"once_cell",
"windows-sys",
]
[[package]]
@@ -92,30 +99,30 @@ checksum = "ace50bade8e6234aa140d9a2f552bbee1db4d353f69b8217bc503490fc1a9f26"
[[package]]
name = "bit-set"
version = "0.5.3"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0700ddab506f33b20a03b13996eccd309a48e5ff77d0d95926aa0210fb4e95f1"
checksum = "08807e080ed7f9d5433fa9b275196cfc35414f66a0c79d864dc51a0d825231a3"
dependencies = [
"bit-vec",
]
[[package]]
name = "bit-vec"
version = "0.6.3"
version = "0.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "349f9b6a179ed607305526ca489b34ad0a41aed5f7980fa90eb03160b69598fb"
checksum = "5e764a1d40d510daf35e07be9eb06e75770908c27d411ee6c92109c9840eaaf7"
[[package]]
name = "bitflags"
version = "2.6.0"
version = "2.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b048fb63fd8b5923fc5aa7b340d8e156aec7ec02f0c78fa8a6ddc2613f6f71de"
checksum = "8f68f53c83ab957f72c32642f3868eec03eb974d1fb82e453128456482613d36"
[[package]]
name = "blake3"
version = "1.5.4"
version = "1.5.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d82033247fd8e890df8f740e407ad4d038debb9eb1f40533fffb32e7d17dc6f7"
checksum = "b8ee0c1824c4dea5b5f81736aff91bae041d2c07ee1192bec91054e10e3e601e"
dependencies = [
"arrayref",
"arrayvec",
@@ -153,9 +160,9 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "cc"
version = "1.2.1"
version = "1.2.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fd9de9f2205d5ef3fd67e685b0df337994ddd4495e2a28d185500d0e1edfea47"
checksum = "13208fcbb66eaeffe09b99fffbe1af420f00a7b35aa99ad683dfc1aa76145229"
dependencies = [
"jobserver",
"libc",
@@ -197,9 +204,9 @@ dependencies = [
[[package]]
name = "clap"
version = "4.5.21"
version = "4.5.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fb3b4b9e5a7c7514dfa52869339ee98b3156b0bfb4e8a77c4ff4babb64b1604f"
checksum = "769b0145982b4b48713e01ec42d61614425f27b7058bda7180a3a41f30104796"
dependencies = [
"clap_builder",
"clap_derive",
@@ -207,9 +214,9 @@ dependencies = [
[[package]]
name = "clap_builder"
version = "4.5.21"
version = "4.5.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b17a95aa67cc7b5ebd32aa5370189aa0d79069ef1c64ce893bd30fb24bff20ec"
checksum = "1b26884eb4b57140e4d2d93652abfa49498b938b3c9179f9fc487b0acc3edad7"
dependencies = [
"anstream",
"anstyle",
@@ -219,9 +226,9 @@ dependencies = [
[[package]]
name = "clap_derive"
version = "4.5.18"
version = "4.5.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ac6a0c7b1a9e9a5186361f67dfa1b88213572f427fb9ab038efb2bd8c582dab"
checksum = "54b755194d6389280185988721fffba69495eed5ee9feeee9a599b53db80318c"
dependencies = [
"heck",
"proc-macro2",
@@ -231,9 +238,9 @@ dependencies = [
[[package]]
name = "clap_lex"
version = "0.7.3"
version = "0.7.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "afb84c814227b90d6895e01398aee0d8033c00e7466aca416fb6a8e0eb19d8a7"
checksum = "f46ad14479a25103f283c0f10005961cf086d8dc42205bb44c46ac563475dca6"
[[package]]
name = "colorchoice"
@@ -249,9 +256,9 @@ checksum = "7c74b8349d32d297c9134b8c88677813a227df8f779daa29bfc29c183fe3dca6"
[[package]]
name = "cpufeatures"
version = "0.2.15"
version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0ca741a962e1b0bff6d724a1a0958b686406e853bb14061f218562e1896f95e6"
checksum = "16b80225097f2e5ae4e7179dd2266824648f3e2f49d9134d584b76389d31c4c3"
dependencies = [
"libc",
]
@@ -294,9 +301,9 @@ dependencies = [
[[package]]
name = "crossbeam-deque"
version = "0.8.5"
version = "0.8.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "613f8cc01fe9cf1a3eb3d7f488fd2fa8388403e97039e2f73692932e291a770d"
checksum = "9dd111b7b7f7d55b72c0a6ae361660ee5853c9af73f70c3c2ef6858b950e2e51"
dependencies = [
"crossbeam-epoch",
"crossbeam-utils",
@@ -313,15 +320,15 @@ dependencies = [
[[package]]
name = "crossbeam-utils"
version = "0.8.20"
version = "0.8.21"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "22ec99545bb0ed0ea7bb9b8e1e9122ea386ff8a48c0922e43f36d45ab09e0e80"
checksum = "d0a5c400df2834b80a4c3327b3aad3a4c4cd4de0629063962b03235697506a28"
[[package]]
name = "crunchy"
version = "0.2.2"
version = "0.2.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7a81dae078cea95a014a339291cec439d2f232ebe854a9d672b796c6afafa9b7"
checksum = "43da5946c66ffcc7745f48db692ffbb10a83bfe0afd96235c5c2a4fb23994929"
[[package]]
name = "crypto-common"
@@ -350,20 +357,26 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "60b1af1c220855b6ceac025d3f6ecdd2b7c4894bfe9cd9bda4fbb4bc7c0d4cf0"
[[package]]
name = "errno"
version = "0.3.9"
name = "equivalent"
version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "534c5cf6194dfab3db3242765c03bbe257cf92f22b38f6bc0c58d59108a820ba"
checksum = "5443807d6dff69373d433ab9ef5378ad8df50ca6298caf15de6e52e24aaf54d5"
[[package]]
name = "errno"
version = "0.3.10"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "33d852cb9b869c2a9b3df2f71a3074817f01e1844f839a144f5fcef059a4eb5d"
dependencies = [
"libc",
"windows-sys 0.52.0",
"windows-sys",
]
[[package]]
name = "fastrand"
version = "2.2.0"
version = "2.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "486f806e73c5707928240ddc295403b1b93c96a02038563881c4a2fd84b81ac4"
checksum = "37909eebbb50d72f9059c3b6d82c0463f2ff062c9e95845c43a6c9c0355411be"
[[package]]
name = "fnv"
@@ -371,6 +384,12 @@ version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f9eec918d3f24069decb9af1554cad7c880e2da24a9afd88aca000531ab82c1"
[[package]]
name = "foldhash"
version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a0d2fde1f7b3d48b8395d5f2de76c18a528bd6a9cdde438df747bfcba3e05d6f"
[[package]]
name = "generic-array"
version = "0.14.7"
@@ -396,9 +415,9 @@ dependencies = [
[[package]]
name = "glob"
version = "0.3.1"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b"
checksum = "a8d1add55171497b4705a648c6b583acafb01d58050a51727785f0b2c8e0a2b2"
[[package]]
name = "half"
@@ -410,6 +429,18 @@ dependencies = [
"crunchy",
]
[[package]]
name = "hashbrown"
version = "0.15.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "bf151400ff0baff5465007dd2f3e717f3fe502074ca563069ce3a6629d07b289"
dependencies = [
"allocator-api2",
"equivalent",
"foldhash",
"serde",
]
[[package]]
name = "heck"
version = "0.5.0"
@@ -430,13 +461,13 @@ checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
[[package]]
name = "is-terminal"
version = "0.4.13"
version = "0.4.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "261f68e344040fbd0edea105bef17c66edf46f984ddb1115b775ce31be948f4b"
checksum = "e19b23d53f35ce9f56aebc7d1bb4e6ac1e9c0db7ac85c8d1760c04379edced37"
dependencies = [
"hermit-abi",
"libc",
"windows-sys 0.52.0",
"windows-sys",
]
[[package]]
@@ -456,9 +487,9 @@ dependencies = [
[[package]]
name = "itoa"
version = "1.0.13"
version = "1.0.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "540654e97a3f4470a492cd30ff187bc95d89557a903a2bbf112e2fae98104ef2"
checksum = "d75a2a4b1b190afb6f5425f10f6a8f959d2ea0b9c2b1d79553551850539e4674"
[[package]]
name = "jobserver"
@@ -471,10 +502,11 @@ dependencies = [
[[package]]
name = "js-sys"
version = "0.3.72"
version = "0.3.77"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6a88f1bda2bd75b0452a14784937d796722fdebfe50df998aeb3f0b7603019a9"
checksum = "1cfaf33c695fc6e08064efbc1f72ec937429614f25eef83af942d0e227c3a28f"
dependencies = [
"once_cell",
"wasm-bindgen",
]
@@ -495,9 +527,9 @@ checksum = "bbd2bcb4c963f2ddae06a2efc7e9f3591312473c50c6685e1f298068316e66fe"
[[package]]
name = "libc"
version = "0.2.164"
version = "0.2.169"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "433bfe06b8c75da9b2e3fbea6e5329ff87748f0b144ef75306e674c3f6f7c13f"
checksum = "b5aba8db14291edd000dfcc4d620c7ebfb122c613afb886ca8803fa4e128a20a"
[[package]]
name = "libm"
@@ -507,15 +539,15 @@ checksum = "8355be11b20d696c8f18f6cc018c4e372165b1fa8126cef092399c9951984ffa"
[[package]]
name = "linux-raw-sys"
version = "0.4.14"
version = "0.4.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78b3ae25bc7c8c38cec158d1f2757ee79e9b3740fbc7ccf0e59e4b08d793fa89"
checksum = "d26c52dbd32dccf2d10cac7725f8eae5296885fb5703b261f7d0a0739ec807ab"
[[package]]
name = "log"
version = "0.4.22"
version = "0.4.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
checksum = "04cbf5b083de1c7e0222a7a51dbfdba1cbe1c6ab0b15e29fff3f6c077fd9cd9f"
[[package]]
name = "memchr"
@@ -525,7 +557,7 @@ checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
[[package]]
name = "miden-crypto"
version = "0.13.0"
version = "0.14.0"
dependencies = [
"assert_matches",
"blake3",
@@ -534,6 +566,7 @@ dependencies = [
"criterion",
"getrandom",
"glob",
"hashbrown",
"hex",
"num",
"num-complex",
@@ -541,6 +574,7 @@ dependencies = [
"rand",
"rand_chacha",
"rand_core",
"rayon",
"seq-macro",
"serde",
"sha3",
@@ -676,18 +710,18 @@ dependencies = [
[[package]]
name = "proc-macro2"
version = "1.0.92"
version = "1.0.93"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "37d3544b3f2748c54e147655edb5025752e2303145b5aefb3c3ea2c78b973bb0"
checksum = "60946a68e5f9d28b0dc1c21bb8a97ee7d018a8b322fa57838ba31cc878e22d99"
dependencies = [
"unicode-ident",
]
[[package]]
name = "proptest"
version = "1.5.0"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4c2511913b88df1637da85cc8d96ec8e43a3f8bb8ccb71ee1ac240d6f3df58d"
checksum = "14cae93065090804185d3b75f0bf93b8eeda30c7a9b4a33d3bdb3988d6229e50"
dependencies = [
"bit-set",
"bit-vec",
@@ -711,9 +745,9 @@ checksum = "a1d01941d82fa2ab50be1e79e6714289dd7cde78eba4c074bc5a4374f650dfe0"
[[package]]
name = "quote"
version = "1.0.37"
version = "1.0.38"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b5b9d34b8991d19d98081b46eacdd8eb58c6f2b201139f7c5f643cc155a633af"
checksum = "0e4dccaaaf89514f546c693ddc140f729f958c247918a13380cccc6078391acc"
dependencies = [
"proc-macro2",
]
@@ -808,17 +842,23 @@ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "rustix"
version = "0.38.41"
version = "0.38.44"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d7f649912bc1495e167a6edee79151c84b1bad49748cb4f1f1167f459f6224f6"
checksum = "fdb5bc1ae2baa591800df16c9ca78619bf65c0488b41b96ccec5d11220d8c154"
dependencies = [
"bitflags",
"errno",
"libc",
"linux-raw-sys",
"windows-sys 0.52.0",
"windows-sys",
]
[[package]]
name = "rustversion"
version = "1.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f7c45b9784283f1b2e7fb61b42047c2fd678ef0960d4f6f1eba131594cc369d4"
[[package]]
name = "rusty-fork"
version = "0.3.0"
@@ -854,18 +894,18 @@ checksum = "a3f0bf26fd526d2a95683cd0f87bf103b8539e2ca1ef48ce002d67aad59aa0b4"
[[package]]
name = "serde"
version = "1.0.215"
version = "1.0.217"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6513c1ad0b11a9376da888e3e0baa0077f1aed55c17f50e7b2397136129fb88f"
checksum = "02fc4265df13d6fa1d00ecff087228cc0a2b5f3c0e87e258d8b94a156e984c70"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.215"
version = "1.0.217"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ad1e866f866923f252f05c889987993144fb74e722403468a4ebd70c3cd756c0"
checksum = "5a9bf7cf98d04a2b28aead066b7496853d4779c9cc183c440dbac457641e19a0"
dependencies = [
"proc-macro2",
"quote",
@@ -874,9 +914,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.133"
version = "1.0.137"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7fceb2473b9166b2294ef05efcb65a3db80803f0b03ef86a5fc88a2b85ee377"
checksum = "930cfb6e6abf99298aaad7d29abbef7a9999a9a8806a40088f55f0dcec03146b"
dependencies = [
"itoa",
"memchr",
@@ -908,9 +948,9 @@ checksum = "7da8b5736845d9f2fcb837ea5d9e2628564b3b043a70948a3f0b778838c5fb4f"
[[package]]
name = "syn"
version = "2.0.89"
version = "2.0.96"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d46482f1c1c87acd84dea20c1bf5ebff4c757009ed6bf19cfd36fb10e92c4e"
checksum = "d5d0adab1ae378d7f53bdebc67a39f1f151407ef230f0ce2883572f5d8985c80"
dependencies = [
"proc-macro2",
"quote",
@@ -919,31 +959,32 @@ dependencies = [
[[package]]
name = "tempfile"
version = "3.14.0"
version = "3.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28cce251fcbc87fac86a866eeb0d6c2d536fc16d06f184bb61aeae11aa4cee0c"
checksum = "9a8a559c81686f576e8cd0290cd2a24a2a9ad80c98b3478856500fcbd7acd704"
dependencies = [
"cfg-if",
"fastrand",
"getrandom",
"once_cell",
"rustix",
"windows-sys 0.59.0",
"windows-sys",
]
[[package]]
name = "thiserror"
version = "2.0.3"
version = "2.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c006c85c7651b3cf2ada4584faa36773bd07bac24acfb39f3c431b36d7e667aa"
checksum = "d452f284b73e6d76dd36758a0c8684b1d5be31f92b89d07fd5822175732206fc"
dependencies = [
"thiserror-impl",
]
[[package]]
name = "thiserror-impl"
version = "2.0.3"
version = "2.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f077553d607adc1caf65430528a576c757a71ed73944b66ebb58ef2bbd243568"
checksum = "26afc1baea8a989337eeb52b6e72a039780ce45c3edfcc9c5b9d112feeb173c2"
dependencies = [
"proc-macro2",
"quote",
@@ -1017,24 +1058,24 @@ checksum = "9c8d87e72b64a3b4db28d11ce29237c246188f4f51057d65a7eab63b7987e423"
[[package]]
name = "wasm-bindgen"
version = "0.2.95"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "128d1e363af62632b8eb57219c8fd7877144af57558fb2ef0368d0087bddeb2e"
checksum = "1edc8929d7499fc4e8f0be2262a241556cfc54a0bea223790e71446f2aab1ef5"
dependencies = [
"cfg-if",
"once_cell",
"rustversion",
"wasm-bindgen-macro",
]
[[package]]
name = "wasm-bindgen-backend"
version = "0.2.95"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cb6dd4d3ca0ddffd1dd1c9c04f94b868c37ff5fac97c30b97cff2d74fce3a358"
checksum = "2f0a0651a5c2bc21487bde11ee802ccaf4c51935d0d3d42a6101f98161700bc6"
dependencies = [
"bumpalo",
"log",
"once_cell",
"proc-macro2",
"quote",
"syn",
@@ -1043,9 +1084,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro"
version = "0.2.95"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e79384be7f8f5a9dd5d7167216f022090cf1f9ec128e6e6a482a2cb5c5422c56"
checksum = "7fe63fc6d09ed3792bd0897b314f53de8e16568c2b3f7982f468c0bf9bd0b407"
dependencies = [
"quote",
"wasm-bindgen-macro-support",
@@ -1053,9 +1094,9 @@ dependencies = [
[[package]]
name = "wasm-bindgen-macro-support"
version = "0.2.95"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "26c6ab57572f7a24a4985830b120de1594465e5d500f24afe89e16b4e833ef68"
checksum = "8ae87ea40c9f689fc23f209965b6fb8a99ad69aeeb0231408be24920604395de"
dependencies = [
"proc-macro2",
"quote",
@@ -1066,15 +1107,18 @@ dependencies = [
[[package]]
name = "wasm-bindgen-shared"
version = "0.2.95"
version = "0.2.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "65fc09f10666a9f147042251e0dda9c18f166ff7de300607007e96bdebc1068d"
checksum = "1a05d73b933a847d6cccdda8f838a22ff101ad9bf93e33684f39c1f5f0eece3d"
dependencies = [
"unicode-ident",
]
[[package]]
name = "web-sys"
version = "0.3.72"
version = "0.3.77"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f6488b90108c040df0fe62fa815cbdee25124641df01814dd7282749234c6112"
checksum = "33b6dd2ef9186f1f2072e409e99cd22a975331a6b3591b12c764e0e55c60d5d2"
dependencies = [
"js-sys",
"wasm-bindgen",
@@ -1086,16 +1130,7 @@ version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.59.0",
]
[[package]]
name = "windows-sys"
version = "0.52.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "282be5f36a8ce781fad8c8ae18fa3f9beff57ec1b52cb3de0789201425d9a33d"
dependencies = [
"windows-targets",
"windows-sys",
]
[[package]]
@@ -1173,9 +1208,9 @@ checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "winter-crypto"
version = "0.10.2"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3fcae1ada055aa10554910ecffc106cb116a19dba11ac91390ef982f94adb9c5"
checksum = "67c57748fd2da77742be601f03eda639ff6046879738fd1faae86e80018263cb"
dependencies = [
"blake3",
"sha3",
@@ -1185,9 +1220,9 @@ dependencies = [
[[package]]
name = "winter-math"
version = "0.10.2"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "82479f94efc0b5374a93e2074ba46ef404384fb1ea6e35a847febec53096509b"
checksum = "6020c17839fa107ce4a7cc178e407ebbc24adfac1980f4fa2111198e052700ab"
dependencies = [
"serde",
"winter-utils",
@@ -1195,9 +1230,9 @@ dependencies = [
[[package]]
name = "winter-rand-utils"
version = "0.10.1"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4a7616d11fcc26552dada45c803a884ac97c253218835b83a2c63e1c2a988639"
checksum = "226e4c455f6eb72f64ac6eeb7642df25e21ff2280a4f6b09db75392ad6b390ef"
dependencies = [
"rand",
"winter-utils",
@@ -1205,9 +1240,9 @@ dependencies = [
[[package]]
name = "winter-utils"
version = "0.10.2"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1f948e71ffd482aa13d0ec3349047f81ecdb89f3b3287577973dcbf092a25fb4"
checksum = "1507ef312ea5569d54c2c7446a18b82143eb2a2e21f5c3ec7cfbe8200c03bd7c"
[[package]]
name = "zerocopy"


@@ -1,16 +1,16 @@
[package]
name = "miden-crypto"
version = "0.13.0"
version = "0.14.0"
description = "Miden Cryptographic primitives"
authors = ["miden contributors"]
readme = "README.md"
license = "MIT"
repository = "https://github.com/0xPolygonMiden/crypto"
documentation = "https://docs.rs/miden-crypto/0.13.0"
documentation = "https://docs.rs/miden-crypto/0.14.0"
categories = ["cryptography", "no-std"]
keywords = ["miden", "crypto", "hash", "merkle"]
edition = "2021"
rust-version = "1.82"
rust-version = "1.84"
[[bin]]
name = "miden-crypto"
@@ -27,13 +27,29 @@ harness = false
name = "smt"
harness = false
[[bench]]
name = "smt-subtree"
harness = false
required-features = ["internal"]
[[bench]]
name = "merkle"
harness = false
[[bench]]
name = "smt-with-entries"
harness = false
[[bench]]
name = "store"
harness = false
[features]
default = ["std"]
concurrent = ["dep:rayon"]
default = ["std", "concurrent"]
executable = ["dep:clap", "dep:rand-utils", "std"]
smt_hashmaps = ["dep:hashbrown"]
internal = []
serde = ["dep:serde", "serde?/alloc", "winter-math/serde"]
std = [
"blake3/std",
@@ -48,26 +64,28 @@ std = [
[dependencies]
blake3 = { version = "1.5", default-features = false }
clap = { version = "4.5", optional = true, features = ["derive"] }
hashbrown = { version = "0.15", optional = true, features = ["serde"] }
num = { version = "0.4", default-features = false, features = ["alloc", "libm"] }
num-complex = { version = "0.4", default-features = false }
rand = { version = "0.8", default-features = false }
rand_core = { version = "0.6", default-features = false }
rand-utils = { version = "0.10", package = "winter-rand-utils", optional = true }
rand-utils = { version = "0.11", package = "winter-rand-utils", optional = true }
rayon = { version = "1.10", optional = true }
serde = { version = "1.0", default-features = false, optional = true, features = ["derive"] }
sha3 = { version = "0.10", default-features = false }
thiserror = { version = "2.0", default-features = false }
winter-crypto = { version = "0.10", default-features = false }
winter-math = { version = "0.10", default-features = false }
winter-utils = { version = "0.10", default-features = false }
winter-crypto = { version = "0.11", default-features = false }
winter-math = { version = "0.11", default-features = false }
winter-utils = { version = "0.11", default-features = false }
[dev-dependencies]
assert_matches = { version = "1.5", default-features = false }
criterion = { version = "0.5", features = ["html_reports"] }
getrandom = { version = "0.2", features = ["js"] }
hex = { version = "0.4", default-features = false, features = ["alloc"] }
proptest = "1.5"
proptest = "1.6"
rand_chacha = { version = "0.3", default-features = false }
rand-utils = { version = "0.10", package = "winter-rand-utils" }
rand-utils = { version = "0.11", package = "winter-rand-utils" }
seq-macro = { version = "0.3" }
[build-dependencies]


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2024 Polygon Miden
Copyright (c) 2025 Polygon Miden
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -46,6 +46,9 @@ doc: ## Generate and check documentation
test-default: ## Run tests with default features
$(DEBUG_OVERFLOW_INFO) cargo nextest run --profile default --release --all-features
.PHONY: test-smt-hashmaps
test-smt-hashmaps: ## Run tests with `smt_hashmaps` feature enabled
$(DEBUG_OVERFLOW_INFO) cargo nextest run --profile default --release --features smt_hashmaps
.PHONY: test-no-std
test-no-std: ## Run tests with `no-default-features` (std)
@@ -53,7 +56,7 @@ test-no-std: ## Run tests with `no-default-features` (std)
.PHONY: test
test: test-default test-no-std ## Run all tests
test: test-default test-smt-hashmaps test-no-std ## Run all tests
# --- checking ------------------------------------------------------------------------------------
@@ -81,6 +84,10 @@ build-sve: ## Build with sve support
# --- benchmarking --------------------------------------------------------------------------------
.PHONY: bench-tx
bench-tx: ## Run crypto benchmarks
cargo bench
.PHONY: bench
bench: ## Run crypto benchmarks
cargo bench --features concurrent
.PHONY: bench-smt-concurrent
bench-smt-concurrent: ## Run SMT benchmarks with concurrent feature
cargo run --release --features concurrent,executable -- --size 1000000


@@ -3,7 +3,7 @@
[![LICENSE](https://img.shields.io/badge/license-MIT-blue.svg)](https://github.com/0xPolygonMiden/crypto/blob/main/LICENSE)
[![test](https://github.com/0xPolygonMiden/crypto/actions/workflows/test.yml/badge.svg)](https://github.com/0xPolygonMiden/crypto/actions/workflows/test.yml)
[![build](https://github.com/0xPolygonMiden/crypto/actions/workflows/build.yml/badge.svg)](https://github.com/0xPolygonMiden/crypto/actions/workflows/build.yml)
[![RUST_VERSION](https://img.shields.io/badge/rustc-1.82+-lightgray.svg)](https://www.rust-lang.org/tools/install)
[![RUST_VERSION](https://img.shields.io/badge/rustc-1.84+-lightgray.svg)](https://www.rust-lang.org/tools/install)
[![CRATE](https://img.shields.io/crates/v/miden-crypto)](https://crates.io/crates/miden-crypto)
This crate contains cryptographic primitives used in Polygon Miden.
@@ -60,10 +60,12 @@ make
This crate can be compiled with the following features:
- `concurrent`- enabled by default; enables multi-threaded implementation of `Smt::with_entries()` which significantly improves performance on multi-core CPUs.
- `std` - enabled by default and relies on the Rust standard library.
- `no_std` does not rely on the Rust standard library and enables compilation to WebAssembly.
- `smt_hashmaps` - uses hashbrown hashmaps in the SMT implementation, which significantly improves the performance of SMT updates. Key ordering in SMT iterators is not guaranteed when this feature is enabled.
Both of these features imply the use of [alloc](https://doc.rust-lang.org/alloc/) to support heap-allocated collections.
All of these features imply the use of [alloc](https://doc.rust-lang.org/alloc/) to support heap-allocated collections.
To compile with `no_std`, disable default features via `--no-default-features` flag or using the following command:


@@ -1,7 +1,8 @@
#include <stddef.h>
#include <arm_sve.h>
#include "library.h"
#include "rpo_hash.h"
#include "rpo_hash_128bit.h"
#include "rpo_hash_256bit.h"
// The STATE_WIDTH of RPO hash is 12x u64 elements.
// The current generation of SVE-enabled processors - Neoverse V1
@@ -31,48 +32,24 @@
bool add_constants_and_apply_sbox(uint64_t state[STATE_WIDTH], uint64_t constants[STATE_WIDTH]) {
const uint64_t vl = svcntd(); // number of u64 numbers in one SVE vector
if (vl != 4) {
if (vl == 2) {
return add_constants_and_apply_sbox_128(state, constants);
} else if (vl == 4) {
return add_constants_and_apply_sbox_256(state, constants);
} else {
return false;
}
svbool_t ptrue = svptrue_b64();
svuint64_t state1 = svld1(ptrue, state + 0*vl);
svuint64_t state2 = svld1(ptrue, state + 1*vl);
svuint64_t const1 = svld1(ptrue, constants + 0*vl);
svuint64_t const2 = svld1(ptrue, constants + 1*vl);
add_constants(ptrue, &state1, &const1, &state2, &const2, state+8, constants+8);
apply_sbox(ptrue, &state1, &state2, state+8);
svst1(ptrue, state + 0*vl, state1);
svst1(ptrue, state + 1*vl, state2);
return true;
}
bool add_constants_and_apply_inv_sbox(uint64_t state[STATE_WIDTH], uint64_t constants[STATE_WIDTH]) {
const uint64_t vl = svcntd(); // number of u64 numbers in one SVE vector
if (vl != 4) {
if (vl == 2) {
return add_constants_and_apply_inv_sbox_128(state, constants);
} else if (vl == 4) {
return add_constants_and_apply_inv_sbox_256(state, constants);
} else {
return false;
}
svbool_t ptrue = svptrue_b64();
svuint64_t state1 = svld1(ptrue, state + 0 * vl);
svuint64_t state2 = svld1(ptrue, state + 1 * vl);
svuint64_t const1 = svld1(ptrue, constants + 0 * vl);
svuint64_t const2 = svld1(ptrue, constants + 1 * vl);
add_constants(ptrue, &state1, &const1, &state2, &const2, state + 8, constants + 8);
apply_inv_sbox(ptrue, &state1, &state2, state + 8);
svst1(ptrue, state + 0 * vl, state1);
svst1(ptrue, state + 1 * vl, state2);
return true;
}


@@ -0,0 +1,318 @@
#ifndef RPO_SVE_RPO_HASH_128_H
#define RPO_SVE_RPO_HASH_128_H
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#define STATE_WIDTH 12
#define COPY_128(NAME, VIN1, VIN2, VIN3, VIN4, SIN) \
svuint64_t NAME ## _1 = VIN1; \
svuint64_t NAME ## _2 = VIN2; \
svuint64_t NAME ## _3 = VIN3; \
svuint64_t NAME ## _4 = VIN4; \
uint64_t NAME ## _tail[4]; \
memcpy(NAME ## _tail, SIN, 4 * sizeof(uint64_t))
#define MULTIPLY_128(PRED, DEST, OP) \
mul_128(PRED, &DEST ## _1, &OP ## _1, &DEST ## _2, &OP ## _2, &DEST ## _3, &OP ## _3, &DEST ## _4, &OP ## _4, DEST ## _tail, OP ## _tail)
#define SQUARE_128(PRED, NAME) \
sq_128(PRED, &NAME ## _1, &NAME ## _2, &NAME ## _3, &NAME ## _4, NAME ## _tail)
#define SQUARE_DEST_128(PRED, DEST, SRC) \
COPY_128(DEST, SRC ## _1, SRC ## _2, SRC ## _3, SRC ## _4, SRC ## _tail); \
SQUARE_128(PRED, DEST);
#define POW_ACC_128(PRED, NAME, CNT, TAIL) \
for (size_t i = 0; i < CNT; i++) { \
SQUARE_128(PRED, NAME); \
} \
MULTIPLY_128(PRED, NAME, TAIL);
#define POW_ACC_DEST(PRED, DEST, CNT, HEAD, TAIL) \
COPY_128(DEST, HEAD ## _1, HEAD ## _2, HEAD ## _3, HEAD ## _4, HEAD ## _tail); \
POW_ACC_128(PRED, DEST, CNT, TAIL)
extern inline void add_constants_128(
svbool_t pg,
svuint64_t *state1,
svuint64_t *const1,
svuint64_t *state2,
svuint64_t *const2,
svuint64_t *state3,
svuint64_t *const3,
svuint64_t *state4,
svuint64_t *const4,
uint64_t *state_tail,
uint64_t *const_tail
) {
uint64_t Ms = 0xFFFFFFFF00000001ull;
svuint64_t Mv = svindex_u64(Ms, 0);
uint64_t p_1 = Ms - const_tail[0];
uint64_t p_2 = Ms - const_tail[1];
uint64_t p_3 = Ms - const_tail[2];
uint64_t p_4 = Ms - const_tail[3];
uint64_t x_1, x_2, x_3, x_4;
uint32_t adj_1 = -__builtin_sub_overflow(state_tail[0], p_1, &x_1);
uint32_t adj_2 = -__builtin_sub_overflow(state_tail[1], p_2, &x_2);
uint32_t adj_3 = -__builtin_sub_overflow(state_tail[2], p_3, &x_3);
uint32_t adj_4 = -__builtin_sub_overflow(state_tail[3], p_4, &x_4);
state_tail[0] = x_1 - (uint64_t)adj_1;
state_tail[1] = x_2 - (uint64_t)adj_2;
state_tail[2] = x_3 - (uint64_t)adj_3;
state_tail[3] = x_4 - (uint64_t)adj_4;
svuint64_t p1 = svsub_x(pg, Mv, *const1);
svuint64_t p2 = svsub_x(pg, Mv, *const2);
svuint64_t p3 = svsub_x(pg, Mv, *const3);
svuint64_t p4 = svsub_x(pg, Mv, *const4);
svuint64_t x1 = svsub_x(pg, *state1, p1);
svuint64_t x2 = svsub_x(pg, *state2, p2);
svuint64_t x3 = svsub_x(pg, *state3, p3);
svuint64_t x4 = svsub_x(pg, *state4, p4);
svbool_t pt1 = svcmplt_u64(pg, *state1, p1);
svbool_t pt2 = svcmplt_u64(pg, *state2, p2);
svbool_t pt3 = svcmplt_u64(pg, *state3, p3);
svbool_t pt4 = svcmplt_u64(pg, *state4, p4);
*state1 = svsub_m(pt1, x1, (uint32_t)-1);
*state2 = svsub_m(pt2, x2, (uint32_t)-1);
*state3 = svsub_m(pt3, x3, (uint32_t)-1);
*state4 = svsub_m(pt4, x4, (uint32_t)-1);
}
extern inline void mul_128(
svbool_t pg,
svuint64_t *r1,
const svuint64_t *op1,
svuint64_t *r2,
const svuint64_t *op2,
svuint64_t *r3,
const svuint64_t *op3,
svuint64_t *r4,
const svuint64_t *op4,
uint64_t *r_tail,
const uint64_t *op_tail
) {
__uint128_t x_1 = r_tail[0];
__uint128_t x_2 = r_tail[1];
__uint128_t x_3 = r_tail[2];
__uint128_t x_4 = r_tail[3];
x_1 *= (__uint128_t) op_tail[0];
x_2 *= (__uint128_t) op_tail[1];
x_3 *= (__uint128_t) op_tail[2];
x_4 *= (__uint128_t) op_tail[3];
uint64_t x0_1 = x_1;
uint64_t x0_2 = x_2;
uint64_t x0_3 = x_3;
uint64_t x0_4 = x_4;
svuint64_t l1 = svmul_x(pg, *r1, *op1);
svuint64_t l2 = svmul_x(pg, *r2, *op2);
svuint64_t l3 = svmul_x(pg, *r3, *op3);
svuint64_t l4 = svmul_x(pg, *r4, *op4);
uint64_t x1_1 = (x_1 >> 64);
uint64_t x1_2 = (x_2 >> 64);
uint64_t x1_3 = (x_3 >> 64);
uint64_t x1_4 = (x_4 >> 64);
uint64_t a_1, a_2, a_3, a_4;
uint64_t e_1 = __builtin_add_overflow(x0_1, (x0_1 << 32), &a_1);
uint64_t e_2 = __builtin_add_overflow(x0_2, (x0_2 << 32), &a_2);
uint64_t e_3 = __builtin_add_overflow(x0_3, (x0_3 << 32), &a_3);
uint64_t e_4 = __builtin_add_overflow(x0_4, (x0_4 << 32), &a_4);
svuint64_t ls1 = svlsl_x(pg, l1, 32);
svuint64_t ls2 = svlsl_x(pg, l2, 32);
svuint64_t ls3 = svlsl_x(pg, l3, 32);
svuint64_t ls4 = svlsl_x(pg, l4, 32);
svuint64_t a1 = svadd_x(pg, l1, ls1);
svuint64_t a2 = svadd_x(pg, l2, ls2);
svuint64_t a3 = svadd_x(pg, l3, ls3);
svuint64_t a4 = svadd_x(pg, l4, ls4);
svbool_t e1 = svcmplt(pg, a1, l1);
svbool_t e2 = svcmplt(pg, a2, l2);
svbool_t e3 = svcmplt(pg, a3, l3);
svbool_t e4 = svcmplt(pg, a4, l4);
svuint64_t as1 = svlsr_x(pg, a1, 32);
svuint64_t as2 = svlsr_x(pg, a2, 32);
svuint64_t as3 = svlsr_x(pg, a3, 32);
svuint64_t as4 = svlsr_x(pg, a4, 32);
svuint64_t b1 = svsub_x(pg, a1, as1);
svuint64_t b2 = svsub_x(pg, a2, as2);
svuint64_t b3 = svsub_x(pg, a3, as3);
svuint64_t b4 = svsub_x(pg, a4, as4);
b1 = svsub_m(e1, b1, 1);
b2 = svsub_m(e2, b2, 1);
b3 = svsub_m(e3, b3, 1);
b4 = svsub_m(e4, b4, 1);
uint64_t b_1 = a_1 - (a_1 >> 32) - e_1;
uint64_t b_2 = a_2 - (a_2 >> 32) - e_2;
uint64_t b_3 = a_3 - (a_3 >> 32) - e_3;
uint64_t b_4 = a_4 - (a_4 >> 32) - e_4;
uint64_t r_1, r_2, r_3, r_4;
uint32_t c_1 = __builtin_sub_overflow(x1_1, b_1, &r_1);
uint32_t c_2 = __builtin_sub_overflow(x1_2, b_2, &r_2);
uint32_t c_3 = __builtin_sub_overflow(x1_3, b_3, &r_3);
uint32_t c_4 = __builtin_sub_overflow(x1_4, b_4, &r_4);
svuint64_t h1 = svmulh_x(pg, *r1, *op1);
svuint64_t h2 = svmulh_x(pg, *r2, *op2);
svuint64_t h3 = svmulh_x(pg, *r3, *op3);
svuint64_t h4 = svmulh_x(pg, *r4, *op4);
svuint64_t tr1 = svsub_x(pg, h1, b1);
svuint64_t tr2 = svsub_x(pg, h2, b2);
svuint64_t tr3 = svsub_x(pg, h3, b3);
svuint64_t tr4 = svsub_x(pg, h4, b4);
svbool_t c1 = svcmplt_u64(pg, h1, b1);
svbool_t c2 = svcmplt_u64(pg, h2, b2);
svbool_t c3 = svcmplt_u64(pg, h3, b3);
svbool_t c4 = svcmplt_u64(pg, h4, b4);
*r1 = svsub_m(c1, tr1, (uint32_t) -1);
*r2 = svsub_m(c2, tr2, (uint32_t) -1);
*r3 = svsub_m(c3, tr3, (uint32_t) -1);
*r4 = svsub_m(c4, tr4, (uint32_t) -1);
uint32_t minus1_1 = 0 - c_1;
uint32_t minus1_2 = 0 - c_2;
uint32_t minus1_3 = 0 - c_3;
uint32_t minus1_4 = 0 - c_4;
r_tail[0] = r_1 - (uint64_t)minus1_1;
r_tail[1] = r_2 - (uint64_t)minus1_2;
r_tail[2] = r_3 - (uint64_t)minus1_3;
r_tail[3] = r_4 - (uint64_t)minus1_4;
}
extern inline void sq_128(svbool_t pg, svuint64_t *a, svuint64_t *b, svuint64_t *c, svuint64_t *d, uint64_t *e) {
mul_128(pg, a, a, b, b, c, c, d, d, e, e);
}
extern inline void apply_sbox_128(
svbool_t pg,
svuint64_t *state1,
svuint64_t *state2,
svuint64_t *state3,
svuint64_t *state4,
uint64_t *state_tail
) {
COPY_128(x, *state1, *state2, *state3, *state4, state_tail); // copy input to x
SQUARE_128(pg, x); // x contains input^2
mul_128(pg, state1, &x_1, state2, &x_2, state3, &x_3, state4, &x_4, state_tail, x_tail); // state contains input^3
SQUARE_128(pg, x); // x contains input^4
mul_128(pg, state1, &x_1, state2, &x_2, state3, &x_3, state4, &x_4, state_tail, x_tail); // state contains input^7
}
extern inline void apply_inv_sbox_128(
svbool_t pg,
svuint64_t *state1,
svuint64_t *state2,
svuint64_t *state3,
svuint64_t *state4,
uint64_t *state_tail
) {
// base^10
COPY_128(t1, *state1, *state2, *state3, *state4, state_tail);
SQUARE_128(pg, t1);
// base^100
SQUARE_DEST_128(pg, t2, t1);
// base^100100
POW_ACC_DEST(pg, t3, 3, t2, t2);
// base^100100100100
POW_ACC_DEST(pg, t4, 6, t3, t3);
// compute base^100100100100100100100100
POW_ACC_DEST(pg, t5, 12, t4, t4);
// compute base^100100100100100100100100100100
POW_ACC_DEST(pg, t6, 6, t5, t3);
// compute base^1001001001001001001001001001000100100100100100100100100100100
POW_ACC_DEST(pg, t7, 31, t6, t6);
// compute base^1001001001001001001001001001000110110110110110110110110110110111
SQUARE_128(pg, t7);
MULTIPLY_128(pg, t7, t6);
SQUARE_128(pg, t7);
SQUARE_128(pg, t7);
MULTIPLY_128(pg, t7, t1);
MULTIPLY_128(pg, t7, t2);
mul_128(pg, state1, &t7_1, state2, &t7_2, state3, &t7_3, state4, &t7_4, state_tail, t7_tail);
}
bool add_constants_and_apply_sbox_128(uint64_t state[STATE_WIDTH], uint64_t constants[STATE_WIDTH]) {
const uint64_t vl = 2; // number of u64 numbers in one 128 bit SVE vector
svbool_t ptrue = svptrue_b64();
svuint64_t state1 = svld1(ptrue, state + 0 * vl);
svuint64_t state2 = svld1(ptrue, state + 1 * vl);
svuint64_t state3 = svld1(ptrue, state + 2 * vl);
svuint64_t state4 = svld1(ptrue, state + 3 * vl);
svuint64_t const1 = svld1(ptrue, constants + 0 * vl);
svuint64_t const2 = svld1(ptrue, constants + 1 * vl);
svuint64_t const3 = svld1(ptrue, constants + 2 * vl);
svuint64_t const4 = svld1(ptrue, constants + 3 * vl);
add_constants_128(ptrue, &state1, &const1, &state2, &const2, &state3, &const3, &state4, &const4, state + 8, constants + 8);
apply_sbox_128(ptrue, &state1, &state2, &state3, &state4, state + 8);
svst1(ptrue, state + 0 * vl, state1);
svst1(ptrue, state + 1 * vl, state2);
svst1(ptrue, state + 2 * vl, state3);
svst1(ptrue, state + 3 * vl, state4);
return true;
}
bool add_constants_and_apply_inv_sbox_128(uint64_t state[STATE_WIDTH], uint64_t constants[STATE_WIDTH]) {
const uint64_t vl = 2; // number of u64 numbers in one 128 bit SVE vector
svbool_t ptrue = svptrue_b64();
svuint64_t state1 = svld1(ptrue, state + 0 * vl);
svuint64_t state2 = svld1(ptrue, state + 1 * vl);
svuint64_t state3 = svld1(ptrue, state + 2 * vl);
svuint64_t state4 = svld1(ptrue, state + 3 * vl);
svuint64_t const1 = svld1(ptrue, constants + 0 * vl);
svuint64_t const2 = svld1(ptrue, constants + 1 * vl);
svuint64_t const3 = svld1(ptrue, constants + 2 * vl);
svuint64_t const4 = svld1(ptrue, constants + 3 * vl);
add_constants_128(ptrue, &state1, &const1, &state2, &const2, &state3, &const3, &state4, &const4, state + 8, constants + 8);
apply_inv_sbox_128(ptrue, &state1, &state2, &state3, &state4, state + 8);
svst1(ptrue, state + 0 * vl, state1);
svst1(ptrue, state + 1 * vl, state2);
svst1(ptrue, state + 2 * vl, state3);
svst1(ptrue, state + 3 * vl, state4);
return true;
}
#endif //RPO_SVE_RPO_HASH_128_H


@@ -1,38 +1,40 @@
#ifndef RPO_SVE_RPO_HASH_H
#define RPO_SVE_RPO_HASH_H
#ifndef RPO_SVE_RPO_HASH_256_H
#define RPO_SVE_RPO_HASH_256_H
#include <arm_sve.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#define COPY(NAME, VIN1, VIN2, SIN3) \
#define STATE_WIDTH 12
#define COPY_256(NAME, VIN1, VIN2, SIN3) \
svuint64_t NAME ## _1 = VIN1; \
svuint64_t NAME ## _2 = VIN2; \
uint64_t NAME ## _3[4]; \
memcpy(NAME ## _3, SIN3, 4 * sizeof(uint64_t))
#define MULTIPLY(PRED, DEST, OP) \
mul(PRED, &DEST ## _1, &OP ## _1, &DEST ## _2, &OP ## _2, DEST ## _3, OP ## _3)
#define MULTIPLY_256(PRED, DEST, OP) \
mul_256(PRED, &DEST ## _1, &OP ## _1, &DEST ## _2, &OP ## _2, DEST ## _3, OP ## _3)
#define SQUARE(PRED, NAME) \
sq(PRED, &NAME ## _1, &NAME ## _2, NAME ## _3)
#define SQUARE_256(PRED, NAME) \
sq_256(PRED, &NAME ## _1, &NAME ## _2, NAME ## _3)
#define SQUARE_DEST(PRED, DEST, SRC) \
COPY(DEST, SRC ## _1, SRC ## _2, SRC ## _3); \
SQUARE(PRED, DEST);
#define SQUARE_DEST_256(PRED, DEST, SRC) \
COPY_256(DEST, SRC ## _1, SRC ## _2, SRC ## _3); \
SQUARE_256(PRED, DEST);
#define POW_ACC(PRED, NAME, CNT, TAIL) \
for (size_t i = 0; i < CNT; i++) { \
SQUARE(PRED, NAME); \
SQUARE_256(PRED, NAME); \
} \
MULTIPLY(PRED, NAME, TAIL);
MULTIPLY_256(PRED, NAME, TAIL);
#define POW_ACC_DEST(PRED, DEST, CNT, HEAD, TAIL) \
COPY(DEST, HEAD ## _1, HEAD ## _2, HEAD ## _3); \
#define POW_ACC_DEST_256(PRED, DEST, CNT, HEAD, TAIL) \
COPY_256(DEST, HEAD ## _1, HEAD ## _2, HEAD ## _3); \
POW_ACC(PRED, DEST, CNT, TAIL)
extern inline void add_constants(
extern inline void add_constants_256(
svbool_t pg,
svuint64_t *state1,
svuint64_t *const1,
@@ -73,7 +75,7 @@ extern inline void add_constants(
*state2 = svsub_m(pt2, x2, (uint32_t)-1);
}
extern inline void mul(
extern inline void mul_256(
svbool_t pg,
svuint64_t *r1,
const svuint64_t *op1,
@@ -163,59 +165,97 @@ extern inline void mul(
r3[3] = r_4 - (uint64_t)minus1_4;
}
extern inline void sq(svbool_t pg, svuint64_t *a, svuint64_t *b, uint64_t *c) {
mul(pg, a, a, b, b, c, c);
extern inline void sq_256(svbool_t pg, svuint64_t *a, svuint64_t *b, uint64_t *c) {
mul_256(pg, a, a, b, b, c, c);
}
extern inline void apply_sbox(
extern inline void apply_sbox_256(
svbool_t pg,
svuint64_t *state1,
svuint64_t *state2,
uint64_t *state3
) {
COPY(x, *state1, *state2, state3); // copy input to x
SQUARE(pg, x); // x contains input^2
mul(pg, state1, &x_1, state2, &x_2, state3, x_3); // state contains input^3
SQUARE(pg, x); // x contains input^4
mul(pg, state1, &x_1, state2, &x_2, state3, x_3); // state contains input^7
COPY_256(x, *state1, *state2, state3); // copy input to x
SQUARE_256(pg, x); // x contains input^2
mul_256(pg, state1, &x_1, state2, &x_2, state3, x_3); // state contains input^3
SQUARE_256(pg, x); // x contains input^4
mul_256(pg, state1, &x_1, state2, &x_2, state3, x_3); // state contains input^7
}
extern inline void apply_inv_sbox(
extern inline void apply_inv_sbox_256(
svbool_t pg,
svuint64_t *state_1,
svuint64_t *state_2,
uint64_t *state_3
) {
// base^10
COPY(t1, *state_1, *state_2, state_3);
SQUARE(pg, t1);
COPY_256(t1, *state_1, *state_2, state_3);
SQUARE_256(pg, t1);
// base^100
SQUARE_DEST(pg, t2, t1);
SQUARE_DEST_256(pg, t2, t1);
// base^100100
POW_ACC_DEST(pg, t3, 3, t2, t2);
POW_ACC_DEST_256(pg, t3, 3, t2, t2);
// base^100100100100
POW_ACC_DEST(pg, t4, 6, t3, t3);
POW_ACC_DEST_256(pg, t4, 6, t3, t3);
// compute base^100100100100100100100100
POW_ACC_DEST(pg, t5, 12, t4, t4);
POW_ACC_DEST_256(pg, t5, 12, t4, t4);
// compute base^100100100100100100100100100100
POW_ACC_DEST(pg, t6, 6, t5, t3);
POW_ACC_DEST_256(pg, t6, 6, t5, t3);
// compute base^1001001001001001001001001001000100100100100100100100100100100
POW_ACC_DEST(pg, t7, 31, t6, t6);
POW_ACC_DEST_256(pg, t7, 31, t6, t6);
// compute base^1001001001001001001001001001000110110110110110110110110110110111
SQUARE(pg, t7);
MULTIPLY(pg, t7, t6);
SQUARE(pg, t7);
SQUARE(pg, t7);
MULTIPLY(pg, t7, t1);
MULTIPLY(pg, t7, t2);
mul(pg, state_1, &t7_1, state_2, &t7_2, state_3, t7_3);
SQUARE_256(pg, t7);
MULTIPLY_256(pg, t7, t6);
SQUARE_256(pg, t7);
SQUARE_256(pg, t7);
MULTIPLY_256(pg, t7, t1);
MULTIPLY_256(pg, t7, t2);
mul_256(pg, state_1, &t7_1, state_2, &t7_2, state_3, t7_3);
}
#endif //RPO_SVE_RPO_HASH_H
bool add_constants_and_apply_sbox_256(uint64_t state[STATE_WIDTH], uint64_t constants[STATE_WIDTH]) {
const uint64_t vl = 4; // number of u64 numbers in one 256-bit SVE vector
svbool_t ptrue = svptrue_b64();
svuint64_t state1 = svld1(ptrue, state + 0 * vl);
svuint64_t state2 = svld1(ptrue, state + 1 * vl);
svuint64_t const1 = svld1(ptrue, constants + 0 * vl);
svuint64_t const2 = svld1(ptrue, constants + 1 * vl);
add_constants_256(ptrue, &state1, &const1, &state2, &const2, state + 8, constants + 8);
apply_sbox_256(ptrue, &state1, &state2, state + 8);
svst1(ptrue, state + 0 * vl, state1);
svst1(ptrue, state + 1 * vl, state2);
return true;
}
bool add_constants_and_apply_inv_sbox_256(uint64_t state[STATE_WIDTH], uint64_t constants[STATE_WIDTH]) {
const uint64_t vl = 4; // number of u64 numbers in one 256-bit SVE vector
svbool_t ptrue = svptrue_b64();
svuint64_t state1 = svld1(ptrue, state + 0 * vl);
svuint64_t state2 = svld1(ptrue, state + 1 * vl);
svuint64_t const1 = svld1(ptrue, constants + 0 * vl);
svuint64_t const2 = svld1(ptrue, constants + 1 * vl);
add_constants_256(ptrue, &state1, &const1, &state2, &const2, state + 8, constants + 8);
apply_inv_sbox_256(ptrue, &state1, &state2, state + 8);
svst1(ptrue, state + 0 * vl, state1);
svst1(ptrue, state + 1 * vl, state2);
return true;
}
#endif //RPO_SVE_RPO_HASH_256_H


@@ -21,6 +21,7 @@ The second scenario is that of sequential hashing where we take a sequence of le
| Apple M1 Pro | 76 ns | 245 ns | 1.5 µs | 9.1 µs | 5.2 µs | 2.7 µs |
| Apple M2 Max | 71 ns | 233 ns | 1.3 µs | 7.9 µs | 4.6 µs | 2.4 µs |
| Amazon Graviton 3 | 108 ns | | | | 5.3 µs | 3.1 µs |
| Amazon Graviton 4 | 96 ns | | | | 5.1 µs | 2.8 µs |
| AMD Ryzen 9 5950X | 64 ns | 273 ns | 1.2 µs | 9.1 µs | 5.5 µs | |
| AMD EPYC 9R14 | 83 ns | | | | 4.3 µs | 2.4 µs |
| Intel Core i5-8279U | 68 ns | 536 ns | 2.0 µs | 13.6 µs | 8.5 µs | 4.4 µs |
@@ -33,13 +34,14 @@ The second scenario is that of sequential hashing where we take a sequence of le
| Apple M1 Pro | 1.0 µs | 1.5 µs | 19.4 µs | 118 µs | 69 µs | 35 µs |
| Apple M2 Max | 0.9 µs | 1.5 µs | 17.4 µs | 103 µs | 60 µs | 31 µs |
| Amazon Graviton 3 | 1.4 µs | | | | 69 µs | 41 µs |
| Amazon Graviton 4 | 1.2 µs | | | | 67 µs | 36 µs |
| AMD Ryzen 9 5950X | 0.8 µs | 1.7 µs | 15.7 µs | 120 µs | 72 µs | |
| AMD EPYC 9R14 | 0.9 µs | | | | 56 µs | 32 µs |
| Intel Core i5-8279U | 0.9 µs | | | | 107 µs | 56 µs |
| Intel Xeon 8375C | 0.8 µs | | | | 110 µs | |
Notes:
- On Graviton 3, RPO256 and RPX256 are run with SVE acceleration enabled.
- On Graviton 3 and 4, RPO256 and RPX256 are run with SVE acceleration enabled.
- On AMD EPYC 9R14, RPO256 and RPX256 are run with AVX2 acceleration enabled.
### Instructions

benches/merkle.rs (new file)

@@ -0,0 +1,66 @@
//! Benchmark for building a [`miden_crypto::merkle::MerkleTree`]. This is intended to be compared
//! with the results from `benches/smt-subtree.rs`, as building a fully balanced Merkle tree with
//! 256 leaves should indicate the *absolute best* performance we could *possibly* get for building
//! a depth-8 sparse Merkle subtree, though practically speaking building a fully balanced Merkle
//! tree will perform better than the sparse version. At the time of this writing (2024/11/24), this
//! benchmark is about four times more efficient than the equivalent benchmark in
//! `benches/smt-subtree.rs`.
use std::{hint, mem, time::Duration};
use criterion::{criterion_group, criterion_main, BatchSize, Criterion};
use miden_crypto::{merkle::MerkleTree, Felt, Word, ONE};
use rand_utils::prng_array;
fn balanced_merkle_even(c: &mut Criterion) {
c.bench_function("balanced-merkle-even", |b| {
b.iter_batched(
|| {
let entries: Vec<Word> =
(0..256).map(|i| [Felt::new(i), ONE, ONE, Felt::new(i)]).collect();
assert_eq!(entries.len(), 256);
entries
},
|leaves| {
let tree = MerkleTree::new(hint::black_box(leaves)).unwrap();
assert_eq!(tree.depth(), 8);
},
BatchSize::SmallInput,
);
});
}
fn balanced_merkle_rand(c: &mut Criterion) {
let mut seed = [0u8; 32];
c.bench_function("balanced-merkle-rand", |b| {
b.iter_batched(
|| {
let entries: Vec<Word> = (0..256).map(|_| generate_word(&mut seed)).collect();
assert_eq!(entries.len(), 256);
entries
},
|leaves| {
let tree = MerkleTree::new(hint::black_box(leaves)).unwrap();
assert_eq!(tree.depth(), 8);
},
BatchSize::SmallInput,
);
});
}
criterion_group! {
name = smt_subtree_group;
config = Criterion::default()
.measurement_time(Duration::from_secs(20))
.configure_from_args();
targets = balanced_merkle_even, balanced_merkle_rand
}
criterion_main!(smt_subtree_group);
// HELPER FUNCTIONS
// --------------------------------------------------------------------------------------------
fn generate_word(seed: &mut [u8; 32]) -> Word {
mem::swap(seed, &mut prng_array(*seed));
let nums: [u64; 4] = prng_array(*seed);
[Felt::new(nums[0]), Felt::new(nums[1]), Felt::new(nums[2]), Felt::new(nums[3])]
}

benches/smt-subtree.rs (new file)

@@ -0,0 +1,142 @@
use std::{fmt::Debug, hint, mem, time::Duration};
use criterion::{criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion};
use miden_crypto::{
hash::rpo::RpoDigest,
merkle::{build_subtree_for_bench, NodeIndex, SmtLeaf, SubtreeLeaf, SMT_DEPTH},
Felt, Word, ONE,
};
use rand_utils::prng_array;
use winter_utils::Randomizable;
const PAIR_COUNTS: [u64; 5] = [1, 64, 128, 192, 256];
fn smt_subtree_even(c: &mut Criterion) {
let mut seed = [0u8; 32];
let mut group = c.benchmark_group("subtree8-even");
for pair_count in PAIR_COUNTS {
let bench_id = BenchmarkId::from_parameter(pair_count);
group.bench_with_input(bench_id, &pair_count, |b, &pair_count| {
b.iter_batched(
|| {
// Setup.
let entries: Vec<(RpoDigest, Word)> = (0..pair_count)
.map(|n| {
// A single depth-8 subtree can have a maximum of 255 leaves.
let leaf_index = ((n as f64 / pair_count as f64) * 255.0) as u64;
let key = RpoDigest::new([
generate_value(&mut seed),
ONE,
Felt::new(n),
Felt::new(leaf_index),
]);
let value = generate_word(&mut seed);
(key, value)
})
.collect();
let mut leaves: Vec<_> = entries
.iter()
.map(|(key, value)| {
let leaf = SmtLeaf::new_single(*key, *value);
let col = NodeIndex::from(leaf.index()).value();
let hash = leaf.hash();
SubtreeLeaf { col, hash }
})
.collect();
leaves.sort();
leaves.dedup_by_key(|leaf| leaf.col);
leaves
},
|leaves| {
// Benchmarked function.
let (subtree, _) = build_subtree_for_bench(
hint::black_box(leaves),
hint::black_box(SMT_DEPTH),
hint::black_box(SMT_DEPTH),
);
assert!(!subtree.is_empty());
},
BatchSize::SmallInput,
);
});
}
}
fn smt_subtree_random(c: &mut Criterion) {
let mut seed = [0u8; 32];
let mut group = c.benchmark_group("subtree8-rand");
for pair_count in PAIR_COUNTS {
let bench_id = BenchmarkId::from_parameter(pair_count);
group.bench_with_input(bench_id, &pair_count, |b, &pair_count| {
b.iter_batched(
|| {
// Setup.
let entries: Vec<(RpoDigest, Word)> = (0..pair_count)
.map(|i| {
let leaf_index: u8 = generate_value(&mut seed);
let key = RpoDigest::new([
ONE,
ONE,
Felt::new(i),
Felt::new(leaf_index as u64),
]);
let value = generate_word(&mut seed);
(key, value)
})
.collect();
let mut leaves: Vec<_> = entries
.iter()
.map(|(key, value)| {
let leaf = SmtLeaf::new_single(*key, *value);
let col = NodeIndex::from(leaf.index()).value();
let hash = leaf.hash();
SubtreeLeaf { col, hash }
})
.collect();
leaves.sort();
leaves
},
|leaves| {
let (subtree, _) = build_subtree_for_bench(
hint::black_box(leaves),
hint::black_box(SMT_DEPTH),
hint::black_box(SMT_DEPTH),
);
assert!(!subtree.is_empty());
},
BatchSize::SmallInput,
);
});
}
}
criterion_group! {
name = smt_subtree_group;
config = Criterion::default()
.measurement_time(Duration::from_secs(40))
.sample_size(60)
.configure_from_args();
targets = smt_subtree_even, smt_subtree_random
}
criterion_main!(smt_subtree_group);
// HELPER FUNCTIONS
// --------------------------------------------------------------------------------------------
fn generate_value<T: Copy + Debug + Randomizable>(seed: &mut [u8; 32]) -> T {
mem::swap(seed, &mut prng_array(*seed));
let value: [T; 1] = rand_utils::prng_array(*seed);
value[0]
}
fn generate_word(seed: &mut [u8; 32]) -> Word {
mem::swap(seed, &mut prng_array(*seed));
let nums: [u64; 4] = prng_array(*seed);
[Felt::new(nums[0]), Felt::new(nums[1]), Felt::new(nums[2]), Felt::new(nums[3])]
}


@@ -0,0 +1,71 @@
use std::{fmt::Debug, hint, mem, time::Duration};
use criterion::{criterion_group, criterion_main, BatchSize, BenchmarkId, Criterion};
use miden_crypto::{hash::rpo::RpoDigest, merkle::Smt, Felt, Word, ONE};
use rand_utils::prng_array;
use winter_utils::Randomizable;
// 2^0, 2^4, 2^8, 2^12, 2^16, 2^20
const PAIR_COUNTS: [u64; 6] = [1, 16, 256, 4096, 65536, 1_048_576];
fn smt_with_entries(c: &mut Criterion) {
let mut seed = [0u8; 32];
let mut group = c.benchmark_group("smt-with-entries");
for pair_count in PAIR_COUNTS {
let bench_id = BenchmarkId::from_parameter(pair_count);
group.bench_with_input(bench_id, &pair_count, |b, &pair_count| {
b.iter_batched(
|| {
// Setup.
prepare_entries(pair_count, &mut seed)
},
|entries| {
// Benchmarked function.
Smt::with_entries(hint::black_box(entries)).unwrap();
},
BatchSize::SmallInput,
);
});
}
}
criterion_group! {
name = smt_with_entries_group;
config = Criterion::default()
//.measurement_time(Duration::from_secs(960))
.measurement_time(Duration::from_secs(60))
.sample_size(10)
.configure_from_args();
targets = smt_with_entries
}
criterion_main!(smt_with_entries_group);
// HELPER FUNCTIONS
// --------------------------------------------------------------------------------------------
fn prepare_entries(pair_count: u64, seed: &mut [u8; 32]) -> Vec<(RpoDigest, [Felt; 4])> {
let entries: Vec<(RpoDigest, Word)> = (0..pair_count)
.map(|i| {
let count = pair_count as f64;
let idx = ((i as f64 / count) * (count)) as u64;
let key = RpoDigest::new([generate_value(seed), ONE, Felt::new(i), Felt::new(idx)]);
let value = generate_word(seed);
(key, value)
})
.collect();
entries
}
fn generate_value<T: Copy + Debug + Randomizable>(seed: &mut [u8; 32]) -> T {
mem::swap(seed, &mut prng_array(*seed));
let value: [T; 1] = rand_utils::prng_array(*seed);
value[0]
}
fn generate_word(seed: &mut [u8; 32]) -> Word {
mem::swap(seed, &mut prng_array(*seed));
let nums: [u64; 4] = prng_array(*seed);
[Felt::new(nums[0]), Felt::new(nums[1]), Felt::new(nums[2]), Felt::new(nums[3])]
}

View File

@@ -1,5 +1,5 @@
[toolchain]
channel = "1.82"
channel = "1.84"
components = ["rustfmt", "rust-src", "clippy"]
targets = ["wasm32-unknown-unknown"]
profile = "minimal"

View File

@@ -13,7 +13,7 @@ else
if git diff --exit-code "origin/${BASE_REF}" -- "${CHANGELOG_FILE}"; then
>&2 echo "Changes should come with an entry in the \"CHANGELOG.md\" file. This behavior
can be overridden by using the \"no changelog\" label, which is used for changes
that are trivial / explicitely stated not to require a changelog entry."
that are trivial / explicitly stated not to require a changelog entry."
exit 1
fi

View File

@@ -97,7 +97,7 @@ impl Signature {
}
let c = hash_to_point_rpo256(message, &self.nonce);
h_digest == pubkey_com && verify_helper(&c, &self.s2, self.pk_poly())
verify_helper(&c, &self.s2, self.pk_poly())
}
}
@@ -289,9 +289,9 @@ impl Deserializable for SignaturePoly {
}
m += 128;
if m >= 2048 {
return Err(DeserializationError::InvalidValue(
"Failed to decode signature: high bits {m} exceed 2048".to_string(),
));
return Err(DeserializationError::InvalidValue(format!(
"Failed to decode signature: high bits {m} exceed 2048",
)));
}
}
if s != 0 && m == 0 {

View File

@@ -1,5 +1,11 @@
use alloc::string::String;
use core::{cmp::Ordering, fmt::Display, ops::Deref, slice};
use core::{
cmp::Ordering,
fmt::Display,
hash::{Hash, Hasher},
ops::Deref,
slice,
};
use thiserror::Error;
@@ -55,6 +61,12 @@ impl RpoDigest {
}
}
impl Hash for RpoDigest {
fn hash<H: Hasher>(&self, state: &mut H) {
state.write(&self.as_bytes());
}
}
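A minimal sketch, not part of the diff, of what the new `Hash` impl enables: `RpoDigest` values can now key a standard `HashMap` directly (the map contents below are hypothetical).
use std::collections::HashMap;

use miden_crypto::{hash::rpo::RpoDigest, Word, ONE};

fn digest_as_hashmap_key() {
    // RpoDigest already implements Eq; with Hash it now satisfies HashMap's key bounds
    let mut map: HashMap<RpoDigest, Word> = HashMap::new();
    map.insert(RpoDigest::from([ONE; 4]), [ONE; 4]);
    assert_eq!(map.len(), 1);
}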
impl Digest for RpoDigest {
fn as_bytes(&self) -> [u8; DIGEST_BYTES] {
let mut result = [0; DIGEST_BYTES];

View File

@@ -4,8 +4,9 @@ use clap::Parser;
use miden_crypto::{
hash::rpo::{Rpo256, RpoDigest},
merkle::{MerkleError, Smt},
Felt, Word, ONE,
Felt, Word, EMPTY_WORD, ONE,
};
use rand::{prelude::IteratorRandom, thread_rng, Rng};
use rand_utils::rand_value;
#[derive(Parser, Debug)]
@@ -13,7 +14,7 @@ use rand_utils::rand_value;
pub struct BenchmarkCmd {
/// Size of the tree
#[clap(short = 's', long = "size")]
size: u64,
size: usize,
}
fn main() {
@@ -29,101 +30,152 @@ pub fn benchmark_smt() {
let mut entries = Vec::new();
for i in 0..tree_size {
let key = rand_value::<RpoDigest>();
let value = [ONE, ONE, ONE, Felt::new(i)];
let value = [ONE, ONE, ONE, Felt::new(i as u64)];
entries.push((key, value));
}
let mut tree = construction(entries, tree_size).unwrap();
insertion(&mut tree, tree_size).unwrap();
batched_insertion(&mut tree, tree_size).unwrap();
proof_generation(&mut tree, tree_size).unwrap();
let mut tree = construction(entries.clone(), tree_size).unwrap();
insertion(&mut tree).unwrap();
batched_insertion(&mut tree).unwrap();
batched_update(&mut tree, entries).unwrap();
proof_generation(&mut tree).unwrap();
}
/// Runs the construction benchmark for [`Smt`], returning the constructed tree.
pub fn construction(entries: Vec<(RpoDigest, Word)>, size: u64) -> Result<Smt, MerkleError> {
pub fn construction(entries: Vec<(RpoDigest, Word)>, size: usize) -> Result<Smt, MerkleError> {
println!("Running a construction benchmark:");
let now = Instant::now();
let tree = Smt::with_entries(entries)?;
let elapsed = now.elapsed();
println!(
"Constructed a SMT with {} key-value pairs in {:.3} seconds",
size,
elapsed.as_secs_f32(),
);
let elapsed = now.elapsed().as_secs_f32();
println!("Constructed a SMT with {size} key-value pairs in {elapsed:.1} seconds");
println!("Number of leaf nodes: {}\n", tree.leaves().count());
Ok(tree)
}
/// Runs the insertion benchmark for the [`Smt`].
pub fn insertion(tree: &mut Smt, size: u64) -> Result<(), MerkleError> {
pub fn insertion(tree: &mut Smt) -> Result<(), MerkleError> {
const NUM_INSERTIONS: usize = 1_000;
println!("Running an insertion benchmark:");
let size = tree.num_leaves();
let mut insertion_times = Vec::new();
for i in 0..20 {
for i in 0..NUM_INSERTIONS {
let test_key = Rpo256::hash(&rand_value::<u64>().to_be_bytes());
let test_value = [ONE, ONE, ONE, Felt::new(size + i)];
let test_value = [ONE, ONE, ONE, Felt::new((size + i) as u64)];
let now = Instant::now();
tree.insert(test_key, test_value);
let elapsed = now.elapsed();
insertion_times.push(elapsed.as_secs_f32());
insertion_times.push(elapsed.as_micros());
}
println!(
"An average insertion time measured by 20 inserts into a SMT with {} key-value pairs is {:.3} milliseconds\n",
size,
// calculate the average by dividing by 20 and convert to milliseconds by multiplying by
// 1000. As a result, we can only multiply by 50
insertion_times.iter().sum::<f32>() * 50f32,
"An average insertion time measured by {NUM_INSERTIONS} inserts into an SMT with {size} leaves is {:.0} μs\n",
// calculate the average
insertion_times.iter().sum::<u128>() as f64 / (NUM_INSERTIONS as f64),
);
Ok(())
}
pub fn batched_insertion(tree: &mut Smt, size: u64) -> Result<(), MerkleError> {
pub fn batched_insertion(tree: &mut Smt) -> Result<(), MerkleError> {
const NUM_INSERTIONS: usize = 1_000;
println!("Running a batched insertion benchmark:");
let new_pairs: Vec<(RpoDigest, Word)> = (0..1000)
let size = tree.num_leaves();
let new_pairs: Vec<(RpoDigest, Word)> = (0..NUM_INSERTIONS)
.map(|i| {
let key = Rpo256::hash(&rand_value::<u64>().to_be_bytes());
let value = [ONE, ONE, ONE, Felt::new(size + i)];
let value = [ONE, ONE, ONE, Felt::new((size + i) as u64)];
(key, value)
})
.collect();
let now = Instant::now();
let mutations = tree.compute_mutations(new_pairs);
let compute_elapsed = now.elapsed();
let compute_elapsed = now.elapsed().as_secs_f64() * 1000_f64; // time in ms
let now = Instant::now();
tree.apply_mutations(mutations).unwrap();
let apply_elapsed = now.elapsed();
tree.apply_mutations(mutations)?;
let apply_elapsed = now.elapsed().as_secs_f64() * 1000_f64; // time in ms
println!(
"An average batch computation time measured by a 1k-batch into an SMT with {} key-value pairs over {:.3} milliseconds is {:.3} milliseconds",
size,
compute_elapsed.as_secs_f32() * 1000f32,
// Dividing by the number of iterations, 1000, and then multiplying by 1000 to get
// milliseconds, cancels out.
compute_elapsed.as_secs_f32(),
"An average insert-batch computation time measured by a {NUM_INSERTIONS}-batch into an SMT with {size} leaves over {:.1} ms is {:.0} μs",
compute_elapsed,
compute_elapsed * 1000_f64 / NUM_INSERTIONS as f64, // time in μs
);
println!(
"An average batch application time measured by a 1k-batch into an SMT with {} key-value pairs over {:.3} milliseconds is {:.3} milliseconds",
size,
apply_elapsed.as_secs_f32() * 1000f32,
// Dividing by the number of iterations, 1000, and then multiplying by 1000 to get
// milliseconds, cancels out.
apply_elapsed.as_secs_f32(),
"An average insert-batch application time measured by a {NUM_INSERTIONS}-batch into an SMT with {size} leaves over {:.1} ms is {:.0} μs",
apply_elapsed,
apply_elapsed * 1000_f64 / NUM_INSERTIONS as f64, // time in μs
);
println!(
"An average batch insertion time measured by a 1k-batch into an SMT with {} key-value pairs totals to {:.3} milliseconds",
size,
(compute_elapsed + apply_elapsed).as_secs_f32() * 1000f32,
"An average batch insertion time measured by a 1k-batch into an SMT with {size} leaves totals to {:.1} ms",
(compute_elapsed + apply_elapsed),
);
println!();
Ok(())
}
pub fn batched_update(tree: &mut Smt, entries: Vec<(RpoDigest, Word)>) -> Result<(), MerkleError> {
const NUM_UPDATES: usize = 1_000;
const REMOVAL_PROBABILITY: f64 = 0.2;
println!("Running a batched update benchmark:");
let size = tree.num_leaves();
let mut rng = thread_rng();
let new_pairs =
entries
.into_iter()
.choose_multiple(&mut rng, NUM_UPDATES)
.into_iter()
.map(|(key, _)| {
let value = if rng.gen_bool(REMOVAL_PROBABILITY) {
EMPTY_WORD
} else {
[ONE, ONE, ONE, Felt::new(rng.gen())]
};
(key, value)
});
assert_eq!(new_pairs.len(), NUM_UPDATES);
let now = Instant::now();
let mutations = tree.compute_mutations(new_pairs);
let compute_elapsed = now.elapsed().as_secs_f64() * 1000_f64; // time in ms
let now = Instant::now();
tree.apply_mutations(mutations)?;
let apply_elapsed = now.elapsed().as_secs_f64() * 1000_f64; // time in ms
println!(
"An average update-batch computation time measured by a {NUM_UPDATES}-batch into an SMT with {size} leaves over {:.1} ms is {:.0} μs",
compute_elapsed,
compute_elapsed * 1000_f64 / NUM_UPDATES as f64, // time in μs
);
println!(
"An average update-batch application time measured by a {NUM_UPDATES}-batch into an SMT with {size} leaves over {:.1} ms is {:.0} μs",
apply_elapsed,
apply_elapsed * 1000_f64 / NUM_UPDATES as f64, // time in μs
);
println!(
"An average batch update time measured by a 1k-batch into an SMT with {size} leaves totals to {:.1} ms",
(compute_elapsed + apply_elapsed),
);
println!();
@@ -132,28 +184,28 @@ pub fn batched_insertion(tree: &mut Smt, size: u64) -> Result<(), MerkleError> {
}
/// Runs the proof generation benchmark for the [`Smt`].
pub fn proof_generation(tree: &mut Smt, size: u64) -> Result<(), MerkleError> {
pub fn proof_generation(tree: &mut Smt) -> Result<(), MerkleError> {
const NUM_PROOFS: usize = 100;
println!("Running a proof generation benchmark:");
let mut insertion_times = Vec::new();
let size = tree.num_leaves();
for i in 0..20 {
for i in 0..NUM_PROOFS {
let test_key = Rpo256::hash(&rand_value::<u64>().to_be_bytes());
let test_value = [ONE, ONE, ONE, Felt::new(size + i)];
let test_value = [ONE, ONE, ONE, Felt::new((size + i) as u64)];
tree.insert(test_key, test_value);
let now = Instant::now();
let _proof = tree.open(&test_key);
let elapsed = now.elapsed();
insertion_times.push(elapsed.as_secs_f32());
insertion_times.push(now.elapsed().as_micros());
}
println!(
"An average proving time measured by 20 value proofs in a SMT with {} key-value pairs in {:.3} microseconds",
size,
// calculate the average by dividing by 20 and convert to microseconds by multiplying by
// 1000000. As a result, we can only multiply by 50000
insertion_times.iter().sum::<f32>() * 50000f32,
"An average proving time measured by {NUM_PROOFS} value proofs in an SMT with {size} leaves in {:.0} μs",
// calculate the average
insertion_times.iter().sum::<u128>() as f64 / (NUM_PROOFS as f64),
);
Ok(())

View File

@@ -97,6 +97,14 @@ impl NodeIndex {
self
}
/// Returns the parent of the current node. This is the same as [`Self::move_up()`], but returns
/// a new value instead of mutating `self`.
pub const fn parent(mut self) -> Self {
self.depth = self.depth.saturating_sub(1);
self.value >>= 1;
self
}
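A minimal usage sketch for the new `parent()` helper (the concrete depth and column values are hypothetical): it returns the parent index without mutating the original, in contrast to `move_up()`.
use miden_crypto::merkle::NodeIndex;

fn parent_of_node() {
    // depth 3, column 5
    let index = NodeIndex::make(3, 5);
    // parent() moves one level up and halves the column: depth 2, column 2
    let parent = index.parent();
    assert_eq!(parent.depth(), 2);
    assert_eq!(parent.value(), 2);
    // the original index is unchanged, since parent() takes `self` by value and NodeIndex is Copy
    assert_eq!(index.value(), 5);
}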
// PROVIDERS
// --------------------------------------------------------------------------------------------
@@ -128,7 +136,7 @@ impl NodeIndex {
self.value
}
/// Returns true if the current instance points to a right sibling node.
/// Returns `true` if the current instance points to a right sibling node.
pub const fn is_value_odd(&self) -> bool {
(self.value & 1) == 1
}

View File

@@ -270,7 +270,7 @@ pub fn path_to_text(path: &MerklePath) -> Result<String, fmt::Error> {
}
// remove the last ", "
if path.len() != 0 {
if !path.is_empty() {
s.pop();
s.pop();
}

View File

@@ -303,7 +303,7 @@ impl PartialMmr {
if leaf_pos + 1 == self.forest
&& path.depth() == 0
&& self.peaks.last().map_or(false, |v| *v == leaf)
&& self.peaks.last().is_some_and(|v| *v == leaf)
{
self.track_latest = true;
return Ok(());

View File

@@ -21,9 +21,11 @@ mod path;
pub use path::{MerklePath, RootPath, ValuePath};
mod smt;
#[cfg(feature = "internal")]
pub use smt::build_subtree_for_bench;
pub use smt::{
LeafIndex, MutationSet, SimpleSmt, Smt, SmtLeaf, SmtLeafError, SmtProof, SmtProofError,
SMT_DEPTH, SMT_MAX_DEPTH, SMT_MIN_DEPTH,
InnerNode, LeafIndex, MutationSet, NodeMutation, SimpleSmt, Smt, SmtLeaf, SmtLeafError,
SmtProof, SmtProofError, SubtreeLeaf, SMT_DEPTH, SMT_MAX_DEPTH, SMT_MIN_DEPTH,
};
mod mmr;

View File

@@ -3,6 +3,7 @@ use super::RpoDigest;
/// Representation of a node with two children used for iterating over containers.
#[derive(Debug, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde", derive(serde::Deserialize, serde::Serialize))]
#[cfg_attr(test, derive(PartialOrd, Ord))]
pub struct InnerNodeInfo {
pub value: RpoDigest,
pub left: RpoDigest,

View File

@@ -46,7 +46,7 @@ impl SmtLeaf {
let leaf = Self::new_multiple(entries)?;
// `new_multiple()` checked that all keys map to the same leaf index. We still need
// to ensure that that leaf index is `leaf_index`.
// to ensure that leaf index is `leaf_index`.
if leaf.index() != leaf_index {
Err(SmtLeafError::InconsistentMultipleLeafIndices {
leaf_index_from_keys: leaf.index(),
@@ -70,7 +70,7 @@ impl SmtLeaf {
Self::Single((key, value))
}
/// Returns a new single leaf with the specified entry. The leaf index is derived from the
/// Returns a new multiple leaf with the specified entries. The leaf index is derived from the
/// entries' keys.
///
/// # Errors

View File

@@ -1,12 +1,8 @@
use alloc::{
collections::{BTreeMap, BTreeSet},
string::ToString,
vec::Vec,
};
use alloc::{collections::BTreeSet, string::ToString, vec::Vec};
use super::{
EmptySubtreeRoots, Felt, InnerNode, InnerNodeInfo, LeafIndex, MerkleError, MerklePath,
MutationSet, NodeIndex, Rpo256, RpoDigest, SparseMerkleTree, Word, EMPTY_WORD,
EmptySubtreeRoots, Felt, InnerNode, InnerNodeInfo, InnerNodes, LeafIndex, MerkleError,
MerklePath, MutationSet, NodeIndex, Rpo256, RpoDigest, SparseMerkleTree, Word, EMPTY_WORD,
};
mod error;
@@ -30,6 +26,8 @@ pub const SMT_DEPTH: u8 = 64;
// SMT
// ================================================================================================
type Leaves = super::Leaves<SmtLeaf>;
/// Sparse Merkle tree mapping 256-bit keys to 256-bit values. Both keys and values are represented
/// by 4 field elements.
///
@@ -43,8 +41,8 @@ pub const SMT_DEPTH: u8 = 64;
#[cfg_attr(feature = "serde", derive(serde::Deserialize, serde::Serialize))]
pub struct Smt {
root: RpoDigest,
leaves: BTreeMap<u64, SmtLeaf>,
inner_nodes: BTreeMap<NodeIndex, InnerNode>,
inner_nodes: InnerNodes,
leaves: Leaves,
}
impl Smt {
@@ -64,19 +62,58 @@ impl Smt {
Self {
root,
leaves: BTreeMap::new(),
inner_nodes: BTreeMap::new(),
inner_nodes: Default::default(),
leaves: Default::default(),
}
}
/// Returns a new [Smt] instantiated with leaves set as specified by the provided entries.
///
/// If the `concurrent` feature is enabled, this function uses a parallel implementation to
/// process the entries efficiently, otherwise it defaults to the sequential implementation.
///
/// All leaves omitted from the entries list are set to [Self::EMPTY_VALUE].
///
/// # Errors
/// Returns an error if the provided entries contain multiple values for the same key.
pub fn with_entries(
entries: impl IntoIterator<Item = (RpoDigest, Word)>,
) -> Result<Self, MerkleError> {
#[cfg(feature = "concurrent")]
{
let mut seen_keys = BTreeSet::new();
let entries: Vec<_> = entries
.into_iter()
.map(|(key, value)| {
if seen_keys.insert(key) {
Ok((key, value))
} else {
Err(MerkleError::DuplicateValuesForIndex(
LeafIndex::<SMT_DEPTH>::from(key).value(),
))
}
})
.collect::<Result<_, _>>()?;
if entries.is_empty() {
return Ok(Self::default());
}
<Self as SparseMerkleTree<SMT_DEPTH>>::with_entries_par(entries)
}
#[cfg(not(feature = "concurrent"))]
{
Self::with_entries_sequential(entries)
}
}
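A hedged usage sketch of the construction path described above (keys and values are hypothetical); with the `concurrent` feature enabled the same call transparently dispatches to the parallel implementation.
use miden_crypto::{hash::rpo::RpoDigest, merkle::Smt, Felt, Word, ONE};

fn build_small_smt() {
    // two keys that map to distinct leaves
    let key_1 = RpoDigest::from([ONE, ONE, ONE, ONE]);
    let key_2 = RpoDigest::from([ONE, ONE, ONE, Felt::new(2)]);
    let value: Word = [ONE, ONE, ONE, ONE];

    let smt = Smt::with_entries([(key_1, value), (key_2, value)]).unwrap();
    assert_eq!(smt.num_leaves(), 2);
}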
/// Returns a new [Smt] instantiated with leaves set as specified by the provided entries.
///
/// This sequential implementation processes entries one at a time to build the tree.
/// All leaves omitted from the entries list are set to [Self::EMPTY_VALUE].
///
/// # Errors
/// Returns an error if the provided entries contain multiple values for the same key.
pub fn with_entries_sequential(
entries: impl IntoIterator<Item = (RpoDigest, Word)>,
) -> Result<Self, MerkleError> {
// create an empty tree
let mut tree = Self::new();
@@ -101,6 +138,19 @@ impl Smt {
Ok(tree)
}
/// Returns a new [`Smt`] instantiated from already computed leaves and nodes.
///
/// This function performs minimal consistency checking. It is the caller's responsibility to
/// ensure the passed arguments are correct and consistent with each other.
///
/// # Panics
/// With debug assertions on, this function panics if `root` does not match the root node in
/// `inner_nodes`.
pub fn from_raw_parts(inner_nodes: InnerNodes, leaves: Leaves, root: RpoDigest) -> Self {
// Our particular implementation of `from_raw_parts()` never returns `Err`.
<Self as SparseMerkleTree<SMT_DEPTH>>::from_raw_parts(inner_nodes, leaves, root).unwrap()
}
// PUBLIC ACCESSORS
// --------------------------------------------------------------------------------------------
@@ -114,6 +164,11 @@ impl Smt {
<Self as SparseMerkleTree<SMT_DEPTH>>::root(self)
}
/// Returns the number of non-empty leaves in this tree.
pub fn num_leaves(&self) -> usize {
self.leaves.len()
}
/// Returns the leaf to which `key` maps
pub fn get_leaf(&self, key: &RpoDigest) -> SmtLeaf {
<Self as SparseMerkleTree<SMT_DEPTH>>::get_leaf(self, key)
@@ -200,7 +255,7 @@ impl Smt {
<Self as SparseMerkleTree<SMT_DEPTH>>::compute_mutations(self, kv_pairs)
}
/// Apply the prospective mutations computed with [`Smt::compute_mutations()`] to this tree.
/// Applies the prospective mutations computed with [`Smt::compute_mutations()`] to this tree.
///
/// # Errors
/// If `mutations` was computed on a tree with a different root than this one, returns
@@ -214,6 +269,23 @@ impl Smt {
<Self as SparseMerkleTree<SMT_DEPTH>>::apply_mutations(self, mutations)
}
/// Applies the prospective mutations computed with [`Smt::compute_mutations()`] to this tree
/// and returns the reverse mutation set.
///
/// Applying the reverse mutation sets to the updated tree will revert the changes.
///
/// # Errors
/// If `mutations` was computed on a tree with a different root than this one, returns
/// [`MerkleError::ConflictingRoots`] with a two-item [`Vec`]. The first item is the root hash
/// the `mutations` were computed against, and the second item is the actual current root of
/// this tree.
pub fn apply_mutations_with_reversion(
&mut self,
mutations: MutationSet<SMT_DEPTH, RpoDigest, Word>,
) -> Result<MutationSet<SMT_DEPTH, RpoDigest, Word>, MerkleError> {
<Self as SparseMerkleTree<SMT_DEPTH>>::apply_mutations_with_reversion(self, mutations)
}
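A hedged sketch of the revert round trip this method enables (the entry is hypothetical): applying the returned reversion set restores the tree to its pre-mutation state.
use miden_crypto::{hash::rpo::RpoDigest, merkle::Smt, Word, ONE};

fn mutate_and_revert() {
    let mut smt = Smt::default();
    let root_before = smt.root();

    let key = RpoDigest::from([ONE, ONE, ONE, ONE]);
    let value: Word = [ONE, ONE, ONE, ONE];

    // compute and apply the mutations, keeping the reversion set
    let mutations = smt.compute_mutations(vec![(key, value)]);
    let revert = smt.apply_mutations_with_reversion(mutations).unwrap();
    assert_eq!(revert.old_root(), smt.root());

    // applying the reversion set undoes the change
    smt.apply_mutations(revert).unwrap();
    assert_eq!(smt.root(), root_before);
}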
// HELPERS
// --------------------------------------------------------------------------------------------
@@ -260,6 +332,19 @@ impl SparseMerkleTree<SMT_DEPTH> for Smt {
const EMPTY_VALUE: Self::Value = EMPTY_WORD;
const EMPTY_ROOT: RpoDigest = *EmptySubtreeRoots::entry(SMT_DEPTH, 0);
fn from_raw_parts(
inner_nodes: InnerNodes,
leaves: Leaves,
root: RpoDigest,
) -> Result<Self, MerkleError> {
if cfg!(debug_assertions) {
let root_node = inner_nodes.get(&NodeIndex::root()).unwrap();
assert_eq!(root_node.hash(), root);
}
Ok(Self { root, inner_nodes, leaves })
}
fn root(&self) -> RpoDigest {
self.root
}
@@ -275,12 +360,12 @@ impl SparseMerkleTree<SMT_DEPTH> for Smt {
.unwrap_or_else(|| EmptySubtreeRoots::get_inner_node(SMT_DEPTH, index.depth()))
}
fn insert_inner_node(&mut self, index: NodeIndex, inner_node: InnerNode) {
self.inner_nodes.insert(index, inner_node);
fn insert_inner_node(&mut self, index: NodeIndex, inner_node: InnerNode) -> Option<InnerNode> {
self.inner_nodes.insert(index, inner_node)
}
fn remove_inner_node(&mut self, index: NodeIndex) {
let _ = self.inner_nodes.remove(&index);
fn remove_inner_node(&mut self, index: NodeIndex) -> Option<InnerNode> {
self.inner_nodes.remove(&index)
}
fn insert_value(&mut self, key: Self::Key, value: Self::Value) -> Option<Self::Value> {
@@ -344,6 +429,23 @@ impl SparseMerkleTree<SMT_DEPTH> for Smt {
fn path_and_leaf_to_opening(path: MerklePath, leaf: SmtLeaf) -> SmtProof {
SmtProof::new_unchecked(path, leaf)
}
fn pairs_to_leaf(mut pairs: Vec<(RpoDigest, Word)>) -> SmtLeaf {
assert!(!pairs.is_empty());
if pairs.len() > 1 {
SmtLeaf::new_multiple(pairs).unwrap()
} else {
let (key, value) = pairs.pop().unwrap();
// TODO: should we ever be constructing empty leaves from pairs?
if value == Self::EMPTY_VALUE {
let index = Self::key_to_leaf_index(&key);
SmtLeaf::new_empty(index)
} else {
SmtLeaf::new_single(key, value)
}
}
}
}
impl Default for Smt {

View File

@@ -2,11 +2,13 @@ use alloc::vec::Vec;
use super::{Felt, LeafIndex, NodeIndex, Rpo256, RpoDigest, Smt, SmtLeaf, EMPTY_WORD, SMT_DEPTH};
use crate::{
merkle::{smt::SparseMerkleTree, EmptySubtreeRoots, MerkleStore},
merkle::{
smt::{NodeMutation, SparseMerkleTree, UnorderedMap},
EmptySubtreeRoots, MerkleStore, MutationSet,
},
utils::{Deserializable, Serializable},
Word, ONE, WORD_SIZE,
};
// SMT
// --------------------------------------------------------------------------------------------
@@ -412,21 +414,49 @@ fn test_prospective_insertion() {
let mutations = smt.compute_mutations(vec![(key_1, value_1)]);
assert_eq!(mutations.root(), root_1, "prospective root 1 did not match actual root 1");
smt.apply_mutations(mutations).unwrap();
let revert = apply_mutations(&mut smt, mutations);
assert_eq!(smt.root(), root_1, "mutations before and after apply did not match");
assert_eq!(revert.old_root, smt.root(), "reverse mutations old root did not match");
assert_eq!(revert.root(), root_empty, "reverse mutations new root did not match");
assert_eq!(
revert.new_pairs,
UnorderedMap::from_iter([(key_1, EMPTY_WORD)]),
"reverse mutations pairs did not match"
);
assert_eq!(
revert.node_mutations,
smt.inner_nodes.keys().map(|key| (*key, NodeMutation::Removal)).collect(),
"reverse mutations inner nodes did not match"
);
let mutations = smt.compute_mutations(vec![(key_2, value_2)]);
assert_eq!(mutations.root(), root_2, "prospective root 2 did not match actual root 2");
let mutations =
smt.compute_mutations(vec![(key_3, EMPTY_WORD), (key_2, value_2), (key_3, value_3)]);
assert_eq!(mutations.root(), root_3, "mutations before and after apply did not match");
smt.apply_mutations(mutations).unwrap();
let old_root = smt.root();
let revert = apply_mutations(&mut smt, mutations);
assert_eq!(revert.old_root, smt.root(), "reverse mutations old root did not match");
assert_eq!(revert.root(), old_root, "reverse mutations new root did not match");
assert_eq!(
revert.new_pairs,
UnorderedMap::from_iter([(key_2, EMPTY_WORD), (key_3, EMPTY_WORD)]),
"reverse mutations pairs did not match"
);
// Edge case: multiple values at the same key, where a later pair restores the original value.
let mutations = smt.compute_mutations(vec![(key_3, EMPTY_WORD), (key_3, value_3)]);
assert_eq!(mutations.root(), root_3);
smt.apply_mutations(mutations).unwrap();
let old_root = smt.root();
let revert = apply_mutations(&mut smt, mutations);
assert_eq!(smt.root(), root_3);
assert_eq!(revert.old_root, smt.root(), "reverse mutations old root did not match");
assert_eq!(revert.root(), old_root, "reverse mutations new root did not match");
assert_eq!(
revert.new_pairs,
UnorderedMap::from_iter([(key_3, value_3)]),
"reverse mutations pairs did not match"
);
// Test batch updates, and that the order doesn't matter.
let pairs =
@@ -437,8 +467,16 @@ fn test_prospective_insertion() {
root_empty,
"prospective root for batch removal did not match actual root",
);
smt.apply_mutations(mutations).unwrap();
let old_root = smt.root();
let revert = apply_mutations(&mut smt, mutations);
assert_eq!(smt.root(), root_empty, "mutations before and after apply did not match");
assert_eq!(revert.old_root, smt.root(), "reverse mutations old root did not match");
assert_eq!(revert.root(), old_root, "reverse mutations new root did not match");
assert_eq!(
revert.new_pairs,
UnorderedMap::from_iter([(key_1, value_1), (key_2, value_2), (key_3, value_3)]),
"reverse mutations pairs did not match"
);
let pairs = vec![(key_3, value_3), (key_1, value_1), (key_2, value_2)];
let mutations = smt.compute_mutations(pairs);
@@ -447,6 +485,72 @@ fn test_prospective_insertion() {
assert_eq!(smt.root(), root_3);
}
#[test]
fn test_mutations_revert() {
let mut smt = Smt::default();
let key_1: RpoDigest = RpoDigest::from([ONE, ONE, ONE, Felt::new(1)]);
let key_2: RpoDigest =
RpoDigest::from([2_u32.into(), 2_u32.into(), 2_u32.into(), Felt::new(2)]);
let key_3: RpoDigest =
RpoDigest::from([0_u32.into(), 0_u32.into(), 0_u32.into(), Felt::new(3)]);
let value_1 = [ONE; WORD_SIZE];
let value_2 = [2_u32.into(); WORD_SIZE];
let value_3 = [3_u32.into(); WORD_SIZE];
smt.insert(key_1, value_1);
smt.insert(key_2, value_2);
let mutations =
smt.compute_mutations(vec![(key_1, EMPTY_WORD), (key_2, value_1), (key_3, value_3)]);
let original = smt.clone();
let revert = smt.apply_mutations_with_reversion(mutations).unwrap();
assert_eq!(revert.old_root, smt.root(), "reverse mutations old root did not match");
assert_eq!(revert.root(), original.root(), "reverse mutations new root did not match");
smt.apply_mutations(revert).unwrap();
assert_eq!(smt, original, "SMT with applied revert mutations did not match original SMT");
}
#[test]
fn test_mutation_set_serialization() {
let mut smt = Smt::default();
let key_1: RpoDigest = RpoDigest::from([ONE, ONE, ONE, Felt::new(1)]);
let key_2: RpoDigest =
RpoDigest::from([2_u32.into(), 2_u32.into(), 2_u32.into(), Felt::new(2)]);
let key_3: RpoDigest =
RpoDigest::from([0_u32.into(), 0_u32.into(), 0_u32.into(), Felt::new(3)]);
let value_1 = [ONE; WORD_SIZE];
let value_2 = [2_u32.into(); WORD_SIZE];
let value_3 = [3_u32.into(); WORD_SIZE];
smt.insert(key_1, value_1);
smt.insert(key_2, value_2);
let mutations =
smt.compute_mutations(vec![(key_1, EMPTY_WORD), (key_2, value_1), (key_3, value_3)]);
let serialized = mutations.to_bytes();
let deserialized =
MutationSet::<SMT_DEPTH, RpoDigest, Word>::read_from_bytes(&serialized).unwrap();
assert_eq!(deserialized, mutations, "deserialized mutations did not match original");
let revert = smt.apply_mutations_with_reversion(mutations).unwrap();
let serialized = revert.to_bytes();
let deserialized =
MutationSet::<SMT_DEPTH, RpoDigest, Word>::read_from_bytes(&serialized).unwrap();
assert_eq!(deserialized, revert, "deserialized mutations did not match original");
}
/// Tests that 2 key-value pairs stored in the same leaf have the same path
#[test]
fn test_smt_path_to_keys_in_same_leaf_are_equal() {
@@ -499,21 +603,21 @@ fn test_smt_get_value() {
/// Tests that `entries()` works as expected
#[test]
fn test_smt_entries() {
let key_1: RpoDigest = RpoDigest::from([ONE, ONE, ONE, ONE]);
let key_2: RpoDigest = RpoDigest::from([2_u32, 2_u32, 2_u32, 2_u32]);
let key_1 = RpoDigest::from([ONE, ONE, ONE, ONE]);
let key_2 = RpoDigest::from([2_u32, 2_u32, 2_u32, 2_u32]);
let value_1 = [ONE; WORD_SIZE];
let value_2 = [2_u32.into(); WORD_SIZE];
let entries = [(key_1, value_1), (key_2, value_2)];
let smt = Smt::with_entries([(key_1, value_1), (key_2, value_2)]).unwrap();
let smt = Smt::with_entries(entries).unwrap();
let mut entries = smt.entries();
let mut expected = Vec::from_iter(entries);
expected.sort_by_key(|(k, _)| *k);
let mut actual: Vec<_> = smt.entries().cloned().collect();
actual.sort_by_key(|(k, _)| *k);
// Note: for simplicity, we assume the order `(k1,v1), (k2,v2)`. If a new implementation
// switches the order, it is OK to modify the order here as well.
assert_eq!(&(key_1, value_1), entries.next().unwrap());
assert_eq!(&(key_2, value_2), entries.next().unwrap());
assert!(entries.next().is_none());
assert_eq!(actual, expected);
}
/// Tests that `EMPTY_ROOT` constant generated in the `Smt` equals to the root of the empty tree of
@@ -602,3 +706,19 @@ fn build_multiple_leaf_node(kv_pairs: &[(RpoDigest, Word)]) -> RpoDigest {
Rpo256::hash_elements(&elements)
}
/// Applies mutations with and without reversion to the given SMT, comparing the resulting SMTs
/// and returning the mutation set for reversion.
fn apply_mutations(
smt: &mut Smt,
mutation_set: MutationSet<SMT_DEPTH, RpoDigest, Word>,
) -> MutationSet<SMT_DEPTH, RpoDigest, Word> {
let mut smt2 = smt.clone();
let reversion = smt.apply_mutations_with_reversion(mutation_set.clone()).unwrap();
smt2.apply_mutations(mutation_set).unwrap();
assert_eq!(&smt2, smt);
reversion
}

View File

@@ -1,4 +1,8 @@
use alloc::{collections::BTreeMap, vec::Vec};
use core::{hash::Hash, mem};
use num::Integer;
use winter_utils::{ByteReader, ByteWriter, Deserializable, DeserializationError, Serializable};
use super::{EmptySubtreeRoots, InnerNodeInfo, MerkleError, MerklePath, NodeIndex};
use crate::{
@@ -24,6 +28,15 @@ pub const SMT_MAX_DEPTH: u8 = 64;
// SPARSE MERKLE TREE
// ================================================================================================
/// A map whose keys are not guaranteed to be ordered.
#[cfg(feature = "smt_hashmaps")]
type UnorderedMap<K, V> = hashbrown::HashMap<K, V>;
#[cfg(not(feature = "smt_hashmaps"))]
type UnorderedMap<K, V> = alloc::collections::BTreeMap<K, V>;
type InnerNodes = UnorderedMap<NodeIndex, InnerNode>;
type Leaves<T> = UnorderedMap<u64, T>;
type NodeMutations = UnorderedMap<NodeIndex, NodeMutation>;
/// An abstract description of a sparse Merkle tree.
///
/// A sparse Merkle tree is a key-value map which also supports proving that a given value is indeed
@@ -40,12 +53,12 @@ pub const SMT_MAX_DEPTH: u8 = 64;
/// Every key maps to one leaf. If there are as many keys as there are leaves, then
/// [Self::Leaf] should be the same type as [Self::Value], as is the case with
/// [crate::merkle::SimpleSmt]. However, if there are more keys than leaves, then [`Self::Leaf`]
/// must accomodate all keys that map to the same leaf.
/// must accommodate all keys that map to the same leaf.
///
/// [SparseMerkleTree] currently doesn't support optimizations that compress Merkle proofs.
pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
/// The type for a key
type Key: Clone + Ord;
type Key: Clone + Ord + Eq + Hash;
/// The type for a value
type Value: Clone + PartialEq;
/// The type for a leaf
@@ -62,6 +75,17 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
// PROVIDED METHODS
// ---------------------------------------------------------------------------------------------
/// Creates a new sparse Merkle tree from an existing set of key-value pairs, in parallel.
#[cfg(feature = "concurrent")]
fn with_entries_par(entries: Vec<(Self::Key, Self::Value)>) -> Result<Self, MerkleError>
where
Self: Sized,
{
let (inner_nodes, leaves) = Self::build_subtrees(entries);
let root = inner_nodes.get(&NodeIndex::root()).unwrap().hash();
Self::from_raw_parts(inner_nodes, leaves, root)
}
/// Returns an opening of the leaf associated with `key`. Conceptually, an opening is a Merkle
/// path to the leaf, as well as the leaf itself.
fn open(&self, key: &Self::Key) -> Self::Opening {
@@ -133,9 +157,9 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
node_hash = Rpo256::merge(&[left, right]);
if node_hash == *EmptySubtreeRoots::entry(DEPTH, node_depth) {
// If a subtree is empty, when can remove the inner node, since it's equal to the
// If a subtree is empty, then can remove the inner node, since it's equal to the
// default value
self.remove_inner_node(index)
self.remove_inner_node(index);
} else {
self.insert_inner_node(index, InnerNode { left, right });
}
@@ -158,8 +182,8 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
use NodeMutation::*;
let mut new_root = self.root();
let mut new_pairs: BTreeMap<Self::Key, Self::Value> = Default::default();
let mut node_mutations: BTreeMap<NodeIndex, NodeMutation> = Default::default();
let mut new_pairs: UnorderedMap<Self::Key, Self::Value> = Default::default();
let mut node_mutations: NodeMutations = Default::default();
for (key, value) in kv_pairs {
// If the old value and the new value are the same, there is nothing to update.
@@ -241,7 +265,7 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
}
}
/// Apply the prospective mutations computed with [`SparseMerkleTree::compute_mutations()`] to
/// Applies the prospective mutations computed with [`SparseMerkleTree::compute_mutations()`] to
/// this tree.
///
/// # Errors
@@ -275,8 +299,12 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
for (index, mutation) in node_mutations {
match mutation {
Removal => self.remove_inner_node(index),
Addition(node) => self.insert_inner_node(index, node),
Removal => {
self.remove_inner_node(index);
},
Addition(node) => {
self.insert_inner_node(index, node);
},
}
}
@@ -289,9 +317,89 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
Ok(())
}
/// Applies the prospective mutations computed with [`SparseMerkleTree::compute_mutations()`] to
/// this tree and returns the reverse mutation set. Applying the reverse mutation sets to the
/// updated tree will revert the changes.
///
/// # Errors
/// If `mutations` was computed on a tree with a different root than this one, returns
/// [`MerkleError::ConflictingRoots`] with a two-item [`Vec`]. The first item is the root hash
/// the `mutations` were computed against, and the second item is the actual current root of
/// this tree.
fn apply_mutations_with_reversion(
&mut self,
mutations: MutationSet<DEPTH, Self::Key, Self::Value>,
) -> Result<MutationSet<DEPTH, Self::Key, Self::Value>, MerkleError>
where
Self: Sized,
{
use NodeMutation::*;
let MutationSet {
old_root,
node_mutations,
new_pairs,
new_root,
} = mutations;
// Guard against accidentally trying to apply mutations that were computed against a
// different tree, including a stale version of this tree.
if old_root != self.root() {
return Err(MerkleError::ConflictingRoots {
expected_root: self.root(),
actual_root: old_root,
});
}
let mut reverse_mutations = NodeMutations::new();
for (index, mutation) in node_mutations {
match mutation {
Removal => {
if let Some(node) = self.remove_inner_node(index) {
reverse_mutations.insert(index, Addition(node));
}
},
Addition(node) => {
if let Some(old_node) = self.insert_inner_node(index, node) {
reverse_mutations.insert(index, Addition(old_node));
} else {
reverse_mutations.insert(index, Removal);
}
},
}
}
let mut reverse_pairs = UnorderedMap::new();
for (key, value) in new_pairs {
if let Some(old_value) = self.insert_value(key.clone(), value) {
reverse_pairs.insert(key, old_value);
} else {
reverse_pairs.insert(key, Self::EMPTY_VALUE);
}
}
self.set_root(new_root);
Ok(MutationSet {
old_root: new_root,
node_mutations: reverse_mutations,
new_pairs: reverse_pairs,
new_root: old_root,
})
}
// REQUIRED METHODS
// ---------------------------------------------------------------------------------------------
/// Construct this type from already computed leaves and nodes. The caller ensures passed
/// arguments are correct and consistent with each other.
fn from_raw_parts(
inner_nodes: InnerNodes,
leaves: Leaves<Self::Leaf>,
root: RpoDigest,
) -> Result<Self, MerkleError>
where
Self: Sized;
/// The root of the tree
fn root(&self) -> RpoDigest;
@@ -302,10 +410,10 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
fn get_inner_node(&self, index: NodeIndex) -> InnerNode;
/// Inserts an inner node at the given index
fn insert_inner_node(&mut self, index: NodeIndex, inner_node: InnerNode);
fn insert_inner_node(&mut self, index: NodeIndex, inner_node: InnerNode) -> Option<InnerNode>;
/// Removes an inner node at the given index
fn remove_inner_node(&mut self, index: NodeIndex);
fn remove_inner_node(&mut self, index: NodeIndex) -> Option<InnerNode>;
/// Inserts a leaf node, and returns the value at the key if it already exists
fn insert_value(&mut self, key: Self::Key, value: Self::Value) -> Option<Self::Value>;
@@ -341,18 +449,137 @@ pub(crate) trait SparseMerkleTree<const DEPTH: u8> {
/// Maps a key to a leaf index
fn key_to_leaf_index(key: &Self::Key) -> LeafIndex<DEPTH>;
/// Constructs a single leaf from an arbitrary amount of key-value pairs.
/// Those pairs must all have the same leaf index.
fn pairs_to_leaf(pairs: Vec<(Self::Key, Self::Value)>) -> Self::Leaf;
/// Maps a (MerklePath, Self::Leaf) to an opening.
///
/// The length of `path` is guaranteed to be equal to `DEPTH`
fn path_and_leaf_to_opening(path: MerklePath, leaf: Self::Leaf) -> Self::Opening;
/// Performs the initial transforms for constructing a [`SparseMerkleTree`] by composing
/// subtrees. In other words, this function takes the key-value inputs to the tree, and produces
/// the inputs to feed into [`build_subtree()`].
///
/// `pairs` *must* already be sorted **by leaf index column**, not simply sorted by key. If
/// `pairs` is not correctly sorted, the returned computations will be incorrect.
///
/// # Panics
/// With debug assertions on, this function panics if it detects that `pairs` is not correctly
/// sorted. Without debug assertions, the returned computations will be incorrect.
fn sorted_pairs_to_leaves(
pairs: Vec<(Self::Key, Self::Value)>,
) -> PairComputations<u64, Self::Leaf> {
debug_assert!(pairs.is_sorted_by_key(|(key, _)| Self::key_to_leaf_index(key).value()));
let mut accumulator: PairComputations<u64, Self::Leaf> = Default::default();
let mut accumulated_leaves: Vec<SubtreeLeaf> = Vec::with_capacity(pairs.len() / 2);
// As we iterate, we'll keep track of the kv-pairs we've seen so far that correspond to a
// single leaf. When we see a pair that's in a different leaf, we'll swap these pairs
// out and store them in our accumulated leaves.
let mut current_leaf_buffer: Vec<(Self::Key, Self::Value)> = Default::default();
let mut iter = pairs.into_iter().peekable();
while let Some((key, value)) = iter.next() {
let col = Self::key_to_leaf_index(&key).index.value();
let peeked_col = iter.peek().map(|(key, _v)| {
let index = Self::key_to_leaf_index(key);
let next_col = index.index.value();
// We panic if `pairs` is not sorted by column.
debug_assert!(next_col >= col);
next_col
});
current_leaf_buffer.push((key, value));
// If the next pair is the same column as this one, then we're done after adding this
// pair to the buffer.
if peeked_col == Some(col) {
continue;
}
// Otherwise, the next pair is a different column, or there is no next pair. Either way
// it's time to swap out our buffer.
let leaf_pairs = mem::take(&mut current_leaf_buffer);
let leaf = Self::pairs_to_leaf(leaf_pairs);
let hash = Self::hash_leaf(&leaf);
accumulator.nodes.insert(col, leaf);
accumulated_leaves.push(SubtreeLeaf { col, hash });
debug_assert!(current_leaf_buffer.is_empty());
}
// TODO: determine if there is any notable performance difference between computing
// subtree boundaries after the fact as an iterator adapter (like this), versus computing
// subtree boundaries as we go. Either way this function is only used at the beginning of a
// parallel construction, so it should not be a critical path.
accumulator.leaves = SubtreeLeavesIter::from_leaves(&mut accumulated_leaves).collect();
accumulator
}
/// Computes the raw parts for a new sparse Merkle tree from a set of key-value pairs.
///
/// `entries` need not be sorted. This function will sort them.
#[cfg(feature = "concurrent")]
fn build_subtrees(
mut entries: Vec<(Self::Key, Self::Value)>,
) -> (InnerNodes, Leaves<Self::Leaf>) {
entries.sort_by_key(|item| {
let index = Self::key_to_leaf_index(&item.0);
index.value()
});
Self::build_subtrees_from_sorted_entries(entries)
}
/// Computes the raw parts for a new sparse Merkle tree from a set of key-value pairs.
///
/// This function is mostly an implementation detail of
/// [`SparseMerkleTree::with_entries_par()`].
#[cfg(feature = "concurrent")]
fn build_subtrees_from_sorted_entries(
entries: Vec<(Self::Key, Self::Value)>,
) -> (InnerNodes, Leaves<Self::Leaf>) {
use rayon::prelude::*;
let mut accumulated_nodes: InnerNodes = Default::default();
let PairComputations {
leaves: mut leaf_subtrees,
nodes: initial_leaves,
} = Self::sorted_pairs_to_leaves(entries);
for current_depth in (SUBTREE_DEPTH..=DEPTH).step_by(SUBTREE_DEPTH as usize).rev() {
let (nodes, mut subtree_roots): (Vec<BTreeMap<_, _>>, Vec<SubtreeLeaf>) = leaf_subtrees
.into_par_iter()
.map(|subtree| {
debug_assert!(subtree.is_sorted());
debug_assert!(!subtree.is_empty());
let (nodes, subtree_root) = build_subtree(subtree, DEPTH, current_depth);
(nodes, subtree_root)
})
.unzip();
leaf_subtrees = SubtreeLeavesIter::from_leaves(&mut subtree_roots).collect();
accumulated_nodes.extend(nodes.into_iter().flatten());
debug_assert!(!leaf_subtrees.is_empty());
}
(accumulated_nodes, initial_leaves)
}
}
// INNER NODE
// ================================================================================================
/// This struct is public so functions returning it can be used in `benches/`, but is otherwise not
/// part of the public API.
#[doc(hidden)]
#[derive(Debug, Default, Clone, PartialEq, Eq)]
#[cfg_attr(feature = "serde", derive(serde::Deserialize, serde::Serialize))]
pub(crate) struct InnerNode {
pub struct InnerNode {
pub left: RpoDigest,
pub right: RpoDigest,
}
@@ -416,25 +643,37 @@ impl<const DEPTH: u8> TryFrom<NodeIndex> for LeafIndex<DEPTH> {
}
}
impl<const DEPTH: u8> Serializable for LeafIndex<DEPTH> {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
self.index.write_into(target);
}
}
impl<const DEPTH: u8> Deserializable for LeafIndex<DEPTH> {
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
Ok(Self { index: source.read()? })
}
}
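A minimal round-trip sketch for the new `LeafIndex` (de)serialization; the index value is hypothetical and `LeafIndex::new` is assumed to be the existing fallible constructor.
use miden_crypto::{
    merkle::{LeafIndex, SMT_DEPTH},
    utils::{Deserializable, Serializable},
};

fn leaf_index_round_trip() {
    let index = LeafIndex::<SMT_DEPTH>::new(42).unwrap();
    let bytes = index.to_bytes();
    let decoded = LeafIndex::<SMT_DEPTH>::read_from_bytes(&bytes).unwrap();
    assert_eq!(decoded, index);
}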
// MUTATIONS
// ================================================================================================
/// A change to an inner node of a [`SparseMerkleTree`] that hasn't yet been applied.
/// A change to an inner node of a sparse Merkle tree that hasn't yet been applied.
/// [`MutationSet`] stores this type in relation to a [`NodeIndex`] to keep track of what changes
/// need to occur at which node indices.
#[derive(Debug, Clone, PartialEq, Eq)]
pub(crate) enum NodeMutation {
/// Corresponds to [`SparseMerkleTree::remove_inner_node()`].
pub enum NodeMutation {
/// Node needs to be removed.
Removal,
/// Corresponds to [`SparseMerkleTree::insert_inner_node()`].
/// Node needs to be inserted.
Addition(InnerNode),
}
/// Represents a group of prospective mutations to a `SparseMerkleTree`, created by
/// `SparseMerkleTree::compute_mutations()`, and that can be applied with
/// `SparseMerkleTree::apply_mutations()`.
#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub struct MutationSet<const DEPTH: u8, K, V> {
#[derive(Debug, Clone, Default, PartialEq, Eq)]
pub struct MutationSet<const DEPTH: u8, K: Eq + Hash, V> {
/// The root of the Merkle tree this MutationSet is for, recorded at the time
/// [`SparseMerkleTree::compute_mutations()`] was called. Exists to guard against applying
/// mutations to the wrong tree or applying stale mutations to a tree that has since changed.
@@ -444,21 +683,335 @@ pub struct MutationSet<const DEPTH: u8, K, V> {
/// index overlayed, if any. Each [`NodeMutation::Addition`] corresponds to a
/// [`SparseMerkleTree::insert_inner_node()`] call, and each [`NodeMutation::Removal`]
/// corresponds to a [`SparseMerkleTree::remove_inner_node()`] call.
node_mutations: BTreeMap<NodeIndex, NodeMutation>,
node_mutations: NodeMutations,
/// The set of top-level key-value pairs we're prospectively adding to the tree, including
/// adding empty values. The "effective" value for a key is the value in this BTreeMap, falling
/// back to the existing value in the Merkle tree. Each entry corresponds to a
/// [`SparseMerkleTree::insert_value()`] call.
new_pairs: BTreeMap<K, V>,
new_pairs: UnorderedMap<K, V>,
/// The calculated root for the Merkle tree, given these mutations. Publicly retrievable with
/// [`MutationSet::root()`]. Corresponds to a [`SparseMerkleTree::set_root()`] call.
new_root: RpoDigest,
}
impl<const DEPTH: u8, K, V> MutationSet<DEPTH, K, V> {
/// Queries the root that was calculated during `SparseMerkleTree::compute_mutations()`. See
impl<const DEPTH: u8, K: Eq + Hash, V> MutationSet<DEPTH, K, V> {
/// Returns the SMT root that was calculated during `SparseMerkleTree::compute_mutations()`. See
/// that method for more information.
pub fn root(&self) -> RpoDigest {
self.new_root
}
/// Returns the SMT root before the mutations were applied.
pub fn old_root(&self) -> RpoDigest {
self.old_root
}
/// Returns the set of inner nodes that need to be removed or added.
pub fn node_mutations(&self) -> &NodeMutations {
&self.node_mutations
}
/// Returns the set of top-level key-value pairs that need to be added, updated or deleted
/// (i.e. set to `EMPTY_WORD`).
pub fn new_pairs(&self) -> &UnorderedMap<K, V> {
&self.new_pairs
}
}
// SERIALIZATION
// ================================================================================================
impl Serializable for InnerNode {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
target.write(self.left);
target.write(self.right);
}
}
impl Deserializable for InnerNode {
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
let left = source.read()?;
let right = source.read()?;
Ok(Self { left, right })
}
}
impl Serializable for NodeMutation {
fn write_into<W: ByteWriter>(&self, target: &mut W) {
match self {
NodeMutation::Removal => target.write_bool(false),
NodeMutation::Addition(inner_node) => {
target.write_bool(true);
inner_node.write_into(target);
},
}
}
}
impl Deserializable for NodeMutation {
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
if source.read_bool()? {
let inner_node = source.read()?;
return Ok(NodeMutation::Addition(inner_node));
}
Ok(NodeMutation::Removal)
}
}
impl<const DEPTH: u8, K: Serializable + Eq + Hash, V: Serializable> Serializable
for MutationSet<DEPTH, K, V>
{
fn write_into<W: ByteWriter>(&self, target: &mut W) {
target.write(self.old_root);
target.write(self.new_root);
let inner_removals: Vec<_> = self
.node_mutations
.iter()
.filter(|(_, value)| matches!(value, NodeMutation::Removal))
.map(|(key, _)| key)
.collect();
let inner_additions: Vec<_> = self
.node_mutations
.iter()
.filter_map(|(key, value)| match value {
NodeMutation::Addition(node) => Some((key, node)),
_ => None,
})
.collect();
target.write(inner_removals);
target.write(inner_additions);
target.write_usize(self.new_pairs.len());
target.write_many(&self.new_pairs);
}
}
impl<const DEPTH: u8, K: Deserializable + Ord + Eq + Hash, V: Deserializable> Deserializable
for MutationSet<DEPTH, K, V>
{
fn read_from<R: ByteReader>(source: &mut R) -> Result<Self, DeserializationError> {
let old_root = source.read()?;
let new_root = source.read()?;
let inner_removals: Vec<NodeIndex> = source.read()?;
let inner_additions: Vec<(NodeIndex, InnerNode)> = source.read()?;
let node_mutations = NodeMutations::from_iter(
inner_removals.into_iter().map(|index| (index, NodeMutation::Removal)).chain(
inner_additions
.into_iter()
.map(|(index, node)| (index, NodeMutation::Addition(node))),
),
);
let num_new_pairs = source.read_usize()?;
let new_pairs = source.read_many(num_new_pairs)?;
let new_pairs = UnorderedMap::from_iter(new_pairs);
Ok(Self {
old_root,
node_mutations,
new_pairs,
new_root,
})
}
}
// SUBTREES
// ================================================================================================
/// A subtree is of depth 8.
const SUBTREE_DEPTH: u8 = 8;
/// A depth-8 subtree contains 256 "columns" that can possibly be occupied.
const COLS_PER_SUBTREE: u64 = u64::pow(2, SUBTREE_DEPTH as u32);
/// Helper struct for organizing the data we care about when computing Merkle subtrees.
///
/// Note that these represent "conceptual" leaves of some subtree, not necessarily
/// the leaf type for the sparse Merkle tree.
#[derive(Debug, Copy, Clone, PartialEq, Eq, PartialOrd, Ord, Default)]
pub struct SubtreeLeaf {
/// The 'value' field of [`NodeIndex`]. When computing a subtree, the depth is already known.
pub col: u64,
/// The hash of the node this `SubtreeLeaf` represents.
pub hash: RpoDigest,
}
/// Helper struct to organize the return value of [`SparseMerkleTree::sorted_pairs_to_leaves()`].
#[derive(Debug, Clone)]
pub(crate) struct PairComputations<K, L> {
/// Literal leaves to be added to the sparse Merkle tree's internal mapping.
pub nodes: UnorderedMap<K, L>,
/// "Conceptual" leaves that will be used for computations.
pub leaves: Vec<Vec<SubtreeLeaf>>,
}
// Derive requires `L` to impl Default, even though we don't actually need that.
impl<K, L> Default for PairComputations<K, L> {
fn default() -> Self {
Self {
nodes: Default::default(),
leaves: Default::default(),
}
}
}
#[derive(Debug)]
struct SubtreeLeavesIter<'s> {
leaves: core::iter::Peekable<alloc::vec::Drain<'s, SubtreeLeaf>>,
}
impl<'s> SubtreeLeavesIter<'s> {
fn from_leaves(leaves: &'s mut Vec<SubtreeLeaf>) -> Self {
// TODO: determine if there is any notable performance difference between taking a Vec,
// which may need flattening first, vs storing a `Box<dyn Iterator<Item = SubtreeLeaf>>`.
// The latter may have self-referential properties that are impossible to express in purely
// safe Rust.
Self { leaves: leaves.drain(..).peekable() }
}
}
impl Iterator for SubtreeLeavesIter<'_> {
type Item = Vec<SubtreeLeaf>;
/// Each `next()` collects an entire subtree.
fn next(&mut self) -> Option<Vec<SubtreeLeaf>> {
let mut subtree: Vec<SubtreeLeaf> = Default::default();
let mut last_subtree_col = 0;
while let Some(leaf) = self.leaves.peek() {
last_subtree_col = u64::max(1, last_subtree_col);
let is_exact_multiple = Integer::is_multiple_of(&last_subtree_col, &COLS_PER_SUBTREE);
let next_subtree_col = if is_exact_multiple {
u64::next_multiple_of(last_subtree_col + 1, COLS_PER_SUBTREE)
} else {
last_subtree_col.next_multiple_of(COLS_PER_SUBTREE)
};
last_subtree_col = leaf.col;
if leaf.col < next_subtree_col {
subtree.push(self.leaves.next().unwrap());
} else if subtree.is_empty() {
continue;
} else {
break;
}
}
if subtree.is_empty() {
debug_assert!(self.leaves.peek().is_none());
return None;
}
Some(subtree)
}
}
// HELPER FUNCTIONS
// ================================================================================================
/// Builds Merkle nodes from a bottom layer of "leaves" -- represented by a horizontal index and
/// the hash of the leaf at that index. `leaves` *must* be sorted by horizontal index, and
/// `leaves` must not contain more than one depth-8 subtree's worth of leaves.
///
/// This function will then calculate the inner nodes above each leaf for 8 layers, as well as
/// the "leaves" for the next 8-deep subtree, so this function can effectively be chained into
/// itself.
///
/// # Panics
/// With debug assertions on, this function panics under invalid inputs: if `leaves` contains
/// more entries than can fit in a depth-8 subtree, if `leaves` contains leaves belonging to
/// different depth-8 subtrees, if `bottom_depth` is lower in the tree than the specified
/// maximum depth (`DEPTH`), or if `leaves` is not sorted.
fn build_subtree(
mut leaves: Vec<SubtreeLeaf>,
tree_depth: u8,
bottom_depth: u8,
) -> (BTreeMap<NodeIndex, InnerNode>, SubtreeLeaf) {
debug_assert!(bottom_depth <= tree_depth);
debug_assert!(Integer::is_multiple_of(&bottom_depth, &SUBTREE_DEPTH));
debug_assert!(leaves.len() <= usize::pow(2, SUBTREE_DEPTH as u32));
let subtree_root = bottom_depth - SUBTREE_DEPTH;
let mut inner_nodes: BTreeMap<NodeIndex, InnerNode> = Default::default();
let mut next_leaves: Vec<SubtreeLeaf> = Vec::with_capacity(leaves.len() / 2);
for next_depth in (subtree_root..bottom_depth).rev() {
debug_assert!(next_depth <= bottom_depth);
// `next_depth` is the stuff we're making.
// `current_depth` is the stuff we have.
let current_depth = next_depth + 1;
let mut iter = leaves.drain(..).peekable();
while let Some(first) = iter.next() {
// On non-continuous iterations, including the first iteration, `first` may be
// either a left node or a right node. On subsequent continuous iterations we
// consume a left node and its right sibling together, calling `iter.next()`
// twice per pair.
let is_right = first.col.is_odd();
let (left, right) = if is_right {
// Discontinuous iteration: we have no left node, so it must be empty.
let left = SubtreeLeaf {
col: first.col - 1,
hash: *EmptySubtreeRoots::entry(tree_depth, current_depth),
};
let right = first;
(left, right)
} else {
let left = first;
let right_col = first.col + 1;
let right = match iter.peek().copied() {
Some(SubtreeLeaf { col, .. }) if col == right_col => {
// Our inputs must be sorted.
debug_assert!(left.col <= col);
// The next leaf in the iterator is our sibling. Use it and consume it!
iter.next().unwrap()
},
// Otherwise, the leaves don't contain our sibling, so our sibling must be
// empty.
_ => SubtreeLeaf {
col: right_col,
hash: *EmptySubtreeRoots::entry(tree_depth, current_depth),
},
};
(left, right)
};
let index = NodeIndex::new_unchecked(current_depth, left.col).parent();
let node = InnerNode { left: left.hash, right: right.hash };
let hash = node.hash();
let &equivalent_empty_hash = EmptySubtreeRoots::entry(tree_depth, next_depth);
// If this hash is empty, then it doesn't become a new inner node, nor does it count
// as a leaf for the next depth.
if hash != equivalent_empty_hash {
inner_nodes.insert(index, node);
next_leaves.push(SubtreeLeaf { col: index.value(), hash });
}
}
// Stop borrowing `leaves`, so we can swap it.
// The iterator is empty at this point anyway.
drop(iter);
// After each depth, consider the stuff we just made the new "leaves", and empty the
// other collection.
mem::swap(&mut leaves, &mut next_leaves);
}
debug_assert_eq!(leaves.len(), 1);
let root = leaves.pop().unwrap();
(inner_nodes, root)
}
#[cfg(feature = "internal")]
pub fn build_subtree_for_bench(
leaves: Vec<SubtreeLeaf>,
tree_depth: u8,
bottom_depth: u8,
) -> (BTreeMap<NodeIndex, InnerNode>, SubtreeLeaf) {
build_subtree(leaves, tree_depth, bottom_depth)
}
// TESTS
// ================================================================================================
#[cfg(test)]
mod tests;

View File

@@ -1,8 +1,8 @@
use alloc::collections::{BTreeMap, BTreeSet};
use alloc::{collections::BTreeSet, vec::Vec};
use super::{
super::ValuePath, EmptySubtreeRoots, InnerNode, InnerNodeInfo, LeafIndex, MerkleError,
MerklePath, MutationSet, NodeIndex, RpoDigest, SparseMerkleTree, Word, EMPTY_WORD,
super::ValuePath, EmptySubtreeRoots, InnerNode, InnerNodeInfo, InnerNodes, LeafIndex,
MerkleError, MerklePath, MutationSet, NodeIndex, RpoDigest, SparseMerkleTree, Word, EMPTY_WORD,
SMT_MAX_DEPTH, SMT_MIN_DEPTH,
};
@@ -12,6 +12,8 @@ mod tests;
// SPARSE MERKLE TREE
// ================================================================================================
type Leaves = super::Leaves<Word>;
/// A sparse Merkle tree with 64-bit keys and 4-element leaf values, without compaction.
///
/// The root of the tree is recomputed on each new leaf update.
@@ -19,8 +21,8 @@ mod tests;
#[cfg_attr(feature = "serde", derive(serde::Deserialize, serde::Serialize))]
pub struct SimpleSmt<const DEPTH: u8> {
root: RpoDigest,
leaves: BTreeMap<u64, Word>,
inner_nodes: BTreeMap<NodeIndex, InnerNode>,
inner_nodes: InnerNodes,
leaves: Leaves,
}
impl<const DEPTH: u8> SimpleSmt<DEPTH> {
@@ -51,8 +53,8 @@ impl<const DEPTH: u8> SimpleSmt<DEPTH> {
Ok(Self {
root,
leaves: BTreeMap::new(),
inner_nodes: BTreeMap::new(),
inner_nodes: Default::default(),
leaves: Default::default(),
})
}
@@ -97,6 +99,19 @@ impl<const DEPTH: u8> SimpleSmt<DEPTH> {
Ok(tree)
}
/// Returns a new [`SimpleSmt`] instantiated from already computed leaves and nodes.
///
/// This function performs minimal consistency checking. It is the caller's responsibility to
/// ensure the passed arguments are correct and consistent with each other.
///
/// # Panics
/// With debug assertions on, this function panics if `root` does not match the root node in
/// `inner_nodes`.
pub fn from_raw_parts(inner_nodes: InnerNodes, leaves: Leaves, root: RpoDigest) -> Self {
// Our particular implementation of `from_raw_parts()` never returns `Err`.
<Self as SparseMerkleTree<DEPTH>>::from_raw_parts(inner_nodes, leaves, root).unwrap()
}
/// Wrapper around [`SimpleSmt::with_leaves`] which inserts leaves at contiguous indices
/// starting at index 0.
pub fn with_contiguous_leaves(
@@ -221,7 +236,7 @@ impl<const DEPTH: u8> SimpleSmt<DEPTH> {
<Self as SparseMerkleTree<DEPTH>>::compute_mutations(self, kv_pairs)
}
/// Apply the prospective mutations computed with [`SimpleSmt::compute_mutations()`] to this
/// Applies the prospective mutations computed with [`SimpleSmt::compute_mutations()`] to this
/// tree.
///
/// # Errors
@@ -236,6 +251,23 @@ impl<const DEPTH: u8> SimpleSmt<DEPTH> {
<Self as SparseMerkleTree<DEPTH>>::apply_mutations(self, mutations)
}
/// Applies the prospective mutations computed with [`SimpleSmt::compute_mutations()`] to
/// this tree and returns the reverse mutation set.
///
/// Applying the reverse mutation sets to the updated tree will revert the changes.
///
/// # Errors
/// If `mutations` was computed on a tree with a different root than this one, returns
/// [`MerkleError::ConflictingRoots`] with a two-item [`alloc::vec::Vec`]. The first item is the
/// root hash the `mutations` were computed against, and the second item is the actual
/// current root of this tree.
pub fn apply_mutations_with_reversion(
&mut self,
mutations: MutationSet<DEPTH, LeafIndex<DEPTH>, Word>,
) -> Result<MutationSet<DEPTH, LeafIndex<DEPTH>, Word>, MerkleError> {
<Self as SparseMerkleTree<DEPTH>>::apply_mutations_with_reversion(self, mutations)
}
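For orientation only (not part of this diff): a minimal sketch of the compute/apply/revert round trip described in the doc comment above, on a depth-64 `SimpleSmt`; the key and value are illustrative.
// Hypothetical example: apply mutations while keeping a reversion set, then undo them.
use miden_crypto::merkle::{LeafIndex, MerkleError, SimpleSmt};
use miden_crypto::{Word, ONE, ZERO};

fn mutation_round_trip() -> Result<(), MerkleError> {
    let mut tree = SimpleSmt::<64>::new()?;
    let original_root = tree.root();

    let key = LeafIndex::<64>::new(42)?;
    let value: Word = [ONE, ONE, ONE, ZERO];

    // Compute prospective changes, then apply them while capturing the reverse mutation set.
    let mutations = tree.compute_mutations([(key, value)]);
    let reversion = tree.apply_mutations_with_reversion(mutations)?;
    assert_ne!(tree.root(), original_root);

    // Applying the reverse mutation set restores the previous state.
    tree.apply_mutations(reversion)?;
    assert_eq!(tree.root(), original_root);
    Ok(())
}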
/// Inserts a subtree at the specified index. The depth at which the subtree is inserted is
/// computed as `DEPTH - SUBTREE_DEPTH`.
///
@@ -306,6 +338,19 @@ impl<const DEPTH: u8> SparseMerkleTree<DEPTH> for SimpleSmt<DEPTH> {
const EMPTY_VALUE: Self::Value = EMPTY_WORD;
const EMPTY_ROOT: RpoDigest = *EmptySubtreeRoots::entry(DEPTH, 0);
fn from_raw_parts(
inner_nodes: InnerNodes,
leaves: Leaves,
root: RpoDigest,
) -> Result<Self, MerkleError> {
if cfg!(debug_assertions) {
let root_node = inner_nodes.get(&NodeIndex::root()).unwrap();
assert_eq!(root_node.hash(), root);
}
Ok(Self { root, inner_nodes, leaves })
}
fn root(&self) -> RpoDigest {
self.root
}
@@ -321,12 +366,12 @@ impl<const DEPTH: u8> SparseMerkleTree<DEPTH> for SimpleSmt<DEPTH> {
.unwrap_or_else(|| EmptySubtreeRoots::get_inner_node(DEPTH, index.depth()))
}
fn insert_inner_node(&mut self, index: NodeIndex, inner_node: InnerNode) {
self.inner_nodes.insert(index, inner_node);
fn insert_inner_node(&mut self, index: NodeIndex, inner_node: InnerNode) -> Option<InnerNode> {
self.inner_nodes.insert(index, inner_node)
}
fn remove_inner_node(&mut self, index: NodeIndex) {
let _ = self.inner_nodes.remove(&index);
fn remove_inner_node(&mut self, index: NodeIndex) -> Option<InnerNode> {
self.inner_nodes.remove(&index)
}
fn insert_value(&mut self, key: LeafIndex<DEPTH>, value: Word) -> Option<Word> {
@@ -370,4 +415,11 @@ impl<const DEPTH: u8> SparseMerkleTree<DEPTH> for SimpleSmt<DEPTH> {
fn path_and_leaf_to_opening(path: MerklePath, leaf: Word) -> ValuePath {
(path, leaf).into()
}
fn pairs_to_leaf(mut pairs: Vec<(LeafIndex<DEPTH>, Word)>) -> Word {
// SimpleSmt can't have more than one value per key.
assert_eq!(pairs.len(), 1);
let (_key, value) = pairs.pop().unwrap();
value
}
}

View File

@@ -141,12 +141,15 @@ fn test_inner_node_iterator() -> Result<(), MerkleError> {
let l2n2 = tree.get_node(NodeIndex::make(2, 2))?;
let l2n3 = tree.get_node(NodeIndex::make(2, 3))?;
let nodes: Vec<InnerNodeInfo> = tree.inner_nodes().collect();
let expected = vec![
let mut nodes: Vec<InnerNodeInfo> = tree.inner_nodes().collect();
let mut expected = [
InnerNodeInfo { value: root, left: l1n0, right: l1n1 },
InnerNodeInfo { value: l1n0, left: l2n0, right: l2n1 },
InnerNodeInfo { value: l1n1, left: l2n2, right: l2n3 },
];
nodes.sort();
expected.sort();
assert_eq!(nodes, expected);
Ok(())

src/merkle/smt/tests.rs Normal file (417 lines)
View File

@@ -0,0 +1,417 @@
use alloc::{collections::BTreeMap, vec::Vec};
use super::{
build_subtree, InnerNode, LeafIndex, NodeIndex, PairComputations, SmtLeaf, SparseMerkleTree,
SubtreeLeaf, SubtreeLeavesIter, COLS_PER_SUBTREE, SUBTREE_DEPTH,
};
use crate::{
hash::rpo::RpoDigest,
merkle::{Smt, SMT_DEPTH},
Felt, Word, ONE,
};
fn smtleaf_to_subtree_leaf(leaf: &SmtLeaf) -> SubtreeLeaf {
SubtreeLeaf {
col: leaf.index().index.value(),
hash: leaf.hash(),
}
}
#[test]
fn test_sorted_pairs_to_leaves() {
let entries: Vec<(RpoDigest, Word)> = vec![
// Subtree 0.
(RpoDigest::new([ONE, ONE, ONE, Felt::new(16)]), [ONE; 4]),
(RpoDigest::new([ONE, ONE, ONE, Felt::new(17)]), [ONE; 4]),
// Leaf index collision.
(RpoDigest::new([ONE, ONE, Felt::new(10), Felt::new(20)]), [ONE; 4]),
(RpoDigest::new([ONE, ONE, Felt::new(20), Felt::new(20)]), [ONE; 4]),
// Subtree 1. Normal single leaf again.
(RpoDigest::new([ONE, ONE, ONE, Felt::new(400)]), [ONE; 4]), // Subtree boundary.
(RpoDigest::new([ONE, ONE, ONE, Felt::new(401)]), [ONE; 4]),
// Subtree 2. Another normal leaf.
(RpoDigest::new([ONE, ONE, ONE, Felt::new(1024)]), [ONE; 4]),
];
let control = Smt::with_entries_sequential(entries.clone()).unwrap();
let control_leaves: Vec<SmtLeaf> = {
let mut entries_iter = entries.iter().cloned();
let mut next_entry = || entries_iter.next().unwrap();
let control_leaves = vec![
// Subtree 0.
SmtLeaf::Single(next_entry()),
SmtLeaf::Single(next_entry()),
SmtLeaf::new_multiple(vec![next_entry(), next_entry()]).unwrap(),
// Subtree 1.
SmtLeaf::Single(next_entry()),
SmtLeaf::Single(next_entry()),
// Subtree 2.
SmtLeaf::Single(next_entry()),
];
assert_eq!(entries_iter.next(), None);
control_leaves
};
let control_subtree_leaves: Vec<Vec<SubtreeLeaf>> = {
let mut control_leaves_iter = control_leaves.iter();
let mut next_leaf = || control_leaves_iter.next().unwrap();
let control_subtree_leaves: Vec<Vec<SubtreeLeaf>> = [
// Subtree 0.
vec![next_leaf(), next_leaf(), next_leaf()],
// Subtree 1.
vec![next_leaf(), next_leaf()],
// Subtree 2.
vec![next_leaf()],
]
.map(|subtree| subtree.into_iter().map(smtleaf_to_subtree_leaf).collect())
.to_vec();
assert_eq!(control_leaves_iter.next(), None);
control_subtree_leaves
};
let subtrees: PairComputations<u64, SmtLeaf> = Smt::sorted_pairs_to_leaves(entries);
// This will check that the hashes, columns, and subtree assignments all match.
assert_eq!(subtrees.leaves, control_subtree_leaves);
// Flattening and re-separating out the leaves into subtrees should have the same result.
let mut all_leaves: Vec<SubtreeLeaf> = subtrees.leaves.clone().into_iter().flatten().collect();
let re_grouped: Vec<Vec<_>> = SubtreeLeavesIter::from_leaves(&mut all_leaves).collect();
assert_eq!(subtrees.leaves, re_grouped);
// Then finally we might as well check the computed leaf nodes too.
let control_leaves: BTreeMap<u64, SmtLeaf> = control
.leaves()
.map(|(index, value)| (index.index.value(), value.clone()))
.collect();
for (column, test_leaf) in subtrees.nodes {
if test_leaf.is_empty() {
continue;
}
let control_leaf = control_leaves
.get(&column)
.unwrap_or_else(|| panic!("no leaf node found for column {column}"));
assert_eq!(control_leaf, &test_leaf);
}
}
// Helper for the below tests.
fn generate_entries(pair_count: u64) -> Vec<(RpoDigest, Word)> {
(0..pair_count)
.map(|i| {
let leaf_index = ((i as f64 / pair_count as f64) * (pair_count as f64)) as u64;
let key = RpoDigest::new([ONE, ONE, Felt::new(i), Felt::new(leaf_index)]);
let value = [ONE, ONE, ONE, Felt::new(i)];
(key, value)
})
.collect()
}
#[test]
fn test_single_subtree() {
// A single subtree's worth of leaves.
const PAIR_COUNT: u64 = COLS_PER_SUBTREE;
let entries = generate_entries(PAIR_COUNT);
let control = Smt::with_entries_sequential(entries.clone()).unwrap();
// `entries` should already be sorted by nature of how we constructed it.
let leaves = Smt::sorted_pairs_to_leaves(entries).leaves;
let leaves = leaves.into_iter().next().unwrap();
let (first_subtree, subtree_root) = build_subtree(leaves, SMT_DEPTH, SMT_DEPTH);
assert!(!first_subtree.is_empty());
// The inner nodes computed from that subtree should match the nodes in our control tree.
for (index, node) in first_subtree.into_iter() {
let control = control.get_inner_node(index);
assert_eq!(
control, node,
"subtree-computed node at index {index:?} does not match control",
);
}
// The root returned should also match the equivalent node in the control tree.
let control_root_index =
NodeIndex::new(SMT_DEPTH - SUBTREE_DEPTH, subtree_root.col).expect("Valid root index");
let control_root_node = control.get_inner_node(control_root_index);
let control_hash = control_root_node.hash();
assert_eq!(
control_hash, subtree_root.hash,
"Subtree-computed root at index {control_root_index:?} does not match control"
);
}
// Test not just that we can compute a subtree correctly, but that we can feed the results of one
// subtree into computing another. In other words, test that `build_subtree()` is correctly
// composable.
#[test]
fn test_two_subtrees() {
// Two subtrees' worth of leaves.
const PAIR_COUNT: u64 = COLS_PER_SUBTREE * 2;
let entries = generate_entries(PAIR_COUNT);
let control = Smt::with_entries_sequential(entries.clone()).unwrap();
let PairComputations { leaves, .. } = Smt::sorted_pairs_to_leaves(entries);
// With two subtrees' worth of leaves, we should have exactly two subtrees.
let [first, second]: [Vec<_>; 2] = leaves.try_into().unwrap();
assert_eq!(first.len() as u64, PAIR_COUNT / 2);
assert_eq!(first.len(), second.len());
let mut current_depth = SMT_DEPTH;
let mut next_leaves: Vec<SubtreeLeaf> = Default::default();
let (first_nodes, first_root) = build_subtree(first, SMT_DEPTH, current_depth);
next_leaves.push(first_root);
let (second_nodes, second_root) = build_subtree(second, SMT_DEPTH, current_depth);
next_leaves.push(second_root);
// All new inner nodes + the new subtree-leaves should be 512, for one depth-cycle.
let total_computed = first_nodes.len() + second_nodes.len() + next_leaves.len();
assert_eq!(total_computed as u64, PAIR_COUNT);
// Verify the computed nodes of both subtrees.
let computed_nodes = first_nodes.clone().into_iter().chain(second_nodes);
for (index, test_node) in computed_nodes {
let control_node = control.get_inner_node(index);
assert_eq!(
control_node, test_node,
"subtree-computed node at index {index:?} does not match control",
);
}
current_depth -= SUBTREE_DEPTH;
let (nodes, root_leaf) = build_subtree(next_leaves, SMT_DEPTH, current_depth);
assert_eq!(nodes.len(), SUBTREE_DEPTH as usize);
assert_eq!(root_leaf.col, 0);
for (index, test_node) in nodes {
let control_node = control.get_inner_node(index);
assert_eq!(
control_node, test_node,
"subtree-computed node at index {index:?} does not match control",
);
}
let index = NodeIndex::new(current_depth - SUBTREE_DEPTH, root_leaf.col).unwrap();
let control_root = control.get_inner_node(index).hash();
assert_eq!(control_root, root_leaf.hash, "Root mismatch");
}
#[test]
fn test_singlethreaded_subtrees() {
const PAIR_COUNT: u64 = COLS_PER_SUBTREE * 64;
let entries = generate_entries(PAIR_COUNT);
let control = Smt::with_entries_sequential(entries.clone()).unwrap();
let mut accumulated_nodes: BTreeMap<NodeIndex, InnerNode> = Default::default();
let PairComputations {
leaves: mut leaf_subtrees,
nodes: test_leaves,
} = Smt::sorted_pairs_to_leaves(entries);
for current_depth in (SUBTREE_DEPTH..=SMT_DEPTH).step_by(SUBTREE_DEPTH as usize).rev() {
// There's no flat_map_unzip(), so this is the best we can do.
let (nodes, mut subtree_roots): (Vec<BTreeMap<_, _>>, Vec<SubtreeLeaf>) = leaf_subtrees
.into_iter()
.enumerate()
.map(|(i, subtree)| {
// Pre-assertions.
assert!(
subtree.is_sorted(),
"subtree {i} at bottom-depth {current_depth} is not sorted",
);
assert!(
!subtree.is_empty(),
"subtree {i} at bottom-depth {current_depth} is empty!",
);
// Do actual things.
let (nodes, subtree_root) = build_subtree(subtree, SMT_DEPTH, current_depth);
// Post-assertions.
for (&index, test_node) in nodes.iter() {
let control_node = control.get_inner_node(index);
assert_eq!(
test_node, &control_node,
"depth {} subtree {}: test node does not match control at index {:?}",
current_depth, i, index,
);
}
(nodes, subtree_root)
})
.unzip();
// Update state between each depth iteration.
leaf_subtrees = SubtreeLeavesIter::from_leaves(&mut subtree_roots).collect();
accumulated_nodes.extend(nodes.into_iter().flatten());
assert!(!leaf_subtrees.is_empty(), "on depth {current_depth}");
}
// Make sure the true leaves match, first checking length and then checking each individual
// leaf.
let control_leaves: BTreeMap<_, _> = control.leaves().collect();
let control_leaves_len = control_leaves.len();
let test_leaves_len = test_leaves.len();
assert_eq!(test_leaves_len, control_leaves_len);
for (col, ref test_leaf) in test_leaves {
let index = LeafIndex::new_max_depth(col);
let &control_leaf = control_leaves.get(&index).unwrap();
assert_eq!(test_leaf, control_leaf, "test leaf at column {col} does not match control");
}
// Make sure the inner nodes match, checking length first and then each individual node.
let control_nodes_len = control.inner_nodes().count();
let test_nodes_len = accumulated_nodes.len();
assert_eq!(test_nodes_len, control_nodes_len);
for (index, test_node) in accumulated_nodes.clone() {
let control_node = control.get_inner_node(index);
assert_eq!(test_node, control_node, "test node does not match control at {index:?}");
}
// After the last iteration of the above for loop, we should have the new root node actually
// in two places: one in `accumulated_nodes`, and the other as the "next leaves" return from
// `build_subtree()`. So let's check both!
let control_root = control.get_inner_node(NodeIndex::root());
// That for loop should have left us with only one leaf subtree...
let [leaf_subtree]: [Vec<_>; 1] = leaf_subtrees.try_into().unwrap();
// which itself contains only one 'leaf'...
let [root_leaf]: [SubtreeLeaf; 1] = leaf_subtree.try_into().unwrap();
// which matches the expected root.
assert_eq!(control.root(), root_leaf.hash);
// Likewise `accumulated_nodes` should contain a node at the root index...
assert!(accumulated_nodes.contains_key(&NodeIndex::root()));
// and it should match our actual root.
let test_root = accumulated_nodes.get(&NodeIndex::root()).unwrap();
assert_eq!(control_root, *test_root);
// And of course the root we got from each place should match.
assert_eq!(control.root(), root_leaf.hash);
}
/// The parallel version of `test_singlethreaded_subtrees()`.
#[test]
#[cfg(feature = "concurrent")]
fn test_multithreaded_subtrees() {
use rayon::prelude::*;
const PAIR_COUNT: u64 = COLS_PER_SUBTREE * 64;
let entries = generate_entries(PAIR_COUNT);
let control = Smt::with_entries_sequential(entries.clone()).unwrap();
let mut accumulated_nodes: BTreeMap<NodeIndex, InnerNode> = Default::default();
let PairComputations {
leaves: mut leaf_subtrees,
nodes: test_leaves,
} = Smt::sorted_pairs_to_leaves(entries);
for current_depth in (SUBTREE_DEPTH..=SMT_DEPTH).step_by(SUBTREE_DEPTH as usize).rev() {
let (nodes, mut subtree_roots): (Vec<BTreeMap<_, _>>, Vec<SubtreeLeaf>) = leaf_subtrees
.into_par_iter()
.enumerate()
.map(|(i, subtree)| {
// Pre-assertions.
assert!(
subtree.is_sorted(),
"subtree {i} at bottom-depth {current_depth} is not sorted",
);
assert!(
!subtree.is_empty(),
"subtree {i} at bottom-depth {current_depth} is empty!",
);
let (nodes, subtree_root) = build_subtree(subtree, SMT_DEPTH, current_depth);
// Post-assertions.
for (&index, test_node) in nodes.iter() {
let control_node = control.get_inner_node(index);
assert_eq!(
test_node, &control_node,
"depth {} subtree {}: test node does not match control at index {:?}",
current_depth, i, index,
);
}
(nodes, subtree_root)
})
.unzip();
leaf_subtrees = SubtreeLeavesIter::from_leaves(&mut subtree_roots).collect();
accumulated_nodes.extend(nodes.into_iter().flatten());
assert!(!leaf_subtrees.is_empty(), "on depth {current_depth}");
}
// Make sure the true leaves match, checking length first and then each individual leaf.
let control_leaves: BTreeMap<_, _> = control.leaves().collect();
let control_leaves_len = control_leaves.len();
let test_leaves_len = test_leaves.len();
assert_eq!(test_leaves_len, control_leaves_len);
for (col, ref test_leaf) in test_leaves {
let index = LeafIndex::new_max_depth(col);
let &control_leaf = control_leaves.get(&index).unwrap();
assert_eq!(test_leaf, control_leaf);
}
// Make sure the inner nodes match, checking length first and then each individual node.
let control_nodes_len = control.inner_nodes().count();
let test_nodes_len = accumulated_nodes.len();
assert_eq!(test_nodes_len, control_nodes_len);
for (index, test_node) in accumulated_nodes.clone() {
let control_node = control.get_inner_node(index);
assert_eq!(test_node, control_node, "test node does not match control at {index:?}");
}
// After the last iteration of the above for loop, we should have the new root node actually
// in two places: one in `accumulated_nodes`, and the other as the "next leaves" return from
// `build_subtree()`. So let's check both!
let control_root = control.get_inner_node(NodeIndex::root());
// That for loop should have left us with only one leaf subtree...
let [leaf_subtree]: [_; 1] = leaf_subtrees.try_into().unwrap();
// which itself contains only one 'leaf'...
let [root_leaf]: [_; 1] = leaf_subtree.try_into().unwrap();
// which matches the expected root.
assert_eq!(control.root(), root_leaf.hash);
// Likewise `accumulated_nodes` should contain a node at the root index...
assert!(accumulated_nodes.contains_key(&NodeIndex::root()));
// and it should match our actual root.
let test_root = accumulated_nodes.get(&NodeIndex::root()).unwrap();
assert_eq!(control_root, *test_root);
// And of course the root we got from each place should match.
assert_eq!(control.root(), root_leaf.hash);
}
#[test]
#[cfg(feature = "concurrent")]
fn test_with_entries_parallel() {
const PAIR_COUNT: u64 = COLS_PER_SUBTREE * 64;
let entries = generate_entries(PAIR_COUNT);
let control = Smt::with_entries_sequential(entries.clone()).unwrap();
let smt = Smt::with_entries(entries.clone()).unwrap();
assert_eq!(smt.root(), control.root());
assert_eq!(smt, control);
}

View File

@@ -725,7 +725,7 @@ fn get_leaf_depth_works_with_depth_8() {
assert_eq!(8, store.get_leaf_depth(root, 8, k).unwrap());
}
// flip last bit of a and expect it to return the the same depth, but for an empty node
// flip last bit of a and expect it to return the same depth, but for an empty node
assert_eq!(8, store.get_leaf_depth(root, 8, 0b01101000_u64).unwrap());
// flip fourth bit of a and expect an empty node on depth 4