Add circom test with circuit for CircomVerifierProofs, which allows
automatically checking that the data generated by arbo matches the circom
circuit of a SMTVerifierProof.
Also added a GHA workflow to test the circuits against the output of the
arbo code.
Circom Verifier Proofs allow verifying, through a zkSNARK proof, the
inclusion/exclusion of a leaf in a tree. This commit adds the Go code
needed to generate the circuit inputs for a CircomVerifierProof.
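A minimal sketch of generating those inputs, assuming the Tree exposes a
GenerateCircomVerifierProof method whose result marshals to the JSON
layout expected by the circuit (the method name and import path are
assumptions based on this commit's description, not a verified API):

    // Sketch: produce the JSON inputs for the SMTVerifier circom circuit
    // from an arbo tree. GenerateCircomVerifierProof and the import path
    // are assumptions.
    package main

    import (
        "encoding/json"

        "github.com/vocdoni/arbo"
    )

    func circomInputs(t *arbo.Tree, key []byte) ([]byte, error) {
        // The proof carries the root, the siblings and the old key/value
        // needed by the circuit to verify inclusion/exclusion of the leaf.
        cvp, err := t.GenerateCircomVerifierProof(key)
        if err != nil {
            return nil, err
        }
        // The serialized form is what gets fed to the circuit as inputs.
        return json.Marshal(cvp)
    }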
- Update the upFromNodes function for the unbalanced tree case
- Add AddBatchTestVector2 & 3 covering some edge cases
- Add checkRoots test method, which stores the Dump of the tree to a file
for later debugging
Case tree empty, AddBatch was 10.95x faster than without AddBatch
nCPU: 4, nLeafs: 1024, hash: Poseidon, db: memory
dbgStats(hash: 2.047k, dbGet: 1, dbPut: 2.049k)
Case tree not empty w/ few leafs, AddBatch was 7.28x faster than without AddBatch
nCPU: 4, nLeafs: 1024, hash: Poseidon, db: memory
dbgStats(hash: 2.047k, dbGet: 198, dbPut: 2.049k)
Case tree not empty w/ enough leafs, AddBatch was 5.94x faster than without AddBatch
nCPU: 4, nLeafs: 1024, hash: Poseidon, db: memory
dbgStats(hash: 2.047k, dbGet: 1.000k, dbPut: 2.049k)
Case tree not empty, AddBatch was 9.27x faster than without AddBatch
nCPU: 4, nLeafs: 4096, hash: Poseidon, db: memory
dbgStats(hash: 8.191k, dbGet: 1.800k, dbPut: 8.193k)
Case tree not empty & unbalanced, AddBatch was 10.67x faster than without AddBatch
nCPU: 4, nLeafs: 4096, hash: Poseidon, db: memory
dbgStats(hash: 10.409k, dbGet: 2.668k, dbPut: 10.861k)
TestAddBatchBench: nCPU: 4, nLeafs: 50000, hash: Blake2b, db: badgerdb
Add loop: 10.10829114s
AddBatch: 732.030263ms
dbgStats(hash: 122.518k, dbGet: 1, dbPut: 122.520k)
TestDbgStats
add in loop in emptyTree dbgStats(hash: 141.721k, dbGet: 134.596k, dbPut: 161.721k)
addbatch caseEmptyTree dbgStats(hash: 24.402k, dbGet: 1, dbPut: 24.404k)
addbatch caseNotEmptyTree dbgStats(hash: 26.868k, dbGet: 2.468k, dbPut: 26.872k)
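For reference, the two patterns compared in the benchmarks above are
roughly the following (the Add/AddBatch signatures are assumptions
inferred from these tests, not the verified API):

    import "github.com/vocdoni/arbo"

    // Without AddBatch: one insertion, and one re-hash of the path from
    // the leaf to the root, per key-value.
    func addOneByOne(t *arbo.Tree, keys, values [][]byte) error {
        for i := range keys {
            if err := t.Add(keys[i], values[i]); err != nil {
                return err
            }
        }
        return nil
    }

    // With AddBatch: all the leafs at once, so each node hash is computed
    // only once; assumed to return the indexes of the key-values that
    // could not be added.
    func addInBatch(t *arbo.Tree, keys, values [][]byte) ([]int, error) {
        return t.AddBatch(keys, values)
    }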
VirtualTree (vt) builds the tree structure without computing any hashes.
The idea is that once all the leafs are placed in their positions, the
hashes can be computed, avoiding computing any node hash more than once.
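A hypothetical sketch of the idea (the actual vt types in arbo are more
involved; the fields and hash callback here are illustrative only, and a
full node with both children is assumed):

    // Hypothetical illustration of the VirtualTree idea: first place all
    // the leafs, then compute every node hash exactly once in a single
    // bottom-up pass.
    type vtNode struct {
        l, r *vtNode // children (nil for a leaf)
        k, v []byte  // leaf key & value
        h    []byte  // hash; empty until computeHashes runs
    }

    // computeHashes visits every node once, so no hash is computed twice.
    // hashFn stands in for the tree's hash function (e.g. Poseidon).
    func (n *vtNode) computeHashes(hashFn func([]byte) []byte) []byte {
        if n.l == nil && n.r == nil { // leaf
            n.h = hashFn(append(append([]byte{}, n.k...), n.v...))
            return n.h
        }
        lh := n.l.computeHashes(hashFn)
        rh := n.r.computeHashes(hashFn)
        n.h = hashFn(append(append([]byte{}, lh...), rh...))
        return n.h
    }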
CASE D: Already populated Tree
==============================
- Use A, B, C, D as subtrees
- Sort the keys into buckets that share the initial part of the path
- For each subtree, add the new leafs there (see the bucket-split sketch
after the diagram)
               R
              / \
             /   \
            /     \
           *       *
          / |     / \
         /  |    /   \
        /   |   /     \
    L:  A   B  C       D
       /\  /\ / \     / \
      ... ... ... ... ... ...
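A hypothetical sketch of the bucket split; the helper name and the
least-significant-bit-first path ordering are assumptions:

    // Keys whose paths share the first prefixBits bits land in the same
    // bucket, so each bucket maps to one subtree (A, B, C, D above for
    // prefixBits = 2). Hypothetical helper, not the actual arbo code.
    func splitInBuckets(keys [][]byte, prefixBits int) [][][]byte {
        buckets := make([][][]byte, 1<<prefixBits)
        for _, k := range keys {
            idx := 0
            for b := 0; b < prefixBits; b++ {
                // The path is assumed to be taken from the key starting
                // at the least significant bit.
                bit := (k[b/8] >> (b % 8)) & 1
                idx |= int(bit) << b
            }
            buckets[idx] = append(buckets[idx], k)
        }
        return buckets
    }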
buildTreeBottomUp splits the key-values into n buckets (where n is the
number of CPUs) and builds a subtree for each bucket in parallel. Once
all the subtrees are built, it uses the subtree roots as keys for a new
tree, so the result is the complete Tree built from bottom to top, where
everything up to the log2(nCPU) level has been computed in parallel. As
a result, the tree construction can be parallelized until almost the top
level, dividing the time almost by the number of CPUs (see the sketch
below).
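A hedged sketch of that flow; buildSubtree and buildUpperTree are
hypothetical placeholders for the sequential pieces:

    import "sync"

    // Placeholders, not the actual arbo internals.
    func buildSubtree(bucket [][]byte) []byte  { return nil } // sequential build of one bucket
    func buildUpperTree(roots [][]byte) []byte { return nil } // top log2(nCPU) levels

    // Each CPU builds the subtree of its bucket independently; the
    // subtree roots then become the leafs of the small upper tree.
    func buildTreeBottomUp(nCPU int, buckets [][][]byte) []byte {
        roots := make([][]byte, nCPU)
        var wg sync.WaitGroup
        for i := 0; i < nCPU; i++ {
            wg.Add(1)
            go func(i int) {
                defer wg.Done()
                roots[i] = buildSubtree(buckets[i])
            }(i)
        }
        wg.Wait()
        // Only the top log2(nCPU) levels remain, built sequentially.
        return buildUpperTree(roots)
    }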
CASE C: ALMOST CASE B --> if Tree has few Leafs (but numLeafs>=minLeafsThreshold)
==============================================================================
- Use A, B, G, D as Roots of subtrees
- Do CASE B for each subtree
- Then go from level L up to the Root (see the outline after the diagram)
               R
              / \
             /   \
            /     \
           *       *
          / |     / \
         /  |    /   \
        /   |   /     \
    L:  A   B  G       D
              / \
             /   \
            /     \
           C       *
                  / \
                 /   \
                /     \
              ...     ... (nLeafs >= minLeafsThreshold)
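An outline of that flow with hypothetical helper names (placeholders,
not the actual arbo internals):

    // Placeholder: CASE B applied to the subtree under one root.
    func caseBUnderRoot(subRoot []byte, bucket [][]byte) ([]byte, error) {
        return nil, nil
    }

    // Placeholder: recompute the levels from L up to the Root R.
    func upFromSubRoots(subRoots [][]byte) ([]byte, error) {
        return nil, nil
    }

    func addBatchCaseC(subRoots [][]byte, buckets [][][]byte) ([]byte, error) {
        // Apply CASE B independently under each subtree root at level L...
        for i := range subRoots {
            newRoot, err := caseBUnderRoot(subRoots[i], buckets[i])
            if err != nil {
                return nil, err
            }
            subRoots[i] = newRoot
        }
        // ...then go from level L up to the Root.
        return upFromSubRoots(subRoots)
    }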
CASE B: ALMOST CASE A, Almost empty Tree --> if Tree has numLeafs < minLeafsThreshold
==============================================================================
- Get the Leafs (key & value) by iterating the tree from the current
root
- Create a new empty Tree
- Do CASE A for the new Tree, giving it the already existing key&values
(leafs) from the original Tree plus the new key&values to be added by
the AddBatch call (outlined after the diagram)
           R                 R
          / \               / \
         A   *             /   \
            / \           /     \
           B   C         *       *
                        / |     / \
                       /  |    /   \
                      /   |   /     \
                 L:  A    B  G       D
                            / \
                           /   \
                          /     \
                         C       *
                                / \
                               /   \
                              /     \
                            ...     ... (nLeafs < minLeafsThreshold)
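A hypothetical outline of the CASE B flow; getLeafs and buildFromScratch
are placeholders, not the actual arbo internals:

    // Placeholder: iterate the tree from the current root collecting
    // every leaf's key & value.
    func getLeafs() (keys, values [][]byte) { return nil, nil }

    // Placeholder: CASE A over a new empty tree.
    func buildFromScratch(keys, values [][]byte) error { return nil }

    func addBatchCaseB(newKeys, newValues [][]byte) error {
        // Collect the already existing leafs of the original tree...
        oldKeys, oldValues := getLeafs()
        // ...and run CASE A on a fresh tree with the old leafs plus the
        // new key-values from the AddBatch call.
        keys := append(oldKeys, newKeys...)
        values := append(oldValues, newValues...)
        return buildFromScratch(keys, values)
    }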