
initial commit

- Code copied from https://github.com/aergoio/aergo
- Make it a go module
- Added new Walk and GetWithRoot methods
- Create custom hasher based on blake2b-256

Signed-off-by: p4u <pau@dabax.net>
master
p4u 3 years ago
commit
40ddcbf3f2
15 changed files with 2706 additions and 0 deletions
  1. README.md (+220, -0)
  2. go.mod (+9, -0)
  3. go.sum (+225, -0)
  4. hash.go (+18, -0)
  5. pictures/batch.png (BIN)
  6. pictures/deleted.png (BIN)
  7. pictures/mod.png (BIN)
  8. pictures/smt.png (BIN)
  9. trie.go (+585, -0)
  10. trie_cache.go (+63, -0)
  11. trie_merkle_proof.go (+210, -0)
  12. trie_revert.go (+178, -0)
  13. trie_test.go (+883, -0)
  14. trie_tools.go (+273, -0)
  15. util.go (+42, -0)

README.md (+220, -0)

@@ -0,0 +1,220 @@
# AERGO StateTrie
## Features
* Efficient Merkle proof verification (binary tree structure)
* Efficient database reads and storage through node batching
* Reduced data storage (leaf nodes for subtrees contain 1 key)
* Reduced hash computation (leaf nodes for subtrees contain 1 key)
* Simultaneous update of multiple keys with goroutines
Aergo Trie is a modified version of a Sparse Merkle Tree which stores values at the highest subtree containing only one key.
The benefit achieved from this is that, on average, in a tree containing N random keys, just log(N) hashes are required to update a key in the tree. Therefore the trie height is on average log(N), making inclusion and non-inclusion proofs shorter.
## Standard Sparse Merkle Tree
![smt](pictures/smt.png)
*Figure 1. An example sparse Merkle tree of height=4 (4-bit keys) containing 3 keys. The 3 keys are shown in red and blue. Default nodes are shown in green. Non-default nodes are shown in purple.*
Medium article about the Standard SMT : [https://medium.com/@ouvrard.pierre.alain/sparse-merkle-tree-86e6e2fc26da](https://medium.com/@ouvrard.pierre.alain/sparse-merkle-tree-86e6e2fc26da)
Implementation of the standard SMT : [https://github.com/aergoio/SMT](https://github.com/aergoio/SMT)
## Aergo Trie
### Modification of the Sparse Merkle Tree
To reduce the number of hashing operations necessary to update a key in a tree, we created leaf nodes. A leaf node is stored at the highest subtree that contains only 1 non-default key. So, the hashing of the tree starts from the height of leaf nodes instead of height 0. If the tree contains N random keys, then on average, leaf nodes will be created around height = log(N).
Another benefit of the Aergo Trie is that Default Hashes are no longer necessary. We add the following property to the hash function: H(0,0) = 0. Looking at the example below, **D1 = Hash(D0,D0) = Hash(byte(0),byte(0)) = byte(0) = D2 = ... = D256**.
![mod](pictures/mod.png)
*Figure 2. H3 was the highest subtree containing only one key: the red one. So, Value will take its place in the modified sparse Merkle tree.*
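The convention can be pictured with a small Go sketch (this is an illustration of the property only, not the package's actual code, which never hashes two empty children together in the first place):
```go
// hashDefaultAware illustrates the H(0,0) = 0 convention: hashing two default
// (empty) nodes yields the default (empty) value, so the default hashes
// D1...D256 never need to be computed or stored.
func hashDefaultAware(hash func(data ...[]byte) []byte, left, right []byte) []byte {
	if len(left) == 0 && len(right) == 0 {
		return nil // H(0,0) = 0
	}
	return hash(left, right)
}
```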
### Merkle Proofs
Using a binary tree gives us very simple and easy-to-use Merkle proofs.
On the diagram above, the Merkle proof of the red key is composed of the node with a red circle: [h3].
In the case of the standard SMT, that proof would have been [D0, D1, D2, h3].
### Compressed Merkle proofs
Like in the standard sparse Merkle tree, Merkle proofs can also be compressed. We can use a bitmap and set a bit for every index that is not default in the proof. The proof that the blue LeafNode1 is included in the tree is: [LeafNode2, D1, D2, LeafNode]. This proof can be compressed to 1001[LeafNode2, LeafNode]. The verifier of the compressed Merkle proof should know to use D1 to compute h2 because the second index of the bitmap is 0, and D2 for the third proof element, etc.
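To make the bitmap convention concrete, here is a hypothetical expansion helper, written as if it lived inside the trie package (bitIsSet is the unexported bit helper from util.go; the function itself is an illustration, not part of the API):
```go
// expandProof rebuilds a full audit path from a compressed proof: every unset
// bit in the bitmap stands for a default node, every set bit consumes the next
// element of the compressed proof.
func expandProof(bitmap []byte, compressed [][]byte, length int) [][]byte {
	full := make([][]byte, 0, length)
	j := 0
	for i := 0; i < length; i++ {
		if bitIsSet(bitmap, i) {
			full = append(full, compressed[j]) // non-default proof element
			j++
		} else {
			full = append(full, nil) // default node (byte(0))
		}
	}
	return full
}
```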
### Proofs of non-inclusion
There are 2 ways to prove that a key is not included in the tree:
- prove that the Leaf node of another key is included in the tree and is on the path of the non-included key.
- prove that a default node (byte(0)) is included in the tree and is on the path of the non-included key.
For example, a proof that key=0000 is not included in the tree is a proof that LeafNode is on the path of key and is included in the tree. A proof that key=1111 is not included in the tree is a proof that D2 is on the path of the key and is included in the tree.
### Deleting from the tree
When a leaf is removed from the tree, special care is taken by the Update() function to keep leaf nodes at the highest subtree containing only 1 key. Otherwise, if a node has a different position in the tree, the resulting trie root would be different even though keys and values are the same.
So, when a key is deleted, Update() checks whether its sibling is also a leaf node and moves it up to the root of the highest subtree containing only that non-default key.
![deleted](pictures/deleted.png)
*Figure 3. The same tree after deleting a blue key: LeafNode1 moves up to the highest subtree containing one key*
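In practice a deletion is expressed through Update by setting the value to DefaultLeaf (see the documentation of Update in trie.go). A sketch, assuming the module is imported as package trie as in the Usage section below:
```go
// deleteKey is a hypothetical helper: deleting a key is an update of that key
// to DefaultLeaf, which removes the leaf and lets Update move its sibling up.
func deleteKey(smt *trie.Trie, key []byte) ([]byte, error) {
	return smt.Update([][]byte{key}, [][]byte{trie.DefaultLeaf})
}
```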
### Node batching
When storing each node as a root with 2 children, the quantity of nodes to store grows very quickly and a bottleneck happens due to multiple threads loading nodes from memory. A hex Merkle tree would solve this problem as each node has 16 children and a smaller height of 64 (256/4), though as we said earlier, we need the tree to be binary. We can achieve the same features of a hex tree by using node batching.
Instead of storing 2 children for one node, we store the subtree of height 4 for that node. A tree of height 4 has 16 leaves at height 0 (like hex). So, the value of a node is an array containing all the nodes of the 4-bit tree. The children of a node at index i in the tree can be found at index 2*i+1 and 2*i+2.
A node is encoded as follows:
```
{ Root : [ [ byte(0/1) to flag a leaf node ], 3-0, 3-1, 2-0, 2-1, 2-2, 2-3, 1-0, 1-1, 1-2, 1-3, 1-4, 1-5, 1-6, 1-7, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, b, c, d, e, f ] }
```
For example, to get the children of node 3-0 at index id=1 in the array, we can access the left child 2-0 at index (2 * id + 1) = index 3 and the right child 2-1 at index (2 * id + 2) = index 4.
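In Go, this is the usual array encoding of a binary tree (a trivial sketch; index 0 of the stored array holds the leaf/shortcut flag, and the batch root itself is not stored, so its children 3-0 and 3-1 sit at indices 1 and 2):
```go
// batchChildren returns the positions of a node's children inside a batch array.
func batchChildren(i int) (left, right int) {
	return 2*i + 1, 2*i + 2
}
```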
To each node, we append a byte flag to recognize the leaf nodes. Since the nature of Root is not known ahead of time, the byte flag is stored at the first index of the nodes array.
![batch](pictures/batch.png)
*Figure 4. A visual representation of node batching. The first batch is blue, and all 16 leaves of a batch are roots to other batches (green). A batch contains 30 nodes.*
The example from figure 2 will be encoded as follows:
```
{Root : [ [byte(0)], LeafNodeHash, h3, LeafNodeKey, LeafNodeValue, h2, D2=nil, nil, nil, nil, nil, h1, D1=nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, nil, LeafNode1Hash, LeafNode2Hash, nil, nil, nil, nil, nil, nil ]}
```
Where LeafNodeHash = Hash(key, value, height)
To store the batch in the database, it is serialized with a bitmap, which allows us to store only the non-default nodes.
The bitmap is 4 bytes = 32 bits. The first 30 bits are for the batch nodes, the 31st bit is the flag for a shortcut batch (a batch that only contains a key and a value at 3-0 and 3-1), and the 32nd bit is not used.
The figure 2 example after serialization:
```
11111000001000000000001100000010 [LeafNodeHash][h3][LeafNodeKey][LeafNodeValue][h2][h1][LeafNode1Hash][LeafNode2Hash]
```
Node batching has two benefits: a reduced number of database reads and concurrent updates of the height-4 subtrees without the need for a lock.
## Usage
- NewTrie
```go
func NewTrie(root []byte, hash func(data …[]byte) []byte, store db.DB) *Trie {
```
When creating an empty tree, set root to nil. A nil root means that it is equal to the default value of its height. Use a custom hash function or the provided Hasher (defined in hash.go), and specify a database if you plan to commit nodes.
- Update
```go
func (s *Trie) Update(keys, values [][]byte) ([]byte, error) {
```
`keys [][]byte` is a sorted array of keys; `values [][]byte` contains the matching values of the keys.
Update will recursively go down the tree and split the keys and values according to the side of the tree they belong to: multiple parts of the tree can be simultaneously updated.
If Update is called several times before Commit, only the last state is committed.
- AtomicUpdate
```go
func (s *Trie) AtomicUpdate(keys, values [][]byte) ([]byte, error) {
```
AtomicUpdate updates the tree with sorted keys and values just like Update. But unlike Update, if AtomicUpdate is called several times before Commit, all the intermediate states from the AtomicUpdate calls will be recorded. This is useful for authenticating the state of each block without committing to the database right away.
- Get
```go
func (s *Trie) Get(key []byte) ([]byte, error) {
```
Get returns the value of a key stored in the tree; if the key is default (i.e., not stored), it returns nil.
- Commit
```go
func (s *Trie) Commit() error {
```
Commit writes the updated nodes to the database. When Update is called, the new nodes are stored in smt.db.updatedNodes; Commit then stores them to disk. A minimal end-to-end example is shown below.
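A minimal end-to-end sketch (the import alias is an assumption based on this repository's layout; a nil store works for in-memory use, while Commit requires a db.DB from github.com/aergoio/aergo-lib/db):
```go
package main

import (
	"fmt"
	"log"

	trie "github.com/p4u/asmt" // package trie at the module root
)

func main() {
	// nil store: in-memory only; Commit would need a real db.DB.
	smt := trie.NewTrie(nil, trie.Hasher, nil)

	// Keys must be sorted and hash-sized (32 bytes with the default Hasher).
	key := trie.Hasher([]byte("alice"))
	value := trie.Hasher([]byte("balance:10"))

	root, err := smt.Update([][]byte{key}, [][]byte{value})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("new root: %x\n", root)

	got, err := smt.Get(key)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("value of key: %x\n", got)

	// With a real store, smt.Commit() would persist the updated nodes here.
}
```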
- StageUpdates
```go
func (s *Trie) StageUpdates(txn *db.Transaction) {
```
StageUpdates loads the updated nodes into the given database transaction. It allows the trie to be committed as part of an external database transaction.
- Stash
```go
func (s *Trie) Stash(rollbackCache bool) error {
```
Use the Stash function to revert the update without committing.
- Revert
```go
func (s *Trie) Revert(toOldRoot []byte) error {
```
When Revert is called, the tries to roll back (those between the current trie and toOldRoot) are deleted from the database.
- MerkleProof
```go
func (s *Trie) MerkleProof(key []byte) ([][]byte, bool, []byte, []byte, error) {
```
MerkleProof creates a Merkle proof of inclusion/non-inclusion of the key. The Merkle proof is an array of hashes.
If the key is not included, MerkleProof will return false along with the proof leaf on the path of the key.
- MerkleProofPast
```go
func (s *Trie) MerkleProofPast(key []byte, root []byte) ([][]byte, bool, []byte, []byte, error) {
```
MerkleProofPast creates a Merkle proof of inclusion/non-inclusion of the key at a given trie root. This is used to query state at a different block than the last one.
- MerkleProofCompressed
```go
func (s *Trie) MerkleProofCompressed(key []byte) ([]byte, [][]byte, int, bool, []byte, []byte, error) {
```
MerkleProofCompressed creates the same Merkle proof as MerkleProof, but compressed using a bitmap.
- VerifyInclusion
```go
func (s *Trie) VerifyInclusion(ap [][]byte, key, value []byte) bool {
```
Verifies that the key-value pair is included in the tree at the current Root.
- VerifyNonInclusion
```go
func (s *Trie) VerifyNonInclusion(ap [][]byte, key, value, proofKey []byte) bool {
```
VerifyNonInclusion verifies a proof of non-inclusion: it checks that a leaf (proofKey, proofValue, height) or an empty subtree is on the path of the non-included key. See the sketch below.
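A sketch tying proof generation and verification together (a hypothetical helper reusing the trie import alias from the example above):
```go
// proveAndVerify generates a proof for key and verifies it against the trie root.
func proveAndVerify(smt *trie.Trie, key, value []byte) (bool, error) {
	ap, included, proofKey, proofValue, err := smt.MerkleProof(key)
	if err != nil {
		return false, err
	}
	if included {
		// Proof of inclusion: check the audit path against the key/value pair.
		return smt.VerifyInclusion(ap, key, value), nil
	}
	// Proof of non-inclusion: the proof leaf (or empty subtree) returned by
	// MerkleProof is shown to be on the path of the non-included key.
	return smt.VerifyNonInclusion(ap, key, proofValue, proofKey), nil
}
```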
- VerifyInclusionC
```go
func (s *Trie) VerifyInclusionC(bitmap, key, value []byte, ap [][]byte, length int) bool {
```
VerifyInclusionC verifies a compressed proof of inclusion. `length` is the length of the uncompressed proof (the depth of the leaf key-value being verified).
- VerifyNonInclusionC
```go
func (s *Trie) VerifyNonInclusionC(ap [][]byte, length int, bitmap, key, value, proofKey []byte) bool {
```
VerifyNonInclusionC verifies a compressed proof of non-inclusion: it checks that a leaf (proofKey, proofValue, height) or an empty subtree is on the path of the non-included key.
For more information about the Aergo StateTrie: [https://medium.com/aergo/releasing-statetrie-a-hash-tree-built-for-high-performance-interoperability-6ce0406b12ae](https://medium.com/aergo/releasing-statetrie-a-hash-tree-built-for-high-performance-interoperability-6ce0406b12ae)

go.mod (+9, -0)

@@ -0,0 +1,9 @@
module github.com/p4u/asmt
go 1.16
require (
github.com/aergoio/aergo-lib v1.0.2
github.com/spf13/afero v1.2.1 // indirect
golang.org/x/crypto v0.0.0-20210317152858-513c2a44f670
)

go.sum (+225, -0)

@@ -0,0 +1,225 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96 h1:cTp8I5+VIoKjsnZuH8vjyaysT/ses3EvZeaV/1UkF2M=
github.com/AndreasBriese/bbloom v0.0.0-20190825152654-46b345b51c96/go.mod h1:bOvUY6CB00SOBii9/FifXqc0awNKxLFCL/+pkDPuyl8=
github.com/BurntSushi/toml v0.3.1 h1:WXkYYl6Yr3qBf1K79EBnL4mak0OimBfB0XUf9Vl28OQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/aergoio/aergo-lib v1.0.2 h1:Zp0hGS+SCdURCgAsXAjXQedq5JBSrslZWIkXXaM84sc=
github.com/aergoio/aergo-lib v1.0.2/go.mod h1:hxL0B2opPLGkc1e0hdxiQNM5shqrxSrFA7cj1UJ1/m4=
github.com/aergoio/badger v1.6.0-gcfix h1:VUUwHyUpjfA7WbEqNsjH69R2mgNlSqhLN3xe3UWOQmQ=
github.com/aergoio/badger v1.6.0-gcfix/go.mod h1:GwmHJc3SDzMZotppuSPcMT1HY9+SYt7nlDP2QevCbx4=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/armon/consul-api v0.0.0-20180202201655-eb2c6b5be1b6/go.mod h1:grANhF5doyWs3UAsr3K4I6qtAmlQcZDesFNEHPZAzj8=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13 h1:fAjc9m62+UWV/WAFKLNi6ZS0675eEUC9y3AlwSbQu1Y=
github.com/dgryski/go-farm v0.0.0-20200201041132-a6ae2369ad13/go.mod h1:SqUrOPUnsFjfmXRMNPybcSiG0BgUW2AuFH8PAnS2iTw=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.8-0.20180830220226-ccc981bf8038 h1:j2xrf/etQ7t7yE6yPggWVr8GLKpISYIwxxLiHdOCHis=
github.com/fsnotify/fsnotify v1.4.8-0.20180830220226-ccc981bf8038/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3 h1:gyjaxf+svBWX08ZjK86iN9geUJF0H6gp2IRKX6Nf6/I=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/snappy v0.0.0-20180518054509-2e65f85255db/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/golang/snappy v0.0.1 h1:Qgr9rKW7uDUkrbSmQeiDsGa8SjGyCOGtuasMWwvp2P4=
github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/grpc-ecosystem/go-grpc-middleware v1.0.0/go.mod h1:FiyG127CGDf3tlThmgyCl78X/SZQqEOJBCDaAfeWzPs=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/guptarohit/asciigraph v0.4.1 h1:YHmCMN8VH81BIUIgTg2Fs3B52QDxNZw2RQ6j5pGoSxo=
github.com/guptarohit/asciigraph v0.4.1/go.mod h1:9fYEfE5IGJGxlP1B+w8wHFy7sNZMhPtn59f0RLtpRFM=
github.com/hashicorp/hcl v1.0.0/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hashicorp/hcl v1.0.1-0.20180906183839-65a6292f0157 h1:uyodBE3xDz0ynKs1tLBU26wOQoEkAqqiY18DbZ+FZrA=
github.com/hashicorp/hcl v1.0.1-0.20180906183839-65a6292f0157/go.mod h1:E5yfLk+7swimpb2L/Alb/PJmXilQ/rhwaUYs4T20WEQ=
github.com/hpcloud/tail v1.0.0 h1:nfCOvKYfkgYP8hkirhJocXT2+zOD8yUNjXaWfTlyFKI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0 h1:L/CwN0zerZDmRFUapSPitk6f+Q3+0za1rQkzVuMiMFI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0 h1:45sCR5RtlFHMR4UwH9sdQ5TC8v0qDQCHnXt+kaKSTVE=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1 h1:ZC2Vc7/ZFkGmsVC9KvOjumD+G5lXy2RtTKyzRKO2BQ4=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mattn/go-colorable v0.1.4 h1:snbPLB8fVfU9iwbbo30TPtbLRzwWu6aJS6Xh4eaaviA=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.10 h1:qxFzApOv4WsAL965uUPIsXzAKCZxN2p9UqdhFS4ZW10=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/mapstructure v1.1.2 h1:fmNYVwqnSfB9mZU6OS2O6GsXM+wcskZDuKQzvN1EDeE=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.10.3 h1:OoxbjfXVZyod1fmWYhI7SEyaD8B00ynP3T+D5GiyHOY=
github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.3/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1 h1:K0jcRCwNQM3vFGh1ppMtDh/+7ApJrjldlX8fA0jDTLQ=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.2.1-0.20180930205832-81a861c69d25 h1:a49/z+9Z5t53bD9NMajAJMe5tN0Kcz5Y6bvFxD/TPwo=
github.com/pelletier/go-toml v1.2.1-0.20180930205832-81a861c69d25/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rs/xid v1.2.1/go.mod h1:+uKXf+4Djp6Md1KODXJxgGQPKngRmWyn10oCKFzNHOQ=
github.com/rs/zerolog v1.16.1-0.20191111091419-e709c5d91e35 h1:GSGuq6eajGCJ6fQFt4I8pT1lbTAtl3XdXOsIZ3qYQQI=
github.com/rs/zerolog v1.16.1-0.20191111091419-e709c5d91e35/go.mod h1:9nvC1axdVrAHcu/s9taAVfBuIdTZLVQmKQyvrUjF5+I=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.1 h1:qgMbHoJbPbw579P+1zVY+6n4nIFuIchaIjzZ/I/Yq8M=
github.com/spf13/afero v1.2.1/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
github.com/spf13/cast v1.3.0 h1:oget//CVOEoFewqQxwr0Ej5yjygnqGkvggSE/gB35Q8=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
github.com/spf13/jwalterweatherman v1.1.0 h1:ue6voC5bR5F8YxI5S67j9i582FU4Qvo2bmqnqMYADFk=
github.com/spf13/jwalterweatherman v1.1.0/go.mod h1:aNWZUN0dPAAO/Ljvb5BEdw96iTZ0EXowPYD95IqWIGo=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.5.0 h1:GpsTwfsQ27oS/Aha/6d1oD7tpKIqWnOA6tgOX9HHkt4=
github.com/spf13/viper v1.5.0/go.mod h1:AkYRkVJF8TkSG/xet6PzXX+l39KhhXa2pdqVSxnTcn4=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.4.0 h1:2E4SXV/wtOkTonXsotYi4li6zVWxYlZuYNCXe9XRJyk=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
github.com/syndtr/goleveldb v1.0.0 h1:fBdIW9lB4Iz0n9khmH8w27SJ3QEJ7+IgjPEwGSZiFdE=
github.com/syndtr/goleveldb v1.0.0/go.mod h1:ZVVdQEZoIme9iO1Ch2Jdy24qqXrMMOU6lpPAyBWyWuQ=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xordataexchange/crypt v0.0.3-0.20170626215501-b2862e3d0a77/go.mod h1:aYKd//L2LvnjZzWKhF00oedf4jCCReLcmhLdhm1A27Q=
github.com/zenazn/goji v0.9.0/go.mod h1:7S9M489iMyHBNxwZnk9/EHS098H4/F6TATF2mIxtB1Q=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
go.uber.org/atomic v1.4.0/go.mod h1:gD2HeocX3+yG+ygLZcrzQJaqmWj9AIm7n08wl/qW/PE=
go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0=
go.uber.org/zap v1.10.0/go.mod h1:vwi/ZaCAaUcBkycHslxD9B2zi4UTXhF60s6SWpuDF0Q=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210317152858-513c2a44f670 h1:gzMM0EjIYiRmJI3+jBdFuoynZlpxa2JQZsolKu09BXo=
golang.org/x/crypto v0.0.0-20210317152858-513c2a44f670/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181220203305-927f97764cc3/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110 h1:qWPm9rbaAMKs8Bq/9LRpbMqxWRVUAQwMI9fVrssnTfw=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181107165924-66b7b1311ac8/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181205085412-a5c9d58dba9a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68 h1:nxC68pudNYkKU6jWhgrqdreuFiOQWj1Fs7T3VrH4Pjw=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20181030141323-6f44c5a2ea40/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3 h1:cokOdA+Jmi5PJGXLlLllQSgYigAEfHXJAERHVMaCc2k=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190828213141-aed303cbaa74/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15 h1:YR8cESwS4TdDjEe65xsg0ogRM/Nc3DYOhEAlW+xobZo=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/fsnotify.v1 v1.4.7 h1:xOHLXZwVvI9hhs+cLKq5+I5onOuwQLhQwiu63xxlHs4=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4 h1:/eiJrUcujPVeJ3xlSWaiNi3uSVmDGBK1pDHUHAnao1I=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

hash.go (+18, -0)

@@ -0,0 +1,18 @@
package trie
import (
"golang.org/x/crypto/blake2b"
)
// Hasher exports default hash function for trie
var Hasher = func(data ...[]byte) []byte {
hasher, err := blake2b.New256(nil)
if err != nil {
panic(err)
}
//hasher := sha256.New()
for i := 0; i < len(data); i++ {
hasher.Write(data[i])
}
return hasher.Sum(nil)
}

pictures/batch.png (BIN)

Width: 970  |  Height: 497  |  Size: 68 KiB

pictures/deleted.png (BIN)

Width: 1594  |  Height: 462  |  Size: 85 KiB

pictures/mod.png (BIN)

Width: 1401  |  Height: 555  |  Size: 75 KiB

pictures/smt.png (BIN)

Width: 1401  |  Height: 555  |  Size: 68 KiB

trie.go (+585, -0)

@@ -0,0 +1,585 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"bytes"
"fmt"
"sync"
"github.com/aergoio/aergo-lib/db"
)
// Trie is a modified sparse Merkle tree.
// Instead of storing values at the leaves of the tree,
// the values are stored at the highest subtree root that contains only that value.
// If the tree is sparse, this requires fewer hashing operations.
type Trie struct {
db *CacheDB
// Root is the current root of the smt.
Root []byte
// prevRoot is the root before the last update
prevRoot []byte
// lock is for the whole struct
lock sync.RWMutex
// hash is the hash function used in the trie
hash func(data ...[]byte) []byte
// TrieHeight is the number of bits in a key
TrieHeight int
// LoadDbCounter counts the number of db reads in one update
LoadDbCounter int
// loadDbMux is a lock for LoadDbCounter
loadDbMux sync.RWMutex
// LoadCacheCounter counts the number of cache reads in one update
LoadCacheCounter int
// liveCountMux is a lock for LoadCacheCounter
liveCountMux sync.RWMutex
// counterOn is used to enable/disable the counters for efficiency
counterOn bool
// CacheHeightLimit is the number of tree levels we want to store in cache
CacheHeightLimit int
// pastTries stores the past maxPastTries trie roots to revert
pastTries [][]byte
// atomicUpdate commits all the changes made by intermediate update calls
atomicUpdate bool
}
// NewTrie creates a new trie given a root, a hash function and a db store.
func NewTrie(root []byte, hash func(data ...[]byte) []byte, store db.DB) *Trie {
s := &Trie{
hash: hash,
TrieHeight: len(hash([]byte("height"))) * 8, // hash any string to get output length
counterOn: false,
}
s.db = &CacheDB{
liveCache: make(map[Hash][][]byte),
updatedNodes: make(map[Hash][][]byte),
Store: store,
}
// don't store any cache by default (contract state doesn't use cache)
s.CacheHeightLimit = s.TrieHeight + 1
s.Root = root
return s
}
// Update adds and deletes a sorted list of keys and their values to the trie
// Adding and deleting can be simultaneous.
// To delete, set the value to DefaultLeaf.
// If Update is called multiple times, only the state after the last update
// is committed.
func (s *Trie) Update(keys, values [][]byte) ([]byte, error) {
s.lock.Lock()
defer s.lock.Unlock()
s.atomicUpdate = false
s.LoadDbCounter = 0
s.LoadCacheCounter = 0
ch := make(chan mresult, 1)
s.update(s.Root, keys, values, nil, 0, s.TrieHeight, ch)
result := <-ch
if result.err != nil {
return nil, result.err
}
if len(result.update) != 0 {
s.Root = result.update[:HashLength]
} else {
s.Root = nil
}
return s.Root, nil
}
// AtomicUpdate can be called multiple times and all the updated nodes will be committed
// and roots will be stored in past tries.
// Can be used for updating several blocks before committing to DB.
func (s *Trie) AtomicUpdate(keys, values [][]byte) ([]byte, error) {
s.lock.Lock()
defer s.lock.Unlock()
s.atomicUpdate = true
s.LoadDbCounter = 0
s.LoadCacheCounter = 0
ch := make(chan mresult, 1)
s.update(s.Root, keys, values, nil, 0, s.TrieHeight, ch)
result := <-ch
if result.err != nil {
return nil, result.err
}
if len(result.update) != 0 {
s.Root = result.update[:HashLength]
} else {
s.Root = nil
}
s.updatePastTries()
return s.Root, nil
}
// mresult is used to contain the result of goroutines and is sent through a channel.
type mresult struct {
update []byte
// flag if a node was deleted and a shortcut node may have to move up the tree
deleted bool
err error
}
// update adds and deletes a sorted list of keys and their values to the trie.
// Adding and deleting can be simultaneous.
// To delete, set the value to DefaultLeaf.
// It returns the root of the updated tree.
func (s *Trie) update(root []byte, keys, values, batch [][]byte, iBatch, height int, ch chan<- (mresult)) {
if height == 0 {
if bytes.Equal(DefaultLeaf, values[0]) {
// Delete the key-value from the trie if it is being set to DefaultLeaf
// The value will be set to [] in batch by maybeMoveUpShortcut or interiorHash
s.deleteOldNode(root, height, false)
ch <- mresult{nil, true, nil}
} else {
// create a new shortcut batch.
// simply storing the value will make it hard to move up the
// shortcut in case of sibling deletion
batch = make([][]byte, 31, 31)
node := s.leafHash(keys[0], values[0], root, batch, 0, height)
ch <- mresult{node, false, nil}
}
return
}
// Load the node to update
batch, iBatch, lnode, rnode, isShortcut, err := s.loadChildren(root, height, iBatch, batch)
if err != nil {
ch <- mresult{nil, false, err}
return
}
// Check if the keys are updating the shortcut node
if isShortcut {
keys, values = s.maybeAddShortcutToKV(keys, values, lnode[:HashLength], rnode[:HashLength])
if iBatch == 0 {
// shortcut is moving so its root will change
s.deleteOldNode(root, height, false)
}
// The shortcut node was added to keys and values so consider this subtree default.
lnode, rnode = nil, nil
// update in the batch (set key, value to default so the next loadChildren is correct)
batch[2*iBatch+1] = nil
batch[2*iBatch+2] = nil
if len(keys) == 0 {
// Set true so that a potential sibling shortcut may move up.
ch <- mresult{nil, true, nil}
return
}
}
// Store shortcut node
if (len(lnode) == 0) && (len(rnode) == 0) && (len(keys) == 1) {
// We are adding 1 key to an empty subtree so store it as a shortcut
if bytes.Equal(DefaultLeaf, values[0]) {
ch <- mresult{nil, true, nil}
} else {
node := s.leafHash(keys[0], values[0], root, batch, iBatch, height)
ch <- mresult{node, false, nil}
}
return
}
// Split the keys array so each branch can be updated in parallel
lkeys, rkeys := s.splitKeys(keys, s.TrieHeight-height)
splitIndex := len(lkeys)
lvalues, rvalues := values[:splitIndex], values[splitIndex:]
switch {
case len(lkeys) == 0 && len(rkeys) > 0:
s.updateRight(lnode, rnode, root, keys, values, batch, iBatch, height, ch)
case len(lkeys) > 0 && len(rkeys) == 0:
s.updateLeft(lnode, rnode, root, keys, values, batch, iBatch, height, ch)
default:
s.updateParallel(lnode, rnode, root, lkeys, rkeys, lvalues, rvalues, batch, iBatch, height, ch)
}
}
// updateRight updates the right side of the tree
func (s *Trie) updateRight(lnode, rnode, root []byte, keys, values, batch [][]byte, iBatch, height int, ch chan<- (mresult)) {
// all the keys go in the right subtree
newch := make(chan mresult, 1)
s.update(rnode, keys, values, batch, 2*iBatch+2, height-1, newch)
result := <-newch
if result.err != nil {
ch <- mresult{nil, false, result.err}
return
}
// Move up a shortcut node if necessary.
if result.deleted {
if s.maybeMoveUpShortcut(lnode, result.update, root, batch, iBatch, height, ch) {
return
}
}
node := s.interiorHash(lnode, result.update, root, batch, iBatch, height)
ch <- mresult{node, false, nil}
}
// updateLeft updates the left side of the tree
func (s *Trie) updateLeft(lnode, rnode, root []byte, keys, values, batch [][]byte, iBatch, height int, ch chan<- (mresult)) {
// all the keys go in the left subtree
newch := make(chan mresult, 1)
s.update(lnode, keys, values, batch, 2*iBatch+1, height-1, newch)
result := <-newch
if result.err != nil {
ch <- mresult{nil, false, result.err}
return
}
// Move up a shortcut node if necessary.
if result.deleted {
if s.maybeMoveUpShortcut(result.update, rnode, root, batch, iBatch, height, ch) {
return
}
}
node := s.interiorHash(result.update, rnode, root, batch, iBatch, height)
ch <- mresult{node, false, nil}
}
// updateParallel updates both sides of the trie simultaneously
func (s *Trie) updateParallel(lnode, rnode, root []byte, lkeys, rkeys, lvalues, rvalues, batch [][]byte, iBatch, height int, ch chan<- (mresult)) {
lch := make(chan mresult, 1)
rch := make(chan mresult, 1)
go s.update(lnode, lkeys, lvalues, batch, 2*iBatch+1, height-1, lch)
go s.update(rnode, rkeys, rvalues, batch, 2*iBatch+2, height-1, rch)
lresult := <-lch
rresult := <-rch
if lresult.err != nil {
ch <- mresult{nil, false, lresult.err}
return
}
if rresult.err != nil {
ch <- mresult{nil, false, rresult.err}
return
}
// Move up a shortcut node if its sibling is default
if lresult.deleted || rresult.deleted {
if s.maybeMoveUpShortcut(lresult.update, rresult.update, root, batch, iBatch, height, ch) {
return
}
}
node := s.interiorHash(lresult.update, rresult.update, root, batch, iBatch, height)
ch <- mresult{node, false, nil}
}
// deleteOldNode deletes an old node that has been updated
func (s *Trie) deleteOldNode(root []byte, height int, movingUp bool) {
var node Hash
copy(node[:], root)
if !s.atomicUpdate || movingUp {
// don't delete old nodes with atomic update except when
// moving up a shortcut; we don't record every single move
s.db.updatedMux.Lock()
delete(s.db.updatedNodes, node)
s.db.updatedMux.Unlock()
}
if height >= s.CacheHeightLimit {
s.db.liveMux.Lock()
delete(s.db.liveCache, node)
s.db.liveMux.Unlock()
}
}
// splitKeys divides the array of keys into 2 so they can update left and right branches in parallel
func (s *Trie) splitKeys(keys [][]byte, height int) ([][]byte, [][]byte) {
for i, key := range keys {
if bitIsSet(key, height) {
return keys[:i], keys[i:]
}
}
return keys, nil
}
// maybeMoveUpShortcut moves up a shortcut if its sibling node is default
func (s *Trie) maybeMoveUpShortcut(left, right, root []byte, batch [][]byte, iBatch, height int, ch chan<- (mresult)) bool {
if len(left) == 0 && len(right) == 0 {
// Both update and sibling are deleted subtrees
if iBatch == 0 {
// If the deleted subtrees are at the root, then delete it.
s.deleteOldNode(root, height, true)
} else {
batch[2*iBatch+1] = nil
batch[2*iBatch+2] = nil
}
ch <- mresult{nil, true, nil}
return true
} else if len(left) == 0 {
// If right is a shortcut move it up
if right[HashLength] == 1 {
s.moveUpShortcut(right, root, batch, iBatch, 2*iBatch+2, height, ch)
return true
}
} else if len(right) == 0 {
// If left is a shortcut move it up
if left[HashLength] == 1 {
s.moveUpShortcut(left, root, batch, iBatch, 2*iBatch+1, height, ch)
return true
}
}
return false
}
func (s *Trie) moveUpShortcut(shortcut, root []byte, batch [][]byte, iBatch, iShortcut, height int, ch chan<- (mresult)) {
// it doesn't matter if atomic update is true or false since the batch is not modified
_, _, shortcutKey, shortcutVal, _, err := s.loadChildren(shortcut, height-1, iShortcut, batch)
if err != nil {
ch <- mresult{nil, false, err}
return
}
// when moving up the shortcut, its hash will change because height is +1
newShortcut := s.hash(shortcutKey[:HashLength], shortcutVal[:HashLength], []byte{byte(height)})
newShortcut = append(newShortcut, byte(1))
if iBatch == 0 {
// Modify batch to a shortcut batch
batch[0] = []byte{1}
batch[2*iBatch+1] = shortcutKey
batch[2*iBatch+2] = shortcutVal
batch[2*iShortcut+1] = nil
batch[2*iShortcut+2] = nil
// cache and updatedNodes deleted by store node
s.storeNode(batch, newShortcut, root, height)
} else if (height-1)%4 == 0 {
// move up shortcut and delete old batch
batch[2*iBatch+1] = shortcutKey
batch[2*iBatch+2] = shortcutVal
// set true so that AtomicUpdate can also delete a node moving up
// otherwise every node moved up is recorded
s.deleteOldNode(shortcut, height, true)
} else {
//move up shortcut
batch[2*iBatch+1] = shortcutKey
batch[2*iBatch+2] = shortcutVal
batch[2*iShortcut+1] = nil
batch[2*iShortcut+2] = nil
}
// Return the new shortcut node so that it is moved up
ch <- mresult{newShortcut, true, nil}
}
// maybeAddShortcutToKV adds a shortcut key to the keys array to be updated.
// this is used when a subtree containing a shortcut node is being updated
func (s *Trie) maybeAddShortcutToKV(keys, values [][]byte, shortcutKey, shortcutVal []byte) ([][]byte, [][]byte) {
newKeys := make([][]byte, 0, len(keys)+1)
newVals := make([][]byte, 0, len(keys)+1)
if bytes.Compare(shortcutKey, keys[0]) < 0 {
newKeys = append(newKeys, shortcutKey)
newKeys = append(newKeys, keys...)
newVals = append(newVals, shortcutVal)
newVals = append(newVals, values...)
} else if bytes.Compare(shortcutKey, keys[len(keys)-1]) > 0 {
newKeys = append(newKeys, keys...)
newKeys = append(newKeys, shortcutKey)
newVals = append(newVals, values...)
newVals = append(newVals, shortcutVal)
} else {
higher := false
for i, key := range keys {
if bytes.Equal(shortcutKey, key) {
if !bytes.Equal(DefaultLeaf, values[i]) {
// Do nothing if the shortcut is simply updated
return keys, values
}
// Delete shortcut if it is updated to DefaultLeaf
newKeys = append(newKeys, keys[:i]...)
newKeys = append(newKeys, keys[i+1:]...)
newVals = append(newVals, values[:i]...)
newVals = append(newVals, values[i+1:]...)
}
if !higher && bytes.Compare(shortcutKey, key) > 0 {
higher = true
continue
}
if higher && bytes.Compare(shortcutKey, key) < 0 {
// insert shortcut in slices
newKeys = append(newKeys, keys[:i]...)
newKeys = append(newKeys, shortcutKey)
newKeys = append(newKeys, keys[i:]...)
newVals = append(newVals, values[:i]...)
newVals = append(newVals, shortcutVal)
newVals = append(newVals, values[i:]...)
break
}
}
}
return newKeys, newVals
}
// loadChildren looks for the children of a node.
// if the node is not stored in cache, it will be loaded from db.
func (s *Trie) loadChildren(root []byte, height, iBatch int, batch [][]byte) ([][]byte, int, []byte, []byte, bool, error) {
isShortcut := false
if height%4 == 0 {
if len(root) == 0 {
// create a new default batch
batch = make([][]byte, 31, 31)
batch[0] = []byte{0}
} else {
var err error
batch, err = s.loadBatch(root)
if err != nil {
return nil, 0, nil, nil, false, err
}
}
iBatch = 0
if batch[0][0] == 1 {
isShortcut = true
}
} else {
if len(batch[iBatch]) != 0 && batch[iBatch][HashLength] == 1 {
isShortcut = true
}
}
return batch, iBatch, batch[2*iBatch+1], batch[2*iBatch+2], isShortcut, nil
}
// loadBatch fetches a batch of nodes in cache or db
func (s *Trie) loadBatch(root []byte) ([][]byte, error) {
var node Hash
copy(node[:], root)
s.db.liveMux.RLock()
val, exists := s.db.liveCache[node]
s.db.liveMux.RUnlock()
if exists {
if s.counterOn {
s.liveCountMux.Lock()
s.LoadCacheCounter++
s.liveCountMux.Unlock()
}
if s.atomicUpdate {
// Return a copy so that Commit() doesn't have to be called at
// each block and still commit every state transition.
// Before Commit, the same batch is in liveCache and in updatedNodes
newVal := make([][]byte, 31, 31)
copy(newVal, val)
return newVal, nil
}
return val, nil
}
// checking updated nodes is useful if get() or update() is called twice in a row without db commit
s.db.updatedMux.RLock()
val, exists = s.db.updatedNodes[node]
s.db.updatedMux.RUnlock()
if exists {
if s.atomicUpdate {
// Return a copy so that Commit() doesn't have to be called at
// each block and still commit every state transition.
newVal := make([][]byte, 31, 31)
copy(newVal, val)
return newVal, nil
}
return val, nil
}
//Fetch node in disk database
if s.db.Store == nil {
return nil, fmt.Errorf("DB not connected to trie")
}
if s.counterOn {
s.loadDbMux.Lock()
s.LoadDbCounter++
s.loadDbMux.Unlock()
}
s.db.lock.Lock()
dbval := s.db.Store.Get(root[:HashLength])
s.db.lock.Unlock()
nodeSize := len(dbval)
if nodeSize != 0 {
return s.parseBatch(dbval), nil
}
return nil, fmt.Errorf("the trie node %x is unavailable in the disk db, db may be corrupted", root)
}
// parseBatch decodes the byte data into a slice of nodes and bitmap
func (s *Trie) parseBatch(val []byte) [][]byte {
batch := make([][]byte, 31, 31)
bitmap := val[:4]
// check if the batch root is a shortcut
if bitIsSet(val, 31) {
batch[0] = []byte{1}
batch[1] = val[4 : 4+33]
batch[2] = val[4+33 : 4+33*2]
} else {
batch[0] = []byte{0}
j := 0
for i := 1; i <= 30; i++ {
if bitIsSet(bitmap, i-1) {
batch[i] = val[4+33*j : 4+33*(j+1)]
j++
}
}
}
return batch
}
// leafHash returns the hash of key_value_byte(height) concatenated, stores it in the updatedNodes and maybe in liveCache.
// leafHash is never called for a default value. Default value should not be stored.
func (s *Trie) leafHash(key, value, oldRoot []byte, batch [][]byte, iBatch, height int) []byte {
// byte(height) is here for 2 reasons.
// 1- to prevent potential problems with merkle proofs where if an account
// has the same address as a node, it would be possible to prove a
// different value for the account.
// 2- when accounts are added to the trie, accounts on their path get pushed down the tree
// with them. if an old account changes position from a shortcut batch to another
// shortcut batch of different height, it would be deleted when reverting.
h := s.hash(key, value, []byte{byte(height)})
h = append(h, byte(1)) // byte(1) is a flag for the shortcut
batch[2*iBatch+2] = append(value, byte(2))
batch[2*iBatch+1] = append(key, byte(2))
if height%4 == 0 {
batch[0] = []byte{1} // byte(1) is a flag for the shortcut batch
s.storeNode(batch, h, oldRoot, height)
}
return h
}
// storeNode stores a batch and deletes the old node from cache
func (s *Trie) storeNode(batch [][]byte, h, oldRoot []byte, height int) {
if !bytes.Equal(h, oldRoot) {
var node Hash
copy(node[:], h)
// record new node
s.db.updatedMux.Lock()
s.db.updatedNodes[node] = batch
s.db.updatedMux.Unlock()
// Cache the shortcut node if its height is at or above CacheHeightLimit
if height >= s.CacheHeightLimit {
s.db.liveMux.Lock()
s.db.liveCache[node] = batch
s.db.liveMux.Unlock()
}
s.deleteOldNode(oldRoot, height, false)
}
}
// interiorHash hashes 2 children to get the parent hash and stores it in the updatedNodes and maybe in liveCache.
func (s *Trie) interiorHash(left, right, oldRoot []byte, batch [][]byte, iBatch, height int) []byte {
var h []byte
// left and right cannot both be default. It is handled by maybeMoveUpShortcut()
if len(left) == 0 {
h = s.hash(DefaultLeaf, right[:HashLength])
} else if len(right) == 0 {
h = s.hash(left[:HashLength], DefaultLeaf)
} else {
h = s.hash(left[:HashLength], right[:HashLength])
}
h = append(h, byte(0))
batch[2*iBatch+2] = right
batch[2*iBatch+1] = left
if height%4 == 0 {
batch[0] = []byte{0}
s.storeNode(batch, h, oldRoot, height)
}
return h
}
// updatePastTries appends the current Root to the list of past tries
func (s *Trie) updatePastTries() {
if len(s.pastTries) >= maxPastTries {
copy(s.pastTries, s.pastTries[1:])
s.pastTries[len(s.pastTries)-1] = s.Root
} else {
s.pastTries = append(s.pastTries, s.Root)
}
}

trie_cache.go (+63, -0)

@@ -0,0 +1,63 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"sync"
"github.com/aergoio/aergo-lib/db"
)
// DbTx represents Set and Delete interface to store data
type DbTx interface {
Set(key, value []byte)
Delete(key []byte)
}
type CacheDB struct {
// liveCache contains the first levels of the trie (nodes that have 2 non default children)
liveCache map[Hash][][]byte
// liveMux is a lock for liveCache
liveMux sync.RWMutex
// updatedNodes will be flushed to disk on commit
updatedNodes map[Hash][][]byte
// updatedMux is a lock for updatedNodes
updatedMux sync.RWMutex
// nodesToRevert will be deleted from db
nodesToRevert [][]byte
// revertMux is a lock for nodesToRevert
revertMux sync.RWMutex
// lock for CacheDB
lock sync.RWMutex
// store is the interface to disk db
Store db.DB
}
// commit adds updatedNodes to the given database transaction.
func (c *CacheDB) commit(txn *DbTx) {
c.updatedMux.Lock()
defer c.updatedMux.Unlock()
for key, batch := range c.updatedNodes {
var node []byte
(*txn).Set(append(node, key[:]...), c.serializeBatch(batch))
}
}
// serializeBatch serialises the 2D [][]byte into a []byte for db
func (c *CacheDB) serializeBatch(batch [][]byte) []byte {
serialized := make([]byte, 4) //, 30*33)
if batch[0][0] == 1 {
// the batch node is a shortcut
bitSet(serialized, 31)
}
for i := 1; i < 31; i++ {
if len(batch[i]) != 0 {
bitSet(serialized, i-1)
serialized = append(serialized, batch[i]...)
}
}
return serialized
}

trie_merkle_proof.go (+210, -0)

@@ -0,0 +1,210 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"bytes"
)
// MerkleProof generates a Merkle proof of inclusion or non-inclusion
// for the current trie root
// returns the audit path, bool (key included), key, value, error
// (key,value) can be 1- (nil, value), value of the included key, 2- the kv of a LeafNode
// on the path of the non-included key, 3- (nil, nil) for a non-included key
// with a DefaultLeaf on the path
func (s *Trie) MerkleProof(key []byte) ([][]byte, bool, []byte, []byte, error) {
s.lock.RLock()
defer s.lock.RUnlock()
s.atomicUpdate = false // so loadChildren doesn't return a copy
return s.merkleProof(s.Root, key, nil, s.TrieHeight, 0)
}
// MerkleProofR generates a Merkle proof of inclusion or non-inclusion
// for a given past trie root
// returns the audit path, bool (key included), key, value, error
// (key,value) can be 1- (nil, value), value of the included key, 2- the kv of a LeafNode
// on the path of the non-included key, 3- (nil, nil) for a non-included key
// with a DefaultLeaf on the path
func (s *Trie) MerkleProofR(key, root []byte) ([][]byte, bool, []byte, []byte, error) {
s.lock.RLock()
defer s.lock.RUnlock()
s.atomicUpdate = false // so loadChildren doesn't return a copy
return s.merkleProof(root, key, nil, s.TrieHeight, 0)
}
// MerkleProofCompressedR returns a compressed merkle proof for the given trie root
func (s *Trie) MerkleProofCompressedR(key, root []byte) ([]byte, [][]byte, int, bool, []byte, []byte, error) {
return s.merkleProofCompressed(key, root)
}
// MerkleProofCompressed returns a compressed merkle proof
func (s *Trie) MerkleProofCompressed(key []byte) ([]byte, [][]byte, int, bool, []byte, []byte, error) {
return s.merkleProofCompressed(key, s.Root)
}
func (s *Trie) merkleProofCompressed(key, root []byte) ([]byte, [][]byte, int, bool, []byte, []byte, error) {
s.lock.RLock()
defer s.lock.RUnlock()
s.atomicUpdate = false // so loadChildren doesn't return a copy
// create a regular merkle proof and then compress it
mpFull, included, proofKey, proofVal, err := s.merkleProof(root, key, nil, s.TrieHeight, 0)
if err != nil {
return nil, nil, 0, true, nil, nil, err
}
// the height of the shortcut in the tree will be needed for the proof verification
height := len(mpFull)
var mp [][]byte
bitmap := make([]byte, len(mpFull)/8+1)
for i, node := range mpFull {
if !bytes.Equal(node, DefaultLeaf) {
bitSet(bitmap, i)
mp = append(mp, node)
}
}
return bitmap, mp, height, included, proofKey, proofVal, nil
}
// merkleProof generates a Merkle proof of inclusion or non-inclusion
// for a given trie root.
// returns the audit path, bool (key included), key, value, error
// (key,value) can be 1- (nil, value), value of the included key, 2- the kv of a LeafNode
// on the path of the non-included key, 3- (nil, nil) for a non-included key
// with a DefaultLeaf on the path
func (s *Trie) merkleProof(root, key []byte, batch [][]byte, height, iBatch int) ([][]byte, bool, []byte, []byte, error) {
if len(root) == 0 {
// prove that an empty subtree is on the path of the key
return nil, false, nil, nil, nil
}
// Fetch the children of the node
batch, iBatch, lnode, rnode, isShortcut, err := s.loadChildren(root, height, iBatch, batch)
if err != nil {
return nil, false, nil, nil, err
}
if isShortcut || height == 0 {
if bytes.Equal(lnode[:HashLength], key) {
// return the value so a call to trie.Get() is not needed.
return nil, true, nil, rnode[:HashLength], nil
}
// Return the proof of the leaf key that is on the path of the non included key
return nil, false, lnode[:HashLength], rnode[:HashLength], nil
}
// append the left or right node to the proof
if bitIsSet(key, s.TrieHeight-height) {
mp, included, proofKey, proofValue, err := s.merkleProof(rnode, key, batch, height-1, 2*iBatch+2)
if err != nil {
return nil, false, nil, nil, err
}
if len(lnode) != 0 {
return append(mp, lnode[:HashLength]), included, proofKey, proofValue, nil
} else {
return append(mp, DefaultLeaf), included, proofKey, proofValue, nil
}
}
mp, included, proofKey, proofValue, err := s.merkleProof(lnode, key, batch, height-1, 2*iBatch+1)
if err != nil {
return nil, false, nil, nil, err
}
if len(rnode) != 0 {
return append(mp, rnode[:HashLength]), included, proofKey, proofValue, nil
} else {
return append(mp, DefaultLeaf), included, proofKey, proofValue, nil
}
}
// VerifyInclusion verifies that key/value is included in the trie with latest root
func (s *Trie) VerifyInclusion(ap [][]byte, key, value []byte) bool {
leafHash := s.hash(key, value, []byte{byte(s.TrieHeight - len(ap))})
return bytes.Equal(s.Root, s.verifyInclusion(ap, 0, key, leafHash))
}
// verifyInclusion returns the merkle root by hashing the merkle proof items
func (s *Trie) verifyInclusion(ap [][]byte, keyIndex int, key, leafHash []byte) []byte {
if keyIndex == len(ap) {
return leafHash
}
if bitIsSet(key, keyIndex) {
return s.hash(ap[len(ap)-keyIndex-1], s.verifyInclusion(ap, keyIndex+1, key, leafHash))
}
return s.hash(s.verifyInclusion(ap, keyIndex+1, key, leafHash), ap[len(ap)-keyIndex-1])
}
// VerifyNonInclusion verifies a proof of non inclusion,
// Returns true if the non-inclusion is verified
func (s *Trie) VerifyNonInclusion(ap [][]byte, key, value, proofKey []byte) bool {
// Check if an empty subtree is on the key path
if len(proofKey) == 0 {
// return true if a DefaultLeaf in the key path is included in the trie
return bytes.Equal(s.Root, s.verifyInclusion(ap, 0, key, DefaultLeaf))
}
// Check if another kv leaf is on the key path in 2 steps
// 1- Check the proof leaf exists
if !s.VerifyInclusion(ap, proofKey, value) {
// the proof leaf is not included in the trie
return false
}
// 2- Check the proof leaf is on the key path
var b int
for b = 0; b < len(ap); b++ {
if bitIsSet(key, b) != bitIsSet(proofKey, b) {
// the proofKey leaf node is not on the path of the key
return false
}
}
// return true because we verified another leaf is on the key path
return true
}
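// Example (illustrative sketch): a possible proof round trip against an
// in-memory trie using the exported calls above. The key/value literals are
// placeholders; keys and values are hashed to 32 bytes as in the tests.
func exampleProofRoundTrip() bool {
	smt := NewTrie(nil, Hasher, nil)
	key, value := Hasher([]byte("member key")), Hasher([]byte("member value"))
	if _, err := smt.Update([][]byte{key}, [][]byte{value}); err != nil {
		return false
	}
	// Inclusion: the audit path alone lets the verifier recompute the root.
	ap, included, _, _, err := smt.MerkleProof(key)
	if err != nil || !included || !smt.VerifyInclusion(ap, key, value) {
		return false
	}
	// Non-inclusion: the proof carries the leaf (or default node) found on the key path.
	absent := Hasher([]byte("absent key"))
	ap, included, proofKey, proofValue, err := smt.MerkleProof(absent)
	if err != nil || included {
		return false
	}
	return smt.VerifyNonInclusion(ap, absent, proofValue, proofKey)
}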
// VerifyInclusionC verifies that key/value is included in the trie with latest root
func (s *Trie) VerifyInclusionC(bitmap, key, value []byte, ap [][]byte, length int) bool {
leafHash := s.hash(key, value, []byte{byte(s.TrieHeight - length)})
return bytes.Equal(s.Root, s.verifyInclusionC(bitmap, key, leafHash, ap, length, 0, 0))
}
// verifyInclusionC returns the merkle root by hashing the merkle proof items
func (s *Trie) verifyInclusionC(bitmap, key, leafHash []byte, ap [][]byte, length, keyIndex, apIndex int) []byte {
if keyIndex == length {
return leafHash
}
if bitIsSet(key, keyIndex) {
if bitIsSet(bitmap, length-keyIndex-1) {
return s.hash(ap[len(ap)-apIndex-1], s.verifyInclusionC(bitmap, key, leafHash, ap, length, keyIndex+1, apIndex+1))
}
return s.hash(DefaultLeaf, s.verifyInclusionC(bitmap, key, leafHash, ap, length, keyIndex+1, apIndex))
}
if bitIsSet(bitmap, length-keyIndex-1) {
return s.hash(s.verifyInclusionC(bitmap, key, leafHash, ap, length, keyIndex+1, apIndex+1), ap[len(ap)-apIndex-1])
}
return s.hash(s.verifyInclusionC(bitmap, key, leafHash, ap, length, keyIndex+1, apIndex), DefaultLeaf)
}
// VerifyNonInclusionC verifies a proof of non inclusion,
// Returns true if the non-inclusion is verified
func (s *Trie) VerifyNonInclusionC(ap [][]byte, length int, bitmap, key, value, proofKey []byte) bool {
// Check if an empty subtree is on the key path
if len(proofKey) == 0 {
// return true if a DefaultLeaf in the key path is included in the trie
return bytes.Equal(s.Root, s.verifyInclusionC(bitmap, key, DefaultLeaf, ap, length, 0, 0))
}
// Check if another kv leaf is on the key path in 2 steps
// 1- Check the proof leaf exists
if !s.VerifyInclusionC(bitmap, proofKey, value, ap, length) {
// the proof leaf is not included in the trie
return false
}
// 2- Check the proof leaf is on the key path
var b int
for b = 0; b < length; b++ {
if bitIsSet(key, b) != bitIsSet(proofKey, b) {
// the proofKey leaf node is not on the path of the key
return false
}
}
// return true because we verified another leaf is on the key path
return true
}
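// Example (illustrative sketch): the compressed variant replaces default nodes
// in the audit path with a bitmap; the returned length is the height of the
// proof leaf and must be handed back to the verifier. Key/value literals are
// placeholders.
func exampleCompressedProof() bool {
	smt := NewTrie(nil, Hasher, nil)
	key, value := Hasher([]byte("compressed key")), Hasher([]byte("compressed value"))
	if _, err := smt.Update([][]byte{key}, [][]byte{value}); err != nil {
		return false
	}
	bitmap, ap, length, included, _, _, err := smt.MerkleProofCompressed(key)
	if err != nil || !included || !smt.VerifyInclusionC(bitmap, key, value, ap, length) {
		return false
	}
	absent := Hasher([]byte("absent key"))
	bitmap, ap, length, included, proofKey, proofValue, err := smt.MerkleProofCompressed(absent)
	if err != nil || included {
		return false
	}
	return smt.VerifyNonInclusionC(ap, length, bitmap, absent, proofValue, proofKey)
}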

+ 178
- 0
trie_revert.go

@ -0,0 +1,178 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"bytes"
"fmt"
)
// Revert rewinds the state tree to a previous version
// All the nodes (subtree roots and values) reverted are deleted from the database.
func (s *Trie) Revert(toOldRoot []byte) error {
s.lock.RLock()
defer s.lock.RUnlock()
// safety precaution if reverting to a shortcut batch that might have been deleted
s.atomicUpdate = false // so loadChildren doesn't return a copy
batch, _, _, _, isShortcut, err := s.loadChildren(toOldRoot, s.TrieHeight, 0, nil)
if err != nil {
return err
}
//check if toOldRoot is in s.pastTries
canRevert := false
toIndex := 0
for i, r := range s.pastTries {
if bytes.Equal(r, toOldRoot) {
canRevert = true
toIndex = i
break
}
}
if !canRevert || bytes.Equal(s.Root, toOldRoot) {
return fmt.Errorf("The root cannot be reverted, because already latest of not in pastTries : current : %x, target : %x", s.Root, toOldRoot)
}
// For every node of toOldRoot, compare it to the equivalent node in the other pastTries between toOldRoot and the current s.Root. If a node differs, delete the newer trie's node from the database.
s.db.nodesToRevert = make([][]byte, 0)
for i := toIndex + 1; i < len(s.pastTries); i++ {
ch := make(chan error, 1)
s.maybeDeleteSubTree(toOldRoot, s.pastTries[i], s.TrieHeight, 0, nil, nil, ch)
err := <-ch
if err != nil {
return err
}
}
// NOTE The tx interface doesn't handle ErrTxnTooBig
txn := s.db.Store.NewTx()
for _, key := range s.db.nodesToRevert {
txn.Delete(key[:HashLength])
}
txn.Commit()
s.pastTries = s.pastTries[:toIndex+1]
s.Root = toOldRoot
s.db.liveCache = make(map[Hash][][]byte)
s.db.updatedNodes = make(map[Hash][][]byte)
if isShortcut {
// If toOldRoot is a shortcut batch, it is possible that
// revert has deleted it if the key was ever stored at height 0,
// because in the leaf hash byte(0) == byte(256)
s.db.Store.Set(toOldRoot, s.db.serializeBatch(batch))
}
return nil
}
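// Example (illustrative sketch): reverting to a previously committed root.
// A persistent store is required because Revert prunes the newer nodes from
// the database. The badger path is a placeholder, and the db package
// (github.com/aergoio/aergo-lib/db) is assumed to be imported as in trie_test.go.
func exampleRevert() error {
	st := db.NewDB(db.BadgerImpl, "/tmp/trie-revert-example")
	defer st.Close()
	smt := NewTrie(nil, Hasher, st)
	key := Hasher([]byte("account"))
	oldRoot, err := smt.Update([][]byte{key}, [][]byte{Hasher([]byte("balance v1"))})
	if err != nil {
		return err
	}
	if err := smt.Commit(); err != nil { // Commit records oldRoot in pastTries
		return err
	}
	if _, err := smt.Update([][]byte{key}, [][]byte{Hasher([]byte("balance v2"))}); err != nil {
		return err
	}
	if err := smt.Commit(); err != nil {
		return err
	}
	// Rewind: s.Root becomes oldRoot again and the newer trie nodes are deleted.
	return smt.Revert(oldRoot)
}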
// maybeDeleteSubTree compares the subtree nodes of 2 tries and keeps only the older one
func (s *Trie) maybeDeleteSubTree(original, maybeDelete []byte, height, iBatch int, batch, batch2 [][]byte, ch chan<- (error)) {
if height == 0 {
if !bytes.Equal(original, maybeDelete) && len(maybeDelete) != 0 {
s.maybeDeleteRevertedNode(maybeDelete, 0)
}
ch <- nil
return
}
if bytes.Equal(original, maybeDelete) || len(maybeDelete) == 0 {
ch <- nil
return
}
// Equal batch roots imply identical batches, hence the early return above.
// At this point the roots differ, so load and compare the children of both.
batch, iBatch, lnode, rnode, isShortcut, lerr := s.loadChildren(original, height, iBatch, batch)
if lerr != nil {
ch <- lerr
return
}
batch2, _, lnode2, rnode2, isShortcut2, rerr := s.loadChildren(maybeDelete, height, iBatch, batch2)
if rerr != nil {
ch <- rerr
return
}
if isShortcut != isShortcut2 {
if isShortcut {
ch1 := make(chan error, 1)
s.deleteSubTree(maybeDelete, height, iBatch, batch2, ch1)
err := <-ch1
if err != nil {
ch <- err
return
}
} else {
s.maybeDeleteRevertedNode(maybeDelete, iBatch)
}
} else {
if isShortcut {
// Delete shortcut if not equal
if !bytes.Equal(lnode, lnode2) || !bytes.Equal(rnode, rnode2) {
s.maybeDeleteRevertedNode(maybeDelete, iBatch)
}
} else {
// Delete subtree if not equal
s.maybeDeleteRevertedNode(maybeDelete, iBatch)
ch1 := make(chan error, 1)
ch2 := make(chan error, 1)
go s.maybeDeleteSubTree(lnode, lnode2, height-1, 2*iBatch+1, batch, batch2, ch1)
go s.maybeDeleteSubTree(rnode, rnode2, height-1, 2*iBatch+2, batch, batch2, ch2)
err1, err2 := <-ch1, <-ch2
if err1 != nil {
ch <- err1
return
}
if err2 != nil {
ch <- err2
return
}
}
}
ch <- nil
}
// deleteSubTree deletes all the nodes contained in a tree
func (s *Trie) deleteSubTree(root []byte, height, iBatch int, batch [][]byte, ch chan<- (error)) {
if len(root) == 0 || height == 0 {
if height == 0 {
s.maybeDeleteRevertedNode(root, 0)
}
ch <- nil
return
}
batch, iBatch, lnode, rnode, isShortcut, err := s.loadChildren(root, height, iBatch, batch)
if err != nil {
ch <- err
return
}
if !isShortcut {
ch1 := make(chan error, 1)
ch2 := make(chan error, 1)
go s.deleteSubTree(lnode, height-1, 2*iBatch+1, batch, ch1)
go s.deleteSubTree(rnode, height-1, 2*iBatch+2, batch, ch2)
lerr := <-ch1
rerr := <-ch2
if lerr != nil {
ch <- lerr
return
}
if rerr != nil {
ch <- rerr
return
}
}
s.maybeDeleteRevertedNode(root, iBatch)
ch <- nil
}
// maybeDeleteRevertedNode adds the node to nodesToRevert for deletion
// if it is a batch root node (iBatch == 0, i.e. height%4 == 0)
func (s *Trie) maybeDeleteRevertedNode(root []byte, iBatch int) {
if iBatch == 0 {
s.db.revertMux.Lock()
s.db.nodesToRevert = append(s.db.nodesToRevert, root)
s.db.revertMux.Unlock()
}
}

+ 883
- 0
trie_test.go

@ -0,0 +1,883 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"bytes"
"runtime"
//"io/ioutil"
"os"
"path"
"time"
//"encoding/hex"
"fmt"
"math/rand"
"sort"
"testing"
"github.com/aergoio/aergo-lib/db"
)
func TestTrieEmpty(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
if len(smt.Root) != 0 {
t.Fatal("empty trie root hash not correct")
}
}
func TestTrieUpdateAndGet(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
smt.atomicUpdate = false
// Add data to empty trie
keys := getFreshData(10, 32)
values := getFreshData(10, 32)
ch := make(chan mresult, 1)
smt.update(smt.Root, keys, values, nil, 0, smt.TrieHeight, ch)
res := <-ch
root := res.update
// Check all keys have been stored
for i, key := range keys {
value, _ := smt.get(root, key, nil, 0, smt.TrieHeight)
if !bytes.Equal(values[i], value) {
t.Fatal("value not updated")
}
}
// Append to the trie
newKeys := getFreshData(5, 32)
newValues := getFreshData(5, 32)
ch = make(chan mresult, 1)
smt.update(root, newKeys, newValues, nil, 0, smt.TrieHeight, ch)
res = <-ch
newRoot := res.update
if bytes.Equal(root, newRoot) {
t.Fatal("trie not updated")
}
for i, newKey := range newKeys {
newValue, _ := smt.get(newRoot, newKey, nil, 0, smt.TrieHeight)
if !bytes.Equal(newValues[i], newValue) {
t.Fatal("failed to get value")
}
}
}
func TestTrieAtomicUpdate(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
smt.CacheHeightLimit = 0
keys := getFreshData(1, 32)
values := getFreshData(1, 32)
root, _ := smt.AtomicUpdate(keys, values)
updatedNb := len(smt.db.updatedNodes)
cacheNb := len(smt.db.liveCache)
newvalues := getFreshData(1, 32)
smt.AtomicUpdate(keys, newvalues)
if len(smt.db.updatedNodes) != 2*updatedNb {
t.Fatal("Atomic update doesnt store all tries")
}
if len(smt.db.liveCache) != cacheNb {
t.Fatal("Cache size should remain the same")
}
// check keys of previous atomic update are accessible in
// updated nodes with root.
smt.atomicUpdate = false
for i, key := range keys {
value, _ := smt.get(root, key, nil, 0, smt.TrieHeight)
if !bytes.Equal(values[i], value) {
t.Fatal("failed to get value")
}
}
}
func TestTriePublicUpdateAndGet(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
smt.CacheHeightLimit = 0
// Add data to empty trie
keys := getFreshData(20, 32)
values := getFreshData(20, 32)
root, _ := smt.Update(keys, values)
updatedNb := len(smt.db.updatedNodes)
cacheNb := len(smt.db.liveCache)
// Check all keys have been stored
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(values[i], value) {
t.Fatal("trie not updated")
}
}
if !bytes.Equal(root, smt.Root) {
t.Fatal("Root not stored")
}
newValues := getFreshData(20, 32)
smt.Update(keys, newValues)
if len(smt.db.updatedNodes) != updatedNb {
t.Fatal("multiple updates don't actualise updated nodes")
}
if len(smt.db.liveCache) != cacheNb {
t.Fatal("multiple updates don't actualise liveCache")
}
// Check all keys have been modified
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(newValues[i], value) {
t.Fatal("trie not updated")
}
}
}
func TestGetWithRoot(t *testing.T) {
dbPath := t.TempDir()
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
smt.CacheHeightLimit = 0
// Add data to empty trie
keys := getFreshData(20, 32)
values := getFreshData(20, 32)
root, _ := smt.Update(keys, values)
// Check all keys have been stored
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(values[i], value) {
t.Fatal("trie not updated")
}
}
if !bytes.Equal(root, smt.Root) {
t.Fatal("Root not stored")
}
if err := smt.Commit(); err != nil {
t.Fatal(err)
}
// Delete two values (0 and 1)
if _, err := smt.Update([][]byte{keys[0], keys[1]}, [][]byte{DefaultLeaf, DefaultLeaf}); err != nil {
t.Fatal(err)
}
// Change one value
oldValue3 := make([]byte, 32)
copy(oldValue3, values[3])
values[3] = getFreshData(1, 32)[0]
if _, err := smt.Update([][]byte{keys[3]}, [][]byte{values[3]}); err != nil {
t.Fatal(err)
}
// Check root has been actually updated
if bytes.Equal(smt.Root, root) {
t.Fatal("root not updated")
}
// Get the third value with the new root
v3, err := smt.GetWithRoot(keys[3], smt.Root)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(v3, values[3]) {
t.Fatalf("GetWithRoot did not keep the value: %x != %x", v3, values[3])
}
// Get the third value with the old root
v3, err = smt.GetWithRoot(keys[3], root)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(v3, oldValue3) {
t.Fatalf("GetWithRoot did not keep the value: %x != %x", v3, oldValue3)
}
st.Close()
}
func TestTrieWalk(t *testing.T) {
dbPath := t.TempDir()
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
smt.CacheHeightLimit = 0
// Add data to empty trie
keys := getFreshData(20, 32)
values := getFreshData(20, 32)
root, _ := smt.Update(keys, values)
// Check all keys have been stored
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(values[i], value) {
t.Fatal("trie not updated")
}
}
if !bytes.Equal(root, smt.Root) {
t.Fatal("Root not stored")
}
// Walk over the whole tree and compare the values
i := 0
if err := smt.Walk(nil, func(v *WalkResult) int32 {
if !bytes.Equal(v.Value, values[i]) {
t.Fatalf("walk value does not match %x != %x", v.Value, values[i])
}
if !bytes.Equal(v.Key, keys[i]) {
t.Fatalf("walk key does not match %x != %x", v.Key, keys[i])
}
i++
return 0
}); err != nil {
t.Fatal(err)
}
// Delete two values (0 and 3)
if _, err := smt.Update([][]byte{keys[0], keys[3]}, [][]byte{DefaultLeaf, DefaultLeaf}); err != nil {
t.Fatal(err)
}
// Delete two elements and walk again
i = 1
if err := smt.Walk(nil, func(v *WalkResult) int32 {
if i == 3 {
i++
}
if !bytes.Equal(v.Value, values[i]) {
t.Fatalf("walk value does not match %x == %x\n", v.Value, values[i])
}
if !bytes.Equal(v.Key, keys[i]) {
t.Fatalf("walk key does not match %x == %x\n", v.Key, keys[i])
}
i++
return 0
}); err != nil {
t.Fatal(err)
}
// Add one new value to the previously deleted key
values[3] = getFreshData(1, 32)[0]
if _, err := smt.Update([][]byte{keys[3]}, [][]byte{values[3]}); err != nil {
t.Fatal(err)
}
// Walk and check again
i = 1
if err := smt.Walk(nil, func(v *WalkResult) int32 {
if !bytes.Equal(v.Value, values[i]) {
t.Fatalf("walk value does not match %x != %x\n", v.Value, values[i])
}
if !bytes.Equal(v.Key, keys[i]) {
t.Fatalf("walk key does not match %x != %x\n", v.Key, keys[i])
}
i++
return 0
}); err != nil {
t.Fatal(err)
}
// Find a specific value and test stop
i = 0
if err := smt.Walk(nil, func(v *WalkResult) int32 {
if bytes.Equal(v.Value, values[5]) {
return 1
}
i++
return 0
}); err != nil {
t.Fatal(err)
}
if i != 4 {
t.Fatalf("Needed more iterations on walk than expected: %d != 4", i)
}
st.Close()
}
func TestTrieDelete(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
// Add data to empty trie
keys := getFreshData(20, 32)
values := getFreshData(20, 32)
ch := make(chan mresult, 1)
smt.update(smt.Root, keys, values, nil, 0, smt.TrieHeight, ch)
result := <-ch
root := result.update
value, _ := smt.get(root, keys[0], nil, 0, smt.TrieHeight)
if !bytes.Equal(values[0], value) {
t.Fatal("trie not updated")
}
// Delete from trie
// To delete a key, just set its value to the DefaultLeaf hash.
ch = make(chan mresult, 1)
smt.update(root, keys[0:1], [][]byte{DefaultLeaf}, nil, 0, smt.TrieHeight, ch)
result = <-ch
updatedNb := len(smt.db.updatedNodes)
newRoot := result.update
newValue, _ := smt.get(newRoot, keys[0], nil, 0, smt.TrieHeight)
if len(newValue) != 0 {
t.Fatal("Failed to delete from trie")
}
// Remove deleted key from keys and check root with a clean trie.
smt2 := NewTrie(nil, Hasher, nil)
ch = make(chan mresult, 1)
smt2.update(smt.Root, keys[1:], values[1:], nil, 0, smt.TrieHeight, ch)
result = <-ch
cleanRoot := result.update
if !bytes.Equal(newRoot, cleanRoot) {
t.Fatal("roots mismatch")
}
if len(smt2.db.updatedNodes) != updatedNb {
t.Fatal("deleting doesn't actualise updated nodes")
}
//Empty the trie
var newValues [][]byte
for i := 0; i < 20; i++ {
newValues = append(newValues, DefaultLeaf)
}
ch = make(chan mresult, 1)
smt.update(root, keys, newValues, nil, 0, smt.TrieHeight, ch)
result = <-ch
root = result.update
//if !bytes.Equal(smt.DefaultHash(256), root) {
if len(root) != 0 {
t.Fatal("empty trie root hash not correct")
}
// Test deleting an already empty key
smt = NewTrie(nil, Hasher, nil)
keys = getFreshData(2, 32)
values = getFreshData(2, 32)
root, _ = smt.Update(keys, values)
key0 := make([]byte, 32, 32)
key1 := make([]byte, 32, 32)
smt.Update([][]byte{key0, key1}, [][]byte{DefaultLeaf, DefaultLeaf})
if !bytes.Equal(root, smt.Root) {
t.Fatal("deleting a default key shouldnt' modify the tree")
}
}
// test updating and deleting at the same time
func TestTrieUpdateAndDelete(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
smt.CacheHeightLimit = 0
key0 := make([]byte, 32, 32)
values := getFreshData(1, 32)
root, _ := smt.Update([][]byte{key0}, values)
cacheNb := len(smt.db.liveCache)
updatedNb := len(smt.db.updatedNodes)
smt.atomicUpdate = false
_, _, k, v, isShortcut, _ := smt.loadChildren(root, smt.TrieHeight, 0, nil)
if !isShortcut || !bytes.Equal(k[:HashLength], key0) || !bytes.Equal(v[:HashLength], values[0]) {
t.Fatal("leaf shortcut didn't move up to root")
}
key1 := make([]byte, 32, 32)
// set the last bit
bitSet(key1, 255)
keys := [][]byte{key0, key1}
values = [][]byte{DefaultLeaf, getFreshData(1, 32)[0]}
root, _ = smt.Update(keys, values)
if len(smt.db.liveCache) != cacheNb {
t.Fatal("number of cache nodes not correct after delete")
}
if len(smt.db.updatedNodes) != updatedNb {
t.Fatal("number of cache nodes not correct after delete")
}
smt.atomicUpdate = false
_, _, k, v, isShortcut, _ = smt.loadChildren(root, smt.TrieHeight, 0, nil)
if !isShortcut || !bytes.Equal(k[:HashLength], key1) || !bytes.Equal(v[:HashLength], values[1]) {
t.Fatal("leaf shortcut didn't move up to root")
}
}
func TestTrieMerkleProof(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
// Add data to empty trie
keys := getFreshData(10, 32)
values := getFreshData(10, 32)
smt.Update(keys, values)
for i, key := range keys {
ap, _, k, v, _ := smt.MerkleProof(key)
if !smt.VerifyInclusion(ap, key, values[i]) {
t.Fatalf("failed to verify inclusion proof")
}
if !bytes.Equal(key, k) && !bytes.Equal(values[i], v) {
t.Fatalf("merkle proof didnt return the correct key-value pair")
}
}
emptyKey := Hasher([]byte("non-member"))
ap, included, proofKey, proofValue, _ := smt.MerkleProof(emptyKey)
if included {
t.Fatalf("failed to verify non inclusion proof")
}
if !smt.VerifyNonInclusion(ap, emptyKey, proofValue, proofKey) {
t.Fatalf("failed to verify non inclusion proof")
}
}
func TestTrieMerkleProofCompressed(t *testing.T) {
smt := NewTrie(nil, Hasher, nil)
// Add data to empty trie
keys := getFreshData(10, 32)
values := getFreshData(10, 32)
smt.Update(keys, values)
for i, key := range keys {
bitmap, ap, length, _, k, v, _ := smt.MerkleProofCompressed(key)
if !smt.VerifyInclusionC(bitmap, key, values[i], ap, length) {
t.Fatalf("failed to verify inclusion proof")
}
if !bytes.Equal(key, k) && !bytes.Equal(values[i], v) {
t.Fatalf("merkle proof didnt return the correct key-value pair")
}
}
emptyKey := Hasher([]byte("non-member"))
bitmap, ap, length, included, proofKey, proofValue, _ := smt.MerkleProofCompressed(emptyKey)
if included {
t.Fatalf("failed to verify non inclusion proof")
}
if !smt.VerifyNonInclusionC(ap, length, bitmap, emptyKey, proofValue, proofKey) {
t.Fatalf("failed to verify non inclusion proof")
}
}
func TestTrieCommit(t *testing.T) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
keys := getFreshData(10, 32)
values := getFreshData(10, 32)
smt.Update(keys, values)
smt.Commit()
// liveCache is deleted so the key is fetched in badger db
smt.db.liveCache = make(map[Hash][][]byte)
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(value, values[i]) {
t.Fatal("failed to get value in committed db")
}
}
st.Close()
os.RemoveAll(".aergo")
}
func TestTrieStageUpdates(t *testing.T) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
keys := getFreshData(10, 32)
values := getFreshData(10, 32)
smt.Update(keys, values)
txn := st.NewTx()
smt.StageUpdates(txn.(DbTx))
txn.Commit()
// liveCache is deleted so the key is fetched in badger db
smt.db.liveCache = make(map[Hash][][]byte)
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(value, values[i]) {
t.Fatal("failed to get value in committed db")
}
}
st.Close()
os.RemoveAll(".aergo")
}
func TestTrieRevert(t *testing.T) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
// Edge case: Test that revert doesn't delete shortcut nodes
// when moved to a different position in tree
key0 := make([]byte, 32, 32)
key1 := make([]byte, 32, 32)
// setting the bit at 251 creates 2 shortcut batches at height 252
bitSet(key1, 251)
values := getFreshData(2, 32)
keys := [][]byte{key0, key1}
root, _ := smt.Update([][]byte{key0}, [][]byte{values[0]})
smt.Commit()
root2, _ := smt.Update([][]byte{key1}, [][]byte{values[1]})
smt.Commit()
smt.Revert(root)
if len(smt.db.Store.Get(root)) == 0 {
t.Fatal("shortcut node shouldnt be deleted by revert")
}
if len(smt.db.Store.Get(root2)) != 0 {
t.Fatal("reverted root should have been deleted")
}
key1 = make([]byte, 32, 32)
// setting the bit at 255 stores the keys as the tip
bitSet(key1, 255)
smt.Update([][]byte{key1}, [][]byte{values[1]})
smt.Commit()
smt.Revert(root)
if len(smt.db.Store.Get(root)) == 0 {
t.Fatal("shortcut node shouldnt be deleted by revert")
}
// Test all nodes are reverted in the usual case
// Add data to empty trie
keys = getFreshData(10, 32)
values = getFreshData(10, 32)
root, _ = smt.Update(keys, values)
smt.Commit()
// Update the values
newValues := getFreshData(10, 32)
smt.Update(keys, newValues)
updatedNodes1 := smt.db.updatedNodes
smt.Commit()
newKeys := getFreshData(10, 32)
newValues = getFreshData(10, 32)
smt.Update(newKeys, newValues)
updatedNodes2 := smt.db.updatedNodes
smt.Commit()
smt.Revert(root)
if !bytes.Equal(smt.Root, root) {
t.Fatal("revert failed")
}
if len(smt.pastTries) != 2 { // contains empty trie + reverted trie
t.Fatal("past tries not updated after revert")
}
// Check all keys have been reverted
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(values[i], value) {
t.Fatal("revert failed, values not updated")
}
}
if len(smt.db.liveCache) != 0 {
t.Fatal("live cache not reset after revert")
}
// Check all reverted nodes have been deleted
for node, _ := range updatedNodes2 {
if len(smt.db.Store.Get(node[:])) != 0 {
t.Fatal("nodes not deleted from database", node)
}
}
for node, _ := range updatedNodes1 {
if len(smt.db.Store.Get(node[:])) != 0 {
t.Fatal("nodes not deleted from database", node)
}
}
st.Close()
os.RemoveAll(".aergo")
}
func TestTrieRaisesError(t *testing.T) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
// Add data to empty trie
keys := getFreshData(10, 32)
values := getFreshData(10, 32)
smt.Update(keys, values)
smt.db.liveCache = make(map[Hash][][]byte)
smt.db.updatedNodes = make(map[Hash][][]byte)
// Check that errors are raised if a key is neither in the cache nor in the db
for _, key := range keys {
_, err := smt.Get(key)
if err == nil {
t.Fatal("Error not created if database doesnt have a node")
}
}
_, _, _, _, _, _, err := smt.MerkleProofCompressed(keys[0])
if err == nil {
t.Fatal("Error not created if database doesnt have a node")
}
_, err = smt.Update(keys, values)
if err == nil {
t.Fatal("Error not created if database doesnt have a node")
}
st.Close()
os.RemoveAll(".aergo")
smt = NewTrie(nil, Hasher, nil)
err = smt.Commit()
if err == nil {
t.Fatal("Error not created if database not connected")
}
smt.db.liveCache = make(map[Hash][][]byte)
smt.atomicUpdate = false
_, _, _, _, _, err = smt.loadChildren(make([]byte, 32, 32), smt.TrieHeight, 0, nil)
if err == nil {
t.Fatal("Error not created if database not connected")
}
err = smt.LoadCache(make([]byte, 32))
if err == nil {
t.Fatal("Error not created if database not connected")
}
}
func TestTrieLoadCache(t *testing.T) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
// Test size of cache
smt.CacheHeightLimit = 0
key0 := make([]byte, 32, 32)
key1 := make([]byte, 32, 32)
bitSet(key1, 255)
values := getFreshData(2, 32)
smt.Update([][]byte{key0, key1}, values)
if len(smt.db.liveCache) != 66 {
// the nodes are at the tip, so 64 + 2 = 66
t.Fatal("cache size incorrect")
}
// Add data to empty trie
keys := getFreshData(10, 32)
values = getFreshData(10, 32)
smt.Update(keys, values)
smt.Commit()
// Simulate node restart by deleting and loading cache
cacheSize := len(smt.db.liveCache)
smt.db.liveCache = make(map[Hash][][]byte)
err := smt.LoadCache(smt.Root)
if err != nil {
t.Fatal(err)
}
if cacheSize != len(smt.db.liveCache) {
t.Fatal("Cache loading from db incorrect")
}
st.Close()
os.RemoveAll(".aergo")
}
func TestHeight0LeafShortcut(t *testing.T) {
keySize := 32
smt := NewTrie(nil, Hasher, nil)
// Add 2 sibling keys that will be stored at height 0
key0 := make([]byte, keySize, keySize)
key1 := make([]byte, keySize, keySize)
bitSet(key1, keySize*8-1)
keys := [][]byte{key0, key1}
values := getFreshData(2, 32)
smt.Update(keys, values)
updatedNb := len(smt.db.updatedNodes)
// Check all keys have been stored
for i, key := range keys {
value, _ := smt.Get(key)
if !bytes.Equal(values[i], value) {
t.Fatal("trie not updated")
}
}
bitmap, ap, length, _, k, v, err := smt.MerkleProofCompressed(key1)
if err != nil {
t.Fatal(err)
}
if !bytes.Equal(key1, k) && !bytes.Equal(values[1], v) {
t.Fatalf("merkle proof didnt return the correct key-value pair")
}
if length != smt.TrieHeight {
t.Fatal("proof should have length equal to trie height for a leaf shortcut")
}
if !smt.VerifyInclusionC(bitmap, key1, values[1], ap, length) {
t.Fatal("failed to verify inclusion proof")
}
// Delete one key and check that the remaining one moved up to the root of the tree
newRoot, _ := smt.AtomicUpdate(keys[0:1], [][]byte{DefaultLeaf})
// Nb of updated nodes remains same because the new shortcut root was already stored at height 0.
if len(smt.db.updatedNodes) != updatedNb {
fmt.Println(len(smt.db.updatedNodes), updatedNb)
t.Fatal("number of cache nodes not correct after delete")
}
smt.atomicUpdate = false
_, _, k, v, isShortcut, err := smt.loadChildren(newRoot, smt.TrieHeight, 0, nil)
if err != nil {
t.Fatal(err)
}
if !isShortcut || !bytes.Equal(k[:HashLength], key1) || !bytes.Equal(v[:HashLength], values[1]) {
t.Fatal("leaf shortcut didn't move up to root")
}
_, _, length, _, k, v, _ = smt.MerkleProofCompressed(key1)
if length != 0 {
t.Fatal("proof should have length equal to trie height for a leaf shortcut")
}
if !bytes.Equal(key1, k) && !bytes.Equal(values[1], v) {
t.Fatalf("merkle proof didnt return the correct key-value pair")
}
}
func TestStash(t *testing.T) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
// Add data to empty trie
keys := getFreshData(20, 32)
values := getFreshData(20, 32)
root, _ := smt.Update(keys, values)
cacheSize := len(smt.db.liveCache)
smt.Commit()
if len(smt.pastTries) != 1 {
t.Fatal("Past tries not updated after commit")
}
values = getFreshData(20, 32)
smt.Update(keys, values)
smt.Stash(true)
if len(smt.pastTries) != 1 {
t.Fatal("Past tries not updated after commit")
}
if !bytes.Equal(smt.Root, root) {
t.Fatal("Trie not rolled back")
}
if len(smt.db.updatedNodes) != 0 {
t.Fatal("Trie not rolled back")
}
if len(smt.db.liveCache) != cacheSize {
t.Fatal("Trie not rolled back")
}
keys = getFreshData(20, 32)
values = getFreshData(20, 32)
smt.AtomicUpdate(keys, values)
values = getFreshData(20, 32)
smt.AtomicUpdate(keys, values)
if len(smt.pastTries) != 3 {
t.Fatal("Past tries not updated after commit")
}
smt.Stash(true)
if !bytes.Equal(smt.Root, root) {
t.Fatal("Trie not rolled back")
}
if len(smt.db.updatedNodes) != 0 {
t.Fatal("Trie not rolled back")
}
if len(smt.db.liveCache) != cacheSize {
t.Fatal("Trie not rolled back")
}
if len(smt.pastTries) != 1 {
t.Fatal("Past tries not updated after commit")
}
st.Close()
os.RemoveAll(".aergo")
}
func benchmark10MAccounts10Ktps(smt *Trie, b *testing.B) {
//b.ReportAllocs()
keys := getFreshData(100, 32)
values := getFreshData(100, 32)
smt.Update(keys, values)
fmt.Println("\nLoading b.N x 1000 accounts")
for i := 0; i < b.N; i++ {
newkeys := getFreshData(1000, 32)
newvalues := getFreshData(1000, 32)
start := time.Now()
smt.Update(newkeys, newvalues)
end := time.Now()
smt.Commit()
end2 := time.Now()
for j, key := range newkeys {
val, _ := smt.Get(key)
if !bytes.Equal(val, newvalues[j]) {
b.Fatal("new key not included")
}
}
end3 := time.Now()
elapsed := end.Sub(start)
elapsed2 := end2.Sub(end)
elapsed3 := end3.Sub(end2)
var m runtime.MemStats
runtime.ReadMemStats(&m)
fmt.Println(i, " : update time : ", elapsed, "commit time : ", elapsed2,
"\n1000 Get time : ", elapsed3,
"\ndb read : ", smt.LoadDbCounter, " cache read : ", smt.LoadCacheCounter,
"\ncache size : ", len(smt.db.liveCache),
"\nRAM : ", m.Sys/1024/1024, " MiB")
}
}
//go test -run=xxx -bench=. -benchmem -test.benchtime=20s
func BenchmarkCacheHeightLimit233(b *testing.B) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
smt.CacheHeightLimit = 233
benchmark10MAccounts10Ktps(smt, b)
st.Close()
os.RemoveAll(".aergo")
}
func BenchmarkCacheHeightLimit238(b *testing.B) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
smt.CacheHeightLimit = 238
benchmark10MAccounts10Ktps(smt, b)
st.Close()
os.RemoveAll(".aergo")
}
func BenchmarkCacheHeightLimit245(b *testing.B) {
dbPath := path.Join(".aergo", "db")
if _, err := os.Stat(dbPath); os.IsNotExist(err) {
_ = os.MkdirAll(dbPath, 0711)
}
st := db.NewDB(db.BadgerImpl, dbPath)
smt := NewTrie(nil, Hasher, st)
smt.CacheHeightLimit = 245
benchmark10MAccounts10Ktps(smt, b)
st.Close()
os.RemoveAll(".aergo")
}
func getFreshData(size, length int) [][]byte {
var data [][]byte
for i := 0; i < size; i++ {
key := make([]byte, 32)
_, err := rand.Read(key)
if err != nil {
panic(err)
}
data = append(data, Hasher(key)[:length])
}
sort.Sort(DataArray(data))
return data
}

+ 273
- 0
trie_tools.go

@ -0,0 +1,273 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"bytes"
"fmt"
"sync"
"sync/atomic"
"github.com/aergoio/aergo-lib/db"
)
// LoadCache loads the first layers of the merkle tree given a root
// This is called after a node restarts so that it doesn't become slow with db reads
// LoadCache also updates the Root with the given root.
func (s *Trie) LoadCache(root []byte) error {
if s.db.Store == nil {
return fmt.Errorf("DB not connected to trie")
}
s.db.liveCache = make(map[Hash][][]byte)
ch := make(chan error, 1)
s.loadCache(root, nil, 0, s.TrieHeight, ch)
s.Root = root
return <-ch
}
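// Example (illustrative sketch): warming the cache after a node restart, given
// a root previously persisted by the application. savedRoot and the badger
// path are placeholders.
func exampleLoadCache(savedRoot []byte) (*Trie, error) {
	st := db.NewDB(db.BadgerImpl, "/tmp/trie-cache-example")
	smt := NewTrie(nil, Hasher, st)
	// LoadCache reloads the top layers into liveCache and sets smt.Root to savedRoot.
	if err := smt.LoadCache(savedRoot); err != nil {
		return nil, err
	}
	return smt, nil
}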
// loadCache loads the first layers of the merkle tree given a root
func (s *Trie) loadCache(root []byte, batch [][]byte, iBatch, height int, ch chan<- (error)) {
if height < s.CacheHeightLimit || len(root) == 0 {
ch <- nil
return
}
if height%4 == 0 {
// Load the node from db
s.db.lock.Lock()
dbval := s.db.Store.Get(root[:HashLength])
s.db.lock.Unlock()
if len(dbval) == 0 {
ch <- fmt.Errorf("the trie node %x is unavailable in the disk db, db may be corrupted", root)
return
}
//Store node in cache.
var node Hash
copy(node[:], root)
batch = s.parseBatch(dbval)
s.db.liveMux.Lock()
s.db.liveCache[node] = batch
s.db.liveMux.Unlock()
iBatch = 0
if batch[0][0] == 1 {
// if height == 0 this will also return
ch <- nil
return
}
}
if iBatch != 0 && batch[iBatch][HashLength] == 1 {
// Check if node is a leaf node
ch <- nil
} else {
// Load subtree
lnode, rnode := batch[2*iBatch+1], batch[2*iBatch+2]
lch := make(chan error, 1)
rch := make(chan error, 1)
go s.loadCache(lnode, batch, 2*iBatch+1, height-1, lch)
go s.loadCache(rnode, batch, 2*iBatch+2, height-1, rch)
if err := <-lch; err != nil {
ch <- err
return
}
if err := <-rch; err != nil {
ch <- err
return
}
ch <- nil
}
}
// Get fetches the value of a key by going down the current trie root.
func (s *Trie) Get(key []byte) ([]byte, error) {
s.lock.RLock()
defer s.lock.RUnlock()
s.atomicUpdate = false
return s.get(s.Root, key, nil, 0, s.TrieHeight)
}
// GetWithRoot fetches the value of a key by going down the trie from the specified root.
func (s *Trie) GetWithRoot(key []byte, root []byte) ([]byte, error) {
s.lock.RLock()
defer s.lock.RUnlock()
s.atomicUpdate = false
if root == nil {
root = s.Root
}
return s.get(root, key, nil, 0, s.TrieHeight)
}
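// Example (illustrative sketch): reading the same key at the current root and
// at an older root that is still available in the database or cache.
func exampleGetWithRoot(smt *Trie, key, oldRoot []byte) (current, previous []byte, err error) {
	if current, err = smt.Get(key); err != nil { // value under smt.Root
		return nil, nil, err
	}
	if previous, err = smt.GetWithRoot(key, oldRoot); err != nil { // value as of oldRoot
		return nil, nil, err
	}
	return current, previous, nil
}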
// get fetches the value of a key given a trie root
func (s *Trie) get(root, key []byte, batch [][]byte, iBatch, height int) ([]byte, error) {
if len(root) == 0 {
// the trie does not contain the key
return nil, nil
}
// Fetch the children of the node
batch, iBatch, lnode, rnode, isShortcut, err := s.loadChildren(root, height, iBatch, batch)
if err != nil {
return nil, err
}
if isShortcut {
if bytes.Equal(lnode[:HashLength], key) {
return rnode[:HashLength], nil
}
// also returns nil if height 0 is not a shortcut
return nil, nil
}
if bitIsSet(key, s.TrieHeight-height) {
return s.get(rnode, key, batch, 2*iBatch+2, height-1)
}
return s.get(lnode, key, batch, 2*iBatch+1, height-1)
}
// WalkResult contains the key and value obtained with a Walk() operation
type WalkResult struct {
Value []byte
Key []byte
}
// Walk iterates over all the stored trie values from left to right and calls the callback for each.
// If the callback returns a value different from 0, the walk stops; otherwise it continues.
func (s *Trie) Walk(root []byte, callback func(*WalkResult) int32) error {
walkc := make(chan *WalkResult)
s.lock.RLock()
defer s.lock.RUnlock()
if root == nil {
root = s.Root
}
s.atomicUpdate = false
finishedWalk := make(chan (bool), 1)
stop := int32(0)
wg := sync.WaitGroup{} // WaitGroup to avoid Walk() returning before all callback executions have finished.
go func() {
for {
select {
case <-finishedWalk:
return
case value := <-walkc:
stopCallback := callback(value)
wg.Done()
// In order to avoid data races we need to check the current value of stop while we
// store our callback value. If our callback value is 0, we may have overwritten a
// previous non-zero value, so we need to restore it.
if cv := atomic.SwapInt32(&stop, stopCallback); cv != 0 || stopCallback != 0 {
if stopCallback == 0 {
atomic.StoreInt32(&stop, cv)
}
// We need to return (instead of break) in order to stop iterating if some callback returns non zero
return
}
}
}
}()
err := s.walk(walkc, &stop, root, nil, 0, s.TrieHeight, &wg)
finishedWalk <- true
wg.Wait()
return err
}
// walk fetches the value of a key given a trie root
func (s *Trie) walk(walkc chan (*WalkResult), stop *int32, root []byte, batch [][]byte, ibatch, height int, wg *sync.WaitGroup) error {
if len(root) == 0 || atomic.LoadInt32(stop) != 0 {
// The sub tree is empty or stop walking
return nil
}
// Fetch the children of the node
batch, ibatch, lnode, rnode, isShortcut, err := s.loadChildren(root, height, ibatch, batch)
if err != nil {
return err
}
if isShortcut {
wg.Add(1)
walkc <- &WalkResult{Value: rnode[:HashLength], Key: lnode[:HashLength]}
return nil
}
// Go left
if err := s.walk(walkc, stop, lnode, batch, 2*ibatch+1, height-1, wg); err != nil {
return err
}
// Go Right
if err := s.walk(walkc, stop, rnode, batch, 2*ibatch+2, height-1, wg); err != nil {
return err
}
return nil
}
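// Example (illustrative sketch): scanning the stored key/value pairs under the
// current root and stopping early once a target value is found; returning a
// non-zero value from the callback stops the walk. target is a placeholder.
func exampleWalkFind(smt *Trie, target []byte) (found bool, err error) {
	err = smt.Walk(nil, func(r *WalkResult) int32 {
		if bytes.Equal(r.Value, target) {
			found = true
			return 1 // stop walking
		}
		return 0 // continue
	})
	return found, err
}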
// TrieRootExists returns true if the root exists in Database.
func (s *Trie) TrieRootExists(root []byte) bool {
s.db.lock.RLock()
dbval := s.db.Store.Get(root)
s.db.lock.RUnlock()
if len(dbval) != 0 {
return true
}
return false
}
// Commit stores the updated nodes to disk.
// Commit should be called for every block otherwise past tries
// are not recorded and it is not possible to revert to them
// (except if AtomicUpdate is used, which records every state).
func (s *Trie) Commit() error {
if s.db.Store == nil {
return fmt.Errorf("DB not connected to trie")
}
// NOTE The tx interface doesn't handle ErrTxnTooBig
txn := s.db.Store.NewTx().(DbTx)
s.StageUpdates(txn)
txn.(db.Transaction).Commit()
return nil
}
// StageUpdates requires a database transaction as input.
// Unlike Commit(), it doesn't commit the transaction;
// the database transaction MUST be committed, otherwise the
// state root will not exist.
func (s *Trie) StageUpdates(txn DbTx) {
s.lock.Lock()
defer s.lock.Unlock()
// Commit the new nodes to database, clear updatedNodes and store the Root in pastTries for reverts.
if !s.atomicUpdate {
// if previously AtomicUpdate was called, then past tries is already updated
s.updatePastTries()
}
s.db.commit(&txn)
s.db.updatedNodes = make(map[Hash][][]byte)
s.prevRoot = s.Root
}
// Stash rolls back the changes made by previous updates
// and loads the cache from before the rollback.
func (s *Trie) Stash(rollbackCache bool) error {
s.lock.Lock()
defer s.lock.Unlock()
s.Root = s.prevRoot
if rollbackCache {
// Making a temporary liveCache requires it to be copied, so it's quicker
// to just load the cache from DB if a block state root was incorrect.
s.db.liveCache = make(map[Hash][][]byte)
ch := make(chan error, 1)
s.loadCache(s.Root, nil, 0, s.TrieHeight, ch)
err := <-ch
if err != nil {
return err
}
} else {
s.db.liveCache = make(map[Hash][][]byte)
}
s.db.updatedNodes = make(map[Hash][][]byte)
// also stash past tries created by Atomic update
for i := len(s.pastTries) - 1; i >= 0; i-- {
if bytes.Equal(s.pastTries[i], s.Root) {
break
} else {
// remove from past tries
s.pastTries = s.pastTries[:len(s.pastTries)-1]
}
}
return nil
}
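// Example (illustrative sketch): staging trie updates inside a wider database
// transaction instead of calling Commit, and rolling back with Stash when the
// resulting root is rejected. The accept callback and badger path are
// placeholders, and keys are assumed sorted as in the tests (see getFreshData).
func exampleStageOrStash(keys, values [][]byte, accept func([]byte) bool) error {
	st := db.NewDB(db.BadgerImpl, "/tmp/trie-stage-example")
	defer st.Close()
	smt := NewTrie(nil, Hasher, st)
	root, err := smt.Update(keys, values)
	if err != nil {
		return err
	}
	if !accept(root) {
		// Drop the staged nodes and reload the live cache from the previous root.
		return smt.Stash(true)
	}
	txn := st.NewTx()
	smt.StageUpdates(txn.(DbTx)) // stage the updated nodes in the transaction
	txn.Commit()                 // the new root only exists on disk once this commits
	return nil
}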

+ 42
- 0
util.go

@ -0,0 +1,42 @@
/**
* @file
* @copyright defined in aergo/LICENSE.txt
*/
package trie
import (
"bytes"
)
var (
// Trie default value : [byte(0)]
DefaultLeaf = []byte{0}
)
const (
HashLength = 32
maxPastTries = 300
)
type Hash [HashLength]byte
func bitIsSet(bits []byte, i int) bool {
return bits[i/8]&(1<<uint(7-i%8)) != 0
}
func bitSet(bits []byte, i int) {
bits[i/8] |= 1 << uint(7-i%8)
}
// for sorting test data
type DataArray [][]byte
func (d DataArray) Len() int {
return len(d)
}
func (d DataArray) Swap(i, j int) {
d[i], d[j] = d[j], d[i]
}
func (d DataArray) Less(i, j int) bool {
return bytes.Compare(d[i], d[j]) == -1
}
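// Example (illustrative sketch): key bits are indexed most-significant bit
// first, so index 0 is the top bit of the first byte and index 255 is the
// lowest bit of a 32-byte key (the convention used by the tests, e.g.
// bitSet(key1, 255)).
func exampleBitIndexing() bool {
	key := make([]byte, HashLength)
	bitSet(key, 0)   // first byte becomes 0x80
	bitSet(key, 255) // last byte becomes 0x01
	return bitIsSet(key, 0) && bitIsSet(key, 255) && key[0] == 0x80 && key[31] == 0x01
}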
