Fix eth events query and sync inconsistent state
- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil. This avoids a panic in the case where the db is closed but an error occurs soon after and a future call tries to close again: pebble.Close() panics if the db is already closed.
  - Avoid calling pebble methods when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method if the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In *EventByBlock methods, take blockHash as an input argument and use it when querying the event logs. Previously the blockHash was taken from the log results only if there was at least one log. This caused the following issue: if there were no logs, it was not possible to know whether the result came from the expected block or an uncle block! By querying logs by blockHash we make sure that even if there are no logs, they are from the right block.
  - Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time.
- sync
  - If there's an error during a call to Sync, call resetState, which internally resets the stateDB to avoid stale checkpoints (and a corresponding invalid increase in the StateDB batchNum).
  - During a Sync, after every batch processed, make sure that the StateDB currentBatch corresponds to the batchNum in the smart contract log/event.
3 years ago
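The idempotent-close pattern described in the commit message (only close a non-nil handle, then set it to nil) can be sketched as follows. This is a minimal illustration with hypothetical `store`/`fakeDB` types, not the hermez-node kvdb implementation; `fakeDB.Close` stands in for `pebble.Close()`, which panics on a double close.

```go
package main

import "fmt"

// fakeDB stands in for a pebble handle: closing it twice panics,
// just like pebble.Close() on an already-closed db.
type fakeDB struct{ closed bool }

func (d *fakeDB) Close() {
	if d.closed {
		panic("close of already-closed db")
	}
	d.closed = true
}

// store wraps the db handle. Close is safe to call repeatedly:
// it only closes a non-nil handle, and always sets the handle to
// nil afterwards, so a later error path calling Close again is a no-op.
type store struct{ db *fakeDB }

func (s *store) Close() {
	if s.db == nil {
		return // already closed; avoid the underlying double-close panic
	}
	s.db.Close()
	s.db = nil
}

func main() {
	s := &store{db: &fakeDB{}}
	s.Close()
	s.Close() // safe: second Close does nothing
	fmt.Println("ok")
}
```

Without the nil guard and reassignment, the second `Close` would reach the underlying handle and panic.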
package statedb

import (
	"encoding/hex"
	"fmt"
	"io/ioutil"
	"math/big"
	"os"
	"strings"
	"sync"
	"testing"

	ethCommon "github.com/ethereum/go-ethereum/common"
	ethCrypto "github.com/ethereum/go-ethereum/crypto"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/log"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"github.com/iden3/go-merkletree/db"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newAccount(t *testing.T, i int) *common.Account {
	var sk babyjub.PrivateKey
	_, err := hex.Decode(sk[:],
		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
	require.NoError(t, err)
	pk := sk.Public()

	key, err := ethCrypto.GenerateKey()
	require.NoError(t, err)
	address := ethCrypto.PubkeyToAddress(key.PublicKey)

	return &common.Account{
		Idx:     common.Idx(256 + i),
		TokenID: common.TokenID(i),
		Nonce:   common.Nonce(i),
		Balance: big.NewInt(1000),
		BJJ:     pk.Compress(),
		EthAddr: address,
	}
}
func TestNewStateDBIntermediateState(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	// wrap in a closure so RemoveAll runs at test end, not at the defer statement
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeTxSelector, NLevels: 0})
	require.NoError(t, err)

	// test values
	k0 := []byte("testkey0")
	k1 := []byte("testkey1")
	v0 := []byte("testvalue0")
	v1 := []byte("testvalue1")

	// store some data
	tx, err := sdb.db.DB().NewTx()
	require.NoError(t, err)
	err = tx.Put(k0, v0)
	require.NoError(t, err)
	err = tx.Commit()
	require.NoError(t, err)
	v, err := sdb.db.DB().Get(k0)
	require.NoError(t, err)
	assert.Equal(t, v0, v)

	// k0 not yet in last
	err = sdb.LastRead(func(sdb *Last) error {
		_, err := sdb.DB().Get(k0)
		assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))
		return nil
	})
	require.NoError(t, err)

	// Close PebbleDB before creating a new StateDB
	sdb.Close()

	// call NewStateDB, which should get the db at the last checkpoint state,
	// executing a Reset (discarding the last 'testkey0'&'testvalue0' data)
	sdb, err = NewStateDB(Config{Path: dir, Keep: 128, Type: TypeTxSelector, NLevels: 0})
	require.NoError(t, err)
	v, err = sdb.db.DB().Get(k0)
	assert.NotNil(t, err)
	assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))
	assert.Nil(t, v)

	// k0 not in last
	err = sdb.LastRead(func(sdb *Last) error {
		_, err := sdb.DB().Get(k0)
		assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))
		return nil
	})
	require.NoError(t, err)

	// store the same data from the beginning that has been lost since the last NewStateDB
	tx, err = sdb.db.DB().NewTx()
	require.NoError(t, err)
	err = tx.Put(k0, v0)
	require.NoError(t, err)
	err = tx.Commit()
	require.NoError(t, err)
	v, err = sdb.db.DB().Get(k0)
	require.NoError(t, err)
	assert.Equal(t, v0, v)

	// k0 still not in last
	err = sdb.LastRead(func(sdb *Last) error {
		_, err := sdb.DB().Get(k0)
		assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))
		return nil
	})
	require.NoError(t, err)

	// make checkpoints with the current state
	bn, err := sdb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(0), bn)
	err = sdb.MakeCheckpoint()
	require.NoError(t, err)
	bn, err = sdb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(1), bn)

	// k0 in last
	err = sdb.LastRead(func(sdb *Last) error {
		v, err := sdb.DB().Get(k0)
		require.NoError(t, err)
		assert.Equal(t, v0, v)
		return nil
	})
	require.NoError(t, err)

	// write more data
	tx, err = sdb.db.DB().NewTx()
	require.NoError(t, err)
	err = tx.Put(k1, v1)
	require.NoError(t, err)
	err = tx.Put(k0, v1) // overwrite k0 with v1
	require.NoError(t, err)
	err = tx.Commit()
	require.NoError(t, err)
	v, err = sdb.db.DB().Get(k1)
	require.NoError(t, err)
	assert.Equal(t, v1, v)

	err = sdb.LastRead(func(sdb *Last) error {
		v, err := sdb.DB().Get(k0)
		require.NoError(t, err)
		assert.Equal(t, v0, v)
		return nil
	})
	require.NoError(t, err)

	// Close PebbleDB before creating a new StateDB
	sdb.Close()

	// call NewStateDB, which should get the db at the last checkpoint state,
	// executing a Reset (discarding the last 'testkey1'&'testvalue1' data)
	sdb, err = NewStateDB(Config{Path: dir, Keep: 128, Type: TypeTxSelector, NLevels: 0})
	require.NoError(t, err)
	bn, err = sdb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(1), bn)

	// we closed the db without doing a checkpoint after overwriting k0, so
	// it's back to v0
	v, err = sdb.db.DB().Get(k0)
	require.NoError(t, err)
	assert.Equal(t, v0, v)
	v, err = sdb.db.DB().Get(k1)
	assert.NotNil(t, err)
	assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))
	assert.Nil(t, v)
}
func TestStateDBWithoutMT(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeTxSelector, NLevels: 0})
	require.NoError(t, err)

	// create test accounts
	var accounts []*common.Account
	for i := 0; i < 4; i++ {
		accounts = append(accounts, newAccount(t, i))
	}

	// get non-existing account, expecting an error
	unexistingAccount := common.Idx(1)
	_, err = sdb.GetAccount(unexistingAccount)
	assert.NotNil(t, err)
	assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))

	// add test accounts
	for i := 0; i < len(accounts); i++ {
		_, err = sdb.CreateAccount(accounts[i].Idx, accounts[i])
		require.NoError(t, err)
	}
	for i := 0; i < len(accounts); i++ {
		existingAccount := accounts[i].Idx
		accGetted, err := sdb.GetAccount(existingAccount)
		require.NoError(t, err)
		assert.Equal(t, accounts[i], accGetted)
	}

	// try an already existing idx and expect an error
	existingAccount := common.Idx(256)
	_, err = sdb.GetAccount(existingAccount) // check that it exists
	require.NoError(t, err)
	_, err = sdb.CreateAccount(common.Idx(256), accounts[1]) // check that it cannot be created twice
	assert.NotNil(t, err)
	assert.Equal(t, ErrAccountAlreadyExists, tracerr.Unwrap(err))

	// update accounts
	for i := 0; i < len(accounts); i++ {
		accounts[i].Nonce = accounts[i].Nonce + 1
		existingAccount = common.Idx(i)
		_, err = sdb.UpdateAccount(existingAccount, accounts[i])
		require.NoError(t, err)
	}

	_, err = sdb.MTGetProof(common.Idx(1))
	assert.NotNil(t, err)
	assert.Equal(t, ErrStateDBWithoutMT, tracerr.Unwrap(err))
}
func TestStateDBWithMT(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	// create test accounts
	var accounts []*common.Account
	for i := 0; i < 20; i++ {
		accounts = append(accounts, newAccount(t, i))
	}

	// get non-existing account, expecting an error
	_, err = sdb.GetAccount(common.Idx(1))
	assert.NotNil(t, err)
	assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))

	// add test accounts
	for i := 0; i < len(accounts); i++ {
		_, err = sdb.CreateAccount(accounts[i].Idx, accounts[i])
		require.NoError(t, err)
	}
	for i := 0; i < len(accounts); i++ {
		accGetted, err := sdb.GetAccount(accounts[i].Idx)
		require.NoError(t, err)
		assert.Equal(t, accounts[i], accGetted)
	}

	// try an already existing idx and expect an error
	_, err = sdb.GetAccount(common.Idx(256)) // check that it exists
	require.NoError(t, err)
	_, err = sdb.CreateAccount(common.Idx(256), accounts[1]) // check that it cannot be created twice
	assert.NotNil(t, err)
	assert.Equal(t, ErrAccountAlreadyExists, tracerr.Unwrap(err))

	_, err = sdb.MTGetProof(common.Idx(256))
	require.NoError(t, err)

	// update accounts
	for i := 0; i < len(accounts); i++ {
		accounts[i].Nonce = accounts[i].Nonce + 1
		_, err = sdb.UpdateAccount(accounts[i].Idx, accounts[i])
		require.NoError(t, err)
	}
	a, err := sdb.GetAccount(common.Idx(256)) // check that the account value has been updated
	require.NoError(t, err)
	assert.Equal(t, accounts[0].Nonce, a.Nonce)
}
// TestCheckpoints performs almost the same test as kvdb/kvdb_test.go
// TestCheckpoints, but over the StateDB
func TestCheckpoints(t *testing.T) {
	dir, err := ioutil.TempDir("", "sdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)
	err = sdb.Reset(0)
	require.NoError(t, err)

	// create test accounts
	var accounts []*common.Account
	for i := 0; i < 10; i++ {
		accounts = append(accounts, newAccount(t, i))
	}

	// add test accounts
	for i := 0; i < len(accounts); i++ {
		_, err = sdb.CreateAccount(accounts[i].Idx, accounts[i])
		require.NoError(t, err)
	}

	// account doesn't exist in Last checkpoint
	_, err = sdb.LastGetAccount(accounts[0].Idx)
	assert.Equal(t, db.ErrNotFound, tracerr.Unwrap(err))

	// do checkpoints and check that currentBatch is correct
	err = sdb.MakeCheckpoint()
	require.NoError(t, err)
	cb, err := sdb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(1), cb)

	// account exists in Last checkpoint
	accCur, err := sdb.GetAccount(accounts[0].Idx)
	require.NoError(t, err)
	accLast, err := sdb.LastGetAccount(accounts[0].Idx)
	require.NoError(t, err)
	assert.Equal(t, accounts[0], accLast)
	assert.Equal(t, accCur, accLast)

	for i := 1; i < 10; i++ {
		err = sdb.MakeCheckpoint()
		require.NoError(t, err)
		cb, err = sdb.getCurrentBatch()
		require.NoError(t, err)
		assert.Equal(t, common.BatchNum(i+1), cb)
	}
	// printCheckpoints(t, sdb.cfg.Path)

	// reset checkpoint
	err = sdb.Reset(3)
	require.NoError(t, err)

	// check that the reset can be repeated (both 'current' and 'BatchNum3'
	// exist, and 'current' is a copy of 'BatchNum3')
	err = sdb.Reset(3)
	require.NoError(t, err)

	// check that currentBatch is as expected after Reset
	cb, err = sdb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(3), cb)

	// advance one checkpoint and check that currentBatch is fine
	err = sdb.MakeCheckpoint()
	require.NoError(t, err)
	cb, err = sdb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(4), cb)

	err = sdb.db.DeleteCheckpoint(common.BatchNum(1))
	require.NoError(t, err)
	err = sdb.db.DeleteCheckpoint(common.BatchNum(2))
	require.NoError(t, err)
	err = sdb.db.DeleteCheckpoint(common.BatchNum(1)) // does not exist, should return err
	assert.NotNil(t, err)
	err = sdb.db.DeleteCheckpoint(common.BatchNum(2)) // does not exist, should return err
	assert.NotNil(t, err)

	// Create a LocalStateDB from the initial StateDB
	dirLocal, err := ioutil.TempDir("", "ldb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dirLocal)) }()
	ldb, err := NewLocalStateDB(Config{Path: dirLocal, Keep: 128, Type: TypeBatchBuilder,
		NLevels: 32}, sdb)
	require.NoError(t, err)

	// get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB)
	err = ldb.Reset(4, true)
	require.NoError(t, err)
	// check that currentBatch is 4 after the Reset
	cb, err = ldb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(4), cb)
	// advance one checkpoint in ldb
	err = ldb.MakeCheckpoint()
	require.NoError(t, err)
	cb, err = ldb.getCurrentBatch()
	require.NoError(t, err)
	assert.Equal(t, common.BatchNum(5), cb)

	// Create a 2nd LocalStateDB from the initial StateDB
	dirLocal2, err := ioutil.TempDir("", "ldb2")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dirLocal2)) }()
	ldb2, err := NewLocalStateDB(Config{Path: dirLocal2, Keep: 128, Type: TypeBatchBuilder,
		NLevels: 32}, sdb)
	require.NoError(t, err)

	// get checkpoint 4 from sdb (StateDB) to ldb2 (LocalStateDB)
	err = ldb2.Reset(4, true)
	require.NoError(t, err)
	// check that currentBatch is 4 after the Reset
	cb = ldb2.CurrentBatch()
	assert.Equal(t, common.BatchNum(4), cb)
	// advance one checkpoint in ldb2
	err = ldb2.MakeCheckpoint()
	require.NoError(t, err)
	cb = ldb2.CurrentBatch()
	assert.Equal(t, common.BatchNum(5), cb)

	debug := false
	if debug {
		printCheckpoints(t, sdb.cfg.Path)
		printCheckpoints(t, ldb.cfg.Path)
		printCheckpoints(t, ldb2.cfg.Path)
	}
}
func TestStateDBGetAccounts(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	// clean up the temp dir at test end, like the other tests
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeTxSelector, NLevels: 0})
	require.NoError(t, err)

	// create test accounts
	var accounts []common.Account
	for i := 0; i < 16; i++ {
		account := newAccount(t, i)
		accounts = append(accounts, *account)
	}

	// add test accounts
	for i := range accounts {
		_, err = sdb.CreateAccount(accounts[i].Idx, &accounts[i])
		require.NoError(t, err)
	}

	dbAccounts, err := sdb.TestGetAccounts()
	require.NoError(t, err)
	assert.Equal(t, accounts, dbAccounts)
}

func printCheckpoints(t *testing.T, path string) {
	files, err := ioutil.ReadDir(path)
	require.NoError(t, err)

	fmt.Println(path)
	for _, f := range files {
		fmt.Println(" " + f.Name())
	}
}
func bigFromStr(h string, u int) *big.Int {
	if u == 16 {
		h = strings.TrimPrefix(h, "0x")
	}
	b, ok := new(big.Int).SetString(h, u)
	if !ok {
		panic("bigFromStr err")
	}
	return b
}
func TestCheckAccountsTreeTestVectors(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	ay0 := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1))
	// test value from the js version (compatibility-canary)
	assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
		hex.EncodeToString(ay0.Bytes()))
	bjjPoint0Comp := babyjub.PackSignY(true, ay0)
	bjj0 := babyjub.PublicKeyComp(bjjPoint0Comp)

	ay1 := bigFromStr("00", 16)
	bjjPoint1Comp := babyjub.PackSignY(false, ay1)
	bjj1 := babyjub.PublicKeyComp(bjjPoint1Comp)

	ay2 := bigFromStr("21b0a1688b37f77b1d1d5539ec3b826db5ac78b2513f574a04c50a7d4f8246d7", 16)
	bjjPoint2Comp := babyjub.PackSignY(false, ay2)
	bjj2 := babyjub.PublicKeyComp(bjjPoint2Comp)

	ay3 := bigFromStr("0x10", 16) // 0x10=16
	bjjPoint3Comp := babyjub.PackSignY(false, ay3)
	bjj3 := babyjub.PublicKeyComp(bjjPoint3Comp)

	accounts := []*common.Account{
		{
			Idx:     1,
			TokenID: 0xFFFFFFFF,
			BJJ:     bjj0,
			EthAddr: ethCommon.HexToAddress("0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"),
			Nonce:   common.Nonce(0xFFFFFFFFFF),
			Balance: bigFromStr("FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF", 16),
		},
		{
			Idx:     100,
			TokenID: 0,
			BJJ:     bjj1,
			EthAddr: ethCommon.HexToAddress("0x00"),
			Nonce:   common.Nonce(0),
			Balance: bigFromStr("0", 10),
		},
		{
			Idx:     0xFFFFFFFFFFFF,
			TokenID: 3,
			BJJ:     bjj2,
			EthAddr: ethCommon.HexToAddress("0xA3C88ac39A76789437AED31B9608da72e1bbfBF9"),
			Nonce:   common.Nonce(129),
			Balance: bigFromStr("42000000000000000000", 10),
		},
		{
			Idx:     10000,
			TokenID: 1000,
			BJJ:     bjj3,
			EthAddr: ethCommon.HexToAddress("0x64"),
			Nonce:   common.Nonce(1900),
			Balance: bigFromStr("14000000000000000000", 10),
		},
	}

	for i := 0; i < len(accounts); i++ {
		_, err = accounts[i].HashValue()
		require.NoError(t, err)
		_, err = sdb.CreateAccount(accounts[i].Idx, accounts[i])
		if err != nil {
			log.Error(err)
		}
		require.NoError(t, err)
	}

	// root value generated by the js version:
	assert.Equal(t,
		"13174362770971232417413036794215823584762073355951212910715422236001731746065",
		sdb.MT.Root().BigInt().String())
}
// TestListCheckpoints performs almost the same test as kvdb/kvdb_test.go
// TestListCheckpoints, but over the StateDB
func TestListCheckpoints(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	sdb, err := NewStateDB(Config{Path: dir, Keep: 128, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	numCheckpoints := 16
	// do checkpoints
	for i := 0; i < numCheckpoints; i++ {
		err = sdb.MakeCheckpoint()
		require.NoError(t, err)
	}
	list, err := sdb.db.ListCheckpoints()
	require.NoError(t, err)
	assert.Equal(t, numCheckpoints, len(list))
	assert.Equal(t, 1, list[0])
	assert.Equal(t, numCheckpoints, list[len(list)-1])

	numReset := 10
	err = sdb.Reset(common.BatchNum(numReset))
	require.NoError(t, err)
	list, err = sdb.db.ListCheckpoints()
	require.NoError(t, err)
	assert.Equal(t, numReset, len(list))
	assert.Equal(t, 1, list[0])
	assert.Equal(t, numReset, list[len(list)-1])
}
// TestDeleteOldCheckpoints performs almost the same test as
// kvdb/kvdb_test.go TestDeleteOldCheckpoints, but over the StateDB
func TestDeleteOldCheckpoints(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	keep := 16
	sdb, err := NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	numCheckpoints := 32
	// do checkpoints and check that we never have more than `keep`
	// checkpoints
	for i := 0; i < numCheckpoints; i++ {
		err = sdb.MakeCheckpoint()
		require.NoError(t, err)
		err := sdb.DeleteOldCheckpoints()
		require.NoError(t, err)
		checkpoints, err := sdb.db.ListCheckpoints()
		require.NoError(t, err)
		assert.LessOrEqual(t, len(checkpoints), keep)
	}
}
// TestConcurrentDeleteOldCheckpoints performs almost the same test as
// kvdb/kvdb_test.go TestConcurrentDeleteOldCheckpoints, but over the StateDB
func TestConcurrentDeleteOldCheckpoints(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	keep := 16
	sdb, err := NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	numCheckpoints := 32
	// do checkpoints and check that we never have more than `keep`
	// checkpoints
	for i := 0; i < numCheckpoints; i++ {
		err = sdb.MakeCheckpoint()
		require.NoError(t, err)
		wg := sync.WaitGroup{}
		n := 10
		wg.Add(n)
		for j := 0; j < n; j++ {
			go func() {
				err := sdb.DeleteOldCheckpoints()
				require.NoError(t, err)
				checkpoints, err := sdb.db.ListCheckpoints()
				require.NoError(t, err)
				assert.LessOrEqual(t, len(checkpoints), keep)
				wg.Done()
			}()
			_, err := sdb.db.ListCheckpoints()
			// only checking here for absence of errors, not the count of checkpoints
			require.NoError(t, err)
		}
		wg.Wait()
		checkpoints, err := sdb.db.ListCheckpoints()
		require.NoError(t, err)
		assert.LessOrEqual(t, len(checkpoints), keep)
	}
}
func TestCurrentIdx(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	keep := 16
	sdb, err := NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	idx := sdb.CurrentIdx()
	assert.Equal(t, common.Idx(255), idx)

	sdb.Close()

	sdb, err = NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	idx = sdb.CurrentIdx()
	assert.Equal(t, common.Idx(255), idx)

	err = sdb.MakeCheckpoint()
	require.NoError(t, err)

	idx = sdb.CurrentIdx()
	assert.Equal(t, common.Idx(255), idx)

	sdb.Close()

	sdb, err = NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	idx = sdb.CurrentIdx()
	assert.Equal(t, common.Idx(255), idx)
}
func TestResetFromBadCheckpoint(t *testing.T) {
	dir, err := ioutil.TempDir("", "tmpdb")
	require.NoError(t, err)
	defer func() { require.NoError(t, os.RemoveAll(dir)) }()

	keep := 16
	sdb, err := NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
	require.NoError(t, err)

	err = sdb.MakeCheckpoint()
	require.NoError(t, err)
	err = sdb.MakeCheckpoint()
	require.NoError(t, err)
	err = sdb.MakeCheckpoint()
	require.NoError(t, err)

	// reset from a checkpoint that doesn't exist
	err = sdb.Reset(10)
	require.Error(t, err)
}