
Update coordinator to work better under real net

- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs, the process terminates unconditionally
- coordinator
  - Store stats without a pointer
  - In all functions that send a variable via a channel, check for context done to avoid a deadlock when the node is stopped (the channel has no queue, so a send blocks if no process is reading from it).
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and next slot.
  - Update tests due to smart contract changes in slot handling and minimum bid defaults
  - TxManager
    - Add consts, vars and stats to allow evaluating `canForge`
    - Add `canForge` method (not used yet)
    - Store batch and nonce status (last success and last pending)
    - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs)
    - Handle the (common) case of the receipt not being found right after the tx is sent.
    - Don't start the main loop until we get an initial message for the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
    - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-pipeline messages when multiple txs are detected to have failed consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused.
    - Robust handling of nonces:
      - If geth returns "nonce too low", increase it
      - If geth returns "nonce too high", decrease it
      - If geth returns "underpriced", increase the gas price
      - If geth returns "replacement underpriced", increase the gas price
    - Add support for resending transactions after a timeout
    - Store `BatchInfos` in a queue
  - Pipeline
    - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline, with the failed batch number, so that on a restart non-failed batches are not repeated.
    - When resetting the stateDB, if possible reset from the local checkpoint instead of from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced.
    - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator.
  - Avoid forging when we haven't reached the rollup genesis block number.
  - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge.
  - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us.
  - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead at which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though we canForge at block 11, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks between scheduling a batch and having it mined.
  - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead at which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though we canForge at block 11, the batch will be discarded if we can't forge at block 15.
  - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price
  - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
  - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache.
  - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the necessary methods to create the auth object for transactions manually, so that we can set the nonce, gas price, gas limit, etc.
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
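The `ScheduleBatchBlocksAheadCheck` example above (at block 10 with a look-ahead of 5, check blocks 11 and 15) amounts to a simple two-point look-ahead. The sketch below illustrates only the arithmetic; `canForgeAt` is a hypothetical stand-in for the real `canForge` check (here: the forger owns even-numbered slots of 40 blocks), not the auction logic.

```go
package main

import "fmt"

// canForgeAt is a hypothetical stand-in for canForge: assume the forger
// is only allowed during even-numbered slots of 40 blocks.
func canForgeAt(blockNum int64) bool {
	return (blockNum/40)%2 == 0
}

// shouldKeepScheduling mirrors the look-ahead described above: besides
// the next block, check blocksAhead blocks into the future, so batches
// that would be mined after our slot ends are not scheduled.
func shouldKeepScheduling(currentBlock, blocksAhead int64) bool {
	return canForgeAt(currentBlock+1) && canForgeAt(currentBlock+blocksAhead)
}

func main() {
	// Slot 0 covers blocks 0-39 and is ours; slot 1 (blocks 40-79) is not.
	fmt.Println(shouldKeepScheduling(10, 5)) // true: blocks 11 and 15 are ours
	fmt.Println(shouldKeepScheduling(36, 5)) // false: block 41 falls in the next slot
}
```

The same check with a different margin implements `SendBatchBlocksMarginCheck` on the sending side.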
Allow serving API only via new cli command

- Add a new command to the cli/node: `serveapi`, which allows serving the API just by connecting to the PostgreSQL database. The mode flag should be passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as the mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB.

Summary of the implementation details:

- New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns:
  - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node periodically updates this struct and stores it in the DB; the API server queries it from the DB to serve it.
  - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts.
  - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns individually.
- The HistoryDB contains all methods that query the DB and prepare the objects that appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information for the node to periodically update the StateAPI in the DB.
- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `synchronizer` to `common` for convenience.
3 years ago
package coordinator

import (
	"context"
	"fmt"
	"io/ioutil"
	"math/big"
	"os"
	"testing"

	ethKeystore "github.com/ethereum/go-ethereum/accounts/keystore"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db/historydb"
	"github.com/hermeznetwork/hermez-node/db/statedb"
	"github.com/hermeznetwork/hermez-node/eth"
	"github.com/hermeznetwork/hermez-node/prover"
	"github.com/hermeznetwork/hermez-node/synchronizer"
	"github.com/hermeznetwork/hermez-node/test"
	"github.com/hermeznetwork/hermez-node/test/til"
	"github.com/iden3/go-merkletree"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newBigInt(s string) *big.Int {
	v, ok := new(big.Int).SetString(s, 10)
	if !ok {
		panic(fmt.Errorf("can't set big.Int from %s", s))
	}
	return v
}

func TestPipelineShouldL1L2Batch(t *testing.T) {
	ethClientSetup := test.NewClientSetupExample()
	ethClientSetup.ChainID = big.NewInt(int64(chainID))
	var timer timer
	ctx := context.Background()
	ethClient := test.NewClient(true, &timer, &bidder, ethClientSetup)
	modules := newTestModules(t)
	var stats synchronizer.Stats
	coord := newTestCoordinator(t, forger, ethClient, ethClientSetup, modules)
	pipeline, err := coord.newPipeline(ctx)
	require.NoError(t, err)
	pipeline.vars = coord.vars

	// Check that the parameters are the ones we expect and use in this test
	require.Equal(t, 0.5, pipeline.cfg.L1BatchTimeoutPerc)
	require.Equal(t, int64(10), ethClientSetup.RollupVariables.ForgeL1L2BatchTimeout)
	l1BatchTimeoutPerc := pipeline.cfg.L1BatchTimeoutPerc
	l1BatchTimeout := ethClientSetup.RollupVariables.ForgeL1L2BatchTimeout

	startBlock := int64(100)

	// Empty batchInfo to pass to shouldL1L2Batch(), which sets debug information
	batchInfo := BatchInfo{}

	//
	// No scheduled L1Batch
	//

	// Last L1Batch was a long time ago
	stats.Eth.LastBlock.Num = startBlock
	stats.Sync.LastBlock = stats.Eth.LastBlock
	stats.Sync.LastL1BatchBlock = 0
	pipeline.stats = stats
	assert.Equal(t, true, pipeline.shouldL1L2Batch(&batchInfo))

	stats.Sync.LastL1BatchBlock = startBlock

	// We are one block before the timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc) - 1
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, false, pipeline.shouldL1L2Batch(&batchInfo))

	// We are at timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc)
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, true, pipeline.shouldL1L2Batch(&batchInfo))

	//
	// Scheduled L1Batch
	//
	pipeline.state.lastScheduledL1BatchBlockNum = startBlock
	stats.Sync.LastL1BatchBlock = startBlock - 10

	// We are one block before the timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc) - 1
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, false, pipeline.shouldL1L2Batch(&batchInfo))

	// We are at timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc)
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, true, pipeline.shouldL1L2Batch(&batchInfo))
}

const testTokensLen = 3
const testUsersLen = 4

func preloadSync(t *testing.T, ethClient *test.Client, sync *synchronizer.Synchronizer,
	historyDB *historydb.HistoryDB, stateDB *statedb.StateDB) *til.Context {
	// Create a set with `testTokensLen` tokens and for each token
	// `testUsersLen` accounts.
	var set []til.Instruction
	// set = append(set, til.Instruction{Typ: "Blockchain"})
	for tokenID := 1; tokenID < testTokensLen; tokenID++ {
		set = append(set, til.Instruction{
			Typ:     til.TypeAddToken,
			TokenID: common.TokenID(tokenID),
		})
	}
	depositAmount, ok := new(big.Int).SetString("10225000000000000000000000000000000", 10)
	require.True(t, ok)
	for tokenID := 0; tokenID < testTokensLen; tokenID++ {
		for user := 0; user < testUsersLen; user++ {
			set = append(set, til.Instruction{
				Typ:           common.TxTypeCreateAccountDeposit,
				TokenID:       common.TokenID(tokenID),
				DepositAmount: depositAmount,
				From:          fmt.Sprintf("User%d", user),
			})
		}
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	require.NoError(t, err)
	require.NotNil(t, blocks)

	// Set StateRoots for batches manually (til doesn't set them)
	blocks[0].Rollup.Batches[0].Batch.StateRoot =
		newBigInt("0")
	blocks[0].Rollup.Batches[1].Batch.StateRoot =
		newBigInt("6860514559199319426609623120853503165917774887908204288119245630904770452486")

	ethAddTokens(blocks, ethClient)
	err = ethClient.CtlAddBlocks(blocks)
	require.NoError(t, err)

	ctx := context.Background()
	for {
		syncBlock, discards, err := sync.Sync(ctx, nil)
		require.NoError(t, err)
		require.Nil(t, discards)
		if syncBlock == nil {
			break
		}
	}

	dbTokens, err := historyDB.GetAllTokens()
	require.Nil(t, err)
	require.Equal(t, testTokensLen, len(dbTokens))

	dbAccounts, err := historyDB.GetAllAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(dbAccounts))

	sdbAccounts, err := stateDB.TestGetAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(sdbAccounts))

	return tc
}

func TestPipelineForgeBatchWithTxs(t *testing.T) {
	ethClientSetup := test.NewClientSetupExample()
	ethClientSetup.ChainID = big.NewInt(int64(chainID))
	var timer timer
	ctx := context.Background()
	ethClient := test.NewClient(true, &timer, &bidder, ethClientSetup)
	modules := newTestModules(t)
	coord := newTestCoordinator(t, forger, ethClient, ethClientSetup, modules)
	sync := newTestSynchronizer(t, ethClient, ethClientSetup, modules)

	// Preload the synchronizer (via the test ethClient) with some tokens
	// and users with positive balances
	tilCtx := preloadSync(t, ethClient, sync, modules.historyDB, modules.stateDB)
	syncStats := sync.Stats()
	batchNum := syncStats.Sync.LastBatch.BatchNum
	syncSCVars := sync.SCVars()

	pipeline, err := coord.newPipeline(ctx)
	require.NoError(t, err)

	// Insert some l2txs in the Pool
	setPool := `
Type: PoolL2

PoolTransfer(0) User0-User1: 100 (126)
PoolTransfer(0) User1-User2: 200 (126)
PoolTransfer(0) User2-User3: 300 (126)
`
	l2txs, err := tilCtx.GeneratePoolL2Txs(setPool)
	require.NoError(t, err)
	for _, tx := range l2txs {
		err := modules.l2DB.AddTxTest(&tx) //nolint:gosec
		require.NoError(t, err)
	}

	err = pipeline.reset(batchNum, syncStats, syncSCVars)
	require.NoError(t, err)

	// Sanity check
	sdbAccounts, err := pipeline.txSelector.LocalAccountsDB().TestGetAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(sdbAccounts))

	// Sanity check
	sdbAccounts, err = pipeline.batchBuilder.LocalStateDB().TestGetAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(sdbAccounts))

	// Sanity check
	require.Equal(t, modules.stateDB.MT.Root(),
		pipeline.batchBuilder.LocalStateDB().MT.Root())

	batchNum++
	batchInfo, err := pipeline.forgeBatch(batchNum)
	require.NoError(t, err)
	assert.Equal(t, 3, len(batchInfo.L2Txs))

	batchNum++
	batchInfo, err = pipeline.forgeBatch(batchNum)
	require.NoError(t, err)
	assert.Equal(t, 0, len(batchInfo.L2Txs))
}

func TestEthRollupForgeBatch(t *testing.T) {
	if os.Getenv("TEST_ROLLUP_FORGE_BATCH") == "" {
		t.Skip("skipping: TEST_ROLLUP_FORGE_BATCH is not set")
	}
	const web3URL = "http://localhost:8545"
	const password = "test"
	addr := ethCommon.HexToAddress("0xb4124ceb3451635dacedd11767f004d8a28c6ee7")
	sk, err := crypto.HexToECDSA(
		"a8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563")
	require.NoError(t, err)
	rollupAddr := ethCommon.HexToAddress("0x8EEaea23686c319133a7cC110b840d1591d9AeE0")
	pathKeystore, err := ioutil.TempDir("", "tmpKeystore")
	require.NoError(t, err)
	deleteme = append(deleteme, pathKeystore)
	ctx := context.Background()
	batchInfo := &BatchInfo{}
	proofClient := &prover.MockClient{}
	chainID := uint16(0)

	ethClient, err := ethclient.Dial(web3URL)
	require.NoError(t, err)
	ethCfg := eth.EthereumConfig{
		CallGasLimit: 300000,
		GasPriceDiv:  100,
	}
	scryptN := ethKeystore.LightScryptN
	scryptP := ethKeystore.LightScryptP
	keyStore := ethKeystore.NewKeyStore(pathKeystore,
		scryptN, scryptP)
	account, err := keyStore.ImportECDSA(sk, password)
	require.NoError(t, err)
	require.Equal(t, account.Address, addr)
	err = keyStore.Unlock(account, password)
	require.NoError(t, err)

	client, err := eth.NewClient(ethClient, &account, keyStore, &eth.ClientConfig{
		Ethereum: ethCfg,
		Rollup: eth.RollupConfig{
			Address: rollupAddr,
		},
		Auction: eth.AuctionConfig{
			Address: ethCommon.Address{},
			TokenHEZ: eth.TokenConfig{
				Address: ethCommon.Address{},
				Name:    "HEZ",
			},
		},
		WDelayer: eth.WDelayerConfig{
			Address: ethCommon.Address{},
		},
	})
	require.NoError(t, err)

	zkInputs := common.NewZKInputs(chainID, 100, 24, 512, 32, big.NewInt(1))
	zkInputs.Metadata.NewStateRootRaw = &merkletree.Hash{1}
	zkInputs.Metadata.NewExitRootRaw = &merkletree.Hash{2}
	batchInfo.ZKInputs = zkInputs
	err = proofClient.CalculateProof(ctx, batchInfo.ZKInputs)
	require.NoError(t, err)

	proof, pubInputs, err := proofClient.GetProof(ctx)
	require.NoError(t, err)
	batchInfo.Proof = proof
	batchInfo.PublicInputs = pubInputs
	batchInfo.ForgeBatchArgs = prepareForgeBatchArgs(batchInfo)

	auth, err := client.NewAuth()
	require.NoError(t, err)
	_, err = client.RollupForgeBatch(batchInfo.ForgeBatchArgs, auth)
	require.NoError(t, err)
}