Update coordinator to work better under real net

- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs the process terminates unconditionally
- coordinator
  - Store stats without a pointer
  - In all functions that send a variable via a channel, check for context done to avoid a deadlock when the node is stopped (the channel has no queue, so nothing may be reading from it); a minimal sketch of this pattern follows this list
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and next slot
  - Update tests due to smart contract changes in slot handling and minimum bid defaults
  - TxManager
    - Add consts, vars and stats to allow evaluating `canForge`
    - Add a `canForge` method (not used yet)
    - Store batch and nonce status (last success and last pending)
    - Track nonces internally instead of relying on the ethereum node (required to work with ganache when there are pending txs)
    - Handle the (common) case of the receipt not being found right after the tx is sent
    - Don't start the main loop until we get an initial message for the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
    - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-pipeline messages when multiple txs fail consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused
    - Robust handling of nonces (a hedged sketch follows this list):
      - If geth reports the nonce is too low, increase it
      - If geth reports the nonce is too high, decrease it
      - If geth reports the tx is underpriced, increase the gas price
      - If geth reports the replacement tx is underpriced, increase the gas price
    - Add support for resending transactions after a timeout
    - Store `BatchInfos` in a queue
  - Pipeline
    - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline, including the failed batch number, so that on restart non-failed batches are not repeated
    - When resetting the stateDB, reset from the local checkpoint if possible instead of from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced
    - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator
  - Avoid forging when we haven't reached the rollup genesis block number
  - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
  - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
  - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead at which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though we canForge at block 11, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks between scheduling a batch and having it mined. A worked example of this gating follows this list
  - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead at which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though we canForge at block 11, the batch will be discarded if we can't forge at block 15
  - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price
  - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
  - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions; useful for testing with Ganache
  - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the methods needed to create the auth object for transactions manually, so that the nonce, gas price, gas limit, etc. can be set manually
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event
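The channel pattern mentioned above is simple enough to show in isolation. The following is a minimal sketch, not the actual coordinator code: the sendOrDone helper is a hypothetical name used only to illustrate guarding an unbuffered-channel send with ctx.Done() so a stopped node never blocks on a channel nobody is reading from.

package main

import (
	"context"
	"fmt"
	"time"
)

// sendOrDone tries to send msg on ch, giving up when ctx is cancelled.
// It returns false if the context ended before the send completed.
func sendOrDone(ctx context.Context, ch chan<- string, msg string) bool {
	select {
	case ch <- msg:
		return true
	case <-ctx.Done():
		return false
	}
}

func main() {
	ch := make(chan string) // unbuffered: a send blocks until someone receives
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	// Nobody reads from ch, simulating a stopped node: without the
	// ctx.Done() branch this send would block forever.
	if !sendOrDone(ctx, ch, "stats update") {
		fmt.Println("node stopped, send aborted:", ctx.Err())
	}
}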
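The nonce and gas-price adjustments listed under TxManager can be summarized in a short, hedged sketch. The adjustAfterSendError helper below is illustrative only (it is not a function in the repo), and the substring matching on geth error messages is an assumption: exact error strings differ between node versions.

package main

import (
	"fmt"
	"math/big"
	"strings"
)

// adjustAfterSendError nudges the next nonce / gas price according to the
// error returned when broadcasting a transaction.
func adjustAfterSendError(err error, nonce uint64, gasPrice *big.Int) (uint64, *big.Int) {
	msg := strings.ToLower(err.Error())
	switch {
	case strings.Contains(msg, "nonce too low"):
		nonce++ // another tx with this nonce was already mined
	case strings.Contains(msg, "nonce too high"):
		nonce-- // we got ahead of the node's view of the account
	case strings.Contains(msg, "underpriced"):
		// covers both "underpriced" and "replacement transaction underpriced":
		// bump the gas price by ~10% so the tx (or its replacement) is accepted
		gasPrice = new(big.Int).Div(new(big.Int).Mul(gasPrice, big.NewInt(110)), big.NewInt(100))
	}
	return nonce, gasPrice
}

func main() {
	nonce, gasPrice := uint64(7), big.NewInt(1_000_000_000)
	nonce, gasPrice = adjustAfterSendError(fmt.Errorf("replacement transaction underpriced"), nonce, gasPrice)
	fmt.Println(nonce, gasPrice) // 7 1100000000
}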
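Reading the ScheduleBatchBlocksAheadCheck / SendBatchBlocksMarginCheck examples literally (at block 10 with a value of 5, block 15 is the extra block checked), the gating works out as below. canForgeAt is a toy stand-in for the real canForge, and the arithmetic is only a rendering of the prose example, not the actual coordinator logic.

package main

import "fmt"

const (
	scheduleBatchBlocksAheadCheck = 5
	sendBatchBlocksMarginCheck    = 5
)

// canForgeAt is a toy allowance table: in this example we can forge at
// blocks 11..14 but not at 15 (e.g. our slot ends before block 15).
func canForgeAt(blockNum int64) bool {
	return blockNum >= 11 && blockNum < 15
}

func main() {
	current := int64(10)

	// Keep scheduling new batches only if we can forge at the next block AND
	// ScheduleBatchBlocksAheadCheck blocks ahead of the current one.
	keepScheduling := canForgeAt(current+1) &&
		canForgeAt(current+scheduleBatchBlocksAheadCheck)

	// Keep sending batches to the contract only if we can forge at the next
	// block AND within the SendBatchBlocksMarginCheck margin.
	keepSending := canForgeAt(current+1) &&
		canForgeAt(current+sendBatchBlocksMarginCheck)

	// false false: we can forge at block 11 but not at block 15
	fmt.Println(keepScheduling, keepSending)
}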
package coordinator

import (
	"context"
	"fmt"
	"io/ioutil"
	"math/big"
	"os"
	"testing"

	ethKeystore "github.com/ethereum/go-ethereum/accounts/keystore"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db/historydb"
	"github.com/hermeznetwork/hermez-node/db/statedb"
	"github.com/hermeznetwork/hermez-node/eth"
	"github.com/hermeznetwork/hermez-node/prover"
	"github.com/hermeznetwork/hermez-node/synchronizer"
	"github.com/hermeznetwork/hermez-node/test"
	"github.com/hermeznetwork/hermez-node/test/til"
	"github.com/iden3/go-merkletree"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

func newBigInt(s string) *big.Int {
	v, ok := new(big.Int).SetString(s, 10)
	if !ok {
		panic(fmt.Errorf("Can't set big.Int from %s", s))
	}
	return v
}
func TestPipelineShouldL1L2Batch(t *testing.T) {
	ethClientSetup := test.NewClientSetupExample()
	ethClientSetup.ChainID = big.NewInt(int64(chainID))

	var timer timer
	ctx := context.Background()
	ethClient := test.NewClient(true, &timer, &bidder, ethClientSetup)
	modules := newTestModules(t)
	var stats synchronizer.Stats
	coord := newTestCoordinator(t, forger, ethClient, ethClientSetup, modules)
	pipeline, err := coord.newPipeline(ctx)
	require.NoError(t, err)
	pipeline.vars = coord.vars

	// Check that the parameters are the ones we expect and use in this test
	require.Equal(t, 0.5, pipeline.cfg.L1BatchTimeoutPerc)
	require.Equal(t, int64(10), ethClientSetup.RollupVariables.ForgeL1L2BatchTimeout)
	l1BatchTimeoutPerc := pipeline.cfg.L1BatchTimeoutPerc
	l1BatchTimeout := ethClientSetup.RollupVariables.ForgeL1L2BatchTimeout

	startBlock := int64(100)
	// Empty batchInfo to pass to shouldL1L2Batch() which sets debug information
	batchInfo := BatchInfo{}

	//
	// No scheduled L1Batch
	//

	// Last L1Batch was a long time ago
	stats.Eth.LastBlock.Num = startBlock
	stats.Sync.LastBlock = stats.Eth.LastBlock
	stats.Sync.LastL1BatchBlock = 0
	pipeline.stats = stats
	assert.Equal(t, true, pipeline.shouldL1L2Batch(&batchInfo))

	stats.Sync.LastL1BatchBlock = startBlock

	// We are one block before the timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc) - 1
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, false, pipeline.shouldL1L2Batch(&batchInfo))

	// We are at timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc)
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, true, pipeline.shouldL1L2Batch(&batchInfo))

	//
	// Scheduled L1Batch
	//
	pipeline.state.lastScheduledL1BatchBlockNum = startBlock
	stats.Sync.LastL1BatchBlock = startBlock - 10

	// We are one block before the timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc) - 1
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, false, pipeline.shouldL1L2Batch(&batchInfo))

	// We are at timeout range * 0.5
	stats.Eth.LastBlock.Num = startBlock - 1 + int64(float64(l1BatchTimeout-1)*l1BatchTimeoutPerc)
	stats.Sync.LastBlock = stats.Eth.LastBlock
	pipeline.stats = stats
	assert.Equal(t, true, pipeline.shouldL1L2Batch(&batchInfo))
}
const testTokensLen = 3
const testUsersLen = 4

func preloadSync(t *testing.T, ethClient *test.Client, sync *synchronizer.Synchronizer,
	historyDB *historydb.HistoryDB, stateDB *statedb.StateDB) *til.Context {
	// Create a set with `testTokensLen` tokens and for each token
	// `testUsersLen` accounts.
	var set []til.Instruction
	// set = append(set, til.Instruction{Typ: "Blockchain"})
	for tokenID := 1; tokenID < testTokensLen; tokenID++ {
		set = append(set, til.Instruction{
			Typ:     til.TypeAddToken,
			TokenID: common.TokenID(tokenID),
		})
	}
	depositAmount, ok := new(big.Int).SetString("10225000000000000000000000000000000", 10)
	require.True(t, ok)
	for tokenID := 0; tokenID < testTokensLen; tokenID++ {
		for user := 0; user < testUsersLen; user++ {
			set = append(set, til.Instruction{
				Typ:           common.TxTypeCreateAccountDeposit,
				TokenID:       common.TokenID(tokenID),
				DepositAmount: depositAmount,
				From:          fmt.Sprintf("User%d", user),
			})
		}
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	require.NoError(t, err)
	require.NotNil(t, blocks)
	// Set StateRoots for batches manually (til doesn't set it)
	blocks[0].Rollup.Batches[0].Batch.StateRoot =
		newBigInt("0")
	blocks[0].Rollup.Batches[1].Batch.StateRoot =
		newBigInt("10941365282189107056349764238909072001483688090878331371699519307087372995595")

	ethAddTokens(blocks, ethClient)
	err = ethClient.CtlAddBlocks(blocks)
	require.NoError(t, err)

	ctx := context.Background()
	for {
		syncBlock, discards, err := sync.Sync(ctx, nil)
		require.NoError(t, err)
		require.Nil(t, discards)
		if syncBlock == nil {
			break
		}
	}

	dbTokens, err := historyDB.GetAllTokens()
	require.Nil(t, err)
	require.Equal(t, testTokensLen, len(dbTokens))

	dbAccounts, err := historyDB.GetAllAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(dbAccounts))

	sdbAccounts, err := stateDB.TestGetAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(sdbAccounts))

	return tc
}
func TestPipelineForgeBatchWithTxs(t *testing.T) {
	ethClientSetup := test.NewClientSetupExample()
	ethClientSetup.ChainID = big.NewInt(int64(chainID))

	var timer timer
	ctx := context.Background()
	ethClient := test.NewClient(true, &timer, &bidder, ethClientSetup)
	modules := newTestModules(t)
	coord := newTestCoordinator(t, forger, ethClient, ethClientSetup, modules)
	sync := newTestSynchronizer(t, ethClient, ethClientSetup, modules)

	// Preload the synchronizer (via the test ethClient) with some tokens and
	// users with positive balances
	tilCtx := preloadSync(t, ethClient, sync, modules.historyDB, modules.stateDB)
	syncStats := sync.Stats()
	batchNum := syncStats.Sync.LastBatch.BatchNum
	syncSCVars := sync.SCVars()

	pipeline, err := coord.newPipeline(ctx)
	require.NoError(t, err)

	// Insert some l2txs in the Pool
	setPool := `
Type: PoolL2
PoolTransfer(0) User0-User1: 100 (126)
PoolTransfer(0) User1-User2: 200 (126)
PoolTransfer(0) User2-User3: 300 (126)
`
	l2txs, err := tilCtx.GeneratePoolL2Txs(setPool)
	require.NoError(t, err)
	for _, tx := range l2txs {
		err := modules.l2DB.AddTxTest(&tx) //nolint:gosec
		require.NoError(t, err)
	}

	err = pipeline.reset(batchNum, syncStats, &synchronizer.SCVariables{
		Rollup:   *syncSCVars.Rollup,
		Auction:  *syncSCVars.Auction,
		WDelayer: *syncSCVars.WDelayer,
	})
	require.NoError(t, err)
	// Sanity check
	sdbAccounts, err := pipeline.txSelector.LocalAccountsDB().TestGetAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(sdbAccounts))
	// Sanity check
	sdbAccounts, err = pipeline.batchBuilder.LocalStateDB().TestGetAccounts()
	require.Nil(t, err)
	require.Equal(t, testTokensLen*testUsersLen, len(sdbAccounts))
	// Sanity check
	require.Equal(t, modules.stateDB.MT.Root(),
		pipeline.batchBuilder.LocalStateDB().MT.Root())

	batchNum++
	batchInfo, err := pipeline.forgeBatch(batchNum)
	require.NoError(t, err)
	assert.Equal(t, 3, len(batchInfo.L2Txs))

	batchNum++
	batchInfo, err = pipeline.forgeBatch(batchNum)
	require.NoError(t, err)
	assert.Equal(t, 0, len(batchInfo.L2Txs))
}
func TestEthRollupForgeBatch(t *testing.T) {
	if os.Getenv("TEST_ROLLUP_FORGE_BATCH") == "" {
		return
	}
	const web3URL = "http://localhost:8545"
	const password = "test"
	addr := ethCommon.HexToAddress("0xb4124ceb3451635dacedd11767f004d8a28c6ee7")
	sk, err := crypto.HexToECDSA(
		"a8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563")
	require.NoError(t, err)
	rollupAddr := ethCommon.HexToAddress("0x8EEaea23686c319133a7cC110b840d1591d9AeE0")
	pathKeystore, err := ioutil.TempDir("", "tmpKeystore")
	require.NoError(t, err)
	deleteme = append(deleteme, pathKeystore)
	ctx := context.Background()
	batchInfo := &BatchInfo{}
	proofClient := &prover.MockClient{}
	chainID := uint16(0)

	ethClient, err := ethclient.Dial(web3URL)
	require.NoError(t, err)
	ethCfg := eth.EthereumConfig{
		CallGasLimit: 300000,
		GasPriceDiv:  100,
	}
	scryptN := ethKeystore.LightScryptN
	scryptP := ethKeystore.LightScryptP
	keyStore := ethKeystore.NewKeyStore(pathKeystore,
		scryptN, scryptP)
	account, err := keyStore.ImportECDSA(sk, password)
	require.NoError(t, err)
	require.Equal(t, account.Address, addr)
	err = keyStore.Unlock(account, password)
	require.NoError(t, err)

	client, err := eth.NewClient(ethClient, &account, keyStore, &eth.ClientConfig{
		Ethereum: ethCfg,
		Rollup: eth.RollupConfig{
			Address: rollupAddr,
		},
		Auction: eth.AuctionConfig{
			Address: ethCommon.Address{},
			TokenHEZ: eth.TokenConfig{
				Address: ethCommon.Address{},
				Name:    "HEZ",
			},
		},
		WDelayer: eth.WDelayerConfig{
			Address: ethCommon.Address{},
		},
	})
	require.NoError(t, err)

	zkInputs := common.NewZKInputs(chainID, 100, 24, 512, 32, big.NewInt(1))
	zkInputs.Metadata.NewStateRootRaw = &merkletree.Hash{1}
	zkInputs.Metadata.NewExitRootRaw = &merkletree.Hash{2}
	batchInfo.ZKInputs = zkInputs
	err = proofClient.CalculateProof(ctx, batchInfo.ZKInputs)
	require.NoError(t, err)

	proof, pubInputs, err := proofClient.GetProof(ctx)
	require.NoError(t, err)
	batchInfo.Proof = proof
	batchInfo.PublicInputs = pubInputs

	batchInfo.ForgeBatchArgs = prepareForgeBatchArgs(batchInfo)

	auth, err := client.NewAuth()
	require.NoError(t, err)
	_, err = client.RollupForgeBatch(batchInfo.ForgeBatchArgs, auth)
	require.NoError(t, err)

	batchInfo.Proof = proof
}