Update missing parts, improve til, and more

- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call, and return useful information (see the sketch after this list)
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
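A minimal sketch of the "sync one block per call" shape described above. The `Synchronizer`, `Sync` and `BlockData` names are taken from the notes; the interface shape, the surrounding caller loop, the field on `BlockData` and the back-off policy are illustrative assumptions, not the project's actual code.

```go
// Sketch: a caller-side loop around a Sync routine that advances at most
// one block per call. Error handling and polling intervals are illustrative.
package synchronizer

import (
	"context"
	"log"
	"time"
)

// BlockData groups everything synced from a single ethereum block
// (batches, tokens, L1 user txs, ...), as described in the notes.
type BlockData struct {
	Num int64 // hypothetical field for the synced block number
}

// Synchronizer is assumed to expose a Sync method that processes exactly
// one block per call and returns nil when there is no new block yet.
type Synchronizer interface {
	Sync(ctx context.Context) (*BlockData, error)
}

// syncLoop repeatedly calls Sync until the context is cancelled.
func syncLoop(ctx context.Context, s Synchronizer) {
	for {
		select {
		case <-ctx.Done():
			return
		default:
		}
		blockData, err := s.Sync(ctx)
		if err != nil {
			log.Printf("sync error: %v", err)
			time.Sleep(time.Second) // back off before retrying
			continue
		}
		if blockData == nil {
			time.Sleep(time.Second) // no new block yet, poll again later
			continue
		}
		log.Printf("synced block %d", blockData.Num)
	}
}
```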
Update coordinator, call all API update functions

- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and API.UpdateRecommendedFee in a loop, with configurable time intervals (see the sketch after this list)
- Synchronizer:
  - When mapping events by TxHash, use an array to support multiple calls of the same function in the same transaction (for example, a smart contract could call withdraw-with-delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, implement logic to perform several attempts at ethereum node RPC calls before considering them failed (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged L1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1
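A hedged sketch of the "call the API update functions in a loop with configurable time intervals" idea above. Only the UpdateMetrics / UpdateRecommendedFee method names come from the notes; the `apiUpdater` interface, the config struct and the helper names here are illustrative assumptions.

```go
// Sketch: periodically refresh API metrics and recommended fees until the
// context is cancelled, with intervals taken from configuration.
package node

import (
	"context"
	"log"
	"time"
)

// apiUpdater is a hypothetical view of the API methods mentioned above.
type apiUpdater interface {
	UpdateMetrics() error
	UpdateRecommendedFee() error
}

// UpdateIntervals would come from the node configuration.
type UpdateIntervals struct {
	Metrics        time.Duration
	RecommendedFee time.Duration
}

// runAPIUpdateLoops starts one ticker-driven loop per update function.
func runAPIUpdateLoops(ctx context.Context, api apiUpdater, cfg UpdateIntervals) {
	go loopEvery(ctx, cfg.Metrics, func() {
		if err := api.UpdateMetrics(); err != nil {
			log.Printf("UpdateMetrics: %v", err)
		}
	})
	go loopEvery(ctx, cfg.RecommendedFee, func() {
		if err := api.UpdateRecommendedFee(); err != nil {
			log.Printf("UpdateRecommendedFee: %v", err)
		}
	})
}

// loopEvery calls fn at the given interval until the context is done.
func loopEvery(ctx context.Context, d time.Duration, fn func()) {
	ticker := time.NewTicker(d)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			fn()
		}
	}
}
```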
Update coordinator to work better under real net

- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs, the process terminates unconditionally
- coordinator
  - Store stats without a pointer
  - In all functions that send a variable via channel, check for context done to avoid a deadlock (due to no process reading from the channel, which has no queue) when the node is stopped
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and next slot
  - Update tests due to smart contract changes in slot handling and minimum bid defaults
- TxManager
  - Add consts, vars and stats to allow evaluating `canForge`
  - Add `canForge` method (not used yet)
  - Store batch and nonce status (last success and last pending)
  - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs)
  - Handle the (common) case of the receipt not being found right after the tx is sent
  - Don't start the main loop until we get an initial message with the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
  - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-the-pipeline messages when multiple txs are detected to have failed consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused
  - Robust handling of nonces (see the sketch after this list):
    - If geth returns nonce too low, increase it
    - If geth returns nonce too high, decrease it
    - If geth returns underpriced, increase the gas price
    - If geth returns replacement underpriced, increase the gas price
  - Add support for resending transactions after a timeout
  - Store `BatchInfos` in a queue
- Pipeline
  - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with the failed batch number, so that on a restart non-failed batches are not repeated
  - When resetting the StateDB, reset from the local checkpoint if possible instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced
  - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator
  - Avoid forging when we haven't reached the rollup genesis block number
  - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
  - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
  - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead at which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks between scheduling a batch and having it mined
  - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead at which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the batch will be discarded if we can't forge at block 15
  - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price
  - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
  - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache
  - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the necessary methods to create the auth object for transactions manually, so that we can set the nonce, gas price, gas limit, etc. ourselves
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event
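A hedged sketch of the nonce / gas-price adjustment policy listed under "Robust handling of nonces" above. The error-string matching, the adjustment amounts and the function name are illustrative assumptions, not the project's actual implementation; geth's exact error messages can also vary between versions.

```go
// Sketch: adjust the locally tracked nonce or the gas price depending on how
// the ethereum node rejected the transaction, following the rules in the notes.
package txmanager

import (
	"math/big"
	"strings"
)

// adjustAfterSendError mutates the next nonce and gas price in place and
// returns true if the transaction should be retried with the new values.
func adjustAfterSendError(err error, nonce *uint64, gasPrice *big.Int) bool {
	if err == nil {
		return false
	}
	msg := strings.ToLower(err.Error())
	switch {
	case strings.Contains(msg, "nonce too low"):
		*nonce++ // the node already knows a tx with this nonce
		return true
	case strings.Contains(msg, "nonce too high"):
		if *nonce > 0 {
			*nonce-- // we got ahead of the node, step back
		}
		return true
	case strings.Contains(msg, "underpriced"):
		// covers both "underpriced" and "replacement ... underpriced":
		// bump the gas price by ~10% so the (replacement) tx is accepted
		bump := new(big.Int).Div(gasPrice, big.NewInt(10))
		gasPrice.Add(gasPrice, bump)
		return true
	}
	return false // unknown error: let the caller decide
}
```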
3 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
3 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. - Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. - Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
Update coordinator to work better under real net
- cli / node
  - Update the handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally
- coordinator
  - Store stats without pointer
  - In all functions that send a variable via channel, check for context done to avoid a deadlock (due to no process reading from the channel, which has no queue) when the node is stopped
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and the next slot
  - Update tests due to smart contract changes in slot handling and minimum bid defaults
  - TxManager
    - Add consts, vars and stats to allow evaluating `canForge`
    - Add `canForge` method (not used yet)
    - Store batch and nonce status (last success and last pending)
    - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs)
    - Handle the (common) case of the receipt not being found right after the tx is sent
    - Don't start the main loop until we get an initial message with the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
    - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-pipeline messages when multiple txs are detected to have failed consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused.
    - Robust handling of nonces (see the sketch below):
      - If geth returns nonce too low, increase it
      - If geth returns nonce too high, decrease it
      - If geth returns underpriced, increase the gas price
      - If geth returns replacement underpriced, increase the gas price
    - Add support for resending transactions after a timeout
    - Store `BatchInfos` in a queue
  - Pipeline
    - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline, including the number of the failed batch, so that on a restart non-failed batches are not repeated
    - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced.
  - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator.
  - Avoid forging when we haven't reached the rollup genesis block number
  - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
  - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
  - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block); used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though we can forge at block 11, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined.
  - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though we can forge at block 11, the batch will be discarded if we can't forge at block 15.
  - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price
  - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
  - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions; useful for testing with Ganache
  - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the necessary methods to create the auth object for transactions manually, so that the nonce, gas price, gas limit, etc. can be set manually
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
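A minimal sketch, independent of the actual TxManager code, of the nonce and gas-price adjustment rules listed above, keyed off the error strings a geth-style node commonly returns. The error-message matching, the 10% bump, the cap against MaxGasPrice, and all names are illustrative assumptions.

package main

import (
    "errors"
    "fmt"
    "math/big"
    "strings"
)

type txState struct {
    nonce    uint64
    gasPrice *big.Int
}

// adjustAfterSendError mutates the locally tracked nonce and gas price after a
// failed transaction send, following the rules from the commit message above.
func adjustAfterSendError(s *txState, err error, maxGasPrice *big.Int) {
    msg := err.Error()
    switch {
    case strings.Contains(msg, "nonce too low"):
        s.nonce++ // our local nonce lags behind the node: move it forward
    case strings.Contains(msg, "nonce too high"):
        s.nonce-- // we over-counted pending txs: move the nonce back
    case strings.Contains(msg, "underpriced"):
        // covers both "transaction underpriced" and "replacement transaction
        // underpriced": bump the gas price by roughly 10%
        bumped := new(big.Int).Div(new(big.Int).Mul(s.gasPrice, big.NewInt(110)), big.NewInt(100))
        if bumped.Cmp(maxGasPrice) > 0 {
            bumped = maxGasPrice // respect a MaxGasPrice-style limit
        }
        s.gasPrice = bumped
    }
}

func main() {
    s := &txState{nonce: 7, gasPrice: big.NewInt(2_000_000_000)}
    adjustAfterSendError(s, errors.New("replacement transaction underpriced"), big.NewInt(500_000_000_000))
    fmt.Println(s.nonce, s.gasPrice) // 7 2200000000
}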
package synchronizer

import (
    "context"
    "encoding/json"
    "fmt"
    "io/ioutil"
    "math/big"
    "os"
    "sort"
    "testing"
    "time"

    ethCommon "github.com/ethereum/go-ethereum/common"
    "github.com/hermeznetwork/hermez-node/common"
    dbUtils "github.com/hermeznetwork/hermez-node/db"
    "github.com/hermeznetwork/hermez-node/db/historydb"
    "github.com/hermeznetwork/hermez-node/db/statedb"
    "github.com/hermeznetwork/hermez-node/eth"
    "github.com/hermeznetwork/hermez-node/test"
    "github.com/hermeznetwork/hermez-node/test/til"
    "github.com/jinzhu/copier"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)

var tokenConsts = map[common.TokenID]eth.ERC20Consts{}

type timer struct {
    time int64
}

func (t *timer) Time() int64 {
    currentTime := t.time
    t.time++
    return currentTime
}

func accountsCmp(accounts []common.Account) func(i, j int) bool {
    return func(i, j int) bool { return accounts[i].Idx < accounts[j].Idx }
}

// Check Sync output and HistoryDB state against expected values generated by
// til
func checkSyncBlock(t *testing.T, s *Synchronizer, blockNum int, block, syncBlock *common.BlockData) {
    // Check Blocks
    dbBlocks, err := s.historyDB.GetAllBlocks()
    require.NoError(t, err)
    dbBlocks = dbBlocks[1:] // ignore block 0, added by default in the DB
    assert.Equal(t, blockNum, len(dbBlocks))
    assert.Equal(t, int64(blockNum), dbBlocks[blockNum-1].Num)
    assert.NotEqual(t, dbBlocks[blockNum-1].Hash, dbBlocks[blockNum-2].Hash)
    assert.Greater(t, dbBlocks[blockNum-1].Timestamp.Unix(), dbBlocks[blockNum-2].Timestamp.Unix())
    // Check Tokens
    assert.Equal(t, len(block.Rollup.AddedTokens), len(syncBlock.Rollup.AddedTokens))
    dbTokens, err := s.historyDB.GetAllTokens()
    require.NoError(t, err)
    dbTokens = dbTokens[1:] // ignore token 0, added by default in the DB
    for i, token := range block.Rollup.AddedTokens {
        dbToken := dbTokens[i]
        syncToken := syncBlock.Rollup.AddedTokens[i]
        assert.Equal(t, block.Block.Num, syncToken.EthBlockNum)
        assert.Equal(t, token.TokenID, syncToken.TokenID)
        assert.Equal(t, token.EthAddr, syncToken.EthAddr)
        tokenConst := tokenConsts[token.TokenID]
        assert.Equal(t, tokenConst.Name, syncToken.Name)
        assert.Equal(t, tokenConst.Symbol, syncToken.Symbol)
        assert.Equal(t, tokenConst.Decimals, syncToken.Decimals)
        var tokenCpy historydb.TokenWithUSD
        //nolint:gosec
        require.Nil(t, copier.Copy(&tokenCpy, &token))      // copy common.Token to historydb.TokenWithUSD
        require.Nil(t, copier.Copy(&tokenCpy, &tokenConst)) // copy eth.ERC20Consts to historydb.TokenWithUSD
        tokenCpy.ItemID = dbToken.ItemID                    // we don't care about ItemID
        assert.Equal(t, tokenCpy, dbToken)
    }
    // Check submitted L1UserTxs
    assert.Equal(t, len(block.Rollup.L1UserTxs), len(syncBlock.Rollup.L1UserTxs))
    dbL1UserTxs, err := s.historyDB.GetAllL1UserTxs()
    require.NoError(t, err)
    // Ignore BatchNum in syncBlock.L1UserTxs because this value is set by
    // the HistoryDB. Also ignore EffectiveAmount & EffectiveDepositAmount
    // because these values are set by StateDB.ProcessTxs.
    for i := range syncBlock.Rollup.L1UserTxs {
        syncBlock.Rollup.L1UserTxs[i].BatchNum = block.Rollup.L1UserTxs[i].BatchNum
        assert.Nil(t, syncBlock.Rollup.L1UserTxs[i].EffectiveDepositAmount)
        assert.Nil(t, syncBlock.Rollup.L1UserTxs[i].EffectiveAmount)
    }
    assert.Equal(t, block.Rollup.L1UserTxs, syncBlock.Rollup.L1UserTxs)
    for _, tx := range block.Rollup.L1UserTxs {
        var dbTx *common.L1Tx
        // Find tx in DB output
        for _, _dbTx := range dbL1UserTxs {
            if *tx.ToForgeL1TxsNum == *_dbTx.ToForgeL1TxsNum &&
                tx.Position == _dbTx.Position {
                dbTx = new(common.L1Tx)
                *dbTx = _dbTx
                // NOTE: Overwrite EffectiveFromIdx in the L1UserTx
                // from the DB because we don't expect
                // EffectiveFromIdx to be set yet, as this tx
                // is not yet forged
                dbTx.EffectiveFromIdx = 0
                break
            }
        }
        // If the tx has been forged in this block, this will be
        // reflected in the DB, and so the Effective values will be
        // already set
        if dbTx.BatchNum != nil {
            tx.EffectiveAmount = tx.Amount
            tx.EffectiveDepositAmount = tx.DepositAmount
        }
        assert.Equal(t, &tx, dbTx) //nolint:gosec
    }
    // Check Batches
    assert.Equal(t, len(block.Rollup.Batches), len(syncBlock.Rollup.Batches))
    dbBatches, err := s.historyDB.GetAllBatches()
    require.NoError(t, err)
    dbL1CoordinatorTxs, err := s.historyDB.GetAllL1CoordinatorTxs()
    require.NoError(t, err)
    dbL2Txs, err := s.historyDB.GetAllL2Txs()
    require.NoError(t, err)
    dbExits, err := s.historyDB.GetAllExits()
    require.NoError(t, err)
    // dbL1CoordinatorTxs := []common.L1Tx{}
    for i, batch := range block.Rollup.Batches {
        var dbBatch *common.Batch
        // Find batch in DB output
        for _, _dbBatch := range dbBatches {
            if batch.Batch.BatchNum == _dbBatch.BatchNum {
                dbBatch = new(common.Batch)
                *dbBatch = _dbBatch
                break
            }
        }
        syncBatch := syncBlock.Rollup.Batches[i]
        // We don't care about TotalFeesUSD. Use the syncBatch that
        // has a TotalFeesUSD inserted by the HistoryDB
        batch.Batch.TotalFeesUSD = syncBatch.Batch.TotalFeesUSD
        assert.Equal(t, batch.CreatedAccounts, syncBatch.CreatedAccounts)
        batch.Batch.NumAccounts = len(batch.CreatedAccounts)
        // Test field by field to facilitate debugging of errors
        assert.Equal(t, len(batch.L1UserTxs), len(syncBatch.L1UserTxs))
        // NOTE: EffectiveFromIdx is set on the til L1UserTxs in the
        // `FillBlocksForgedL1UserTxs` function
        for j := range syncBatch.L1UserTxs {
            assert.NotEqual(t, 0, syncBatch.L1UserTxs[j].EffectiveFromIdx)
        }
        assert.Equal(t, batch.L1UserTxs, syncBatch.L1UserTxs)
        // NOTE: EffectiveFromIdx is set on the til L1CoordinatorTxs in the
        // `FillBlocksExtra` function
        for j := range syncBatch.L1CoordinatorTxs {
            assert.NotEqual(t, 0, syncBatch.L1CoordinatorTxs[j].EffectiveFromIdx)
        }
        assert.Equal(t, batch.L1CoordinatorTxs, syncBatch.L1CoordinatorTxs)
        assert.Equal(t, batch.L2Txs, syncBatch.L2Txs)
        // In the exit tree, we only check AccountIdx and Balance, because
        // that's what we have precomputed before.
        require.Equal(t, len(batch.ExitTree), len(syncBatch.ExitTree))
        for j := range batch.ExitTree {
            exit := &batch.ExitTree[j]
            assert.Equal(t, exit.AccountIdx, syncBatch.ExitTree[j].AccountIdx)
            assert.Equal(t, exit.Balance, syncBatch.ExitTree[j].Balance)
            *exit = syncBatch.ExitTree[j]
        }
        assert.Equal(t, batch.Batch, syncBatch.Batch)
        // Ignore updated accounts
        syncBatch.UpdatedAccounts = nil
        assert.Equal(t, batch, syncBatch)
        assert.Equal(t, &batch.Batch, dbBatch) //nolint:gosec
        // Check forged L1UserTxs from the DB, and check effective values
        // in the sync output
        for j, tx := range batch.L1UserTxs {
            var dbTx *common.L1Tx
            // Find tx in DB output
            for _, _dbTx := range dbL1UserTxs {
                if *tx.BatchNum == *_dbTx.BatchNum &&
                    tx.Position == _dbTx.Position {
                    dbTx = new(common.L1Tx)
                    *dbTx = _dbTx
                    break
                }
            }
            assert.Equal(t, &tx, dbTx) //nolint:gosec
            syncTx := &syncBlock.Rollup.Batches[i].L1UserTxs[j]
            assert.Equal(t, syncTx.DepositAmount, syncTx.EffectiveDepositAmount)
            assert.Equal(t, syncTx.Amount, syncTx.EffectiveAmount)
        }
        // Check L1CoordinatorTxs from DB
        for _, tx := range batch.L1CoordinatorTxs {
            var dbTx *common.L1Tx
            // Find tx in DB output
            for _, _dbTx := range dbL1CoordinatorTxs {
                if *tx.BatchNum == *_dbTx.BatchNum &&
                    tx.Position == _dbTx.Position {
                    dbTx = new(common.L1Tx)
                    *dbTx = _dbTx
                    break
                }
            }
            assert.Equal(t, &tx, dbTx) //nolint:gosec
        }
        // Check L2Txs from DB
        for _, tx := range batch.L2Txs {
            var dbTx *common.L2Tx
            // Find tx in DB output
            for _, _dbTx := range dbL2Txs {
                if tx.BatchNum == _dbTx.BatchNum &&
                    tx.Position == _dbTx.Position {
                    dbTx = new(common.L2Tx)
                    *dbTx = _dbTx
                    break
                }
            }
            assert.Equal(t, &tx, dbTx) //nolint:gosec
        }
        // Check Exits from DB
        for _, exit := range batch.ExitTree {
            var dbExit *common.ExitInfo
            // Find exit in DB output
            for _, _dbExit := range dbExits {
                if exit.BatchNum == _dbExit.BatchNum &&
                    exit.AccountIdx == _dbExit.AccountIdx {
                    dbExit = new(common.ExitInfo)
                    *dbExit = _dbExit
                    break
                }
            }
            // Compare MerkleProof in JSON because an unmarshaled 0
            // big.Int leaves the internal big.Int array at nil,
            // and gives trouble when comparing a big.Int with an
            // internal big.Int array != nil but empty.
            mtp, err := json.Marshal(exit.MerkleProof)
            require.NoError(t, err)
            dbMtp, err := json.Marshal(dbExit.MerkleProof)
            require.NoError(t, err)
            assert.Equal(t, mtp, dbMtp)
            dbExit.MerkleProof = exit.MerkleProof
            assert.Equal(t, &exit, dbExit) //nolint:gosec
        }
    }
    // Compare accounts from the HistoryDB with the StateDB (they should match)
    dbAccounts, err := s.historyDB.GetAllAccounts()
    require.NoError(t, err)
    sdbAccounts, err := s.stateDB.TestGetAccounts()
    require.NoError(t, err)
    assertEqualAccountsHistoryDBStateDB(t, dbAccounts, sdbAccounts)
}

func assertEqualAccountsHistoryDBStateDB(t *testing.T, hdbAccs, sdbAccs []common.Account) {
    assert.Equal(t, len(hdbAccs), len(sdbAccs))
    sort.SliceStable(hdbAccs, accountsCmp(hdbAccs))
    sort.SliceStable(sdbAccs, accountsCmp(sdbAccs))
    for i := range hdbAccs {
        hdbAcc := hdbAccs[i]
        sdbAcc := sdbAccs[i]
        assert.Equal(t, hdbAcc.Idx, sdbAcc.Idx)
        assert.Equal(t, hdbAcc.TokenID, sdbAcc.TokenID)
        assert.Equal(t, hdbAcc.EthAddr, sdbAcc.EthAddr)
        assert.Equal(t, hdbAcc.BJJ, sdbAcc.BJJ)
    }
}

// ethAddTokens adds the tokens from the blocks to the blockchain
func ethAddTokens(blocks []common.BlockData, client *test.Client) {
    for _, block := range blocks {
        for _, token := range block.Rollup.AddedTokens {
            consts := eth.ERC20Consts{
                Name:     fmt.Sprintf("Token %d", token.TokenID),
                Symbol:   fmt.Sprintf("TK%d", token.TokenID),
                Decimals: 18,
            }
            tokenConsts[token.TokenID] = consts
            client.CtlAddERC20(token.EthAddr, consts)
        }
    }
}

var chainID uint16 = 0
var deleteme = []string{}

func TestMain(m *testing.M) {
    exitVal := m.Run()
    for _, dir := range deleteme {
        if err := os.RemoveAll(dir); err != nil {
            panic(err)
        }
    }
    os.Exit(exitVal)
}

func newTestModules(t *testing.T) (*statedb.StateDB, *historydb.HistoryDB) {
    // Init State DB
    dir, err := ioutil.TempDir("", "tmpdb")
    require.NoError(t, err)
    deleteme = append(deleteme, dir)
    stateDB, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128, Type: statedb.TypeSynchronizer, NLevels: 32})
    require.NoError(t, err)
    // Init History DB
    pass := os.Getenv("POSTGRES_PASS")
    db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
    require.NoError(t, err)
    historyDB := historydb.NewHistoryDB(db, db, nil)
    // Clear DB
    test.WipeDB(historyDB.DB())
    return stateDB, historyDB
}

func newBigInt(s string) *big.Int {
    v, ok := new(big.Int).SetString(s, 10)
    if !ok {
        panic(fmt.Errorf("can't set big.Int from %s", s))
    }
    return v
}

func TestSyncGeneral(t *testing.T) {
    //
    // Setup
    //
    stateDB, historyDB := newTestModules(t)
    // Init eth client
    var timer timer
    clientSetup := test.NewClientSetupExample()
    clientSetup.ChainID = big.NewInt(int64(chainID))
    bootCoordAddr := clientSetup.AuctionVariables.BootCoordinator
    client := test.NewClient(true, &timer, &ethCommon.Address{}, clientSetup)
    // Create Synchronizer
    s, err := NewSynchronizer(client, historyDB, stateDB, Config{
        StatsRefreshPeriod: 0 * time.Second,
    })
    require.NoError(t, err)
    ctx := context.Background()
    //
    // First Sync from an initial state
    //
    stats := s.Stats()
    assert.Equal(t, false, stats.Synced())
    // Test Sync for the rollup genesis block
    syncBlock, discards, err := s.Sync(ctx, nil)
    require.NoError(t, err)
    require.Nil(t, discards)
    require.NotNil(t, syncBlock)
    require.Nil(t, syncBlock.Rollup.Vars)
    require.Nil(t, syncBlock.Auction.Vars)
    require.Nil(t, syncBlock.WDelayer.Vars)
    assert.Equal(t, int64(1), syncBlock.Block.Num)
    stats = s.Stats()
    assert.Equal(t, int64(1), stats.Eth.FirstBlockNum)
    assert.Equal(t, int64(1), stats.Eth.LastBlock.Num)
    assert.Equal(t, int64(1), stats.Sync.LastBlock.Num)
    vars := s.SCVars()
    assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
    assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
    assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
    dbBlocks, err := s.historyDB.GetAllBlocks()
    require.NoError(t, err)
    assert.Equal(t, 2, len(dbBlocks))
    assert.Equal(t, int64(1), dbBlocks[1].Num)
    // Sync again and expect no new blocks
    syncBlock, discards, err = s.Sync(ctx, nil)
    require.NoError(t, err)
    require.Nil(t, discards)
    require.Nil(t, syncBlock)
    //
    // Generate blockchain and smart contract data, and fill the test smart contracts
    //
    // Generate blockchain data with til
    set1 := `
Type: Blockchain
AddToken(1)
AddToken(2)
AddToken(3)
CreateAccountDeposit(1) C: 2000 // Idx=256+2=258
CreateAccountDeposit(2) A: 2000 // Idx=256+3=259
CreateAccountDeposit(1) D: 500 // Idx=256+4=260
CreateAccountDeposit(2) B: 500 // Idx=256+5=261
CreateAccountDeposit(2) C: 500 // Idx=256+6=262
CreateAccountCoordinator(1) A // Idx=256+0=256
CreateAccountCoordinator(1) B // Idx=256+1=257
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{5}
> batchL1 // forge defined L1UserTxs{5}, freeze L1UserTxs{nil}
> block // blockNum=2
CreateAccountDepositTransfer(1) E-A: 1000, 200 // Idx=256+7=263
ForceTransfer(1) C-B: 80
ForceExit(1) A: 100
ForceExit(1) B: 80
ForceTransfer(1) A-D: 100
Transfer(1) C-A: 100 (126)
Exit(1) C: 50 (100)
Exit(1) D: 30 (100)
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{3}
> batchL1 // forge L1UserTxs{3}, freeze defined L1UserTxs{nil}
> block // blockNum=3
`
    tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
    tilCfgExtra := til.ConfigExtra{
        BootCoordAddr: bootCoordAddr,
        CoordUser:     "A",
    }
    blocks, err := tc.GenerateBlocks(set1)
    require.NoError(t, err)
    // Sanity check
    require.Equal(t, 2, len(blocks))
    // blocks 0 (blockNum=2)
    i := 0
    require.Equal(t, 2, int(blocks[i].Block.Num))
    require.Equal(t, 3, len(blocks[i].Rollup.AddedTokens))
    require.Equal(t, 5, len(blocks[i].Rollup.L1UserTxs))
    require.Equal(t, 2, len(blocks[i].Rollup.Batches))
    require.Equal(t, 2, len(blocks[i].Rollup.Batches[0].L1CoordinatorTxs))
    // Set StateRoots for batches manually (til doesn't set them)
    blocks[i].Rollup.Batches[0].Batch.StateRoot =
        newBigInt("18906357591508007884273218035694076596537737437965299189312069102730480717391")
    blocks[i].Rollup.Batches[1].Batch.StateRoot =
        newBigInt("9513185123401321669660637227182204000277156839501731093239187625486561933297")
    // blocks 1 (blockNum=3)
    i = 1
    require.Equal(t, 3, int(blocks[i].Block.Num))
    require.Equal(t, 5, len(blocks[i].Rollup.L1UserTxs))
    require.Equal(t, 2, len(blocks[i].Rollup.Batches))
    require.Equal(t, 3, len(blocks[i].Rollup.Batches[0].L2Txs))
    // Set StateRoots for batches manually (til doesn't set them)
    blocks[i].Rollup.Batches[0].Batch.StateRoot =
        newBigInt("13060270878200012606074130020925677466793317216609491464427188889005039616594")
    blocks[i].Rollup.Batches[1].Batch.StateRoot =
        newBigInt("21427104994652624302859637783375978708867165042357535792408500519060088086054")
    // Generate extra required data
    ethAddTokens(blocks, client)
    err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
    require.NoError(t, err)
    tc.FillBlocksL1UserTxsBatchNum(blocks)
    err = tc.FillBlocksForgedL1UserTxs(blocks)
    require.NoError(t, err)
    // Add block data to the smart contracts
    err = client.CtlAddBlocks(blocks)
    require.NoError(t, err)
    //
    // Sync to synchronize the current state from the test smart contracts,
    // and check the outcome
    //
    // Block 2
    syncBlock, discards, err = s.Sync(ctx, nil)
    require.NoError(t, err)
    require.Nil(t, discards)
    require.NotNil(t, syncBlock)
    assert.Nil(t, syncBlock.Rollup.Vars)
    assert.Nil(t, syncBlock.Auction.Vars)
    assert.Nil(t, syncBlock.WDelayer.Vars)
    assert.Equal(t, int64(2), syncBlock.Block.Num)
    stats = s.Stats()
    assert.Equal(t, int64(1), stats.Eth.FirstBlockNum)
    assert.Equal(t, int64(3), stats.Eth.LastBlock.Num)
    assert.Equal(t, int64(2), stats.Sync.LastBlock.Num)
    checkSyncBlock(t, s, 2, &blocks[0], syncBlock)
    // Block 3
    syncBlock, discards, err = s.Sync(ctx, nil)
    assert.NoError(t, err)
    require.NoError(t, err)
    require.Nil(t, discards)
    require.NotNil(t, syncBlock)
    assert.Nil(t, syncBlock.Rollup.Vars)
    assert.Nil(t, syncBlock.Auction.Vars)
    assert.Nil(t, syncBlock.WDelayer.Vars)
    assert.Equal(t, int64(3), syncBlock.Block.Num)
    stats = s.Stats()
    assert.Equal(t, int64(1), stats.Eth.FirstBlockNum)
    assert.Equal(t, int64(3), stats.Eth.LastBlock.Num)
    assert.Equal(t, int64(3), stats.Sync.LastBlock.Num)
    checkSyncBlock(t, s, 3, &blocks[1], syncBlock)
    // Block 4
    // Generate 2 withdraws manually
    _, err = client.RollupWithdrawMerkleProof(tc.Users["A"].BJJ.Public().Compress(), 1, 4, 256, big.NewInt(100), []*big.Int{}, true)
    require.NoError(t, err)
    _, err = client.RollupWithdrawMerkleProof(tc.Users["C"].BJJ.Public().Compress(), 1, 3, 258, big.NewInt(50), []*big.Int{}, false)
    require.NoError(t, err)
    client.CtlMineBlock()
    syncBlock, discards, err = s.Sync(ctx, nil)
    require.NoError(t, err)
    require.Nil(t, discards)
    require.NotNil(t, syncBlock)
    assert.Nil(t, syncBlock.Rollup.Vars)
    assert.Nil(t, syncBlock.Auction.Vars)
    assert.Nil(t, syncBlock.WDelayer.Vars)
    assert.Equal(t, int64(4), syncBlock.Block.Num)
    stats = s.Stats()
    assert.Equal(t, int64(1), stats.Eth.FirstBlockNum)
    assert.Equal(t, int64(4), stats.Eth.LastBlock.Num)
    assert.Equal(t, int64(4), stats.Sync.LastBlock.Num)
    vars = s.SCVars()
    assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
    assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
    assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
    dbExits, err := s.historyDB.GetAllExits()
    require.NoError(t, err)
    foundA1, foundC1 := false, false
    for _, exit := range dbExits {
        if exit.AccountIdx == 256 && exit.BatchNum == 4 {
            foundA1 = true
            assert.Equal(t, int64(4), *exit.InstantWithdrawn)
        }
        if exit.AccountIdx == 258 && exit.BatchNum == 3 {
            foundC1 = true
            assert.Equal(t, int64(4), *exit.DelayedWithdrawRequest)
        }
    }
    assert.True(t, foundA1)
    assert.True(t, foundC1)
    // Block 5
    // Update variables manually
    rollupVars, auctionVars, wDelayerVars, err := s.historyDB.GetSCVars()
    require.NoError(t, err)
    rollupVars.ForgeL1L2BatchTimeout = 42
    _, err = client.RollupUpdateForgeL1L2BatchTimeout(rollupVars.ForgeL1L2BatchTimeout)
    require.NoError(t, err)
    auctionVars.OpenAuctionSlots = 17
    _, err = client.AuctionSetOpenAuctionSlots(auctionVars.OpenAuctionSlots)
    require.NoError(t, err)
    wDelayerVars.WithdrawalDelay = 99
    _, err = client.WDelayerChangeWithdrawalDelay(wDelayerVars.WithdrawalDelay)
    require.NoError(t, err)
    client.CtlMineBlock()
    syncBlock, discards, err = s.Sync(ctx, nil)
    require.NoError(t, err)
    require.Nil(t, discards)
    require.NotNil(t, syncBlock)
    assert.NotNil(t, syncBlock.Rollup.Vars)
    assert.NotNil(t, syncBlock.Auction.Vars)
    assert.NotNil(t, syncBlock.WDelayer.Vars)
    assert.Equal(t, int64(5), syncBlock.Block.Num)
    stats = s.Stats()
    assert.Equal(t, int64(1), stats.Eth.FirstBlockNum)
    assert.Equal(t, int64(5), stats.Eth.LastBlock.Num)
    assert.Equal(t, int64(5), stats.Sync.LastBlock.Num)
    vars = s.SCVars()
    assert.NotEqual(t, clientSetup.RollupVariables, vars.Rollup)
    assert.NotEqual(t, clientSetup.AuctionVariables, vars.Auction)
    assert.NotEqual(t, clientSetup.WDelayerVariables, vars.WDelayer)
    dbRollupVars, dbAuctionVars, dbWDelayerVars, err := s.historyDB.GetSCVars()
    require.NoError(t, err)
    // Set EthBlockNum of the Vars to the blockNum in which they were updated (should be 5)
    rollupVars.EthBlockNum = syncBlock.Block.Num
    auctionVars.EthBlockNum = syncBlock.Block.Num
    wDelayerVars.EthBlockNum = syncBlock.Block.Num
    assert.Equal(t, rollupVars, dbRollupVars)
    assert.Equal(t, auctionVars, dbAuctionVars)
    assert.Equal(t, wDelayerVars, dbWDelayerVars)
    //
    // Reorg test
    //
    // Redo blocks 2-5 (as a reorg) only leaving:
    // - 2 create account transactions
    // - 2 add tokens
    // We add a 6th block so that the synchronizer can detect the reorg
    set2 := `
Type: Blockchain
AddToken(1)
AddToken(2)
CreateAccountDeposit(1) C: 2000 // Idx=256+1=257
CreateAccountCoordinator(1) A // Idx=256+0=256
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{1}
> batchL1 // forge defined L1UserTxs{1}, freeze L1UserTxs{nil}
> block // blockNum=2
> block // blockNum=3
> block // blockNum=4
> block // blockNum=5
> block // blockNum=6
`
    tc = til.NewContext(chainID, common.RollupConstMaxL1UserTx)
    tilCfgExtra = til.ConfigExtra{
        BootCoordAddr: bootCoordAddr,
        CoordUser:     "A",
    }
    blocks, err = tc.GenerateBlocks(set2)
    require.NoError(t, err)
    // Set StateRoots for batches manually (til doesn't set them)
    blocks[0].Rollup.Batches[0].Batch.StateRoot =
        newBigInt("11218510534825843475100588932060366395781087435899915642332104464234485046683")
    blocks[0].Rollup.Batches[1].Batch.StateRoot =
        newBigInt("20283020730369146334077598087403837297563965802277806438205710455191646998983")
    for i := 0; i < 4; i++ {
        client.CtlRollback()
    }
    block := client.CtlLastBlock()
    require.Equal(t, int64(1), block.Num)
    // Generate extra required data
    ethAddTokens(blocks, client)
    err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
    require.NoError(t, err)
    tc.FillBlocksL1UserTxsBatchNum(blocks)
    // Add block data to the smart contracts
    err = client.CtlAddBlocks(blocks)
    require.NoError(t, err)
    // The first sync detects the reorg and discards 4 blocks
    syncBlock, discards, err = s.Sync(ctx, nil)
    require.NoError(t, err)
    expectedDiscards := int64(4)
    require.Equal(t, &expectedDiscards, discards)
    require.Nil(t, syncBlock)
    stats = s.Stats()
    assert.Equal(t, false, stats.Synced())
    assert.Equal(t, int64(6), stats.Eth.LastBlock.Num)
    vars = s.SCVars()
    assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
    assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
    assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
    // At this point, the DB only has data up to block 1
    dbBlock, err := s.historyDB.GetLastBlock()
    require.NoError(t, err)
    assert.Equal(t, int64(1), dbBlock.Num)
    // Accounts in HistoryDB and StateDB must be empty
    dbAccounts, err := s.historyDB.GetAllAccounts()
    require.NoError(t, err)
    sdbAccounts, err := s.stateDB.TestGetAccounts()
    require.NoError(t, err)
    assert.Equal(t, 0, len(dbAccounts))
    assertEqualAccountsHistoryDBStateDB(t, dbAccounts, sdbAccounts)
    // Sync blocks 2-6
    for i := 0; i < 5; i++ {
        syncBlock, discards, err = s.Sync(ctx, nil)
        require.NoError(t, err)
        require.Nil(t, discards)
        require.NotNil(t, syncBlock)
        assert.Nil(t, syncBlock.Rollup.Vars)
        assert.Nil(t, syncBlock.Auction.Vars)
        assert.Nil(t, syncBlock.WDelayer.Vars)
        assert.Equal(t, int64(2+i), syncBlock.Block.Num)
        stats = s.Stats()
        assert.Equal(t, int64(1), stats.Eth.FirstBlockNum)
        assert.Equal(t, int64(6), stats.Eth.LastBlock.Num)
        assert.Equal(t, int64(2+i), stats.Sync.LastBlock.Num)
        if i == 4 {
            assert.Equal(t, true, stats.Synced())
        } else {
            assert.Equal(t, false, stats.Synced())
        }
        vars = s.SCVars()
        assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
        assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
        assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
    }
    dbBlock, err = s.historyDB.GetLastBlock()
    require.NoError(t, err)
    assert.Equal(t, int64(6), dbBlock.Num)
    // Accounts in HistoryDB and StateDB have only 2 entries
    dbAccounts, err = s.historyDB.GetAllAccounts()
    require.NoError(t, err)
    sdbAccounts, err = s.stateDB.TestGetAccounts()
    require.NoError(t, err)
    assert.Equal(t, 2, len(dbAccounts))
    assertEqualAccountsHistoryDBStateDB(t, dbAccounts, sdbAccounts)
}

func TestSyncForgerCommitment(t *testing.T) {
    stateDB, historyDB := newTestModules(t)
    // Init eth client
    var timer timer
    clientSetup := test.NewClientSetupExample()
    clientSetup.ChainID = big.NewInt(int64(chainID))
    clientSetup.AuctionConstants.GenesisBlockNum = 2
    clientSetup.AuctionConstants.BlocksPerSlot = 4
    clientSetup.AuctionVariables.SlotDeadline = 2
    bootCoordAddr := clientSetup.AuctionVariables.BootCoordinator
    client := test.NewClient(true, &timer, &ethCommon.Address{}, clientSetup)
    // Create Synchronizer
    s, err := NewSynchronizer(client, historyDB, stateDB, Config{
        StatsRefreshPeriod: 0 * time.Second,
    })
    require.NoError(t, err)
    ctx := context.Background()
    set := `
Type: Blockchain
// Slot = 0
> block // 2
> block // 3
> block // 4
> block // 5
// Slot = 1
> block // 6
> batch
> block // 7
> block // 8
> block // 9
// Slot = 2
> block // 10
> block // 11
> batch
> block // 12
> block // 13
`
    // For each block, true when the slot that belongs to the following
    // block has forgerCommitment
    commitment := map[int64]bool{
        2:  false,
        3:  false,
        4:  false,
        5:  false,
        6:  false,
        7:  true,
        8:  true,
        9:  false,
        10: false,
        11: false,
        12: false,
        13: false,
    }
    tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
    blocks, err := tc.GenerateBlocks(set)
    assert.NoError(t, err)
    tilCfgExtra := til.ConfigExtra{
        BootCoordAddr: bootCoordAddr,
        CoordUser:     "A",
    }
    err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
    require.NoError(t, err)
    // for i := range blocks {
    // for j := range blocks[i].Rollup.Batches {
    // blocks[i].Rollup.Batches[j].Batch.SlotNum = int64(i) / 4
    // }
    // }
    // Sync until the synchronizer is in sync
    for {
        syncBlock, discards, err := s.Sync(ctx, nil)
        require.NoError(t, err)
        require.Nil(t, discards)
        if syncBlock == nil {
            break
        }
    }
    stats := s.Stats()
    require.Equal(t, int64(1), stats.Sync.LastBlock.Num)
    // Store the ForgerCommitment observed at every block by the live synchronizer
    syncCommitment := map[int64]bool{}
    // Store the ForgerCommitment observed at every block by a synchronizer that is restarted
    syncRestartedCommitment := map[int64]bool{}
    for _, block := range blocks {
        // Add block data to the smart contracts
        err = client.CtlAddBlocks([]common.BlockData{block})
        require.NoError(t, err)
        syncBlock, discards, err := s.Sync(ctx, nil)
        require.NoError(t, err)
        require.Nil(t, discards)
        if syncBlock == nil {
            break
        }
        stats := s.Stats()
        require.True(t, stats.Synced())
        syncCommitment[syncBlock.Block.Num] = stats.Sync.Auction.CurrentSlot.ForgerCommitment
        s2, err := NewSynchronizer(client, historyDB, stateDB, Config{
            StatsRefreshPeriod: 0 * time.Second,
        })
        require.NoError(t, err)
        stats = s2.Stats()
        require.True(t, stats.Synced())
        syncRestartedCommitment[syncBlock.Block.Num] = stats.Sync.Auction.CurrentSlot.ForgerCommitment
    }
    assert.Equal(t, commitment, syncCommitment)
    assert.Equal(t, commitment, syncRestartedCommitment)
}