
Update missing parts, improve til, and more

- Node
  - Update configuration to initialize the interface to all the smart contracts
- Common
  - Move BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines as the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount or Amount is not used, set it to 0, so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer the BatchNum is not yet known (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
Update coordinator, call all api update functions

- Common
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer
  - When mapping events by TxHash, use an array to support multiple calls of the same function within the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator into its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, perform several attempts at ethereum node RPC calls before treating the error as final (both for calls to forgeBatch and for fetching the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged L1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
Update coordinator to work better under real net

- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs, the process terminates unconditionally
- coordinator
  - Store stats without pointer
  - In all functions that send a variable via channel, check for context done to avoid a deadlock (due to no process reading from the channel, which has no queue) when the node is stopped
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and next slot
  - Update tests due to smart contract changes in slot handling and minimum bid defaults
  - TxManager
    - Add consts, vars and stats to allow evaluating `canForge`
    - Add `canForge` method (not used yet)
    - Store batch and nonce status (last success and last pending)
    - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs)
    - Handle the (common) case of the receipt not being found right after the tx is sent
    - Don't start the main loop until we get an initial message with the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
    - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-pipeline messages when multiple txs are detected to have failed consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused
    - Robust handling of nonces:
      - If geth reports the nonce is too low, increase it
      - If geth reports the nonce is too high, decrease it
      - If geth reports underpriced, increase the gas price
      - If geth reports replacement underpriced, increase the gas price
    - Add support for resending transactions after a timeout
    - Store `BatchInfos` in a queue
  - Pipeline
    - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline, including the failed batch number, so that on restart non-failed batches are not repeated
    - When resetting the stateDB, reset from the local checkpoint if possible instead of from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced
    - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator
    - Avoid forging before we have reached the rollup genesis block number
    - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
    - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
    - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead at which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though we canForge at block 11, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks between scheduling a batch and having it mined
    - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead at which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though we canForge at block 11, the batch will be discarded if we can't forge at block 15
    - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price
    - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
    - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache
    - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the methods needed to create the auth object for transactions manually, so that the nonce, gas price, gas limit, etc. can be set by hand
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event
3 years ago
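The "robust handling of nonces" policy above can be sketched as two small pure functions. The function names, the 10% bump margin, and the string matching are assumptions made for this illustration; the error substrings ("nonce too low", "underpriced") are the ones geth commonly returns, but real code should match them against the node's actual error values:

```go
package main

import (
	"fmt"
	"strings"
)

// adjustNonce is a hypothetical sketch of the nonce-recovery policy:
// if the node says the nonce is too low, bump it; too high, lower it;
// anything else leaves the nonce untouched.
func adjustNonce(nonce uint64, errMsg string) uint64 {
	switch {
	case strings.Contains(errMsg, "nonce too low"):
		return nonce + 1
	case strings.Contains(errMsg, "nonce too high"):
		return nonce - 1
	}
	return nonce
}

// bumpGasPrice handles the "underpriced" / "replacement transaction
// underpriced" cases by raising the gas price; 10% is an arbitrary
// example margin, not a value taken from the project.
func bumpGasPrice(gasPrice uint64, errMsg string) uint64 {
	if strings.Contains(errMsg, "underpriced") {
		return gasPrice + gasPrice/10
	}
	return gasPrice
}

func main() {
	fmt.Println(adjustNonce(7, "nonce too low"))              // 8
	fmt.Println(bumpGasPrice(100, "transaction underpriced")) // 110
}
```

Keeping the policy in side-effect-free helpers like these makes the retry loop in a TxManager easy to test without an ethereum node.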
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. 
- Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Redo coordinator structure, connect API to node

- API
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add a `DefaultSlotSetBidSlotNum` column (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum from which the specified `DefaultSlotSetBid` starts applying
- Config
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator
  - Reorganize the code so that the goroutines are started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use a context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update the test (a minimal test to manually check that the coordinator starts)
- HistoryDB
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer
  - Add an Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state: on Sync, update the internal cached state; on Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
Update coordinator, call all api update functions

- Common
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer
  - When mapping events by TxHash, use an array to support multiple calls of the same function in the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. 
- Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. 
- Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node
- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum the specified `DefaultSlotSetBid` starts applying
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code so that the goroutines are started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
- Update test (a minimal test to manually check that the coordinator starts)
- HistoryDB:
  - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
  - Add Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
Allow serving API only via new cli command
- Add a new command to the cli/node: `serveapi`, which allows serving the API just by connecting to the PostgreSQL database. The mode flag should be passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB.

Summary of the implementation details:
- New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns:
  - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it in the DB. The API server will query it from the DB to serve it.
  - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts.
  - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns individually.
- The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information for the node to periodically update the StateAPI in the DB.
- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `synchronizer` to `common` for convenience.
3 years ago
package historydb

import (
    "database/sql"
    "fmt"
    "math"
    "math/big"
    "os"
    "strings"
    "testing"
    "time"

    ethCommon "github.com/ethereum/go-ethereum/common"
    "github.com/hermeznetwork/hermez-node/apitypes"
    "github.com/hermeznetwork/hermez-node/common"
    dbUtils "github.com/hermeznetwork/hermez-node/db"
    "github.com/hermeznetwork/hermez-node/log"
    "github.com/hermeznetwork/hermez-node/test"
    "github.com/hermeznetwork/hermez-node/test/til"
    "github.com/hermeznetwork/tracerr"
    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
)
var historyDB *HistoryDB
var historyDBWithACC *HistoryDB

// In order to run the test you need to run a Postgres DB with
// a database named "history" that is accessible by
// user: "hermez"
// pass: set it using the env var POSTGRES_PASS
// This can be achieved by running: POSTGRES_PASS=your_strong_pass && sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=history -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD=$POSTGRES_PASS -d postgres && sleep 2s && sudo docker exec -it hermez-db-test psql -a history -U hermez -c "CREATE DATABASE l2;"
// After running the test you can stop the container by running: sudo docker kill hermez-db-test
// If you already did that for the L2DB you don't have to do it again
func TestMain(m *testing.M) {
    // init DB
    pass := os.Getenv("POSTGRES_PASS")
    db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
    if err != nil {
        panic(err)
    }
    historyDB = NewHistoryDB(db, db, nil)
    if err != nil {
        panic(err)
    }
    apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
    historyDBWithACC = NewHistoryDB(db, db, apiConnCon)
    // Run tests
    result := m.Run()
    // Close DB
    if err := db.Close(); err != nil {
        log.Error("Error closing the history DB:", err)
    }
    os.Exit(result)
}
func TestBlocks(t *testing.T) {
    var fromBlock, toBlock int64
    fromBlock = 0
    toBlock = 7
    // Reset DB
    test.WipeDB(historyDB.DB())
    // Generate blocks using til
    set1 := `
        Type: Blockchain
        // block 0 is stored as default in the DB
        // block 1 does not exist
        > block // blockNum=2
        > block // blockNum=3
        > block // blockNum=4
        > block // blockNum=5
        > block // blockNum=6
    `
    tc := til.NewContext(uint16(0), 1)
    blocks, err := tc.GenerateBlocks(set1)
    require.NoError(t, err)
    // Save timestamp of a block with UTC and change it without UTC
    timestamp := time.Now().Add(time.Second * 13)
    blocks[fromBlock].Block.Timestamp = timestamp
    // Insert blocks into DB
    for i := 0; i < len(blocks); i++ {
        err := historyDB.AddBlock(&blocks[i].Block)
        assert.NoError(t, err)
    }
    // Add block 0 to the generated blocks
    blocks = append(
        []common.BlockData{{Block: test.Block0}}, //nolint:gofmt
        blocks...,
    )
    // Get all blocks from DB
    fetchedBlocks, err := historyDB.getBlocks(fromBlock, toBlock)
    assert.Equal(t, len(blocks), len(fetchedBlocks))
    // Compare generated blocks vs fetched blocks
    assert.NoError(t, err)
    for i := range fetchedBlocks {
        assertEqualBlock(t, &blocks[i].Block, &fetchedBlocks[i])
    }
    // Compare saved timestamp vs fetched one
    nameZoneUTC, offsetUTC := timestamp.UTC().Zone()
    zoneFetchedBlock, offsetFetchedBlock := fetchedBlocks[fromBlock].Timestamp.Zone()
    assert.Equal(t, nameZoneUTC, zoneFetchedBlock)
    assert.Equal(t, offsetUTC, offsetFetchedBlock)
    // Get blocks from the DB one by one
    for i := int64(2); i < toBlock; i++ { // avoid block 0 for simplicity
        fetchedBlock, err := historyDB.GetBlock(i)
        assert.NoError(t, err)
        assertEqualBlock(t, &blocks[i-1].Block, fetchedBlock)
    }
    // Get last block
    lastBlock, err := historyDB.GetLastBlock()
    assert.NoError(t, err)
    assertEqualBlock(t, &blocks[len(blocks)-1].Block, lastBlock)
}

func assertEqualBlock(t *testing.T, expected *common.Block, actual *common.Block) {
    assert.Equal(t, expected.Num, actual.Num)
    assert.Equal(t, expected.Hash, actual.Hash)
    assert.Equal(t, expected.Timestamp.Unix(), actual.Timestamp.Unix())
}
func TestBatches(t *testing.T) {
    // Reset DB
    test.WipeDB(historyDB.DB())
    // Generate batches using til (and blocks for foreign key)
    set := `
        Type: Blockchain
        AddToken(1) // Will have value in USD
        AddToken(2) // Will NOT have value in USD
        CreateAccountDeposit(1) A: 2000
        CreateAccountDeposit(2) A: 2000
        CreateAccountDeposit(1) B: 1000
        CreateAccountDeposit(2) B: 1000
        > batchL1
        > batchL1
        Transfer(1) A-B: 100 (5)
        Transfer(2) B-A: 100 (199)
        > batch // batchNum=2, L2 only batch, forges transfers (mixed case of with(out) USD value)
        > block
        Transfer(1) A-B: 100 (5)
        > batch // batchNum=3, L2 only batch, forges transfer (with USD value)
        Transfer(2) B-A: 100 (199)
        > batch // batchNum=4, L2 only batch, forges transfer (without USD value)
        > block
    `
    tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
    tilCfgExtra := til.ConfigExtra{
        BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
        CoordUser:     "A",
    }
    blocks, err := tc.GenerateBlocks(set)
    require.NoError(t, err)
    err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
    require.NoError(t, err)
    // Insert to DB
    batches := []common.Batch{}
    tokensValue := make(map[common.TokenID]float64)
    lastL1TxsNum := new(int64)
    lastL1BatchBlockNum := int64(0)
    for _, block := range blocks {
        // Insert block
        assert.NoError(t, historyDB.AddBlock(&block.Block))
        // Insert tokens
        for i, token := range block.Rollup.AddedTokens {
            assert.NoError(t, historyDB.AddToken(&token)) //nolint:gosec
            if i%2 != 0 {
                // Set value to the token
                value := (float64(i) + 5) * 5.389329
                assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
                tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
            }
        }
        // Combine all generated batches into a single array
        for _, batch := range block.Rollup.Batches {
            batches = append(batches, batch.Batch)
            forgeTxsNum := batch.Batch.ForgeL1TxsNum
            if forgeTxsNum != nil && (lastL1TxsNum == nil || *lastL1TxsNum < *forgeTxsNum) {
                *lastL1TxsNum = *forgeTxsNum
                lastL1BatchBlockNum = batch.Batch.EthBlockNum
            }
        }
    }
    // Insert batches
    assert.NoError(t, historyDB.AddBatches(batches))
    // Set expected total fee
    for _, batch := range batches {
        total := .0
        for tokenID, amount := range batch.CollectedFees {
            af := new(big.Float).SetInt(amount)
            amountFloat, _ := af.Float64()
            total += tokensValue[tokenID] * amountFloat
        }
        batch.TotalFeesUSD = &total
    }
    // Get batches from the DB
    fetchedBatches, err := historyDB.GetBatches(0, common.BatchNum(len(batches)+1))
    assert.NoError(t, err)
    assert.Equal(t, len(batches), len(fetchedBatches))
    for i, fetchedBatch := range fetchedBatches {
        assert.Equal(t, batches[i], fetchedBatch)
    }
    // Test GetLastBatchNum
    fetchedLastBatchNum, err := historyDB.GetLastBatchNum()
    assert.NoError(t, err)
    assert.Equal(t, batches[len(batches)-1].BatchNum, fetchedLastBatchNum)
    // Test GetLastBatch
    fetchedLastBatch, err := historyDB.GetLastBatch()
    assert.NoError(t, err)
    assert.Equal(t, &batches[len(batches)-1], fetchedLastBatch)
    // Test GetLastL1TxsNum
    fetchedLastL1TxsNum, err := historyDB.GetLastL1TxsNum()
    assert.NoError(t, err)
    assert.Equal(t, lastL1TxsNum, fetchedLastL1TxsNum)
    // Test GetLastL1BatchBlockNum
    fetchedLastL1BatchBlockNum, err := historyDB.GetLastL1BatchBlockNum()
    assert.NoError(t, err)
    assert.Equal(t, lastL1BatchBlockNum, fetchedLastL1BatchBlockNum)
    // Test GetBatch
    fetchedBatch, err := historyDB.GetBatch(1)
    require.NoError(t, err)
    assert.Equal(t, &batches[0], fetchedBatch)
    _, err = historyDB.GetBatch(common.BatchNum(len(batches) + 1))
    assert.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
}
func TestBids(t *testing.T) {
    const fromBlock int64 = 1
    const toBlock int64 = 5
    // Prepare blocks in the DB
    blocks := setTestBlocks(fromBlock, toBlock)
    // Generate fake coordinators
    const nCoords = 5
    coords := test.GenCoordinators(nCoords, blocks)
    err := historyDB.AddCoordinators(coords)
    assert.NoError(t, err)
    // Generate fake bids
    const nBids = 20
    bids := test.GenBids(nBids, blocks, coords)
    err = historyDB.AddBids(bids)
    assert.NoError(t, err)
    // Fetch bids
    fetchedBids, err := historyDB.GetAllBids()
    assert.NoError(t, err)
    // Compare fetched bids vs generated bids
    for i, bid := range fetchedBids {
        assert.Equal(t, bids[i], bid)
    }
}
func TestTokens(t *testing.T) {
    const fromBlock int64 = 1
    const toBlock int64 = 5
    // Prepare blocks in the DB
    blocks := setTestBlocks(fromBlock, toBlock)
    // Generate fake tokens
    const nTokens = 5
    tokens, ethToken := test.GenTokens(nTokens, blocks)
    err := historyDB.AddTokens(tokens)
    assert.NoError(t, err)
    tokens = append([]common.Token{ethToken}, tokens...)
    // Fetch tokens
    fetchedTokens, err := historyDB.GetTokensTest()
    assert.NoError(t, err)
    // Compare fetched tokens vs generated tokens
    // All the tokens should have USDUpdate set by the DB trigger
    for i, token := range fetchedTokens {
        assert.Equal(t, tokens[i].TokenID, token.TokenID)
        assert.Equal(t, tokens[i].EthBlockNum, token.EthBlockNum)
        assert.Equal(t, tokens[i].EthAddr, token.EthAddr)
        assert.Equal(t, tokens[i].Name, token.Name)
        assert.Equal(t, tokens[i].Symbol, token.Symbol)
        assert.Nil(t, token.USD)
        assert.Nil(t, token.USDUpdate)
    }
    // Update token value
    for i, token := range tokens {
        value := 1.01 * float64(i)
        assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
    }
    // Fetch tokens
    fetchedTokens, err = historyDB.GetTokensTest()
    assert.NoError(t, err)
    // Compare fetched tokens vs generated tokens
    // All the tokens should have USDUpdate set by the DB trigger
    for i, token := range fetchedTokens {
        value := 1.01 * float64(i)
        assert.Equal(t, value, *token.USD)
        nameZone, offset := token.USDUpdate.Zone()
        assert.Equal(t, "UTC", nameZone)
        assert.Equal(t, 0, offset)
    }
}
func TestTokensUTF8(t *testing.T) {
    // Reset DB
    test.WipeDB(historyDB.DB())
    const fromBlock int64 = 1
    const toBlock int64 = 5
    // Prepare blocks in the DB
    blocks := setTestBlocks(fromBlock, toBlock)
    // Generate fake tokens
    const nTokens = 5
    tokens, ethToken := test.GenTokens(nTokens, blocks)
    nonUTFTokens := make([]common.Token, len(tokens))
    // Force token.Name and token.Symbol to be non-UTF-8 strings
    for i, token := range tokens {
        token.Name = fmt.Sprint("NON-UTF8-NAME-\xc5-", i)
        token.Symbol = fmt.Sprint("S-\xc5-", i)
        tokens[i] = token
        nonUTFTokens[i] = token
    }
    err := historyDB.AddTokens(tokens)
    assert.NoError(t, err)
    // Work with nonUTFTokens, as the tokens slice gets updated and the non-UTF-8 characters are lost
    nonUTFTokens = append([]common.Token{ethToken}, nonUTFTokens...)
    // Fetch tokens
    fetchedTokens, err := historyDB.GetTokensTest()
    assert.NoError(t, err)
    // Compare fetched tokens vs generated tokens
    // All the tokens should have USDUpdate set by the DB trigger
    for i, token := range fetchedTokens {
        assert.Equal(t, nonUTFTokens[i].TokenID, token.TokenID)
        assert.Equal(t, nonUTFTokens[i].EthBlockNum, token.EthBlockNum)
        assert.Equal(t, nonUTFTokens[i].EthAddr, token.EthAddr)
        assert.Equal(t, strings.ToValidUTF8(nonUTFTokens[i].Name, " "), token.Name)
        assert.Equal(t, strings.ToValidUTF8(nonUTFTokens[i].Symbol, " "), token.Symbol)
        assert.Nil(t, token.USD)
        assert.Nil(t, token.USDUpdate)
    }
    // Update token value
    for i, token := range nonUTFTokens {
        value := 1.01 * float64(i)
        assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
    }
    // Fetch tokens
    fetchedTokens, err = historyDB.GetTokensTest()
    assert.NoError(t, err)
    // Compare fetched tokens vs generated tokens
    // All the tokens should have USDUpdate set by the DB trigger
    for i, token := range fetchedTokens {
        value := 1.01 * float64(i)
        assert.Equal(t, value, *token.USD)
        nameZone, offset := token.USDUpdate.Zone()
        assert.Equal(t, "UTC", nameZone)
        assert.Equal(t, 0, offset)
    }
}
func TestAccounts(t *testing.T) {
    const fromBlock int64 = 1
    const toBlock int64 = 5
    // Prepare blocks in the DB
    blocks := setTestBlocks(fromBlock, toBlock)
    // Generate fake tokens
    const nTokens = 5
    tokens, ethToken := test.GenTokens(nTokens, blocks)
    err := historyDB.AddTokens(tokens)
    assert.NoError(t, err)
    tokens = append([]common.Token{ethToken}, tokens...)
    // Generate fake batches
    const nBatches = 10
    batches := test.GenBatches(nBatches, blocks)
    err = historyDB.AddBatches(batches)
    assert.NoError(t, err)
    // Generate fake accounts
    const nAccounts = 3
    accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
    err = historyDB.AddAccounts(accs)
    assert.NoError(t, err)
    // Fetch accounts
    fetchedAccs, err := historyDB.GetAllAccounts()
    assert.NoError(t, err)
    // Compare fetched accounts vs generated accounts
    for i, acc := range fetchedAccs {
        accs[i].Balance = nil
        assert.Equal(t, accs[i], acc)
    }
    // Test AccountBalances
    accUpdates := make([]common.AccountUpdate, len(accs))
    for i, acc := range accs {
        accUpdates[i] = common.AccountUpdate{
            EthBlockNum: batches[acc.BatchNum-1].EthBlockNum,
            BatchNum:    acc.BatchNum,
            Idx:         acc.Idx,
            Nonce:       common.Nonce(i),
            Balance:     big.NewInt(int64(i)),
        }
    }
    err = historyDB.AddAccountUpdates(accUpdates)
    require.NoError(t, err)
    fetchedAccBalances, err := historyDB.GetAllAccountUpdates()
    require.NoError(t, err)
    assert.Equal(t, accUpdates, fetchedAccBalances)
}
func TestTxs(t *testing.T) {
    // Reset DB
    test.WipeDB(historyDB.DB())
    set := `
        Type: Blockchain
        AddToken(1)
        AddToken(2)
        CreateAccountDeposit(1) A: 10
        CreateAccountDeposit(1) B: 10
        > batchL1
        > batchL1
        > block
        CreateAccountDepositTransfer(1) C-A: 20, 10
        CreateAccountCoordinator(1) User0
        > batchL1
        > batchL1
        > block
        Deposit(1) B: 10
        Deposit(1) C: 10
        Transfer(1) C-A : 10 (1)
        Transfer(1) B-C : 10 (1)
        Transfer(1) A-B : 10 (1)
        Exit(1) A: 10 (1)
        > batch
        > block
        DepositTransfer(1) A-B: 10, 10
        > batchL1
        > block
        ForceTransfer(1) A-B: 10
        ForceExit(1) A: 5
        > batchL1
        > batchL1
        > block
        CreateAccountDeposit(2) D: 10
        > batchL1
        > block
        CreateAccountDeposit(2) E: 10
        > batchL1
        > batchL1
        > block
    `
    tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
    tilCfgExtra := til.ConfigExtra{
        BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
        CoordUser:     "A",
    }
    blocks, err := tc.GenerateBlocks(set)
    require.NoError(t, err)
    err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
    require.NoError(t, err)
    // Sanity check
    require.Equal(t, 7, len(blocks))
    require.Equal(t, 2, len(blocks[0].Rollup.L1UserTxs))
    require.Equal(t, 1, len(blocks[1].Rollup.L1UserTxs))
    require.Equal(t, 2, len(blocks[2].Rollup.L1UserTxs))
    require.Equal(t, 1, len(blocks[3].Rollup.L1UserTxs))
    require.Equal(t, 2, len(blocks[4].Rollup.L1UserTxs))
    require.Equal(t, 1, len(blocks[5].Rollup.L1UserTxs))
    require.Equal(t, 1, len(blocks[6].Rollup.L1UserTxs))
    var null *common.BatchNum = nil
    var txID common.TxID
    // Insert blocks into DB
    for i := range blocks {
        if i == len(blocks)-1 {
            blocks[i].Block.Timestamp = time.Now()
            dbL1Txs, err := historyDB.GetAllL1UserTxs()
            assert.NoError(t, err)
            // Check batch_num is nil before forging
            assert.Equal(t, null, dbL1Txs[len(dbL1Txs)-1].BatchNum)
            // Save this TxID
            txID = dbL1Txs[len(dbL1Txs)-1].TxID
        }
        err = historyDB.AddBlockSCData(&blocks[i])
        assert.NoError(t, err)
    }
    // Check blocks
    dbBlocks, err := historyDB.GetAllBlocks()
    assert.NoError(t, err)
    assert.Equal(t, len(blocks)+1, len(dbBlocks))
    // Check batches
    batches, err := historyDB.GetAllBatches()
    assert.NoError(t, err)
    assert.Equal(t, 11, len(batches))
    // Check L1 Transactions
    dbL1Txs, err := historyDB.GetAllL1UserTxs()
    assert.NoError(t, err)
    assert.Equal(t, 10, len(dbL1Txs))
    // Tx Type
    assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[0].Type)
    assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[1].Type)
    assert.Equal(t, common.TxTypeCreateAccountDepositTransfer, dbL1Txs[2].Type)
    assert.Equal(t, common.TxTypeDeposit, dbL1Txs[3].Type)
    assert.Equal(t, common.TxTypeDeposit, dbL1Txs[4].Type)
    assert.Equal(t, common.TxTypeDepositTransfer, dbL1Txs[5].Type)
    assert.Equal(t, common.TxTypeForceTransfer, dbL1Txs[6].Type)
    assert.Equal(t, common.TxTypeForceExit, dbL1Txs[7].Type)
    assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[8].Type)
    assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[9].Type)
    // Tx ID
    assert.Equal(t, "0x00e979da4b80d60a17ce56fa19278c6f3a7e1b43359fb8a8ea46d0264de7d653ab", dbL1Txs[0].TxID.String())
    assert.Equal(t, "0x00af9bf96eb60f2d618519402a2f6b07057a034fa2baefd379fe8e1c969f1c5cf4", dbL1Txs[1].TxID.String())
    assert.Equal(t, "0x00a256ee191905243320ea830840fd666a73c7b4e6f89ce4bd47ddf998dfee627a", dbL1Txs[2].TxID.String())
    assert.Equal(t, "0x00930696d03ae0a1e6150b6ccb88043cb539a4e06a7f8baf213029ce9a0600197e", dbL1Txs[3].TxID.String())
    assert.Equal(t, "0x00de8e41d49f23832f66364e8702c4b78237eb0c95542a94d34188e51696e74fc8", dbL1Txs[4].TxID.String())
    assert.Equal(t, "0x007a44d6d60b15f3789d4ff49d62377a70255bf13a8d42e41ef49bf4c7b77d2c1b", dbL1Txs[5].TxID.String())
    assert.Equal(t, "0x00c33f316240f8d33a973db2d0e901e4ac1c96de30b185fcc6b63dac4d0e147bd4", dbL1Txs[6].TxID.String())
    assert.Equal(t, "0x00b55f0882c5229d1be3d9d3c1a076290f249cd0bae5ae6e609234606befb91233", dbL1Txs[7].TxID.String())
    assert.Equal(t, "0x009133d4c8a412ca45f50bccdbcfdb8393b0dd8efe953d0cc3bcc82796b7a581b6", dbL1Txs[8].TxID.String())
    assert.Equal(t, "0x00f5e8ab141ac16d673e654ba7747c2f12e93ea2c50ba6c05563752ca531968c62", dbL1Txs[9].TxID.String())
    // Tx From Idx
    assert.Equal(t, common.Idx(0), dbL1Txs[0].FromIdx)
    assert.Equal(t, common.Idx(0), dbL1Txs[1].FromIdx)
    assert.Equal(t, common.Idx(0), dbL1Txs[2].FromIdx)
    assert.NotEqual(t, common.Idx(0), dbL1Txs[3].FromIdx)
    assert.NotEqual(t, common.Idx(0), dbL1Txs[4].FromIdx)
    assert.NotEqual(t, common.Idx(0), dbL1Txs[5].FromIdx)
    assert.NotEqual(t, common.Idx(0), dbL1Txs[6].FromIdx)
    assert.NotEqual(t, common.Idx(0), dbL1Txs[7].FromIdx)
    assert.Equal(t, common.Idx(0), dbL1Txs[8].FromIdx)
    assert.Equal(t, common.Idx(0), dbL1Txs[9].FromIdx)
    assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[6].FromIdx)
    assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[7].FromIdx)
    // Tx To Idx
    assert.Equal(t, dbL1Txs[2].ToIdx, dbL1Txs[5].FromIdx)
    assert.Equal(t, dbL1Txs[5].ToIdx, dbL1Txs[3].FromIdx)
    assert.Equal(t, dbL1Txs[6].ToIdx, dbL1Txs[3].FromIdx)
    // Token ID
    assert.Equal(t, common.TokenID(1), dbL1Txs[0].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[1].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[2].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[3].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[4].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[5].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[6].TokenID)
    assert.Equal(t, common.TokenID(1), dbL1Txs[7].TokenID)
    assert.Equal(t, common.TokenID(2), dbL1Txs[8].TokenID)
    assert.Equal(t, common.TokenID(2), dbL1Txs[9].TokenID)
    // Batch Number
    var bn common.BatchNum = common.BatchNum(2)
    assert.Equal(t, &bn, dbL1Txs[0].BatchNum)
    assert.Equal(t, &bn, dbL1Txs[1].BatchNum)
    bn = common.BatchNum(4)
    assert.Equal(t, &bn, dbL1Txs[2].BatchNum)
    bn = common.BatchNum(7)
    assert.Equal(t, &bn, dbL1Txs[3].BatchNum)
    assert.Equal(t, &bn, dbL1Txs[4].BatchNum)
    assert.Equal(t, &bn, dbL1Txs[5].BatchNum)
    bn = common.BatchNum(8)
    assert.Equal(t, &bn, dbL1Txs[6].BatchNum)
    assert.Equal(t, &bn, dbL1Txs[7].BatchNum)
    bn = common.BatchNum(10)
    assert.Equal(t, &bn, dbL1Txs[8].BatchNum)
    bn = common.BatchNum(11)
    assert.Equal(t, &bn, dbL1Txs[9].BatchNum)
    // eth_block_num
    assert.Equal(t, int64(2), dbL1Txs[0].EthBlockNum)
    assert.Equal(t, int64(2), dbL1Txs[1].EthBlockNum)
    assert.Equal(t, int64(3), dbL1Txs[2].EthBlockNum)
    assert.Equal(t, int64(4), dbL1Txs[3].EthBlockNum)
    assert.Equal(t, int64(4), dbL1Txs[4].EthBlockNum)
    assert.Equal(t, int64(5), dbL1Txs[5].EthBlockNum)
    assert.Equal(t, int64(6), dbL1Txs[6].EthBlockNum)
    assert.Equal(t, int64(6), dbL1Txs[7].EthBlockNum)
    assert.Equal(t, int64(7), dbL1Txs[8].EthBlockNum)
    assert.Equal(t, int64(8), dbL1Txs[9].EthBlockNum)
    // User Origin
    assert.Equal(t, true, dbL1Txs[0].UserOrigin)
    assert.Equal(t, true, dbL1Txs[1].UserOrigin)
    assert.Equal(t, true, dbL1Txs[2].UserOrigin)
    assert.Equal(t, true, dbL1Txs[3].UserOrigin)
    assert.Equal(t, true, dbL1Txs[4].UserOrigin)
    assert.Equal(t, true, dbL1Txs[5].UserOrigin)
    assert.Equal(t, true, dbL1Txs[6].UserOrigin)
    assert.Equal(t, true, dbL1Txs[7].UserOrigin)
    assert.Equal(t, true, dbL1Txs[8].UserOrigin)
    assert.Equal(t, true, dbL1Txs[9].UserOrigin)
    // Deposit Amount
    assert.Equal(t, big.NewInt(10), dbL1Txs[0].DepositAmount)
    assert.Equal(t, big.NewInt(10), dbL1Txs[1].DepositAmount)
    assert.Equal(t, big.NewInt(20), dbL1Txs[2].DepositAmount)
    assert.Equal(t, big.NewInt(10), dbL1Txs[3].DepositAmount)
    assert.Equal(t, big.NewInt(10), dbL1Txs[4].DepositAmount)
    assert.Equal(t, big.NewInt(10), dbL1Txs[5].DepositAmount)
    assert.Equal(t, big.NewInt(0), dbL1Txs[6].DepositAmount)
    assert.Equal(t, big.NewInt(0), dbL1Txs[7].DepositAmount)
    assert.Equal(t, big.NewInt(10), dbL1Txs[8].DepositAmount)
    assert.Equal(t, big.NewInt(10), dbL1Txs[9].DepositAmount)
    // Check saved txID's batch_num is not nil
    assert.Equal(t, txID, dbL1Txs[len(dbL1Txs)-2].TxID)
    assert.NotEqual(t, null, dbL1Txs[len(dbL1Txs)-2].BatchNum)
    // Check Coordinator TXs
    coordTxs, err := historyDB.GetAllL1CoordinatorTxs()
    assert.NoError(t, err)
    assert.Equal(t, 1, len(coordTxs))
    assert.Equal(t, common.TxTypeCreateAccountDeposit, coordTxs[0].Type)
    assert.Equal(t, false, coordTxs[0].UserOrigin)
    // Check L2 TXs
    dbL2Txs, err := historyDB.GetAllL2Txs()
    assert.NoError(t, err)
    assert.Equal(t, 4, len(dbL2Txs))
    // Tx Type
    assert.Equal(t, common.TxTypeTransfer, dbL2Txs[0].Type)
    assert.Equal(t, common.TxTypeTransfer, dbL2Txs[1].Type)
    assert.Equal(t, common.TxTypeTransfer, dbL2Txs[2].Type)
    assert.Equal(t, common.TxTypeExit, dbL2Txs[3].Type)
    // Tx ID
    assert.Equal(t, "0x024e555248100b69a8aabf6d31719b9fe8a60dcc6c3407904a93c8d2d9ade18ee5", dbL2Txs[0].TxID.String())
    assert.Equal(t, "0x021ae87ca34d50ff35d98dfc0d7c95f2bf2e4ffeebb82ea71f43a8b0dfa5d36d89", dbL2Txs[1].TxID.String())
    assert.Equal(t, "0x024abce7f3f2382dc520ed557593f11dea1ee197e55b60402e664facc27aa19774", dbL2Txs[2].TxID.String())
    assert.Equal(t, "0x02f921ad9e7a6e59606570fe12a7dde0e36014197de0363b9b45e5097d6f2b1dd0", dbL2Txs[3].TxID.String())
    // Tx From and To Idx
    assert.Equal(t, dbL2Txs[0].ToIdx, dbL2Txs[2].FromIdx)
    assert.Equal(t, dbL2Txs[1].ToIdx, dbL2Txs[0].FromIdx)
    assert.Equal(t, dbL2Txs[2].ToIdx, dbL2Txs[1].FromIdx)
    // Batch Number
    assert.Equal(t, common.BatchNum(5), dbL2Txs[0].BatchNum)
    assert.Equal(t, common.BatchNum(5), dbL2Txs[1].BatchNum)
    assert.Equal(t, common.BatchNum(5), dbL2Txs[2].BatchNum)
    assert.Equal(t, common.BatchNum(5), dbL2Txs[3].BatchNum)
    // eth_block_num
    assert.Equal(t, int64(4), dbL2Txs[0].EthBlockNum)
    assert.Equal(t, int64(4), dbL2Txs[1].EthBlockNum)
    assert.Equal(t, int64(4), dbL2Txs[2].EthBlockNum)
    // Amount
    assert.Equal(t, big.NewInt(10), dbL2Txs[0].Amount)
    assert.Equal(t, big.NewInt(10), dbL2Txs[1].Amount)
    assert.Equal(t, big.NewInt(10), dbL2Txs[2].Amount)
    assert.Equal(t, big.NewInt(10), dbL2Txs[3].Amount)
}
func TestExitTree(t *testing.T) {
    nBatches := 17
    blocks := setTestBlocks(1, 10)
    batches := test.GenBatches(nBatches, blocks)
    err := historyDB.AddBatches(batches)
    assert.NoError(t, err)
    const nTokens = 50
    tokens, ethToken := test.GenTokens(nTokens, blocks)
    err = historyDB.AddTokens(tokens)
    assert.NoError(t, err)
    tokens = append([]common.Token{ethToken}, tokens...)
    const nAccounts = 3
    accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
    assert.NoError(t, historyDB.AddAccounts(accs))
    exitTree := test.GenExitTree(nBatches, batches, accs, blocks)
    err = historyDB.AddExitTree(exitTree)
    assert.NoError(t, err)
}
func TestGetUnforgedL1UserTxs(t *testing.T) {
    test.WipeDB(historyDB.DB())
    set := `
        Type: Blockchain
        AddToken(1)
        AddToken(2)
        AddToken(3)
        CreateAccountDeposit(1) A: 20
        CreateAccountDeposit(2) A: 20
        CreateAccountDeposit(1) B: 5
        CreateAccountDeposit(1) C: 5
        CreateAccountDeposit(1) D: 5
        > block
    `
    tc := til.NewContext(uint16(0), 128)
    blocks, err := tc.GenerateBlocks(set)
    require.NoError(t, err)
    // Sanity check
    require.Equal(t, 1, len(blocks))
    require.Equal(t, 5, len(blocks[0].Rollup.L1UserTxs))
    toForgeL1TxsNum := int64(1)
    for i := range blocks {
        err = historyDB.AddBlockSCData(&blocks[i])
        require.NoError(t, err)
    }
    l1UserTxs, err := historyDB.GetUnforgedL1UserTxs(toForgeL1TxsNum)
    require.NoError(t, err)
    assert.Equal(t, 5, len(l1UserTxs))
    assert.Equal(t, blocks[0].Rollup.L1UserTxs, l1UserTxs)
    count, err := historyDB.GetUnforgedL1UserTxsCount()
    require.NoError(t, err)
    assert.Equal(t, 5, count)
    // No l1UserTxs for this toForgeL1TxsNum
    l1UserTxs, err = historyDB.GetUnforgedL1UserTxs(2)
    require.NoError(t, err)
    assert.Equal(t, 0, len(l1UserTxs))
}
func exampleInitSCVars() (*common.RollupVariables, *common.AuctionVariables, *common.WDelayerVariables) {
    //nolint:govet
    rollup := &common.RollupVariables{
        0,
        big.NewInt(10),
        12,
        13,
        []common.BucketParams{},
        false,
    }
    //nolint:govet
    auction := &common.AuctionVariables{
        0,
        ethCommon.BigToAddress(big.NewInt(2)),
        ethCommon.BigToAddress(big.NewInt(3)),
        "https://boot.coord.com",
        [6]*big.Int{
            big.NewInt(1), big.NewInt(2), big.NewInt(3),
            big.NewInt(4), big.NewInt(5), big.NewInt(6),
        },
        0,
        2,
        4320,
        [3]uint16{10, 11, 12},
        1000,
        20,
    }
    //nolint:govet
    wDelayer := &common.WDelayerVariables{
        0,
        ethCommon.BigToAddress(big.NewInt(2)),
        ethCommon.BigToAddress(big.NewInt(3)),
        13,
        14,
        false,
    }
    return rollup, auction, wDelayer
}
func TestSetInitialSCVars(t *testing.T) {
    test.WipeDB(historyDB.DB())
    _, _, _, err := historyDB.GetSCVars()
    assert.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
    rollup, auction, wDelayer := exampleInitSCVars()
    err = historyDB.SetInitialSCVars(rollup, auction, wDelayer)
    require.NoError(t, err)
    dbRollup, dbAuction, dbWDelayer, err := historyDB.GetSCVars()
    require.NoError(t, err)
    require.Equal(t, rollup, dbRollup)
    require.Equal(t, auction, dbAuction)
    require.Equal(t, wDelayer, dbWDelayer)
}
func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
    test.WipeDB(historyDB.DB())
    set := `
        Type: Blockchain
        AddToken(1)
        CreateAccountDeposit(1) A: 2000
        CreateAccountDeposit(1) B: 500
        CreateAccountDeposit(1) C: 500
        > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{*}
        > block // blockNum=2
        > batchL1 // forge defined L1UserTxs{*}
        > block // blockNum=3
    `
    tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
    tilCfgExtra := til.ConfigExtra{
        BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
        CoordUser:     "A",
    }
    blocks, err := tc.GenerateBlocks(set)
    require.NoError(t, err)
    err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
    require.NoError(t, err)
    err = tc.FillBlocksForgedL1UserTxs(blocks)
    require.NoError(t, err)
    // Add only first block so that the L1UserTxs are not marked as forged
    for i := range blocks[:1] {
        err = historyDB.AddBlockSCData(&blocks[i])
        require.NoError(t, err)
    }
    // Add second batch to trigger the update of the batch_num,
    // while avoiding the implicit call of setExtraInfoForgedL1UserTxs
    err = historyDB.addBlock(historyDB.dbWrite, &blocks[1].Block)
    require.NoError(t, err)
    err = historyDB.addBatch(historyDB.dbWrite, &blocks[1].Rollup.Batches[0].Batch)
    require.NoError(t, err)
    err = historyDB.addAccounts(historyDB.dbWrite, blocks[1].Rollup.Batches[0].CreatedAccounts)
    require.NoError(t, err)
    // Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block
    l1Txs := blocks[1].Rollup.Batches[0].L1UserTxs
    require.Equal(t, 3, len(l1Txs))
    // Change some values to test all cases
    l1Txs[1].EffectiveAmount = big.NewInt(0)
    l1Txs[2].EffectiveDepositAmount = big.NewInt(0)
    l1Txs[2].EffectiveAmount = big.NewInt(0)
    err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.dbWrite, l1Txs)
    require.NoError(t, err)
    dbL1Txs, err := historyDB.GetAllL1UserTxs()
    require.NoError(t, err)
    for i, tx := range dbL1Txs {
        log.Infof("%d %v %v", i, tx.EffectiveAmount, tx.EffectiveDepositAmount)
        assert.NotNil(t, tx.EffectiveAmount)
        assert.NotNil(t, tx.EffectiveDepositAmount)
        switch tx.TxID {
        case l1Txs[0].TxID:
            assert.Equal(t, l1Txs[0].DepositAmount, tx.EffectiveDepositAmount)
            assert.Equal(t, l1Txs[0].Amount, tx.EffectiveAmount)
        case l1Txs[1].TxID:
            assert.Equal(t, l1Txs[1].DepositAmount, tx.EffectiveDepositAmount)
            assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
        case l1Txs[2].TxID:
            assert.Equal(t, big.NewInt(0), tx.EffectiveDepositAmount)
            assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
        }
    }
}

func TestUpdateExitTree(t *testing.T) {
	test.WipeDB(historyDB.DB())

	set := `
		Type: Blockchain
		AddToken(1)
		CreateAccountDeposit(1) C: 2000 // Idx=256+2=258
		CreateAccountDeposit(1) D: 500  // Idx=256+3=259
		CreateAccountCoordinator(1) A // Idx=256+0=256
		CreateAccountCoordinator(1) B // Idx=256+1=257
		> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{5}
		> batchL1 // forge defined L1UserTxs{5}, freeze L1UserTxs{nil}
		> block // blockNum=2
		ForceExit(1) A: 100
		ForceExit(1) B: 80
		Exit(1) C: 50 (172)
		Exit(1) D: 30 (172)
		> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{3}
		> batchL1 // forge L1UserTxs{3}, freeze defined L1UserTxs{nil}
		> block // blockNum=3
		> block // blockNum=4 (empty block)
		> block // blockNum=5 (empty block)
	`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)

	// Add all blocks except for the last two
	for i := range blocks[:len(blocks)-2] {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}

	// Add withdrawals to the second-to-last block, and insert the block into the DB
	block := &blocks[len(blocks)-2]
	require.Equal(t, int64(4), block.Block.Num)
	tokenAddr := blocks[0].Rollup.AddedTokens[0].EthAddr
	// block.WDelayer.Deposits = append(block.WDelayer.Deposits,
	// 	common.WDelayerTransfer{Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr, Amount: big.NewInt(80)}, // 257
	// 	common.WDelayerTransfer{Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr, Amount: big.NewInt(15)}, // 259
	// )
	block.Rollup.Withdrawals = append(block.Rollup.Withdrawals,
		common.WithdrawInfo{Idx: 256, NumExitRoot: 4, InstantWithdraw: true},
		common.WithdrawInfo{Idx: 257, NumExitRoot: 4, InstantWithdraw: false,
			Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr},
		common.WithdrawInfo{Idx: 258, NumExitRoot: 3, InstantWithdraw: true},
		common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false,
			Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr},
	)
	err = historyDB.addBlock(historyDB.dbWrite, &block.Block)
	require.NoError(t, err)
	err = historyDB.updateExitTree(historyDB.dbWrite, block.Block.Num,
		block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
	require.NoError(t, err)

	// Check that the exits in the DB match the expected values
	dbExits, err := historyDB.GetAllExits()
	require.NoError(t, err)
	assert.Equal(t, 4, len(dbExits))
	dbExitsByIdx := make(map[common.Idx]common.ExitInfo)
	for _, dbExit := range dbExits {
		dbExitsByIdx[dbExit.AccountIdx] = dbExit
	}
	for _, withdraw := range block.Rollup.Withdrawals {
		assert.Equal(t, withdraw.NumExitRoot, dbExitsByIdx[withdraw.Idx].BatchNum)
		if withdraw.InstantWithdraw {
			assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].InstantWithdrawn)
		} else {
			assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].DelayedWithdrawRequest)
		}
	}

	// Add a delayed withdrawal to the last block, and insert the block into the DB
	block = &blocks[len(blocks)-1]
	require.Equal(t, int64(5), block.Block.Num)
	block.WDelayer.Withdrawals = append(block.WDelayer.Withdrawals,
		common.WDelayerTransfer{
			Owner:  tc.UsersByIdx[257].Addr,
			Token:  tokenAddr,
			Amount: big.NewInt(80),
		})
	err = historyDB.addBlock(historyDB.dbWrite, &block.Block)
	require.NoError(t, err)
	err = historyDB.updateExitTree(historyDB.dbWrite, block.Block.Num,
		block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
	require.NoError(t, err)

	// Check that the delayed withdrawal has been set
	dbExits, err = historyDB.GetAllExits()
	require.NoError(t, err)
	for _, dbExit := range dbExits {
		dbExitsByIdx[dbExit.AccountIdx] = dbExit
	}
	require.Equal(t, &block.Block.Num, dbExitsByIdx[257].DelayedWithdrawn)
}
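As a rough sketch of the life cycle the assertions above exercise: an instant withdrawal marks the exit as withdrawn at the block where the event was seen, a non-instant one only records the request, and a later WDelayer transfer completes it. The types and helper names below are made up for illustration (the real row is `common.ExitInfo` with more fields); this is not the actual `updateExitTree` implementation.

```go
package main

import "fmt"

// ExitState is a simplified, hypothetical view of an exit_tree row.
type ExitState struct {
	InstantWithdrawn       *int64
	DelayedWithdrawRequest *int64
	DelayedWithdrawn       *int64
}

// applyWithdraw mirrors the effect of one Rollup withdrawal event seen at
// blockNum: instant withdrawals complete immediately, others only record
// the request.
func applyWithdraw(e *ExitState, blockNum int64, instant bool) {
	if instant {
		e.InstantWithdrawn = &blockNum
	} else {
		e.DelayedWithdrawRequest = &blockNum
	}
}

// applyDelayedTransfer mirrors the effect of a later WDelayer transfer
// event that releases the previously requested funds.
func applyDelayedTransfer(e *ExitState, blockNum int64) {
	e.DelayedWithdrawn = &blockNum
}

func main() {
	var e ExitState
	applyWithdraw(&e, 4, false) // delayed withdrawal requested at block 4
	applyDelayedTransfer(&e, 5) // funds released by WDelayer at block 5
	fmt.Println(*e.DelayedWithdrawRequest, *e.DelayedWithdrawn)
}
```

This matches the test's two-step check: block 4 sets `DelayedWithdrawRequest`, block 5 sets `DelayedWithdrawn`.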

func TestGetBestBidCoordinator(t *testing.T) {
	test.WipeDB(historyDB.DB())

	rollup, auction, wDelayer := exampleInitSCVars()
	err := historyDB.SetInitialSCVars(rollup, auction, wDelayer)
	require.NoError(t, err)

	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocks(`
		Type: Blockchain
		> block // blockNum=2
	`)
	require.NoError(t, err)
	err = historyDB.AddBlockSCData(&blocks[0])
	require.NoError(t, err)

	coords := []common.Coordinator{
		{
			Bidder:      ethCommon.BigToAddress(big.NewInt(1)),
			Forger:      ethCommon.BigToAddress(big.NewInt(2)),
			EthBlockNum: 2,
			URL:         "foo",
		},
		{
			Bidder:      ethCommon.BigToAddress(big.NewInt(3)),
			Forger:      ethCommon.BigToAddress(big.NewInt(4)),
			EthBlockNum: 2,
			URL:         "bar",
		},
	}
	err = historyDB.addCoordinators(historyDB.dbWrite, coords)
	require.NoError(t, err)

	bids := []common.Bid{
		{
			SlotNum:     10,
			BidValue:    big.NewInt(10),
			EthBlockNum: 2,
			Bidder:      coords[0].Bidder,
		},
		{
			SlotNum:     10,
			BidValue:    big.NewInt(20),
			EthBlockNum: 2,
			Bidder:      coords[1].Bidder,
		},
	}
	err = historyDB.addBids(historyDB.dbWrite, bids)
	require.NoError(t, err)

	forger10, err := historyDB.GetBestBidCoordinator(10)
	require.NoError(t, err)
	require.Equal(t, coords[1].Forger, forger10.Forger)
	require.Equal(t, coords[1].Bidder, forger10.Bidder)
	require.Equal(t, coords[1].URL, forger10.URL)
	require.Equal(t, bids[1].SlotNum, forger10.SlotNum)
	require.Equal(t, bids[1].BidValue, forger10.BidValue)
	for i := range forger10.DefaultSlotSetBid {
		require.Equal(t, auction.DefaultSlotSetBid[i], forger10.DefaultSlotSetBid[i])
	}

	_, err = historyDB.GetBestBidCoordinator(11)
	require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
}

func TestAddBucketUpdates(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	bucketUpdates := []common.BucketUpdate{
		{
			EthBlockNum: 4,
			NumBucket:   0,
			BlockStamp:  4,
			Withdrawals: big.NewInt(123),
		},
		{
			EthBlockNum: 5,
			NumBucket:   2,
			BlockStamp:  5,
			Withdrawals: big.NewInt(42),
		},
	}
	err := historyDB.addBucketUpdates(historyDB.dbWrite, bucketUpdates)
	require.NoError(t, err)

	dbBucketUpdates, err := historyDB.GetAllBucketUpdates()
	require.NoError(t, err)
	assert.Equal(t, bucketUpdates, dbBucketUpdates)
}

func TestAddTokenExchanges(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	tokenExchanges := []common.TokenExchange{
		{
			EthBlockNum: 4,
			Address:     ethCommon.BigToAddress(big.NewInt(111)),
			ValueUSD:    12345,
		},
		{
			EthBlockNum: 5,
			Address:     ethCommon.BigToAddress(big.NewInt(222)),
			ValueUSD:    67890,
		},
	}
	err := historyDB.addTokenExchanges(historyDB.dbWrite, tokenExchanges)
	require.NoError(t, err)

	dbTokenExchanges, err := historyDB.GetAllTokenExchanges()
	require.NoError(t, err)
	assert.Equal(t, tokenExchanges, dbTokenExchanges)
}

func TestAddEscapeHatchWithdrawals(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	escapeHatchWithdrawals := []common.WDelayerEscapeHatchWithdrawal{
		{
			EthBlockNum: 4,
			Who:         ethCommon.BigToAddress(big.NewInt(111)),
			To:          ethCommon.BigToAddress(big.NewInt(222)),
			TokenAddr:   ethCommon.BigToAddress(big.NewInt(333)),
			Amount:      big.NewInt(10002),
		},
		{
			EthBlockNum: 5,
			Who:         ethCommon.BigToAddress(big.NewInt(444)),
			To:          ethCommon.BigToAddress(big.NewInt(555)),
			TokenAddr:   ethCommon.BigToAddress(big.NewInt(666)),
			Amount:      big.NewInt(20003),
		},
	}
	err := historyDB.addEscapeHatchWithdrawals(historyDB.dbWrite, escapeHatchWithdrawals)
	require.NoError(t, err)

	dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals()
	require.NoError(t, err)
	assert.Equal(t, escapeHatchWithdrawals, dbEscapeHatchWithdrawals)
}

func TestGetMetricsAPI(t *testing.T) {
	test.WipeDB(historyDB.DB())

	set := `
		Type: Blockchain
		AddToken(1)
		CreateAccountDeposit(1) A: 1000 // numTx=1
		CreateAccountDeposit(1) B: 2000 // numTx=2
		CreateAccountDeposit(1) C: 3000 // numTx=3
		// block 0 is stored as default in the DB
		// block 1 does not exist
		> batchL1 // numBatches=1
		> batchL1 // numBatches=2
		> block // blockNum=2
		Transfer(1) C-A : 10 (1) // numTx=4
		> batch // numBatches=3
		> block // blockNum=3
		Transfer(1) B-C : 10 (1) // numTx=5
		> batch // numBatches=4
		> block // blockNum=4
		Transfer(1) A-B : 10 (1) // numTx=6
		> batch // numBatches=5
		> block // blockNum=5
		Transfer(1) A-B : 10 (1) // numTx=7
		> batch // numBatches=6
		> block // blockNum=6
	`
	const numBatches int = 6
	const numTx int = 7
	const blockNum = 6 - 1

	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)

	// Sanity check
	require.Equal(t, blockNum, len(blocks))

	// Add one batch per block; the batch frequency can be chosen
	const frequency int = 15
	for i := range blocks {
		blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}

	res, err := historyDB.GetMetricsInternalAPI(common.BatchNum(numBatches))
	assert.NoError(t, err)
	assert.Equal(t, float64(numTx)/float64(numBatches), res.TransactionsPerBatch)
	// The measured frequency is not exactly the configured one, so some
	// decimals may appear.  The -2 is because the times of the first and
	// last batch are not taken into account.
	assert.InEpsilon(t, float64(frequency)*float64(numBatches-2)/float64(numBatches), res.BatchFrequency, 0.01)
	assert.InEpsilon(t, float64(numTx)/float64(frequency*blockNum-frequency), res.TransactionsPerSecond, 0.01)
	assert.Equal(t, int64(3), res.TotalAccounts)
	assert.Equal(t, int64(3), res.TotalBJJs)
	// Til does not set fees
	assert.Equal(t, float64(0), res.AvgTransactionFee)
}
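The averages asserted by TestGetMetricsAPI can be sketched outside the test. The helper names below are made up for illustration; the numbers mirror the test setup (7 txs, 6 batches, one batch every 15 seconds over 5 blocks), under the assumption that the metrics are plain averages as the assertions suggest.

```go
package main

import "fmt"

// txsPerBatch returns the average number of transactions per batch.
func txsPerBatch(numTx, numBatches int) float64 {
	return float64(numTx) / float64(numBatches)
}

// batchFrequency returns the average seconds between batches; the first
// and last batch bound the measured interval, hence the numBatches-2
// factor used in the test's assertion.
func batchFrequency(frequency, numBatches int) float64 {
	return float64(frequency) * float64(numBatches-2) / float64(numBatches)
}

// txsPerSecond returns transactions per second over the synced period
// (frequency*blockNum seconds, minus the first interval).
func txsPerSecond(numTx, frequency, blockNum int) float64 {
	return float64(numTx) / float64(frequency*blockNum-frequency)
}

func main() {
	fmt.Printf("%.4f %.4f %.4f\n",
		txsPerBatch(7, 6),        // ~1.1667 txs per batch
		batchFrequency(15, 6),    // 10 seconds between batches
		txsPerSecond(7, 15, 5))   // ~0.1167 txs per second
}
```

With these numbers the expected batch frequency is exactly 15*4/6 = 10, which is what the `InEpsilon` check tolerates around.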

func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
	test.WipeDB(historyDB.DB())

	testUsersLen := 3
	var set []til.Instruction
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeCreateAccountDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1000000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	// Transfers
	const numBlocks int = 30
	for x := 0; x < numBlocks; x++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeTransfer,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1),
			Amount:        big.NewInt(0),
			From:          "User00",
			To:            "User01",
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBatch})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}

	var chainID uint16 = 0
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)

	const numBatches int = 2 + numBlocks
	const blockNum = 4 + numBlocks

	// Sanity check
	require.Equal(t, blockNum, len(blocks))

	// Add one batch per block; the block time can be chosen
	const blockTime time.Duration = 3600 * time.Second
	now := time.Now()
	for i := range blocks {
		blocks[i].Block.Timestamp = now.Add(-time.Duration(len(blocks)-1-i) * blockTime)
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}

	res, err := historyDBWithACC.GetMetricsInternalAPI(common.BatchNum(numBatches))
	assert.NoError(t, err)
	assert.InEpsilon(t, 1.0, res.TransactionsPerBatch, 0.1)
	assert.InEpsilon(t, res.BatchFrequency, float64(blockTime/time.Second), 0.1)
	assert.InEpsilon(t, 1.0/float64(blockTime/time.Second), res.TransactionsPerSecond, 0.1)
	assert.Equal(t, int64(3), res.TotalAccounts)
	assert.Equal(t, int64(3), res.TotalBJJs)
	// Til does not set fees
	assert.Equal(t, float64(0), res.AvgTransactionFee)
}

func TestGetMetricsAPIEmpty(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDBWithACC.GetMetricsInternalAPI(0)
	assert.NoError(t, err)
}

func TestGetLastL1TxsNum(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetLastL1TxsNum()
	assert.NoError(t, err)
}

func TestGetLastTxsPosition(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetLastTxsPosition(0)
	assert.Equal(t, sql.ErrNoRows.Error(), err.Error())
}

func TestGetFirstBatchBlockNumBySlot(t *testing.T) {
	test.WipeDB(historyDB.DB())

	set := `
		Type: Blockchain
		// Slot = 0
		> block // 2
		> block // 3
		> block // 4
		> block // 5
		// Slot = 1
		> block // 6
		> block // 7
		> batch
		> block // 8
		> block // 9
		// Slot = 2
		> batch
		> block // 10
		> block // 11
		> block // 12
		> block // 13
	`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocks(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	for i := range blocks {
		for j := range blocks[i].Rollup.Batches {
			blocks[i].Rollup.Batches[j].Batch.SlotNum = int64(i) / 4
		}
	}

	// Add all blocks
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}

	_, err = historyDB.GetFirstBatchBlockNumBySlot(0)
	require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))

	bn1, err := historyDB.GetFirstBatchBlockNumBySlot(1)
	require.NoError(t, err)
	assert.Equal(t, int64(8), bn1)

	bn2, err := historyDB.GetFirstBatchBlockNumBySlot(2)
	require.NoError(t, err)
	assert.Equal(t, int64(10), bn2)
}

func TestTxItemID(t *testing.T) {
	test.WipeDB(historyDB.DB())

	testUsersLen := 10
	var set []til.Instruction
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeCreateAccountDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1000000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(100000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeDepositTransfer,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(10000 * int64(user+1)),
			Amount:        big.NewInt(1000 * int64(user+1)),
			From:          fmt.Sprintf("User%02d", user),
			To:            fmt.Sprintf("User%02d", (user+1)%testUsersLen),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeForceTransfer,
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(100 * int64(user+1)),
			DepositAmount: big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
			To:            fmt.Sprintf("User%02d", (user+1)%testUsersLen),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeForceExit,
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(10 * int64(user+1)),
			DepositAmount: big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})

	var chainID uint16 = 0
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)

	// Add all blocks
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}

	txs, err := historyDB.GetAllL1UserTxs()
	require.NoError(t, err)
	// Positions restart at 0 on each new forge queue and must be
	// consecutive within a queue
	position := 0
	for _, tx := range txs {
		if tx.Position == 0 {
			position = 0
		}
		assert.Equal(t, position, tx.Position)
		position++
	}
}
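The final loop of TestTxItemID checks a single invariant: transaction positions restart at 0 whenever a new forge queue begins, and otherwise increase by one. Factored out as a standalone sketch (the function name and the plain `[]int` input are made up; the real code works over `common.L1Tx` values):

```go
package main

import "fmt"

// positionsConsecutive reports whether a sequence of positions restarts at
// 0 at each new queue boundary and is otherwise strictly consecutive --
// the property asserted at the end of TestTxItemID.
func positionsConsecutive(positions []int) bool {
	expected := 0
	for _, p := range positions {
		if p == 0 {
			// A position of 0 marks the start of a new forge queue
			expected = 0
		}
		if p != expected {
			return false
		}
		expected++
	}
	return true
}

func main() {
	fmt.Println(positionsConsecutive([]int{0, 1, 2, 0, 1})) // two valid queues
	fmt.Println(positionsConsecutive([]int{0, 2}))          // gap: invalid
}
```

Note that a gap (0, 2) fails, while a reset to 0 at any point simply starts a new queue.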

// setTestBlocks WARNING: this will delete the blocks and recreate them
func setTestBlocks(from, to int64) []common.Block {
	test.WipeDB(historyDB.DB())
	blocks := test.GenBlocks(from, to)
	if err := historyDB.AddBlocks(blocks); err != nil {
		panic(err)
	}
	return blocks
}

func TestNodeInfo(t *testing.T) {
	test.WipeDB(historyDB.DB())

	err := historyDB.SetStateInternalAPI(&StateAPI{})
	require.NoError(t, err)

	clientSetup := test.NewClientSetupExample()
	constants := &Constants{
		SCConsts: common.SCConsts{
			Rollup:   *clientSetup.RollupConstants,
			Auction:  *clientSetup.AuctionConstants,
			WDelayer: *clientSetup.WDelayerConstants,
		},
		ChainID:       42,
		HermezAddress: clientSetup.AuctionConstants.HermezRollup,
	}
	err = historyDB.SetConstants(constants)
	require.NoError(t, err)

	// Test parameters
	var f64 float64 = 1.2
	var i64 int64 = 8888
	addr := ethCommon.HexToAddress("0x1234")
	hash := ethCommon.HexToHash("0x5678")
	stateAPI := &StateAPI{
		NodePublicConfig: NodePublicConfig{
			ForgeDelay: 3.1,
		},
		Network: NetworkAPI{
			LastEthBlock:  12,
			LastSyncBlock: 34,
			LastBatch: &BatchAPI{
				ItemID:       123,
				BatchNum:     456,
				EthBlockNum:  789,
				EthBlockHash: hash,
				Timestamp:    time.Now(),
				ForgerAddr:   addr,
				// CollectedFeesDB: map[common.TokenID]*big.Int{
				// 	0: big.NewInt(11111),
				// 	1: big.NewInt(21111),
				// 	2: big.NewInt(31111),
				// },
				CollectedFeesAPI: apitypes.CollectedFeesAPI(map[common.TokenID]apitypes.BigIntStr{
					0: apitypes.BigIntStr("11111"),
					1: apitypes.BigIntStr("21111"),
					2: apitypes.BigIntStr("31111"),
				}),
				TotalFeesUSD:  &f64,
				StateRoot:     apitypes.BigIntStr("1234"),
				NumAccounts:   11,
				ExitRoot:      apitypes.BigIntStr("5678"),
				ForgeL1TxsNum: &i64,
				SlotNum:       44,
				ForgedTxs:     23,
				TotalItems:    0,
				FirstItem:     0,
				LastItem:      0,
			},
			CurrentSlot: 22,
			NextForgers: []NextForgerAPI{
				{
					Coordinator: CoordinatorAPI{
						ItemID:      111,
						Bidder:      addr,
						Forger:      addr,
						EthBlockNum: 566,
						URL:         "asd",
						TotalItems:  0,
						FirstItem:   0,
						LastItem:    0,
					},
					Period: Period{
						SlotNum:       33,
						FromBlock:     55,
						ToBlock:       66,
						FromTimestamp: time.Now(),
						ToTimestamp:   time.Now(),
					},
				},
			},
		},
		Metrics: MetricsAPI{
			TransactionsPerBatch: 1.1,
			TotalAccounts:        42,
		},
		Rollup:            *NewRollupVariablesAPI(clientSetup.RollupVariables),
		Auction:           *NewAuctionVariablesAPI(clientSetup.AuctionVariables),
		WithdrawalDelayer: *clientSetup.WDelayerVariables,
		RecommendedFee: common.RecommendedFee{
			ExistingAccount: 0.15,
		},
	}
	err = historyDB.SetStateInternalAPI(stateAPI)
	require.NoError(t, err)

	nodeConfig := &NodeConfig{
		MaxPoolTxs: 123,
		MinFeeUSD:  0.5,
	}
	err = historyDB.SetNodeConfig(nodeConfig)
	require.NoError(t, err)

	dbConstants, err := historyDB.GetConstants()
	require.NoError(t, err)
	assert.Equal(t, constants, dbConstants)

	dbNodeConfig, err := historyDB.GetNodeConfig()
	require.NoError(t, err)
	assert.Equal(t, nodeConfig, dbNodeConfig)

	dbStateAPI, err := historyDB.getStateAPI(historyDB.dbRead)
	require.NoError(t, err)
	// Timestamps lose sub-second precision in the DB round trip, so compare
	// them at second precision and then copy them over before the deep check
	assert.Equal(t, stateAPI.Network.LastBatch.Timestamp.Unix(),
		dbStateAPI.Network.LastBatch.Timestamp.Unix())
	dbStateAPI.Network.LastBatch.Timestamp = stateAPI.Network.LastBatch.Timestamp
	assert.Equal(t, stateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix(),
		dbStateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix())
	dbStateAPI.Network.NextForgers[0].Period.FromTimestamp =
		stateAPI.Network.NextForgers[0].Period.FromTimestamp
	assert.Equal(t, stateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix(),
		dbStateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix())
	dbStateAPI.Network.NextForgers[0].Period.ToTimestamp =
		stateAPI.Network.NextForgers[0].Period.ToTimestamp
	assert.Equal(t, stateAPI, dbStateAPI)
}