Update missing parts, improve til, and more
- Node
  - Update the configuration to initialize the interface to all the smart contracts
- Common
  - Move the BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and the appropriate structs are defined in `eth/`)
  - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call, and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill in ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add the `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs out of BatchData into a separate struct in Context
  - Start Context with BatchNum = 1 (which the protocol defines as the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, set LoadAmount to 0 when it is not used and Amount to 0 when it is not used, so that no *big.Int is nil (see the sketch after this list)
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer the BatchNum is not known yet (setting it is the synchronizer's job)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
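A minimal Go sketch of the nil-avoidance rule described above for til's test L1 transactions: unused *big.Int fields are filled with zero before use. The `l1Tx` type and `normalizeL1Tx` helper are illustrative stand-ins, not the project's actual types.

```go
package main

import (
	"fmt"
	"math/big"
)

// l1Tx is a simplified stand-in for the L1 transaction fields mentioned above.
type l1Tx struct {
	LoadAmount *big.Int
	Amount     *big.Int
	UserOrigin bool
}

// normalizeL1Tx fills unused *big.Int fields with zero so that later code
// (serialization, DB inserts) never dereferences a nil pointer.
func normalizeL1Tx(tx *l1Tx) {
	if tx.LoadAmount == nil {
		tx.LoadAmount = big.NewInt(0)
	}
	if tx.Amount == nil {
		tx.Amount = big.NewInt(0)
	}
}

func main() {
	tx := &l1Tx{UserOrigin: true}
	normalizeL1Tx(tx)
	fmt.Println(tx.LoadAmount, tx.Amount) // 0 0
}
```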
Update coordinator, call all API update functions
- Common
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer
  - When mapping events by TxHash, use an array so that multiple calls to the same function within a single transaction are supported (for example, a smart contract could call withdraw with delay twice in one transaction, which would generate 2 withdraw events and 2 deposit events); see the sketch after this list
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, perform several attempts of ethereum node RPC calls before treating the error as final (both for calls to forgeBatch and for fetching the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged L1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
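A minimal Go sketch of the event-grouping change described above: events are collected in a slice per transaction hash, so two identical calls within one transaction keep both events instead of one overwriting the other. The `withdrawEvent` type and `txHash` alias are illustrative, not the synchronizer's actual types.

```go
package main

import "fmt"

type txHash string

type withdrawEvent struct {
	TxHash txHash
	Amount uint64
}

// groupByTxHash appends events into a slice per hash rather than storing a
// single event per hash, so repeated calls in one transaction are preserved.
func groupByTxHash(events []withdrawEvent) map[txHash][]withdrawEvent {
	byHash := make(map[txHash][]withdrawEvent)
	for _, e := range events {
		byHash[e.TxHash] = append(byHash[e.TxHash], e)
	}
	return byHash
}

func main() {
	events := []withdrawEvent{
		{TxHash: "0xabc", Amount: 10},
		{TxHash: "0xabc", Amount: 20}, // same transaction emits the event twice
		{TxHash: "0xdef", Amount: 5},
	}
	fmt.Println(groupByTxHash(events)["0xabc"]) // both events are kept
}
```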
Redo coordinator structure, connect API to node
- API
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common
  - Update rollup constants to use proper *big.Int where required
  - Add the BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add the column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying
- Config
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator
  - Reorganize the code so that the goroutines are started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead (see the sketch after this list)
  - Remove the BatchInfo setters and assign the variables directly
  - In ServerProof and ServerProofPool, use a context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update the test (a minimal test to manually check that the coordinator starts)
- HistoryDB
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks
  - Call the methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer
  - Add an Auction field to the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
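A minimal Go sketch of the stop-channel to context.Context plus sync.WaitGroup pattern described above, with an illustrative `pipeline` worker; this is not the actual coordinator code, only the shape of the change.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type pipeline struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

// Start launches the worker goroutine tied to a cancellable context,
// replacing the old stop/stopped channel pair.
func (p *pipeline) Start(ctx context.Context) {
	ctx, p.cancel = context.WithCancel(ctx)
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return
			case <-time.After(100 * time.Millisecond):
				fmt.Println("pipeline step")
			}
		}
	}()
}

// Stop cancels the context and waits for the goroutine to exit.
func (p *pipeline) Stop() {
	p.cancel()
	p.wg.Wait()
}

func main() {
	var p pipeline
	p.Start(context.Background())
	time.Sleep(250 * time.Millisecond)
	p.Stop()
}
```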
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
  1. package historydb
  2. import (
  3. "database/sql"
  4. "fmt"
  5. "math"
  6. "math/big"
  7. "os"
  8. "strings"
  9. "testing"
  10. "time"
  11. ethCommon "github.com/ethereum/go-ethereum/common"
  12. "github.com/hermeznetwork/hermez-node/common"
  13. dbUtils "github.com/hermeznetwork/hermez-node/db"
  14. "github.com/hermeznetwork/hermez-node/log"
  15. "github.com/hermeznetwork/hermez-node/test"
  16. "github.com/hermeznetwork/hermez-node/test/til"
  17. "github.com/hermeznetwork/tracerr"
  18. "github.com/stretchr/testify/assert"
  19. "github.com/stretchr/testify/require"
  20. )
  21. var historyDB *HistoryDB
22. // In order to run the test you need to run a Postgres DB with
23. // a database named "hermez" that is accessible by
24. // user: "hermez"
25. // pass: set it using the env var POSTGRES_PASS
26. // This can be achieved by running: POSTGRES_PASS=your_strong_pass && sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD=$POSTGRES_PASS -d postgres && sleep 2s && sudo docker exec -it hermez-db-test psql -a hermez -U hermez -c "CREATE DATABASE l2;"
  27. // After running the test you can stop the container by running: sudo docker kill hermez-db-test
  28. // If you already did that for the L2DB you don't have to do it again
  29. func TestMain(m *testing.M) {
  30. // init DB
  31. pass := os.Getenv("POSTGRES_PASS")
  32. db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
  33. if err != nil {
  34. panic(err)
  35. }
36. historyDB = NewHistoryDB(db)
  40. // Run tests
  41. result := m.Run()
  42. // Close DB
  43. if err := db.Close(); err != nil {
  44. log.Error("Error closing the history DB:", err)
  45. }
  46. os.Exit(result)
  47. }
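// TestBlocks generates blocks with til, stores them and checks GetBlocks, GetBlock and GetLastBlock, verifying that timestamps come back in UTC.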
  48. func TestBlocks(t *testing.T) {
  49. var fromBlock, toBlock int64
  50. fromBlock = 0
  51. toBlock = 7
  52. // Reset DB
  53. test.WipeDB(historyDB.DB())
  54. // Generate blocks using til
  55. set1 := `
  56. Type: Blockchain
  57. // block 0 is stored as default in the DB
  58. // block 1 does not exist
  59. > block // blockNum=2
  60. > block // blockNum=3
  61. > block // blockNum=4
  62. > block // blockNum=5
  63. > block // blockNum=6
  64. `
  65. tc := til.NewContext(uint16(0), 1)
  66. blocks, err := tc.GenerateBlocks(set1)
  67. require.NoError(t, err)
68. // Set a known timestamp on one of the blocks to later check that it is stored and returned in UTC
  69. timestamp := time.Now().Add(time.Second * 13)
  70. blocks[fromBlock].Block.Timestamp = timestamp
  71. // Insert blocks into DB
  72. for i := 0; i < len(blocks); i++ {
  73. err := historyDB.AddBlock(&blocks[i].Block)
  74. assert.NoError(t, err)
  75. }
  76. // Add block 0 to the generated blocks
  77. blocks = append(
  78. []common.BlockData{{Block: test.Block0}}, //nolint:gofmt
  79. blocks...,
  80. )
  81. // Get all blocks from DB
  82. fetchedBlocks, err := historyDB.GetBlocks(fromBlock, toBlock)
83. assert.NoError(t, err)
84. assert.Equal(t, len(blocks), len(fetchedBlocks))
85. // Compare generated vs fetched blocks
  86. for i := range fetchedBlocks {
  87. assertEqualBlock(t, &blocks[i].Block, &fetchedBlocks[i])
  88. }
89. // Compare the saved timestamp vs the fetched one
  90. nameZoneUTC, offsetUTC := timestamp.UTC().Zone()
  91. zoneFetchedBlock, offsetFetchedBlock := fetchedBlocks[fromBlock].Timestamp.Zone()
  92. assert.Equal(t, nameZoneUTC, zoneFetchedBlock)
  93. assert.Equal(t, offsetUTC, offsetFetchedBlock)
  94. // Get blocks from the DB one by one
  95. for i := int64(2); i < toBlock; i++ { // avoid block 0 for simplicity
  96. fetchedBlock, err := historyDB.GetBlock(i)
  97. assert.NoError(t, err)
  98. assertEqualBlock(t, &blocks[i-1].Block, fetchedBlock)
  99. }
  100. // Get last block
  101. lastBlock, err := historyDB.GetLastBlock()
  102. assert.NoError(t, err)
  103. assertEqualBlock(t, &blocks[len(blocks)-1].Block, lastBlock)
  104. }
  105. func assertEqualBlock(t *testing.T, expected *common.Block, actual *common.Block) {
  106. assert.Equal(t, expected.Num, actual.Num)
  107. assert.Equal(t, expected.Hash, actual.Hash)
  108. assert.Equal(t, expected.Timestamp.Unix(), actual.Timestamp.Unix())
  109. }
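// TestBatches generates blocks, tokens and batches with til and checks GetBatches, GetLastBatchNum, GetLastL1TxsNum and GetLastL1BatchBlockNum.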
  110. func TestBatches(t *testing.T) {
  111. // Reset DB
  112. test.WipeDB(historyDB.DB())
  113. // Generate batches using til (and blocks for foreign key)
  114. set := `
  115. Type: Blockchain
  116. AddToken(1) // Will have value in USD
  117. AddToken(2) // Will NOT have value in USD
  118. CreateAccountDeposit(1) A: 2000
  119. CreateAccountDeposit(2) A: 2000
  120. CreateAccountDeposit(1) B: 1000
  121. CreateAccountDeposit(2) B: 1000
  122. > batchL1
  123. > batchL1
  124. Transfer(1) A-B: 100 (5)
  125. Transfer(2) B-A: 100 (199)
  126. > batch // batchNum=2, L2 only batch, forges transfers (mixed case of with(out) USD value)
  127. > block
  128. Transfer(1) A-B: 100 (5)
  129. > batch // batchNum=3, L2 only batch, forges transfer (with USD value)
  130. Transfer(2) B-A: 100 (199)
  131. > batch // batchNum=4, L2 only batch, forges transfer (without USD value)
  132. > block
  133. `
  134. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  135. tilCfgExtra := til.ConfigExtra{
  136. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  137. CoordUser: "A",
  138. }
  139. blocks, err := tc.GenerateBlocks(set)
  140. require.NoError(t, err)
  141. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  142. require.NoError(t, err)
  143. // Insert to DB
  144. batches := []common.Batch{}
  145. tokensValue := make(map[common.TokenID]float64)
  146. lastL1TxsNum := new(int64)
  147. lastL1BatchBlockNum := int64(0)
  148. for _, block := range blocks {
  149. // Insert block
  150. assert.NoError(t, historyDB.AddBlock(&block.Block))
  151. // Insert tokens
  152. for i, token := range block.Rollup.AddedTokens {
  153. assert.NoError(t, historyDB.AddToken(&token)) //nolint:gosec
  154. if i%2 != 0 {
  155. // Set value to the token
  156. value := (float64(i) + 5) * 5.389329
  157. assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
  158. tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
  159. }
  160. }
161. // Combine all generated batches into a single slice
  162. for _, batch := range block.Rollup.Batches {
  163. batches = append(batches, batch.Batch)
  164. forgeTxsNum := batch.Batch.ForgeL1TxsNum
  165. if forgeTxsNum != nil && (lastL1TxsNum == nil || *lastL1TxsNum < *forgeTxsNum) {
  166. *lastL1TxsNum = *forgeTxsNum
  167. lastL1BatchBlockNum = batch.Batch.EthBlockNum
  168. }
  169. }
  170. }
  171. // Insert batches
  172. assert.NoError(t, historyDB.AddBatches(batches))
173. // Set expected total fee (assign through the index so the slice element, not a loop copy, is updated)
174. for i := range batches {
175. total := .0
176. for tokenID, amount := range batches[i].CollectedFees {
177. af := new(big.Float).SetInt(amount)
178. amountFloat, _ := af.Float64()
179. total += tokensValue[tokenID] * amountFloat
180. }
181. batches[i].TotalFeesUSD = &total
182. }
  183. // Get batches from the DB
  184. fetchedBatches, err := historyDB.GetBatches(0, common.BatchNum(len(batches)+1))
  185. assert.NoError(t, err)
  186. assert.Equal(t, len(batches), len(fetchedBatches))
  187. for i, fetchedBatch := range fetchedBatches {
  188. assert.Equal(t, batches[i], fetchedBatch)
  189. }
  190. // Test GetLastBatchNum
  191. fetchedLastBatchNum, err := historyDB.GetLastBatchNum()
  192. assert.NoError(t, err)
  193. assert.Equal(t, batches[len(batches)-1].BatchNum, fetchedLastBatchNum)
  194. // Test GetLastL1TxsNum
  195. fetchedLastL1TxsNum, err := historyDB.GetLastL1TxsNum()
  196. assert.NoError(t, err)
  197. assert.Equal(t, lastL1TxsNum, fetchedLastL1TxsNum)
  198. // Test GetLastL1BatchBlockNum
  199. fetchedLastL1BatchBlockNum, err := historyDB.GetLastL1BatchBlockNum()
  200. assert.NoError(t, err)
  201. assert.Equal(t, lastL1BatchBlockNum, fetchedLastL1BatchBlockNum)
  202. }
  203. func TestBids(t *testing.T) {
  204. const fromBlock int64 = 1
  205. const toBlock int64 = 5
  206. // Prepare blocks in the DB
  207. blocks := setTestBlocks(fromBlock, toBlock)
  208. // Generate fake coordinators
  209. const nCoords = 5
  210. coords := test.GenCoordinators(nCoords, blocks)
  211. err := historyDB.AddCoordinators(coords)
  212. assert.NoError(t, err)
  213. // Generate fake bids
  214. const nBids = 20
  215. bids := test.GenBids(nBids, blocks, coords)
  216. err = historyDB.AddBids(bids)
  217. assert.NoError(t, err)
  218. // Fetch bids
  219. fetchedBids, err := historyDB.GetAllBids()
  220. assert.NoError(t, err)
  221. // Compare fetched bids vs generated bids
  222. for i, bid := range fetchedBids {
  223. assert.Equal(t, bids[i], bid)
  224. }
  225. }
  226. func TestTokens(t *testing.T) {
  227. const fromBlock int64 = 1
  228. const toBlock int64 = 5
  229. // Prepare blocks in the DB
  230. blocks := setTestBlocks(fromBlock, toBlock)
  231. // Generate fake tokens
  232. const nTokens = 5
  233. tokens, ethToken := test.GenTokens(nTokens, blocks)
  234. err := historyDB.AddTokens(tokens)
  235. assert.NoError(t, err)
  236. tokens = append([]common.Token{ethToken}, tokens...)
  237. limit := uint(10)
  238. // Fetch tokens
  239. fetchedTokens, _, err := historyDB.GetTokens(nil, nil, "", nil, &limit, OrderAsc)
  240. assert.NoError(t, err)
  241. // Compare fetched tokens vs generated tokens
242. // Tokens that have just been added should have no USD value nor USDUpdate yet
  243. for i, token := range fetchedTokens {
  244. assert.Equal(t, tokens[i].TokenID, token.TokenID)
  245. assert.Equal(t, tokens[i].EthBlockNum, token.EthBlockNum)
  246. assert.Equal(t, tokens[i].EthAddr, token.EthAddr)
  247. assert.Equal(t, tokens[i].Name, token.Name)
  248. assert.Equal(t, tokens[i].Symbol, token.Symbol)
  249. assert.Nil(t, token.USD)
  250. assert.Nil(t, token.USDUpdate)
  251. }
  252. // Update token value
  253. for i, token := range tokens {
  254. value := 1.01 * float64(i)
  255. assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
  256. }
  257. // Fetch tokens
  258. fetchedTokens, _, err = historyDB.GetTokens(nil, nil, "", nil, &limit, OrderAsc)
  259. assert.NoError(t, err)
  260. // Compare fetched tokens vs generated tokens
261. // All the tokens should now have USD and USDUpdate set by the DB trigger
  262. for i, token := range fetchedTokens {
  263. value := 1.01 * float64(i)
  264. assert.Equal(t, value, *token.USD)
  265. nameZone, offset := token.USDUpdate.Zone()
  266. assert.Equal(t, "UTC", nameZone)
  267. assert.Equal(t, 0, offset)
  268. }
  269. }
  270. func TestTokensUTF8(t *testing.T) {
  271. // Reset DB
  272. test.WipeDB(historyDB.DB())
  273. const fromBlock int64 = 1
  274. const toBlock int64 = 5
  275. // Prepare blocks in the DB
  276. blocks := setTestBlocks(fromBlock, toBlock)
  277. // Generate fake tokens
  278. const nTokens = 5
  279. tokens, ethToken := test.GenTokens(nTokens, blocks)
  280. nonUTFTokens := make([]common.Token, len(tokens)+1)
281. // Force token.Name and token.Symbol to be non-UTF-8 strings
  282. for i, token := range tokens {
  283. token.Name = fmt.Sprint("NON-UTF8-NAME-\xc5-", i)
  284. token.Symbol = fmt.Sprint("S-\xc5-", i)
  285. tokens[i] = token
  286. nonUTFTokens[i] = token
  287. }
  288. err := historyDB.AddTokens(tokens)
  289. assert.NoError(t, err)
290. // Work with nonUTFTokens, as the tokens slice gets updated and the non-UTF-8 characters are lost
  291. nonUTFTokens = append([]common.Token{ethToken}, nonUTFTokens...)
  292. limit := uint(10)
  293. // Fetch tokens
  294. fetchedTokens, _, err := historyDB.GetTokens(nil, nil, "", nil, &limit, OrderAsc)
  295. assert.NoError(t, err)
  296. // Compare fetched tokens vs generated tokens
297. // Tokens that have just been added should have no USD value nor USDUpdate yet
  298. for i, token := range fetchedTokens {
  299. assert.Equal(t, nonUTFTokens[i].TokenID, token.TokenID)
  300. assert.Equal(t, nonUTFTokens[i].EthBlockNum, token.EthBlockNum)
  301. assert.Equal(t, nonUTFTokens[i].EthAddr, token.EthAddr)
  302. assert.Equal(t, strings.ToValidUTF8(nonUTFTokens[i].Name, " "), token.Name)
  303. assert.Equal(t, strings.ToValidUTF8(nonUTFTokens[i].Symbol, " "), token.Symbol)
  304. assert.Nil(t, token.USD)
  305. assert.Nil(t, token.USDUpdate)
  306. }
  307. // Update token value
  308. for i, token := range nonUTFTokens {
  309. value := 1.01 * float64(i)
  310. assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
  311. }
  312. // Fetch tokens
  313. fetchedTokens, _, err = historyDB.GetTokens(nil, nil, "", nil, &limit, OrderAsc)
  314. assert.NoError(t, err)
  315. // Compare fetched tokens vs generated tokens
316. // All the tokens should now have USD and USDUpdate set by the DB trigger
  317. for i, token := range fetchedTokens {
  318. value := 1.01 * float64(i)
  319. assert.Equal(t, value, *token.USD)
  320. nameZone, offset := token.USDUpdate.Zone()
  321. assert.Equal(t, "UTC", nameZone)
  322. assert.Equal(t, 0, offset)
  323. }
  324. }
  325. func TestAccounts(t *testing.T) {
  326. const fromBlock int64 = 1
  327. const toBlock int64 = 5
  328. // Prepare blocks in the DB
  329. blocks := setTestBlocks(fromBlock, toBlock)
  330. // Generate fake tokens
  331. const nTokens = 5
  332. tokens, ethToken := test.GenTokens(nTokens, blocks)
  333. err := historyDB.AddTokens(tokens)
  334. assert.NoError(t, err)
  335. tokens = append([]common.Token{ethToken}, tokens...)
  336. // Generate fake batches
  337. const nBatches = 10
  338. batches := test.GenBatches(nBatches, blocks)
  339. err = historyDB.AddBatches(batches)
  340. assert.NoError(t, err)
  341. // Generate fake accounts
  342. const nAccounts = 3
  343. accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
  344. err = historyDB.AddAccounts(accs)
  345. assert.NoError(t, err)
  346. // Fetch accounts
  347. fetchedAccs, err := historyDB.GetAllAccounts()
  348. assert.NoError(t, err)
  349. // Compare fetched accounts vs generated accounts
  350. for i, acc := range fetchedAccs {
351. accs[i].Balance = nil // the HistoryDB does not store the account balance, so clear it before comparing
  352. assert.Equal(t, accs[i], acc)
  353. }
  354. }
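// TestTxs inserts a full scenario of L1 and L2 transactions and checks the stored transactions field by field (type, TxID, indexes, token, batch number, block number and amounts).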
  355. func TestTxs(t *testing.T) {
  356. // Reset DB
  357. test.WipeDB(historyDB.DB())
  358. set := `
  359. Type: Blockchain
  360. AddToken(1)
  361. AddToken(2)
  362. CreateAccountDeposit(1) A: 10
  363. CreateAccountDeposit(1) B: 10
  364. > batchL1
  365. > batchL1
  366. > block
  367. CreateAccountDepositTransfer(1) C-A: 20, 10
  368. CreateAccountCoordinator(1) User0
  369. > batchL1
  370. > batchL1
  371. > block
  372. Deposit(1) B: 10
  373. Deposit(1) C: 10
  374. Transfer(1) C-A : 10 (1)
  375. Transfer(1) B-C : 10 (1)
  376. Transfer(1) A-B : 10 (1)
  377. Exit(1) A: 10 (1)
  378. > batch
  379. > block
  380. DepositTransfer(1) A-B: 10, 10
  381. > batchL1
  382. > block
  383. ForceTransfer(1) A-B: 10
  384. ForceExit(1) A: 5
  385. > batchL1
  386. > batchL1
  387. > block
  388. CreateAccountDeposit(2) D: 10
  389. > batchL1
  390. > block
  391. CreateAccountDeposit(2) E: 10
  392. > batchL1
  393. > batchL1
  394. > block
  395. `
  396. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  397. tilCfgExtra := til.ConfigExtra{
  398. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  399. CoordUser: "A",
  400. }
  401. blocks, err := tc.GenerateBlocks(set)
  402. require.NoError(t, err)
  403. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  404. require.NoError(t, err)
  405. // Sanity check
  406. require.Equal(t, 7, len(blocks))
  407. require.Equal(t, 2, len(blocks[0].Rollup.L1UserTxs))
  408. require.Equal(t, 1, len(blocks[1].Rollup.L1UserTxs))
  409. require.Equal(t, 2, len(blocks[2].Rollup.L1UserTxs))
  410. require.Equal(t, 1, len(blocks[3].Rollup.L1UserTxs))
  411. require.Equal(t, 2, len(blocks[4].Rollup.L1UserTxs))
  412. require.Equal(t, 1, len(blocks[5].Rollup.L1UserTxs))
  413. require.Equal(t, 1, len(blocks[6].Rollup.L1UserTxs))
414. var null *common.BatchNum
  415. var txID common.TxID
  416. // Insert blocks into DB
  417. for i := range blocks {
  418. if i == len(blocks)-1 {
  419. blocks[i].Block.Timestamp = time.Now()
  420. dbL1Txs, err := historyDB.GetAllL1UserTxs()
  421. assert.NoError(t, err)
  422. // Check batch_num is nil before forging
  423. assert.Equal(t, null, dbL1Txs[len(dbL1Txs)-1].BatchNum)
  424. // Save this TxId
  425. txID = dbL1Txs[len(dbL1Txs)-1].TxID
  426. }
  427. err = historyDB.AddBlockSCData(&blocks[i])
  428. assert.NoError(t, err)
  429. }
  430. // Check blocks
  431. dbBlocks, err := historyDB.GetAllBlocks()
  432. assert.NoError(t, err)
  433. assert.Equal(t, len(blocks)+1, len(dbBlocks))
  434. // Check batches
  435. batches, err := historyDB.GetAllBatches()
  436. assert.NoError(t, err)
  437. assert.Equal(t, 11, len(batches))
  438. // Check L1 Transactions
  439. dbL1Txs, err := historyDB.GetAllL1UserTxs()
  440. assert.NoError(t, err)
  441. assert.Equal(t, 10, len(dbL1Txs))
  442. // Tx Type
  443. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[0].Type)
  444. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[1].Type)
  445. assert.Equal(t, common.TxTypeCreateAccountDepositTransfer, dbL1Txs[2].Type)
  446. assert.Equal(t, common.TxTypeDeposit, dbL1Txs[3].Type)
  447. assert.Equal(t, common.TxTypeDeposit, dbL1Txs[4].Type)
  448. assert.Equal(t, common.TxTypeDepositTransfer, dbL1Txs[5].Type)
  449. assert.Equal(t, common.TxTypeForceTransfer, dbL1Txs[6].Type)
  450. assert.Equal(t, common.TxTypeForceExit, dbL1Txs[7].Type)
  451. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[8].Type)
  452. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[9].Type)
  453. // Tx ID
  454. assert.Equal(t, "0x00c4f3fb5c0f7f76b3fe0a74a6ae7472e6a5ef9d66db08df7d0a7e4980c578c55a", dbL1Txs[0].TxID.String())
  455. assert.Equal(t, "0x00b0c7398bfd31f7a6c0b4d3f80c73cfe9cdb541bdb6eccc6b9097976f9535fb01", dbL1Txs[1].TxID.String())
  456. assert.Equal(t, "0x00bc12304d5d1aca95c356394bfa2e331e4ccb21e250c6a7442d92e02371eca9ff", dbL1Txs[2].TxID.String())
  457. assert.Equal(t, "0x0063077b5c07999b460aa31dc3ea300f5923afa08f117e8ed7476aae299ed4b74b", dbL1Txs[3].TxID.String())
  458. assert.Equal(t, "0x003f8b27b160e7b98ee5275de5ace264ae45891ac219a1b7c03863b5a764176b03", dbL1Txs[4].TxID.String())
  459. assert.Equal(t, "0x00937115a38e1c049aab568b3281e005c206a3e18e87400ce6c62c83599a3bafbd", dbL1Txs[5].TxID.String())
  460. assert.Equal(t, "0x006118820894c0acdc230d65fe739a4082c9eed3be1f5020f544d855e36dc4eae6", dbL1Txs[6].TxID.String())
  461. assert.Equal(t, "0x003e5aede622ad4ebbc436d178eb83d15f8b38614eda6e90b1acb88034a0eb177d", dbL1Txs[7].TxID.String())
  462. assert.Equal(t, "0x007682bb57dfd4d2e98a5c7836d0dc92bee86edefad6db6ad123415991d79fd69d", dbL1Txs[8].TxID.String())
  463. assert.Equal(t, "0x006d068c5ee574706ed23bc357390da1c5bc5e144f51a32dcd38faf50be60813d6", dbL1Txs[9].TxID.String())
  464. // Tx From IDx
  465. assert.Equal(t, common.Idx(0), dbL1Txs[0].FromIdx)
  466. assert.Equal(t, common.Idx(0), dbL1Txs[1].FromIdx)
  467. assert.Equal(t, common.Idx(0), dbL1Txs[2].FromIdx)
  468. assert.NotEqual(t, common.Idx(0), dbL1Txs[3].FromIdx)
  469. assert.NotEqual(t, common.Idx(0), dbL1Txs[4].FromIdx)
  470. assert.NotEqual(t, common.Idx(0), dbL1Txs[5].FromIdx)
  471. assert.NotEqual(t, common.Idx(0), dbL1Txs[6].FromIdx)
  472. assert.NotEqual(t, common.Idx(0), dbL1Txs[7].FromIdx)
  473. assert.Equal(t, common.Idx(0), dbL1Txs[8].FromIdx)
  474. assert.Equal(t, common.Idx(0), dbL1Txs[9].FromIdx)
  476. assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[6].FromIdx)
  477. assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[7].FromIdx)
  478. // Tx to IDx
  479. assert.Equal(t, dbL1Txs[2].ToIdx, dbL1Txs[5].FromIdx)
  480. assert.Equal(t, dbL1Txs[5].ToIdx, dbL1Txs[3].FromIdx)
  481. assert.Equal(t, dbL1Txs[6].ToIdx, dbL1Txs[3].FromIdx)
  482. // Token ID
  483. assert.Equal(t, common.TokenID(1), dbL1Txs[0].TokenID)
  484. assert.Equal(t, common.TokenID(1), dbL1Txs[1].TokenID)
  485. assert.Equal(t, common.TokenID(1), dbL1Txs[2].TokenID)
  486. assert.Equal(t, common.TokenID(1), dbL1Txs[3].TokenID)
  487. assert.Equal(t, common.TokenID(1), dbL1Txs[4].TokenID)
  488. assert.Equal(t, common.TokenID(1), dbL1Txs[5].TokenID)
  489. assert.Equal(t, common.TokenID(1), dbL1Txs[6].TokenID)
  490. assert.Equal(t, common.TokenID(1), dbL1Txs[7].TokenID)
  491. assert.Equal(t, common.TokenID(2), dbL1Txs[8].TokenID)
  492. assert.Equal(t, common.TokenID(2), dbL1Txs[9].TokenID)
  493. // Batch Number
  494. var bn common.BatchNum = common.BatchNum(2)
  495. assert.Equal(t, &bn, dbL1Txs[0].BatchNum)
  496. assert.Equal(t, &bn, dbL1Txs[1].BatchNum)
  497. bn = common.BatchNum(4)
  498. assert.Equal(t, &bn, dbL1Txs[2].BatchNum)
  499. bn = common.BatchNum(7)
  500. assert.Equal(t, &bn, dbL1Txs[3].BatchNum)
  501. assert.Equal(t, &bn, dbL1Txs[4].BatchNum)
  502. assert.Equal(t, &bn, dbL1Txs[5].BatchNum)
  503. bn = common.BatchNum(8)
  504. assert.Equal(t, &bn, dbL1Txs[6].BatchNum)
  505. assert.Equal(t, &bn, dbL1Txs[7].BatchNum)
  506. bn = common.BatchNum(10)
  507. assert.Equal(t, &bn, dbL1Txs[8].BatchNum)
  508. bn = common.BatchNum(11)
  509. assert.Equal(t, &bn, dbL1Txs[9].BatchNum)
  510. // eth_block_num
  511. assert.Equal(t, int64(2), dbL1Txs[0].EthBlockNum)
  512. assert.Equal(t, int64(2), dbL1Txs[1].EthBlockNum)
  513. assert.Equal(t, int64(3), dbL1Txs[2].EthBlockNum)
  514. assert.Equal(t, int64(4), dbL1Txs[3].EthBlockNum)
  515. assert.Equal(t, int64(4), dbL1Txs[4].EthBlockNum)
  516. assert.Equal(t, int64(5), dbL1Txs[5].EthBlockNum)
  517. assert.Equal(t, int64(6), dbL1Txs[6].EthBlockNum)
  518. assert.Equal(t, int64(6), dbL1Txs[7].EthBlockNum)
  519. assert.Equal(t, int64(7), dbL1Txs[8].EthBlockNum)
  520. assert.Equal(t, int64(8), dbL1Txs[9].EthBlockNum)
  521. // User Origin
  522. assert.Equal(t, true, dbL1Txs[0].UserOrigin)
  523. assert.Equal(t, true, dbL1Txs[1].UserOrigin)
  524. assert.Equal(t, true, dbL1Txs[2].UserOrigin)
  525. assert.Equal(t, true, dbL1Txs[3].UserOrigin)
  526. assert.Equal(t, true, dbL1Txs[4].UserOrigin)
  527. assert.Equal(t, true, dbL1Txs[5].UserOrigin)
  528. assert.Equal(t, true, dbL1Txs[6].UserOrigin)
  529. assert.Equal(t, true, dbL1Txs[7].UserOrigin)
  530. assert.Equal(t, true, dbL1Txs[8].UserOrigin)
  531. assert.Equal(t, true, dbL1Txs[9].UserOrigin)
  532. // Deposit Amount
  533. assert.Equal(t, big.NewInt(10), dbL1Txs[0].DepositAmount)
  534. assert.Equal(t, big.NewInt(10), dbL1Txs[1].DepositAmount)
  535. assert.Equal(t, big.NewInt(20), dbL1Txs[2].DepositAmount)
  536. assert.Equal(t, big.NewInt(10), dbL1Txs[3].DepositAmount)
  537. assert.Equal(t, big.NewInt(10), dbL1Txs[4].DepositAmount)
  538. assert.Equal(t, big.NewInt(10), dbL1Txs[5].DepositAmount)
  539. assert.Equal(t, big.NewInt(0), dbL1Txs[6].DepositAmount)
  540. assert.Equal(t, big.NewInt(0), dbL1Txs[7].DepositAmount)
  541. assert.Equal(t, big.NewInt(10), dbL1Txs[8].DepositAmount)
  542. assert.Equal(t, big.NewInt(10), dbL1Txs[9].DepositAmount)
  543. // Check saved txID's batch_num is not nil
  544. assert.Equal(t, txID, dbL1Txs[len(dbL1Txs)-2].TxID)
  545. assert.NotEqual(t, null, dbL1Txs[len(dbL1Txs)-2].BatchNum)
  546. // Check Coordinator TXs
  547. coordTxs, err := historyDB.GetAllL1CoordinatorTxs()
  548. assert.NoError(t, err)
  549. assert.Equal(t, 1, len(coordTxs))
  550. assert.Equal(t, common.TxTypeCreateAccountDeposit, coordTxs[0].Type)
  551. assert.Equal(t, false, coordTxs[0].UserOrigin)
  552. // Check L2 TXs
  553. dbL2Txs, err := historyDB.GetAllL2Txs()
  554. assert.NoError(t, err)
  555. assert.Equal(t, 4, len(dbL2Txs))
  556. // Tx Type
  557. assert.Equal(t, common.TxTypeTransfer, dbL2Txs[0].Type)
  558. assert.Equal(t, common.TxTypeTransfer, dbL2Txs[1].Type)
  559. assert.Equal(t, common.TxTypeTransfer, dbL2Txs[2].Type)
  560. assert.Equal(t, common.TxTypeExit, dbL2Txs[3].Type)
  561. // Tx ID
  562. assert.Equal(t, "0x0216d6fd29ec664d30a5db5c11401b79624388acc1c8bdd7ec4d29c9fbc82e6bbd", dbL2Txs[0].TxID.String())
  563. assert.Equal(t, "0x024a99c757c9ded6156cea463e9e7b1ebed51c323dae1f1dc1bea5068f5c688f3a", dbL2Txs[1].TxID.String())
  564. assert.Equal(t, "0x0239d316ab550bf8ee20a48f9a89d511baa069207d24ccdc4cfcea0dc04e0659df", dbL2Txs[2].TxID.String())
  565. assert.Equal(t, "0x02c7233141caf1f99d4d5d2013da01c709e73ee3c9b46f3d5635b02d14e6177a9d", dbL2Txs[3].TxID.String())
  566. // Tx From and To IDx
  567. assert.Equal(t, dbL2Txs[0].ToIdx, dbL2Txs[2].FromIdx)
  568. assert.Equal(t, dbL2Txs[1].ToIdx, dbL2Txs[0].FromIdx)
  569. assert.Equal(t, dbL2Txs[2].ToIdx, dbL2Txs[1].FromIdx)
  570. // Batch Number
  571. assert.Equal(t, common.BatchNum(5), dbL2Txs[0].BatchNum)
  572. assert.Equal(t, common.BatchNum(5), dbL2Txs[1].BatchNum)
  573. assert.Equal(t, common.BatchNum(5), dbL2Txs[2].BatchNum)
  574. assert.Equal(t, common.BatchNum(5), dbL2Txs[3].BatchNum)
  575. // eth_block_num
  576. assert.Equal(t, int64(4), dbL2Txs[0].EthBlockNum)
  577. assert.Equal(t, int64(4), dbL2Txs[1].EthBlockNum)
  578. assert.Equal(t, int64(4), dbL2Txs[2].EthBlockNum)
  579. // Amount
  580. assert.Equal(t, big.NewInt(10), dbL2Txs[0].Amount)
  581. assert.Equal(t, big.NewInt(10), dbL2Txs[1].Amount)
  582. assert.Equal(t, big.NewInt(10), dbL2Txs[2].Amount)
  583. assert.Equal(t, big.NewInt(10), dbL2Txs[3].Amount)
  584. }
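// TestExitTree checks that exit tree entries generated for a set of accounts and batches can be stored.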
  585. func TestExitTree(t *testing.T) {
  586. nBatches := 17
  587. blocks := setTestBlocks(1, 10)
  588. batches := test.GenBatches(nBatches, blocks)
  589. err := historyDB.AddBatches(batches)
  590. assert.NoError(t, err)
  591. const nTokens = 50
  592. tokens, ethToken := test.GenTokens(nTokens, blocks)
  593. err = historyDB.AddTokens(tokens)
  594. assert.NoError(t, err)
  595. tokens = append([]common.Token{ethToken}, tokens...)
  596. const nAccounts = 3
  597. accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
  598. assert.NoError(t, historyDB.AddAccounts(accs))
  599. exitTree := test.GenExitTree(nBatches, batches, accs, blocks)
  600. err = historyDB.AddExitTree(exitTree)
  601. assert.NoError(t, err)
  602. }
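// TestGetUnforgedL1UserTxs checks that L1UserTxs that have not been forged yet can be queried by their toForgeL1TxsNum.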
  603. func TestGetUnforgedL1UserTxs(t *testing.T) {
  604. test.WipeDB(historyDB.DB())
  605. set := `
  606. Type: Blockchain
  607. AddToken(1)
  608. AddToken(2)
  609. AddToken(3)
  610. CreateAccountDeposit(1) A: 20
  611. CreateAccountDeposit(2) A: 20
  612. CreateAccountDeposit(1) B: 5
  613. CreateAccountDeposit(1) C: 5
  614. CreateAccountDeposit(1) D: 5
  615. > block
  616. `
  617. tc := til.NewContext(uint16(0), 128)
  618. blocks, err := tc.GenerateBlocks(set)
  619. require.NoError(t, err)
  620. // Sanity check
  621. require.Equal(t, 1, len(blocks))
  622. require.Equal(t, 5, len(blocks[0].Rollup.L1UserTxs))
  623. toForgeL1TxsNum := int64(1)
  624. for i := range blocks {
  625. err = historyDB.AddBlockSCData(&blocks[i])
  626. require.NoError(t, err)
  627. }
  628. l1UserTxs, err := historyDB.GetUnforgedL1UserTxs(toForgeL1TxsNum)
  629. require.NoError(t, err)
  630. assert.Equal(t, 5, len(l1UserTxs))
  631. assert.Equal(t, blocks[0].Rollup.L1UserTxs, l1UserTxs)
  632. // No l1UserTxs for this toForgeL1TxsNum
  633. l1UserTxs, err = historyDB.GetUnforgedL1UserTxs(2)
  634. require.NoError(t, err)
  635. assert.Equal(t, 0, len(l1UserTxs))
  636. }
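// exampleInitSCVars returns a minimal set of Rollup, Auction and WDelayer variables used to initialize the smart contract state in these tests.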
  637. func exampleInitSCVars() (*common.RollupVariables, *common.AuctionVariables, *common.WDelayerVariables) {
  638. //nolint:govet
  639. rollup := &common.RollupVariables{
  640. 0,
  641. big.NewInt(10),
  642. 12,
  643. 13,
  644. [5]common.BucketParams{},
  645. false,
  646. }
  647. //nolint:govet
  648. auction := &common.AuctionVariables{
  649. 0,
  650. ethCommon.BigToAddress(big.NewInt(2)),
  651. ethCommon.BigToAddress(big.NewInt(3)),
  652. "https://boot.coord.com",
  653. [6]*big.Int{
  654. big.NewInt(1), big.NewInt(2), big.NewInt(3),
  655. big.NewInt(4), big.NewInt(5), big.NewInt(6),
  656. },
  657. 0,
  658. 2,
  659. 4320,
  660. [3]uint16{10, 11, 12},
  661. 1000,
  662. 20,
  663. }
  664. //nolint:govet
  665. wDelayer := &common.WDelayerVariables{
  666. 0,
  667. ethCommon.BigToAddress(big.NewInt(2)),
  668. ethCommon.BigToAddress(big.NewInt(3)),
  669. 13,
  670. 14,
  671. false,
  672. }
  673. return rollup, auction, wDelayer
  674. }
  675. func TestSetInitialSCVars(t *testing.T) {
  676. test.WipeDB(historyDB.DB())
  677. _, _, _, err := historyDB.GetSCVars()
  678. assert.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
  679. rollup, auction, wDelayer := exampleInitSCVars()
  680. err = historyDB.SetInitialSCVars(rollup, auction, wDelayer)
  681. require.NoError(t, err)
  682. dbRollup, dbAuction, dbWDelayer, err := historyDB.GetSCVars()
  683. require.NoError(t, err)
  684. require.Equal(t, rollup, dbRollup)
  685. require.Equal(t, auction, dbAuction)
  686. require.Equal(t, wDelayer, dbWDelayer)
  687. }
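// TestSetExtraInfoForgedL1UserTxs checks that the Effective{Amount,DepositAmount} of already stored L1UserTxs are updated once the transactions are forged.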
  688. func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
  689. test.WipeDB(historyDB.DB())
  690. set := `
  691. Type: Blockchain
  692. AddToken(1)
  693. CreateAccountDeposit(1) A: 2000
  694. CreateAccountDeposit(1) B: 500
  695. CreateAccountDeposit(1) C: 500
  696. > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{*}
  697. > block // blockNum=2
  698. > batchL1 // forge defined L1UserTxs{*}
  699. > block // blockNum=3
  700. `
  701. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  702. tilCfgExtra := til.ConfigExtra{
  703. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  704. CoordUser: "A",
  705. }
  706. blocks, err := tc.GenerateBlocks(set)
  707. require.NoError(t, err)
  708. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  709. require.NoError(t, err)
  710. err = tc.FillBlocksForgedL1UserTxs(blocks)
  711. require.NoError(t, err)
  712. // Add only first block so that the L1UserTxs are not marked as forged
  713. for i := range blocks[:1] {
  714. err = historyDB.AddBlockSCData(&blocks[i])
  715. require.NoError(t, err)
  716. }
  717. // Add second batch to trigger the update of the batch_num,
  718. // while avoiding the implicit call of setExtraInfoForgedL1UserTxs
  719. err = historyDB.addBlock(historyDB.db, &blocks[1].Block)
  720. require.NoError(t, err)
  721. err = historyDB.addBatch(historyDB.db, &blocks[1].Rollup.Batches[0].Batch)
  722. require.NoError(t, err)
  723. err = historyDB.addAccounts(historyDB.db, blocks[1].Rollup.Batches[0].CreatedAccounts)
  724. require.NoError(t, err)
  725. // Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block
  726. l1Txs := blocks[1].Rollup.Batches[0].L1UserTxs
  727. require.Equal(t, 3, len(l1Txs))
  728. // Change some values to test all cases
  729. l1Txs[1].EffectiveAmount = big.NewInt(0)
  730. l1Txs[2].EffectiveDepositAmount = big.NewInt(0)
  731. l1Txs[2].EffectiveAmount = big.NewInt(0)
  732. err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.db, l1Txs)
  733. require.NoError(t, err)
  734. dbL1Txs, err := historyDB.GetAllL1UserTxs()
  735. require.NoError(t, err)
  736. for i, tx := range dbL1Txs {
  737. log.Infof("%d %v %v", i, tx.EffectiveAmount, tx.EffectiveDepositAmount)
  738. assert.NotNil(t, tx.EffectiveAmount)
  739. assert.NotNil(t, tx.EffectiveDepositAmount)
  740. switch tx.TxID {
  741. case l1Txs[0].TxID:
  742. assert.Equal(t, l1Txs[0].DepositAmount, tx.EffectiveDepositAmount)
  743. assert.Equal(t, l1Txs[0].Amount, tx.EffectiveAmount)
  744. case l1Txs[1].TxID:
  745. assert.Equal(t, l1Txs[1].DepositAmount, tx.EffectiveDepositAmount)
  746. assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
  747. case l1Txs[2].TxID:
  748. assert.Equal(t, big.NewInt(0), tx.EffectiveDepositAmount)
  749. assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
  750. }
  751. }
  752. }
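// TestUpdateExitTree checks that instant and delayed withdrawals (from the Rollup and the WDelayer) update the corresponding exit tree entries.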
  753. func TestUpdateExitTree(t *testing.T) {
  754. test.WipeDB(historyDB.DB())
  755. set := `
  756. Type: Blockchain
  757. AddToken(1)
  758. CreateAccountDeposit(1) C: 2000 // Idx=256+2=258
  759. CreateAccountDeposit(1) D: 500 // Idx=256+3=259
  760. CreateAccountCoordinator(1) A // Idx=256+0=256
  761. CreateAccountCoordinator(1) B // Idx=256+1=257
  762. > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{5}
  763. > batchL1 // forge defined L1UserTxs{5}, freeze L1UserTxs{nil}
  764. > block // blockNum=2
  765. ForceExit(1) A: 100
  766. ForceExit(1) B: 80
  767. Exit(1) C: 50 (172)
  768. Exit(1) D: 30 (172)
  769. > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{3}
  770. > batchL1 // forge L1UserTxs{3}, freeze defined L1UserTxs{nil}
  771. > block // blockNum=3
  772. > block // blockNum=4 (empty block)
  773. > block // blockNum=5 (empty block)
  774. `
  775. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  776. tilCfgExtra := til.ConfigExtra{
  777. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  778. CoordUser: "A",
  779. }
  780. blocks, err := tc.GenerateBlocks(set)
  781. require.NoError(t, err)
  782. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  783. require.NoError(t, err)
  784. // Add all blocks except for the last two
  785. for i := range blocks[:len(blocks)-2] {
  786. err = historyDB.AddBlockSCData(&blocks[i])
  787. require.NoError(t, err)
  788. }
  789. // Add withdraws to the second-to-last block, and insert block into the DB
  790. block := &blocks[len(blocks)-2]
  791. require.Equal(t, int64(4), block.Block.Num)
  792. tokenAddr := blocks[0].Rollup.AddedTokens[0].EthAddr
  793. // block.WDelayer.Deposits = append(block.WDelayer.Deposits,
  794. // common.WDelayerTransfer{Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr, Amount: big.NewInt(80)}, // 257
  795. // common.WDelayerTransfer{Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr, Amount: big.NewInt(15)}, // 259
  796. // )
  797. block.Rollup.Withdrawals = append(block.Rollup.Withdrawals,
  798. common.WithdrawInfo{Idx: 256, NumExitRoot: 4, InstantWithdraw: true},
  799. common.WithdrawInfo{Idx: 257, NumExitRoot: 4, InstantWithdraw: false,
  800. Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr},
  801. common.WithdrawInfo{Idx: 258, NumExitRoot: 3, InstantWithdraw: true},
  802. common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false,
  803. Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr},
  804. )
  805. err = historyDB.addBlock(historyDB.db, &block.Block)
  806. require.NoError(t, err)
  807. err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
  808. block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
  809. require.NoError(t, err)
  810. // Check that exits in DB match with the expected values
  811. dbExits, err := historyDB.GetAllExits()
  812. require.NoError(t, err)
  813. assert.Equal(t, 4, len(dbExits))
  814. dbExitsByIdx := make(map[common.Idx]common.ExitInfo)
  815. for _, dbExit := range dbExits {
  816. dbExitsByIdx[dbExit.AccountIdx] = dbExit
  817. }
  818. for _, withdraw := range block.Rollup.Withdrawals {
  819. assert.Equal(t, withdraw.NumExitRoot, dbExitsByIdx[withdraw.Idx].BatchNum)
  820. if withdraw.InstantWithdraw {
  821. assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].InstantWithdrawn)
  822. } else {
  823. assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].DelayedWithdrawRequest)
  824. }
  825. }
  826. // Add delayed withdraw to the last block, and insert block into the DB
  827. block = &blocks[len(blocks)-1]
  828. require.Equal(t, int64(5), block.Block.Num)
  829. block.WDelayer.Withdrawals = append(block.WDelayer.Withdrawals,
  830. common.WDelayerTransfer{
  831. Owner: tc.UsersByIdx[257].Addr,
  832. Token: tokenAddr,
  833. Amount: big.NewInt(80),
  834. })
  835. err = historyDB.addBlock(historyDB.db, &block.Block)
  836. require.NoError(t, err)
  837. err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
  838. block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
  839. require.NoError(t, err)
840. // Check that the DelayedWithdrawn field has been set
  841. dbExits, err = historyDB.GetAllExits()
  842. require.NoError(t, err)
  843. for _, dbExit := range dbExits {
  844. dbExitsByIdx[dbExit.AccountIdx] = dbExit
  845. }
  846. require.Equal(t, &block.Block.Num, dbExitsByIdx[257].DelayedWithdrawn)
  847. }
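// TestGetBestBidCoordinator checks the query used to find the highest bidder of a slot along with the coordinator (Bidder, Forger, URL) that can forge it.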
func TestGetBestBidCoordinator(t *testing.T) {
	test.WipeDB(historyDB.DB())
	rollup, auction, wDelayer := exampleInitSCVars()
	err := historyDB.SetInitialSCVars(rollup, auction, wDelayer)
	require.NoError(t, err)
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocks(`
Type: Blockchain
> block // blockNum=2
`)
	require.NoError(t, err)
	err = historyDB.AddBlockSCData(&blocks[0])
	require.NoError(t, err)
	coords := []common.Coordinator{
		{
			Bidder:      ethCommon.BigToAddress(big.NewInt(1)),
			Forger:      ethCommon.BigToAddress(big.NewInt(2)),
			EthBlockNum: 2,
			URL:         "foo",
		},
		{
			Bidder:      ethCommon.BigToAddress(big.NewInt(3)),
			Forger:      ethCommon.BigToAddress(big.NewInt(4)),
			EthBlockNum: 2,
			URL:         "bar",
		},
	}
	err = historyDB.addCoordinators(historyDB.db, coords)
	require.NoError(t, err)
	bids := []common.Bid{
		{
			SlotNum:     10,
			BidValue:    big.NewInt(10),
			EthBlockNum: 2,
			Bidder:      coords[0].Bidder,
		},
		{
			SlotNum:     10,
			BidValue:    big.NewInt(20),
			EthBlockNum: 2,
			Bidder:      coords[1].Bidder,
		},
	}
	err = historyDB.addBids(historyDB.db, bids)
	require.NoError(t, err)
	forger10, err := historyDB.GetBestBidCoordinator(10)
	require.NoError(t, err)
	require.Equal(t, coords[1].Forger, forger10.Forger)
	require.Equal(t, coords[1].Bidder, forger10.Bidder)
	require.Equal(t, coords[1].URL, forger10.URL)
	require.Equal(t, bids[1].SlotNum, forger10.SlotNum)
	require.Equal(t, bids[1].BidValue, forger10.BidValue)
	for i := range forger10.DefaultSlotSetBid {
		require.Equal(t, auction.DefaultSlotSetBid[i], forger10.DefaultSlotSetBid[i])
	}
	_, err = historyDB.GetBestBidCoordinator(11)
	require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
}

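// TestAddBucketUpdates checks that bucket updates stored with
// addBucketUpdates can be read back with GetAllBucketUpdates.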
func TestAddBucketUpdates(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	bucketUpdates := []common.BucketUpdate{
		{
			EthBlockNum: 4,
			NumBucket:   0,
			BlockStamp:  4,
			Withdrawals: big.NewInt(123),
		},
		{
			EthBlockNum: 5,
			NumBucket:   2,
			BlockStamp:  5,
			Withdrawals: big.NewInt(42),
		},
	}
	err := historyDB.addBucketUpdates(historyDB.db, bucketUpdates)
	require.NoError(t, err)
	dbBucketUpdates, err := historyDB.GetAllBucketUpdates()
	require.NoError(t, err)
	assert.Equal(t, bucketUpdates, dbBucketUpdates)
}

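// TestAddTokenExchanges checks that token exchange updates stored with
// addTokenExchanges can be read back with GetAllTokenExchanges.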
func TestAddTokenExchanges(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	tokenExchanges := []common.TokenExchange{
		{
			EthBlockNum: 4,
			Address:     ethCommon.BigToAddress(big.NewInt(111)),
			ValueUSD:    12345,
		},
		{
			EthBlockNum: 5,
			Address:     ethCommon.BigToAddress(big.NewInt(222)),
			ValueUSD:    67890,
		},
	}
	err := historyDB.addTokenExchanges(historyDB.db, tokenExchanges)
	require.NoError(t, err)
	dbTokenExchanges, err := historyDB.GetAllTokenExchanges()
	require.NoError(t, err)
	assert.Equal(t, tokenExchanges, dbTokenExchanges)
}

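// TestAddEscapeHatchWithdrawals checks that WDelayer escape hatch withdrawals
// stored with addEscapeHatchWithdrawals can be read back with
// GetAllEscapeHatchWithdrawals.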
func TestAddEscapeHatchWithdrawals(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	escapeHatchWithdrawals := []common.WDelayerEscapeHatchWithdrawal{
		{
			EthBlockNum: 4,
			Who:         ethCommon.BigToAddress(big.NewInt(111)),
			To:          ethCommon.BigToAddress(big.NewInt(222)),
			TokenAddr:   ethCommon.BigToAddress(big.NewInt(333)),
			Amount:      big.NewInt(10002),
		},
		{
			EthBlockNum: 5,
			Who:         ethCommon.BigToAddress(big.NewInt(444)),
			To:          ethCommon.BigToAddress(big.NewInt(555)),
			TokenAddr:   ethCommon.BigToAddress(big.NewInt(666)),
			Amount:      big.NewInt(20003),
		},
	}
	err := historyDB.addEscapeHatchWithdrawals(historyDB.db, escapeHatchWithdrawals)
	require.NoError(t, err)
	dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals()
	require.NoError(t, err)
	assert.Equal(t, escapeHatchWithdrawals, dbEscapeHatchWithdrawals)
}

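// TestGetMetrics checks the aggregated metrics returned by GetMetrics
// (transactions per batch, batch frequency, transactions per second, total
// accounts, total BJJs and average transaction fee) over a small chain
// generated with til.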
func TestGetMetrics(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
Type: Blockchain
AddToken(1)
CreateAccountDeposit(1) A: 1000 // numTx=1
CreateAccountDeposit(1) B: 2000 // numTx=2
CreateAccountDeposit(1) C: 3000 // numTx=3
// block 0 is stored as default in the DB
// block 1 does not exist
> batchL1 // numBatches=1
> batchL1 // numBatches=2
> block // blockNum=2
Transfer(1) C-A : 10 (1) // numTx=4
> batch // numBatches=3
> block // blockNum=3
Transfer(1) B-C : 10 (1) // numTx=5
> batch // numBatches=4
> block // blockNum=4
Transfer(1) A-B : 10 (1) // numTx=6
> batch // numBatches=5
> block // blockNum=5
Transfer(1) A-B : 10 (1) // numTx=7
> batch // numBatches=6
> block // blockNum=6
`
	const numBatches int = 6
	const numTx int = 7
	const blockNum = 6 - 1
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Sanity check
	require.Equal(t, blockNum, len(blocks))
	// Adding one batch per block
	// The batch frequency can be chosen
	const frequency int = 15
	for i := range blocks {
		blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}
	res, err := historyDB.GetMetrics(common.BatchNum(numBatches))
	assert.NoError(t, err)
	assert.Equal(t, float64(numTx)/float64(numBatches-1), res.TransactionsPerBatch)
	// The frequency is not exactly the desired one, some decimals may appear
	assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
	assert.Less(t, res.BatchFrequency, float64(frequency+1))
	// Truncate the frequency into an int to do an exact check
	assert.Equal(t, frequency, int(res.BatchFrequency))
	// This may also differ in some decimals,
	// so truncate it to the third decimal to compare
	assert.Equal(t,
		math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001,
		math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
	assert.Equal(t, int64(3), res.TotalAccounts)
	assert.Equal(t, int64(3), res.TotalBJJs)
	// Til does not set fees
	assert.Equal(t, float64(0), res.AvgTransactionFee)
}

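// TestGetMetricsMoreThan24Hours performs the same checks as TestGetMetrics,
// but over a generated chain whose block timestamps span more than 24 hours.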
func TestGetMetricsMoreThan24Hours(t *testing.T) {
	test.WipeDB(historyDB.DB())
	testUsersLen := 3
	var set []til.Instruction
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeCreateAccountDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1000000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	// Transfers
	for x := 0; x < 6000; x++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeTransfer,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1),
			Amount:        big.NewInt(0),
			From:          "User00",
			To:            "User01",
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBatch})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	var chainID uint16 = 0
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	const numBatches int = 6002
	const numTx int = 6003
	const blockNum = 6005 - 1
	// Sanity check
	require.Equal(t, blockNum, len(blocks))
	// Adding one batch per block
	// The batch frequency can be chosen
	const frequency int = 15
	for i := range blocks {
		blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}
	res, err := historyDB.GetMetrics(common.BatchNum(numBatches))
	assert.NoError(t, err)
	assert.Equal(t,
		math.Trunc((float64(numTx)/float64(numBatches-1))/0.001)*0.001,
		math.Trunc(res.TransactionsPerBatch/0.001)*0.001)
	// The frequency is not exactly the desired one, some decimals may appear
	assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
	assert.Less(t, res.BatchFrequency, float64(frequency+1))
	// Truncate the frequency into an int to do an exact check
	assert.Equal(t, frequency, int(res.BatchFrequency))
	// This may also differ in some decimals,
	// so truncate it to the third decimal to compare
	assert.Equal(t,
		math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001,
		math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
	assert.Equal(t, int64(3), res.TotalAccounts)
	assert.Equal(t, int64(3), res.TotalBJJs)
	// Til does not set fees
	assert.Equal(t, float64(0), res.AvgTransactionFee)
}

func TestGetMetricsEmpty(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetMetrics(0)
	assert.NoError(t, err)
}

func TestGetAvgTxFeeEmpty(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetAvgTxFee()
	assert.NoError(t, err)
}

func TestGetLastL1TxsNum(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetLastL1TxsNum()
	assert.NoError(t, err)
}

func TestGetLastTxsPosition(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetLastTxsPosition(0)
	assert.Equal(t, sql.ErrNoRows.Error(), err.Error())
}

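// TestGetFirstBatchBlockNumBySlot checks that GetFirstBatchBlockNumBySlot
// returns the block number of the first batch forged in a given slot, and
// sql.ErrNoRows for a slot in which no batch was forged.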
func TestGetFirstBatchBlockNumBySlot(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
Type: Blockchain
// Slot = 0
> block // 2
> block // 3
> block // 4
> block // 5
// Slot = 1
> block // 6
> block // 7
> batch
> block // 8
> block // 9
// Slot = 2
> batch
> block // 10
> block // 11
> block // 12
> block // 13
`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocks(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	for i := range blocks {
		for j := range blocks[i].Rollup.Batches {
			blocks[i].Rollup.Batches[j].Batch.SlotNum = int64(i) / 4
		}
	}
	// Add all blocks
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	_, err = historyDB.GetFirstBatchBlockNumBySlot(0)
	require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
	bn1, err := historyDB.GetFirstBatchBlockNumBySlot(1)
	require.NoError(t, err)
	assert.Equal(t, int64(8), bn1)
	bn2, err := historyDB.GetFirstBatchBlockNumBySlot(2)
	require.NoError(t, err)
	assert.Equal(t, int64(10), bn2)
}

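// TestTxItemID checks that the L1 user transactions returned by
// GetAllL1UserTxs come out with consecutive Position values, restarting from
// 0 at the start of each new group of queued transactions.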
func TestTxItemID(t *testing.T) {
	test.WipeDB(historyDB.DB())
	testUsersLen := 10
	var set []til.Instruction
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeCreateAccountDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1000000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(100000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeDepositTransfer,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(10000 * int64(user+1)),
			Amount:        big.NewInt(1000 * int64(user+1)),
			From:          fmt.Sprintf("User%02d", user),
			To:            fmt.Sprintf("User%02d", (user+1)%testUsersLen),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeForceTransfer,
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(100 * int64(user+1)),
			DepositAmount: big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
			To:            fmt.Sprintf("User%02d", (user+1)%testUsersLen),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeForceExit,
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(10 * int64(user+1)),
			DepositAmount: big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	var chainID uint16 = 0
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Add all blocks
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	txs, err := historyDB.GetAllL1UserTxs()
	require.NoError(t, err)
	position := 0
	for _, tx := range txs {
		if tx.Position == 0 {
			position = 0
		}
		assert.Equal(t, position, tx.Position)
		position++
	}
}

// setTestBlocks WARNING: this will delete the blocks and recreate them
func setTestBlocks(from, to int64) []common.Block {
	test.WipeDB(historyDB.DB())
	blocks := test.GenBlocks(from, to)
	if err := historyDB.AddBlocks(blocks); err != nil {
		panic(err)
	}
	return blocks
}