
1454 lines · 46 KiB

Update missing parts, improve til, and more
- Node
  - Update the configuration to initialize the interface to all the smart contracts
- Common
  - Move the BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add a `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs out of BatchData into a separate struct in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil (see the sketch after this entry)
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not yet known (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`
4 years ago
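The til item above requires that no *big.Int field is left nil. A minimal sketch of that defaulting step, using a stand-in struct with only the two fields named in the commit rather than the repository's actual L1Tx type:

```go
// Sketch only: default nil *big.Int amounts to zero so downstream code
// never dereferences a nil big.Int. The l1Tx struct here is illustrative,
// not the project's real L1Tx definition.
package main

import (
	"fmt"
	"math/big"
)

type l1Tx struct {
	LoadAmount *big.Int
	Amount     *big.Int
}

// normalize fills unused amounts with zero instead of leaving them nil.
func normalize(tx *l1Tx) {
	if tx.LoadAmount == nil {
		tx.LoadAmount = big.NewInt(0)
	}
	if tx.Amount == nil {
		tx.Amount = big.NewInt(0)
	}
}

func main() {
	tx := &l1Tx{Amount: big.NewInt(42)} // LoadAmount not used by this tx
	normalize(tx)
	fmt.Println(tx.LoadAmount, tx.Amount) // prints: 0 42
}
```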
Update coordinator, call all api update functions
- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer:
  - When mapping events by TxHash, use an array to support multiple calls to the same function happening in the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for fetching the transaction receipt); see the retry sketch after this entry
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
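The TxManager item above describes retrying ethereum RPC calls several times before treating the error as final. A minimal sketch of that retry pattern; the attempt count, delay, and the wrapped call are illustrative assumptions, not the repository's actual TxManager API or values:

```go
// Sketch only: retry a fallible call a fixed number of times before
// giving up and returning the last error.
package main

import (
	"errors"
	"fmt"
	"time"
)

// withRetries runs fn up to attempts times, waiting delay between tries.
func withRetries(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := withRetries(3, 10*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("transient RPC failure")
		}
		return nil // e.g. a forgeBatch call or receipt fetch that finally succeeds
	})
	fmt.Println(calls, err) // prints: 3 <nil>
}
```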
Redo coordinator structure, connect API to node
- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int where required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead (see the sketch after this entry)
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use a context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update the test (a minimal test to manually check that the coordinator starts)
- HistoryDB:
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
  - Add an Auction field to the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
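The coordinator item above replaces dedicated stop/stopped channels with context.Context and sync.WaitGroup. A minimal sketch of that start/stop pattern, with an illustrative `pipeline` type rather than the coordinator's actual structs: canceling the context asks the goroutine to exit, and the WaitGroup guarantees Stop only returns once it has.

```go
// Sketch only: goroutine lifecycle driven by context cancellation and
// tracked with a WaitGroup, instead of stop/stopped channels.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

type pipeline struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

// Start launches the worker goroutine; it exits when the context is canceled.
func (p *pipeline) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	p.cancel = cancel
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return
			case <-time.After(50 * time.Millisecond):
				fmt.Println("pipeline step") // e.g. prepare batch, request proof
			}
		}
	}()
}

// Stop cancels the context and waits for the goroutine to finish.
func (p *pipeline) Stop() {
	p.cancel()
	p.wg.Wait()
}

func main() {
	var p pipeline
	p.Start()
	time.Sleep(120 * time.Millisecond)
	p.Stop()
}
```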
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
  1. package historydb
  2. import (
  3. "database/sql"
  4. "fmt"
  5. "math"
  6. "math/big"
  7. "os"
  8. "strings"
  9. "testing"
  10. "time"
  11. ethCommon "github.com/ethereum/go-ethereum/common"
  12. "github.com/hermeznetwork/hermez-node/common"
  13. dbUtils "github.com/hermeznetwork/hermez-node/db"
  14. "github.com/hermeznetwork/hermez-node/log"
  15. "github.com/hermeznetwork/hermez-node/test"
  16. "github.com/hermeznetwork/hermez-node/test/til"
  17. "github.com/hermeznetwork/tracerr"
  18. "github.com/stretchr/testify/assert"
  19. "github.com/stretchr/testify/require"
  20. )
  21. var historyDB *HistoryDB
  22. var historyDBWithACC *HistoryDB
  23. // In order to run the test you need to run a Postgres DB with
  24. // a database named "history" that is accessible by
  25. // user: "hermez"
  26. // pass: set it using the env var POSTGRES_PASS
  27. // This can be achieved by running: POSTGRES_PASS=your_strong_pass && sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=history -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD=$POSTGRES_PASS -d postgres && sleep 2s && sudo docker exec -it hermez-db-test psql -a history -U hermez -c "CREATE DATABASE l2;"
  28. // After running the test you can stop the container by running: sudo docker kill hermez-db-test
  29. // If you already did that for the L2DB you don't have to do it again
  30. func TestMain(m *testing.M) {
  31. // init DB
  32. pass := os.Getenv("POSTGRES_PASS")
  33. db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
  34. if err != nil {
  35. panic(err)
  36. }
  37. historyDB = NewHistoryDB(db, nil)
  38. if err != nil {
  39. panic(err)
  40. }
  41. apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
  42. historyDBWithACC = NewHistoryDB(db, apiConnCon)
  43. // Run tests
  44. result := m.Run()
  45. // Close DB
  46. if err := db.Close(); err != nil {
  47. log.Error("Error closing the history DB:", err)
  48. }
  49. os.Exit(result)
  50. }
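// NOTE (illustrative sketch, not part of the original test file): TestMain above
// already shows the full connection pattern; the same two calls are enough to
// open a HistoryDB handle from non-test code. The wrapper name openHistoryDB and
// the hardcoded host/port are assumptions made only for this example.
//
//	func openHistoryDB(pass string) (*HistoryDB, error) {
//		db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
//		if err != nil {
//			return nil, err
//		}
//		// A nil connection controller is fine when this instance does not serve the API.
//		return NewHistoryDB(db, nil), nil
//	}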
  51. func TestBlocks(t *testing.T) {
  52. var fromBlock, toBlock int64
  53. fromBlock = 0
  54. toBlock = 7
  55. // Reset DB
  56. test.WipeDB(historyDB.DB())
  57. // Generate blocks using til
  58. set1 := `
  59. Type: Blockchain
  60. // block 0 is stored as default in the DB
  61. // block 1 does not exist
  62. > block // blockNum=2
  63. > block // blockNum=3
  64. > block // blockNum=4
  65. > block // blockNum=5
  66. > block // blockNum=6
  67. `
  68. tc := til.NewContext(uint16(0), 1)
  69. blocks, err := tc.GenerateBlocks(set1)
  70. require.NoError(t, err)
  71. // Set a block timestamp with the local (non-UTC) zone to later check that it is fetched back in UTC
  72. timestamp := time.Now().Add(time.Second * 13)
  73. blocks[fromBlock].Block.Timestamp = timestamp
  74. // Insert blocks into DB
  75. for i := 0; i < len(blocks); i++ {
  76. err := historyDB.AddBlock(&blocks[i].Block)
  77. assert.NoError(t, err)
  78. }
  79. // Add block 0 to the generated blocks
  80. blocks = append(
  81. []common.BlockData{{Block: test.Block0}}, //nolint:gofmt
  82. blocks...,
  83. )
  84. // Get all blocks from DB
  85. fetchedBlocks, err := historyDB.getBlocks(fromBlock, toBlock)
  86. assert.Equal(t, len(blocks), len(fetchedBlocks))
  87. // Compare generated blocks vs fetched blocks
  88. assert.NoError(t, err)
  89. for i := range fetchedBlocks {
  90. assertEqualBlock(t, &blocks[i].Block, &fetchedBlocks[i])
  91. }
  92. // Compare the saved timestamp vs the fetched one
  93. nameZoneUTC, offsetUTC := timestamp.UTC().Zone()
  94. zoneFetchedBlock, offsetFetchedBlock := fetchedBlocks[fromBlock].Timestamp.Zone()
  95. assert.Equal(t, nameZoneUTC, zoneFetchedBlock)
  96. assert.Equal(t, offsetUTC, offsetFetchedBlock)
  97. // Get blocks from the DB one by one
  98. for i := int64(2); i < toBlock; i++ { // avoid block 0 for simplicity
  99. fetchedBlock, err := historyDB.GetBlock(i)
  100. assert.NoError(t, err)
  101. assertEqualBlock(t, &blocks[i-1].Block, fetchedBlock)
  102. }
  103. // Get last block
  104. lastBlock, err := historyDB.GetLastBlock()
  105. assert.NoError(t, err)
  106. assertEqualBlock(t, &blocks[len(blocks)-1].Block, lastBlock)
  107. }
  108. func assertEqualBlock(t *testing.T, expected *common.Block, actual *common.Block) {
  109. assert.Equal(t, expected.Num, actual.Num)
  110. assert.Equal(t, expected.Hash, actual.Hash)
  111. assert.Equal(t, expected.Timestamp.Unix(), actual.Timestamp.Unix())
  112. }
  113. func TestBatches(t *testing.T) {
  114. // Reset DB
  115. test.WipeDB(historyDB.DB())
  116. // Generate batches using til (and blocks for foreign key)
  117. set := `
  118. Type: Blockchain
  119. AddToken(1) // Will have value in USD
  120. AddToken(2) // Will NOT have value in USD
  121. CreateAccountDeposit(1) A: 2000
  122. CreateAccountDeposit(2) A: 2000
  123. CreateAccountDeposit(1) B: 1000
  124. CreateAccountDeposit(2) B: 1000
  125. > batchL1
  126. > batchL1
  127. Transfer(1) A-B: 100 (5)
  128. Transfer(2) B-A: 100 (199)
  129. > batch // batchNum=2, L2 only batch, forges transfers (mixed case of with(out) USD value)
  130. > block
  131. Transfer(1) A-B: 100 (5)
  132. > batch // batchNum=3, L2 only batch, forges transfer (with USD value)
  133. Transfer(2) B-A: 100 (199)
  134. > batch // batchNum=4, L2 only batch, forges transfer (without USD value)
  135. > block
  136. `
  137. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  138. tilCfgExtra := til.ConfigExtra{
  139. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  140. CoordUser: "A",
  141. }
  142. blocks, err := tc.GenerateBlocks(set)
  143. require.NoError(t, err)
  144. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  145. require.NoError(t, err)
  146. // Insert to DB
  147. batches := []common.Batch{}
  148. tokensValue := make(map[common.TokenID]float64)
  149. lastL1TxsNum := new(int64)
  150. lastL1BatchBlockNum := int64(0)
  151. for _, block := range blocks {
  152. // Insert block
  153. assert.NoError(t, historyDB.AddBlock(&block.Block))
  154. // Insert tokens
  155. for i, token := range block.Rollup.AddedTokens {
  156. assert.NoError(t, historyDB.AddToken(&token)) //nolint:gosec
  157. if i%2 != 0 {
  158. // Set value to the token
  159. value := (float64(i) + 5) * 5.389329
  160. assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
  161. tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
  162. }
  163. }
  164. // Combine all generated batches into a single slice
  165. for _, batch := range block.Rollup.Batches {
  166. batches = append(batches, batch.Batch)
  167. forgeTxsNum := batch.Batch.ForgeL1TxsNum
  168. if forgeTxsNum != nil && (lastL1TxsNum == nil || *lastL1TxsNum < *forgeTxsNum) {
  169. *lastL1TxsNum = *forgeTxsNum
  170. lastL1BatchBlockNum = batch.Batch.EthBlockNum
  171. }
  172. }
  173. }
  174. // Insert batches
  175. assert.NoError(t, historyDB.AddBatches(batches))
  176. // Set expected total fee
  177. for _, batch := range batches {
  178. total := .0
  179. for tokenID, amount := range batch.CollectedFees {
  180. af := new(big.Float).SetInt(amount)
  181. amountFloat, _ := af.Float64()
  182. total += tokensValue[tokenID] * amountFloat
  183. }
  184. batch.TotalFeesUSD = &total
  185. }
  186. // Get batches from the DB
  187. fetchedBatches, err := historyDB.GetBatches(0, common.BatchNum(len(batches)+1))
  188. assert.NoError(t, err)
  189. assert.Equal(t, len(batches), len(fetchedBatches))
  190. for i, fetchedBatch := range fetchedBatches {
  191. assert.Equal(t, batches[i], fetchedBatch)
  192. }
  193. // Test GetLastBatchNum
  194. fetchedLastBatchNum, err := historyDB.GetLastBatchNum()
  195. assert.NoError(t, err)
  196. assert.Equal(t, batches[len(batches)-1].BatchNum, fetchedLastBatchNum)
  197. // Test GetLastBatch
  198. fetchedLastBatch, err := historyDB.GetLastBatch()
  199. assert.NoError(t, err)
  200. assert.Equal(t, &batches[len(batches)-1], fetchedLastBatch)
  201. // Test GetLastL1TxsNum
  202. fetchedLastL1TxsNum, err := historyDB.GetLastL1TxsNum()
  203. assert.NoError(t, err)
  204. assert.Equal(t, lastL1TxsNum, fetchedLastL1TxsNum)
  205. // Test GetLastL1BatchBlockNum
  206. fetchedLastL1BatchBlockNum, err := historyDB.GetLastL1BatchBlockNum()
  207. assert.NoError(t, err)
  208. assert.Equal(t, lastL1BatchBlockNum, fetchedLastL1BatchBlockNum)
  209. // Test GetBatch
  210. fetchedBatch, err := historyDB.GetBatch(1)
  211. require.NoError(t, err)
  212. assert.Equal(t, &batches[0], fetchedBatch)
  213. _, err = historyDB.GetBatch(common.BatchNum(len(batches) + 1))
  214. assert.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
  215. }
  216. func TestBids(t *testing.T) {
  217. const fromBlock int64 = 1
  218. const toBlock int64 = 5
  219. // Prepare blocks in the DB
  220. blocks := setTestBlocks(fromBlock, toBlock)
  221. // Generate fake coordinators
  222. const nCoords = 5
  223. coords := test.GenCoordinators(nCoords, blocks)
  224. err := historyDB.AddCoordinators(coords)
  225. assert.NoError(t, err)
  226. // Generate fake bids
  227. const nBids = 20
  228. bids := test.GenBids(nBids, blocks, coords)
  229. err = historyDB.AddBids(bids)
  230. assert.NoError(t, err)
  231. // Fetch bids
  232. fetchedBids, err := historyDB.GetAllBids()
  233. assert.NoError(t, err)
  234. // Compare fetched bids vs generated bids
  235. for i, bid := range fetchedBids {
  236. assert.Equal(t, bids[i], bid)
  237. }
  238. }
  239. func TestTokens(t *testing.T) {
  240. const fromBlock int64 = 1
  241. const toBlock int64 = 5
  242. // Prepare blocks in the DB
  243. blocks := setTestBlocks(fromBlock, toBlock)
  244. // Generate fake tokens
  245. const nTokens = 5
  246. tokens, ethToken := test.GenTokens(nTokens, blocks)
  247. err := historyDB.AddTokens(tokens)
  248. assert.NoError(t, err)
  249. tokens = append([]common.Token{ethToken}, tokens...)
  250. // Fetch tokens
  251. fetchedTokens, err := historyDB.GetTokensTest()
  252. assert.NoError(t, err)
  253. // Compare fetched tokens vs generated tokens
  254. // At this point none of the tokens should have USD or USDUpdate set yet
  255. for i, token := range fetchedTokens {
  256. assert.Equal(t, tokens[i].TokenID, token.TokenID)
  257. assert.Equal(t, tokens[i].EthBlockNum, token.EthBlockNum)
  258. assert.Equal(t, tokens[i].EthAddr, token.EthAddr)
  259. assert.Equal(t, tokens[i].Name, token.Name)
  260. assert.Equal(t, tokens[i].Symbol, token.Symbol)
  261. assert.Nil(t, token.USD)
  262. assert.Nil(t, token.USDUpdate)
  263. }
  264. // Update token value
  265. for i, token := range tokens {
  266. value := 1.01 * float64(i)
  267. assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
  268. }
  269. // Fetch tokens
  270. fetchedTokens, err = historyDB.GetTokensTest()
  271. assert.NoError(t, err)
  272. // Compare fetched tokens vs generated tokens
  274. // All the tokens should now have USDUpdate set by the DB trigger
  274. for i, token := range fetchedTokens {
  275. value := 1.01 * float64(i)
  276. assert.Equal(t, value, *token.USD)
  277. nameZone, offset := token.USDUpdate.Zone()
  278. assert.Equal(t, "UTC", nameZone)
  279. assert.Equal(t, 0, offset)
  280. }
  281. }
  282. func TestTokensUTF8(t *testing.T) {
  283. // Reset DB
  284. test.WipeDB(historyDB.DB())
  285. const fromBlock int64 = 1
  286. const toBlock int64 = 5
  287. // Prepare blocks in the DB
  288. blocks := setTestBlocks(fromBlock, toBlock)
  289. // Generate fake tokens
  290. const nTokens = 5
  291. tokens, ethToken := test.GenTokens(nTokens, blocks)
  292. nonUTFTokens := make([]common.Token, len(tokens)+1)
  293. // Force token.Name and token.Symbol to be non-UTF-8 strings
  294. for i, token := range tokens {
  295. token.Name = fmt.Sprint("NON-UTF8-NAME-\xc5-", i)
  296. token.Symbol = fmt.Sprint("S-\xc5-", i)
  297. tokens[i] = token
  298. nonUTFTokens[i] = token
  299. }
  300. err := historyDB.AddTokens(tokens)
  301. assert.NoError(t, err)
  302. // Work with nonUTFTokens from here on, since the tokens slice gets updated and the non-UTF-8 characters are lost
  303. nonUTFTokens = append([]common.Token{ethToken}, nonUTFTokens...)
  304. // Fetch tokens
  305. fetchedTokens, err := historyDB.GetTokensTest()
  306. assert.NoError(t, err)
  307. // Compare fetched tokens vs generated tokens
  308. // At this point none of the tokens should have USD or USDUpdate set yet
  309. for i, token := range fetchedTokens {
  310. assert.Equal(t, nonUTFTokens[i].TokenID, token.TokenID)
  311. assert.Equal(t, nonUTFTokens[i].EthBlockNum, token.EthBlockNum)
  312. assert.Equal(t, nonUTFTokens[i].EthAddr, token.EthAddr)
  313. assert.Equal(t, strings.ToValidUTF8(nonUTFTokens[i].Name, " "), token.Name)
  314. assert.Equal(t, strings.ToValidUTF8(nonUTFTokens[i].Symbol, " "), token.Symbol)
  315. assert.Nil(t, token.USD)
  316. assert.Nil(t, token.USDUpdate)
  317. }
  318. // Update token value
  319. for i, token := range nonUTFTokens {
  320. value := 1.01 * float64(i)
  321. assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
  322. }
  323. // Fetch tokens
  324. fetchedTokens, err = historyDB.GetTokensTest()
  325. assert.NoError(t, err)
  326. // Compare fetched tokens vs generated tokens
  328. // All the tokens should now have USDUpdate set by the DB trigger
  328. for i, token := range fetchedTokens {
  329. value := 1.01 * float64(i)
  330. assert.Equal(t, value, *token.USD)
  331. nameZone, offset := token.USDUpdate.Zone()
  332. assert.Equal(t, "UTC", nameZone)
  333. assert.Equal(t, 0, offset)
  334. }
  335. }
  336. func TestAccounts(t *testing.T) {
  337. const fromBlock int64 = 1
  338. const toBlock int64 = 5
  339. // Prepare blocks in the DB
  340. blocks := setTestBlocks(fromBlock, toBlock)
  341. // Generate fake tokens
  342. const nTokens = 5
  343. tokens, ethToken := test.GenTokens(nTokens, blocks)
  344. err := historyDB.AddTokens(tokens)
  345. assert.NoError(t, err)
  346. tokens = append([]common.Token{ethToken}, tokens...)
  347. // Generate fake batches
  348. const nBatches = 10
  349. batches := test.GenBatches(nBatches, blocks)
  350. err = historyDB.AddBatches(batches)
  351. assert.NoError(t, err)
  352. // Generate fake accounts
  353. const nAccounts = 3
  354. accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
  355. err = historyDB.AddAccounts(accs)
  356. assert.NoError(t, err)
  357. // Fetch accounts
  358. fetchedAccs, err := historyDB.GetAllAccounts()
  359. assert.NoError(t, err)
  360. // Compare fetched accounts vs generated accounts
  361. for i, acc := range fetchedAccs {
  362. accs[i].Balance = nil
  363. assert.Equal(t, accs[i], acc)
  364. }
  365. }
  366. func TestTxs(t *testing.T) {
  367. // Reset DB
  368. test.WipeDB(historyDB.DB())
  369. set := `
  370. Type: Blockchain
  371. AddToken(1)
  372. AddToken(2)
  373. CreateAccountDeposit(1) A: 10
  374. CreateAccountDeposit(1) B: 10
  375. > batchL1
  376. > batchL1
  377. > block
  378. CreateAccountDepositTransfer(1) C-A: 20, 10
  379. CreateAccountCoordinator(1) User0
  380. > batchL1
  381. > batchL1
  382. > block
  383. Deposit(1) B: 10
  384. Deposit(1) C: 10
  385. Transfer(1) C-A : 10 (1)
  386. Transfer(1) B-C : 10 (1)
  387. Transfer(1) A-B : 10 (1)
  388. Exit(1) A: 10 (1)
  389. > batch
  390. > block
  391. DepositTransfer(1) A-B: 10, 10
  392. > batchL1
  393. > block
  394. ForceTransfer(1) A-B: 10
  395. ForceExit(1) A: 5
  396. > batchL1
  397. > batchL1
  398. > block
  399. CreateAccountDeposit(2) D: 10
  400. > batchL1
  401. > block
  402. CreateAccountDeposit(2) E: 10
  403. > batchL1
  404. > batchL1
  405. > block
  406. `
  407. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  408. tilCfgExtra := til.ConfigExtra{
  409. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  410. CoordUser: "A",
  411. }
  412. blocks, err := tc.GenerateBlocks(set)
  413. require.NoError(t, err)
  414. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  415. require.NoError(t, err)
  416. // Sanity check
  417. require.Equal(t, 7, len(blocks))
  418. require.Equal(t, 2, len(blocks[0].Rollup.L1UserTxs))
  419. require.Equal(t, 1, len(blocks[1].Rollup.L1UserTxs))
  420. require.Equal(t, 2, len(blocks[2].Rollup.L1UserTxs))
  421. require.Equal(t, 1, len(blocks[3].Rollup.L1UserTxs))
  422. require.Equal(t, 2, len(blocks[4].Rollup.L1UserTxs))
  423. require.Equal(t, 1, len(blocks[5].Rollup.L1UserTxs))
  424. require.Equal(t, 1, len(blocks[6].Rollup.L1UserTxs))
  425. var null *common.BatchNum = nil
  426. var txID common.TxID
  427. // Insert blocks into DB
  428. for i := range blocks {
  429. if i == len(blocks)-1 {
  430. blocks[i].Block.Timestamp = time.Now()
  431. dbL1Txs, err := historyDB.GetAllL1UserTxs()
  432. assert.NoError(t, err)
  433. // Check batch_num is nil before forging
  434. assert.Equal(t, null, dbL1Txs[len(dbL1Txs)-1].BatchNum)
  435. // Save this TxId
  436. txID = dbL1Txs[len(dbL1Txs)-1].TxID
  437. }
  438. err = historyDB.AddBlockSCData(&blocks[i])
  439. assert.NoError(t, err)
  440. }
  441. // Check blocks
  442. dbBlocks, err := historyDB.GetAllBlocks()
  443. assert.NoError(t, err)
  444. assert.Equal(t, len(blocks)+1, len(dbBlocks))
  445. // Check batches
  446. batches, err := historyDB.GetAllBatches()
  447. assert.NoError(t, err)
  448. assert.Equal(t, 11, len(batches))
  449. // Check L1 Transactions
  450. dbL1Txs, err := historyDB.GetAllL1UserTxs()
  451. assert.NoError(t, err)
  452. assert.Equal(t, 10, len(dbL1Txs))
  453. // Tx Type
  454. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[0].Type)
  455. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[1].Type)
  456. assert.Equal(t, common.TxTypeCreateAccountDepositTransfer, dbL1Txs[2].Type)
  457. assert.Equal(t, common.TxTypeDeposit, dbL1Txs[3].Type)
  458. assert.Equal(t, common.TxTypeDeposit, dbL1Txs[4].Type)
  459. assert.Equal(t, common.TxTypeDepositTransfer, dbL1Txs[5].Type)
  460. assert.Equal(t, common.TxTypeForceTransfer, dbL1Txs[6].Type)
  461. assert.Equal(t, common.TxTypeForceExit, dbL1Txs[7].Type)
  462. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[8].Type)
  463. assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[9].Type)
  464. // Tx ID
  465. assert.Equal(t, "0x00e979da4b80d60a17ce56fa19278c6f3a7e1b43359fb8a8ea46d0264de7d653ab", dbL1Txs[0].TxID.String())
  466. assert.Equal(t, "0x00af9bf96eb60f2d618519402a2f6b07057a034fa2baefd379fe8e1c969f1c5cf4", dbL1Txs[1].TxID.String())
  467. assert.Equal(t, "0x00a256ee191905243320ea830840fd666a73c7b4e6f89ce4bd47ddf998dfee627a", dbL1Txs[2].TxID.String())
  468. assert.Equal(t, "0x00930696d03ae0a1e6150b6ccb88043cb539a4e06a7f8baf213029ce9a0600197e", dbL1Txs[3].TxID.String())
  469. assert.Equal(t, "0x00de8e41d49f23832f66364e8702c4b78237eb0c95542a94d34188e51696e74fc8", dbL1Txs[4].TxID.String())
  470. assert.Equal(t, "0x007a44d6d60b15f3789d4ff49d62377a70255bf13a8d42e41ef49bf4c7b77d2c1b", dbL1Txs[5].TxID.String())
  471. assert.Equal(t, "0x00c33f316240f8d33a973db2d0e901e4ac1c96de30b185fcc6b63dac4d0e147bd4", dbL1Txs[6].TxID.String())
  472. assert.Equal(t, "0x00b55f0882c5229d1be3d9d3c1a076290f249cd0bae5ae6e609234606befb91233", dbL1Txs[7].TxID.String())
  473. assert.Equal(t, "0x009133d4c8a412ca45f50bccdbcfdb8393b0dd8efe953d0cc3bcc82796b7a581b6", dbL1Txs[8].TxID.String())
  474. assert.Equal(t, "0x00f5e8ab141ac16d673e654ba7747c2f12e93ea2c50ba6c05563752ca531968c62", dbL1Txs[9].TxID.String())
  475. // Tx From IDx
  476. assert.Equal(t, common.Idx(0), dbL1Txs[0].FromIdx)
  477. assert.Equal(t, common.Idx(0), dbL1Txs[1].FromIdx)
  478. assert.Equal(t, common.Idx(0), dbL1Txs[2].FromIdx)
  479. assert.NotEqual(t, common.Idx(0), dbL1Txs[3].FromIdx)
  480. assert.NotEqual(t, common.Idx(0), dbL1Txs[4].FromIdx)
  481. assert.NotEqual(t, common.Idx(0), dbL1Txs[5].FromIdx)
  482. assert.NotEqual(t, common.Idx(0), dbL1Txs[6].FromIdx)
  483. assert.NotEqual(t, common.Idx(0), dbL1Txs[7].FromIdx)
  484. assert.Equal(t, common.Idx(0), dbL1Txs[8].FromIdx)
  485. assert.Equal(t, common.Idx(0), dbL1Txs[9].FromIdx)
  486. assert.Equal(t, common.Idx(0), dbL1Txs[9].FromIdx)
  487. assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[6].FromIdx)
  488. assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[7].FromIdx)
  489. // Tx to IDx
  490. assert.Equal(t, dbL1Txs[2].ToIdx, dbL1Txs[5].FromIdx)
  491. assert.Equal(t, dbL1Txs[5].ToIdx, dbL1Txs[3].FromIdx)
  492. assert.Equal(t, dbL1Txs[6].ToIdx, dbL1Txs[3].FromIdx)
  493. // Token ID
  494. assert.Equal(t, common.TokenID(1), dbL1Txs[0].TokenID)
  495. assert.Equal(t, common.TokenID(1), dbL1Txs[1].TokenID)
  496. assert.Equal(t, common.TokenID(1), dbL1Txs[2].TokenID)
  497. assert.Equal(t, common.TokenID(1), dbL1Txs[3].TokenID)
  498. assert.Equal(t, common.TokenID(1), dbL1Txs[4].TokenID)
  499. assert.Equal(t, common.TokenID(1), dbL1Txs[5].TokenID)
  500. assert.Equal(t, common.TokenID(1), dbL1Txs[6].TokenID)
  501. assert.Equal(t, common.TokenID(1), dbL1Txs[7].TokenID)
  502. assert.Equal(t, common.TokenID(2), dbL1Txs[8].TokenID)
  503. assert.Equal(t, common.TokenID(2), dbL1Txs[9].TokenID)
  504. // Batch Number
  505. var bn common.BatchNum = common.BatchNum(2)
  506. assert.Equal(t, &bn, dbL1Txs[0].BatchNum)
  507. assert.Equal(t, &bn, dbL1Txs[1].BatchNum)
  508. bn = common.BatchNum(4)
  509. assert.Equal(t, &bn, dbL1Txs[2].BatchNum)
  510. bn = common.BatchNum(7)
  511. assert.Equal(t, &bn, dbL1Txs[3].BatchNum)
  512. assert.Equal(t, &bn, dbL1Txs[4].BatchNum)
  513. assert.Equal(t, &bn, dbL1Txs[5].BatchNum)
  514. bn = common.BatchNum(8)
  515. assert.Equal(t, &bn, dbL1Txs[6].BatchNum)
  516. assert.Equal(t, &bn, dbL1Txs[7].BatchNum)
  517. bn = common.BatchNum(10)
  518. assert.Equal(t, &bn, dbL1Txs[8].BatchNum)
  519. bn = common.BatchNum(11)
  520. assert.Equal(t, &bn, dbL1Txs[9].BatchNum)
  521. // eth_block_num
  522. assert.Equal(t, int64(2), dbL1Txs[0].EthBlockNum)
  523. assert.Equal(t, int64(2), dbL1Txs[1].EthBlockNum)
  524. assert.Equal(t, int64(3), dbL1Txs[2].EthBlockNum)
  525. assert.Equal(t, int64(4), dbL1Txs[3].EthBlockNum)
  526. assert.Equal(t, int64(4), dbL1Txs[4].EthBlockNum)
  527. assert.Equal(t, int64(5), dbL1Txs[5].EthBlockNum)
  528. assert.Equal(t, int64(6), dbL1Txs[6].EthBlockNum)
  529. assert.Equal(t, int64(6), dbL1Txs[7].EthBlockNum)
  530. assert.Equal(t, int64(7), dbL1Txs[8].EthBlockNum)
  531. assert.Equal(t, int64(8), dbL1Txs[9].EthBlockNum)
  532. // User Origin
  533. assert.Equal(t, true, dbL1Txs[0].UserOrigin)
  534. assert.Equal(t, true, dbL1Txs[1].UserOrigin)
  535. assert.Equal(t, true, dbL1Txs[2].UserOrigin)
  536. assert.Equal(t, true, dbL1Txs[3].UserOrigin)
  537. assert.Equal(t, true, dbL1Txs[4].UserOrigin)
  538. assert.Equal(t, true, dbL1Txs[5].UserOrigin)
  539. assert.Equal(t, true, dbL1Txs[6].UserOrigin)
  540. assert.Equal(t, true, dbL1Txs[7].UserOrigin)
  541. assert.Equal(t, true, dbL1Txs[8].UserOrigin)
  542. assert.Equal(t, true, dbL1Txs[9].UserOrigin)
  543. // Deposit Amount
  544. assert.Equal(t, big.NewInt(10), dbL1Txs[0].DepositAmount)
  545. assert.Equal(t, big.NewInt(10), dbL1Txs[1].DepositAmount)
  546. assert.Equal(t, big.NewInt(20), dbL1Txs[2].DepositAmount)
  547. assert.Equal(t, big.NewInt(10), dbL1Txs[3].DepositAmount)
  548. assert.Equal(t, big.NewInt(10), dbL1Txs[4].DepositAmount)
  549. assert.Equal(t, big.NewInt(10), dbL1Txs[5].DepositAmount)
  550. assert.Equal(t, big.NewInt(0), dbL1Txs[6].DepositAmount)
  551. assert.Equal(t, big.NewInt(0), dbL1Txs[7].DepositAmount)
  552. assert.Equal(t, big.NewInt(10), dbL1Txs[8].DepositAmount)
  553. assert.Equal(t, big.NewInt(10), dbL1Txs[9].DepositAmount)
  554. // Check saved txID's batch_num is not nil
  555. assert.Equal(t, txID, dbL1Txs[len(dbL1Txs)-2].TxID)
  556. assert.NotEqual(t, null, dbL1Txs[len(dbL1Txs)-2].BatchNum)
  557. // Check Coordinator TXs
  558. coordTxs, err := historyDB.GetAllL1CoordinatorTxs()
  559. assert.NoError(t, err)
  560. assert.Equal(t, 1, len(coordTxs))
  561. assert.Equal(t, common.TxTypeCreateAccountDeposit, coordTxs[0].Type)
  562. assert.Equal(t, false, coordTxs[0].UserOrigin)
  563. // Check L2 TXs
  564. dbL2Txs, err := historyDB.GetAllL2Txs()
  565. assert.NoError(t, err)
  566. assert.Equal(t, 4, len(dbL2Txs))
  567. // Tx Type
  568. assert.Equal(t, common.TxTypeTransfer, dbL2Txs[0].Type)
  569. assert.Equal(t, common.TxTypeTransfer, dbL2Txs[1].Type)
  570. assert.Equal(t, common.TxTypeTransfer, dbL2Txs[2].Type)
  571. assert.Equal(t, common.TxTypeExit, dbL2Txs[3].Type)
  572. // Tx ID
  573. assert.Equal(t, "0x02d709307533c4e3c03f20751fc4d72bc18b225d14f9616525540a64342c7c350d", dbL2Txs[0].TxID.String())
  574. assert.Equal(t, "0x02e88bc5503f282cca045847668511290e642410a459bb67b1fafcd1b6097c149c", dbL2Txs[1].TxID.String())
  575. assert.Equal(t, "0x027911262b43315c0b24942a02fe228274b6e4d57a476bfcdd7a324b3091362c7d", dbL2Txs[2].TxID.String())
  576. assert.Equal(t, "0x02f572b63f2a5c302e1b9337ea6944bfbac3d199e4ddd262b5a53759c72ec10ee6", dbL2Txs[3].TxID.String())
  577. // Tx From and To IDx
  578. assert.Equal(t, dbL2Txs[0].ToIdx, dbL2Txs[2].FromIdx)
  579. assert.Equal(t, dbL2Txs[1].ToIdx, dbL2Txs[0].FromIdx)
  580. assert.Equal(t, dbL2Txs[2].ToIdx, dbL2Txs[1].FromIdx)
  581. // Batch Number
  582. assert.Equal(t, common.BatchNum(5), dbL2Txs[0].BatchNum)
  583. assert.Equal(t, common.BatchNum(5), dbL2Txs[1].BatchNum)
  584. assert.Equal(t, common.BatchNum(5), dbL2Txs[2].BatchNum)
  585. assert.Equal(t, common.BatchNum(5), dbL2Txs[3].BatchNum)
  586. // eth_block_num
  587. assert.Equal(t, int64(4), dbL2Txs[0].EthBlockNum)
  588. assert.Equal(t, int64(4), dbL2Txs[1].EthBlockNum)
  589. assert.Equal(t, int64(4), dbL2Txs[2].EthBlockNum)
  590. // Amount
  591. assert.Equal(t, big.NewInt(10), dbL2Txs[0].Amount)
  592. assert.Equal(t, big.NewInt(10), dbL2Txs[1].Amount)
  593. assert.Equal(t, big.NewInt(10), dbL2Txs[2].Amount)
  594. assert.Equal(t, big.NewInt(10), dbL2Txs[3].Amount)
  595. }
  596. func TestExitTree(t *testing.T) {
  597. nBatches := 17
  598. blocks := setTestBlocks(1, 10)
  599. batches := test.GenBatches(nBatches, blocks)
  600. err := historyDB.AddBatches(batches)
  601. assert.NoError(t, err)
  602. const nTokens = 50
  603. tokens, ethToken := test.GenTokens(nTokens, blocks)
  604. err = historyDB.AddTokens(tokens)
  605. assert.NoError(t, err)
  606. tokens = append([]common.Token{ethToken}, tokens...)
  607. const nAccounts = 3
  608. accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
  609. assert.NoError(t, historyDB.AddAccounts(accs))
  610. exitTree := test.GenExitTree(nBatches, batches, accs, blocks)
  611. err = historyDB.AddExitTree(exitTree)
  612. assert.NoError(t, err)
  613. }
  614. func TestGetUnforgedL1UserTxs(t *testing.T) {
  615. test.WipeDB(historyDB.DB())
  616. set := `
  617. Type: Blockchain
  618. AddToken(1)
  619. AddToken(2)
  620. AddToken(3)
  621. CreateAccountDeposit(1) A: 20
  622. CreateAccountDeposit(2) A: 20
  623. CreateAccountDeposit(1) B: 5
  624. CreateAccountDeposit(1) C: 5
  625. CreateAccountDeposit(1) D: 5
  626. > block
  627. `
  628. tc := til.NewContext(uint16(0), 128)
  629. blocks, err := tc.GenerateBlocks(set)
  630. require.NoError(t, err)
  631. // Sanity check
  632. require.Equal(t, 1, len(blocks))
  633. require.Equal(t, 5, len(blocks[0].Rollup.L1UserTxs))
  634. toForgeL1TxsNum := int64(1)
  635. for i := range blocks {
  636. err = historyDB.AddBlockSCData(&blocks[i])
  637. require.NoError(t, err)
  638. }
  639. l1UserTxs, err := historyDB.GetUnforgedL1UserTxs(toForgeL1TxsNum)
  640. require.NoError(t, err)
  641. assert.Equal(t, 5, len(l1UserTxs))
  642. assert.Equal(t, blocks[0].Rollup.L1UserTxs, l1UserTxs)
  643. // No l1UserTxs for this toForgeL1TxsNum
  644. l1UserTxs, err = historyDB.GetUnforgedL1UserTxs(2)
  645. require.NoError(t, err)
  646. assert.Equal(t, 0, len(l1UserTxs))
  647. }
  648. func exampleInitSCVars() (*common.RollupVariables, *common.AuctionVariables, *common.WDelayerVariables) {
  649. //nolint:govet
  650. rollup := &common.RollupVariables{
  651. 0,
  652. big.NewInt(10),
  653. 12,
  654. 13,
  655. [5]common.BucketParams{},
  656. false,
  657. }
  658. //nolint:govet
  659. auction := &common.AuctionVariables{
  660. 0,
  661. ethCommon.BigToAddress(big.NewInt(2)),
  662. ethCommon.BigToAddress(big.NewInt(3)),
  663. "https://boot.coord.com",
  664. [6]*big.Int{
  665. big.NewInt(1), big.NewInt(2), big.NewInt(3),
  666. big.NewInt(4), big.NewInt(5), big.NewInt(6),
  667. },
  668. 0,
  669. 2,
  670. 4320,
  671. [3]uint16{10, 11, 12},
  672. 1000,
  673. 20,
  674. }
  675. //nolint:govet
  676. wDelayer := &common.WDelayerVariables{
  677. 0,
  678. ethCommon.BigToAddress(big.NewInt(2)),
  679. ethCommon.BigToAddress(big.NewInt(3)),
  680. 13,
  681. 14,
  682. false,
  683. }
  684. return rollup, auction, wDelayer
  685. }
  686. func TestSetInitialSCVars(t *testing.T) {
  687. test.WipeDB(historyDB.DB())
  688. _, _, _, err := historyDB.GetSCVars()
  689. assert.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
  690. rollup, auction, wDelayer := exampleInitSCVars()
  691. err = historyDB.SetInitialSCVars(rollup, auction, wDelayer)
  692. require.NoError(t, err)
  693. dbRollup, dbAuction, dbWDelayer, err := historyDB.GetSCVars()
  694. require.NoError(t, err)
  695. require.Equal(t, rollup, dbRollup)
  696. require.Equal(t, auction, dbAuction)
  697. require.Equal(t, wDelayer, dbWDelayer)
  698. }
  699. func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
  700. test.WipeDB(historyDB.DB())
  701. set := `
  702. Type: Blockchain
  703. AddToken(1)
  704. CreateAccountDeposit(1) A: 2000
  705. CreateAccountDeposit(1) B: 500
  706. CreateAccountDeposit(1) C: 500
  707. > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{*}
  708. > block // blockNum=2
  709. > batchL1 // forge defined L1UserTxs{*}
  710. > block // blockNum=3
  711. `
  712. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  713. tilCfgExtra := til.ConfigExtra{
  714. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  715. CoordUser: "A",
  716. }
  717. blocks, err := tc.GenerateBlocks(set)
  718. require.NoError(t, err)
  719. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  720. require.NoError(t, err)
  721. err = tc.FillBlocksForgedL1UserTxs(blocks)
  722. require.NoError(t, err)
  723. // Add only first block so that the L1UserTxs are not marked as forged
  724. for i := range blocks[:1] {
  725. err = historyDB.AddBlockSCData(&blocks[i])
  726. require.NoError(t, err)
  727. }
  728. // Add second batch to trigger the update of the batch_num,
  729. // while avoiding the implicit call of setExtraInfoForgedL1UserTxs
  730. err = historyDB.addBlock(historyDB.db, &blocks[1].Block)
  731. require.NoError(t, err)
  732. err = historyDB.addBatch(historyDB.db, &blocks[1].Rollup.Batches[0].Batch)
  733. require.NoError(t, err)
  734. err = historyDB.addAccounts(historyDB.db, blocks[1].Rollup.Batches[0].CreatedAccounts)
  735. require.NoError(t, err)
  736. // Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block
  737. l1Txs := blocks[1].Rollup.Batches[0].L1UserTxs
  738. require.Equal(t, 3, len(l1Txs))
  739. // Change some values to test all cases
  740. l1Txs[1].EffectiveAmount = big.NewInt(0)
  741. l1Txs[2].EffectiveDepositAmount = big.NewInt(0)
  742. l1Txs[2].EffectiveAmount = big.NewInt(0)
  743. err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.db, l1Txs)
  744. require.NoError(t, err)
  745. dbL1Txs, err := historyDB.GetAllL1UserTxs()
  746. require.NoError(t, err)
  747. for i, tx := range dbL1Txs {
  748. log.Infof("%d %v %v", i, tx.EffectiveAmount, tx.EffectiveDepositAmount)
  749. assert.NotNil(t, tx.EffectiveAmount)
  750. assert.NotNil(t, tx.EffectiveDepositAmount)
  751. switch tx.TxID {
  752. case l1Txs[0].TxID:
  753. assert.Equal(t, l1Txs[0].DepositAmount, tx.EffectiveDepositAmount)
  754. assert.Equal(t, l1Txs[0].Amount, tx.EffectiveAmount)
  755. case l1Txs[1].TxID:
  756. assert.Equal(t, l1Txs[1].DepositAmount, tx.EffectiveDepositAmount)
  757. assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
  758. case l1Txs[2].TxID:
  759. assert.Equal(t, big.NewInt(0), tx.EffectiveDepositAmount)
  760. assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
  761. }
  762. }
  763. }
  764. func TestUpdateExitTree(t *testing.T) {
  765. test.WipeDB(historyDB.DB())
  766. set := `
  767. Type: Blockchain
  768. AddToken(1)
  769. CreateAccountDeposit(1) C: 2000 // Idx=256+2=258
  770. CreateAccountDeposit(1) D: 500 // Idx=256+3=259
  771. CreateAccountCoordinator(1) A // Idx=256+0=256
  772. CreateAccountCoordinator(1) B // Idx=256+1=257
  773. > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{5}
  774. > batchL1 // forge defined L1UserTxs{5}, freeze L1UserTxs{nil}
  775. > block // blockNum=2
  776. ForceExit(1) A: 100
  777. ForceExit(1) B: 80
  778. Exit(1) C: 50 (172)
  779. Exit(1) D: 30 (172)
  780. > batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{3}
  781. > batchL1 // forge L1UserTxs{3}, freeze defined L1UserTxs{nil}
  782. > block // blockNum=3
  783. > block // blockNum=4 (empty block)
  784. > block // blockNum=5 (empty block)
  785. `
  786. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  787. tilCfgExtra := til.ConfigExtra{
  788. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  789. CoordUser: "A",
  790. }
  791. blocks, err := tc.GenerateBlocks(set)
  792. require.NoError(t, err)
  793. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  794. require.NoError(t, err)
  795. // Add all blocks except for the last two
  796. for i := range blocks[:len(blocks)-2] {
  797. err = historyDB.AddBlockSCData(&blocks[i])
  798. require.NoError(t, err)
  799. }
  800. // Add withdrawals to the second-to-last block, and insert the block into the DB
  801. block := &blocks[len(blocks)-2]
  802. require.Equal(t, int64(4), block.Block.Num)
  803. tokenAddr := blocks[0].Rollup.AddedTokens[0].EthAddr
  804. // block.WDelayer.Deposits = append(block.WDelayer.Deposits,
  805. // common.WDelayerTransfer{Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr, Amount: big.NewInt(80)}, // 257
  806. // common.WDelayerTransfer{Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr, Amount: big.NewInt(15)}, // 259
  807. // )
  808. block.Rollup.Withdrawals = append(block.Rollup.Withdrawals,
  809. common.WithdrawInfo{Idx: 256, NumExitRoot: 4, InstantWithdraw: true},
  810. common.WithdrawInfo{Idx: 257, NumExitRoot: 4, InstantWithdraw: false,
  811. Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr},
  812. common.WithdrawInfo{Idx: 258, NumExitRoot: 3, InstantWithdraw: true},
  813. common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false,
  814. Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr},
  815. )
  816. err = historyDB.addBlock(historyDB.db, &block.Block)
  817. require.NoError(t, err)
  818. err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
  819. block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
  820. require.NoError(t, err)
  821. // Check that exits in DB match with the expected values
  822. dbExits, err := historyDB.GetAllExits()
  823. require.NoError(t, err)
  824. assert.Equal(t, 4, len(dbExits))
  825. dbExitsByIdx := make(map[common.Idx]common.ExitInfo)
  826. for _, dbExit := range dbExits {
  827. dbExitsByIdx[dbExit.AccountIdx] = dbExit
  828. }
  829. for _, withdraw := range block.Rollup.Withdrawals {
  830. assert.Equal(t, withdraw.NumExitRoot, dbExitsByIdx[withdraw.Idx].BatchNum)
  831. if withdraw.InstantWithdraw {
  832. assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].InstantWithdrawn)
  833. } else {
  834. assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].DelayedWithdrawRequest)
  835. }
  836. }
  837. // Add a delayed withdrawal to the last block, and insert the block into the DB
  838. block = &blocks[len(blocks)-1]
  839. require.Equal(t, int64(5), block.Block.Num)
  840. block.WDelayer.Withdrawals = append(block.WDelayer.Withdrawals,
  841. common.WDelayerTransfer{
  842. Owner: tc.UsersByIdx[257].Addr,
  843. Token: tokenAddr,
  844. Amount: big.NewInt(80),
  845. })
  846. err = historyDB.addBlock(historyDB.db, &block.Block)
  847. require.NoError(t, err)
  848. err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
  849. block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
  850. require.NoError(t, err)
  851. // Check that DelayedWithdrawn has been set
  852. dbExits, err = historyDB.GetAllExits()
  853. require.NoError(t, err)
  854. for _, dbExit := range dbExits {
  855. dbExitsByIdx[dbExit.AccountIdx] = dbExit
  856. }
  857. require.Equal(t, &block.Block.Num, dbExitsByIdx[257].DelayedWithdrawn)
  858. }
  859. func TestGetBestBidCoordinator(t *testing.T) {
  860. test.WipeDB(historyDB.DB())
  861. rollup, auction, wDelayer := exampleInitSCVars()
  862. err := historyDB.SetInitialSCVars(rollup, auction, wDelayer)
  863. require.NoError(t, err)
  864. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  865. blocks, err := tc.GenerateBlocks(`
  866. Type: Blockchain
  867. > block // blockNum=2
  868. `)
  869. require.NoError(t, err)
  870. err = historyDB.AddBlockSCData(&blocks[0])
  871. require.NoError(t, err)
  872. coords := []common.Coordinator{
  873. {
  874. Bidder: ethCommon.BigToAddress(big.NewInt(1)),
  875. Forger: ethCommon.BigToAddress(big.NewInt(2)),
  876. EthBlockNum: 2,
  877. URL: "foo",
  878. },
  879. {
  880. Bidder: ethCommon.BigToAddress(big.NewInt(3)),
  881. Forger: ethCommon.BigToAddress(big.NewInt(4)),
  882. EthBlockNum: 2,
  883. URL: "bar",
  884. },
  885. }
  886. err = historyDB.addCoordinators(historyDB.db, coords)
  887. require.NoError(t, err)
  888. bids := []common.Bid{
  889. {
  890. SlotNum: 10,
  891. BidValue: big.NewInt(10),
  892. EthBlockNum: 2,
  893. Bidder: coords[0].Bidder,
  894. },
  895. {
  896. SlotNum: 10,
  897. BidValue: big.NewInt(20),
  898. EthBlockNum: 2,
  899. Bidder: coords[1].Bidder,
  900. },
  901. }
  902. err = historyDB.addBids(historyDB.db, bids)
  903. require.NoError(t, err)
  904. forger10, err := historyDB.GetBestBidCoordinator(10)
  905. require.NoError(t, err)
  906. require.Equal(t, coords[1].Forger, forger10.Forger)
  907. require.Equal(t, coords[1].Bidder, forger10.Bidder)
  908. require.Equal(t, coords[1].URL, forger10.URL)
  909. require.Equal(t, bids[1].SlotNum, forger10.SlotNum)
  910. require.Equal(t, bids[1].BidValue, forger10.BidValue)
  911. for i := range forger10.DefaultSlotSetBid {
  912. require.Equal(t, auction.DefaultSlotSetBid[i], forger10.DefaultSlotSetBid[i])
  913. }
  914. _, err = historyDB.GetBestBidCoordinator(11)
  915. require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
  916. }
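// bestBidForSlot is an illustrative sketch (not used by HistoryDB itself) of
// the semantics GetBestBidCoordinator is expected to implement: among the
// bids placed for a slot, the one with the highest BidValue wins.  Field
// types are assumed from the literals used above (SlotNum int64, BidValue
// *big.Int).
func bestBidForSlot(bids []common.Bid, slotNum int64) *common.Bid {
	var best *common.Bid
	for i := range bids {
		if bids[i].SlotNum != slotNum {
			continue
		}
		// big.Int comparison: Cmp returns +1 when the receiver is greater.
		if best == nil || bids[i].BidValue.Cmp(best.BidValue) > 0 {
			best = &bids[i]
		}
	}
	return best
}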
  917. func TestAddBucketUpdates(t *testing.T) {
  918. test.WipeDB(historyDB.DB())
  919. const fromBlock int64 = 1
  920. const toBlock int64 = 5 + 1
  921. setTestBlocks(fromBlock, toBlock)
  922. bucketUpdates := []common.BucketUpdate{
  923. {
  924. EthBlockNum: 4,
  925. NumBucket: 0,
  926. BlockStamp: 4,
  927. Withdrawals: big.NewInt(123),
  928. },
  929. {
  930. EthBlockNum: 5,
  931. NumBucket: 2,
  932. BlockStamp: 5,
  933. Withdrawals: big.NewInt(42),
  934. },
  935. }
  936. err := historyDB.addBucketUpdates(historyDB.db, bucketUpdates)
  937. require.NoError(t, err)
  938. dbBucketUpdates, err := historyDB.GetAllBucketUpdates()
  939. require.NoError(t, err)
  940. assert.Equal(t, bucketUpdates, dbBucketUpdates)
  941. }
  942. func TestAddTokenExchanges(t *testing.T) {
  943. test.WipeDB(historyDB.DB())
  944. const fromBlock int64 = 1
  945. const toBlock int64 = 5 + 1
  946. setTestBlocks(fromBlock, toBlock)
  947. tokenExchanges := []common.TokenExchange{
  948. {
  949. EthBlockNum: 4,
  950. Address: ethCommon.BigToAddress(big.NewInt(111)),
  951. ValueUSD: 12345,
  952. },
  953. {
  954. EthBlockNum: 5,
  955. Address: ethCommon.BigToAddress(big.NewInt(222)),
  956. ValueUSD: 67890,
  957. },
  958. }
  959. err := historyDB.addTokenExchanges(historyDB.db, tokenExchanges)
  960. require.NoError(t, err)
  961. dbTokenExchanges, err := historyDB.GetAllTokenExchanges()
  962. require.NoError(t, err)
  963. assert.Equal(t, tokenExchanges, dbTokenExchanges)
  964. }
  965. func TestAddEscapeHatchWithdrawals(t *testing.T) {
  966. test.WipeDB(historyDB.DB())
  967. const fromBlock int64 = 1
  968. const toBlock int64 = 5 + 1
  969. setTestBlocks(fromBlock, toBlock)
  970. escapeHatchWithdrawals := []common.WDelayerEscapeHatchWithdrawal{
  971. {
  972. EthBlockNum: 4,
  973. Who: ethCommon.BigToAddress(big.NewInt(111)),
  974. To: ethCommon.BigToAddress(big.NewInt(222)),
  975. TokenAddr: ethCommon.BigToAddress(big.NewInt(333)),
  976. Amount: big.NewInt(10002),
  977. },
  978. {
  979. EthBlockNum: 5,
  980. Who: ethCommon.BigToAddress(big.NewInt(444)),
  981. To: ethCommon.BigToAddress(big.NewInt(555)),
  982. TokenAddr: ethCommon.BigToAddress(big.NewInt(666)),
  983. Amount: big.NewInt(20003),
  984. },
  985. }
  986. err := historyDB.addEscapeHatchWithdrawals(historyDB.db, escapeHatchWithdrawals)
  987. require.NoError(t, err)
  988. dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals()
  989. require.NoError(t, err)
  990. assert.Equal(t, escapeHatchWithdrawals, dbEscapeHatchWithdrawals)
  991. }
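// roundTripCheck is a sketch of the pattern shared by the three tests above
// (bucket updates, token exchanges, escape hatch withdrawals): seed blocks so
// the eth_block_num references resolve, insert through the private add*
// method, and read everything back with the corresponding GetAll* method.
// The closure-based signature is hypothetical; it simply mirrors those calls.
func roundTripCheck(t *testing.T, insert func() error, fetch func() (interface{}, error), expected interface{}) {
	setTestBlocks(1, 5+1)
	require.NoError(t, insert())
	got, err := fetch()
	require.NoError(t, err)
	assert.Equal(t, expected, got)
}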
  992. func TestGetMetricsAPI(t *testing.T) {
  993. test.WipeDB(historyDB.DB())
  994. set := `
  995. Type: Blockchain
  996. AddToken(1)
  997. CreateAccountDeposit(1) A: 1000 // numTx=1
  998. CreateAccountDeposit(1) B: 2000 // numTx=2
999. CreateAccountDeposit(1) C: 3000 // numTx=3
  1000. // block 0 is stored as default in the DB
  1001. // block 1 does not exist
  1002. > batchL1 // numBatches=1
  1003. > batchL1 // numBatches=2
  1004. > block // blockNum=2
  1005. Transfer(1) C-A : 10 (1) // numTx=4
  1006. > batch // numBatches=3
  1007. > block // blockNum=3
  1008. Transfer(1) B-C : 10 (1) // numTx=5
1009. > batch // numBatches=4
  1010. > block // blockNum=4
  1011. Transfer(1) A-B : 10 (1) // numTx=6
  1012. > batch // numBatches=5
  1013. > block // blockNum=5
  1014. Transfer(1) A-B : 10 (1) // numTx=7
  1015. > batch // numBatches=6
  1016. > block // blockNum=6
  1017. `
  1018. const numBatches int = 6
  1019. const numTx int = 7
  1020. const blockNum = 6 - 1
  1021. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  1022. tilCfgExtra := til.ConfigExtra{
  1023. BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
  1024. CoordUser: "A",
  1025. }
  1026. blocks, err := tc.GenerateBlocks(set)
  1027. require.NoError(t, err)
  1028. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  1029. require.NoError(t, err)
  1030. // Sanity check
  1031. require.Equal(t, blockNum, len(blocks))
1032. // There is one batch per block, so the batch frequency (seconds between
1033. // block timestamps) can be chosen via `frequency`
  1034. const frequency int = 15
  1035. for i := range blocks {
  1036. blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
  1037. err = historyDB.AddBlockSCData(&blocks[i])
  1038. assert.NoError(t, err)
  1039. }
  1040. res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
  1041. assert.NoError(t, err)
  1042. assert.Equal(t, float64(numTx)/float64(numBatches-1), res.TransactionsPerBatch)
1043. // The frequency is not exactly the desired one; some decimals may appear
  1044. assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
  1045. assert.Less(t, res.BatchFrequency, float64(frequency+1))
1046. // Truncate the frequency to an int to do an exact check
  1047. assert.Equal(t, frequency, int(res.BatchFrequency))
1048. // This may also differ in a few decimal places
1049. // Truncate to the third decimal to compare
  1050. assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
  1051. assert.Equal(t, int64(3), res.TotalAccounts)
  1052. assert.Equal(t, int64(3), res.TotalBJJs)
  1053. // Til does not set fees
  1054. assert.Equal(t, float64(0), res.AvgTransactionFee)
  1055. }
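// expectedMetrics is a hypothetical helper making the arithmetic behind the
// assertions above explicit, assuming one batch per block with block
// timestamps spaced `frequency` seconds apart: transactions per batch leaves
// out the first batch, and transactions per second divides by the time
// elapsed between the first and the last block.
func expectedMetrics(numTx, numBatches, numBlocks, frequency int) (txPerBatch, txPerSecond float64) {
	txPerBatch = float64(numTx) / float64(numBatches-1)
	// frequency*numBlocks - frequency == frequency*(numBlocks-1) seconds
	// separate the first and the last block timestamps.
	txPerSecond = float64(numTx) / float64(frequency*(numBlocks-1))
	return txPerBatch, txPerSecond
}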
  1056. func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
  1057. test.WipeDB(historyDB.DB())
  1058. testUsersLen := 3
  1059. var set []til.Instruction
  1060. for user := 0; user < testUsersLen; user++ {
  1061. set = append(set, til.Instruction{
  1062. Typ: common.TxTypeCreateAccountDeposit,
  1063. TokenID: common.TokenID(0),
  1064. DepositAmount: big.NewInt(1000000),
  1065. Amount: big.NewInt(0),
  1066. From: fmt.Sprintf("User%02d", user),
  1067. })
  1068. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1069. }
  1070. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1071. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1072. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1073. // Transfers
  1074. for x := 0; x < 6000; x++ {
  1075. set = append(set, til.Instruction{
  1076. Typ: common.TxTypeTransfer,
  1077. TokenID: common.TokenID(0),
  1078. DepositAmount: big.NewInt(1),
  1079. Amount: big.NewInt(0),
  1080. From: "User00",
  1081. To: "User01",
  1082. })
  1083. set = append(set, til.Instruction{Typ: til.TypeNewBatch})
  1084. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1085. }
  1086. var chainID uint16 = 0
  1087. tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
  1088. blocks, err := tc.GenerateBlocksFromInstructions(set)
  1089. assert.NoError(t, err)
  1090. tilCfgExtra := til.ConfigExtra{
  1091. CoordUser: "A",
  1092. }
  1093. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  1094. require.NoError(t, err)
  1095. const numBatches int = 6002
  1096. const numTx int = 6003
  1097. const blockNum = 6005 - 1
  1098. // Sanity check
  1099. require.Equal(t, blockNum, len(blocks))
1100. // There is one batch per block, so the batch frequency (seconds between
1101. // block timestamps) can be chosen via `frequency`
  1102. const frequency int = 15
  1103. for i := range blocks {
  1104. blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
  1105. err = historyDB.AddBlockSCData(&blocks[i])
  1106. assert.NoError(t, err)
  1107. }
  1108. res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
  1109. assert.NoError(t, err)
  1110. assert.Equal(t, math.Trunc((float64(numTx)/float64(numBatches-1))/0.001)*0.001, math.Trunc(res.TransactionsPerBatch/0.001)*0.001)
1111. // The frequency is not exactly the desired one; some decimals may appear
  1112. assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
  1113. assert.Less(t, res.BatchFrequency, float64(frequency+1))
1114. // Truncate the frequency to an int to do an exact check
  1115. assert.Equal(t, frequency, int(res.BatchFrequency))
1116. // This may also differ in a few decimal places
1117. // Truncate to the third decimal to compare
  1118. assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
  1119. assert.Equal(t, int64(3), res.TotalAccounts)
  1120. assert.Equal(t, int64(3), res.TotalBJJs)
  1121. // Til does not set fees
  1122. assert.Equal(t, float64(0), res.AvgTransactionFee)
  1123. }
  1124. func TestGetMetricsAPIEmpty(t *testing.T) {
  1125. test.WipeDB(historyDB.DB())
  1126. _, err := historyDBWithACC.GetMetricsAPI(0)
  1127. assert.NoError(t, err)
  1128. }
  1129. func TestGetAvgTxFeeEmpty(t *testing.T) {
  1130. test.WipeDB(historyDB.DB())
  1131. _, err := historyDBWithACC.GetAvgTxFeeAPI()
  1132. assert.NoError(t, err)
  1133. }
  1134. func TestGetLastL1TxsNum(t *testing.T) {
  1135. test.WipeDB(historyDB.DB())
  1136. _, err := historyDB.GetLastL1TxsNum()
  1137. assert.NoError(t, err)
  1138. }
  1139. func TestGetLastTxsPosition(t *testing.T) {
  1140. test.WipeDB(historyDB.DB())
  1141. _, err := historyDB.GetLastTxsPosition(0)
  1142. assert.Equal(t, sql.ErrNoRows.Error(), err.Error())
  1143. }
  1144. func TestGetFirstBatchBlockNumBySlot(t *testing.T) {
  1145. test.WipeDB(historyDB.DB())
  1146. set := `
  1147. Type: Blockchain
  1148. // Slot = 0
  1149. > block // 2
  1150. > block // 3
  1151. > block // 4
  1152. > block // 5
  1153. // Slot = 1
  1154. > block // 6
  1155. > block // 7
  1156. > batch
  1157. > block // 8
  1158. > block // 9
  1159. // Slot = 2
  1160. > batch
  1161. > block // 10
  1162. > block // 11
  1163. > block // 12
  1164. > block // 13
  1165. `
  1166. tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
  1167. blocks, err := tc.GenerateBlocks(set)
  1168. assert.NoError(t, err)
  1169. tilCfgExtra := til.ConfigExtra{
  1170. CoordUser: "A",
  1171. }
  1172. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  1173. require.NoError(t, err)
  1174. for i := range blocks {
  1175. for j := range blocks[i].Rollup.Batches {
  1176. blocks[i].Rollup.Batches[j].Batch.SlotNum = int64(i) / 4
  1177. }
  1178. }
  1179. // Add all blocks
  1180. for i := range blocks {
  1181. err = historyDB.AddBlockSCData(&blocks[i])
  1182. require.NoError(t, err)
  1183. }
  1184. _, err = historyDB.GetFirstBatchBlockNumBySlot(0)
  1185. require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
  1186. bn1, err := historyDB.GetFirstBatchBlockNumBySlot(1)
  1187. require.NoError(t, err)
  1188. assert.Equal(t, int64(8), bn1)
  1189. bn2, err := historyDB.GetFirstBatchBlockNumBySlot(2)
  1190. require.NoError(t, err)
  1191. assert.Equal(t, int64(10), bn2)
  1192. }
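// slotOfBlockIndex is an illustrative sketch of the slot assignment applied
// in the loop above: with 4 blocks per slot, the i-th generated block
// (0-based) falls in slot i/4.  The constant 4 is specific to this test
// setup, not a protocol constant taken from the code.
func slotOfBlockIndex(i int) int64 {
	const blocksPerSlot = 4
	return int64(i) / blocksPerSlot
}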
  1193. func TestTxItemID(t *testing.T) {
  1194. test.WipeDB(historyDB.DB())
  1195. testUsersLen := 10
  1196. var set []til.Instruction
  1197. for user := 0; user < testUsersLen; user++ {
  1198. set = append(set, til.Instruction{
  1199. Typ: common.TxTypeCreateAccountDeposit,
  1200. TokenID: common.TokenID(0),
  1201. DepositAmount: big.NewInt(1000000),
  1202. Amount: big.NewInt(0),
  1203. From: fmt.Sprintf("User%02d", user),
  1204. })
  1205. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1206. }
  1207. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1208. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1209. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1210. for user := 0; user < testUsersLen; user++ {
  1211. set = append(set, til.Instruction{
  1212. Typ: common.TxTypeDeposit,
  1213. TokenID: common.TokenID(0),
  1214. DepositAmount: big.NewInt(100000),
  1215. Amount: big.NewInt(0),
  1216. From: fmt.Sprintf("User%02d", user),
  1217. })
  1218. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1219. }
  1220. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1221. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1222. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1223. for user := 0; user < testUsersLen; user++ {
  1224. set = append(set, til.Instruction{
  1225. Typ: common.TxTypeDepositTransfer,
  1226. TokenID: common.TokenID(0),
  1227. DepositAmount: big.NewInt(10000 * int64(user+1)),
  1228. Amount: big.NewInt(1000 * int64(user+1)),
  1229. From: fmt.Sprintf("User%02d", user),
  1230. To: fmt.Sprintf("User%02d", (user+1)%testUsersLen),
  1231. })
  1232. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1233. }
  1234. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1235. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1236. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1237. for user := 0; user < testUsersLen; user++ {
  1238. set = append(set, til.Instruction{
  1239. Typ: common.TxTypeForceTransfer,
  1240. TokenID: common.TokenID(0),
  1241. Amount: big.NewInt(100 * int64(user+1)),
  1242. DepositAmount: big.NewInt(0),
  1243. From: fmt.Sprintf("User%02d", user),
  1244. To: fmt.Sprintf("User%02d", (user+1)%testUsersLen),
  1245. })
  1246. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1247. }
  1248. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1249. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1250. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1251. for user := 0; user < testUsersLen; user++ {
  1252. set = append(set, til.Instruction{
  1253. Typ: common.TxTypeForceExit,
  1254. TokenID: common.TokenID(0),
  1255. Amount: big.NewInt(10 * int64(user+1)),
  1256. DepositAmount: big.NewInt(0),
  1257. From: fmt.Sprintf("User%02d", user),
  1258. })
  1259. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1260. }
  1261. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1262. set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
  1263. set = append(set, til.Instruction{Typ: til.TypeNewBlock})
  1264. var chainID uint16 = 0
  1265. tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
  1266. blocks, err := tc.GenerateBlocksFromInstructions(set)
  1267. assert.NoError(t, err)
  1268. tilCfgExtra := til.ConfigExtra{
  1269. CoordUser: "A",
  1270. }
  1271. err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
  1272. require.NoError(t, err)
  1273. // Add all blocks
  1274. for i := range blocks {
  1275. err = historyDB.AddBlockSCData(&blocks[i])
  1276. require.NoError(t, err)
  1277. }
  1278. txs, err := historyDB.GetAllL1UserTxs()
  1279. require.NoError(t, err)
  1280. position := 0
  1281. for _, tx := range txs {
  1282. if tx.Position == 0 {
  1283. position = 0
  1284. }
  1285. assert.Equal(t, position, tx.Position)
  1286. position++
  1287. }
  1288. }
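// checkPositionsSequential is a hypothetical helper mirroring the loop above:
// within each queue of L1 user txs the Position starts at 0 and increases by
// one, so the counter is reset whenever a tx with Position 0 appears.  It
// assumes GetAllL1UserTxs returns the txs as a []common.L1Tx in storage order.
func checkPositionsSequential(t *testing.T, txs []common.L1Tx) {
	position := 0
	for _, tx := range txs {
		if tx.Position == 0 {
			// A new to-forge-L1-txs queue starts here.
			position = 0
		}
		assert.Equal(t, position, tx.Position)
		position++
	}
}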
1289. // setTestBlocks WARNING: this wipes the whole DB and recreates the blocks in the range [from, to)
  1290. func setTestBlocks(from, to int64) []common.Block {
  1291. test.WipeDB(historyDB.DB())
  1292. blocks := test.GenBlocks(from, to)
  1293. if err := historyDB.AddBlocks(blocks); err != nil {
  1294. panic(err)
  1295. }
  1296. return blocks
  1297. }