Update missing parts, improve til, and more

- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state/status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call, and return useful information.
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs out of BatchData into a separate struct in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero).
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil (see the sketch after this entry).
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`.

4 years ago
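The nil-amount rule above is easy to enforce with a small normalization helper. The sketch below is illustrative only (the struct and function names are not the actual til code) and only mirrors the two amount fields the rule mentions:

```go
// A minimal sketch of the convention above: any unused *big.Int amount field
// is set to zero instead of being left nil, so later DB inserts and JSON
// marshaling never dereference a nil pointer.
package main

import (
	"fmt"
	"math/big"
)

// l1Tx mimics the two amount fields of common.L1Tx that the convention
// applies to; the real type has many more fields.
type l1Tx struct {
	LoadAmount *big.Int
	Amount     *big.Int
}

// normalizeAmounts replaces nil amounts with a zero-valued big.Int.
func normalizeAmounts(tx *l1Tx) {
	if tx.LoadAmount == nil {
		tx.LoadAmount = big.NewInt(0)
	}
	if tx.Amount == nil {
		tx.Amount = big.NewInt(0)
	}
}

func main() {
	// A deposit-like tx that only uses LoadAmount: Amount starts out nil.
	tx := &l1Tx{LoadAmount: big.NewInt(100)}
	normalizeAmounts(tx)
	fmt.Println(tx.LoadAmount, tx.Amount) // prints: 100 0
}
```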
Update coordinator, call all api update functions

- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and API.UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer:
  - When mapping events by TxHash, use an array to support multiple calls of the same function within the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events).
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes.
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1L2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error, both for calls to forgeBatch and for transaction receipts (see the retry sketch after this entry).
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged L1Batch, to help the coordinator decide when to forge an L1Batch.
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1.

4 years ago
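The "several attempts" behaviour for RPC calls can be pictured with a small helper like the one below; the name `callWithRetries` and its signature are illustrative assumptions, not the actual TxManager code:

```go
// Hedged sketch of retrying an ethereum node RPC call a configurable number
// of times before treating the error as final.
package main

import (
	"errors"
	"fmt"
	"time"
)

// callWithRetries runs fn up to attempts times, waiting waitBetween between
// tries, and only returns an error once every attempt has failed.
func callWithRetries(attempts int, waitBetween time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(waitBetween)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	// Simulated flaky RPC call (think forgeBatch or a receipt query) that
	// succeeds on the third try.
	flaky := func() error {
		calls++
		if calls < 3 {
			return errors.New("temporary RPC error")
		}
		return nil
	}
	if err := callWithRetries(5, 10*time.Millisecond, flaky); err != nil {
		fmt.Println("failed:", err)
		return
	}
	fmt.Printf("succeeded after %d calls\n", calls)
}
```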
Redo coordinator structure, connect API to node

- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer.
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (SQL column `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying.
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code so that goroutines are started and stopped from the coordinator itself instead of the node.
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead (see the sketch after this entry).
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use a context instead of a stop channel.
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks.
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is available.
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time.
  - Update the test (a minimal test to manually check that the coordinator starts)
- HistoryDB:
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to determine the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer.
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks.
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs.
- Synchronizer:
  - Add an Auction field to the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block.
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state.

4 years ago
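A minimal sketch of the context.Context + sync.WaitGroup lifecycle plus message passing described above; the struct and message names (`Coordinator`, `MsgSyncBlock`, `MsgSyncReorg`) are assumptions for illustration and may not match the coordinator package exactly:

```go
// Illustrative sketch: a goroutine owned by the coordinator, cancelled via
// context and awaited via WaitGroup, fed by a message channel instead of
// stop/stopped channel pairs or shared state.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Messages the node sends when the synchronizer advances or detects a reorg.
type MsgSyncBlock struct{ BlockNum int64 }
type MsgSyncReorg struct{ FirstBadBlock int64 }

type Coordinator struct {
	msgCh  chan interface{}
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

func NewCoordinator() *Coordinator {
	return &Coordinator{msgCh: make(chan interface{}, 16)}
}

// SendMsg is how the node notifies the coordinator (message passing).
func (c *Coordinator) SendMsg(msg interface{}) { c.msgCh <- msg }

// Start launches the coordinator goroutine.
func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				fmt.Println("coordinator: context canceled, exiting")
				return
			case msg := <-c.msgCh:
				switch m := msg.(type) {
				case MsgSyncBlock:
					fmt.Println("coordinator: new block", m.BlockNum)
				case MsgSyncReorg:
					fmt.Println("coordinator: reorg from block", m.FirstBadBlock)
				}
			}
		}
	}()
}

// Stop cancels the context and waits for the goroutine to exit.
func (c *Coordinator) Stop() {
	c.cancel()
	c.wg.Wait()
}

func main() {
	c := NewCoordinator()
	c.Start()
	c.SendMsg(MsgSyncBlock{BlockNum: 42})
	c.SendMsg(MsgSyncReorg{FirstBadBlock: 40})
	time.Sleep(50 * time.Millisecond) // let the goroutine handle the messages
	c.Stop()
}
```

Here Stop stands in for the old stop/stopped channel pair: cancelling the context asks the goroutine to exit, and the WaitGroup confirms that it has.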
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
package historydb

import (
	"database/sql"
	"fmt"
	"math"
	"math/big"
	"os"
	"testing"
	"time"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	dbUtils "github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/hermez-node/log"
	"github.com/hermeznetwork/hermez-node/test"
	"github.com/hermeznetwork/hermez-node/test/til"
	"github.com/hermeznetwork/tracerr"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

var historyDB *HistoryDB

// In order to run the test you need to run a Postgres DB with
// a database named "history" that is accessible by
// user: "hermez"
// pass: set it using the env var POSTGRES_PASS
// This can be achieved by running: POSTGRES_PASS=your_strong_pass && sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=history -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD=$POSTGRES_PASS -d postgres && sleep 2s && sudo docker exec -it hermez-db-test psql -a history -U hermez -c "CREATE DATABASE l2;"
// After running the test you can stop the container by running: sudo docker kill hermez-db-test
// If you already did that for the L2DB you don't have to do it again

func TestMain(m *testing.M) {
	// init DB
	pass := os.Getenv("POSTGRES_PASS")
	db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
	if err != nil {
		panic(err)
	}
	historyDB = NewHistoryDB(db)
	if err != nil {
		panic(err)
	}
	// Run tests
	result := m.Run()
	// Close DB
	if err := db.Close(); err != nil {
		log.Error("Error closing the history DB:", err)
	}
	os.Exit(result)
}

func TestBlocks(t *testing.T) {
	var fromBlock, toBlock int64
	fromBlock = 0
	toBlock = 7
	// Reset DB
	test.WipeDB(historyDB.DB())
	// Generate blocks using til
	set1 := `
Type: Blockchain
// block 0 is stored as default in the DB
// block 1 does not exist
> block // blockNum=2
> block // blockNum=3
> block // blockNum=4
> block // blockNum=5
> block // blockNum=6
`
	tc := til.NewContext(uint16(0), 1)
	blocks, err := tc.GenerateBlocks(set1)
	require.NoError(t, err)
	// Save timestamp of a block with UTC and change it without UTC
	timestamp := time.Now().Add(time.Second * 13)
	blocks[fromBlock].Block.Timestamp = timestamp
	// Insert blocks into DB
	for i := 0; i < len(blocks); i++ {
		err := historyDB.AddBlock(&blocks[i].Block)
		assert.NoError(t, err)
	}
	// Add block 0 to the generated blocks
	blocks = append(
		[]common.BlockData{{Block: test.Block0}}, //nolint:gofmt
		blocks...,
	)
	// Get all blocks from DB
	fetchedBlocks, err := historyDB.GetBlocks(fromBlock, toBlock)
	assert.Equal(t, len(blocks), len(fetchedBlocks))
	// Compare generated vs fetched blocks
	assert.NoError(t, err)
	for i := range fetchedBlocks {
		assertEqualBlock(t, &blocks[i].Block, &fetchedBlocks[i])
	}
	// Compare saved timestamp vs fetched one
	nameZoneUTC, offsetUTC := timestamp.UTC().Zone()
	zoneFetchedBlock, offsetFetchedBlock := fetchedBlocks[fromBlock].Timestamp.Zone()
	assert.Equal(t, nameZoneUTC, zoneFetchedBlock)
	assert.Equal(t, offsetUTC, offsetFetchedBlock)
	// Get blocks from the DB one by one
	for i := int64(2); i < toBlock; i++ { // avoid block 0 for simplicity
		fetchedBlock, err := historyDB.GetBlock(i)
		assert.NoError(t, err)
		assertEqualBlock(t, &blocks[i-1].Block, fetchedBlock)
	}
	// Get last block
	lastBlock, err := historyDB.GetLastBlock()
	assert.NoError(t, err)
	assertEqualBlock(t, &blocks[len(blocks)-1].Block, lastBlock)
}

func assertEqualBlock(t *testing.T, expected *common.Block, actual *common.Block) {
	assert.Equal(t, expected.Num, actual.Num)
	assert.Equal(t, expected.Hash, actual.Hash)
	assert.Equal(t, expected.Timestamp.Unix(), actual.Timestamp.Unix())
}

func TestBatches(t *testing.T) {
	// Reset DB
	test.WipeDB(historyDB.DB())
	// Generate batches using til (and blocks for foreign key)
	set := `
Type: Blockchain
AddToken(1) // Will have value in USD
AddToken(2) // Will NOT have value in USD
CreateAccountDeposit(1) A: 2000
CreateAccountDeposit(2) A: 2000
CreateAccountDeposit(1) B: 1000
CreateAccountDeposit(2) B: 1000
> batchL1
> batchL1
Transfer(1) A-B: 100 (5)
Transfer(2) B-A: 100 (199)
> batch // batchNum=2, L2 only batch, forges transfers (mixed case of with(out) USD value)
> block
Transfer(1) A-B: 100 (5)
> batch // batchNum=3, L2 only batch, forges transfer (with USD value)
Transfer(2) B-A: 100 (199)
> batch // batchNum=4, L2 only batch, forges transfer (without USD value)
> block
`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Insert to DB
	batches := []common.Batch{}
	tokensValue := make(map[common.TokenID]float64)
	lastL1TxsNum := new(int64)
	lastL1BatchBlockNum := int64(0)
	for _, block := range blocks {
		// Insert block
		assert.NoError(t, historyDB.AddBlock(&block.Block))
		// Insert tokens
		for i, token := range block.Rollup.AddedTokens {
			assert.NoError(t, historyDB.AddToken(&token)) //nolint:gosec
			if i%2 != 0 {
				// Set value to the token
				value := (float64(i) + 5) * 5.389329
				assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
				tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
			}
		}
		// Combine all generated batches into single array
		for _, batch := range block.Rollup.Batches {
			batches = append(batches, batch.Batch)
			forgeTxsNum := batch.Batch.ForgeL1TxsNum
			if forgeTxsNum != nil && (lastL1TxsNum == nil || *lastL1TxsNum < *forgeTxsNum) {
				*lastL1TxsNum = *forgeTxsNum
				lastL1BatchBlockNum = batch.Batch.EthBlockNum
			}
		}
	}
	// Insert batches
	assert.NoError(t, historyDB.AddBatches(batches))
	// Set expected total fee
	for _, batch := range batches {
		total := .0
		for tokenID, amount := range batch.CollectedFees {
			af := new(big.Float).SetInt(amount)
			amountFloat, _ := af.Float64()
			total += tokensValue[tokenID] * amountFloat
		}
		batch.TotalFeesUSD = &total
	}
	// Get batches from the DB
	fetchedBatches, err := historyDB.GetBatches(0, common.BatchNum(len(batches)+1))
	assert.NoError(t, err)
	assert.Equal(t, len(batches), len(fetchedBatches))
	for i, fetchedBatch := range fetchedBatches {
		assert.Equal(t, batches[i], fetchedBatch)
	}
	// Test GetLastBatchNum
	fetchedLastBatchNum, err := historyDB.GetLastBatchNum()
	assert.NoError(t, err)
	assert.Equal(t, batches[len(batches)-1].BatchNum, fetchedLastBatchNum)
	// Test GetLastL1TxsNum
	fetchedLastL1TxsNum, err := historyDB.GetLastL1TxsNum()
	assert.NoError(t, err)
	assert.Equal(t, lastL1TxsNum, fetchedLastL1TxsNum)
	// Test GetLastL1BatchBlockNum
	fetchedLastL1BatchBlockNum, err := historyDB.GetLastL1BatchBlockNum()
	assert.NoError(t, err)
	assert.Equal(t, lastL1BatchBlockNum, fetchedLastL1BatchBlockNum)
}

func TestBids(t *testing.T) {
	const fromBlock int64 = 1
	const toBlock int64 = 5
	// Prepare blocks in the DB
	blocks := setTestBlocks(fromBlock, toBlock)
	// Generate fake coordinators
	const nCoords = 5
	coords := test.GenCoordinators(nCoords, blocks)
	err := historyDB.AddCoordinators(coords)
	assert.NoError(t, err)
	// Generate fake bids
	const nBids = 20
	bids := test.GenBids(nBids, blocks, coords)
	err = historyDB.AddBids(bids)
	assert.NoError(t, err)
	// Fetch bids
	fetchedBids, err := historyDB.GetAllBids()
	assert.NoError(t, err)
	// Compare fetched bids vs generated bids
	for i, bid := range fetchedBids {
		assert.Equal(t, bids[i], bid)
	}
}

func TestTokens(t *testing.T) {
	const fromBlock int64 = 1
	const toBlock int64 = 5
	// Prepare blocks in the DB
	blocks := setTestBlocks(fromBlock, toBlock)
	// Generate fake tokens
	const nTokens = 5
	tokens, ethToken := test.GenTokens(nTokens, blocks)
	err := historyDB.AddTokens(tokens)
	assert.NoError(t, err)
	tokens = append([]common.Token{ethToken}, tokens...)
	limit := uint(10)
	// Fetch tokens
	fetchedTokens, _, err := historyDB.GetTokens(nil, nil, "", nil, &limit, OrderAsc)
	assert.NoError(t, err)
	// Compare fetched tokens vs generated tokens
	// All the tokens should have USDUpdate set by the DB trigger
	for i, token := range fetchedTokens {
		assert.Equal(t, tokens[i].TokenID, token.TokenID)
		assert.Equal(t, tokens[i].EthBlockNum, token.EthBlockNum)
		assert.Equal(t, tokens[i].EthAddr, token.EthAddr)
		assert.Equal(t, tokens[i].Name, token.Name)
		assert.Equal(t, tokens[i].Symbol, token.Symbol)
		assert.Nil(t, token.USD)
		assert.Nil(t, token.USDUpdate)
	}
	// Update token value
	for i, token := range tokens {
		value := 1.01 * float64(i)
		assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
	}
	// Fetch tokens
	fetchedTokens, _, err = historyDB.GetTokens(nil, nil, "", nil, &limit, OrderAsc)
	assert.NoError(t, err)
	// Compare fetched tokens vs generated tokens
	// All the tokens should have USDUpdate set by the DB trigger
	for i, token := range fetchedTokens {
		value := 1.01 * float64(i)
		assert.Equal(t, value, *token.USD)
		nameZone, offset := token.USDUpdate.Zone()
		assert.Equal(t, "UTC", nameZone)
		assert.Equal(t, 0, offset)
	}
}

func TestAccounts(t *testing.T) {
	const fromBlock int64 = 1
	const toBlock int64 = 5
	// Prepare blocks in the DB
	blocks := setTestBlocks(fromBlock, toBlock)
	// Generate fake tokens
	const nTokens = 5
	tokens, ethToken := test.GenTokens(nTokens, blocks)
	err := historyDB.AddTokens(tokens)
	assert.NoError(t, err)
	tokens = append([]common.Token{ethToken}, tokens...)
	// Generate fake batches
	const nBatches = 10
	batches := test.GenBatches(nBatches, blocks)
	err = historyDB.AddBatches(batches)
	assert.NoError(t, err)
	// Generate fake accounts
	const nAccounts = 3
	accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
	err = historyDB.AddAccounts(accs)
	assert.NoError(t, err)
	// Fetch accounts
	fetchedAccs, err := historyDB.GetAllAccounts()
	assert.NoError(t, err)
	// Compare fetched accounts vs generated accounts
	for i, acc := range fetchedAccs {
		accs[i].Balance = nil
		assert.Equal(t, accs[i], acc)
	}
}

func TestTxs(t *testing.T) {
	// Reset DB
	test.WipeDB(historyDB.DB())
	set := `
Type: Blockchain
AddToken(1)
AddToken(2)
CreateAccountDeposit(1) A: 10
CreateAccountDeposit(1) B: 10
> batchL1
> batchL1
> block
CreateAccountDepositTransfer(1) C-A: 20, 10
CreateAccountCoordinator(1) User0
> batchL1
> batchL1
> block
Deposit(1) B: 10
Deposit(1) C: 10
Transfer(1) C-A : 10 (1)
Transfer(1) B-C : 10 (1)
Transfer(1) A-B : 10 (1)
Exit(1) A: 10 (1)
> batch
> block
DepositTransfer(1) A-B: 10, 10
> batchL1
> block
ForceTransfer(1) A-B: 10
ForceExit(1) A: 5
> batchL1
> batchL1
> block
CreateAccountDeposit(2) D: 10
> batchL1
> block
CreateAccountDeposit(2) E: 10
> batchL1
> batchL1
> block
`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Sanity check
	require.Equal(t, 7, len(blocks))
	require.Equal(t, 2, len(blocks[0].Rollup.L1UserTxs))
	require.Equal(t, 1, len(blocks[1].Rollup.L1UserTxs))
	require.Equal(t, 2, len(blocks[2].Rollup.L1UserTxs))
	require.Equal(t, 1, len(blocks[3].Rollup.L1UserTxs))
	require.Equal(t, 2, len(blocks[4].Rollup.L1UserTxs))
	require.Equal(t, 1, len(blocks[5].Rollup.L1UserTxs))
	require.Equal(t, 1, len(blocks[6].Rollup.L1UserTxs))
	var null *common.BatchNum = nil
	var txID common.TxID
	// Insert blocks into DB
	for i := range blocks {
		if i == len(blocks)-1 {
			blocks[i].Block.Timestamp = time.Now()
			dbL1Txs, err := historyDB.GetAllL1UserTxs()
			assert.NoError(t, err)
			// Check batch_num is nil before forging
			assert.Equal(t, null, dbL1Txs[len(dbL1Txs)-1].BatchNum)
			// Save this TxId
			txID = dbL1Txs[len(dbL1Txs)-1].TxID
		}
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}
	// Check blocks
	dbBlocks, err := historyDB.GetAllBlocks()
	assert.NoError(t, err)
	assert.Equal(t, len(blocks)+1, len(dbBlocks))
	// Check batches
	batches, err := historyDB.GetAllBatches()
	assert.NoError(t, err)
	assert.Equal(t, 11, len(batches))
	// Check L1 Transactions
	dbL1Txs, err := historyDB.GetAllL1UserTxs()
	assert.NoError(t, err)
	assert.Equal(t, 10, len(dbL1Txs))
	// Tx Type
	assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[0].Type)
	assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[1].Type)
	assert.Equal(t, common.TxTypeCreateAccountDepositTransfer, dbL1Txs[2].Type)
	assert.Equal(t, common.TxTypeDeposit, dbL1Txs[3].Type)
	assert.Equal(t, common.TxTypeDeposit, dbL1Txs[4].Type)
	assert.Equal(t, common.TxTypeDepositTransfer, dbL1Txs[5].Type)
	assert.Equal(t, common.TxTypeForceTransfer, dbL1Txs[6].Type)
	assert.Equal(t, common.TxTypeForceExit, dbL1Txs[7].Type)
	assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[8].Type)
	assert.Equal(t, common.TxTypeCreateAccountDeposit, dbL1Txs[9].Type)
	// Tx ID
	assert.Equal(t, "0x000000000000000001000000", dbL1Txs[0].TxID.String())
	assert.Equal(t, "0x000000000000000001000100", dbL1Txs[1].TxID.String())
	assert.Equal(t, "0x000000000000000003000000", dbL1Txs[2].TxID.String())
	assert.Equal(t, "0x000000000000000005000000", dbL1Txs[3].TxID.String())
	assert.Equal(t, "0x000000000000000005000100", dbL1Txs[4].TxID.String())
	assert.Equal(t, "0x000000000000000005000200", dbL1Txs[5].TxID.String())
	assert.Equal(t, "0x000000000000000006000000", dbL1Txs[6].TxID.String())
	assert.Equal(t, "0x000000000000000006000100", dbL1Txs[7].TxID.String())
	assert.Equal(t, "0x000000000000000008000000", dbL1Txs[8].TxID.String())
	assert.Equal(t, "0x000000000000000009000000", dbL1Txs[9].TxID.String())
	// Tx From IDx
	assert.Equal(t, common.Idx(0), dbL1Txs[0].FromIdx)
	assert.Equal(t, common.Idx(0), dbL1Txs[1].FromIdx)
	assert.Equal(t, common.Idx(0), dbL1Txs[2].FromIdx)
	assert.NotEqual(t, common.Idx(0), dbL1Txs[3].FromIdx)
	assert.NotEqual(t, common.Idx(0), dbL1Txs[4].FromIdx)
	assert.NotEqual(t, common.Idx(0), dbL1Txs[5].FromIdx)
	assert.NotEqual(t, common.Idx(0), dbL1Txs[6].FromIdx)
	assert.NotEqual(t, common.Idx(0), dbL1Txs[7].FromIdx)
	assert.Equal(t, common.Idx(0), dbL1Txs[8].FromIdx)
	assert.Equal(t, common.Idx(0), dbL1Txs[9].FromIdx)
	assert.Equal(t, common.Idx(0), dbL1Txs[9].FromIdx)
	assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[6].FromIdx)
	assert.Equal(t, dbL1Txs[5].FromIdx, dbL1Txs[7].FromIdx)
	// Tx To IDx
	assert.Equal(t, dbL1Txs[2].ToIdx, dbL1Txs[5].FromIdx)
	assert.Equal(t, dbL1Txs[5].ToIdx, dbL1Txs[3].FromIdx)
	assert.Equal(t, dbL1Txs[6].ToIdx, dbL1Txs[3].FromIdx)
	// Token ID
	assert.Equal(t, common.TokenID(1), dbL1Txs[0].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[1].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[2].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[3].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[4].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[5].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[6].TokenID)
	assert.Equal(t, common.TokenID(1), dbL1Txs[7].TokenID)
	assert.Equal(t, common.TokenID(2), dbL1Txs[8].TokenID)
	assert.Equal(t, common.TokenID(2), dbL1Txs[9].TokenID)
	// Batch Number
	var bn common.BatchNum = common.BatchNum(2)
	assert.Equal(t, &bn, dbL1Txs[0].BatchNum)
	assert.Equal(t, &bn, dbL1Txs[1].BatchNum)
	bn = common.BatchNum(4)
	assert.Equal(t, &bn, dbL1Txs[2].BatchNum)
	bn = common.BatchNum(7)
	assert.Equal(t, &bn, dbL1Txs[3].BatchNum)
	assert.Equal(t, &bn, dbL1Txs[4].BatchNum)
	assert.Equal(t, &bn, dbL1Txs[5].BatchNum)
	bn = common.BatchNum(8)
	assert.Equal(t, &bn, dbL1Txs[6].BatchNum)
	assert.Equal(t, &bn, dbL1Txs[7].BatchNum)
	bn = common.BatchNum(10)
	assert.Equal(t, &bn, dbL1Txs[8].BatchNum)
	bn = common.BatchNum(11)
	assert.Equal(t, &bn, dbL1Txs[9].BatchNum)
	// eth_block_num
	assert.Equal(t, int64(2), dbL1Txs[0].EthBlockNum)
	assert.Equal(t, int64(2), dbL1Txs[1].EthBlockNum)
	assert.Equal(t, int64(3), dbL1Txs[2].EthBlockNum)
	assert.Equal(t, int64(4), dbL1Txs[3].EthBlockNum)
	assert.Equal(t, int64(4), dbL1Txs[4].EthBlockNum)
	assert.Equal(t, int64(5), dbL1Txs[5].EthBlockNum)
	assert.Equal(t, int64(6), dbL1Txs[6].EthBlockNum)
	assert.Equal(t, int64(6), dbL1Txs[7].EthBlockNum)
	assert.Equal(t, int64(7), dbL1Txs[8].EthBlockNum)
	assert.Equal(t, int64(8), dbL1Txs[9].EthBlockNum)
	// User Origin
	assert.Equal(t, true, dbL1Txs[0].UserOrigin)
	assert.Equal(t, true, dbL1Txs[1].UserOrigin)
	assert.Equal(t, true, dbL1Txs[2].UserOrigin)
	assert.Equal(t, true, dbL1Txs[3].UserOrigin)
	assert.Equal(t, true, dbL1Txs[4].UserOrigin)
	assert.Equal(t, true, dbL1Txs[5].UserOrigin)
	assert.Equal(t, true, dbL1Txs[6].UserOrigin)
	assert.Equal(t, true, dbL1Txs[7].UserOrigin)
	assert.Equal(t, true, dbL1Txs[8].UserOrigin)
	assert.Equal(t, true, dbL1Txs[9].UserOrigin)
	// Deposit Amount
	assert.Equal(t, big.NewInt(10), dbL1Txs[0].DepositAmount)
	assert.Equal(t, big.NewInt(10), dbL1Txs[1].DepositAmount)
	assert.Equal(t, big.NewInt(20), dbL1Txs[2].DepositAmount)
	assert.Equal(t, big.NewInt(10), dbL1Txs[3].DepositAmount)
	assert.Equal(t, big.NewInt(10), dbL1Txs[4].DepositAmount)
	assert.Equal(t, big.NewInt(10), dbL1Txs[5].DepositAmount)
	assert.Equal(t, big.NewInt(0), dbL1Txs[6].DepositAmount)
	assert.Equal(t, big.NewInt(0), dbL1Txs[7].DepositAmount)
	assert.Equal(t, big.NewInt(10), dbL1Txs[8].DepositAmount)
	assert.Equal(t, big.NewInt(10), dbL1Txs[9].DepositAmount)
	// Check saved txID's batch_num is not nil
	assert.Equal(t, txID, dbL1Txs[len(dbL1Txs)-2].TxID)
	assert.NotEqual(t, null, dbL1Txs[len(dbL1Txs)-2].BatchNum)
	// Check Coordinator TXs
	coordTxs, err := historyDB.GetAllL1CoordinatorTxs()
	assert.NoError(t, err)
	assert.Equal(t, 1, len(coordTxs))
	assert.Equal(t, common.TxTypeCreateAccountDeposit, coordTxs[0].Type)
	assert.Equal(t, false, coordTxs[0].UserOrigin)
	// Check L2 TXs
	dbL2Txs, err := historyDB.GetAllL2Txs()
	assert.NoError(t, err)
	assert.Equal(t, 4, len(dbL2Txs))
	// Tx Type
	assert.Equal(t, common.TxTypeTransfer, dbL2Txs[0].Type)
	assert.Equal(t, common.TxTypeTransfer, dbL2Txs[1].Type)
	assert.Equal(t, common.TxTypeTransfer, dbL2Txs[2].Type)
	assert.Equal(t, common.TxTypeExit, dbL2Txs[3].Type)
	// Tx ID
	assert.Equal(t, "0x020000000001030000000000", dbL2Txs[0].TxID.String())
	assert.Equal(t, "0x020000000001010000000000", dbL2Txs[1].TxID.String())
	assert.Equal(t, "0x020000000001000000000000", dbL2Txs[2].TxID.String())
	assert.Equal(t, "0x020000000001000000000001", dbL2Txs[3].TxID.String())
	// Tx From and To IDx
	assert.Equal(t, dbL2Txs[0].ToIdx, dbL2Txs[2].FromIdx)
	assert.Equal(t, dbL2Txs[1].ToIdx, dbL2Txs[0].FromIdx)
	assert.Equal(t, dbL2Txs[2].ToIdx, dbL2Txs[1].FromIdx)
	// Batch Number
	assert.Equal(t, common.BatchNum(5), dbL2Txs[0].BatchNum)
	assert.Equal(t, common.BatchNum(5), dbL2Txs[1].BatchNum)
	assert.Equal(t, common.BatchNum(5), dbL2Txs[2].BatchNum)
	assert.Equal(t, common.BatchNum(5), dbL2Txs[3].BatchNum)
	// eth_block_num
	assert.Equal(t, int64(4), dbL2Txs[0].EthBlockNum)
	assert.Equal(t, int64(4), dbL2Txs[1].EthBlockNum)
	assert.Equal(t, int64(4), dbL2Txs[2].EthBlockNum)
	// Amount
	assert.Equal(t, big.NewInt(10), dbL2Txs[0].Amount)
	assert.Equal(t, big.NewInt(10), dbL2Txs[1].Amount)
	assert.Equal(t, big.NewInt(10), dbL2Txs[2].Amount)
	assert.Equal(t, big.NewInt(10), dbL2Txs[3].Amount)
}

func TestExitTree(t *testing.T) {
	nBatches := 17
	blocks := setTestBlocks(1, 10)
	batches := test.GenBatches(nBatches, blocks)
	err := historyDB.AddBatches(batches)
	assert.NoError(t, err)
	const nTokens = 50
	tokens, ethToken := test.GenTokens(nTokens, blocks)
	err = historyDB.AddTokens(tokens)
	assert.NoError(t, err)
	tokens = append([]common.Token{ethToken}, tokens...)
	const nAccounts = 3
	accs := test.GenAccounts(nAccounts, 0, tokens, nil, nil, batches)
	assert.NoError(t, historyDB.AddAccounts(accs))
	exitTree := test.GenExitTree(nBatches, batches, accs, blocks)
	err = historyDB.AddExitTree(exitTree)
	assert.NoError(t, err)
}

func TestGetUnforgedL1UserTxs(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
Type: Blockchain
AddToken(1)
AddToken(2)
AddToken(3)
CreateAccountDeposit(1) A: 20
CreateAccountDeposit(2) A: 20
CreateAccountDeposit(1) B: 5
CreateAccountDeposit(1) C: 5
CreateAccountDeposit(1) D: 5
> block
`
	tc := til.NewContext(uint16(0), 128)
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	// Sanity check
	require.Equal(t, 1, len(blocks))
	require.Equal(t, 5, len(blocks[0].Rollup.L1UserTxs))
	toForgeL1TxsNum := int64(1)
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	l1UserTxs, err := historyDB.GetUnforgedL1UserTxs(toForgeL1TxsNum)
	require.NoError(t, err)
	assert.Equal(t, 5, len(l1UserTxs))
	assert.Equal(t, blocks[0].Rollup.L1UserTxs, l1UserTxs)
	// No l1UserTxs for this toForgeL1TxsNum
	l1UserTxs, err = historyDB.GetUnforgedL1UserTxs(2)
	require.NoError(t, err)
	assert.Equal(t, 0, len(l1UserTxs))
}
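
// exampleInitSCVars returns a set of initial smart-contract variables
// (Rollup, Auction, WDelayer) used by the SCVars-related tests below.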
func exampleInitSCVars() (*common.RollupVariables, *common.AuctionVariables, *common.WDelayerVariables) {
	//nolint:govet
	rollup := &common.RollupVariables{
		0,
		big.NewInt(10),
		12,
		13,
		[5]common.BucketParams{},
		false,
	}
	//nolint:govet
	auction := &common.AuctionVariables{
		0,
		ethCommon.BigToAddress(big.NewInt(2)),
		ethCommon.BigToAddress(big.NewInt(3)),
		"https://boot.coord.com",
		[6]*big.Int{
			big.NewInt(1), big.NewInt(2), big.NewInt(3),
			big.NewInt(4), big.NewInt(5), big.NewInt(6),
		},
		0,
		2,
		4320,
		[3]uint16{10, 11, 12},
		1000,
		20,
	}
	//nolint:govet
	wDelayer := &common.WDelayerVariables{
		0,
		ethCommon.BigToAddress(big.NewInt(2)),
		ethCommon.BigToAddress(big.NewInt(3)),
		13,
		14,
		false,
	}
	return rollup, auction, wDelayer
}

func TestSetInitialSCVars(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, _, _, err := historyDB.GetSCVars()
	assert.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
	rollup, auction, wDelayer := exampleInitSCVars()
	err = historyDB.SetInitialSCVars(rollup, auction, wDelayer)
	require.NoError(t, err)
	dbRollup, dbAuction, dbWDelayer, err := historyDB.GetSCVars()
	require.NoError(t, err)
	require.Equal(t, rollup, dbRollup)
	require.Equal(t, auction, dbAuction)
	require.Equal(t, wDelayer, dbWDelayer)
}

func TestSetL1UserTxEffectiveAmounts(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
Type: Blockchain
AddToken(1)
CreateAccountDeposit(1) A: 2000
CreateAccountDeposit(1) B: 500
CreateAccountDeposit(1) C: 500
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{*}
> block // blockNum=2
> batchL1 // forge defined L1UserTxs{*}
> block // blockNum=3
`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	err = tc.FillBlocksForgedL1UserTxs(blocks)
	require.NoError(t, err)
	// Add only first block so that the L1UserTxs are not marked as forged
	for i := range blocks[:1] {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	// Add second batch to trigger the update of the batch_num,
	// while avoiding the implicit call of setL1UserTxEffectiveAmounts
	err = historyDB.addBlock(historyDB.db, &blocks[1].Block)
	assert.NoError(t, err)
	err = historyDB.addBatch(historyDB.db, &blocks[1].Rollup.Batches[0].Batch)
	assert.NoError(t, err)
	require.NoError(t, err)
	// Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block
	l1Txs := blocks[1].Rollup.Batches[0].L1UserTxs
	require.Equal(t, 3, len(l1Txs))
	// Change some values to test all cases
	l1Txs[1].EffectiveAmount = big.NewInt(0)
	l1Txs[2].EffectiveDepositAmount = big.NewInt(0)
	l1Txs[2].EffectiveAmount = big.NewInt(0)
	err = historyDB.setL1UserTxEffectiveAmounts(historyDB.db, l1Txs)
	require.NoError(t, err)
	dbL1Txs, err := historyDB.GetAllL1UserTxs()
	require.NoError(t, err)
	for i, tx := range dbL1Txs {
		log.Infof("%d %v %v", i, tx.EffectiveAmount, tx.EffectiveDepositAmount)
		assert.NotNil(t, tx.EffectiveAmount)
		assert.NotNil(t, tx.EffectiveDepositAmount)
		switch tx.TxID {
		case l1Txs[0].TxID:
			assert.Equal(t, l1Txs[0].DepositAmount, tx.EffectiveDepositAmount)
			assert.Equal(t, l1Txs[0].Amount, tx.EffectiveAmount)
		case l1Txs[1].TxID:
			assert.Equal(t, l1Txs[1].DepositAmount, tx.EffectiveDepositAmount)
			assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
		case l1Txs[2].TxID:
			assert.Equal(t, big.NewInt(0), tx.EffectiveDepositAmount)
			assert.Equal(t, big.NewInt(0), tx.EffectiveAmount)
		}
	}
}

func TestUpdateExitTree(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
Type: Blockchain
AddToken(1)
CreateAccountDeposit(1) C: 2000 // Idx=256+2=258
CreateAccountDeposit(1) D: 500 // Idx=256+3=259
CreateAccountCoordinator(1) A // Idx=256+0=256
CreateAccountCoordinator(1) B // Idx=256+1=257
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{5}
> batchL1 // forge defined L1UserTxs{5}, freeze L1UserTxs{nil}
> block // blockNum=2
ForceExit(1) A: 100
ForceExit(1) B: 80
Exit(1) C: 50 (172)
Exit(1) D: 30 (172)
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{3}
> batchL1 // forge L1UserTxs{3}, freeze defined L1UserTxs{nil}
> block // blockNum=3
> block // blockNum=4 (empty block)
> block // blockNum=5 (empty block)
`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Add all blocks except for the last two
	for i := range blocks[:len(blocks)-2] {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	// Add withdraws to the second-to-last block, and insert block into the DB
	block := &blocks[len(blocks)-2]
	require.Equal(t, int64(4), block.Block.Num)
	tokenAddr := blocks[0].Rollup.AddedTokens[0].EthAddr
	// block.WDelayer.Deposits = append(block.WDelayer.Deposits,
	// 	common.WDelayerTransfer{Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr, Amount: big.NewInt(80)}, // 257
	// 	common.WDelayerTransfer{Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr, Amount: big.NewInt(15)}, // 259
	// )
	block.Rollup.Withdrawals = append(block.Rollup.Withdrawals,
		common.WithdrawInfo{Idx: 256, NumExitRoot: 4, InstantWithdraw: true},
		common.WithdrawInfo{Idx: 257, NumExitRoot: 4, InstantWithdraw: false,
			Owner: tc.UsersByIdx[257].Addr, Token: tokenAddr},
		common.WithdrawInfo{Idx: 258, NumExitRoot: 3, InstantWithdraw: true},
		common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false,
			Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr},
	)
	err = historyDB.addBlock(historyDB.db, &block.Block)
	require.NoError(t, err)
	err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
		block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
	require.NoError(t, err)
	// Check that exits in DB match with the expected values
	dbExits, err := historyDB.GetAllExits()
	require.NoError(t, err)
	assert.Equal(t, 4, len(dbExits))
	dbExitsByIdx := make(map[common.Idx]common.ExitInfo)
	for _, dbExit := range dbExits {
		dbExitsByIdx[dbExit.AccountIdx] = dbExit
	}
	for _, withdraw := range block.Rollup.Withdrawals {
		assert.Equal(t, withdraw.NumExitRoot, dbExitsByIdx[withdraw.Idx].BatchNum)
		if withdraw.InstantWithdraw {
			assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].InstantWithdrawn)
		} else {
			assert.Equal(t, &block.Block.Num, dbExitsByIdx[withdraw.Idx].DelayedWithdrawRequest)
		}
	}
	// Add delayed withdraw to the last block, and insert block into the DB
	block = &blocks[len(blocks)-1]
	require.Equal(t, int64(5), block.Block.Num)
	block.WDelayer.Withdrawals = append(block.WDelayer.Withdrawals,
		common.WDelayerTransfer{
			Owner:  tc.UsersByIdx[257].Addr,
			Token:  tokenAddr,
			Amount: big.NewInt(80),
		})
	err = historyDB.addBlock(historyDB.db, &block.Block)
	require.NoError(t, err)
	err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
		block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
	require.NoError(t, err)
	// Check that delayed withdrawn has been set
	dbExits, err = historyDB.GetAllExits()
	require.NoError(t, err)
	for _, dbExit := range dbExits {
		dbExitsByIdx[dbExit.AccountIdx] = dbExit
	}
	require.Equal(t, &block.Block.Num, dbExitsByIdx[257].DelayedWithdrawn)
}
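
// TestGetBestBidCoordinator exercises the HistoryDB query that returns the
// best bid of a slot together with its associated coordinator (the method
// added in this change to determine which forgerAddress can forge the slot).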
func TestGetBestBidCoordinator(t *testing.T) {
	test.WipeDB(historyDB.DB())
	rollup, auction, wDelayer := exampleInitSCVars()
	err := historyDB.SetInitialSCVars(rollup, auction, wDelayer)
	require.NoError(t, err)
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocks(`
Type: Blockchain
> block // blockNum=2
`)
	require.NoError(t, err)
	err = historyDB.AddBlockSCData(&blocks[0])
	require.NoError(t, err)
	coords := []common.Coordinator{
		{
			Bidder:      ethCommon.BigToAddress(big.NewInt(1)),
			Forger:      ethCommon.BigToAddress(big.NewInt(2)),
			EthBlockNum: 2,
			URL:         "foo",
		},
		{
			Bidder:      ethCommon.BigToAddress(big.NewInt(3)),
			Forger:      ethCommon.BigToAddress(big.NewInt(4)),
			EthBlockNum: 2,
			URL:         "bar",
		},
	}
	err = historyDB.addCoordinators(historyDB.db, coords)
	require.NoError(t, err)
	bids := []common.Bid{
		{
			SlotNum:     10,
			BidValue:    big.NewInt(10),
			EthBlockNum: 2,
			Bidder:      coords[0].Bidder,
		},
		{
			SlotNum:     10,
			BidValue:    big.NewInt(20),
			EthBlockNum: 2,
			Bidder:      coords[1].Bidder,
		},
	}
	err = historyDB.addBids(historyDB.db, bids)
	require.NoError(t, err)
	forger10, err := historyDB.GetBestBidCoordinator(10)
	require.NoError(t, err)
	require.Equal(t, coords[1].Forger, forger10.Forger)
	require.Equal(t, coords[1].Bidder, forger10.Bidder)
	require.Equal(t, coords[1].URL, forger10.URL)
	require.Equal(t, bids[1].SlotNum, forger10.SlotNum)
	require.Equal(t, bids[1].BidValue, forger10.BidValue)
	for i := range forger10.DefaultSlotSetBid {
		require.Equal(t, auction.DefaultSlotSetBid[i], forger10.DefaultSlotSetBid[i])
	}
	_, err = historyDB.GetBestBidCoordinator(11)
	require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
}

func TestAddBucketUpdates(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	bucketUpdates := []common.BucketUpdate{
		{
			EthBlockNum: 4,
			NumBucket:   0,
			BlockStamp:  4,
			Withdrawals: big.NewInt(123),
		},
		{
			EthBlockNum: 5,
			NumBucket:   2,
			BlockStamp:  5,
			Withdrawals: big.NewInt(42),
		},
	}
	err := historyDB.addBucketUpdates(historyDB.db, bucketUpdates)
	require.NoError(t, err)
	dbBucketUpdates, err := historyDB.GetAllBucketUpdates()
	require.NoError(t, err)
	assert.Equal(t, bucketUpdates, dbBucketUpdates)
}

func TestAddTokenExchanges(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	tokenExchanges := []common.TokenExchange{
		{
			EthBlockNum: 4,
			Address:     ethCommon.BigToAddress(big.NewInt(111)),
			ValueUSD:    12345,
		},
		{
			EthBlockNum: 5,
			Address:     ethCommon.BigToAddress(big.NewInt(222)),
			ValueUSD:    67890,
		},
	}
	err := historyDB.addTokenExchanges(historyDB.db, tokenExchanges)
	require.NoError(t, err)
	dbTokenExchanges, err := historyDB.GetAllTokenExchanges()
	require.NoError(t, err)
	assert.Equal(t, tokenExchanges, dbTokenExchanges)
}

func TestAddEscapeHatchWithdrawals(t *testing.T) {
	test.WipeDB(historyDB.DB())
	const fromBlock int64 = 1
	const toBlock int64 = 5 + 1
	setTestBlocks(fromBlock, toBlock)
	escapeHatchWithdrawals := []common.WDelayerEscapeHatchWithdrawal{
		{
			EthBlockNum: 4,
			Who:         ethCommon.BigToAddress(big.NewInt(111)),
			To:          ethCommon.BigToAddress(big.NewInt(222)),
			TokenAddr:   ethCommon.BigToAddress(big.NewInt(333)),
			Amount:      big.NewInt(10002),
		},
		{
			EthBlockNum: 5,
			Who:         ethCommon.BigToAddress(big.NewInt(444)),
			To:          ethCommon.BigToAddress(big.NewInt(555)),
			TokenAddr:   ethCommon.BigToAddress(big.NewInt(666)),
			Amount:      big.NewInt(20003),
		},
	}
	err := historyDB.addEscapeHatchWithdrawals(historyDB.db, escapeHatchWithdrawals)
	require.NoError(t, err)
	dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals()
	require.NoError(t, err)
	assert.Equal(t, escapeHatchWithdrawals, dbEscapeHatchWithdrawals)
}
func TestGetMetrics(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
		Type: Blockchain
		AddToken(1)
		CreateAccountDeposit(1) A: 1000 // numTx=1
		CreateAccountDeposit(1) B: 2000 // numTx=2
		CreateAccountDeposit(1) C: 3000 // numTx=3
		// block 0 is stored as default in the DB
		// block 1 does not exist
		> batchL1 // numBatches=1
		> batchL1 // numBatches=2
		> block // blockNum=2
		Transfer(1) C-A : 10 (1) // numTx=4
		> batch // numBatches=3
		> block // blockNum=3
		Transfer(1) B-C : 10 (1) // numTx=5
		> batch // numBatches=4
		> block // blockNum=4
		Transfer(1) A-B : 10 (1) // numTx=6
		> batch // numBatches=5
		> block // blockNum=5
		Transfer(1) A-B : 10 (1) // numTx=7
		> batch // numBatches=6
		> block // blockNum=6
	`
	const numBatches int = 6
	const numTx int = 7
	const blockNum = 6 - 1
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "A",
	}
	blocks, err := tc.GenerateBlocks(set)
	require.NoError(t, err)
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Sanity check
	require.Equal(t, blockNum, len(blocks))
	// Add one batch per block; the batch frequency (in seconds) can be chosen
	const frequency int = 15
	for i := range blocks {
		blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}
	res, err := historyDB.GetMetrics(common.BatchNum(numBatches))
	assert.NoError(t, err)
	assert.Equal(t, float64(numTx)/float64(numBatches-1), res.TransactionsPerBatch)
	// The measured frequency is not exactly the desired one, so some decimals may appear
	assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
	assert.Less(t, res.BatchFrequency, float64(frequency+1))
	// Truncate the frequency to an int to do an exact check
	assert.Equal(t, frequency, int(res.BatchFrequency))
	// This may also differ in some decimals:
	// truncate to the third decimal before comparing
	assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
	assert.Equal(t, int64(3), res.TotalAccounts)
	assert.Equal(t, int64(3), res.TotalBJJs)
	// Til does not set fees
	assert.Equal(t, float64(0), res.AvgTransactionFee)
}
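// NOTE (illustrative sketch, not part of the original test file): the metrics
// assertions above and in TestGetMetricsMoreThan24Hours repeat the expression
// math.Trunc(x/0.001)*0.001 to compare floats only up to the third decimal.
// A hypothetical helper like this one names that intent; it is a sketch of the
// existing pattern, not a HistoryDB API.
func truncToThirdDecimal(x float64) float64 {
	// Keep three decimal places and drop the rest, exactly as the inline
	// expressions in the assertions do.
	return math.Trunc(x/0.001) * 0.001
}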
func TestGetMetricsMoreThan24Hours(t *testing.T) {
	test.WipeDB(historyDB.DB())
	testUsersLen := 3
	var set []til.Instruction
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeCreateAccountDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1000000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	// Transfers
	for x := 0; x < 6000; x++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeTransfer,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1),
			Amount:        big.NewInt(0),
			From:          "User00",
			To:            "User01",
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBatch})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	var chainID uint16 = 0
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	const numBatches int = 6002
	const numTx int = 6003
	const blockNum = 6005 - 1
	// Sanity check
	require.Equal(t, blockNum, len(blocks))
	// Add one batch per block; the batch frequency (in seconds) can be chosen
	const frequency int = 15
	for i := range blocks {
		blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
		err = historyDB.AddBlockSCData(&blocks[i])
		assert.NoError(t, err)
	}
	res, err := historyDB.GetMetrics(common.BatchNum(numBatches))
	assert.NoError(t, err)
	assert.Equal(t, math.Trunc((float64(numTx)/float64(numBatches-1))/0.001)*0.001, math.Trunc(res.TransactionsPerBatch/0.001)*0.001)
	// The measured frequency is not exactly the desired one, so some decimals may appear
	assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
	assert.Less(t, res.BatchFrequency, float64(frequency+1))
	// Truncate the frequency to an int to do an exact check
	assert.Equal(t, frequency, int(res.BatchFrequency))
	// This may also differ in some decimals:
	// truncate to the third decimal before comparing
	assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
	assert.Equal(t, int64(3), res.TotalAccounts)
	assert.Equal(t, int64(3), res.TotalBJJs)
	// Til does not set fees
	assert.Equal(t, float64(0), res.AvgTransactionFee)
}
func TestGetMetricsEmpty(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetMetrics(0)
	assert.NoError(t, err)
}
func TestGetAvgTxFeeEmpty(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetAvgTxFee()
	assert.NoError(t, err)
}
func TestGetLastL1TxsNum(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetLastL1TxsNum()
	assert.NoError(t, err)
}
func TestGetLastTxsPosition(t *testing.T) {
	test.WipeDB(historyDB.DB())
	_, err := historyDB.GetLastTxsPosition(0)
	assert.Equal(t, sql.ErrNoRows.Error(), err.Error())
}
func TestGetFirstBatchBlockNumBySlot(t *testing.T) {
	test.WipeDB(historyDB.DB())
	set := `
		Type: Blockchain
		// Slot = 0
		> block // 2
		> block // 3
		> block // 4
		> block // 5
		// Slot = 1
		> block // 6
		> block // 7
		> batch
		> block // 8
		> block // 9
		// Slot = 2
		> batch
		> block // 10
		> block // 11
		> block // 12
		> block // 13
	`
	tc := til.NewContext(uint16(0), common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocks(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Set a deterministic SlotNum on each batch: with 4 blocks per slot in the
	// set above, the block at index i belongs to slot i/4
	for i := range blocks {
		for j := range blocks[i].Rollup.Batches {
			blocks[i].Rollup.Batches[j].Batch.SlotNum = int64(i) / 4
		}
	}
	// Add all blocks
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	_, err = historyDB.GetFirstBatchBlockNumBySlot(0)
	require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
	bn1, err := historyDB.GetFirstBatchBlockNumBySlot(1)
	require.NoError(t, err)
	assert.Equal(t, int64(8), bn1)
	bn2, err := historyDB.GetFirstBatchBlockNumBySlot(2)
	require.NoError(t, err)
	assert.Equal(t, int64(10), bn2)
}
func TestTxItemID(t *testing.T) {
	test.WipeDB(historyDB.DB())
	testUsersLen := 10
	var set []til.Instruction
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeCreateAccountDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(1000000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeDeposit,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(100000),
			Amount:        big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeDepositTransfer,
			TokenID:       common.TokenID(0),
			DepositAmount: big.NewInt(10000 * int64(user+1)),
			Amount:        big.NewInt(1000 * int64(user+1)),
			From:          fmt.Sprintf("User%02d", user),
			To:            fmt.Sprintf("User%02d", (user+1)%testUsersLen),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeForceTransfer,
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(100 * int64(user+1)),
			DepositAmount: big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
			To:            fmt.Sprintf("User%02d", (user+1)%testUsersLen),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	for user := 0; user < testUsersLen; user++ {
		set = append(set, til.Instruction{
			Typ:           common.TxTypeForceExit,
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(10 * int64(user+1)),
			DepositAmount: big.NewInt(0),
			From:          fmt.Sprintf("User%02d", user),
		})
		set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	}
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBatchL1})
	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
	var chainID uint16 = 0
	tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	blocks, err := tc.GenerateBlocksFromInstructions(set)
	assert.NoError(t, err)
	tilCfgExtra := til.ConfigExtra{
		CoordUser: "A",
	}
	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
	require.NoError(t, err)
	// Add all blocks
	for i := range blocks {
		err = historyDB.AddBlockSCData(&blocks[i])
		require.NoError(t, err)
	}
	txs, err := historyDB.GetAllL1UserTxs()
	require.NoError(t, err)
	// Positions must be consecutive within each L1UserTx queue, restarting
	// from 0 whenever a new queue begins
	position := 0
	for _, tx := range txs {
		if tx.Position == 0 {
			position = 0
		}
		assert.Equal(t, position, tx.Position)
		position++
	}
}
// setTestBlocks WARNING: this wipes the DB and recreates the test blocks
func setTestBlocks(from, to int64) []common.Block {
	test.WipeDB(historyDB.DB())
	blocks := test.GenBlocks(from, to)
	if err := historyDB.AddBlocks(blocks); err != nil {
		panic(err)
	}
	return blocks
}
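// Usage sketch (assumption, mirroring the calls in TestAddBucketUpdates and
// TestAddTokenExchanges above rather than adding new coverage): a test that
// needs a clean range of blocks before inserting rows that reference them can
// start like this:
//
//	blocks := setTestBlocks(1, 5+1) // DB is wiped; test blocks for the range are inserted
//	_ = blocks                      // rows added afterwards may reference EthBlockNum 1..5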