package historydb

import (
	"math"
	"math/big"
	"strings"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/jmoiron/sqlx"
	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)

// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the history of the rollup
type HistoryDB struct {
	db         *sqlx.DB
	apiConnCon *db.APIConnectionController
}

// NewHistoryDB initializes the DB
func NewHistoryDB(db *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
	return &HistoryDB{db: db, apiConnCon: apiConnCon}
}
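// Example (illustrative sketch, not part of the original file): opening a
// postgres connection with sqlx and constructing a HistoryDB. The connection
// string is a placeholder, and passing a nil *db.APIConnectionController is
// an assumption that is only reasonable when the API connection-limiting
// features are not exercised (e.g. in tests).
//
//	database, err := sqlx.Connect("postgres",
//		"postgres://user:password@localhost:5432/hermez?sslmode=disable")
//	if err != nil {
//		return err
//	}
//	hdb := NewHistoryDB(database, nil)
//	lastBlock, err := hdb.GetLastBlock()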
// DB returns a pointer to the HistoryDB.db. This method should be used only
// for internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.db
}

// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }

func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return tracerr.Wrap(meddler.Insert(d, "block", block))
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return tracerr.Wrap(hdb.addBlocks(hdb.db, blocks))
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks[:],
	))
}
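// Note: db.BulkInsert appears to expand the %s placeholder into one VALUES
// tuple per element of the slice (using the struct's meddler/db tags), so a
// single statement inserts the whole set of rows. This description is
// inferred from its usage in this file; see the db package for the
// authoritative behavior.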
// GetBlock retrieves a block from the DB, given a block number
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block,
		"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
	)
	return block, tracerr.Wrap(err)
}

// GetAllBlocks retrieves all blocks from the DB
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block ORDER BY eth_block_num;",
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// getBlocks retrieves blocks from the DB in the half-open range [from, to)
// of block numbers
func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
		from, to,
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetLastBlock retrieves the block with the highest block number from the DB
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
	)
	return block, tracerr.Wrap(err)
}
// AddBatch inserts a Batch into the DB
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }

func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
	// Calculate total collected fees in USD
	// Get IDs of collected tokens for fees
	tokenIDs := []common.TokenID{}
	for id := range batch.CollectedFees {
		tokenIDs = append(tokenIDs, id)
	}
	// Get USD value of the tokens
	type tokenPrice struct {
		ID       common.TokenID `meddler:"token_id"`
		USD      *float64       `meddler:"usd"`
		Decimals int            `meddler:"decimals"`
	}
	var tokenPrices []*tokenPrice
	if len(tokenIDs) > 0 {
		query, args, err := sqlx.In(
			"SELECT token_id, usd, decimals FROM token WHERE token_id IN (?);",
			tokenIDs,
		)
		if err != nil {
			return tracerr.Wrap(err)
		}
		query = hdb.db.Rebind(query)
		if err := meddler.QueryAll(
			hdb.db, &tokenPrices, query, args...,
		); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Calculate total collected
	var total float64
	for _, tokenPrice := range tokenPrices {
		if tokenPrice.USD == nil {
			continue
		}
		f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
		amount, _ := f.Float64()
		total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
	}
	batch.TotalFeesUSD = &total
	// Insert to DB
	return tracerr.Wrap(meddler.Insert(d, "batch", batch))
}
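// The USD total computed above is
//
//	total = Σ usd(token) * collectedFee(token) / 10^decimals(token)
//
// Worked example (illustrative numbers): a token priced at 1.50 USD with 6
// decimals and a collected fee of 2,000,000 base units contributes
// 1.50 * (2_000_000 / 10^6) = 3.00 USD to TotalFeesUSD. Tokens with an
// unknown USD price (usd IS NULL) are skipped.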
// AddBatches inserts Batches into the DB
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
	return tracerr.Wrap(hdb.addBatches(hdb.db, batches))
}

func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
	for i := 0; i < len(batches); i++ {
		if err := hdb.addBatch(d, &batches[i]); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}
// GetAllBatches retrieves all batches from the DB
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.db, &batches,
		`SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
		batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
		batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
		ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetBatches retrieves batches from the DB in the half-open range [from, to)
// of batch numbers
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.db, &batches,
		`SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
		state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
		FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
		from, to,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
// batch within a slot
func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
	row := hdb.db.QueryRow(
		`SELECT eth_block_num FROM batch
		WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
	)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}
// GetLastBatchNum returns the BatchNum of the latest forged batch
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
	row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
	var batchNum common.BatchNum
	return batchNum, tracerr.Wrap(row.Scan(&batchNum))
}

// GetLastBatch returns the last forged batch
func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
	var batch common.Batch
	err := meddler.QueryRow(
		hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
		batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
		batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
		batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
	)
	return &batch, tracerr.Wrap(err)
}

// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
	row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
		WHERE forge_l1_txs_num IS NOT NULL
		ORDER BY batch_num DESC LIMIT 1;`)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}
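// An "L1 batch" is a batch that also forges pending L1 user transactions and
// therefore carries a non-NULL forge_l1_txs_num. GetLastL1BatchBlockNum is
// used by the coordinator to decide when the next L1 batch has to be forged.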
  208. // GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
  209. // batches. If there's no batch in the DB (nil, nil) is returned.
  210. func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
  211. row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
  212. lastL1TxsNum := new(int64)
  213. return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
  214. }
  215. // Reorg deletes all the information that was added into the DB after the
  216. // lastValidBlock. If lastValidBlock is negative, all block information is
  217. // deleted.
  218. func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
  219. var err error
  220. if lastValidBlock < 0 {
  221. _, err = hdb.db.Exec("DELETE FROM block;")
  222. } else {
  223. _, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
  224. }
  225. return tracerr.Wrap(err)
  226. }
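// Illustrative usage sketch (assumption, not original code): a synchronizer that
// detects a reorg at block N keeps everything up to N-1; deleting from the block
// table is enough because the dependent tables (batch, tx, ...) presumably hang
// off it via ON DELETE CASCADE foreign keys:
//
//	if err := hdb.Reorg(reorgedBlockNum - 1); err != nil {
//		return tracerr.Wrap(err)
//	}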
227. // AddBids inserts Bids into the DB
  228. func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) }
  229. func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
  230. if len(bids) == 0 {
  231. return nil
  232. }
  233. // TODO: check the coordinator info
  234. return tracerr.Wrap(db.BulkInsert(
  235. d,
  236. "INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
  237. bids[:],
  238. ))
  239. }
240. // GetAllBids retrieves all bids from the DB
  241. func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
  242. var bids []*common.Bid
  243. err := meddler.QueryAll(
  244. hdb.db, &bids,
  245. `SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
  246. ORDER BY item_id;`,
  247. )
  248. return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err)
  249. }
250. // GetBestBidCoordinator returns the coordinator info (bid, forger address, URL) of the highest bidder in a slot, given its slotNum
  251. func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
  252. bidCoord := &common.BidCoordinator{}
  253. err := meddler.QueryRow(
  254. hdb.db, bidCoord,
  255. `SELECT (
  256. SELECT default_slot_set_bid
  257. FROM auction_vars
  258. WHERE default_slot_set_bid_slot_num <= $1
  259. ORDER BY eth_block_num DESC LIMIT 1
  260. ),
  261. bid.slot_num, bid.bid_value, bid.bidder_addr,
  262. coordinator.forger_addr, coordinator.url
  263. FROM bid
  264. INNER JOIN (
  265. SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
  266. GROUP BY bidder_addr
  267. ) c ON bid.bidder_addr = c.bidder_addr
  268. INNER JOIN coordinator ON c.item_id = coordinator.item_id
  269. WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`,
  270. slotNum)
  271. return bidCoord, tracerr.Wrap(err)
  272. }
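// Illustrative usage sketch (assumption, not original code; the BidCoordinator
// field names used here are assumptions): a coordinator can use this to decide
// whether it won the slot before trying to forge:
//
//	bidCoord, err := hdb.GetBestBidCoordinator(slotNum)
//	if tracerr.Unwrap(err) == sql.ErrNoRows {
//		// nobody bid on this slot; the default slot-set bid applies
//	} else if err != nil {
//		return tracerr.Wrap(err)
//	} else if bidCoord.Forger == myForgerAddr {
//		// this node holds the highest bid and may forge in the slot
//	}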
273. // AddCoordinators inserts Coordinators into the DB
  274. func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
  275. return tracerr.Wrap(hdb.addCoordinators(hdb.db, coordinators))
  276. }
  277. func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
  278. if len(coordinators) == 0 {
  279. return nil
  280. }
  281. return tracerr.Wrap(db.BulkInsert(
  282. d,
  283. "INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
  284. coordinators[:],
  285. ))
  286. }
287. // AddExitTree inserts the exit tree into the DB
  288. func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
  289. return tracerr.Wrap(hdb.addExitTree(hdb.db, exitTree))
  290. }
  291. func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
  292. if len(exitTree) == 0 {
  293. return nil
  294. }
  295. return tracerr.Wrap(db.BulkInsert(
  296. d,
  297. "INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
  298. "instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
  299. exitTree[:],
  300. ))
  301. }
  302. func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
  303. rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error {
  304. if len(rollupWithdrawals) == 0 && len(wDelayerWithdrawals) == 0 {
  305. return nil
  306. }
  307. type withdrawal struct {
  308. BatchNum int64 `db:"batch_num"`
  309. AccountIdx int64 `db:"account_idx"`
  310. InstantWithdrawn *int64 `db:"instant_withdrawn"`
  311. DelayedWithdrawRequest *int64 `db:"delayed_withdraw_request"`
  312. DelayedWithdrawn *int64 `db:"delayed_withdrawn"`
  313. Owner *ethCommon.Address `db:"owner"`
  314. Token *ethCommon.Address `db:"token"`
  315. }
  316. withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals))
  317. for i := range rollupWithdrawals {
  318. info := &rollupWithdrawals[i]
  319. withdrawals[i] = withdrawal{
  320. BatchNum: int64(info.NumExitRoot),
  321. AccountIdx: int64(info.Idx),
  322. }
  323. if info.InstantWithdraw {
  324. withdrawals[i].InstantWithdrawn = &blockNum
  325. } else {
  326. withdrawals[i].DelayedWithdrawRequest = &blockNum
  327. withdrawals[i].Owner = &info.Owner
  328. withdrawals[i].Token = &info.Token
  329. }
  330. }
  331. for i := range wDelayerWithdrawals {
  332. info := &wDelayerWithdrawals[i]
  333. withdrawals[len(rollupWithdrawals)+i] = withdrawal{
  334. DelayedWithdrawn: &blockNum,
  335. Owner: &info.Owner,
  336. Token: &info.Token,
  337. }
  338. }
  339. // In VALUES we set an initial row of NULLs to set the types of each
340. // variable passed as an argument
  341. const query string = `
  342. UPDATE exit_tree e SET
  343. instant_withdrawn = d.instant_withdrawn,
  344. delayed_withdraw_request = CASE
  345. WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request
  346. ELSE d.delayed_withdraw_request
  347. END,
  348. delayed_withdrawn = d.delayed_withdrawn,
  349. owner = d.owner,
  350. token = d.token
  351. FROM (VALUES
  352. (NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA),
  353. (:batch_num,
  354. :account_idx,
  355. :instant_withdrawn,
  356. :delayed_withdraw_request,
  357. :delayed_withdrawn,
  358. :owner,
  359. :token)
  360. ) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token)
  361. WHERE
  362. (d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR
  363. (d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token);
  364. `
  365. if len(withdrawals) > 0 {
  366. if _, err := sqlx.NamedExec(d, query, withdrawals); err != nil {
  367. return tracerr.Wrap(err)
  368. }
  369. }
  370. return nil
  371. }
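// Illustrative note (assumption, not original code): the leading row of typed
// NULLs in the VALUES list above only exists so that PostgreSQL can infer a
// concrete type for every parameter bound by sqlx.NamedExec; without it the
// bound values could be treated as text and fail to compare against the
// BIGINT/BYTEA columns of exit_tree. The quadruple colons are presumably an
// escape for the PostgreSQL cast operator, since the named-query parser
// reserves a single colon for parameter names:
//
//	NULL::::BIGINT   -- reaches PostgreSQL as NULL::BIGINT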
372. // AddToken inserts a token into the DB
  373. func (hdb *HistoryDB) AddToken(token *common.Token) error {
  374. return tracerr.Wrap(meddler.Insert(hdb.db, "token", token))
  375. }
376. // AddTokens inserts tokens into the DB
  377. func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) }
  378. func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
  379. if len(tokens) == 0 {
  380. return nil
  381. }
  382. // Sanitize name and symbol
  383. for i, token := range tokens {
  384. token.Name = strings.ToValidUTF8(token.Name, " ")
  385. token.Symbol = strings.ToValidUTF8(token.Symbol, " ")
  386. tokens[i] = token
  387. }
  388. return tracerr.Wrap(db.BulkInsert(
  389. d,
  390. `INSERT INTO token (
  391. token_id,
  392. eth_block_num,
  393. eth_addr,
  394. name,
  395. symbol,
  396. decimals
  397. ) VALUES %s;`,
  398. tokens[:],
  399. ))
  400. }
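// Illustrative note (assumption): the sanitization above is needed because
// ERC20 name/symbol values read from the chain are arbitrary bytes, while
// PostgreSQL rejects text that is not valid UTF-8. strings.ToValidUTF8 replaces
// each run of invalid bytes with the given replacement:
//
//	strings.ToValidUTF8("bad\xffname", " ") // -> "bad name"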
  401. // UpdateTokenValue updates the USD value of a token
  402. func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
  403. // Sanitize symbol
  404. tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
  405. _, err := hdb.db.Exec(
  406. "UPDATE token SET usd = $1 WHERE symbol = $2;",
  407. value, tokenSymbol,
  408. )
  409. return tracerr.Wrap(err)
  410. }
  411. // GetToken returns a token from the DB given a TokenID
  412. func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
  413. token := &TokenWithUSD{}
  414. err := meddler.QueryRow(
  415. hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
  416. )
  417. return token, tracerr.Wrap(err)
  418. }
  419. // GetAllTokens returns all tokens from the DB
  420. func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
  421. var tokens []*TokenWithUSD
  422. err := meddler.QueryAll(
  423. hdb.db, &tokens,
  424. "SELECT * FROM token ORDER BY token_id;",
  425. )
  426. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
  427. }
  428. // GetTokenSymbols returns all the token symbols from the DB
  429. func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
  430. var tokenSymbols []string
  431. rows, err := hdb.db.Query("SELECT symbol FROM token;")
  432. if err != nil {
  433. return nil, tracerr.Wrap(err)
  434. }
  435. defer db.RowsClose(rows)
  436. sym := new(string)
  437. for rows.Next() {
  438. err = rows.Scan(sym)
  439. if err != nil {
  440. return nil, tracerr.Wrap(err)
  441. }
  442. tokenSymbols = append(tokenSymbols, *sym)
  443. }
  444. return tokenSymbols, nil
  445. }
446. // AddAccounts inserts accounts into the DB
  447. func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
  448. return tracerr.Wrap(hdb.addAccounts(hdb.db, accounts))
  449. }
  450. func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
  451. if len(accounts) == 0 {
  452. return nil
  453. }
  454. return tracerr.Wrap(db.BulkInsert(
  455. d,
  456. `INSERT INTO account (
  457. idx,
  458. token_id,
  459. batch_num,
  460. bjj,
  461. eth_addr
  462. ) VALUES %s;`,
  463. accounts[:],
  464. ))
  465. }
  466. // GetAllAccounts returns a list of accounts from the DB
  467. func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
  468. var accs []*common.Account
  469. err := meddler.QueryAll(
  470. hdb.db, &accs,
  471. "SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;",
  472. )
  473. return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
  474. }
  475. // AddL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
  476. // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
477. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
478. // EffectiveAmount and EffectiveDepositAmount are set with default values by the DB.
  479. func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
  480. return tracerr.Wrap(hdb.addL1Txs(hdb.db, l1txs))
  481. }
  482. // addL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
  483. // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
484. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
485. // EffectiveAmount and EffectiveDepositAmount are set with default values by the DB.
  486. func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
  487. if len(l1txs) == 0 {
  488. return nil
  489. }
  490. txs := []txWrite{}
  491. for i := 0; i < len(l1txs); i++ {
  492. af := new(big.Float).SetInt(l1txs[i].Amount)
  493. amountFloat, _ := af.Float64()
  494. laf := new(big.Float).SetInt(l1txs[i].DepositAmount)
  495. depositAmountFloat, _ := laf.Float64()
  496. var effectiveFromIdx *common.Idx
  497. if l1txs[i].UserOrigin {
  498. if l1txs[i].Type != common.TxTypeCreateAccountDeposit &&
  499. l1txs[i].Type != common.TxTypeCreateAccountDepositTransfer {
  500. effectiveFromIdx = &l1txs[i].FromIdx
  501. }
  502. } else {
  503. effectiveFromIdx = &l1txs[i].EffectiveFromIdx
  504. }
  505. txs = append(txs, txWrite{
  506. // Generic
  507. IsL1: true,
  508. TxID: l1txs[i].TxID,
  509. Type: l1txs[i].Type,
  510. Position: l1txs[i].Position,
  511. FromIdx: &l1txs[i].FromIdx,
  512. EffectiveFromIdx: effectiveFromIdx,
  513. ToIdx: l1txs[i].ToIdx,
  514. Amount: l1txs[i].Amount,
  515. AmountFloat: amountFloat,
  516. TokenID: l1txs[i].TokenID,
  517. BatchNum: l1txs[i].BatchNum,
  518. EthBlockNum: l1txs[i].EthBlockNum,
  519. // L1
  520. ToForgeL1TxsNum: l1txs[i].ToForgeL1TxsNum,
  521. UserOrigin: &l1txs[i].UserOrigin,
  522. FromEthAddr: &l1txs[i].FromEthAddr,
  523. FromBJJ: &l1txs[i].FromBJJ,
  524. DepositAmount: l1txs[i].DepositAmount,
  525. DepositAmountFloat: &depositAmountFloat,
  526. })
  527. }
  528. return tracerr.Wrap(hdb.addTxs(d, txs))
  529. }
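// Illustrative usage sketch (assumption, not original code; other required
// fields of common.L1Tx are omitted): a user-origin L1 tx is stored with a nil
// BatchNum and a non-nil ToForgeL1TxsNum, and the trigger mentioned above sets
// batch_num once the forging batch is inserted:
//
//	toForgeL1TxsNum := int64(3)
//	tx := common.L1Tx{
//		UserOrigin:      true,
//		ToForgeL1TxsNum: &toForgeL1TxsNum,
//		Amount:          big.NewInt(0),
//		DepositAmount:   big.NewInt(1000),
//		// BatchNum intentionally left nil
//	}
//	err := hdb.AddL1Txs([]common.L1Tx{tx})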
  530. // AddL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  531. func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
  532. return tracerr.Wrap(hdb.addL2Txs(hdb.db, l2txs))
  533. }
  534. // addL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  535. func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error {
  536. txs := []txWrite{}
  537. for i := 0; i < len(l2txs); i++ {
  538. f := new(big.Float).SetInt(l2txs[i].Amount)
  539. amountFloat, _ := f.Float64()
  540. txs = append(txs, txWrite{
  541. // Generic
  542. IsL1: false,
  543. TxID: l2txs[i].TxID,
  544. Type: l2txs[i].Type,
  545. Position: l2txs[i].Position,
  546. FromIdx: &l2txs[i].FromIdx,
  547. EffectiveFromIdx: &l2txs[i].FromIdx,
  548. ToIdx: l2txs[i].ToIdx,
  549. TokenID: l2txs[i].TokenID,
  550. Amount: l2txs[i].Amount,
  551. AmountFloat: amountFloat,
  552. BatchNum: &l2txs[i].BatchNum,
  553. EthBlockNum: l2txs[i].EthBlockNum,
  554. // L2
  555. Fee: &l2txs[i].Fee,
  556. Nonce: &l2txs[i].Nonce,
  557. })
  558. }
  559. return tracerr.Wrap(hdb.addTxs(d, txs))
  560. }
  561. func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
  562. if len(txs) == 0 {
  563. return nil
  564. }
  565. return tracerr.Wrap(db.BulkInsert(
  566. d,
  567. `INSERT INTO tx (
  568. is_l1,
  569. id,
  570. type,
  571. position,
  572. from_idx,
  573. effective_from_idx,
  574. to_idx,
  575. amount,
  576. amount_f,
  577. token_id,
  578. batch_num,
  579. eth_block_num,
  580. to_forge_l1_txs_num,
  581. user_origin,
  582. from_eth_addr,
  583. from_bjj,
  584. deposit_amount,
  585. deposit_amount_f,
  586. fee,
  587. nonce
  588. ) VALUES %s;`,
  589. txs[:],
  590. ))
  591. }
592. // GetAllExits returns all exits from the DB
  593. func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
  594. var exits []*common.ExitInfo
  595. err := meddler.QueryAll(
  596. hdb.db, &exits,
  597. `SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
  598. exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
  599. exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
  600. )
  601. return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err)
  602. }
  603. // GetAllL1UserTxs returns all L1UserTxs from the DB
  604. func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
  605. var txs []*common.L1Tx
  606. err := meddler.QueryAll(
  607. hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
  608. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  609. tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  610. tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
  611. tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
  612. tx.eth_block_num, tx.type, tx.batch_num
  613. FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
  614. )
  615. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  616. }
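// Illustrative note (assumption): the CASE expressions above map the stored
// success flags back to the effective values observed on chain:
//
//	batch_num IS NULL       -> effective_amount = NULL (tx not forged yet)
//	amount_success = TRUE   -> effective_amount = amount
//	amount_success = FALSE  -> effective_amount = '\x' (parsed as big.Int 0)
//
// The same mapping applies to deposit_amount_success / effective_deposit_amount.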
  617. // GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
  618. func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
  619. var txs []*common.L1Tx
  620. // Since the query specifies that only coordinator txs are returned, it's safe to assume
  621. // that returned txs will always have effective amounts
  622. err := meddler.QueryAll(
  623. hdb.db, &txs,
  624. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  625. tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  626. tx.amount, tx.amount AS effective_amount,
  627. tx.deposit_amount, tx.deposit_amount AS effective_deposit_amount,
  628. tx.eth_block_num, tx.type, tx.batch_num
  629. FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE ORDER BY item_id;`,
  630. )
  631. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  632. }
  633. // GetAllL2Txs returns all L2Txs from the DB
  634. func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
  635. var txs []*common.L2Tx
  636. err := meddler.QueryAll(
  637. hdb.db, &txs,
  638. `SELECT tx.id, tx.batch_num, tx.position,
  639. tx.from_idx, tx.to_idx, tx.amount, tx.token_id,
  640. tx.fee, tx.nonce, tx.type, tx.eth_block_num
  641. FROM tx WHERE is_l1 = FALSE ORDER BY item_id;`,
  642. )
  643. return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err)
  644. }
  645. // GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
  646. func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
  647. var txs []*common.L1Tx
  648. err := meddler.QueryAll(
  649. hdb.db, &txs, // only L1 user txs can have batch_num set to null
  650. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  651. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  652. tx.amount, NULL AS effective_amount,
  653. tx.deposit_amount, NULL AS effective_deposit_amount,
  654. tx.eth_block_num, tx.type, tx.batch_num
  655. FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1
  656. ORDER BY position;`,
  657. toForgeL1TxsNum,
  658. )
  659. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  660. }
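// Illustrative usage sketch (assumption, not original code): combined with
// GetLastL1TxsNum, this lets a coordinator fetch the queue of pending L1 user
// txs that the next L1 batch would forge:
//
//	l1UserTxs, err := hdb.GetUnforgedL1UserTxs(nextToForgeL1TxsNum) // e.g. last forged + 1
//	if err != nil {
//		return tracerr.Wrap(err)
//	}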
661. // TODO: Think about changing all the queries that return a last value, to queries that return the next valid value.
662. // GetLastTxsPosition returns the position of the last tx for a given to_forge_l1_txs_num
  663. func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
  664. row := hdb.db.QueryRow(
  665. "SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
  666. toForgeL1TxsNum,
  667. )
  668. var lastL1TxsPosition int
  669. return lastL1TxsPosition, tracerr.Wrap(row.Scan(&lastL1TxsPosition))
  670. }
  671. // GetSCVars returns the rollup, auction and wdelayer smart contracts variables at their last update.
  672. func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables,
  673. *common.WDelayerVariables, error) {
  674. var rollup common.RollupVariables
  675. var auction common.AuctionVariables
  676. var wDelayer common.WDelayerVariables
  677. if err := meddler.QueryRow(hdb.db, &rollup,
  678. "SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  679. return nil, nil, nil, tracerr.Wrap(err)
  680. }
  681. if err := meddler.QueryRow(hdb.db, &auction,
  682. "SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  683. return nil, nil, nil, tracerr.Wrap(err)
  684. }
  685. if err := meddler.QueryRow(hdb.db, &wDelayer,
  686. "SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  687. return nil, nil, nil, tracerr.Wrap(err)
  688. }
  689. return &rollup, &auction, &wDelayer, nil
  690. }
  691. func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error {
  692. return tracerr.Wrap(meddler.Insert(d, "rollup_vars", rollup))
  693. }
  694. func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error {
  695. return tracerr.Wrap(meddler.Insert(d, "auction_vars", auction))
  696. }
  697. func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error {
  698. return tracerr.Wrap(meddler.Insert(d, "wdelayer_vars", wDelayer))
  699. }
  700. func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
  701. if len(bucketUpdates) == 0 {
  702. return nil
  703. }
  704. return tracerr.Wrap(db.BulkInsert(
  705. d,
  706. `INSERT INTO bucket_update (
  707. eth_block_num,
  708. num_bucket,
  709. block_stamp,
  710. withdrawals
  711. ) VALUES %s;`,
  712. bucketUpdates[:],
  713. ))
  714. }
715. // AddBucketUpdatesTest allows calling the unexported addBucketUpdates method;
716. // used only for internal testing purposes
  717. func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
  718. return hdb.addBucketUpdates(d, bucketUpdates)
  719. }
  720. // GetAllBucketUpdates retrieves all the bucket updates
  721. func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
  722. var bucketUpdates []*common.BucketUpdate
  723. err := meddler.QueryAll(
  724. hdb.db, &bucketUpdates,
  725. `SELECT eth_block_num, num_bucket, block_stamp, withdrawals
  726. FROM bucket_update ORDER BY item_id;`,
  727. )
  728. return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
  729. }
  730. func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
  731. if len(tokenExchanges) == 0 {
  732. return nil
  733. }
  734. return tracerr.Wrap(db.BulkInsert(
  735. d,
  736. `INSERT INTO token_exchange (
  737. eth_block_num,
  738. eth_addr,
  739. value_usd
  740. ) VALUES %s;`,
  741. tokenExchanges[:],
  742. ))
  743. }
  744. // GetAllTokenExchanges retrieves all the token exchanges
  745. func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
  746. var tokenExchanges []*common.TokenExchange
  747. err := meddler.QueryAll(
  748. hdb.db, &tokenExchanges,
  749. "SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;",
  750. )
  751. return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
  752. }
  753. func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
  754. escapeHatchWithdrawals []common.WDelayerEscapeHatchWithdrawal) error {
  755. if len(escapeHatchWithdrawals) == 0 {
  756. return nil
  757. }
  758. return tracerr.Wrap(db.BulkInsert(
  759. d,
  760. `INSERT INTO escape_hatch_withdrawal (
  761. eth_block_num,
  762. who_addr,
  763. to_addr,
  764. token_addr,
  765. amount
  766. ) VALUES %s;`,
  767. escapeHatchWithdrawals[:],
  768. ))
  769. }
  770. // GetAllEscapeHatchWithdrawals retrieves all the escape hatch withdrawals
  771. func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
  772. var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
  773. err := meddler.QueryAll(
  774. hdb.db, &escapeHatchWithdrawals,
  775. "SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;",
  776. )
  777. return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
  778. tracerr.Wrap(err)
  779. }
  780. // SetInitialSCVars sets the initial state of rollup, auction, wdelayer smart
  781. // contract variables. This initial state is stored linked to block 0, which
782. // always exists in the DB and is used to store initialization data that always
783. // exists in the smart contracts.
  784. func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
  785. auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
  786. txn, err := hdb.db.Beginx()
  787. if err != nil {
  788. return tracerr.Wrap(err)
  789. }
  790. defer func() {
  791. if err != nil {
  792. db.Rollback(txn)
  793. }
  794. }()
  795. // Force EthBlockNum to be 0 because it's the block used to link data
  796. // that belongs to the creation of the smart contracts
  797. rollup.EthBlockNum = 0
  798. auction.EthBlockNum = 0
  799. wDelayer.EthBlockNum = 0
  800. auction.DefaultSlotSetBidSlotNum = 0
  801. if err := hdb.setRollupVars(txn, rollup); err != nil {
  802. return tracerr.Wrap(err)
  803. }
  804. if err := hdb.setAuctionVars(txn, auction); err != nil {
  805. return tracerr.Wrap(err)
  806. }
  807. if err := hdb.setWDelayerVars(txn, wDelayer); err != nil {
  808. return tracerr.Wrap(err)
  809. }
  810. return tracerr.Wrap(txn.Commit())
  811. }
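// Illustrative usage sketch (assumption, not original code): called once when
// the DB is bootstrapped, with the variables read from the smart contract
// constructors; EthBlockNum is forced to 0 inside, so callers don't need to set
// it:
//
//	if err := hdb.SetInitialSCVars(&rollupVars, &auctionVars, &wDelayerVars); err != nil {
//		return tracerr.Wrap(err)
//	}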
  812. // setExtraInfoForgedL1UserTxs sets the EffectiveAmount, EffectiveDepositAmount
  813. // and EffectiveFromIdx of the given l1UserTxs (with an UPDATE)
  814. func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx) error {
  815. if len(txs) == 0 {
  816. return nil
  817. }
  818. // Effective amounts are stored as success flags in the DB, with true value by default
819. // to reduce the amount of updates. Therefore, only amounts that became ineffective should be
820. // updated to become false. At the same time, all the txs that create
821. // accounts (FromIdx == 0) are updated to set the EffectiveFromIdx.
  822. type txUpdate struct {
  823. ID common.TxID `db:"id"`
  824. AmountSuccess bool `db:"amount_success"`
  825. DepositAmountSuccess bool `db:"deposit_amount_success"`
  826. EffectiveFromIdx common.Idx `db:"effective_from_idx"`
  827. }
  828. txUpdates := []txUpdate{}
  829. equal := func(a *big.Int, b *big.Int) bool {
  830. return a.Cmp(b) == 0
  831. }
  832. for i := range txs {
  833. amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount)
  834. depositAmountSuccess := equal(txs[i].DepositAmount, txs[i].EffectiveDepositAmount)
  835. if !amountSuccess || !depositAmountSuccess || txs[i].FromIdx == 0 {
  836. txUpdates = append(txUpdates, txUpdate{
  837. ID: txs[i].TxID,
  838. AmountSuccess: amountSuccess,
  839. DepositAmountSuccess: depositAmountSuccess,
  840. EffectiveFromIdx: txs[i].EffectiveFromIdx,
  841. })
  842. }
  843. }
  844. const query string = `
  845. UPDATE tx SET
  846. amount_success = tx_update.amount_success,
  847. deposit_amount_success = tx_update.deposit_amount_success,
  848. effective_from_idx = tx_update.effective_from_idx
  849. FROM (VALUES
  850. (NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL, NULL::::BIGINT),
  851. (:id, :amount_success, :deposit_amount_success, :effective_from_idx)
  852. ) as tx_update (id, amount_success, deposit_amount_success, effective_from_idx)
  853. WHERE tx.id = tx_update.id;
  854. `
  855. if len(txUpdates) > 0 {
  856. if _, err := sqlx.NamedExec(d, query, txUpdates); err != nil {
  857. return tracerr.Wrap(err)
  858. }
  859. }
  860. return nil
  861. }
  862. // AddBlockSCData stores all the information of a block retrieved by the
  863. // Synchronizer. Blocks should be inserted in order, leaving no gaps because
  864. // the pagination system of the API/DB depends on this. Within blocks, all
  865. // items should also be in the correct order (Accounts, Tokens, Txs, etc.)
  866. func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
  867. txn, err := hdb.db.Beginx()
  868. if err != nil {
  869. return tracerr.Wrap(err)
  870. }
  871. defer func() {
  872. if err != nil {
  873. db.Rollback(txn)
  874. }
  875. }()
  876. // Add block
  877. if err := hdb.addBlock(txn, &blockData.Block); err != nil {
  878. return tracerr.Wrap(err)
  879. }
  880. // Add Coordinators
  881. if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil {
  882. return tracerr.Wrap(err)
  883. }
  884. // Add Bids
  885. if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil {
  886. return tracerr.Wrap(err)
  887. }
  888. // Add Tokens
  889. if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil {
  890. return tracerr.Wrap(err)
  891. }
  892. // Prepare user L1 txs to be added.
  893. // They must be added before the batch that will forge them (which can be in the same block)
894. // and after the account they will be sent to (which can also be in the same block).
  895. // Note: insert order is not relevant since item_id will be updated by a DB trigger when
  896. // the batch that forges those txs is inserted
  897. userL1s := make(map[common.BatchNum][]common.L1Tx)
  898. for i := range blockData.Rollup.L1UserTxs {
  899. batchThatForgesIsInTheBlock := false
  900. for _, batch := range blockData.Rollup.Batches {
  901. if batch.Batch.ForgeL1TxsNum != nil &&
  902. *batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum {
  903. // Tx is forged in this block. It's guaranteed that:
  904. // * the first batch of the block won't forge user L1 txs that have been added in this block
905. // * batch nums are sequential, therefore it's safe to add the tx at batch.BatchNum - 1
  906. batchThatForgesIsInTheBlock = true
  907. addAtBatchNum := batch.Batch.BatchNum - 1
  908. userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i])
  909. break
  910. }
  911. }
  912. if !batchThatForgesIsInTheBlock {
913. // Use artificial batchNum 0 to add txs that are not forged in this block
  914. // after all the accounts of the block have been added
  915. userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i])
  916. }
  917. }
  918. // Add Batches
  919. for i := range blockData.Rollup.Batches {
  920. batch := &blockData.Rollup.Batches[i]
  921. // Add Batch: this will trigger an update on the DB
  922. // that will set the batch num of forged L1 txs in this batch
  923. if err = hdb.addBatch(txn, &batch.Batch); err != nil {
  924. return tracerr.Wrap(err)
  925. }
  926. // Add accounts
  927. if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil {
  928. return tracerr.Wrap(err)
  929. }
  930. // Set the EffectiveAmount and EffectiveDepositAmount of all the
  931. // L1UserTxs that have been forged in this batch
  932. if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil {
  933. return tracerr.Wrap(err)
  934. }
  935. // Add forged l1 coordinator Txs
  936. if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil {
  937. return tracerr.Wrap(err)
  938. }
  939. // Add l2 Txs
  940. if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil {
  941. return tracerr.Wrap(err)
  942. }
943. // Add user L1 txs that will be forged in the next batch
944. if batchUserL1s, ok := userL1s[batch.Batch.BatchNum]; ok {
945. if err := hdb.addL1Txs(txn, batchUserL1s); err != nil {
  946. return tracerr.Wrap(err)
  947. }
  948. }
  949. // Add exit tree
  950. if err := hdb.addExitTree(txn, batch.ExitTree); err != nil {
  951. return tracerr.Wrap(err)
  952. }
  953. }
  954. // Add user L1 txs that won't be forged in this block
  955. if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok {
  956. if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil {
  957. return tracerr.Wrap(err)
  958. }
  959. }
  960. // Set SC Vars if there was an update
  961. if blockData.Rollup.Vars != nil {
  962. if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil {
  963. return tracerr.Wrap(err)
  964. }
  965. }
  966. if blockData.Auction.Vars != nil {
  967. if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil {
  968. return tracerr.Wrap(err)
  969. }
  970. }
  971. if blockData.WDelayer.Vars != nil {
  972. if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil {
  973. return tracerr.Wrap(err)
  974. }
  975. }
  976. // Update withdrawals in exit tree table
  977. if err := hdb.updateExitTree(txn, blockData.Block.Num,
  978. blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil {
  979. return tracerr.Wrap(err)
  980. }
  981. // Add Escape Hatch Withdrawals
  982. if err := hdb.addEscapeHatchWithdrawals(txn,
  983. blockData.WDelayer.EscapeHatchWithdrawals); err != nil {
  984. return tracerr.Wrap(err)
  985. }
  986. // Add Buckets withdrawals updates
  987. if err := hdb.addBucketUpdates(txn, blockData.Rollup.UpdateBucketWithdraw); err != nil {
  988. return tracerr.Wrap(err)
  989. }
  990. // Add Token exchange updates
  991. if err := hdb.addTokenExchanges(txn, blockData.Rollup.TokenExchanges); err != nil {
  992. return tracerr.Wrap(err)
  993. }
  994. return tracerr.Wrap(txn.Commit())
  995. }
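// Illustrative usage sketch (assumption, not original code; sync.Sync is a
// hypothetical synchronizer call): one common.BlockData is built per Ethereum
// block and stored atomically, in block order:
//
//	blockData, err := sync.Sync(ctx)
//	if err != nil {
//		return tracerr.Wrap(err)
//	}
//	if blockData != nil {
//		if err := hdb.AddBlockSCData(blockData); err != nil {
//			return tracerr.Wrap(err)
//		}
//	}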
  996. // GetCoordinatorAPI returns a coordinator by its bidderAddr
  997. func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
  998. coordinator := &CoordinatorAPI{}
  999. err := meddler.QueryRow(
  1000. hdb.db, coordinator,
  1001. "SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
  1002. bidderAddr,
  1003. )
  1004. return coordinator, tracerr.Wrap(err)
  1005. }
1007. // AddAuctionVars inserts auction vars into the DB
  1007. func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
  1008. return tracerr.Wrap(meddler.Insert(hdb.db, "auction_vars", auctionVars))
  1009. }
1011. // GetTokensTest is used to get tokens in a testing context
  1011. func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
  1012. tokens := []*TokenWithUSD{}
  1013. if err := meddler.QueryAll(
  1014. hdb.db, &tokens,
  1015. "SELECT * FROM TOKEN",
  1016. ); err != nil {
  1017. return nil, tracerr.Wrap(err)
  1018. }
  1019. if len(tokens) == 0 {
  1020. return []TokenWithUSD{}, nil
  1021. }
  1022. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil
  1023. }