
Update missing parts, improve til, and more

- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state/status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call, and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs out of BatchData into a separate struct in Context
  - Start Context with BatchNum = 1 (which the protocol defines as the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum: when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not yet known (setting it is the synchronizer's job)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
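The "sync only one block per call" design above can be sketched as a loop that keeps calling the synchronizer until it reports nothing new. This is an illustrative sketch, not the hermez-node API: `blockData`, `syncer`, and `syncOneBlock` are hypothetical names standing in for `common.BlockData` and the `Sync` routine.

```go
package main

import "fmt"

// blockData is a stand-in for the common.BlockData described above.
type blockData struct {
	Num int64
}

// syncer mimics a synchronizer whose sync call advances at most one block
// per invocation and reports what it did.
type syncer struct {
	last  int64 // number of blocks already synced
	chain []blockData
}

// syncOneBlock returns the next synced block, or nil when already up to date,
// so the caller gets useful information back from every call.
func (s *syncer) syncOneBlock() *blockData {
	if int(s.last) >= len(s.chain) {
		return nil // nothing new: the caller can wait for the next block
	}
	b := s.chain[s.last]
	s.last++
	return &b
}

func main() {
	s := &syncer{chain: []blockData{{Num: 1}, {Num: 2}, {Num: 3}}}
	for {
		b := s.syncOneBlock()
		if b == nil {
			break
		}
		fmt.Println("synced block", b.Num)
	}
}
```

Returning per-call results (instead of syncing in an internal loop) lets the caller interleave other work, such as notifying the coordinator, between blocks.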
Redo coordinator structure, connect API to node

- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add the column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code so that the goroutines are started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use a context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update the test (a minimal test to manually check that the coordinator starts)
- HistoryDB:
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` for clarity
- Node:
  - Connect the API to the node and call the methods to update the cached state when the sync advances blocks
  - Call the methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
  - Add an Auction field to the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
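The stop/stopped-channel replacement described above is the standard Go lifecycle pattern: the owner holds a `context.CancelFunc` and a `sync.WaitGroup`, cancels the context to signal shutdown, and waits on the group until every goroutine has exited. A minimal sketch, with illustrative names (`coordinator` here is not the real hermez-node struct):

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// coordinator owns its goroutines: they are started and stopped from the
// struct itself, cancelled via context.Context and awaited with
// sync.WaitGroup instead of stop/stopped channels.
type coordinator struct {
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

func (c *coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				fmt.Println("pipeline stopped")
				return
			case <-time.After(10 * time.Millisecond):
				// periodic forge/sync work would happen here
			}
		}
	}()
}

// Stop cancels the context and blocks until every goroutine has exited,
// so the caller knows shutdown is complete when Stop returns.
func (c *coordinator) Stop() {
	c.cancel()
	c.wg.Wait()
}

func main() {
	c := &coordinator{}
	c.Start()
	time.Sleep(25 * time.Millisecond)
	c.Stop()
}
```

Compared with paired stop/stopped channels, a context propagates cancellation to nested calls for free, and the WaitGroup replaces the per-goroutine "stopped" acknowledgement.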
Update coordinator, call all api update functions

- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer:
  - When mapping events by TxHash, use an array to support multiple calls of the same function in the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, implement logic to make several attempts on ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged L1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. 
- Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. 
- Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. 
- Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
Redo coordinator structure, connect API to node
- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer.
  - Add helper methods to AuctionConstants
  - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying.
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node.
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead.
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool use context instead of a stop channel.
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks
  - In forge logic, first prepare a batch and then wait for an available server proof, to have all work ready once the proof server is ready.
  - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time.
  - Update test (which is a minimal test to manually see if the coordinator starts)
- HistoryDB:
  - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline)
  - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer.
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks.
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs.
- Synchronizer:
  - Add Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block.
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state.
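The start/stop reorganization described above (context.Context plus sync.WaitGroup instead of stop/stopped channels) can be sketched as follows. This is a hypothetical reduction of the pattern, not the real Coordinator API:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Coordinator illustrates the pattern: the struct starts its own
// goroutines and stops them via context cancellation, using a
// WaitGroup to wait until they have fully exited.
type Coordinator struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

// Start launches the worker goroutine.
func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return // stop requested
			case <-time.After(10 * time.Millisecond):
				// pipeline work would happen here
			}
		}
	}()
}

// Stop cancels the context and blocks until the goroutine has exited.
func (c *Coordinator) Stop() {
	c.cancel()
	c.wg.Wait()
}

func main() {
	c := &Coordinator{}
	c.Start()
	c.Stop()
	fmt.Println("stopped cleanly")
}
```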
Update coordinator, call all api update functions
- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer:
  - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events and 2 deposit events).
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes.
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch.
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1.
package historydb

import (
	"errors"
	"fmt"
	"math"
	"math/big"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"github.com/jmoiron/sqlx"

	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)

// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the history of the rollup
type HistoryDB struct {
	db *sqlx.DB
}

// NewHistoryDB initializes the DB
func NewHistoryDB(db *sqlx.DB) *HistoryDB {
	return &HistoryDB{db: db}
}

// DB returns a pointer to the HistoryDB.db. This method should be used only for
// internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.db
}
// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }
func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return tracerr.Wrap(meddler.Insert(d, "block", block))
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return tracerr.Wrap(hdb.addBlocks(hdb.db, blocks))
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks[:],
	))
}

// GetBlock retrieves a block from the DB, given a block number
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block,
		"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
	)
	return block, tracerr.Wrap(err)
}

// GetAllBlocks retrieves all blocks from the DB
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block ORDER BY eth_block_num;",
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetBlocks retrieves blocks from the DB, given a range of block numbers defined by from and to
func (hdb *HistoryDB) GetBlocks(from, to int64) ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
		from, to,
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetLastBlock retrieves the block with the highest block number from the DB
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
	)
	return block, tracerr.Wrap(err)
}
// AddBatch inserts a Batch into the DB
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }
func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
	// Calculate total collected fees in USD
	// Get IDs of collected tokens for fees
	tokenIDs := []common.TokenID{}
	for id := range batch.CollectedFees {
		tokenIDs = append(tokenIDs, id)
	}
	// Get USD value of the tokens
	type tokenPrice struct {
		ID       common.TokenID `meddler:"token_id"`
		USD      *float64       `meddler:"usd"`
		Decimals int            `meddler:"decimals"`
	}
	var tokenPrices []*tokenPrice
	if len(tokenIDs) > 0 {
		query, args, err := sqlx.In(
			"SELECT token_id, usd, decimals FROM token WHERE token_id IN (?);",
			tokenIDs,
		)
		if err != nil {
			return tracerr.Wrap(err)
		}
		query = hdb.db.Rebind(query)
		if err := meddler.QueryAll(
			hdb.db, &tokenPrices, query, args...,
		); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Calculate total collected
	var total float64
	for _, tokenPrice := range tokenPrices {
		if tokenPrice.USD == nil {
			continue
		}
		f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
		amount, _ := f.Float64()
		total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
	}
	batch.TotalFeesUSD = &total
	// Insert to DB
	return tracerr.Wrap(meddler.Insert(d, "batch", batch))
}

// AddBatches inserts Batches into the DB
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
	return tracerr.Wrap(hdb.addBatches(hdb.db, batches))
}

func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
	for i := 0; i < len(batches); i++ {
		if err := hdb.addBatch(d, &batches[i]); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}

// GetBatchAPI returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
	batch := &BatchAPI{}
	return batch, tracerr.Wrap(meddler.QueryRow(
		hdb.db, batch,
		`SELECT batch.*, block.timestamp, block.hash,
		COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs
		FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
		WHERE batch_num = $1;`, batchNum,
	))
}
// GetBatchesAPI returns the batches applying the given filters
func (hdb *HistoryDB) GetBatchesAPI(
	minBatchNum, maxBatchNum, slotNum *uint,
	forgerAddr *ethCommon.Address,
	fromItem, limit *uint, order string,
) ([]BatchAPI, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT batch.*, block.timestamp, block.hash,
	COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs,
	count(*) OVER() AS total_items
	FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num `
	// Apply filters
	nextIsAnd := false
	// minBatchNum filter
	if minBatchNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.batch_num > ? "
		args = append(args, minBatchNum)
		nextIsAnd = true
	}
	// maxBatchNum filter
	if maxBatchNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.batch_num < ? "
		args = append(args, maxBatchNum)
		nextIsAnd = true
	}
	// slotNum filter
	if slotNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.slot_num = ? "
		args = append(args, slotNum)
		nextIsAnd = true
	}
	// forgerAddr filter
	if forgerAddr != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.forger_addr = ? "
		args = append(args, forgerAddr)
		nextIsAnd = true
	}
	// pagination
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "batch.item_id >= ? "
		} else {
			queryStr += "batch.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	queryStr += "ORDER BY batch.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query = hdb.db.Rebind(queryStr)
	// log.Debug(query)
	batchPtrs := []*BatchAPI{}
	if err := meddler.QueryAll(hdb.db, &batchPtrs, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	batches := db.SlicePtrsToSlice(batchPtrs).([]BatchAPI)
	if len(batches) == 0 {
		return batches, 0, nil
	}
	return batches, batches[0].TotalItems - uint64(len(batches)), nil
}
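The filter-building logic in GetBatchesAPI (and GetBidsAPI below) follows one pattern: the first filter appends `WHERE`, every later one appends `AND`, tracked by a `nextIsAnd` flag. A standalone sketch of that pattern, with an illustrative `buildFilters` helper that is not part of the real package:

```go
package main

import "fmt"

// buildFilters accumulates SQL filter clauses: the first one is
// introduced with WHERE, subsequent ones with AND.
func buildFilters(base string, filters []string) string {
	q := base
	nextIsAnd := false
	for _, f := range filters {
		if nextIsAnd {
			q += "AND "
		} else {
			q += "WHERE "
			nextIsAnd = true
		}
		q += f + " "
	}
	return q
}

func main() {
	q := buildFilters("SELECT * FROM batch ", []string{"batch_num > ?", "slot_num = ?"})
	fmt.Println(q) // SELECT * FROM batch WHERE batch_num > ? AND slot_num = ?
}
```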
// GetAllBatches retrieves all batches from the DB
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.db, &batches,
		`SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
		batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
		batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
		ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetBatches retrieves batches from the DB, given a range of batch numbers defined by from and to
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.db, &batches,
		"SELECT * FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;",
		from, to,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
// batch within a slot
func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
	row := hdb.db.QueryRow(
		`SELECT eth_block_num FROM batch
		WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
	)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}

// GetLastBatchNum returns the BatchNum of the latest forged batch
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
	row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
	var batchNum common.BatchNum
	return batchNum, tracerr.Wrap(row.Scan(&batchNum))
}

// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
	row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
		WHERE forge_l1_txs_num IS NOT NULL
		ORDER BY batch_num DESC LIMIT 1;`)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}

// GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
// batches. If there's no batch in the DB, (nil, nil) is returned.
func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
	row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
	lastL1TxsNum := new(int64)
	return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
}

// Reorg deletes all the information that was added into the DB after the
// lastValidBlock. If lastValidBlock is negative, all block information is
// deleted.
func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
	var err error
	if lastValidBlock < 0 {
		_, err = hdb.db.Exec("DELETE FROM block;")
	} else {
		_, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
	}
	return tracerr.Wrap(err)
}
// AddBids inserts Bids into the DB
func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) }
func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
	if len(bids) == 0 {
		return nil
	}
	// TODO: check the coordinator info
	return tracerr.Wrap(db.BulkInsert(
		d,
		"INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
		bids[:],
	))
}

// GetAllBids retrieves all bids from the DB
func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
	var bids []*common.Bid
	err := meddler.QueryAll(
		hdb.db, &bids,
		`SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
		ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err)
}

// GetBestBidAPI returns the best bid in a specific slot by slotNum
func (hdb *HistoryDB) GetBestBidAPI(slotNum *int64) (BidAPI, error) {
	bid := &BidAPI{}
	err := meddler.QueryRow(
		hdb.db, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
		FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
		INNER JOIN coordinator ON bid.bidder_addr = coordinator.bidder_addr
		WHERE slot_num = $1 ORDER BY item_id DESC LIMIT 1;`, slotNum,
	)
	return *bid, tracerr.Wrap(err)
}

// GetBestBidCoordinator returns the forger address of the highest bidder in a slot by slotNum
func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
	bidCoord := &common.BidCoordinator{}
	err := meddler.QueryRow(
		hdb.db, bidCoord,
		`SELECT (
			SELECT default_slot_set_bid
			FROM auction_vars
			WHERE default_slot_set_bid_slot_num <= $1
			ORDER BY eth_block_num DESC LIMIT 1
		),
		bid.slot_num, bid.bid_value, bid.bidder_addr,
		coordinator.forger_addr, coordinator.url
		FROM bid
		INNER JOIN coordinator ON bid.bidder_addr = coordinator.bidder_addr
		WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`,
		slotNum)
	return bidCoord, tracerr.Wrap(err)
}
// GetBestBidsAPI returns the best bids within a range of slots
func (hdb *HistoryDB) GetBestBidsAPI(
	minSlotNum, maxSlotNum *int64,
	bidderAddr *ethCommon.Address,
	limit *uint, order string,
) ([]BidAPI, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT b.*, block.timestamp, coordinator.forger_addr, coordinator.url,
	COUNT(*) OVER() AS total_items FROM (
		SELECT slot_num, MAX(item_id) as maxitem
		FROM bid GROUP BY slot_num
	)
	AS x INNER JOIN bid AS b ON b.item_id = x.maxitem
	INNER JOIN block ON b.eth_block_num = block.eth_block_num
	INNER JOIN coordinator ON b.bidder_addr = coordinator.bidder_addr
	WHERE (b.slot_num >= ? AND b.slot_num <= ?)`
	args = append(args, minSlotNum)
	args = append(args, maxSlotNum)
	// Apply filters
	if bidderAddr != nil {
		queryStr += " AND b.bidder_addr = ? "
		args = append(args, bidderAddr)
	}
	queryStr += " ORDER BY b.slot_num "
	if order == OrderAsc {
		queryStr += "ASC "
	} else {
		queryStr += "DESC "
	}
	if limit != nil {
		queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	}
	query = hdb.db.Rebind(queryStr)
	bidPtrs := []*BidAPI{}
	if err := meddler.QueryAll(hdb.db, &bidPtrs, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	// log.Debug(query)
	bids := db.SlicePtrsToSlice(bidPtrs).([]BidAPI)
	if len(bids) == 0 {
		return bids, 0, nil
	}
	return bids, bids[0].TotalItems - uint64(len(bids)), nil
}

// GetBidsAPI returns the bids applying the given filters
func (hdb *HistoryDB) GetBidsAPI(
	slotNum *int64, forgerAddr *ethCommon.Address,
	fromItem, limit *uint, order string,
) ([]BidAPI, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url,
	COUNT(*) OVER() AS total_items
	FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
	INNER JOIN coordinator ON bid.bidder_addr = coordinator.bidder_addr `
	// Apply filters
	nextIsAnd := false
	// slotNum filter
	if slotNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "bid.slot_num = ? "
		args = append(args, slotNum)
		nextIsAnd = true
	}
	// forgerAddr filter
	if forgerAddr != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "bid.bidder_addr = ? "
		args = append(args, forgerAddr)
		nextIsAnd = true
	}
	// pagination
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "bid.item_id >= ? "
		} else {
			queryStr += "bid.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	queryStr += "ORDER BY bid.item_id "
	if order == OrderAsc {
		queryStr += "ASC "
	} else {
		queryStr += "DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query, argsQ, err := sqlx.In(queryStr, args...)
	if err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	query = hdb.db.Rebind(query)
	bids := []*BidAPI{}
	if err := meddler.QueryAll(hdb.db, &bids, query, argsQ...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(bids) == 0 {
		return []BidAPI{}, 0, nil
	}
	return db.SlicePtrsToSlice(bids).([]BidAPI), bids[0].TotalItems - uint64(len(bids)), nil
}
// AddCoordinators inserts Coordinators into the DB
func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
	return tracerr.Wrap(hdb.addCoordinators(hdb.db, coordinators))
}

func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
	if len(coordinators) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		"INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
		coordinators[:],
	))
}

// AddExitTree inserts an exit tree into the DB
func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
	return tracerr.Wrap(hdb.addExitTree(hdb.db, exitTree))
}

func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
	if len(exitTree) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		"INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
			"instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
		exitTree[:],
	))
}

func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
	rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error {
	if len(rollupWithdrawals) == 0 && len(wDelayerWithdrawals) == 0 {
		return nil
	}
	type withdrawal struct {
		BatchNum               int64              `db:"batch_num"`
		AccountIdx             int64              `db:"account_idx"`
		InstantWithdrawn       *int64             `db:"instant_withdrawn"`
		DelayedWithdrawRequest *int64             `db:"delayed_withdraw_request"`
		DelayedWithdrawn       *int64             `db:"delayed_withdrawn"`
		Owner                  *ethCommon.Address `db:"owner"`
		Token                  *ethCommon.Address `db:"token"`
	}
	withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals))
	for i := range rollupWithdrawals {
		info := &rollupWithdrawals[i]
		withdrawals[i] = withdrawal{
			BatchNum:   int64(info.NumExitRoot),
			AccountIdx: int64(info.Idx),
		}
		if info.InstantWithdraw {
			withdrawals[i].InstantWithdrawn = &blockNum
		} else {
			withdrawals[i].DelayedWithdrawRequest = &blockNum
			withdrawals[i].Owner = &info.Owner
			withdrawals[i].Token = &info.Token
		}
	}
	for i := range wDelayerWithdrawals {
		info := &wDelayerWithdrawals[i]
		withdrawals[len(rollupWithdrawals)+i] = withdrawal{
			DelayedWithdrawn: &blockNum,
			Owner:            &info.Owner,
			Token:            &info.Token,
		}
	}
	// In VALUES we set an initial row of NULLs to set the types of each
	// variable passed as argument
	const query string = `
		UPDATE exit_tree e SET
			instant_withdrawn = d.instant_withdrawn,
			delayed_withdraw_request = CASE
				WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request
				ELSE d.delayed_withdraw_request
			END,
			delayed_withdrawn = d.delayed_withdrawn,
			owner = d.owner,
			token = d.token
		FROM (VALUES
			(NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA),
			(:batch_num,
			 :account_idx,
			 :instant_withdrawn,
			 :delayed_withdraw_request,
			 :delayed_withdrawn,
			 :owner,
			 :token)
		) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token)
		WHERE
			(d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR
			(d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token);
	`
	if len(withdrawals) > 0 {
		if _, err := sqlx.NamedExec(d, query, withdrawals); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}
// AddToken inserts a token into the DB
func (hdb *HistoryDB) AddToken(token *common.Token) error {
	return tracerr.Wrap(meddler.Insert(hdb.db, "token", token))
}

// AddTokens inserts tokens into the DB
func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) }

func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
	if len(tokens) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO token (
			token_id,
			eth_block_num,
			eth_addr,
			name,
			symbol,
			decimals
		) VALUES %s;`,
		tokens[:],
	))
}
// UpdateTokenValue updates the USD value of a token
func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
	_, err := hdb.db.Exec(
		"UPDATE token SET usd = $1 WHERE symbol = $2;",
		value, tokenSymbol,
	)
	return tracerr.Wrap(err)
}

// GetToken returns a token from the DB given a TokenID
func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
	token := &TokenWithUSD{}
	err := meddler.QueryRow(
		hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
	)
	return token, tracerr.Wrap(err)
}

// GetAllTokens returns all tokens from the DB
func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
	var tokens []*TokenWithUSD
	err := meddler.QueryAll(
		hdb.db, &tokens,
		"SELECT * FROM token ORDER BY token_id;",
	)
	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
}
// GetTokens returns a list of tokens from the DB
func (hdb *HistoryDB) GetTokens(
	ids []common.TokenID, symbols []string, name string, fromItem,
	limit *uint, order string,
) ([]TokenWithUSD, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT *, COUNT(*) OVER() AS total_items FROM token `
	// Apply filters
	nextIsAnd := false
	if len(ids) > 0 {
		queryStr += "WHERE token_id IN (?) "
		nextIsAnd = true
		args = append(args, ids)
	}
	if len(symbols) > 0 {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "symbol IN (?) "
		args = append(args, symbols)
		nextIsAnd = true
	}
	if name != "" {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "name ~ ? "
		args = append(args, name)
		nextIsAnd = true
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "item_id >= ? "
		} else {
			queryStr += "item_id <= ? "
		}
		args = append(args, fromItem)
	}
	// pagination
	queryStr += "ORDER BY item_id "
	if order == OrderAsc {
		queryStr += "ASC "
	} else {
		queryStr += "DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query, argsQ, err := sqlx.In(queryStr, args...)
	if err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	query = hdb.db.Rebind(query)
	tokens := []*TokenWithUSD{}
	if err := meddler.QueryAll(hdb.db, &tokens, query, argsQ...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(tokens) == 0 {
		return []TokenWithUSD{}, 0, nil
	}
	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tokens[0].TotalItems - uint64(len(tokens)), nil
}
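The WHERE/AND chaining that GetTokens builds with the `nextIsAnd` flag recurs throughout this file. A minimal standalone sketch of the same idea (the `addFilter` helper is hypothetical, not part of HistoryDB):

```go
package main

import "fmt"

// addFilter appends one SQL condition, choosing WHERE for the first
// filter and AND for every subsequent one, mirroring the nextIsAnd
// flag used by GetTokens, GetHistoryTxs and GetExitsAPI.
func addFilter(query string, cond string, nextIsAnd *bool) string {
	if *nextIsAnd {
		query += "AND "
	} else {
		query += "WHERE "
	}
	*nextIsAnd = true
	return query + cond + " "
}

func main() {
	query := "SELECT * FROM token "
	nextIsAnd := false
	query = addFilter(query, "token_id IN (?)", &nextIsAnd)
	query = addFilter(query, "symbol IN (?)", &nextIsAnd)
	fmt.Println(query)
}
```

The flag lives outside the helper so that any combination of optional filters (ids, symbols, name, fromItem) composes correctly regardless of which ones are present.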
// GetTokenSymbols returns all the token symbols from the DB
func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
	var tokenSymbols []string
	rows, err := hdb.db.Query("SELECT symbol FROM token;")
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	defer db.RowsClose(rows)
	sym := new(string)
	for rows.Next() {
		err = rows.Scan(sym)
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		tokenSymbols = append(tokenSymbols, *sym)
	}
	return tokenSymbols, nil
}
// AddAccounts inserts accounts into the DB
func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
	return tracerr.Wrap(hdb.addAccounts(hdb.db, accounts))
}

func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
	if len(accounts) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO account (
			idx,
			token_id,
			batch_num,
			bjj,
			eth_addr
		) VALUES %s;`,
		accounts[:],
	))
}
// GetAllAccounts returns a list of accounts from the DB
func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
	var accs []*common.Account
	err := meddler.QueryAll(
		hdb.db, &accs,
		"SELECT * FROM account ORDER BY idx;",
	)
	return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
}
// AddL1Txs inserts L1 txs into the DB. USD and DepositAmountUSD will be set
// automatically before storing the tx. If the tx is originated by a
// coordinator, BatchNum must be provided. If it's originated by a user,
// BatchNum should be null, and the value will be set by a trigger when a
// batch forges the tx. EffectiveAmount and EffectiveDepositAmount are set
// with default values by the DB.
func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
	return tracerr.Wrap(hdb.addL1Txs(hdb.db, l1txs))
}

// addL1Txs inserts L1 txs into the DB. USD and DepositAmountUSD will be set
// automatically before storing the tx. If the tx is originated by a
// coordinator, BatchNum must be provided. If it's originated by a user,
// BatchNum should be null, and the value will be set by a trigger when a
// batch forges the tx. EffectiveAmount and EffectiveDepositAmount are set
// with default values by the DB.
func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
	if len(l1txs) == 0 {
		return nil
	}
	txs := []txWrite{}
	for i := 0; i < len(l1txs); i++ {
		af := new(big.Float).SetInt(l1txs[i].Amount)
		amountFloat, _ := af.Float64()
		laf := new(big.Float).SetInt(l1txs[i].DepositAmount)
		depositAmountFloat, _ := laf.Float64()
		txs = append(txs, txWrite{
			// Generic
			IsL1:        true,
			TxID:        l1txs[i].TxID,
			Type:        l1txs[i].Type,
			Position:    l1txs[i].Position,
			FromIdx:     &l1txs[i].FromIdx,
			ToIdx:       l1txs[i].ToIdx,
			Amount:      l1txs[i].Amount,
			AmountFloat: amountFloat,
			TokenID:     l1txs[i].TokenID,
			BatchNum:    l1txs[i].BatchNum,
			EthBlockNum: l1txs[i].EthBlockNum,
			// L1
			ToForgeL1TxsNum:    l1txs[i].ToForgeL1TxsNum,
			UserOrigin:         &l1txs[i].UserOrigin,
			FromEthAddr:        &l1txs[i].FromEthAddr,
			FromBJJ:            &l1txs[i].FromBJJ,
			DepositAmount:      l1txs[i].DepositAmount,
			DepositAmountFloat: &depositAmountFloat,
		})
	}
	return tracerr.Wrap(hdb.addTxs(d, txs))
}
// AddL2Txs inserts L2 txs into the DB. TokenID, USD and FeeUSD will be set
// automatically before storing the tx.
func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
	return tracerr.Wrap(hdb.addL2Txs(hdb.db, l2txs))
}

// addL2Txs inserts L2 txs into the DB. TokenID, USD and FeeUSD will be set
// automatically before storing the tx.
func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error {
	txs := []txWrite{}
	for i := 0; i < len(l2txs); i++ {
		f := new(big.Float).SetInt(l2txs[i].Amount)
		amountFloat, _ := f.Float64()
		txs = append(txs, txWrite{
			// Generic
			IsL1:        false,
			TxID:        l2txs[i].TxID,
			Type:        l2txs[i].Type,
			Position:    l2txs[i].Position,
			FromIdx:     &l2txs[i].FromIdx,
			ToIdx:       l2txs[i].ToIdx,
			Amount:      l2txs[i].Amount,
			AmountFloat: amountFloat,
			BatchNum:    &l2txs[i].BatchNum,
			EthBlockNum: l2txs[i].EthBlockNum,
			// L2
			Fee:   &l2txs[i].Fee,
			Nonce: &l2txs[i].Nonce,
		})
	}
	return tracerr.Wrap(hdb.addTxs(d, txs))
}
func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
	if len(txs) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO tx (
			is_l1,
			id,
			type,
			position,
			from_idx,
			to_idx,
			amount,
			amount_f,
			token_id,
			batch_num,
			eth_block_num,
			to_forge_l1_txs_num,
			user_origin,
			from_eth_addr,
			from_bjj,
			deposit_amount,
			deposit_amount_f,
			fee,
			nonce
		) VALUES %s;`,
		txs[:],
	))
}
// // GetTxs returns a list of txs from the DB
// func (hdb *HistoryDB) GetTxs() ([]common.Tx, error) {
// 	var txs []*common.Tx
// 	err := meddler.QueryAll(
// 		hdb.db, &txs,
// 		`SELECT * FROM tx
// 		ORDER BY (batch_num, position) ASC`,
// 	)
// 	return db.SlicePtrsToSlice(txs).([]common.Tx), err
// }
// GetHistoryTx returns a tx from the DB given a TxID
func (hdb *HistoryDB) GetHistoryTx(txID common.TxID) (*TxAPI, error) {
	// Warning: amount_success and deposit_amount_success have true as default for
	// performance reasons. The expected default value is false (when txs are unforged);
	// this case is handled in func (tx TxAPI) MarshalJSON() ([]byte, error)
	tx := &TxAPI{}
	err := meddler.QueryRow(
		hdb.db, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
		hez_idx(tx.from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
		hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
		tx.amount, tx.amount_success, tx.token_id, tx.amount_usd,
		tx.batch_num, tx.eth_block_num, tx.to_forge_l1_txs_num, tx.user_origin,
		tx.deposit_amount, tx.deposit_amount_usd, tx.deposit_amount_success, tx.fee, tx.fee_usd, tx.nonce,
		token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
		token.eth_addr, token.name, token.symbol, token.decimals, token.usd,
		token.usd_update, block.timestamp
		FROM tx INNER JOIN token ON tx.token_id = token.token_id
		INNER JOIN block ON tx.eth_block_num = block.eth_block_num
		WHERE tx.id = $1;`, txID,
	)
	return tx, tracerr.Wrap(err)
}
// GetHistoryTxs returns a list of txs from the DB using the HistoryTx struct
// and pagination info
func (hdb *HistoryDB) GetHistoryTxs(
	ethAddr *ethCommon.Address, bjj *babyjub.PublicKeyComp,
	tokenID *common.TokenID, idx *common.Idx, batchNum *uint, txType *common.TxType,
	fromItem, limit *uint, order string,
) ([]TxAPI, uint64, error) {
	// Warning: amount_success and deposit_amount_success have true as default for
	// performance reasons. The expected default value is false (when txs are unforged);
	// this case is handled in func (tx TxAPI) MarshalJSON() ([]byte, error)
	if ethAddr != nil && bjj != nil {
		return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
	}
	var query string
	var args []interface{}
	queryStr := `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
	hez_idx(tx.from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
	hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
	tx.amount, tx.amount_success, tx.token_id, tx.amount_usd,
	tx.batch_num, tx.eth_block_num, tx.to_forge_l1_txs_num, tx.user_origin,
	tx.deposit_amount, tx.deposit_amount_usd, tx.deposit_amount_success, tx.fee, tx.fee_usd, tx.nonce,
	token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
	token.eth_addr, token.name, token.symbol, token.decimals, token.usd,
	token.usd_update, block.timestamp, count(*) OVER() AS total_items
	FROM tx INNER JOIN token ON tx.token_id = token.token_id
	INNER JOIN block ON tx.eth_block_num = block.eth_block_num `
	// Apply filters
	nextIsAnd := false
	// ethAddr filter
	if ethAddr != nil {
		queryStr += "WHERE (tx.from_eth_addr = ? OR tx.to_eth_addr = ?) "
		nextIsAnd = true
		args = append(args, ethAddr, ethAddr)
	} else if bjj != nil { // bjj filter
		queryStr += "WHERE (tx.from_bjj = ? OR tx.to_bjj = ?) "
		nextIsAnd = true
		args = append(args, bjj, bjj)
	}
	// tokenID filter
	if tokenID != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "tx.token_id = ? "
		args = append(args, tokenID)
		nextIsAnd = true
	}
	// idx filter
	if idx != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "(tx.from_idx = ? OR tx.to_idx = ?) "
		args = append(args, idx, idx)
		nextIsAnd = true
	}
	// batchNum filter
	if batchNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "tx.batch_num = ? "
		args = append(args, batchNum)
		nextIsAnd = true
	}
	// txType filter
	if txType != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "tx.type = ? "
		args = append(args, txType)
		nextIsAnd = true
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "tx.item_id >= ? "
		} else {
			queryStr += "tx.item_id <= ? "
		}
		args = append(args, fromItem)
		nextIsAnd = true
	}
	if nextIsAnd {
		queryStr += "AND "
	} else {
		queryStr += "WHERE "
	}
	queryStr += "tx.batch_num IS NOT NULL "
	// pagination
	queryStr += "ORDER BY tx.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query = hdb.db.Rebind(queryStr)
	// log.Debug(query)
	txsPtrs := []*TxAPI{}
	if err := meddler.QueryAll(hdb.db, &txsPtrs, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	txs := db.SlicePtrsToSlice(txsPtrs).([]TxAPI)
	if len(txs) == 0 {
		return txs, 0, nil
	}
	return txs, txs[0].TotalItems - uint64(len(txs)), nil
}
// GetAllExits returns all exits from the DB
func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
	var exits []*common.ExitInfo
	err := meddler.QueryAll(
		hdb.db, &exits,
		`SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
		exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
		exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err)
}
// GetExitAPI returns an exit from the DB
func (hdb *HistoryDB) GetExitAPI(batchNum *uint, idx *common.Idx) (*ExitAPI, error) {
	exit := &ExitAPI{}
	err := meddler.QueryRow(
		hdb.db, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
		hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
		account.bjj, account.eth_addr,
		exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
		exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn,
		token.token_id, token.item_id AS token_item_id,
		token.eth_block_num AS token_block, token.eth_addr AS token_eth_addr, token.name, token.symbol,
		token.decimals, token.usd, token.usd_update
		FROM exit_tree INNER JOIN account ON exit_tree.account_idx = account.idx
		INNER JOIN token ON account.token_id = token.token_id
		WHERE exit_tree.batch_num = $1 AND exit_tree.account_idx = $2;`, batchNum, idx,
	)
	return exit, tracerr.Wrap(err)
}
// GetExitsAPI returns a list of exits from the DB and pagination info
func (hdb *HistoryDB) GetExitsAPI(
	ethAddr *ethCommon.Address, bjj *babyjub.PublicKeyComp, tokenID *common.TokenID,
	idx *common.Idx, batchNum *uint, onlyPendingWithdraws *bool,
	fromItem, limit *uint, order string,
) ([]ExitAPI, uint64, error) {
	if ethAddr != nil && bjj != nil {
		return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
	}
	var query string
	var args []interface{}
	queryStr := `SELECT exit_tree.item_id, exit_tree.batch_num,
	hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
	account.bjj, account.eth_addr,
	exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
	exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn,
	token.token_id, token.item_id AS token_item_id,
	token.eth_block_num AS token_block, token.eth_addr AS token_eth_addr, token.name, token.symbol,
	token.decimals, token.usd, token.usd_update, COUNT(*) OVER() AS total_items
	FROM exit_tree INNER JOIN account ON exit_tree.account_idx = account.idx
	INNER JOIN token ON account.token_id = token.token_id `
	// Apply filters
	nextIsAnd := false
	// ethAddr filter
	if ethAddr != nil {
		queryStr += "WHERE account.eth_addr = ? "
		nextIsAnd = true
		args = append(args, ethAddr)
	} else if bjj != nil { // bjj filter
		queryStr += "WHERE account.bjj = ? "
		nextIsAnd = true
		args = append(args, bjj)
	}
	// tokenID filter
	if tokenID != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "account.token_id = ? "
		args = append(args, tokenID)
		nextIsAnd = true
	}
	// idx filter
	if idx != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "exit_tree.account_idx = ? "
		args = append(args, idx)
		nextIsAnd = true
	}
	// batchNum filter
	if batchNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "exit_tree.batch_num = ? "
		args = append(args, batchNum)
		nextIsAnd = true
	}
	// onlyPendingWithdraws
	if onlyPendingWithdraws != nil {
		if *onlyPendingWithdraws {
			if nextIsAnd {
				queryStr += "AND "
			} else {
				queryStr += "WHERE "
			}
			queryStr += "(exit_tree.instant_withdrawn IS NULL AND exit_tree.delayed_withdrawn IS NULL) "
			nextIsAnd = true
		}
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "exit_tree.item_id >= ? "
		} else {
			queryStr += "exit_tree.item_id <= ? "
		}
		args = append(args, fromItem)
		// nextIsAnd = true
	}
	// pagination
	queryStr += "ORDER BY exit_tree.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query = hdb.db.Rebind(queryStr)
	// log.Debug(query)
	exits := []*ExitAPI{}
	if err := meddler.QueryAll(hdb.db, &exits, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(exits) == 0 {
		return []ExitAPI{}, 0, nil
	}
	return db.SlicePtrsToSlice(exits).([]ExitAPI), exits[0].TotalItems - uint64(len(exits)), nil
}
// GetAllL1UserTxs returns all L1UserTxs from the DB
func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
	var txs []*common.L1Tx
	err := meddler.QueryAll(
		hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
		tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
		tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
		tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
		tx.eth_block_num, tx.type, tx.batch_num
		FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
}

// GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
	var txs []*common.L1Tx
	// Since the query specifies that only coordinator txs are returned, it's safe to assume
	// that returned txs will always have effective amounts
	err := meddler.QueryAll(
		hdb.db, &txs,
		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
		tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
		tx.amount, tx.amount AS effective_amount,
		tx.deposit_amount, tx.deposit_amount AS effective_deposit_amount,
		tx.eth_block_num, tx.type, tx.batch_num
		FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
}

// GetAllL2Txs returns all L2Txs from the DB
func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
	var txs []*common.L2Tx
	err := meddler.QueryAll(
		hdb.db, &txs,
		`SELECT tx.id, tx.batch_num, tx.position,
		tx.from_idx, tx.to_idx, tx.amount, tx.fee, tx.nonce,
		tx.type, tx.eth_block_num
		FROM tx WHERE is_l1 = FALSE ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err)
}

// GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
	var txs []*common.L1Tx
	err := meddler.QueryAll(
		hdb.db, &txs, // only L1 user txs can have batch_num set to null
		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
		tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
		tx.amount, NULL AS effective_amount,
		tx.deposit_amount, NULL AS effective_deposit_amount,
		tx.eth_block_num, tx.type, tx.batch_num
		FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1
		ORDER BY position;`,
		toForgeL1TxsNum,
	)
	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
}
// TODO: Think about changing all the queries that return a last value to
// queries that return the next valid value.

// GetLastTxsPosition for a given to_forge_l1_txs_num
func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
	row := hdb.db.QueryRow(
		"SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
		toForgeL1TxsNum,
	)
	var lastL1TxsPosition int
	return lastL1TxsPosition, tracerr.Wrap(row.Scan(&lastL1TxsPosition))
}
// GetSCVars returns the rollup, auction and wdelayer smart contracts variables at their last update.
func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables,
	*common.WDelayerVariables, error) {
	var rollup common.RollupVariables
	var auction common.AuctionVariables
	var wDelayer common.WDelayerVariables
	if err := meddler.QueryRow(hdb.db, &rollup,
		"SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
		return nil, nil, nil, tracerr.Wrap(err)
	}
	if err := meddler.QueryRow(hdb.db, &auction,
		"SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
		return nil, nil, nil, tracerr.Wrap(err)
	}
	if err := meddler.QueryRow(hdb.db, &wDelayer,
		"SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
		return nil, nil, nil, tracerr.Wrap(err)
	}
	return &rollup, &auction, &wDelayer, nil
}

func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error {
	return tracerr.Wrap(meddler.Insert(d, "rollup_vars", rollup))
}

func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error {
	return tracerr.Wrap(meddler.Insert(d, "auction_vars", auction))
}

func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error {
	return tracerr.Wrap(meddler.Insert(d, "wdelayer_vars", wDelayer))
}

func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
	if len(bucketUpdates) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO bucket_update (
			eth_block_num,
			num_bucket,
			block_stamp,
			withdrawals
		) VALUES %s;`,
		bucketUpdates[:],
	))
}

// GetAllBucketUpdates retrieves all the bucket updates
func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
	var bucketUpdates []*common.BucketUpdate
	err := meddler.QueryAll(
		hdb.db, &bucketUpdates,
		"SELECT * FROM bucket_update ORDER BY item_id;",
	)
	return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
}
func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
	if len(tokenExchanges) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO token_exchange (
			eth_block_num,
			eth_addr,
			value_usd
		) VALUES %s;`,
		tokenExchanges[:],
	))
}

// GetAllTokenExchanges retrieves all the token exchanges
func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
	var tokenExchanges []*common.TokenExchange
	err := meddler.QueryAll(
		hdb.db, &tokenExchanges,
		"SELECT * FROM token_exchange ORDER BY item_id;",
	)
	return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
}

func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
	escapeHatchWithdrawals []common.WDelayerEscapeHatchWithdrawal) error {
	if len(escapeHatchWithdrawals) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO escape_hatch_withdrawal (
			eth_block_num,
			who_addr,
			to_addr,
			token_addr,
			amount
		) VALUES %s;`,
		escapeHatchWithdrawals[:],
	))
}

// GetAllEscapeHatchWithdrawals retrieves all the escape hatch withdrawals
func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
	var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
	err := meddler.QueryAll(
		hdb.db, &escapeHatchWithdrawals,
		"SELECT * FROM escape_hatch_withdrawal ORDER BY item_id;",
	)
	return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
		tracerr.Wrap(err)
}
// SetInitialSCVars sets the initial state of rollup, auction, wdelayer smart
// contract variables. This initial state is stored linked to block 0, which
// always exists in the DB and is used to store initialization data that
// always exists in the smart contracts.
func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
	auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
	txn, err := hdb.db.Beginx()
	if err != nil {
		return tracerr.Wrap(err)
	}
	defer func() {
		if err != nil {
			db.Rollback(txn)
		}
	}()
	// Force EthBlockNum to be 0 because it's the block used to link data
	// that belongs to the creation of the smart contracts
	rollup.EthBlockNum = 0
	auction.EthBlockNum = 0
	wDelayer.EthBlockNum = 0
	auction.DefaultSlotSetBidSlotNum = 0
	// Assign to the outer err (instead of shadowing it with :=) so that the
	// deferred rollback fires when any of these inserts fails
	if err = hdb.setRollupVars(txn, rollup); err != nil {
		return tracerr.Wrap(err)
	}
	if err = hdb.setAuctionVars(txn, auction); err != nil {
		return tracerr.Wrap(err)
	}
	if err = hdb.setWDelayerVars(txn, wDelayer); err != nil {
		return tracerr.Wrap(err)
	}
	return tracerr.Wrap(txn.Commit())
}
// setL1UserTxEffectiveAmounts sets the EffectiveAmount and EffectiveDepositAmount
// of the given l1UserTxs (with an UPDATE)
func (hdb *HistoryDB) setL1UserTxEffectiveAmounts(d sqlx.Ext, txs []common.L1Tx) error {
	if len(txs) == 0 {
		return nil
	}
	// Effective amounts are stored as success flags in the DB, with true as the
	// default value to reduce the number of updates. Therefore, only amounts that
	// became ineffective need to be updated to false
	type txUpdate struct {
		ID                   common.TxID `db:"id"`
		AmountSuccess        bool        `db:"amount_success"`
		DepositAmountSuccess bool        `db:"deposit_amount_success"`
	}
	txUpdates := []txUpdate{}
	equal := func(a *big.Int, b *big.Int) bool {
		return a.Cmp(b) == 0
	}
	for i := range txs {
		amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount)
		depositAmountSuccess := equal(txs[i].DepositAmount, txs[i].EffectiveDepositAmount)
		if !amountSuccess || !depositAmountSuccess {
			txUpdates = append(txUpdates, txUpdate{
				ID:                   txs[i].TxID,
				AmountSuccess:        amountSuccess,
				DepositAmountSuccess: depositAmountSuccess,
			})
		}
	}
	const query string = `
		UPDATE tx SET
			amount_success = tx_update.amount_success,
			deposit_amount_success = tx_update.deposit_amount_success
		FROM (VALUES
			(NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL),
			(:id, :amount_success, :deposit_amount_success)
		) as tx_update (id, amount_success, deposit_amount_success)
		WHERE tx.id = tx_update.id;
	`
	if len(txUpdates) > 0 {
		if _, err := sqlx.NamedExec(d, query, txUpdates); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}
// AddBlockSCData stores all the information of a block retrieved by the
// Synchronizer. Blocks should be inserted in order, leaving no gaps because
// the pagination system of the API/DB depends on this. Within blocks, all
// items should also be in the correct order (Accounts, Tokens, Txs, etc.)
func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
	txn, err := hdb.db.Beginx()
	if err != nil {
		return tracerr.Wrap(err)
	}
	defer func() {
		if err != nil {
			db.Rollback(txn)
		}
	}()
	// Add block
	if err := hdb.addBlock(txn, &blockData.Block); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Coordinators
	if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Bids
	if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Tokens
	if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil {
		return tracerr.Wrap(err)
	}
	// Prepare user L1 txs to be added.
	// They must be added before the batch that will forge them (which can be in the same block)
	// and after the account they will be sent to (which can also be in the same block).
	// Note: insert order is not relevant since item_id will be updated by a DB trigger when
	// the batch that forges those txs is inserted
	userL1s := make(map[common.BatchNum][]common.L1Tx)
	for i := range blockData.Rollup.L1UserTxs {
		batchThatForgesIsInTheBlock := false
		for _, batch := range blockData.Rollup.Batches {
			if batch.Batch.ForgeL1TxsNum != nil &&
				*batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum {
				// Tx is forged in this block. It's guaranteed that:
				// * the first batch of the block won't forge user L1 txs that have been added in this block
				// * batch nums are sequential, therefore it's safe to add the tx at batch.BatchNum - 1
				batchThatForgesIsInTheBlock = true
				addAtBatchNum := batch.Batch.BatchNum - 1
				userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i])
				break
			}
		}
		if !batchThatForgesIsInTheBlock {
			// Use the artificial batchNum 0 to add txs that are not forged in this block
			// after all the accounts of the block have been added
			userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i])
		}
	}
	// Add Batches
	for i := range blockData.Rollup.Batches {
		batch := &blockData.Rollup.Batches[i]
		// Add Batch: this will trigger an update on the DB
		// that will set the batch num of forged L1 txs in this batch
		if err = hdb.addBatch(txn, &batch.Batch); err != nil {
			return tracerr.Wrap(err)
		}
		// Set the EffectiveAmount and EffectiveDepositAmount of all the
		// L1UserTxs that have been forged in this batch
		if err = hdb.setL1UserTxEffectiveAmounts(txn, batch.L1UserTxs); err != nil {
			return tracerr.Wrap(err)
		}
		// Add accounts
		if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil {
			return tracerr.Wrap(err)
		}
		// Add forged L1 coordinator txs
		if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil {
			return tracerr.Wrap(err)
		}
		// Add L2 txs
		if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil {
			return tracerr.Wrap(err)
		}
		// Add user L1 txs that will be forged in the next batch
		if userL1Txs, ok := userL1s[batch.Batch.BatchNum]; ok {
			if err := hdb.addL1Txs(txn, userL1Txs); err != nil {
				return tracerr.Wrap(err)
			}
		}
		// Add exit tree
		if err := hdb.addExitTree(txn, batch.ExitTree); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Add user L1 txs that won't be forged in this block
	if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok {
		if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Set SC Vars if there was an update
	if blockData.Rollup.Vars != nil {
		if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	if blockData.Auction.Vars != nil {
		if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	if blockData.WDelayer.Vars != nil {
		if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Update withdrawals in exit tree table
	if err := hdb.updateExitTree(txn, blockData.Block.Num,
		blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Escape Hatch Withdrawals
	if err := hdb.addEscapeHatchWithdrawals(txn,
		blockData.WDelayer.EscapeHatchWithdrawals); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Bucket withdrawal updates
	if err := hdb.addBucketUpdates(txn, blockData.Rollup.UpdateBucketWithdraw); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Token exchange updates
	if err := hdb.addTokenExchanges(txn, blockData.Rollup.TokenExchanges); err != nil {
		return tracerr.Wrap(err)
	}
	return tracerr.Wrap(txn.Commit())
}
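The placement rule for user L1 txs above reduces to a small pure function: if some batch in the block forges the tx, insert it at `batchNum - 1`; otherwise park it under the artificial batch 0, to be inserted after all the block's accounts. A sketch under that assumption (the `batchInfo` struct is an illustrative stand-in for `common.Batch`):

```go
package main

import "fmt"

// batchInfo is a stand-in for common.Batch: its number and the
// L1TxsNum it forges (nil if it forges no user L1 txs).
type batchInfo struct {
	BatchNum      int64
	ForgeL1TxsNum *int64
}

// insertionBatch returns the batch num after which a user L1 tx should
// be inserted: batchNum-1 when a batch in the block forges it, or the
// artificial 0 when no batch in this block does.
func insertionBatch(toForgeL1TxsNum int64, batches []batchInfo) int64 {
	for _, b := range batches {
		if b.ForgeL1TxsNum != nil && *b.ForgeL1TxsNum == toForgeL1TxsNum {
			return b.BatchNum - 1
		}
	}
	return 0
}

func main() {
	n := int64(7)
	batches := []batchInfo{
		{BatchNum: 4},                    // forges no user L1 txs
		{BatchNum: 5, ForgeL1TxsNum: &n}, // forges the txs queued under num 7
	}
	fmt.Println(insertionBatch(7, batches), insertionBatch(8, batches)) // → 4 0
}
```

This keeps the invariant that a tx is always in the DB before the batch that forges it, while the batch-insert trigger back-fills its `item_id`.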
// GetCoordinatorAPI returns a coordinator by its bidderAddr
func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
	coordinator := &CoordinatorAPI{}
	err := meddler.QueryRow(hdb.db, coordinator, "SELECT * FROM coordinator WHERE bidder_addr = $1;", bidderAddr)
	return coordinator, tracerr.Wrap(err)
}
// GetCoordinatorsAPI returns a list of coordinators from the DB and pagination info
func (hdb *HistoryDB) GetCoordinatorsAPI(
	bidderAddr, forgerAddr *ethCommon.Address,
	fromItem, limit *uint, order string,
) ([]CoordinatorAPI, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT coordinator.*,
	COUNT(*) OVER() AS total_items
	FROM coordinator `
	// Apply filters
	nextIsAnd := false
	if bidderAddr != nil {
		queryStr += "WHERE bidder_addr = ? "
		nextIsAnd = true
		args = append(args, bidderAddr)
	}
	if forgerAddr != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "forger_addr = ? "
		nextIsAnd = true
		args = append(args, forgerAddr)
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "coordinator.item_id >= ? "
		} else {
			queryStr += "coordinator.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	// Pagination
	queryStr += "ORDER BY coordinator.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query = hdb.db.Rebind(queryStr)
	coordinators := []*CoordinatorAPI{}
	if err := meddler.QueryAll(hdb.db, &coordinators, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(coordinators) == 0 {
		return []CoordinatorAPI{}, 0, nil
	}
	return db.SlicePtrsToSlice(coordinators).([]CoordinatorAPI),
		coordinators[0].TotalItems - uint64(len(coordinators)), nil
}
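The filter-building pattern used by the query methods above (a `nextIsAnd` flag choosing between `WHERE` and `AND`) can be factored into a tiny helper. A minimal sketch of the same idea, with an illustrative helper name:

```go
package main

import "fmt"

// addFilter appends a filter clause to a query, emitting WHERE for the
// first clause and AND for every later one; it returns the grown query
// and the new "first" state, mirroring the nextIsAnd flag.
func addFilter(query, clause string, first bool) (string, bool) {
	if first {
		return query + "WHERE " + clause + " ", false
	}
	return query + "AND " + clause + " ", false
}

func main() {
	q := "SELECT * FROM coordinator "
	first := true
	q, first = addFilter(q, "bidder_addr = ?", first)
	q, first = addFilter(q, "forger_addr = ?", first)
	fmt.Println(q) // → SELECT * FROM coordinator WHERE bidder_addr = ? AND forger_addr = ?
}
```

Each clause stays paired with its own appended arg, so the `?` placeholders and the `args` slice remain in sync for the later `Rebind`.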
// AddAuctionVars inserts auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
	return tracerr.Wrap(meddler.Insert(hdb.db, "auction_vars", auctionVars))
}
// GetAuctionVars returns the auction variables
func (hdb *HistoryDB) GetAuctionVars() (*common.AuctionVariables, error) {
	auctionVars := &common.AuctionVariables{}
	err := meddler.QueryRow(
		hdb.db, auctionVars, `SELECT * FROM auction_vars;`,
	)
	return auctionVars, tracerr.Wrap(err)
}
// GetAuctionVarsUntilSetSlotNum returns all the updates of the auction vars
// from the last entry in which DefaultSlotSetBidSlotNum < slotNum
func (hdb *HistoryDB) GetAuctionVarsUntilSetSlotNum(slotNum int64, maxItems int) ([]MinBidInfo, error) {
	auctionVars := []*MinBidInfo{}
	query := `
		SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
		WHERE default_slot_set_bid_slot_num < $1
		ORDER BY default_slot_set_bid_slot_num DESC
		LIMIT $2;
	`
	err := meddler.QueryAll(hdb.db, &auctionVars, query, slotNum, maxItems)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return db.SlicePtrsToSlice(auctionVars).([]MinBidInfo), nil
}
// GetAccountAPI returns an account by its index
func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
	account := &AccountAPI{}
	err := meddler.QueryRow(hdb.db, account, `SELECT account.item_id, hez_idx(account.idx,
	token.symbol) as idx, account.batch_num, account.bjj, account.eth_addr,
	token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
	token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update
	FROM account INNER JOIN token ON account.token_id = token.token_id WHERE idx = $1;`, idx)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return account, nil
}
// GetAccountsAPI returns a list of accounts from the DB and pagination info
func (hdb *HistoryDB) GetAccountsAPI(
	tokenIDs []common.TokenID, ethAddr *ethCommon.Address,
	bjj *babyjub.PublicKeyComp, fromItem, limit *uint, order string,
) ([]AccountAPI, uint64, error) {
	if ethAddr != nil && bjj != nil {
		return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
	}
	var query string
	var args []interface{}
	queryStr := `SELECT account.item_id, hez_idx(account.idx, token.symbol) as idx, account.batch_num,
	account.bjj, account.eth_addr, token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
	token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update,
	COUNT(*) OVER() AS total_items
	FROM account INNER JOIN token ON account.token_id = token.token_id `
	// Apply filters
	nextIsAnd := false
	// ethAddr filter
	if ethAddr != nil {
		queryStr += "WHERE account.eth_addr = ? "
		nextIsAnd = true
		args = append(args, ethAddr)
	} else if bjj != nil { // bjj filter
		queryStr += "WHERE account.bjj = ? "
		nextIsAnd = true
		args = append(args, bjj)
	}
	// tokenID filter
	if len(tokenIDs) > 0 {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "account.token_id IN (?) "
		args = append(args, tokenIDs)
		nextIsAnd = true
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "account.item_id >= ? "
		} else {
			queryStr += "account.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	// Pagination
	queryStr += "ORDER BY account.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query, argsQ, err := sqlx.In(queryStr, args...)
	if err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	query = hdb.db.Rebind(query)
	accounts := []*AccountAPI{}
	if err := meddler.QueryAll(hdb.db, &accounts, query, argsQ...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(accounts) == 0 {
		return []AccountAPI{}, 0, nil
	}
	return db.SlicePtrsToSlice(accounts).([]AccountAPI),
		accounts[0].TotalItems - uint64(len(accounts)), nil
}
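Both paginated queries above rely on `COUNT(*) OVER() AS total_items`, which stamps the total number of matching rows on every returned row; the second return value is then that total minus the page length, i.e. the items remaining after this page. A trivial sketch of that arithmetic (the helper name is illustrative):

```go
package main

import "fmt"

// remainingItems computes the pagination tail: COUNT(*) OVER() reports
// the total matching rows on each row of the page, so the items left
// after this page is that total minus the page length.
func remainingItems(totalItems uint64, pageLen int) uint64 {
	return totalItems - uint64(pageLen)
}

func main() {
	// 42 rows match the filters in total and this page returned 20 of them.
	fmt.Println(remainingItems(42, 20)) // → 22
}
```

Using the window function avoids a second `COUNT(*)` round trip, at the cost of repeating the total on every row of the page.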
// GetMetrics returns metrics
func (hdb *HistoryDB) GetMetrics(lastBatchNum common.BatchNum) (*Metrics, error) {
	metricsTotals := &MetricsTotals{}
	metrics := &Metrics{}
	err := meddler.QueryRow(
		hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
		COALESCE (MIN(tx.batch_num), 0) as batch_num
		FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
		WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	// Convert to float64 before dividing to avoid integer truncation
	metrics.TransactionsPerSecond = float64(metricsTotals.TotalTransactions) / (24 * 60 * 60)
	if (lastBatchNum - metricsTotals.FirstBatchNum) > 0 {
		metrics.TransactionsPerBatch = float64(metricsTotals.TotalTransactions) /
			float64(lastBatchNum-metricsTotals.FirstBatchNum)
	} else {
		metrics.TransactionsPerBatch = float64(0)
	}
	err = meddler.QueryRow(
		hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
		COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
		WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	if metricsTotals.TotalBatches > 0 {
		metrics.BatchFrequency = (24 * 60 * 60) / float64(metricsTotals.TotalBatches)
	} else {
		metrics.BatchFrequency = 0
	}
	if metricsTotals.TotalTransactions > 0 {
		metrics.AvgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
	} else {
		metrics.AvgTransactionFee = 0
	}
	err = meddler.QueryRow(
		hdb.db, metrics,
		`SELECT COUNT(*) AS total_bjjs, COUNT(DISTINCT(bjj)) AS total_accounts FROM account;`)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return metrics, nil
}
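The rate metrics above divide a transaction count by a fixed 24-hour window. The conversion order matters: `float64(count / 86400)` truncates to 0 for any count below a day's worth of seconds, so the count must be converted to `float64` first. A minimal sketch of the correct form:

```go
package main

import "fmt"

// tps computes transactions per second over a 24h window. Converting
// the count to float64 before dividing is essential: integer division
// would truncate any count below 86400 straight to 0.
func tps(totalTxs int64) float64 {
	return float64(totalTxs) / (24 * 60 * 60)
}

func main() {
	fmt.Printf("%.4f\n", tps(8640)) // 8640 txs in 24h → prints 0.1000
}
```

The same pitfall applies to the transactions-per-batch and batch-frequency ratios: whenever both operands are integers, Go evaluates the division in integer arithmetic before any outer `float64` conversion.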
  1754. // GetAvgTxFee returns average transaction fee of the last 1h
  1755. func (hdb *HistoryDB) GetAvgTxFee() (float64, error) {
  1756. metricsTotals := &MetricsTotals{}
  1757. err := meddler.QueryRow(
  1758. hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
  1759. COALESCE (MIN(tx.batch_num), 0) as batch_num
  1760. FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
  1761. WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`)
  1762. if err != nil {
  1763. return 0, tracerr.Wrap(err)
  1764. }
  1765. err = meddler.QueryRow(
  1766. hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
  1767. COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
  1768. WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
  1769. if err != nil {
  1770. return 0, tracerr.Wrap(err)
  1771. }
  1772. var avgTransactionFee float64
  1773. if metricsTotals.TotalTransactions > 0 {
  1774. avgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
  1775. } else {
  1776. avgTransactionFee = 0
  1777. }
  1778. return avgTransactionFee, nil
  1779. }