package historydb

import (
	"errors"
	"fmt"
	"math"
	"math/big"
	"strings"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"github.com/jmoiron/sqlx"

	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)

// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the history of the rollup
type HistoryDB struct {
	db *sqlx.DB
}

// NewHistoryDB initializes the DB
func NewHistoryDB(db *sqlx.DB) *HistoryDB {
	return &HistoryDB{db: db}
}
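
// Construction sketch (illustrative only; the postgres connection string and
// variable names are hypothetical):
//
//	conn, err := sqlx.Connect("postgres", "host=localhost user=hermez dbname=hermez sslmode=disable")
//	if err != nil {
//		// handle err
//	}
//	hdb := NewHistoryDB(conn)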

// DB returns a pointer to the HistoryDB.db. This method should be used only for
// internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.db
}

// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }

func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return tracerr.Wrap(meddler.Insert(d, "block", block))
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return tracerr.Wrap(hdb.addBlocks(hdb.db, blocks))
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks[:],
	))
}

// GetBlock retrieves a block from the DB, given a block number
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block,
		"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
	)
	return block, tracerr.Wrap(err)
}

// GetAllBlocks retrieves all blocks from the DB
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block ORDER BY eth_block_num;",
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetBlocks retrieves blocks from the DB, in the block number range [from, to)
func (hdb *HistoryDB) GetBlocks(from, to int64) ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
		from, to,
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetLastBlock retrieves the block with the highest block number from the DB
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
	)
	return block, tracerr.Wrap(err)
}

// AddBatch inserts a Batch into the DB
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }

func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
	// Calculate total collected fees in USD
	// Get IDs of collected tokens for fees
	tokenIDs := []common.TokenID{}
	for id := range batch.CollectedFees {
		tokenIDs = append(tokenIDs, id)
	}
	// Get USD value of the tokens
	type tokenPrice struct {
		ID       common.TokenID `meddler:"token_id"`
		USD      *float64       `meddler:"usd"`
		Decimals int            `meddler:"decimals"`
	}
	var tokenPrices []*tokenPrice
	if len(tokenIDs) > 0 {
		query, args, err := sqlx.In(
			"SELECT token_id, usd, decimals FROM token WHERE token_id IN (?);",
			tokenIDs,
		)
		if err != nil {
			return tracerr.Wrap(err)
		}
		query = hdb.db.Rebind(query)
		if err := meddler.QueryAll(
			hdb.db, &tokenPrices, query, args...,
		); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Calculate total collected
	var total float64
	for _, tokenPrice := range tokenPrices {
		if tokenPrice.USD == nil {
			continue
		}
		f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
		amount, _ := f.Float64()
		total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
	}
	batch.TotalFeesUSD = &total
	// Insert to DB
	return tracerr.Wrap(meddler.Insert(d, "batch", batch))
}
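
// Worked example for the fee conversion above (illustrative only; the token
// values are hypothetical): the loop accumulates
//
//	total += usd * (amount / 10^decimals)
//
// so a token with Decimals = 6 and USD = 2.0 whose collected fee is 1500000
// base units contributes 2.0 * (1500000 / 10^6) = 3.0 USD to TotalFeesUSD.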

// AddBatches inserts Batches into the DB
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
	return tracerr.Wrap(hdb.addBatches(hdb.db, batches))
}

func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
	for i := 0; i < len(batches); i++ {
		if err := hdb.addBatch(d, &batches[i]); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}

// GetBatchAPI returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
	batch := &BatchAPI{}
	return batch, tracerr.Wrap(meddler.QueryRow(
		hdb.db, batch,
		`SELECT batch.item_id, batch.batch_num, batch.eth_block_num,
		batch.forger_addr, batch.fees_collected, batch.total_fees_usd, batch.state_root,
		batch.num_accounts, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num,
		block.timestamp, block.hash,
		COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs
		FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
		WHERE batch_num = $1;`, batchNum,
	))
}
  165. // GetBatchesAPI return the batches applying the given filters
  166. func (hdb *HistoryDB) GetBatchesAPI(
  167. minBatchNum, maxBatchNum, slotNum *uint,
  168. forgerAddr *ethCommon.Address,
  169. fromItem, limit *uint, order string,
  170. ) ([]BatchAPI, uint64, error) {
  171. var query string
  172. var args []interface{}
  173. queryStr := `SELECT batch.item_id, batch.batch_num, batch.eth_block_num,
  174. batch.forger_addr, batch.fees_collected, batch.total_fees_usd, batch.state_root,
  175. batch.num_accounts, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num,
  176. block.timestamp, block.hash,
  177. COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs,
  178. count(*) OVER() AS total_items
  179. FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num `
  180. // Apply filters
  181. nextIsAnd := false
  182. // minBatchNum filter
  183. if minBatchNum != nil {
  184. if nextIsAnd {
  185. queryStr += "AND "
  186. } else {
  187. queryStr += "WHERE "
  188. }
  189. queryStr += "batch.batch_num > ? "
  190. args = append(args, minBatchNum)
  191. nextIsAnd = true
  192. }
  193. // maxBatchNum filter
  194. if maxBatchNum != nil {
  195. if nextIsAnd {
  196. queryStr += "AND "
  197. } else {
  198. queryStr += "WHERE "
  199. }
  200. queryStr += "batch.batch_num < ? "
  201. args = append(args, maxBatchNum)
  202. nextIsAnd = true
  203. }
  204. // slotNum filter
  205. if slotNum != nil {
  206. if nextIsAnd {
  207. queryStr += "AND "
  208. } else {
  209. queryStr += "WHERE "
  210. }
  211. queryStr += "batch.slot_num = ? "
  212. args = append(args, slotNum)
  213. nextIsAnd = true
  214. }
  215. // forgerAddr filter
  216. if forgerAddr != nil {
  217. if nextIsAnd {
  218. queryStr += "AND "
  219. } else {
  220. queryStr += "WHERE "
  221. }
  222. queryStr += "batch.forger_addr = ? "
  223. args = append(args, forgerAddr)
  224. nextIsAnd = true
  225. }
  226. // pagination
  227. if fromItem != nil {
  228. if nextIsAnd {
  229. queryStr += "AND "
  230. } else {
  231. queryStr += "WHERE "
  232. }
  233. if order == OrderAsc {
  234. queryStr += "batch.item_id >= ? "
  235. } else {
  236. queryStr += "batch.item_id <= ? "
  237. }
  238. args = append(args, fromItem)
  239. }
  240. queryStr += "ORDER BY batch.item_id "
  241. if order == OrderAsc {
  242. queryStr += " ASC "
  243. } else {
  244. queryStr += " DESC "
  245. }
  246. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  247. query = hdb.db.Rebind(queryStr)
  248. // log.Debug(query)
  249. batchPtrs := []*BatchAPI{}
  250. if err := meddler.QueryAll(hdb.db, &batchPtrs, query, args...); err != nil {
  251. return nil, 0, tracerr.Wrap(err)
  252. }
  253. batches := db.SlicePtrsToSlice(batchPtrs).([]BatchAPI)
  254. if len(batches) == 0 {
  255. return batches, 0, nil
  256. }
  257. return batches, batches[0].TotalItems - uint64(len(batches)), nil
  258. }
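// Example (illustrative sketch): fetching one page of batches through
// GetBatchesAPI. The filter values and names used here are assumptions for the
// example; pendingItems is the number of items left after this page, following
// the TotalItems - len(results) convention used by the API queries above.
func exampleListBatchesAPI(hdb *HistoryDB) error {
	fromItem, limit := uint(0), uint(20)
	// No batchNum/slot/forger filters: pass nil for the first four arguments
	batches, pendingItems, err := hdb.GetBatchesAPI(nil, nil, nil, nil, &fromItem, &limit, OrderAsc)
	if err != nil {
		return tracerr.Wrap(err)
	}
	fmt.Printf("fetched %d batches, %d still pending\n", len(batches), pendingItems)
	return nil
}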
259. // GetAllBatches retrieves all batches from the DB
  260. func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
  261. var batches []*common.Batch
  262. err := meddler.QueryAll(
  263. hdb.db, &batches,
  264. `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
  265. batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
  266. batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
  267. ORDER BY item_id;`,
  268. )
  269. return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
  270. }
271. // GetBatches retrieves batches from the DB, given a range of batch numbers defined by from and to
  272. func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
  273. var batches []*common.Batch
  274. err := meddler.QueryAll(
  275. hdb.db, &batches,
  276. `SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
  277. state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
  278. FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
  279. from, to,
  280. )
  281. return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
  282. }
  283. // GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
  284. // batch within a slot
  285. func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
  286. row := hdb.db.QueryRow(
  287. `SELECT eth_block_num FROM batch
  288. WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
  289. )
  290. var blockNum int64
  291. return blockNum, tracerr.Wrap(row.Scan(&blockNum))
  292. }
  293. // GetLastBatchNum returns the BatchNum of the latest forged batch
  294. func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
  295. row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
  296. var batchNum common.BatchNum
  297. return batchNum, tracerr.Wrap(row.Scan(&batchNum))
  298. }
299. // GetLastBatch returns the last forged batch
  300. func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
  301. var batch common.Batch
  302. err := meddler.QueryRow(
  303. hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
  304. batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
  305. batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
  306. batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
  307. )
308. return &batch, tracerr.Wrap(err)
  309. }
  310. // GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
  311. func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
  312. row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
  313. WHERE forge_l1_txs_num IS NOT NULL
  314. ORDER BY batch_num DESC LIMIT 1;`)
  315. var blockNum int64
  316. return blockNum, tracerr.Wrap(row.Scan(&blockNum))
  317. }
  318. // GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
  319. // batches. If there's no batch in the DB (nil, nil) is returned.
  320. func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
  321. row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
  322. lastL1TxsNum := new(int64)
  323. return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
  324. }
  325. // Reorg deletes all the information that was added into the DB after the
  326. // lastValidBlock. If lastValidBlock is negative, all block information is
  327. // deleted.
  328. func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
  329. var err error
  330. if lastValidBlock < 0 {
  331. _, err = hdb.db.Exec("DELETE FROM block;")
  332. } else {
  333. _, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
  334. }
  335. return tracerr.Wrap(err)
  336. }
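// Example (illustrative sketch): reverting the HistoryDB after the caller
// detects a reorg. lastValidBlock would come from the synchronizer; deleting
// from the block table is assumed to cascade to the dependent tables, which is
// why Reorg above only issues a single DELETE.
func exampleHandleReorg(hdb *HistoryDB, lastValidBlock int64) error {
	// A negative lastValidBlock wipes every block (see Reorg above)
	return tracerr.Wrap(hdb.Reorg(lastValidBlock))
}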
337. // AddBids inserts Bids into the DB
  338. func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) }
  339. func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
  340. if len(bids) == 0 {
  341. return nil
  342. }
  343. // TODO: check the coordinator info
  344. return tracerr.Wrap(db.BulkInsert(
  345. d,
  346. "INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
  347. bids[:],
  348. ))
  349. }
350. // GetAllBids retrieves all bids from the DB
  351. func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
  352. var bids []*common.Bid
  353. err := meddler.QueryAll(
  354. hdb.db, &bids,
  355. `SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
  356. ORDER BY item_id;`,
  357. )
  358. return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err)
  359. }
360. // GetBestBidAPI returns the best bid in a specific slot, selected by slotNum
  361. func (hdb *HistoryDB) GetBestBidAPI(slotNum *int64) (BidAPI, error) {
  362. bid := &BidAPI{}
  363. err := meddler.QueryRow(
  364. hdb.db, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
  365. FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
  366. INNER JOIN (
  367. SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
  368. GROUP BY bidder_addr
  369. ) c ON bid.bidder_addr = c.bidder_addr
  370. INNER JOIN coordinator ON c.item_id = coordinator.item_id
  371. WHERE slot_num = $1 ORDER BY item_id DESC LIMIT 1;`, slotNum,
  372. )
  373. return *bid, tracerr.Wrap(err)
  374. }
  375. // GetBestBidCoordinator returns the forger address of the highest bidder in a slot by slotNum
  376. func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
  377. bidCoord := &common.BidCoordinator{}
  378. err := meddler.QueryRow(
  379. hdb.db, bidCoord,
  380. `SELECT (
  381. SELECT default_slot_set_bid
  382. FROM auction_vars
  383. WHERE default_slot_set_bid_slot_num <= $1
  384. ORDER BY eth_block_num DESC LIMIT 1
  385. ),
  386. bid.slot_num, bid.bid_value, bid.bidder_addr,
  387. coordinator.forger_addr, coordinator.url
  388. FROM bid
  389. INNER JOIN (
  390. SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
  391. GROUP BY bidder_addr
  392. ) c ON bid.bidder_addr = c.bidder_addr
  393. INNER JOIN coordinator ON c.item_id = coordinator.item_id
  394. WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`,
  395. slotNum)
  396. return bidCoord, tracerr.Wrap(err)
  397. }
398. // GetBestBidsAPI returns the best bid of each slot in the range [minSlotNum, maxSlotNum], applying the given filters
  399. func (hdb *HistoryDB) GetBestBidsAPI(
  400. minSlotNum, maxSlotNum *int64,
  401. bidderAddr *ethCommon.Address,
  402. limit *uint, order string,
  403. ) ([]BidAPI, uint64, error) {
  404. var query string
  405. var args []interface{}
  406. // JOIN the best bid of each slot with the latest update of each coordinator
  407. queryStr := `SELECT b.*, block.timestamp, coordinator.forger_addr, coordinator.url,
  408. COUNT(*) OVER() AS total_items FROM (
  409. SELECT slot_num, MAX(item_id) as maxitem
  410. FROM bid GROUP BY slot_num
  411. )
  412. AS x INNER JOIN bid AS b ON b.item_id = x.maxitem
  413. INNER JOIN block ON b.eth_block_num = block.eth_block_num
  414. INNER JOIN (
  415. SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
  416. GROUP BY bidder_addr
  417. ) c ON b.bidder_addr = c.bidder_addr
  418. INNER JOIN coordinator ON c.item_id = coordinator.item_id
  419. WHERE (b.slot_num >= ? AND b.slot_num <= ?)`
  420. args = append(args, minSlotNum)
  421. args = append(args, maxSlotNum)
  422. // Apply filters
  423. if bidderAddr != nil {
  424. queryStr += " AND b.bidder_addr = ? "
  425. args = append(args, bidderAddr)
  426. }
  427. queryStr += " ORDER BY b.slot_num "
  428. if order == OrderAsc {
  429. queryStr += "ASC "
  430. } else {
  431. queryStr += "DESC "
  432. }
  433. if limit != nil {
  434. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  435. }
  436. query = hdb.db.Rebind(queryStr)
  437. bidPtrs := []*BidAPI{}
  438. if err := meddler.QueryAll(hdb.db, &bidPtrs, query, args...); err != nil {
  439. return nil, 0, tracerr.Wrap(err)
  440. }
  441. // log.Debug(query)
  442. bids := db.SlicePtrsToSlice(bidPtrs).([]BidAPI)
  443. if len(bids) == 0 {
  444. return bids, 0, nil
  445. }
  446. return bids, bids[0].TotalItems - uint64(len(bids)), nil
  447. }
448. // GetBidsAPI returns the bids applying the given filters
  449. func (hdb *HistoryDB) GetBidsAPI(
  450. slotNum *int64, bidderAddr *ethCommon.Address,
  451. fromItem, limit *uint, order string,
  452. ) ([]BidAPI, uint64, error) {
  453. var query string
  454. var args []interface{}
  455. // JOIN each bid with the latest update of each coordinator
  456. queryStr := `SELECT bid.*, block.timestamp, coord.forger_addr, coord.url,
  457. COUNT(*) OVER() AS total_items
  458. FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
  459. INNER JOIN (
  460. SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
  461. GROUP BY bidder_addr
  462. ) c ON bid.bidder_addr = c.bidder_addr
  463. INNER JOIN coordinator coord ON c.item_id = coord.item_id `
  464. // Apply filters
  465. nextIsAnd := false
  466. // slotNum filter
  467. if slotNum != nil {
  468. if nextIsAnd {
  469. queryStr += "AND "
  470. } else {
  471. queryStr += "WHERE "
  472. }
  473. queryStr += "bid.slot_num = ? "
  474. args = append(args, slotNum)
  475. nextIsAnd = true
  476. }
  477. // bidder filter
  478. if bidderAddr != nil {
  479. if nextIsAnd {
  480. queryStr += "AND "
  481. } else {
  482. queryStr += "WHERE "
  483. }
  484. queryStr += "bid.bidder_addr = ? "
  485. args = append(args, bidderAddr)
  486. nextIsAnd = true
  487. }
  488. if fromItem != nil {
  489. if nextIsAnd {
  490. queryStr += "AND "
  491. } else {
  492. queryStr += "WHERE "
  493. }
  494. if order == OrderAsc {
  495. queryStr += "bid.item_id >= ? "
  496. } else {
  497. queryStr += "bid.item_id <= ? "
  498. }
  499. args = append(args, fromItem)
  500. }
  501. // pagination
  502. queryStr += "ORDER BY bid.item_id "
  503. if order == OrderAsc {
  504. queryStr += "ASC "
  505. } else {
  506. queryStr += "DESC "
  507. }
  508. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  509. query, argsQ, err := sqlx.In(queryStr, args...)
  510. if err != nil {
  511. return nil, 0, tracerr.Wrap(err)
  512. }
  513. query = hdb.db.Rebind(query)
  514. bids := []*BidAPI{}
  515. if err := meddler.QueryAll(hdb.db, &bids, query, argsQ...); err != nil {
  516. return nil, 0, tracerr.Wrap(err)
  517. }
  518. if len(bids) == 0 {
  519. return []BidAPI{}, 0, nil
  520. }
  521. return db.SlicePtrsToSlice(bids).([]BidAPI), bids[0].TotalItems - uint64(len(bids)), nil
  522. }
523. // AddCoordinators inserts Coordinators into the DB
  524. func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
  525. return tracerr.Wrap(hdb.addCoordinators(hdb.db, coordinators))
  526. }
  527. func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
  528. if len(coordinators) == 0 {
  529. return nil
  530. }
  531. return tracerr.Wrap(db.BulkInsert(
  532. d,
  533. "INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
  534. coordinators[:],
  535. ))
  536. }
537. // AddExitTree inserts the exit tree into the DB
  538. func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
  539. return tracerr.Wrap(hdb.addExitTree(hdb.db, exitTree))
  540. }
  541. func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
  542. if len(exitTree) == 0 {
  543. return nil
  544. }
  545. return tracerr.Wrap(db.BulkInsert(
  546. d,
  547. "INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
  548. "instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
  549. exitTree[:],
  550. ))
  551. }
  552. func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
  553. rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error {
  554. if len(rollupWithdrawals) == 0 && len(wDelayerWithdrawals) == 0 {
  555. return nil
  556. }
  557. type withdrawal struct {
  558. BatchNum int64 `db:"batch_num"`
  559. AccountIdx int64 `db:"account_idx"`
  560. InstantWithdrawn *int64 `db:"instant_withdrawn"`
  561. DelayedWithdrawRequest *int64 `db:"delayed_withdraw_request"`
  562. DelayedWithdrawn *int64 `db:"delayed_withdrawn"`
  563. Owner *ethCommon.Address `db:"owner"`
  564. Token *ethCommon.Address `db:"token"`
  565. }
  566. withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals))
  567. for i := range rollupWithdrawals {
  568. info := &rollupWithdrawals[i]
  569. withdrawals[i] = withdrawal{
  570. BatchNum: int64(info.NumExitRoot),
  571. AccountIdx: int64(info.Idx),
  572. }
  573. if info.InstantWithdraw {
  574. withdrawals[i].InstantWithdrawn = &blockNum
  575. } else {
  576. withdrawals[i].DelayedWithdrawRequest = &blockNum
  577. withdrawals[i].Owner = &info.Owner
  578. withdrawals[i].Token = &info.Token
  579. }
  580. }
  581. for i := range wDelayerWithdrawals {
  582. info := &wDelayerWithdrawals[i]
  583. withdrawals[len(rollupWithdrawals)+i] = withdrawal{
  584. DelayedWithdrawn: &blockNum,
  585. Owner: &info.Owner,
  586. Token: &info.Token,
  587. }
  588. }
  589. // In VALUES we set an initial row of NULLs to set the types of each
  590. // variable passed as argument
  591. const query string = `
  592. UPDATE exit_tree e SET
  593. instant_withdrawn = d.instant_withdrawn,
  594. delayed_withdraw_request = CASE
  595. WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request
  596. ELSE d.delayed_withdraw_request
  597. END,
  598. delayed_withdrawn = d.delayed_withdrawn,
  599. owner = d.owner,
  600. token = d.token
  601. FROM (VALUES
  602. (NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA),
  603. (:batch_num,
  604. :account_idx,
  605. :instant_withdrawn,
  606. :delayed_withdraw_request,
  607. :delayed_withdrawn,
  608. :owner,
  609. :token)
  610. ) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token)
  611. WHERE
  612. (d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR
  613. (d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token);
  614. `
  615. if len(withdrawals) > 0 {
  616. if _, err := sqlx.NamedExec(d, query, withdrawals); err != nil {
  617. return tracerr.Wrap(err)
  618. }
  619. }
  620. return nil
  621. }
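// Example (illustrative sketch) of the bulk-update pattern used by
// updateExitTree above: a slice of structs is passed to sqlx.NamedExec, which
// expands it into one VALUES tuple per element, while a leading row of typed
// NULLs fixes the column types without matching any row in the WHERE clause.
// The table and column names below are hypothetical; "::::" is the escaped
// form of the "::" cast, as in the query above.
func exampleBulkUpdate(d sqlx.Ext) error {
	type rowUpdate struct {
		ID    int64 `db:"id"`
		Value int64 `db:"value"`
	}
	updates := []rowUpdate{{ID: 1, Value: 10}, {ID: 2, Value: 20}}
	const query string = `
	UPDATE example_table e SET value = u.value
	FROM (VALUES
		(NULL::::BIGINT, NULL::::BIGINT),
		(:id, :value)
	) AS u (id, value)
	WHERE e.id = u.id;`
	_, err := sqlx.NamedExec(d, query, updates)
	return tracerr.Wrap(err)
}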
622. // AddToken inserts a token into the DB
  623. func (hdb *HistoryDB) AddToken(token *common.Token) error {
  624. return tracerr.Wrap(meddler.Insert(hdb.db, "token", token))
  625. }
626. // AddTokens inserts tokens into the DB
  627. func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) }
  628. func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
  629. if len(tokens) == 0 {
  630. return nil
  631. }
  632. // Sanitize name and symbol
  633. for i, token := range tokens {
  634. token.Name = strings.ToValidUTF8(token.Name, " ")
  635. token.Symbol = strings.ToValidUTF8(token.Symbol, " ")
  636. tokens[i] = token
  637. }
  638. return tracerr.Wrap(db.BulkInsert(
  639. d,
  640. `INSERT INTO token (
  641. token_id,
  642. eth_block_num,
  643. eth_addr,
  644. name,
  645. symbol,
  646. decimals
  647. ) VALUES %s;`,
  648. tokens[:],
  649. ))
  650. }
  651. // UpdateTokenValue updates the USD value of a token
  652. func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
  653. // Sanitize symbol
  654. tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
  655. _, err := hdb.db.Exec(
  656. "UPDATE token SET usd = $1 WHERE symbol = $2;",
  657. value, tokenSymbol,
  658. )
  659. return tracerr.Wrap(err)
  660. }
  661. // GetToken returns a token from the DB given a TokenID
  662. func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
  663. token := &TokenWithUSD{}
  664. err := meddler.QueryRow(
  665. hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
  666. )
  667. return token, tracerr.Wrap(err)
  668. }
  669. // GetAllTokens returns all tokens from the DB
  670. func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
  671. var tokens []*TokenWithUSD
  672. err := meddler.QueryAll(
  673. hdb.db, &tokens,
  674. "SELECT * FROM token ORDER BY token_id;",
  675. )
  676. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
  677. }
  678. // GetTokens returns a list of tokens from the DB
  679. func (hdb *HistoryDB) GetTokens(
  680. ids []common.TokenID, symbols []string, name string, fromItem,
  681. limit *uint, order string,
  682. ) ([]TokenWithUSD, uint64, error) {
  683. var query string
  684. var args []interface{}
  685. queryStr := `SELECT * , COUNT(*) OVER() AS total_items FROM token `
  686. // Apply filters
  687. nextIsAnd := false
  688. if len(ids) > 0 {
  689. queryStr += "WHERE token_id IN (?) "
  690. nextIsAnd = true
  691. args = append(args, ids)
  692. }
  693. if len(symbols) > 0 {
  694. if nextIsAnd {
  695. queryStr += "AND "
  696. } else {
  697. queryStr += "WHERE "
  698. }
  699. queryStr += "symbol IN (?) "
  700. args = append(args, symbols)
  701. nextIsAnd = true
  702. }
  703. if name != "" {
  704. if nextIsAnd {
  705. queryStr += "AND "
  706. } else {
  707. queryStr += "WHERE "
  708. }
  709. queryStr += "name ~ ? "
  710. args = append(args, name)
  711. nextIsAnd = true
  712. }
  713. if fromItem != nil {
  714. if nextIsAnd {
  715. queryStr += "AND "
  716. } else {
  717. queryStr += "WHERE "
  718. }
  719. if order == OrderAsc {
  720. queryStr += "item_id >= ? "
  721. } else {
  722. queryStr += "item_id <= ? "
  723. }
  724. args = append(args, fromItem)
  725. }
  726. // pagination
  727. queryStr += "ORDER BY item_id "
  728. if order == OrderAsc {
  729. queryStr += "ASC "
  730. } else {
  731. queryStr += "DESC "
  732. }
  733. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  734. query, argsQ, err := sqlx.In(queryStr, args...)
  735. if err != nil {
  736. return nil, 0, tracerr.Wrap(err)
  737. }
  738. query = hdb.db.Rebind(query)
  739. tokens := []*TokenWithUSD{}
  740. if err := meddler.QueryAll(hdb.db, &tokens, query, argsQ...); err != nil {
  741. return nil, 0, tracerr.Wrap(err)
  742. }
  743. if len(tokens) == 0 {
  744. return []TokenWithUSD{}, 0, nil
  745. }
746. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tokens[0].TotalItems - uint64(len(tokens)), nil
  747. }
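// Example (illustrative sketch): how the "IN (?)" filters above get expanded.
// sqlx.In replaces a "?" bound to a slice with one placeholder per element, and
// Rebind converts the resulting "?" placeholders into PostgreSQL's "$n" form.
// The token IDs are assumptions for the example.
func exampleExpandInQuery(hdb *HistoryDB) (string, []interface{}, error) {
	ids := []common.TokenID{1, 2, 3}
	query, args, err := sqlx.In("SELECT * FROM token WHERE token_id IN (?);", ids)
	if err != nil {
		return "", nil, tracerr.Wrap(err)
	}
	// e.g. "SELECT * FROM token WHERE token_id IN ($1, $2, $3);"
	return hdb.db.Rebind(query), args, nil
}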
  748. // GetTokenSymbols returns all the token symbols from the DB
  749. func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
  750. var tokenSymbols []string
  751. rows, err := hdb.db.Query("SELECT symbol FROM token;")
  752. if err != nil {
  753. return nil, tracerr.Wrap(err)
  754. }
  755. defer db.RowsClose(rows)
  756. sym := new(string)
  757. for rows.Next() {
  758. err = rows.Scan(sym)
  759. if err != nil {
  760. return nil, tracerr.Wrap(err)
  761. }
  762. tokenSymbols = append(tokenSymbols, *sym)
  763. }
  764. return tokenSymbols, nil
  765. }
766. // AddAccounts inserts accounts into the DB
  767. func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
  768. return tracerr.Wrap(hdb.addAccounts(hdb.db, accounts))
  769. }
  770. func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
  771. if len(accounts) == 0 {
  772. return nil
  773. }
  774. return tracerr.Wrap(db.BulkInsert(
  775. d,
  776. `INSERT INTO account (
  777. idx,
  778. token_id,
  779. batch_num,
  780. bjj,
  781. eth_addr
  782. ) VALUES %s;`,
  783. accounts[:],
  784. ))
  785. }
  786. // GetAllAccounts returns a list of accounts from the DB
  787. func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
  788. var accs []*common.Account
  789. err := meddler.QueryAll(
  790. hdb.db, &accs,
  791. "SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;",
  792. )
  793. return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
  794. }
  795. // AddL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
796. // If the tx originates from a coordinator, BatchNum must be provided. If it originates from a user,
797. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
798. // EffectiveAmount and EffectiveDepositAmount are set to default values by the DB.
  799. func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
  800. return tracerr.Wrap(hdb.addL1Txs(hdb.db, l1txs))
  801. }
  802. // addL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
803. // If the tx originates from a coordinator, BatchNum must be provided. If it originates from a user,
804. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
805. // EffectiveAmount and EffectiveDepositAmount are set to default values by the DB.
  806. func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
  807. if len(l1txs) == 0 {
  808. return nil
  809. }
  810. txs := []txWrite{}
  811. for i := 0; i < len(l1txs); i++ {
  812. af := new(big.Float).SetInt(l1txs[i].Amount)
  813. amountFloat, _ := af.Float64()
  814. laf := new(big.Float).SetInt(l1txs[i].DepositAmount)
  815. depositAmountFloat, _ := laf.Float64()
  816. var effectiveFromIdx *common.Idx
  817. if l1txs[i].UserOrigin {
  818. if l1txs[i].Type != common.TxTypeCreateAccountDeposit &&
  819. l1txs[i].Type != common.TxTypeCreateAccountDepositTransfer {
  820. effectiveFromIdx = &l1txs[i].FromIdx
  821. }
  822. } else {
  823. effectiveFromIdx = &l1txs[i].EffectiveFromIdx
  824. }
  825. txs = append(txs, txWrite{
  826. // Generic
  827. IsL1: true,
  828. TxID: l1txs[i].TxID,
  829. Type: l1txs[i].Type,
  830. Position: l1txs[i].Position,
  831. FromIdx: &l1txs[i].FromIdx,
  832. EffectiveFromIdx: effectiveFromIdx,
  833. ToIdx: l1txs[i].ToIdx,
  834. Amount: l1txs[i].Amount,
  835. AmountFloat: amountFloat,
  836. TokenID: l1txs[i].TokenID,
  837. BatchNum: l1txs[i].BatchNum,
  838. EthBlockNum: l1txs[i].EthBlockNum,
  839. // L1
  840. ToForgeL1TxsNum: l1txs[i].ToForgeL1TxsNum,
  841. UserOrigin: &l1txs[i].UserOrigin,
  842. FromEthAddr: &l1txs[i].FromEthAddr,
  843. FromBJJ: &l1txs[i].FromBJJ,
  844. DepositAmount: l1txs[i].DepositAmount,
  845. DepositAmountFloat: &depositAmountFloat,
  846. })
  847. }
  848. return tracerr.Wrap(hdb.addTxs(d, txs))
  849. }
  850. // AddL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  851. func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
  852. return tracerr.Wrap(hdb.addL2Txs(hdb.db, l2txs))
  853. }
  854. // addL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  855. func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error {
  856. txs := []txWrite{}
  857. for i := 0; i < len(l2txs); i++ {
  858. f := new(big.Float).SetInt(l2txs[i].Amount)
  859. amountFloat, _ := f.Float64()
  860. txs = append(txs, txWrite{
  861. // Generic
  862. IsL1: false,
  863. TxID: l2txs[i].TxID,
  864. Type: l2txs[i].Type,
  865. Position: l2txs[i].Position,
  866. FromIdx: &l2txs[i].FromIdx,
  867. EffectiveFromIdx: &l2txs[i].FromIdx,
  868. ToIdx: l2txs[i].ToIdx,
  869. TokenID: l2txs[i].TokenID,
  870. Amount: l2txs[i].Amount,
  871. AmountFloat: amountFloat,
  872. BatchNum: &l2txs[i].BatchNum,
  873. EthBlockNum: l2txs[i].EthBlockNum,
  874. // L2
  875. Fee: &l2txs[i].Fee,
  876. Nonce: &l2txs[i].Nonce,
  877. })
  878. }
  879. return tracerr.Wrap(hdb.addTxs(d, txs))
  880. }
  881. func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
  882. if len(txs) == 0 {
  883. return nil
  884. }
  885. return tracerr.Wrap(db.BulkInsert(
  886. d,
  887. `INSERT INTO tx (
  888. is_l1,
  889. id,
  890. type,
  891. position,
  892. from_idx,
  893. effective_from_idx,
  894. to_idx,
  895. amount,
  896. amount_f,
  897. token_id,
  898. batch_num,
  899. eth_block_num,
  900. to_forge_l1_txs_num,
  901. user_origin,
  902. from_eth_addr,
  903. from_bjj,
  904. deposit_amount,
  905. deposit_amount_f,
  906. fee,
  907. nonce
  908. ) VALUES %s;`,
  909. txs[:],
  910. ))
  911. }
  912. // GetHistoryTx returns a tx from the DB given a TxID
  913. func (hdb *HistoryDB) GetHistoryTx(txID common.TxID) (*TxAPI, error) {
  914. // Warning: amount_success and deposit_amount_success have true as default for
915. // performance reasons. The expected default value is false (when txs are unforged);
916. // this case is handled in the function func (tx TxAPI) MarshalJSON() ([]byte, error).
  917. tx := &TxAPI{}
  918. err := meddler.QueryRow(
  919. hdb.db, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
  920. hez_idx(tx.effective_from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
  921. hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
  922. tx.amount, tx.amount_success, tx.token_id, tx.amount_usd,
  923. tx.batch_num, tx.eth_block_num, tx.to_forge_l1_txs_num, tx.user_origin,
  924. tx.deposit_amount, tx.deposit_amount_usd, tx.deposit_amount_success, tx.fee, tx.fee_usd, tx.nonce,
  925. token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
  926. token.eth_addr, token.name, token.symbol, token.decimals, token.usd,
  927. token.usd_update, block.timestamp
  928. FROM tx INNER JOIN token ON tx.token_id = token.token_id
  929. INNER JOIN block ON tx.eth_block_num = block.eth_block_num
  930. WHERE tx.id = $1;`, txID,
  931. )
  932. return tx, tracerr.Wrap(err)
  933. }
  934. // GetHistoryTxs returns a list of txs from the DB using the HistoryTx struct
  935. // and pagination info
  936. func (hdb *HistoryDB) GetHistoryTxs(
  937. ethAddr *ethCommon.Address, bjj *babyjub.PublicKeyComp,
  938. tokenID *common.TokenID, idx *common.Idx, batchNum *uint, txType *common.TxType,
  939. fromItem, limit *uint, order string,
  940. ) ([]TxAPI, uint64, error) {
  941. // Warning: amount_success and deposit_amount_success have true as default for
942. // performance reasons. The expected default value is false (when txs are unforged);
943. // this case is handled in the function func (tx TxAPI) MarshalJSON() ([]byte, error).
  944. if ethAddr != nil && bjj != nil {
  945. return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
  946. }
  947. var query string
  948. var args []interface{}
  949. queryStr := `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
  950. hez_idx(tx.effective_from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
  951. hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
  952. tx.amount, tx.amount_success, tx.token_id, tx.amount_usd,
  953. tx.batch_num, tx.eth_block_num, tx.to_forge_l1_txs_num, tx.user_origin,
  954. tx.deposit_amount, tx.deposit_amount_usd, tx.deposit_amount_success, tx.fee, tx.fee_usd, tx.nonce,
  955. token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
  956. token.eth_addr, token.name, token.symbol, token.decimals, token.usd,
  957. token.usd_update, block.timestamp, count(*) OVER() AS total_items
  958. FROM tx INNER JOIN token ON tx.token_id = token.token_id
  959. INNER JOIN block ON tx.eth_block_num = block.eth_block_num `
  960. // Apply filters
  961. nextIsAnd := false
  962. // ethAddr filter
  963. if ethAddr != nil {
  964. queryStr += "WHERE (tx.from_eth_addr = ? OR tx.to_eth_addr = ?) "
  965. nextIsAnd = true
  966. args = append(args, ethAddr, ethAddr)
  967. } else if bjj != nil { // bjj filter
  968. queryStr += "WHERE (tx.from_bjj = ? OR tx.to_bjj = ?) "
  969. nextIsAnd = true
  970. args = append(args, bjj, bjj)
  971. }
  972. // tokenID filter
  973. if tokenID != nil {
  974. if nextIsAnd {
  975. queryStr += "AND "
  976. } else {
  977. queryStr += "WHERE "
  978. }
  979. queryStr += "tx.token_id = ? "
  980. args = append(args, tokenID)
  981. nextIsAnd = true
  982. }
  983. // idx filter
  984. if idx != nil {
  985. if nextIsAnd {
  986. queryStr += "AND "
  987. } else {
  988. queryStr += "WHERE "
  989. }
  990. queryStr += "(tx.effective_from_idx = ? OR tx.to_idx = ?) "
  991. args = append(args, idx, idx)
  992. nextIsAnd = true
  993. }
  994. // batchNum filter
  995. if batchNum != nil {
  996. if nextIsAnd {
  997. queryStr += "AND "
  998. } else {
  999. queryStr += "WHERE "
  1000. }
  1001. queryStr += "tx.batch_num = ? "
  1002. args = append(args, batchNum)
  1003. nextIsAnd = true
  1004. }
  1005. // txType filter
  1006. if txType != nil {
  1007. if nextIsAnd {
  1008. queryStr += "AND "
  1009. } else {
  1010. queryStr += "WHERE "
  1011. }
  1012. queryStr += "tx.type = ? "
  1013. args = append(args, txType)
  1014. nextIsAnd = true
  1015. }
  1016. if fromItem != nil {
  1017. if nextIsAnd {
  1018. queryStr += "AND "
  1019. } else {
  1020. queryStr += "WHERE "
  1021. }
  1022. if order == OrderAsc {
  1023. queryStr += "tx.item_id >= ? "
  1024. } else {
  1025. queryStr += "tx.item_id <= ? "
  1026. }
  1027. args = append(args, fromItem)
  1028. nextIsAnd = true
  1029. }
  1030. if nextIsAnd {
  1031. queryStr += "AND "
  1032. } else {
  1033. queryStr += "WHERE "
  1034. }
  1035. queryStr += "tx.batch_num IS NOT NULL "
  1036. // pagination
  1037. queryStr += "ORDER BY tx.item_id "
  1038. if order == OrderAsc {
  1039. queryStr += " ASC "
  1040. } else {
  1041. queryStr += " DESC "
  1042. }
  1043. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  1044. query = hdb.db.Rebind(queryStr)
  1045. // log.Debug(query)
  1046. txsPtrs := []*TxAPI{}
  1047. if err := meddler.QueryAll(hdb.db, &txsPtrs, query, args...); err != nil {
  1048. return nil, 0, tracerr.Wrap(err)
  1049. }
  1050. txs := db.SlicePtrsToSlice(txsPtrs).([]TxAPI)
  1051. if len(txs) == 0 {
  1052. return txs, 0, nil
  1053. }
  1054. return txs, txs[0].TotalItems - uint64(len(txs)), nil
  1055. }
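// Example (illustrative sketch): listing forged txs of an Ethereum address with
// GetHistoryTxs. ethAddr and bjj are mutually exclusive, so bjj is nil here;
// the pagination values are assumptions for the example.
func exampleTxsByEthAddr(hdb *HistoryDB, addr ethCommon.Address) ([]TxAPI, error) {
	fromItem, limit := uint(0), uint(10)
	txs, pendingItems, err := hdb.GetHistoryTxs(&addr, nil, nil, nil, nil, nil, &fromItem, &limit, OrderAsc)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	fmt.Printf("returned %d txs, %d still pending\n", len(txs), pendingItems)
	return txs, nil
}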
1056. // GetAllExits returns all exits from the DB
  1057. func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
  1058. var exits []*common.ExitInfo
  1059. err := meddler.QueryAll(
  1060. hdb.db, &exits,
  1061. `SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
  1062. exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
  1063. exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
  1064. )
  1065. return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err)
  1066. }
1067. // GetExitAPI returns an exit from the DB
  1068. func (hdb *HistoryDB) GetExitAPI(batchNum *uint, idx *common.Idx) (*ExitAPI, error) {
  1069. exit := &ExitAPI{}
  1070. err := meddler.QueryRow(
  1071. hdb.db, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
  1072. hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
  1073. account.bjj, account.eth_addr,
  1074. exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
  1075. exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn,
  1076. token.token_id, token.item_id AS token_item_id,
  1077. token.eth_block_num AS token_block, token.eth_addr AS token_eth_addr, token.name, token.symbol,
  1078. token.decimals, token.usd, token.usd_update
  1079. FROM exit_tree INNER JOIN account ON exit_tree.account_idx = account.idx
  1080. INNER JOIN token ON account.token_id = token.token_id
  1081. WHERE exit_tree.batch_num = $1 AND exit_tree.account_idx = $2;`, batchNum, idx,
  1082. )
  1083. return exit, tracerr.Wrap(err)
  1084. }
  1085. // GetExitsAPI returns a list of exits from the DB and pagination info
  1086. func (hdb *HistoryDB) GetExitsAPI(
  1087. ethAddr *ethCommon.Address, bjj *babyjub.PublicKeyComp, tokenID *common.TokenID,
  1088. idx *common.Idx, batchNum *uint, onlyPendingWithdraws *bool,
  1089. fromItem, limit *uint, order string,
  1090. ) ([]ExitAPI, uint64, error) {
  1091. if ethAddr != nil && bjj != nil {
  1092. return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
  1093. }
  1094. var query string
  1095. var args []interface{}
  1096. queryStr := `SELECT exit_tree.item_id, exit_tree.batch_num,
  1097. hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
  1098. account.bjj, account.eth_addr,
  1099. exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
  1100. exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn,
  1101. token.token_id, token.item_id AS token_item_id,
  1102. token.eth_block_num AS token_block, token.eth_addr AS token_eth_addr, token.name, token.symbol,
  1103. token.decimals, token.usd, token.usd_update, COUNT(*) OVER() AS total_items
  1104. FROM exit_tree INNER JOIN account ON exit_tree.account_idx = account.idx
  1105. INNER JOIN token ON account.token_id = token.token_id `
  1106. // Apply filters
  1107. nextIsAnd := false
  1108. // ethAddr filter
  1109. if ethAddr != nil {
  1110. queryStr += "WHERE account.eth_addr = ? "
  1111. nextIsAnd = true
  1112. args = append(args, ethAddr)
  1113. } else if bjj != nil { // bjj filter
  1114. queryStr += "WHERE account.bjj = ? "
  1115. nextIsAnd = true
  1116. args = append(args, bjj)
  1117. }
  1118. // tokenID filter
  1119. if tokenID != nil {
  1120. if nextIsAnd {
  1121. queryStr += "AND "
  1122. } else {
  1123. queryStr += "WHERE "
  1124. }
  1125. queryStr += "account.token_id = ? "
  1126. args = append(args, tokenID)
  1127. nextIsAnd = true
  1128. }
  1129. // idx filter
  1130. if idx != nil {
  1131. if nextIsAnd {
  1132. queryStr += "AND "
  1133. } else {
  1134. queryStr += "WHERE "
  1135. }
  1136. queryStr += "exit_tree.account_idx = ? "
  1137. args = append(args, idx)
  1138. nextIsAnd = true
  1139. }
  1140. // batchNum filter
  1141. if batchNum != nil {
  1142. if nextIsAnd {
  1143. queryStr += "AND "
  1144. } else {
  1145. queryStr += "WHERE "
  1146. }
  1147. queryStr += "exit_tree.batch_num = ? "
  1148. args = append(args, batchNum)
  1149. nextIsAnd = true
  1150. }
  1151. // onlyPendingWithdraws
  1152. if onlyPendingWithdraws != nil {
  1153. if *onlyPendingWithdraws {
  1154. if nextIsAnd {
  1155. queryStr += "AND "
  1156. } else {
  1157. queryStr += "WHERE "
  1158. }
  1159. queryStr += "(exit_tree.instant_withdrawn IS NULL AND exit_tree.delayed_withdrawn IS NULL) "
  1160. nextIsAnd = true
  1161. }
  1162. }
  1163. if fromItem != nil {
  1164. if nextIsAnd {
  1165. queryStr += "AND "
  1166. } else {
  1167. queryStr += "WHERE "
  1168. }
  1169. if order == OrderAsc {
  1170. queryStr += "exit_tree.item_id >= ? "
  1171. } else {
  1172. queryStr += "exit_tree.item_id <= ? "
  1173. }
  1174. args = append(args, fromItem)
  1175. // nextIsAnd = true
  1176. }
  1177. // pagination
  1178. queryStr += "ORDER BY exit_tree.item_id "
  1179. if order == OrderAsc {
  1180. queryStr += " ASC "
  1181. } else {
  1182. queryStr += " DESC "
  1183. }
  1184. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  1185. query = hdb.db.Rebind(queryStr)
  1186. // log.Debug(query)
  1187. exits := []*ExitAPI{}
  1188. if err := meddler.QueryAll(hdb.db, &exits, query, args...); err != nil {
  1189. return nil, 0, tracerr.Wrap(err)
  1190. }
  1191. if len(exits) == 0 {
  1192. return []ExitAPI{}, 0, nil
  1193. }
  1194. return db.SlicePtrsToSlice(exits).([]ExitAPI), exits[0].TotalItems - uint64(len(exits)), nil
  1195. }
  1196. // GetAllL1UserTxs returns all L1UserTxs from the DB
  1197. func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
  1198. var txs []*common.L1Tx
  1199. err := meddler.QueryAll(
  1200. hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
  1201. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  1202. tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  1203. tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
  1204. tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
  1205. tx.eth_block_num, tx.type, tx.batch_num
  1206. FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
  1207. )
  1208. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  1209. }
  1210. // GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
  1211. func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
  1212. var txs []*common.L1Tx
  1213. // Since the query specifies that only coordinator txs are returned, it's safe to assume
  1214. // that returned txs will always have effective amounts
  1215. err := meddler.QueryAll(
  1216. hdb.db, &txs,
  1217. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  1218. tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  1219. tx.amount, tx.amount AS effective_amount,
  1220. tx.deposit_amount, tx.deposit_amount AS effective_deposit_amount,
  1221. tx.eth_block_num, tx.type, tx.batch_num
  1222. FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE ORDER BY item_id;`,
  1223. )
  1224. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  1225. }
  1226. // GetAllL2Txs returns all L2Txs from the DB
  1227. func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
  1228. var txs []*common.L2Tx
  1229. err := meddler.QueryAll(
  1230. hdb.db, &txs,
  1231. `SELECT tx.id, tx.batch_num, tx.position,
  1232. tx.from_idx, tx.to_idx, tx.amount, tx.token_id,
  1233. tx.fee, tx.nonce, tx.type, tx.eth_block_num
  1234. FROM tx WHERE is_l1 = FALSE ORDER BY item_id;`,
  1235. )
  1236. return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err)
  1237. }
  1238. // GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
  1239. func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
  1240. var txs []*common.L1Tx
  1241. err := meddler.QueryAll(
  1242. hdb.db, &txs, // only L1 user txs can have batch_num set to null
  1243. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  1244. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  1245. tx.amount, NULL AS effective_amount,
  1246. tx.deposit_amount, NULL AS effective_deposit_amount,
  1247. tx.eth_block_num, tx.type, tx.batch_num
  1248. FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1
  1249. ORDER BY position;`,
  1250. toForgeL1TxsNum,
  1251. )
  1252. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  1253. }
1254. // TODO: Think about changing all the queries that return a last value into queries that return the next valid value.
1255. // GetLastTxsPosition returns the last tx position for a given to_forge_l1_txs_num
  1256. func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
  1257. row := hdb.db.QueryRow(
  1258. "SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
  1259. toForgeL1TxsNum,
  1260. )
  1261. var lastL1TxsPosition int
  1262. return lastL1TxsPosition, tracerr.Wrap(row.Scan(&lastL1TxsPosition))
  1263. }
1264. // GetSCVars returns the rollup, auction and wdelayer smart contract variables at their last update.
  1265. func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables,
  1266. *common.WDelayerVariables, error) {
  1267. var rollup common.RollupVariables
  1268. var auction common.AuctionVariables
  1269. var wDelayer common.WDelayerVariables
  1270. if err := meddler.QueryRow(hdb.db, &rollup,
  1271. "SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  1272. return nil, nil, nil, tracerr.Wrap(err)
  1273. }
  1274. if err := meddler.QueryRow(hdb.db, &auction,
  1275. "SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  1276. return nil, nil, nil, tracerr.Wrap(err)
  1277. }
  1278. if err := meddler.QueryRow(hdb.db, &wDelayer,
  1279. "SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  1280. return nil, nil, nil, tracerr.Wrap(err)
  1281. }
  1282. return &rollup, &auction, &wDelayer, nil
  1283. }
  1284. func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error {
  1285. return tracerr.Wrap(meddler.Insert(d, "rollup_vars", rollup))
  1286. }
  1287. func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error {
  1288. return tracerr.Wrap(meddler.Insert(d, "auction_vars", auction))
  1289. }
  1290. func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error {
  1291. return tracerr.Wrap(meddler.Insert(d, "wdelayer_vars", wDelayer))
  1292. }
  1293. func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
  1294. if len(bucketUpdates) == 0 {
  1295. return nil
  1296. }
  1297. return tracerr.Wrap(db.BulkInsert(
  1298. d,
  1299. `INSERT INTO bucket_update (
  1300. eth_block_num,
  1301. num_bucket,
  1302. block_stamp,
  1303. withdrawals
  1304. ) VALUES %s;`,
  1305. bucketUpdates[:],
  1306. ))
  1307. }
1308. // AddBucketUpdatesTest allows calling the unexported addBucketUpdates method,
1309. // only for internal testing purposes
  1310. func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
  1311. return hdb.addBucketUpdates(d, bucketUpdates)
  1312. }
  1313. // GetAllBucketUpdates retrieves all the bucket updates
  1314. func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
  1315. var bucketUpdates []*common.BucketUpdate
  1316. err := meddler.QueryAll(
  1317. hdb.db, &bucketUpdates,
  1318. `SELECT eth_block_num, num_bucket, block_stamp, withdrawals
  1319. FROM bucket_update ORDER BY item_id;`,
  1320. )
  1321. return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
  1322. }
1323. // GetBucketUpdates retrieves the latest value for each bucket
  1324. func (hdb *HistoryDB) GetBucketUpdates() ([]BucketUpdateAPI, error) {
  1325. var bucketUpdates []*BucketUpdateAPI
  1326. err := meddler.QueryAll(
  1327. hdb.db, &bucketUpdates,
  1328. `SELECT num_bucket, withdrawals FROM bucket_update
  1329. WHERE item_id in(SELECT max(item_id) FROM bucket_update
  1330. group by num_bucket)
  1331. ORDER BY num_bucket ASC;`,
  1332. )
  1333. return db.SlicePtrsToSlice(bucketUpdates).([]BucketUpdateAPI), tracerr.Wrap(err)
  1334. }
  1335. func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
  1336. if len(tokenExchanges) == 0 {
  1337. return nil
  1338. }
  1339. return tracerr.Wrap(db.BulkInsert(
  1340. d,
  1341. `INSERT INTO token_exchange (
  1342. eth_block_num,
  1343. eth_addr,
  1344. value_usd
  1345. ) VALUES %s;`,
  1346. tokenExchanges[:],
  1347. ))
  1348. }
  1349. // GetAllTokenExchanges retrieves all the token exchanges
  1350. func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
  1351. var tokenExchanges []*common.TokenExchange
  1352. err := meddler.QueryAll(
  1353. hdb.db, &tokenExchanges,
  1354. "SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;",
  1355. )
  1356. return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
  1357. }
  1358. func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
  1359. escapeHatchWithdrawals []common.WDelayerEscapeHatchWithdrawal) error {
  1360. if len(escapeHatchWithdrawals) == 0 {
  1361. return nil
  1362. }
  1363. return tracerr.Wrap(db.BulkInsert(
  1364. d,
  1365. `INSERT INTO escape_hatch_withdrawal (
  1366. eth_block_num,
  1367. who_addr,
  1368. to_addr,
  1369. token_addr,
  1370. amount
  1371. ) VALUES %s;`,
  1372. escapeHatchWithdrawals[:],
  1373. ))
  1374. }
  1375. // GetAllEscapeHatchWithdrawals retrieves all the escape hatch withdrawals
  1376. func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
  1377. var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
  1378. err := meddler.QueryAll(
  1379. hdb.db, &escapeHatchWithdrawals,
  1380. "SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;",
  1381. )
  1382. return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
  1383. tracerr.Wrap(err)
  1384. }
  1385. // SetInitialSCVars sets the initial state of rollup, auction, wdelayer smart
  1386. // contract variables. This initial state is stored linked to block 0, which
1387. // always exists in the DB and is used to store initialization data that always
1388. // exists in the smart contracts.
  1389. func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
  1390. auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
  1391. txn, err := hdb.db.Beginx()
  1392. if err != nil {
  1393. return tracerr.Wrap(err)
  1394. }
  1395. defer func() {
  1396. if err != nil {
  1397. db.Rollback(txn)
  1398. }
  1399. }()
  1400. // Force EthBlockNum to be 0 because it's the block used to link data
  1401. // that belongs to the creation of the smart contracts
  1402. rollup.EthBlockNum = 0
  1403. auction.EthBlockNum = 0
  1404. wDelayer.EthBlockNum = 0
  1405. auction.DefaultSlotSetBidSlotNum = 0
  1406. if err := hdb.setRollupVars(txn, rollup); err != nil {
  1407. return tracerr.Wrap(err)
  1408. }
  1409. if err := hdb.setAuctionVars(txn, auction); err != nil {
  1410. return tracerr.Wrap(err)
  1411. }
  1412. if err := hdb.setWDelayerVars(txn, wDelayer); err != nil {
  1413. return tracerr.Wrap(err)
  1414. }
  1415. return tracerr.Wrap(txn.Commit())
  1416. }
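// Example (illustrative sketch): storing the initial smart contract variables
// and reading back the latest ones. Real values would come from the contracts'
// deployment; empty structs are used here only to show the call shape.
func exampleInitSCVars(hdb *HistoryDB) error {
	if err := hdb.SetInitialSCVars(
		&common.RollupVariables{}, &common.AuctionVariables{}, &common.WDelayerVariables{},
	); err != nil {
		return tracerr.Wrap(err)
	}
	rollup, auction, wDelayer, err := hdb.GetSCVars()
	if err != nil {
		return tracerr.Wrap(err)
	}
	_, _, _ = rollup, auction, wDelayer // latest values, linked to block 0 at this point
	return nil
}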
  1417. // setExtraInfoForgedL1UserTxs sets the EffectiveAmount, EffectiveDepositAmount
  1418. // and EffectiveFromIdx of the given l1UserTxs (with an UPDATE)
  1419. func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx) error {
  1420. if len(txs) == 0 {
  1421. return nil
  1422. }
  1423. // Effective amounts are stored as success flags in the DB, with true value by default
1424. // to reduce the amount of updates. Therefore, only amounts that became ineffective need to be
1425. // updated to false. At the same time, all the txs that create
1426. // accounts (FromIdx == 0) are updated to set the EffectiveFromIdx.
  1427. type txUpdate struct {
  1428. ID common.TxID `db:"id"`
  1429. AmountSuccess bool `db:"amount_success"`
  1430. DepositAmountSuccess bool `db:"deposit_amount_success"`
  1431. EffectiveFromIdx common.Idx `db:"effective_from_idx"`
  1432. }
  1433. txUpdates := []txUpdate{}
  1434. equal := func(a *big.Int, b *big.Int) bool {
  1435. return a.Cmp(b) == 0
  1436. }
  1437. for i := range txs {
  1438. amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount)
  1439. depositAmountSuccess := equal(txs[i].DepositAmount, txs[i].EffectiveDepositAmount)
  1440. if !amountSuccess || !depositAmountSuccess || txs[i].FromIdx == 0 {
  1441. txUpdates = append(txUpdates, txUpdate{
  1442. ID: txs[i].TxID,
  1443. AmountSuccess: amountSuccess,
  1444. DepositAmountSuccess: depositAmountSuccess,
  1445. EffectiveFromIdx: txs[i].EffectiveFromIdx,
  1446. })
  1447. }
  1448. }
  1449. const query string = `
  1450. UPDATE tx SET
  1451. amount_success = tx_update.amount_success,
  1452. deposit_amount_success = tx_update.deposit_amount_success,
  1453. effective_from_idx = tx_update.effective_from_idx
  1454. FROM (VALUES
  1455. (NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL, NULL::::BIGINT),
  1456. (:id, :amount_success, :deposit_amount_success, :effective_from_idx)
  1457. ) as tx_update (id, amount_success, deposit_amount_success, effective_from_idx)
  1458. WHERE tx.id = tx_update.id;
  1459. `
  1460. if len(txUpdates) > 0 {
  1461. if _, err := sqlx.NamedExec(d, query, txUpdates); err != nil {
  1462. return tracerr.Wrap(err)
  1463. }
  1464. }
  1465. return nil
  1466. }
  1467. // AddBlockSCData stores all the information of a block retrieved by the
  1468. // Synchronizer. Blocks should be inserted in order, leaving no gaps because
  1469. // the pagination system of the API/DB depends on this. Within blocks, all
  1470. // items should also be in the correct order (Accounts, Tokens, Txs, etc.)
  1471. func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
  1472. txn, err := hdb.db.Beginx()
  1473. if err != nil {
  1474. return tracerr.Wrap(err)
  1475. }
  1476. defer func() {
  1477. if err != nil {
  1478. db.Rollback(txn)
  1479. }
  1480. }()
  1481. // Add block
  1482. if err := hdb.addBlock(txn, &blockData.Block); err != nil {
  1483. return tracerr.Wrap(err)
  1484. }
  1485. // Add Coordinators
  1486. if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil {
  1487. return tracerr.Wrap(err)
  1488. }
  1489. // Add Bids
  1490. if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil {
  1491. return tracerr.Wrap(err)
  1492. }
  1493. // Add Tokens
  1494. if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil {
  1495. return tracerr.Wrap(err)
  1496. }
  1497. // Prepare user L1 txs to be added.
  1498. // They must be added before the batch that will forge them (which can be in the same block)
1499. // and after the account that they will be sent to (which can also be created in the same block).
  1500. // Note: insert order is not relevant since item_id will be updated by a DB trigger when
  1501. // the batch that forges those txs is inserted
  1502. userL1s := make(map[common.BatchNum][]common.L1Tx)
  1503. for i := range blockData.Rollup.L1UserTxs {
  1504. batchThatForgesIsInTheBlock := false
  1505. for _, batch := range blockData.Rollup.Batches {
  1506. if batch.Batch.ForgeL1TxsNum != nil &&
  1507. *batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum {
  1508. // Tx is forged in this block. It's guaranteed that:
  1509. // * the first batch of the block won't forge user L1 txs that have been added in this block
1510. // * batch nums are sequential, therefore it's safe to add the tx at batch.BatchNum - 1
  1511. batchThatForgesIsInTheBlock = true
  1512. addAtBatchNum := batch.Batch.BatchNum - 1
  1513. userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i])
  1514. break
  1515. }
  1516. }
  1517. if !batchThatForgesIsInTheBlock {
1518. // Use the artificial batchNum 0 to add txs that are not forged in this block,
  1519. // after all the accounts of the block have been added
  1520. userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i])
  1521. }
  1522. }
  1523. // Add Batches
  1524. for i := range blockData.Rollup.Batches {
  1525. batch := &blockData.Rollup.Batches[i]
  1526. // Add Batch: this will trigger an update on the DB
  1527. // that will set the batch num of forged L1 txs in this batch
  1528. if err = hdb.addBatch(txn, &batch.Batch); err != nil {
  1529. return tracerr.Wrap(err)
  1530. }
  1531. // Add accounts
  1532. if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil {
  1533. return tracerr.Wrap(err)
  1534. }
  1535. // Set the EffectiveAmount and EffectiveDepositAmount of all the
  1536. // L1UserTxs that have been forged in this batch
  1537. if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil {
  1538. return tracerr.Wrap(err)
  1539. }
  1540. // Add forged l1 coordinator Txs
  1541. if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil {
  1542. return tracerr.Wrap(err)
  1543. }
  1544. // Add l2 Txs
  1545. if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil {
  1546. return tracerr.Wrap(err)
  1547. }
1548. // Add user L1 txs that will be forged in the next batch
1549. if batchUserL1s, ok := userL1s[batch.Batch.BatchNum]; ok {
1550. if err := hdb.addL1Txs(txn, batchUserL1s); err != nil {
  1551. return tracerr.Wrap(err)
  1552. }
  1553. }
  1554. // Add exit tree
  1555. if err := hdb.addExitTree(txn, batch.ExitTree); err != nil {
  1556. return tracerr.Wrap(err)
  1557. }
  1558. }
  1559. // Add user L1 txs that won't be forged in this block
  1560. if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok {
  1561. if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil {
  1562. return tracerr.Wrap(err)
  1563. }
  1564. }
  1565. // Set SC Vars if there was an update
  1566. if blockData.Rollup.Vars != nil {
  1567. if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil {
  1568. return tracerr.Wrap(err)
  1569. }
  1570. }
  1571. if blockData.Auction.Vars != nil {
  1572. if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil {
  1573. return tracerr.Wrap(err)
  1574. }
  1575. }
  1576. if blockData.WDelayer.Vars != nil {
  1577. if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil {
  1578. return tracerr.Wrap(err)
  1579. }
  1580. }
  1581. // Update withdrawals in exit tree table
  1582. if err := hdb.updateExitTree(txn, blockData.Block.Num,
  1583. blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil {
  1584. return tracerr.Wrap(err)
  1585. }
  1586. // Add Escape Hatch Withdrawals
  1587. if err := hdb.addEscapeHatchWithdrawals(txn,
  1588. blockData.WDelayer.EscapeHatchWithdrawals); err != nil {
  1589. return tracerr.Wrap(err)
  1590. }
  1591. // Add Buckets withdrawals updates
  1592. if err := hdb.addBucketUpdates(txn, blockData.Rollup.UpdateBucketWithdraw); err != nil {
  1593. return tracerr.Wrap(err)
  1594. }
  1595. // Add Token exchange updates
  1596. if err := hdb.addTokenExchanges(txn, blockData.Rollup.TokenExchanges); err != nil {
  1597. return tracerr.Wrap(err)
  1598. }
  1599. return tracerr.Wrap(txn.Commit())
  1600. }
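// Example (illustrative sketch): how a synchronizer-like caller might persist
// blocks with AddBlockSCData. Blocks must be passed in ascending order with no
// gaps, as required by the comment above; the blocks slice is an assumption for
// the example.
func exampleStoreBlocks(hdb *HistoryDB, blocks []common.BlockData) error {
	for i := range blocks {
		if err := hdb.AddBlockSCData(&blocks[i]); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}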
  1601. // GetCoordinatorAPI returns a coordinator by its bidderAddr
  1602. func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
  1603. coordinator := &CoordinatorAPI{}
  1604. err := meddler.QueryRow(
  1605. hdb.db, coordinator,
  1606. "SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
  1607. bidderAddr,
  1608. )
  1609. return coordinator, tracerr.Wrap(err)
  1610. }

// GetCoordinatorsAPI returns a list of coordinators from the DB and pagination info
func (hdb *HistoryDB) GetCoordinatorsAPI(
	bidderAddr, forgerAddr *ethCommon.Address,
	fromItem, limit *uint, order string,
) ([]CoordinatorAPI, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT coordinator.*, COUNT(*) OVER() AS total_items
	FROM coordinator INNER JOIN (
		SELECT MAX(item_id) AS item_id FROM coordinator
		GROUP BY bidder_addr
	) c ON coordinator.item_id = c.item_id `
	// Apply filters
	nextIsAnd := false
	if bidderAddr != nil {
		queryStr += "WHERE bidder_addr = ? "
		nextIsAnd = true
		args = append(args, bidderAddr)
	}
	if forgerAddr != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "forger_addr = ? "
		nextIsAnd = true
		args = append(args, forgerAddr)
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "coordinator.item_id >= ? "
		} else {
			queryStr += "coordinator.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	// pagination
	queryStr += "ORDER BY coordinator.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query = hdb.db.Rebind(queryStr)
	coordinators := []*CoordinatorAPI{}
	if err := meddler.QueryAll(hdb.db, &coordinators, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(coordinators) == 0 {
		return []CoordinatorAPI{}, 0, nil
	}
	return db.SlicePtrsToSlice(coordinators).([]CoordinatorAPI),
		coordinators[0].TotalItems - uint64(len(coordinators)), nil
}
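
// exampleListCoordinators is a hypothetical usage sketch of the paginated
// query above (the helper and the page size of 20 are illustrative
// assumptions): it fetches the first page of coordinators in ascending
// item_id order; the second return value is the number of items left after
// this page. limit must be non-nil because the query always appends LIMIT.
func exampleListCoordinators(hdb *HistoryDB) ([]CoordinatorAPI, uint64, error) {
	limit := uint(20)
	// nil bidderAddr/forgerAddr apply no filter; nil fromItem starts at the
	// first item for ascending order.
	return hdb.GetCoordinatorsAPI(nil, nil, nil, &limit, OrderAsc)
}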

// AddAuctionVars inserts auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
	return tracerr.Wrap(meddler.Insert(hdb.db, "auction_vars", auctionVars))
}

// GetAuctionVars returns auction variables
func (hdb *HistoryDB) GetAuctionVars() (*common.AuctionVariables, error) {
	auctionVars := &common.AuctionVariables{}
	err := meddler.QueryRow(
		hdb.db, auctionVars, `SELECT * FROM auction_vars;`,
	)
	return auctionVars, tracerr.Wrap(err)
}
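
// exampleAuctionVarsRoundTrip is a hypothetical sketch (the helper is an
// assumption for illustration) combining the two helpers above: it persists
// a set of auction variables and reads them back, assuming the caller already
// built a valid *common.AuctionVariables.
func exampleAuctionVarsRoundTrip(hdb *HistoryDB, vars *common.AuctionVariables) (*common.AuctionVariables, error) {
	if err := hdb.AddAuctionVars(vars); err != nil {
		return nil, tracerr.Wrap(err)
	}
	return hdb.GetAuctionVars()
}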

// GetAuctionVarsUntilSetSlotNum returns all the updates of the auction vars
// from the last entry in which DefaultSlotSetBidSlotNum <= slotNum
func (hdb *HistoryDB) GetAuctionVarsUntilSetSlotNum(slotNum int64, maxItems int) ([]MinBidInfo, error) {
	auctionVars := []*MinBidInfo{}
	query := `
		SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
		WHERE default_slot_set_bid_slot_num < $1
		ORDER BY default_slot_set_bid_slot_num DESC
		LIMIT $2;
	`
	err := meddler.QueryAll(hdb.db, &auctionVars, query, slotNum, maxItems)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return db.SlicePtrsToSlice(auctionVars).([]MinBidInfo), nil
}
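
// exampleMinBidHistory is a hypothetical usage sketch (the helper and the
// maxItems value are illustrative assumptions): it fetches up to 5 of the
// most recent default-slot-set-bid updates whose slot number precedes
// currentSlot, as a coordinator might do when deciding whether to bid.
func exampleMinBidHistory(hdb *HistoryDB, currentSlot int64) ([]MinBidInfo, error) {
	return hdb.GetAuctionVarsUntilSetSlotNum(currentSlot, 5)
}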

// GetAccountAPI returns an account by its index
func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
	account := &AccountAPI{}
	err := meddler.QueryRow(hdb.db, account, `SELECT account.item_id, hez_idx(account.idx,
		token.symbol) as idx, account.batch_num, account.bjj, account.eth_addr,
		token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
		token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update
		FROM account INNER JOIN token ON account.token_id = token.token_id WHERE idx = $1;`, idx)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return account, nil
}
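
// exampleGetAccount is a hypothetical usage sketch (the helper and the index
// value are illustrative assumptions): it fetches a single account, joined
// with its token information, by rollup index.
func exampleGetAccount(hdb *HistoryDB) (*AccountAPI, error) {
	return hdb.GetAccountAPI(common.Idx(256))
}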

// GetAccountsAPI returns a list of accounts from the DB and pagination info
func (hdb *HistoryDB) GetAccountsAPI(
	tokenIDs []common.TokenID, ethAddr *ethCommon.Address,
	bjj *babyjub.PublicKeyComp, fromItem, limit *uint, order string,
) ([]AccountAPI, uint64, error) {
	if ethAddr != nil && bjj != nil {
		return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
	}
	var query string
	var args []interface{}
	queryStr := `SELECT account.item_id, hez_idx(account.idx, token.symbol) as idx, account.batch_num,
	account.bjj, account.eth_addr, token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
	token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update,
	COUNT(*) OVER() AS total_items
	FROM account INNER JOIN token ON account.token_id = token.token_id `
	// Apply filters
	nextIsAnd := false
	// ethAddr filter
	if ethAddr != nil {
		queryStr += "WHERE account.eth_addr = ? "
		nextIsAnd = true
		args = append(args, ethAddr)
	} else if bjj != nil { // bjj filter
		queryStr += "WHERE account.bjj = ? "
		nextIsAnd = true
		args = append(args, bjj)
	}
	// tokenID filter
	if len(tokenIDs) > 0 {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "account.token_id IN (?) "
		args = append(args, tokenIDs)
		nextIsAnd = true
	}
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "account.item_id >= ? "
		} else {
			queryStr += "account.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	// pagination
	queryStr += "ORDER BY account.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query, argsQ, err := sqlx.In(queryStr, args...)
	if err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	query = hdb.db.Rebind(query)
	accounts := []*AccountAPI{}
	if err := meddler.QueryAll(hdb.db, &accounts, query, argsQ...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(accounts) == 0 {
		return []AccountAPI{}, 0, nil
	}
	return db.SlicePtrsToSlice(accounts).([]AccountAPI),
		accounts[0].TotalItems - uint64(len(accounts)), nil
}
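
// exampleListAccounts is a hypothetical usage sketch of the paginated account
// query above (the helper, token IDs and page size are illustrative
// assumptions): it lists the accounts owned by an Ethereum address for two
// token IDs. ethAddr and bjj are mutually exclusive, so bjj is left nil.
func exampleListAccounts(hdb *HistoryDB, owner ethCommon.Address) ([]AccountAPI, uint64, error) {
	limit := uint(10)
	return hdb.GetAccountsAPI(
		[]common.TokenID{0, 1}, // filter by token
		&owner,                 // filter by Ethereum address
		nil,                    // bjj must be nil when ethAddr is set
		nil,                    // fromItem: start from the first page
		&limit,
		OrderAsc,
	)
}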

// GetMetrics returns the protocol metrics (transactions per second and per
// batch, batch frequency, average fee) computed over the last 24 hours of
// transactions
func (hdb *HistoryDB) GetMetrics(lastBatchNum common.BatchNum) (*Metrics, error) {
	metricsTotals := &MetricsTotals{}
	metrics := &Metrics{}
	err := meddler.QueryRow(
		hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
		COALESCE (MIN(tx.batch_num), 0) as batch_num, COALESCE (MIN(block.timestamp),
		NOW()) AS min_timestamp, COALESCE (MAX(block.timestamp), NOW()) AS max_timestamp
		FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
		WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	seconds := metricsTotals.MaxTimestamp.Sub(metricsTotals.MinTimestamp).Seconds()
	// Avoid dividing by 0
	if seconds == 0 {
		seconds++
	}
	metrics.TransactionsPerSecond = float64(metricsTotals.TotalTransactions) / seconds
	if (lastBatchNum - metricsTotals.FirstBatchNum) > 0 {
		metrics.TransactionsPerBatch = float64(metricsTotals.TotalTransactions) /
			float64(lastBatchNum-metricsTotals.FirstBatchNum+1)
	} else {
		metrics.TransactionsPerBatch = float64(0)
	}
	err = meddler.QueryRow(
		hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
		COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
		WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	if metricsTotals.TotalBatches > 0 {
		metrics.BatchFrequency = seconds / float64(metricsTotals.TotalBatches)
	} else {
		metrics.BatchFrequency = 0
	}
	if metricsTotals.TotalTransactions > 0 {
		metrics.AvgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
	} else {
		metrics.AvgTransactionFee = 0
	}
	// COUNT(*) counts all accounts, while COUNT(DISTINCT bjj) counts unique
	// BabyJubJub keys, so they are aliased to total_accounts and total_bjjs
	// respectively.
	err = meddler.QueryRow(
		hdb.db, metrics,
		`SELECT COUNT(*) AS total_accounts, COUNT(DISTINCT(bjj)) AS total_bjjs FROM account;`)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return metrics, nil
}
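
// exampleReportMetrics is a hypothetical usage sketch (the helper is an
// assumption for illustration): it computes the 24h metrics relative to the
// latest forged batch and reads back two of the fields populated above.
// lastBatchNum would normally come from the batch table or the synchronizer;
// here it is simply a parameter.
func exampleReportMetrics(hdb *HistoryDB, lastBatchNum common.BatchNum) (float64, float64, error) {
	metrics, err := hdb.GetMetrics(lastBatchNum)
	if err != nil {
		return 0, 0, tracerr.Wrap(err)
	}
	return metrics.TransactionsPerSecond, metrics.AvgTransactionFee, nil
}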

// GetAvgTxFee returns the average transaction fee over the last hour
func (hdb *HistoryDB) GetAvgTxFee() (float64, error) {
	metricsTotals := &MetricsTotals{}
	err := meddler.QueryRow(
		hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
		COALESCE (MIN(tx.batch_num), 0) as batch_num
		FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
		WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`)
	if err != nil {
		return 0, tracerr.Wrap(err)
	}
	err = meddler.QueryRow(
		hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
		COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
		WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
	if err != nil {
		return 0, tracerr.Wrap(err)
	}
	var avgTransactionFee float64
	if metricsTotals.TotalTransactions > 0 {
		avgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
	} else {
		avgTransactionFee = 0
	}
	return avgTransactionFee, nil
}
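
// exampleAvgFee is a hypothetical usage sketch (the helper exists only for
// illustration): it contrasts the 1h average fee returned by GetAvgTxFee with
// the 24h average computed by GetMetrics.
func exampleAvgFee(hdb *HistoryDB, lastBatchNum common.BatchNum) (hourly, daily float64, err error) {
	if hourly, err = hdb.GetAvgTxFee(); err != nil {
		return 0, 0, tracerr.Wrap(err)
	}
	metrics, err := hdb.GetMetrics(lastBatchNum)
	if err != nil {
		return 0, 0, tracerr.Wrap(err)
	}
	return hourly, metrics.AvgTransactionFee, nil
}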