package historydb

import (
	"database/sql"
	"errors"
	"fmt"
	"math"
	"math/big"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"github.com/jmoiron/sqlx"

	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)

// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the history of the rollup
type HistoryDB struct {
	db *sqlx.DB
}

// NewHistoryDB initializes the DB
func NewHistoryDB(db *sqlx.DB) *HistoryDB {
	return &HistoryDB{db: db}
}

// DB returns a pointer to the HistoryDB.db. This method should be used only
// for internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.db
}
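// A minimal usage sketch (not part of this file's API; the DSN below is a
// hypothetical placeholder): HistoryDB is built on top of an already-open
// *sqlx.DB handle, typically a PostgreSQL connection opened with the lib/pq
// driver imported above.
//
//	sqlDB, err := sqlx.Connect("postgres",
//		"postgres://user:pass@localhost:5432/hermez?sslmode=disable")
//	if err != nil {
//		return tracerr.Wrap(err)
//	}
//	hdb := NewHistoryDB(sqlDB)
//	defer hdb.DB().Close()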
// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }
func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return meddler.Insert(d, "block", block)
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return hdb.addBlocks(hdb.db, blocks)
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks[:],
	)
}

// GetBlock retrieves a block from the DB, given a block number
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block,
		"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
	)
	return block, tracerr.Wrap(err)
}

// GetAllBlocks retrieves all blocks from the DB
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block;",
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetBlocks retrieves blocks from the DB, given a range of block numbers defined by from and to
func (hdb *HistoryDB) GetBlocks(from, to int64) ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2;",
		from, to,
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetLastBlock retrieves the block with the highest block number from the DB
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
	)
	return block, tracerr.Wrap(err)
}
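// Usage sketch for the block getters (illustrative only): GetBlocks treats the
// range as half-open, so `from` is included and `to` is excluded.
//
//	blocks, err := hdb.GetBlocks(100, 110) // blocks 100..109; 110 excluded
//	if err != nil {
//		return tracerr.Wrap(err)
//	}
//	last, err := hdb.GetLastBlock() // block with the highest eth_block_num
//	if err != nil {
//		return tracerr.Wrap(err)
//	}
//	_, _ = blocks, last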
// AddBatch inserts a Batch into the DB
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }
func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
	// Calculate total collected fees in USD
	// Get IDs of collected tokens for fees
	tokenIDs := []common.TokenID{}
	for id := range batch.CollectedFees {
		tokenIDs = append(tokenIDs, id)
	}
	// Get USD value of the tokens
	type tokenPrice struct {
		ID       common.TokenID `meddler:"token_id"`
		USD      *float64       `meddler:"usd"`
		Decimals int            `meddler:"decimals"`
	}
	var tokenPrices []*tokenPrice
	if len(tokenIDs) > 0 {
		query, args, err := sqlx.In(
			"SELECT token_id, usd, decimals FROM token WHERE token_id IN (?)",
			tokenIDs,
		)
		if err != nil {
			return tracerr.Wrap(err)
		}
		query = hdb.db.Rebind(query)
		if err := meddler.QueryAll(
			hdb.db, &tokenPrices, query, args...,
		); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Calculate total collected
	var total float64
	for _, tokenPrice := range tokenPrices {
		if tokenPrice.USD == nil {
			continue
		}
		f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
		amount, _ := f.Float64()
		total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
	}
	batch.TotalFeesUSD = &total
	// Insert to DB
	return meddler.Insert(d, "batch", batch)
}
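// Worked example of the USD conversion above (illustrative numbers): for a
// token with 18 decimals priced at 2.50 USD, a collected fee of 3 * 10^18 base
// units contributes 2.50 * (3e18 / 10^18) = 7.50 to TotalFeesUSD.
//
//	price := 2.50
//	fee := new(big.Float).SetInt(big.NewInt(3_000_000_000_000_000_000))
//	amount, _ := fee.Float64()
//	usd := price * (amount / math.Pow(10, 18)) // 7.5
//	_ = usd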
// AddBatches inserts Batches into the DB
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
	return hdb.addBatches(hdb.db, batches)
}

func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
	for i := 0; i < len(batches); i++ {
		if err := hdb.addBatch(d, &batches[i]); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}

// GetBatchAPI returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
	batch := &BatchAPI{}
	return batch, meddler.QueryRow(
		hdb.db, batch,
		`SELECT batch.*, block.timestamp, block.hash
		FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
		WHERE batch_num = $1;`, batchNum,
	)
}

// GetBatchesAPI returns the batches applying the given filters
func (hdb *HistoryDB) GetBatchesAPI(
	minBatchNum, maxBatchNum, slotNum *uint,
	forgerAddr *ethCommon.Address,
	fromItem, limit *uint, order string,
) ([]BatchAPI, uint64, error) {
	var query string
	var args []interface{}
	queryStr := `SELECT batch.*, block.timestamp, block.hash,
	count(*) OVER() AS total_items
	FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num `
	// Apply filters
	nextIsAnd := false
	// minBatchNum filter
	if minBatchNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.batch_num > ? "
		args = append(args, minBatchNum)
		nextIsAnd = true
	}
	// maxBatchNum filter
	if maxBatchNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.batch_num < ? "
		args = append(args, maxBatchNum)
		nextIsAnd = true
	}
	// slotNum filter
	if slotNum != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.slot_num = ? "
		args = append(args, slotNum)
		nextIsAnd = true
	}
	// forgerAddr filter
	if forgerAddr != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		queryStr += "batch.forger_addr = ? "
		args = append(args, forgerAddr)
		nextIsAnd = true
	}
	// pagination
	if fromItem != nil {
		if nextIsAnd {
			queryStr += "AND "
		} else {
			queryStr += "WHERE "
		}
		if order == OrderAsc {
			queryStr += "batch.item_id >= ? "
		} else {
			queryStr += "batch.item_id <= ? "
		}
		args = append(args, fromItem)
	}
	queryStr += "ORDER BY batch.item_id "
	if order == OrderAsc {
		queryStr += " ASC "
	} else {
		queryStr += " DESC "
	}
	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
	query = hdb.db.Rebind(queryStr)
	// log.Debug(query)
	batchPtrs := []*BatchAPI{}
	if err := meddler.QueryAll(hdb.db, &batchPtrs, query, args...); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	batches := db.SlicePtrsToSlice(batchPtrs).([]BatchAPI)
	if len(batches) == 0 {
		return nil, 0, tracerr.Wrap(sql.ErrNoRows)
	}
	return batches, batches[0].TotalItems - uint64(len(batches)), nil
}
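// Keyset-pagination sketch for GetBatchesAPI (all concrete values are
// hypothetical, and it is assumed that BatchAPI exposes the item_id column as
// ItemID): filter by slot, walk forward by item_id, and stop once the returned
// pending-items counter reaches zero.
//
//	slot := uint(42)
//	limit := uint(20)
//	var fromItem *uint // nil on the first page
//	for {
//		page, pending, err := hdb.GetBatchesAPI(nil, nil, &slot, nil, fromItem, &limit, OrderAsc)
//		if errors.Is(err, sql.ErrNoRows) {
//			break // no (more) results
//		} else if err != nil {
//			return tracerr.Wrap(err)
//		}
//		next := uint(page[len(page)-1].ItemID) + 1
//		fromItem = &next
//		if pending == 0 {
//			break
//		}
//	}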
// GetAllBatches retrieves all batches from the DB
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.db, &batches,
		`SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
		batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
		batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch;`,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetBatches retrieves batches from the DB, given a range of batch numbers defined by from and to
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.db, &batches,
		"SELECT * FROM batch WHERE $1 <= batch_num AND batch_num < $2;",
		from, to,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetBatchesLen retrieves the number of batches from the DB, given a slotNum
func (hdb *HistoryDB) GetBatchesLen(slotNum int64) (int, error) {
	row := hdb.db.QueryRow("SELECT COUNT(*) FROM batch WHERE slot_num = $1;", slotNum)
	var batchesLen int
	return batchesLen, row.Scan(&batchesLen)
}

// GetLastBatchNum returns the BatchNum of the latest forged batch
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
	row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
	var batchNum common.BatchNum
	return batchNum, row.Scan(&batchNum)
}

// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
	row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
		WHERE forge_l1_txs_num IS NOT NULL
		ORDER BY batch_num DESC LIMIT 1;`)
	var blockNum int64
	return blockNum, row.Scan(&blockNum)
}

// GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
// batches. If there's no batch in the DB (nil, nil) is returned.
func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
	row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
	lastL1TxsNum := new(int64)
	return lastL1TxsNum, row.Scan(&lastL1TxsNum)
}
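// Sketch of how a caller (e.g. the coordinator) might use GetLastL1BatchBlockNum
// to decide whether the next batch must include L1 user txs; currentBlockNum
// and forgeL1L2BatchTimeout are hypothetical caller-side values.
//
//	lastL1BatchBlock, err := hdb.GetLastL1BatchBlockNum()
//	if err != nil && !errors.Is(err, sql.ErrNoRows) {
//		return false, tracerr.Wrap(err)
//	}
//	shouldL1L2Batch := errors.Is(err, sql.ErrNoRows) || // no L1Batch forged yet
//		currentBlockNum >= lastL1BatchBlock+forgeL1L2BatchTimeout
//	return shouldL1L2Batch, nil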
// Reorg deletes all the information that was added into the DB after the
// lastValidBlock. If lastValidBlock is negative, all block information is
// deleted.
func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
	var err error
	if lastValidBlock < 0 {
		_, err = hdb.db.Exec("DELETE FROM block;")
	} else {
		_, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
	}
	return tracerr.Wrap(err)
}
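// Reorg sketch (assumes the schema links the other tables to block so that
// deleting a block also drops the rows that were added at that block): when a
// synchronizer detects that the chain has reorganized, it rolls the DB back to
// the last block that is still canonical and resumes syncing from there.
//
//	if err := hdb.Reorg(lastCommonBlockNum); err != nil {
//		return tracerr.Wrap(err)
//	}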
// AddBids inserts Bids into the DB
func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) }
func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
	// TODO: check the coordinator info
	return db.BulkInsert(
		d,
		"INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
		bids[:],
	)
}

// GetAllBids retrieves all bids from the DB
func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
	var bids []*common.Bid
	err := meddler.QueryAll(
		hdb.db, &bids,
		`SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid;`,
	)
	return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err)
}

// GetBestBidAPI returns the best bid in a specific slot by slotNum
func (hdb *HistoryDB) GetBestBidAPI(slotNum *int64) (BidAPI, error) {
	bid := &BidAPI{}
	err := meddler.QueryRow(
		hdb.db, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
		FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
		INNER JOIN coordinator ON bid.bidder_addr = coordinator.bidder_addr
		WHERE slot_num = $1 ORDER BY item_id DESC LIMIT 1;`, slotNum,
	)
	return *bid, tracerr.Wrap(err)
}

// GetBestBidCoordinator returns the forger address of the highest bidder in a slot by slotNum
func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
	bidCoord := &common.BidCoordinator{}
	err := meddler.QueryRow(
		hdb.db, bidCoord,
		`SELECT (
			SELECT default_slot_set_bid_slot_num
			FROM auction_vars
			WHERE default_slot_set_bid_slot_num <= $1
			ORDER BY eth_block_num DESC LIMIT 1
		),
		bid.slot_num, bid.bid_value, bid.bidder_addr,
		coordinator.forger_addr, coordinator.url
		FROM bid
		INNER JOIN coordinator ON bid.bidder_addr = coordinator.bidder_addr
		WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`,
		slotNum)
	return bidCoord, tracerr.Wrap(err)
}
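// Sketch of a slot-forging check (assumes BidCoordinator exposes forger_addr
// as Forger; pastDeadline is a hypothetical caller-side flag): a coordinator
// may forge if it holds the winning bid, or if the winner has not forged any
// batch in the slot once the forging deadline has passed.
//
//	bidCoord, err := hdb.GetBestBidCoordinator(slotNum)
//	if err != nil && !errors.Is(err, sql.ErrNoRows) {
//		return false, tracerr.Wrap(err)
//	}
//	wonSlot := err == nil && bidCoord.Forger == forgerAddr
//	batchesInSlot, err := hdb.GetBatchesLen(slotNum)
//	if err != nil {
//		return false, tracerr.Wrap(err)
//	}
//	return wonSlot || (pastDeadline && batchesInSlot == 0), nil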
  360. // GetBestBidsAPI returns the best bid in specific slot by slotNum
  361. func (hdb *HistoryDB) GetBestBidsAPI(
  362. minSlotNum, maxSlotNum *int64,
  363. bidderAddr *ethCommon.Address,
  364. limit *uint, order string,
  365. ) ([]BidAPI, uint64, error) {
  366. var query string
  367. var args []interface{}
  368. queryStr := `SELECT b.*, block.timestamp, coordinator.forger_addr, coordinator.url,
  369. COUNT(*) OVER() AS total_items FROM (
  370. SELECT slot_num, MAX(item_id) as maxitem
  371. FROM bid GROUP BY slot_num
  372. )
  373. AS x INNER JOIN bid AS b ON b.item_id = x.maxitem
  374. INNER JOIN block ON b.eth_block_num = block.eth_block_num
  375. INNER JOIN coordinator ON b.bidder_addr = coordinator.bidder_addr
  376. WHERE (b.slot_num >= ? AND b.slot_num <= ?)`
  377. args = append(args, minSlotNum)
  378. args = append(args, maxSlotNum)
  379. // Apply filters
  380. if bidderAddr != nil {
  381. queryStr += " AND b.bidder_addr = ? "
  382. args = append(args, bidderAddr)
  383. }
  384. queryStr += " ORDER BY b.slot_num "
  385. if order == OrderAsc {
  386. queryStr += "ASC "
  387. } else {
  388. queryStr += "DESC "
  389. }
  390. if limit != nil {
  391. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  392. }
  393. query = hdb.db.Rebind(queryStr)
  394. bidPtrs := []*BidAPI{}
  395. if err := meddler.QueryAll(hdb.db, &bidPtrs, query, args...); err != nil {
  396. return nil, 0, tracerr.Wrap(err)
  397. }
  398. // log.Debug(query)
  399. bids := db.SlicePtrsToSlice(bidPtrs).([]BidAPI)
  400. if len(bids) == 0 {
  401. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  402. }
  403. return bids, bids[0].TotalItems - uint64(len(bids)), nil
  404. }
  405. // GetBidsAPI return the bids applying the given filters
  406. func (hdb *HistoryDB) GetBidsAPI(
  407. slotNum *int64, forgerAddr *ethCommon.Address,
  408. fromItem, limit *uint, order string,
  409. ) ([]BidAPI, uint64, error) {
  410. var query string
  411. var args []interface{}
  412. queryStr := `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url,
  413. COUNT(*) OVER() AS total_items
  414. FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
  415. INNER JOIN coordinator ON bid.bidder_addr = coordinator.bidder_addr `
  416. // Apply filters
  417. nextIsAnd := false
  418. // slotNum filter
  419. if slotNum != nil {
  420. if nextIsAnd {
  421. queryStr += "AND "
  422. } else {
  423. queryStr += "WHERE "
  424. }
  425. queryStr += "bid.slot_num = ? "
  426. args = append(args, slotNum)
  427. nextIsAnd = true
  428. }
  429. // slotNum filter
  430. if forgerAddr != nil {
  431. if nextIsAnd {
  432. queryStr += "AND "
  433. } else {
  434. queryStr += "WHERE "
  435. }
  436. queryStr += "bid.bidder_addr = ? "
  437. args = append(args, forgerAddr)
  438. nextIsAnd = true
  439. }
  440. if fromItem != nil {
  441. if nextIsAnd {
  442. queryStr += "AND "
  443. } else {
  444. queryStr += "WHERE "
  445. }
  446. if order == OrderAsc {
  447. queryStr += "bid.item_id >= ? "
  448. } else {
  449. queryStr += "bid.item_id <= ? "
  450. }
  451. args = append(args, fromItem)
  452. }
  453. // pagination
  454. queryStr += "ORDER BY bid.item_id "
  455. if order == OrderAsc {
  456. queryStr += "ASC "
  457. } else {
  458. queryStr += "DESC "
  459. }
  460. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  461. query, argsQ, err := sqlx.In(queryStr, args...)
  462. if err != nil {
  463. return nil, 0, tracerr.Wrap(err)
  464. }
  465. query = hdb.db.Rebind(query)
  466. bids := []*BidAPI{}
  467. if err := meddler.QueryAll(hdb.db, &bids, query, argsQ...); err != nil {
  468. return nil, 0, tracerr.Wrap(err)
  469. }
  470. if len(bids) == 0 {
  471. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  472. }
  473. return db.SlicePtrsToSlice(bids).([]BidAPI), bids[0].TotalItems - uint64(len(bids)), nil
  474. }
  475. // AddCoordinators insert Coordinators into the DB
  476. func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
  477. return hdb.addCoordinators(hdb.db, coordinators)
  478. }
  479. func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
  480. return db.BulkInsert(
  481. d,
  482. "INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
  483. coordinators[:],
  484. )
  485. }
  486. // AddExitTree insert Exit tree into the DB
  487. func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
  488. return hdb.addExitTree(hdb.db, exitTree)
  489. }
  490. func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
  491. return db.BulkInsert(
  492. d,
  493. "INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
  494. "instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
  495. exitTree[:],
  496. )
  497. }
  498. func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
  499. rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error {
  500. type withdrawal struct {
  501. BatchNum int64 `db:"batch_num"`
  502. AccountIdx int64 `db:"account_idx"`
  503. InstantWithdrawn *int64 `db:"instant_withdrawn"`
  504. DelayedWithdrawRequest *int64 `db:"delayed_withdraw_request"`
  505. DelayedWithdrawn *int64 `db:"delayed_withdrawn"`
  506. Owner *ethCommon.Address `db:"owner"`
  507. Token *ethCommon.Address `db:"token"`
  508. }
  509. withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals))
  510. for i := range rollupWithdrawals {
  511. info := &rollupWithdrawals[i]
  512. withdrawals[i] = withdrawal{
  513. BatchNum: int64(info.NumExitRoot),
  514. AccountIdx: int64(info.Idx),
  515. }
  516. if info.InstantWithdraw {
  517. withdrawals[i].InstantWithdrawn = &blockNum
  518. } else {
  519. withdrawals[i].DelayedWithdrawRequest = &blockNum
  520. withdrawals[i].Owner = &info.Owner
  521. withdrawals[i].Token = &info.Token
  522. }
  523. }
  524. for i := range wDelayerWithdrawals {
  525. info := &wDelayerWithdrawals[i]
  526. withdrawals[len(rollupWithdrawals)+i] = withdrawal{
  527. DelayedWithdrawn: &blockNum,
  528. Owner: &info.Owner,
  529. Token: &info.Token,
  530. }
  531. }
  532. // In VALUES we set an initial row of NULLs to set the types of each
  533. // variable passed as argument
  534. const query string = `
  535. UPDATE exit_tree e SET
  536. instant_withdrawn = d.instant_withdrawn,
  537. delayed_withdraw_request = CASE
  538. WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request
  539. ELSE d.delayed_withdraw_request
  540. END,
  541. delayed_withdrawn = d.delayed_withdrawn,
  542. owner = d.owner,
  543. token = d.token
  544. FROM (VALUES
  545. (NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA),
  546. (:batch_num,
  547. :account_idx,
  548. :instant_withdrawn,
  549. :delayed_withdraw_request,
  550. :delayed_withdrawn,
  551. :owner,
  552. :token)
  553. ) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token)
  554. WHERE
  555. (d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR
  556. (d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token)
  557. `
  558. if len(withdrawals) > 0 {
  559. if _, err := sqlx.NamedQuery(d, query, withdrawals); err != nil {
  560. return tracerr.Wrap(err)
  561. }
  562. }
  563. return nil
  564. }
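// The UPDATE ... FROM (VALUES ...) pattern used above appears again in
// setL1UserTxEffectiveAmounts. A minimal sketch of the same technique with a
// hypothetical table and columns (not part of this package), where the leading
// all-NULL row only pins the column types so that sqlx.NamedQuery can expand a
// slice of structs into the remaining rows (the ::::-style cast escaping
// follows the convention used in the queries above):
//
//	type balanceUpdate struct {
//		Idx     int64 `db:"idx"`
//		Balance int64 `db:"balance"`
//	}
//	const balanceQuery = `
//	UPDATE some_table t SET balance = u.balance
//	FROM (VALUES
//		(NULL::::BIGINT, NULL::::BIGINT),
//		(:idx, :balance)
//	) AS u (idx, balance)
//	WHERE t.idx = u.idx`
//	updates := []balanceUpdate{{Idx: 256, Balance: 100}, {Idx: 257, Balance: 50}}
//	if _, err := sqlx.NamedQuery(d, balanceQuery, updates); err != nil {
//		return tracerr.Wrap(err)
//	}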
565. // AddToken inserts a token into the DB
  566. func (hdb *HistoryDB) AddToken(token *common.Token) error {
  567. return meddler.Insert(hdb.db, "token", token)
  568. }
569. // AddTokens inserts tokens into the DB
  570. func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) }
  571. func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
  572. return db.BulkInsert(
  573. d,
  574. `INSERT INTO token (
  575. token_id,
  576. eth_block_num,
  577. eth_addr,
  578. name,
  579. symbol,
  580. decimals
  581. ) VALUES %s;`,
  582. tokens[:],
  583. )
  584. }
  585. // UpdateTokenValue updates the USD value of a token
  586. func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
  587. _, err := hdb.db.Exec(
  588. "UPDATE token SET usd = $1 WHERE symbol = $2;",
  589. value, tokenSymbol,
  590. )
  591. return tracerr.Wrap(err)
  592. }
  593. // GetToken returns a token from the DB given a TokenID
  594. func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
  595. token := &TokenWithUSD{}
  596. err := meddler.QueryRow(
  597. hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
  598. )
  599. return token, tracerr.Wrap(err)
  600. }
  601. // GetAllTokens returns all tokens from the DB
  602. func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
  603. var tokens []*TokenWithUSD
  604. err := meddler.QueryAll(
  605. hdb.db, &tokens,
  606. "SELECT * FROM token ORDER BY token_id;",
  607. )
  608. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
  609. }
  610. // GetTokens returns a list of tokens from the DB
  611. func (hdb *HistoryDB) GetTokens(
  612. ids []common.TokenID, symbols []string, name string, fromItem,
  613. limit *uint, order string,
  614. ) ([]TokenWithUSD, uint64, error) {
  615. var query string
  616. var args []interface{}
  617. queryStr := `SELECT * , COUNT(*) OVER() AS total_items FROM token `
  618. // Apply filters
  619. nextIsAnd := false
  620. if len(ids) > 0 {
  621. queryStr += "WHERE token_id IN (?) "
  622. nextIsAnd = true
  623. args = append(args, ids)
  624. }
  625. if len(symbols) > 0 {
  626. if nextIsAnd {
  627. queryStr += "AND "
  628. } else {
  629. queryStr += "WHERE "
  630. }
  631. queryStr += "symbol IN (?) "
  632. args = append(args, symbols)
  633. nextIsAnd = true
  634. }
  635. if name != "" {
  636. if nextIsAnd {
  637. queryStr += "AND "
  638. } else {
  639. queryStr += "WHERE "
  640. }
  641. queryStr += "name ~ ? "
  642. args = append(args, name)
  643. nextIsAnd = true
  644. }
  645. if fromItem != nil {
  646. if nextIsAnd {
  647. queryStr += "AND "
  648. } else {
  649. queryStr += "WHERE "
  650. }
  651. if order == OrderAsc {
  652. queryStr += "item_id >= ? "
  653. } else {
  654. queryStr += "item_id <= ? "
  655. }
  656. args = append(args, fromItem)
  657. }
  658. // pagination
  659. queryStr += "ORDER BY item_id "
  660. if order == OrderAsc {
  661. queryStr += "ASC "
  662. } else {
  663. queryStr += "DESC "
  664. }
  665. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  666. query, argsQ, err := sqlx.In(queryStr, args...)
  667. if err != nil {
  668. return nil, 0, tracerr.Wrap(err)
  669. }
  670. query = hdb.db.Rebind(query)
  671. tokens := []*TokenWithUSD{}
  672. if err := meddler.QueryAll(hdb.db, &tokens, query, argsQ...); err != nil {
  673. return nil, 0, tracerr.Wrap(err)
  674. }
  675. if len(tokens) == 0 {
  676. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  677. }
678. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tokens[0].TotalItems - uint64(len(tokens)), nil
  679. }
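// Usage sketch for GetTokens (illustrative values only): filter by a set of
// token IDs, fetch the first page of 20 items in ascending item_id order, and
// use the returned pending-items counter to decide whether more pages remain.
//
//	limit := uint(20)
//	tokens, pendingItems, err := hdb.GetTokens(
//		[]common.TokenID{1, 2, 3}, // ids filter (expanded through sqlx.In)
//		nil, "",                   // no symbol or name filter
//		nil, &limit, OrderAsc,     // first page, ascending item_id
//	)
//	if err != nil {
//		// sql.ErrNoRows means no token matched the filters
//	}
//	_, _ = tokens, pendingItems // pendingItems > 0 means more pages are available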
  680. // GetTokenSymbols returns all the token symbols from the DB
  681. func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
  682. var tokenSymbols []string
  683. rows, err := hdb.db.Query("SELECT symbol FROM token;")
  684. if err != nil {
  685. return nil, tracerr.Wrap(err)
686. }
defer rows.Close() // release the rows iterator on every return path
687. sym := new(string)
  688. for rows.Next() {
  689. err = rows.Scan(sym)
  690. if err != nil {
  691. return nil, tracerr.Wrap(err)
  692. }
  693. tokenSymbols = append(tokenSymbols, *sym)
694. }
if err := rows.Err(); err != nil {
return nil, tracerr.Wrap(err)
}
695. return tokenSymbols, nil
  696. }
697. // AddAccounts inserts accounts into the DB
  698. func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
  699. return hdb.addAccounts(hdb.db, accounts)
  700. }
  701. func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
  702. return db.BulkInsert(
  703. d,
  704. `INSERT INTO account (
  705. idx,
  706. token_id,
  707. batch_num,
  708. bjj,
  709. eth_addr
  710. ) VALUES %s;`,
  711. accounts[:],
  712. )
  713. }
  714. // GetAllAccounts returns a list of accounts from the DB
  715. func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
  716. var accs []*common.Account
  717. err := meddler.QueryAll(
  718. hdb.db, &accs,
  719. "SELECT * FROM account ORDER BY idx;",
  720. )
  721. return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
  722. }
  723. // AddL1Txs inserts L1 txs to the DB. USD and LoadAmountUSD will be set automatically before storing the tx.
  724. // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
725. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
726. // EffectiveAmount and EffectiveLoadAmount are set to default values by the DB.
  727. func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error { return hdb.addL1Txs(hdb.db, l1txs) }
  728. // addL1Txs inserts L1 txs to the DB. USD and LoadAmountUSD will be set automatically before storing the tx.
  729. // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
730. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
731. // EffectiveAmount and EffectiveLoadAmount are set to default values by the DB.
  732. func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
  733. txs := []txWrite{}
  734. for i := 0; i < len(l1txs); i++ {
  735. af := new(big.Float).SetInt(l1txs[i].Amount)
  736. amountFloat, _ := af.Float64()
  737. laf := new(big.Float).SetInt(l1txs[i].LoadAmount)
  738. loadAmountFloat, _ := laf.Float64()
  739. txs = append(txs, txWrite{
  740. // Generic
  741. IsL1: true,
  742. TxID: l1txs[i].TxID,
  743. Type: l1txs[i].Type,
  744. Position: l1txs[i].Position,
  745. FromIdx: &l1txs[i].FromIdx,
  746. ToIdx: l1txs[i].ToIdx,
  747. Amount: l1txs[i].Amount,
  748. AmountFloat: amountFloat,
  749. TokenID: l1txs[i].TokenID,
  750. BatchNum: l1txs[i].BatchNum,
  751. EthBlockNum: l1txs[i].EthBlockNum,
  752. // L1
  753. ToForgeL1TxsNum: l1txs[i].ToForgeL1TxsNum,
  754. UserOrigin: &l1txs[i].UserOrigin,
  755. FromEthAddr: &l1txs[i].FromEthAddr,
  756. FromBJJ: l1txs[i].FromBJJ,
  757. LoadAmount: l1txs[i].LoadAmount,
  758. LoadAmountFloat: &loadAmountFloat,
  759. })
  760. }
  761. return hdb.addTxs(d, txs)
  762. }
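// Usage sketch for AddL1Txs (illustrative values, other fields omitted): a
// user-origin L1 tx is stored with a nil BatchNum and a ToForgeL1TxsNum set,
// so the DB trigger can fill the batch number once the forging batch is
// inserted; Amount and LoadAmount must be non-nil *big.Int values (zero when
// unused).
//
//	toForge := int64(5)
//	tx := common.L1Tx{
//		UserOrigin:      true,
//		ToForgeL1TxsNum: &toForge,
//		BatchNum:        nil, // set later by the DB trigger
//		Amount:          big.NewInt(0),
//		LoadAmount:      big.NewInt(1000),
//		// TxID, FromIdx, TokenID, EthBlockNum, ... as obtained by the synchronizer
//	}
//	if err := hdb.AddL1Txs([]common.L1Tx{tx}); err != nil {
//		// handle error
//	}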
  763. // AddL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  764. func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error { return hdb.addL2Txs(hdb.db, l2txs) }
  765. // addL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  766. func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error {
  767. txs := []txWrite{}
  768. for i := 0; i < len(l2txs); i++ {
  769. f := new(big.Float).SetInt(l2txs[i].Amount)
  770. amountFloat, _ := f.Float64()
  771. txs = append(txs, txWrite{
  772. // Generic
  773. IsL1: false,
  774. TxID: l2txs[i].TxID,
  775. Type: l2txs[i].Type,
  776. Position: l2txs[i].Position,
  777. FromIdx: &l2txs[i].FromIdx,
  778. ToIdx: l2txs[i].ToIdx,
  779. Amount: l2txs[i].Amount,
  780. AmountFloat: amountFloat,
  781. BatchNum: &l2txs[i].BatchNum,
  782. EthBlockNum: l2txs[i].EthBlockNum,
  783. // L2
  784. Fee: &l2txs[i].Fee,
  785. Nonce: &l2txs[i].Nonce,
  786. })
  787. }
  788. return hdb.addTxs(d, txs)
  789. }
  790. func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
  791. return db.BulkInsert(
  792. d,
  793. `INSERT INTO tx (
  794. is_l1,
  795. id,
  796. type,
  797. position,
  798. from_idx,
  799. to_idx,
  800. amount,
  801. amount_f,
  802. token_id,
  803. batch_num,
  804. eth_block_num,
  805. to_forge_l1_txs_num,
  806. user_origin,
  807. from_eth_addr,
  808. from_bjj,
  809. load_amount,
  810. load_amount_f,
  811. fee,
  812. nonce
  813. ) VALUES %s;`,
  814. txs[:],
  815. )
  816. }
  817. // // GetTxs returns a list of txs from the DB
  818. // func (hdb *HistoryDB) GetTxs() ([]common.Tx, error) {
  819. // var txs []*common.Tx
  820. // err := meddler.QueryAll(
  821. // hdb.db, &txs,
  822. // `SELECT * FROM tx
  823. // ORDER BY (batch_num, position) ASC`,
  824. // )
  825. // return db.SlicePtrsToSlice(txs).([]common.Tx), err
  826. // }
  827. // GetHistoryTx returns a tx from the DB given a TxID
  828. func (hdb *HistoryDB) GetHistoryTx(txID common.TxID) (*TxAPI, error) {
  829. // TODO: add success flags for L1s
  830. tx := &TxAPI{}
  831. err := meddler.QueryRow(
  832. hdb.db, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
  833. hez_idx(tx.from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
  834. hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
  835. tx.amount, tx.token_id, tx.amount_usd,
  836. tx.batch_num, tx.eth_block_num, tx.to_forge_l1_txs_num, tx.user_origin,
  837. tx.load_amount, tx.load_amount_usd, tx.fee, tx.fee_usd, tx.nonce,
  838. token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
  839. token.eth_addr, token.name, token.symbol, token.decimals, token.usd,
  840. token.usd_update, block.timestamp
  841. FROM tx INNER JOIN token ON tx.token_id = token.token_id
  842. INNER JOIN block ON tx.eth_block_num = block.eth_block_num
  843. WHERE tx.id = $1;`, txID,
  844. )
  845. return tx, tracerr.Wrap(err)
  846. }
847. // GetHistoryTxs returns a list of txs from the DB using the TxAPI struct
848. // and pagination info
  849. func (hdb *HistoryDB) GetHistoryTxs(
  850. ethAddr *ethCommon.Address, bjj *babyjub.PublicKey,
  851. tokenID *common.TokenID, idx *common.Idx, batchNum *uint, txType *common.TxType,
  852. fromItem, limit *uint, order string,
  853. ) ([]TxAPI, uint64, error) {
  854. // TODO: add success flags for L1s
  855. if ethAddr != nil && bjj != nil {
  856. return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
  857. }
  858. var query string
  859. var args []interface{}
  860. queryStr := `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
  861. hez_idx(tx.from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
  862. hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
  863. tx.amount, tx.token_id, tx.amount_usd,
  864. tx.batch_num, tx.eth_block_num, tx.to_forge_l1_txs_num, tx.user_origin,
  865. tx.load_amount, tx.load_amount_usd, tx.fee, tx.fee_usd, tx.nonce,
  866. token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
  867. token.eth_addr, token.name, token.symbol, token.decimals, token.usd,
  868. token.usd_update, block.timestamp, count(*) OVER() AS total_items
  869. FROM tx INNER JOIN token ON tx.token_id = token.token_id
  870. INNER JOIN block ON tx.eth_block_num = block.eth_block_num `
  871. // Apply filters
  872. nextIsAnd := false
  873. // ethAddr filter
  874. if ethAddr != nil {
  875. queryStr += "WHERE (tx.from_eth_addr = ? OR tx.to_eth_addr = ?) "
  876. nextIsAnd = true
  877. args = append(args, ethAddr, ethAddr)
  878. } else if bjj != nil { // bjj filter
  879. queryStr += "WHERE (tx.from_bjj = ? OR tx.to_bjj = ?) "
  880. nextIsAnd = true
  881. args = append(args, bjj, bjj)
  882. }
  883. // tokenID filter
  884. if tokenID != nil {
  885. if nextIsAnd {
  886. queryStr += "AND "
  887. } else {
  888. queryStr += "WHERE "
  889. }
  890. queryStr += "tx.token_id = ? "
  891. args = append(args, tokenID)
  892. nextIsAnd = true
  893. }
  894. // idx filter
  895. if idx != nil {
  896. if nextIsAnd {
  897. queryStr += "AND "
  898. } else {
  899. queryStr += "WHERE "
  900. }
  901. queryStr += "(tx.from_idx = ? OR tx.to_idx = ?) "
  902. args = append(args, idx, idx)
  903. nextIsAnd = true
  904. }
  905. // batchNum filter
  906. if batchNum != nil {
  907. if nextIsAnd {
  908. queryStr += "AND "
  909. } else {
  910. queryStr += "WHERE "
  911. }
  912. queryStr += "tx.batch_num = ? "
  913. args = append(args, batchNum)
  914. nextIsAnd = true
  915. }
  916. // txType filter
  917. if txType != nil {
  918. if nextIsAnd {
  919. queryStr += "AND "
  920. } else {
  921. queryStr += "WHERE "
  922. }
  923. queryStr += "tx.type = ? "
  924. args = append(args, txType)
  925. nextIsAnd = true
  926. }
  927. if fromItem != nil {
  928. if nextIsAnd {
  929. queryStr += "AND "
  930. } else {
  931. queryStr += "WHERE "
  932. }
  933. if order == OrderAsc {
  934. queryStr += "tx.item_id >= ? "
  935. } else {
  936. queryStr += "tx.item_id <= ? "
  937. }
  938. args = append(args, fromItem)
  939. nextIsAnd = true
  940. }
  941. if nextIsAnd {
  942. queryStr += "AND "
  943. } else {
  944. queryStr += "WHERE "
  945. }
  946. queryStr += "tx.batch_num IS NOT NULL "
  947. // pagination
  948. queryStr += "ORDER BY tx.item_id "
  949. if order == OrderAsc {
  950. queryStr += " ASC "
  951. } else {
  952. queryStr += " DESC "
  953. }
  954. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  955. query = hdb.db.Rebind(queryStr)
  956. // log.Debug(query)
  957. txsPtrs := []*TxAPI{}
  958. if err := meddler.QueryAll(hdb.db, &txsPtrs, query, args...); err != nil {
  959. return nil, 0, tracerr.Wrap(err)
  960. }
  961. txs := db.SlicePtrsToSlice(txsPtrs).([]TxAPI)
  962. if len(txs) == 0 {
  963. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  964. }
  965. return txs, txs[0].TotalItems - uint64(len(txs)), nil
  966. }
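// Usage sketch for GetHistoryTxs (illustrative): ethAddr and bjj are mutually
// exclusive filters, all other filters are optional and ANDed together, and
// only forged txs (batch_num NOT NULL) are returned.
//
//	var addr ethCommon.Address // some account address, assumed known
//	limit := uint(50)
//	txs, pendingItems, err := hdb.GetHistoryTxs(
//		&addr, nil,            // filter by Ethereum address, so bjj must be nil
//		nil, nil, nil, nil,    // no tokenID, idx, batchNum or txType filter
//		nil, &limit, OrderAsc, // first page, ascending item_id
//	)
//	if err != nil {
//		// sql.ErrNoRows means no tx matched the filters
//	}
//	_, _ = txs, pendingItems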
967. // GetAllExits returns all exits from the DB
  968. func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
  969. var exits []*common.ExitInfo
  970. err := meddler.QueryAll(
  971. hdb.db, &exits,
  972. `SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
  973. exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
  974. exit_tree.delayed_withdrawn FROM exit_tree;`,
  975. )
  976. return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err)
  977. }
978. // GetExitAPI returns an exit from the DB
  979. func (hdb *HistoryDB) GetExitAPI(batchNum *uint, idx *common.Idx) (*ExitAPI, error) {
  980. exit := &ExitAPI{}
  981. err := meddler.QueryRow(
  982. hdb.db, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
  983. hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
  984. exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
  985. exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn,
  986. token.token_id, token.item_id AS token_item_id,
  987. token.eth_block_num AS token_block, token.eth_addr, token.name, token.symbol,
  988. token.decimals, token.usd, token.usd_update
  989. FROM exit_tree INNER JOIN account ON exit_tree.account_idx = account.idx
  990. INNER JOIN token ON account.token_id = token.token_id
  991. WHERE exit_tree.batch_num = $1 AND exit_tree.account_idx = $2;`, batchNum, idx,
  992. )
  993. return exit, tracerr.Wrap(err)
  994. }
  995. // GetExitsAPI returns a list of exits from the DB and pagination info
  996. func (hdb *HistoryDB) GetExitsAPI(
  997. ethAddr *ethCommon.Address, bjj *babyjub.PublicKey, tokenID *common.TokenID,
  998. idx *common.Idx, batchNum *uint, onlyPendingWithdraws *bool,
  999. fromItem, limit *uint, order string,
  1000. ) ([]ExitAPI, uint64, error) {
  1001. if ethAddr != nil && bjj != nil {
  1002. return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
  1003. }
  1004. var query string
  1005. var args []interface{}
  1006. queryStr := `SELECT exit_tree.item_id, exit_tree.batch_num,
  1007. hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
  1008. exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
  1009. exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn,
  1010. token.token_id, token.item_id AS token_item_id,
  1011. token.eth_block_num AS token_block, token.eth_addr, token.name, token.symbol,
  1012. token.decimals, token.usd, token.usd_update, COUNT(*) OVER() AS total_items
  1013. FROM exit_tree INNER JOIN account ON exit_tree.account_idx = account.idx
  1014. INNER JOIN token ON account.token_id = token.token_id `
  1015. // Apply filters
  1016. nextIsAnd := false
  1017. // ethAddr filter
  1018. if ethAddr != nil {
  1019. queryStr += "WHERE account.eth_addr = ? "
  1020. nextIsAnd = true
  1021. args = append(args, ethAddr)
  1022. } else if bjj != nil { // bjj filter
  1023. queryStr += "WHERE account.bjj = ? "
  1024. nextIsAnd = true
  1025. args = append(args, bjj)
  1026. }
  1027. // tokenID filter
  1028. if tokenID != nil {
  1029. if nextIsAnd {
  1030. queryStr += "AND "
  1031. } else {
  1032. queryStr += "WHERE "
  1033. }
  1034. queryStr += "account.token_id = ? "
  1035. args = append(args, tokenID)
  1036. nextIsAnd = true
  1037. }
  1038. // idx filter
  1039. if idx != nil {
  1040. if nextIsAnd {
  1041. queryStr += "AND "
  1042. } else {
  1043. queryStr += "WHERE "
  1044. }
  1045. queryStr += "exit_tree.account_idx = ? "
  1046. args = append(args, idx)
  1047. nextIsAnd = true
  1048. }
  1049. // batchNum filter
  1050. if batchNum != nil {
  1051. if nextIsAnd {
  1052. queryStr += "AND "
  1053. } else {
  1054. queryStr += "WHERE "
  1055. }
  1056. queryStr += "exit_tree.batch_num = ? "
  1057. args = append(args, batchNum)
  1058. nextIsAnd = true
  1059. }
  1060. // onlyPendingWithdraws
  1061. if onlyPendingWithdraws != nil {
  1062. if *onlyPendingWithdraws {
  1063. if nextIsAnd {
  1064. queryStr += "AND "
  1065. } else {
  1066. queryStr += "WHERE "
  1067. }
  1068. queryStr += "(exit_tree.instant_withdrawn IS NULL AND exit_tree.delayed_withdrawn IS NULL) "
  1069. nextIsAnd = true
  1070. }
  1071. }
  1072. if fromItem != nil {
  1073. if nextIsAnd {
  1074. queryStr += "AND "
  1075. } else {
  1076. queryStr += "WHERE "
  1077. }
  1078. if order == OrderAsc {
  1079. queryStr += "exit_tree.item_id >= ? "
  1080. } else {
  1081. queryStr += "exit_tree.item_id <= ? "
  1082. }
  1083. args = append(args, fromItem)
  1084. // nextIsAnd = true
  1085. }
  1086. // pagination
  1087. queryStr += "ORDER BY exit_tree.item_id "
  1088. if order == OrderAsc {
  1089. queryStr += " ASC "
  1090. } else {
  1091. queryStr += " DESC "
  1092. }
  1093. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  1094. query = hdb.db.Rebind(queryStr)
  1095. // log.Debug(query)
  1096. exits := []*ExitAPI{}
  1097. if err := meddler.QueryAll(hdb.db, &exits, query, args...); err != nil {
  1098. return nil, 0, tracerr.Wrap(err)
  1099. }
  1100. if len(exits) == 0 {
  1101. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  1102. }
  1103. return db.SlicePtrsToSlice(exits).([]ExitAPI), exits[0].TotalItems - uint64(len(exits)), nil
  1104. }
  1105. // GetAllL1UserTxs returns all L1UserTxs from the DB
  1106. func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
  1107. var txs []*common.L1Tx
  1108. err := meddler.QueryAll(
  1109. hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
  1110. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  1111. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  1112. tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
  1113. tx.load_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.load_amount_success THEN tx.load_amount ELSE '\x' END) AS effective_load_amount,
  1114. tx.eth_block_num, tx.type, tx.batch_num
  1115. FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE;`,
  1116. )
  1117. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  1118. }
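// To illustrate the CASE expressions above (values made up): for a forged L1
// user tx whose amount could not be applied, amount_success is stored as
// false, the query returns '\x' and it gets parsed as a big.Int with value 0:
//
//	// tx.Amount          = 100  (what the user requested)
//	// tx.EffectiveAmount = 0    (what was actually applied)
//	// an unforged tx (batch_num IS NULL) has EffectiveAmount == nil instead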
  1119. // GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
  1120. func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
  1121. var txs []*common.L1Tx
  1122. // Since the query specifies that only coordinator txs are returned, it's safe to assume
  1123. // that returned txs will always have effective amounts
  1124. err := meddler.QueryAll(
  1125. hdb.db, &txs,
  1126. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  1127. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  1128. tx.amount, tx.amount AS effective_amount,
  1129. tx.load_amount, tx.load_amount AS effective_load_amount,
  1130. tx.eth_block_num, tx.type, tx.batch_num
  1131. FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE;`,
  1132. )
  1133. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  1134. }
  1135. // GetAllL2Txs returns all L2Txs from the DB
  1136. func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
  1137. var txs []*common.L2Tx
  1138. err := meddler.QueryAll(
  1139. hdb.db, &txs,
  1140. `SELECT tx.id, tx.batch_num, tx.position,
  1141. tx.from_idx, tx.to_idx, tx.amount, tx.fee, tx.nonce,
  1142. tx.type, tx.eth_block_num
  1143. FROM tx WHERE is_l1 = FALSE;`,
  1144. )
  1145. return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err)
  1146. }
  1147. // GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
  1148. func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
  1149. var txs []*common.L1Tx
  1150. err := meddler.QueryAll(
  1151. hdb.db, &txs, // only L1 user txs can have batch_num set to null
  1152. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  1153. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  1154. tx.amount, NULL AS effective_amount,
  1155. tx.load_amount, NULL AS effective_load_amount,
  1156. tx.eth_block_num, tx.type, tx.batch_num
  1157. FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1;`,
  1158. toForgeL1TxsNum,
  1159. )
  1160. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  1161. }
1162. // TODO: Think about changing all the queries that return a last value, to queries that return the next valid value.
1163. // GetLastTxsPosition returns the last tx position for a given to_forge_l1_txs_num
  1164. func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
  1165. row := hdb.db.QueryRow("SELECT MAX(position) FROM tx WHERE to_forge_l1_txs_num = $1;", toForgeL1TxsNum)
  1166. var lastL1TxsPosition int
  1167. return lastL1TxsPosition, row.Scan(&lastL1TxsPosition)
  1168. }
1169. // GetSCVars returns the rollup, auction and wdelayer smart contract variables at their last update.
  1170. func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables,
  1171. *common.WDelayerVariables, error) {
  1172. var rollup common.RollupVariables
  1173. var auction common.AuctionVariables
  1174. var wDelayer common.WDelayerVariables
  1175. if err := meddler.QueryRow(hdb.db, &rollup,
  1176. "SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  1177. return nil, nil, nil, tracerr.Wrap(err)
  1178. }
  1179. if err := meddler.QueryRow(hdb.db, &auction,
  1180. "SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  1181. return nil, nil, nil, tracerr.Wrap(err)
  1182. }
  1183. if err := meddler.QueryRow(hdb.db, &wDelayer,
  1184. "SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  1185. return nil, nil, nil, tracerr.Wrap(err)
  1186. }
  1187. return &rollup, &auction, &wDelayer, nil
  1188. }
  1189. func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error {
  1190. return meddler.Insert(d, "rollup_vars", rollup)
  1191. }
  1192. func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error {
  1193. return meddler.Insert(d, "auction_vars", auction)
  1194. }
  1195. func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error {
  1196. return meddler.Insert(d, "wdelayer_vars", wDelayer)
  1197. }
  1198. // SetInitialSCVars sets the initial state of rollup, auction, wdelayer smart
1199. // contract variables. This initial state is stored linked to block 0, which
1200. // always exists in the DB and is used to store initialization data that always
1201. // exists in the smart contracts.
  1202. func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
  1203. auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
  1204. txn, err := hdb.db.Beginx()
  1205. if err != nil {
  1206. return tracerr.Wrap(err)
  1207. }
  1208. defer func() {
  1209. if err != nil {
  1210. db.Rollback(txn)
  1211. }
  1212. }()
  1213. // Force EthBlockNum to be 0 because it's the block used to link data
  1214. // that belongs to the creation of the smart contracts
  1215. rollup.EthBlockNum = 0
  1216. auction.EthBlockNum = 0
  1217. wDelayer.EthBlockNum = 0
  1218. auction.DefaultSlotSetBidSlotNum = 0
  1219. if err := hdb.setRollupVars(txn, rollup); err != nil {
  1220. return tracerr.Wrap(err)
  1221. }
  1222. if err := hdb.setAuctionVars(txn, auction); err != nil {
  1223. return tracerr.Wrap(err)
  1224. }
  1225. if err := hdb.setWDelayerVars(txn, wDelayer); err != nil {
  1226. return tracerr.Wrap(err)
  1227. }
  1228. return tracerr.Wrap(txn.Commit())
  1229. }
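// Usage sketch for SetInitialSCVars (variable names are assumptions): called
// once with the variables read from the smart contracts at deployment; the
// EthBlockNum of all three is forced to 0 here regardless of what the caller
// sets.
//
//	if err := hdb.SetInitialSCVars(&rollupVars, &auctionVars, &wdelayerVars); err != nil {
//		return tracerr.Wrap(err)
//	}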
  1230. // setL1UserTxEffectiveAmounts sets the EffectiveAmount and EffectiveLoadAmount
  1231. // of the given l1UserTxs (with an UPDATE)
  1232. func (hdb *HistoryDB) setL1UserTxEffectiveAmounts(d sqlx.Ext, txs []common.L1Tx) error {
  1233. // Effective amounts are stored as success flags in the DB, with true value by default
1234. // to reduce the amount of updates. Therefore, only amounts that became ineffective should be
  1235. // updated to become false
  1236. type txUpdate struct {
  1237. ID common.TxID `db:"id"`
  1238. AmountSuccess bool `db:"amount_success"`
  1239. LoadAmountSuccess bool `db:"load_amount_success"`
  1240. }
  1241. txUpdates := []txUpdate{}
  1242. equal := func(a *big.Int, b *big.Int) bool {
  1243. return a.Cmp(b) == 0
  1244. }
  1245. for i := range txs {
  1246. amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount)
  1247. loadAmountSuccess := equal(txs[i].LoadAmount, txs[i].EffectiveLoadAmount)
  1248. if !amountSuccess || !loadAmountSuccess {
  1249. txUpdates = append(txUpdates, txUpdate{
  1250. ID: txs[i].TxID,
  1251. AmountSuccess: amountSuccess,
  1252. LoadAmountSuccess: loadAmountSuccess,
  1253. })
  1254. }
  1255. }
  1256. const query string = `
  1257. UPDATE tx SET
  1258. amount_success = tx_update.amount_success,
  1259. load_amount_success = tx_update.load_amount_success
  1260. FROM (VALUES
  1261. (NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL),
  1262. (:id, :amount_success, :load_amount_success)
  1263. ) as tx_update (id, amount_success, load_amount_success)
  1264. WHERE tx.id = tx_update.id
  1265. `
  1266. if len(txUpdates) > 0 {
  1267. if _, err := sqlx.NamedQuery(d, query, txUpdates); err != nil {
  1268. return tracerr.Wrap(err)
  1269. }
  1270. }
  1271. return nil
  1272. }
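// A small worked example of the success-flag encoding (illustrative values):
//
//	// Amount = 100, EffectiveAmount = 100 -> amount_success stays true (no row emitted)
//	// Amount = 100, EffectiveAmount = 0   -> amount_success = false   (row added to txUpdates)
//	// LoadAmount = 50, EffectiveLoadAmount = 50 -> load_amount_success stays true
//
// Only txs where at least one flag becomes false reach the UPDATE, which keeps
// the bulk statement small because success is the default in the DB.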
  1273. // AddBlockSCData stores all the information of a block retrieved by the
  1274. // Synchronizer. Blocks should be inserted in order, leaving no gaps because
  1275. // the pagination system of the API/DB depends on this. Within blocks, all
  1276. // items should also be in the correct order (Accounts, Tokens, Txs, etc.)
  1277. func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
  1278. txn, err := hdb.db.Beginx()
  1279. if err != nil {
  1280. return tracerr.Wrap(err)
  1281. }
  1282. defer func() {
  1283. if err != nil {
  1284. db.Rollback(txn)
  1285. }
  1286. }()
  1287. // Add block
  1288. if err := hdb.addBlock(txn, &blockData.Block); err != nil {
  1289. return tracerr.Wrap(err)
  1290. }
  1291. // Add Coordinators
  1292. if len(blockData.Auction.Coordinators) > 0 {
  1293. if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil {
  1294. return tracerr.Wrap(err)
  1295. }
  1296. }
  1297. // Add Bids
  1298. if len(blockData.Auction.Bids) > 0 {
  1299. if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil {
  1300. return tracerr.Wrap(err)
  1301. }
  1302. }
  1303. // Add Tokens
  1304. if len(blockData.Rollup.AddedTokens) > 0 {
  1305. if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil {
  1306. return tracerr.Wrap(err)
  1307. }
  1308. }
  1309. // Prepare user L1 txs to be added.
1310. // They must be added before the batch that will forge them (which can be in the same block)
1311. // and after the account they are sent to has been added (which can also be in the same block).
  1312. // Note: insert order is not relevant since item_id will be updated by a DB trigger when
  1313. // the batch that forges those txs is inserted
  1314. userL1s := make(map[common.BatchNum][]common.L1Tx)
  1315. for i := range blockData.Rollup.L1UserTxs {
  1316. batchThatForgesIsInTheBlock := false
  1317. for _, batch := range blockData.Rollup.Batches {
  1318. if batch.Batch.ForgeL1TxsNum != nil &&
  1319. *batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum {
  1320. // Tx is forged in this block. It's guaranteed that:
  1321. // * the first batch of the block won't forge user L1 txs that have been added in this block
  1322. // * batch nums are sequential therefore it's safe to add the tx at batch.BatchNum -1
  1323. batchThatForgesIsInTheBlock = true
  1324. addAtBatchNum := batch.Batch.BatchNum - 1
  1325. userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i])
  1326. break
  1327. }
  1328. }
  1329. if !batchThatForgesIsInTheBlock {
1330. // Use the artificial batchNum 0 to add txs that are not forged in this block,
1331. // after all the accounts of the block have been added
  1332. userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i])
  1333. }
  1334. }
  1335. // Add Batches
  1336. for i := range blockData.Rollup.Batches {
  1337. batch := &blockData.Rollup.Batches[i]
  1338. // Set the EffectiveAmount and EffectiveLoadAmount of all the
  1339. // L1UserTxs that have been forged in this batch
  1340. if len(batch.L1UserTxs) > 0 {
  1341. if err = hdb.setL1UserTxEffectiveAmounts(txn, batch.L1UserTxs); err != nil {
  1342. return tracerr.Wrap(err)
  1343. }
  1344. }
  1345. // Add Batch: this will trigger an update on the DB
  1346. // that will set the batch num of forged L1 txs in this batch
  1347. if err = hdb.addBatch(txn, &batch.Batch); err != nil {
  1348. return tracerr.Wrap(err)
  1349. }
  1350. // Add accounts
  1351. if len(batch.CreatedAccounts) > 0 {
  1352. if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil {
  1353. return tracerr.Wrap(err)
  1354. }
  1355. }
  1356. // Add forged l1 coordinator Txs
  1357. if len(batch.L1CoordinatorTxs) > 0 {
  1358. if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil {
  1359. return tracerr.Wrap(err)
  1360. }
  1361. }
  1362. // Add l2 Txs
  1363. if len(batch.L2Txs) > 0 {
  1364. if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil {
  1365. return tracerr.Wrap(err)
  1366. }
  1367. }
  1368. // Add user L1 txs that will be forged in next batch
1369. if batchUserL1s, ok := userL1s[batch.Batch.BatchNum]; ok {
1370. if err := hdb.addL1Txs(txn, batchUserL1s); err != nil {
  1371. return tracerr.Wrap(err)
  1372. }
  1373. }
  1374. // Add exit tree
  1375. if len(batch.ExitTree) > 0 {
  1376. if err := hdb.addExitTree(txn, batch.ExitTree); err != nil {
  1377. return tracerr.Wrap(err)
  1378. }
  1379. }
  1380. }
  1381. // Add user L1 txs that won't be forged in this block
  1382. if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok {
  1383. if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil {
  1384. return tracerr.Wrap(err)
  1385. }
  1386. }
  1387. if blockData.Rollup.Vars != nil {
  1388. if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil {
  1389. return tracerr.Wrap(err)
  1390. }
  1391. }
  1392. if blockData.Auction.Vars != nil {
  1393. if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil {
  1394. return tracerr.Wrap(err)
  1395. }
  1396. }
  1397. if blockData.WDelayer.Vars != nil {
  1398. if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil {
  1399. return tracerr.Wrap(err)
  1400. }
  1401. }
  1402. if err := hdb.updateExitTree(txn, blockData.Block.Num,
  1403. blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil {
  1404. return tracerr.Wrap(err)
  1405. }
  1406. return tracerr.Wrap(txn.Commit())
  1407. }
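// Usage sketch for AddBlockSCData (syncOneBlock is a hypothetical helper, not
// part of this package): blocks are inserted strictly in order, one DB
// transaction per block, so that item_id based pagination stays consistent.
//
//	for {
//		blockData, err := syncOneBlock() // assumed to return the next *common.BlockData, or nil when up to date
//		if err != nil {
//			return tracerr.Wrap(err)
//		}
//		if blockData == nil {
//			break // nothing new to sync
//		}
//		if err := hdb.AddBlockSCData(blockData); err != nil {
//			return tracerr.Wrap(err)
//		}
//	}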
  1408. // GetCoordinatorAPI returns a coordinator by its bidderAddr
  1409. func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
  1410. coordinator := &CoordinatorAPI{}
  1411. err := meddler.QueryRow(hdb.db, coordinator, "SELECT * FROM coordinator WHERE bidder_addr = $1;", bidderAddr)
  1412. return coordinator, tracerr.Wrap(err)
  1413. }
  1414. // GetCoordinatorsAPI returns a list of coordinators from the DB and pagination info
  1415. func (hdb *HistoryDB) GetCoordinatorsAPI(fromItem, limit *uint, order string) ([]CoordinatorAPI, uint64, error) {
  1416. var query string
  1417. var args []interface{}
  1418. queryStr := `SELECT coordinator.*,
  1419. COUNT(*) OVER() AS total_items
  1420. FROM coordinator `
  1421. // Apply filters
  1422. if fromItem != nil {
  1423. queryStr += "WHERE "
  1424. if order == OrderAsc {
  1425. queryStr += "coordinator.item_id >= ? "
  1426. } else {
  1427. queryStr += "coordinator.item_id <= ? "
  1428. }
  1429. args = append(args, fromItem)
  1430. }
  1431. // pagination
  1432. queryStr += "ORDER BY coordinator.item_id "
  1433. if order == OrderAsc {
  1434. queryStr += " ASC "
  1435. } else {
  1436. queryStr += " DESC "
  1437. }
  1438. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  1439. query = hdb.db.Rebind(queryStr)
  1440. coordinators := []*CoordinatorAPI{}
  1441. if err := meddler.QueryAll(hdb.db, &coordinators, query, args...); err != nil {
  1442. return nil, 0, tracerr.Wrap(err)
  1443. }
  1444. if len(coordinators) == 0 {
  1445. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  1446. }
  1447. return db.SlicePtrsToSlice(coordinators).([]CoordinatorAPI),
  1448. coordinators[0].TotalItems - uint64(len(coordinators)), nil
  1449. }
1450. // AddAuctionVars inserts auction vars into the DB
  1451. func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
  1452. return meddler.Insert(hdb.db, "auction_vars", auctionVars)
  1453. }
  1454. // GetAuctionVars returns auction variables
  1455. func (hdb *HistoryDB) GetAuctionVars() (*common.AuctionVariables, error) {
  1456. auctionVars := &common.AuctionVariables{}
  1457. err := meddler.QueryRow(
  1458. hdb.db, auctionVars, `SELECT * FROM auction_vars;`,
  1459. )
  1460. return auctionVars, tracerr.Wrap(err)
  1461. }
  1462. // GetAccountAPI returns an account by its index
  1463. func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
  1464. account := &AccountAPI{}
  1465. err := meddler.QueryRow(hdb.db, account, `SELECT account.item_id, hez_idx(account.idx,
  1466. token.symbol) as idx, account.batch_num, account.bjj, account.eth_addr,
  1467. token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
  1468. token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update
  1469. FROM account INNER JOIN token ON account.token_id = token.token_id WHERE idx = $1;`, idx)
  1470. if err != nil {
  1471. return nil, tracerr.Wrap(err)
  1472. }
  1473. return account, nil
  1474. }
  1475. // GetAccountsAPI returns a list of accounts from the DB and pagination info
  1476. func (hdb *HistoryDB) GetAccountsAPI(
  1477. tokenIDs []common.TokenID, ethAddr *ethCommon.Address,
  1478. bjj *babyjub.PublicKey, fromItem, limit *uint, order string,
  1479. ) ([]AccountAPI, uint64, error) {
  1480. if ethAddr != nil && bjj != nil {
  1481. return nil, 0, tracerr.Wrap(errors.New("ethAddr and bjj are incompatible"))
  1482. }
  1483. var query string
  1484. var args []interface{}
  1485. queryStr := `SELECT account.item_id, hez_idx(account.idx, token.symbol) as idx, account.batch_num,
  1486. account.bjj, account.eth_addr, token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
  1487. token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update,
  1488. COUNT(*) OVER() AS total_items
  1489. FROM account INNER JOIN token ON account.token_id = token.token_id `
  1490. // Apply filters
  1491. nextIsAnd := false
  1492. // ethAddr filter
  1493. if ethAddr != nil {
  1494. queryStr += "WHERE account.eth_addr = ? "
  1495. nextIsAnd = true
  1496. args = append(args, ethAddr)
  1497. } else if bjj != nil { // bjj filter
  1498. queryStr += "WHERE account.bjj = ? "
  1499. nextIsAnd = true
  1500. args = append(args, bjj)
  1501. }
  1502. // tokenID filter
  1503. if len(tokenIDs) > 0 {
  1504. if nextIsAnd {
  1505. queryStr += "AND "
  1506. } else {
  1507. queryStr += "WHERE "
  1508. }
  1509. queryStr += "account.token_id IN (?) "
  1510. args = append(args, tokenIDs)
  1511. nextIsAnd = true
  1512. }
  1513. if fromItem != nil {
  1514. if nextIsAnd {
  1515. queryStr += "AND "
  1516. } else {
  1517. queryStr += "WHERE "
  1518. }
  1519. if order == OrderAsc {
  1520. queryStr += "account.item_id >= ? "
  1521. } else {
  1522. queryStr += "account.item_id <= ? "
  1523. }
  1524. args = append(args, fromItem)
  1525. }
  1526. // pagination
  1527. queryStr += "ORDER BY account.item_id "
  1528. if order == OrderAsc {
  1529. queryStr += " ASC "
  1530. } else {
  1531. queryStr += " DESC "
  1532. }
  1533. queryStr += fmt.Sprintf("LIMIT %d;", *limit)
  1534. query, argsQ, err := sqlx.In(queryStr, args...)
  1535. if err != nil {
  1536. return nil, 0, tracerr.Wrap(err)
  1537. }
  1538. query = hdb.db.Rebind(query)
  1539. accounts := []*AccountAPI{}
  1540. if err := meddler.QueryAll(hdb.db, &accounts, query, argsQ...); err != nil {
  1541. return nil, 0, tracerr.Wrap(err)
  1542. }
  1543. if len(accounts) == 0 {
  1544. return nil, 0, tracerr.Wrap(sql.ErrNoRows)
  1545. }
  1546. return db.SlicePtrsToSlice(accounts).([]AccountAPI),
  1547. accounts[0].TotalItems - uint64(len(accounts)), nil
  1548. }
  1549. // GetMetrics returns metrics
  1550. func (hdb *HistoryDB) GetMetrics(lastBatchNum common.BatchNum) (*Metrics, error) {
  1551. metricsTotals := &MetricsTotals{}
  1552. metrics := &Metrics{}
  1553. err := meddler.QueryRow(
  1554. hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs, MIN(tx.batch_num) as batch_num
  1555. FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
  1556. WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`)
  1557. if err != nil {
  1558. return nil, tracerr.Wrap(err)
  1559. }
1560. metrics.TransactionsPerSecond = float64(metricsTotals.TotalTransactions) / (24 * 60 * 60)
  1561. if (lastBatchNum - metricsTotals.FirstBatchNum) > 0 {
1562. metrics.TransactionsPerBatch = float64(metricsTotals.TotalTransactions) /
1563. float64(lastBatchNum-metricsTotals.FirstBatchNum)
  1564. } else {
  1565. metrics.TransactionsPerBatch = float64(0)
  1566. }
  1567. err = meddler.QueryRow(
  1568. hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
  1569. SUM(total_fees_usd) AS total_fees FROM batch
  1570. WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
  1571. if err != nil {
  1572. return nil, tracerr.Wrap(err)
  1573. }
  1574. if metricsTotals.TotalBatches > 0 {
1575. metrics.BatchFrequency = (24 * 60 * 60) / float64(metricsTotals.TotalBatches)
  1576. } else {
  1577. metrics.BatchFrequency = 0
  1578. }
  1579. if metricsTotals.TotalTransactions > 0 {
  1580. metrics.AvgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
  1581. } else {
  1582. metrics.AvgTransactionFee = 0
  1583. }
  1584. err = meddler.QueryRow(
  1585. hdb.db, metrics,
  1586. `SELECT COUNT(*) AS total_bjjs, COUNT(DISTINCT(bjj)) AS total_accounts FROM account;`)
  1587. if err != nil {
  1588. return nil, tracerr.Wrap(err)
  1589. }
  1590. return metrics, nil
  1591. }
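// A worked example of the window arithmetic above (numbers are made up): with
// 8,640,000 txs in the last 24h, TransactionsPerSecond = 8,640,000 / 86,400 = 100;
// with 2,880 batches in the same window, BatchFrequency = 86,400 / 2,880 = 30
// (one batch every 30 seconds on average).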
1592. // GetAvgTxFee returns the average transaction fee of the last hour
  1593. func (hdb *HistoryDB) GetAvgTxFee() (float64, error) {
  1594. metricsTotals := &MetricsTotals{}
  1595. err := meddler.QueryRow(
  1596. hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs, MIN(tx.batch_num) as batch_num
  1597. FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
  1598. WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`)
  1599. if err != nil {
  1600. return 0, tracerr.Wrap(err)
  1601. }
  1602. err = meddler.QueryRow(
  1603. hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
  1604. SUM(total_fees_usd) AS total_fees FROM batch
  1605. WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
  1606. if err != nil {
  1607. return 0, tracerr.Wrap(err)
  1608. }
  1609. var avgTransactionFee float64
  1610. if metricsTotals.TotalTransactions > 0 {
  1611. avgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
  1612. } else {
  1613. avgTransactionFee = 0
  1614. }
  1615. return avgTransactionFee, nil
  1616. }