Update missing parts, improve til, and more

- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until the requirements of this method are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to only sync one block per call, and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill in ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add a `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
Update coordinator to work better under real net

- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs, the process terminates unconditionally
- coordinator
  - Store stats without pointer
  - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and next slot
  - Update tests due to smart contract changes in slot handling, and minimum bid defaults
  - TxManager
    - Add consts, vars and stats to allow evaluating `canForge`
    - Add a `canForge` method (not used yet)
    - Store batch and nonce status (last success and last pending)
    - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs)
    - Handle the (common) case of the receipt not being found after the tx is sent
    - Don't start the main loop until we get an initial message for the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
    - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-the-pipeline messages when multiple txs are detected to have failed consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused
    - Robust handling of nonces:
      - If geth returns nonce too low, increase it
      - If geth returns nonce too high, decrease it
      - If geth returns underpriced, increase the gas price
      - If geth returns replacement underpriced, increase the gas price
    - Add support for resending transactions after a timeout
    - Store `BatchInfos` in a queue
  - Pipeline
    - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with the failed batch number, so that on a restart, non-failed batches are not repeated
    - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced
    - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator
  - Avoid forging when we haven't reached the rollup genesis block number
  - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
  - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
  - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined
  - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the batch will be discarded if we can't forge at block 15
  - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price
  - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
  - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache
  - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the necessary methods to create the auth object for transactions manually, so that we can set the nonce, gas price, gas limit, etc. manually
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event
3 years ago
Redo coordinator structure, connect API to node

- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum the specified `DefaultSlotSetBid` starts applying
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update the test (which is a minimal test to manually see if the coordinator starts)
- HistoryDB:
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node:
  - Connect the API to the node and call the methods to update the cached state when the sync advances blocks
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
  - Add an Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. 
- Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. 
- Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
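The nonce/gas-price recovery rules in the commit above can be sketched as a small helper. This is a hypothetical simplification, not the actual TxManager code: the matched error strings and the 10% gas-price bump are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// adjustForError nudges the account nonce or bumps the gas price depending
// on the error message returned by the geth node, before the transaction is
// retried. Error strings and the bump factor are illustrative assumptions.
func adjustForError(errMsg string, nonce uint64, gasPrice uint64) (uint64, uint64) {
	switch {
	case strings.Contains(errMsg, "nonce too low"):
		nonce++ // another tx with this nonce was already mined
	case strings.Contains(errMsg, "nonce too high"):
		nonce-- // we got ahead of the network's view of the account
	case strings.Contains(errMsg, "replacement transaction underpriced"),
		strings.Contains(errMsg, "transaction underpriced"):
		gasPrice += gasPrice / 10 // bump gas price by 10%
	}
	return nonce, gasPrice
}

func main() {
	n, g := adjustForError("nonce too low", 5, 1000)
	fmt.Println(n, g) // 6 1000
}
```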
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
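The event-mapping change described above (one transaction hash may carry several calls to the same function, hence several events) amounts to mapping each hash to a slice instead of a single value. A minimal sketch with simplified stand-in types, not the synchronizer's real ones:

```go
package main

import "fmt"

// Hash and WithdrawEvent are simplified stand-ins for the real types.
type Hash [32]byte

type WithdrawEvent struct {
	Idx    uint64
	Amount uint64
}

// groupByTxHash maps each transaction hash to ALL of its events, so that two
// withdraws made by a smart contract in a single transaction both survive.
func groupByTxHash(hashes []Hash, events []WithdrawEvent) map[Hash][]WithdrawEvent {
	byTx := make(map[Hash][]WithdrawEvent)
	for i, ev := range events {
		byTx[hashes[i]] = append(byTx[hashes[i]], ev)
	}
	return byTx
}

func main() {
	var tx Hash
	tx[0] = 0x01
	evs := groupByTxHash(
		[]Hash{tx, tx}, // same tx emitted two withdraw events
		[]WithdrawEvent{{Idx: 1, Amount: 10}, {Idx: 1, Amount: 20}},
	)
	fmt.Println(len(evs[tx])) // 2
}
```

With a `map[Hash]WithdrawEvent` the second event would silently overwrite the first; the slice-valued map keeps both.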
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistoryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
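The simplified `Sync` routine above (one block per call, with a check that fetched data belongs to the expected chain) can be sketched as follows. The types and the `fetchBlock` callback are hypothetical reductions of the synchronizer's real interfaces:

```go
package main

import (
	"errors"
	"fmt"
)

// Block is a minimal stand-in for the synchronizer's block type.
type Block struct {
	Num        int64
	Hash       string
	ParentHash string
}

// ErrReorg signals that the fetched block does not extend the last block we
// knew about, so the caller must roll back.
var ErrReorg = errors.New("unexpected parent hash: reorg detected")

// syncOneBlock advances the synchronizer by at most one block per call and
// returns it, verifying that the new block's parent hash matches the hash of
// the last synced block.
func syncOneBlock(last Block, fetchBlock func(num int64) Block) (Block, error) {
	next := fetchBlock(last.Num + 1)
	if next.ParentHash != last.Hash {
		return Block{}, ErrReorg
	}
	return next, nil
}

func main() {
	last := Block{Num: 1, Hash: "0xaa"}
	next, err := syncOneBlock(last, func(n int64) Block {
		return Block{Num: n, Hash: "0xbb", ParentHash: "0xaa"}
	})
	fmt.Println(next.Num, err) // 2 <nil>
}
```

Syncing a single block per call keeps each iteration small and lets the caller react to the returned information (new block, reorg) before deciding what to do next.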
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum the specified `DefaultSlotSetBid` starts applying. - Config: - Move coordinator-exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assign variables directly - In ServerProof and ServerProofPool use context instead of a stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time.
- Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be clearer. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
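The stop/stopped-channel replacement described above (context.Context plus sync.WaitGroup) can be sketched like this. The names are illustrative, not the actual coordinator API:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Pipeline owns one worker goroutine; it is started with a cancellable
// context and tracked with a WaitGroup, so Stop is just cancel + wait.
type Pipeline struct {
	wg      sync.WaitGroup
	cancel  context.CancelFunc
	stopped bool
}

func (p *Pipeline) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	p.cancel = cancel
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		// Placeholder for the forging loop, which would select on
		// ctx.Done() between units of work.
		<-ctx.Done()
		p.stopped = true
	}()
}

// Stop signals the goroutine to exit and blocks until it has finished,
// replacing the old stop/stopped channel pair.
func (p *Pipeline) Stop() {
	p.cancel()
	p.wg.Wait()
}

func main() {
	p := &Pipeline{}
	p.Start()
	p.Stop()
	fmt.Println("stopped:", p.stopped) // stopped: true
}
```

Compared with hand-rolled stop/stopped channels, the context composes with downstream calls (RPC, DB) and the WaitGroup guarantees Stop does not return while the goroutine is still running.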
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
package historydb

import (
	"math"
	"math/big"
	"strings"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/jmoiron/sqlx"

	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)
// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the history of the rollup
type HistoryDB struct {
	dbRead     *sqlx.DB
	dbWrite    *sqlx.DB
	apiConnCon *db.APIConnectionController
}

// NewHistoryDB initializes the DB
func NewHistoryDB(dbRead, dbWrite *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
	return &HistoryDB{
		dbRead:     dbRead,
		dbWrite:    dbWrite,
		apiConnCon: apiConnCon,
	}
}

// DB returns a pointer to the HistoryDB's write connection. This method
// should be used only for internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.dbWrite
}
// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.dbWrite, block) }

func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return tracerr.Wrap(meddler.Insert(d, "block", block))
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return tracerr.Wrap(hdb.addBlocks(hdb.dbWrite, blocks))
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks,
	))
}

// GetBlock retrieves a block from the DB, given a block number
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.dbRead, block,
		"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
	)
	return block, tracerr.Wrap(err)
}

// GetAllBlocks retrieves all blocks from the DB
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.dbRead, &blocks,
		"SELECT * FROM block ORDER BY eth_block_num;",
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// getBlocks retrieves blocks from the DB, given a range of block numbers defined by from and to
func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.dbRead, &blocks,
		"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
		from, to,
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetLastBlock retrieves the block with the highest block number from the DB
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.dbRead, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
	)
	return block, tracerr.Wrap(err)
}
// AddBatch inserts a Batch into the DB
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.dbWrite, batch) }

func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
	// Calculate total collected fees in USD
	// Get IDs of collected tokens for fees
	tokenIDs := []common.TokenID{}
	for id := range batch.CollectedFees {
		tokenIDs = append(tokenIDs, id)
	}
	// Get USD value of the tokens
	type tokenPrice struct {
		ID       common.TokenID `meddler:"token_id"`
		USD      *float64       `meddler:"usd"`
		Decimals int            `meddler:"decimals"`
	}
	var tokenPrices []*tokenPrice
	if len(tokenIDs) > 0 {
		query, args, err := sqlx.In(
			"SELECT token_id, usd, decimals FROM token WHERE token_id IN (?);",
			tokenIDs,
		)
		if err != nil {
			return tracerr.Wrap(err)
		}
		query = hdb.dbWrite.Rebind(query)
		if err := meddler.QueryAll(
			hdb.dbWrite, &tokenPrices, query, args...,
		); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Calculate total collected
	var total float64
	for _, tokenPrice := range tokenPrices {
		if tokenPrice.USD == nil {
			continue
		}
		f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
		amount, _ := f.Float64()
		total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
	}
	batch.TotalFeesUSD = &total
	// Insert to DB
	return tracerr.Wrap(meddler.Insert(d, "batch", batch))
}
// AddBatches inserts Batches into the DB
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
	return tracerr.Wrap(hdb.addBatches(hdb.dbWrite, batches))
}

func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
	for i := 0; i < len(batches); i++ {
		if err := hdb.addBatch(d, &batches[i]); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}

// GetBatch returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error) {
	var batch common.Batch
	err := meddler.QueryRow(
		hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
		batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
		batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
		batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`,
		batchNum,
	)
	return &batch, tracerr.Wrap(err)
}

// GetAllBatches retrieves all batches from the DB
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.dbRead, &batches,
		`SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
		batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
		batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
		ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetBatches retrieves batches from the DB, given a range of batch numbers defined by from and to
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
	var batches []*common.Batch
	err := meddler.QueryAll(
		hdb.dbRead, &batches,
		`SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
		state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
		FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
		from, to,
	)
	return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
}

// GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
// batch within a slot
func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
	row := hdb.dbRead.QueryRow(
		`SELECT eth_block_num FROM batch
		WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
	)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}

// GetLastBatchNum returns the BatchNum of the latest forged batch
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
	row := hdb.dbRead.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
	var batchNum common.BatchNum
	return batchNum, tracerr.Wrap(row.Scan(&batchNum))
}

// GetLastBatch returns the last forged batch
func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
	var batch common.Batch
	err := meddler.QueryRow(
		hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
		batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
		batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
		batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
	)
	return &batch, tracerr.Wrap(err)
}

// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
	row := hdb.dbRead.QueryRow(`SELECT eth_block_num FROM batch
		WHERE forge_l1_txs_num IS NOT NULL
		ORDER BY batch_num DESC LIMIT 1;`)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}

// GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
// batches. If there's no batch in the DB, (nil, nil) is returned.
func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
	row := hdb.dbRead.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
	lastL1TxsNum := new(int64)
	return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
}

// Reorg deletes all the information that was added into the DB after the
// lastValidBlock. If lastValidBlock is negative, all block information is
// deleted.
func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
	var err error
	if lastValidBlock < 0 {
		_, err = hdb.dbWrite.Exec("DELETE FROM block;")
	} else {
		_, err = hdb.dbWrite.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
	}
	return tracerr.Wrap(err)
}
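Reorg only issues a DELETE on the block table; in the full schema, rows that reference the deleted blocks (batches, bids, txs) are presumably removed through cascading foreign keys. The survival rule for blocks themselves can be sketched in isolation (the in-memory `survivingBlocks` helper is illustrative only):

```go
package main

import "fmt"

// survivingBlocks models Reorg's DELETE predicate: when lastValidBlock
// is negative everything is dropped, otherwise blocks with a number
// greater than lastValidBlock are dropped.
func survivingBlocks(blockNums []int64, lastValidBlock int64) []int64 {
	kept := []int64{}
	if lastValidBlock < 0 {
		return kept
	}
	for _, n := range blockNums {
		if n <= lastValidBlock {
			kept = append(kept, n)
		}
	}
	return kept
}

func main() {
	fmt.Println(survivingBlocks([]int64{1, 2, 3, 4, 5}, 3)) // [1 2 3]
	fmt.Println(survivingBlocks([]int64{1, 2, 3}, -1))      // []
}
```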
// AddBids inserts Bids into the DB
func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.dbWrite, bids) }

func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
	if len(bids) == 0 {
		return nil
	}
	// TODO: check the coordinator info
	return tracerr.Wrap(db.BulkInsert(
		d,
		"INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
		bids,
	))
}

// GetAllBids retrieves all bids from the DB
func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
	var bids []*common.Bid
	err := meddler.QueryAll(
		hdb.dbRead, &bids,
		`SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
		ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err)
}

// GetBestBidCoordinator returns the forger address of the highest bidder in a slot by slotNum
func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
	bidCoord := &common.BidCoordinator{}
	err := meddler.QueryRow(
		hdb.dbRead, bidCoord,
		`SELECT (
			SELECT default_slot_set_bid
			FROM auction_vars
			WHERE default_slot_set_bid_slot_num <= $1
			ORDER BY eth_block_num DESC LIMIT 1
		),
		bid.slot_num, bid.bid_value, bid.bidder_addr,
		coordinator.forger_addr, coordinator.url
		FROM bid
		INNER JOIN (
			SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
			GROUP BY bidder_addr
		) c ON bid.bidder_addr = c.bidder_addr
		INNER JOIN coordinator ON c.item_id = coordinator.item_id
		WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`,
		slotNum)
	return bidCoord, tracerr.Wrap(err)
}

// AddCoordinators inserts Coordinators into the DB
func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
	return tracerr.Wrap(hdb.addCoordinators(hdb.dbWrite, coordinators))
}

func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
	if len(coordinators) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		"INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
		coordinators,
	))
}

// AddExitTree inserts the Exit tree into the DB
func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
	return tracerr.Wrap(hdb.addExitTree(hdb.dbWrite, exitTree))
}

func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
	if len(exitTree) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		"INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
			"instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
		exitTree,
	))
}
func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
	rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error {
	if len(rollupWithdrawals) == 0 && len(wDelayerWithdrawals) == 0 {
		return nil
	}
	type withdrawal struct {
		BatchNum               int64              `db:"batch_num"`
		AccountIdx             int64              `db:"account_idx"`
		InstantWithdrawn       *int64             `db:"instant_withdrawn"`
		DelayedWithdrawRequest *int64             `db:"delayed_withdraw_request"`
		DelayedWithdrawn       *int64             `db:"delayed_withdrawn"`
		Owner                  *ethCommon.Address `db:"owner"`
		Token                  *ethCommon.Address `db:"token"`
	}
	withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals))
	for i := range rollupWithdrawals {
		info := &rollupWithdrawals[i]
		withdrawals[i] = withdrawal{
			BatchNum:   int64(info.NumExitRoot),
			AccountIdx: int64(info.Idx),
		}
		if info.InstantWithdraw {
			withdrawals[i].InstantWithdrawn = &blockNum
		} else {
			withdrawals[i].DelayedWithdrawRequest = &blockNum
			withdrawals[i].Owner = &info.Owner
			withdrawals[i].Token = &info.Token
		}
	}
	for i := range wDelayerWithdrawals {
		info := &wDelayerWithdrawals[i]
		withdrawals[len(rollupWithdrawals)+i] = withdrawal{
			DelayedWithdrawn: &blockNum,
			Owner:            &info.Owner,
			Token:            &info.Token,
		}
	}
	// In VALUES we set an initial row of NULLs to set the types of each
	// variable passed as argument
	const query string = `
		UPDATE exit_tree e SET
			instant_withdrawn = d.instant_withdrawn,
			delayed_withdraw_request = CASE
				WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request
				ELSE d.delayed_withdraw_request
			END,
			delayed_withdrawn = d.delayed_withdrawn,
			owner = d.owner,
			token = d.token
		FROM (VALUES
			(NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA),
			(:batch_num,
			:account_idx,
			:instant_withdrawn,
			:delayed_withdraw_request,
			:delayed_withdrawn,
			:owner,
			:token)
		) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token)
		WHERE
			(d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR
			(d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token);
	`
	if len(withdrawals) > 0 {
		if _, err := sqlx.NamedExec(d, query, withdrawals); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}
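updateExitTree builds one `withdrawal` row per event before issuing the single UPDATE: a rollup withdrawal marks either `instant_withdrawn` or `delayed_withdraw_request` at the current block (the delayed branch also records owner and token), while a WDelayer transfer marks `delayed_withdrawn`. A hedged sketch of the rollup-side mapping, with the types simplified and addresses reduced to strings for illustration:

```go
package main

import "fmt"

// withdrawInfo is a simplified stand-in for common.WithdrawInfo.
type withdrawInfo struct {
	NumExitRoot, Idx int64
	InstantWithdraw  bool
	Owner, Token     string
}

// withdrawalRow mirrors the withdrawal struct built by updateExitTree.
type withdrawalRow struct {
	BatchNum, AccountIdx int64
	InstantWithdrawn     *int64
	DelayedWithdrawRequest *int64
	Owner, Token         string
}

// rowFromRollupWithdrawal reproduces the branch in updateExitTree: an
// instant withdrawal marks instant_withdrawn at the current block,
// otherwise a delayed request is recorded along with owner and token.
func rowFromRollupWithdrawal(info withdrawInfo, blockNum int64) withdrawalRow {
	row := withdrawalRow{BatchNum: info.NumExitRoot, AccountIdx: info.Idx}
	if info.InstantWithdraw {
		row.InstantWithdrawn = &blockNum
	} else {
		row.DelayedWithdrawRequest = &blockNum
		row.Owner = info.Owner
		row.Token = info.Token
	}
	return row
}

func main() {
	instant := rowFromRollupWithdrawal(withdrawInfo{NumExitRoot: 7, Idx: 256, InstantWithdraw: true}, 100)
	delayed := rowFromRollupWithdrawal(withdrawInfo{NumExitRoot: 7, Idx: 257, Owner: "0xab", Token: "0xcd"}, 100)
	fmt.Println(*instant.InstantWithdrawn, delayed.DelayedWithdrawRequest != nil, delayed.Owner)
}
```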
// AddToken inserts a token into the DB
func (hdb *HistoryDB) AddToken(token *common.Token) error {
	return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "token", token))
}

// AddTokens inserts tokens into the DB
func (hdb *HistoryDB) AddTokens(tokens []common.Token) error {
	return tracerr.Wrap(hdb.addTokens(hdb.dbWrite, tokens))
}

func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
	if len(tokens) == 0 {
		return nil
	}
	// Sanitize name and symbol
	for i, token := range tokens {
		token.Name = strings.ToValidUTF8(token.Name, " ")
		token.Symbol = strings.ToValidUTF8(token.Symbol, " ")
		tokens[i] = token
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO token (
			token_id,
			eth_block_num,
			eth_addr,
			name,
			symbol,
			decimals
		) VALUES %s;`,
		tokens,
	))
}
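addTokens cleans token names and symbols with strings.ToValidUTF8 so that rows containing invalid UTF-8 bytes can still be inserted into Postgres; each run of invalid bytes collapses to a single space. A small sketch of that behavior (the `sanitize` wrapper is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// sanitize mirrors addTokens' cleanup: any run of invalid UTF-8 bytes
// in a token's name or symbol is replaced with a single space before
// the row is inserted.
func sanitize(s string) string {
	return strings.ToValidUTF8(s, " ")
}

func main() {
	fmt.Printf("%q\n", sanitize("HEZ\xff\xfeToken")) // "HEZ Token"
	fmt.Printf("%q\n", sanitize("HEZ"))              // unchanged: "HEZ"
}
```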
// UpdateTokenValue updates the USD value of a token. Value is the price in
// USD of a normalized token (1 token = 10^decimals units)
func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
	// Sanitize symbol
	tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
	_, err := hdb.dbWrite.Exec(
		"UPDATE token SET usd = $1 WHERE symbol = $2;",
		value, tokenSymbol,
	)
	return tracerr.Wrap(err)
}

// GetToken returns a token from the DB given a TokenID
func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
	token := &TokenWithUSD{}
	err := meddler.QueryRow(
		hdb.dbRead, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
	)
	return token, tracerr.Wrap(err)
}

// GetAllTokens returns all tokens from the DB
func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
	var tokens []*TokenWithUSD
	err := meddler.QueryAll(
		hdb.dbRead, &tokens,
		"SELECT * FROM token ORDER BY token_id;",
	)
	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
}

// GetTokenSymbols returns all the token symbols from the DB
func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
	var tokenSymbols []string
	rows, err := hdb.dbRead.Query("SELECT symbol FROM token;")
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	defer db.RowsClose(rows)
	sym := new(string)
	for rows.Next() {
		err = rows.Scan(sym)
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		tokenSymbols = append(tokenSymbols, *sym)
	}
	return tokenSymbols, nil
}
// AddAccounts inserts accounts into the DB
func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
	return tracerr.Wrap(hdb.addAccounts(hdb.dbWrite, accounts))
}

func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
	if len(accounts) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO account (
			idx,
			token_id,
			batch_num,
			bjj,
			eth_addr
		) VALUES %s;`,
		accounts,
	))
}

// GetAllAccounts returns a list of accounts from the DB
func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
	var accs []*common.Account
	err := meddler.QueryAll(
		hdb.dbRead, &accs,
		"SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;",
	)
	return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
}

// AddAccountUpdates inserts accUpdates into the DB
func (hdb *HistoryDB) AddAccountUpdates(accUpdates []common.AccountUpdate) error {
	return tracerr.Wrap(hdb.addAccountUpdates(hdb.dbWrite, accUpdates))
}

func (hdb *HistoryDB) addAccountUpdates(d meddler.DB, accUpdates []common.AccountUpdate) error {
	if len(accUpdates) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO account_update (
			eth_block_num,
			batch_num,
			idx,
			nonce,
			balance
		) VALUES %s;`,
		accUpdates,
	))
}

// GetAllAccountUpdates returns all the AccountUpdate rows from the DB
func (hdb *HistoryDB) GetAllAccountUpdates() ([]common.AccountUpdate, error) {
	var accUpdates []*common.AccountUpdate
	err := meddler.QueryAll(
		hdb.dbRead, &accUpdates,
		"SELECT eth_block_num, batch_num, idx, nonce, balance FROM account_update ORDER BY idx;",
	)
	return db.SlicePtrsToSlice(accUpdates).([]common.AccountUpdate), tracerr.Wrap(err)
}
// AddL1Txs inserts L1 txs into the DB. USD and DepositAmountUSD will be set
// automatically before storing the tx. If the tx is originated by a
// coordinator, BatchNum must be provided. If it's originated by a user,
// BatchNum should be null, and the value will be set by a trigger when a
// batch forges the tx. EffectiveAmount and EffectiveDepositAmount are set
// with default values by the DB.
func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
	return tracerr.Wrap(hdb.addL1Txs(hdb.dbWrite, l1txs))
}

// addL1Txs inserts L1 txs into the DB. USD and DepositAmountUSD will be set
// automatically before storing the tx. If the tx is originated by a
// coordinator, BatchNum must be provided. If it's originated by a user,
// BatchNum should be null, and the value will be set by a trigger when a
// batch forges the tx. EffectiveAmount and EffectiveDepositAmount are set
// with default values by the DB.
func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
	if len(l1txs) == 0 {
		return nil
	}
	txs := []txWrite{}
	for i := 0; i < len(l1txs); i++ {
		af := new(big.Float).SetInt(l1txs[i].Amount)
		amountFloat, _ := af.Float64()
		laf := new(big.Float).SetInt(l1txs[i].DepositAmount)
		depositAmountFloat, _ := laf.Float64()
		var effectiveFromIdx *common.Idx
		if l1txs[i].UserOrigin {
			if l1txs[i].Type != common.TxTypeCreateAccountDeposit &&
				l1txs[i].Type != common.TxTypeCreateAccountDepositTransfer {
				effectiveFromIdx = &l1txs[i].FromIdx
			}
		} else {
			effectiveFromIdx = &l1txs[i].EffectiveFromIdx
		}
		txs = append(txs, txWrite{
			// Generic
			IsL1:             true,
			TxID:             l1txs[i].TxID,
			Type:             l1txs[i].Type,
			Position:         l1txs[i].Position,
			FromIdx:          &l1txs[i].FromIdx,
			EffectiveFromIdx: effectiveFromIdx,
			ToIdx:            l1txs[i].ToIdx,
			Amount:           l1txs[i].Amount,
			AmountFloat:      amountFloat,
			TokenID:          l1txs[i].TokenID,
			BatchNum:         l1txs[i].BatchNum,
			EthBlockNum:      l1txs[i].EthBlockNum,
			// L1
			ToForgeL1TxsNum:    l1txs[i].ToForgeL1TxsNum,
			UserOrigin:         &l1txs[i].UserOrigin,
			FromEthAddr:        &l1txs[i].FromEthAddr,
			FromBJJ:            &l1txs[i].FromBJJ,
			DepositAmount:      l1txs[i].DepositAmount,
			DepositAmountFloat: &depositAmountFloat,
		})
	}
	return tracerr.Wrap(hdb.addTxs(d, txs))
}
// AddL2Txs inserts L2 txs into the DB. TokenID, USD and FeeUSD will be set
// automatically before storing the tx.
func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
	return tracerr.Wrap(hdb.addL2Txs(hdb.dbWrite, l2txs))
}

// addL2Txs inserts L2 txs into the DB. TokenID, USD and FeeUSD will be set
// automatically before storing the tx.
func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error {
	txs := []txWrite{}
	for i := 0; i < len(l2txs); i++ {
		f := new(big.Float).SetInt(l2txs[i].Amount)
		amountFloat, _ := f.Float64()
		txs = append(txs, txWrite{
			// Generic
			IsL1:             false,
			TxID:             l2txs[i].TxID,
			Type:             l2txs[i].Type,
			Position:         l2txs[i].Position,
			FromIdx:          &l2txs[i].FromIdx,
			EffectiveFromIdx: &l2txs[i].FromIdx,
			ToIdx:            l2txs[i].ToIdx,
			TokenID:          l2txs[i].TokenID,
			Amount:           l2txs[i].Amount,
			AmountFloat:      amountFloat,
			BatchNum:         &l2txs[i].BatchNum,
			EthBlockNum:      l2txs[i].EthBlockNum,
			// L2
			Fee:   &l2txs[i].Fee,
			Nonce: &l2txs[i].Nonce,
		})
	}
	return tracerr.Wrap(hdb.addTxs(d, txs))
}

func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
	if len(txs) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO tx (
			is_l1,
			id,
			type,
			position,
			from_idx,
			effective_from_idx,
			to_idx,
			amount,
			amount_f,
			token_id,
			batch_num,
			eth_block_num,
			to_forge_l1_txs_num,
			user_origin,
			from_eth_addr,
			from_bjj,
			deposit_amount,
			deposit_amount_f,
			fee,
			nonce
		) VALUES %s;`,
		txs,
	))
}
// GetAllExits returns all exits from the DB
func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
	var exits []*common.ExitInfo
	err := meddler.QueryAll(
		hdb.dbRead, &exits,
		`SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
		exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
		exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err)
}

// GetAllL1UserTxs returns all L1UserTxs from the DB
func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
	var txs []*common.L1Tx
	err := meddler.QueryAll(
		hdb.dbRead, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
		tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
		tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
		tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
		tx.eth_block_num, tx.type, tx.batch_num
		FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
}
// GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
	var txs []*common.L1Tx
	// Since the query specifies that only coordinator txs are returned, it's safe to assume
	// that returned txs will always have effective amounts
	err := meddler.QueryAll(
		hdb.dbRead, &txs,
		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
		tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
		tx.amount, tx.amount AS effective_amount,
		tx.deposit_amount, tx.deposit_amount AS effective_deposit_amount,
		tx.eth_block_num, tx.type, tx.batch_num
		FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
}

// GetAllL2Txs returns all L2Txs from the DB
func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
	var txs []*common.L2Tx
	err := meddler.QueryAll(
		hdb.dbRead, &txs,
		`SELECT tx.id, tx.batch_num, tx.position,
		tx.from_idx, tx.to_idx, tx.amount, tx.token_id,
		tx.fee, tx.nonce, tx.type, tx.eth_block_num
		FROM tx WHERE is_l1 = FALSE ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err)
}

// GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
	var txs []*common.L1Tx
	err := meddler.QueryAll(
		hdb.dbRead, &txs, // only L1 user txs can have batch_num set to null
		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
		tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
		tx.amount, NULL AS effective_amount,
		tx.deposit_amount, NULL AS effective_deposit_amount,
		tx.eth_block_num, tx.type, tx.batch_num
		FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1
		ORDER BY position;`,
		toForgeL1TxsNum,
	)
	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
}

// TODO: Think about changing all the queries that return a last value to
// queries that return the next valid value.

// GetLastTxsPosition returns the last tx position for a given to_forge_l1_txs_num
func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
	row := hdb.dbRead.QueryRow(
		"SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
		toForgeL1TxsNum,
	)
	var lastL1TxsPosition int
	return lastL1TxsPosition, tracerr.Wrap(row.Scan(&lastL1TxsPosition))
}
// GetSCVars returns the rollup, auction and wdelayer smart contracts variables at their last update.
func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables,
	*common.WDelayerVariables, error) {
	var rollup common.RollupVariables
	var auction common.AuctionVariables
	var wDelayer common.WDelayerVariables
	if err := meddler.QueryRow(hdb.dbRead, &rollup,
		"SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
		return nil, nil, nil, tracerr.Wrap(err)
	}
	if err := meddler.QueryRow(hdb.dbRead, &auction,
		"SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
		return nil, nil, nil, tracerr.Wrap(err)
	}
	if err := meddler.QueryRow(hdb.dbRead, &wDelayer,
		"SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
		return nil, nil, nil, tracerr.Wrap(err)
	}
	return &rollup, &auction, &wDelayer, nil
}

func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error {
	return tracerr.Wrap(meddler.Insert(d, "rollup_vars", rollup))
}

func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error {
	return tracerr.Wrap(meddler.Insert(d, "auction_vars", auction))
}

func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error {
	return tracerr.Wrap(meddler.Insert(d, "wdelayer_vars", wDelayer))
}
func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
	if len(bucketUpdates) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO bucket_update (
			eth_block_num,
			num_bucket,
			block_stamp,
			withdrawals
		) VALUES %s;`,
		bucketUpdates,
	))
}

// AddBucketUpdatesTest allows calling the unexported addBucketUpdates method;
// only for internal testing purposes
func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
	return hdb.addBucketUpdates(d, bucketUpdates)
}

// GetAllBucketUpdates retrieves all the bucket updates
func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
	var bucketUpdates []*common.BucketUpdate
	err := meddler.QueryAll(
		hdb.dbRead, &bucketUpdates,
		`SELECT eth_block_num, num_bucket, block_stamp, withdrawals
		FROM bucket_update ORDER BY item_id;`,
	)
	return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
}
func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
	if len(tokenExchanges) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO token_exchange (
			eth_block_num,
			eth_addr,
			value_usd
		) VALUES %s;`,
		tokenExchanges,
	))
}

// GetAllTokenExchanges retrieves all the token exchanges
func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
	var tokenExchanges []*common.TokenExchange
	err := meddler.QueryAll(
		hdb.dbRead, &tokenExchanges,
		"SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;",
	)
	return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
}

func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
	escapeHatchWithdrawals []common.WDelayerEscapeHatchWithdrawal) error {
	if len(escapeHatchWithdrawals) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO escape_hatch_withdrawal (
			eth_block_num,
			who_addr,
			to_addr,
			token_addr,
			amount
		) VALUES %s;`,
		escapeHatchWithdrawals,
	))
}

// GetAllEscapeHatchWithdrawals retrieves all the escape hatch withdrawals
func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
	var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
	err := meddler.QueryAll(
		hdb.dbRead, &escapeHatchWithdrawals,
		"SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;",
	)
	return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
		tracerr.Wrap(err)
}
// SetInitialSCVars sets the initial state of rollup, auction, wdelayer smart
// contract variables. This initial state is stored linked to block 0, which
// always exists in the DB and is used to store initialization data that
// always exists in the smart contracts.
func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
	auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
	txn, err := hdb.dbWrite.Beginx()
	if err != nil {
		return tracerr.Wrap(err)
	}
	defer func() {
		if err != nil {
			db.Rollback(txn)
		}
	}()
	// Force EthBlockNum to be 0 because it's the block used to link data
	// that belongs to the creation of the smart contracts
	rollup.EthBlockNum = 0
	auction.EthBlockNum = 0
	wDelayer.EthBlockNum = 0
	auction.DefaultSlotSetBidSlotNum = 0
	if err := hdb.setRollupVars(txn, rollup); err != nil {
		return tracerr.Wrap(err)
	}
	if err := hdb.setAuctionVars(txn, auction); err != nil {
		return tracerr.Wrap(err)
	}
	if err := hdb.setWDelayerVars(txn, wDelayer); err != nil {
		return tracerr.Wrap(err)
	}
	return tracerr.Wrap(txn.Commit())
}
// setExtraInfoForgedL1UserTxs sets the EffectiveAmount, EffectiveDepositAmount
// and EffectiveFromIdx of the given l1UserTxs (with an UPDATE)
func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx) error {
	if len(txs) == 0 {
		return nil
	}
	// Effective amounts are stored as success flags in the DB, with true value by default
	// to reduce the amount of updates. Therefore, only amounts that became uneffective
	// need to be updated to false. At the same time, all the txs that create
	// accounts (FromIdx == 0) are updated to set the EffectiveFromIdx.
	type txUpdate struct {
		ID                   common.TxID `db:"id"`
		AmountSuccess        bool        `db:"amount_success"`
		DepositAmountSuccess bool        `db:"deposit_amount_success"`
		EffectiveFromIdx     common.Idx  `db:"effective_from_idx"`
	}
	txUpdates := []txUpdate{}
	equal := func(a *big.Int, b *big.Int) bool {
		return a.Cmp(b) == 0
	}
	for i := range txs {
		amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount)
		depositAmountSuccess := equal(txs[i].DepositAmount, txs[i].EffectiveDepositAmount)
		if !amountSuccess || !depositAmountSuccess || txs[i].FromIdx == 0 {
			txUpdates = append(txUpdates, txUpdate{
				ID:                   txs[i].TxID,
				AmountSuccess:        amountSuccess,
				DepositAmountSuccess: depositAmountSuccess,
				EffectiveFromIdx:     txs[i].EffectiveFromIdx,
			})
		}
	}
	const query string = `
		UPDATE tx SET
			amount_success = tx_update.amount_success,
			deposit_amount_success = tx_update.deposit_amount_success,
			effective_from_idx = tx_update.effective_from_idx
		FROM (VALUES
			(NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL, NULL::::BIGINT),
			(:id, :amount_success, :deposit_amount_success, :effective_from_idx)
		) as tx_update (id, amount_success, deposit_amount_success, effective_from_idx)
		WHERE tx.id = tx_update.id;
	`
	if len(txUpdates) > 0 {
		if _, err := sqlx.NamedExec(d, query, txUpdates); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}
// AddBlockSCData stores all the information of a block retrieved by the
// Synchronizer. Blocks should be inserted in order, leaving no gaps because
// the pagination system of the API/DB depends on this. Within blocks, all
// items should also be in the correct order (Accounts, Tokens, Txs, etc.)
func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
	txn, err := hdb.dbWrite.Beginx()
	if err != nil {
		return tracerr.Wrap(err)
	}
	defer func() {
		if err != nil {
			db.Rollback(txn)
		}
	}()
	// Add block
	if err := hdb.addBlock(txn, &blockData.Block); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Coordinators
	if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Bids
	if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Tokens
	if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil {
		return tracerr.Wrap(err)
	}
	// Prepare user L1 txs to be added.
	// They must be added before the batch that will forge them (which can be in the same block)
	// and after the account that they will be sent to (which can also be in the same block).
	// Note: insert order is not relevant since item_id will be updated by a DB trigger when
	// the batch that forges those txs is inserted
	userL1s := make(map[common.BatchNum][]common.L1Tx)
	for i := range blockData.Rollup.L1UserTxs {
		batchThatForgesIsInTheBlock := false
		for _, batch := range blockData.Rollup.Batches {
			if batch.Batch.ForgeL1TxsNum != nil &&
				*batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum {
				// Tx is forged in this block. It's guaranteed that:
				// * the first batch of the block won't forge user L1 txs that have been added in this block
				// * batch nums are sequential, therefore it's safe to add the tx at batch.BatchNum - 1
				batchThatForgesIsInTheBlock = true
				addAtBatchNum := batch.Batch.BatchNum - 1
				userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i])
				break
			}
		}
		if !batchThatForgesIsInTheBlock {
			// Use artificial batchNum 0 to add txs that are not forged in this block
			// after all the accounts of the block have been added
			userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i])
		}
	}
	// Add Batches
	for i := range blockData.Rollup.Batches {
		batch := &blockData.Rollup.Batches[i]
		// Add Batch: this will trigger an update on the DB
		// that will set the batch num of forged L1 txs in this batch
		if err = hdb.addBatch(txn, &batch.Batch); err != nil {
			return tracerr.Wrap(err)
		}
		// Add accounts
		if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil {
			return tracerr.Wrap(err)
		}
		// Add account updates, if any
		if err := hdb.addAccountUpdates(txn, batch.UpdatedAccounts); err != nil {
			return tracerr.Wrap(err)
		}
		// Set the EffectiveAmount and EffectiveDepositAmount of all the
		// L1UserTxs that have been forged in this batch
		if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil {
			return tracerr.Wrap(err)
		}
		// Add forged L1 coordinator txs
		if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil {
			return tracerr.Wrap(err)
		}
		// Add L2 txs
		if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil {
			return tracerr.Wrap(err)
		}
		// Add user L1 txs that will be forged in the next batch
		if userL1sOfBatch, ok := userL1s[batch.Batch.BatchNum]; ok {
			if err := hdb.addL1Txs(txn, userL1sOfBatch); err != nil {
				return tracerr.Wrap(err)
			}
		}
		// Add exit tree
		if err := hdb.addExitTree(txn, batch.ExitTree); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Add user L1 txs that won't be forged in this block
	if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok {
		if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Set SC Vars if there was an update
	if blockData.Rollup.Vars != nil {
		if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	if blockData.Auction.Vars != nil {
		if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	if blockData.WDelayer.Vars != nil {
		if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Update withdrawals in exit tree table
	if err := hdb.updateExitTree(txn, blockData.Block.Num,
		blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Escape Hatch Withdrawals
	if err := hdb.addEscapeHatchWithdrawals(txn,
		blockData.WDelayer.EscapeHatchWithdrawals); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Bucket withdrawal updates
	if err := hdb.addBucketUpdates(txn, blockData.Rollup.UpdateBucketWithdraw); err != nil {
		return tracerr.Wrap(err)
	}
	// Add Token exchange updates
	if err := hdb.addTokenExchanges(txn, blockData.Rollup.TokenExchanges); err != nil {
		return tracerr.Wrap(err)
	}
	return tracerr.Wrap(txn.Commit())
}
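The insertion-order bookkeeping in AddBlockSCData (user L1 txs forged by batch N keyed at N-1, txs not forged in this block keyed at the artificial 0) can be sketched with plain maps. Everything here is a simplified stand-in for the real common structs, assuming one tx whose forging batch is in the block and one whose is not:

```go
package main

import "fmt"

func main() {
	type l1Tx struct{ toForgeL1TxsNum int64 }

	// forgeL1TxsNum -> batchNum for the batches present in this block.
	batchesInBlock := map[int64]int64{7: 3, 8: 4}
	txs := []l1Tx{
		{toForgeL1TxsNum: 7}, // forged by batch 3 in this block
		{toForgeL1TxsNum: 9}, // not forged in this block
	}

	userL1s := make(map[int64][]l1Tx)
	for _, tx := range txs {
		if batchNum, ok := batchesInBlock[tx.toForgeL1TxsNum]; ok {
			// Forged in this block: insert right after batchNum - 1,
			// so the tx exists before the batch that forges it.
			userL1s[batchNum-1] = append(userL1s[batchNum-1], tx)
		} else {
			// Artificial key 0: inserted after all the block's batches.
			userL1s[0] = append(userL1s[0], tx)
		}
	}
	fmt.Println(len(userL1s[2]), len(userL1s[0])) // 1 1
}
```

The same invariant the real code relies on holds here: because batch numbers are sequential, the tx grouped under N-1 is flushed just before batch N is inserted.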
// GetCoordinatorAPI returns a coordinator by its bidderAddr
func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
	coordinator := &CoordinatorAPI{}
	err := meddler.QueryRow(
		hdb.dbRead, coordinator,
		"SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
		bidderAddr,
	)
	return coordinator, tracerr.Wrap(err)
}

// AddAuctionVars inserts auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
	return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "auction_vars", auctionVars))
}

// GetTokensTest returns all the tokens; used only in a testing context
func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
	tokens := []*TokenWithUSD{}
	if err := meddler.QueryAll(
		hdb.dbRead, &tokens,
		"SELECT * FROM token;",
	); err != nil {
		return nil, tracerr.Wrap(err)
	}
	if len(tokens) == 0 {
		return []TokenWithUSD{}, nil
	}
	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil
}