
1245 lines
42 KiB

Update missing parts, improve til, and more

- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved the BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to only sync one block per call, and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount or Amount is not used, set it to 0, so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
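The "no *big.Int is nil" rule above can be sketched as a small normalization step. This is an illustrative struct and helper, not the real `common.L1Tx` definition; the field names follow the commit message but everything else is an assumption:

```go
package main

import (
	"fmt"
	"math/big"
)

// L1Tx is a simplified, hypothetical sketch of an L1 transaction with the
// two *big.Int fields the commit message mentions.
type L1Tx struct {
	Amount     *big.Int
	LoadAmount *big.Int
}

// normalize sets unused *big.Int fields to zero so that no pointer is nil,
// mirroring what the commit describes for til-generated L1Txs.
func (tx *L1Tx) normalize() {
	if tx.Amount == nil {
		tx.Amount = big.NewInt(0)
	}
	if tx.LoadAmount == nil {
		tx.LoadAmount = big.NewInt(0)
	}
}

func main() {
	tx := L1Tx{}
	tx.normalize()
	fmt.Println(tx.Amount, tx.LoadAmount) // prints "0 0"
}
```

Defaulting to a zero-valued big.Int rather than nil avoids nil-pointer panics in serialization and comparison code downstream.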
Update coordinator to work better under real net

- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs, the process terminates unconditionally
- coordinator
  - Store stats without a pointer
  - In all functions that send a variable via channel, check for context done to avoid a deadlock (due to no process reading from the channel, which has no queue) when the node is stopped
  - Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and next slot
  - Update tests due to smart contract changes in slot handling and minimum bid defaults
  - TxManager
    - Add consts, vars and stats to allow evaluating `canForge`
    - Add `canForge` method (not used yet)
    - Store batch and nonce status (last success and last pending)
    - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs)
    - Handle the (common) case of the receipt not being found right after the tx is sent
    - Don't start the main loop until we get an initial message for the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
    - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-the-pipeline messages when multiple txs are detected to have failed consecutively. Also, future txs of the same pipeline after a discarded tx are discarded, and their nonces reused
    - Robust handling of nonces:
      - If geth returns "nonce too low", increase it
      - If geth returns "nonce too high", decrease it
      - If geth returns "underpriced", increase the gas price
      - If geth returns "replacement underpriced", increase the gas price
    - Add support for resending transactions after a timeout
    - Store `BatchInfos` in a queue
  - Pipeline
    - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with the failed batch number, so that on restart, non-failed batches are not repeated
    - When resetting the stateDB, if possible reset from the local checkpoint instead of from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced
    - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator
  - Avoid forging when we haven't reached the rollup genesis block number
  - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
  - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
  - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined
  - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the batch will be discarded if we can't forge at block 15
  - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction is resent (reusing the nonce) with a newly calculated gas price
  - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
  - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache
  - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the methods needed to create the auth object for transactions manually, so that the nonce, gas price, gas limit, etc. can be set manually
  - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
  - In stats, add `NextSlot`
  - In stats, store the full last batch instead of just the last batch number
  - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event
3 years ago
Redo coordinator structure, connect API to node

- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add the BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum the specified `DefaultSlotSetBid` starts applying
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update test (which is a minimal test to manually see if the coordinator starts)
- HistoryDB:
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
  - Add an Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
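The switch from stop/stopped channels to context.Context plus sync.WaitGroup, combined with message passing for sync updates, can be sketched as follows. The `Coordinator` and `MsgSyncBlock` shapes here are illustrative only, not the repository's actual types:

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// MsgSyncBlock is a hypothetical message the Synchronizer side would send
// when the sync advances a block.
type MsgSyncBlock struct{ BlockNum int64 }

// Coordinator sketches the commit's pattern: goroutines are started and
// stopped by the coordinator itself, using a cancelable context instead of
// dedicated stop/stopped channels.
type Coordinator struct {
	msgCh  chan MsgSyncBlock
	cancel context.CancelFunc
	wg     sync.WaitGroup
}

func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			// Selecting on ctx.Done() alongside the message channel is what
			// avoids the deadlock the commit describes when the node stops
			// while nobody is reading from an unbuffered channel.
			select {
			case <-ctx.Done():
				return
			case msg := <-c.msgCh:
				fmt.Println("sync advanced to block", msg.BlockNum)
			}
		}
	}()
}

// Stop cancels the context and waits for all goroutines to finish.
func (c *Coordinator) Stop() {
	c.cancel()
	c.wg.Wait()
}

func main() {
	c := &Coordinator{msgCh: make(chan MsgSyncBlock)}
	c.Start()
	c.msgCh <- MsgSyncBlock{BlockNum: 42}
	time.Sleep(10 * time.Millisecond)
	c.Stop()
	fmt.Println("stopped")
}
```

With this layout, shutdown is a single `cancel()` that every goroutine observes, and `wg.Wait()` replaces the per-goroutine "stopped" acknowledgment channels.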
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. 
- Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
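The `ScheduleBatchBlocksAheadCheck` rule can be sketched as a small decision function. This is a minimal sketch under the block-10 / look-ahead-5 example given above; the function names are hypothetical, not the coordinator's actual API.

```go
package main

import "fmt"

// canForgeFn reports whether our forger address is allowed to forge at a given
// block; in the real coordinator this is backed by auction state.
type canForgeFn func(blockNum int64) bool

// shouldStopPipeline applies the look-ahead check described above: besides the
// next block, also check blocksAhead blocks into the future, and stop
// scheduling new batches if we cannot forge there.
func shouldStopPipeline(current, blocksAhead int64, canForge canForgeFn) bool {
	if !canForge(current + 1) {
		return true
	}
	return !canForge(current + blocksAhead)
}

func main() {
	// Suppose we can forge up to (and including) block 12 only.
	canForge := func(b int64) bool { return b <= 12 }
	// At block 10 with a 5-block look-ahead: block 11 is forgeable but
	// block 15 is not, so the pipeline stops, matching the example above.
	fmt.Println(shouldStopPipeline(10, 5, canForge)) // true
}
```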
Update coordinator, call all api update functions
- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer:
  - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events and 2 deposit events).
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes.
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1lL2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch.
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
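GetLastL1BatchBlockNum feeds a simple deadline check: once enough blocks have passed since the last L1 batch, the coordinator must schedule an L1+L2 batch so pending L1UserTxs get forged in time. A hedged sketch of that decision; the function name and deadline arithmetic are illustrative, not the actual shouldL1lL2Batch implementation:

```go
package main

import "fmt"

// shouldForgeL1Batch reports whether the next batch must include L1 user txs,
// based on how many blocks have passed since the last forged L1 batch
// (obtained from HistoryDB's GetLastL1BatchBlockNum) and a timeout in blocks.
func shouldForgeL1Batch(currentBlock, lastL1BatchBlock, l1BatchTimeoutBlocks int64) bool {
	return currentBlock-lastL1BatchBlock >= l1BatchTimeoutBlocks
}

func main() {
	// 60 blocks since the last L1 batch, 50-block timeout: forge one now.
	fmt.Println(shouldForgeL1Batch(100, 40, 50)) // true
	// Only 20 blocks elapsed: a plain L2 batch is still fine.
	fmt.Println(shouldForgeL1Batch(100, 80, 50)) // false
}
```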
Update missing parts, improve til, and more
- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until the requirements of this method are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to only sync one block per call, and return useful information.
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero).
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil.
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer the BatchNum is not known yet (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Redo coordinator structure, connect API to node
- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int when required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer.
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum the specified `DefaultSlotSetBid` starts applying.
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node.
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead.
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool use context instead of a stop channel.
  - Use message passing to notify the coordinator about sync updates and reorgs
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready.
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time.
- Update test (which is a minimal test to manually see if the coordinator starts)
- HistoryDB:
  - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
  - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer.
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks.
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs.
- Synchronizer:
  - Add an Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block.
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
package historydb

import (
	"math"
	"math/big"
	"strings"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/jmoiron/sqlx"

	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)
// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the historic data of the rollup
type HistoryDB struct {
	dbRead     *sqlx.DB
	dbWrite    *sqlx.DB
	apiConnCon *db.APIConnectionController
}

// NewHistoryDB initializes the DB
func NewHistoryDB(dbRead, dbWrite *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
	return &HistoryDB{
		dbRead:     dbRead,
		dbWrite:    dbWrite,
		apiConnCon: apiConnCon,
	}
}

// DB returns a pointer to the HistoryDB's write DB. This method should be
// used only for internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.dbWrite
}
// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.dbWrite, block) }

func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return tracerr.Wrap(meddler.Insert(d, "block", block))
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return tracerr.Wrap(hdb.addBlocks(hdb.dbWrite, blocks))
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks,
	))
}
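The `%s` in the INSERT template above is filled in by the bulk-insert helper with one placeholder tuple per row, so all blocks land in a single statement. A self-contained sketch of how such a placeholder fragment can be built (illustrative only; `db.BulkInsert`'s actual internals may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// valuesPlaceholders builds the "($1, $2, $3), ($4, $5, $6), ..." fragment
// that a bulk-insert helper could substitute for %s in an INSERT template,
// numbering PostgreSQL placeholders across rows and columns.
func valuesPlaceholders(rows, cols int) string {
	tuples := make([]string, rows)
	n := 1
	for r := 0; r < rows; r++ {
		ps := make([]string, cols)
		for c := 0; c < cols; c++ {
			ps[c] = fmt.Sprintf("$%d", n)
			n++
		}
		tuples[r] = "(" + strings.Join(ps, ", ") + ")"
	}
	return strings.Join(tuples, ", ")
}

func main() {
	// Two block rows with the three columns used by addBlocks
	// (eth_block_num, timestamp, hash).
	fmt.Println(valuesPlaceholders(2, 3))
}
```

Batching the rows into one statement avoids a round trip per block, which matters when the synchronizer catches up over many blocks at once.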
  61. // GetBlock retrieve a block from the DB, given a block number
  62. func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
  63. block := &common.Block{}
  64. err := meddler.QueryRow(
  65. hdb.dbRead, block,
  66. "SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
  67. )
  68. return block, tracerr.Wrap(err)
  69. }
  70. // GetAllBlocks retrieve all blocks from the DB
  71. func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
  72. var blocks []*common.Block
  73. err := meddler.QueryAll(
  74. hdb.dbRead, &blocks,
  75. "SELECT * FROM block ORDER BY eth_block_num;",
  76. )
  77. return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
  78. }
  79. // getBlocks retrieve blocks from the DB, given a range of block numbers defined by from and to
  80. func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
  81. var blocks []*common.Block
  82. err := meddler.QueryAll(
  83. hdb.dbRead, &blocks,
  84. "SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
  85. from, to,
  86. )
  87. return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
  88. }
89. // GetLastBlock retrieves the block with the highest block number from the DB
  90. func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
  91. block := &common.Block{}
  92. err := meddler.QueryRow(
  93. hdb.dbRead, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
  94. )
  95. return block, tracerr.Wrap(err)
  96. }
97. // AddBatch inserts a Batch into the DB
  98. func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.dbWrite, batch) }
  99. func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
  100. // Calculate total collected fees in USD
  101. // Get IDs of collected tokens for fees
  102. tokenIDs := []common.TokenID{}
  103. for id := range batch.CollectedFees {
  104. tokenIDs = append(tokenIDs, id)
  105. }
  106. // Get USD value of the tokens
  107. type tokenPrice struct {
  108. ID common.TokenID `meddler:"token_id"`
  109. USD *float64 `meddler:"usd"`
  110. Decimals int `meddler:"decimals"`
  111. }
  112. var tokenPrices []*tokenPrice
  113. if len(tokenIDs) > 0 {
  114. query, args, err := sqlx.In(
  115. "SELECT token_id, usd, decimals FROM token WHERE token_id IN (?);",
  116. tokenIDs,
  117. )
  118. if err != nil {
  119. return tracerr.Wrap(err)
  120. }
  121. query = hdb.dbWrite.Rebind(query)
  122. if err := meddler.QueryAll(
  123. hdb.dbWrite, &tokenPrices, query, args...,
  124. ); err != nil {
  125. return tracerr.Wrap(err)
  126. }
  127. }
  128. // Calculate total collected
  129. var total float64
  130. for _, tokenPrice := range tokenPrices {
  131. if tokenPrice.USD == nil {
  132. continue
  133. }
  134. f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
  135. amount, _ := f.Float64()
  136. total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
  137. }
  138. batch.TotalFeesUSD = &total
  139. // Insert to DB
  140. return tracerr.Wrap(meddler.Insert(d, "batch", batch))
  141. }
142. // AddBatches inserts Batches into the DB
  143. func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
  144. return tracerr.Wrap(hdb.addBatches(hdb.dbWrite, batches))
  145. }
  146. func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
  147. for i := 0; i < len(batches); i++ {
  148. if err := hdb.addBatch(d, &batches[i]); err != nil {
  149. return tracerr.Wrap(err)
  150. }
  151. }
  152. return nil
  153. }
  154. // GetBatch returns the batch with the given batchNum
  155. func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error) {
  156. var batch common.Batch
  157. err := meddler.QueryRow(
  158. hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
  159. batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
  160. batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
  161. batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`,
  162. batchNum,
  163. )
  164. return &batch, tracerr.Wrap(err)
  165. }
166. // GetAllBatches retrieves all batches from the DB
  167. func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
  168. var batches []*common.Batch
  169. err := meddler.QueryAll(
  170. hdb.dbRead, &batches,
  171. `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
  172. batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
  173. batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
  174. ORDER BY item_id;`,
  175. )
  176. return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
  177. }
178. // GetBatches retrieves batches from the DB, given a range of batch numbers defined by from and to
  179. func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
  180. var batches []*common.Batch
  181. err := meddler.QueryAll(
  182. hdb.dbRead, &batches,
  183. `SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
  184. state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
  185. FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
  186. from, to,
  187. )
  188. return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err)
  189. }
  190. // GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
  191. // batch within a slot
  192. func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
  193. row := hdb.dbRead.QueryRow(
  194. `SELECT eth_block_num FROM batch
  195. WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
  196. )
  197. var blockNum int64
  198. return blockNum, tracerr.Wrap(row.Scan(&blockNum))
  199. }
  200. // GetLastBatchNum returns the BatchNum of the latest forged batch
  201. func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
  202. row := hdb.dbRead.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
  203. var batchNum common.BatchNum
  204. return batchNum, tracerr.Wrap(row.Scan(&batchNum))
  205. }
  206. // GetLastBatch returns the last forged batch
  207. func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
  208. var batch common.Batch
  209. err := meddler.QueryRow(
  210. hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
  211. batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
  212. batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
  213. batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
  214. )
  215. return &batch, tracerr.Wrap(err)
  216. }
217. // GetLastL1BatchBlockNum returns the blockNum of the latest forged L1Batch
  218. func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
  219. row := hdb.dbRead.QueryRow(`SELECT eth_block_num FROM batch
  220. WHERE forge_l1_txs_num IS NOT NULL
  221. ORDER BY batch_num DESC LIMIT 1;`)
  222. var blockNum int64
  223. return blockNum, tracerr.Wrap(row.Scan(&blockNum))
  224. }
  225. // GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
226. // batches. If there's no batch in the DB, (nil, nil) is returned.
  227. func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
  228. row := hdb.dbRead.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
  229. lastL1TxsNum := new(int64)
  230. return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
  231. }
  232. // Reorg deletes all the information that was added into the DB after the
  233. // lastValidBlock. If lastValidBlock is negative, all block information is
  234. // deleted.
  235. func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
  236. var err error
  237. if lastValidBlock < 0 {
  238. _, err = hdb.dbWrite.Exec("DELETE FROM block;")
  239. } else {
  240. _, err = hdb.dbWrite.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
  241. }
  242. return tracerr.Wrap(err)
  243. }
244. // AddBids inserts Bids into the DB
  245. func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.dbWrite, bids) }
  246. func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
  247. if len(bids) == 0 {
  248. return nil
  249. }
  250. // TODO: check the coordinator info
  251. return tracerr.Wrap(db.BulkInsert(
  252. d,
  253. "INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
  254. bids,
  255. ))
  256. }
257. // GetAllBids retrieves all bids from the DB
  258. func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
  259. var bids []*common.Bid
  260. err := meddler.QueryAll(
  261. hdb.dbRead, &bids,
  262. `SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
  263. ORDER BY item_id;`,
  264. )
  265. return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err)
  266. }
267. // GetBestBidCoordinator returns the highest bid of a slot (by slotNum), along with the coordinator's forger address and URL
  268. func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
  269. bidCoord := &common.BidCoordinator{}
  270. err := meddler.QueryRow(
  271. hdb.dbRead, bidCoord,
  272. `SELECT (
  273. SELECT default_slot_set_bid
  274. FROM auction_vars
  275. WHERE default_slot_set_bid_slot_num <= $1
  276. ORDER BY eth_block_num DESC LIMIT 1
  277. ),
  278. bid.slot_num, bid.bid_value, bid.bidder_addr,
  279. coordinator.forger_addr, coordinator.url
  280. FROM bid
  281. INNER JOIN (
  282. SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
  283. GROUP BY bidder_addr
  284. ) c ON bid.bidder_addr = c.bidder_addr
  285. INNER JOIN coordinator ON c.item_id = coordinator.item_id
  286. WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`,
  287. slotNum)
  288. return bidCoord, tracerr.Wrap(err)
  289. }
290. // AddCoordinators inserts Coordinators into the DB
  291. func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
  292. return tracerr.Wrap(hdb.addCoordinators(hdb.dbWrite, coordinators))
  293. }
  294. func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
  295. if len(coordinators) == 0 {
  296. return nil
  297. }
  298. return tracerr.Wrap(db.BulkInsert(
  299. d,
  300. "INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
  301. coordinators,
  302. ))
  303. }
304. // AddExitTree inserts the exit tree into the DB
  305. func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
  306. return tracerr.Wrap(hdb.addExitTree(hdb.dbWrite, exitTree))
  307. }
  308. func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
  309. if len(exitTree) == 0 {
  310. return nil
  311. }
  312. return tracerr.Wrap(db.BulkInsert(
  313. d,
  314. "INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
  315. "instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
  316. exitTree,
  317. ))
  318. }
  319. func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
  320. rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error {
  321. if len(rollupWithdrawals) == 0 && len(wDelayerWithdrawals) == 0 {
  322. return nil
  323. }
  324. type withdrawal struct {
  325. BatchNum int64 `db:"batch_num"`
  326. AccountIdx int64 `db:"account_idx"`
  327. InstantWithdrawn *int64 `db:"instant_withdrawn"`
  328. DelayedWithdrawRequest *int64 `db:"delayed_withdraw_request"`
  329. DelayedWithdrawn *int64 `db:"delayed_withdrawn"`
  330. Owner *ethCommon.Address `db:"owner"`
  331. Token *ethCommon.Address `db:"token"`
  332. }
  333. withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals))
  334. for i := range rollupWithdrawals {
  335. info := &rollupWithdrawals[i]
  336. withdrawals[i] = withdrawal{
  337. BatchNum: int64(info.NumExitRoot),
  338. AccountIdx: int64(info.Idx),
  339. }
  340. if info.InstantWithdraw {
  341. withdrawals[i].InstantWithdrawn = &blockNum
  342. } else {
  343. withdrawals[i].DelayedWithdrawRequest = &blockNum
  344. withdrawals[i].Owner = &info.Owner
  345. withdrawals[i].Token = &info.Token
  346. }
  347. }
  348. for i := range wDelayerWithdrawals {
  349. info := &wDelayerWithdrawals[i]
  350. withdrawals[len(rollupWithdrawals)+i] = withdrawal{
  351. DelayedWithdrawn: &blockNum,
  352. Owner: &info.Owner,
  353. Token: &info.Token,
  354. }
  355. }
  356. // In VALUES we set an initial row of NULLs to set the types of each
357. // variable passed as an argument
  358. const query string = `
  359. UPDATE exit_tree e SET
  360. instant_withdrawn = d.instant_withdrawn,
  361. delayed_withdraw_request = CASE
  362. WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request
  363. ELSE d.delayed_withdraw_request
  364. END,
  365. delayed_withdrawn = d.delayed_withdrawn,
  366. owner = d.owner,
  367. token = d.token
  368. FROM (VALUES
  369. (NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA),
  370. (:batch_num,
  371. :account_idx,
  372. :instant_withdrawn,
  373. :delayed_withdraw_request,
  374. :delayed_withdrawn,
  375. :owner,
  376. :token)
  377. ) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token)
  378. WHERE
  379. (d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR
  380. (d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token);
  381. `
  382. if len(withdrawals) > 0 {
  383. if _, err := sqlx.NamedExec(d, query, withdrawals); err != nil {
  384. return tracerr.Wrap(err)
  385. }
  386. }
  387. return nil
  388. }
389. // AddToken inserts a token into the DB
  390. func (hdb *HistoryDB) AddToken(token *common.Token) error {
  391. return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "token", token))
  392. }
393. // AddTokens inserts tokens into the DB
394. func (hdb *HistoryDB) AddTokens(tokens []common.Token) error {
395. return tracerr.Wrap(hdb.addTokens(hdb.dbWrite, tokens))
  396. }
  397. func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
  398. if len(tokens) == 0 {
  399. return nil
  400. }
  401. // Sanitize name and symbol
  402. for i, token := range tokens {
  403. token.Name = strings.ToValidUTF8(token.Name, " ")
  404. token.Symbol = strings.ToValidUTF8(token.Symbol, " ")
  405. tokens[i] = token
  406. }
  407. return tracerr.Wrap(db.BulkInsert(
  408. d,
  409. `INSERT INTO token (
  410. token_id,
  411. eth_block_num,
  412. eth_addr,
  413. name,
  414. symbol,
  415. decimals
  416. ) VALUES %s;`,
  417. tokens,
  418. ))
  419. }
  420. // UpdateTokenValue updates the USD value of a token. Value is the price in
  421. // USD of a normalized token (1 token = 10^decimals units)
  422. func (hdb *HistoryDB) UpdateTokenValue(tokenAddr ethCommon.Address, value float64) error {
  423. _, err := hdb.dbWrite.Exec(
  424. "UPDATE token SET usd = $1 WHERE eth_addr = $2;",
  425. value, tokenAddr,
  426. )
  427. return tracerr.Wrap(err)
  428. }
  429. // GetToken returns a token from the DB given a TokenID
  430. func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
  431. token := &TokenWithUSD{}
  432. err := meddler.QueryRow(
  433. hdb.dbRead, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
  434. )
  435. return token, tracerr.Wrap(err)
  436. }
  437. // GetAllTokens returns all tokens from the DB
  438. func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
  439. var tokens []*TokenWithUSD
  440. err := meddler.QueryAll(
  441. hdb.dbRead, &tokens,
  442. "SELECT * FROM token ORDER BY token_id;",
  443. )
  444. return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
  445. }
  446. // GetTokenSymbolsAndAddrs returns all the token symbols and addresses from the DB
  447. func (hdb *HistoryDB) GetTokenSymbolsAndAddrs() ([]TokenSymbolAndAddr, error) {
  448. var tokens []*TokenSymbolAndAddr
  449. err := meddler.QueryAll(
  450. hdb.dbRead, &tokens,
  451. "SELECT symbol, eth_addr FROM token;",
  452. )
  453. return db.SlicePtrsToSlice(tokens).([]TokenSymbolAndAddr), tracerr.Wrap(err)
  454. }
455. // AddAccounts inserts accounts into the DB
  456. func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
  457. return tracerr.Wrap(hdb.addAccounts(hdb.dbWrite, accounts))
  458. }
  459. func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
  460. if len(accounts) == 0 {
  461. return nil
  462. }
  463. return tracerr.Wrap(db.BulkInsert(
  464. d,
  465. `INSERT INTO account (
  466. idx,
  467. token_id,
  468. batch_num,
  469. bjj,
  470. eth_addr
  471. ) VALUES %s;`,
  472. accounts,
  473. ))
  474. }
  475. // GetAllAccounts returns a list of accounts from the DB
  476. func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
  477. var accs []*common.Account
  478. err := meddler.QueryAll(
  479. hdb.dbRead, &accs,
  480. "SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;",
  481. )
  482. return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
  483. }
  484. // AddAccountUpdates inserts accUpdates into the DB
  485. func (hdb *HistoryDB) AddAccountUpdates(accUpdates []common.AccountUpdate) error {
  486. return tracerr.Wrap(hdb.addAccountUpdates(hdb.dbWrite, accUpdates))
  487. }
  488. func (hdb *HistoryDB) addAccountUpdates(d meddler.DB, accUpdates []common.AccountUpdate) error {
  489. if len(accUpdates) == 0 {
  490. return nil
  491. }
  492. return tracerr.Wrap(db.BulkInsert(
  493. d,
  494. `INSERT INTO account_update (
  495. eth_block_num,
  496. batch_num,
  497. idx,
  498. nonce,
  499. balance
  500. ) VALUES %s;`,
  501. accUpdates,
  502. ))
  503. }
  504. // GetAllAccountUpdates returns all the AccountUpdate from the DB
  505. func (hdb *HistoryDB) GetAllAccountUpdates() ([]common.AccountUpdate, error) {
  506. var accUpdates []*common.AccountUpdate
  507. err := meddler.QueryAll(
  508. hdb.dbRead, &accUpdates,
  509. "SELECT eth_block_num, batch_num, idx, nonce, balance FROM account_update ORDER BY idx;",
  510. )
  511. return db.SlicePtrsToSlice(accUpdates).([]common.AccountUpdate), tracerr.Wrap(err)
  512. }
513. // AddL1Txs inserts L1 txs into the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
514. // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
515. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
516. // EffectiveAmount and EffectiveDepositAmount are set to default values by the DB.
  517. func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
  518. return tracerr.Wrap(hdb.addL1Txs(hdb.dbWrite, l1txs))
  519. }
520. // addL1Txs inserts L1 txs into the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
521. // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
522. // BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
523. // EffectiveAmount and EffectiveDepositAmount are set to default values by the DB.
  524. func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
  525. if len(l1txs) == 0 {
  526. return nil
  527. }
  528. txs := []txWrite{}
  529. for i := 0; i < len(l1txs); i++ {
  530. af := new(big.Float).SetInt(l1txs[i].Amount)
  531. amountFloat, _ := af.Float64()
  532. laf := new(big.Float).SetInt(l1txs[i].DepositAmount)
  533. depositAmountFloat, _ := laf.Float64()
  534. var effectiveFromIdx *common.Idx
  535. if l1txs[i].UserOrigin {
  536. if l1txs[i].Type != common.TxTypeCreateAccountDeposit &&
  537. l1txs[i].Type != common.TxTypeCreateAccountDepositTransfer {
  538. effectiveFromIdx = &l1txs[i].FromIdx
  539. }
  540. } else {
  541. effectiveFromIdx = &l1txs[i].EffectiveFromIdx
  542. }
  543. txs = append(txs, txWrite{
  544. // Generic
  545. IsL1: true,
  546. TxID: l1txs[i].TxID,
  547. Type: l1txs[i].Type,
  548. Position: l1txs[i].Position,
  549. FromIdx: &l1txs[i].FromIdx,
  550. EffectiveFromIdx: effectiveFromIdx,
  551. ToIdx: l1txs[i].ToIdx,
  552. Amount: l1txs[i].Amount,
  553. AmountFloat: amountFloat,
  554. TokenID: l1txs[i].TokenID,
  555. BatchNum: l1txs[i].BatchNum,
  556. EthBlockNum: l1txs[i].EthBlockNum,
  557. // L1
  558. ToForgeL1TxsNum: l1txs[i].ToForgeL1TxsNum,
  559. UserOrigin: &l1txs[i].UserOrigin,
  560. FromEthAddr: &l1txs[i].FromEthAddr,
  561. FromBJJ: &l1txs[i].FromBJJ,
  562. DepositAmount: l1txs[i].DepositAmount,
  563. DepositAmountFloat: &depositAmountFloat,
  564. })
  565. }
  566. return tracerr.Wrap(hdb.addTxs(d, txs))
  567. }
568. // AddL2Txs inserts L2 txs into the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  569. func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
  570. return tracerr.Wrap(hdb.addL2Txs(hdb.dbWrite, l2txs))
  571. }
572. // addL2Txs inserts L2 txs into the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
  573. func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error {
  574. txs := []txWrite{}
  575. for i := 0; i < len(l2txs); i++ {
  576. f := new(big.Float).SetInt(l2txs[i].Amount)
  577. amountFloat, _ := f.Float64()
  578. txs = append(txs, txWrite{
  579. // Generic
  580. IsL1: false,
  581. TxID: l2txs[i].TxID,
  582. Type: l2txs[i].Type,
  583. Position: l2txs[i].Position,
  584. FromIdx: &l2txs[i].FromIdx,
  585. EffectiveFromIdx: &l2txs[i].FromIdx,
  586. ToIdx: l2txs[i].ToIdx,
  587. TokenID: l2txs[i].TokenID,
  588. Amount: l2txs[i].Amount,
  589. AmountFloat: amountFloat,
  590. BatchNum: &l2txs[i].BatchNum,
  591. EthBlockNum: l2txs[i].EthBlockNum,
  592. // L2
  593. Fee: &l2txs[i].Fee,
  594. Nonce: &l2txs[i].Nonce,
  595. })
  596. }
  597. return tracerr.Wrap(hdb.addTxs(d, txs))
  598. }
  599. func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
  600. if len(txs) == 0 {
  601. return nil
  602. }
  603. return tracerr.Wrap(db.BulkInsert(
  604. d,
  605. `INSERT INTO tx (
  606. is_l1,
  607. id,
  608. type,
  609. position,
  610. from_idx,
  611. effective_from_idx,
  612. to_idx,
  613. amount,
  614. amount_f,
  615. token_id,
  616. batch_num,
  617. eth_block_num,
  618. to_forge_l1_txs_num,
  619. user_origin,
  620. from_eth_addr,
  621. from_bjj,
  622. deposit_amount,
  623. deposit_amount_f,
  624. fee,
  625. nonce
  626. ) VALUES %s;`,
  627. txs,
  628. ))
  629. }
630. // GetAllExits returns all exits from the DB
  631. func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
  632. var exits []*common.ExitInfo
  633. err := meddler.QueryAll(
  634. hdb.dbRead, &exits,
  635. `SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
  636. exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
  637. exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
  638. )
  639. return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err)
  640. }
  641. // GetAllL1UserTxs returns all L1UserTxs from the DB
  642. func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
  643. var txs []*common.L1Tx
  644. err := meddler.QueryAll(
  645. hdb.dbRead, &txs,
  646. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  647. tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  648. tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE 0 END) AS effective_amount,
  649. tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE 0 END) AS effective_deposit_amount,
  650. tx.eth_block_num, tx.type, tx.batch_num
  651. FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
  652. )
  653. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  654. }
  655. // GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
  656. func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
  657. var txs []*common.L1Tx
  658. // Since the query specifies that only coordinator txs are returned, it's safe to assume
  659. // that returned txs will always have effective amounts
  660. err := meddler.QueryAll(
  661. hdb.dbRead, &txs,
  662. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  663. tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  664. tx.amount, tx.amount AS effective_amount,
  665. tx.deposit_amount, tx.deposit_amount AS effective_deposit_amount,
  666. tx.eth_block_num, tx.type, tx.batch_num
  667. FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE ORDER BY item_id;`,
  668. )
  669. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  670. }
  671. // GetAllL2Txs returns all L2Txs from the DB
  672. func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
  673. var txs []*common.L2Tx
  674. err := meddler.QueryAll(
  675. hdb.dbRead, &txs,
  676. `SELECT tx.id, tx.batch_num, tx.position,
  677. tx.from_idx, tx.to_idx, tx.amount, tx.token_id,
  678. tx.fee, tx.nonce, tx.type, tx.eth_block_num
  679. FROM tx WHERE is_l1 = FALSE ORDER BY item_id;`,
  680. )
  681. return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err)
  682. }
  683. // GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
  684. func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
  685. var txs []*common.L1Tx
  686. err := meddler.QueryAll(
  687. hdb.dbRead, &txs, // only L1 user txs can have batch_num set to null
  688. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  689. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  690. tx.amount, NULL AS effective_amount,
  691. tx.deposit_amount, NULL AS effective_deposit_amount,
  692. tx.eth_block_num, tx.type, tx.batch_num
  693. FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1
  694. ORDER BY position;`,
  695. toForgeL1TxsNum,
  696. )
  697. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  698. }
  699. // GetUnforgedL1UserFutureTxs gets L1 User Txs to be forged after the L1Batch
  700. // with toForgeL1TxsNum (in one of the future batches, not in the next one).
  701. func (hdb *HistoryDB) GetUnforgedL1UserFutureTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
  702. var txs []*common.L1Tx
  703. err := meddler.QueryAll(
  704. hdb.dbRead, &txs, // only L1 user txs can have batch_num set to null
  705. `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
  706. tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
  707. tx.amount, NULL AS effective_amount,
  708. tx.deposit_amount, NULL AS effective_deposit_amount,
  709. tx.eth_block_num, tx.type, tx.batch_num
  710. FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num > $1
  711. ORDER BY position;`,
  712. toForgeL1TxsNum,
  713. )
  714. return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
  715. }
716. // GetUnforgedL1UserTxsCount returns the count of L1 user txs (in open or
717. // frozen queues) that have not yet been forged
  718. func (hdb *HistoryDB) GetUnforgedL1UserTxsCount() (int, error) {
  719. row := hdb.dbRead.QueryRow(
  720. `SELECT COUNT(*) FROM tx WHERE batch_num IS NULL;`,
  721. )
  722. var count int
  723. return count, tracerr.Wrap(row.Scan(&count))
  724. }
725. // TODO: Think about changing all the queries that return a last value to queries that return the next valid value.
  726. // GetLastTxsPosition for a given to_forge_l1_txs_num
  727. func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
  728. row := hdb.dbRead.QueryRow(
  729. "SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
  730. toForgeL1TxsNum,
  731. )
  732. var lastL1TxsPosition int
  733. return lastL1TxsPosition, tracerr.Wrap(row.Scan(&lastL1TxsPosition))
  734. }
735. // GetSCVars returns the rollup, auction and wdelayer smart contract variables at their last update.
  736. func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables,
  737. *common.WDelayerVariables, error) {
  738. var rollup common.RollupVariables
  739. var auction common.AuctionVariables
  740. var wDelayer common.WDelayerVariables
  741. if err := meddler.QueryRow(hdb.dbRead, &rollup,
  742. "SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  743. return nil, nil, nil, tracerr.Wrap(err)
  744. }
  745. if err := meddler.QueryRow(hdb.dbRead, &auction,
  746. "SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  747. return nil, nil, nil, tracerr.Wrap(err)
  748. }
  749. if err := meddler.QueryRow(hdb.dbRead, &wDelayer,
  750. "SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
  751. return nil, nil, nil, tracerr.Wrap(err)
  752. }
  753. return &rollup, &auction, &wDelayer, nil
  754. }
  755. func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error {
  756. return tracerr.Wrap(meddler.Insert(d, "rollup_vars", rollup))
  757. }
  758. func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error {
  759. return tracerr.Wrap(meddler.Insert(d, "auction_vars", auction))
  760. }
  761. func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error {
  762. return tracerr.Wrap(meddler.Insert(d, "wdelayer_vars", wDelayer))
  763. }
  764. func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
  765. if len(bucketUpdates) == 0 {
  766. return nil
  767. }
  768. return tracerr.Wrap(db.BulkInsert(
  769. d,
  770. `INSERT INTO bucket_update (
  771. eth_block_num,
  772. num_bucket,
  773. block_stamp,
  774. withdrawals
  775. ) VALUES %s;`,
  776. bucketUpdates,
  777. ))
  778. }
779. // AddBucketUpdatesTest allows calling the unexported addBucketUpdates method;
780. // only for internal testing purposes
  781. func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.BucketUpdate) error {
  782. return hdb.addBucketUpdates(d, bucketUpdates)
  783. }
  784. // GetAllBucketUpdates retrieves all the bucket updates
  785. func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
  786. var bucketUpdates []*common.BucketUpdate
  787. err := meddler.QueryAll(
  788. hdb.dbRead, &bucketUpdates,
  789. `SELECT eth_block_num, num_bucket, block_stamp, withdrawals
  790. FROM bucket_update ORDER BY item_id;`,
  791. )
  792. return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
  793. }
  794. func (hdb *HistoryDB) getMinBidInfo(d meddler.DB,
  795. currentSlot, lastClosedSlot int64) ([]MinBidInfo, error) {
  796. minBidInfo := []*MinBidInfo{}
  797. query := `
  798. SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
  799. WHERE default_slot_set_bid_slot_num < $1
  800. ORDER BY default_slot_set_bid_slot_num DESC
  801. LIMIT $2;`
  802. err := meddler.QueryAll(d, &minBidInfo, query, lastClosedSlot, int(lastClosedSlot-currentSlot)+1)
  803. return db.SlicePtrsToSlice(minBidInfo).([]MinBidInfo), tracerr.Wrap(err)
  804. }
  805. func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
  806. if len(tokenExchanges) == 0 {
  807. return nil
  808. }
  809. return tracerr.Wrap(db.BulkInsert(
  810. d,
  811. `INSERT INTO token_exchange (
  812. eth_block_num,
  813. eth_addr,
  814. value_usd
  815. ) VALUES %s;`,
  816. tokenExchanges,
  817. ))
  818. }
  819. // GetAllTokenExchanges retrieves all the token exchanges
  820. func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
  821. var tokenExchanges []*common.TokenExchange
  822. err := meddler.QueryAll(
  823. hdb.dbRead, &tokenExchanges,
  824. "SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;",
  825. )
  826. return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
  827. }
func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
	escapeHatchWithdrawals []common.WDelayerEscapeHatchWithdrawal) error {
	if len(escapeHatchWithdrawals) == 0 {
		return nil
	}
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO escape_hatch_withdrawal (
			eth_block_num,
			who_addr,
			to_addr,
			token_addr,
			amount
		) VALUES %s;`,
		escapeHatchWithdrawals,
	))
}

// GetAllEscapeHatchWithdrawals retrieves all the escape hatch withdrawals
func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
	var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
	err := meddler.QueryAll(
		hdb.dbRead, &escapeHatchWithdrawals,
		"SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;",
	)
	return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
		tracerr.Wrap(err)
}

// SetInitialSCVars sets the initial state of the rollup, auction and wdelayer
// smart contract variables. This initial state is stored linked to block 0,
// which always exists in the DB and is used to store initialization data that
// has existed in the smart contracts since their creation.
func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
	auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
	txn, err := hdb.dbWrite.Beginx()
	if err != nil {
		return tracerr.Wrap(err)
	}
	defer func() {
		if err != nil {
			db.Rollback(txn)
		}
	}()
	// Force EthBlockNum to be 0 because it's the block used to link data
	// that belongs to the creation of the smart contracts
	rollup.EthBlockNum = 0
	auction.EthBlockNum = 0
	wDelayer.EthBlockNum = 0
	auction.DefaultSlotSetBidSlotNum = 0
	if err := hdb.setRollupVars(txn, rollup); err != nil {
		return tracerr.Wrap(err)
	}
	if err := hdb.setAuctionVars(txn, auction); err != nil {
		return tracerr.Wrap(err)
	}
	if err := hdb.setWDelayerVars(txn, wDelayer); err != nil {
		return tracerr.Wrap(err)
	}
	return tracerr.Wrap(txn.Commit())
}

// setExtraInfoForgedL1UserTxs sets the EffectiveAmount, EffectiveDepositAmount
// and EffectiveFromIdx of the given l1UserTxs (with an UPDATE)
func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx) error {
	if len(txs) == 0 {
		return nil
	}
	// Effective amounts are stored as success flags in the DB, with a true value
	// by default to reduce the amount of updates. Therefore, only amounts that
	// became ineffective need to be updated to false. At the same time, all the
	// txs that create accounts (FromIdx == 0) are updated to set the
	// EffectiveFromIdx.
	type txUpdate struct {
		ID                   common.TxID `db:"id"`
		AmountSuccess        bool        `db:"amount_success"`
		DepositAmountSuccess bool        `db:"deposit_amount_success"`
		EffectiveFromIdx     common.Idx  `db:"effective_from_idx"`
	}
	txUpdates := []txUpdate{}
	equal := func(a *big.Int, b *big.Int) bool {
		return a.Cmp(b) == 0
	}
	for i := range txs {
		amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount)
		depositAmountSuccess := equal(txs[i].DepositAmount, txs[i].EffectiveDepositAmount)
		if !amountSuccess || !depositAmountSuccess || txs[i].FromIdx == 0 {
			txUpdates = append(txUpdates, txUpdate{
				ID:                   txs[i].TxID,
				AmountSuccess:        amountSuccess,
				DepositAmountSuccess: depositAmountSuccess,
				EffectiveFromIdx:     txs[i].EffectiveFromIdx,
			})
		}
	}
	// Note: `::::` is unescaped by sqlx.NamedExec into the Postgres `::` cast.
	const query string = `
		UPDATE tx SET
			amount_success = tx_update.amount_success,
			deposit_amount_success = tx_update.deposit_amount_success,
			effective_from_idx = tx_update.effective_from_idx
		FROM (VALUES
			(NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL, NULL::::BIGINT),
			(:id, :amount_success, :deposit_amount_success, :effective_from_idx)
		) as tx_update (id, amount_success, deposit_amount_success, effective_from_idx)
		WHERE tx.id = tx_update.id;
	`
	if len(txUpdates) > 0 {
		if _, err := sqlx.NamedExec(d, query, txUpdates); err != nil {
			return tracerr.Wrap(err)
		}
	}
	return nil
}

// AddBlockSCData stores all the information of a block retrieved by the
// Synchronizer. Blocks should be inserted in order, leaving no gaps, because
// the pagination system of the API/DB depends on this. Within blocks, all
// items should also be in the correct order (Accounts, Tokens, Txs, etc.)
func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
	txn, err := hdb.dbWrite.Beginx()
	if err != nil {
		return tracerr.Wrap(err)
	}
	defer func() {
		if err != nil {
			db.Rollback(txn)
		}
	}()

	// Add block
	if err := hdb.addBlock(txn, &blockData.Block); err != nil {
		return tracerr.Wrap(err)
	}

	// Add Coordinators
	if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil {
		return tracerr.Wrap(err)
	}

	// Add Bids
	if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil {
		return tracerr.Wrap(err)
	}

	// Add Tokens
	if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil {
		return tracerr.Wrap(err)
	}

	// Prepare user L1 txs to be added.
	// They must be added before the batch that will forge them (which can be in the same block)
	// and after the account they will be sent to (which can also be in the same block).
	// Note: insert order is not relevant, since item_id will be updated by a DB trigger when
	// the batch that forges those txs is inserted
	userL1s := make(map[common.BatchNum][]common.L1Tx)
	for i := range blockData.Rollup.L1UserTxs {
		batchThatForgesIsInTheBlock := false
		for _, batch := range blockData.Rollup.Batches {
			if batch.Batch.ForgeL1TxsNum != nil &&
				*batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum {
				// Tx is forged in this block. It's guaranteed that:
				// * the first batch of the block won't forge user L1 txs that have been added in this block
				// * batch nums are sequential, therefore it's safe to add the tx at batch.BatchNum - 1
				batchThatForgesIsInTheBlock = true
				addAtBatchNum := batch.Batch.BatchNum - 1
				userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i])
				break
			}
		}
		if !batchThatForgesIsInTheBlock {
			// Use the artificial batchNum 0 to add txs that are not forged in this block
			// after all the accounts of the block have been added
			userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i])
		}
	}
	// Add Batches
	for i := range blockData.Rollup.Batches {
		batch := &blockData.Rollup.Batches[i]

		// Add Batch: this will trigger an update on the DB
		// that will set the batch num of forged L1 txs in this batch
		if err = hdb.addBatch(txn, &batch.Batch); err != nil {
			return tracerr.Wrap(err)
		}

		// Add accounts
		if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil {
			return tracerr.Wrap(err)
		}

		// Add account updates, if any
		if err := hdb.addAccountUpdates(txn, batch.UpdatedAccounts); err != nil {
			return tracerr.Wrap(err)
		}

		// Set the EffectiveAmount and EffectiveDepositAmount of all the
		// L1UserTxs that have been forged in this batch
		if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil {
			return tracerr.Wrap(err)
		}

		// Add forged L1 coordinator txs
		if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil {
			return tracerr.Wrap(err)
		}

		// Add L2 txs
		if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil {
			return tracerr.Wrap(err)
		}

		// Add user L1 txs that will be forged in the next batch
		if batchUserL1s, ok := userL1s[batch.Batch.BatchNum]; ok {
			if err := hdb.addL1Txs(txn, batchUserL1s); err != nil {
				return tracerr.Wrap(err)
			}
		}

		// Add exit tree
		if err := hdb.addExitTree(txn, batch.ExitTree); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Add user L1 txs that won't be forged in this block
	if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok {
		if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil {
			return tracerr.Wrap(err)
		}
	}

	// Set SC Vars if there was an update
	if blockData.Rollup.Vars != nil {
		if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	if blockData.Auction.Vars != nil {
		if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}
	if blockData.WDelayer.Vars != nil {
		if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil {
			return tracerr.Wrap(err)
		}
	}

	// Update withdrawals in exit tree table
	if err := hdb.updateExitTree(txn, blockData.Block.Num,
		blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil {
		return tracerr.Wrap(err)
	}

	// Add escape hatch withdrawals
	if err := hdb.addEscapeHatchWithdrawals(txn,
		blockData.WDelayer.EscapeHatchWithdrawals); err != nil {
		return tracerr.Wrap(err)
	}

	// Add bucket withdrawal updates
	if err := hdb.addBucketUpdates(txn, blockData.Rollup.UpdateBucketWithdraw); err != nil {
		return tracerr.Wrap(err)
	}

	// Add token exchange updates
	if err := hdb.addTokenExchanges(txn, blockData.Rollup.TokenExchanges); err != nil {
		return tracerr.Wrap(err)
	}

	return tracerr.Wrap(txn.Commit())
}

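The insertion-point selection for user L1 txs in AddBlockSCData can be sketched in isolation. This is a simplified, standalone illustration: `l1Tx`, `batch` and `groupUserL1s` are hypothetical stand-ins for `common.L1Tx`, `common.BatchData` and the inline loop, keeping only the fields the grouping logic needs:

```go
package main

import "fmt"

// l1Tx and batch are simplified stand-ins for the real common types.
type l1Tx struct {
	ToForgeL1TxsNum int64
}
type batch struct {
	BatchNum      int64
	ForgeL1TxsNum *int64
}

// groupUserL1s mirrors the grouping in AddBlockSCData: a tx forged by a batch
// of the same block is keyed at BatchNum-1 so it is inserted right before that
// batch; anything else goes under the artificial key 0 and is inserted after
// all the block's batches have been processed.
func groupUserL1s(txs []l1Tx, batches []batch) map[int64][]l1Tx {
	groups := make(map[int64][]l1Tx)
	for _, tx := range txs {
		key := int64(0) // default: not forged in this block
		for _, b := range batches {
			if b.ForgeL1TxsNum != nil && *b.ForgeL1TxsNum == tx.ToForgeL1TxsNum {
				key = b.BatchNum - 1
				break
			}
		}
		groups[key] = append(groups[key], tx)
	}
	return groups
}

func main() {
	forge := int64(7)
	groups := groupUserL1s(
		[]l1Tx{{ToForgeL1TxsNum: 7}, {ToForgeL1TxsNum: 8}},
		[]batch{{BatchNum: 4, ForgeL1TxsNum: &forge}},
	)
	// The first tx is inserted before batch 4; the second is deferred.
	fmt.Println(len(groups[3]), len(groups[0])) // prints: 1 1
}
```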
// AddAuctionVars inserts auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
	return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "auction_vars", auctionVars))
}

// GetTokensTest is used to get tokens in a testing context
func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
	tokens := []*TokenWithUSD{}
	if err := meddler.QueryAll(
		hdb.dbRead, &tokens,
		"SELECT * FROM token ORDER BY token_id ASC",
	); err != nil {
		return nil, tracerr.Wrap(err)
	}
	if len(tokens) == 0 {
		return []TokenWithUSD{}, nil
	}
	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil
}

const (
	// CreateAccountExtraFeePercentage is the multiplication factor over
	// the average fee for CreateAccount that is applied to obtain the
	// recommended fee for CreateAccount
	CreateAccountExtraFeePercentage float64 = 2.5
	// CreateAccountInternalExtraFeePercentage is the multiplication factor
	// over the average fee for CreateAccountInternal that is applied to
	// obtain the recommended fee for CreateAccountInternal
	CreateAccountInternalExtraFeePercentage float64 = 2.0
)

// GetRecommendedFee returns the RecommendedFee information
func (hdb *HistoryDB) GetRecommendedFee(minFeeUSD, maxFeeUSD float64) (*common.RecommendedFee, error) {
	var recommendedFee common.RecommendedFee
	// Get the total number of txs, and the batch of the first selected tx, of the last hour
	type totalTxsSinceBatchNum struct {
		TotalTxs      int             `meddler:"total_txs"`
		FirstBatchNum common.BatchNum `meddler:"batch_num"`
	}
	ttsbn := &totalTxsSinceBatchNum{}
	if err := meddler.QueryRow(
		hdb.dbRead, ttsbn, `SELECT COUNT(tx.*) as total_txs,
		COALESCE (MIN(tx.batch_num), 0) as batch_num
		FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
		WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`,
	); err != nil {
		return nil, tracerr.Wrap(err)
	}
	// Get the number of batches and the accumulated fees for the last hour
	type totalBatchesAndFee struct {
		TotalBatches int     `meddler:"total_batches"`
		TotalFees    float64 `meddler:"total_fees"`
	}
	tbf := &totalBatchesAndFee{}
	if err := meddler.QueryRow(
		hdb.dbRead, tbf, `SELECT COUNT(*) AS total_batches,
		COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
		WHERE batch_num > $1;`, ttsbn.FirstBatchNum,
	); err != nil {
		return nil, tracerr.Wrap(err)
	}
	// Compute the average fee per tx and clamp each recommendation
	// into [minFeeUSD, maxFeeUSD]
	var avgTransactionFee float64
	if ttsbn.TotalTxs > 0 {
		avgTransactionFee = tbf.TotalFees / float64(ttsbn.TotalTxs)
	} else {
		avgTransactionFee = 0
	}
	recommendedFee.ExistingAccount = math.Min(maxFeeUSD,
		math.Max(avgTransactionFee, minFeeUSD))
	recommendedFee.CreatesAccount = math.Min(maxFeeUSD,
		math.Max(CreateAccountExtraFeePercentage*avgTransactionFee, minFeeUSD))
	recommendedFee.CreatesAccountInternal = math.Min(maxFeeUSD,
		math.Max(CreateAccountInternalExtraFeePercentage*avgTransactionFee, minFeeUSD))
	return &recommendedFee, nil
}
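The fee recommendation above applies the same clamp three times. A minimal sketch of that formula; `clampFee` is a hypothetical helper name, not part of the HistoryDB API:

```go
package main

import (
	"fmt"
	"math"
)

// clampFee mirrors the formula used in GetRecommendedFee: scale the average
// per-tx fee by a tx-type factor, then clamp the result into
// [minFeeUSD, maxFeeUSD] so recommendations never fall outside node limits.
func clampFee(avgFeeUSD, factor, minFeeUSD, maxFeeUSD float64) float64 {
	return math.Min(maxFeeUSD, math.Max(factor*avgFeeUSD, minFeeUSD))
}

func main() {
	// Average fee 0.125 USD with the CreateAccountInternal factor (2.0),
	// clamped to [0.05, 5.0].
	fmt.Println(clampFee(0.125, 2.0, 0.05, 5.0)) // prints: 0.25
	// With no recent txs the average is 0, so the floor applies.
	fmt.Println(clampFee(0, 2.0, 0.05, 5.0)) // prints: 0.05
}
```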