Update coordinator to work better under real net

- cli / node
    - Update the SIGINT handler so that after 3 SIGINTs the process terminates unconditionally
- coordinator
    - Store stats without pointer
    - In all functions that send a variable via channel, check for context done to avoid a deadlock when the node is stopped (nothing is reading from the channel, which is unbuffered)
    - Abstract `canForge` so that it can be used outside of the `Coordinator`
    - In `canForge`, check the blockNumber in the current and next slot
    - Update tests due to smart contract changes in slot handling and minimum bid defaults
    - TxManager
        - Add consts, vars and stats to allow evaluating `canForge`
        - Add `canForge` method (not used yet)
        - Store batch and nonce status (last success and last pending)
        - Track nonces internally instead of relying on the ethereum node (required to work with ganache when there are pending txs)
        - Handle the (common) case of the receipt not being found right after the tx is sent
        - Don't start the main loop until we get an initial message with the stats and vars (so that in the loop the stats and vars are set to synchronizer values)
        - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This avoids sending consecutive stop-pipeline messages when multiple txs are detected to have failed in a row. Also, future txs of the same pipeline that come after a discarded tx are discarded, and their nonces reused
        - Robust handling of nonces (see the sketch after this entry):
            - If geth returns nonce too low, increase it
            - If geth returns nonce too high, decrease it
            - If geth returns underpriced, increase the gas price
            - If geth returns replacement underpriced, increase the gas price
        - Add support for resending transactions after a timeout
        - Store `BatchInfos` in a queue
    - Pipeline
        - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline, including the failed batch number so that on a restart non-failed batches are not repeated
        - When resetting the stateDB, reset from the local checkpoint if possible instead of from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced
    - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator
    - Avoid forging when we haven't reached the rollup genesis block number
    - Add config parameter `StartSlotBlocksDelay`: the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge
    - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us
    - Add config parameter `ScheduleBatchBlocksAheadCheck`: the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks between scheduling a batch and having it mined
    - Add config parameter `SendBatchBlocksMarginCheck`: the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block, used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the batch will be discarded if we can't forge at block 15
    - Add config parameter `TxResendTimeout`: the timeout after which a non-mined ethereum transaction is resent (reusing the nonce) with a newly calculated gas price
    - Add config parameter `MaxGasPrice`: the maximum gas price allowed for ethereum transactions
    - Add config parameter `NoReuseNonce`: disables reusing nonces of pending transactions for new replacement transactions; useful for testing with Ganache
    - Extend BatchInfo with more useful information for debugging
- eth / ethereum client
    - Add the necessary methods to create the auth object for transactions manually, so that the nonce, gas price, gas limit, etc. can be set manually
    - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually)
- synchronizer
    - In stats, add `NextSlot`
    - In stats, store the full last batch instead of just the last batch number
    - Instead of calculating nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced)
    - After every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event
3 years ago
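A minimal Go sketch of the nonce and gas-price adjustments described above; it is not code from the repository: the function name, the substring matching against geth's error messages, and the ~10% bump are assumptions for illustration.

```go
package txmanager // hypothetical placement

import (
	"math/big"
	"strings"
)

// adjustAfterSendError inspects the error returned by the ethereum node when a
// transaction is sent and adjusts the locally tracked nonce or gas price,
// following the rules listed above. It returns true when the send should be
// retried with the adjusted values.
func adjustAfterSendError(err error, nonce *uint64, gasPrice, maxGasPrice *big.Int) bool {
	msg := strings.ToLower(err.Error())
	switch {
	case strings.Contains(msg, "nonce too low"):
		*nonce++ // the tracked nonce lags behind the node: bump it
	case strings.Contains(msg, "nonce too high"):
		if *nonce > 0 {
			*nonce-- // pending txs were over-counted: step back
		}
	case strings.Contains(msg, "underpriced"):
		// covers both "underpriced" and "replacement transaction underpriced":
		// raise the gas price (here by ~10%) before retrying
		gasPrice.Add(gasPrice, new(big.Int).Div(gasPrice, big.NewInt(10)))
	default:
		return false // unknown error: don't blindly retry
	}
	if maxGasPrice != nil && gasPrice.Cmp(maxGasPrice) > 0 {
		return false // respect the MaxGasPrice config parameter
	}
	return true
}
```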
Update coordinator, call all api update functions

- Common:
    - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
    - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node:
    - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals (see the sketch after this entry)
- Synchronizer:
    - When mapping events by TxHash, use an array to support multiple calls of the same function happening in the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
    - In Stats, keep the entire LastBlock instead of just the blockNum
    - In Stats, add lastL1BatchBlock
    - Test Stats and SCVars
- Coordinator:
    - Enable writing the BatchInfo to disk (as JSON text files) at every step of the pipeline, for debugging purposes
    - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
    - Implement shouldL1lL2Batch
    - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
    - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
    - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client:
    - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
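A minimal sketch of the kind of loop the node could use to call the API update functions at a configurable interval; the function and field names are assumptions, not the repository's actual API.

```go
package node // hypothetical placement

import (
	"context"
	"log"
	"time"
)

// runUpdateLoop calls update on every tick until the context is cancelled.
// interval corresponds to the configurable time interval mentioned above.
func runUpdateLoop(ctx context.Context, interval time.Duration, update func() error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := update(); err != nil {
				log.Printf("api update failed: %v", err)
			}
		}
	}
}

// Illustrative usage: one goroutine per update function, each with its own interval.
//   go runUpdateLoop(ctx, cfg.UpdateMetricsInterval, api.UpdateMetrics)
//   go runUpdateLoop(ctx, cfg.UpdateRecommendedFeeInterval, api.UpdateRecommendedFee)
```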
Redo coordinator structure, connect API to node

- API:
    - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
    - Update rollup constants with proper *big.Int when required
    - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
    - Add helper methods to AuctionConstants
    - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying
- Config:
    - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
    - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node
    - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead (see the sketch after this entry)
    - Remove BatchInfo setters and assign variables directly
    - In ServerProof and ServerProofPool, use a context instead of a stop channel
    - Use message passing to notify the coordinator about sync updates and reorgs
    - Introduce the Pipeline, which can be started and stopped by the Coordinator
    - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
    - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
    - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
    - Update the test (a minimal test to manually see if the coordinator starts)
- HistoryDB:
    - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner's forging deadline)
    - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
    - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node:
    - Connect the API to the node and call the methods to update cached state when the sync advances blocks
    - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
    - Add an Auction field in the Stats, which contains the current slot with info about the highest bidder and other related info required to know who can forge in the current block
    - Better organization of cached state:
        - On Sync, update the internal cached state
        - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
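A minimal sketch of the context.Context + sync.WaitGroup pattern that replaces the stop/stopped channels, together with the message channel used to notify the coordinator about sync updates and reorgs; all names are hypothetical, not the repository's actual types.

```go
package coordinator // hypothetical placement

import (
	"context"
	"sync"
)

// Msg stands for whatever the Synchronizer sends: sync-update or reorg notifications.
type Msg interface{}

type Coordinator struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
	msgCh  chan Msg
}

// Start launches the coordinator goroutine; its lifecycle is owned by the
// coordinator itself, not by the node.
func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.msgCh = make(chan Msg)
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return // replaces the old stop channel
			case msg := <-c.msgCh:
				_ = msg // handle sync update / reorg here
			}
		}
	}()
}

// Stop cancels the context and waits for the goroutine to finish, replacing
// the old stopped channel.
func (c *Coordinator) Stop() {
	c.cancel()
	c.wg.Wait()
}

// SendMsg delivers a message without blocking forever if the coordinator is stopping.
func (c *Coordinator) SendMsg(ctx context.Context, msg Msg) {
	select {
	case c.msgCh <- msg:
	case <-ctx.Done():
	}
}
```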
Update missing parts, improve til, and more

- Node
    - Updated configuration to initialize the interface to all the smart contracts
- Common
    - Moved BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
    - Remove hash.go (it was never used)
    - Remove slot.go (it was never used)
    - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
    - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
    - Simplify the `Sync` routine to sync only one block per call, and return useful information
    - Use BlockData and BatchData from common
    - Check that events belong to the expected block hash (see the sketch after this entry)
    - In L1Batch, query L1UserTxs from HistoryDB
    - Fill ERC20 token information
    - Test AddTokens with test.Client
- HistoryDB
    - Use BlockData and BatchData from common
    - Add `GetAllTokens` method
    - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
    - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
    - Use BlockData and BatchData from common
    - Move testL1CoordinatorTxs and testL2Txs out of BatchData into a separate struct in Context
    - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
    - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
    - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil
    - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it)
    - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`
4 years ago
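A minimal sketch of the per-block consistency check described above: every event log fetched from the node must carry the hash of the block being synced, otherwise the chain reorganized between queries and the block has to be processed again. The function name is an assumption; the `types.Log` field comes from go-ethereum.

```go
package synchronizer // hypothetical placement

import (
	"fmt"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// checkEventsBlockHash verifies that all logs belong to the expected block.
// A mismatch means a reorg happened between the block query and the event
// query, so the block must be synced again.
func checkEventsBlockHash(expected ethCommon.Hash, logs []types.Log) error {
	for _, l := range logs {
		if l.BlockHash != expected {
			return fmt.Errorf("event in block %v, expected %v: possible reorg",
				l.BlockHash, expected)
		}
	}
	return nil
}
```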
Allow serving API only via new cli command

- Add a new command to the cli/node: `serveapi`, which allows serving the API just by connecting to the PostgreSQL database. The mode flag should be passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB.

Summary of the implementation details:

- New SQL table with 3 columns (plus an `item_id` pk). The table only contains a single row with `item_id` = 1. Columns:
    - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node periodically updates this struct and stores it in the DB; the API server queries it from the DB to serve it (see the sketch after this entry).
    - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts.
    - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns individually.
- The HistoryDB contains all the methods that query the DB and prepare the objects that appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config`, in order to allow reusing the node's configuration file if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which contains all the information the node needs to periodically update the StateAPI in the DB.
- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `synchronizer` to `common` for convenience.
3 years ago
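A minimal sketch, assuming hypothetical table and column names (the real schema may differ), of how the node could write the JSON-encoded StateAPI into the single-row table and how the standalone `serveapi` process could read it back:

```go
package historydb // hypothetical placement

import (
	"database/sql"
	"encoding/json"
)

// setStateAPI marshals the state struct and stores it in the single row
// (item_id = 1); table and column names here are illustrative only.
func setStateAPI(db *sql.DB, state interface{}) error {
	b, err := json.Marshal(state)
	if err != nil {
		return err
	}
	_, err = db.Exec(`UPDATE node_info SET state = $1 WHERE item_id = 1;`, b)
	return err
}

// getStateAPI is what the API server would call to serve the /state endpoint.
func getStateAPI(db *sql.DB, state interface{}) error {
	var b []byte
	if err := db.QueryRow(
		`SELECT state FROM node_info WHERE item_id = 1;`,
	).Scan(&b); err != nil {
		return err
	}
	return json.Unmarshal(b, state)
}
```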
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update coordinator, call all api update functions - Common: - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition - API: - Add UpdateNetworkInfoBlock to update just block information, to be used when the node is not yet synchronized - Node: - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals - Synchronizer: - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract in a single transaction could call withdraw with delay twice, which would generate 2 withdraw events, and 2 deposit events). - In Stats, keep entire LastBlock instead of just the blockNum - In Stats, add lastL1BatchBlock - Test Stats and SCVars - Coordinator: - Enable writing the BatchInfo in every step of the pipeline to disk (with JSON text files) for debugging purposes. - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline) - Implement shouldL1lL2Batch - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error. (Both for calls to forgeBatch and transaction receipt) - In TxManager, reorganize the flow and note the specific points in which actions are made when err != nil - HistoryDB: - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch. - EthereumClient and test.Client: - Update EthBlockByNumber to return the last block when the passed number is -1.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
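A rough sketch of the single-row table and the get/update helpers described above. The table layout follows the description (one row with `item_id` = 1 holding JSON-encoded state/config/constants); the Go function names, the column types, and the `StateAPI` fields shown here are assumptions for illustration, not the real historydb API.

```go
// Sketch (assumed names) of the single-row node-info table and the two
// helpers around it: the node writes the state periodically, the standalone
// `serveapi` process reads it to answer /state requests.
package historydbsketch

import (
	"database/sql"
	"encoding/json"
)

const createNodeInfoTable = `
CREATE TABLE IF NOT EXISTS node_info (
	item_id   INTEGER PRIMARY KEY, -- always 1: the table holds a single row
	state     BYTEA,               -- historydb.StateAPI as JSON
	config    BYTEA,               -- historydb.NodeConfig as JSON
	constants BYTEA                -- historydb.Constants as JSON
);`

// StateAPI is a stand-in for the struct served by the /state endpoint.
type StateAPI struct {
	LastBlockNum int64 `json:"lastBlockNum"`
	LastBatchNum int64 `json:"lastBatchNum"`
}

// SetStateAPI is what the node would call periodically to refresh the
// cached state that the API server reads.
func SetStateAPI(db *sql.DB, state *StateAPI) error {
	b, err := json.Marshal(state)
	if err != nil {
		return err
	}
	_, err = db.Exec("UPDATE node_info SET state = $1 WHERE item_id = 1", b)
	return err
}

// GetStateAPI is what the `serveapi` process would call to read the latest
// snapshot before serving it.
func GetStateAPI(db *sql.DB) (*StateAPI, error) {
	var b []byte
	row := db.QueryRow("SELECT state FROM node_info WHERE item_id = 1")
	if err := row.Scan(&b); err != nil {
		return nil, err
	}
	var state StateAPI
	if err := json.Unmarshal(b, &state); err != nil {
		return nil, err
	}
	return &state, nil
}
```

The design point is that the API server never touches the synchronizer or the ethereum client directly: everything it serves is a JSON snapshot the node has already written to PostgreSQL.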
Update coordinator, call all api update functions
- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals
- Synchronizer:
  - When mapping events by TxHash, use an array to support multiple calls of the same function happening in the same transaction (for example, in a single transaction a smart contract could call withdraw with delay twice, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1L2Batch (a sketch of the decision follows this entry)
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
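The shouldL1L2Batch decision described above boils down to a block-count comparison between the block of the last forged L1 batch and the protocol deadline. The sketch below shows the idea under assumed names (`forgeL1L2BatchTimeout`, `marginBlocks`, the `Stats` fields); it is not the exact hermez-node signature.

```go
// Sketch of the shouldL1L2Batch decision: forge an L1L2 batch when the L1
// batch deadline is about to expire.  All names here are illustrative.
package coordsketch

// Stats mirrors the subset of synchronizer stats the decision needs.
type Stats struct {
	LastBlockNum     int64 // number of the last synced block
	LastL1BatchBlock int64 // block in which the last L1 batch was forged
}

// shouldL1L2Batch returns true when the next batch must include L1 user
// transactions, leaving marginBlocks of slack for the batch to be mined
// before the timeout is reached.
func shouldL1L2Batch(stats Stats, forgeL1L2BatchTimeout, marginBlocks int64) bool {
	nextBlock := stats.LastBlockNum + 1
	return nextBlock-stats.LastL1BatchBlock >= forgeL1L2BatchTimeout-marginBlocks
}
```

GetLastL1BatchBlockNum is what feeds `LastL1BatchBlock` in this comparison, which is why it was added to the HistoryDB in the same change.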
Update missing parts, improve til, and more
- Node
  - Updated configuration to initialize the interface to all the smart contracts
- Common
  - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment the state / status method until the requirements of this method are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to only sync one block per call, and return useful information (a sketch of the resulting calling pattern follows this entry)
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`
4 years ago
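The "one block per Sync call" contract keeps the caller's loop very simple: each call either advances one block, reports a reorg, or reports that there is nothing new. The sketch below illustrates that calling pattern; the `Synchronizer` interface and its return values are assumptions for the sketch, not the exact hermez-node types.

```go
// Illustrative sketch of driving a one-block-per-call Sync routine.
package syncsketch

import "context"

type BlockData struct{ Num int64 }

type Synchronizer interface {
	// Sync attempts to synchronize exactly one block.  It returns the block
	// data when a new block was synced, a non-nil discarded block number
	// when a reorg was detected, or (nil, nil, nil) when already up to date.
	Sync(ctx context.Context) (blockData *BlockData, discarded *int64, err error)
}

// syncLoop shows how a caller drives the Synchronizer until it is up to date.
func syncLoop(ctx context.Context, s Synchronizer) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}
		blockData, discarded, err := s.Sync(ctx)
		if err != nil {
			return err
		}
		if discarded != nil {
			// reorg: the caller would reset its state from block *discarded
			continue
		}
		if blockData == nil {
			return nil // up to date
		}
		// new block synced: the caller would update cached state / API here
	}
}
```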
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Allow serving API only via new cli command - Add new command to the cli/node: `serveapi` that alows serving the API just by connecting to the PostgreSQL database. The mode flag should me passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB. Summary of the implementation details - New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1. Columns: - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node will periodically update this struct and store it int he DB. The api server will query it from the DB to serve it. - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts. - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts. - The HistoryDB contains methods to get and update each one of these columns individually. - The HistoryDB contains all methods that query the DB and prepare objects that will appear in the StateAPI endpoint. - The configuration used in for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired. - A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information to update the StateAPI in the DB periodically by the node. - Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `syncrhonizer` to `common` for convenience.
3 years ago
Redo coordinator structure, connect API to node
- API:
  - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
  - Update rollup constants with proper *big.Int where required
  - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer
  - Add helper methods to AuctionConstants
  - AuctionVariables: add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates the slotNum at which the specified `DefaultSlotSetBid` starts applying
- Config:
  - Move coordinator-exclusive configuration from the node config to the coordinator config
- Coordinator:
  - Reorganize the code so that goroutines are started and stopped from the coordinator itself instead of the node
  - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool use a context instead of a stop channel
  - Use message passing to notify the coordinator about sync updates and reorgs (a minimal sketch of this pattern follows this entry)
  - Introduce the Pipeline, which can be started and stopped by the Coordinator
  - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. waits for the transaction to be accepted; 2. waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available server proof, so that all work is ready once the proof server is ready
  - Remove the `isForgeSequence` method, which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time
  - Update the test (a minimal test to manually see if the coordinator starts)
- HistoryDB:
  - Add a method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline)
  - Add a method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot)
- General:
  - Rename some instances of `currentBlock` to `lastBlock` to be clearer
- Node:
  - Connect the API to the node and call the methods to update cached state when the sync advances blocks
  - Call methods to update Coordinator state when the sync advances blocks and finds reorgs
- Synchronizer:
  - Add an Auction field in the Stats, which contains the current slot with info about the highest bidder and other info required to know who can forge in the current block
  - Better organization of cached state:
    - On Sync, update the internal cached state
    - On Init or Reorg, load the state from HistoryDB into the internal cached state
4 years ago
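A minimal sketch, not the actual hermez-node code, of the pattern described above: the node sends sync/reorg notifications over a channel, and the coordinator's main loop is managed with context.Context and sync.WaitGroup instead of stop/stopped channels. The message type names here are illustrative.

```go
package coordsketch

import (
	"context"
	"sync"
)

// Illustrative message types; the real coordinator defines its own.
type MsgSyncBlock struct{ BlockNum int64 }
type MsgSyncReorg struct{ FirstBadBlock int64 }

type Coordinator struct {
	msgCh chan interface{}
	wg    sync.WaitGroup
}

func NewCoordinator() *Coordinator {
	return &Coordinator{msgCh: make(chan interface{})}
}

// SendMsg is called by the node after each synced block or detected reorg.
// Selecting on ctx.Done() avoids a deadlock if the coordinator loop stopped.
func (c *Coordinator) SendMsg(ctx context.Context, msg interface{}) {
	select {
	case c.msgCh <- msg:
	case <-ctx.Done():
	}
}

// Start launches the main loop; stopping is just cancelling ctx and waiting.
func (c *Coordinator) Start(ctx context.Context) {
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return
			case msg := <-c.msgCh:
				switch m := msg.(type) {
				case MsgSyncBlock:
					_ = m // decide whether it's forging time; start/stop the pipeline
				case MsgSyncReorg:
					_ = m // reset pipeline state if the reorg affects it
				}
			}
		}
	}()
}

// Wait blocks until the main loop has exited.
func (c *Coordinator) Wait() { c.wg.Wait() }
```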
Update coordinator, call all api update functions
- Common:
  - Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
  - Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with configurable time intervals (a minimal sketch of such a loop follows this entry)
- Synchronizer:
  - When mapping events by TxHash, use an array to support multiple calls of the same function happening in the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo of every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1L2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB:
  - Implement GetLastL1BatchBlockNum: returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client:
  - Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
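A minimal sketch, under the assumption that the API exposes the two update methods named above, of running them on independent, configurable intervals until the node is stopped; the loop itself and the `apiUpdater` interface are illustrative.

```go
package nodesketch

import (
	"context"
	"log"
	"time"
)

// apiUpdater abstracts the two update calls mentioned above (assumed names).
type apiUpdater interface {
	UpdateMetrics() error
	UpdateRecommendedFee() error
}

// runAPIUpdateLoops starts one ticker-driven loop per update call, each with
// its own configurable interval, and stops both when ctx is cancelled.
func runAPIUpdateLoops(ctx context.Context, api apiUpdater,
	metricsEvery, feeEvery time.Duration) {
	go periodically(ctx, metricsEvery, func() {
		if err := api.UpdateMetrics(); err != nil {
			log.Printf("UpdateMetrics: %v", err)
		}
	})
	go periodically(ctx, feeEvery, func() {
		if err := api.UpdateRecommendedFee(); err != nil {
			log.Printf("UpdateRecommendedFee: %v", err)
		}
	})
}

// periodically runs fn on every tick until the context is cancelled.
func periodically(ctx context.Context, every time.Duration, fn func()) {
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			fn()
		}
	}
}
```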
Update missing parts, improve til, and more
- Node
  - Update the configuration to initialize the interface to all the smart contracts
- Common
  - Move the BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment out the state / status method until its requirements are properly defined, and move it to Synchronizer
- Synchronizer
  - Simplify the `Sync` routine to sync only one block per call and return useful information (a minimal sketch of a caller loop follows this entry)
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill in ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB
  - Use BlockData and BatchData from common
  - Add the `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used set it to 0, and if Amount is not used set it to 0, so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer the BatchNum is not yet known (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
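A minimal sketch of how a caller might drive a one-block-per-call `Sync` routine; the `blockSyncer` interface and its return values are assumptions for illustration, not the synchronizer's real signature.

```go
package syncsketch

import (
	"context"
	"time"
)

// blockSyncer abstracts a synchronizer whose Sync processes at most one block
// per call and reports whether it advanced (illustrative names).
type blockSyncer interface {
	// Sync returns advanced=false when there is no new block to process.
	Sync(ctx context.Context) (advanced bool, err error)
}

// syncLoop keeps calling Sync while it advances, and sleeps briefly once the
// chain tip has been reached, until the context is cancelled.
func syncLoop(ctx context.Context, s blockSyncer, idleWait time.Duration) error {
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		default:
		}
		advanced, err := s.Sync(ctx)
		if err != nil {
			return err
		}
		if !advanced {
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(idleWait):
			}
		}
	}
}
```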
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update missing parts, improve til, and more - Node - Updated configuration to initialize the interface to all the smart contracts - Common - Moved BlockData and BatchData types to common so that they can be shared among: historydb, til and synchronizer - Remove hash.go (it was never used) - Remove slot.go (it was never used) - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`) - Comment state / status method until requirements of this method are properly defined, and move it to Synchronizer - Synchronizer - Simplify `Sync` routine to only sync one block per call, and return useful information. - Use BlockData and BatchData from common - Check that events belong to the expected block hash - In L1Batch, query L1UserTxs from HistoryDB - Fill ERC20 token information - Test AddTokens with test.Client - HistryDB - Use BlockData and BatchData from common - Add `GetAllTokens` method - Uncomment and update GetL1UserTxs (with corresponding tests) - Til - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming) - Use BlockData and BatchData from common - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum) - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero). - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil. - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not known yet (it's a synchronizer job to set it) - In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. - Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
Update coordinator to work better under real net - cli / node - Update handler of SIGINT so that after 3 SIGINTs, the process terminates unconditionally - coordinator - Store stats without pointer - In all functions that send a variable via channel, check for context done to avoid deadlock (due to no process reading from the channel, which has no queue) when the node is stopped. - Abstract `canForge` so that it can be used outside of the `Coordinator` - In `canForge` check the blockNumber in current and next slot. - Update tests due to smart contract changes in slot handling, and minimum bid defaults - TxManager - Add consts, vars and stats to allow evaluating `canForge` - Add `canForge` method (not used yet) - Store batch and nonces status (last success and last pending) - Track nonces internally instead of relying on the ethereum node (this is required to work with ganache when there are pending txs) - Handle the (common) case of the receipt not being found after the tx is sent. - Don't start the main loop until we get an initial messae fo the stats and vars (so that in the loop the stats and vars are set to synchronizer values) - When a tx fails, check and discard all the failed transactions before sending the message to stop the pipeline. This will avoid sending consecutive messages of stop the pipeline when multiple txs are detected to be failed consecutively. Also, future txs of the same pipeline after a discarded txs are discarded, and their nonces reused. - Robust handling of nonces: - If geth returns nonce is too low, increase it - If geth returns nonce too hight, decrease it - If geth returns underpriced, increase gas price - If geth returns replace underpriced, increase gas price - Add support for resending transactions after a timeout - Store `BatchInfos` in a queue - Pipeline - When an error is found, stop forging batches and send a message to the coordinator to stop the pipeline with information of the failed batch number so that in a restart, non-failed batches are not repated. - When doing a reset of the stateDB, if possible reset from the local checkpoint instead of resetting from the synchronizer. This allows resetting from a batch that is valid but not yet sent / synced. - Every time a pipeline is started, assign it a number from a counter. This allows the TxManager to ignore batches from stopped pipelines, via a message sent by the coordinator. - Avoid forging when we haven't reached the rollup genesis block number. - Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the number of blocks of delay to wait before starting the pipeline when we reach a slot in which we can forge. - When detecting a reorg, only reset the pipeline if the batch from which the pipeline started changed and wasn't sent by us. - Add config parameter `ScheduleBatchBlocksAheadCheck`: ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the forger address is checked to be allowed to forge (apart from checking the next block), used to decide when to stop scheduling new batches (by stopping the pipeline). For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck is 5, eventhough at block 11 we canForge, the pipeline will be stopped if we can't forge at block 15. This value should be the expected number of blocks it takes between scheduling a batch and having it mined. 
- Add config parameter `SendBatchBlocksMarginCheck`: SendBatchBlocksMarginCheck is the number of margin blocks ahead in which the coordinator is also checked to be allowed to forge, apart from the next block; used to decide when to stop sending batches to the smart contract. For example, if we are at block 10 and SendBatchBlocksMarginCheck is 5, eventhough at block 11 we canForge, the batch will be discarded if we can't forge at block 15. - Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout after which a non-mined ethereum transaction will be resent (reusing the nonce) with a newly calculated gas price - Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price allowed for ethereum transactions - Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces of pending transactions for new replacement transactions. This is useful for testing with Ganache. - Extend BatchInfo with more useful information for debugging - eth / ethereum client - Add necessary methods to create the auth object for transactions manually so that we can set the nonce, gas price, gas limit, etc manually - Update `RollupForgeBatch` to take an auth object as input (so that the coordinator can set parameters manually) - synchronizer - In stats, add `NextSlot` - In stats, store full last batch instead of just last batch number - Instead of calculating a nextSlot from scratch every time, update the current struct (only updating the forger info if we are Synced) - Afer every processed batch, check that the calculated StateDB MTRoot matches the StateRoot found in the forgeBatch event.
3 years ago
Redo coordinator structure, connect API to node - API: - Modify the constructor so that hardcoded rollup constants don't need to be passed (introduce a `Config` and use `configAPI` internally) - Common: - Update rollup constants with proper *big.Int when required - Add BidCoordinator and Slot structs used by the HistoryDB and Synchronizer. - Add helper methods to AuctionConstants - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL table: `default_slot_set_bid_slot_num`), which indicates at which slotNum does the `DefaultSlotSetBid` specified starts applying. - Config: - Move coordinator exclusive configuration from the node config to the coordinator config - Coordinator: - Reorganize the code towards having the goroutines started and stopped from the coordinator itself instead of the node. - Remove all stop and stopped channels, and use context.Context and sync.WaitGroup instead. - Remove BatchInfo setters and assing variables directly - In ServerProof and ServerProofPool use context instead stop channel. - Use message passing to notify the coordinator about sync updates and reorgs - Introduce the Pipeline, which can be started and stopped by the Coordinator - Introduce the TxManager, which manages ethereum transactions (the TxManager is also in charge of making the forge call to the rollup smart contract). The TxManager keeps ethereum transactions and: 1. Waits for the transaction to be accepted 2. Waits for the transaction to be confirmed for N blocks - In forge logic, first prepare a batch and then wait for an available server proof to have all work ready once the proof server is ready. - Remove the `isForgeSequence` method which was querying the smart contract, and instead use notifications sent by the Synchronizer to figure out if it's forging time. - Update test (which is a minimal test to manually see if the coordinator starts) - HistoryDB: - Add method to get the number of batches in a slot (used to detect when a slot has passed the bid winner forging deadline) - Add method to get the best bid and associated coordinator of a slot (used to detect the forgerAddress that can forge the slot) - General: - Rename some instances of `currentBlock` to `lastBlock` to be more clear. - Node: - Connect the API to the node and call the methods to update cached state when the sync advances blocks. - Call methods to update Coordinator state when the sync advances blocks and finds reorgs. - Synchronizer: - Add Auction field in the Stats, which contain the current slot with info about highest bidder and other related info required to know who can forge in the current block. - Better organization of cached state: - On Sync, update the internal cached state - On Init or Reorg, load the state from HistoryDB into the internal cached state.
4 years ago
Update coordinator, call all api update functions
- Common: Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API: Add UpdateNetworkInfoBlock to update just the block information, to be used when the node is not yet synchronized
- Node: Call API.UpdateMetrics and API.UpdateRecommendedFee in a loop, with configurable time intervals (sketched below)
- Synchronizer:
  - When mapping events by TxHash, use an array to support the possibility of multiple calls of the same function happening in the same transaction (for example, a smart contract could call withdraw with delay twice in a single transaction, which would generate 2 withdraw events and 2 deposit events)
  - In Stats, keep the entire LastBlock instead of just the blockNum
  - In Stats, add lastL1BatchBlock
  - Test Stats and SCVars
- Coordinator:
  - Enable writing the BatchInfo at every step of the pipeline to disk (as JSON text files) for debugging purposes
  - Move the Pipeline functionality from the Coordinator to its own struct (Pipeline)
  - Implement shouldL1L2Batch
  - In TxManager, implement logic to perform several attempts when doing ethereum node RPC calls before considering the error (both for calls to forgeBatch and for the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which actions are taken when err != nil
- HistoryDB: Implement GetLastL1BatchBlockNum, which returns the blockNum of the latest forged l1Batch, to help the coordinator decide when to forge an L1Batch
- EthereumClient and test.Client: Update EthBlockByNumber to return the last block when the passed number is -1
4 years ago
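As context for the "call all api update functions" item above, here is a sketch of how a node could run API.UpdateMetrics and API.UpdateRecommendedFee on independent, configurable intervals until shutdown. The apiUpdater interface, fakeAPI type and runAPIUpdateLoops helper are assumptions made for this sketch; only the two method names come from the commit message.

// Illustrative sketch (types and helper names are assumptions, not the real
// hermez-node API) of running the two API update calls on independent,
// configurable intervals until the node's context is cancelled.
package main

import (
	"context"
	"log"
	"time"
)

// apiUpdater is a stand-in for the subset of the API the loops need.
type apiUpdater interface {
	UpdateMetrics() error
	UpdateRecommendedFee() error
}

func runAPIUpdateLoops(ctx context.Context, a apiUpdater,
	metricsInterval, feeInterval time.Duration) {
	go func() {
		ticker := time.NewTicker(metricsInterval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if err := a.UpdateMetrics(); err != nil {
					log.Printf("UpdateMetrics: %v", err)
				}
			}
		}
	}()
	go func() {
		ticker := time.NewTicker(feeInterval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				if err := a.UpdateRecommendedFee(); err != nil {
					log.Printf("UpdateRecommendedFee: %v", err)
				}
			}
		}
	}()
}

// fakeAPI is a trivial implementation used only to make the sketch runnable.
type fakeAPI struct{}

func (fakeAPI) UpdateMetrics() error        { log.Println("metrics updated"); return nil }
func (fakeAPI) UpdateRecommendedFee() error { log.Println("recommended fee updated"); return nil }

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 250*time.Millisecond)
	defer cancel()
	runAPIUpdateLoops(ctx, fakeAPI{}, 100*time.Millisecond, 100*time.Millisecond)
	<-ctx.Done()
}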
Allow serving API only via new cli command
- Add a new command to the cli/node: `serveapi`, which allows serving the API just by connecting to the PostgreSQL database. The mode flag should be passed in order to select whether we are connecting to a synchronizer database or a coordinator database. If `coord` is chosen as mode, the coordinator endpoints can be activated in order to allow inserting l2txs and authorizations into the L2DB.
Summary of the implementation details:
- New SQL table with 3 columns (plus `item_id` pk). The table only contains a single row with `item_id` = 1 (sketched below). Columns:
  - state: historydb.StateAPI in JSON. This is the struct that is served via the `/state` API endpoint. The node periodically updates this struct and stores it in the DB; the api server queries it from the DB to serve it.
  - config: historydb.NodeConfig in JSON. This struct contains node configuration parameters that the API needs to be aware of. It's updated once every time the node starts.
  - constants: historydb.Constants in JSON. This struct contains all the hermez network constants gathered via the ethereum client by the node. It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns individually.
- The HistoryDB contains all methods that query the DB and prepare the objects that will appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in `config.APIServer`, and is a subset of `node.Config` in order to allow reusing the same configuration file of the node if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which contains all the necessary information for the node to periodically update the StateAPI in the DB.
- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from `synchronizer` to `common` for convenience.
3 years ago
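A minimal sketch of the single-row table described above (one row with `item_id` = 1 holding the state, config and constants JSON columns). The table name node_info, the SQL statements and the helper functions below are assumptions for illustration; the real HistoryDB exposes its own get/update methods for each column.

// Sketch of the single-row "node info" pattern described above: one row with
// item_id = 1 holding the JSON blobs. The node periodically overwrites the
// state column; the API server only reads it. SQL and helpers are illustrative.
package nodeinfosketch

import (
	"database/sql"
	"encoding/json"
)

// StateAPI is a placeholder for the struct served at /state (fields assumed).
type StateAPI struct {
	LastBlock int64 `json:"lastBlock"`
	LastBatch int64 `json:"lastBatch"`
}

// setStateAPI overwrites the state column of the single row (item_id = 1).
func setStateAPI(db *sql.DB, state *StateAPI) error {
	blob, err := json.Marshal(state)
	if err != nil {
		return err
	}
	_, err = db.Exec(
		`UPDATE node_info SET state = $1 WHERE item_id = 1;`,
		blob,
	)
	return err
}

// getStateAPI reads the state column back, as the API server would on /state.
func getStateAPI(db *sql.DB) (*StateAPI, error) {
	var blob []byte
	row := db.QueryRow(`SELECT state FROM node_info WHERE item_id = 1;`)
	if err := row.Scan(&blob); err != nil {
		return nil, err
	}
	var state StateAPI
	if err := json.Unmarshal(blob, &state); err != nil {
		return nil, err
	}
	return &state, nil
}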
Update missing parts, improve til, and more
- Node: Updated configuration to initialize the interface to all the smart contracts
- Common:
  - Moved BlockData and BatchData types to common so that they can be shared among historydb, til and synchronizer
  - Remove hash.go (it was never used)
  - Remove slot.go (it was never used)
  - Remove smartcontractparams.go (it was never used, and appropriate structs are defined in `eth/`)
  - Comment the state / status method until the requirements of this method are properly defined, and move it to Synchronizer
- Synchronizer:
  - Simplify the `Sync` routine to only sync one block per call, and return useful information
  - Use BlockData and BatchData from common
  - Check that events belong to the expected block hash (sketched below)
  - In L1Batch, query L1UserTxs from HistoryDB
  - Fill ERC20 token information
  - Test AddTokens with test.Client
- HistoryDB:
  - Use BlockData and BatchData from common
  - Add `GetAllTokens` method
  - Uncomment and update GetL1UserTxs (with corresponding tests)
- Til:
  - Rename all instances of RegisterToken to AddToken (to follow the smart contract implementation naming)
  - Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs to a separate struct from BatchData in Context
  - Start Context with BatchNum = 1 (which the protocol defines to be the first batchNum)
  - In every Batch, set StateRoot and ExitRoot to a non-nil big.Int (zero)
  - In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not used, set it to 0; so that no *big.Int is nil
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created and obtained by the synchronizer, the BatchNum is not yet known (it's the synchronizer's job to set it)
  - In L1UserTxs, set `UserOrigin` and `ToForgeL1TxsNum`
4 years ago
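The items "sync one block per call" and "check that events belong to the expected block hash" amount to a parent-hash check on every synced block. Below is a simplified sketch under assumed types (Block, chainReader, memChain and syncOneBlock are illustrative, not the real Synchronizer API); the real Synchronizer.Sync used later in this file returns full BlockData plus a discarded block number on reorg.

// Simplified sketch: fetch the next block, and if its parent hash doesn't
// match the last synced block's hash, report a reorg instead of advancing.
package main

import (
	"errors"
	"fmt"
)

type Block struct {
	Num        int64
	Hash       string
	ParentHash string
}

type chainReader interface {
	BlockByNumber(num int64) (*Block, error)
}

var errNoNewBlock = errors.New("no new block")

// syncOneBlock returns the newly synced block, or the number of the block to
// discard when a reorg is detected, or an error.
func syncOneBlock(chain chainReader, lastBlock *Block) (*Block, *int64, error) {
	next, err := chain.BlockByNumber(lastBlock.Num + 1)
	if err != nil {
		return nil, nil, err
	}
	if next.ParentHash != lastBlock.Hash {
		// The chain we synced is no longer canonical: discard lastBlock.
		n := lastBlock.Num
		return nil, &n, nil
	}
	return next, nil, nil
}

// memChain is a tiny in-memory chain used only to make the sketch runnable.
type memChain map[int64]*Block

func (m memChain) BlockByNumber(num int64) (*Block, error) {
	b, ok := m[num]
	if !ok {
		return nil, errNoNewBlock
	}
	return b, nil
}

func main() {
	chain := memChain{2: {Num: 2, Hash: "0xbbb", ParentHash: "0xaaa"}}
	last := &Block{Num: 1, Hash: "0xaaa"}
	if b, discarded, err := syncOneBlock(chain, last); err != nil {
		fmt.Println("err:", err)
	} else if discarded != nil {
		fmt.Println("reorg: discard block", *discarded)
	} else {
		fmt.Println("synced block", b.Num)
	}
}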
  1. package node
  2. import (
  3. "context"
  4. "errors"
  5. "fmt"
  6. "net"
  7. "net/http"
  8. "sync"
  9. "time"
  10. "github.com/ethereum/go-ethereum/accounts"
  11. ethKeystore "github.com/ethereum/go-ethereum/accounts/keystore"
  12. "github.com/ethereum/go-ethereum/ethclient"
  13. "github.com/gin-contrib/cors"
  14. "github.com/gin-gonic/gin"
  15. "github.com/hermeznetwork/hermez-node/api"
  16. "github.com/hermeznetwork/hermez-node/batchbuilder"
  17. "github.com/hermeznetwork/hermez-node/common"
  18. "github.com/hermeznetwork/hermez-node/config"
  19. "github.com/hermeznetwork/hermez-node/coordinator"
  20. dbUtils "github.com/hermeznetwork/hermez-node/db"
  21. "github.com/hermeznetwork/hermez-node/db/historydb"
  22. "github.com/hermeznetwork/hermez-node/db/l2db"
  23. "github.com/hermeznetwork/hermez-node/db/statedb"
  24. "github.com/hermeznetwork/hermez-node/eth"
  25. "github.com/hermeznetwork/hermez-node/log"
  26. "github.com/hermeznetwork/hermez-node/priceupdater"
  27. "github.com/hermeznetwork/hermez-node/prover"
  28. "github.com/hermeznetwork/hermez-node/stateapiupdater"
  29. "github.com/hermeznetwork/hermez-node/synchronizer"
  30. "github.com/hermeznetwork/hermez-node/test/debugapi"
  31. "github.com/hermeznetwork/hermez-node/txprocessor"
  32. "github.com/hermeznetwork/hermez-node/txselector"
  33. "github.com/hermeznetwork/tracerr"
  34. "github.com/jmoiron/sqlx"
  35. "github.com/russross/meddler"
  36. )
  37. // Mode sets the working mode of the node (synchronizer or coordinator)
  38. type Mode string
  39. const (
  40. // ModeCoordinator defines the mode of the HermezNode as Coordinator, which
  41. // means that the node is set to forge (which also will be synchronizing with
  42. // the L1 blockchain state)
  43. ModeCoordinator Mode = "coordinator"
  44. // ModeSynchronizer defines the mode of the HermezNode as Synchronizer, which
  45. // means that the node is set to only synchronize with the L1 blockchain state
  46. // and will not forge
  47. ModeSynchronizer Mode = "synchronizer"
  48. )
  49. // Node is the Hermez Node
  50. type Node struct {
  51. nodeAPI *NodeAPI
  52. stateAPIUpdater *stateapiupdater.Updater
  53. debugAPI *debugapi.DebugAPI
  54. priceUpdater *priceupdater.PriceUpdater
  55. // Coordinator
  56. coord *coordinator.Coordinator
  57. // Synchronizer
  58. sync *synchronizer.Synchronizer
  59. // General
  60. cfg *config.Node
  61. mode Mode
  62. sqlConnRead *sqlx.DB
  63. sqlConnWrite *sqlx.DB
  64. historyDB *historydb.HistoryDB
  65. ctx context.Context
  66. wg sync.WaitGroup
  67. cancel context.CancelFunc
  68. }
  69. // NewNode creates a Node
  70. func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
  71. meddler.Debug = cfg.Debug.MeddlerLogs
  72. // Establish DB connection
  73. dbWrite, err := dbUtils.InitSQLDB(
  74. cfg.PostgreSQL.PortWrite,
  75. cfg.PostgreSQL.HostWrite,
  76. cfg.PostgreSQL.UserWrite,
  77. cfg.PostgreSQL.PasswordWrite,
  78. cfg.PostgreSQL.NameWrite,
  79. )
  80. if err != nil {
  81. return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
  82. }
  83. var dbRead *sqlx.DB
  84. if cfg.PostgreSQL.HostRead == "" {
  85. dbRead = dbWrite
  86. } else if cfg.PostgreSQL.HostRead == cfg.PostgreSQL.HostWrite {
  87. return nil, tracerr.Wrap(fmt.Errorf(
  88. "PostgreSQL.HostRead and PostgreSQL.HostWrite must be different",
  89. ))
  90. } else {
  91. dbRead, err = dbUtils.InitSQLDB(
  92. cfg.PostgreSQL.PortRead,
  93. cfg.PostgreSQL.HostRead,
  94. cfg.PostgreSQL.UserRead,
  95. cfg.PostgreSQL.PasswordRead,
  96. cfg.PostgreSQL.NameRead,
  97. )
  98. if err != nil {
  99. return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
  100. }
  101. }
  102. var apiConnCon *dbUtils.APIConnectionController
  103. if cfg.API.Explorer || mode == ModeCoordinator {
  104. apiConnCon = dbUtils.NewAPIConnectionController(
  105. cfg.API.MaxSQLConnections,
  106. cfg.API.SQLConnectionTimeout.Duration,
  107. )
  108. }
  109. historyDB := historydb.NewHistoryDB(dbRead, dbWrite, apiConnCon)
  110. ethClient, err := ethclient.Dial(cfg.Web3.URL)
  111. if err != nil {
  112. return nil, tracerr.Wrap(err)
  113. }
  114. var ethCfg eth.EthereumConfig
  115. var account *accounts.Account
  116. var keyStore *ethKeystore.KeyStore
  117. if mode == ModeCoordinator {
  118. ethCfg = eth.EthereumConfig{
  119. CallGasLimit: 0, // cfg.Coordinator.EthClient.CallGasLimit,
  120. GasPriceDiv: 0, // cfg.Coordinator.EthClient.GasPriceDiv,
  121. }
  122. scryptN := ethKeystore.StandardScryptN
  123. scryptP := ethKeystore.StandardScryptP
  124. if cfg.Coordinator.Debug.LightScrypt {
  125. scryptN = ethKeystore.LightScryptN
  126. scryptP = ethKeystore.LightScryptP
  127. }
  128. keyStore = ethKeystore.NewKeyStore(cfg.Coordinator.EthClient.Keystore.Path,
  129. scryptN, scryptP)
  130. balance, err := ethClient.BalanceAt(context.TODO(), cfg.Coordinator.ForgerAddress, nil)
  131. if err != nil {
  132. return nil, tracerr.Wrap(err)
  133. }
  134. minForgeBalance := cfg.Coordinator.MinimumForgeAddressBalance
  135. if minForgeBalance != nil && balance.Cmp(minForgeBalance) == -1 {
  136. return nil, tracerr.Wrap(fmt.Errorf(
  137. "forger account balance is less than cfg.Coordinator.MinimumForgeAddressBalance: %v < %v",
  138. balance, minForgeBalance))
  139. }
  140. log.Infow("forger ethereum account balance",
  141. "addr", cfg.Coordinator.ForgerAddress,
  142. "balance", balance,
  143. "minForgeBalance", minForgeBalance,
  144. )
  145. // Unlock Coordinator ForgerAddr in the keystore to make calls
  146. // to ForgeBatch in the smart contract
  147. if !keyStore.HasAddress(cfg.Coordinator.ForgerAddress) {
  148. return nil, tracerr.Wrap(fmt.Errorf(
  149. "ethereum keystore doesn't have the key for address %v",
  150. cfg.Coordinator.ForgerAddress))
  151. }
  152. account = &accounts.Account{
  153. Address: cfg.Coordinator.ForgerAddress,
  154. }
  155. if err := keyStore.Unlock(*account,
  156. cfg.Coordinator.EthClient.Keystore.Password); err != nil {
  157. return nil, tracerr.Wrap(err)
  158. }
  159. log.Infow("Forger ethereum account unlocked in the keystore",
  160. "addr", cfg.Coordinator.ForgerAddress)
  161. }
  162. client, err := eth.NewClient(ethClient, account, keyStore, &eth.ClientConfig{
  163. Ethereum: ethCfg,
  164. Rollup: eth.RollupConfig{
  165. Address: cfg.SmartContracts.Rollup,
  166. },
  167. Auction: eth.AuctionConfig{
  168. Address: cfg.SmartContracts.Auction,
  169. TokenHEZ: eth.TokenConfig{
  170. Address: cfg.SmartContracts.TokenHEZ,
  171. Name: cfg.SmartContracts.TokenHEZName,
  172. },
  173. },
  174. WDelayer: eth.WDelayerConfig{
  175. Address: cfg.SmartContracts.WDelayer,
  176. },
  177. })
  178. if err != nil {
  179. return nil, tracerr.Wrap(err)
  180. }
  181. chainID, err := client.EthChainID()
  182. if err != nil {
  183. return nil, tracerr.Wrap(err)
  184. }
  185. if !chainID.IsUint64() {
  186. return nil, tracerr.Wrap(fmt.Errorf("chainID cannot be represented as uint64"))
  187. }
  188. chainIDU64 := chainID.Uint64()
  189. const maxUint16 uint64 = 0xffff
  190. if chainIDU64 > maxUint16 {
  191. return nil, tracerr.Wrap(fmt.Errorf("chainID overflows uint16"))
  192. }
  193. chainIDU16 := uint16(chainIDU64)
  194. const safeStateDBKeep = 128
  195. if cfg.StateDB.Keep < safeStateDBKeep {
  196. return nil, tracerr.Wrap(fmt.Errorf("cfg.StateDB.Keep = %v < %v, which is unsafe",
  197. cfg.StateDB.Keep, safeStateDBKeep))
  198. }
  199. stateDB, err := statedb.NewStateDB(statedb.Config{
  200. Path: cfg.StateDB.Path,
  201. Keep: cfg.StateDB.Keep,
  202. Type: statedb.TypeSynchronizer,
  203. NLevels: statedb.MaxNLevels,
  204. })
  205. if err != nil {
  206. return nil, tracerr.Wrap(err)
  207. }
  208. var l2DB *l2db.L2DB
  209. if mode == ModeCoordinator {
  210. l2DB = l2db.NewL2DB(
  211. dbRead, dbWrite,
  212. cfg.Coordinator.L2DB.SafetyPeriod,
  213. cfg.Coordinator.L2DB.MaxTxs,
  214. cfg.Coordinator.L2DB.MinFeeUSD,
  215. cfg.Coordinator.L2DB.TTL.Duration,
  216. apiConnCon,
  217. )
  218. }
  219. sync, err := synchronizer.NewSynchronizer(client, historyDB, l2DB, stateDB, synchronizer.Config{
  220. StatsRefreshPeriod: cfg.Synchronizer.StatsRefreshPeriod.Duration,
  221. ChainID: chainIDU16,
  222. })
  223. if err != nil {
  224. return nil, tracerr.Wrap(err)
  225. }
  226. initSCVars := sync.SCVars()
  227. scConsts := common.SCConsts{
  228. Rollup: *sync.RollupConstants(),
  229. Auction: *sync.AuctionConstants(),
  230. WDelayer: *sync.WDelayerConstants(),
  231. }
  232. hdbNodeCfg := historydb.NodeConfig{
  233. MaxPoolTxs: cfg.Coordinator.L2DB.MaxTxs,
  234. MinFeeUSD: cfg.Coordinator.L2DB.MinFeeUSD,
  235. ForgeDelay: cfg.Coordinator.ForgeDelay.Duration.Seconds(),
  236. }
  237. if err := historyDB.SetNodeConfig(&hdbNodeCfg); err != nil {
  238. return nil, tracerr.Wrap(err)
  239. }
  240. hdbConsts := historydb.Constants{
  241. SCConsts: common.SCConsts{
  242. Rollup: scConsts.Rollup,
  243. Auction: scConsts.Auction,
  244. WDelayer: scConsts.WDelayer,
  245. },
  246. ChainID: chainIDU16,
  247. HermezAddress: cfg.SmartContracts.Rollup,
  248. }
  249. if err := historyDB.SetConstants(&hdbConsts); err != nil {
  250. return nil, tracerr.Wrap(err)
  251. }
  252. stateAPIUpdater := stateapiupdater.NewUpdater(historyDB, &hdbNodeCfg, initSCVars, &hdbConsts)
  253. var coord *coordinator.Coordinator
  254. if mode == ModeCoordinator {
  255. // Unlock FeeAccount EthAddr in the keystore to generate the
  256. // account creation authorization
  257. if !keyStore.HasAddress(cfg.Coordinator.FeeAccount.Address) {
  258. return nil, tracerr.Wrap(fmt.Errorf(
  259. "ethereum keystore doesn't have the key for address %v",
  260. cfg.Coordinator.FeeAccount.Address))
  261. }
  262. feeAccount := accounts.Account{
  263. Address: cfg.Coordinator.FeeAccount.Address,
  264. }
  265. if err := keyStore.Unlock(feeAccount,
  266. cfg.Coordinator.EthClient.Keystore.Password); err != nil {
  267. return nil, tracerr.Wrap(err)
  268. }
  269. auth := &common.AccountCreationAuth{
  270. EthAddr: cfg.Coordinator.FeeAccount.Address,
  271. BJJ: cfg.Coordinator.FeeAccount.BJJ,
  272. }
  273. if err := auth.Sign(func(msg []byte) ([]byte, error) {
  274. return keyStore.SignHash(feeAccount, msg)
  275. }, chainIDU16, cfg.SmartContracts.Rollup); err != nil {
  276. return nil, tracerr.Wrap(err)
  277. }
  278. coordAccount := &txselector.CoordAccount{
  279. Addr: cfg.Coordinator.FeeAccount.Address,
  280. BJJ: cfg.Coordinator.FeeAccount.BJJ,
  281. AccountCreationAuth: auth.Signature,
  282. }
  283. txSelector, err := txselector.NewTxSelector(coordAccount,
  284. cfg.Coordinator.TxSelector.Path, stateDB, l2DB)
  285. if err != nil {
  286. return nil, tracerr.Wrap(err)
  287. }
  288. batchBuilder, err := batchbuilder.NewBatchBuilder(cfg.Coordinator.BatchBuilder.Path,
  289. stateDB, 0, uint64(cfg.Coordinator.Circuit.NLevels))
  290. if err != nil {
  291. return nil, tracerr.Wrap(err)
  292. }
  293. serverProofs := make([]prover.Client, len(cfg.Coordinator.ServerProofs))
  294. for i, serverProofCfg := range cfg.Coordinator.ServerProofs {
  295. serverProofs[i] = prover.NewProofServerClient(serverProofCfg.URL,
  296. cfg.Coordinator.ProofServerPollInterval.Duration)
  297. }
  298. txProcessorCfg := txprocessor.Config{
  299. NLevels: uint32(cfg.Coordinator.Circuit.NLevels),
  300. MaxTx: uint32(cfg.Coordinator.Circuit.MaxTx),
  301. ChainID: chainIDU16,
  302. MaxFeeTx: common.RollupConstMaxFeeIdxCoordinator,
  303. MaxL1Tx: common.RollupConstMaxL1Tx,
  304. }
  305. var verifierIdx int
  306. if cfg.Coordinator.Debug.RollupVerifierIndex == nil {
  307. verifierIdx, err = scConsts.Rollup.FindVerifierIdx(
  308. cfg.Coordinator.Circuit.MaxTx,
  309. cfg.Coordinator.Circuit.NLevels,
  310. )
  311. if err != nil {
  312. return nil, tracerr.Wrap(err)
  313. }
  314. log.Infow("Found verifier that matches circuit config", "verifierIdx", verifierIdx)
  315. } else {
  316. verifierIdx = *cfg.Coordinator.Debug.RollupVerifierIndex
  317. log.Infow("Using debug verifier index from config", "verifierIdx", verifierIdx)
  318. if verifierIdx >= len(scConsts.Rollup.Verifiers) {
  319. return nil, tracerr.Wrap(
  320. fmt.Errorf("verifierIdx (%v) >= "+
  321. "len(scConsts.Rollup.Verifiers) (%v)",
  322. verifierIdx, len(scConsts.Rollup.Verifiers)))
  323. }
  324. verifier := scConsts.Rollup.Verifiers[verifierIdx]
  325. if verifier.MaxTx != cfg.Coordinator.Circuit.MaxTx ||
  326. verifier.NLevels != cfg.Coordinator.Circuit.NLevels {
  327. return nil, tracerr.Wrap(
  328. fmt.Errorf("Circuit config and verifier params don't match. "+
  329. "circuit.MaxTx = %v, circuit.NLevels = %v, "+
  330. "verifier.MaxTx = %v, verifier.NLevels = %v",
  331. cfg.Coordinator.Circuit.MaxTx, cfg.Coordinator.Circuit.NLevels,
  332. verifier.MaxTx, verifier.NLevels,
  333. ))
  334. }
  335. }
  336. coord, err = coordinator.NewCoordinator(
  337. coordinator.Config{
  338. ForgerAddress: cfg.Coordinator.ForgerAddress,
  339. ConfirmBlocks: cfg.Coordinator.ConfirmBlocks,
  340. L1BatchTimeoutPerc: cfg.Coordinator.L1BatchTimeoutPerc,
  341. ForgeRetryInterval: cfg.Coordinator.ForgeRetryInterval.Duration,
  342. ForgeDelay: cfg.Coordinator.ForgeDelay.Duration,
  343. MustForgeAtSlotDeadline: cfg.Coordinator.MustForgeAtSlotDeadline,
  344. IgnoreSlotCommitment: cfg.Coordinator.IgnoreSlotCommitment,
  345. ForgeNoTxsDelay: cfg.Coordinator.ForgeNoTxsDelay.Duration,
  346. SyncRetryInterval: cfg.Coordinator.SyncRetryInterval.Duration,
  347. PurgeByExtDelInterval: cfg.Coordinator.PurgeByExtDelInterval.Duration,
  348. EthClientAttempts: cfg.Coordinator.EthClient.Attempts,
  349. EthClientAttemptsDelay: cfg.Coordinator.EthClient.AttemptsDelay.Duration,
  350. EthNoReuseNonce: cfg.Coordinator.EthClient.NoReuseNonce,
  351. EthTxResendTimeout: cfg.Coordinator.EthClient.TxResendTimeout.Duration,
  352. MaxGasPrice: cfg.Coordinator.EthClient.MaxGasPrice,
  353. GasPriceIncPerc: cfg.Coordinator.EthClient.GasPriceIncPerc,
  354. TxManagerCheckInterval: cfg.Coordinator.EthClient.CheckLoopInterval.Duration,
  355. DebugBatchPath: cfg.Coordinator.Debug.BatchPath,
  356. Purger: coordinator.PurgerCfg{
  357. PurgeBatchDelay: cfg.Coordinator.L2DB.PurgeBatchDelay,
  358. InvalidateBatchDelay: cfg.Coordinator.L2DB.InvalidateBatchDelay,
  359. PurgeBlockDelay: cfg.Coordinator.L2DB.PurgeBlockDelay,
  360. InvalidateBlockDelay: cfg.Coordinator.L2DB.InvalidateBlockDelay,
  361. },
  362. ForgeBatchGasCost: cfg.Coordinator.EthClient.ForgeBatchGasCost,
  363. VerifierIdx: uint8(verifierIdx),
  364. TxProcessorConfig: txProcessorCfg,
  365. },
  366. historyDB,
  367. l2DB,
  368. txSelector,
  369. batchBuilder,
  370. serverProofs,
  371. client,
  372. &scConsts,
  373. initSCVars,
  374. )
  375. if err != nil {
  376. return nil, tracerr.Wrap(err)
  377. }
  378. }
  379. var nodeAPI *NodeAPI
  380. if cfg.API.Address != "" {
  381. if cfg.Debug.GinDebugMode {
  382. gin.SetMode(gin.DebugMode)
  383. } else {
  384. gin.SetMode(gin.ReleaseMode)
  385. }
  386. if cfg.API.UpdateMetricsInterval.Duration == 0 {
  387. return nil, tracerr.Wrap(fmt.Errorf("invalid cfg.API.UpdateMetricsInterval: %v",
  388. cfg.API.UpdateMetricsInterval.Duration))
  389. }
  390. if cfg.API.UpdateRecommendedFeeInterval.Duration == 0 {
  391. return nil, tracerr.Wrap(fmt.Errorf("invalid cfg.API.UpdateRecommendedFeeInterval: %v",
  392. cfg.API.UpdateRecommendedFeeInterval.Duration))
  393. }
  394. server := gin.Default()
  395. coord := false
  396. if mode == ModeCoordinator {
  397. coord = cfg.Coordinator.API.Coordinator
  398. }
  399. var err error
  400. nodeAPI, err = NewNodeAPI(
  401. cfg.API.Address,
  402. coord, cfg.API.Explorer,
  403. server,
  404. historyDB,
  405. l2DB,
  406. )
  407. if err != nil {
  408. return nil, tracerr.Wrap(err)
  409. }
  410. }
  411. var debugAPI *debugapi.DebugAPI
  412. if cfg.Debug.APIAddress != "" {
  413. debugAPI = debugapi.NewDebugAPI(cfg.Debug.APIAddress, stateDB, sync)
  414. }
  415. priceUpdater, err := priceupdater.NewPriceUpdater(
  416. cfg.PriceUpdater.DefaultUpdateMethod,
  417. cfg.PriceUpdater.TokensConfig,
  418. historyDB,
  419. cfg.PriceUpdater.URLBitfinexV2,
  420. cfg.PriceUpdater.URLCoinGeckoV3,
  421. )
  422. if err != nil {
  423. return nil, tracerr.Wrap(err)
  424. }
  425. ctx, cancel := context.WithCancel(context.Background())
  426. return &Node{
  427. stateAPIUpdater: stateAPIUpdater,
  428. nodeAPI: nodeAPI,
  429. debugAPI: debugAPI,
  430. priceUpdater: priceUpdater,
  431. coord: coord,
  432. sync: sync,
  433. cfg: cfg,
  434. mode: mode,
  435. sqlConnRead: dbRead,
  436. sqlConnWrite: dbWrite,
  437. historyDB: historyDB,
  438. ctx: ctx,
  439. cancel: cancel,
  440. }, nil
  441. }
  442. // APIServer is a server that only runs the API
  443. type APIServer struct {
  444. nodeAPI *NodeAPI
  445. mode Mode
  446. ctx context.Context
  447. wg sync.WaitGroup
  448. cancel context.CancelFunc
  449. }
  450. // NewAPIServer creates a new APIServer
  451. func NewAPIServer(mode Mode, cfg *config.APIServer) (*APIServer, error) {
  452. meddler.Debug = cfg.Debug.MeddlerLogs
  453. // Establish DB connection
  454. dbWrite, err := dbUtils.InitSQLDB(
  455. cfg.PostgreSQL.PortWrite,
  456. cfg.PostgreSQL.HostWrite,
  457. cfg.PostgreSQL.UserWrite,
  458. cfg.PostgreSQL.PasswordWrite,
  459. cfg.PostgreSQL.NameWrite,
  460. )
  461. if err != nil {
  462. return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
  463. }
  464. var dbRead *sqlx.DB
  465. if cfg.PostgreSQL.HostRead == "" {
  466. dbRead = dbWrite
  467. } else if cfg.PostgreSQL.HostRead == cfg.PostgreSQL.HostWrite {
  468. return nil, tracerr.Wrap(fmt.Errorf(
  469. "PostgreSQL.HostRead and PostgreSQL.HostWrite must be different",
  470. ))
  471. } else {
  472. dbRead, err = dbUtils.InitSQLDB(
  473. cfg.PostgreSQL.PortRead,
  474. cfg.PostgreSQL.HostRead,
  475. cfg.PostgreSQL.UserRead,
  476. cfg.PostgreSQL.PasswordRead,
  477. cfg.PostgreSQL.NameRead,
  478. )
  479. if err != nil {
  480. return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
  481. }
  482. }
  483. apiConnCon := dbUtils.NewAPIConnectionController(
  484. cfg.API.MaxSQLConnections,
  485. cfg.API.SQLConnectionTimeout.Duration,
  486. )
  487. historyDB := historydb.NewHistoryDB(dbRead, dbWrite, apiConnCon)
  488. var l2DB *l2db.L2DB
  489. if mode == ModeCoordinator {
  490. l2DB = l2db.NewL2DB(
  491. dbRead, dbWrite,
  492. 0,
  493. cfg.Coordinator.L2DB.MaxTxs,
  494. cfg.Coordinator.L2DB.MinFeeUSD,
  495. 0,
  496. apiConnCon,
  497. )
  498. }
  499. if cfg.Debug.GinDebugMode {
  500. gin.SetMode(gin.DebugMode)
  501. } else {
  502. gin.SetMode(gin.ReleaseMode)
  503. }
  504. server := gin.Default()
  505. coord := false
  506. if mode == ModeCoordinator {
  507. coord = cfg.Coordinator.API.Coordinator
  508. }
  509. nodeAPI, err := NewNodeAPI(
  510. cfg.API.Address,
  511. coord, cfg.API.Explorer,
  512. server,
  513. historyDB,
  514. l2DB,
  515. )
  516. if err != nil {
  517. return nil, tracerr.Wrap(err)
  518. }
  519. ctx, cancel := context.WithCancel(context.Background())
  520. return &APIServer{
  521. nodeAPI: nodeAPI,
  522. mode: mode,
  523. ctx: ctx,
  524. cancel: cancel,
  525. }, nil
  526. }
  527. // Start the APIServer
  528. func (s *APIServer) Start() {
  529. log.Infow("Starting api server...", "mode", s.mode)
  530. log.Info("Starting NodeAPI...")
  531. s.wg.Add(1)
  532. go func() {
  533. defer func() {
  534. log.Info("NodeAPI routine stopped")
  535. s.wg.Done()
  536. }()
  537. if err := s.nodeAPI.Run(s.ctx); err != nil {
  538. if s.ctx.Err() != nil {
  539. return
  540. }
  541. log.Fatalw("NodeAPI.Run", "err", err)
  542. }
  543. }()
  544. }
  545. // Stop the APIServer
  546. func (s *APIServer) Stop() {
  547. log.Infow("Stopping NodeAPI...")
  548. s.cancel()
  549. s.wg.Wait()
  550. }
// NodeAPI holds the node http API
type NodeAPI struct { //nolint:golint
    api    *api.API
    engine *gin.Engine
    addr   string
}

func handleNoRoute(c *gin.Context) {
    c.JSON(http.StatusNotFound, gin.H{
        "error": "404 page not found",
    })
}

// NewNodeAPI creates a new NodeAPI (which internally calls api.NewAPI)
func NewNodeAPI(
    addr string,
    coordinatorEndpoints, explorerEndpoints bool,
    server *gin.Engine,
    hdb *historydb.HistoryDB,
    l2db *l2db.L2DB,
) (*NodeAPI, error) {
    engine := gin.Default()
    engine.NoRoute(handleNoRoute)
    engine.Use(cors.Default())
    _api, err := api.NewAPI(
        coordinatorEndpoints, explorerEndpoints,
        engine,
        hdb,
        l2db,
    )
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    return &NodeAPI{
        addr:   addr,
        api:    _api,
        engine: engine,
    }, nil
}

// Run starts the http server of the NodeAPI. To stop it, pass a context
// with cancellation.
func (a *NodeAPI) Run(ctx context.Context) error {
    server := &http.Server{
        Handler: a.engine,
        // TODO: Figure out best parameters for production
        ReadTimeout:    30 * time.Second, //nolint:gomnd
        WriteTimeout:   30 * time.Second, //nolint:gomnd
        MaxHeaderBytes: 1 << 20,          //nolint:gomnd
    }
    listener, err := net.Listen("tcp", a.addr)
    if err != nil {
        return tracerr.Wrap(err)
    }
    log.Infof("NodeAPI is ready at %v", a.addr)
    go func() {
        if err := server.Serve(listener); err != nil &&
            tracerr.Unwrap(err) != http.ErrServerClosed {
            log.Fatalf("Listen: %s\n", err)
        }
    }()
    <-ctx.Done()
    log.Info("Stopping NodeAPI...")
    ctxTimeout, cancel := context.WithTimeout(context.Background(), 10*time.Second) //nolint:gomnd
    defer cancel()
    if err := server.Shutdown(ctxTimeout); err != nil {
        return tracerr.Wrap(err)
    }
    log.Info("NodeAPI done")
    return nil
}
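
// Example (editor's sketch, not part of the original source): serving the
// NodeAPI on its own and shutting it down through context cancellation, which
// is what triggers server.Shutdown in Run above. The address and the two
// boolean flags are illustrative; hdb and l2db are assumed to be initialized
// elsewhere.
//
//    ctx, cancel := context.WithCancel(context.Background())
//    nodeAPI, err := NewNodeAPI("localhost:8086", true, true, gin.Default(), hdb, l2db)
//    if err != nil {
//        return tracerr.Wrap(err)
//    }
//    errCh := make(chan error, 1)
//    go func() { errCh <- nodeAPI.Run(ctx) }()
//    // ... later, on shutdown:
//    cancel()
//    if err := <-errCh; err != nil {
//        log.Errorw("NodeAPI.Run", "err", err)
//    }
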
func (n *Node) handleNewBlock(ctx context.Context, stats *synchronizer.Stats,
    vars *common.SCVariablesPtr, batches []common.BatchData) error {
    if n.mode == ModeCoordinator {
        n.coord.SendMsg(ctx, coordinator.MsgSyncBlock{
            Stats:   *stats,
            Vars:    *vars,
            Batches: batches,
        })
    }
    n.stateAPIUpdater.SetSCVars(vars)
    if stats.Synced() {
        if err := n.stateAPIUpdater.UpdateNetworkInfo(
            stats.Eth.LastBlock, stats.Sync.LastBlock,
            common.BatchNum(stats.Eth.LastBatchNum),
            stats.Sync.Auction.CurrentSlot.SlotNum,
        ); err != nil {
            log.Errorw("ApiStateUpdater.UpdateNetworkInfo", "err", err)
        }
    } else {
        n.stateAPIUpdater.UpdateNetworkInfoBlock(
            stats.Eth.LastBlock, stats.Sync.LastBlock,
        )
    }
    if err := n.stateAPIUpdater.Store(); err != nil {
        return tracerr.Wrap(err)
    }
    return nil
}

func (n *Node) handleReorg(ctx context.Context, stats *synchronizer.Stats,
    vars *common.SCVariables) error {
    if n.mode == ModeCoordinator {
        n.coord.SendMsg(ctx, coordinator.MsgSyncReorg{
            Stats: *stats,
            Vars:  *vars.AsPtr(),
        })
    }
    n.stateAPIUpdater.SetSCVars(vars.AsPtr())
    n.stateAPIUpdater.UpdateNetworkInfoBlock(
        stats.Eth.LastBlock, stats.Sync.LastBlock,
    )
    if err := n.stateAPIUpdater.Store(); err != nil {
        return tracerr.Wrap(err)
    }
    return nil
}

// TODO(Edu): Consider keeping the `lastBlock` inside synchronizer so that we
// don't have to pass it around.
func (n *Node) syncLoopFn(ctx context.Context, lastBlock *common.Block) (*common.Block,
    time.Duration, error) {
    blockData, discarded, err := n.sync.Sync(ctx, lastBlock)
    stats := n.sync.Stats()
    if err != nil {
        // case: error
        return nil, n.cfg.Synchronizer.SyncLoopInterval.Duration, tracerr.Wrap(err)
    } else if discarded != nil {
        // case: reorg
        log.Infow("Synchronizer.Sync reorg", "discarded", *discarded)
        vars := n.sync.SCVars()
        if err := n.handleReorg(ctx, stats, vars); err != nil {
            return nil, time.Duration(0), tracerr.Wrap(err)
        }
        return nil, time.Duration(0), nil
    } else if blockData != nil {
        // case: new block
        vars := common.SCVariablesPtr{
            Rollup:   blockData.Rollup.Vars,
            Auction:  blockData.Auction.Vars,
            WDelayer: blockData.WDelayer.Vars,
        }
        if err := n.handleNewBlock(ctx, stats, &vars, blockData.Rollup.Batches); err != nil {
            return nil, time.Duration(0), tracerr.Wrap(err)
        }
        return &blockData.Block, time.Duration(0), nil
    } else {
        // case: no block
        return lastBlock, n.cfg.Synchronizer.SyncLoopInterval.Duration, nil
    }
}

// StartSynchronizer starts the synchronizer
func (n *Node) StartSynchronizer() {
    log.Info("Starting Synchronizer...")

    // Trigger a manual call to handleNewBlock with the loaded state of the
    // synchronizer in order to quickly activate the API and Coordinator
    // and avoid waiting for the next block. Without this, the API and
    // Coordinator will not react until the following block (starting from
    // the last synced one) is synchronized
    stats := n.sync.Stats()
    vars := n.sync.SCVars()
    if err := n.handleNewBlock(n.ctx, stats, vars.AsPtr(), []common.BatchData{}); err != nil {
        log.Fatalw("Node.handleNewBlock", "err", err)
    }

    n.wg.Add(1)
    go func() {
        var err error
        var lastBlock *common.Block
        waitDuration := time.Duration(0)
        for {
            select {
            case <-n.ctx.Done():
                log.Info("Synchronizer done")
                n.wg.Done()
                return
            case <-time.After(waitDuration):
                if lastBlock, waitDuration, err = n.syncLoopFn(n.ctx,
                    lastBlock); err != nil {
                    if n.ctx.Err() != nil {
                        continue
                    }
                    if errors.Is(err, eth.ErrBlockHashMismatchEvent) {
                        log.Warnw("Synchronizer.Sync", "err", err)
                    } else if errors.Is(err, synchronizer.ErrUnknownBlock) {
                        log.Warnw("Synchronizer.Sync", "err", err)
                    } else {
                        log.Errorw("Synchronizer.Sync", "err", err)
                    }
                }
            }
        }
    }()

    n.wg.Add(1)
    go func() {
        for {
            select {
            case <-n.ctx.Done():
                log.Info("PriceUpdater done")
                n.wg.Done()
                return
            case <-time.After(n.cfg.PriceUpdater.Interval.Duration):
                if err := n.priceUpdater.UpdateTokenList(); err != nil {
                    log.Errorw("PriceUpdater.UpdateTokenList()", "err", err)
                }
                n.priceUpdater.UpdatePrices(n.ctx)
            }
        }
    }()
}

// StartDebugAPI starts the DebugAPI
func (n *Node) StartDebugAPI() {
    log.Info("Starting DebugAPI...")
    n.wg.Add(1)
    go func() {
        defer func() {
            log.Info("DebugAPI routine stopped")
            n.wg.Done()
        }()
        if err := n.debugAPI.Run(n.ctx); err != nil {
            if n.ctx.Err() != nil {
                return
            }
            log.Fatalw("DebugAPI.Run", "err", err)
        }
    }()
}

// StartNodeAPI starts the NodeAPI
func (n *Node) StartNodeAPI() {
    log.Info("Starting NodeAPI...")
    n.wg.Add(1)
    go func() {
        defer func() {
            log.Info("NodeAPI routine stopped")
            n.wg.Done()
        }()
        if err := n.nodeAPI.Run(n.ctx); err != nil {
            if n.ctx.Err() != nil {
                return
            }
            log.Fatalw("NodeAPI.Run", "err", err)
        }
    }()

    n.wg.Add(1)
    go func() {
        // Do an initial update on startup
        if err := n.stateAPIUpdater.UpdateMetrics(); err != nil {
            log.Errorw("ApiStateUpdater.UpdateMetrics", "err", err)
        }
        if err := n.stateAPIUpdater.Store(); err != nil {
            log.Errorw("ApiStateUpdater.Store", "err", err)
        }
        for {
            select {
            case <-n.ctx.Done():
                log.Info("ApiStateUpdater.UpdateMetrics loop done")
                n.wg.Done()
                return
            case <-time.After(n.cfg.API.UpdateMetricsInterval.Duration):
                if err := n.stateAPIUpdater.UpdateMetrics(); err != nil {
                    log.Errorw("ApiStateUpdater.UpdateMetrics", "err", err)
                    continue
                }
                if err := n.stateAPIUpdater.Store(); err != nil {
                    log.Errorw("ApiStateUpdater.Store", "err", err)
                }
            }
        }
    }()

    n.wg.Add(1)
    go func() {
        // Do an initial update on startup
        if err := n.stateAPIUpdater.UpdateRecommendedFee(); err != nil {
            log.Errorw("ApiStateUpdater.UpdateRecommendedFee", "err", err)
        }
        if err := n.stateAPIUpdater.Store(); err != nil {
            log.Errorw("ApiStateUpdater.Store", "err", err)
        }
        for {
            select {
            case <-n.ctx.Done():
                log.Info("ApiStateUpdater.UpdateRecommendedFee loop done")
                n.wg.Done()
                return
            case <-time.After(n.cfg.API.UpdateRecommendedFeeInterval.Duration):
                if err := n.stateAPIUpdater.UpdateRecommendedFee(); err != nil {
                    log.Errorw("ApiStateUpdater.UpdateRecommendedFee", "err", err)
                    continue
                }
                if err := n.stateAPIUpdater.Store(); err != nil {
                    log.Errorw("ApiStateUpdater.Store", "err", err)
                }
            }
        }
    }()
}

// Start the node
func (n *Node) Start() {
    log.Infow("Starting node...", "mode", n.mode)
    if n.debugAPI != nil {
        n.StartDebugAPI()
    }
    if n.nodeAPI != nil {
        n.StartNodeAPI()
    }
    if n.mode == ModeCoordinator {
        log.Info("Starting Coordinator...")
        n.coord.Start()
    }
    n.StartSynchronizer()
}

// Stop the node
func (n *Node) Stop() {
    log.Infow("Stopping node...")
    n.cancel()
    n.wg.Wait()
    if n.mode == ModeCoordinator {
        log.Info("Stopping Coordinator...")
        n.coord.Stop()
    }
    // Close kv DBs
    n.sync.StateDB().Close()
    if n.mode == ModeCoordinator {
        n.coord.TxSelector().LocalAccountsDB().Close()
        n.coord.BatchBuilder().LocalStateDB().Close()
    }
}
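
// Example (editor's sketch, not part of the original source): a minimal node
// lifecycle as driven from a CLI entry point. NewNode and its arguments are
// assumed to exist in this package with roughly this shape, and the signal
// handling shown here is deliberately simplified.
//
//    node, err := NewNode(mode, cfg) // hypothetical arguments
//    if err != nil {
//        return tracerr.Wrap(err)
//    }
//    node.Start()
//    stopCh := make(chan os.Signal, 1)
//    signal.Notify(stopCh, syscall.SIGINT, syscall.SIGTERM)
//    <-stopCh
//    // Stop cancels the node context, waits for the goroutines and closes the kv DBs.
//    node.Stop()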