package config

import (
	"fmt"
	"io/ioutil"
	"time"

	"github.com/BurntSushi/toml"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"gopkg.in/go-playground/validator.v9"
)

// Duration is a wrapper type that parses time duration from text.
type Duration struct {
	time.Duration `validate:"required"`
}

// UnmarshalText unmarshals time duration from text.
func (d *Duration) UnmarshalText(data []byte) error {
	duration, err := time.ParseDuration(string(data))
	if err != nil {
		return tracerr.Wrap(err)
	}
	d.Duration = duration
	return nil
}
// ServerProof is the server proof configuration data.
type ServerProof struct {
	// URL is the server proof API URL
	URL string `validate:"required"`
}

// Coordinator is the coordinator specific configuration.
type Coordinator struct {
	// ForgerAddress is the address under which this coordinator is forging
	ForgerAddress ethCommon.Address `validate:"required"`
	// FeeAccount is the Hermez account that the coordinator uses to receive fees
	FeeAccount struct {
		// Address is the ethereum address of the account to receive fees
		Address ethCommon.Address `validate:"required"`
		// BJJ is the baby jub jub public key of the account to receive fees
		BJJ babyjub.PublicKeyComp `validate:"required"`
	} `validate:"required"`
	// ConfirmBlocks is the number of confirmation blocks to wait for sent
	// ethereum transactions before forgetting about them
	ConfirmBlocks int64 `validate:"required"`
	// L1BatchTimeoutPerc is the portion of the range before the L1Batch
	// timeout that will trigger a schedule to forge an L1Batch
	L1BatchTimeoutPerc float64 `validate:"required"`
	// ProofServerPollInterval is the waiting interval between polling the
	// ProofServer while waiting for a particular status
	ProofServerPollInterval Duration `validate:"required"`
	// ForgeRetryInterval is the waiting interval between calls to forge a
	// batch after an error
	ForgeRetryInterval Duration `validate:"required"`
	// SyncRetryInterval is the waiting interval between calls to the main
	// handler of a synced block after an error
	SyncRetryInterval Duration `validate:"required"`
	// L2DB is the DB that holds the pool of L2Txs
	L2DB struct {
		// SafetyPeriod is the number of batches after which
		// non-pending L2Txs are deleted from the pool
		SafetyPeriod common.BatchNum `validate:"required"`
		// MaxTxs is the number of L2Txs that, once reached, triggers
		// deletion of old L2Txs
		MaxTxs uint32 `validate:"required"`
		// TTL is the Time To Live for L2Txs in the pool. Once the
		// pool contains MaxTxs L2Txs, L2Txs older than TTL will be
		// deleted.
		TTL Duration `validate:"required"`
		// PurgeBatchDelay is the delay between batches to purge outdated transactions
		PurgeBatchDelay int64 `validate:"required"`
		// InvalidateBatchDelay is the delay between batches to mark invalid transactions
		InvalidateBatchDelay int64 `validate:"required"`
		// PurgeBlockDelay is the delay between blocks to purge outdated transactions
		PurgeBlockDelay int64 `validate:"required"`
		// InvalidateBlockDelay is the delay between blocks to mark invalid transactions
		InvalidateBlockDelay int64 `validate:"required"`
	} `validate:"required"`
	TxSelector struct {
		// Path where the TxSelector StateDB is stored
		Path string `validate:"required"`
	} `validate:"required"`
	BatchBuilder struct {
		// Path where the BatchBuilder StateDB is stored
		Path string `validate:"required"`
	} `validate:"required"`
	ServerProofs []ServerProof `validate:"required"`
	Circuit struct {
		// VerifierIdx uint8 `validate:"required"`
		// MaxTx is the maximum number of txs supported by the circuit
		MaxTx int64 `validate:"required"`
		// NLevels is the maximum number of merkle tree levels
		// supported by the circuit
		NLevels int64 `validate:"required"`
	} `validate:"required"`
	EthClient struct {
		// CallGasLimit is the default gas limit set for ethereum
		// calls, except for methods where a particular gas limit is
		// hardcoded because it's known to be a big value
		CallGasLimit uint64 `validate:"required"`
		// GasPriceDiv is the gas price division
		GasPriceDiv uint64 `validate:"required"`
		// CheckLoopInterval is the waiting interval between receipt
		// checks of ethereum transactions in the TxManager
		CheckLoopInterval Duration `validate:"required"`
		// Attempts is the number of attempts to do an eth client RPC
		// call before giving up
		Attempts int `validate:"required"`
		// AttemptsDelay is the delay between attempts to do an eth
		// client RPC call
		AttemptsDelay Duration `validate:"required"`
		// Keystore is the ethereum keystore where private keys are kept
		Keystore struct {
			// Path to the keystore
			Path string `validate:"required"`
			// Password used to decrypt the keys in the keystore
			Password string `validate:"required"`
		} `validate:"required"`
	} `validate:"required"`
	API struct {
		// Coordinator enables the coordinator API endpoints
		Coordinator bool
	} `validate:"required"`
	Debug struct {
		// BatchPath, if set, specifies the path where batchInfo is
		// stored in JSON in every step/update of the pipeline
		BatchPath string
		// LightScrypt, if set, uses light parameters for the ethereum
		// keystore encryption algorithm.
		LightScrypt bool
	}
}
// Node is the hermez node configuration.
type Node struct {
	PriceUpdater struct {
		// Interval between price updater calls
		Interval Duration `validate:"required"`
		// URL of the token prices provider
		URL string `validate:"required"`
		// Type of the API of the token prices provider
		Type string `validate:"required"`
	} `validate:"required"`
	StateDB struct {
		// Path where the synchronizer StateDB is stored
		Path string `validate:"required"`
		// Keep is the number of checkpoints to keep
		Keep int `validate:"required"`
	} `validate:"required"`
	PostgreSQL struct {
		// Port of the PostgreSQL server
		Port int `validate:"required"`
		// Host of the PostgreSQL server
		Host string `validate:"required"`
		// User of the PostgreSQL server
		User string `validate:"required"`
		// Password of the PostgreSQL server
		Password string `validate:"required"`
		// Name of the PostgreSQL server database
		Name string `validate:"required"`
	} `validate:"required"`
	Web3 struct {
		// URL is the URL of the web3 ethereum-node RPC server
		URL string `validate:"required"`
	} `validate:"required"`
	Synchronizer struct {
		// SyncLoopInterval is the interval between attempts to
		// synchronize a new block from an ethereum node
		SyncLoopInterval Duration `validate:"required"`
		// StatsRefreshPeriod is the interval between updates of the
		// synchronizer state Eth parameters (`Eth.LastBlock` and
		// `Eth.LastBatch`). This value only affects the reported % of
		// synchronization of blocks and batches, nothing else.
		StatsRefreshPeriod Duration `validate:"required"`
	} `validate:"required"`
	SmartContracts struct {
		// Rollup is the address of the Hermez.sol smart contract
		Rollup ethCommon.Address `validate:"required"`
		// Auction is the address of the HermezAuctionProtocol.sol
		// smart contract
		Auction ethCommon.Address `validate:"required"`
		// WDelayer is the address of the WithdrawalDelayer.sol smart
		// contract
		WDelayer ethCommon.Address `validate:"required"`
		// TokenHEZ is the address of the HEZTokenFull.sol smart
		// contract
		TokenHEZ ethCommon.Address `validate:"required"`
		// TokenHEZName is the name of the HEZ token deployed at
		// TokenHEZ address
		TokenHEZName string `validate:"required"`
	} `validate:"required"`
	API struct {
		// Address where the API will listen if set
		Address string
		// Explorer enables the Explorer API endpoints
		Explorer bool
		// UpdateMetricsInterval is the interval between updates of the
		// API metrics
		UpdateMetricsInterval Duration
		// UpdateRecommendedFeeInterval is the interval between updates
		// of the recommended fees
		UpdateRecommendedFeeInterval Duration
	} `validate:"required"`
	Debug struct {
		// APIAddress is the address where the debugAPI will listen if
		// set
		APIAddress string
		// MeddlerLogs enables meddler debug mode, where unused columns
		// and struct fields will be logged
		MeddlerLogs bool
	}
	Coordinator Coordinator `validate:"-"`
}
// Load loads a generic config.
func Load(path string, cfg interface{}) error {
	bs, err := ioutil.ReadFile(path) //nolint:gosec
	if err != nil {
		return tracerr.Wrap(err)
	}
	cfgToml := string(bs)
	if _, err := toml.Decode(cfgToml, cfg); err != nil {
		return tracerr.Wrap(err)
	}
	return nil
}

// LoadCoordinator loads the Node configuration from path, also validating
// the Coordinator section.
func LoadCoordinator(path string) (*Node, error) {
	var cfg Node
	if err := Load(path, &cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
	}
	validate := validator.New()
	if err := validate.Struct(cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
	}
	if err := validate.Struct(cfg.Coordinator); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error validating coordinator configuration: %w", err))
	}
	return &cfg, nil
}

// LoadNode loads the Node configuration from path.
func LoadNode(path string) (*Node, error) {
	var cfg Node
	if err := Load(path, &cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
	}
	validate := validator.New()
	if err := validate.Struct(cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
	}
	return &cfg, nil
}