package config

import (
	"fmt"
	"io/ioutil"
	"time"

	"github.com/BurntSushi/toml"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"gopkg.in/go-playground/validator.v9"
)

// Duration is a wrapper type that parses a time duration from text.
type Duration struct {
	time.Duration `validate:"required"`
}

// UnmarshalText unmarshals a time duration from text.
func (d *Duration) UnmarshalText(data []byte) error {
	duration, err := time.ParseDuration(string(data))
	if err != nil {
		return tracerr.Wrap(err)
	}
	d.Duration = duration
	return nil
}
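// NOTE: illustrative sketch, not part of the original file. Because Duration
// implements encoding.TextUnmarshaler, any Duration field in the TOML config
// is written as a string accepted by time.ParseDuration. For a hypothetical
// field `Interval Duration`, the line
//
//	Interval = "10s"
//
// decodes into a Duration whose embedded time.Duration equals 10*time.Second,
// while a value such as "10 seconds" makes decoding fail with a wrapped error.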
// ServerProof is the server proof configuration data.
type ServerProof struct {
	// URL is the server proof API URL
	URL string `validate:"required"`
}

// Coordinator is the coordinator specific configuration.
type Coordinator struct {
	// ForgerAddress is the address under which this coordinator is forging
	ForgerAddress ethCommon.Address `validate:"required"`
	// FeeAccount is the Hermez account that the coordinator uses to receive fees
	FeeAccount struct {
		// Address is the ethereum address of the account to receive fees
		Address ethCommon.Address `validate:"required"`
		// BJJ is the baby jub jub public key of the account to receive fees
		BJJ babyjub.PublicKeyComp `validate:"required"`
	} `validate:"required"`
	// ConfirmBlocks is the number of confirmation blocks to wait for sent
	// ethereum transactions before forgetting about them
	ConfirmBlocks int64 `validate:"required"`
	// L1BatchTimeoutPerc is the portion of the range before the L1Batch
	// timeout that will trigger a schedule to forge an L1Batch
	L1BatchTimeoutPerc float64 `validate:"required"`
	// StartSlotBlocksDelay is the number of blocks of delay to wait before
	// starting the pipeline when we reach a slot in which we can forge.
	StartSlotBlocksDelay int64
	// ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which
	// the forger address is checked to be allowed to forge (apart from
	// checking the next block), used to decide when to stop scheduling new
	// batches (by stopping the pipeline).
	// For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck
	// is 5, even though at block 11 we canForge, the pipeline will be
	// stopped if we can't forge at block 15.
	// This value should be the expected number of blocks it takes between
	// scheduling a batch and having it mined.
	ScheduleBatchBlocksAheadCheck int64
	// SendBatchBlocksMarginCheck is the number of margin blocks ahead in
	// which the coordinator is also checked to be allowed to forge, apart
	// from the next block; used to decide when to stop sending batches to
	// the smart contract.
	// For example, if we are at block 10 and SendBatchBlocksMarginCheck is
	// 5, even though at block 11 we canForge, the batch will be discarded
	// if we can't forge at block 15.
	SendBatchBlocksMarginCheck int64
	// ProofServerPollInterval is the waiting interval between polling the
	// ProofServer while waiting for a particular status
	ProofServerPollInterval Duration `validate:"required"`
	// ForgeRetryInterval is the waiting interval between calls to forge a
	// batch after an error
	ForgeRetryInterval Duration `validate:"required"`
	// SyncRetryInterval is the waiting interval between calls to the main
	// handler of a synced block after an error
	SyncRetryInterval Duration `validate:"required"`
	// L2DB is the DB that holds the pool of L2Txs
	L2DB struct {
		// SafetyPeriod is the number of batches after which
		// non-pending L2Txs are deleted from the pool
		SafetyPeriod common.BatchNum `validate:"required"`
		// MaxTxs is the number of L2Txs that once reached triggers
		// deletion of old L2Txs
		MaxTxs uint32 `validate:"required"`
		// TTL is the Time To Live for L2Txs in the pool. Once MaxTxs
		// L2Txs is reached, L2Txs older than TTL will be deleted.
		TTL Duration `validate:"required"`
		// PurgeBatchDelay is the delay between batches to purge outdated transactions
		PurgeBatchDelay int64 `validate:"required"`
		// InvalidateBatchDelay is the delay between batches to mark invalid transactions
		InvalidateBatchDelay int64 `validate:"required"`
		// PurgeBlockDelay is the delay between blocks to purge outdated transactions
		PurgeBlockDelay int64 `validate:"required"`
		// InvalidateBlockDelay is the delay between blocks to mark invalid transactions
		InvalidateBlockDelay int64 `validate:"required"`
	} `validate:"required"`
	TxSelector struct {
		// Path where the TxSelector StateDB is stored
		Path string `validate:"required"`
	} `validate:"required"`
	BatchBuilder struct {
		// Path where the BatchBuilder StateDB is stored
		Path string `validate:"required"`
	} `validate:"required"`
	ServerProofs []ServerProof `validate:"required"`
	Circuit struct {
		// MaxTx is the maximum number of txs supported by the circuit
		MaxTx int64 `validate:"required"`
		// NLevels is the maximum number of merkle tree levels
		// supported by the circuit
		NLevels int64 `validate:"required"`
	} `validate:"required"`
	EthClient struct {
		// CallGasLimit is the default gas limit set for ethereum
		// calls, except for methods where a particular gas limit is
		// hardcoded because it's known to be a big value
		CallGasLimit uint64 `validate:"required"`
		// GasPriceDiv is the gas price division
		GasPriceDiv uint64 `validate:"required"`
		// CheckLoopInterval is the waiting interval between receipt
		// checks of ethereum transactions in the TxManager
		CheckLoopInterval Duration `validate:"required"`
		// Attempts is the number of attempts to do an eth client RPC
		// call before giving up
		Attempts int `validate:"required"`
		// AttemptsDelay is the delay between attempts to do an eth
		// client RPC call
		AttemptsDelay Duration `validate:"required"`
		// TxResendTimeout is the timeout after which a non-mined
		// ethereum transaction will be resent (reusing the nonce) with
		// a newly calculated gas price
		TxResendTimeout time.Duration `validate:"required"`
		// Keystore is the ethereum keystore where private keys are kept
		Keystore struct {
			// Path to the keystore
			Path string `validate:"required"`
			// Password used to decrypt the keys in the keystore
			Password string `validate:"required"`
		} `validate:"required"`
	} `validate:"required"`
	API struct {
		// Coordinator enables the coordinator API endpoints
		Coordinator bool
	} `validate:"required"`
	Debug struct {
		// BatchPath, if set, specifies the path where batchInfo is stored
		// in JSON in every step/update of the pipeline
		BatchPath string
		// LightScrypt, if set, uses light parameters for the ethereum
		// keystore encryption algorithm.
		LightScrypt bool
		// RollupVerifierIndex is the index of the verifier to use in
		// the Rollup smart contract. The verifier chosen by index
		// must match with the Circuit parameters.
		RollupVerifierIndex *int
	}
}
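// NOTE: illustrative sketch, not part of the original file. Coordinator is a
// named field of Node, so its values live under nested TOML tables; for
// example, with placeholder values:
//
//	[Coordinator]
//	ForgerAddress = "0x0000000000000000000000000000000000000001"
//	ConfirmBlocks = 10
//	L1BatchTimeoutPerc = 0.6
//	ProofServerPollInterval = "1s"
//	ForgeRetryInterval = "10s"
//	SyncRetryInterval = "1s"
//
//	[Coordinator.L2DB]
//	SafetyPeriod = 10
//	MaxTxs = 512
//	TTL = "24h"
//	PurgeBatchDelay = 10
//	InvalidateBatchDelay = 20
//	PurgeBlockDelay = 10
//	InvalidateBlockDelay = 20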
// Node is the hermez node configuration.
type Node struct {
	PriceUpdater struct {
		// Interval between price updater calls
		Interval Duration `validate:"required"`
		// URL of the token prices provider
		URL string `validate:"required"`
		// Type of the API of the token prices provider
		Type string `validate:"required"`
	} `validate:"required"`
	StateDB struct {
		// Path where the synchronizer StateDB is stored
		Path string `validate:"required"`
		// Keep is the number of checkpoints to keep
		Keep int `validate:"required"`
	} `validate:"required"`
	PostgreSQL struct {
		// Port of the PostgreSQL server
		Port int `validate:"required"`
		// Host of the PostgreSQL server
		Host string `validate:"required"`
		// User of the PostgreSQL server
		User string `validate:"required"`
		// Password of the PostgreSQL server
		Password string `validate:"required"`
		// Name of the PostgreSQL server database
		Name string `validate:"required"`
	} `validate:"required"`
	Web3 struct {
		// URL is the URL of the web3 ethereum-node RPC server
		URL string `validate:"required"`
	} `validate:"required"`
	Synchronizer struct {
		// SyncLoopInterval is the interval between attempts to
		// synchronize a new block from an ethereum node
		SyncLoopInterval Duration `validate:"required"`
		// StatsRefreshPeriod is the interval between updates of the
		// synchronizer state Eth parameters (`Eth.LastBlock` and
		// `Eth.LastBatch`). This value only affects the reported % of
		// synchronization of blocks and batches, nothing else.
		StatsRefreshPeriod Duration `validate:"required"`
	} `validate:"required"`
	SmartContracts struct {
		// Rollup is the address of the Hermez.sol smart contract
		Rollup ethCommon.Address `validate:"required"`
		// Auction is the address of the HermezAuctionProtocol.sol smart
		// contract
		Auction ethCommon.Address `validate:"required"`
		// WDelayer is the address of the WithdrawalDelayer.sol smart
		// contract
		WDelayer ethCommon.Address `validate:"required"`
		// TokenHEZ is the address of the HEZTokenFull.sol smart
		// contract
		TokenHEZ ethCommon.Address `validate:"required"`
		// TokenHEZName is the name of the HEZ token deployed at the
		// TokenHEZ address
		TokenHEZName string `validate:"required"`
	} `validate:"required"`
	API struct {
		// Address where the API will listen if set
		Address string
		// Explorer enables the Explorer API endpoints
		Explorer bool
		// UpdateMetricsInterval is the interval between updates of the
		// API metrics
		UpdateMetricsInterval Duration
		// UpdateRecommendedFeeInterval is the interval between updates of the
		// recommended fees
		UpdateRecommendedFeeInterval Duration
		// MaxSQLConnections is the maximum number of concurrent
		// connections allowed between the API and SQL
		MaxSQLConnections int `validate:"required"`
		// SQLConnectionTimeout is the maximum amount of time that an API request
		// can wait to establish a SQL connection
		SQLConnectionTimeout Duration
	} `validate:"required"`
	Debug struct {
		// APIAddress is the address where the debugAPI will listen if
		// set
		APIAddress string
		// MeddlerLogs enables meddler debug mode, where unused columns and struct
		// fields will be logged
		MeddlerLogs bool
	}
	Coordinator Coordinator `validate:"-"`
}
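// NOTE: illustrative sketch, not part of the original file. The top-level
// sections of the TOML file decoded by Load mirror the Node fields; the
// values below are placeholders:
//
//	[StateDB]
//	Path = "/var/hermez/statedb"
//	Keep = 256
//
//	[PostgreSQL]
//	Port = 5432
//	Host = "localhost"
//	User = "hermez"
//	Password = "yourpasswordhere"
//	Name = "hermez"
//
//	[Web3]
//	URL = "http://localhost:8545"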
// Load loads a generic config.
func Load(path string, cfg interface{}) error {
	bs, err := ioutil.ReadFile(path) //nolint:gosec
	if err != nil {
		return tracerr.Wrap(err)
	}
	cfgToml := string(bs)
	if _, err := toml.Decode(cfgToml, cfg); err != nil {
		return tracerr.Wrap(err)
	}
	return nil
}

// LoadCoordinator loads the Coordinator configuration from path.
func LoadCoordinator(path string) (*Node, error) {
	var cfg Node
	if err := Load(path, &cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
	}
	validate := validator.New()
	if err := validate.Struct(cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
	}
	if err := validate.Struct(cfg.Coordinator); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
	}
	return &cfg, nil
}

// LoadNode loads the Node configuration from path.
func LoadNode(path string) (*Node, error) {
	var cfg Node
	if err := Load(path, &cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
	}
	validate := validator.New()
	if err := validate.Struct(cfg); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
	}
	return &cfg, nil
}
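A minimal usage sketch, not taken from the repository: it assumes the package import path github.com/hermeznetwork/hermez-node/config and a hypothetical configuration file path. LoadNode reads the TOML file and validates the Node fields, while LoadCoordinator additionally validates the Coordinator section.

package main

import (
	"log"

	"github.com/hermeznetwork/hermez-node/config"
)

func main() {
	// Hypothetical configuration path; adjust to the actual deployment.
	cfg, err := config.LoadNode("/etc/hermez/node.toml")
	if err != nil {
		log.Fatal(err)
	}
	// A couple of decoded values, including a Duration parsed from text.
	log.Printf("web3 RPC URL: %s", cfg.Web3.URL)
	log.Printf("sync loop interval: %s", cfg.Synchronizer.SyncLoopInterval.Duration)
}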