package api

import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"math/big"
	"net"
	"net/http"
	"os"
	"strconv"
	"sync"
	"testing"
	"time"

	ethCommon "github.com/ethereum/go-ethereum/common"
	swagger "github.com/getkin/kin-openapi/openapi3filter"
	"github.com/gin-gonic/gin"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/hermez-node/db/historydb"
	"github.com/hermeznetwork/hermez-node/db/l2db"
	"github.com/hermeznetwork/hermez-node/log"
	"github.com/hermeznetwork/hermez-node/stateapiupdater"
	"github.com/hermeznetwork/hermez-node/test"
	"github.com/hermeznetwork/hermez-node/test/til"
	"github.com/hermeznetwork/hermez-node/test/txsets"
	"github.com/hermeznetwork/tracerr"
	"github.com/stretchr/testify/require"
)

// Pendinger is an interface that allows getting last returned item ID and PendingItems
// to be used for building fromItem when testing paginated endpoints.
type Pendinger interface {
	GetPending() (pendingItems, lastItemID uint64)
	Len() int
	New() Pendinger
}
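
// A minimal sketch (hypothetical, not one of the real response types) of how a
// paginated response can satisfy Pendinger, assuming the response carries an
// ItemID per element and a pendingItems counter as the paginated endpoints do:
//
//	type testTokensResponse struct {
//		Tokens       []historydb.TokenWithUSD `json:"tokens"`
//		PendingItems uint64                   `json:"pendingItems"`
//	}
//
//	func (r *testTokensResponse) GetPending() (pendingItems, lastItemID uint64) {
//		if len(r.Tokens) == 0 {
//			return 0, 0
//		}
//		return r.PendingItems, r.Tokens[len(r.Tokens)-1].ItemID
//	}
//	func (r *testTokensResponse) Len() int       { return len(r.Tokens) }
//	func (r *testTokensResponse) New() Pendinger { return &testTokensResponse{} }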
const apiAddr = ":4010"
const apiURL = "http://localhost" + apiAddr + "/"
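
// The tests below spin up a real HTTP server on apiAddr and exercise it over
// localhost; every request/response pair is additionally validated against the
// OpenAPI spec in ./swagger.yml through tc.router (see doGoodReq/doBadReq).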
var SetBlockchain = `
Type: Blockchain

AddToken(1)
AddToken(2)
AddToken(3)
AddToken(4)
AddToken(5)
AddToken(6)
AddToken(7)
AddToken(8)

> block

// Coordinator accounts, Idxs: 256, 257
CreateAccountCoordinator(0) Coord
CreateAccountCoordinator(1) Coord

// close Block:0, Batch:1
> batch

CreateAccountDeposit(0) A: 11100000000000000
CreateAccountDeposit(1) C: 22222222200000000000
CreateAccountCoordinator(0) C

// close Block:0, Batch:2
> batchL1
// Expected balances:
// Coord(0): 0, Coord(1): 0
// C(0): 0

CreateAccountDeposit(1) A: 33333333300000000000

// close Block:0, Batch:3
> batchL1

// close Block:0, Batch:4
> batchL1

CreateAccountDepositTransfer(0) B-A: 44444444400000000000, 123444444400000000000

// close Block:0, Batch:5
> batchL1

CreateAccountDeposit(0) D: 55555555500000000000

// close Block:0, Batch:6
> batchL1

CreateAccountCoordinator(1) B

Transfer(1) A-B: 11100000000000000 (2)
Transfer(0) B-C: 22200000000000000 (3)

// close Block:0, Batch:7
> batchL1 // forge L1User{1}, forge L1Coord{2}, forge L2{2}

Deposit(0) C: 66666666600000000000
DepositTransfer(0) C-D: 77777777700000000000, 12377777700000000000

Transfer(0) A-B: 33350000000000000 (111)
Transfer(0) C-A: 44450000000000000 (222)
Transfer(1) B-C: 55550000000000000 (123)
Exit(0) A: 66650000000000000 (44)

ForceTransfer(0) D-B: 77777700000000000
ForceExit(0) B: 88888800000000000

// close Block:0, Batch:8
> batchL1
> block

Transfer(0) D-A: 99950000000000000 (77)
Transfer(0) B-D: 12300000000000000 (55)

// close Block:1, Batch:1
> batchL1

CreateAccountCoordinator(0) F
CreateAccountCoordinator(0) G
CreateAccountCoordinator(0) H
CreateAccountCoordinator(0) I
CreateAccountCoordinator(0) J
CreateAccountCoordinator(0) K
CreateAccountCoordinator(0) L
CreateAccountCoordinator(0) M
CreateAccountCoordinator(0) N
CreateAccountCoordinator(0) O
CreateAccountCoordinator(0) P

CreateAccountCoordinator(5) G
CreateAccountCoordinator(5) H
CreateAccountCoordinator(5) I
CreateAccountCoordinator(5) J
CreateAccountCoordinator(5) K
CreateAccountCoordinator(5) L
CreateAccountCoordinator(5) M
CreateAccountCoordinator(5) N
CreateAccountCoordinator(5) O
CreateAccountCoordinator(5) P

CreateAccountCoordinator(2) G
CreateAccountCoordinator(2) H
CreateAccountCoordinator(2) I
CreateAccountCoordinator(2) J
CreateAccountCoordinator(2) K
CreateAccountCoordinator(2) L
CreateAccountCoordinator(2) M
CreateAccountCoordinator(2) N
CreateAccountCoordinator(2) O
CreateAccountCoordinator(2) P

> batch
> block
> batch
> block
> batch
> block
`
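
// The til set above is consumed by tcc.GenerateBlocks in TestMain: every
// `> batch`/`> batchL1` line closes a batch and every `> block` line closes an
// ethereum block, so blocksData ends up mirroring what the synchronizer would
// have written to the HistoryDB after syncing the chain described here.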
type testCommon struct {
	blocks           []common.Block
	tokens           []historydb.TokenWithUSD
	batches          []testBatch
	fullBatches      []testFullBatch
	coordinators     []historydb.CoordinatorAPI
	accounts         []testAccount
	txs              []testTx
	exits            []testExit
	poolTxsToSend    []testPoolTxSend
	poolTxsToReceive []testPoolTxReceive
	auths            []testAuth
	router           *swagger.Router
	bids             []testBid
	slots            []testSlot
	auctionVars      common.AuctionVariables
	rollupVars       common.RollupVariables
	wdelayerVars     common.WDelayerVariables
	nextForgers      []historydb.NextForgerAPI
}
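
// tc is the package-level fixture shared by all the endpoint tests: TestMain
// fills it with the data inserted into the DB, expressed in the shape that the
// API is expected to return it.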
var tc testCommon
var config configAPI
var api *API
var stateAPIUpdater *stateapiupdater.Updater

// TestMain initializes the API server and fills HistoryDB and StateDB with fake data,
// emulating the task of the synchronizer, so that the API endpoints under test
// have data to return.
func TestMain(m *testing.M) {
	// Initializations
	// Swagger
	router := swagger.NewRouter().WithSwaggerFromFile("./swagger.yml")
	// HistoryDB
	pass := os.Getenv("POSTGRES_PASS")
	database, err := db.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
	if err != nil {
		panic(err)
	}
	apiConnCon := db.NewAPIConnectionController(1, time.Second)
	hdb := historydb.NewHistoryDB(database, database, apiConnCon)
	if err != nil {
		panic(err)
	}
	// L2DB
	l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
	test.WipeDB(l2DB.DB()) // this will clean HistoryDB and L2DB
	// Config (smart contract constants)
	chainID := uint16(0)
	_config := getConfigTest(chainID)
	config = configAPI{
		RollupConstants:   *newRollupConstants(_config.RollupConstants),
		AuctionConstants:  _config.AuctionConstants,
		WDelayerConstants: _config.WDelayerConstants,
	}
	// API
	apiGin := gin.Default()
	// Reset DB
	test.WipeDB(hdb.DB())
	constants := &historydb.Constants{
		SCConsts: common.SCConsts{
			Rollup:   _config.RollupConstants,
			Auction:  _config.AuctionConstants,
			WDelayer: _config.WDelayerConstants,
		},
		ChainID:       chainID,
		HermezAddress: _config.HermezAddress,
	}
	if err := hdb.SetConstants(constants); err != nil {
		panic(err)
	}
	nodeConfig := &historydb.NodeConfig{
		MaxPoolTxs: 10,
		MinFeeUSD:  0,
	}
	if err := hdb.SetNodeConfig(nodeConfig); err != nil {
		panic(err)
	}
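	// Constants and node config are written to the DB once here, mirroring what
	// the node does at startup; the API reads them back from the DB (together
	// with the periodically updated state) instead of receiving them directly.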
	api, err = NewAPI(
		true,
		true,
		apiGin,
		hdb,
		l2DB,
	)
	if err != nil {
		log.Error(err)
		panic(err)
	}
	// Start server
	listener, err := net.Listen("tcp", apiAddr) //nolint:gosec
	if err != nil {
		panic(err)
	}
	server := &http.Server{Handler: apiGin}
	go func() {
		if err := server.Serve(listener); err != nil &&
			tracerr.Unwrap(err) != http.ErrServerClosed {
			panic(err)
		}
	}()
	// Generate blockchain data with til
	tcc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
	tilCfgExtra := til.ConfigExtra{
		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
		CoordUser:     "Coord",
	}
	blocksData, err := tcc.GenerateBlocks(SetBlockchain)
	if err != nil {
		panic(err)
	}
	err = tcc.FillBlocksExtra(blocksData, &tilCfgExtra)
	if err != nil {
		panic(err)
	}
	err = tcc.FillBlocksForgedL1UserTxs(blocksData)
	if err != nil {
		panic(err)
	}
	AddAditionalInformation(blocksData)
	// Generate L2 Txs with til
	commonPoolTxs, err := tcc.GeneratePoolL2Txs(txsets.SetPoolL2MinimumFlow0)
	if err != nil {
		panic(err)
	}
	// Extract til generated data, and add it to HistoryDB
	var commonBlocks []common.Block
	var commonBatches []common.Batch
	var commonAccounts []common.Account
	var commonExitTree []common.ExitInfo
	var commonL1Txs []common.L1Tx
	var commonL2Txs []common.L2Tx
	// Add ETH token at the beginning of the array
	testTokens := []historydb.TokenWithUSD{}
	ethUSD := float64(500)
	ethNow := time.Now()
	testTokens = append(testTokens, historydb.TokenWithUSD{
		TokenID:     test.EthToken.TokenID,
		EthBlockNum: test.EthToken.EthBlockNum,
		EthAddr:     test.EthToken.EthAddr,
		Name:        test.EthToken.Name,
		Symbol:      test.EthToken.Symbol,
		Decimals:    test.EthToken.Decimals,
		USD:         &ethUSD,
		USDUpdate:   &ethNow,
	})
	err = api.h.UpdateTokenValue(common.EmptyAddr, ethUSD)
	if err != nil {
		panic(err)
	}
	for _, block := range blocksData {
		// Insert block into HistoryDB
		// nolint reason: block is used as read only in the function
		if err := api.h.AddBlockSCData(&block); err != nil { //nolint:gosec
			log.Error(err)
			panic(err)
		}
		// Extract data
		commonBlocks = append(commonBlocks, block.Block)
		for i, tkn := range block.Rollup.AddedTokens {
			token := historydb.TokenWithUSD{
				TokenID:     tkn.TokenID,
				EthBlockNum: tkn.EthBlockNum,
				EthAddr:     tkn.EthAddr,
				Name:        tkn.Name,
				Symbol:      tkn.Symbol,
				Decimals:    tkn.Decimals,
			}
			value := float64(i + 423)
			now := time.Now().UTC()
			token.USD = &value
			token.USDUpdate = &now
			// Set value in DB
			err = api.h.UpdateTokenValue(token.EthAddr, value)
			if err != nil {
				panic(err)
			}
			testTokens = append(testTokens, token)
		}
		// Set USD value for tokens in DB
		for _, batch := range block.Rollup.Batches {
			commonL2Txs = append(commonL2Txs, batch.L2Txs...)
			for i := range batch.CreatedAccounts {
				batch.CreatedAccounts[i].Nonce = common.Nonce(i)
				commonAccounts = append(commonAccounts, batch.CreatedAccounts[i])
			}
			commonBatches = append(commonBatches, batch.Batch)
			commonExitTree = append(commonExitTree, batch.ExitTree...)
			commonL1Txs = append(commonL1Txs, batch.L1UserTxs...)
			commonL1Txs = append(commonL1Txs, batch.L1CoordinatorTxs...)
		}
	}
	// Generate Coordinators and add them to HistoryDB
	const nCoords = 10
	commonCoords := test.GenCoordinators(nCoords, commonBlocks)
	// Update one coordinator to test behaviour when bidder address is repeated
	updatedCoordBlock := commonCoords[len(commonCoords)-1].EthBlockNum
	commonCoords = append(commonCoords, common.Coordinator{
		Bidder:      commonCoords[0].Bidder,
		Forger:      commonCoords[0].Forger,
		EthBlockNum: updatedCoordBlock,
		URL:         commonCoords[0].URL + ".new",
	})
	if err := api.h.AddCoordinators(commonCoords); err != nil {
		panic(err)
	}
	// Test next forgers
	// Set auction vars
	// Slots 3 and 6 will have bids that will be invalidated because of minBid update
	// Slots 4 and 7 will have valid bids, the rest will be boot coordinator slots
	var slot3MinBid int64 = 3
	var slot4MinBid int64 = 4
	var slot6MinBid int64 = 6
	var slot7MinBid int64 = 7
	// First update will indicate how things behave from slot 0
	var defaultSlotSetBid [6]*big.Int = [6]*big.Int{
		big.NewInt(10),          // Slot 0 min bid
		big.NewInt(10),          // Slot 1 min bid
		big.NewInt(10),          // Slot 2 min bid
		big.NewInt(slot3MinBid), // Slot 3 min bid
		big.NewInt(slot4MinBid), // Slot 4 min bid
		big.NewInt(10),          // Slot 5 min bid
	}
	auctionVars := common.AuctionVariables{
		EthBlockNum:              int64(2),
		DonationAddress:          ethCommon.HexToAddress("0x1111111111111111111111111111111111111111"),
		DefaultSlotSetBid:        defaultSlotSetBid,
		DefaultSlotSetBidSlotNum: 0,
		Outbidding:               uint16(1),
		SlotDeadline:             uint8(20),
		BootCoordinator:          ethCommon.HexToAddress("0x1111111111111111111111111111111111111111"),
		BootCoordinatorURL:       "https://boot.coordinator.io",
		ClosedAuctionSlots:       uint16(10),
		OpenAuctionSlots:         uint16(20),
	}
	if err := api.h.AddAuctionVars(&auctionVars); err != nil {
		panic(err)
	}
	// Last update in auction vars will indicate how things will behave from slot 5
	defaultSlotSetBid = [6]*big.Int{
		big.NewInt(10),          // Slot 5 min bid
		big.NewInt(slot6MinBid), // Slot 6 min bid
		big.NewInt(slot7MinBid), // Slot 7 min bid
		big.NewInt(10),          // Slot 8 min bid
		big.NewInt(10),          // Slot 9 min bid
		big.NewInt(10),          // Slot 10 min bid
	}
	auctionVars = common.AuctionVariables{
		EthBlockNum:              int64(3),
		DonationAddress:          ethCommon.HexToAddress("0x1111111111111111111111111111111111111111"),
		DefaultSlotSetBid:        defaultSlotSetBid,
		DefaultSlotSetBidSlotNum: 5,
		Outbidding:               uint16(1),
		SlotDeadline:             uint8(20),
		BootCoordinator:          ethCommon.HexToAddress("0x1111111111111111111111111111111111111111"),
		BootCoordinatorURL:       "https://boot.coordinator.io",
		ClosedAuctionSlots:       uint16(10),
		OpenAuctionSlots:         uint16(20),
	}
	if err := api.h.AddAuctionVars(&auctionVars); err != nil {
		panic(err)
	}
	// Generate Bids and add them to HistoryDB
	bids := []common.Bid{}
	// Slots 1 and 2, no bids, wins boot coordinator
	// Slot 3, below what's going to be the minimum (wins boot coordinator)
	bids = append(bids, common.Bid{
		SlotNum:     3,
		BidValue:    big.NewInt(slot3MinBid - 1),
		EthBlockNum: commonBlocks[0].Num,
		Bidder:      commonCoords[0].Bidder,
	})
	// Slot 4, valid bid (wins bidder)
	bids = append(bids, common.Bid{
		SlotNum:     4,
		BidValue:    big.NewInt(slot4MinBid),
		EthBlockNum: commonBlocks[0].Num,
		Bidder:      commonCoords[0].Bidder,
	})
	// Slot 5, no bids, wins boot coordinator
	// Slot 6, below what's going to be the minimum (wins boot coordinator)
	bids = append(bids, common.Bid{
		SlotNum:     6,
		BidValue:    big.NewInt(slot6MinBid - 1),
		EthBlockNum: commonBlocks[0].Num,
		Bidder:      commonCoords[0].Bidder,
	})
	// Slot 7, valid bid (wins bidder)
	bids = append(bids, common.Bid{
		SlotNum:     7,
		BidValue:    big.NewInt(slot7MinBid),
		EthBlockNum: commonBlocks[0].Num,
		Bidder:      commonCoords[0].Bidder,
	})
	if err = api.h.AddBids(bids); err != nil {
		panic(err)
	}
	bootForger := historydb.NextForgerAPI{
		Coordinator: historydb.CoordinatorAPI{
			Forger: auctionVars.BootCoordinator,
			URL:    auctionVars.BootCoordinatorURL,
		},
	}
	// Set next forgers: set all as boot coordinator then replace the non boot coordinators
	nextForgers := []historydb.NextForgerAPI{}
	var initBlock int64 = 140
	var deltaBlocks int64 = 40
	for i := 1; i < int(auctionVars.ClosedAuctionSlots)+2; i++ {
		fromBlock := initBlock + deltaBlocks*int64(i-1)
		bootForger.Period = historydb.Period{
			SlotNum:   int64(i),
			FromBlock: fromBlock,
			ToBlock:   fromBlock + deltaBlocks - 1,
		}
		nextForgers = append(nextForgers, bootForger)
	}
	// Set next forgers that aren't the boot coordinator
	nonBootForger := historydb.CoordinatorAPI{
		Bidder: commonCoords[0].Bidder,
		Forger: commonCoords[0].Forger,
		URL:    commonCoords[0].URL + ".new",
	}
	// Slot 4
	nextForgers[3].Coordinator = nonBootForger
	// Slot 7
	nextForgers[6].Coordinator = nonBootForger
	buckets := make([]common.BucketParams, 5)
	for i := range buckets {
		buckets[i].CeilUSD = big.NewInt(int64(i) * 10)
		buckets[i].BlockStamp = big.NewInt(int64(i) * 100)
		buckets[i].Withdrawals = big.NewInt(int64(i) * 1000)
		buckets[i].RateBlocks = big.NewInt(int64(i) * 10000)
		buckets[i].RateWithdrawals = big.NewInt(int64(i) * 100000)
		buckets[i].MaxWithdrawals = big.NewInt(int64(i) * 1000000)
	}
	// Generate SC vars and add them to HistoryDB (if needed)
	rollupVars := common.RollupVariables{
		EthBlockNum:           int64(3),
		FeeAddToken:           big.NewInt(100),
		ForgeL1L2BatchTimeout: int64(44),
		WithdrawalDelay:       uint64(3000),
		Buckets:               buckets,
		SafeMode:              false,
	}
	wdelayerVars := common.WDelayerVariables{
		WithdrawalDelay: uint64(3000),
	}
	stateAPIUpdater = stateapiupdater.NewUpdater(hdb, nodeConfig, &common.SCVariables{
		Rollup:   rollupVars,
		Auction:  auctionVars,
		WDelayer: wdelayerVars,
	}, constants)
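	// stateAPIUpdater bundles the HistoryDB handle, node config, SC variables and
	// constants needed to recompute the /state object and store it in the DB,
	// which is the task the node performs periodically in production.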
	// Generate test data, as expected to be received/sent from/to the API
	testCoords := genTestCoordinators(commonCoords)
	testBids := genTestBids(commonBlocks, testCoords, bids)
	testExits := genTestExits(commonExitTree, testTokens, commonAccounts)
	testTxs := genTestTxs(commonL1Txs, commonL2Txs, commonAccounts, testTokens, commonBlocks)
	testBatches, testFullBatches := genTestBatches(commonBlocks, commonBatches, testTxs)
	poolTxsToSend, poolTxsToReceive := genTestPoolTxs(commonPoolTxs, testTokens, commonAccounts)
	// Add balance and nonce to historyDB
	accounts := genTestAccounts(commonAccounts, testTokens)
	accUpdates := []common.AccountUpdate{}
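	// Two AccountUpdate rows are inserted per account: an initial one with nonce 0
	// and a final one with the account's actual nonce, both carrying the balance,
	// so the API can serve the latest balance/nonce for every account.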
	for i := 0; i < len(accounts); i++ {
		balance := new(big.Int)
		balance.SetString(string(*accounts[i].Balance), 10)
		idx, err := stringToIdx(string(accounts[i].Idx), "foo")
		if err != nil {
			panic(err)
		}
		accUpdates = append(accUpdates, common.AccountUpdate{
			EthBlockNum: 0,
			BatchNum:    1,
			Idx:         *idx,
			Nonce:       0,
			Balance:     balance,
		})
		accUpdates = append(accUpdates, common.AccountUpdate{
			EthBlockNum: 0,
			BatchNum:    1,
			Idx:         *idx,
			Nonce:       accounts[i].Nonce,
			Balance:     balance,
		})
	}
	if err := api.h.AddAccountUpdates(accUpdates); err != nil {
		panic(err)
	}
	tc = testCommon{
		blocks:           commonBlocks,
		tokens:           testTokens,
		batches:          testBatches,
		fullBatches:      testFullBatches,
		coordinators:     testCoords,
		accounts:         accounts,
		txs:              testTxs,
		exits:            testExits,
		poolTxsToSend:    poolTxsToSend,
		poolTxsToReceive: poolTxsToReceive,
		auths:            genTestAuths(test.GenAuths(5, _config.ChainID, _config.HermezAddress)),
		router:           router,
		bids:             testBids,
		slots: api.genTestSlots(
			20,
			commonBlocks[len(commonBlocks)-1].Num,
			testBids,
			auctionVars,
		),
		auctionVars:  auctionVars,
		rollupVars:   rollupVars,
		wdelayerVars: wdelayerVars,
		nextForgers:  nextForgers,
	}
	// Run tests
	result := m.Run()
	// Fake server
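	// Setting FAKE_SERVER=yes (e.g. `FAKE_SERVER=yes POSTGRES_PASS=<pass> go test`)
	// keeps the server running after the tests so the API can be queried manually.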
	if os.Getenv("FAKE_SERVER") == "yes" {
		for {
			log.Info("Running fake server at " + apiURL + " until ^C is received")
			time.Sleep(30 * time.Second)
		}
	}
	// Stop server
	if err := server.Shutdown(context.Background()); err != nil {
		panic(err)
	}
	if err := database.Close(); err != nil {
		panic(err)
	}
	os.Exit(result)
}

func TestTimeout(t *testing.T) {
	pass := os.Getenv("POSTGRES_PASS")
	databaseTO, err := db.ConnectSQLDB(5432, "localhost", "hermez", pass, "hermez")
	require.NoError(t, err)
	apiConnConTO := db.NewAPIConnectionController(1, 100*time.Millisecond)
	hdbTO := historydb.NewHistoryDB(databaseTO, databaseTO, apiConnConTO)
	require.NoError(t, err)
	// L2DB
	l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 24*time.Hour, apiConnConTO)
	// API
	apiGinTO := gin.Default()
	finishWait := make(chan interface{})
	startWait := make(chan interface{})
	apiGinTO.GET("/wait", func(c *gin.Context) {
		cancel, err := apiConnConTO.Acquire()
		defer cancel()
		require.NoError(t, err)
		defer apiConnConTO.Release()
		startWait <- nil
		<-finishWait
	})
	// Start server
	serverTO := &http.Server{Handler: apiGinTO}
	listener, err := net.Listen("tcp", ":4444") //nolint:gosec
	require.NoError(t, err)
	go func() {
		if err := serverTO.Serve(listener); err != nil &&
			tracerr.Unwrap(err) != http.ErrServerClosed {
			require.NoError(t, err)
		}
	}()
	_, err = NewAPI(
		true,
		true,
		apiGinTO,
		hdbTO,
		l2DBTO,
	)
	require.NoError(t, err)
	client := &http.Client{}
	httpReq, err := http.NewRequest("GET", "http://localhost:4444/tokens", nil)
	require.NoError(t, err)
	httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/wait", nil)
	require.NoError(t, err)
	// Request that will get timed out
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		// Request that will make the API busy
		_, err = client.Do(httpReqWait)
		require.NoError(t, err)
		wg.Done()
	}()
	<-startWait
	resp, err := client.Do(httpReq)
	require.NoError(t, err)
	require.Equal(t, http.StatusServiceUnavailable, resp.StatusCode)
	defer resp.Body.Close() //nolint
	body, err := ioutil.ReadAll(resp.Body)
	require.NoError(t, err)
	// Unmarshal body into return struct
	msg := &errorMsg{}
	err = json.Unmarshal(body, msg)
	require.NoError(t, err)
	// Check that the error is the expected one
	require.Equal(t, errSQLTimeout, msg.Message)
	finishWait <- nil
	// Stop server
	wg.Wait()
	require.NoError(t, serverTO.Shutdown(context.Background()))
	require.NoError(t, databaseTO.Close())
}

func doGoodReqPaginated(
	path, order string,
	iterStruct Pendinger,
	appendIter func(res interface{}),
) error {
	var next uint64
	firstIte := true
	expectedTotal := 0
	totalReceived := 0
	for {
		// Calculate fromItem
		iterPath := path
		if !firstIte {
			iterPath += "&fromItem=" + strconv.Itoa(int(next))
		}
		// Call API to get this iteration items
		iterStruct = iterStruct.New()
		if err := doGoodReq(
			"GET", iterPath+"&order="+order, nil,
			iterStruct,
		); err != nil {
			return tracerr.Wrap(err)
		}
		appendIter(iterStruct)
		// Keep iterating?
		remaining, lastID := iterStruct.GetPending()
		if remaining == 0 {
			break
		}
		if order == historydb.OrderDesc {
			next = lastID - 1
		} else {
			next = lastID + 1
		}
		// Check that the expected amount of items is consistent across iterations
		totalReceived += iterStruct.Len()
		if firstIte {
			firstIte = false
			expectedTotal = totalReceived + int(remaining)
		}
		if expectedTotal != totalReceived+int(remaining) {
			panic(fmt.Sprintf(
				"pagination error, totalReceived + remaining should be %d, but is %d",
				expectedTotal, totalReceived+int(remaining),
			))
		}
	}
	return nil
}
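
// Illustrative sketch (hypothetical names) of how an endpoint test drives
// doGoodReqPaginated, reusing the testTokensResponse sketch from above:
//
//	fetched := []historydb.TokenWithUSD{}
//	err := doGoodReqPaginated(apiURL+"tokens?limit=5", historydb.OrderDesc,
//		&testTokensResponse{}, func(res interface{}) {
//			fetched = append(fetched, res.(*testTokensResponse).Tokens...)
//		})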
func doGoodReq(method, path string, reqBody io.Reader, returnStruct interface{}) error {
	ctx := context.Background()
	client := &http.Client{}
	httpReq, err := http.NewRequest(method, path, reqBody)
	if err != nil {
		return tracerr.Wrap(err)
	}
	if reqBody != nil {
		httpReq.Header.Add("Content-Type", "application/json")
	}
	route, pathParams, err := tc.router.FindRoute(httpReq.Method, httpReq.URL)
	if err != nil {
		return tracerr.Wrap(err)
	}
	// Validate request against swagger spec
	requestValidationInput := &swagger.RequestValidationInput{
		Request:    httpReq,
		PathParams: pathParams,
		Route:      route,
	}
	if err := swagger.ValidateRequest(ctx, requestValidationInput); err != nil {
		return tracerr.Wrap(err)
	}
	// Do API call
	resp, err := client.Do(httpReq)
	if err != nil {
		return tracerr.Wrap(err)
	}
	if resp.Body == nil && returnStruct != nil {
		return tracerr.Wrap(errors.New("Nil body"))
	}
	//nolint
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return tracerr.Wrap(err)
	}
	if resp.StatusCode != 200 {
		return tracerr.Wrap(fmt.Errorf("%d response. Body: %s", resp.StatusCode, string(body)))
	}
	if returnStruct == nil {
		return nil
	}
	// Unmarshal body into return struct
	if err := json.Unmarshal(body, returnStruct); err != nil {
		log.Error("invalid json: " + string(body))
		log.Error(err)
		return tracerr.Wrap(err)
	}
	// log.Info(string(body))
	// Validate response against swagger spec
	responseValidationInput := &swagger.ResponseValidationInput{
		RequestValidationInput: requestValidationInput,
		Status:                 resp.StatusCode,
		Header:                 resp.Header,
	}
	responseValidationInput = responseValidationInput.SetBodyBytes(body)
	return swagger.ValidateResponse(ctx, responseValidationInput)
}

func doBadReq(method, path string, reqBody io.Reader, expectedResponseCode int) error {
	ctx := context.Background()
	client := &http.Client{}
	httpReq, _ := http.NewRequest(method, path, reqBody)
	route, pathParams, err := tc.router.FindRoute(httpReq.Method, httpReq.URL)
	if err != nil {
		return tracerr.Wrap(err)
	}
	// Validate request against swagger spec
	requestValidationInput := &swagger.RequestValidationInput{
		Request:    httpReq,
		PathParams: pathParams,
		Route:      route,
	}
	if err := swagger.ValidateRequest(ctx, requestValidationInput); err != nil {
		if expectedResponseCode != 400 {
			return tracerr.Wrap(err)
		}
		log.Warn("The request does not match the API spec")
	}
	// Do API call
	resp, err := client.Do(httpReq)
	if err != nil {
		return tracerr.Wrap(err)
	}
	if resp.Body == nil {
		return tracerr.Wrap(errors.New("Nil body"))
	}
	//nolint
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return tracerr.Wrap(err)
	}
	if resp.StatusCode != expectedResponseCode {
		return tracerr.Wrap(fmt.Errorf("Unexpected response code: %d. Body: %s", resp.StatusCode, string(body)))
	}
	// Validate response against swagger spec
	responseValidationInput := &swagger.ResponseValidationInput{
		RequestValidationInput: requestValidationInput,
		Status:                 resp.StatusCode,
		Header:                 resp.Header,
	}
	responseValidationInput = responseValidationInput.SetBodyBytes(body)
	return swagger.ValidateResponse(ctx, responseValidationInput)
}
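
// Illustrative sketch of a negative test using doBadReq (the concrete paths and
// payloads live in the per-endpoint test files; `limit=abc` here is just an
// assumed-invalid query parameter):
//
//	require.NoError(t, doBadReq("GET", apiURL+"tokens?limit=abc", nil, 400))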
// test helpers

func getTimestamp(blockNum int64, blocks []common.Block) time.Time {
	for i := 0; i < len(blocks); i++ {
		if blocks[i].Num == blockNum {
			return blocks[i].Timestamp
		}
	}
	panic("timestamp not found")
}

func getTokenByID(id common.TokenID, tokens []historydb.TokenWithUSD) historydb.TokenWithUSD {
	for i := 0; i < len(tokens); i++ {
		if tokens[i].TokenID == id {
			return tokens[i]
		}
	}
	panic("token not found")
}

func getTokenByIdx(idx common.Idx, tokens []historydb.TokenWithUSD, accs []common.Account) historydb.TokenWithUSD {
	for _, acc := range accs {
		if idx == acc.Idx {
			return getTokenByID(acc.TokenID, tokens)
		}
	}
	panic("token not found")
}

func getAccountByIdx(idx common.Idx, accs []common.Account) *common.Account {
	for _, acc := range accs {
		if acc.Idx == idx {
			return &acc
		}
	}
	panic("account not found")
}

func getBlockByNum(ethBlockNum int64, blocks []common.Block) common.Block {
	for _, b := range blocks {
		if b.Num == ethBlockNum {
			return b
		}
	}
	panic("block not found")
}

func getCoordinatorByBidder(bidder ethCommon.Address, coordinators []historydb.CoordinatorAPI) historydb.CoordinatorAPI {
	var coordLastUpdate historydb.CoordinatorAPI
	found := false
	for _, c := range coordinators {
		if c.Bidder == bidder {
			coordLastUpdate = c
			found = true
		}
	}
	if !found {
		panic("coordinator not found")
	}
	return coordLastUpdate
}