- Add a new command to the cli/node: `serveapi`, which allows serving the API
just by connecting to the PostgreSQL database. The mode flag must be passed
to select whether we are connecting to a synchronizer database or a
coordinator database. If `coord` is chosen as mode, the coordinator
endpoints can be activated in order to allow inserting l2txs and
authorizations into the L2DB.
Summary of the implementation details:
- New SQL table with 3 columns (plus `item_id` pk). The table only contains a
single row with `item_id` = 1. Columns:
- state: historydb.StateAPI in JSON. This is the struct that is served via
the `/state` API endpoint. The node will periodically update this struct
and store it in the DB. The API server will query it from the DB to
serve it.
- config: historydb.NodeConfig in JSON. This struct contains node
configuration parameters that the API needs to be aware of. It's updated
once every time the node starts.
- constants: historydb.Constants in JSON. This struct contains all the
hermez network constants gathered via the ethereum client by the node.
It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns
individually (see the sketch after this list).
- The HistoryDB contains all methods that query the DB and prepare objects that
will appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in
`config.APIServer`, and is a subset of `node.Config` in order to allow
reusing the same configuration file of the node if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which holds all
the information that the node needs to periodically update the StateAPI in
the DB.
- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from
`synchronizer` to `common` for convenience.
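As a rough illustration of the single-row table and the per-column get/update
methods described above, here is a minimal sketch. The table name
(`node_info`), column handling and method signatures are assumptions based on
this description, not the exact hermez-node code:

```go
package historydb

import (
	"encoding/json"

	"github.com/jmoiron/sqlx"
)

// HistoryDB wraps the PostgreSQL connection (simplified for this sketch).
type HistoryDB struct {
	db *sqlx.DB
}

// GetStateAPI reads the JSON-encoded `state` column of the single row
// (item_id = 1) and decodes it into dest.
func (hdb *HistoryDB) GetStateAPI(dest interface{}) error {
	var raw []byte
	if err := hdb.db.Get(&raw,
		"SELECT state FROM node_info WHERE item_id = 1;"); err != nil {
		return err
	}
	return json.Unmarshal(raw, dest)
}

// SetStateAPI updates only the `state` column, leaving `config` and
// `constants` untouched, so each column can be refreshed independently.
func (hdb *HistoryDB) SetStateAPI(state interface{}) error {
	raw, err := json.Marshal(state)
	if err != nil {
		return err
	}
	_, err = hdb.db.Exec(
		"UPDATE node_info SET state = $1 WHERE item_id = 1;", raw)
	return err
}
```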
Updated:
batchbuilder
common
coordinator
db/statedb
eth
log
node
priceupdater
prover
synchronizer
test/*
txprocessor
txselector
Pending (once
https://github.com/hermeznetwork/hermez-node/tree/feature/serveapicli is
merged to master):
Update golangci-lint version to v1.37.1
api
apitypes
cli
config
db/historydb
db/l2db
coordinator:
- Add config `ForgeDelay`: ForgeDelay is the delay after which a batch is
forged if the slot is already committed. If set to 0s, the coordinator
will continuously forge at the maximum rate.
- Add config `ForgeNoTxsDelay`: ForgeNoTxsDelay is the delay after which a
batch is forged, even if there are no txs to forge, if the slot is already
committed. If set to 0s, the coordinator will continuously forge even if
the batches are empty.
- Add config `GasPriceIncPerc`: GasPriceIncPerc is the percentage increase
applied to the gas price suggested by the ethereum node when setting the
gas price of an ethereum transaction (see the sketch after this list)
- Remove unused configuration parameters `CallGasLimit` and `GasPriceDiv`
- Forge always, regardless of the configured forge delay, when the current
slot is not yet committed and we are the winner of the slot
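A minimal sketch of the shape of these config parameters, and of how
`GasPriceIncPerc` could be applied to the suggested gas price. Field names
follow the list above; the types and the helper method are assumptions, not
the actual hermez-node config code:

```go
package config

import (
	"math/big"
	"time"
)

// Coordinator holds the new forging-related parameters described above.
type Coordinator struct {
	ForgeDelay      time.Duration // 0s means forge continuously at max rate
	ForgeNoTxsDelay time.Duration // delay before forging an empty batch
	GasPriceIncPerc int64         // % increase over the suggested gas price
}

// AdjustedGasPrice applies GasPriceIncPerc on top of the gas price
// suggested by the ethereum node.
func (c *Coordinator) AdjustedGasPrice(suggested *big.Int) *big.Int {
	inc := new(big.Int).Mul(suggested, big.NewInt(c.GasPriceIncPerc))
	inc.Div(inc, big.NewInt(100))
	return new(big.Int).Add(suggested, inc)
}
```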
synchronizer:
- Log at warning (instead of error) level when there's a reorg and querying
events by block hash returns "unknown block".
- cli / node
- Update handler of SIGINT so that after 3 SIGINTs, the process terminates
unconditionally
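A minimal sketch of the "3 SIGINTs, then terminate unconditionally" behaviour
using only the standard library; the `stopNode` callback is a placeholder, not
the actual node API:

```go
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// handleInterrupts triggers a graceful stop on the first SIGINT and
// terminates the process unconditionally on the third one.
func handleInterrupts(stopNode func()) {
	sigint := make(chan os.Signal, 1)
	signal.Notify(sigint, syscall.SIGINT)
	go func() {
		count := 0
		for range sigint {
			count++
			switch {
			case count == 1:
				go stopNode() // graceful shutdown
			case count >= 3:
				log.Println("forcing exit after 3 SIGINT")
				os.Exit(1)
			}
		}
	}()
}
```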
- coordinator
- Store stats without pointer
- In all functions that send a variable via channel, check for context done
to avoid deadlock (due to no process reading from the channel, which has
no queue) when the node is stopped.
- Abstract `canForge` so that it can be used outside of the `Coordinator`
- In `canForge` check the blockNumber in current and next slot.
- Update tests due to smart contract changes in slot handling, and minimum
bid defaults
- TxManager
- Add consts, vars and stats to allow evaluating `canForge`
- Add `canForge` method (not used yet)
- Store batch and nonces status (last success and last pending)
- Track nonces internally instead of relying on the ethereum node (this
is required to work with ganache when there are pending txs)
- Handle the (common) case of the receipt not being found after the tx
is sent.
- Don't start the main loop until we get an initial message of the stats
and vars (so that in the loop the stats and vars are set to
synchronizer values)
- When a tx fails, check and discard all the failed transactions before
sending the message to stop the pipeline. This avoids sending consecutive
stop-the-pipeline messages when multiple txs are detected to have failed
in a row. Also, future txs of the same pipeline that come after a
discarded tx are discarded as well, and their nonces reused.
- Robust handling of nonces (see the sketch after this list):
- If geth returns nonce too low, increase it
- If geth returns nonce too high, decrease it
- If geth returns underpriced, increase the gas price
- If geth returns replacement underpriced, increase the gas price
- Add support for resending transactions after a timeout
- Store `BatchInfos` in a queue
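The nonce and gas-price adjustments listed above ("Robust handling of nonces")
might look roughly like this sketch. The matched error strings are the ones
geth commonly returns; the function shape and the way the trackers are stored
are assumptions for illustration:

```go
package coordinator

import (
	"math/big"
	"strings"
)

// adjustAfterSendError updates the internally tracked nonce or gas price
// depending on the error returned when sending an ethereum transaction.
func adjustAfterSendError(err error, nonce *uint64, gasPrice *big.Int, incPerc int64) {
	msg := strings.ToLower(err.Error())
	switch {
	case strings.Contains(msg, "nonce too low"):
		*nonce++ // our tracked nonce lags behind the network
	case strings.Contains(msg, "nonce too high"):
		*nonce-- // we ran ahead, e.g. after discarding failed txs
	case strings.Contains(msg, "underpriced"):
		// covers both "transaction underpriced" and
		// "replacement transaction underpriced": bump the gas price
		inc := new(big.Int).Mul(gasPrice, big.NewInt(incPerc))
		gasPrice.Add(gasPrice, inc.Div(inc, big.NewInt(100)))
	}
}
```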
- Pipeline
- When an error is found, stop forging batches and send a message to the
coordinator to stop the pipeline with information of the failed batch
number, so that on a restart, non-failed batches are not repeated.
- When doing a reset of the stateDB, if possible reset from the local
checkpoint instead of resetting from the synchronizer. This allows
resetting from a batch that is valid but not yet sent / synced.
- Every time a pipeline is started, assign it a number from a counter. This
allows the TxManager to ignore batches from stopped pipelines, via a
message sent by the coordinator.
- Avoid forging when we haven't reached the rollup genesis block number.
- Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the
number of blocks of delay to wait before starting the pipeline when we
reach a slot in which we can forge.
- When detecting a reorg, only reset the pipeline if the batch from which
the pipeline started changed and wasn't sent by us.
- Add config parameter `ScheduleBatchBlocksAheadCheck`:
ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the
forger address is checked to be allowed to forge (apart from checking the
next block), used to decide when to stop scheduling new batches (by
stopping the pipeline). For example, if we are at block 10 and
ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge,
the pipeline will be stopped if we can't forge at block 15. This value
should be the expected number of blocks it takes between scheduling a
batch and having it mined.
- Add config parameter `SendBatchBlocksMarginCheck`:
SendBatchBlocksMarginCheck is the number of margin blocks ahead in which
the coordinator is also checked to be allowed to forge, apart from the
next block; used to decide when to stop sending batches to the smart
contract. For example, if we are at block 10 and
SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the
batch will be discarded if we can't forge at block 15.
- Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout
after which a non-mined ethereum transaction will be resent (reusing the
nonce) with a newly calculated gas price
- Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price
allowed for ethereum transactions
- Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces
of pending transactions for new replacement transactions. This is useful
for testing with Ganache.
- Extend BatchInfo with more useful information for debugging
- eth / ethereum client
- Add the necessary methods to create the auth object for transactions
manually, so that we can set the nonce, gas price, gas limit, etc. (see
the sketch below)
- Update `RollupForgeBatch` to take an auth object as input (so that the
coordinator can set parameters manually)
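For reference, building such an auth object with go-ethereum and overriding
the values manually could look like the following sketch (the function name
and parameters are placeholders, not the hermez-node `eth` package API):

```go
package main

import (
	"crypto/ecdsa"
	"math/big"

	"github.com/ethereum/go-ethereum/accounts/abi/bind"
)

// newManualAuth returns a *bind.TransactOpts with nonce, gas price and gas
// limit set explicitly instead of being queried from the ethereum node.
func newManualAuth(key *ecdsa.PrivateKey, chainID, nonce, gasPrice *big.Int,
	gasLimit uint64) (*bind.TransactOpts, error) {
	auth, err := bind.NewKeyedTransactorWithChainID(key, chainID)
	if err != nil {
		return nil, err
	}
	auth.Nonce = nonce       // managed by the TxManager, not the node
	auth.GasPrice = gasPrice // e.g. suggested price + GasPriceIncPerc
	auth.GasLimit = gasLimit
	return auth, nil
}
```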
- synchronizer
- In stats, add `NextSlot`
- In stats, store full last batch instead of just last batch number
- Instead of calculating a nextSlot from scratch every time, update the
current struct (only updating the forger info if we are Synced)
- After every processed batch, check that the calculated StateDB MTRoot
matches the StateRoot found in the forgeBatch event.
- kvdb
- Fix path in Last when doing `setNew`
- Only close if db != nil, and after closing, always set db to nil
- This will avoid a panic in the case where the db is closed but
there's an error soon after, and a future call tries to close
again. This is because pebble.Close() will panic if the db is
already closed.
- Avoid calling pebble methods when the Storage interface already
implements that method (like Close).
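A minimal sketch of the double-close protection described in the kvdb items
above; the struct and interface are simplified stand-ins for the actual kvdb
types:

```go
package kvdb

// Storage is a simplified stand-in for the interface wrapping pebble.
type Storage interface {
	Close() error
}

// KVDB is a simplified stand-in for the actual kvdb type.
type KVDB struct {
	db Storage
}

// Close is safe to call multiple times: pebble.Close() panics if the DB is
// already closed, so the handle is set to nil after the first close and
// subsequent calls become no-ops.
func (k *KVDB) Close() error {
	if k.db == nil {
		return nil
	}
	err := k.db.Close()
	k.db = nil
	return err
}
```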
- statedb
- In test, avoid calling KVDB method if the same method is available for
the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
- In *EventByBlock methods, take blockHash as input argument and use it
when querying the event logs. Previously the blockHash was taken from the
logs results *only if* there was any log. This caused the following
issue: if there were no logs, it was not possible to know whether the
result was from the expected block or an uncle block! By querying logs by
blockHash we make sure that even if there are no logs, they are from the
right block.
- Note that now the function can be called with either a blockNum or a
blockHash, but not both at the same time.
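With go-ethereum, querying logs pinned to a block hash looks roughly like the
sketch below; the function name and arguments are illustrative, not the actual
`eth` package methods:

```go
package main

import (
	"context"

	ethereum "github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// eventsByBlockHash queries the logs of a contract pinned to an exact block
// hash, so an empty result still refers unambiguously to that block and not
// to an uncle. FilterQuery rejects setting BlockHash together with a
// FromBlock/ToBlock range, hence "blockNum or blockHash, but not both".
func eventsByBlockHash(ctx context.Context, client *ethclient.Client,
	contract common.Address, blockHash common.Hash) ([]types.Log, error) {
	query := ethereum.FilterQuery{
		BlockHash: &blockHash,
		Addresses: []common.Address{contract},
	}
	return client.FilterLogs(ctx, query)
}
```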
- sync
- If there's an error during the call to Sync, call resetState, which
internally resets the stateDB to avoid stale checkpoints (and a
corresponding invalid increase in the StateDB batchNum).
- During a Sync, after every processed batch, make sure that the StateDB
currentBatch corresponds to the batchNum in the smart contract
log/event.
- Close StateDB when stopping the node
- Lock the StateDB when doing checkpoints to avoid multiple instances
opening the pebble DB at the same time.
The smart contracts were updated at some point and there have been some changes
in slot calculation. Update these values in the synchronizer and the test
auction smart contract implementation.
When scheduling an L1Batch, make sure the previous L1Batch has been
synchronized. Otherwise, an L1Batch will be forged that may not contain all
the L1UserTxs that are supposed to be included.
Previously the coordinator was erroneously using Slot.BatchesLen to determine
when anyone can forge. The correct behaviour is implemented with the boolean
flag `ForgerCommitment`, which is set to true only when there's a batch before
the deadline within a slot.
Delete Slot.BatchesLen, and the synchronization code of this value from the
Synchronizer, as it is no longer needed.
Previously the Synchronizer required the initial variables of the smart
contracts to be passed as a configuration parameter (that the node took from
the configuration file). The same applied to the blockNumber.
The last update of the smart contracts introduced events for each smart
contract constructor (initializer), which allows querying the initial variables
as well as the initial block number for each smart contract.
Now the synchronizer uses this information, and thus the initial variables and
the starting block numbers have been removed from the configuration.
For each tx, move the logic of setting the Type and TxID to separate functions,
so that they can be called when necessary.
In the synchronizer, set all the required fields using `SetID` and `SetType`
for l2txs when needed. This is necessary because `ProcessTxs()` works with
`common.PoolL2Tx`, which misses some fields from `common.L2Tx`, and because
`ProcessTxs()` needs the Type to be set, but at the same time `ProcessTxs()`
sets the Nonce, which is required for the `TxID`.
- API:
- Replace `emergencyModeStartingTime` with `emergencyModeStartingBlock`
- Synchronizer:
- Track emergency mode starting block
- cli/node
- Add working coordinator config
- coordinator:
- Retry handler for synchronizer stats in case of error (instead of
waiting for the next block to try again)
- On init, trigger an initial call to the handler for synced block
before waiting for the synchronizer, to force the coordinator to start
its logic even if there's no new block right after the node has been
started (very useful for running in testnet where the frequency of
blocks is variable)
- Merge Msg for synced block and updated vars into one: `MsgSyncBlock`.
- API
- When updating network info, handle the cases where no batches exist and
where no forgers exist
- cli/node
- Update `cfg.buidler.toml` config file to a working version
- common
- Add new smart contract structs and extend some existing ones to
reflect updates regarding events from the smart contracts
- SQL
- Add new tables and extend existing ones to reflect updates regarding
events from the smart contracts
- db/historydb
- Add functions to insert new smart contract events data
- Fix unclosed rows that led to inconsistent sql driver state (replace
NamedQuery by NamedExec). This fixes the error:
`pq: unexpected Parse response 'C'`
- db/l2db
- Close rows after usage
- eth
- In Rollup event, introduce a new UpdateBucketsParameter when there's a
SafeMode event, with `SafeMode = true`
- synchronizer
- synchronize new events
- don't call `auction.CanForge` before the genesisBlock, to avoid
getting a revert.
- Common
- Add `IdxNonce` type used to track nonces in accounts to invalidate
l2txs in the pool
- Config
- Update the coordinator config with all the new configuration parameters
used in the coordinator
- Coordinator
- Introduce the `Purger` to track how often to purge and do the job when
needed according to a configuration.
- Implement the methods to invalidate l2txs due to the l2tx selection
in batches. For now these functions are not used, in favour
of the `Purger` methods, which check ALL the l2txs in the pool.
- Call Invalidation and Purging methods of the purger both when the
node is forging (in the pipeline when starting a new batch) and when
the node is not forging (in coordinator when being notified about a
new synced block)
- L2DB:
- Implement `GetPendingUniqueFromIdxs` to get all the unique idxs from
pending transactions (used to get their nonces and then invalidate
txs); see the sketch after this list.
- Redo `CheckNonces` with a single SQL query and using `common.IdxNonce`
instead of `common.Account`
- StateDB:
- Expose GetIdx to check errors when invalidating pool txs
- Synchronizer:
- Test forged L1UserTxs processed by TxProcessor
- Improve checks of Effective values
- TxSelector:
- Expose the internal LocalStateDB in order to check account nonces in
the coordinator when not forging.
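A hedged sketch of what the `GetPendingUniqueFromIdxs` query mentioned in the
L2DB item above could look like; the table and column names (`tx_pool`,
`from_idx`, `state`) and the simplified `L2DB` struct are assumptions that may
differ from the actual schema:

```go
package l2db

import (
	"github.com/jmoiron/sqlx"

	"github.com/hermeznetwork/hermez-node/common"
)

// L2DB is simplified for this sketch.
type L2DB struct {
	db *sqlx.DB
}

// GetPendingUniqueFromIdxs returns the distinct sender idxs of all pending
// pool transactions, so their current nonces can be checked and stale txs
// invalidated.
func (l2db *L2DB) GetPendingUniqueFromIdxs() ([]common.Idx, error) {
	var idxs []common.Idx
	err := l2db.db.Select(&idxs,
		"SELECT DISTINCT from_idx FROM tx_pool WHERE state = 'pend';")
	return idxs, err
}
```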
- Common
- Move ErrTODO and ErrDone to common for usage where needed.
- Coordinator
- Move prover types to prover package
- Handle reorgs, stopping the pipeline when necessary
- Handle ethereum transaction errors by stopping the pipeline
- In case of ethereum transaction revert, check for known revert causes
(more revert causes can be added to handle more cases)
- Fix skipped transactions in TxManager confirmation logic
- Cancel and wait for provers to be ready
- Connect L2DB to:
- purge l2txs due to timeout
- mark l2txs at the different states
- Connect HistoryDB to query L1UserTxs to forge in an L1Batch
- L2DB
- Skip update functions when the input slices have no values (to avoid a
query with no values that results in an SQL error)
- StateDB
- In LocalStateDB, fix Reset when mt == nil
- Prover (new package)
- Rename the interface to Prover
- Rename the mock struct to Mock
- Extend Prover interface methods to provide everything required by the
coordinator
- Begin implementing required http client code to interact with server
proof (not tested)
- Synchronizer:
- Add LastForgeL1TxsNum to Stats
- Test/Client
- Update Auction logic to track slots in which there's no forge during
the time before the deadline (following the solidity implementation)
- Common:
- Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
- Add UpdateNetworkInfoBlock to update just block information, to be
used when the node is not yet synchronized
- Node:
- Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with
configurable time intervals (see the sketch after this list)
- Synchronizer:
- When mapping events by TxHash, use an array to support the possibility
of multiple calls of the same function happening in the same
transaction (for example, a smart contract in a single transaction
could call withdraw with delay twice, which would generate 2 withdraw
events, and 2 deposit events).
- In Stats, keep entire LastBlock instead of just the blockNum
- In Stats, add lastL1BatchBlock
- Test Stats and SCVars
- Coordinator:
- Enable writing the BatchInfo in every step of the pipeline to disk
(with JSON text files) for debugging purposes.
- Move the Pipeline functionality from the Coordinator to its own struct
(Pipeline)
- Implement shouldL1L2Batch
- In TxManager, implement logic to perform several attempts when doing
ethereum node RPC calls before considering the error. (Both for calls
to forgeBatch and transaction receipt)
- In TxManager, reorganize the flow and note the specific points in
which actions are made when err != nil
- HistoryDB:
- Implement GetLastL1BatchBlockNum: returns the blockNum of the latest
forged l1Batch, to help the coordinator decide when to forge an
L1Batch.
- EthereumClient and test.Client:
- Update EthBlockByNumber to return the last block when the passed
number is -1.
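The update loop mentioned in the Node item above could be structured as in
this sketch; the helper and its usage are assumptions, not the actual node
code:

```go
package main

import (
	"context"
	"time"
)

// runEvery calls fn at the given interval until ctx is cancelled.
// Usage (names illustrative):
//   go runEvery(ctx, metricsInterval, api.UpdateMetrics)
//   go runEvery(ctx, feeInterval, api.UpdateRecommendedFee)
func runEvery(ctx context.Context, interval time.Duration, fn func() error) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			_ = fn() // errors would be logged in the real node
		}
	}
}
```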
- API:
- Modify the constructor so that hardcoded rollup constants don't need
to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
- Update rollup constants with proper *big.Int when required
- Add BidCoordinator and Slot structs used by the HistoryDB and
Synchronizer.
- Add helper methods to AuctionConstants
- AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL
table: `default_slot_set_bid_slot_num`), which indicates the slotNum at
which the specified `DefaultSlotSetBid` starts applying.
- Config:
- Move coordinator exclusive configuration from the node config to the
coordinator config
- Coordinator:
- Reorganize the code towards having the goroutines started and stopped
from the coordinator itself instead of the node.
- Remove all stop and stopped channels, and use context.Context and
sync.WaitGroup instead (see the sketch after this list).
- Remove BatchInfo setters and assign variables directly
- In ServerProof and ServerProofPool use context instead of a stop channel.
- Use message passing to notify the coordinator about sync updates and
reorgs
- Introduce the Pipeline, which can be started and stopped by the
Coordinator
- Introduce the TxManager, which manages ethereum transactions (the
TxManager is also in charge of making the forge call to the rollup
smart contract). The TxManager keeps ethereum transactions and:
1. Waits for the transaction to be accepted
2. Waits for the transaction to be confirmed for N blocks
- In forge logic, first prepare a batch and then wait for an available
server proof to have all work ready once the proof server is ready.
- Remove the `isForgeSequence` method which was querying the smart
contract, and instead use notifications sent by the Synchronizer to
figure out if it's forging time.
- Update test (which is a minimal test to manually see if the
coordinator starts)
- HistoryDB:
- Add method to get the number of batches in a slot (used to detect when
a slot has passed the bid winner forging deadline)
- Add method to get the best bid and associated coordinator of a slot
(used to detect the forgerAddress that can forge the slot)
- General:
- Rename some instances of `currentBlock` to `lastBlock` to be more
clear.
- Node:
- Connect the API to the node and call the methods to update cached
state when the sync advances blocks.
- Call methods to update Coordinator state when the sync advances blocks
and finds reorgs.
- Synchronizer:
- Add Auction field in the Stats, which contains the current slot with
info about the highest bidder and other related info required to know who
can forge in the current block.
- Better organization of cached state:
- On Sync, update the internal cached state
- On Init or Reorg, load the state from HistoryDB into the
internal cached state.
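The context.Context / sync.WaitGroup start-stop pattern referred to in the
Coordinator item above boils down to something like this sketch; the
Coordinator fields and the loop body are placeholders:

```go
package coordinator

import (
	"context"
	"sync"
)

// Coordinator is reduced to the lifecycle-related fields for this sketch.
type Coordinator struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
	msgCh  chan interface{} // sync updates / reorgs arrive as messages
}

// Start launches the main goroutine; it exits when the context is cancelled.
func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return
			case msg := <-c.msgCh:
				_ = msg // handle sync updates, reorgs, etc.
			}
		}
	}()
}

// Stop cancels the context and waits for all goroutines to finish.
func (c *Coordinator) Stop() {
	c.cancel()
	c.wg.Wait()
}
```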
Add HashGlobalInputs for ZKInputs, compatible with the js & circom circuits
version. Compatible with hermeznetwork/commonjs at version c6a1448db5.
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawals:
- In the Synchronizer, first match a Rollup delayed withdrawal request with
the WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawal.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdrawal is done in
the WDelayer.
- Update dependency of sqlx to master
- The last release of sqlx is from October 2018, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
Previously, the synchronizer test was extending the output from til to
precalculate many fields to compare it with the synchronizer and DB output.
Since this is useful outside of the synchronizer testing, move this
functionality to til via a function that extends the output
(til.Context.FillBlocksExtra).
Also, add new functionality: calculate fee idxs dynamically by setting a user
name, and calculate collected fees.
- node:
- Extend config to add initial variables of the smart contracts used as
defaults before they are changed via events.
- In stopped channels, set size 1 so that panics are not withheld until the
node stops completely.
- common:
- In Smart Contract variables, comment:
- `WDelayerVariables.HermezRollupAddress` because it's not needed.
- `RollupVariables.Buckets` because there are no events for it, and for
now it's not used.
- historydb:
- Add functions to get and set smart contract variables.
- db:
- Add a `Rollback` function in `utils.go` to reduce the boilerplate of sql
transaction rollbacks in defers in db functions (see the sketch after
this list).
- Update `rollup_vars` and `auction_vars` (renamed from `consensus_vars`)
table, and add `wdelayer_vars` table.
- synchronizer:
- Synchronize WDelayer
- Handle SC variables properly
- test/ethclient:
- Add essential implementation of WDelayer
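The `Rollback` helper could be as simple as the following sketch; the
signature and the logging are assumptions, and the real `utils.go` may differ:

```go
package db

import (
	"database/sql"
	"errors"
	"log"

	"github.com/jmoiron/sqlx"
)

// Rollback rolls back a transaction, ignoring the sql.ErrTxDone that is
// returned when the tx was already committed, so that db functions can
// simply `defer Rollback(tx)` instead of repeating this boilerplate.
func Rollback(tx *sqlx.Tx) {
	if err := tx.Rollback(); err != nil && !errors.Is(err, sql.ErrTxDone) {
		log.Printf("tx rollback: %v", err)
	}
}
```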
- Move smart contract constants and structs for variables to
common/{ethrollup.go, ethauction.go, ethwdelayer.go}:
- This removes repeated code of the structs for variables
- Allows reusing the constants and variables from all modules without
import cycles
- Remove unused common/scvars.go
- In common.BlockData, split the data from each smart contract into a separate
field (Rollup, Auction, WDelayer). This affects the structures that til uses
as output, and HistoryDB in the AddBlockSCData.
- In Synchronizer:
- Pass the starting block of each smart contract as config, instead of
incorrectly using the genesis block found in the auction constants (which
has a very different meaning)
- Use variable structs from common instead of an internal copy
- Synchronize more stuff (resolve some TODOs)
- Fix some issues found after initial testing with ganache
- In eth:
- In auction.go: Add method to get constants
- Update README to use ganache instead of buidlerevm as local blockchain
for testing
- Update env variables and test vectors to pass the tests with the
deployment in the ganache testnet.
- Use ethereum keys derived from paths (hdwallet) in testing to avoid
hardcoding private keys, and generate the same keys from a mnemonic used
in the ganache testnet.
- Test L1UserTxs in Synchronizer
- Test Batches in Synchronizer
- Add last_idx in `TABLE batch` in HistoryDB
- Minor updates in til to satisfy blockchain constraints
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistoryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.