For L2Txs of type TransferToEthAddr & TransferToBJJ to a not-yet existing
account, the TxSelector checks whether the receiver account will be created by
the L1UserFrozenTxs (the current frozen queue, to be forged in the next
batch). In that case the L2Tx is discarded for the current batch, even if
there is an AccountCreationAuth for the account, because a L1UserTx in the
frozen queue will already create the receiver account. Discarding the L2Tx
avoids the Coordinator creating a new L1CoordinatorTx for the receiver
account, which would also be created in the next batch by the L1UserFrozenTx,
leaving the user with two different accounts for the same TokenID.
Double account creation is supported by the Hermez zkRollup specification, but
it was decided to mitigate it at the TxSelector level for these cases (see the
sketch below).
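A minimal sketch of the check, using simplified stand-in types instead of the real `common.PoolL2Tx` / `common.L1Tx` structs and a hypothetical helper name; the TransferToBJJ case would compare the BJJ key instead of the Ethereum address:

```go
package txselector

import ethCommon "github.com/ethereum/go-ethereum/common"

// Simplified stand-ins for the real common.PoolL2Tx / common.L1Tx types.
type poolL2Tx struct {
	ToEthAddr ethCommon.Address
	TokenID   uint32
}

type l1UserTx struct {
	FromEthAddr ethCommon.Address
	TokenID     uint32
}

// receiverCreatedByFrozenQueue reports whether an L1UserTx already waiting in
// the frozen queue will create the receiver account (same address and
// TokenID) in the next batch.  When it returns true, the L2Tx is discarded
// for the current batch instead of adding an L1CoordinatorTx, so the user
// does not end up with two accounts for the same TokenID.
func receiverCreatedByFrozenQueue(tx poolL2Tx, frozenQueue []l1UserTx) bool {
	for _, l1tx := range frozenQueue {
		if l1tx.FromEthAddr == tx.ToEthAddr && l1tx.TokenID == tx.TokenID {
			return true
		}
	}
	return false
}
```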
- Add new command to the cli/node: `serveapi`, which allows serving the API just
  by connecting to the PostgreSQL database. The mode flag must be passed in
  order to select whether we are connecting to a synchronizer database or a
  coordinator database. If `coord` is chosen as mode, the coordinator
  endpoints can be activated in order to allow inserting l2txs and
  authorizations into the L2DB.
Summary of the implementation details
- New SQL table with 3 columns (plus `item_id` pk). The table only contains a
single row with `item_id` = 1. Columns:
  - state: historydb.StateAPI in JSON. This is the struct that is served via
    the `/state` API endpoint. The node will periodically update this struct
    and store it in the DB. The API server will query it from the DB to
    serve it.
- config: historydb.NodeConfig in JSON. This struct contains node
configuration parameters that the API needs to be aware of. It's updated
once every time the node starts.
- constants: historydb.Constants in JSON. This struct contains all the
hermez network constants gathered via the ethereum client by the node.
It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns
  individually (see the sketch after this list).
- The HistoryDB contains all methods that query the DB and prepare objects that
will appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in
  `config.APIServer`, and is a subset of `node.Config` in order to allow
  reusing the same configuration file of the node if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which contains all
  the necessary information for the node to periodically update the StateAPI
  in the DB.
- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from
  `synchronizer` to `common` for convenience.
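A rough sketch of the single-row table access described above, with a hypothetical table name (`node_info`), illustrative column types and method names; the real HistoryDB methods and the `StateAPI` struct differ:

```go
package historydb

import (
	"encoding/json"

	"github.com/jmoiron/sqlx"
)

// Illustrative single-row table (names and types are assumptions):
//
//   CREATE TABLE node_info (
//       item_id   SERIAL PRIMARY KEY, -- always 1, the only row
//       state     BYTEA,              -- historydb.StateAPI as JSON
//       config    BYTEA,              -- historydb.NodeConfig as JSON
//       constants BYTEA               -- historydb.Constants as JSON
//   );

// stateAPI stands in for the real historydb.StateAPI struct.
type stateAPI struct {
	// fields served at the /state endpoint
}

// getStateAPI reads the state column; the API server calls something like
// this to serve /state.
func getStateAPI(db *sqlx.DB) (*stateAPI, error) {
	var raw []byte
	if err := db.QueryRow(
		`SELECT state FROM node_info WHERE item_id = 1;`).Scan(&raw); err != nil {
		return nil, err
	}
	var s stateAPI
	if err := json.Unmarshal(raw, &s); err != nil {
		return nil, err
	}
	return &s, nil
}

// setStateAPI overwrites the state column; the node calls something like
// this periodically.
func setStateAPI(db *sqlx.DB, s *stateAPI) error {
	raw, err := json.Marshal(s)
	if err != nil {
		return err
	}
	_, err = db.Exec(`UPDATE node_info SET state = $1 WHERE item_id = 1;`, raw)
	return err
}
```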
Update to the latest Poseidon (go-iden3-crypto) & go-merkletree versions,
update the affected test vectors, and check ZKInputs compatibility with the
latest version of the circuits.
The new Poseidon version includes the changes of the reference implementation
from 26ddaa91db, and is compatible with the corresponding update in circomlib
(cf853c1cc9).
Also, in the cli commands `wipesql` and `discard`, always rebuild the current
checkpoint of the stateDBs to make sure it's in a consistent non-corrupted
state and do a reset afterwards. These commands will allow reverting the
StateDB to a valid and consistent state in case a crash leaves the StateDB in a
corrupted state.
Update golangci-lint version to v1.37.1
Updated:
batchbuilder
common
coordinator
db/statedb
eth
log
node
priceupdater
prover
synchronizer
test/*
txprocessor
txselector
Pending (once https://github.com/hermeznetwork/hermez-node/tree/feature/serveapicli is merged to master):
api
apitypes
cli
config
db/historydb
db/l2db
In the metrics query, `estimatedTimeToForgeL1` was probably not being read
from the SQL query because the identifier contains upper case letters, and in
PostgreSQL identifiers that are not double quoted are folded to lower case:
https://stackoverflow.com/a/20880247 (see the sketch below).
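For illustration (the real metrics query is different), a `sqlx` scan into a mixed-case column only works if the alias is double quoted; otherwise PostgreSQL folds it to `estimatedtimetoforgel1` and the scan never matches the struct field:

```go
package historydb

import "github.com/jmoiron/sqlx"

type metrics struct {
	EstimatedTimeToForgeL1 float64 `db:"estimatedTimeToForgeL1"`
}

func getMetrics(db *sqlx.DB) (*metrics, error) {
	var m metrics
	// AS "estimatedTimeToForgeL1" (double quoted) preserves the mixed-case
	// identifier; an unquoted alias would come back lower-cased and would
	// not match the db:"estimatedTimeToForgeL1" tag.
	err := db.Get(&m, `SELECT 42.0::float8 AS "estimatedTimeToForgeL1";`)
	if err != nil {
		return nil, err
	}
	return &m, nil
}
```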
Previously the code was only querying the unforged L1UserTxs of a particular
queue, but this was incorrect because there are always two non-forged queues:
the frozen one and the open one.
Replace it with a query for all the unforged L1UserTxs via a new HistoryDB
method (sketched below).
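A sketch of such a query, assuming (illustratively) a `tx` table where L1UserTxs are the rows with `is_l1 = TRUE AND user_origin = TRUE` and unforged txs have a NULL `batch_num`; the real schema, column names and scan target differ:

```go
package historydb

import "github.com/jmoiron/sqlx"

// l1TxRow is a simplified stand-in for the real L1Tx scan target.
type l1TxRow struct {
	ID          []byte `db:"id"`
	FromEthAddr []byte `db:"from_eth_addr"`
	TokenID     uint32 `db:"token_id"`
}

// getAllUnforgedL1UserTxs selects every not-yet-forged L1UserTx, covering
// both the frozen queue and the currently open queue, instead of filtering
// by a single queue.
func getAllUnforgedL1UserTxs(db *sqlx.DB) ([]l1TxRow, error) {
	var txs []l1TxRow
	err := db.Select(&txs, `
		SELECT id, from_eth_addr, token_id FROM tx
		WHERE is_l1 = TRUE AND user_origin = TRUE AND batch_num IS NULL;`)
	return txs, err
}
```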
- Add config parameter `Coordinator.L2DB.MinPriceUSD` which allows rejecting
txs to the pool that have a fee lower than the minimum.
- In pool tx insertion, check the number of pending txs atomically with the
  insertion to avoid data races leading to more than MaxTxs pending txs in
  the pool (a sketch follows this list).
- In tx_pool, add a column called `external_delete` that can be set to true
externally. Regularly, the coordinator will delete all pending txs with this
column set to true. The interval for this action is set via the new config
parameter `Coordinator.PurgeByExtDelInterval`.
- In tx_pool, add a column for the client IP that sent the transaction. The
  API fills this value using the ClientIP method from gin.Context, which
  should work even behind a reverse proxy.
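A sketch of the atomic MaxTxs check mentioned above, with illustrative table/column names: the count and the insert happen in a single statement, so a separate pre-check cannot race with a concurrent insertion (the real implementation may additionally use an explicit transaction):

```go
package l2db

import (
	"errors"

	"github.com/jmoiron/sqlx"
)

var errPoolFull = errors.New("tx pool is full")

// addTxIfRoom inserts a pool tx only if the number of pending txs is still
// below maxTxs, evaluating the count in the same statement that inserts.
func addTxIfRoom(db *sqlx.DB, txID, txBlob []byte, maxTxs uint32) error {
	res, err := db.Exec(`
		INSERT INTO tx_pool (tx_id, tx, state)
		SELECT $1, $2, 'pend'
		WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = 'pend') < $3;
	`, txID, txBlob, maxTxs)
	if err != nil {
		return err
	}
	n, err := res.RowsAffected()
	if err != nil {
		return err
	}
	if n == 0 {
		return errPoolFull
	}
	return nil
}
```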
- cli / node
- Update handler of SIGINT so that after 3 SIGINTs, the process terminates
unconditionally
- coordinator
- Store stats without pointer
- In all functions that send a variable via channel, check for context done
to avoid deadlock (due to no process reading from the channel, which has
no queue) when the node is stopped.
- Abstract `canForge` so that it can be used outside of the `Coordinator`
- In `canForge` check the blockNumber in current and next slot.
- Update tests due to smart contract changes in slot handling, and minimum
bid defaults
- TxManager
- Add consts, vars and stats to allow evaluating `canForge`
- Add `canForge` method (not used yet)
- Store batch and nonces status (last success and last pending)
- Track nonces internally instead of relying on the ethereum node (this
is required to work with ganache when there are pending txs)
- Handle the (common) case of the receipt not being found after the tx
is sent.
    - Don't start the main loop until we get an initial message for the stats
      and vars (so that in the loop the stats and vars are set to
      synchronizer values)
    - When a tx fails, check and discard all the failed transactions before
      sending the message to stop the pipeline. This avoids sending
      consecutive stop-the-pipeline messages when multiple txs are detected
      to have failed consecutively. Also, txs of the same pipeline that come
      after a discarded tx are discarded, and their nonces reused.
    - Robust handling of nonces (see the sketch after this list):
      - If geth returns nonce too low, increase it
      - If geth returns nonce too high, decrease it
      - If geth returns underpriced, increase the gas price
      - If geth returns replacement underpriced, increase the gas price
    - Add support for resending transactions after a timeout
- Store `BatchInfos` in a queue
- Pipeline
    - When an error is found, stop forging batches and send a message to the
      coordinator to stop the pipeline with information of the failed batch
      number, so that on a restart, non-failed batches are not repeated.
- When doing a reset of the stateDB, if possible reset from the local
checkpoint instead of resetting from the synchronizer. This allows
resetting from a batch that is valid but not yet sent / synced.
- Every time a pipeline is started, assign it a number from a counter. This
allows the TxManager to ignore batches from stopped pipelines, via a
message sent by the coordinator.
- Avoid forging when we haven't reached the rollup genesis block number.
- Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the
number of blocks of delay to wait before starting the pipeline when we
reach a slot in which we can forge.
- When detecting a reorg, only reset the pipeline if the batch from which
the pipeline started changed and wasn't sent by us.
  - Add config parameter `ScheduleBatchBlocksAheadCheck`:
    ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the
    forger address is checked to be allowed to forge (apart from checking the
    next block), used to decide when to stop scheduling new batches (by
    stopping the pipeline). For example, if we are at block 10 and
    ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge,
    the pipeline will be stopped if we can't forge at block 15. This value
    should be the expected number of blocks it takes between scheduling a
    batch and having it mined.
  - Add config parameter `SendBatchBlocksMarginCheck`:
    SendBatchBlocksMarginCheck is the number of margin blocks ahead in which
    the coordinator is also checked to be allowed to forge, apart from the
    next block; used to decide when to stop sending batches to the smart
    contract. For example, if we are at block 10 and
    SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the
    batch will be discarded if we can't forge at block 15.
- Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout
after which a non-mined ethereum transaction will be resent (reusing the
nonce) with a newly calculated gas price
- Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price
allowed for ethereum transactions
- Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces
of pending transactions for new replacement transactions. This is useful
for testing with Ganache.
- Extend BatchInfo with more useful information for debugging
- eth / ethereum client
- Add necessary methods to create the auth object for transactions manually
so that we can set the nonce, gas price, gas limit, etc manually
- Update `RollupForgeBatch` to take an auth object as input (so that the
coordinator can set parameters manually)
- synchronizer
- In stats, add `NextSlot`
- In stats, store full last batch instead of just last batch number
- Instead of calculating a nextSlot from scratch every time, update the
current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot
    matches the StateRoot found in the forgeBatch event.
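A sketch of the nonce and gas-price adjustment rules from the "Robust handling of nonces" item above. Matching on the error strings returned by geth over RPC is illustrative; the exact messages can vary between geth versions:

```go
package coordinator

import (
	"math/big"
	"strings"
)

// adjustAfterSendError applies the rules listed above to the locally tracked
// nonce and gas price after an eth_sendRawTransaction error.
func adjustAfterSendError(err error, nonce uint64,
	gasPrice *big.Int) (uint64, *big.Int) {
	msg := err.Error()
	switch {
	case strings.Contains(msg, "nonce too low"):
		nonce++ // our tracked nonce lags behind the node: bump it
	case strings.Contains(msg, "nonce too high"):
		nonce-- // we got ahead of the node: step back
	case strings.Contains(msg, "underpriced"):
		// covers both "transaction underpriced" and "replacement transaction
		// underpriced": raise the gas price (here +10%) and retry, reusing
		// the same nonce
		gasPrice = new(big.Int).Div(
			new(big.Int).Mul(gasPrice, big.NewInt(110)), big.NewInt(100))
	}
	return nonce, gasPrice
}
```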
- KVDB/StateDB
- Pass config parameters in a Config type instead of using many
arguments in constructor.
- Add new parameter `NoLast` which disables having an opened DB with a
checkpoint to the last batchNum for thread-safe reads. Last will be
disabled in the StateDB used by the TxSelector and BatchBuilder.
- Add new parameter `NoGapsCheck` which skips checking gaps in the list
of checkpoints and returning errors if there are gaps. Gaps check
will be disabled in the StateDB used by the TxSelector and
BatchBuilder, because we expect to have gaps when there are multiple
coordinators forging (slots not forged by our coordinator will leave
gaps).
- kvdb
- Fix path in Last when doing `setNew`
- Only close if db != nil, and after closing, always set db to nil
- This will avoid a panic in the case where the db is closed but
there's an error soon after, and a future call tries to close
again. This is because pebble.Close() will panic if the db is
already closed.
  - Avoid calling pebble methods when the Storage interface already
    implements that method (like Close).
- statedb
- In test, avoid calling KVDB method if the same method is available for
the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In *EventByBlock methods, take blockHash as an input argument and use it
    when querying the event logs. Previously the blockHash was taken from the
    log results only if there was at least one log, which caused the
    following issue: if there were no logs, it was not possible to know
    whether the result came from the expected block or from an uncle block!
    By querying logs by blockHash we make sure that even if there are no
    logs, they are from the right block (see the sketch after this list).
    - Note that now the function can be called with either a blockNum or a
      blockHash, but not both at the same time.
- sync
  - If there's an error during the call to Sync, call resetState, which
    internally resets the stateDB to avoid stale checkpoints (and a
    corresponding invalid increase in the StateDB batchNum).
  - During a Sync, after every processed batch, make sure that the StateDB
    currentBatch corresponds to the batchNum in the smart contract
    log/event.
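A sketch of the log-filtering change in the eth item above, using go-ethereum's `FilterQuery`, which accepts either a block range or a block hash but not both:

```go
package eth

import (
	"context"
	"math/big"

	ethereum "github.com/ethereum/go-ethereum"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// filterLogsByBlock queries the event logs either by block number or by
// block hash, never both (go-ethereum rejects a FilterQuery that sets
// BlockHash together with FromBlock/ToBlock).  Querying by hash guarantees
// that an empty result still refers to the intended block and not an uncle.
func filterLogsByBlock(ctx context.Context, client *ethclient.Client,
	address ethCommon.Address, blockNum int64,
	blockHash *ethCommon.Hash) ([]types.Log, error) {
	query := ethereum.FilterQuery{Addresses: []ethCommon.Address{address}}
	if blockHash != nil {
		query.BlockHash = blockHash
	} else {
		query.FromBlock = big.NewInt(blockNum)
		query.ToBlock = big.NewInt(blockNum)
	}
	return client.FilterLogs(ctx, query)
}
```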
The Last db view is an opened pebble DB which always contains a checkpoint of
the last batch. Methods to access this last batch are thread safe, so views of
the last checkpoint can be taken anywhere with a consistent view of the state.
PoolL2Tx.Info contains information about the status & state of the
transaction. For example, if the Tx has not been selected in the last batch
due to not enough Balance in the sender account, this reason will appear in
this field.
This will help the client (wallet, batch explorer, etc.) understand why an
L2Tx has not been selected in the forged batches.
- Close StateDB when stopping the node
- Lock the StateDB when doing checkpoints to avoid multiple instances opening
  the pebble DB at the same time.
L2Tx.TokenID is not part of the data obtained by the Synchronizer from the
blockchain; it is set by the TxProcessor when processing the transactions in
the StateDB.
- Add tests connecting TxSelector, BatchBuilder, ZKInputs, ProofServer
- Added a test that verifies the signatures of the PoolL2Txs stored in the
  L2DB pool, to check that the parameters of each PoolL2Tx match the original
  parameters signed before inserting them into the L2DB (see the sketch
  below).
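A minimal sketch of such a round-trip check using go-iden3-crypto's babyjub signatures; the hashed values below stand in for the real PoolL2Tx hash-to-sign (which commits to all the tx parameters and the chainID), so this is illustrative rather than the actual test:

```go
package l2db_test

import (
	"math/big"
	"testing"

	"github.com/iden3/go-iden3-crypto/babyjub"
	"github.com/iden3/go-iden3-crypto/poseidon"
)

func TestPoolTxSignatureRoundTrip(t *testing.T) {
	sk := babyjub.NewRandPrivKey()

	// Stand-in for the tx parameters that are committed to by the signature.
	msg, err := poseidon.Hash([]*big.Int{big.NewInt(4444), big.NewInt(1)})
	if err != nil {
		t.Fatal(err)
	}

	sig := sk.SignPoseidon(msg)

	// Verifying against the parameters read back from the L2DB proves they
	// were not altered after the user signed them.
	if !sk.Public().VerifyPoseidon(msg, sig) {
		t.Fatal("signature does not verify against the stored parameters")
	}
}
```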