Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
  - Comment out the state / status method until the requirements of this
    method are properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistoryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
  - Move testL1CoordinatorTxs and testL2Txs out of BatchData in Context into
    a separate struct
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
  - In all L1Txs, if LoadAmount or Amount is not used, set it to 0, so that
    no *big.Int is left nil (a minimal sketch follows this list).
  - In L1UserTx, don't set BatchNum, because when L1UserTxs are created
    and obtained by the synchronizer, the BatchNum is not known yet (it's
    the synchronizer's job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
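The "no nil *big.Int" rule above can be illustrated with a minimal Go sketch. The `L1Tx` struct below is a stand-in carrying only the two fields discussed; the real `common.L1Tx` has many more fields, so this is an illustration of the convention rather than the actual til code.

```go
package main

import (
	"fmt"
	"math/big"
)

// L1Tx is a minimal stand-in for the fields discussed above; the real
// common.L1Tx struct has many more fields.
type L1Tx struct {
	LoadAmount *big.Int
	Amount     *big.Int
}

// normalize defaults unused amounts to zero so that no *big.Int is left nil,
// as described for the til-generated L1Txs above.
func (tx *L1Tx) normalize() {
	if tx.LoadAmount == nil {
		tx.LoadAmount = big.NewInt(0)
	}
	if tx.Amount == nil {
		tx.Amount = big.NewInt(0)
	}
}

func main() {
	tx := &L1Tx{Amount: big.NewInt(10)}
	tx.normalize()
	fmt.Println(tx.LoadAmount, tx.Amount) // prints: 0 10
}
```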
Update coordinator to work better under real net
- cli / node
  - Update the SIGINT handler so that after 3 SIGINTs, the process terminates
    unconditionally (see the first sketch after this list)
- coordinator
- Store stats without pointer
  - In all functions that send a variable via a channel, check for context
    cancellation to avoid a deadlock when the node is stopped (nothing may be
    reading from the channel, which has no buffer).
- Abstract `canForge` so that it can be used outside of the `Coordinator`
  - In `canForge`, check the blockNumber in the current and the next slot.
- Update tests due to smart contract changes in slot handling, and minimum
bid defaults
- TxManager
- Add consts, vars and stats to allow evaluating `canForge`
- Add `canForge` method (not used yet)
- Store batch and nonces status (last success and last pending)
- Track nonces internally instead of relying on the ethereum node (this
is required to work with ganache when there are pending txs)
- Handle the (common) case of the receipt not being found after the tx
is sent.
    - Don't start the main loop until we get an initial message of the stats
      and vars (so that in the loop the stats and vars are set to
      synchronizer values)
    - When a tx fails, check and discard all the failed transactions before
      sending the message to stop the pipeline. This avoids sending
      consecutive stop-the-pipeline messages when multiple txs fail in a row.
      Also, txs of the same pipeline that come after a discarded tx are
      discarded, and their nonces reused.
    - Robust handling of nonces (see the second sketch after this list):
      - If geth returns nonce too low, increase it
      - If geth returns nonce too high, decrease it
      - If geth returns underpriced, increase the gas price
      - If geth returns replacement underpriced, increase the gas price
- Add support for resending transactions after a timeout
- Store `BatchInfos` in a queue
- Pipeline
    - When an error is found, stop forging batches and send a message to the
      coordinator to stop the pipeline with the number of the failed batch,
      so that on restart, non-failed batches are not repeated.
- When doing a reset of the stateDB, if possible reset from the local
checkpoint instead of resetting from the synchronizer. This allows
resetting from a batch that is valid but not yet sent / synced.
- Every time a pipeline is started, assign it a number from a counter. This
allows the TxManager to ignore batches from stopped pipelines, via a
message sent by the coordinator.
- Avoid forging when we haven't reached the rollup genesis block number.
- Add config parameter `StartSlotBlocksDelay`: StartSlotBlocksDelay is the
number of blocks of delay to wait before starting the pipeline when we
reach a slot in which we can forge.
- When detecting a reorg, only reset the pipeline if the batch from which
the pipeline started changed and wasn't sent by us.
  - Add config parameter `ScheduleBatchBlocksAheadCheck`:
    ScheduleBatchBlocksAheadCheck is the number of blocks ahead in which the
    forger address is checked to be allowed to forge (apart from checking the
    next block), used to decide when to stop scheduling new batches (by
    stopping the pipeline). For example, if we are at block 10 and
    ScheduleBatchBlocksAheadCheck is 5, even though at block 11 we canForge,
    the pipeline will be stopped if we can't forge at block 15. This value
    should be the expected number of blocks it takes between scheduling a
    batch and having it mined.
  - Add config parameter `SendBatchBlocksMarginCheck`:
    SendBatchBlocksMarginCheck is the number of margin blocks ahead in which
    the coordinator is also checked to be allowed to forge, apart from the
    next block; used to decide when to stop sending batches to the smart
    contract. For example, if we are at block 10 and
    SendBatchBlocksMarginCheck is 5, even though at block 11 we canForge, the
    batch will be discarded if we can't forge at block 15.
- Add config parameter `TxResendTimeout`: TxResendTimeout is the timeout
after which a non-mined ethereum transaction will be resent (reusing the
nonce) with a newly calculated gas price
- Add config parameter `MaxGasPrice`: MaxGasPrice is the maximum gas price
allowed for ethereum transactions
- Add config parameter `NoReuseNonce`: NoReuseNonce disables reusing nonces
of pending transactions for new replacement transactions. This is useful
for testing with Ganache.
- Extend BatchInfo with more useful information for debugging
- eth / ethereum client
  - Add the necessary methods to create the auth object for transactions, so
    that we can manually set the nonce, gas price, gas limit, etc.
- Update `RollupForgeBatch` to take an auth object as input (so that the
coordinator can set parameters manually)
- synchronizer
- In stats, add `NextSlot`
- In stats, store full last batch instead of just last batch number
- Instead of calculating a nextSlot from scratch every time, update the
current struct (only updating the forger info if we are Synced)
  - After every processed batch, check that the calculated StateDB MTRoot
    matches the StateRoot found in the forgeBatch event.
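Two sketches of behaviours described above. First, the SIGINT handling in cli / node: a self-contained example of starting a graceful shutdown on the first SIGINT and terminating unconditionally after the third. This is an illustration of the described behaviour, not the actual node code.

```go
package main

import (
	"context"
	"log"
	"os"
	"os/signal"
)

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	sigCh := make(chan os.Signal, 1)
	signal.Notify(sigCh, os.Interrupt)
	go func() {
		// The first SIGINT triggers a graceful shutdown; the third one
		// terminates the process unconditionally.
		for i := 1; ; i++ {
			<-sigCh
			log.Printf("received SIGINT %d/3", i)
			if i == 1 {
				cancel()
			}
			if i >= 3 {
				log.Fatal("3 SIGINTs received, terminating unconditionally")
			}
		}
	}()

	<-ctx.Done() // the real node would run its services here
	log.Println("shutting down gracefully")
}
```

Second, the nonce and gas-price recovery rules listed under "Robust handling of nonces". Matching geth errors by substring and the ~10% gas-price bump are assumptions for illustration; the real TxManager may compare against the typed errors exported by go-ethereum and use a different bump.

```go
package main

import (
	"errors"
	"fmt"
	"math/big"
	"strings"
)

// adjustAfterSendError applies the recovery rules listed above to the locally
// tracked nonce and gas price before the transaction is resent.
func adjustAfterSendError(err error, nonce uint64, gasPrice *big.Int) (uint64, *big.Int) {
	msg := err.Error()
	switch {
	case strings.Contains(msg, "nonce too low"):
		nonce++ // our tracked nonce lagged behind the network
	case strings.Contains(msg, "nonce too high"):
		nonce-- // we skipped ahead; step back
	case strings.Contains(msg, "underpriced"):
		// Covers both "underpriced" and "replacement transaction underpriced":
		// bump the gas price by ~10% before resending.
		bump := new(big.Int).Div(gasPrice, big.NewInt(10))
		gasPrice = new(big.Int).Add(gasPrice, bump)
	}
	return nonce, gasPrice
}

func main() {
	nonce, gasPrice := uint64(7), big.NewInt(1_000_000_000)
	nonce, gasPrice = adjustAfterSendError(
		errors.New("replacement transaction underpriced"), nonce, gasPrice)
	fmt.Println(nonce, gasPrice) // prints: 7 1100000000
}
```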
Redo coordinator structure, connect API to node
- API:
- Modify the constructor so that hardcoded rollup constants don't need
to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
- Update rollup constants with proper *big.Int when required
- Add BidCoordinator and Slot structs used by the HistoryDB and
Synchronizer.
- Add helper methods to AuctionConstants
  - AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL
    table: `default_slot_set_bid_slot_num`), which indicates the slotNum at
    which the specified `DefaultSlotSetBid` starts applying.
- Config:
- Move coordinator exclusive configuration from the node config to the
coordinator config
- Coordinator:
- Reorganize the code towards having the goroutines started and stopped
from the coordinator itself instead of the node.
  - Remove all stop and stopped channels, and use context.Context and
    sync.WaitGroup instead (see the sketch after this list).
  - Remove BatchInfo setters and assign variables directly
  - In ServerProof and ServerProofPool, use a context instead of a stop
    channel.
- Use message passing to notify the coordinator about sync updates and
reorgs
- Introduce the Pipeline, which can be started and stopped by the
Coordinator
- Introduce the TxManager, which manages ethereum transactions (the
TxManager is also in charge of making the forge call to the rollup
smart contract). The TxManager keeps ethereum transactions and:
1. Waits for the transaction to be accepted
2. Waits for the transaction to be confirmed for N blocks
  - In the forge logic, first prepare a batch and then wait for an available
    server proof, so that all the work is ready once the proof server is
    available.
- Remove the `isForgeSequence` method which was querying the smart
contract, and instead use notifications sent by the Synchronizer to
figure out if it's forging time.
- Update test (which is a minimal test to manually see if the
coordinator starts)
- HistoryDB:
- Add method to get the number of batches in a slot (used to detect when
a slot has passed the bid winner forging deadline)
- Add method to get the best bid and associated coordinator of a slot
(used to detect the forgerAddress that can forge the slot)
- General:
- Rename some instances of `currentBlock` to `lastBlock` to be more
clear.
- Node:
- Connect the API to the node and call the methods to update cached
state when the sync advances blocks.
- Call methods to update Coordinator state when the sync advances blocks
and finds reorgs.
- Synchronizer:
  - Add an Auction field to the Stats, which contains the current slot with
    info about the highest bidder and other related info required to know
    who can forge in the current block.
- Better organization of cached state:
- On Sync, update the internal cached state
- On Init or Reorg, load the state from HistoryDB into the
internal cached state.
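A minimal sketch of the stop-channel replacement described above in the Coordinator bullets: goroutines are started and stopped by the coordinator itself via context cancellation and waited on with a sync.WaitGroup. Struct and method names are illustrative, not the actual Coordinator API.

```go
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// Coordinator is an illustrative stand-in showing the start/stop pattern
// described above; the real Coordinator manages the Pipeline and TxManager.
type Coordinator struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				fmt.Println("loop: context done, exiting")
				return
			case <-time.After(100 * time.Millisecond):
				// forge / sync work would happen here
			}
		}
	}()
}

func (c *Coordinator) Stop() {
	c.cancel()  // replaces sending on a stop channel
	c.wg.Wait() // replaces waiting on a stopped channel
}

func main() {
	var c Coordinator
	c.Start()
	time.Sleep(250 * time.Millisecond)
	c.Stop()
}
```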
Update coordinator, call all api update functions
- Common:
- Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
- Add UpdateNetworkInfoBlock to update just block information, to be
used when the node is not yet synchronized
- Node:
  - Call API.UpdateMetrics and API.UpdateRecommendedFee in a loop, with
    configurable time intervals (a sketch follows this list)
- Synchronizer:
- When mapping events by TxHash, use an array to support the possibility
of multiple calls of the same function happening in the same
transaction (for example, a smart contract in a single transaction
could call withdraw with delay twice, which would generate 2 withdraw
events, and 2 deposit events).
- In Stats, keep entire LastBlock instead of just the blockNum
- In Stats, add lastL1BatchBlock
- Test Stats and SCVars
- Coordinator:
- Enable writing the BatchInfo in every step of the pipeline to disk
(with JSON text files) for debugging purposes.
- Move the Pipeline functionality from the Coordinator to its own struct
(Pipeline)
  - Implement shouldL1L2Batch
  - In TxManager, retry ethereum node RPC calls several times before
    treating the error as final (both for the forgeBatch call and for
    fetching the transaction receipt)
  - In TxManager, reorganize the flow and note the specific points at which
    actions are taken when err != nil
- HistoryDB:
- Implement GetLastL1BatchBlockNum: returns the blockNum of the latest
forged l1Batch, to help the coordinator decide when to forge an
L1Batch.
- EthereumClient and test.Client:
- Update EthBlockByNumber to return the last block when the passed
number is -1.
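A sketch of the Node-side update loop mentioned above ("Call API.UpdateMetrics and API.UpdateRecommendedFee in a loop, with configurable time intervals"). The two function parameters stand in for the API methods; the name runAPIUpdaters and its signature are assumptions for illustration.

```go
package main

import (
	"context"
	"log"
	"time"
)

// runAPIUpdaters refreshes the API caches on two independent, configurable
// intervals until the context is cancelled.
func runAPIUpdaters(ctx context.Context, metricsInterval, feeInterval time.Duration,
	updateMetrics, updateRecommendedFee func() error) {
	metricsTicker := time.NewTicker(metricsInterval)
	defer metricsTicker.Stop()
	feeTicker := time.NewTicker(feeInterval)
	defer feeTicker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-metricsTicker.C:
			if err := updateMetrics(); err != nil {
				log.Printf("UpdateMetrics: %v", err)
			}
		case <-feeTicker.C:
			if err := updateRecommendedFee(); err != nil {
				log.Printf("UpdateRecommendedFee: %v", err)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 350*time.Millisecond)
	defer cancel()
	runAPIUpdaters(ctx, 100*time.Millisecond, 150*time.Millisecond,
		func() error { log.Println("metrics updated"); return nil },
		func() error { log.Println("recommended fee updated"); return nil },
	)
}
```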
Fix exit table, set delayed_withdrawn in exits
- In the exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
  `delayed_withdrawn` were referencing a batch_num, but these actions happen
  outside a batch, so they should reference a block_num.
- Process delayed withdrawals:
  - In the Synchronizer, first match each Rollup delayed-withdrawal request
    with the WDelayer deposit (via TxHash), and store the owner and token
    associated with the delayed withdrawal (see the sketch after this list).
  - In the HistoryDB, store the owner and token of a delayed withdrawal
    request in the exit_tree, and set delayed_withdrawn when the withdrawal
    is done in the WDelayer.
- Update the sqlx dependency to master
  - The last release of sqlx is from October 2018 and doesn't support
    `NamedQuery` with a slice of structs, which is used in this commit.
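A sketch of the TxHash matching described above: each Rollup delayed-withdrawal request is paired with the WDelayer deposit emitted by the same ethereum transaction, so that the owner and token can later be stored in the exit_tree. The types here are simplified stand-ins (string hashes, a single deposit per tx), not the actual synchronizer code.

```go
package main

import "fmt"

// Illustrative stand-ins for the event data discussed above.
type wDelayerDeposit struct {
	TxHash, Owner, Token string
}

type rollupWithdraw struct {
	TxHash          string
	InstantWithdraw bool
	Owner, Token    string // filled in below for delayed withdrawals
}

// matchDelayedWithdrawals pairs each delayed Rollup withdrawal with the
// WDelayer deposit from the same transaction, keyed by TxHash.
func matchDelayedWithdrawals(withdraws []rollupWithdraw, deposits []wDelayerDeposit) error {
	byTxHash := make(map[string]wDelayerDeposit)
	for _, d := range deposits {
		byTxHash[d.TxHash] = d
	}
	for i := range withdraws {
		if withdraws[i].InstantWithdraw {
			continue
		}
		d, ok := byTxHash[withdraws[i].TxHash]
		if !ok {
			return fmt.Errorf("no WDelayer deposit found for tx %s", withdraws[i].TxHash)
		}
		withdraws[i].Owner, withdraws[i].Token = d.Owner, d.Token
	}
	return nil
}

func main() {
	withdraws := []rollupWithdraw{{TxHash: "0xabc"}}
	deposits := []wDelayerDeposit{{TxHash: "0xabc", Owner: "0xowner", Token: "0xtoken"}}
	if err := matchDelayedWithdrawals(withdraws, deposits); err != nil {
		panic(err)
	}
	fmt.Println(withdraws[0].Owner, withdraws[0].Token) // prints: 0xowner 0xtoken
}
```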
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawns:
- In Synchronizer, first match a Rollup delayed withdrawn request, with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawn.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdraw is done in
the WDelayer.
- Update dependency of sqlx to master
- Last release of sqlx is from 2018 October, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawns:
- In Synchronizer, first match a Rollup delayed withdrawn request, with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawn.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdraw is done in
the WDelayer.
- Update dependency of sqlx to master
- Last release of sqlx is from 2018 October, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawns:
- In Synchronizer, first match a Rollup delayed withdrawn request, with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawn.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdraw is done in
the WDelayer.
- Update dependency of sqlx to master
- Last release of sqlx is from 2018 October, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawns:
- In Synchronizer, first match a Rollup delayed withdrawn request, with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawn.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdraw is done in
the WDelayer.
- Update dependency of sqlx to master
- Last release of sqlx is from 2018 October, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawns:
- In Synchronizer, first match a Rollup delayed withdrawn request, with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawn.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdraw is done in
the WDelayer.
- Update dependency of sqlx to master
- Last release of sqlx is from 2018 October, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawns:
- In Synchronizer, first match a Rollup delayed withdrawn request, with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawn.
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdraw is done in
the WDelayer.
- Update dependency of sqlx to master
- Last release of sqlx is from 2018 October, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Update missing parts, improve til, and more
- Node
- Updated configuration to initialize the interface to all the smart
contracts
- Common
- Moved BlockData and BatchData types to common so that they can be
shared among: historydb, til and synchronizer
- Remove hash.go (it was never used)
- Remove slot.go (it was never used)
- Remove smartcontractparams.go (it was never used, and appropriate
structs are defined in `eth/`)
- Comment state / status method until requirements of this method are
properly defined, and move it to Synchronizer
- Synchronizer
- Simplify `Sync` routine to only sync one block per call, and return
useful information.
- Use BlockData and BatchData from common
- Check that events belong to the expected block hash
- In L1Batch, query L1UserTxs from HistoryDB
- Fill ERC20 token information
- Test AddTokens with test.Client
- HistryDB
- Use BlockData and BatchData from common
- Add `GetAllTokens` method
- Uncomment and update GetL1UserTxs (with corresponding tests)
- Til
- Rename all instances of RegisterToken to AddToken (to follow the smart
contract implementation naming)
- Use BlockData and BatchData from common
- Move testL1CoordinatorTxs and testL2Txs to a separate struct
from BatchData in Context
- Start Context with BatchNum = 1 (which the protocol defines to be the
first batchNum)
- In every Batch, set StateRoot and ExitRoot to a non-nil big.Int
(zero).
- In all L1Txs, if LoadAmount is not used, set it to 0; if Amount is not
used, set it to 0; so that no *big.Int is nil.
- In L1UserTx, don't set BatchNum, because when L1UserTxs are created
and obtained by the synchronizer, the BatchNum is not known yet (it's
a synchronizer job to set it)
- In L1UserTxs, set `UserOrigin` and set `ToForgeL1TxsNum`.
4 years ago Fix exit table, set delayed_withdrawn in exits
- In exit table, `instant_withdrawn`, `delayed_withdraw_request`, and
`delayed_withdrawn` were referencing batch_num. But these actions happen
outside a batch, so they should reference a block_num.
- Process delayed withdrawals:
- In Synchronizer, first match a Rollup delayed withdrawal request with the
WDelayer deposit (via TxHash), and store the owner and token associated
with the delayed withdrawal (see the sketch after this list).
- In HistoryDB: store the owner and token of a delayed withdrawal request
in the exit_tree, and set delayed_withdrawn when the withdrawal is done in
the WDelayer.
- Update dependency of sqlx to master
- The last release of sqlx is from October 2018, and it doesn't support
`NamedQuery` with a slice of structs, which is used in this commit.
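A minimal sketch of the TxHash matching described in the Synchronizer item above. The event struct names and their fields are assumptions for illustration only (the real types live in `eth/` and `common/`), and a later change extends the per-TxHash value to a slice to support repeated calls in one transaction:

package sketch

import ethCommon "github.com/ethereum/go-ethereum/common"

// rollupWithdrawEvent and wDelayerDepositEvent are illustrative stand-ins for
// the real withdrawal/deposit events; field names are assumed.
type rollupWithdrawEvent struct {
	TxHash          ethCommon.Hash
	Idx             int64
	InstantWithdraw bool
}

type wDelayerDepositEvent struct {
	TxHash ethCommon.Hash
	Owner  ethCommon.Address
	Token  ethCommon.Address
}

// matchDelayedWithdrawals pairs each non-instant rollup withdrawal with the
// WDelayer deposit made in the same ethereum transaction, so the owner and
// token can later be stored in the corresponding exit_tree row.
func matchDelayedWithdrawals(withdrawals []rollupWithdrawEvent,
	deposits []wDelayerDepositEvent) map[int64]wDelayerDepositEvent {
	byTxHash := make(map[ethCommon.Hash]wDelayerDepositEvent)
	for _, d := range deposits {
		byTxHash[d.TxHash] = d
	}
	matched := make(map[int64]wDelayerDepositEvent)
	for _, w := range withdrawals {
		if w.InstantWithdraw {
			continue // instant withdrawals never go through the WDelayer
		}
		if d, ok := byTxHash[w.TxHash]; ok {
			matched[w.Idx] = d
		}
	}
	return matched
}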
4 years ago Redo coordinator structure, connect API to node
- API:
- Modify the constructor so that hardcoded rollup constants don't need
to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
- Update rollup constants with proper *big.Int when required
- Add BidCoordinator and Slot structs used by the HistoryDB and
Synchronizer.
- Add helper methods to AuctionConstants
- AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL
table: `default_slot_set_bid_slot_num`), which indicates the slotNum at
which the specified `DefaultSlotSetBid` starts applying.
- Config:
- Move coordinator exclusive configuration from the node config to the
coordinator config
- Coordinator:
- Reorganize the code towards having the goroutines started and stopped
from the coordinator itself instead of the node.
- Remove all stop and stopped channels, and use context.Context and
sync.WaitGroup instead (see the sketch after this list).
- Remove BatchInfo setters and assign variables directly
- In ServerProof and ServerProofPool, use a context instead of a stop
channel.
- Use message passing to notify the coordinator about sync updates and
reorgs
- Introduce the Pipeline, which can be started and stopped by the
Coordinator
- Introduce the TxManager, which manages ethereum transactions (the
TxManager is also in charge of making the forge call to the rollup
smart contract). The TxManager keeps ethereum transactions and:
1. Waits for the transaction to be accepted
2. Waits for the transaction to be confirmed for N blocks
- In the forge logic, first prepare a batch and then wait for an available
server proof, so that all the work is ready once the proof server is ready.
- Remove the `isForgeSequence` method which was querying the smart
contract, and instead use notifications sent by the Synchronizer to
figure out if it's forging time.
- Update test (which is a minimal test to manually see if the
coordinator starts)
- HistoryDB:
- Add method to get the number of batches in a slot (used to detect when
a slot has passed the bid winner forging deadline)
- Add method to get the best bid and associated coordinator of a slot
(used to detect the forgerAddress that can forge the slot)
- General:
- Rename some instances of `currentBlock` to `lastBlock` to be clearer.
- Node:
- Connect the API to the node and call the methods to update cached
state when the sync advances blocks.
- Call methods to update Coordinator state when the sync advances blocks
and finds reorgs.
- Synchronizer:
- Add Auction field in the Stats, which contains the current slot with
info about the highest bidder and other related info required to know
who can forge in the current block.
- Better organization of cached state:
- On Sync, update the internal cached state
- On Init or Reorg, load the state from HistoryDB into the
internal cached state.
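As a reference for the context/WaitGroup point above, here is a minimal sketch of the start/stop pattern. The Pipeline name is borrowed from the list, but its fields and the Start/Stop signatures are illustrative assumptions, not the coordinator's real API:

package sketch

import (
	"context"
	"sync"
	"time"
)

// Pipeline is an illustrative stand-in for a coordinator component that owns
// its own goroutines; the real type has a different shape.
type Pipeline struct {
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

// Start launches the worker goroutine; the parent context lets the
// coordinator stop everything at once.
func (p *Pipeline) Start(parent context.Context) {
	ctx, cancel := context.WithCancel(parent)
	p.cancel = cancel
	p.wg.Add(1)
	go func() {
		defer p.wg.Done()
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return // replaces the old stop channel
			case <-ticker.C:
				// do one unit of work per tick
			}
		}
	}()
}

// Stop cancels the context and waits for the goroutine to finish, which is
// what the removed stopped channel used to signal.
func (p *Pipeline) Stop() {
	p.cancel()
	p.wg.Wait()
}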
4 years ago Update coordinator, call all api update functions
- Common:
- Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
- Add UpdateNetworkInfoBlock to update just block information, to be
used when the node is not yet synchronized
- Node:
- Call API.UpdateMetrics and UpdateRecommendedFee in a loop, with
configurable time intervals
- Synchronizer:
- When mapping events by TxHash, use an array to support the possibility
of multiple calls of the same function happening in the same
transaction (for example, a smart contract in a single transaction
could call withdraw with delay twice, which would generate 2 withdraw
events, and 2 deposit events).
- In Stats, keep entire LastBlock instead of just the blockNum
- In Stats, add lastL1BatchBlock
- Test Stats and SCVars
- Coordinator:
- Enable writing the BatchInfo in every step of the pipeline to disk
(with JSON text files) for debugging purposes.
- Move the Pipeline functionality from the Coordinator to its own struct
(Pipeline)
- Implement shouldL1lL2Batch
- In TxManager, implement logic to perform several attempts when doing
ethereum node RPC calls before considering the error (both for calls to
forgeBatch and for getting the transaction receipt); see the retry
sketch after this list.
- In TxManager, reorganize the flow and note the specific points at
which actions are taken when err != nil
- HistoryDB:
- Implement GetLastL1BatchBlockNum: returns the blockNum of the latest
forged l1Batch, to help the coordinator decide when to forge an
L1Batch.
- EthereumClient and test.Client:
- Update EthBlockByNumber to return the last block when the passed
number is -1.
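A minimal sketch of the retry idea described above. The helper name, attempt count, and delay are illustrative assumptions, not the TxManager's actual fields or method names:

package sketch

import (
	"context"
	"fmt"
	"time"
)

// withRetries retries fn up to attempts times (attempts is assumed > 0),
// waiting retryDelay between attempts, and only returns the error once every
// attempt has failed, mirroring the TxManager behaviour described above.
func withRetries(ctx context.Context, attempts int, retryDelay time.Duration,
	fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(retryDelay):
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

For example, both the forgeBatch call and the receipt query could be wrapped as withRetries(ctx, 4, time.Second, func() error { ... }).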
package historydb

import (
	"math"
	"math/big"
	"strings"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"
	"github.com/jmoiron/sqlx"

	//nolint:errcheck // driver for postgres DB
	_ "github.com/lib/pq"
	"github.com/russross/meddler"
)

const (
	// OrderAsc indicates ascending order when using pagination
	OrderAsc = "ASC"
	// OrderDesc indicates descending order when using pagination
	OrderDesc = "DESC"
)

// TODO(Edu): Document here how HistoryDB is kept consistent

// HistoryDB persists the historic data of the rollup
type HistoryDB struct {
	db         *sqlx.DB
	apiConnCon *db.APIConnectionController
}

// NewHistoryDB initializes the DB
func NewHistoryDB(db *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
	return &HistoryDB{db: db, apiConnCon: apiConnCon}
}

// DB returns a pointer to the HistoryDB.db. This method should be used only
// for internal testing purposes.
func (hdb *HistoryDB) DB() *sqlx.DB {
	return hdb.db
}

// AddBlock inserts a block into the DB
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }

func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
	return tracerr.Wrap(meddler.Insert(d, "block", block))
}

// AddBlocks inserts blocks into the DB
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
	return tracerr.Wrap(hdb.addBlocks(hdb.db, blocks))
}

func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
	return tracerr.Wrap(db.BulkInsert(
		d,
		`INSERT INTO block (
			eth_block_num,
			timestamp,
			hash
		) VALUES %s;`,
		blocks[:],
	))
}

// GetBlock retrieves a block from the DB, given a block number
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block,
		"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
	)
	return block, tracerr.Wrap(err)
}

// GetAllBlocks retrieves all blocks from the DB
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block ORDER BY eth_block_num;",
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// getBlocks retrieves blocks from the DB, given a range of block numbers
// defined by from and to
func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
	var blocks []*common.Block
	err := meddler.QueryAll(
		hdb.db, &blocks,
		"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
		from, to,
	)
	return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
}

// GetLastBlock retrieves the block with the highest block number from the DB
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
	block := &common.Block{}
	err := meddler.QueryRow(
		hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
	)
	return block, tracerr.Wrap(err)
}

// AddBatch inserts a Batch into the DB
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }

func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
	// Calculate total collected fees in USD
	// Get IDs of collected tokens for fees
	tokenIDs := []common.TokenID{}
	for id := range batch.CollectedFees {
		tokenIDs = append(tokenIDs, id)
	}
	// Get USD value of the tokens
	type tokenPrice struct {
		ID       common.TokenID `meddler:"token_id"`
		USD      *float64       `meddler:"usd"`
		Decimals int            `meddler:"decimals"`
	}
	var tokenPrices []*tokenPrice
	if len(tokenIDs) > 0 {
		query, args, err := sqlx.In(
			"SELECT token_id, usd, decimals FROM token WHERE token_id IN (?);",
			tokenIDs,
		)
		if err != nil {
			return tracerr.Wrap(err)
		}
		query = hdb.db.Rebind(query)
		if err := meddler.QueryAll(
			hdb.db, &tokenPrices, query, args...,
		); err != nil {
			return tracerr.Wrap(err)
		}
	}
	// Calculate total collected
	var total float64
	for _, tokenPrice := range tokenPrices {
		if tokenPrice.USD == nil {
			continue
		}
		f := new(big.Float).SetInt(batch.CollectedFees[tokenPrice.ID])
		amount, _ := f.Float64()
		total += *tokenPrice.USD * (amount / math.Pow(10, float64(tokenPrice.Decimals))) //nolint decimals have to be ^10
	}
	batch.TotalFeesUSD = &total
	// Insert to DB
	return tracerr.Wrap(meddler.Insert(d, "batch", batch))
}
// AddBatches inserts Batches into the DB
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error { return tracerr.Wrap(hdb.addBatches(hdb.db, batches)) } func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error { for i := 0; i < len(batches); i++ { if err := hdb.addBatch(d, &batches[i]); err != nil { return tracerr.Wrap(err) } } return nil }
// GetBatch returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error) { var batch common.Batch err := meddler.QueryRow( hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`, batchNum, ) return &batch, err }
// GetAllBatches retrieve all batches from the DB
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) { var batches []*common.Batch err := meddler.QueryAll( hdb.db, &batches, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch ORDER BY item_id;`, ) return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err) }
// GetBatches retrieve batches from the DB, given a range of batch numbers defined by from and to
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) { var batches []*common.Batch err := meddler.QueryAll( hdb.db, &batches, `SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator, state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`, from, to, ) return db.SlicePtrsToSlice(batches).([]common.Batch), tracerr.Wrap(err) }
// GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
// batch within a slot
func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) { row := hdb.db.QueryRow( `SELECT eth_block_num FROM batch WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum, ) var blockNum int64 return blockNum, tracerr.Wrap(row.Scan(&blockNum)) }
// GetLastBatchNum returns the BatchNum of the latest forged batch
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) { row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;") var batchNum common.BatchNum return batchNum, tracerr.Wrap(row.Scan(&batchNum)) }
// GetLastBatch returns the last forged batch
func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) { var batch common.Batch err := meddler.QueryRow( hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`, ) return &batch, err }
// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
	row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
		WHERE forge_l1_txs_num IS NOT NULL
		ORDER BY batch_num DESC LIMIT 1;`)
	var blockNum int64
	return blockNum, tracerr.Wrap(row.Scan(&blockNum))
}
// GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
// batches. If there's no batch in the DB (nil, nil) is returned.
func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) { row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;") lastL1TxsNum := new(int64) return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum)) }
// Reorg deletes all the information that was added into the DB after the
// lastValidBlock. If lastValidBlock is negative, all block information is
// deleted.
func (hdb *HistoryDB) Reorg(lastValidBlock int64) error { var err error if lastValidBlock < 0 { _, err = hdb.db.Exec("DELETE FROM block;") } else { _, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock) } return tracerr.Wrap(err) }
// AddBids insert Bids into the DB
func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) } func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error { if len(bids) == 0 { return nil } // TODO: check the coordinator info
return tracerr.Wrap(db.BulkInsert( d, "INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;", bids[:], )) }
// GetAllBids retrieve all bids from the DB
func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) { var bids []*common.Bid err := meddler.QueryAll( hdb.db, &bids, `SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid ORDER BY item_id;`, ) return db.SlicePtrsToSlice(bids).([]common.Bid), tracerr.Wrap(err) }
// GetBestBidCoordinator returns the forger address of the highest bidder in a slot by slotNum
func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) { bidCoord := &common.BidCoordinator{} err := meddler.QueryRow( hdb.db, bidCoord, `SELECT ( SELECT default_slot_set_bid FROM auction_vars WHERE default_slot_set_bid_slot_num <= $1 ORDER BY eth_block_num DESC LIMIT 1 ), bid.slot_num, bid.bid_value, bid.bidder_addr, coordinator.forger_addr, coordinator.url FROM bid INNER JOIN ( SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator GROUP BY bidder_addr ) c ON bid.bidder_addr = c.bidder_addr INNER JOIN coordinator ON c.item_id = coordinator.item_id WHERE bid.slot_num = $1 ORDER BY bid.item_id DESC LIMIT 1;`, slotNum)
return bidCoord, tracerr.Wrap(err) }
// AddCoordinators insert Coordinators into the DB
func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error { return tracerr.Wrap(hdb.addCoordinators(hdb.db, coordinators)) } func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error { if len(coordinators) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, "INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;", coordinators[:], )) }
// AddExitTree insert Exit tree into the DB
func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error { return tracerr.Wrap(hdb.addExitTree(hdb.db, exitTree)) } func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error { if len(exitTree) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, "INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+ "instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;", exitTree[:], )) }
func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64, rollupWithdrawals []common.WithdrawInfo, wDelayerWithdrawals []common.WDelayerTransfer) error { if len(rollupWithdrawals) == 0 && len(wDelayerWithdrawals) == 0 { return nil } type withdrawal struct { BatchNum int64 `db:"batch_num"` AccountIdx int64 `db:"account_idx"` InstantWithdrawn *int64 `db:"instant_withdrawn"` DelayedWithdrawRequest *int64 `db:"delayed_withdraw_request"` DelayedWithdrawn *int64 `db:"delayed_withdrawn"` Owner *ethCommon.Address `db:"owner"` Token *ethCommon.Address `db:"token"` } withdrawals := make([]withdrawal, len(rollupWithdrawals)+len(wDelayerWithdrawals)) for i := range rollupWithdrawals { info := &rollupWithdrawals[i] withdrawals[i] = withdrawal{ BatchNum: int64(info.NumExitRoot), AccountIdx: int64(info.Idx), } if info.InstantWithdraw { withdrawals[i].InstantWithdrawn = &blockNum } else { withdrawals[i].DelayedWithdrawRequest = &blockNum withdrawals[i].Owner = &info.Owner withdrawals[i].Token = &info.Token } } for i := range wDelayerWithdrawals { info := &wDelayerWithdrawals[i] withdrawals[len(rollupWithdrawals)+i] = withdrawal{ DelayedWithdrawn: &blockNum, Owner: &info.Owner, Token: &info.Token, } } // In VALUES we set an initial row of NULLs to set the types of each
// variable passed as argument
const query string = ` UPDATE exit_tree e SET instant_withdrawn = d.instant_withdrawn, delayed_withdraw_request = CASE WHEN e.delayed_withdraw_request IS NOT NULL THEN e.delayed_withdraw_request ELSE d.delayed_withdraw_request END, delayed_withdrawn = d.delayed_withdrawn, owner = d.owner, token = d.token FROM (VALUES (NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BIGINT, NULL::::BYTEA, NULL::::BYTEA), (:batch_num, :account_idx, :instant_withdrawn, :delayed_withdraw_request, :delayed_withdrawn, :owner, :token) ) as d (batch_num, account_idx, instant_withdrawn, delayed_withdraw_request, delayed_withdrawn, owner, token) WHERE (d.batch_num IS NOT NULL AND e.batch_num = d.batch_num AND e.account_idx = d.account_idx) OR (d.delayed_withdrawn IS NOT NULL AND e.delayed_withdrawn IS NULL AND e.owner = d.owner AND e.token = d.token); ` if len(withdrawals) > 0 { if _, err := sqlx.NamedExec(d, query, withdrawals); err != nil { return tracerr.Wrap(err) } }
return nil }
// AddToken insert a token into the DB
func (hdb *HistoryDB) AddToken(token *common.Token) error { return tracerr.Wrap(meddler.Insert(hdb.db, "token", token)) }
// AddTokens insert tokens into the DB
func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) } func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error { if len(tokens) == 0 { return nil } // Sanitize name and symbol
for i, token := range tokens { token.Name = strings.ToValidUTF8(token.Name, " ") token.Symbol = strings.ToValidUTF8(token.Symbol, " ") tokens[i] = token } return tracerr.Wrap(db.BulkInsert( d, `INSERT INTO token ( token_id, eth_block_num, eth_addr, name, symbol, decimals ) VALUES %s;`, tokens[:], )) }
// UpdateTokenValue updates the USD value of a token
func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error { // Sanitize symbol
tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
_, err := hdb.db.Exec( "UPDATE token SET usd = $1 WHERE symbol = $2;", value, tokenSymbol, ) return tracerr.Wrap(err) }
// GetToken returns a token from the DB given a TokenID
func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
	token := &TokenWithUSD{}
	err := meddler.QueryRow(
		hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
	)
	return token, tracerr.Wrap(err)
}

// GetAllTokens returns all tokens from the DB
func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
	var tokens []*TokenWithUSD
	err := meddler.QueryAll(
		hdb.db, &tokens,
		"SELECT * FROM token ORDER BY token_id;",
	)
	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
}
// GetTokenSymbols returns all the token symbols from the DB
func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) { var tokenSymbols []string rows, err := hdb.db.Query("SELECT symbol FROM token;") if err != nil { return nil, tracerr.Wrap(err) } defer db.RowsClose(rows) sym := new(string) for rows.Next() { err = rows.Scan(sym) if err != nil { return nil, tracerr.Wrap(err) } tokenSymbols = append(tokenSymbols, *sym) } return tokenSymbols, nil }
// AddAccounts insert accounts into the DB
func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error { return tracerr.Wrap(hdb.addAccounts(hdb.db, accounts)) } func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error { if len(accounts) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, `INSERT INTO account ( idx, token_id, batch_num, bjj, eth_addr ) VALUES %s;`, accounts[:], )) }
// GetAllAccounts returns a list of accounts from the DB
func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) { var accs []*common.Account err := meddler.QueryAll( hdb.db, &accs, "SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;", ) return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err) }
// AddL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
// If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
// BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
// EffectiveAmount and EffectiveDepositAmount are set with default values by the DB.
func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error { return tracerr.Wrap(hdb.addL1Txs(hdb.db, l1txs)) }
// addL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
// If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
// BatchNum should be null, and the value will be set by a trigger when a batch forges the tx.
// EffectiveAmount and EffectiveDepositAmount are set with default values by the DB.
func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error { if len(l1txs) == 0 { return nil } txs := []txWrite{} for i := 0; i < len(l1txs); i++ { af := new(big.Float).SetInt(l1txs[i].Amount) amountFloat, _ := af.Float64() laf := new(big.Float).SetInt(l1txs[i].DepositAmount) depositAmountFloat, _ := laf.Float64() var effectiveFromIdx *common.Idx if l1txs[i].UserOrigin { if l1txs[i].Type != common.TxTypeCreateAccountDeposit && l1txs[i].Type != common.TxTypeCreateAccountDepositTransfer { effectiveFromIdx = &l1txs[i].FromIdx } } else { effectiveFromIdx = &l1txs[i].EffectiveFromIdx } txs = append(txs, txWrite{ // Generic
IsL1: true, TxID: l1txs[i].TxID, Type: l1txs[i].Type, Position: l1txs[i].Position, FromIdx: &l1txs[i].FromIdx, EffectiveFromIdx: effectiveFromIdx, ToIdx: l1txs[i].ToIdx, Amount: l1txs[i].Amount, AmountFloat: amountFloat, TokenID: l1txs[i].TokenID, BatchNum: l1txs[i].BatchNum, EthBlockNum: l1txs[i].EthBlockNum, // L1
ToForgeL1TxsNum: l1txs[i].ToForgeL1TxsNum, UserOrigin: &l1txs[i].UserOrigin, FromEthAddr: &l1txs[i].FromEthAddr, FromBJJ: &l1txs[i].FromBJJ, DepositAmount: l1txs[i].DepositAmount, DepositAmountFloat: &depositAmountFloat, }) } return tracerr.Wrap(hdb.addTxs(d, txs)) }
// AddL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error { return tracerr.Wrap(hdb.addL2Txs(hdb.db, l2txs)) }
// addL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
func (hdb *HistoryDB) addL2Txs(d meddler.DB, l2txs []common.L2Tx) error { txs := []txWrite{} for i := 0; i < len(l2txs); i++ { f := new(big.Float).SetInt(l2txs[i].Amount) amountFloat, _ := f.Float64() txs = append(txs, txWrite{ // Generic
IsL1: false, TxID: l2txs[i].TxID, Type: l2txs[i].Type, Position: l2txs[i].Position, FromIdx: &l2txs[i].FromIdx, EffectiveFromIdx: &l2txs[i].FromIdx, ToIdx: l2txs[i].ToIdx, TokenID: l2txs[i].TokenID, Amount: l2txs[i].Amount, AmountFloat: amountFloat, BatchNum: &l2txs[i].BatchNum, EthBlockNum: l2txs[i].EthBlockNum, // L2
Fee: &l2txs[i].Fee, Nonce: &l2txs[i].Nonce, }) } return tracerr.Wrap(hdb.addTxs(d, txs)) }
func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error { if len(txs) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, `INSERT INTO tx ( is_l1, id, type, position, from_idx, effective_from_idx, to_idx, amount, amount_f, token_id, batch_num, eth_block_num, to_forge_l1_txs_num, user_origin, from_eth_addr, from_bjj, deposit_amount, deposit_amount_f, fee, nonce ) VALUES %s;`, txs[:], )) }
// GetAllExits returns all exits from the DB
func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) { var exits []*common.ExitInfo err := meddler.QueryAll( hdb.db, &exits, `SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request, exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`, ) return db.SlicePtrsToSlice(exits).([]common.ExitInfo), tracerr.Wrap(err) }
// GetAllL1UserTxs returns all L1UserTxs from the DB
func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) { var txs []*common.L1Tx err := meddler.QueryAll( hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin, tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id, tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount, tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount, tx.eth_block_num, tx.type, tx.batch_num FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`, ) return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err) }
// GetAllL1CoordinatorTxs returns all L1CoordinatorTxs from the DB
func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) { var txs []*common.L1Tx // Since the query specifies that only coordinator txs are returned, it's safe to assume
// that returned txs will always have effective amounts
err := meddler.QueryAll( hdb.db, &txs, `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin, tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id, tx.amount, tx.amount AS effective_amount, tx.deposit_amount, tx.deposit_amount AS effective_deposit_amount, tx.eth_block_num, tx.type, tx.batch_num FROM tx WHERE is_l1 = TRUE AND user_origin = FALSE ORDER BY item_id;`, ) return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err) }
// GetAllL2Txs returns all L2Txs from the DB
func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) { var txs []*common.L2Tx err := meddler.QueryAll( hdb.db, &txs, `SELECT tx.id, tx.batch_num, tx.position, tx.from_idx, tx.to_idx, tx.amount, tx.token_id, tx.fee, tx.nonce, tx.type, tx.eth_block_num FROM tx WHERE is_l1 = FALSE ORDER BY item_id;`, ) return db.SlicePtrsToSlice(txs).([]common.L2Tx), tracerr.Wrap(err) }
// GetUnforgedL1UserTxs gets L1 User Txs to be forged in the L1Batch with toForgeL1TxsNum.
func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) { var txs []*common.L1Tx err := meddler.QueryAll( hdb.db, &txs, // only L1 user txs can have batch_num set to null
`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin, tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id, tx.amount, NULL AS effective_amount, tx.deposit_amount, NULL AS effective_deposit_amount, tx.eth_block_num, tx.type, tx.batch_num FROM tx WHERE batch_num IS NULL AND to_forge_l1_txs_num = $1 ORDER BY position;`, toForgeL1TxsNum, ) return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err) }
// TODO: Think about changing all the queries that return a last value, to queries that return the next valid value.
// GetLastTxsPosition for a given to_forge_l1_txs_num
func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) { row := hdb.db.QueryRow( "SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;", toForgeL1TxsNum, ) var lastL1TxsPosition int return lastL1TxsPosition, tracerr.Wrap(row.Scan(&lastL1TxsPosition)) }
// GetSCVars returns the rollup, auction and wdelayer smart contracts variables at their last update.
func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVariables, *common.WDelayerVariables, error) { var rollup common.RollupVariables var auction common.AuctionVariables var wDelayer common.WDelayerVariables if err := meddler.QueryRow(hdb.db, &rollup, "SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil { return nil, nil, nil, tracerr.Wrap(err) } if err := meddler.QueryRow(hdb.db, &auction, "SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil { return nil, nil, nil, tracerr.Wrap(err) } if err := meddler.QueryRow(hdb.db, &wDelayer, "SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil { return nil, nil, nil, tracerr.Wrap(err) } return &rollup, &auction, &wDelayer, nil }
func (hdb *HistoryDB) setRollupVars(d meddler.DB, rollup *common.RollupVariables) error { return tracerr.Wrap(meddler.Insert(d, "rollup_vars", rollup)) }
func (hdb *HistoryDB) setAuctionVars(d meddler.DB, auction *common.AuctionVariables) error { return tracerr.Wrap(meddler.Insert(d, "auction_vars", auction)) }
func (hdb *HistoryDB) setWDelayerVars(d meddler.DB, wDelayer *common.WDelayerVariables) error { return tracerr.Wrap(meddler.Insert(d, "wdelayer_vars", wDelayer)) }
func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.BucketUpdate) error { if len(bucketUpdates) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, `INSERT INTO bucket_update ( eth_block_num, num_bucket, block_stamp, withdrawals ) VALUES %s;`, bucketUpdates[:], )) }
// AddBucketUpdatesTest allows call to unexported method
// only for internal testing purposes
func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.BucketUpdate) error { return hdb.addBucketUpdates(d, bucketUpdates) }
// GetAllBucketUpdates retrieves all the bucket updates
func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) { var bucketUpdates []*common.BucketUpdate err := meddler.QueryAll( hdb.db, &bucketUpdates, `SELECT eth_block_num, num_bucket, block_stamp, withdrawals FROM bucket_update ORDER BY item_id;`, ) return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err) }
func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error { if len(tokenExchanges) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, `INSERT INTO token_exchange ( eth_block_num, eth_addr, value_usd ) VALUES %s;`, tokenExchanges[:], )) }
// GetAllTokenExchanges retrieves all the token exchanges
func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) { var tokenExchanges []*common.TokenExchange err := meddler.QueryAll( hdb.db, &tokenExchanges, "SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;", ) return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err) }
func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB, escapeHatchWithdrawals []common.WDelayerEscapeHatchWithdrawal) error { if len(escapeHatchWithdrawals) == 0 { return nil } return tracerr.Wrap(db.BulkInsert( d, `INSERT INTO escape_hatch_withdrawal ( eth_block_num, who_addr, to_addr, token_addr, amount ) VALUES %s;`, escapeHatchWithdrawals[:], )) }
// GetAllEscapeHatchWithdrawals retrieves all the escape hatch withdrawals
func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) { var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal err := meddler.QueryAll( hdb.db, &escapeHatchWithdrawals, "SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;", ) return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal), tracerr.Wrap(err) }
// SetInitialSCVars sets the initial state of rollup, auction, wdelayer smart
// contract variables. This initial state is stored linked to block 0, which
// always exists in the DB and is used to store initialization data that
// always exists in the smart contracts.
func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables, auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error { txn, err := hdb.db.Beginx() if err != nil { return tracerr.Wrap(err) } defer func() { if err != nil { db.Rollback(txn) } }() // Force EthBlockNum to be 0 because it's the block used to link data
// that belongs to the creation of the smart contracts
rollup.EthBlockNum = 0 auction.EthBlockNum = 0 wDelayer.EthBlockNum = 0 auction.DefaultSlotSetBidSlotNum = 0 if err := hdb.setRollupVars(txn, rollup); err != nil { return tracerr.Wrap(err) } if err := hdb.setAuctionVars(txn, auction); err != nil { return tracerr.Wrap(err) } if err := hdb.setWDelayerVars(txn, wDelayer); err != nil { return tracerr.Wrap(err) }
return tracerr.Wrap(txn.Commit()) }
// setExtraInfoForgedL1UserTxs sets the EffectiveAmount, EffectiveDepositAmount
// and EffectiveFromIdx of the given l1UserTxs (with an UPDATE)
func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx) error { if len(txs) == 0 { return nil } // Effective amounts are stored as success flags in the DB, with true value by default
// to reduce the amount of updates. Therefore, only amounts that became uneffective should be
// updated to become false. At the same time, all the txs that contain
// accounts (FromIdx == 0) are updated to set the EffectiveFromIdx.
type txUpdate struct { ID common.TxID `db:"id"` AmountSuccess bool `db:"amount_success"` DepositAmountSuccess bool `db:"deposit_amount_success"` EffectiveFromIdx common.Idx `db:"effective_from_idx"` } txUpdates := []txUpdate{} equal := func(a *big.Int, b *big.Int) bool { return a.Cmp(b) == 0 } for i := range txs { amountSuccess := equal(txs[i].Amount, txs[i].EffectiveAmount) depositAmountSuccess := equal(txs[i].DepositAmount, txs[i].EffectiveDepositAmount) if !amountSuccess || !depositAmountSuccess || txs[i].FromIdx == 0 { txUpdates = append(txUpdates, txUpdate{ ID: txs[i].TxID, AmountSuccess: amountSuccess, DepositAmountSuccess: depositAmountSuccess, EffectiveFromIdx: txs[i].EffectiveFromIdx, }) } } const query string = ` UPDATE tx SET amount_success = tx_update.amount_success, deposit_amount_success = tx_update.deposit_amount_success, effective_from_idx = tx_update.effective_from_idx FROM (VALUES (NULL::::BYTEA, NULL::::BOOL, NULL::::BOOL, NULL::::BIGINT), (:id, :amount_success, :deposit_amount_success, :effective_from_idx) ) as tx_update (id, amount_success, deposit_amount_success, effective_from_idx) WHERE tx.id = tx_update.id; ` if len(txUpdates) > 0 { if _, err := sqlx.NamedExec(d, query, txUpdates); err != nil { return tracerr.Wrap(err) } } return nil }
// AddBlockSCData stores all the information of a block retrieved by the
// Synchronizer. Blocks should be inserted in order, leaving no gaps because
// the pagination system of the API/DB depends on this. Within blocks, all
// items should also be in the correct order (Accounts, Tokens, Txs, etc.)
func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) { txn, err := hdb.db.Beginx() if err != nil { return tracerr.Wrap(err) } defer func() { if err != nil { db.Rollback(txn) } }()
// Add block
if err := hdb.addBlock(txn, &blockData.Block); err != nil { return tracerr.Wrap(err) }
// Add Coordinators
if err := hdb.addCoordinators(txn, blockData.Auction.Coordinators); err != nil { return tracerr.Wrap(err) }
// Add Bids
if err := hdb.addBids(txn, blockData.Auction.Bids); err != nil { return tracerr.Wrap(err) }
// Add Tokens
if err := hdb.addTokens(txn, blockData.Rollup.AddedTokens); err != nil { return tracerr.Wrap(err) }
// Prepare user L1 txs to be added.
// They must be added before the batch that will forge them (which can be in the same block)
// and after the account that they are sent to has been added (which can also be in the same block).
// Note: insert order is not relevant since item_id will be updated by a DB trigger when
// the batch that forges those txs is inserted
userL1s := make(map[common.BatchNum][]common.L1Tx) for i := range blockData.Rollup.L1UserTxs { batchThatForgesIsInTheBlock := false for _, batch := range blockData.Rollup.Batches { if batch.Batch.ForgeL1TxsNum != nil && *batch.Batch.ForgeL1TxsNum == *blockData.Rollup.L1UserTxs[i].ToForgeL1TxsNum { // Tx is forged in this block. It's guaranteed that:
// * the first batch of the block won't forge user L1 txs that have been added in this block
// * batch nums are sequential therefore it's safe to add the tx at batch.BatchNum -1
batchThatForgesIsInTheBlock = true addAtBatchNum := batch.Batch.BatchNum - 1 userL1s[addAtBatchNum] = append(userL1s[addAtBatchNum], blockData.Rollup.L1UserTxs[i]) break } } if !batchThatForgesIsInTheBlock { // Use artificial batchNum 0 to add txs that are not forged in this block
// after all the accounts of the block have been added
userL1s[0] = append(userL1s[0], blockData.Rollup.L1UserTxs[i]) } }
// Add Batches
for i := range blockData.Rollup.Batches { batch := &blockData.Rollup.Batches[i]
// Add Batch: this will trigger an update on the DB
// that will set the batch num of forged L1 txs in this batch
if err = hdb.addBatch(txn, &batch.Batch); err != nil { return tracerr.Wrap(err) }
// Add accounts
if err := hdb.addAccounts(txn, batch.CreatedAccounts); err != nil { return tracerr.Wrap(err) }
// Set the EffectiveAmount and EffectiveDepositAmount of all the
// L1UserTxs that have been forged in this batch
if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil { return tracerr.Wrap(err) }
// Add forged l1 coordinator Txs
if err := hdb.addL1Txs(txn, batch.L1CoordinatorTxs); err != nil { return tracerr.Wrap(err) }
// Add l2 Txs
if err := hdb.addL2Txs(txn, batch.L2Txs); err != nil { return tracerr.Wrap(err) }
// Add user L1 txs that will be forged in next batch
if userlL1s, ok := userL1s[batch.Batch.BatchNum]; ok { if err := hdb.addL1Txs(txn, userlL1s); err != nil { return tracerr.Wrap(err) } }
// Add exit tree
if err := hdb.addExitTree(txn, batch.ExitTree); err != nil { return tracerr.Wrap(err) } } // Add user L1 txs that won't be forged in this block
if userL1sNotForgedInThisBlock, ok := userL1s[0]; ok { if err := hdb.addL1Txs(txn, userL1sNotForgedInThisBlock); err != nil { return tracerr.Wrap(err) } }
// Set SC Vars if there was an update
if blockData.Rollup.Vars != nil { if err := hdb.setRollupVars(txn, blockData.Rollup.Vars); err != nil { return tracerr.Wrap(err) } } if blockData.Auction.Vars != nil { if err := hdb.setAuctionVars(txn, blockData.Auction.Vars); err != nil { return tracerr.Wrap(err) } } if blockData.WDelayer.Vars != nil { if err := hdb.setWDelayerVars(txn, blockData.WDelayer.Vars); err != nil { return tracerr.Wrap(err) } }
// Update withdrawals in exit tree table
if err := hdb.updateExitTree(txn, blockData.Block.Num, blockData.Rollup.Withdrawals, blockData.WDelayer.Withdrawals); err != nil { return tracerr.Wrap(err) }
// Add Escape Hatch Withdrawals
if err := hdb.addEscapeHatchWithdrawals(txn, blockData.WDelayer.EscapeHatchWithdrawals); err != nil { return tracerr.Wrap(err) }
// Add Buckets withdrawals updates
if err := hdb.addBucketUpdates(txn, blockData.Rollup.UpdateBucketWithdraw); err != nil { return tracerr.Wrap(err) }
// Add Token exchange updates
if err := hdb.addTokenExchanges(txn, blockData.Rollup.TokenExchanges); err != nil { return tracerr.Wrap(err) }
return tracerr.Wrap(txn.Commit()) }
// GetCoordinatorAPI returns a coordinator by its bidderAddr
func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) { coordinator := &CoordinatorAPI{} err := meddler.QueryRow( hdb.db, coordinator, "SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;", bidderAddr, ) return coordinator, tracerr.Wrap(err) }
// AddAuctionVars insert auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error { return tracerr.Wrap(meddler.Insert(hdb.db, "auction_vars", auctionVars)) }
// GetTokensTest used to get tokens in a testing context
func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) { tokens := []*TokenWithUSD{} if err := meddler.QueryAll( hdb.db, &tokens, "SELECT * FROM TOKEN", ); err != nil { return nil, tracerr.Wrap(err) } if len(tokens) == 0 { return []TokenWithUSD{}, nil } return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil }
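For reference, a minimal usage sketch of the HistoryDB API shown above. The import path, the connection string, and passing a nil APIConnectionController are assumptions to adapt to a real setup:

package main

import (
	"log"

	"github.com/hermeznetwork/hermez-node/db/historydb"
	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq" // postgres driver
)

func main() {
	// Placeholder DSN: adjust host, user, password and dbname for your setup.
	sqlDB, err := sqlx.Connect("postgres",
		"host=localhost user=hermez password=yourpasswordhere dbname=history sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	// A nil APIConnectionController is assumed to be acceptable outside the
	// API; pass a real controller if API connection limiting is needed.
	hdb := historydb.NewHistoryDB(sqlDB, nil)
	block, err := hdb.GetLastBlock()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("last synced block: %d", block.Num)
}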