Redo coordinator structure, connect API to node
- API:
- Modify the constructor so that hardcoded rollup constants don't need
to be passed (introduce a `Config` and use `configAPI` internally)
- Common:
- Update rollup constants with proper *big.Int when required
- Add BidCoordinator and Slot structs used by the HistoryDB and
Synchronizer.
- Add helper methods to AuctionConstants
- AuctionVariables: Add column `DefaultSlotSetBidSlotNum` (in the SQL
table: `default_slot_set_bid_slot_num`), which indicates the slotNum at
which the specified `DefaultSlotSetBid` starts applying.
- Config:
- Move coordinator exclusive configuration from the node config to the
coordinator config
- Coordinator:
- Reorganize the code so that the goroutines are started and stopped
from the coordinator itself instead of from the node.
- Remove all stop and stopped channels, and use context.Context and
sync.WaitGroup instead (see the first sketch after this list).
- Remove BatchInfo setters and assign variables directly
- In ServerProof and ServerProofPool use a context instead of a stop
channel.
- Use message passing to notify the coordinator about sync updates and
reorgs (also covered by the first sketch after this list)
- Introduce the Pipeline, which can be started and stopped by the
Coordinator
- Introduce the TxManager, which manages ethereum transactions (the
TxManager is also in charge of making the forge call to the rollup
smart contract). For each ethereum transaction it keeps, the TxManager
(see the second sketch after this list):
1. Waits for the transaction to be accepted
2. Waits for the transaction to be confirmed for N blocks
- In the forge logic, first prepare the batch and then wait for an
available server proof, so that all the work is ready as soon as the
proof server becomes available.
- Remove the `isForgeSequence` method which was querying the smart
contract, and instead use notifications sent by the Synchronizer to
figure out if it's forging time.
- Update test (which is a minimal test to manually see if the
coordinator starts)
- HistoryDB:
- Add method to get the number of batches in a slot (used to detect when
a slot has passed the bid winner forging deadline)
- Add method to get the best bid and associated coordinator of a slot
(used to detect the forgerAddress that can forge the slot)
- General:
- Rename some instances of `currentBlock` to `lastBlock` to be clearer.
- Node:
- Connect the API to the node and call the methods to update cached
state when the sync advances blocks.
- Call methods to update Coordinator state when the sync advances blocks
and finds reorgs.
- Synchronizer:
- Add an Auction field to the Stats, which contains the current slot
with info about the highest bidder and other related info required to
know who can forge in the current block.
- Better organization of cached state:
- On Sync, update the internal cached state
- On Init or Reorg, load the state from HistoryDB into the
internal cached state.
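Below is a minimal sketch of the stop-channel replacement and the message
passing mentioned above. The Coordinator, MsgSyncBlock, MsgSyncReorg and
SendMsg names are illustrative stand-ins, not the actual hermez-node API:

```go
package main

import (
	"context"
	"fmt"
	"sync"
)

// Sketch: the Coordinator owns its goroutines and stops them with a
// context.Context + sync.WaitGroup instead of stop/stopped channels; the
// node notifies it about sync updates and reorgs by message passing.
type MsgSyncBlock struct{ BlockNum int64 } // illustrative message type
type MsgSyncReorg struct{ BlockNum int64 } // illustrative message type

type Coordinator struct {
	msgCh  chan interface{}
	wg     sync.WaitGroup
	cancel context.CancelFunc
}

func (c *Coordinator) SendMsg(msg interface{}) { c.msgCh <- msg }

func (c *Coordinator) Start() {
	ctx, cancel := context.WithCancel(context.Background())
	c.cancel = cancel
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		for {
			select {
			case <-ctx.Done():
				return // stop requested by the Coordinator itself
			case msg := <-c.msgCh:
				switch m := msg.(type) {
				case MsgSyncBlock:
					fmt.Println("sync advanced to block", m.BlockNum)
				case MsgSyncReorg:
					fmt.Println("reorg back to block", m.BlockNum)
				}
			}
		}
	}()
}

func (c *Coordinator) Stop() {
	c.cancel()  // replaces closing a stop channel
	c.wg.Wait() // replaces waiting on a stopped channel
}

func main() {
	c := &Coordinator{msgCh: make(chan interface{}, 16)}
	c.Start()
	c.SendMsg(MsgSyncBlock{BlockNum: 42})
	c.Stop()
}
```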
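And a sketch of the two waits the TxManager performs for each transaction it
keeps, written against go-ethereum's ethclient; the function name, polling
interval and error handling are assumptions for illustration, not the real
TxManager code:

```go
package txmanager // illustrative package name

import (
	"context"
	"math/big"
	"time"

	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// waitForConfirmations is a sketch: it first waits for the transaction to
// be accepted (mined), then waits until the chain head is at least n
// blocks past the inclusion block.
func waitForConfirmations(ctx context.Context, client *ethclient.Client,
	tx *types.Transaction, n uint64) (*types.Receipt, error) {
	// 1. Wait for the transaction to be accepted.
	var receipt *types.Receipt
	for receipt == nil {
		var err error
		if receipt, err = client.TransactionReceipt(ctx, tx.Hash()); err != nil {
			// ethereum.NotFound while the tx is still pending; keep polling.
			select {
			case <-ctx.Done():
				return nil, ctx.Err()
			case <-time.After(time.Second):
			}
		}
	}
	// 2. Wait for the transaction to be confirmed for N blocks.
	target := new(big.Int).Add(receipt.BlockNumber, new(big.Int).SetUint64(n))
	for {
		head, err := client.BlockNumber(ctx)
		if err == nil && new(big.Int).SetUint64(head).Cmp(target) >= 0 {
			return receipt, nil
		}
		select {
		case <-ctx.Done():
			return nil, ctx.Err()
		case <-time.After(time.Second):
		}
	}
}
```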
Update coordinator, call all api update functions
- Common:
- Rename Block.EthBlockNum to Block.Num to avoid unneeded repetition
- API:
- Add UpdateNetworkInfoBlock to update just block information, to be
used when the node is not yet synchronized
- Node:
- Call API.UpdateMetrics and API.UpdateRecommendedFee in a loop, with
configurable time intervals (see the update-loop sketch after this
list)
- Synchronizer:
- When mapping events by TxHash, use an array to support multiple calls
of the same function happening in the same transaction (for example, in
a single transaction a smart contract could call withdraw with delay
twice, which would generate 2 withdraw events and 2 deposit events);
see the event-mapping sketch after this list.
- In Stats, keep the entire LastBlock instead of just the blockNum
- In Stats, add lastL1BatchBlock
- Test Stats and SCVars
- Coordinator:
- Enable writing the BatchInfo in every step of the pipeline to disk
(with JSON text files) for debugging purposes.
- Move the Pipeline functionality from the Coordinator to its own struct
(Pipeline)
- Implement `shouldL1L2Batch`
- In TxManager, implement logic to perform several attempts when doing
ethereum node RPC calls before giving up on the error (both for calls
to forgeBatch and for transaction receipt queries); see the retry
sketch after this list.
- In TxManager, reorganize the flow and note the specific points at
which actions are taken when err != nil
- HistoryDB:
- Implement GetLastL1BatchBlockNum: returns the blockNum of the latest
forged l1Batch, to help the coordinator decide when to forge an
L1Batch.
- EthereumClient and test.Client:
- Update EthBlockByNumber to return the last block when the passed
number is -1.
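A sketch of the node-side update loop described above, assuming simplified
function values instead of the real API methods; the intervals are
placeholders for the configurable ones:

```go
package main

import (
	"context"
	"log"
	"time"
)

// runAPIUpdater calls the metrics and recommended-fee updaters on
// independent, configurable intervals until the context is cancelled.
func runAPIUpdater(ctx context.Context, metricsInterval, feeInterval time.Duration,
	updateMetrics, updateRecommendedFee func() error) {
	metricsTicker := time.NewTicker(metricsInterval)
	feeTicker := time.NewTicker(feeInterval)
	defer metricsTicker.Stop()
	defer feeTicker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-metricsTicker.C:
			if err := updateMetrics(); err != nil {
				log.Printf("UpdateMetrics: %v", err)
			}
		case <-feeTicker.C:
			if err := updateRecommendedFee(); err != nil {
				log.Printf("UpdateRecommendedFee: %v", err)
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	runAPIUpdater(ctx, time.Second, 2*time.Second,
		func() error { log.Println("metrics updated"); return nil },
		func() error { log.Println("recommended fee updated"); return nil },
	)
}
```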
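A sketch of the Synchronizer event-mapping change, keying by TxHash but
keeping a slice of events per hash; WithdrawEvent is an illustrative
stand-in for the real event structs:

```go
package main

import (
	"fmt"

	ethCommon "github.com/ethereum/go-ethereum/common"
)

// WithdrawEvent is an illustrative stand-in for the real event structs.
type WithdrawEvent struct {
	Idx    uint64
	TxHash ethCommon.Hash
}

// mapByTxHash keeps a slice per hash, so that a single transaction that
// emits the same event twice (e.g. two withdraw-with-delay calls) is kept
// intact instead of being overwritten.
func mapByTxHash(events []WithdrawEvent) map[ethCommon.Hash][]WithdrawEvent {
	byHash := make(map[ethCommon.Hash][]WithdrawEvent)
	for _, e := range events {
		// Append instead of overwrite, so repeated calls in the same tx are kept.
		byHash[e.TxHash] = append(byHash[e.TxHash], e)
	}
	return byHash
}

func main() {
	h := ethCommon.HexToHash("0x01")
	events := []WithdrawEvent{{Idx: 256, TxHash: h}, {Idx: 257, TxHash: h}}
	fmt.Println(len(mapByTxHash(events)[h])) // 2: both withdraws from the same tx
}
```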
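And a sketch of the TxManager retry behaviour; the attempt count and delay
are illustrative, not the configured values:

```go
package main

import (
	"fmt"
	"time"
)

// withRetries retries a flaky ethereum-node RPC call a few times before
// treating the error as real.
func withRetries(attempts int, delay time.Duration, call func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = call(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	n := 0
	err := withRetries(3, 100*time.Millisecond, func() error {
		n++
		if n < 3 { // fail twice, then succeed
			return fmt.Errorf("transient RPC error")
		}
		return nil
	})
	fmt.Println(err, n) // <nil> 3
}
```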
Fix eth events query and sync inconsistent state
- kvdb
- Fix path in Last when doing `setNew`
- Only close if db != nil, and after closing, always set db to nil (see
the close sketch after this list)
- This avoids a panic in the case where the db is closed, an error
happens soon after, and a later call tries to close it again;
pebble.Close() panics if the db is already closed.
- Avoid calling pebble methods directly when the Storage interface
already implements that method (like Close).
- statedb
- In tests, avoid calling a KVDB method if the same method is available
on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
- In *EventsByBlock methods, take blockHash as an input argument and use
it when querying the event logs. Previously the blockHash was taken
from the log results only if there was at least one log, which caused
the following issue: if there were no logs, it was not possible to
know whether the result came from the expected block or an uncle
block. By querying logs by blockHash we make sure that even if there
are no logs, they are from the right block (see the FilterLogs sketch
after this list).
- Note that the function can now be called with either a blockNum or a
blockHash, but not both at the same time.
- sync
- If there's an error during a call to Sync, call resetState, which
internally resets the stateDB to avoid stale checkpoints (and a
corresponding invalid increase in the StateDB batchNum).
- During a Sync, after every batch is processed, make sure that the
StateDB currentBatch corresponds to the batchNum in the smart contract
log/event.
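Below is a sketch of the guarded Close described in the kvdb section; the
KVDB struct and the database path are illustrative:

```go
package main

import (
	"log"

	"github.com/cockroachdb/pebble"
)

// KVDB is a simplified stand-in for the real struct. pebble.Close panics
// if the DB is already closed, so only close when db != nil and set db to
// nil afterwards so a later cleanup call becomes a no-op.
type KVDB struct {
	db *pebble.DB
}

func (k *KVDB) Close() {
	if k.db == nil {
		return // already closed (or never opened); avoid a second pebble.Close
	}
	if err := k.db.Close(); err != nil {
		log.Printf("pebble close: %v", err)
	}
	k.db = nil // make any future Close call safe
}

func main() {
	db, err := pebble.Open("/tmp/kvdb-example", &pebble.Options{})
	if err != nil {
		log.Fatal(err)
	}
	k := &KVDB{db: db}
	k.Close()
	k.Close() // safe: second call is a no-op instead of a panic
}
```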
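And a sketch of querying logs by either block number or block hash with
go-ethereum; the queryLogs helper is an assumption for illustration, not the
repository's eth client code:

```go
package ethlogs // illustrative package name

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// queryLogs accepts exactly one of blockNum or blockHash, so that an empty
// log result is still pinned to a specific block (by hash) instead of
// possibly coming from an uncle block.
func queryLogs(ctx context.Context, client *ethclient.Client,
	contract ethCommon.Address, blockNum *big.Int,
	blockHash *ethCommon.Hash) ([]types.Log, error) {
	if (blockNum == nil) == (blockHash == nil) {
		return nil, fmt.Errorf("exactly one of blockNum or blockHash must be set")
	}
	query := ethereum.FilterQuery{
		Addresses: []ethCommon.Address{contract},
		BlockHash: blockHash, // when set, FromBlock/ToBlock must stay nil
	}
	if blockNum != nil {
		query.FromBlock = blockNum
		query.ToBlock = blockNum
	}
	return client.FilterLogs(ctx, query)
}
```

Calling it with the blockHash recorded during sync (and a nil blockNum) pins
an empty result to that exact block, which is what the fix above relies on.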
package test

import (
	"context"
	"crypto/ecdsa"
	"encoding/binary"
	"math/big"
	"testing"
	"time"

	ethCommon "github.com/ethereum/go-ethereum/common"
	ethCrypto "github.com/ethereum/go-ethereum/crypto"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/eth"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
type timer struct {
	time int64
}

func (t *timer) Time() int64 {
	currentTime := t.time
	t.time++
	return currentTime
}
func TestClientInterface(t *testing.T) {
	var c eth.ClientInterface
	var timer timer
	clientSetup := NewClientSetupExample()
	client := NewClient(true, &timer, &ethCommon.Address{}, clientSetup)
	c = client
	require.NotNil(t, c)
}
func TestClientEth(t *testing.T) {
	var timer timer
	clientSetup := NewClientSetupExample()
	c := NewClient(true, &timer, &ethCommon.Address{}, clientSetup)
	blockNum, err := c.EthLastBlock()
	require.Nil(t, err)
	assert.Equal(t, int64(1), blockNum)

	block, err := c.EthBlockByNumber(context.TODO(), 0)
	require.Nil(t, err)
	assert.Equal(t, int64(0), block.Num)
	assert.Equal(t, time.Unix(0, 0), block.Timestamp)
	assert.Equal(t,
		"0x0000000000000000000000000000000000000000000000000000000000000000",
		block.Hash.Hex())

	// Mine some empty blocks
	assert.Equal(t, int64(1), c.blockNum)
	c.CtlMineBlock()
	assert.Equal(t, int64(2), c.blockNum)
	c.CtlMineBlock()
	assert.Equal(t, int64(3), c.blockNum)

	block, err = c.EthBlockByNumber(context.TODO(), 2)
	require.Nil(t, err)
	assert.Equal(t, int64(2), block.Num)
	assert.Equal(t, time.Unix(2, 0), block.Timestamp)

	// Add a token
	tokenAddr := ethCommon.HexToAddress("0x44021007485550008e0f9f1f7b506c7d970ad8ce")
	constants := eth.ERC20Consts{
		Name:     "FooBar",
		Symbol:   "FOO",
		Decimals: 4,
	}
	c.CtlAddERC20(tokenAddr, constants)
	c.CtlMineBlock()
	tokenConstants, err := c.EthERC20Consts(tokenAddr)
	require.Nil(t, err)
	assert.Equal(t, constants, *tokenConstants)
}
func TestClientAuction(t *testing.T) {
	addrBidder1 := ethCommon.HexToAddress("0x6b175474e89094c44da98b954eedeac495271d0f")
	addrBidder2 := ethCommon.HexToAddress("0xc27cadc437d067a6ec869502cc9f7F834cFc087a")
	addrForge := ethCommon.HexToAddress("0xCfAA413eEb796f328620a3630Ae39124cabcEa92")
	addrForge2 := ethCommon.HexToAddress("0x1fCb4ac309428feCc61B1C8cA5823C15A5e1a800")

	var timer timer
	clientSetup := NewClientSetupExample()
	clientSetup.AuctionVariables.ClosedAuctionSlots = 2
	clientSetup.AuctionVariables.OpenAuctionSlots = 4320
	clientSetup.AuctionVariables.DefaultSlotSetBid = [6]*big.Int{
		big.NewInt(1000), big.NewInt(1100), big.NewInt(1200),
		big.NewInt(1300), big.NewInt(1400), big.NewInt(1500)}
	c := NewClient(true, &timer, &addrBidder1, clientSetup)

	// Check several cases in which the bid doesn't succeed, and also do 2
	// successful bids.

	_, err := c.AuctionBidSimple(0, big.NewInt(1))
	assert.Equal(t, errBidClosed, tracerr.Unwrap(err))

	_, err = c.AuctionBidSimple(4323, big.NewInt(1))
	assert.Equal(t, errBidNotOpen, tracerr.Unwrap(err))

	// 101 % 6 = 5; defaultSlotSetBid[5] = 1500; 1500 + 10% = 1650
	_, err = c.AuctionBidSimple(101, big.NewInt(1650))
	assert.Equal(t, errCoordNotReg, tracerr.Unwrap(err))

	_, err = c.AuctionSetCoordinator(addrForge, "https://foo.bar")
	assert.Nil(t, err)

	_, err = c.AuctionBidSimple(3, big.NewInt(1))
	assert.Equal(t, errBidBelowMin, tracerr.Unwrap(err))

	_, err = c.AuctionBidSimple(3, big.NewInt(1650))
	assert.Nil(t, err)

	c.CtlSetAddr(addrBidder2)
	_, err = c.AuctionSetCoordinator(addrForge2, "https://foo2.bar")
	assert.Nil(t, err)

	_, err = c.AuctionBidSimple(3, big.NewInt(16))
	assert.Equal(t, errBidBelowMin, tracerr.Unwrap(err))

	// 1650 + 10% = 1815
	_, err = c.AuctionBidSimple(3, big.NewInt(1815))
	assert.Nil(t, err)

	c.CtlMineBlock()

	blockNum, err := c.EthLastBlock()
	require.Nil(t, err)

	auctionEvents, err := c.AuctionEventsByBlock(blockNum, nil)
	require.Nil(t, err)
	assert.Equal(t, 2, len(auctionEvents.NewBid))
}
func TestClientRollup(t *testing.T) {
	token1Addr := ethCommon.HexToAddress("0x6b175474e89094c44da98b954eedeac495271d0f")

	var timer timer
	clientSetup := NewClientSetupExample()
	c := NewClient(true, &timer, &ethCommon.Address{}, clientSetup)

	// Add a token
	tx, err := c.RollupAddTokenSimple(token1Addr, clientSetup.RollupVariables.FeeAddToken)
	require.Nil(t, err)
	assert.NotNil(t, tx)

	// Add some L1UserTxs
	// Create Accounts
	const N = 16
	var keys [N]*keys
	for i := 0; i < N; i++ {
		keys[i] = genKeys(int64(i))
		tx := common.L1Tx{
			FromIdx:       0,
			FromEthAddr:   keys[i].Addr,
			FromBJJ:       keys[i].BJJPublicKey.Compress(),
			TokenID:       common.TokenID(0),
			Amount:        big.NewInt(0),
			DepositAmount: big.NewInt(10 + int64(i)),
		}
		_, err := c.RollupL1UserTxERC20ETH(tx.FromBJJ, int64(tx.FromIdx), tx.DepositAmount,
			tx.Amount, uint32(tx.TokenID), int64(tx.ToIdx))
		require.Nil(t, err)
	}
	c.CtlMineBlock()

	blockNum, err := c.EthLastBlock()
	require.Nil(t, err)
	rollupEvents, err := c.RollupEventsByBlock(blockNum, nil)
	require.Nil(t, err)
	assert.Equal(t, N, len(rollupEvents.L1UserTx))
	assert.Equal(t, 1, len(rollupEvents.AddToken))

	// Forge a batch
	c.CtlAddBatch(&eth.RollupForgeBatchArgs{
		NewLastIdx:        0,
		NewStRoot:         big.NewInt(1),
		NewExitRoot:       big.NewInt(100),
		L1CoordinatorTxs:  []common.L1Tx{},
		L2TxsData:         []common.L2Tx{},
		FeeIdxCoordinator: []common.Idx{},
		VerifierIdx:       0,
		L1Batch:           true,
	})
	c.CtlMineBlock()

	blockNumA, err := c.EthLastBlock()
	require.Nil(t, err)
	rollupEvents, err = c.RollupEventsByBlock(blockNumA, nil)
	require.Nil(t, err)
	assert.Equal(t, 0, len(rollupEvents.L1UserTx))
	assert.Equal(t, 0, len(rollupEvents.AddToken))
	assert.Equal(t, 1, len(rollupEvents.ForgeBatch))

	// Simulate reorg discarding last mined block
	c.CtlRollback()
	c.CtlMineBlock()

	blockNumB, err := c.EthLastBlock()
	require.Nil(t, err)
	rollupEventsB, err := c.RollupEventsByBlock(blockNumA, nil)
	require.Nil(t, err)
	assert.Equal(t, 0, len(rollupEventsB.L1UserTx))
	assert.Equal(t, 0, len(rollupEventsB.AddToken))
	assert.Equal(t, 0, len(rollupEventsB.ForgeBatch))

	assert.Equal(t, blockNumA, blockNumB)
	assert.NotEqual(t, rollupEvents, rollupEventsB)

	// Forge again
	rollupForgeBatchArgs0 := &eth.RollupForgeBatchArgs{
		NewLastIdx:        0,
		NewStRoot:         big.NewInt(1),
		NewExitRoot:       big.NewInt(100),
		L1CoordinatorTxs:  []common.L1Tx{},
		L2TxsData:         []common.L2Tx{},
		FeeIdxCoordinator: []common.Idx{},
		VerifierIdx:       0,
		L1Batch:           true,
	}
	c.CtlAddBatch(rollupForgeBatchArgs0)
	c.CtlMineBlock()

	// Retrieve ForgeBatchArguments starting from the events
	blockNum, err = c.EthLastBlock()
	require.Nil(t, err)
	rollupEvents, err = c.RollupEventsByBlock(blockNum, nil)
	require.Nil(t, err)

	rollupForgeBatchArgs1, sender, err := c.RollupForgeBatchArgs(
		rollupEvents.ForgeBatch[0].EthTxHash, rollupEvents.ForgeBatch[0].L1UserTxsLen)
	require.Nil(t, err)
	assert.Equal(t, *c.addr, *sender)
	assert.Equal(t, rollupForgeBatchArgs0, rollupForgeBatchArgs1)
}
type keys struct {
	BJJSecretKey *babyjub.PrivateKey
	BJJPublicKey *babyjub.PublicKey
	Addr         ethCommon.Address
}

func genKeys(i int64) *keys {
	i++ // i = 0 doesn't work for the ecdsa key generation
	var sk babyjub.PrivateKey
	binary.LittleEndian.PutUint64(sk[:], uint64(i))

	// eth address
	var key ecdsa.PrivateKey
	key.D = big.NewInt(i) // only for testing
	key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
	key.Curve = ethCrypto.S256()
	addr := ethCrypto.PubkeyToAddress(key.PublicKey)

	return &keys{
		BJJSecretKey: &sk,
		BJJPublicKey: sk.Public(),
		Addr:         addr,
	}
}