Fix eth events query and sync inconsistent state

- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil. This avoids a panic in the case where the db is closed but an error happens soon after and a later call tries to close again; pebble.Close() panics if the db is already closed. (A minimal sketch of this pattern follows this message.)
  - Avoid calling pebble methods when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method when the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In the *EventsByBlock methods, take blockHash as an input argument and use it when querying the event logs. Previously the blockHash was taken from the log results only if there was at least one log, so when there were no logs it was impossible to tell whether the result came from the expected block or from an uncle block. Querying logs by blockHash guarantees that even an empty result belongs to the right block. (See the query sketch below.)
  - Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time.
- sync
  - If there's an error during a call to Sync, call resetState, which internally resets the StateDB to avoid stale checkpoints (and a corresponding invalid increase in the StateDB batchNum).
  - During a Sync, after every batch is processed, make sure that the StateDB currentBatch corresponds to the batchNum in the smart contract log/event. (See the sync sketch below.)
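A minimal sketch of the close-once pattern described in the kvdb bullet above. The store wrapper type is hypothetical (it is not part of hermez-node); the only assumption is a cockroachdb/pebble-backed database whose Close must not be called twice.

package kvdbsketch

import "github.com/cockroachdb/pebble"

// store is a hypothetical wrapper around a pebble database.
type store struct {
	db *pebble.DB
}

// Close closes the underlying database at most once. db is reset to nil even
// when Close returns an error, so any later call becomes a harmless no-op
// instead of reaching pebble.Close() on an already-closed database.
func (s *store) Close() error {
	if s.db == nil {
		return nil
	}
	err := s.db.Close()
	s.db = nil
	return err
}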
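A sketch of the events query described in the eth bullet: filter logs either by block number or by block hash, never both. It uses only the generic go-ethereum client API; eventsByBlock and its parameters are hypothetical names, not the hermez-node *EventsByBlock methods (which return decoded rollup events rather than raw logs).

package ethsketch

import (
	"context"
	"fmt"
	"math/big"

	"github.com/ethereum/go-ethereum"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// eventsByBlock queries a contract's logs for exactly one block, identified
// either by number or by hash (but not both). Filtering by hash guarantees
// that the result, even when empty, belongs to that exact block and not to
// an uncle at the same height.
func eventsByBlock(ctx context.Context, client *ethclient.Client,
	contract ethCommon.Address, blockNum *big.Int,
	blockHash *ethCommon.Hash) ([]types.Log, error) {
	if (blockNum == nil) == (blockHash == nil) {
		return nil, fmt.Errorf("pass either blockNum or blockHash, but not both")
	}
	query := ethereum.FilterQuery{Addresses: []ethCommon.Address{contract}}
	if blockHash != nil {
		// BlockHash is mutually exclusive with FromBlock/ToBlock in FilterQuery.
		query.BlockHash = blockHash
	} else {
		query.FromBlock = blockNum
		query.ToBlock = blockNum
	}
	return client.FilterLogs(ctx, query)
}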
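And a sketch of the consistency check described in the sync bullets. Every type here (stateDB, forgeBatchEvent, syncBatches) is a hypothetical stand-in, not the hermez-node synchronizer; the point is only the invariant: after processing each batch, the local batch counter must match the batch number announced by the contract event, and any failure resets the state so no stale checkpoint survives.

package syncsketch

import "fmt"

type forgeBatchEvent struct {
	BatchNum int64
}

type stateDB interface {
	ProcessBatch(ev forgeBatchEvent) error
	CurrentBatch() int64
	Reset(batchNum int64) error
}

// syncBatches applies forge-batch events in order, rolling the state back to
// the last good batch on any error or on a batch-number mismatch.
func syncBatches(db stateDB, events []forgeBatchEvent, lastGoodBatch int64) error {
	for _, ev := range events {
		if err := db.ProcessBatch(ev); err != nil {
			// Roll back so no stale checkpoint (and no bogus batchNum
			// increase) is left behind.
			if rerr := db.Reset(lastGoodBatch); rerr != nil {
				return rerr
			}
			return err
		}
		if cb := db.CurrentBatch(); cb != ev.BatchNum {
			if rerr := db.Reset(lastGoodBatch); rerr != nil {
				return rerr
			}
			return fmt.Errorf("stateDB batch %d does not match event batch %d", cb, ev.BatchNum)
		}
		lastGoodBatch = ev.BatchNum
	}
	return nil
}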
  1. package eth
  2. import (
  3. "context"
  4. "crypto/ecdsa"
  5. "encoding/binary"
  6. "encoding/hex"
  7. "math/big"
  8. "testing"
  9. ethCommon "github.com/ethereum/go-ethereum/common"
  10. ethCrypto "github.com/ethereum/go-ethereum/crypto"
  11. "github.com/hermeznetwork/hermez-node/common"
  12. "github.com/iden3/go-iden3-crypto/babyjub"
  13. "github.com/stretchr/testify/assert"
  14. "github.com/stretchr/testify/require"
  15. )
  16. var rollupClient *RollupClient
  17. var auctionClient *AuctionClient
  18. var ethHashForge ethCommon.Hash
  19. var argsForge *RollupForgeBatchArgs
  20. var absoluteMaxL1L2BatchTimeout = int64(240)
  21. var maxTx = int64(512)
  22. var nLevels = int64(32)
  23. var tokenIDERC777 uint32
  24. var tokenHEZID uint32
  25. var L1UserTxs []common.L1Tx
  26. var blockStampBucket int64
  27. type keys struct {
  28. BJJSecretKey *babyjub.PrivateKey
  29. BJJPublicKey *babyjub.PublicKey
  30. Addr ethCommon.Address
  31. }
  32. func genKeysBjj(i int64) *keys {
  33. i++ // i = 0 doesn't work for the ecdsa key generation
  34. var sk babyjub.PrivateKey
  35. binary.LittleEndian.PutUint64(sk[:], uint64(i))
  36. // eth address
  37. var key ecdsa.PrivateKey
  38. key.D = big.NewInt(i) // only for testing
  39. key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
  40. key.Curve = ethCrypto.S256()
  41. return &keys{
  42. BJJSecretKey: &sk,
  43. BJJPublicKey: sk.Public(),
  44. }
  45. }
  46. func TestRollupEventInit(t *testing.T) {
  47. rollupInit, blockNum, err := rollupClient.RollupEventInit()
  48. require.NoError(t, err)
  49. assert.Equal(t, int64(19), blockNum)
  50. assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)
  51. assert.Equal(t, big.NewInt(10), rollupInit.FeeAddToken)
  52. assert.Equal(t, uint64(60*60*24*7*2), rollupInit.WithdrawalDelay)
  53. }
  54. func TestRollupConstants(t *testing.T) {
  55. rollupConstants, err := rollupClient.RollupConstants()
  56. require.NoError(t, err)
  57. assert.Equal(t, absoluteMaxL1L2BatchTimeout, rollupConstants.AbsoluteMaxL1L2BatchTimeout)
  58. assert.Equal(t, auctionAddressConst, rollupConstants.HermezAuctionContract)
  59. assert.Equal(t, tokenHEZAddressConst, rollupConstants.TokenHEZ)
  60. assert.Equal(t, maxTx, rollupConstants.Verifiers[0].MaxTx)
  61. assert.Equal(t, nLevels, rollupConstants.Verifiers[0].NLevels)
  62. assert.Equal(t, governanceAddressConst, rollupConstants.HermezGovernanceAddress)
  63. assert.Equal(t, wdelayerAddressConst, rollupConstants.WithdrawDelayerContract)
  64. }
  65. func TestRollupRegisterTokensCount(t *testing.T) {
  66. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  67. require.NoError(t, err)
  68. assert.Equal(t, big.NewInt(1), registerTokensCount)
  69. }
  70. func TestRollupAddToken(t *testing.T) {
  71. feeAddToken := big.NewInt(10)
  72. // Addtoken ERC20Permit
  73. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  74. require.NoError(t, err)
  75. _, err = rollupClient.RollupAddToken(tokenHEZAddressConst, feeAddToken, deadline)
  76. require.NoError(t, err)
  77. currentBlockNum, err := rollupClient.client.EthLastBlock()
  78. require.NoError(t, err)
  79. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  80. require.NoError(t, err)
  81. assert.Equal(t, tokenHEZAddressConst, rollupEvents.AddToken[0].TokenAddress)
  82. assert.Equal(t, registerTokensCount, common.TokenID(rollupEvents.AddToken[0].TokenID).BigInt())
  83. tokenHEZID = rollupEvents.AddToken[0].TokenID
  84. }
  85. func TestRollupForgeBatch(t *testing.T) {
  86. chainid, _ := auctionClient.client.Client().ChainID(context.Background())
  87. // Register Coordinator
  88. forgerAddress := governanceAddressConst
  89. _, err := auctionClient.AuctionSetCoordinator(forgerAddress, URL)
  90. require.NoError(t, err)
  91. // MultiBid
  92. currentSlot, err := auctionClient.AuctionGetCurrentSlotNumber()
  93. require.NoError(t, err)
  94. slotSet := [6]bool{true, false, true, false, true, false}
  95. maxBid := new(big.Int)
  96. maxBid.SetString("15000000000000000000", 10)
  97. minBid := new(big.Int)
  98. minBid.SetString("11000000000000000000", 10)
  99. budget := new(big.Int)
  100. budget.SetString("45200000000000000000", 10)
  101. _, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
  102. maxBid, minBid, deadline)
  103. require.NoError(t, err)
  104. // Add Blocks
  105. blockNum := int64(int(blocksPerSlot)*int(currentSlot+4) + int(genesisBlock))
  106. currentBlockNum, err := auctionClient.client.EthLastBlock()
  107. require.NoError(t, err)
  108. blocksToAdd := blockNum - currentBlockNum
  109. addBlocks(blocksToAdd, ethClientDialURL)
  110. // Forge Batch 1
  111. args := new(RollupForgeBatchArgs)
  112. // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
  113. args.FeeIdxCoordinator = []common.Idx{}
  114. l1CoordinatorBytes, err := hex.DecodeString(
  115. "1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf" +
  116. "42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230" +
  117. "de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
  118. require.NoError(t, err)
  119. numTxsL1 := len(l1CoordinatorBytes) / common.RollupConstL1CoordinatorTotalBytes
  120. for i := 0; i < numTxsL1; i++ {
  121. bytesL1Coordinator :=
  122. l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*
  123. common.RollupConstL1CoordinatorTotalBytes]
  124. var signature []byte
  125. v := bytesL1Coordinator[0]
  126. s := bytesL1Coordinator[1:33]
  127. r := bytesL1Coordinator[33:65]
  128. signature = append(signature, r[:]...)
  129. signature = append(signature, s[:]...)
  130. signature = append(signature, v)
  131. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, chainid, rollupClient.address)
  132. require.NoError(t, err)
  133. args.L1CoordinatorTxs = append(args.L1CoordinatorTxs, *l1Tx)
  134. args.L1CoordinatorTxsAuths = append(args.L1CoordinatorTxsAuths, signature)
  135. }
  136. args.L1UserTxs = []common.L1Tx{}
  137. args.L2TxsData = []common.L2Tx{}
  138. newStateRoot := new(big.Int)
  139. newStateRoot.SetString(
  140. "18317824016047294649053625209337295956588174734569560016974612130063629505228",
  141. 10)
  142. newExitRoot := new(big.Int)
  143. bytesNumExitRoot, err := hex.DecodeString(
  144. "10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
  145. require.NoError(t, err)
  146. newExitRoot.SetBytes(bytesNumExitRoot)
  147. args.NewLastIdx = int64(300)
  148. args.NewStRoot = newStateRoot
  149. args.NewExitRoot = newExitRoot
  150. args.L1Batch = true
  151. args.VerifierIdx = 0
  152. args.ProofA[0] = big.NewInt(0)
  153. args.ProofA[1] = big.NewInt(0)
  154. args.ProofB[0][0] = big.NewInt(0)
  155. args.ProofB[0][1] = big.NewInt(0)
  156. args.ProofB[1][0] = big.NewInt(0)
  157. args.ProofB[1][1] = big.NewInt(0)
  158. args.ProofC[0] = big.NewInt(0)
  159. args.ProofC[1] = big.NewInt(0)
  160. argsForge = args
  161. _, err = rollupClient.RollupForgeBatch(argsForge, nil)
  162. require.NoError(t, err)
  163. currentBlockNum, err = rollupClient.client.EthLastBlock()
  164. require.NoError(t, err)
  165. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  166. require.NoError(t, err)
  167. assert.Equal(t, int64(1), rollupEvents.ForgeBatch[0].BatchNum)
  168. assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
  169. ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
  170. }
  171. func TestRollupForgeBatchArgs(t *testing.T) {
  172. args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
  173. require.NoError(t, err)
  174. assert.Equal(t, *sender, rollupClient.client.account.Address)
  175. assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
  176. assert.Equal(t, argsForge.L1Batch, args.L1Batch)
  177. assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
  178. assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
  179. assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
  180. assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
  181. assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
  182. assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
  183. }
  184. func TestRollupUpdateForgeL1L2BatchTimeout(t *testing.T) {
  185. newForgeL1L2BatchTimeout := int64(222)
  186. _, err := rollupClient.RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout)
  187. require.NoError(t, err)
  188. currentBlockNum, err := rollupClient.client.EthLastBlock()
  189. require.NoError(t, err)
  190. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  191. require.NoError(t, err)
  192. assert.Equal(t, newForgeL1L2BatchTimeout,
  193. rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
  194. }
  195. func TestRollupUpdateFeeAddToken(t *testing.T) {
  196. newFeeAddToken := big.NewInt(12)
  197. _, err := rollupClient.RollupUpdateFeeAddToken(newFeeAddToken)
  198. require.NoError(t, err)
  199. currentBlockNum, err := rollupClient.client.EthLastBlock()
  200. require.NoError(t, err)
  201. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  202. require.NoError(t, err)
  203. assert.Equal(t, newFeeAddToken, rollupEvents.UpdateFeeAddToken[0].NewFeeAddToken)
  204. }
  205. func TestRollupUpdateBucketsParameters(t *testing.T) {
  206. bucketsParameters := make([]RollupUpdateBucketsParameters, 5)
  207. ceilUSD, _ := new(big.Int).SetString("10000000", 10)
  208. for i := range bucketsParameters {
  209. bucketsParameters[i].CeilUSD = big.NewInt(0).Mul(ceilUSD, big.NewInt(int64(i+1)))
  210. bucketsParameters[i].BlockStamp = big.NewInt(int64(0))
  211. bucketsParameters[i].Withdrawals = big.NewInt(int64(i + 1))
  212. bucketsParameters[i].RateBlocks = big.NewInt(int64(i+1) * 4)
  213. bucketsParameters[i].RateWithdrawals = big.NewInt(int64(3))
  214. bucketsParameters[i].MaxWithdrawals = big.NewInt(int64(1215752192))
  215. }
  216. _, err := rollupClient.RollupUpdateBucketsParameters(bucketsParameters)
  217. require.NoError(t, err)
  218. currentBlockNum, err := rollupClient.client.EthLastBlock()
  219. require.NoError(t, err)
  220. blockStampBucket = currentBlockNum
  221. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  222. require.NoError(t, err)
  223. for i := range bucketsParameters {
  224. assert.Equal(t, 0, bucketsParameters[i].CeilUSD.Cmp(rollupEvents.UpdateBucketsParameters[0].ArrayBuckets[i].CeilUSD))
  225. assert.Equal(t, 0, bucketsParameters[i].BlockStamp.Cmp(rollupEvents.UpdateBucketsParameters[0].ArrayBuckets[i].BlockStamp))
  226. assert.Equal(t, 0, bucketsParameters[i].Withdrawals.Cmp(rollupEvents.UpdateBucketsParameters[0].ArrayBuckets[i].Withdrawals))
  227. assert.Equal(t, 0, bucketsParameters[i].RateBlocks.Cmp(rollupEvents.UpdateBucketsParameters[0].ArrayBuckets[i].RateBlocks))
  228. assert.Equal(t, 0, bucketsParameters[i].RateWithdrawals.Cmp(rollupEvents.UpdateBucketsParameters[0].ArrayBuckets[i].RateWithdrawals))
  229. assert.Equal(t, 0, bucketsParameters[i].MaxWithdrawals.Cmp(rollupEvents.UpdateBucketsParameters[0].ArrayBuckets[i].MaxWithdrawals))
  230. }
  231. }
  232. func TestRollupUpdateWithdrawalDelay(t *testing.T) {
  233. newWithdrawalDelay := int64(100000)
  234. _, err := rollupClient.RollupUpdateWithdrawalDelay(newWithdrawalDelay)
  235. require.NoError(t, err)
  236. currentBlockNum, err := rollupClient.client.EthLastBlock()
  237. require.NoError(t, err)
  238. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  239. require.NoError(t, err)
  240. assert.Equal(t, newWithdrawalDelay,
  241. int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
  242. }
func TestRollupUpdateTokenExchange(t *testing.T) {
	var addressArray []ethCommon.Address
	var valueArray []uint64
	addressToken1, err := rollupClient.hermez.TokenList(nil, big.NewInt(1))
	require.NoError(t, err)
	addressArray = append(addressArray, addressToken1)
	tokenPrice := 10
	valueArray = append(valueArray, uint64(tokenPrice*1e14))
	_, err = rollupClient.RollupUpdateTokenExchange(addressArray, valueArray)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, addressArray, rollupEvents.UpdateTokenExchange[0].AddressArray)
	assert.Equal(t, valueArray, rollupEvents.UpdateTokenExchange[0].ValueArray)
}
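
// The next three tests submit L1 user transactions that create an account and deposit
// funds, one per funding path (ETH, ERC20, ERC20-permit), and check that the L1UserTx
// event reproduces the transaction that was sent.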
func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(2)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
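
// assertL1UserTxFields is an illustrative helper sketch, not part of the original test
// file: it factors out the event-field checks repeated in the L1UserTx tests of this
// file. It only uses types that already appear here (common.L1Tx, ethCommon.Address);
// the helper itself and its name are new. A test could call it as, for example:
//
//	assertL1UserTxFields(t, l1Tx, rollupEvents.L1UserTx[0].L1UserTx,
//		rollupClientAux.client.account.Address)
func assertL1UserTxFields(t *testing.T, want, got common.L1Tx, wantFromEthAddr ethCommon.Address) {
	// Compare the fields that every L1UserTx test below asserts individually.
	assert.Equal(t, want.ToIdx, got.ToIdx)
	assert.Equal(t, want.DepositAmount, got.DepositAmount)
	assert.Equal(t, want.TokenID, got.TokenID)
	assert.Equal(t, want.Amount, got.Amount)
	assert.Equal(t, wantFromEthAddr, got.FromEthAddr)
}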
func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(1)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(3)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
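
// The next three tests submit deposit-only L1 user transactions to accounts created
// above (idx 256, 257 and 258), again once per funding path (ETH, ERC20, ERC20-permit),
// and check the resulting L1UserTx event.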
func TestRollupL1UserTxETHDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: common.EmptyBJJComp,
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20Deposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: common.EmptyBJJComp,
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
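
// The next three tests combine a deposit with a transfer to another existing account
// (non-zero ToIdx), once per funding path, and check the resulting L1UserTx event.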
func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
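
// The next three tests submit the CreateAccountDepositTransfer variant of the L1 user
// transaction, once per funding path, and check the resulting L1UserTx event as above.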
func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("40000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
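
// The next three tests submit force-transfer L1 user transactions: DepositAmount is
// zero and only Amount moves between two existing account indexes, once per funding
// path.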
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(259)
	toIdxInt64 := int64(260)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
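
// The next three tests submit force-exit L1 user transactions: a zero DepositAmount
// and ToIdx 1 (the exit index), once per funding path.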
func TestRollupL1UserTxETHForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(1)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(1)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(1)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
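
// TestRollupForgeBatch2 forges batch 2 with the previously prepared argsForge, then
// builds the arguments for batch 3: the L1 user transactions queued by the tests above
// (re-encoded through their data-availability representation), two identical L2
// transactions, new state and exit roots, and an empty proof. It checks the ForgeBatch
// events for batch numbers 2 and 3 and stores the forge transaction hash in
// ethHashForge.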
func TestRollupForgeBatch2(t *testing.T) {
	// Forge Batch 2
	_, err := rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(2), rollupEvents.ForgeBatch[0].BatchNum)
	// Forge Batch 3
	args := new(RollupForgeBatchArgs)
	// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
	args.FeeIdxCoordinator = []common.Idx{}
	args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
	args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
	for i := 0; i < len(L1UserTxs); i++ {
		l1UserTx := L1UserTxs[i]
		l1UserTx.EffectiveAmount = l1UserTx.Amount
		l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
		require.NoError(t, err)
		l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes,
			uint32(nLevels))
		require.NoError(t, err)
		args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
	}
	newStateRoot := new(big.Int)
	newStateRoot.SetString(
		"18317824016047294649053625209337295956588174734569560016974612130063629505228",
		10)
	newExitRoot := new(big.Int)
	newExitRoot.SetString(
		"1114281409737474688393837964161044726766678436313681099613347372031079422302",
		10)
	amount := new(big.Int)
	amount.SetString("79000000", 10)
	l2Tx := common.L2Tx{
		ToIdx: 256,
		Amount: amount,
		FromIdx: 257,
		Fee: 201,
	}
	l2Txs := []common.L2Tx{}
	l2Txs = append(l2Txs, l2Tx)
	l2Txs = append(l2Txs, l2Tx)
	args.L2TxsData = l2Txs
	args.NewLastIdx = int64(1000)
	args.NewStRoot = newStateRoot
	args.NewExitRoot = newExitRoot
	args.L1Batch = true
	args.VerifierIdx = 0
	args.ProofA[0] = big.NewInt(0)
	args.ProofA[1] = big.NewInt(0)
	args.ProofB[0] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofB[1] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofC[0] = big.NewInt(0)
	args.ProofC[1] = big.NewInt(0)
	argsForge = args
	_, err = rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err = rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err = rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(3), rollupEvents.ForgeBatch[0].BatchNum)
	assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
	ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
}
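
// TestRollupForgeBatchArgs2 reads the batch-3 forge call back from its transaction
// hash with RollupForgeBatchArgs and checks that the decoded arguments and the sender
// match what TestRollupForgeBatch2 submitted.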
func TestRollupForgeBatchArgs2(t *testing.T) {
	args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
	require.NoError(t, err)
	assert.Equal(t, *sender, rollupClient.client.account.Address)
	assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
	assert.Equal(t, argsForge.L1Batch, args.L1Batch)
	assert.Equal(t, argsForge.L1UserTxs, args.L1UserTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
	assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
	assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
	assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
	assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
}
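
// TestRollupWithdrawMerkleProof performs an instant withdraw for idx 256 against exit
// root 3, passing an empty list of Merkle siblings, and checks the Withdraw and
// UpdateBucketWithdraw events.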
func TestRollupWithdrawMerkleProof(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	var pkComp babyjub.PublicKeyComp
	pkCompBE, err :=
		hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
	require.NoError(t, err)
	pkCompLE := common.SwapEndianness(pkCompBE)
	copy(pkComp[:], pkCompLE)
	require.NoError(t, err)
	tokenID := uint32(tokenHEZID)
	numExitRoot := int64(3)
	fromIdx := int64(256)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	// siblingBytes0, err := new(big.Int).SetString(
	// 	"19508838618377323910556678335932426220272947530531646682154552299216398748115",
	// 	10)
	// require.NoError(t, err)
	// siblingBytes1, err := new(big.Int).SetString(
	// 	"15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
	// require.NoError(t, err)
	var siblings []*big.Int
	// siblings = append(siblings, siblingBytes0)
	// siblings = append(siblings, siblingBytes1)
	instantWithdraw := true
	_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx,
		amount, siblings, instantWithdraw)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, uint64(fromIdx), rollupEvents.Withdraw[0].Idx)
	assert.Equal(t, instantWithdraw, rollupEvents.Withdraw[0].InstantWithdraw)
	assert.Equal(t, uint64(numExitRoot), rollupEvents.Withdraw[0].NumExitRoot)
	// tokenAmount = 20
	// amountUSD = tokenAmount * tokenPrice = 20 * 10 = 200
	// Bucket[0].ceilUSD = 100, Bucket[1].ceilUSD = 200, ...
	// Bucket 1
	// Bucket[0].withdrawals = 1, Bucket[1].withdrawals = 2, ...
	// Bucket[1].withdrawals - 1 = 1
	assert.Equal(t, 0, rollupEvents.UpdateBucketWithdraw[0].NumBucket)
	assert.Equal(t, int64(442), rollupEvents.UpdateBucketWithdraw[0].BlockStamp)
	assert.Equal(t, big.NewInt(15), rollupEvents.UpdateBucketWithdraw[0].Withdrawals)
}
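
// lastBlockRollupEvents is an illustrative sketch, not part of the original test file:
// every test above repeats the same query pattern (read the last block number, then
// fetch that block's rollup events). The *RollupEvents return type is an assumption
// here and should be adjusted to match the actual declaration of RollupEventsByBlock
// in this package.
func lastBlockRollupEvents(t *testing.T) *RollupEvents {
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	return rollupEvents
}

// TestRollupSafeMode puts the rollup contract into safe mode and checks that a
// SafeMode event equal to the zero-value RollupEventSafeMode is emitted.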
func TestRollupSafeMode(t *testing.T) {
	_, err := rollupClient.RollupSafeMode()
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	auxEvent := new(RollupEventSafeMode)
	assert.Equal(t, auxEvent, &rollupEvents.SafeMode[0])
  941. }