Fix eth events query and sync inconsistent state

- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil (a minimal sketch of this guard follows after this message). This avoids a panic in the case where the db is closed, an error occurs soon after, and a future call tries to close again; pebble.Close() panics if the db is already closed.
  - Avoid calling pebble methods directly when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method if the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In *EventsByBlock methods, take blockHash as an input argument and use it when querying the event logs (a query sketch follows after this message). Previously the blockHash was taken from the log results *only if* there was at least one log. This caused the following issue: if there were no logs, it was not possible to know whether the result came from the expected block or from an uncle block. By querying logs by blockHash we make sure that even if there are no logs, they are from the right block.
  - Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time.
- sync
  - If there's an error during a call to Sync, call resetState, which internally resets the StateDB to avoid stale checkpoints (and a corresponding invalid increase of the StateDB batchNum).
  - During a Sync, after every batch processed, make sure that the StateDB CurrentBatch corresponds to the batchNum in the smart contract log/event.
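A minimal sketch of the close guard described in the kvdb notes above, assuming a wrapper type that holds a *pebble.DB; the KVDB name and db field are illustrative, not the actual hermez-node kvdb types.

package kvdb

import "github.com/cockroachdb/pebble"

// KVDB is an illustrative wrapper; the real kvdb type holds more state.
type KVDB struct {
    db *pebble.DB
}

// Close closes the underlying pebble DB at most once. pebble's Close panics
// if the DB is already closed, so the pointer is checked first and cleared
// afterwards; any later Close call becomes a harmless no-op.
func (k *KVDB) Close() error {
    if k.db == nil {
        return nil
    }
    err := k.db.Close()
    k.db = nil // always clear, even on error, so pebble.Close is never called twice
    return err
}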
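A minimal sketch of the blockNum-or-blockHash query described in the eth notes above, using go-ethereum's ethereum.FilterQuery. The helper name eventLogsByBlock and its signature are assumptions for illustration, not the actual *EventsByBlock implementation; the point is that filtering by blockHash makes even an empty log result attributable to the requested block rather than an uncle.

package eth

import (
    "context"
    "fmt"
    "math/big"

    ethereum "github.com/ethereum/go-ethereum"
    ethCommon "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/ethclient"
)

// eventLogsByBlock queries the contract's logs for exactly one of blockNum or
// blockHash. Passing both (or neither) is rejected, mirroring the constraint
// described in the commit message.
func eventLogsByBlock(ctx context.Context, ec *ethclient.Client,
    contract ethCommon.Address, blockNum *big.Int,
    blockHash *ethCommon.Hash) ([]types.Log, error) {
    if (blockNum == nil) == (blockHash == nil) {
        return nil, fmt.Errorf("pass either blockNum or blockHash, but not both")
    }
    query := ethereum.FilterQuery{
        Addresses: []ethCommon.Address{contract},
    }
    if blockHash != nil {
        // Filtering by hash pins the result to this exact block, so an empty
        // log list still proves "no events in the requested block".
        query.BlockHash = blockHash
    } else {
        query.FromBlock = blockNum
        query.ToBlock = blockNum
    }
    return ec.FilterLogs(ctx, query)
}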
  1. package eth
  2. import (
  3. "context"
  4. "crypto/ecdsa"
  5. "encoding/binary"
  6. "encoding/hex"
  7. "math/big"
  8. "testing"
  9. ethCommon "github.com/ethereum/go-ethereum/common"
  10. ethCrypto "github.com/ethereum/go-ethereum/crypto"
  11. "github.com/hermeznetwork/hermez-node/common"
  12. "github.com/iden3/go-iden3-crypto/babyjub"
  13. "github.com/stretchr/testify/assert"
  14. "github.com/stretchr/testify/require"
  15. )
  16. var rollupClient *RollupClient
  17. var auctionClient *AuctionClient
  18. var ethHashForge ethCommon.Hash
  19. var argsForge *RollupForgeBatchArgs
  20. var absoluteMaxL1L2BatchTimeout = int64(240)
  21. var maxTx = int64(512)
  22. var nLevels = int64(32)
  23. var tokenIDERC777 uint32
  24. var tokenHEZID uint32
  25. var L1UserTxs []common.L1Tx
  26. var blockStampBucket int64
  27. type keys struct {
  28. BJJSecretKey *babyjub.PrivateKey
  29. BJJPublicKey *babyjub.PublicKey
  30. Addr ethCommon.Address
  31. }
  32. func genKeysBjj(i int64) *keys {
  33. i++ // i = 0 doesn't work for the ecdsa key generation
  34. var sk babyjub.PrivateKey
  35. binary.LittleEndian.PutUint64(sk[:], uint64(i))
  36. // eth address
  37. var key ecdsa.PrivateKey
  38. key.D = big.NewInt(i) // only for testing
  39. key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
  40. key.Curve = ethCrypto.S256()
  41. return &keys{
  42. BJJSecretKey: &sk,
  43. BJJPublicKey: sk.Public(),
  44. }
  45. }
  46. func TestRollupEventInit(t *testing.T) {
  47. rollupInit, blockNum, err := rollupClient.RollupEventInit()
  48. require.NoError(t, err)
  49. assert.Equal(t, int64(19), blockNum)
  50. assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)
  51. assert.Equal(t, big.NewInt(10), rollupInit.FeeAddToken)
  52. assert.Equal(t, uint64(60*60*24*7*2), rollupInit.WithdrawalDelay)
  53. }
  54. func TestRollupConstants(t *testing.T) {
  55. rollupConstants, err := rollupClient.RollupConstants()
  56. require.NoError(t, err)
  57. assert.Equal(t, absoluteMaxL1L2BatchTimeout, rollupConstants.AbsoluteMaxL1L2BatchTimeout)
  58. assert.Equal(t, auctionAddressConst, rollupConstants.HermezAuctionContract)
  59. assert.Equal(t, tokenHEZAddressConst, rollupConstants.TokenHEZ)
  60. assert.Equal(t, maxTx, rollupConstants.Verifiers[0].MaxTx)
  61. assert.Equal(t, nLevels, rollupConstants.Verifiers[0].NLevels)
  62. assert.Equal(t, governanceAddressConst, rollupConstants.HermezGovernanceAddress)
  63. assert.Equal(t, wdelayerAddressConst, rollupConstants.WithdrawDelayerContract)
  64. }
  65. func TestRollupRegisterTokensCount(t *testing.T) {
  66. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  67. require.NoError(t, err)
  68. assert.Equal(t, big.NewInt(1), registerTokensCount)
  69. }
  70. func TestRollupAddToken(t *testing.T) {
  71. feeAddToken := big.NewInt(10)
  72. // Addtoken ERC20Permit
  73. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  74. require.NoError(t, err)
  75. _, err = rollupClient.RollupAddToken(tokenHEZAddressConst, feeAddToken, deadline)
  76. require.NoError(t, err)
  77. currentBlockNum, err := rollupClient.client.EthLastBlock()
  78. require.NoError(t, err)
  79. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  80. require.NoError(t, err)
  81. assert.Equal(t, tokenHEZAddressConst, rollupEvents.AddToken[0].TokenAddress)
  82. assert.Equal(t, registerTokensCount, common.TokenID(rollupEvents.AddToken[0].TokenID).BigInt())
  83. tokenHEZID = rollupEvents.AddToken[0].TokenID
  84. }
  85. func TestRollupForgeBatch(t *testing.T) {
  86. chainid, _ := auctionClient.client.Client().ChainID(context.Background())
  87. // Register Coordinator
  88. forgerAddress := governanceAddressConst
  89. _, err := auctionClient.AuctionSetCoordinator(forgerAddress, URL)
  90. require.NoError(t, err)
  91. // MultiBid
  92. currentSlot, err := auctionClient.AuctionGetCurrentSlotNumber()
  93. require.NoError(t, err)
  94. slotSet := [6]bool{true, false, true, false, true, false}
  95. maxBid := new(big.Int)
  96. maxBid.SetString("15000000000000000000", 10)
  97. minBid := new(big.Int)
  98. minBid.SetString("11000000000000000000", 10)
  99. budget := new(big.Int)
  100. budget.SetString("45200000000000000000", 10)
  101. _, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet, maxBid, minBid, deadline)
  102. require.NoError(t, err)
  103. // Add Blocks
  104. blockNum := int64(int(blocksPerSlot)*int(currentSlot+4) + int(genesisBlock))
  105. currentBlockNum, err := auctionClient.client.EthLastBlock()
  106. require.NoError(t, err)
  107. blocksToAdd := blockNum - currentBlockNum
  108. addBlocks(blocksToAdd, ethClientDialURL)
  109. // Forge Batch 1
  110. args := new(RollupForgeBatchArgs)
  111. args.FeeIdxCoordinator = []common.Idx{} // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
  112. l1CoordinatorBytes, err := hex.DecodeString("1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
  113. require.NoError(t, err)
  114. numTxsL1 := len(l1CoordinatorBytes) / common.RollupConstL1CoordinatorTotalBytes
  115. for i := 0; i < numTxsL1; i++ {
  116. bytesL1Coordinator := l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes]
  117. var signature []byte
  118. v := bytesL1Coordinator[0]
  119. s := bytesL1Coordinator[1:33]
  120. r := bytesL1Coordinator[33:65]
  121. signature = append(signature, r[:]...)
  122. signature = append(signature, s[:]...)
  123. signature = append(signature, v)
  124. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, chainid, rollupClient.address)
  125. require.NoError(t, err)
  126. args.L1CoordinatorTxs = append(args.L1CoordinatorTxs, *l1Tx)
  127. args.L1CoordinatorTxsAuths = append(args.L1CoordinatorTxsAuths, signature)
  128. }
  129. args.L1UserTxs = []common.L1Tx{}
  130. args.L2TxsData = []common.L2Tx{}
  131. newStateRoot := new(big.Int)
  132. newStateRoot.SetString("18317824016047294649053625209337295956588174734569560016974612130063629505228", 10)
  133. newExitRoot := new(big.Int)
  134. bytesNumExitRoot, err := hex.DecodeString("10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
  135. require.NoError(t, err)
  136. newExitRoot.SetBytes(bytesNumExitRoot)
  137. args.NewLastIdx = int64(300)
  138. args.NewStRoot = newStateRoot
  139. args.NewExitRoot = newExitRoot
  140. args.L1Batch = true
  141. args.VerifierIdx = 0
  142. args.ProofA[0] = big.NewInt(0)
  143. args.ProofA[1] = big.NewInt(0)
  144. args.ProofB[0][0] = big.NewInt(0)
  145. args.ProofB[0][1] = big.NewInt(0)
  146. args.ProofB[1][0] = big.NewInt(0)
  147. args.ProofB[1][1] = big.NewInt(0)
  148. args.ProofC[0] = big.NewInt(0)
  149. args.ProofC[1] = big.NewInt(0)
  150. argsForge = args
  151. _, err = rollupClient.RollupForgeBatch(argsForge, nil)
  152. require.NoError(t, err)
  153. currentBlockNum, err = rollupClient.client.EthLastBlock()
  154. require.NoError(t, err)
  155. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  156. require.NoError(t, err)
  157. assert.Equal(t, int64(1), rollupEvents.ForgeBatch[0].BatchNum)
  158. assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
  159. ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
  160. }
  161. func TestRollupForgeBatchArgs(t *testing.T) {
  162. args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
  163. require.NoError(t, err)
  164. assert.Equal(t, *sender, rollupClient.client.account.Address)
  165. assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
  166. assert.Equal(t, argsForge.L1Batch, args.L1Batch)
  167. assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
  168. assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
  169. assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
  170. assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
  171. assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
  172. assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
  173. }
  174. func TestRollupUpdateForgeL1L2BatchTimeout(t *testing.T) {
  175. newForgeL1L2BatchTimeout := int64(222)
  176. _, err := rollupClient.RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout)
  177. require.NoError(t, err)
  178. currentBlockNum, err := rollupClient.client.EthLastBlock()
  179. require.NoError(t, err)
  180. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  181. require.NoError(t, err)
  182. assert.Equal(t, newForgeL1L2BatchTimeout, rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
  183. }
  184. func TestRollupUpdateFeeAddToken(t *testing.T) {
  185. newFeeAddToken := big.NewInt(12)
  186. _, err := rollupClient.RollupUpdateFeeAddToken(newFeeAddToken)
  187. require.NoError(t, err)
  188. currentBlockNum, err := rollupClient.client.EthLastBlock()
  189. require.NoError(t, err)
  190. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  191. require.NoError(t, err)
  192. assert.Equal(t, newFeeAddToken, rollupEvents.UpdateFeeAddToken[0].NewFeeAddToken)
  193. }
  194. func TestRollupUpdateBucketsParameters(t *testing.T) {
  195. var bucketsParameters [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
  196. for i := range bucketsParameters {
  197. bucketsParameters[i].CeilUSD = big.NewInt(int64((i + 1) * 100))
  198. bucketsParameters[i].Withdrawals = big.NewInt(int64(i + 1))
  199. bucketsParameters[i].BlockWithdrawalRate = big.NewInt(int64(i+1) * 100)
  200. bucketsParameters[i].MaxWithdrawals = big.NewInt(int64(100000000000))
  201. }
  202. _, err := rollupClient.RollupUpdateBucketsParameters(bucketsParameters)
  203. require.NoError(t, err)
  204. currentBlockNum, err := rollupClient.client.EthLastBlock()
  205. require.NoError(t, err)
  206. blockStampBucket = currentBlockNum
  207. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  208. require.NoError(t, err)
  209. assert.Equal(t, bucketsParameters, rollupEvents.UpdateBucketsParameters[0].ArrayBuckets)
  210. }
  211. func TestRollupUpdateWithdrawalDelay(t *testing.T) {
  212. newWithdrawalDelay := int64(100000)
  213. _, err := rollupClient.RollupUpdateWithdrawalDelay(newWithdrawalDelay)
  214. require.NoError(t, err)
  215. currentBlockNum, err := rollupClient.client.EthLastBlock()
  216. require.NoError(t, err)
  217. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  218. require.NoError(t, err)
  219. assert.Equal(t, newWithdrawalDelay, int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
  220. }
  221. func TestRollupUpdateTokenExchange(t *testing.T) {
  222. var addressArray []ethCommon.Address
  223. var valueArray []uint64
  224. addressToken1, err := rollupClient.hermez.TokenList(nil, big.NewInt(1))
  225. addressArray = append(addressArray, addressToken1)
  226. tokenPrice := 10
  227. valueArray = append(valueArray, uint64(tokenPrice*1e14))
  228. require.NoError(t, err)
  229. _, err = rollupClient.RollupUpdateTokenExchange(addressArray, valueArray)
  230. require.NoError(t, err)
  231. currentBlockNum, err := rollupClient.client.EthLastBlock()
  232. require.NoError(t, err)
  233. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  234. require.NoError(t, err)
  235. assert.Equal(t, addressArray, rollupEvents.UpdateTokenExchange[0].AddressArray)
  236. assert.Equal(t, valueArray, rollupEvents.UpdateTokenExchange[0].ValueArray)
  237. }
func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(2)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(1)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(3)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
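// Deposit variants: the deposit goes to an already existing account idx (256, 257,
// 258), so FromBJJ is empty or omitted and no account is created.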
func TestRollupL1UserTxETHDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: common.EmptyBJJComp,
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20Deposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: common.EmptyBJJComp,
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
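// DepositTransfer variants: the L1 tx carries both a DepositAmount and a non-zero
// Amount transferred to ToIdx in the same operation.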
func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
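// CreateAccountDepositTransfer variants of the L1 user transaction, again for ETH,
// ERC20 and ERC20-permit tokens.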
func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("40000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
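// ForceTransfer variants: no deposit (DepositAmount is 0), only a transfer of Amount
// between two existing idxs.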
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(259)
	toIdxInt64 := int64(260)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
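// ForceExit variants: ToIdx is set to 1 (the exit idx) and DepositAmount is 0.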
func TestRollupL1UserTxETHForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(1)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(1)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(1)
	fromIdx := new(common.Idx)
	*fromIdx = 0
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
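// TestRollupForgeBatch2 forges batch 2 reusing argsForge, then builds new forge
// arguments that include the queued L1UserTxs and two L2 txs, forges batch 3 and
// keeps its tx hash in ethHashForge for TestRollupForgeBatchArgs2.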
func TestRollupForgeBatch2(t *testing.T) {
	// Forge Batch 2
	_, err := rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(2), rollupEvents.ForgeBatch[0].BatchNum)
	// Forge Batch 3
	args := new(RollupForgeBatchArgs)
	args.FeeIdxCoordinator = []common.Idx{} // When encoded, the 0 idx repeated 64 times means that no idx is specified to collect fees.
	args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
	args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
	for i := 0; i < len(L1UserTxs); i++ {
		l1UserTx := L1UserTxs[i]
		l1UserTx.EffectiveAmount = l1UserTx.Amount
		l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
		require.NoError(t, err)
		l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes, uint32(nLevels))
		require.NoError(t, err)
		args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
	}
	newStateRoot := new(big.Int)
	newStateRoot.SetString("18317824016047294649053625209337295956588174734569560016974612130063629505228", 10)
	newExitRoot := new(big.Int)
	newExitRoot.SetString("1114281409737474688393837964161044726766678436313681099613347372031079422302", 10)
	amount := new(big.Int)
	amount.SetString("79000000", 10)
	l2Tx := common.L2Tx{
		ToIdx: 256,
		Amount: amount,
		FromIdx: 257,
		Fee: 201,
	}
	l2Txs := []common.L2Tx{}
	l2Txs = append(l2Txs, l2Tx)
	l2Txs = append(l2Txs, l2Tx)
	args.L2TxsData = l2Txs
	args.NewLastIdx = int64(1000)
	args.NewStRoot = newStateRoot
	args.NewExitRoot = newExitRoot
	args.L1Batch = true
	args.VerifierIdx = 0
	args.ProofA[0] = big.NewInt(0)
	args.ProofA[1] = big.NewInt(0)
	args.ProofB[0] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofB[1] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofC[0] = big.NewInt(0)
	args.ProofC[1] = big.NewInt(0)
	argsForge = args
	_, err = rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err = rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err = rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(3), rollupEvents.ForgeBatch[0].BatchNum)
	assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
	ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
}
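// TestRollupForgeBatchArgs2 reads the forge call back from the transaction stored in
// ethHashForge and checks that the decoded arguments match argsForge.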
func TestRollupForgeBatchArgs2(t *testing.T) {
	args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
	require.NoError(t, err)
	assert.Equal(t, *sender, rollupClient.client.account.Address)
	assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
	assert.Equal(t, argsForge.L1Batch, args.L1Batch)
	assert.Equal(t, argsForge.L1UserTxs, args.L1UserTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
	assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
	assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
	assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
	assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
}
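// TestRollupWithdrawMerkleProof performs an instant withdraw against exit root 3 and
// checks both the Withdraw event and the UpdateBucketWithdraw accounting derived from
// the bucket parameters and token price set in the previous tests.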
func TestRollupWithdrawMerkleProof(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	var pkComp babyjub.PublicKeyComp
	pkCompBE, err := hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
	require.NoError(t, err)
	pkCompLE := common.SwapEndianness(pkCompBE)
	copy(pkComp[:], pkCompLE)
	require.NoError(t, err)
	tokenID := uint32(tokenHEZID)
	numExitRoot := int64(3)
	fromIdx := int64(256)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	// siblingBytes0, err := new(big.Int).SetString("19508838618377323910556678335932426220272947530531646682154552299216398748115", 10)
	// require.NoError(t, err)
	// siblingBytes1, err := new(big.Int).SetString("15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
	// require.NoError(t, err)
	var siblings []*big.Int
	// siblings = append(siblings, siblingBytes0)
	// siblings = append(siblings, siblingBytes1)
	instantWithdraw := true
	_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx, amount, siblings, instantWithdraw)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, uint64(fromIdx), rollupEvents.Withdraw[0].Idx)
	assert.Equal(t, instantWithdraw, rollupEvents.Withdraw[0].InstantWithdraw)
	assert.Equal(t, uint64(numExitRoot), rollupEvents.Withdraw[0].NumExitRoot)
	// tokenAmount = 20
	// amountUSD = tokenAmount * tokenPrice = 20 * 10 = 200
	// Bucket[0].ceilUSD = 100, Bucket[1].ceilUSD = 200, ...
	// so this withdrawal falls into bucket 1
	// Bucket[0].withdrawals = 1, Bucket[1].withdrawals = 2, ...
	// Bucket[1].withdrawals - 1 = 1
	assert.Equal(t, 1, rollupEvents.UpdateBucketWithdraw[0].NumBucket)
	assert.Equal(t, blockStampBucket, rollupEvents.UpdateBucketWithdraw[0].BlockStamp)
	assert.Equal(t, big.NewInt(1), rollupEvents.UpdateBucketWithdraw[0].Withdrawals)
}
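// TestRollupSafeMode puts the rollup in safe mode and checks that the SafeMode event
// is emitted.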
func TestRollupSafeMode(t *testing.T) {
	_, err := rollupClient.RollupSafeMode()
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	auxEvent := new(RollupEventSafeMode)
	assert.Equal(t, auxEvent, &rollupEvents.SafeMode[0])
}