Fix eth events query and sync inconsistent state

- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil (see the sketch right after this list). This avoids a panic in the case where the db is closed, an error occurs soon after, and a later call tries to close again: pebble.Close() panics if the db is already closed.
  - Avoid calling pebble methods when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method when the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In the *EventsByBlock methods, take blockHash as an input argument and use it when querying the event logs. Previously the blockHash was taken from the log results *only if* there was at least one log, so when there were no logs it was impossible to tell whether the result came from the expected block or from an uncle block. Querying logs by blockHash ensures that even an empty result belongs to the right block (a second sketch below illustrates this).
  - Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time.
- sync
  - If there's an error during a call to Sync, call resetState, which internally resets the stateDB to avoid stale checkpoints (and a corresponding invalid increase in the StateDB batchNum).
  - During a Sync, after every batch is processed, make sure that the StateDB currentBatch corresponds to the batchNum in the smart contract log/event.
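A minimal sketch of the double-close guard mentioned in the kvdb item above, assuming a generic Closer-style handle rather than the real kvdb types:

```go
// Minimal sketch (not the hermez-node kvdb code): guard against double-closing
// an underlying store whose Close panics when called twice, as pebble's does.
package kvdbsketch

import "io"

// store wraps an underlying key-value database handle (reduced here to
// io.Closer purely for illustration).
type store struct {
	db io.Closer
}

// Close closes the underlying db only if it is still open, and always sets
// db to nil afterwards, so a later Close from an error path becomes a
// harmless no-op instead of a panic.
func (s *store) Close() error {
	if s.db == nil {
		return nil
	}
	err := s.db.Close()
	s.db = nil
	return err
}
```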
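And a hedged sketch of the logs-by-block-hash idea from the eth item, using go-ethereum's ethclient; the function name and parameters here are illustrative, not the actual *EventsByBlock implementation:

```go
package ethsketch

import (
	"context"
	"fmt"
	"math/big"

	ethereum "github.com/ethereum/go-ethereum"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// filterEventLogs queries the logs of contractAddr either at blockNum or at
// blockHash, but never both at once. Filtering by hash guarantees that the
// returned logs (even zero logs) belong to that exact block and not to an
// uncle at the same height.
func filterEventLogs(ctx context.Context, client *ethclient.Client,
	contractAddr ethCommon.Address, blockNum *big.Int,
	blockHash *ethCommon.Hash) ([]types.Log, error) {
	if (blockNum == nil) == (blockHash == nil) {
		return nil, fmt.Errorf("exactly one of blockNum or blockHash must be set")
	}
	query := ethereum.FilterQuery{
		Addresses: []ethCommon.Address{contractAddr},
	}
	if blockHash != nil {
		query.BlockHash = blockHash
	} else {
		query.FromBlock = blockNum
		query.ToBlock = blockNum
	}
	return client.FilterLogs(ctx, query)
}
```

go-ethereum itself rejects a FilterQuery that sets BlockHash together with FromBlock/ToBlock, which is what forces the "blockNum or blockHash, but not both" rule noted above.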
  1. package eth
  2. import (
  3. "context"
  4. "crypto/ecdsa"
  5. "encoding/binary"
  6. "encoding/hex"
  7. "math/big"
  8. "testing"
  9. ethCommon "github.com/ethereum/go-ethereum/common"
  10. ethCrypto "github.com/ethereum/go-ethereum/crypto"
  11. "github.com/hermeznetwork/hermez-node/common"
  12. "github.com/iden3/go-iden3-crypto/babyjub"
  13. "github.com/stretchr/testify/assert"
  14. "github.com/stretchr/testify/require"
  15. )
  16. var rollupClient *RollupClient
  17. var auctionClient *AuctionClient
  18. var ethHashForge ethCommon.Hash
  19. var argsForge *RollupForgeBatchArgs
  20. var absoluteMaxL1L2BatchTimeout = int64(240)
  21. var maxTx = int64(512)
  22. var nLevels = int64(32)
  23. var tokenIDERC777 uint32
  24. var tokenHEZID uint32
  25. var L1UserTxs []common.L1Tx
  26. var blockStampBucket int64
  27. type keys struct {
  28. BJJSecretKey *babyjub.PrivateKey
  29. BJJPublicKey *babyjub.PublicKey
  30. Addr ethCommon.Address
  31. }
  32. func genKeysBjj(i int64) *keys {
  33. i++ // i = 0 doesn't work for the ecdsa key generation
  34. var sk babyjub.PrivateKey
  35. binary.LittleEndian.PutUint64(sk[:], uint64(i))
  36. // eth address
  37. var key ecdsa.PrivateKey
  38. key.D = big.NewInt(i) // only for testing
  39. key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
  40. key.Curve = ethCrypto.S256()
  41. return &keys{
  42. BJJSecretKey: &sk,
  43. BJJPublicKey: sk.Public(),
  44. }
  45. }
  46. func TestRollupEventInit(t *testing.T) {
  47. rollupInit, blockNum, err := rollupClient.RollupEventInit()
  48. require.NoError(t, err)
  49. assert.Equal(t, int64(19), blockNum)
  50. assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)
  51. assert.Equal(t, big.NewInt(10), rollupInit.FeeAddToken)
  52. assert.Equal(t, uint64(60*60*24*7*2), rollupInit.WithdrawalDelay)
  53. }
  54. func TestRollupConstants(t *testing.T) {
  55. rollupConstants, err := rollupClient.RollupConstants()
  56. require.NoError(t, err)
  57. assert.Equal(t, absoluteMaxL1L2BatchTimeout, rollupConstants.AbsoluteMaxL1L2BatchTimeout)
  58. assert.Equal(t, auctionAddressConst, rollupConstants.HermezAuctionContract)
  59. assert.Equal(t, tokenHEZAddressConst, rollupConstants.TokenHEZ)
  60. assert.Equal(t, maxTx, rollupConstants.Verifiers[0].MaxTx)
  61. assert.Equal(t, nLevels, rollupConstants.Verifiers[0].NLevels)
  62. assert.Equal(t, governanceAddressConst, rollupConstants.HermezGovernanceAddress)
  63. assert.Equal(t, wdelayerAddressConst, rollupConstants.WithdrawDelayerContract)
  64. }
  65. func TestRollupRegisterTokensCount(t *testing.T) {
  66. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  67. require.NoError(t, err)
  68. assert.Equal(t, big.NewInt(1), registerTokensCount)
  69. }
  70. func TestRollupAddToken(t *testing.T) {
  71. feeAddToken := big.NewInt(10)
  72. // Addtoken ERC20Permit
  73. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  74. require.NoError(t, err)
  75. _, err = rollupClient.RollupAddToken(tokenHEZAddressConst, feeAddToken, deadline)
  76. require.NoError(t, err)
  77. currentBlockNum, err := rollupClient.client.EthLastBlock()
  78. require.NoError(t, err)
  79. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  80. require.NoError(t, err)
  81. assert.Equal(t, tokenHEZAddressConst, rollupEvents.AddToken[0].TokenAddress)
  82. assert.Equal(t, registerTokensCount, common.TokenID(rollupEvents.AddToken[0].TokenID).BigInt())
  83. tokenHEZID = rollupEvents.AddToken[0].TokenID
  84. }
  85. func TestRollupForgeBatch(t *testing.T) {
  86. chainid, _ := auctionClient.client.Client().ChainID(context.Background())
  87. // Register Coordinator
  88. forgerAddress := governanceAddressConst
  89. _, err := auctionClient.AuctionSetCoordinator(forgerAddress, URL)
  90. require.NoError(t, err)
  91. // MultiBid
  92. currentSlot, err := auctionClient.AuctionGetCurrentSlotNumber()
  93. require.NoError(t, err)
  94. slotSet := [6]bool{true, false, true, false, true, false}
  95. maxBid := new(big.Int)
  96. maxBid.SetString("15000000000000000000", 10)
  97. minBid := new(big.Int)
  98. minBid.SetString("11000000000000000000", 10)
  99. budget := new(big.Int)
  100. budget.SetString("45200000000000000000", 10)
  101. _, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
  102. maxBid, minBid, deadline)
  103. require.NoError(t, err)
  104. // Add Blocks
  105. blockNum := int64(int(blocksPerSlot)*int(currentSlot+4) + int(genesisBlock))
  106. currentBlockNum, err := auctionClient.client.EthLastBlock()
  107. require.NoError(t, err)
  108. blocksToAdd := blockNum - currentBlockNum
  109. addBlocks(blocksToAdd, ethClientDialURL)
  110. // Forge Batch 1
  111. args := new(RollupForgeBatchArgs)
  112. // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
  113. args.FeeIdxCoordinator = []common.Idx{}
  114. l1CoordinatorBytes, err := hex.DecodeString(
  115. "1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf" +
  116. "42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230" +
  117. "de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
  118. require.NoError(t, err)
  119. numTxsL1 := len(l1CoordinatorBytes) / common.RollupConstL1CoordinatorTotalBytes
  120. for i := 0; i < numTxsL1; i++ {
  121. bytesL1Coordinator :=
  122. l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*
  123. common.RollupConstL1CoordinatorTotalBytes]
  124. var signature []byte
  125. v := bytesL1Coordinator[0]
  126. s := bytesL1Coordinator[1:33]
  127. r := bytesL1Coordinator[33:65]
  128. signature = append(signature, r[:]...)
  129. signature = append(signature, s[:]...)
  130. signature = append(signature, v)
  131. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, chainid, rollupClient.address)
  132. require.NoError(t, err)
  133. args.L1CoordinatorTxs = append(args.L1CoordinatorTxs, *l1Tx)
  134. args.L1CoordinatorTxsAuths = append(args.L1CoordinatorTxsAuths, signature)
  135. }
  136. args.L1UserTxs = []common.L1Tx{}
  137. args.L2TxsData = []common.L2Tx{}
  138. newStateRoot := new(big.Int)
  139. newStateRoot.SetString(
  140. "18317824016047294649053625209337295956588174734569560016974612130063629505228",
  141. 10)
  142. newExitRoot := new(big.Int)
  143. bytesNumExitRoot, err := hex.DecodeString(
  144. "10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
  145. require.NoError(t, err)
  146. newExitRoot.SetBytes(bytesNumExitRoot)
  147. args.NewLastIdx = int64(300)
  148. args.NewStRoot = newStateRoot
  149. args.NewExitRoot = newExitRoot
  150. args.L1Batch = true
  151. args.VerifierIdx = 0
  152. args.ProofA[0] = big.NewInt(0)
  153. args.ProofA[1] = big.NewInt(0)
  154. args.ProofB[0][0] = big.NewInt(0)
  155. args.ProofB[0][1] = big.NewInt(0)
  156. args.ProofB[1][0] = big.NewInt(0)
  157. args.ProofB[1][1] = big.NewInt(0)
  158. args.ProofC[0] = big.NewInt(0)
  159. args.ProofC[1] = big.NewInt(0)
  160. argsForge = args
  161. _, err = rollupClient.RollupForgeBatch(argsForge, nil)
  162. require.NoError(t, err)
  163. currentBlockNum, err = rollupClient.client.EthLastBlock()
  164. require.NoError(t, err)
  165. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  166. require.NoError(t, err)
  167. assert.Equal(t, int64(1), rollupEvents.ForgeBatch[0].BatchNum)
  168. assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
  169. ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
  170. }
  171. func TestRollupForgeBatchArgs(t *testing.T) {
  172. args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
  173. require.NoError(t, err)
  174. assert.Equal(t, *sender, rollupClient.client.account.Address)
  175. assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
  176. assert.Equal(t, argsForge.L1Batch, args.L1Batch)
  177. assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
  178. assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
  179. assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
  180. assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
  181. assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
  182. assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
  183. }
  184. func TestRollupUpdateForgeL1L2BatchTimeout(t *testing.T) {
  185. newForgeL1L2BatchTimeout := int64(222)
  186. _, err := rollupClient.RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout)
  187. require.NoError(t, err)
  188. currentBlockNum, err := rollupClient.client.EthLastBlock()
  189. require.NoError(t, err)
  190. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  191. require.NoError(t, err)
  192. assert.Equal(t, newForgeL1L2BatchTimeout,
  193. rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
  194. }
  195. func TestRollupUpdateFeeAddToken(t *testing.T) {
  196. newFeeAddToken := big.NewInt(12)
  197. _, err := rollupClient.RollupUpdateFeeAddToken(newFeeAddToken)
  198. require.NoError(t, err)
  199. currentBlockNum, err := rollupClient.client.EthLastBlock()
  200. require.NoError(t, err)
  201. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  202. require.NoError(t, err)
  203. assert.Equal(t, newFeeAddToken, rollupEvents.UpdateFeeAddToken[0].NewFeeAddToken)
  204. }
  205. func TestRollupUpdateBucketsParameters(t *testing.T) {
  206. var bucketsParameters [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
  207. for i := range bucketsParameters {
  208. bucketsParameters[i].CeilUSD = big.NewInt(int64((i + 1) * 100))
  209. bucketsParameters[i].BlockStamp = big.NewInt(int64(0))
  210. bucketsParameters[i].Withdrawals = big.NewInt(int64(i + 1))
  211. bucketsParameters[i].RateBlocks = big.NewInt(int64(i+1) * 4)
  212. bucketsParameters[i].RateWithdrawals = big.NewInt(int64(3))
  213. bucketsParameters[i].MaxWithdrawals = big.NewInt(int64(100000000000))
  214. }
  215. _, err := rollupClient.RollupUpdateBucketsParameters(bucketsParameters)
  216. require.NoError(t, err)
  217. currentBlockNum, err := rollupClient.client.EthLastBlock()
  218. require.NoError(t, err)
  219. blockStampBucket = currentBlockNum
  220. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  221. require.NoError(t, err)
  222. assert.Equal(t, bucketsParameters, rollupEvents.UpdateBucketsParameters[0].ArrayBuckets)
  223. }
  224. func TestRollupUpdateWithdrawalDelay(t *testing.T) {
  225. newWithdrawalDelay := int64(100000)
  226. _, err := rollupClient.RollupUpdateWithdrawalDelay(newWithdrawalDelay)
  227. require.NoError(t, err)
  228. currentBlockNum, err := rollupClient.client.EthLastBlock()
  229. require.NoError(t, err)
  230. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  231. require.NoError(t, err)
  232. assert.Equal(t, newWithdrawalDelay,
  233. int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
  234. }
  235. func TestRollupUpdateTokenExchange(t *testing.T) {
  236. var addressArray []ethCommon.Address
  237. var valueArray []uint64
  238. addressToken1, err := rollupClient.hermez.TokenList(nil, big.NewInt(1))
  239. addressArray = append(addressArray, addressToken1)
  240. tokenPrice := 10
  241. valueArray = append(valueArray, uint64(tokenPrice*1e14))
  242. require.NoError(t, err)
  243. _, err = rollupClient.RollupUpdateTokenExchange(addressArray, valueArray)
  244. require.NoError(t, err)
  245. currentBlockNum, err := rollupClient.client.EthLastBlock()
  246. require.NoError(t, err)
  247. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  248. require.NoError(t, err)
  249. assert.Equal(t, addressArray, rollupEvents.UpdateTokenExchange[0].AddressArray)
  250. assert.Equal(t, valueArray, rollupEvents.UpdateTokenExchange[0].ValueArray)
  251. }
func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(2)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ:       key.BJJPublicKey.Compress(),
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDUint32),
		Amount:        big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(1)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ:       key.BJJPublicKey.Compress(),
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenHEZID),
		Amount:        big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(3)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ:       key.BJJPublicKey.Compress(),
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDERC777),
		Amount:        big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxETHDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ:       common.EmptyBJJComp,
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDUint32),
		Amount:        big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20Deposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ:       common.EmptyBJJComp,
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenHEZID),
		Amount:        big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDERC777),
		Amount:        big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDUint32),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenHEZID),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDERC777),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDUint32),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenHEZID),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("40000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID:       common.TokenID(tokenIDERC777),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID:       common.TokenID(tokenIDUint32),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID:       common.TokenID(tokenHEZID),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(259)
	toIdxInt64 := int64(260)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID:       common.TokenID(tokenIDERC777),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxETHForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(1)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID:       common.TokenID(tokenIDUint32),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(1)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID:       common.TokenID(tokenHEZID),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(1)
	fromIdx := new(common.Idx)
	*fromIdx = 0
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx:       common.Idx(fromIdxInt64),
		ToIdx:         common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID:       common.TokenID(tokenIDERC777),
		Amount:        amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
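// TestRollupForgeBatch2 forges batch 2 with the previously prepared argsForge, then builds and
// forges batch 3 as an L1 batch that includes every L1 user transaction queued by the tests above,
// checking the resulting ForgeBatch event (batch number and L1UserTxsLen).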
func TestRollupForgeBatch2(t *testing.T) {
	// Forge Batch 2
	_, err := rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(2), rollupEvents.ForgeBatch[0].BatchNum)
	// Forge Batch 3
	args := new(RollupForgeBatchArgs)
	// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
	args.FeeIdxCoordinator = []common.Idx{}
	args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
	args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
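	// Encode each queued L1 user tx to its data-availability representation and decode it back,
	// so that args.L1UserTxs holds the txs as they will be recoverable from the forged batch
	// data (EffectiveAmount is set equal to Amount before encoding).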
	for i := 0; i < len(L1UserTxs); i++ {
		l1UserTx := L1UserTxs[i]
		l1UserTx.EffectiveAmount = l1UserTx.Amount
		l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
		require.NoError(t, err)
		l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes,
			uint32(nLevels))
		require.NoError(t, err)
		args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
	}
	newStateRoot := new(big.Int)
	newStateRoot.SetString(
		"18317824016047294649053625209337295956588174734569560016974612130063629505228",
		10)
	newExitRoot := new(big.Int)
	newExitRoot.SetString(
		"1114281409737474688393837964161044726766678436313681099613347372031079422302",
		10)
	amount := new(big.Int)
	amount.SetString("79000000", 10)
	l2Tx := common.L2Tx{
		ToIdx:   256,
		Amount:  amount,
		FromIdx: 257,
		Fee:     201,
	}
	l2Txs := []common.L2Tx{}
	l2Txs = append(l2Txs, l2Tx)
	l2Txs = append(l2Txs, l2Tx)
	args.L2TxsData = l2Txs
	args.NewLastIdx = int64(1000)
	args.NewStRoot = newStateRoot
	args.NewExitRoot = newExitRoot
	args.L1Batch = true
	args.VerifierIdx = 0
	args.ProofA[0] = big.NewInt(0)
	args.ProofA[1] = big.NewInt(0)
	args.ProofB[0] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofB[1] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofC[0] = big.NewInt(0)
	args.ProofC[1] = big.NewInt(0)
	argsForge = args
	_, err = rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err = rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err = rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(3), rollupEvents.ForgeBatch[0].BatchNum)
	assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
	ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
}
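// TestRollupForgeBatchArgs2 fetches the forge transaction of batch 3 by its hash and checks that
// RollupForgeBatchArgs reconstructs, from the transaction calldata, the same arguments that were
// sent.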
func TestRollupForgeBatchArgs2(t *testing.T) {
	args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
	require.NoError(t, err)
	assert.Equal(t, *sender, rollupClient.client.account.Address)
	assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
	assert.Equal(t, argsForge.L1Batch, args.L1Batch)
	assert.Equal(t, argsForge.L1UserTxs, args.L1UserTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
	assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
	assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
	assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
	assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
}
func TestRollupWithdrawMerkleProof(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	var pkComp babyjub.PublicKeyComp
	pkCompBE, err :=
		hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
	require.NoError(t, err)
	pkCompLE := common.SwapEndianness(pkCompBE)
	copy(pkComp[:], pkCompLE)
	require.NoError(t, err)
	tokenID := uint32(tokenHEZID)
	numExitRoot := int64(3)
	fromIdx := int64(256)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	// siblingBytes0, err := new(big.Int).SetString(
	// 	"19508838618377323910556678335932426220272947530531646682154552299216398748115",
	// 	10)
	// require.NoError(t, err)
	// siblingBytes1, err := new(big.Int).SetString(
	// 	"15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
	// require.NoError(t, err)
	var siblings []*big.Int
	// siblings = append(siblings, siblingBytes0)
	// siblings = append(siblings, siblingBytes1)
	instantWithdraw := true
	_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx,
		amount, siblings, instantWithdraw)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, uint64(fromIdx), rollupEvents.Withdraw[0].Idx)
	assert.Equal(t, instantWithdraw, rollupEvents.Withdraw[0].InstantWithdraw)
	assert.Equal(t, uint64(numExitRoot), rollupEvents.Withdraw[0].NumExitRoot)
	// tokenAmount = 20
	// amountUSD = tokenAmount * tokenPrice = 20 * 10 = 200
	// Bucket[0].ceilUSD = 100, Bucket[1].ceilUSD = 200, ... so the withdrawal falls into Bucket 1.
	// Bucket[0].withdrawals = 1, Bucket[1].withdrawals = 2, ...
	// After this withdrawal, Bucket[1].withdrawals - 1 = 1
	assert.Equal(t, 1, rollupEvents.UpdateBucketWithdraw[0].NumBucket)
	assert.Equal(t, blockStampBucket, rollupEvents.UpdateBucketWithdraw[0].BlockStamp)
	assert.Equal(t, big.NewInt(1), rollupEvents.UpdateBucketWithdraw[0].Withdrawals)
}
func TestRollupSafeMode(t *testing.T) {
	_, err := rollupClient.RollupSafeMode()
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	auxEvent := new(RollupEventSafeMode)
	assert.Equal(t, auxEvent, &rollupEvents.SafeMode[0])
}