

Fix eth events query and sync inconsistent state

- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil. This avoids a panic when the db is closed, an error happens soon after, and a later call tries to close again: pebble.Close() panics if the db is already closed.
  - Avoid calling pebble methods when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method if the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In the *EventsByBlock methods, take blockHash as an input argument and use it when querying the event logs. Previously the blockHash was taken from the log results only if there was at least one log, so when there were no logs it was impossible to tell whether the result came from the expected block or from an uncle block. Querying logs by blockHash guarantees that even an empty result belongs to the right block.
  - Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time.
- sync
  - If there's an error during the call to Sync, call resetState, which internally resets the StateDB to avoid stale checkpoints (and a corresponding invalid increase of the StateDB batchNum).
  - During a Sync, after every batch is processed, make sure that the StateDB currentBatch matches the batchNum in the smart contract log/event.
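A minimal sketch of the nil-safe Close pattern described in the kvdb notes above, assuming a Pebble-backed wrapper with a db *pebble.DB field (the Storage type and field names here are illustrative, not the actual hermez-node kvdb code):

package kvdbsketch

import "github.com/cockroachdb/pebble"

// Storage is an illustrative Pebble-backed wrapper.
type Storage struct {
	db *pebble.DB
}

// Close releases the underlying Pebble DB at most once. pebble's Close
// panics if the DB is already closed, so only close when db != nil and
// always clear the pointer afterwards; a later Close call (for example
// from an error-cleanup path) then becomes a harmless no-op.
func (s *Storage) Close() error {
	if s.db == nil {
		return nil
	}
	err := s.db.Close()
	s.db = nil
	return err
}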
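The test file below exercises RollupEventsByBlock(currentBlockNum, nil), i.e. it queries by block number and passes nil as the block hash. A minimal sketch of the blockNum/blockHash exclusivity described in the commit message, using go-ethereum's ethereum.FilterQuery (the helper name and wiring are illustrative, not the actual hermez-node implementation):

package ethsketch

import (
	"context"
	"fmt"
	"math/big"

	ethereum "github.com/ethereum/go-ethereum"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// filterEventLogs queries contract logs either by block number or by block
// hash, but never both. Filtering by hash guarantees that even an empty
// result refers to the requested block and not to an uncle block.
func filterEventLogs(ctx context.Context, ec *ethclient.Client, contract ethCommon.Address,
	blockNum int64, blockHash *ethCommon.Hash) ([]types.Log, error) {
	if blockHash != nil && blockNum > 0 {
		return nil, fmt.Errorf("pass either blockNum or blockHash, not both")
	}
	query := ethereum.FilterQuery{
		Addresses: []ethCommon.Address{contract},
	}
	if blockHash != nil {
		query.BlockHash = blockHash
	} else {
		query.FromBlock = big.NewInt(blockNum)
		query.ToBlock = big.NewInt(blockNum)
	}
	return ec.FilterLogs(ctx, query)
}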
  1. package eth
  2. import (
  3. "context"
  4. "crypto/ecdsa"
  5. "encoding/binary"
  6. "encoding/hex"
  7. "math/big"
  8. "testing"
  9. ethCommon "github.com/ethereum/go-ethereum/common"
  10. ethCrypto "github.com/ethereum/go-ethereum/crypto"
  11. "github.com/hermeznetwork/hermez-node/common"
  12. "github.com/iden3/go-iden3-crypto/babyjub"
  13. "github.com/stretchr/testify/assert"
  14. "github.com/stretchr/testify/require"
  15. )
  16. var rollupClient *RollupClient
  17. var auctionClient *AuctionClient
  18. var ethHashForge ethCommon.Hash
  19. var argsForge *RollupForgeBatchArgs
  20. var absoluteMaxL1L2BatchTimeout = int64(240)
  21. var maxTx = int64(512)
  22. var nLevels = int64(32)
  23. var tokenIDERC777 uint32
  24. var tokenHEZID uint32
  25. var L1UserTxs []common.L1Tx
  26. var blockStampBucket int64
  27. type keys struct {
  28. BJJSecretKey *babyjub.PrivateKey
  29. BJJPublicKey *babyjub.PublicKey
  30. Addr ethCommon.Address
  31. }
  32. func genKeysBjj(i int64) *keys {
  33. i++ // i = 0 doesn't work for the ecdsa key generation
  34. var sk babyjub.PrivateKey
  35. binary.LittleEndian.PutUint64(sk[:], uint64(i))
  36. // eth address
  37. var key ecdsa.PrivateKey
  38. key.D = big.NewInt(i) // only for testing
  39. key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
  40. key.Curve = ethCrypto.S256()
  41. return &keys{
  42. BJJSecretKey: &sk,
  43. BJJPublicKey: sk.Public(),
  44. }
  45. }
  46. func TestRollupEventInit(t *testing.T) {
  47. rollupInit, blockNum, err := rollupClient.RollupEventInit(genesisBlock)
  48. require.NoError(t, err)
  49. assert.Equal(t, int64(19), blockNum)
  50. assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)
  51. assert.Equal(t, big.NewInt(10), rollupInit.FeeAddToken)
  52. assert.Equal(t, uint64(60*60*24*7*2), rollupInit.WithdrawalDelay)
  53. }
  54. func TestRollupConstants(t *testing.T) {
  55. rollupConstants, err := rollupClient.RollupConstants()
  56. require.NoError(t, err)
  57. assert.Equal(t, absoluteMaxL1L2BatchTimeout, rollupConstants.AbsoluteMaxL1L2BatchTimeout)
  58. assert.Equal(t, auctionAddressConst, rollupConstants.HermezAuctionContract)
  59. assert.Equal(t, tokenHEZAddressConst, rollupConstants.TokenHEZ)
  60. assert.Equal(t, maxTx, rollupConstants.Verifiers[0].MaxTx)
  61. assert.Equal(t, nLevels, rollupConstants.Verifiers[0].NLevels)
  62. assert.Equal(t, governanceAddressConst, rollupConstants.HermezGovernanceAddress)
  63. assert.Equal(t, wdelayerAddressConst, rollupConstants.WithdrawDelayerContract)
  64. }
  65. func TestRollupRegisterTokensCount(t *testing.T) {
  66. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  67. require.NoError(t, err)
  68. assert.Equal(t, big.NewInt(1), registerTokensCount)
  69. }
  70. func TestRollupAddToken(t *testing.T) {
  71. feeAddToken := big.NewInt(10)
  72. // Addtoken ERC20Permit
  73. registerTokensCount, err := rollupClient.RollupRegisterTokensCount()
  74. require.NoError(t, err)
  75. _, err = rollupClient.RollupAddToken(tokenHEZAddressConst, feeAddToken, deadline)
  76. require.NoError(t, err)
  77. currentBlockNum, err := rollupClient.client.EthLastBlock()
  78. require.NoError(t, err)
  79. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  80. require.NoError(t, err)
  81. assert.Equal(t, tokenHEZAddressConst, rollupEvents.AddToken[0].TokenAddress)
  82. assert.Equal(t, registerTokensCount, common.TokenID(rollupEvents.AddToken[0].TokenID).BigInt())
  83. tokenHEZID = rollupEvents.AddToken[0].TokenID
  84. }
  85. func TestRollupForgeBatch(t *testing.T) {
  86. chainid, _ := auctionClient.client.Client().ChainID(context.Background())
  87. // Register Coordinator
  88. forgerAddress := governanceAddressConst
  89. _, err := auctionClient.AuctionSetCoordinator(forgerAddress, URL)
  90. require.NoError(t, err)
  91. // MultiBid
  92. currentSlot, err := auctionClient.AuctionGetCurrentSlotNumber()
  93. require.NoError(t, err)
  94. slotSet := [6]bool{true, false, true, false, true, false}
  95. maxBid := new(big.Int)
  96. maxBid.SetString("15000000000000000000", 10)
  97. minBid := new(big.Int)
  98. minBid.SetString("11000000000000000000", 10)
  99. budget := new(big.Int)
  100. budget.SetString("45200000000000000000", 10)
  101. _, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
  102. maxBid, minBid, deadline)
  103. require.NoError(t, err)
  104. // Add Blocks
  105. blockNum := int64(int(blocksPerSlot)*int(currentSlot+4) + int(genesisBlock))
  106. currentBlockNum, err := auctionClient.client.EthLastBlock()
  107. require.NoError(t, err)
  108. blocksToAdd := blockNum - currentBlockNum
  109. addBlocks(blocksToAdd, ethClientDialURL)
  110. // Forge Batch 1
  111. args := new(RollupForgeBatchArgs)
  112. // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
  113. args.FeeIdxCoordinator = []common.Idx{}
  114. l1CoordinatorBytes, err := hex.DecodeString(
  115. "1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf" +
  116. "42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230" +
  117. "de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
  118. require.NoError(t, err)
  119. numTxsL1 := len(l1CoordinatorBytes) / common.RollupConstL1CoordinatorTotalBytes
  120. for i := 0; i < numTxsL1; i++ {
  121. bytesL1Coordinator :=
  122. l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*
  123. common.RollupConstL1CoordinatorTotalBytes]
  124. var signature []byte
  125. v := bytesL1Coordinator[0]
  126. s := bytesL1Coordinator[1:33]
  127. r := bytesL1Coordinator[33:65]
  128. signature = append(signature, r[:]...)
  129. signature = append(signature, s[:]...)
  130. signature = append(signature, v)
  131. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, chainid, rollupClient.address)
  132. require.NoError(t, err)
  133. args.L1CoordinatorTxs = append(args.L1CoordinatorTxs, *l1Tx)
  134. args.L1CoordinatorTxsAuths = append(args.L1CoordinatorTxsAuths, signature)
  135. }
  136. args.L1UserTxs = []common.L1Tx{}
  137. args.L2TxsData = []common.L2Tx{}
  138. newStateRoot := new(big.Int)
  139. newStateRoot.SetString(
  140. "18317824016047294649053625209337295956588174734569560016974612130063629505228",
  141. 10)
  142. newExitRoot := new(big.Int)
  143. bytesNumExitRoot, err := hex.DecodeString(
  144. "10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
  145. require.NoError(t, err)
  146. newExitRoot.SetBytes(bytesNumExitRoot)
  147. args.NewLastIdx = int64(300)
  148. args.NewStRoot = newStateRoot
  149. args.NewExitRoot = newExitRoot
  150. args.L1Batch = true
  151. args.VerifierIdx = 0
  152. args.ProofA[0] = big.NewInt(0)
  153. args.ProofA[1] = big.NewInt(0)
  154. args.ProofB[0][0] = big.NewInt(0)
  155. args.ProofB[0][1] = big.NewInt(0)
  156. args.ProofB[1][0] = big.NewInt(0)
  157. args.ProofB[1][1] = big.NewInt(0)
  158. args.ProofC[0] = big.NewInt(0)
  159. args.ProofC[1] = big.NewInt(0)
  160. argsForge = args
  161. _, err = rollupClient.RollupForgeBatch(argsForge, nil)
  162. require.NoError(t, err)
  163. currentBlockNum, err = rollupClient.client.EthLastBlock()
  164. require.NoError(t, err)
  165. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  166. require.NoError(t, err)
  167. assert.Equal(t, int64(1), rollupEvents.ForgeBatch[0].BatchNum)
  168. assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
  169. ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
  170. }
  171. func TestRollupForgeBatchArgs(t *testing.T) {
  172. args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
  173. require.NoError(t, err)
  174. assert.Equal(t, *sender, rollupClient.client.account.Address)
  175. assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
  176. assert.Equal(t, argsForge.L1Batch, args.L1Batch)
  177. assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
  178. assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
  179. assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
  180. assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
  181. assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
  182. assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
  183. }
  184. func TestRollupUpdateForgeL1L2BatchTimeout(t *testing.T) {
  185. newForgeL1L2BatchTimeout := int64(222)
  186. _, err := rollupClient.RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout)
  187. require.NoError(t, err)
  188. currentBlockNum, err := rollupClient.client.EthLastBlock()
  189. require.NoError(t, err)
  190. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  191. require.NoError(t, err)
  192. assert.Equal(t, newForgeL1L2BatchTimeout,
  193. rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
  194. }
  195. func TestRollupUpdateFeeAddToken(t *testing.T) {
  196. newFeeAddToken := big.NewInt(12)
  197. _, err := rollupClient.RollupUpdateFeeAddToken(newFeeAddToken)
  198. require.NoError(t, err)
  199. currentBlockNum, err := rollupClient.client.EthLastBlock()
  200. require.NoError(t, err)
  201. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  202. require.NoError(t, err)
  203. assert.Equal(t, newFeeAddToken, rollupEvents.UpdateFeeAddToken[0].NewFeeAddToken)
  204. }
  205. func TestRollupUpdateBucketsParameters(t *testing.T) {
  206. var bucketsParameters [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
  207. for i := range bucketsParameters {
  208. bucketsParameters[i].CeilUSD = big.NewInt(int64((i + 1) * 100))
  209. bucketsParameters[i].Withdrawals = big.NewInt(int64(i + 1))
  210. bucketsParameters[i].BlockWithdrawalRate = big.NewInt(int64(i+1) * 100)
  211. bucketsParameters[i].MaxWithdrawals = big.NewInt(int64(100000000000))
  212. }
  213. _, err := rollupClient.RollupUpdateBucketsParameters(bucketsParameters)
  214. require.NoError(t, err)
  215. currentBlockNum, err := rollupClient.client.EthLastBlock()
  216. require.NoError(t, err)
  217. blockStampBucket = currentBlockNum
  218. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  219. require.NoError(t, err)
  220. assert.Equal(t, bucketsParameters, rollupEvents.UpdateBucketsParameters[0].ArrayBuckets)
  221. }
  222. func TestRollupUpdateWithdrawalDelay(t *testing.T) {
  223. newWithdrawalDelay := int64(100000)
  224. _, err := rollupClient.RollupUpdateWithdrawalDelay(newWithdrawalDelay)
  225. require.NoError(t, err)
  226. currentBlockNum, err := rollupClient.client.EthLastBlock()
  227. require.NoError(t, err)
  228. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  229. require.NoError(t, err)
  230. assert.Equal(t, newWithdrawalDelay,
  231. int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
  232. }
  233. func TestRollupUpdateTokenExchange(t *testing.T) {
  234. var addressArray []ethCommon.Address
  235. var valueArray []uint64
  236. addressToken1, err := rollupClient.hermez.TokenList(nil, big.NewInt(1))
  237. addressArray = append(addressArray, addressToken1)
  238. tokenPrice := 10
  239. valueArray = append(valueArray, uint64(tokenPrice*1e14))
  240. require.NoError(t, err)
  241. _, err = rollupClient.RollupUpdateTokenExchange(addressArray, valueArray)
  242. require.NoError(t, err)
  243. currentBlockNum, err := rollupClient.client.EthLastBlock()
  244. require.NoError(t, err)
  245. rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
  246. require.NoError(t, err)
  247. assert.Equal(t, addressArray, rollupEvents.UpdateTokenExchange[0].AddressArray)
  248. assert.Equal(t, valueArray, rollupEvents.UpdateTokenExchange[0].ValueArray)
  249. }
func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(2)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(1)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	key := genKeysBjj(3)
	fromIdxInt64 := int64(0)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: key.BJJPublicKey.Compress(),
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.FromBJJ, rollupEvents.L1UserTx[0].L1UserTx.FromBJJ)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

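// Deposit variants: ToIdx stays 0 and only DepositAmount is set, while FromIdx
// 256..258 refer to the accounts that the CreateAccountDeposit transactions
// queued above will receive once forged (user account indexes presumably start
// at 256).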
func TestRollupL1UserTxETHDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(0)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: common.EmptyBJJComp,
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20Deposit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromBJJ: common.EmptyBJJComp,
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: big.NewInt(0),
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

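// DepositTransfer variants: both DepositAmount and Amount are non-zero, so the
// queued transaction deposits into FromIdx and at the same time transfers
// Amount to ToIdx. The CreateAccountDepositTransfer variants further below
// follow the same checking pattern.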
func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("100000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
	depositAmount, _ := new(big.Int).SetString("1000000000000000000000", 10)
	amount, _ := new(big.Int).SetString("40000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: depositAmount,
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

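// ForceTransfer variants: DepositAmount is zero and only Amount is set, so the
// queued L1 transaction just moves funds from FromIdx to ToIdx.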
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(259)
	toIdxInt64 := int64(260)
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

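// ForceExit variants: ToIdx is 1, which is presumably the reserved exit index,
// so the queued L1 transaction sends Amount from FromIdx to the exit tree.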
func TestRollupL1UserTxETHForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(1)
	tokenIDUint32 := uint32(0)
	amount, _ := new(big.Int).SetString("10000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDUint32),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(1)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenHEZID),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
		tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(1)
	fromIdx := new(common.Idx)
	*fromIdx = 0
	amount, _ := new(big.Int).SetString("30000000000000000000", 10)
	l1Tx := common.L1Tx{
		FromIdx: common.Idx(fromIdxInt64),
		ToIdx: common.Idx(toIdxInt64),
		DepositAmount: big.NewInt(0),
		TokenID: common.TokenID(tokenIDERC777),
		Amount: amount,
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, l1Tx.ToIdx, rollupEvents.L1UserTx[0].L1UserTx.ToIdx)
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address,
		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

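// TestRollupForgeBatch2 first forges batch 2 by reusing the argsForge prepared
// earlier, then builds the arguments for batch 3: all the L1 user transactions
// queued above (round-tripped through their data-availability encoding), two
// identical L2 transactions, and new state and exit roots. The proof values
// are left at zero, which presumably only passes because the test deployment
// uses a verifier that accepts any proof.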
func TestRollupForgeBatch2(t *testing.T) {
	// Forge Batch 2
	_, err := rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(2), rollupEvents.ForgeBatch[0].BatchNum)

	// Forge Batch 3
	args := new(RollupForgeBatchArgs)
	// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
	args.FeeIdxCoordinator = []common.Idx{}
	args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
	args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
	for i := 0; i < len(L1UserTxs); i++ {
		l1UserTx := L1UserTxs[i]
		l1UserTx.EffectiveAmount = l1UserTx.Amount
		l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
		require.NoError(t, err)
		l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes,
			uint32(nLevels))
		require.NoError(t, err)
		args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
	}
	newStateRoot := new(big.Int)
	newStateRoot.SetString(
		"18317824016047294649053625209337295956588174734569560016974612130063629505228",
		10)
	newExitRoot := new(big.Int)
	newExitRoot.SetString(
		"1114281409737474688393837964161044726766678436313681099613347372031079422302",
		10)
	amount := new(big.Int)
	amount.SetString("79000000", 10)
	l2Tx := common.L2Tx{
		ToIdx: 256,
		Amount: amount,
		FromIdx: 257,
		Fee: 201,
	}
	l2Txs := []common.L2Tx{}
	l2Txs = append(l2Txs, l2Tx)
	l2Txs = append(l2Txs, l2Tx)
	args.L2TxsData = l2Txs
	args.NewLastIdx = int64(1000)
	args.NewStRoot = newStateRoot
	args.NewExitRoot = newExitRoot
	args.L1Batch = true
	args.VerifierIdx = 0
	args.ProofA[0] = big.NewInt(0)
	args.ProofA[1] = big.NewInt(0)
	args.ProofB[0] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofB[1] = [2]*big.Int{big.NewInt(0), big.NewInt(0)}
	args.ProofC[0] = big.NewInt(0)
	args.ProofC[1] = big.NewInt(0)
	argsForge = args
	_, err = rollupClient.RollupForgeBatch(argsForge, nil)
	require.NoError(t, err)
	currentBlockNum, err = rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err = rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, int64(3), rollupEvents.ForgeBatch[0].BatchNum)
	assert.Equal(t, uint16(len(L1UserTxs)), rollupEvents.ForgeBatch[0].L1UserTxsLen)
	ethHashForge = rollupEvents.ForgeBatch[0].EthTxHash
}

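// TestRollupForgeBatchArgs2 looks the batch-3 forge transaction up by its hash
// and checks that RollupForgeBatchArgs reconstructs the same arguments (and
// sender) that were used to send it.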
func TestRollupForgeBatchArgs2(t *testing.T) {
	args, sender, err := rollupClient.RollupForgeBatchArgs(ethHashForge, uint16(len(L1UserTxs)))
	require.NoError(t, err)
	assert.Equal(t, *sender, rollupClient.client.account.Address)
	assert.Equal(t, argsForge.FeeIdxCoordinator, args.FeeIdxCoordinator)
	assert.Equal(t, argsForge.L1Batch, args.L1Batch)
	assert.Equal(t, argsForge.L1UserTxs, args.L1UserTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxs, args.L1CoordinatorTxs)
	assert.Equal(t, argsForge.L1CoordinatorTxsAuths, args.L1CoordinatorTxsAuths)
	assert.Equal(t, argsForge.L2TxsData, args.L2TxsData)
	assert.Equal(t, argsForge.NewLastIdx, args.NewLastIdx)
	assert.Equal(t, argsForge.NewStRoot, args.NewStRoot)
	assert.Equal(t, argsForge.VerifierIdx, args.VerifierIdx)
}

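// TestRollupWithdrawMerkleProof performs an instant withdrawal of 20 tokens
// from idx 256 against the exit root of batch 3. With the token price of 10
// USD set earlier, the withdrawal is worth 200 USD, which falls into bucket 1
// (CeilUSD = 200) as the comments inside the test work out, so the
// UpdateBucketWithdraw event is expected to report NumBucket 1 and one
// remaining withdrawal in that bucket.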
func TestRollupWithdrawMerkleProof(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	var pkComp babyjub.PublicKeyComp
	pkCompBE, err :=
		hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
	require.NoError(t, err)
	pkCompLE := common.SwapEndianness(pkCompBE)
	copy(pkComp[:], pkCompLE)
	require.NoError(t, err)
	tokenID := uint32(tokenHEZID)
	numExitRoot := int64(3)
	fromIdx := int64(256)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	// siblingBytes0, err := new(big.Int).SetString(
	// "19508838618377323910556678335932426220272947530531646682154552299216398748115",
	// 10)
	// require.NoError(t, err)
	// siblingBytes1, err := new(big.Int).SetString(
	// "15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
	// require.NoError(t, err)
	var siblings []*big.Int
	// siblings = append(siblings, siblingBytes0)
	// siblings = append(siblings, siblingBytes1)
	instantWithdraw := true
	_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx,
		amount, siblings, instantWithdraw)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	assert.Equal(t, uint64(fromIdx), rollupEvents.Withdraw[0].Idx)
	assert.Equal(t, instantWithdraw, rollupEvents.Withdraw[0].InstantWithdraw)
	assert.Equal(t, uint64(numExitRoot), rollupEvents.Withdraw[0].NumExitRoot)
	// tokenAmount = 20
	// amountUSD = tokenAmount * tokenPrice = 20 * 10 = 200
	// Bucket[0].ceilUSD = 100, Bucket[1].ceilUSD = 200, ...
	// Bucket 1
	// Bucket[0].withdrawals = 1, Bucket[1].withdrawals = 2, ...
	// Bucket[1].withdrawals - 1 = 1
	assert.Equal(t, 1, rollupEvents.UpdateBucketWithdraw[0].NumBucket)
	assert.Equal(t, blockStampBucket, rollupEvents.UpdateBucketWithdraw[0].BlockStamp)
	assert.Equal(t, big.NewInt(1), rollupEvents.UpdateBucketWithdraw[0].Withdrawals)
}

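// TestRollupSafeMode puts the rollup into safe mode and checks that a SafeMode
// event was emitted; the event carries no fields, so the assertion simply
// compares it against an empty RollupEventSafeMode value.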
func TestRollupSafeMode(t *testing.T) {
	_, err := rollupClient.RollupSafeMode()
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
	require.NoError(t, err)
	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
	require.NoError(t, err)
	auxEvent := new(RollupEventSafeMode)
	assert.Equal(t, auxEvent, &rollupEvents.SafeMode[0])
}