Fix eth events query and sync inconsistent state

- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil. This avoids a panic in the case where the db is closed, an error occurs soon after, and a future call tries to close again; pebble.Close() panics if the db is already closed.
  - Avoid calling pebble methods when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method if the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In *EventByBlock methods, take blockHash as an input argument and use it when querying the event logs. Previously the blockHash was taken from the log results *only if* there was any log, which caused the following issue: if there were no logs, it was impossible to know whether the result came from the expected block or from an uncle block. By querying logs by blockHash we make sure that even if there are no logs, they are from the right block. Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time.
- sync
  - If there's an error during a call to Sync, call resetState, which internally resets the StateDB to avoid stale checkpoints (and a corresponding invalid increase in the StateDB batchNum).
  - During a Sync, after every batch processed, make sure that the StateDB currentBatch corresponds to the batchNum in the smart contract log/event.
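Two of the patterns described in the commit message are small enough to sketch in isolation: the close-once discipline around the pebble handle, and the either/or selection between block number and block hash when filtering event logs. The sketch below is illustrative only; the Storage struct and the filterLogsByBlock helper are hypothetical names rather than code from this repository, although pebble.DB.Close and go-ethereum's ethereum.FilterQuery / ethclient.FilterLogs are the real APIs involved.

package sketch

import (
    "context"
    "math/big"

    "github.com/cockroachdb/pebble"
    "github.com/ethereum/go-ethereum"
    ethCommon "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/ethclient"
)

// Storage wraps a pebble handle (illustrative type, not the hermez-node kvdb).
type Storage struct {
    db *pebble.DB
}

// Close closes the DB at most once. pebble.DB.Close panics when called on an
// already-closed DB, so the handle is nil-checked first and cleared afterwards;
// a later error path that calls Close again becomes a harmless no-op.
func (s *Storage) Close() error {
    if s.db == nil {
        return nil
    }
    err := s.db.Close()
    s.db = nil
    return err
}

// filterLogsByBlock queries event logs for one contract, selecting the block
// either by number or by hash, never both: a FilterQuery that sets BlockHash
// together with FromBlock/ToBlock is rejected by the node. Querying by hash
// guarantees that even an empty result belongs to the expected block and not
// to an uncle.
func filterLogsByBlock(ctx context.Context, ec *ethclient.Client,
    contract ethCommon.Address, blockNum int64, blockHash *ethCommon.Hash) ([]types.Log, error) {
    query := ethereum.FilterQuery{Addresses: []ethCommon.Address{contract}}
    if blockHash != nil {
        query.BlockHash = blockHash
    } else {
        query.FromBlock = big.NewInt(blockNum)
        query.ToBlock = big.NewInt(blockNum)
    }
    return ec.FilterLogs(ctx, query)
}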
package eth

import (
    "context"
    "fmt"
    "math/big"
    "strconv"
    "strings"

    "github.com/ethereum/go-ethereum"
    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/accounts/abi/bind"
    ethCommon "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/crypto"
    "github.com/ethereum/go-ethereum/ethclient"
    "github.com/hermeznetwork/hermez-node/common"
    Hermez "github.com/hermeznetwork/hermez-node/eth/contracts/hermez"
    HEZ "github.com/hermeznetwork/hermez-node/eth/contracts/tokenHEZ"
    "github.com/hermeznetwork/hermez-node/log"
    "github.com/hermeznetwork/tracerr"
    "github.com/iden3/go-iden3-crypto/babyjub"
)
// QueueStruct is the queue of L1Txs for a batch
type QueueStruct struct {
    L1TxQueue    []common.L1Tx
    TotalL1TxFee *big.Int
}

// NewQueueStruct creates a new clear QueueStruct.
func NewQueueStruct() *QueueStruct {
    return &QueueStruct{
        L1TxQueue:    make([]common.L1Tx, 0),
        TotalL1TxFee: big.NewInt(0),
    }
}

// RollupState represents the state of the Rollup in the Smart Contract
type RollupState struct {
    StateRoot *big.Int
    ExitRoots []*big.Int
    // ExitNullifierMap map[[256 / 8]byte]bool
    ExitNullifierMap       map[int64]map[int64]bool // batchNum -> idx -> bool
    TokenList              []ethCommon.Address
    TokenMap               map[ethCommon.Address]bool
    MapL1TxQueue           map[int64]*QueueStruct
    LastL1L2Batch          int64
    CurrentToForgeL1TxsNum int64
    LastToForgeL1TxsNum    int64
    CurrentIdx             int64
}
// RollupEventInitialize is the InitializeHermezEvent event of the
// Smart Contract
type RollupEventInitialize struct {
    ForgeL1L2BatchTimeout uint8
    FeeAddToken           *big.Int
    WithdrawalDelay       uint64
}

// RollupVariables returns the RollupVariables from the initialize event
func (ei *RollupEventInitialize) RollupVariables() *common.RollupVariables {
    var buckets [common.RollupConstNumBuckets]common.BucketParams
    for i := range buckets {
        buckets[i] = common.BucketParams{
            CeilUSD:         big.NewInt(0),
            BlockStamp:      big.NewInt(0),
            Withdrawals:     big.NewInt(0),
            RateBlocks:      big.NewInt(0),
            RateWithdrawals: big.NewInt(0),
            MaxWithdrawals:  big.NewInt(0),
        }
    }
    return &common.RollupVariables{
        EthBlockNum:           0,
        FeeAddToken:           ei.FeeAddToken,
        ForgeL1L2BatchTimeout: int64(ei.ForgeL1L2BatchTimeout),
        WithdrawalDelay:       ei.WithdrawalDelay,
        Buckets:               buckets,
        SafeMode:              false,
    }
}
// RollupEventL1UserTx is an event of the Rollup Smart Contract
type RollupEventL1UserTx struct {
    // ToForgeL1TxsNum int64 // QueueIndex *big.Int
    // Position int // TransactionIndex *big.Int
    L1UserTx common.L1Tx
}

// RollupEventL1UserTxAux is an event of the Rollup Smart Contract
type rollupEventL1UserTxAux struct {
    ToForgeL1TxsNum uint64 // QueueIndex *big.Int
    Position        uint8  // TransactionIndex *big.Int
    L1UserTx        []byte
}

// RollupEventAddToken is an event of the Rollup Smart Contract
type RollupEventAddToken struct {
    TokenAddress ethCommon.Address
    TokenID      uint32
}

// RollupEventForgeBatch is an event of the Rollup Smart Contract
type RollupEventForgeBatch struct {
    BatchNum int64
    // Sender ethCommon.Address
    EthTxHash    ethCommon.Hash
    L1UserTxsLen uint16
}

// RollupEventUpdateForgeL1L2BatchTimeout is an event of the Rollup Smart Contract
type RollupEventUpdateForgeL1L2BatchTimeout struct {
    NewForgeL1L2BatchTimeout int64
}

// RollupEventUpdateFeeAddToken is an event of the Rollup Smart Contract
type RollupEventUpdateFeeAddToken struct {
    NewFeeAddToken *big.Int
}

// RollupEventWithdraw is an event of the Rollup Smart Contract
type RollupEventWithdraw struct {
    Idx             uint64
    NumExitRoot     uint64
    InstantWithdraw bool
    TxHash          ethCommon.Hash // Hash of the transaction that generated this event
}

type rollupEventUpdateBucketWithdrawAux struct {
    NumBucket   uint8
    BlockStamp  *big.Int
    Withdrawals *big.Int
}

// RollupEventUpdateBucketWithdraw is an event of the Rollup Smart Contract
type RollupEventUpdateBucketWithdraw struct {
    NumBucket   int
    BlockStamp  int64 // blockNum
    Withdrawals *big.Int
}

// RollupEventUpdateWithdrawalDelay is an event of the Rollup Smart Contract
type RollupEventUpdateWithdrawalDelay struct {
    NewWithdrawalDelay uint64
}

// RollupUpdateBucketsParameters are the bucket parameters used in an update
type RollupUpdateBucketsParameters struct {
    CeilUSD         *big.Int
    BlockStamp      *big.Int
    Withdrawals     *big.Int
    RateBlocks      *big.Int
    RateWithdrawals *big.Int
    MaxWithdrawals  *big.Int
}

type rollupEventUpdateBucketsParametersAux struct {
    ArrayBuckets [common.RollupConstNumBuckets][6]*big.Int
}

// RollupEventUpdateBucketsParameters is an event of the Rollup Smart Contract
type RollupEventUpdateBucketsParameters struct {
    // ArrayBuckets [common.RollupConstNumBuckets][4]*big.Int
    ArrayBuckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
    SafeMode     bool
}

// RollupEventUpdateTokenExchange is an event of the Rollup Smart Contract
type RollupEventUpdateTokenExchange struct {
    AddressArray []ethCommon.Address
    ValueArray   []uint64
}

// RollupEventSafeMode is an event of the Rollup Smart Contract
type RollupEventSafeMode struct {
}
// RollupEvents is the list of events in a block of the Rollup Smart Contract
type RollupEvents struct {
    L1UserTx                    []RollupEventL1UserTx
    AddToken                    []RollupEventAddToken
    ForgeBatch                  []RollupEventForgeBatch
    UpdateForgeL1L2BatchTimeout []RollupEventUpdateForgeL1L2BatchTimeout
    UpdateFeeAddToken           []RollupEventUpdateFeeAddToken
    Withdraw                    []RollupEventWithdraw
    UpdateWithdrawalDelay       []RollupEventUpdateWithdrawalDelay
    UpdateBucketWithdraw        []RollupEventUpdateBucketWithdraw
    UpdateBucketsParameters     []RollupEventUpdateBucketsParameters
    UpdateTokenExchange         []RollupEventUpdateTokenExchange
    SafeMode                    []RollupEventSafeMode
}

// NewRollupEvents creates an empty RollupEvents with the slices initialized.
func NewRollupEvents() RollupEvents {
    return RollupEvents{
        L1UserTx:                    make([]RollupEventL1UserTx, 0),
        AddToken:                    make([]RollupEventAddToken, 0),
        ForgeBatch:                  make([]RollupEventForgeBatch, 0),
        UpdateForgeL1L2BatchTimeout: make([]RollupEventUpdateForgeL1L2BatchTimeout, 0),
        UpdateFeeAddToken:           make([]RollupEventUpdateFeeAddToken, 0),
        Withdraw:                    make([]RollupEventWithdraw, 0),
    }
}
// RollupForgeBatchArgs are the arguments to the ForgeBatch function in the Rollup Smart Contract
type RollupForgeBatchArgs struct {
    NewLastIdx            int64
    NewStRoot             *big.Int
    NewExitRoot           *big.Int
    L1UserTxs             []common.L1Tx
    L1CoordinatorTxs      []common.L1Tx
    L1CoordinatorTxsAuths [][]byte // Authorization for accountCreations for each L1CoordinatorTx
    L2TxsData             []common.L2Tx
    FeeIdxCoordinator     []common.Idx
    // Circuit selector
    VerifierIdx uint8
    L1Batch     bool
    ProofA      [2]*big.Int
    ProofB      [2][2]*big.Int
    ProofC      [2]*big.Int
}

// RollupForgeBatchArgsAux are the arguments to the ForgeBatch function in the Rollup Smart Contract
type rollupForgeBatchArgsAux struct {
    NewLastIdx             *big.Int
    NewStRoot              *big.Int
    NewExitRoot            *big.Int
    EncodedL1CoordinatorTx []byte
    L1L2TxsData            []byte
    FeeIdxCoordinator      []byte
    // Circuit selector
    VerifierIdx uint8
    L1Batch     bool
    ProofA      [2]*big.Int
    ProofB      [2][2]*big.Int
    ProofC      [2]*big.Int
}
// RollupInterface is the interface to the Rollup Smart Contract
type RollupInterface interface {
    //
    // Smart Contract Methods
    //

    // Public Functions
    RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error)
    RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
        deadline *big.Int) (*types.Transaction, error)
    RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot,
        idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction,
        error)
    RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32,
        numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)
    RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int,
        amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
    RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
        depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
        deadline *big.Int) (tx *types.Transaction, err error)

    // Governance Public Functions
    RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error)
    RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (*types.Transaction, error)

    // Viewers
    RollupRegisterTokensCount() (*big.Int, error)
    RollupLastForgedBatch() (int64, error)

    //
    // Smart Contract Status
    //
    RollupConstants() (*common.RollupConstants, error)
    RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
    RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
    RollupEventInit() (*RollupEventInitialize, int64, error)
}
//
// Implementation
//

// RollupClient is the implementation of the interface to the Rollup Smart Contract in ethereum.
type RollupClient struct {
    client      *EthereumClient
    chainID     *big.Int
    address     ethCommon.Address
    tokenHEZCfg TokenConfig
    hermez      *Hermez.Hermez
    tokenHEZ    *HEZ.HEZ
    contractAbi abi.ABI
    opts        *bind.CallOpts
    consts      *common.RollupConstants
}

// NewRollupClient creates a new RollupClient
func NewRollupClient(client *EthereumClient, address ethCommon.Address,
    tokenHEZCfg TokenConfig) (*RollupClient, error) {
    contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI)))
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    hermez, err := Hermez.NewHermez(address, client.Client())
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    tokenHEZ, err := HEZ.NewHEZ(tokenHEZCfg.Address, client.Client())
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    chainID, err := client.EthChainID()
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    c := &RollupClient{
        client:      client,
        chainID:     chainID,
        address:     address,
        tokenHEZCfg: tokenHEZCfg,
        hermez:      hermez,
        tokenHEZ:    tokenHEZ,
        contractAbi: contractAbi,
        opts:        newCallOpts(),
    }
    consts, err := c.RollupConstants()
    if err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("RollupConstants at %v: %w", address, err))
    }
    c.consts = consts
    return c, nil
}
// RollupForgeBatch is the interface to call the smart contract function
func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
    auth *bind.TransactOpts) (tx *types.Transaction, err error) {
    if auth == nil {
        auth, err = c.client.NewAuth()
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        auth.GasLimit = 1000000
    }
    nLevels := c.consts.Verifiers[args.VerifierIdx].NLevels
    lenBytes := nLevels / 8 //nolint:gomnd
    newLastIdx := big.NewInt(int64(args.NewLastIdx))
    // L1CoordinatorBytes
    var l1CoordinatorBytes []byte
    for i := 0; i < len(args.L1CoordinatorTxs); i++ {
        l1 := args.L1CoordinatorTxs[i]
        bytesl1, err := l1.BytesCoordinatorTx(args.L1CoordinatorTxsAuths[i])
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1CoordinatorBytes = append(l1CoordinatorBytes, bytesl1[:]...)
    }
    // L1L2TxData
    var l1l2TxData []byte
    for i := 0; i < len(args.L1UserTxs); i++ {
        l1User := args.L1UserTxs[i]
        bytesl1User, err := l1User.BytesDataAvailability(uint32(nLevels))
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1l2TxData = append(l1l2TxData, bytesl1User[:]...)
    }
    for i := 0; i < len(args.L1CoordinatorTxs); i++ {
        l1Coord := args.L1CoordinatorTxs[i]
        bytesl1Coord, err := l1Coord.BytesDataAvailability(uint32(nLevels))
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1l2TxData = append(l1l2TxData, bytesl1Coord[:]...)
    }
    for i := 0; i < len(args.L2TxsData); i++ {
        l2 := args.L2TxsData[i]
        bytesl2, err := l2.BytesDataAvailability(uint32(nLevels))
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1l2TxData = append(l1l2TxData, bytesl2[:]...)
    }
    // FeeIdxCoordinator
    var feeIdxCoordinator []byte
    if len(args.FeeIdxCoordinator) > common.RollupConstMaxFeeIdxCoordinator {
        return nil, tracerr.Wrap(fmt.Errorf("len(args.FeeIdxCoordinator) > %v",
            common.RollupConstMaxFeeIdxCoordinator))
    }
    for i := 0; i < common.RollupConstMaxFeeIdxCoordinator; i++ {
        feeIdx := common.Idx(0)
        if i < len(args.FeeIdxCoordinator) {
            feeIdx = args.FeeIdxCoordinator[i]
        }
        bytesFeeIdx, err := feeIdx.Bytes()
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        feeIdxCoordinator = append(feeIdxCoordinator,
            bytesFeeIdx[len(bytesFeeIdx)-int(lenBytes):]...)
    }
    tx, err = c.hermez.ForgeBatch(auth, newLastIdx, args.NewStRoot, args.NewExitRoot,
        l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch,
        args.ProofA, args.ProofB, args.ProofC)
    if err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Hermez.ForgeBatch: %w", err))
    }
    return tx, nil
}
// RollupAddToken is the interface to call the smart contract function.
// `feeAddToken` is the amount of HEZ tokens that will be paid to add the
// token. `feeAddToken` must match the public value of the smart contract.
func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
    deadline *big.Int) (tx *types.Transaction, err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            owner := c.client.account.Address
            spender := c.address
            nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            tokenName := c.tokenHEZCfg.Name
            tokenAddr := c.tokenHEZCfg.Address
            digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
                feeAddToken, nonce, deadline, tokenName)
            signature, _ := c.client.ks.SignHash(*c.client.account, digest)
            permit := createPermit(owner, spender, feeAddToken, deadline, digest,
                signature)
            return c.hermez.AddToken(auth, tokenAddress, permit)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed add Token %w", err))
    }
    return tx, nil
}
// RollupWithdrawMerkleProof is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32,
    numExitRoot, idx int64, amount *big.Int, siblings []*big.Int,
    instantWithdraw bool) (tx *types.Transaction, err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            pkCompB := common.SwapEndianness(fromBJJ[:])
            babyPubKey := new(big.Int).SetBytes(pkCompB)
            numExitRootB := uint32(numExitRoot)
            idxBig := big.NewInt(idx)
            return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey,
                numExitRootB, siblings, idxBig, instantWithdraw)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err))
    }
    return tx, nil
}

// RollupWithdrawCircuit is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int,
    tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction,
    error) {
    log.Error("TODO")
    return nil, tracerr.Wrap(errTODO)
}
// RollupL1UserTxERC20ETH is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
    depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction,
    err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            var babyPubKey *big.Int
            if fromBJJ != common.EmptyBJJComp {
                pkCompB := common.SwapEndianness(fromBJJ[:])
                babyPubKey = new(big.Int).SetBytes(pkCompB)
            } else {
                babyPubKey = big.NewInt(0)
            }
            fromIdxBig := big.NewInt(fromIdx)
            toIdxBig := big.NewInt(toIdx)
            depositAmountF, err := common.NewFloat40(depositAmount)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            amountF, err := common.NewFloat40(amount)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            if tokenID == 0 {
                auth.Value = depositAmount
            }
            var permit []byte
            return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig, uint16(depositAmountF),
                uint16(amountF), tokenID, toIdxBig, permit)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20/ETH: %w", err))
    }
    return tx, nil
}
// RollupL1UserTxERC20Permit is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
    depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
    deadline *big.Int) (tx *types.Transaction, err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            var babyPubKey *big.Int
            if fromBJJ != common.EmptyBJJComp {
                pkCompB := common.SwapEndianness(fromBJJ[:])
                babyPubKey = new(big.Int).SetBytes(pkCompB)
            } else {
                babyPubKey = big.NewInt(0)
            }
            fromIdxBig := big.NewInt(fromIdx)
            toIdxBig := big.NewInt(toIdx)
            depositAmountF, err := common.NewFloat40(depositAmount)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            amountF, err := common.NewFloat40(amount)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            if tokenID == 0 {
                auth.Value = depositAmount
            }
            owner := c.client.account.Address
            spender := c.address
            nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            tokenName := c.tokenHEZCfg.Name
            tokenAddr := c.tokenHEZCfg.Address
            digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
                amount, nonce, deadline, tokenName)
            signature, _ := c.client.ks.SignHash(*c.client.account, digest)
            permit := createPermit(owner, spender, amount, deadline, digest, signature)
            return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig,
                uint16(depositAmountF), uint16(amountF), tokenID, toIdxBig, permit)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20Permit: %w", err))
    }
    return tx, nil
}
  510. // RollupRegisterTokensCount is the interface to call the smart contract function
  511. func (c *RollupClient) RollupRegisterTokensCount() (registerTokensCount *big.Int, err error) {
  512. if err := c.client.Call(func(ec *ethclient.Client) error {
  513. registerTokensCount, err = c.hermez.RegisterTokensCount(c.opts)
  514. return tracerr.Wrap(err)
  515. }); err != nil {
  516. return nil, tracerr.Wrap(err)
  517. }
  518. return registerTokensCount, nil
  519. }
  520. // RollupLastForgedBatch is the interface to call the smart contract function
  521. func (c *RollupClient) RollupLastForgedBatch() (lastForgedBatch int64, err error) {
  522. if err := c.client.Call(func(ec *ethclient.Client) error {
  523. _lastForgedBatch, err := c.hermez.LastForgedBatch(c.opts)
  524. lastForgedBatch = int64(_lastForgedBatch)
  525. return tracerr.Wrap(err)
  526. }); err != nil {
  527. return 0, tracerr.Wrap(err)
  528. }
  529. return lastForgedBatch, nil
  530. }
  531. // RollupUpdateForgeL1L2BatchTimeout is the interface to call the smart contract function
  532. func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
  533. newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
  534. if tx, err = c.client.CallAuth(
  535. 0,
  536. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  537. return c.hermez.UpdateForgeL1L2BatchTimeout(auth,
  538. uint8(newForgeL1L2BatchTimeout))
  539. },
  540. ); err != nil {
  541. return nil, tracerr.Wrap(fmt.Errorf("Failed update ForgeL1L2BatchTimeout: %w", err))
  542. }
  543. return tx, nil
  544. }
  545. // RollupUpdateFeeAddToken is the interface to call the smart contract function
  546. func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction,
  547. err error) {
  548. if tx, err = c.client.CallAuth(
  549. 0,
  550. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  551. return c.hermez.UpdateFeeAddToken(auth, newFeeAddToken)
  552. },
  553. ); err != nil {
  554. return nil, tracerr.Wrap(fmt.Errorf("Failed update FeeAddToken: %w", err))
  555. }
  556. return tx, nil
  557. }
  558. // RollupUpdateBucketsParameters is the interface to call the smart contract function
  559. func (c *RollupClient) RollupUpdateBucketsParameters(
  560. arrayBuckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters,
  561. ) (tx *types.Transaction, err error) {
  562. params := [common.RollupConstNumBuckets][6]*big.Int{}
  563. for i, bucket := range arrayBuckets {
  564. params[i][0] = bucket.CeilUSD
  565. params[i][1] = bucket.BlockStamp
  566. params[i][2] = bucket.Withdrawals
  567. params[i][3] = bucket.RateBlocks
  568. params[i][4] = bucket.RateWithdrawals
  569. params[i][5] = bucket.MaxWithdrawals
  570. }
  571. if tx, err = c.client.CallAuth(
  572. 12500000, //nolint:gomnd
  573. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  574. return c.hermez.UpdateBucketsParameters(auth, params)
  575. },
  576. ); err != nil {
  577. return nil, tracerr.Wrap(fmt.Errorf("Failed update Buckets Parameters: %w", err))
  578. }
  579. return tx, nil
  580. }
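// Editor's sketch (not part of the original file): how the named bucket
// fields map onto the flat [6]*big.Int parameters packed above. The helper
// name and the values are hypothetical; the index comments only restate the
// packing order used in RollupUpdateBucketsParameters.
func exampleUpdateBuckets(c *RollupClient) (*types.Transaction, error) {
	var buckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
	for i := range buckets {
		buckets[i] = RollupUpdateBucketsParameters{
			CeilUSD:         big.NewInt(1000), // params[i][0]
			BlockStamp:      big.NewInt(0),    // params[i][1]
			Withdrawals:     big.NewInt(100),  // params[i][2]
			RateBlocks:      big.NewInt(10),   // params[i][3]
			RateWithdrawals: big.NewInt(5),    // params[i][4]
			MaxWithdrawals:  big.NewInt(200),  // params[i][5]
		}
	}
	return c.RollupUpdateBucketsParameters(buckets)
}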
  581. // RollupUpdateTokenExchange is the interface to call the smart contract function
  582. func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address,
  583. valueArray []uint64) (tx *types.Transaction, err error) {
  584. if tx, err = c.client.CallAuth(
  585. 0,
  586. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  587. return c.hermez.UpdateTokenExchange(auth, addressArray, valueArray)
  588. },
  589. ); err != nil {
  590. return nil, tracerr.Wrap(fmt.Errorf("Failed update Token Exchange: %w", err))
  591. }
  592. return tx, nil
  593. }
  594. // RollupUpdateWithdrawalDelay is the interface to call the smart contract function
  595. func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction,
  596. err error) {
  597. if tx, err = c.client.CallAuth(
  598. 0,
  599. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  600. return c.hermez.UpdateWithdrawalDelay(auth, uint64(newWithdrawalDelay))
  601. },
  602. ); err != nil {
  603. return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawalDelay: %w", err))
  604. }
  605. return tx, nil
  606. }
  607. // RollupSafeMode is the interface to call the smart contract function
  608. func (c *RollupClient) RollupSafeMode() (tx *types.Transaction, err error) {
  609. if tx, err = c.client.CallAuth(
  610. 0,
  611. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  612. return c.hermez.SafeMode(auth)
  613. },
  614. ); err != nil {
  615. return nil, tracerr.Wrap(fmt.Errorf("Failed update Safe Mode: %w", err))
  616. }
  617. return tx, nil
  618. }
  619. // RollupInstantWithdrawalViewer is the interface to call the smart contract function
  620. func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address,
  621. amount *big.Int) (instantAllowed bool, err error) {
  622. if err := c.client.Call(func(ec *ethclient.Client) error {
  623. instantAllowed, err = c.hermez.InstantWithdrawalViewer(c.opts, tokenAddress, amount)
  624. return tracerr.Wrap(err)
  625. }); err != nil {
  626. return false, tracerr.Wrap(err)
  627. }
  628. return instantAllowed, nil
  629. }
  630. // RollupConstants returns the Constants of the Rollup Smart Contract
  631. func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstants, err error) {
  632. rollupConstants = new(common.RollupConstants)
  633. if err := c.client.Call(func(ec *ethclient.Client) error {
  634. absoluteMaxL1L2BatchTimeout, err := c.hermez.ABSOLUTEMAXL1L2BATCHTIMEOUT(c.opts)
  635. if err != nil {
  636. return tracerr.Wrap(err)
  637. }
  638. rollupConstants.AbsoluteMaxL1L2BatchTimeout = int64(absoluteMaxL1L2BatchTimeout)
  639. rollupConstants.TokenHEZ, err = c.hermez.TokenHEZ(c.opts)
  640. if err != nil {
  641. return tracerr.Wrap(err)
  642. }
  643. rollupVerifiersLength, err := c.hermez.RollupVerifiersLength(c.opts)
  644. if err != nil {
  645. return tracerr.Wrap(err)
  646. }
  647. for i := int64(0); i < rollupVerifiersLength.Int64(); i++ {
  648. var newRollupVerifier common.RollupVerifierStruct
  649. rollupVerifier, err := c.hermez.RollupVerifiers(c.opts, big.NewInt(i))
  650. if err != nil {
  651. return tracerr.Wrap(err)
  652. }
  653. newRollupVerifier.MaxTx = rollupVerifier.MaxTx.Int64()
  654. newRollupVerifier.NLevels = rollupVerifier.NLevels.Int64()
  655. rollupConstants.Verifiers = append(rollupConstants.Verifiers,
  656. newRollupVerifier)
  657. }
  658. rollupConstants.HermezAuctionContract, err = c.hermez.HermezAuctionContract(c.opts)
  659. if err != nil {
  660. return tracerr.Wrap(err)
  661. }
  662. rollupConstants.HermezGovernanceAddress, err = c.hermez.HermezGovernanceAddress(c.opts)
  663. if err != nil {
  664. return tracerr.Wrap(err)
  665. }
  666. rollupConstants.WithdrawDelayerContract, err = c.hermez.WithdrawDelayerContract(c.opts)
  667. return tracerr.Wrap(err)
  668. }); err != nil {
  669. return nil, tracerr.Wrap(err)
  670. }
  671. return rollupConstants, nil
  672. }
  673. var (
  674. logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte(
  675. "L1UserTxEvent(uint32,uint8,bytes)"))
  676. logHermezAddToken = crypto.Keccak256Hash([]byte(
  677. "AddToken(address,uint32)"))
  678. logHermezForgeBatch = crypto.Keccak256Hash([]byte(
  679. "ForgeBatch(uint32,uint16)"))
  680. logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte(
  681. "UpdateForgeL1L2BatchTimeout(uint8)"))
  682. logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte(
  683. "UpdateFeeAddToken(uint256)"))
  684. logHermezWithdrawEvent = crypto.Keccak256Hash([]byte(
  685. "WithdrawEvent(uint48,uint32,bool)"))
  686. logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte(
  687. "UpdateBucketWithdraw(uint8,uint256,uint256)"))
  688. logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte(
  689. "UpdateWithdrawalDelay(uint64)"))
  690. logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte(
  691. "UpdateBucketsParameters(uint256[4][" + strconv.Itoa(common.RollupConstNumBuckets) + "])"))
  692. logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte(
  693. "UpdateTokenExchange(address[],uint64[])"))
  694. logHermezSafeMode = crypto.Keccak256Hash([]byte(
  695. "SafeMode()"))
  696. logHermezInitialize = crypto.Keccak256Hash([]byte(
  697. "InitializeHermezEvent(uint8,uint256,uint64)"))
  698. )
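// Editor's note (illustrative, not part of the original file): each topic
// above is the Keccak-256 hash of the event's canonical signature, which is
// what appears as Topics[0] in the matching receipt logs. A quick sanity
// check with a hypothetical helper name:
func exampleTopicMatches() bool {
	topic := crypto.Keccak256Hash([]byte("ForgeBatch(uint32,uint16)"))
	return topic == logHermezForgeBatch // true by construction
}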
  699. // RollupEventInit returns the initialize event with its corresponding block number
  700. func (c *RollupClient) RollupEventInit() (*RollupEventInitialize, int64, error) {
  701. query := ethereum.FilterQuery{
  702. Addresses: []ethCommon.Address{
  703. c.address,
  704. },
  705. Topics: [][]ethCommon.Hash{{logHermezInitialize}},
  706. }
  707. logs, err := c.client.client.FilterLogs(context.Background(), query)
  708. if err != nil {
  709. return nil, 0, tracerr.Wrap(err)
  710. }
  711. if len(logs) != 1 {
712. return nil, 0, tracerr.Wrap(fmt.Errorf("expected 1 InitializeHermezEvent event, found %v", len(logs)))
  713. }
  714. vLog := logs[0]
  715. if vLog.Topics[0] != logHermezInitialize {
  716. return nil, 0, tracerr.Wrap(fmt.Errorf("event is not InitializeHermezEvent"))
  717. }
  718. var rollupInit RollupEventInitialize
  719. if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent",
  720. vLog.Data); err != nil {
  721. return nil, 0, tracerr.Wrap(err)
  722. }
723. return &rollupInit, int64(vLog.BlockNumber), nil
  724. }
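// Editor's sketch (not part of the original file): the block number returned
// by RollupEventInit can serve as a hypothetical starting point for scanning
// later Rollup events; the helper name is illustrative.
func exampleInitBlock(c *RollupClient) (int64, error) {
	_, blockNum, err := c.RollupEventInit()
	if err != nil {
		return 0, err
	}
	return blockNum, nil
}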
  725. // RollupEventsByBlock returns the events in a block that happened in the
  726. // Rollup Smart Contract.
  727. // To query by blockNum, set blockNum >= 0 and blockHash == nil.
  728. // To query by blockHash set blockHash != nil, and blockNum will be ignored.
  729. // If there are no events in that block the result is nil.
  730. func (c *RollupClient) RollupEventsByBlock(blockNum int64,
  731. blockHash *ethCommon.Hash) (*RollupEvents, error) {
  732. var rollupEvents RollupEvents
  733. var blockNumBigInt *big.Int
  734. if blockHash == nil {
  735. blockNumBigInt = big.NewInt(blockNum)
  736. }
  737. query := ethereum.FilterQuery{
  738. BlockHash: blockHash,
  739. FromBlock: blockNumBigInt,
  740. ToBlock: blockNumBigInt,
  741. Addresses: []ethCommon.Address{
  742. c.address,
  743. },
  744. Topics: [][]ethCommon.Hash{},
  745. }
  746. logs, err := c.client.client.FilterLogs(context.Background(), query)
  747. if err != nil {
  748. return nil, tracerr.Wrap(err)
  749. }
  750. if len(logs) == 0 {
  751. return nil, nil
  752. }
  753. for _, vLog := range logs {
  754. if blockHash != nil && vLog.BlockHash != *blockHash {
  755. log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got", vLog.BlockHash.String())
  756. return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
  757. }
  758. switch vLog.Topics[0] {
  759. case logHermezL1UserTxEvent:
  760. var L1UserTxAux rollupEventL1UserTxAux
  761. var L1UserTx RollupEventL1UserTx
  762. err := c.contractAbi.UnpackIntoInterface(&L1UserTxAux, "L1UserTxEvent", vLog.Data)
  763. if err != nil {
  764. return nil, tracerr.Wrap(err)
  765. }
  766. L1Tx, err := common.L1UserTxFromBytes(L1UserTxAux.L1UserTx)
  767. if err != nil {
  768. return nil, tracerr.Wrap(err)
  769. }
  770. toForgeL1TxsNum := new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
  771. L1Tx.ToForgeL1TxsNum = &toForgeL1TxsNum
  772. L1Tx.Position = int(new(big.Int).SetBytes(vLog.Topics[2][:]).Int64())
  773. L1Tx.UserOrigin = true
  774. L1UserTx.L1UserTx = *L1Tx
  775. rollupEvents.L1UserTx = append(rollupEvents.L1UserTx, L1UserTx)
  776. case logHermezAddToken:
  777. var addToken RollupEventAddToken
  778. err := c.contractAbi.UnpackIntoInterface(&addToken, "AddToken", vLog.Data)
  779. if err != nil {
  780. return nil, tracerr.Wrap(err)
  781. }
  782. addToken.TokenAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
  783. rollupEvents.AddToken = append(rollupEvents.AddToken, addToken)
  784. case logHermezForgeBatch:
  785. var forgeBatch RollupEventForgeBatch
  786. err := c.contractAbi.UnpackIntoInterface(&forgeBatch, "ForgeBatch", vLog.Data)
  787. if err != nil {
  788. return nil, tracerr.Wrap(err)
  789. }
  790. forgeBatch.BatchNum = new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
  791. forgeBatch.EthTxHash = vLog.TxHash
  792. // forgeBatch.Sender = vLog.Address
  793. rollupEvents.ForgeBatch = append(rollupEvents.ForgeBatch, forgeBatch)
  794. case logHermezUpdateForgeL1L2BatchTimeout:
  795. var updateForgeL1L2BatchTimeout struct {
  796. NewForgeL1L2BatchTimeout uint8
  797. }
  798. err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout,
  799. "UpdateForgeL1L2BatchTimeout", vLog.Data)
  800. if err != nil {
  801. return nil, tracerr.Wrap(err)
  802. }
  803. rollupEvents.UpdateForgeL1L2BatchTimeout = append(rollupEvents.UpdateForgeL1L2BatchTimeout,
  804. RollupEventUpdateForgeL1L2BatchTimeout{
  805. NewForgeL1L2BatchTimeout: int64(updateForgeL1L2BatchTimeout.NewForgeL1L2BatchTimeout),
  806. })
  807. case logHermezUpdateFeeAddToken:
  808. var updateFeeAddToken RollupEventUpdateFeeAddToken
  809. err := c.contractAbi.UnpackIntoInterface(&updateFeeAddToken, "UpdateFeeAddToken", vLog.Data)
  810. if err != nil {
  811. return nil, tracerr.Wrap(err)
  812. }
  813. rollupEvents.UpdateFeeAddToken = append(rollupEvents.UpdateFeeAddToken, updateFeeAddToken)
  814. case logHermezWithdrawEvent:
  815. var withdraw RollupEventWithdraw
  816. withdraw.Idx = new(big.Int).SetBytes(vLog.Topics[1][:]).Uint64()
  817. withdraw.NumExitRoot = new(big.Int).SetBytes(vLog.Topics[2][:]).Uint64()
  818. instantWithdraw := new(big.Int).SetBytes(vLog.Topics[3][:]).Uint64()
  819. if instantWithdraw == 1 {
  820. withdraw.InstantWithdraw = true
  821. }
  822. withdraw.TxHash = vLog.TxHash
  823. rollupEvents.Withdraw = append(rollupEvents.Withdraw, withdraw)
  824. case logHermezUpdateBucketWithdraw:
  825. var updateBucketWithdrawAux rollupEventUpdateBucketWithdrawAux
  826. var updateBucketWithdraw RollupEventUpdateBucketWithdraw
  827. err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux,
  828. "UpdateBucketWithdraw", vLog.Data)
  829. if err != nil {
  830. return nil, tracerr.Wrap(err)
  831. }
  832. updateBucketWithdraw.Withdrawals = updateBucketWithdrawAux.Withdrawals
  833. updateBucketWithdraw.NumBucket = int(new(big.Int).SetBytes(vLog.Topics[1][:]).Int64())
  834. updateBucketWithdraw.BlockStamp = new(big.Int).SetBytes(vLog.Topics[2][:]).Int64()
  835. rollupEvents.UpdateBucketWithdraw =
  836. append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
  837. case logHermezUpdateWithdrawalDelay:
  838. var withdrawalDelay RollupEventUpdateWithdrawalDelay
  839. err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, "UpdateWithdrawalDelay", vLog.Data)
  840. if err != nil {
  841. return nil, tracerr.Wrap(err)
  842. }
  843. rollupEvents.UpdateWithdrawalDelay = append(rollupEvents.UpdateWithdrawalDelay, withdrawalDelay)
  844. case logHermezUpdateBucketsParameters:
  845. var bucketsParametersAux rollupEventUpdateBucketsParametersAux
  846. var bucketsParameters RollupEventUpdateBucketsParameters
  847. err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux,
  848. "UpdateBucketsParameters", vLog.Data)
  849. if err != nil {
  850. return nil, tracerr.Wrap(err)
  851. }
  852. for i, bucket := range bucketsParametersAux.ArrayBuckets {
  853. bucketsParameters.ArrayBuckets[i].CeilUSD = bucket[0]
  854. bucketsParameters.ArrayBuckets[i].BlockStamp = bucket[1]
  855. bucketsParameters.ArrayBuckets[i].Withdrawals = bucket[2]
  856. bucketsParameters.ArrayBuckets[i].RateBlocks = bucket[3]
  857. bucketsParameters.ArrayBuckets[i].RateWithdrawals = bucket[4]
  858. bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket[5]
  859. }
  860. rollupEvents.UpdateBucketsParameters =
  861. append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
  862. case logHermezUpdateTokenExchange:
  863. var tokensExchange RollupEventUpdateTokenExchange
  864. err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
  865. if err != nil {
  866. return nil, tracerr.Wrap(err)
  867. }
  868. rollupEvents.UpdateTokenExchange = append(rollupEvents.UpdateTokenExchange, tokensExchange)
  869. case logHermezSafeMode:
  870. var safeMode RollupEventSafeMode
  871. rollupEvents.SafeMode = append(rollupEvents.SafeMode, safeMode)
  872. // Also add an UpdateBucketsParameter with
  873. // SafeMode=true to keep the order between `safeMode`
  874. // and `UpdateBucketsParameters`
  875. bucketsParameters := RollupEventUpdateBucketsParameters{
  876. SafeMode: true,
  877. }
  878. for i := range bucketsParameters.ArrayBuckets {
  879. bucketsParameters.ArrayBuckets[i].CeilUSD = big.NewInt(0)
  880. bucketsParameters.ArrayBuckets[i].BlockStamp = big.NewInt(0)
  881. bucketsParameters.ArrayBuckets[i].Withdrawals = big.NewInt(0)
  882. bucketsParameters.ArrayBuckets[i].RateBlocks = big.NewInt(0)
  883. bucketsParameters.ArrayBuckets[i].RateWithdrawals = big.NewInt(0)
  884. bucketsParameters.ArrayBuckets[i].MaxWithdrawals = big.NewInt(0)
  885. }
  886. rollupEvents.UpdateBucketsParameters = append(rollupEvents.UpdateBucketsParameters,
  887. bucketsParameters)
  888. }
  889. }
  890. return &rollupEvents, nil
  891. }
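// Editor's sketch (not part of the original file): querying a block's Rollup
// events by hash, which guarantees the returned logs belong to that exact
// block even when the block contains no events. The helper name is
// hypothetical and an initialised *RollupClient is assumed.
func exampleEventsByHash(c *RollupClient, blockHash ethCommon.Hash) (*RollupEvents, error) {
	// blockNum is ignored when blockHash != nil, so 0 is passed here.
	events, err := c.RollupEventsByBlock(0, &blockHash)
	if err != nil {
		return nil, err
	}
	if events == nil {
		// nil means the block simply has no Rollup events; it is not an error.
		return &RollupEvents{}, nil
	}
	return events, nil
}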
  892. // RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
  893. // Rollup Smart Contract in the given transaction, and the sender address.
  894. func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
  895. l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
  896. tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
  897. if err != nil {
  898. return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
  899. }
  900. txData := tx.Data()
  901. method, err := c.contractAbi.MethodById(txData[:4])
  902. if err != nil {
  903. return nil, nil, tracerr.Wrap(err)
  904. }
  905. receipt, err := c.client.client.TransactionReceipt(context.Background(), ethTxHash)
  906. if err != nil {
  907. return nil, nil, tracerr.Wrap(err)
  908. }
  909. sender, err := c.client.client.TransactionSender(context.Background(), tx,
  910. receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
  911. if err != nil {
  912. return nil, nil, tracerr.Wrap(err)
  913. }
  914. var aux rollupForgeBatchArgsAux
  915. if values, err := method.Inputs.Unpack(txData[4:]); err != nil {
  916. return nil, nil, tracerr.Wrap(err)
  917. } else if err := method.Inputs.Copy(&aux, values); err != nil {
  918. return nil, nil, tracerr.Wrap(err)
  919. }
  920. rollupForgeBatchArgs := RollupForgeBatchArgs{
  921. L1Batch: aux.L1Batch,
  922. NewExitRoot: aux.NewExitRoot,
  923. NewLastIdx: aux.NewLastIdx.Int64(),
  924. NewStRoot: aux.NewStRoot,
  925. ProofA: aux.ProofA,
  926. ProofB: aux.ProofB,
  927. ProofC: aux.ProofC,
  928. VerifierIdx: aux.VerifierIdx,
  929. L1CoordinatorTxs: []common.L1Tx{},
  930. L1CoordinatorTxsAuths: [][]byte{},
  931. L2TxsData: []common.L2Tx{},
  932. FeeIdxCoordinator: []common.Idx{},
  933. }
  934. nLevels := c.consts.Verifiers[rollupForgeBatchArgs.VerifierIdx].NLevels
  935. lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1) //nolint:gomnd
  936. numBytesL1TxUser := int(l1UserTxsLen) * lenL1L2TxsBytes
  937. numTxsL1Coord := len(aux.EncodedL1CoordinatorTx) / common.RollupConstL1CoordinatorTotalBytes
  938. numBytesL1TxCoord := numTxsL1Coord * lenL1L2TxsBytes
  939. numBeginL2Tx := numBytesL1TxCoord + numBytesL1TxUser
  940. l1UserTxsData := []byte{}
  941. if l1UserTxsLen > 0 {
  942. l1UserTxsData = aux.L1L2TxsData[:numBytesL1TxUser]
  943. }
  944. for i := 0; i < int(l1UserTxsLen); i++ {
  945. l1Tx, err :=
  946. common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
  947. uint32(nLevels))
  948. if err != nil {
  949. return nil, nil, tracerr.Wrap(err)
  950. }
  951. rollupForgeBatchArgs.L1UserTxs = append(rollupForgeBatchArgs.L1UserTxs, *l1Tx)
  952. }
  953. l2TxsData := []byte{}
  954. if numBeginL2Tx < len(aux.L1L2TxsData) {
  955. l2TxsData = aux.L1L2TxsData[numBeginL2Tx:]
  956. }
  957. numTxsL2 := len(l2TxsData) / lenL1L2TxsBytes
  958. for i := 0; i < numTxsL2; i++ {
  959. l2Tx, err :=
  960. common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
  961. int(nLevels))
  962. if err != nil {
  963. return nil, nil, tracerr.Wrap(err)
  964. }
  965. rollupForgeBatchArgs.L2TxsData = append(rollupForgeBatchArgs.L2TxsData, *l2Tx)
  966. }
  967. for i := 0; i < numTxsL1Coord; i++ {
  968. bytesL1Coordinator :=
  969. aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes] //nolint:lll
  970. var signature []byte
  971. v := bytesL1Coordinator[0]
  972. s := bytesL1Coordinator[1:33]
  973. r := bytesL1Coordinator[33:65]
  974. signature = append(signature, r[:]...)
  975. signature = append(signature, s[:]...)
  976. signature = append(signature, v)
  977. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, c.chainID, c.address)
  978. if err != nil {
  979. return nil, nil, tracerr.Wrap(err)
  980. }
  981. rollupForgeBatchArgs.L1CoordinatorTxs = append(rollupForgeBatchArgs.L1CoordinatorTxs, *l1Tx)
  982. rollupForgeBatchArgs.L1CoordinatorTxsAuths =
  983. append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
  984. }
  985. lenFeeIdxCoordinatorBytes := int(nLevels / 8) //nolint:gomnd
  986. numFeeIdxCoordinator := len(aux.FeeIdxCoordinator) / lenFeeIdxCoordinatorBytes
  987. for i := 0; i < numFeeIdxCoordinator; i++ {
  988. var paddedFeeIdx [6]byte
  989. // TODO: This check is not necessary: the first case will always work. Test it
  990. // before removing the if.
  991. if lenFeeIdxCoordinatorBytes < common.IdxBytesLen {
  992. copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:],
  993. aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
  994. } else {
  995. copy(paddedFeeIdx[:],
  996. aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
  997. }
  998. feeIdxCoordinator, err := common.IdxFromBytes(paddedFeeIdx[:])
  999. if err != nil {
  1000. return nil, nil, tracerr.Wrap(err)
  1001. }
  1002. if feeIdxCoordinator != common.Idx(0) {
  1003. rollupForgeBatchArgs.FeeIdxCoordinator =
  1004. append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
  1005. }
  1006. }
  1007. return &rollupForgeBatchArgs, &sender, nil
  1008. }
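// Editor's note (illustrative, not part of the original file): a worked
// example of the byte-length arithmetic above for a verifier with
// nLevels = 32. Each data-availability entry then takes
// (32/8)*2 + common.Float40BytesLength + 1 = 8 + 5 + 1 = 14 bytes
// (fromIdx, toIdx, Float40 amount, fee), and each coordinator fee idx takes
// 32/8 = 4 bytes, which the loop above left-pads to the 6-byte common.Idx
// length before decoding.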