package eth

import (
	"context"
	"fmt"
	"math/big"
	"strings"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/hermeznetwork/hermez-node/common"
	Hermez "github.com/hermeznetwork/hermez-node/eth/contracts/hermez"
	HEZ "github.com/hermeznetwork/hermez-node/eth/contracts/tokenHEZ"
	"github.com/hermeznetwork/hermez-node/log"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
)

// QueueStruct is the queue of L1Txs for a batch
type QueueStruct struct {
	L1TxQueue    []common.L1Tx
	TotalL1TxFee *big.Int
}

// NewQueueStruct creates a new clear QueueStruct.
func NewQueueStruct() *QueueStruct {
	return &QueueStruct{
		L1TxQueue:    make([]common.L1Tx, 0),
		TotalL1TxFee: big.NewInt(0),
	}
}

// RollupState represents the state of the Rollup in the Smart Contract
type RollupState struct {
	StateRoot *big.Int
	ExitRoots []*big.Int
	// ExitNullifierMap map[[256 / 8]byte]bool
	ExitNullifierMap       map[int64]map[int64]bool // batchNum -> idx -> bool
	TokenList              []ethCommon.Address
	TokenMap               map[ethCommon.Address]bool
	MapL1TxQueue           map[int64]*QueueStruct
	LastL1L2Batch          int64
	CurrentToForgeL1TxsNum int64
	LastToForgeL1TxsNum    int64
	CurrentIdx             int64
}

// RollupEventInitialize is the InitializeHermezEvent event of the
// Smart Contract
type RollupEventInitialize struct {
	ForgeL1L2BatchTimeout uint8
	FeeAddToken           *big.Int
	WithdrawalDelay       uint64
}

// RollupVariables returns the RollupVariables from the initialize event
func (ei *RollupEventInitialize) RollupVariables() *common.RollupVariables {
	return &common.RollupVariables{
		EthBlockNum:           0,
		FeeAddToken:           ei.FeeAddToken,
		ForgeL1L2BatchTimeout: int64(ei.ForgeL1L2BatchTimeout),
		WithdrawalDelay:       ei.WithdrawalDelay,
		Buckets:               []common.BucketParams{},
		SafeMode:              false,
	}
}

// RollupEventL1UserTx is an event of the Rollup Smart Contract
type RollupEventL1UserTx struct {
	// ToForgeL1TxsNum int64 // QueueIndex *big.Int
	// Position int // TransactionIndex *big.Int
	L1UserTx common.L1Tx
}

// rollupEventL1UserTxAux is an event of the Rollup Smart Contract
type rollupEventL1UserTxAux struct {
	ToForgeL1TxsNum uint64 // QueueIndex *big.Int
	Position        uint8  // TransactionIndex *big.Int
	L1UserTx        []byte
}

// RollupEventAddToken is an event of the Rollup Smart Contract
type RollupEventAddToken struct {
	TokenAddress ethCommon.Address
	TokenID      uint32
}

// RollupEventForgeBatch is an event of the Rollup Smart Contract
type RollupEventForgeBatch struct {
	BatchNum int64
	// Sender ethCommon.Address
	EthTxHash    ethCommon.Hash
	L1UserTxsLen uint16
}

// RollupEventUpdateForgeL1L2BatchTimeout is an event of the Rollup Smart Contract
type RollupEventUpdateForgeL1L2BatchTimeout struct {
	NewForgeL1L2BatchTimeout int64
}

// RollupEventUpdateFeeAddToken is an event of the Rollup Smart Contract
type RollupEventUpdateFeeAddToken struct {
	NewFeeAddToken *big.Int
}

// RollupEventWithdraw is an event of the Rollup Smart Contract
type RollupEventWithdraw struct {
	Idx             uint64
	NumExitRoot     uint64
	InstantWithdraw bool
	TxHash          ethCommon.Hash // Hash of the transaction that generated this event
}

type rollupEventUpdateBucketWithdrawAux struct {
	NumBucket   uint8
	BlockStamp  *big.Int
	Withdrawals *big.Int
}

// RollupEventUpdateBucketWithdraw is an event of the Rollup Smart Contract
type RollupEventUpdateBucketWithdraw struct {
	NumBucket   int
	BlockStamp  int64 // blockNum
	Withdrawals *big.Int
}

// RollupEventUpdateWithdrawalDelay is an event of the Rollup Smart Contract
type RollupEventUpdateWithdrawalDelay struct {
	NewWithdrawalDelay uint64
}

// RollupUpdateBucketsParameters are the bucket parameters used in an update
type RollupUpdateBucketsParameters struct {
	CeilUSD         *big.Int
	BlockStamp      *big.Int
	Withdrawals     *big.Int
	RateBlocks      *big.Int
	RateWithdrawals *big.Int
	MaxWithdrawals  *big.Int
}

type rollupEventUpdateBucketsParametersAux struct {
	ArrayBuckets []*big.Int
}

// RollupEventUpdateBucketsParameters is an event of the Rollup Smart Contract
type RollupEventUpdateBucketsParameters struct {
	ArrayBuckets []RollupUpdateBucketsParameters
	SafeMode     bool
}

// RollupEventUpdateTokenExchange is an event of the Rollup Smart Contract
type RollupEventUpdateTokenExchange struct {
	AddressArray []ethCommon.Address
	ValueArray   []uint64
}

// RollupEventSafeMode is an event of the Rollup Smart Contract
type RollupEventSafeMode struct {
}

// RollupEvents is the list of events in a block of the Rollup Smart Contract
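// Each field collects the decoded logs of one event type emitted by the
// Hermez contract in that block. A RollupEvents value is meant to be filled
// per block via RollupEventsByBlock, which takes either a block number or a
// block hash (but not both), so that an empty result can still be attributed
// to the intended block rather than to an uncle.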
type RollupEvents struct {
	L1UserTx                    []RollupEventL1UserTx
	AddToken                    []RollupEventAddToken
	ForgeBatch                  []RollupEventForgeBatch
	UpdateForgeL1L2BatchTimeout []RollupEventUpdateForgeL1L2BatchTimeout
	UpdateFeeAddToken           []RollupEventUpdateFeeAddToken
	Withdraw                    []RollupEventWithdraw
	UpdateWithdrawalDelay       []RollupEventUpdateWithdrawalDelay
	UpdateBucketWithdraw        []RollupEventUpdateBucketWithdraw
	UpdateBucketsParameters     []RollupEventUpdateBucketsParameters
	UpdateTokenExchange         []RollupEventUpdateTokenExchange
	SafeMode                    []RollupEventSafeMode
}

// NewRollupEvents creates an empty RollupEvents with the slices initialized.
func NewRollupEvents() RollupEvents {
	return RollupEvents{
		L1UserTx:                    make([]RollupEventL1UserTx, 0),
		AddToken:                    make([]RollupEventAddToken, 0),
		ForgeBatch:                  make([]RollupEventForgeBatch, 0),
		UpdateForgeL1L2BatchTimeout: make([]RollupEventUpdateForgeL1L2BatchTimeout, 0),
		UpdateFeeAddToken:           make([]RollupEventUpdateFeeAddToken, 0),
		Withdraw:                    make([]RollupEventWithdraw, 0),
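		// The remaining event slices (UpdateWithdrawalDelay, UpdateBucketWithdraw,
		// UpdateBucketsParameters, UpdateTokenExchange, SafeMode) are left nil;
		// appending to a nil slice is valid in Go, so they need no explicit
		// initialization here.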
	}
}

// RollupForgeBatchArgs are the arguments to the ForgeBatch function in the Rollup Smart Contract
type RollupForgeBatchArgs struct {
	NewLastIdx            int64
	NewStRoot             *big.Int
	NewExitRoot           *big.Int
	L1UserTxs             []common.L1Tx
	L1CoordinatorTxs      []common.L1Tx
	L1CoordinatorTxsAuths [][]byte // Authorization for accountCreations for each L1CoordinatorTx
	L2TxsData             []common.L2Tx
	FeeIdxCoordinator     []common.Idx
	// Circuit selector
	VerifierIdx uint8
	L1Batch     bool
	ProofA      [2]*big.Int
	ProofB      [2][2]*big.Int
	ProofC      [2]*big.Int
}

// rollupForgeBatchArgsAux are the arguments to the ForgeBatch function in the Rollup Smart Contract
type rollupForgeBatchArgsAux struct {
	NewLastIdx             *big.Int
	NewStRoot              *big.Int
	NewExitRoot            *big.Int
	EncodedL1CoordinatorTx []byte
	L1L2TxsData            []byte
	FeeIdxCoordinator      []byte
	// Circuit selector
	VerifierIdx uint8
	L1Batch     bool
	ProofA      [2]*big.Int
	ProofB      [2][2]*big.Int
	ProofC      [2]*big.Int
}

// RollupInterface is the interface to the Rollup Smart Contract
type RollupInterface interface {
	//
	// Smart Contract Methods
	//

	// Public Functions

	RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error)
	RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
		deadline *big.Int) (*types.Transaction, error)

	RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot,
		idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction,
		error)
	RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32,
		numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)

	RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int,
		amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
	RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
		depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
		deadline *big.Int) (tx *types.Transaction, err error)

	// Governance Public Functions
	RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error)
	RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (*types.Transaction, error)

	// Viewers
	RollupRegisterTokensCount() (*big.Int, error)
	RollupLastForgedBatch() (int64, error)

	//
	// Smart Contract Status
	//

	RollupConstants() (*common.RollupConstants, error)
	RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
	RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
	RollupEventInit() (*RollupEventInitialize, int64, error)
}

//
// Implementation
//

// RollupClient is the implementation of the interface to the Rollup Smart Contract in ethereum.
type RollupClient struct {
	client      *EthereumClient
	chainID     *big.Int
	address     ethCommon.Address
	tokenHEZCfg TokenConfig
	hermez      *Hermez.Hermez
	tokenHEZ    *HEZ.HEZ
	contractAbi abi.ABI
	opts        *bind.CallOpts
	consts      *common.RollupConstants
}

// NewRollupClient creates a new RollupClient
func NewRollupClient(client *EthereumClient, address ethCommon.Address,
	tokenHEZCfg TokenConfig) (*RollupClient, error) {
	contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI)))
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	hermez, err := Hermez.NewHermez(address, client.Client())
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	tokenHEZ, err := HEZ.NewHEZ(tokenHEZCfg.Address, client.Client())
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	chainID, err := client.EthChainID()
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	c := &RollupClient{
		client:      client,
		chainID:     chainID,
		address:     address,
		tokenHEZCfg: tokenHEZCfg,
		hermez:      hermez,
		tokenHEZ:    tokenHEZ,
		contractAbi: contractAbi,
		opts:        newCallOpts(),
	}
	consts, err := c.RollupConstants()
	if err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("RollupConstants at %v: %w", address, err))
	}
	c.consts = consts
	return c, nil
}

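// A minimal usage sketch: it assumes an *EthereumClient (ec), the rollup
// contract address (rollupAddr) and a TokenConfig for the HEZ token (tokenCfg)
// have been built elsewhere; ec, rollupAddr, tokenCfg and blockNum are
// placeholders, not values defined in this file:
//
//	rollup, err := NewRollupClient(ec, rollupAddr, tokenCfg)
//	if err != nil {
//		// handle error
//	}
//	// Fetch the rollup events emitted at a given block number.
//	events, err := rollup.RollupEventsByBlock(blockNum, nil)
//	if err != nil {
//		// handle error
//	}
//	for _, fb := range events.ForgeBatch {
//		fmt.Println("forged batch", fb.BatchNum)
//	}
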
// RollupForgeBatch is the interface to call the smart contract function.
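// It serializes the batch before sending the transaction: coordinator L1 txs
// are encoded together with their account-creation authorizations
// (BytesCoordinatorTx), user, coordinator and L2 txs are concatenated into a
// single data-availability byte string (BytesDataAvailability, parametrized
// by nLevels), and FeeIdxCoordinator is zero-padded up to
// common.RollupConstMaxFeeIdxCoordinator entries of nLevels/8 bytes each.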
func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
	auth *bind.TransactOpts) (tx *types.Transaction, err error) {
	if auth == nil {
		auth, err = c.client.NewAuth()
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		auth.GasLimit = 1000000
	}
	nLevels := c.consts.Verifiers[args.VerifierIdx].NLevels
	lenBytes := nLevels / 8 //nolint:gomnd
	newLastIdx := big.NewInt(int64(args.NewLastIdx))
	// L1CoordinatorBytes
	var l1CoordinatorBytes []byte
	for i := 0; i < len(args.L1CoordinatorTxs); i++ {
		l1 := args.L1CoordinatorTxs[i]
		bytesl1, err := l1.BytesCoordinatorTx(args.L1CoordinatorTxsAuths[i])
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1CoordinatorBytes = append(l1CoordinatorBytes, bytesl1[:]...)
	}
	// L1L2TxData
	var l1l2TxData []byte
	for i := 0; i < len(args.L1UserTxs); i++ {
		l1User := args.L1UserTxs[i]
		bytesl1User, err := l1User.BytesDataAvailability(uint32(nLevels))
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1l2TxData = append(l1l2TxData, bytesl1User[:]...)
	}
	for i := 0; i < len(args.L1CoordinatorTxs); i++ {
		l1Coord := args.L1CoordinatorTxs[i]
		bytesl1Coord, err := l1Coord.BytesDataAvailability(uint32(nLevels))
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1l2TxData = append(l1l2TxData, bytesl1Coord[:]...)
	}
	for i := 0; i < len(args.L2TxsData); i++ {
		l2 := args.L2TxsData[i]
		bytesl2, err := l2.BytesDataAvailability(uint32(nLevels))
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1l2TxData = append(l1l2TxData, bytesl2[:]...)
	}
	// FeeIdxCoordinator
	var feeIdxCoordinator []byte
	if len(args.FeeIdxCoordinator) > common.RollupConstMaxFeeIdxCoordinator {
		return nil, tracerr.Wrap(fmt.Errorf("len(args.FeeIdxCoordinator) > %v",
			common.RollupConstMaxFeeIdxCoordinator))
	}
	for i := 0; i < common.RollupConstMaxFeeIdxCoordinator; i++ {
		feeIdx := common.Idx(0)
		if i < len(args.FeeIdxCoordinator) {
			feeIdx = args.FeeIdxCoordinator[i]
		}
		bytesFeeIdx, err := feeIdx.Bytes()
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		feeIdxCoordinator = append(feeIdxCoordinator,
			bytesFeeIdx[len(bytesFeeIdx)-int(lenBytes):]...)
	}
	tx, err = c.hermez.ForgeBatch(auth, newLastIdx, args.NewStRoot, args.NewExitRoot,
		l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch,
		args.ProofA, args.ProofB, args.ProofC)
	if err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Hermez.ForgeBatch: %w", err))
	}
	return tx, nil
}

// RollupAddToken is the interface to call the smart contract function.
// `feeAddToken` is the amount of HEZ tokens that will be paid to add the
// token. `feeAddToken` must match the public value of the smart contract.
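// The fee is paid through the HEZ token permit flow: the call fetches the
// owner's nonce, builds a permit digest for (owner, spender, feeAddToken,
// deadline), signs it with the client's keystore account and passes the
// resulting permit bytes to the contract's AddToken call, so that no
// separate approve transaction should be needed.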
func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
	deadline *big.Int) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			owner := c.client.account.Address
			spender := c.address
			nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			tokenName := c.tokenHEZCfg.Name
			tokenAddr := c.tokenHEZCfg.Address
			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
				feeAddToken, nonce, deadline, tokenName)
			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
			permit := createPermit(owner, spender, feeAddToken, deadline, digest,
				signature)
			return c.hermez.AddToken(auth, tokenAddress, permit)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed add Token %w", err))
	}
	return tx, nil
}

// RollupWithdrawMerkleProof is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32,
	numExitRoot, idx int64, amount *big.Int, siblings []*big.Int,
	instantWithdraw bool) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			pkCompB := common.SwapEndianness(fromBJJ[:])
			babyPubKey := new(big.Int).SetBytes(pkCompB)
			numExitRootB := uint32(numExitRoot)
			idxBig := big.NewInt(idx)
			return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey,
				numExitRootB, siblings, idxBig, instantWithdraw)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err))
	}
	return tx, nil
}

// RollupWithdrawCircuit is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int,
	tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction,
	error) {
	log.Error("TODO")
	return nil, tracerr.Wrap(errTODO)
}

// RollupL1UserTxERC20ETH is the interface to call the smart contract function.
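// Before the call, depositAmount and amount are compressed to the Float40
// on-chain format, the compressed BabyJubJub key is byte-swapped
// (common.SwapEndianness) before being packed into the *big.Int passed to the
// contract, and for tokenID 0 (ETH) the deposit amount is also sent as the
// transaction value.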
func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
	depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction,
	err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			var babyPubKey *big.Int
			if fromBJJ != common.EmptyBJJComp {
				pkCompB := common.SwapEndianness(fromBJJ[:])
				babyPubKey = new(big.Int).SetBytes(pkCompB)
			} else {
				babyPubKey = big.NewInt(0)
			}
			fromIdxBig := big.NewInt(fromIdx)
			toIdxBig := big.NewInt(toIdx)
			depositAmountF, err := common.NewFloat40(depositAmount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			amountF, err := common.NewFloat40(amount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			if tokenID == 0 {
				auth.Value = depositAmount
			}
			var permit []byte
			return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig, big.NewInt(int64(depositAmountF)),
				big.NewInt(int64(amountF)), tokenID, toIdxBig, permit)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20/ETH: %w", err))
	}
	return tx, nil
}

// RollupL1UserTxERC20Permit is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
	depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
	deadline *big.Int) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			var babyPubKey *big.Int
			if fromBJJ != common.EmptyBJJComp {
				pkCompB := common.SwapEndianness(fromBJJ[:])
				babyPubKey = new(big.Int).SetBytes(pkCompB)
			} else {
				babyPubKey = big.NewInt(0)
			}
			fromIdxBig := big.NewInt(fromIdx)
			toIdxBig := big.NewInt(toIdx)
			depositAmountF, err := common.NewFloat40(depositAmount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			amountF, err := common.NewFloat40(amount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			if tokenID == 0 {
				auth.Value = depositAmount
			}
			owner := c.client.account.Address
			spender := c.address
			nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			tokenName := c.tokenHEZCfg.Name
			tokenAddr := c.tokenHEZCfg.Address
			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
				amount, nonce, deadline, tokenName)
			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
			permit := createPermit(owner, spender, amount, deadline, digest, signature)
			return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig,
				big.NewInt(int64(depositAmountF)), big.NewInt(int64(amountF)), tokenID, toIdxBig, permit)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20Permit: %w", err))
	}
	return tx, nil
}

// RollupRegisterTokensCount is the interface to call the smart contract function
func (c *RollupClient) RollupRegisterTokensCount() (registerTokensCount *big.Int, err error) {
	if err := c.client.Call(func(ec *ethclient.Client) error {
		registerTokensCount, err = c.hermez.RegisterTokensCount(c.opts)
		return tracerr.Wrap(err)
	}); err != nil {
		return nil, tracerr.Wrap(err)
	}
	return registerTokensCount, nil
}

  507. // RollupLastForgedBatch is the interface to call the smart contract function
  508. func (c *RollupClient) RollupLastForgedBatch() (lastForgedBatch int64, err error) {
  509. if err := c.client.Call(func(ec *ethclient.Client) error {
  510. _lastForgedBatch, err := c.hermez.LastForgedBatch(c.opts)
  511. lastForgedBatch = int64(_lastForgedBatch)
  512. return tracerr.Wrap(err)
  513. }); err != nil {
  514. return 0, tracerr.Wrap(err)
  515. }
  516. return lastForgedBatch, nil
  517. }
  518. // RollupUpdateForgeL1L2BatchTimeout is the interface to call the smart contract function
  519. func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
  520. newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
  521. if tx, err = c.client.CallAuth(
  522. 0,
  523. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  524. return c.hermez.UpdateForgeL1L2BatchTimeout(auth,
  525. uint8(newForgeL1L2BatchTimeout))
  526. },
  527. ); err != nil {
  528. return nil, tracerr.Wrap(fmt.Errorf("Failed update ForgeL1L2BatchTimeout: %w", err))
  529. }
  530. return tx, nil
  531. }
  532. // RollupUpdateFeeAddToken is the interface to call the smart contract function
  533. func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction,
  534. err error) {
  535. if tx, err = c.client.CallAuth(
  536. 0,
  537. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  538. return c.hermez.UpdateFeeAddToken(auth, newFeeAddToken)
  539. },
  540. ); err != nil {
  541. return nil, tracerr.Wrap(fmt.Errorf("Failed update FeeAddToken: %w", err))
  542. }
  543. return tx, nil
  544. }
  545. // RollupUpdateBucketsParameters is the interface to call the smart contract function
  546. func (c *RollupClient) RollupUpdateBucketsParameters(
  547. arrayBuckets []RollupUpdateBucketsParameters,
  548. ) (tx *types.Transaction, err error) {
  549. if tx, err = c.client.CallAuth(
  550. 12500000, //nolint:gomnd
  551. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  552. params := make([]*big.Int, len(arrayBuckets))
  553. for i, bucket := range arrayBuckets {
  554. params[i], err = c.hermez.PackBucket(c.opts,
  555. bucket.CeilUSD, bucket.BlockStamp, bucket.Withdrawals,
  556. bucket.RateBlocks, bucket.RateWithdrawals, bucket.MaxWithdrawals)
  557. if err != nil {
  558. return nil, tracerr.Wrap(fmt.Errorf("failed to pack bucket: %w", err))
  559. }
  560. }
  561. return c.hermez.UpdateBucketsParameters(auth, params)
  562. },
  563. ); err != nil {
  564. return nil, tracerr.Wrap(fmt.Errorf("Failed update Buckets Parameters: %w", err))
  565. }
  566. return tx, nil
  567. }
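
// exampleSetBuckets is a minimal sketch (not part of the original file) of how
// a governance caller might build the slice consumed by
// RollupUpdateBucketsParameters. All numeric values are placeholders; each
// bucket is packed on-chain via PackBucket before UpdateBucketsParameters is
// sent, as implemented above.
func exampleSetBuckets(c *RollupClient) (*types.Transaction, error) {
	buckets := []RollupUpdateBucketsParameters{
		{
			CeilUSD:         big.NewInt(1000), // placeholder bucket ceiling
			BlockStamp:      big.NewInt(0),    // placeholder; typically 0 on update (assumption)
			Withdrawals:     big.NewInt(100),  // placeholder available withdrawals
			RateBlocks:      big.NewInt(10),   // placeholder refill rate in blocks
			RateWithdrawals: big.NewInt(5),    // placeholder withdrawals added per rate period
			MaxWithdrawals:  big.NewInt(100),  // placeholder cap
		},
	}
	return c.RollupUpdateBucketsParameters(buckets)
}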

// RollupUpdateTokenExchange is the interface to call the smart contract function
func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address,
	valueArray []uint64) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			return c.hermez.UpdateTokenExchange(auth, addressArray, valueArray)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed update Token Exchange: %w", err))
	}
	return tx, nil
}

// RollupUpdateWithdrawalDelay is the interface to call the smart contract function
func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction,
	err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			return c.hermez.UpdateWithdrawalDelay(auth, uint64(newWithdrawalDelay))
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawalDelay: %w", err))
	}
	return tx, nil
}

// RollupSafeMode is the interface to call the smart contract function
func (c *RollupClient) RollupSafeMode() (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			return c.hermez.SafeMode(auth)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed update Safe Mode: %w", err))
	}
	return tx, nil
}

// RollupInstantWithdrawalViewer is the interface to call the smart contract function
func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address,
	amount *big.Int) (instantAllowed bool, err error) {
	if err := c.client.Call(func(ec *ethclient.Client) error {
		instantAllowed, err = c.hermez.InstantWithdrawalViewer(c.opts, tokenAddress, amount)
		return tracerr.Wrap(err)
	}); err != nil {
		return false, tracerr.Wrap(err)
	}
	return instantAllowed, nil
}

// RollupConstants returns the Constants of the Rollup Smart Contract
func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstants, err error) {
	rollupConstants = new(common.RollupConstants)
	if err := c.client.Call(func(ec *ethclient.Client) error {
		absoluteMaxL1L2BatchTimeout, err := c.hermez.ABSOLUTEMAXL1L2BATCHTIMEOUT(c.opts)
		if err != nil {
			return tracerr.Wrap(err)
		}
		rollupConstants.AbsoluteMaxL1L2BatchTimeout = int64(absoluteMaxL1L2BatchTimeout)
		rollupConstants.TokenHEZ, err = c.hermez.TokenHEZ(c.opts)
		if err != nil {
			return tracerr.Wrap(err)
		}
		rollupVerifiersLength, err := c.hermez.RollupVerifiersLength(c.opts)
		if err != nil {
			return tracerr.Wrap(err)
		}
		for i := int64(0); i < rollupVerifiersLength.Int64(); i++ {
			var newRollupVerifier common.RollupVerifierStruct
			rollupVerifier, err := c.hermez.RollupVerifiers(c.opts, big.NewInt(i))
			if err != nil {
				return tracerr.Wrap(err)
			}
			newRollupVerifier.MaxTx = rollupVerifier.MaxTx.Int64()
			newRollupVerifier.NLevels = rollupVerifier.NLevels.Int64()
			rollupConstants.Verifiers = append(rollupConstants.Verifiers,
				newRollupVerifier)
		}
		rollupConstants.HermezAuctionContract, err = c.hermez.HermezAuctionContract(c.opts)
		if err != nil {
			return tracerr.Wrap(err)
		}
		rollupConstants.HermezGovernanceAddress, err = c.hermez.HermezGovernanceAddress(c.opts)
		if err != nil {
			return tracerr.Wrap(err)
		}
		rollupConstants.WithdrawDelayerContract, err = c.hermez.WithdrawDelayerContract(c.opts)
		return tracerr.Wrap(err)
	}); err != nil {
		return nil, tracerr.Wrap(err)
	}
	return rollupConstants, nil
}
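
// exampleVerifierNLevels is a minimal sketch (not part of the original file)
// showing how the verifier list returned by RollupConstants is typically
// consumed: RollupForgeBatchArgs below indexes Verifiers by VerifierIdx to
// recover the NLevels value used to decode the batch data.
func exampleVerifierNLevels(c *RollupClient, verifierIdx uint8) (int64, error) {
	consts, err := c.RollupConstants()
	if err != nil {
		return 0, tracerr.Wrap(err)
	}
	if int(verifierIdx) >= len(consts.Verifiers) {
		return 0, tracerr.Wrap(fmt.Errorf("verifierIdx %d out of range", verifierIdx))
	}
	return consts.Verifiers[verifierIdx].NLevels, nil
}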

var (
	logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte(
		"L1UserTxEvent(uint32,uint8,bytes)"))
	logHermezAddToken = crypto.Keccak256Hash([]byte(
		"AddToken(address,uint32)"))
	logHermezForgeBatch = crypto.Keccak256Hash([]byte(
		"ForgeBatch(uint32,uint16)"))
	logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte(
		"UpdateForgeL1L2BatchTimeout(uint8)"))
	logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte(
		"UpdateFeeAddToken(uint256)"))
	logHermezWithdrawEvent = crypto.Keccak256Hash([]byte(
		"WithdrawEvent(uint48,uint32,bool)"))
	logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte(
		"UpdateBucketWithdraw(uint8,uint256,uint256)"))
	logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte(
		"UpdateWithdrawalDelay(uint64)"))
	logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte(
		"UpdateBucketsParameters(uint256[])"))
	logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte(
		"UpdateTokenExchange(address[],uint64[])"))
	logHermezSafeMode = crypto.Keccak256Hash([]byte(
		"SafeMode()"))
	logHermezInitialize = crypto.Keccak256Hash([]byte(
		"InitializeHermezEvent(uint8,uint256,uint64)"))
)
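
// exampleFilterForgeBatch is a minimal sketch (not part of the original file)
// showing how these topic hashes are used: each is the Keccak256 of the
// canonical event signature, and passing one as the first topic of a
// FilterQuery restricts the node to logs of that single event type (the same
// pattern RollupEventInit uses below). The block range here is illustrative.
func exampleFilterForgeBatch(c *RollupClient, fromBlock, toBlock int64) ([]types.Log, error) {
	query := ethereum.FilterQuery{
		FromBlock: big.NewInt(fromBlock),
		ToBlock:   big.NewInt(toBlock),
		Addresses: []ethCommon.Address{c.address},
		Topics:    [][]ethCommon.Hash{{logHermezForgeBatch}},
	}
	logs, err := c.client.client.FilterLogs(context.Background(), query)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	return logs, nil
}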

// RollupEventInit returns the initialize event with its corresponding block number
func (c *RollupClient) RollupEventInit() (*RollupEventInitialize, int64, error) {
	query := ethereum.FilterQuery{
		Addresses: []ethCommon.Address{
			c.address,
		},
		Topics: [][]ethCommon.Hash{{logHermezInitialize}},
	}
	logs, err := c.client.client.FilterLogs(context.Background(), query)
	if err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if len(logs) != 1 {
		return nil, 0, tracerr.Wrap(fmt.Errorf("no event of type InitializeHermezEvent found"))
	}
	vLog := logs[0]
	if vLog.Topics[0] != logHermezInitialize {
		return nil, 0, tracerr.Wrap(fmt.Errorf("event is not InitializeHermezEvent"))
	}
	var rollupInit RollupEventInitialize
	if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent",
		vLog.Data); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	return &rollupInit, int64(vLog.BlockNumber), tracerr.Wrap(err)
}

// RollupEventsByBlock returns the events in a block that happened in the
// Rollup Smart Contract.
// To query by blockNum, set blockNum >= 0 and blockHash == nil.
// To query by blockHash set blockHash != nil, and blockNum will be ignored.
// If there are no events in that block the result is nil.
func (c *RollupClient) RollupEventsByBlock(blockNum int64,
	blockHash *ethCommon.Hash) (*RollupEvents, error) {
	var rollupEvents RollupEvents
	var blockNumBigInt *big.Int
	if blockHash == nil {
		blockNumBigInt = big.NewInt(blockNum)
	}
	query := ethereum.FilterQuery{
		BlockHash: blockHash,
		FromBlock: blockNumBigInt,
		ToBlock:   blockNumBigInt,
		Addresses: []ethCommon.Address{
			c.address,
		},
		Topics: [][]ethCommon.Hash{},
	}
	logs, err := c.client.client.FilterLogs(context.Background(), query)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	if len(logs) == 0 {
		return nil, nil
	}
	for _, vLog := range logs {
		if blockHash != nil && vLog.BlockHash != *blockHash {
			log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got", vLog.BlockHash.String())
			return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
		}
		switch vLog.Topics[0] {
		case logHermezL1UserTxEvent:
			var L1UserTxAux rollupEventL1UserTxAux
			var L1UserTx RollupEventL1UserTx
			err := c.contractAbi.UnpackIntoInterface(&L1UserTxAux, "L1UserTxEvent", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			L1Tx, err := common.L1UserTxFromBytes(L1UserTxAux.L1UserTx)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			toForgeL1TxsNum := new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
			L1Tx.ToForgeL1TxsNum = &toForgeL1TxsNum
			L1Tx.Position = int(new(big.Int).SetBytes(vLog.Topics[2][:]).Int64())
			L1Tx.UserOrigin = true
			L1UserTx.L1UserTx = *L1Tx
			rollupEvents.L1UserTx = append(rollupEvents.L1UserTx, L1UserTx)
		case logHermezAddToken:
			var addToken RollupEventAddToken
			err := c.contractAbi.UnpackIntoInterface(&addToken, "AddToken", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			addToken.TokenAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
			rollupEvents.AddToken = append(rollupEvents.AddToken, addToken)
		case logHermezForgeBatch:
			var forgeBatch RollupEventForgeBatch
			err := c.contractAbi.UnpackIntoInterface(&forgeBatch, "ForgeBatch", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			forgeBatch.BatchNum = new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
			forgeBatch.EthTxHash = vLog.TxHash
			// forgeBatch.Sender = vLog.Address
			rollupEvents.ForgeBatch = append(rollupEvents.ForgeBatch, forgeBatch)
		case logHermezUpdateForgeL1L2BatchTimeout:
			var updateForgeL1L2BatchTimeout struct {
				NewForgeL1L2BatchTimeout uint8
			}
			err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout,
				"UpdateForgeL1L2BatchTimeout", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			rollupEvents.UpdateForgeL1L2BatchTimeout = append(rollupEvents.UpdateForgeL1L2BatchTimeout,
				RollupEventUpdateForgeL1L2BatchTimeout{
					NewForgeL1L2BatchTimeout: int64(updateForgeL1L2BatchTimeout.NewForgeL1L2BatchTimeout),
				})
		case logHermezUpdateFeeAddToken:
			var updateFeeAddToken RollupEventUpdateFeeAddToken
			err := c.contractAbi.UnpackIntoInterface(&updateFeeAddToken, "UpdateFeeAddToken", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			rollupEvents.UpdateFeeAddToken = append(rollupEvents.UpdateFeeAddToken, updateFeeAddToken)
		case logHermezWithdrawEvent:
			var withdraw RollupEventWithdraw
			withdraw.Idx = new(big.Int).SetBytes(vLog.Topics[1][:]).Uint64()
			withdraw.NumExitRoot = new(big.Int).SetBytes(vLog.Topics[2][:]).Uint64()
			instantWithdraw := new(big.Int).SetBytes(vLog.Topics[3][:]).Uint64()
			if instantWithdraw == 1 {
				withdraw.InstantWithdraw = true
			}
			withdraw.TxHash = vLog.TxHash
			rollupEvents.Withdraw = append(rollupEvents.Withdraw, withdraw)
		case logHermezUpdateBucketWithdraw:
			var updateBucketWithdrawAux rollupEventUpdateBucketWithdrawAux
			var updateBucketWithdraw RollupEventUpdateBucketWithdraw
			err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux,
				"UpdateBucketWithdraw", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			updateBucketWithdraw.Withdrawals = updateBucketWithdrawAux.Withdrawals
			updateBucketWithdraw.NumBucket = int(new(big.Int).SetBytes(vLog.Topics[1][:]).Int64())
			updateBucketWithdraw.BlockStamp = new(big.Int).SetBytes(vLog.Topics[2][:]).Int64()
			rollupEvents.UpdateBucketWithdraw =
				append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
		case logHermezUpdateWithdrawalDelay:
			var withdrawalDelay RollupEventUpdateWithdrawalDelay
			err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, "UpdateWithdrawalDelay", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			rollupEvents.UpdateWithdrawalDelay = append(rollupEvents.UpdateWithdrawalDelay, withdrawalDelay)
		case logHermezUpdateBucketsParameters:
			var bucketsParametersAux rollupEventUpdateBucketsParametersAux
			var bucketsParameters RollupEventUpdateBucketsParameters
			err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux,
				"UpdateBucketsParameters", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			bucketsParameters.ArrayBuckets = make([]RollupUpdateBucketsParameters, len(bucketsParametersAux.ArrayBuckets))
			for i, bucket := range bucketsParametersAux.ArrayBuckets {
				bucket, err := c.hermez.UnpackBucket(c.opts, bucket)
				if err != nil {
					return nil, tracerr.Wrap(err)
				}
				bucketsParameters.ArrayBuckets[i].CeilUSD = bucket.CeilUSD
				bucketsParameters.ArrayBuckets[i].BlockStamp = bucket.BlockStamp
				bucketsParameters.ArrayBuckets[i].Withdrawals = bucket.Withdrawals
				bucketsParameters.ArrayBuckets[i].RateBlocks = bucket.RateBlocks
				bucketsParameters.ArrayBuckets[i].RateWithdrawals = bucket.RateWithdrawals
				bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket.MaxWithdrawals
			}
			rollupEvents.UpdateBucketsParameters =
				append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
		case logHermezUpdateTokenExchange:
			var tokensExchange RollupEventUpdateTokenExchange
			err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			rollupEvents.UpdateTokenExchange = append(rollupEvents.UpdateTokenExchange, tokensExchange)
		case logHermezSafeMode:
			var safeMode RollupEventSafeMode
			rollupEvents.SafeMode = append(rollupEvents.SafeMode, safeMode)
			// Also add an UpdateBucketsParameter with
			// SafeMode=true to keep the order between `safeMode`
			// and `UpdateBucketsParameters`
			bucketsParameters := RollupEventUpdateBucketsParameters{
				SafeMode: true,
			}
			for i := range bucketsParameters.ArrayBuckets {
				bucketsParameters.ArrayBuckets[i].CeilUSD = big.NewInt(0)
				bucketsParameters.ArrayBuckets[i].BlockStamp = big.NewInt(0)
				bucketsParameters.ArrayBuckets[i].Withdrawals = big.NewInt(0)
				bucketsParameters.ArrayBuckets[i].RateBlocks = big.NewInt(0)
				bucketsParameters.ArrayBuckets[i].RateWithdrawals = big.NewInt(0)
				bucketsParameters.ArrayBuckets[i].MaxWithdrawals = big.NewInt(0)
			}
			rollupEvents.UpdateBucketsParameters = append(rollupEvents.UpdateBucketsParameters,
				bucketsParameters)
		}
	}
	return &rollupEvents, nil
}
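
// exampleEventsByHash is a minimal usage sketch (not part of the original
// file). Passing a block hash makes blockNum irrelevant and pins the filter to
// that exact block, and the loop above additionally rejects any log whose
// BlockHash does not match the requested one. The helper simply distinguishes
// "no events in this block" (nil, nil) from an actual error.
func exampleEventsByHash(c *RollupClient, blockHash ethCommon.Hash) (*RollupEvents, error) {
	// blockNum is ignored when blockHash != nil, so any value can be passed.
	events, err := c.RollupEventsByBlock(0, &blockHash)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	if events == nil {
		// The block produced no Rollup events.
		return &RollupEvents{}, nil
	}
	return events, nil
}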

// RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
// Rollup Smart Contract in the given transaction, and the sender address.
func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
	l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
	tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
	if err != nil {
		return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
	}
	txData := tx.Data()
	method, err := c.contractAbi.MethodById(txData[:4])
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	receipt, err := c.client.client.TransactionReceipt(context.Background(), ethTxHash)
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	sender, err := c.client.client.TransactionSender(context.Background(), tx,
		receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	var aux rollupForgeBatchArgsAux
	if values, err := method.Inputs.Unpack(txData[4:]); err != nil {
		return nil, nil, tracerr.Wrap(err)
	} else if err := method.Inputs.Copy(&aux, values); err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	rollupForgeBatchArgs := RollupForgeBatchArgs{
		L1Batch:               aux.L1Batch,
		NewExitRoot:           aux.NewExitRoot,
		NewLastIdx:            aux.NewLastIdx.Int64(),
		NewStRoot:             aux.NewStRoot,
		ProofA:                aux.ProofA,
		ProofB:                aux.ProofB,
		ProofC:                aux.ProofC,
		VerifierIdx:           aux.VerifierIdx,
		L1CoordinatorTxs:      []common.L1Tx{},
		L1CoordinatorTxsAuths: [][]byte{},
		L2TxsData:             []common.L2Tx{},
		FeeIdxCoordinator:     []common.Idx{},
	}
	nLevels := c.consts.Verifiers[rollupForgeBatchArgs.VerifierIdx].NLevels
	lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1) //nolint:gomnd
	numBytesL1TxUser := int(l1UserTxsLen) * lenL1L2TxsBytes
	numTxsL1Coord := len(aux.EncodedL1CoordinatorTx) / common.RollupConstL1CoordinatorTotalBytes
	numBytesL1TxCoord := numTxsL1Coord * lenL1L2TxsBytes
	numBeginL2Tx := numBytesL1TxCoord + numBytesL1TxUser
	l1UserTxsData := []byte{}
	if l1UserTxsLen > 0 {
		l1UserTxsData = aux.L1L2TxsData[:numBytesL1TxUser]
	}
	for i := 0; i < int(l1UserTxsLen); i++ {
		l1Tx, err :=
			common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
				uint32(nLevels))
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		rollupForgeBatchArgs.L1UserTxs = append(rollupForgeBatchArgs.L1UserTxs, *l1Tx)
	}
	l2TxsData := []byte{}
	if numBeginL2Tx < len(aux.L1L2TxsData) {
		l2TxsData = aux.L1L2TxsData[numBeginL2Tx:]
	}
	numTxsL2 := len(l2TxsData) / lenL1L2TxsBytes
	for i := 0; i < numTxsL2; i++ {
		l2Tx, err :=
			common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
				int(nLevels))
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		rollupForgeBatchArgs.L2TxsData = append(rollupForgeBatchArgs.L2TxsData, *l2Tx)
	}
	for i := 0; i < numTxsL1Coord; i++ {
		bytesL1Coordinator :=
			aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes] //nolint:lll
		var signature []byte
		v := bytesL1Coordinator[0]
		s := bytesL1Coordinator[1:33]
		r := bytesL1Coordinator[33:65]
		signature = append(signature, r[:]...)
		signature = append(signature, s[:]...)
		signature = append(signature, v)
		l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, c.chainID, c.address)
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		rollupForgeBatchArgs.L1CoordinatorTxs = append(rollupForgeBatchArgs.L1CoordinatorTxs, *l1Tx)
		rollupForgeBatchArgs.L1CoordinatorTxsAuths =
			append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
	}
	lenFeeIdxCoordinatorBytes := int(nLevels / 8) //nolint:gomnd
	numFeeIdxCoordinator := len(aux.FeeIdxCoordinator) / lenFeeIdxCoordinatorBytes
	for i := 0; i < numFeeIdxCoordinator; i++ {
		var paddedFeeIdx [6]byte
		// TODO: This check is not necessary: the first case will always work. Test it
		// before removing the if.
		if lenFeeIdxCoordinatorBytes < common.IdxBytesLen {
			copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:],
				aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
		} else {
			copy(paddedFeeIdx[:],
				aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
		}
		feeIdxCoordinator, err := common.IdxFromBytes(paddedFeeIdx[:])
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		if feeIdxCoordinator != common.Idx(0) {
			rollupForgeBatchArgs.FeeIdxCoordinator =
				append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
		}
	}
	return &rollupForgeBatchArgs, &sender, nil
}
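
// exampleForgeBatchArgsFromEvent is a minimal sketch (not part of the original
// file) tying the pieces above together: a ForgeBatch event carries the hash
// of the forging transaction (EthTxHash), which RollupForgeBatchArgs decodes
// into the full forge arguments plus the sender address. l1UserTxsLen must be
// provided by the caller; here it is simply passed through.
func exampleForgeBatchArgsFromEvent(c *RollupClient, ev RollupEventForgeBatch,
	l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
	args, sender, err := c.RollupForgeBatchArgs(ev.EthTxHash, l1UserTxsLen)
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	return args, sender, nil
}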