package eth

import (
	"context"
	"fmt"
	"math/big"
	"strconv"
	"strings"

	"github.com/ethereum/go-ethereum"
	"github.com/ethereum/go-ethereum/accounts/abi"
	"github.com/ethereum/go-ethereum/accounts/abi/bind"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/crypto"
	"github.com/ethereum/go-ethereum/ethclient"
	"github.com/hermeznetwork/hermez-node/common"
	Hermez "github.com/hermeznetwork/hermez-node/eth/contracts/hermez"
	HEZ "github.com/hermeznetwork/hermez-node/eth/contracts/tokenHEZ"
	"github.com/hermeznetwork/hermez-node/log"
	"github.com/hermeznetwork/tracerr"
	"github.com/iden3/go-iden3-crypto/babyjub"
)

// QueueStruct is the queue of L1Txs for a batch
type QueueStruct struct {
	L1TxQueue []common.L1Tx
	TotalL1TxFee *big.Int
}

// NewQueueStruct creates a new clear QueueStruct.
func NewQueueStruct() *QueueStruct {
	return &QueueStruct{
		L1TxQueue: make([]common.L1Tx, 0),
		TotalL1TxFee: big.NewInt(0),
	}
}

// RollupState represents the state of the Rollup in the Smart Contract
type RollupState struct {
	StateRoot *big.Int
	ExitRoots []*big.Int
	// ExitNullifierMap map[[256 / 8]byte]bool
	ExitNullifierMap map[int64]map[int64]bool // batchNum -> idx -> bool
	TokenList []ethCommon.Address
	TokenMap map[ethCommon.Address]bool
	MapL1TxQueue map[int64]*QueueStruct
	LastL1L2Batch int64
	CurrentToForgeL1TxsNum int64
	LastToForgeL1TxsNum int64
	CurrentIdx int64
}

// RollupEventInitialize is the InitializeHermezEvent event of the
// Smart Contract
type RollupEventInitialize struct {
	ForgeL1L2BatchTimeout uint8
	FeeAddToken *big.Int
	WithdrawalDelay uint64
}

// RollupVariables returns the RollupVariables from the initialize event
func (ei *RollupEventInitialize) RollupVariables() *common.RollupVariables {
	var buckets [common.RollupConstNumBuckets]common.BucketParams
	for i := range buckets {
		buckets[i] = common.BucketParams{
			CeilUSD: big.NewInt(0),
			Withdrawals: big.NewInt(0),
			BlockWithdrawalRate: big.NewInt(0),
			MaxWithdrawals: big.NewInt(0),
		}
	}
	return &common.RollupVariables{
		EthBlockNum: 0,
		FeeAddToken: ei.FeeAddToken,
		ForgeL1L2BatchTimeout: int64(ei.ForgeL1L2BatchTimeout),
		WithdrawalDelay: ei.WithdrawalDelay,
		Buckets: buckets,
		SafeMode: false,
	}
}

// RollupEventL1UserTx is an event of the Rollup Smart Contract
type RollupEventL1UserTx struct {
	// ToForgeL1TxsNum int64 // QueueIndex *big.Int
	// Position int // TransactionIndex *big.Int
	L1UserTx common.L1Tx
}

// rollupEventL1UserTxAux is an event of the Rollup Smart Contract
type rollupEventL1UserTxAux struct {
	ToForgeL1TxsNum uint64 // QueueIndex *big.Int
	Position uint8 // TransactionIndex *big.Int
	L1UserTx []byte
}

// RollupEventAddToken is an event of the Rollup Smart Contract
type RollupEventAddToken struct {
	TokenAddress ethCommon.Address
	TokenID uint32
}

// RollupEventForgeBatch is an event of the Rollup Smart Contract
type RollupEventForgeBatch struct {
	BatchNum int64
	// Sender ethCommon.Address
	EthTxHash ethCommon.Hash
	L1UserTxsLen uint16
}

// RollupEventUpdateForgeL1L2BatchTimeout is an event of the Rollup Smart Contract
type RollupEventUpdateForgeL1L2BatchTimeout struct {
	NewForgeL1L2BatchTimeout int64
}

// RollupEventUpdateFeeAddToken is an event of the Rollup Smart Contract
type RollupEventUpdateFeeAddToken struct {
	NewFeeAddToken *big.Int
}

// RollupEventWithdraw is an event of the Rollup Smart Contract
type RollupEventWithdraw struct {
	Idx uint64
	NumExitRoot uint64
	InstantWithdraw bool
	TxHash ethCommon.Hash // Hash of the transaction that generated this event
}

type rollupEventUpdateBucketWithdrawAux struct {
	NumBucket uint8
	BlockStamp *big.Int
	Withdrawals *big.Int
}

// RollupEventUpdateBucketWithdraw is an event of the Rollup Smart Contract
type RollupEventUpdateBucketWithdraw struct {
	NumBucket int
	BlockStamp int64 // blockNum
	Withdrawals *big.Int
}

// RollupEventUpdateWithdrawalDelay is an event of the Rollup Smart Contract
type RollupEventUpdateWithdrawalDelay struct {
	NewWithdrawalDelay uint64
}

// RollupUpdateBucketsParameters are the bucket parameters used in an update
type RollupUpdateBucketsParameters struct {
	CeilUSD *big.Int
	Withdrawals *big.Int
	BlockWithdrawalRate *big.Int
	MaxWithdrawals *big.Int
}

type rollupEventUpdateBucketsParametersAux struct {
	ArrayBuckets [common.RollupConstNumBuckets][4]*big.Int
}

// RollupEventUpdateBucketsParameters is an event of the Rollup Smart Contract
type RollupEventUpdateBucketsParameters struct {
	// ArrayBuckets [common.RollupConstNumBuckets][4]*big.Int
	ArrayBuckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
	SafeMode bool
}

// RollupEventUpdateTokenExchange is an event of the Rollup Smart Contract
type RollupEventUpdateTokenExchange struct {
	AddressArray []ethCommon.Address
	ValueArray []uint64
}

// RollupEventSafeMode is an event of the Rollup Smart Contract
type RollupEventSafeMode struct {
}

// RollupEvents is the list of events in a block of the Rollup Smart Contract
type RollupEvents struct {
	L1UserTx []RollupEventL1UserTx
	AddToken []RollupEventAddToken
	ForgeBatch []RollupEventForgeBatch
	UpdateForgeL1L2BatchTimeout []RollupEventUpdateForgeL1L2BatchTimeout
	UpdateFeeAddToken []RollupEventUpdateFeeAddToken
	Withdraw []RollupEventWithdraw
	UpdateWithdrawalDelay []RollupEventUpdateWithdrawalDelay
	UpdateBucketWithdraw []RollupEventUpdateBucketWithdraw
	UpdateBucketsParameters []RollupEventUpdateBucketsParameters
	UpdateTokenExchange []RollupEventUpdateTokenExchange
	SafeMode []RollupEventSafeMode
}

// NewRollupEvents creates an empty RollupEvents with the slices initialized.
func NewRollupEvents() RollupEvents {
	return RollupEvents{
		L1UserTx: make([]RollupEventL1UserTx, 0),
		AddToken: make([]RollupEventAddToken, 0),
		ForgeBatch: make([]RollupEventForgeBatch, 0),
		UpdateForgeL1L2BatchTimeout: make([]RollupEventUpdateForgeL1L2BatchTimeout, 0),
		UpdateFeeAddToken: make([]RollupEventUpdateFeeAddToken, 0),
		Withdraw: make([]RollupEventWithdraw, 0),
	}
}

// RollupForgeBatchArgs are the arguments to the ForgeBatch function in the Rollup Smart Contract
type RollupForgeBatchArgs struct {
	NewLastIdx int64
	NewStRoot *big.Int
	NewExitRoot *big.Int
	L1UserTxs []common.L1Tx
	L1CoordinatorTxs []common.L1Tx
	L1CoordinatorTxsAuths [][]byte // Authorization for accountCreations for each L1CoordinatorTx
	L2TxsData []common.L2Tx
	FeeIdxCoordinator []common.Idx
	// Circuit selector
	VerifierIdx uint8
	L1Batch bool
	ProofA [2]*big.Int
	ProofB [2][2]*big.Int
	ProofC [2]*big.Int
}

// rollupForgeBatchArgsAux are the arguments to the ForgeBatch function in the Rollup Smart Contract
type rollupForgeBatchArgsAux struct {
	NewLastIdx *big.Int
	NewStRoot *big.Int
	NewExitRoot *big.Int
	EncodedL1CoordinatorTx []byte
	L1L2TxsData []byte
	FeeIdxCoordinator []byte
	// Circuit selector
	VerifierIdx uint8
	L1Batch bool
	ProofA [2]*big.Int
	ProofB [2][2]*big.Int
	ProofC [2]*big.Int
}

// RollupInterface is the interface to the Rollup Smart Contract
type RollupInterface interface {
	//
	// Smart Contract Methods
	//

	// Public Functions

	RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error)
	RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
		deadline *big.Int) (*types.Transaction, error)

	RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot,
		idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction,
		error)
	RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32,
		numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)

	RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int,
		amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
	RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
		depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
		deadline *big.Int) (tx *types.Transaction, err error)

	// Governance Public Functions
	RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error)
	RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (*types.Transaction, error)

	// Viewers
	RollupRegisterTokensCount() (*big.Int, error)
	RollupLastForgedBatch() (int64, error)

	//
	// Smart Contract Status
	//

	RollupConstants() (*common.RollupConstants, error)
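	// RollupEventsByBlock returns the events in a block. It can be called
	// with either blockNum or blockHash, but not both at the same time;
	// querying by blockHash ensures that even when no logs are returned,
	// the result corresponds to the requested block and not to an uncle.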
	RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
	RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
	RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error)
}

//
// Implementation
//

// RollupClient is the implementation of the interface to the Rollup Smart Contract in ethereum.
type RollupClient struct {
	client *EthereumClient
	chainID *big.Int
	address ethCommon.Address
	tokenHEZCfg TokenConfig
	hermez *Hermez.Hermez
	tokenHEZ *HEZ.HEZ
	contractAbi abi.ABI
	opts *bind.CallOpts
	consts *common.RollupConstants
}

// NewRollupClient creates a new RollupClient
func NewRollupClient(client *EthereumClient, address ethCommon.Address,
	tokenHEZCfg TokenConfig) (*RollupClient, error) {
	contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI)))
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	hermez, err := Hermez.NewHermez(address, client.Client())
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	tokenHEZ, err := HEZ.NewHEZ(tokenHEZCfg.Address, client.Client())
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	chainID, err := client.EthChainID()
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	c := &RollupClient{
		client: client,
		chainID: chainID,
		address: address,
		tokenHEZCfg: tokenHEZCfg,
		hermez: hermez,
		tokenHEZ: tokenHEZ,
		contractAbi: contractAbi,
		opts: newCallOpts(),
	}
	consts, err := c.RollupConstants()
	if err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("RollupConstants at %v: %w", address, err))
	}
	c.consts = consts
	return c, nil
}
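
// Illustrative usage (sketch, not part of the original file): given an already
// configured *EthereumClient and the HEZ TokenConfig, a client bound to the
// Rollup contract deployed at rollupAddr could be created and queried as:
//
//	rollup, err := NewRollupClient(ethClient, rollupAddr, tokenHEZCfg)
//	if err != nil {
//		// handle error
//	}
//	events, err := rollup.RollupEventsByBlock(blockNum, nil)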

// RollupForgeBatch is the interface to call the smart contract function
func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
	auth *bind.TransactOpts) (tx *types.Transaction, err error) {
	if auth == nil {
		auth, err = c.client.NewAuth()
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		auth.GasLimit = 1000000
	}
	nLevels := c.consts.Verifiers[args.VerifierIdx].NLevels
	lenBytes := nLevels / 8 //nolint:gomnd
	newLastIdx := big.NewInt(int64(args.NewLastIdx))
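	// Calldata layout: l1CoordinatorBytes packs each coordinator tx together
	// with its account-creation authorization; l1l2TxData concatenates the
	// data-availability encoding of the L1 user txs, then the L1 coordinator
	// txs, then the L2 txs; feeIdxCoordinator is padded to
	// RollupConstMaxFeeIdxCoordinator entries of nLevels/8 bytes each.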
	// L1CoordinatorBytes
	var l1CoordinatorBytes []byte
	for i := 0; i < len(args.L1CoordinatorTxs); i++ {
		l1 := args.L1CoordinatorTxs[i]
		bytesl1, err := l1.BytesCoordinatorTx(args.L1CoordinatorTxsAuths[i])
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1CoordinatorBytes = append(l1CoordinatorBytes, bytesl1[:]...)
	}
	// L1L2TxData
	var l1l2TxData []byte
	for i := 0; i < len(args.L1UserTxs); i++ {
		l1User := args.L1UserTxs[i]
		bytesl1User, err := l1User.BytesDataAvailability(uint32(nLevels))
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1l2TxData = append(l1l2TxData, bytesl1User[:]...)
	}
	for i := 0; i < len(args.L1CoordinatorTxs); i++ {
		l1Coord := args.L1CoordinatorTxs[i]
		bytesl1Coord, err := l1Coord.BytesDataAvailability(uint32(nLevels))
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1l2TxData = append(l1l2TxData, bytesl1Coord[:]...)
	}
	for i := 0; i < len(args.L2TxsData); i++ {
		l2 := args.L2TxsData[i]
		bytesl2, err := l2.BytesDataAvailability(uint32(nLevels))
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		l1l2TxData = append(l1l2TxData, bytesl2[:]...)
	}
	// FeeIdxCoordinator
	var feeIdxCoordinator []byte
	if len(args.FeeIdxCoordinator) > common.RollupConstMaxFeeIdxCoordinator {
		return nil, tracerr.Wrap(fmt.Errorf("len(args.FeeIdxCoordinator) > %v",
			common.RollupConstMaxFeeIdxCoordinator))
	}
	for i := 0; i < common.RollupConstMaxFeeIdxCoordinator; i++ {
		feeIdx := common.Idx(0)
		if i < len(args.FeeIdxCoordinator) {
			feeIdx = args.FeeIdxCoordinator[i]
		}
		bytesFeeIdx, err := feeIdx.Bytes()
		if err != nil {
			return nil, tracerr.Wrap(err)
		}
		feeIdxCoordinator = append(feeIdxCoordinator,
			bytesFeeIdx[len(bytesFeeIdx)-int(lenBytes):]...)
	}
	tx, err = c.hermez.ForgeBatch(auth, newLastIdx, args.NewStRoot, args.NewExitRoot,
		l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch,
		args.ProofA, args.ProofB, args.ProofC)
	if err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Hermez.ForgeBatch: %w", err))
	}
	return tx, nil
}

// RollupAddToken is the interface to call the smart contract function.
// `feeAddToken` is the amount of HEZ tokens that will be paid to add the
// token. `feeAddToken` must match the public value of the smart contract.
func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
	deadline *big.Int) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			owner := c.client.account.Address
			spender := c.address
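			// The HEZ fee is authorized with an off-chain permit
			// signature (EIP-2612 style), so no separate approve
			// transaction is needed: the digest commits to the
			// owner, spender, fee amount, nonce and deadline.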
			nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			tokenName := c.tokenHEZCfg.Name
			tokenAddr := c.tokenHEZCfg.Address
			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
				feeAddToken, nonce, deadline, tokenName)
			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
			permit := createPermit(owner, spender, feeAddToken, deadline, digest,
				signature)
			return c.hermez.AddToken(auth, tokenAddress, permit)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed add Token %w", err))
	}
	return tx, nil
}
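
// Illustrative call (sketch, not part of the original file): assuming a
// *RollupClient `rollup`, a token at tokenAddr, the fee currently published
// by the contract, and a permit deadline (unix timestamp):
//
//	tx, err := rollup.RollupAddToken(tokenAddr, feeAddToken, deadline)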

// RollupWithdrawMerkleProof is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32,
	numExitRoot, idx int64, amount *big.Int, siblings []*big.Int,
	instantWithdraw bool) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			pkCompB := common.SwapEndianness(fromBJJ[:])
			babyPubKey := new(big.Int).SetBytes(pkCompB)
			numExitRootB := uint32(numExitRoot)
			idxBig := big.NewInt(idx)
			return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey,
				numExitRootB, siblings, idxBig, instantWithdraw)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err))
	}
	return tx, nil
}

// RollupWithdrawCircuit is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int,
	tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction,
	error) {
	log.Error("TODO")
	return nil, tracerr.Wrap(errTODO)
}

// RollupL1UserTxERC20ETH is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
	depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction,
	err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			var babyPubKey *big.Int
			if fromBJJ != common.EmptyBJJComp {
				pkCompB := common.SwapEndianness(fromBJJ[:])
				babyPubKey = new(big.Int).SetBytes(pkCompB)
			} else {
				babyPubKey = big.NewInt(0)
			}
			fromIdxBig := big.NewInt(fromIdx)
			toIdxBig := big.NewInt(toIdx)
			depositAmountF, err := common.NewFloat40(depositAmount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			amountF, err := common.NewFloat40(amount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
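			// tokenID 0 denotes ETH: the deposit is sent as the
			// transaction value rather than as an ERC20 transfer.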
			if tokenID == 0 {
				auth.Value = depositAmount
			}
			var permit []byte
			return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig, uint16(depositAmountF),
				uint16(amountF), tokenID, toIdxBig, permit)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20/ETH: %w", err))
	}
	return tx, nil
}

// RollupL1UserTxERC20Permit is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
	depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
	deadline *big.Int) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
			var babyPubKey *big.Int
			if fromBJJ != common.EmptyBJJComp {
				pkCompB := common.SwapEndianness(fromBJJ[:])
				babyPubKey = new(big.Int).SetBytes(pkCompB)
			} else {
				babyPubKey = big.NewInt(0)
			}
			fromIdxBig := big.NewInt(fromIdx)
			toIdxBig := big.NewInt(toIdx)
			depositAmountF, err := common.NewFloat40(depositAmount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			amountF, err := common.NewFloat40(amount)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			if tokenID == 0 {
				auth.Value = depositAmount
			}
			owner := c.client.account.Address
			spender := c.address
			nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
			if err != nil {
				return nil, tracerr.Wrap(err)
			}
			tokenName := c.tokenHEZCfg.Name
			tokenAddr := c.tokenHEZCfg.Address
			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
				amount, nonce, deadline, tokenName)
			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
			permit := createPermit(owner, spender, amount, deadline, digest, signature)
			return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig,
				uint16(depositAmountF), uint16(amountF), tokenID, toIdxBig, permit)
		},
	); err != nil {
		return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20Permit: %w", err))
	}
	return tx, nil
}
  506. // RollupRegisterTokensCount is the interface to call the smart contract function
  507. func (c *RollupClient) RollupRegisterTokensCount() (registerTokensCount *big.Int, err error) {
  508. if err := c.client.Call(func(ec *ethclient.Client) error {
  509. registerTokensCount, err = c.hermez.RegisterTokensCount(c.opts)
  510. return tracerr.Wrap(err)
  511. }); err != nil {
  512. return nil, tracerr.Wrap(err)
  513. }
  514. return registerTokensCount, nil
  515. }
  516. // RollupLastForgedBatch is the interface to call the smart contract function
  517. func (c *RollupClient) RollupLastForgedBatch() (lastForgedBatch int64, err error) {
  518. if err := c.client.Call(func(ec *ethclient.Client) error {
  519. _lastForgedBatch, err := c.hermez.LastForgedBatch(c.opts)
  520. lastForgedBatch = int64(_lastForgedBatch)
  521. return tracerr.Wrap(err)
  522. }); err != nil {
  523. return 0, tracerr.Wrap(err)
  524. }
  525. return lastForgedBatch, nil
  526. }
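// Example (sketch): polling until the contract reports that a target batch
// has been forged; `client` and `targetBatchNum` are assumed to be defined
// by the caller.
//
//	for {
//		last, err := client.RollupLastForgedBatch()
//		if err != nil {
//			return err
//		}
//		if last >= targetBatchNum {
//			break
//		}
//		time.Sleep(time.Second)
//	}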
  527. // RollupUpdateForgeL1L2BatchTimeout is the interface to call the smart contract function
  528. func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
  529. newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
  530. if tx, err = c.client.CallAuth(
  531. 0,
  532. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  533. return c.hermez.UpdateForgeL1L2BatchTimeout(auth,
  534. uint8(newForgeL1L2BatchTimeout))
  535. },
  536. ); err != nil {
  537. return nil, tracerr.Wrap(fmt.Errorf("Failed update ForgeL1L2BatchTimeout: %w", err))
  538. }
  539. return tx, nil
  540. }
  541. // RollupUpdateFeeAddToken is the interface to call the smart contract function
  542. func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction,
  543. err error) {
  544. if tx, err = c.client.CallAuth(
  545. 0,
  546. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  547. return c.hermez.UpdateFeeAddToken(auth, newFeeAddToken)
  548. },
  549. ); err != nil {
  550. return nil, tracerr.Wrap(fmt.Errorf("Failed update FeeAddToken: %w", err))
  551. }
  552. return tx, nil
  553. }
  554. // RollupUpdateBucketsParameters is the interface to call the smart contract function
  555. func (c *RollupClient) RollupUpdateBucketsParameters(
  556. arrayBuckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters,
  557. ) (tx *types.Transaction, err error) {
  558. params := [common.RollupConstNumBuckets][4]*big.Int{}
  559. for i, bucket := range arrayBuckets {
  560. params[i][0] = bucket.CeilUSD
  561. params[i][1] = bucket.Withdrawals
  562. params[i][2] = bucket.BlockWithdrawalRate
  563. params[i][3] = bucket.MaxWithdrawals
  564. }
  565. if tx, err = c.client.CallAuth(
  566. 12500000, //nolint:gomnd
  567. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  568. return c.hermez.UpdateBucketsParameters(auth, params)
  569. },
  570. ); err != nil {
  571. return nil, tracerr.Wrap(fmt.Errorf("Failed update Buckets Parameters: %w", err))
  572. }
  573. return tx, nil
  574. }
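// Example (sketch): building the buckets argument, assuming an initialized
// *RollupClient `client`; the ceiling and withdrawal values below are
// illustrative only.
//
//	var buckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
//	for i := range buckets {
//		buckets[i] = RollupUpdateBucketsParameters{
//			CeilUSD:             big.NewInt(int64(1000 * (i + 1))),
//			Withdrawals:         big.NewInt(10),
//			BlockWithdrawalRate: big.NewInt(100),
//			MaxWithdrawals:      big.NewInt(1000),
//		}
//	}
//	if _, err := client.RollupUpdateBucketsParameters(buckets); err != nil {
//		return err
//	}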
  575. // RollupUpdateTokenExchange is the interface to call the smart contract function
  576. func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address,
  577. valueArray []uint64) (tx *types.Transaction, err error) {
  578. if tx, err = c.client.CallAuth(
  579. 0,
  580. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  581. return c.hermez.UpdateTokenExchange(auth, addressArray, valueArray)
  582. },
  583. ); err != nil {
  584. return nil, tracerr.Wrap(fmt.Errorf("Failed update Token Exchange: %w", err))
  585. }
  586. return tx, nil
  587. }
  588. // RollupUpdateWithdrawalDelay is the interface to call the smart contract function
  589. func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction,
  590. err error) {
  591. if tx, err = c.client.CallAuth(
  592. 0,
  593. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  594. return c.hermez.UpdateWithdrawalDelay(auth, uint64(newWithdrawalDelay))
  595. },
  596. ); err != nil {
  597. return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawalDelay: %w", err))
  598. }
  599. return tx, nil
  600. }
  601. // RollupSafeMode is the interface to call the smart contract function
  602. func (c *RollupClient) RollupSafeMode() (tx *types.Transaction, err error) {
  603. if tx, err = c.client.CallAuth(
  604. 0,
  605. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  606. return c.hermez.SafeMode(auth)
  607. },
  608. ); err != nil {
  609. return nil, tracerr.Wrap(fmt.Errorf("Failed update Safe Mode: %w", err))
  610. }
  611. return tx, nil
  612. }
  613. // RollupInstantWithdrawalViewer is the interface to call the smart contract function
  614. func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address,
  615. amount *big.Int) (instantAllowed bool, err error) {
  616. if err := c.client.Call(func(ec *ethclient.Client) error {
  617. instantAllowed, err = c.hermez.InstantWithdrawalViewer(c.opts, tokenAddress, amount)
  618. return tracerr.Wrap(err)
  619. }); err != nil {
  620. return false, tracerr.Wrap(err)
  621. }
  622. return instantAllowed, nil
  623. }
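// Example (sketch): checking whether an instant withdrawal of `amount` of the
// token at `tokenAddr` would currently be accepted; `client`, `tokenAddr` and
// `amount` are assumed to be defined by the caller.
//
//	ok, err := client.RollupInstantWithdrawalViewer(tokenAddr, amount)
//	if err != nil {
//		return err
//	}
//	if !ok {
//		// instant withdrawal would be rejected; fall back to a delayed withdrawal
//	}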
  624. // RollupConstants returns the Constants of the Rollup Smart Contract
  625. func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstants, err error) {
  626. rollupConstants = new(common.RollupConstants)
  627. if err := c.client.Call(func(ec *ethclient.Client) error {
  628. absoluteMaxL1L2BatchTimeout, err := c.hermez.ABSOLUTEMAXL1L2BATCHTIMEOUT(c.opts)
  629. if err != nil {
  630. return tracerr.Wrap(err)
  631. }
  632. rollupConstants.AbsoluteMaxL1L2BatchTimeout = int64(absoluteMaxL1L2BatchTimeout)
  633. rollupConstants.TokenHEZ, err = c.hermez.TokenHEZ(c.opts)
  634. if err != nil {
  635. return tracerr.Wrap(err)
  636. }
  637. rollupVerifiersLength, err := c.hermez.RollupVerifiersLength(c.opts)
  638. if err != nil {
  639. return tracerr.Wrap(err)
  640. }
  641. for i := int64(0); i < rollupVerifiersLength.Int64(); i++ {
  642. var newRollupVerifier common.RollupVerifierStruct
  643. rollupVerifier, err := c.hermez.RollupVerifiers(c.opts, big.NewInt(i))
  644. if err != nil {
  645. return tracerr.Wrap(err)
  646. }
  647. newRollupVerifier.MaxTx = rollupVerifier.MaxTx.Int64()
  648. newRollupVerifier.NLevels = rollupVerifier.NLevels.Int64()
  649. rollupConstants.Verifiers = append(rollupConstants.Verifiers,
  650. newRollupVerifier)
  651. }
  652. rollupConstants.HermezAuctionContract, err = c.hermez.HermezAuctionContract(c.opts)
  653. if err != nil {
  654. return tracerr.Wrap(err)
  655. }
  656. rollupConstants.HermezGovernanceAddress, err = c.hermez.HermezGovernanceAddress(c.opts)
  657. if err != nil {
  658. return tracerr.Wrap(err)
  659. }
  660. rollupConstants.WithdrawDelayerContract, err = c.hermez.WithdrawDelayerContract(c.opts)
  661. return tracerr.Wrap(err)
  662. }); err != nil {
  663. return nil, tracerr.Wrap(err)
  664. }
  665. return rollupConstants, nil
  666. }
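// Example (sketch): choosing a verifier by capacity, assuming `consts` was
// returned by RollupConstants and `nTxs` is the number of transactions to
// forge.
//
//	verifierIdx := -1
//	for i, v := range consts.Verifiers {
//		if int64(nTxs) <= v.MaxTx {
//			verifierIdx = i
//			break
//		}
//	}
//	if verifierIdx == -1 {
//		return fmt.Errorf("no verifier with enough capacity")
//	}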
  667. var (
  668. logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte(
  669. "L1UserTxEvent(uint32,uint8,bytes)"))
  670. logHermezAddToken = crypto.Keccak256Hash([]byte(
  671. "AddToken(address,uint32)"))
  672. logHermezForgeBatch = crypto.Keccak256Hash([]byte(
  673. "ForgeBatch(uint32,uint16)"))
  674. logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte(
  675. "UpdateForgeL1L2BatchTimeout(uint8)"))
  676. logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte(
  677. "UpdateFeeAddToken(uint256)"))
  678. logHermezWithdrawEvent = crypto.Keccak256Hash([]byte(
  679. "WithdrawEvent(uint48,uint32,bool)"))
  680. logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte(
  681. "UpdateBucketWithdraw(uint8,uint256,uint256)"))
  682. logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte(
  683. "UpdateWithdrawalDelay(uint64)"))
  684. logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte(
  685. "UpdateBucketsParameters(uint256[4][" + strconv.Itoa(common.RollupConstNumBuckets) + "])"))
  686. logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte(
  687. "UpdateTokenExchange(address[],uint64[])"))
  688. logHermezSafeMode = crypto.Keccak256Hash([]byte(
  689. "SafeMode()"))
  690. logHermezInitialize = crypto.Keccak256Hash([]byte(
  691. "InitializeHermezEvent(uint8,uint256,uint64)"))
  692. )
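// These hashes are the topic[0] values emitted by the Hermez contract events
// and are used below to dispatch each log. Example (sketch), assuming a
// types.Log `vLog`:
//
//	switch vLog.Topics[0] {
//	case logHermezForgeBatch:
//		// handle a ForgeBatch event
//	case logHermezL1UserTxEvent:
//		// handle an L1UserTxEvent
//	}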
  693. // RollupEventInit returns the initialize event with its corresponding block number
  694. func (c *RollupClient) RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error) {
  695. query := ethereum.FilterQuery{
  696. Addresses: []ethCommon.Address{
  697. c.address,
  698. },
  699. FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
  700. ToBlock: big.NewInt(genesisBlockNum),
  701. Topics: [][]ethCommon.Hash{{logHermezInitialize}},
  702. }
  703. logs, err := c.client.client.FilterLogs(context.Background(), query)
  704. if err != nil {
  705. return nil, 0, tracerr.Wrap(err)
  706. }
  707. if len(logs) != 1 {
708. return nil, 0, tracerr.Wrap(fmt.Errorf("expected 1 InitializeHermezEvent event, found %v", len(logs)))
  709. }
  710. vLog := logs[0]
  711. if vLog.Topics[0] != logHermezInitialize {
  712. return nil, 0, tracerr.Wrap(fmt.Errorf("event is not InitializeHermezEvent"))
  713. }
  714. var rollupInit RollupEventInitialize
  715. if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent",
  716. vLog.Data); err != nil {
  717. return nil, 0, tracerr.Wrap(err)
  718. }
719. return &rollupInit, int64(vLog.BlockNumber), nil
  720. }
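// Example (sketch): reading the initialization parameters once at startup,
// with `client` assumed initialized and `genesisBlockNum` the block where the
// contract was deployed.
//
//	initEvent, initBlock, err := client.RollupEventInit(genesisBlockNum)
//	if err != nil {
//		return err
//	}
//	// Per the InitializeHermezEvent signature above, initEvent carries the
//	// initial forge L1/L2 batch timeout, add-token fee and withdrawal delay;
//	// initBlock is where synchronization can start.
//	_ = initEvent
//	_ = initBlock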
  721. // RollupEventsByBlock returns the events in a block that happened in the
  722. // Rollup Smart Contract.
  723. // To query by blockNum, set blockNum >= 0 and blockHash == nil.
  724. // To query by blockHash set blockHash != nil, and blockNum will be ignored.
  725. // If there are no events in that block the result is nil.
  726. func (c *RollupClient) RollupEventsByBlock(blockNum int64,
  727. blockHash *ethCommon.Hash) (*RollupEvents, error) {
  728. var rollupEvents RollupEvents
  729. var blockNumBigInt *big.Int
  730. if blockHash == nil {
  731. blockNumBigInt = big.NewInt(blockNum)
  732. }
  733. query := ethereum.FilterQuery{
  734. BlockHash: blockHash,
  735. FromBlock: blockNumBigInt,
  736. ToBlock: blockNumBigInt,
  737. Addresses: []ethCommon.Address{
  738. c.address,
  739. },
  740. Topics: [][]ethCommon.Hash{},
  741. }
  742. logs, err := c.client.client.FilterLogs(context.Background(), query)
  743. if err != nil {
  744. return nil, tracerr.Wrap(err)
  745. }
  746. if len(logs) == 0 {
  747. return nil, nil
  748. }
  749. for _, vLog := range logs {
  750. if blockHash != nil && vLog.BlockHash != *blockHash {
  751. log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got", vLog.BlockHash.String())
  752. return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
  753. }
  754. switch vLog.Topics[0] {
  755. case logHermezL1UserTxEvent:
  756. var L1UserTxAux rollupEventL1UserTxAux
  757. var L1UserTx RollupEventL1UserTx
  758. err := c.contractAbi.UnpackIntoInterface(&L1UserTxAux, "L1UserTxEvent", vLog.Data)
  759. if err != nil {
  760. return nil, tracerr.Wrap(err)
  761. }
  762. L1Tx, err := common.L1UserTxFromBytes(L1UserTxAux.L1UserTx)
  763. if err != nil {
  764. return nil, tracerr.Wrap(err)
  765. }
  766. toForgeL1TxsNum := new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
  767. L1Tx.ToForgeL1TxsNum = &toForgeL1TxsNum
  768. L1Tx.Position = int(new(big.Int).SetBytes(vLog.Topics[2][:]).Int64())
  769. L1Tx.UserOrigin = true
  770. L1UserTx.L1UserTx = *L1Tx
  771. rollupEvents.L1UserTx = append(rollupEvents.L1UserTx, L1UserTx)
  772. case logHermezAddToken:
  773. var addToken RollupEventAddToken
  774. err := c.contractAbi.UnpackIntoInterface(&addToken, "AddToken", vLog.Data)
  775. if err != nil {
  776. return nil, tracerr.Wrap(err)
  777. }
  778. addToken.TokenAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
  779. rollupEvents.AddToken = append(rollupEvents.AddToken, addToken)
  780. case logHermezForgeBatch:
  781. var forgeBatch RollupEventForgeBatch
  782. err := c.contractAbi.UnpackIntoInterface(&forgeBatch, "ForgeBatch", vLog.Data)
  783. if err != nil {
  784. return nil, tracerr.Wrap(err)
  785. }
  786. forgeBatch.BatchNum = new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
  787. forgeBatch.EthTxHash = vLog.TxHash
  788. // forgeBatch.Sender = vLog.Address
  789. rollupEvents.ForgeBatch = append(rollupEvents.ForgeBatch, forgeBatch)
  790. case logHermezUpdateForgeL1L2BatchTimeout:
  791. var updateForgeL1L2BatchTimeout struct {
  792. NewForgeL1L2BatchTimeout uint8
  793. }
  794. err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout,
  795. "UpdateForgeL1L2BatchTimeout", vLog.Data)
  796. if err != nil {
  797. return nil, tracerr.Wrap(err)
  798. }
  799. rollupEvents.UpdateForgeL1L2BatchTimeout = append(rollupEvents.UpdateForgeL1L2BatchTimeout,
  800. RollupEventUpdateForgeL1L2BatchTimeout{
  801. NewForgeL1L2BatchTimeout: int64(updateForgeL1L2BatchTimeout.NewForgeL1L2BatchTimeout),
  802. })
  803. case logHermezUpdateFeeAddToken:
  804. var updateFeeAddToken RollupEventUpdateFeeAddToken
  805. err := c.contractAbi.UnpackIntoInterface(&updateFeeAddToken, "UpdateFeeAddToken", vLog.Data)
  806. if err != nil {
  807. return nil, tracerr.Wrap(err)
  808. }
  809. rollupEvents.UpdateFeeAddToken = append(rollupEvents.UpdateFeeAddToken, updateFeeAddToken)
  810. case logHermezWithdrawEvent:
  811. var withdraw RollupEventWithdraw
  812. withdraw.Idx = new(big.Int).SetBytes(vLog.Topics[1][:]).Uint64()
  813. withdraw.NumExitRoot = new(big.Int).SetBytes(vLog.Topics[2][:]).Uint64()
  814. instantWithdraw := new(big.Int).SetBytes(vLog.Topics[3][:]).Uint64()
  815. if instantWithdraw == 1 {
  816. withdraw.InstantWithdraw = true
  817. }
  818. withdraw.TxHash = vLog.TxHash
  819. rollupEvents.Withdraw = append(rollupEvents.Withdraw, withdraw)
  820. case logHermezUpdateBucketWithdraw:
  821. var updateBucketWithdrawAux rollupEventUpdateBucketWithdrawAux
  822. var updateBucketWithdraw RollupEventUpdateBucketWithdraw
  823. err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux,
  824. "UpdateBucketWithdraw", vLog.Data)
  825. if err != nil {
  826. return nil, tracerr.Wrap(err)
  827. }
  828. updateBucketWithdraw.Withdrawals = updateBucketWithdrawAux.Withdrawals
  829. updateBucketWithdraw.NumBucket = int(new(big.Int).SetBytes(vLog.Topics[1][:]).Int64())
  830. updateBucketWithdraw.BlockStamp = new(big.Int).SetBytes(vLog.Topics[2][:]).Int64()
  831. rollupEvents.UpdateBucketWithdraw =
  832. append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
  833. case logHermezUpdateWithdrawalDelay:
  834. var withdrawalDelay RollupEventUpdateWithdrawalDelay
  835. err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, "UpdateWithdrawalDelay", vLog.Data)
  836. if err != nil {
  837. return nil, tracerr.Wrap(err)
  838. }
  839. rollupEvents.UpdateWithdrawalDelay = append(rollupEvents.UpdateWithdrawalDelay, withdrawalDelay)
  840. case logHermezUpdateBucketsParameters:
  841. var bucketsParametersAux rollupEventUpdateBucketsParametersAux
  842. var bucketsParameters RollupEventUpdateBucketsParameters
  843. err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux,
  844. "UpdateBucketsParameters", vLog.Data)
  845. if err != nil {
  846. return nil, tracerr.Wrap(err)
  847. }
  848. for i, bucket := range bucketsParametersAux.ArrayBuckets {
  849. bucketsParameters.ArrayBuckets[i].CeilUSD = bucket[0]
  850. bucketsParameters.ArrayBuckets[i].Withdrawals = bucket[1]
  851. bucketsParameters.ArrayBuckets[i].BlockWithdrawalRate = bucket[2]
  852. bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket[3]
  853. }
  854. rollupEvents.UpdateBucketsParameters =
  855. append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
  856. case logHermezUpdateTokenExchange:
  857. var tokensExchange RollupEventUpdateTokenExchange
  858. err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
  859. if err != nil {
  860. return nil, tracerr.Wrap(err)
  861. }
  862. rollupEvents.UpdateTokenExchange = append(rollupEvents.UpdateTokenExchange, tokensExchange)
  863. case logHermezSafeMode:
  864. var safeMode RollupEventSafeMode
  865. rollupEvents.SafeMode = append(rollupEvents.SafeMode, safeMode)
866. // Also append an UpdateBucketsParameters entry with
867. // SafeMode=true so that the ordering between the SafeMode
868. // and UpdateBucketsParameters events is preserved
  869. bucketsParameters := RollupEventUpdateBucketsParameters{
  870. SafeMode: true,
  871. }
  872. for i := range bucketsParameters.ArrayBuckets {
  873. bucketsParameters.ArrayBuckets[i].CeilUSD = big.NewInt(0)
  874. bucketsParameters.ArrayBuckets[i].Withdrawals = big.NewInt(0)
  875. bucketsParameters.ArrayBuckets[i].BlockWithdrawalRate = big.NewInt(0)
  876. bucketsParameters.ArrayBuckets[i].MaxWithdrawals = big.NewInt(0)
  877. }
  878. rollupEvents.UpdateBucketsParameters = append(rollupEvents.UpdateBucketsParameters,
  879. bucketsParameters)
  880. }
  881. }
  882. return &rollupEvents, nil
  883. }
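// Example (sketch): fetching the rollup events of a specific block by hash,
// with `client` assumed initialized and `blk` a block already fetched by the
// caller.
//
//	hash := blk.Hash()
//	events, err := client.RollupEventsByBlock(0, &hash)
//	if err != nil {
//		return err
//	}
//	if events == nil {
//		// the block contains no rollup events
//	}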
  884. // RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
  885. // Rollup Smart Contract in the given transaction, and the sender address.
  886. func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
  887. l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
  888. tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
  889. if err != nil {
  890. return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
  891. }
  892. txData := tx.Data()
  893. method, err := c.contractAbi.MethodById(txData[:4])
  894. if err != nil {
  895. return nil, nil, tracerr.Wrap(err)
  896. }
  897. receipt, err := c.client.client.TransactionReceipt(context.Background(), ethTxHash)
  898. if err != nil {
  899. return nil, nil, tracerr.Wrap(err)
  900. }
  901. sender, err := c.client.client.TransactionSender(context.Background(), tx,
  902. receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
  903. if err != nil {
  904. return nil, nil, tracerr.Wrap(err)
  905. }
  906. var aux rollupForgeBatchArgsAux
  907. if values, err := method.Inputs.Unpack(txData[4:]); err != nil {
  908. return nil, nil, tracerr.Wrap(err)
  909. } else if err := method.Inputs.Copy(&aux, values); err != nil {
  910. return nil, nil, tracerr.Wrap(err)
  911. }
  912. rollupForgeBatchArgs := RollupForgeBatchArgs{
  913. L1Batch: aux.L1Batch,
  914. NewExitRoot: aux.NewExitRoot,
  915. NewLastIdx: aux.NewLastIdx.Int64(),
  916. NewStRoot: aux.NewStRoot,
  917. ProofA: aux.ProofA,
  918. ProofB: aux.ProofB,
  919. ProofC: aux.ProofC,
  920. VerifierIdx: aux.VerifierIdx,
  921. L1CoordinatorTxs: []common.L1Tx{},
  922. L1CoordinatorTxsAuths: [][]byte{},
  923. L2TxsData: []common.L2Tx{},
  924. FeeIdxCoordinator: []common.Idx{},
  925. }
  926. nLevels := c.consts.Verifiers[rollupForgeBatchArgs.VerifierIdx].NLevels
  927. lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1) //nolint:gomnd
  928. numBytesL1TxUser := int(l1UserTxsLen) * lenL1L2TxsBytes
  929. numTxsL1Coord := len(aux.EncodedL1CoordinatorTx) / common.RollupConstL1CoordinatorTotalBytes
  930. numBytesL1TxCoord := numTxsL1Coord * lenL1L2TxsBytes
  931. numBeginL2Tx := numBytesL1TxCoord + numBytesL1TxUser
  932. l1UserTxsData := []byte{}
  933. if l1UserTxsLen > 0 {
  934. l1UserTxsData = aux.L1L2TxsData[:numBytesL1TxUser]
  935. }
  936. for i := 0; i < int(l1UserTxsLen); i++ {
  937. l1Tx, err :=
  938. common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
  939. uint32(nLevels))
  940. if err != nil {
  941. return nil, nil, tracerr.Wrap(err)
  942. }
  943. rollupForgeBatchArgs.L1UserTxs = append(rollupForgeBatchArgs.L1UserTxs, *l1Tx)
  944. }
  945. l2TxsData := []byte{}
  946. if numBeginL2Tx < len(aux.L1L2TxsData) {
  947. l2TxsData = aux.L1L2TxsData[numBeginL2Tx:]
  948. }
  949. numTxsL2 := len(l2TxsData) / lenL1L2TxsBytes
  950. for i := 0; i < numTxsL2; i++ {
  951. l2Tx, err :=
  952. common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
  953. int(nLevels))
  954. if err != nil {
  955. return nil, nil, tracerr.Wrap(err)
  956. }
  957. rollupForgeBatchArgs.L2TxsData = append(rollupForgeBatchArgs.L2TxsData, *l2Tx)
  958. }
  959. for i := 0; i < numTxsL1Coord; i++ {
  960. bytesL1Coordinator :=
  961. aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes] //nolint:lll
  962. var signature []byte
  963. v := bytesL1Coordinator[0]
  964. s := bytesL1Coordinator[1:33]
  965. r := bytesL1Coordinator[33:65]
  966. signature = append(signature, r[:]...)
  967. signature = append(signature, s[:]...)
  968. signature = append(signature, v)
  969. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, c.chainID, c.address)
  970. if err != nil {
  971. return nil, nil, tracerr.Wrap(err)
  972. }
  973. rollupForgeBatchArgs.L1CoordinatorTxs = append(rollupForgeBatchArgs.L1CoordinatorTxs, *l1Tx)
  974. rollupForgeBatchArgs.L1CoordinatorTxsAuths =
  975. append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
  976. }
  977. lenFeeIdxCoordinatorBytes := int(nLevels / 8) //nolint:gomnd
  978. numFeeIdxCoordinator := len(aux.FeeIdxCoordinator) / lenFeeIdxCoordinatorBytes
  979. for i := 0; i < numFeeIdxCoordinator; i++ {
  980. var paddedFeeIdx [6]byte
  981. // TODO: This check is not necessary: the first case will always work. Test it
  982. // before removing the if.
  983. if lenFeeIdxCoordinatorBytes < common.IdxBytesLen {
  984. copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:],
  985. aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
  986. } else {
  987. copy(paddedFeeIdx[:],
  988. aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
  989. }
  990. feeIdxCoordinator, err := common.IdxFromBytes(paddedFeeIdx[:])
  991. if err != nil {
  992. return nil, nil, tracerr.Wrap(err)
  993. }
  994. if feeIdxCoordinator != common.Idx(0) {
  995. rollupForgeBatchArgs.FeeIdxCoordinator =
  996. append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
  997. }
  998. }
  999. return &rollupForgeBatchArgs, &sender, nil
  1000. }
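// Example (sketch): decoding a forged batch from its ForgeBatch event, with
// `client` assumed initialized, `fb` a RollupEventForgeBatch obtained from
// RollupEventsByBlock, and `l1UserTxsLen` the number of L1 user txs forged in
// that batch, known by the caller.
//
//	args, sender, err := client.RollupForgeBatchArgs(fb.EthTxHash, l1UserTxsLen)
//	if err != nil {
//		return err
//	}
//	// args.NewStRoot, args.L1UserTxs, args.L2TxsData, etc. describe the batch;
//	// sender is the coordinator address that sent the forge transaction.
//	_ = args
//	_ = sender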