Fix eth events query and sync inconsistent state

- kvdb
  - Fix path in Last when doing `setNew`.
  - Only close if db != nil, and after closing, always set db to nil. This avoids a panic in the case where the db is closed, an error occurs soon after, and a future call tries to close again: pebble.Close() panics if the db is already closed.
  - Avoid calling pebble methods when the Storage interface already implements that method (like Close).
- statedb
  - In tests, avoid calling a KVDB method if the same method is available on the StateDB (like MakeCheckpoint, CurrentBatch).
- eth
  - In the *EventsByBlock methods, take blockHash as an input argument and use it when querying the event logs. Previously the blockHash was taken from the log results *only if* there was at least one log, so when there were no logs it was impossible to know whether the result came from the expected block or from an uncle block. By querying logs by blockHash we make sure that even if there are no logs, they are from the right block.
  - Note that the function can now be called with either a blockNum or a blockHash, but not both at the same time (a short illustrative sketch follows this list).
- sync
  - If there's an error during a call to Sync, call resetState, which internally resets the StateDB to avoid stale checkpoints (and a corresponding invalid increase in the StateDB batchNum).
  - During a Sync, after every batch is processed, make sure that the StateDB currentBatch corresponds to the batchNum in the smart contract log/event.
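A minimal sketch of the calling convention described above (an editorial illustration, not part of the repository). It assumes a RollupClient value named client and the RollupEventsByBlock signature declared later in this file; the sentinel value used for the argument that is not queried by is an assumption here.

    // Query by block number; blockHash stays nil.
    rollupEvents, err := client.RollupEventsByBlock(blockNum, nil)

    // Or query by block hash; blockNum and blockHash must not be combined.
    rollupEvents, err = client.RollupEventsByBlock(0, &blockHash)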
package eth

import (
    "context"
    "fmt"
    "math/big"
    "strings"

    "github.com/ethereum/go-ethereum"
    "github.com/ethereum/go-ethereum/accounts/abi"
    "github.com/ethereum/go-ethereum/accounts/abi/bind"
    ethCommon "github.com/ethereum/go-ethereum/common"
    "github.com/ethereum/go-ethereum/core/types"
    "github.com/ethereum/go-ethereum/crypto"
    "github.com/ethereum/go-ethereum/ethclient"
    "github.com/hermeznetwork/hermez-node/common"
    Hermez "github.com/hermeznetwork/hermez-node/eth/contracts/hermez"
    HEZ "github.com/hermeznetwork/hermez-node/eth/contracts/tokenHEZ"
    "github.com/hermeznetwork/hermez-node/log"
    "github.com/hermeznetwork/tracerr"
    "github.com/iden3/go-iden3-crypto/babyjub"
)
// QueueStruct is the queue of L1Txs for a batch
type QueueStruct struct {
    L1TxQueue    []common.L1Tx
    TotalL1TxFee *big.Int
}

// NewQueueStruct creates a new clear QueueStruct.
func NewQueueStruct() *QueueStruct {
    return &QueueStruct{
        L1TxQueue:    make([]common.L1Tx, 0),
        TotalL1TxFee: big.NewInt(0),
    }
}

// RollupState represents the state of the Rollup in the Smart Contract
type RollupState struct {
    StateRoot *big.Int
    ExitRoots []*big.Int
    // ExitNullifierMap map[[256 / 8]byte]bool
    ExitNullifierMap       map[int64]map[int64]bool // batchNum -> idx -> bool
    TokenList              []ethCommon.Address
    TokenMap               map[ethCommon.Address]bool
    MapL1TxQueue           map[int64]*QueueStruct
    LastL1L2Batch          int64
    CurrentToForgeL1TxsNum int64
    LastToForgeL1TxsNum    int64
    CurrentIdx             int64
}
// RollupEventInitialize is the InitializeHermezEvent event of the
// Smart Contract
type RollupEventInitialize struct {
    ForgeL1L2BatchTimeout uint8
    FeeAddToken           *big.Int
    WithdrawalDelay       uint64
}

// RollupVariables returns the RollupVariables from the initialize event
func (ei *RollupEventInitialize) RollupVariables() *common.RollupVariables {
    var buckets [common.RollupConstNumBuckets]common.BucketParams
    for i := range buckets {
        buckets[i] = common.BucketParams{
            CeilUSD:         big.NewInt(0),
            BlockStamp:      big.NewInt(0),
            Withdrawals:     big.NewInt(0),
            RateBlocks:      big.NewInt(0),
            RateWithdrawals: big.NewInt(0),
            MaxWithdrawals:  big.NewInt(0),
        }
    }
    return &common.RollupVariables{
        EthBlockNum:           0,
        FeeAddToken:           ei.FeeAddToken,
        ForgeL1L2BatchTimeout: int64(ei.ForgeL1L2BatchTimeout),
        WithdrawalDelay:       ei.WithdrawalDelay,
        Buckets:               buckets,
        SafeMode:              false,
    }
}
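// Editorial usage sketch (not part of the original file): a synchronizer
// would typically combine RollupEventInit with the method above to build the
// initial rollup variables. The exact wiring, including recording the block
// number of the init event, is an assumption here:
//
//    rollupInit, initBlockNum, err := client.RollupEventInit()
//    if err != nil {
//        // handle err
//    }
//    vars := rollupInit.RollupVariables()
//    vars.EthBlockNum = initBlockNum // assumption: track the init event's block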
// RollupEventL1UserTx is an event of the Rollup Smart Contract
type RollupEventL1UserTx struct {
    // ToForgeL1TxsNum int64 // QueueIndex *big.Int
    // Position int // TransactionIndex *big.Int
    L1UserTx common.L1Tx
}

// rollupEventL1UserTxAux is an event of the Rollup Smart Contract
type rollupEventL1UserTxAux struct {
    ToForgeL1TxsNum uint64 // QueueIndex *big.Int
    Position        uint8  // TransactionIndex *big.Int
    L1UserTx        []byte
}

// RollupEventAddToken is an event of the Rollup Smart Contract
type RollupEventAddToken struct {
    TokenAddress ethCommon.Address
    TokenID      uint32
}

// RollupEventForgeBatch is an event of the Rollup Smart Contract
type RollupEventForgeBatch struct {
    BatchNum int64
    // Sender ethCommon.Address
    EthTxHash    ethCommon.Hash
    L1UserTxsLen uint16
}

// RollupEventUpdateForgeL1L2BatchTimeout is an event of the Rollup Smart Contract
type RollupEventUpdateForgeL1L2BatchTimeout struct {
    NewForgeL1L2BatchTimeout int64
}

// RollupEventUpdateFeeAddToken is an event of the Rollup Smart Contract
type RollupEventUpdateFeeAddToken struct {
    NewFeeAddToken *big.Int
}

// RollupEventWithdraw is an event of the Rollup Smart Contract
type RollupEventWithdraw struct {
    Idx             uint64
    NumExitRoot     uint64
    InstantWithdraw bool
    TxHash          ethCommon.Hash // Hash of the transaction that generated this event
}

type rollupEventUpdateBucketWithdrawAux struct {
    NumBucket   uint8
    BlockStamp  *big.Int
    Withdrawals *big.Int
}

// RollupEventUpdateBucketWithdraw is an event of the Rollup Smart Contract
type RollupEventUpdateBucketWithdraw struct {
    NumBucket   int
    BlockStamp  int64 // blockNum
    Withdrawals *big.Int
}

// RollupEventUpdateWithdrawalDelay is an event of the Rollup Smart Contract
type RollupEventUpdateWithdrawalDelay struct {
    NewWithdrawalDelay uint64
}

// RollupUpdateBucketsParameters are the bucket parameters used in an update
type RollupUpdateBucketsParameters struct {
    CeilUSD         *big.Int
    BlockStamp      *big.Int
    Withdrawals     *big.Int
    RateBlocks      *big.Int
    RateWithdrawals *big.Int
    MaxWithdrawals  *big.Int
}

type rollupEventUpdateBucketsParametersAux struct {
    ArrayBuckets []*big.Int
}

// RollupEventUpdateBucketsParameters is an event of the Rollup Smart Contract
type RollupEventUpdateBucketsParameters struct {
    // ArrayBuckets [common.RollupConstNumBuckets][4]*big.Int
    ArrayBuckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
    SafeMode     bool
}

// RollupEventUpdateTokenExchange is an event of the Rollup Smart Contract
type RollupEventUpdateTokenExchange struct {
    AddressArray []ethCommon.Address
    ValueArray   []uint64
}

// RollupEventSafeMode is an event of the Rollup Smart Contract
type RollupEventSafeMode struct {
}

// RollupEvents is the list of events in a block of the Rollup Smart Contract
type RollupEvents struct {
    L1UserTx                    []RollupEventL1UserTx
    AddToken                    []RollupEventAddToken
    ForgeBatch                  []RollupEventForgeBatch
    UpdateForgeL1L2BatchTimeout []RollupEventUpdateForgeL1L2BatchTimeout
    UpdateFeeAddToken           []RollupEventUpdateFeeAddToken
    Withdraw                    []RollupEventWithdraw
    UpdateWithdrawalDelay       []RollupEventUpdateWithdrawalDelay
    UpdateBucketWithdraw        []RollupEventUpdateBucketWithdraw
    UpdateBucketsParameters     []RollupEventUpdateBucketsParameters
    UpdateTokenExchange         []RollupEventUpdateTokenExchange
    SafeMode                    []RollupEventSafeMode
}
// NewRollupEvents creates an empty RollupEvents with the slices initialized.
func NewRollupEvents() RollupEvents {
    return RollupEvents{
        L1UserTx:                    make([]RollupEventL1UserTx, 0),
        AddToken:                    make([]RollupEventAddToken, 0),
        ForgeBatch:                  make([]RollupEventForgeBatch, 0),
        UpdateForgeL1L2BatchTimeout: make([]RollupEventUpdateForgeL1L2BatchTimeout, 0),
        UpdateFeeAddToken:           make([]RollupEventUpdateFeeAddToken, 0),
        Withdraw:                    make([]RollupEventWithdraw, 0),
    }
}

// RollupForgeBatchArgs are the arguments to the ForgeBatch function in the Rollup Smart Contract
type RollupForgeBatchArgs struct {
    NewLastIdx            int64
    NewStRoot             *big.Int
    NewExitRoot           *big.Int
    L1UserTxs             []common.L1Tx
    L1CoordinatorTxs      []common.L1Tx
    L1CoordinatorTxsAuths [][]byte // Authorization for accountCreations for each L1CoordinatorTx
    L2TxsData             []common.L2Tx
    FeeIdxCoordinator     []common.Idx
    // Circuit selector
    VerifierIdx uint8
    L1Batch     bool
    ProofA      [2]*big.Int
    ProofB      [2][2]*big.Int
    ProofC      [2]*big.Int
}

// rollupForgeBatchArgsAux are the arguments to the ForgeBatch function in the Rollup Smart Contract
type rollupForgeBatchArgsAux struct {
    NewLastIdx             *big.Int
    NewStRoot              *big.Int
    NewExitRoot            *big.Int
    EncodedL1CoordinatorTx []byte
    L1L2TxsData            []byte
    FeeIdxCoordinator      []byte
    // Circuit selector
    VerifierIdx uint8
    L1Batch     bool
    ProofA      [2]*big.Int
    ProofB      [2][2]*big.Int
    ProofC      [2]*big.Int
}
// RollupInterface is the interface to the Rollup Smart Contract
type RollupInterface interface {
    //
    // Smart Contract Methods
    //

    // Public Functions
    RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error)
    RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
        deadline *big.Int) (*types.Transaction, error)
    RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot,
        idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction,
        error)
    RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32,
        numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)
    RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int,
        amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
    RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
        depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
        deadline *big.Int) (tx *types.Transaction, err error)

    // Governance Public Functions
    RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error)
    RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (*types.Transaction, error)

    // Viewers
    RollupRegisterTokensCount() (*big.Int, error)
    RollupLastForgedBatch() (int64, error)

    //
    // Smart Contract Status
    //
    RollupConstants() (*common.RollupConstants, error)
    RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
    RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
    RollupEventInit() (*RollupEventInitialize, int64, error)
}
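// Editorial note (not part of the original file): per the commit that
// introduced the blockHash parameter, RollupEventsByBlock is meant to be
// called with either blockNum or blockHash, but not both at once; querying
// logs by blockHash guarantees that an empty result still refers to the
// intended block rather than an uncle block.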
//
// Implementation
//

// RollupClient is the implementation of the interface to the Rollup Smart Contract in ethereum.
type RollupClient struct {
    client      *EthereumClient
    chainID     *big.Int
    address     ethCommon.Address
    tokenHEZCfg TokenConfig
    hermez      *Hermez.Hermez
    tokenHEZ    *HEZ.HEZ
    contractAbi abi.ABI
    opts        *bind.CallOpts
    consts      *common.RollupConstants
}

// NewRollupClient creates a new RollupClient
func NewRollupClient(client *EthereumClient, address ethCommon.Address,
    tokenHEZCfg TokenConfig) (*RollupClient, error) {
    contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI)))
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    hermez, err := Hermez.NewHermez(address, client.Client())
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    tokenHEZ, err := HEZ.NewHEZ(tokenHEZCfg.Address, client.Client())
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    chainID, err := client.EthChainID()
    if err != nil {
        return nil, tracerr.Wrap(err)
    }
    c := &RollupClient{
        client:      client,
        chainID:     chainID,
        address:     address,
        tokenHEZCfg: tokenHEZCfg,
        hermez:      hermez,
        tokenHEZ:    tokenHEZ,
        contractAbi: contractAbi,
        opts:        newCallOpts(),
    }
    consts, err := c.RollupConstants()
    if err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("RollupConstants at %v: %w", address, err))
    }
    c.consts = consts
    return c, nil
}
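// Editorial usage sketch (not part of the original file): constructing a
// RollupClient, assuming an already-initialized *EthereumClient named
// ethClient, the rollup contract address rollupAddr, and the HEZ token
// config tokenHEZCfg:
//
//    rollup, err := NewRollupClient(ethClient, rollupAddr, tokenHEZCfg)
//    if err != nil {
//        // handle err
//    }
//
// The rollup constants are fetched once in the constructor and cached in
// c.consts; they can also be re-queried through rollup.RollupConstants().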
// RollupForgeBatch is the interface to call the smart contract function
func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
    auth *bind.TransactOpts) (tx *types.Transaction, err error) {
    if auth == nil {
        auth, err = c.client.NewAuth()
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        auth.GasLimit = 1000000
    }
    nLevels := c.consts.Verifiers[args.VerifierIdx].NLevels
    lenBytes := nLevels / 8 //nolint:gomnd
    newLastIdx := big.NewInt(int64(args.NewLastIdx))

    // L1CoordinatorBytes
    var l1CoordinatorBytes []byte
    for i := 0; i < len(args.L1CoordinatorTxs); i++ {
        l1 := args.L1CoordinatorTxs[i]
        bytesl1, err := l1.BytesCoordinatorTx(args.L1CoordinatorTxsAuths[i])
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1CoordinatorBytes = append(l1CoordinatorBytes, bytesl1[:]...)
    }

    // L1L2TxData
    var l1l2TxData []byte
    for i := 0; i < len(args.L1UserTxs); i++ {
        l1User := args.L1UserTxs[i]
        bytesl1User, err := l1User.BytesDataAvailability(uint32(nLevels))
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1l2TxData = append(l1l2TxData, bytesl1User[:]...)
    }
    for i := 0; i < len(args.L1CoordinatorTxs); i++ {
        l1Coord := args.L1CoordinatorTxs[i]
        bytesl1Coord, err := l1Coord.BytesDataAvailability(uint32(nLevels))
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1l2TxData = append(l1l2TxData, bytesl1Coord[:]...)
    }
    for i := 0; i < len(args.L2TxsData); i++ {
        l2 := args.L2TxsData[i]
        bytesl2, err := l2.BytesDataAvailability(uint32(nLevels))
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        l1l2TxData = append(l1l2TxData, bytesl2[:]...)
    }

    // FeeIdxCoordinator
    var feeIdxCoordinator []byte
    if len(args.FeeIdxCoordinator) > common.RollupConstMaxFeeIdxCoordinator {
        return nil, tracerr.Wrap(fmt.Errorf("len(args.FeeIdxCoordinator) > %v",
            common.RollupConstMaxFeeIdxCoordinator))
    }
    for i := 0; i < common.RollupConstMaxFeeIdxCoordinator; i++ {
        feeIdx := common.Idx(0)
        if i < len(args.FeeIdxCoordinator) {
            feeIdx = args.FeeIdxCoordinator[i]
        }
        bytesFeeIdx, err := feeIdx.Bytes()
        if err != nil {
            return nil, tracerr.Wrap(err)
        }
        feeIdxCoordinator = append(feeIdxCoordinator,
            bytesFeeIdx[len(bytesFeeIdx)-int(lenBytes):]...)
    }

    tx, err = c.hermez.ForgeBatch(auth, newLastIdx, args.NewStRoot, args.NewExitRoot,
        l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch,
        args.ProofA, args.ProofB, args.ProofC)
    if err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Hermez.ForgeBatch: %w", err))
    }
    return tx, nil
}
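// Editorial note on the encoding above (not part of the original file):
// l1CoordinatorBytes concatenates every coordinator tx encoded together with
// its account-creation authorization; l1l2TxData concatenates the
// data-availability encodings of the L1 user txs, then the L1 coordinator
// txs, then the L2 txs; and feeIdxCoordinator is always padded with zero
// indexes up to common.RollupConstMaxFeeIdxCoordinator entries, each
// truncated to nLevels/8 bytes, before being passed to Hermez.ForgeBatch.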
// RollupAddToken is the interface to call the smart contract function.
// `feeAddToken` is the amount of HEZ tokens that will be paid to add the
// token. `feeAddToken` must match the public value of the smart contract.
func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
    deadline *big.Int) (tx *types.Transaction, err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            owner := c.client.account.Address
            spender := c.address
            nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            tokenName := c.tokenHEZCfg.Name
            tokenAddr := c.tokenHEZCfg.Address
            digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
                feeAddToken, nonce, deadline, tokenName)
            signature, _ := c.client.ks.SignHash(*c.client.account, digest)
            permit := createPermit(owner, spender, feeAddToken, deadline, digest,
                signature)
            return c.hermez.AddToken(auth, tokenAddress, permit)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed add Token %w", err))
    }
    return tx, nil
}
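// Editorial note (not part of the original file): the closure above follows a
// permit-style approval flow for the HEZ token. It reads the owner's current
// nonce, builds the permit digest with createPermitDigest, signs it with the
// client keystore, encodes it with createPermit, and hands the resulting
// bytes to Hermez.AddToken, so token approval and the AddToken call are
// bundled into one transaction; this framing is an interpretation of the
// code, not a statement from the original authors.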
// RollupWithdrawMerkleProof is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32,
    numExitRoot, idx int64, amount *big.Int, siblings []*big.Int,
    instantWithdraw bool) (tx *types.Transaction, err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            pkCompB := common.SwapEndianness(fromBJJ[:])
            babyPubKey := new(big.Int).SetBytes(pkCompB)
            numExitRootB := uint32(numExitRoot)
            idxBig := big.NewInt(idx)
            return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey,
                numExitRootB, siblings, idxBig, instantWithdraw)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err))
    }
    return tx, nil
}

// RollupWithdrawCircuit is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int,
    tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction,
    error) {
    log.Error("TODO")
    return nil, tracerr.Wrap(errTODO)
}

// RollupL1UserTxERC20ETH is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
    depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction,
    err error) {
    if tx, err = c.client.CallAuth(
        0,
        func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
            var babyPubKey *big.Int
            if fromBJJ != common.EmptyBJJComp {
                pkCompB := common.SwapEndianness(fromBJJ[:])
                babyPubKey = new(big.Int).SetBytes(pkCompB)
            } else {
                babyPubKey = big.NewInt(0)
            }
            fromIdxBig := big.NewInt(fromIdx)
            toIdxBig := big.NewInt(toIdx)
            depositAmountF, err := common.NewFloat40(depositAmount)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            amountF, err := common.NewFloat40(amount)
            if err != nil {
                return nil, tracerr.Wrap(err)
            }
            if tokenID == 0 {
                auth.Value = depositAmount
            }
            var permit []byte
            return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig,
                big.NewInt(0).SetUint64(uint64(depositAmountF)),
                big.NewInt(0).SetUint64(uint64(amountF)), tokenID, toIdxBig, permit)
        },
    ); err != nil {
        return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20/ETH: %w", err))
    }
    return tx, nil
}
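// Editorial note (not part of the original file): both depositAmount and
// amount are compressed to the Float40 on-chain representation before the
// AddL1Transaction call, and when tokenID == 0 (the ETH case) the deposit
// amount is also attached as the transaction value via auth.Value,
// presumably so the deposited ETH travels with the call.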
  462. // RollupL1UserTxERC20Permit is the interface to call the smart contract function
  463. func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
  464. depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
  465. deadline *big.Int) (tx *types.Transaction, err error) {
  466. if tx, err = c.client.CallAuth(
  467. 0,
  468. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  469. var babyPubKey *big.Int
  470. if fromBJJ != common.EmptyBJJComp {
  471. pkCompB := common.SwapEndianness(fromBJJ[:])
  472. babyPubKey = new(big.Int).SetBytes(pkCompB)
  473. } else {
  474. babyPubKey = big.NewInt(0)
  475. }
  476. fromIdxBig := big.NewInt(fromIdx)
  477. toIdxBig := big.NewInt(toIdx)
  478. depositAmountF, err := common.NewFloat40(depositAmount)
  479. if err != nil {
  480. return nil, tracerr.Wrap(err)
  481. }
  482. amountF, err := common.NewFloat40(amount)
  483. if err != nil {
  484. return nil, tracerr.Wrap(err)
  485. }
  486. if tokenID == 0 {
  487. auth.Value = depositAmount
  488. }
  489. owner := c.client.account.Address
  490. spender := c.address
  491. nonce, err := c.tokenHEZ.Nonces(c.opts, owner)
  492. if err != nil {
  493. return nil, tracerr.Wrap(err)
  494. }
  495. tokenName := c.tokenHEZCfg.Name
  496. tokenAddr := c.tokenHEZCfg.Address
  497. digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
  498. amount, nonce, deadline, tokenName)
  499. signature, _ := c.client.ks.SignHash(*c.client.account, digest)
  500. permit := createPermit(owner, spender, amount, deadline, digest, signature)
  501. return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig,
  502. big.NewInt(0).SetUint64(uint64(depositAmountF)), big.NewInt(0).SetUint64(uint64(amountF)), tokenID, toIdxBig, permit)
  503. },
  504. ); err != nil {
505. return nil, tracerr.Wrap(fmt.Errorf("Failed to add L1 Tx ERC20Permit: %w", err))
  506. }
  507. return tx, nil
  508. }
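// Hypothetical usage sketch (identifiers and values assumed): same call shape
// as RollupL1UserTxERC20ETH, but with a permit deadline. The permit signature
// itself is built internally from the client keystore account, as shown above.
//
//	deadline := big.NewInt(time.Now().Add(time.Hour).Unix()) // assumes "time" is imported
//	tx, err := rollupClient.RollupL1UserTxERC20Permit(
//		fromBJJ, 256,
//		big.NewInt(1000000000000000000), // depositAmount (assuming 18 decimals)
//		big.NewInt(0),                   // amount
//		hezTokenID,                      // assumed token ID of the HEZ token
//		0,                               // toIdx
//		deadline,
//	)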
509. // RollupRegisterTokensCount is the interface to call the smart contract function RegisterTokensCount, which returns the number of registered tokens.
  510. func (c *RollupClient) RollupRegisterTokensCount() (registerTokensCount *big.Int, err error) {
  511. if err := c.client.Call(func(ec *ethclient.Client) error {
  512. registerTokensCount, err = c.hermez.RegisterTokensCount(c.opts)
  513. return tracerr.Wrap(err)
  514. }); err != nil {
  515. return nil, tracerr.Wrap(err)
  516. }
  517. return registerTokensCount, nil
  518. }
519. // RollupLastForgedBatch is the interface to call the smart contract function LastForgedBatch, which returns the number of the last forged batch.
  520. func (c *RollupClient) RollupLastForgedBatch() (lastForgedBatch int64, err error) {
  521. if err := c.client.Call(func(ec *ethclient.Client) error {
  522. _lastForgedBatch, err := c.hermez.LastForgedBatch(c.opts)
  523. lastForgedBatch = int64(_lastForgedBatch)
  524. return tracerr.Wrap(err)
  525. }); err != nil {
  526. return 0, tracerr.Wrap(err)
  527. }
  528. return lastForgedBatch, nil
  529. }
530. // RollupUpdateForgeL1L2BatchTimeout is the interface to call the smart contract function UpdateForgeL1L2BatchTimeout.
  531. func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
  532. newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
  533. if tx, err = c.client.CallAuth(
  534. 0,
  535. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  536. return c.hermez.UpdateForgeL1L2BatchTimeout(auth,
  537. uint8(newForgeL1L2BatchTimeout))
  538. },
  539. ); err != nil {
540. return nil, tracerr.Wrap(fmt.Errorf("Failed to update ForgeL1L2BatchTimeout: %w", err))
  541. }
  542. return tx, nil
  543. }
544. // RollupUpdateFeeAddToken is the interface to call the smart contract function UpdateFeeAddToken.
  545. func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction,
  546. err error) {
  547. if tx, err = c.client.CallAuth(
  548. 0,
  549. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  550. return c.hermez.UpdateFeeAddToken(auth, newFeeAddToken)
  551. },
  552. ); err != nil {
553. return nil, tracerr.Wrap(fmt.Errorf("Failed to update FeeAddToken: %w", err))
  554. }
  555. return tx, nil
  556. }
557. // RollupUpdateBucketsParameters is the interface to call the smart contract function UpdateBucketsParameters; each bucket is packed with PackBucket before the call.
  558. func (c *RollupClient) RollupUpdateBucketsParameters(
  559. arrayBuckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters,
  560. ) (tx *types.Transaction, err error) {
  561. if tx, err = c.client.CallAuth(
  562. 12500000, //nolint:gomnd
  563. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  564. params := make([]*big.Int, len(arrayBuckets))
  565. for i, bucket := range arrayBuckets {
  566. params[i], err = c.hermez.PackBucket(c.opts, bucket.CeilUSD, bucket.BlockStamp, bucket.Withdrawals, bucket.RateBlocks, bucket.RateWithdrawals, bucket.MaxWithdrawals)
  567. if err != nil {
  568. return nil, tracerr.Wrap(fmt.Errorf("failed to pack bucket: %w", err))
  569. }
  570. }
  571. return c.hermez.UpdateBucketsParameters(auth, params)
  572. },
  573. ); err != nil {
574. return nil, tracerr.Wrap(fmt.Errorf("Failed to update Buckets Parameters: %w", err))
  575. }
  576. return tx, nil
  577. }
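// Hypothetical usage sketch (field values assumed): building the fixed-size
// bucket array expected by RollupUpdateBucketsParameters. Each entry is packed
// with PackBucket inside the call, as shown above.
//
//	var buckets [common.RollupConstNumBuckets]RollupUpdateBucketsParameters
//	for i := range buckets {
//		buckets[i] = RollupUpdateBucketsParameters{
//			CeilUSD:         big.NewInt(int64(1000 * (i + 1))),
//			BlockStamp:      big.NewInt(0),
//			Withdrawals:     big.NewInt(0),
//			RateBlocks:      big.NewInt(24),
//			RateWithdrawals: big.NewInt(3),
//			MaxWithdrawals:  big.NewInt(10),
//		}
//	}
//	tx, err := rollupClient.RollupUpdateBucketsParameters(buckets)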
578. // RollupUpdateTokenExchange is the interface to call the smart contract function UpdateTokenExchange.
  579. func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address,
  580. valueArray []uint64) (tx *types.Transaction, err error) {
  581. if tx, err = c.client.CallAuth(
  582. 0,
  583. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  584. return c.hermez.UpdateTokenExchange(auth, addressArray, valueArray)
  585. },
  586. ); err != nil {
587. return nil, tracerr.Wrap(fmt.Errorf("Failed to update Token Exchange: %w", err))
  588. }
  589. return tx, nil
  590. }
591. // RollupUpdateWithdrawalDelay is the interface to call the smart contract function UpdateWithdrawalDelay.
  592. func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction,
  593. err error) {
  594. if tx, err = c.client.CallAuth(
  595. 0,
  596. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  597. return c.hermez.UpdateWithdrawalDelay(auth, uint64(newWithdrawalDelay))
  598. },
  599. ); err != nil {
600. return nil, tracerr.Wrap(fmt.Errorf("Failed to update WithdrawalDelay: %w", err))
  601. }
  602. return tx, nil
  603. }
604. // RollupSafeMode is the interface to call the smart contract function SafeMode.
  605. func (c *RollupClient) RollupSafeMode() (tx *types.Transaction, err error) {
  606. if tx, err = c.client.CallAuth(
  607. 0,
  608. func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
  609. return c.hermez.SafeMode(auth)
  610. },
  611. ); err != nil {
612. return nil, tracerr.Wrap(fmt.Errorf("Failed to update Safe Mode: %w", err))
  613. }
  614. return tx, nil
  615. }
616. // RollupInstantWithdrawalViewer is the interface to call the smart contract view function InstantWithdrawalViewer, which reports whether an instant withdrawal of the given token amount is currently allowed.
  617. func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address,
  618. amount *big.Int) (instantAllowed bool, err error) {
  619. if err := c.client.Call(func(ec *ethclient.Client) error {
  620. instantAllowed, err = c.hermez.InstantWithdrawalViewer(c.opts, tokenAddress, amount)
  621. return tracerr.Wrap(err)
  622. }); err != nil {
  623. return false, tracerr.Wrap(err)
  624. }
  625. return instantAllowed, nil
  626. }
  627. // RollupConstants returns the Constants of the Rollup Smart Contract
  628. func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstants, err error) {
  629. rollupConstants = new(common.RollupConstants)
  630. if err := c.client.Call(func(ec *ethclient.Client) error {
  631. absoluteMaxL1L2BatchTimeout, err := c.hermez.ABSOLUTEMAXL1L2BATCHTIMEOUT(c.opts)
  632. if err != nil {
  633. return tracerr.Wrap(err)
  634. }
  635. rollupConstants.AbsoluteMaxL1L2BatchTimeout = int64(absoluteMaxL1L2BatchTimeout)
  636. rollupConstants.TokenHEZ, err = c.hermez.TokenHEZ(c.opts)
  637. if err != nil {
  638. return tracerr.Wrap(err)
  639. }
  640. rollupVerifiersLength, err := c.hermez.RollupVerifiersLength(c.opts)
  641. if err != nil {
  642. return tracerr.Wrap(err)
  643. }
  644. for i := int64(0); i < rollupVerifiersLength.Int64(); i++ {
  645. var newRollupVerifier common.RollupVerifierStruct
  646. rollupVerifier, err := c.hermez.RollupVerifiers(c.opts, big.NewInt(i))
  647. if err != nil {
  648. return tracerr.Wrap(err)
  649. }
  650. newRollupVerifier.MaxTx = rollupVerifier.MaxTx.Int64()
  651. newRollupVerifier.NLevels = rollupVerifier.NLevels.Int64()
  652. rollupConstants.Verifiers = append(rollupConstants.Verifiers,
  653. newRollupVerifier)
  654. }
  655. rollupConstants.HermezAuctionContract, err = c.hermez.HermezAuctionContract(c.opts)
  656. if err != nil {
  657. return tracerr.Wrap(err)
  658. }
  659. rollupConstants.HermezGovernanceAddress, err = c.hermez.HermezGovernanceAddress(c.opts)
  660. if err != nil {
  661. return tracerr.Wrap(err)
  662. }
  663. rollupConstants.WithdrawDelayerContract, err = c.hermez.WithdrawDelayerContract(c.opts)
  664. return tracerr.Wrap(err)
  665. }); err != nil {
  666. return nil, tracerr.Wrap(err)
  667. }
  668. return rollupConstants, nil
  669. }
  670. var (
  671. logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte(
  672. "L1UserTxEvent(uint32,uint8,bytes)"))
  673. logHermezAddToken = crypto.Keccak256Hash([]byte(
  674. "AddToken(address,uint32)"))
  675. logHermezForgeBatch = crypto.Keccak256Hash([]byte(
  676. "ForgeBatch(uint32,uint16)"))
  677. logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte(
  678. "UpdateForgeL1L2BatchTimeout(uint8)"))
  679. logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte(
  680. "UpdateFeeAddToken(uint256)"))
  681. logHermezWithdrawEvent = crypto.Keccak256Hash([]byte(
  682. "WithdrawEvent(uint48,uint32,bool)"))
  683. logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte(
  684. "UpdateBucketWithdraw(uint8,uint256,uint256)"))
  685. logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte(
  686. "UpdateWithdrawalDelay(uint64)"))
  687. logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte(
  688. "UpdateBucketsParameters(uint256[])"))
  689. logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte(
  690. "UpdateTokenExchange(address[],uint64[])"))
  691. logHermezSafeMode = crypto.Keccak256Hash([]byte(
  692. "SafeMode()"))
  693. logHermezInitialize = crypto.Keccak256Hash([]byte(
  694. "InitializeHermezEvent(uint8,uint256,uint64)"))
  695. )
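// Illustrative note: each hash above is the Keccak256 of the canonical event
// signature, which the EVM places in Topics[0] of every emitted log; the
// switch in RollupEventsByBlock below compares against these values. A
// minimal standalone check (hypothetical) would be:
//
//	h := crypto.Keccak256Hash([]byte("ForgeBatch(uint32,uint16)"))
//	fmt.Println(h == logHermezForgeBatch) // prints: true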
  696. // RollupEventInit returns the initialize event with its corresponding block number
  697. func (c *RollupClient) RollupEventInit() (*RollupEventInitialize, int64, error) {
  698. query := ethereum.FilterQuery{
  699. Addresses: []ethCommon.Address{
  700. c.address,
  701. },
  702. Topics: [][]ethCommon.Hash{{logHermezInitialize}},
  703. }
  704. logs, err := c.client.client.FilterLogs(context.Background(), query)
  705. if err != nil {
  706. return nil, 0, tracerr.Wrap(err)
  707. }
  708. if len(logs) != 1 {
709. return nil, 0, tracerr.Wrap(fmt.Errorf("expected exactly one InitializeHermezEvent event, got %v", len(logs)))
  710. }
  711. vLog := logs[0]
  712. if vLog.Topics[0] != logHermezInitialize {
  713. return nil, 0, tracerr.Wrap(fmt.Errorf("event is not InitializeHermezEvent"))
  714. }
  715. var rollupInit RollupEventInitialize
  716. if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent",
  717. vLog.Data); err != nil {
  718. return nil, 0, tracerr.Wrap(err)
  719. }
720. return &rollupInit, int64(vLog.BlockNumber), nil
  721. }
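// Hypothetical usage sketch (identifiers assumed): a syncing component would
// typically call RollupEventInit once at startup to obtain the initialization
// parameters and the block number from which to start scanning events.
//
//	rollupInit, initBlockNum, err := rollupClient.RollupEventInit()
//	if err != nil {
//		// handle error
//	}
//	startBlock := initBlockNum
//	_ = rollupInit
//	_ = startBlock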
  722. // RollupEventsByBlock returns the events in a block that happened in the
  723. // Rollup Smart Contract.
  724. // To query by blockNum, set blockNum >= 0 and blockHash == nil.
725. // To query by blockHash, set blockHash != nil; blockNum will be ignored.
  726. // If there are no events in that block the result is nil.
  727. func (c *RollupClient) RollupEventsByBlock(blockNum int64,
  728. blockHash *ethCommon.Hash) (*RollupEvents, error) {
  729. var rollupEvents RollupEvents
  730. var blockNumBigInt *big.Int
  731. if blockHash == nil {
  732. blockNumBigInt = big.NewInt(blockNum)
  733. }
  734. query := ethereum.FilterQuery{
  735. BlockHash: blockHash,
  736. FromBlock: blockNumBigInt,
  737. ToBlock: blockNumBigInt,
  738. Addresses: []ethCommon.Address{
  739. c.address,
  740. },
  741. Topics: [][]ethCommon.Hash{},
  742. }
  743. logs, err := c.client.client.FilterLogs(context.Background(), query)
  744. if err != nil {
  745. return nil, tracerr.Wrap(err)
  746. }
  747. if len(logs) == 0 {
  748. return nil, nil
  749. }
  750. for _, vLog := range logs {
  751. if blockHash != nil && vLog.BlockHash != *blockHash {
  752. log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got", vLog.BlockHash.String())
  753. return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
  754. }
  755. switch vLog.Topics[0] {
  756. case logHermezL1UserTxEvent:
  757. var L1UserTxAux rollupEventL1UserTxAux
  758. var L1UserTx RollupEventL1UserTx
  759. err := c.contractAbi.UnpackIntoInterface(&L1UserTxAux, "L1UserTxEvent", vLog.Data)
  760. if err != nil {
  761. return nil, tracerr.Wrap(err)
  762. }
  763. L1Tx, err := common.L1UserTxFromBytes(L1UserTxAux.L1UserTx)
  764. if err != nil {
  765. return nil, tracerr.Wrap(err)
  766. }
  767. toForgeL1TxsNum := new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
  768. L1Tx.ToForgeL1TxsNum = &toForgeL1TxsNum
  769. L1Tx.Position = int(new(big.Int).SetBytes(vLog.Topics[2][:]).Int64())
  770. L1Tx.UserOrigin = true
  771. L1UserTx.L1UserTx = *L1Tx
  772. rollupEvents.L1UserTx = append(rollupEvents.L1UserTx, L1UserTx)
  773. case logHermezAddToken:
  774. var addToken RollupEventAddToken
  775. err := c.contractAbi.UnpackIntoInterface(&addToken, "AddToken", vLog.Data)
  776. if err != nil {
  777. return nil, tracerr.Wrap(err)
  778. }
  779. addToken.TokenAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
  780. rollupEvents.AddToken = append(rollupEvents.AddToken, addToken)
  781. case logHermezForgeBatch:
  782. var forgeBatch RollupEventForgeBatch
  783. err := c.contractAbi.UnpackIntoInterface(&forgeBatch, "ForgeBatch", vLog.Data)
  784. if err != nil {
  785. return nil, tracerr.Wrap(err)
  786. }
  787. forgeBatch.BatchNum = new(big.Int).SetBytes(vLog.Topics[1][:]).Int64()
  788. forgeBatch.EthTxHash = vLog.TxHash
  789. // forgeBatch.Sender = vLog.Address
  790. rollupEvents.ForgeBatch = append(rollupEvents.ForgeBatch, forgeBatch)
  791. case logHermezUpdateForgeL1L2BatchTimeout:
  792. var updateForgeL1L2BatchTimeout struct {
  793. NewForgeL1L2BatchTimeout uint8
  794. }
  795. err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout,
  796. "UpdateForgeL1L2BatchTimeout", vLog.Data)
  797. if err != nil {
  798. return nil, tracerr.Wrap(err)
  799. }
  800. rollupEvents.UpdateForgeL1L2BatchTimeout = append(rollupEvents.UpdateForgeL1L2BatchTimeout,
  801. RollupEventUpdateForgeL1L2BatchTimeout{
  802. NewForgeL1L2BatchTimeout: int64(updateForgeL1L2BatchTimeout.NewForgeL1L2BatchTimeout),
  803. })
  804. case logHermezUpdateFeeAddToken:
  805. var updateFeeAddToken RollupEventUpdateFeeAddToken
  806. err := c.contractAbi.UnpackIntoInterface(&updateFeeAddToken, "UpdateFeeAddToken", vLog.Data)
  807. if err != nil {
  808. return nil, tracerr.Wrap(err)
  809. }
  810. rollupEvents.UpdateFeeAddToken = append(rollupEvents.UpdateFeeAddToken, updateFeeAddToken)
  811. case logHermezWithdrawEvent:
  812. var withdraw RollupEventWithdraw
  813. withdraw.Idx = new(big.Int).SetBytes(vLog.Topics[1][:]).Uint64()
  814. withdraw.NumExitRoot = new(big.Int).SetBytes(vLog.Topics[2][:]).Uint64()
  815. instantWithdraw := new(big.Int).SetBytes(vLog.Topics[3][:]).Uint64()
  816. if instantWithdraw == 1 {
  817. withdraw.InstantWithdraw = true
  818. }
  819. withdraw.TxHash = vLog.TxHash
  820. rollupEvents.Withdraw = append(rollupEvents.Withdraw, withdraw)
  821. case logHermezUpdateBucketWithdraw:
  822. var updateBucketWithdrawAux rollupEventUpdateBucketWithdrawAux
  823. var updateBucketWithdraw RollupEventUpdateBucketWithdraw
  824. err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux,
  825. "UpdateBucketWithdraw", vLog.Data)
  826. if err != nil {
  827. return nil, tracerr.Wrap(err)
  828. }
  829. updateBucketWithdraw.Withdrawals = updateBucketWithdrawAux.Withdrawals
  830. updateBucketWithdraw.NumBucket = int(new(big.Int).SetBytes(vLog.Topics[1][:]).Int64())
  831. updateBucketWithdraw.BlockStamp = new(big.Int).SetBytes(vLog.Topics[2][:]).Int64()
  832. rollupEvents.UpdateBucketWithdraw =
  833. append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
  834. case logHermezUpdateWithdrawalDelay:
  835. var withdrawalDelay RollupEventUpdateWithdrawalDelay
  836. err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, "UpdateWithdrawalDelay", vLog.Data)
  837. if err != nil {
  838. return nil, tracerr.Wrap(err)
  839. }
  840. rollupEvents.UpdateWithdrawalDelay = append(rollupEvents.UpdateWithdrawalDelay, withdrawalDelay)
  841. case logHermezUpdateBucketsParameters:
  842. var bucketsParametersAux rollupEventUpdateBucketsParametersAux
  843. var bucketsParameters RollupEventUpdateBucketsParameters
  844. err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux,
  845. "UpdateBucketsParameters", vLog.Data)
  846. if err != nil {
  847. return nil, tracerr.Wrap(err)
  848. }
  849. for i, bucket := range bucketsParametersAux.ArrayBuckets {
  850. bucket, err := c.hermez.UnpackBucket(c.opts, bucket)
  851. if err != nil {
  852. return nil, tracerr.Wrap(err)
  853. }
  854. bucketsParameters.ArrayBuckets[i].CeilUSD = bucket.CeilUSD
  855. bucketsParameters.ArrayBuckets[i].BlockStamp = bucket.BlockStamp
  856. bucketsParameters.ArrayBuckets[i].Withdrawals = bucket.Withdrawals
  857. bucketsParameters.ArrayBuckets[i].RateBlocks = bucket.RateBlocks
  858. bucketsParameters.ArrayBuckets[i].RateWithdrawals = bucket.RateWithdrawals
  859. bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket.MaxWithdrawals
  860. }
  861. rollupEvents.UpdateBucketsParameters =
  862. append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
  863. case logHermezUpdateTokenExchange:
  864. var tokensExchange RollupEventUpdateTokenExchange
  865. err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
  866. if err != nil {
  867. return nil, tracerr.Wrap(err)
  868. }
  869. rollupEvents.UpdateTokenExchange = append(rollupEvents.UpdateTokenExchange, tokensExchange)
  870. case logHermezSafeMode:
  871. var safeMode RollupEventSafeMode
  872. rollupEvents.SafeMode = append(rollupEvents.SafeMode, safeMode)
  873. // Also add an UpdateBucketsParameter with
  874. // SafeMode=true to keep the order between `safeMode`
  875. // and `UpdateBucketsParameters`
  876. bucketsParameters := RollupEventUpdateBucketsParameters{
  877. SafeMode: true,
  878. }
  879. for i := range bucketsParameters.ArrayBuckets {
  880. bucketsParameters.ArrayBuckets[i].CeilUSD = big.NewInt(0)
  881. bucketsParameters.ArrayBuckets[i].BlockStamp = big.NewInt(0)
  882. bucketsParameters.ArrayBuckets[i].Withdrawals = big.NewInt(0)
  883. bucketsParameters.ArrayBuckets[i].RateBlocks = big.NewInt(0)
  884. bucketsParameters.ArrayBuckets[i].RateWithdrawals = big.NewInt(0)
  885. bucketsParameters.ArrayBuckets[i].MaxWithdrawals = big.NewInt(0)
  886. }
  887. rollupEvents.UpdateBucketsParameters = append(rollupEvents.UpdateBucketsParameters,
  888. bucketsParameters)
  889. }
  890. }
  891. return &rollupEvents, nil
  892. }
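// Hypothetical usage sketch (identifiers assumed): the two mutually exclusive
// ways of calling RollupEventsByBlock. Querying by block hash is the safer
// option when the expected block is already known, because the result is
// guaranteed to come from that exact block (and not an uncle) even when the
// block contains no events.
//
//	// By block number (blockHash must be nil):
//	events, err := rollupClient.RollupEventsByBlock(1234, nil)
//
//	// By block hash (blockNum is ignored):
//	events, err = rollupClient.RollupEventsByBlock(0, &expectedBlockHash)
//
//	// A nil events result means the block contained no Rollup events.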
  893. // RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
  894. // Rollup Smart Contract in the given transaction, and the sender address.
  895. func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
  896. l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
  897. tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
  898. if err != nil {
  899. return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
  900. }
  901. txData := tx.Data()
  902. method, err := c.contractAbi.MethodById(txData[:4])
  903. if err != nil {
  904. return nil, nil, tracerr.Wrap(err)
  905. }
  906. receipt, err := c.client.client.TransactionReceipt(context.Background(), ethTxHash)
  907. if err != nil {
  908. return nil, nil, tracerr.Wrap(err)
  909. }
  910. sender, err := c.client.client.TransactionSender(context.Background(), tx,
  911. receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
  912. if err != nil {
  913. return nil, nil, tracerr.Wrap(err)
  914. }
  915. var aux rollupForgeBatchArgsAux
  916. if values, err := method.Inputs.Unpack(txData[4:]); err != nil {
  917. return nil, nil, tracerr.Wrap(err)
  918. } else if err := method.Inputs.Copy(&aux, values); err != nil {
  919. return nil, nil, tracerr.Wrap(err)
  920. }
  921. rollupForgeBatchArgs := RollupForgeBatchArgs{
  922. L1Batch: aux.L1Batch,
  923. NewExitRoot: aux.NewExitRoot,
  924. NewLastIdx: aux.NewLastIdx.Int64(),
  925. NewStRoot: aux.NewStRoot,
  926. ProofA: aux.ProofA,
  927. ProofB: aux.ProofB,
  928. ProofC: aux.ProofC,
  929. VerifierIdx: aux.VerifierIdx,
  930. L1CoordinatorTxs: []common.L1Tx{},
  931. L1CoordinatorTxsAuths: [][]byte{},
  932. L2TxsData: []common.L2Tx{},
  933. FeeIdxCoordinator: []common.Idx{},
  934. }
  935. nLevels := c.consts.Verifiers[rollupForgeBatchArgs.VerifierIdx].NLevels
  936. lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1) //nolint:gomnd
  937. numBytesL1TxUser := int(l1UserTxsLen) * lenL1L2TxsBytes
  938. numTxsL1Coord := len(aux.EncodedL1CoordinatorTx) / common.RollupConstL1CoordinatorTotalBytes
  939. numBytesL1TxCoord := numTxsL1Coord * lenL1L2TxsBytes
  940. numBeginL2Tx := numBytesL1TxCoord + numBytesL1TxUser
  941. l1UserTxsData := []byte{}
  942. if l1UserTxsLen > 0 {
  943. l1UserTxsData = aux.L1L2TxsData[:numBytesL1TxUser]
  944. }
  945. for i := 0; i < int(l1UserTxsLen); i++ {
  946. l1Tx, err :=
  947. common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
  948. uint32(nLevels))
  949. if err != nil {
  950. return nil, nil, tracerr.Wrap(err)
  951. }
  952. rollupForgeBatchArgs.L1UserTxs = append(rollupForgeBatchArgs.L1UserTxs, *l1Tx)
  953. }
  954. l2TxsData := []byte{}
  955. if numBeginL2Tx < len(aux.L1L2TxsData) {
  956. l2TxsData = aux.L1L2TxsData[numBeginL2Tx:]
  957. }
  958. numTxsL2 := len(l2TxsData) / lenL1L2TxsBytes
  959. for i := 0; i < numTxsL2; i++ {
  960. l2Tx, err :=
  961. common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
  962. int(nLevels))
  963. if err != nil {
  964. return nil, nil, tracerr.Wrap(err)
  965. }
  966. rollupForgeBatchArgs.L2TxsData = append(rollupForgeBatchArgs.L2TxsData, *l2Tx)
  967. }
  968. for i := 0; i < numTxsL1Coord; i++ {
  969. bytesL1Coordinator :=
  970. aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes] //nolint:lll
  971. var signature []byte
  972. v := bytesL1Coordinator[0]
  973. s := bytesL1Coordinator[1:33]
  974. r := bytesL1Coordinator[33:65]
  975. signature = append(signature, r[:]...)
  976. signature = append(signature, s[:]...)
  977. signature = append(signature, v)
  978. l1Tx, err := common.L1CoordinatorTxFromBytes(bytesL1Coordinator, c.chainID, c.address)
  979. if err != nil {
  980. return nil, nil, tracerr.Wrap(err)
  981. }
  982. rollupForgeBatchArgs.L1CoordinatorTxs = append(rollupForgeBatchArgs.L1CoordinatorTxs, *l1Tx)
  983. rollupForgeBatchArgs.L1CoordinatorTxsAuths =
  984. append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
  985. }
  986. lenFeeIdxCoordinatorBytes := int(nLevels / 8) //nolint:gomnd
  987. numFeeIdxCoordinator := len(aux.FeeIdxCoordinator) / lenFeeIdxCoordinatorBytes
  988. for i := 0; i < numFeeIdxCoordinator; i++ {
  989. var paddedFeeIdx [6]byte
  990. // TODO: This check is not necessary: the first case will always work. Test it
  991. // before removing the if.
  992. if lenFeeIdxCoordinatorBytes < common.IdxBytesLen {
  993. copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:],
  994. aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
  995. } else {
  996. copy(paddedFeeIdx[:],
  997. aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
  998. }
  999. feeIdxCoordinator, err := common.IdxFromBytes(paddedFeeIdx[:])
  1000. if err != nil {
  1001. return nil, nil, tracerr.Wrap(err)
  1002. }
  1003. if feeIdxCoordinator != common.Idx(0) {
  1004. rollupForgeBatchArgs.FeeIdxCoordinator =
  1005. append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
  1006. }
  1007. }
  1008. return &rollupForgeBatchArgs, &sender, nil
  1009. }
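// Hypothetical usage sketch (identifiers assumed): recovering the contents of
// a forged batch from a ForgeBatch event returned by RollupEventsByBlock. The
// number of L1 user txs forged in that batch (l1UserTxsLen) must be known by
// the caller, since it cannot be derived from the calldata alone.
//
//	for _, fb := range events.ForgeBatch {
//		args, sender, err := rollupClient.RollupForgeBatchArgs(fb.EthTxHash, l1UserTxsLen)
//		if err != nil {
//			// handle error
//		}
//		_ = args.L2TxsData // decoded L2 txs included in the batch
//		_ = sender         // address that sent the forgeBatch transaction
//	}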