Compare commits


39 Commits

Author SHA1 Message Date
Eduard S
672d08c671 Calculate ForgeBatch gasLimit with parametrized formula
Add the following config parameters at
`Coordinator.EthClient.ForgeBatchGasCost`:
- `Fixed`
- `L1UserTx`
- `L1CoordTx`
- `L2Tx`
These are the costs associated with a ForgeBatch transaction, split
into different parts and used in the formula that computes the gasLimit.
2021-02-25 15:30:50 +01:00
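The gas limit in the commit above is a simple linear formula over the batch contents, with the four `ForgeBatchGasCost` parameters as coefficients. Below is a minimal, self-contained sketch of that formula; the struct mirrors the new config section, the example costs are the ones from the `cfg.buidler.toml` diff further down, and the tx counts are invented for illustration.

```go
package main

import "fmt"

// ForgeBatchGasCost mirrors the new Coordinator.EthClient.ForgeBatchGasCost
// config section introduced by this commit.
type ForgeBatchGasCost struct {
	Fixed, L1UserTx, L1CoordTx, L2Tx uint64
}

// forgeBatchGasLimit sketches the TxManager formula: a fixed base cost plus a
// per-transaction cost for each kind of tx included in the ForgeBatch call.
func forgeBatchGasLimit(c ForgeBatchGasCost, nL1User, nL1Coord, nL2 uint64) uint64 {
	return c.Fixed + nL1User*c.L1UserTx + nL1Coord*c.L1CoordTx + nL2*c.L2Tx
}

func main() {
	cost := ForgeBatchGasCost{Fixed: 500000, L1UserTx: 8000, L1CoordTx: 9000, L2Tx: 1}
	// 10 L1 user txs, 2 L1 coordinator txs and 100 L2 txs (hypothetical batch).
	fmt.Println(forgeBatchGasLimit(cost, 10, 2, 100)) // 500000 + 80000 + 18000 + 100 = 598100
}
```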
Eduard S
9b7b333acf Reorg l2db before starting pipeline 2021-02-25 15:30:38 +01:00
Eduard S
2a5992d218 Fix missing timer reset in TxManager
Also, replace the usage of time.After with time.NewTimer, because the latter
lets us stop a timer before replacing it, so that we never leak resources.
2021-02-25 15:30:26 +01:00
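The underlying pattern (stop the timer, drain its channel if it had already fired, then reset) is the one used throughout the coordinator and pipeline diffs below. A minimal standalone sketch:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	timer := time.NewTimer(time.Hour)
	// Unlike a bare time.After channel, a *time.Timer can be stopped and
	// reused. Stop returns false if the timer already fired, in which case
	// the pending value must be drained before Reset to avoid a stale tick.
	if !timer.Stop() {
		<-timer.C
	}
	timer.Reset(100 * time.Millisecond)
	<-timer.C
	fmt.Println("timer fired after reset; no leaked timers left behind")
}
```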
arnau
706e4c7a3d Merge pull request #575 from hermeznetwork/fix/httpserve
In http servers, first listen, then serve
2021-02-24 16:40:16 +01:00
arnau
bf4acfad9f Merge pull request #578 from hermeznetwork/feature/waitproofserverbeforebatch
Wait for serverProof before starting batch
2021-02-24 16:39:46 +01:00
a_bennassar
ba88b25425 Merge pull request #570 from hermeznetwork/feature/api-TxPerBatchAmend
Change how TX per Batch is calculated in state API endpoint
2021-02-24 15:49:16 +01:00
Eduard S
df729b3b71 Wait for serverProof before starting batch 2021-02-24 15:43:55 +01:00
Toni Ramírez
f8d01b5998 Change how TX per Batch is calculated in state API endpoint 2021-02-24 15:39:54 +01:00
Eduard S
155e06484b Merge pull request #571 from hermeznetwork/feature/sql-read-write
Differentiate SQL connections by read/write
2021-02-24 15:26:43 +01:00
Eduard S
5cdcae47b8 Merge pull request #577 from hermeznetwork/feature/zki-forceexit-circuit-effectiveamount-behaviour
Update ZKI generation to match circuit behaviour for Exit Amount>Balance
2021-02-24 15:25:32 +01:00
arnaubennassar
ed9bfdffa9 Differentiate SQL connections by read/write 2021-02-24 15:12:43 +01:00
arnaucube
504e36ac47 Update ZKI gen to match circuit behaviour for Exit
Currently the circuit does not use an Exit leaf in the Exit MerkleTree
when Amount=0 & EffectiveAmount=0, but it does use an Exit leaf when
Amount>0 and EffectiveAmount=0 (for example when tx.Amount >
sender.Balance). This is a particularity of the circuit's approach; the
plan is to update the circuit in the future so that when Amount>0 but
EffectiveAmount=0 the Exit is not added to the Exit MerkleTree, but for
the moment the Go code is adapted to the circuit, so an Exit with
Amount>0 & EffectiveAmount=0 is added as a leaf of the Exit MerkleTree.
2021-02-24 15:00:30 +01:00
Eduard S
ca53dc9b52 In http servers, first listen, then serve
- In tests, doing the `net.Listen` outside of the goroutine guarantees that we
  are already listening, so we avoid a possible data race caused by making an
  HTTP request before the server is listening.
- In packages that run an HTTP server, listening first allows:
    - Checking for errors when opening the address for listening before
      starting the goroutine, so that if there's an error on listen we can
      terminate gracefully.
    - Logging that we are listening only when we really are listening, and not
      before.
2021-02-24 12:11:12 +01:00
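A standalone sketch of the listen-then-serve pattern this PR adopts (the address and handler here are illustrative, not taken from the repository):

```go
package main

import (
	"log"
	"net"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		_, _ = w.Write([]byte("ok"))
	})
	// Listen first: errors such as "address already in use" surface here,
	// before the serving goroutine starts, so we can terminate gracefully.
	listener, err := net.Listen("tcp", "127.0.0.1:4010")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("listening on %s", listener.Addr()) // logged only once we really listen
	server := &http.Server{Handler: mux}
	go func() {
		if err := server.Serve(listener); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()
	// The listener is already open, so a request made right away cannot race
	// with the server startup (the situation the test fix above avoids).
	resp, err := http.Get("http://127.0.0.1:4010/")
	if err != nil {
		log.Fatal(err)
	}
	log.Println(resp.Status)
	_ = resp.Body.Close()
	_ = server.Close()
}
```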
Eduard S
5b4cfac309 Merge pull request #563 from hermeznetwork/feature/metrics0
Add Prometheus initial setup
2021-02-23 18:53:57 +01:00
arnau
11a0707746 Merge pull request #574 from hermeznetwork/feature/apirecommendedfeeuseminfee
In API recommended fee, use minFeeUSD as min value
2021-02-23 17:58:38 +01:00
Eduard S
d16fba72a7 In API recommended fee, use minFeeUSD as min value 2021-02-23 17:20:16 +01:00
arnaucube
971b2c1d40 Add Prometheus initial setup 2021-02-23 16:27:29 +01:00
arnau
9d08ec6978 Merge pull request #566 from hermeznetwork/fix/process-tx-runtime-error
fix possible runtime error
2021-02-23 16:13:31 +01:00
arnau
ffda9fa1ef Merge pull request #569 from hermeznetwork/fix/node-redundant-condition
node: remove redundant error check
2021-02-23 16:11:34 +01:00
arnau
5a11aa5c27 Merge pull request #567 from hermeznetwork/fix/tx-manager-redundant-condition
tx manager: remove redundant error check
2021-02-23 16:10:38 +01:00
Eduard S
3e5e9bd633 Merge pull request #564 from hermeznetwork/feature/api-without-statedb
Stop using stateDB in API
2021-02-23 15:22:53 +01:00
arnaubennassar
c83047f527 Stop using stateDB in API 2021-02-23 11:04:16 +01:00
Danilo Pantani
bcd576480c remove redundant error check condition for node creation 2021-02-22 13:59:56 -03:00
Danilo Pantani
35ea597ac4 remove redundant error check condition for tx manager 2021-02-22 13:54:10 -03:00
arnau
8259aee884 Merge pull request #565 from hermeznetwork/feature/minprice1
Add minPriceUSD in L2DB, check maxTxs atomically
2021-02-22 17:54:04 +01:00
Danilo Pantani
72862147f3 fix possible runtime error 2021-02-22 13:44:36 -03:00
Eduard S
3706ddb2fb Add minPriceUSD in L2DB, check maxTxs atomically
- Add config parameter `Coordinator.L2DB.MinFeeUSD`, which allows rejecting
  txs sent to the pool whose fee is lower than the minimum.
- In pool tx insertion, check the number of pending txs atomically with the
  insertion, to avoid data races leading to more than MaxTxs pending txs in
  the pool.
2021-02-22 16:45:15 +01:00
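The new minimum-fee setting shows up in two places in this changeset: pool insertions below the minimum are rejected at the API, and the recommended fees are floored with `math.Max` against the same value. A tiny illustrative sketch of both checks (the numbers are invented):

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	const minFeeUSD = 0.5 // would come from Coordinator.L2DB.MinFeeUSD
	txFeeUSD := 0.2       // fee of an incoming pool tx, in USD

	// API-level rejection of txs paying less than the configured minimum.
	if txFeeUSD < minFeeUSD {
		fmt.Println("tx rejected: fee below MinFeeUSD")
	}

	// Recommended fees never drop below the same minimum.
	feeExistingAccount := 0.3
	fmt.Println("recommended fee:", math.Max(feeExistingAccount, minFeeUSD))
}
```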
arnau
df0cc32eed Merge pull request #557 from hermeznetwork/feature/poolexternaldelete
Delete pending txs by external mark, store tx IP
2021-02-19 16:58:56 +01:00
Eduard S
67b2b7da4b Delete pending txs by external mark, store tx IP
- In tx_pool, add a column called `external_delete` that can be set to true
  externally. The coordinator regularly deletes all pending txs with this
  column set to true. The interval for this action is set via the new config
  parameter `Coordinator.PurgeByExtDelInterval`.
- In tx_pool, add a column for the client IP that sent the transaction. The
  API fills this value using the ClientIP method from gin.Context, which
  should work even behind a reverse proxy.
2021-02-19 16:00:45 +01:00
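For the client IP part, the handler simply copies gin's `ClientIP()` into the tx row before inserting it (see the txspool diff below). A minimal standalone gin example of that call, with an illustrative route and response:

```go
package main

import (
	"net/http"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.Default()
	r.POST("/transactions-pool", func(c *gin.Context) {
		// ClientIP takes X-Forwarded-For / X-Real-IP into account (subject to
		// gin's trusted-proxy settings), so it should also work behind a
		// reverse proxy, as the commit message notes.
		c.JSON(http.StatusOK, gin.H{"client_ip": c.ClientIP()})
	})
	_ = r.Run(":8080")
}
```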
arnau
e23063380c Merge pull request #555 from hermeznetwork/feature/accountupdatetable
Feature/accountupdatetable
2021-02-19 13:23:03 +01:00
Eduard S
ed4d39fcd1 Add account_update SQL table with balances and nonces 2021-02-19 13:16:14 +01:00
Eduard S
d6ec1910da Simplify historyDB test and make it faster 2021-02-19 11:12:56 +01:00
Eduard S
c829eb99dc Remove unnecessary slice indexing 2021-02-19 11:00:36 +01:00
Eduard S
6ecb8118bd Merge pull request #553 from hermeznetwork/fix/exit-amount-0
Txs w/ exit Amount=0, to not create new Exit leaves
2021-02-19 10:56:08 +01:00
arnaucube
4500820a03 Txs w/ exit Amount=0, to not create new Exit leaves
There are 2 ways to interpret an Exit transaction with Amount=0:
`A`. Not adding a new Leaf to the ExitTree, so not modifying the Exit
MerkleRoot
`B`. Adding a new Leaf to the ExitTree, with Balance=0, which modifies
the Exit MerkleRoot

Currently the [Circuits](https://github.com/hermeznetwork/circuits) follow
approach `A`, while the
[hermez-node](https://github.com/hermeznetwork/hermez-node) follows
approach `B`. The idea of this commit is to use approach `A` in the
hermez-node as well.
2021-02-19 10:50:59 +01:00
Eduard S
b4e6104fd3 Merge pull request #551 from hermeznetwork/fix/api-err-dupkey-meddler
Duplicated error when caused by meddler
2021-02-18 17:18:20 +01:00
arnaubennassar
28f026f628 Duplicated error when caused by meddler 2021-02-18 17:06:49 +01:00
arnau
688d376ce0 Merge pull request #550 from hermeznetwork/fix/txselectornoncessorting2
Fix TxSel sorting, TxManager geth err checking
2021-02-18 16:59:10 +01:00
Eduard S
2547d5dce7 Fix TxSel sorting, TxManager geth err checking 2021-02-18 15:18:39 +01:00
52 changed files with 1646 additions and 901 deletions

View File

@@ -15,7 +15,7 @@ start PostgreSQL with docker easily this way (where `yourpasswordhere` should
 be your password):
 ```
-POSTGRES_PASS=yourpasswordhere sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$POSTGRES_PASS" -d postgres
+POSTGRES_PASS=yourpasswordhere; sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$POSTGRES_PASS" -d postgres
 ```
 Afterwards, run the tests with the password as env var:

View File

@@ -4,10 +4,7 @@ import (
 	"net/http"
 	"github.com/gin-gonic/gin"
-	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
-	"github.com/hermeznetwork/hermez-node/db/statedb"
-	"github.com/hermeznetwork/tracerr"
 )
 func (a *API) getAccount(c *gin.Context) {
@@ -23,16 +20,6 @@ func (a *API) getAccount(c *gin.Context) {
 		return
 	}
-	// Get balance from stateDB
-	account, err := a.s.LastGetAccount(*idx)
-	if err != nil {
-		retSQLErr(err, c)
-		return
-	}
-	apiAccount.Balance = apitypes.NewBigIntStr(account.Balance)
-	apiAccount.Nonce = account.Nonce
 	c.JSON(http.StatusOK, apiAccount)
 }
@@ -57,26 +44,6 @@ func (a *API) getAccounts(c *gin.Context) {
 		return
 	}
-	// Get balances from stateDB
-	if err := a.s.LastRead(func(sdb *statedb.Last) error {
-		for x, apiAccount := range apiAccounts {
-			idx, err := stringToIdx(string(apiAccount.Idx), "Account Idx")
-			if err != nil {
-				return tracerr.Wrap(err)
-			}
-			account, err := sdb.GetAccount(*idx)
-			if err != nil {
-				return tracerr.Wrap(err)
-			}
-			apiAccounts[x].Balance = apitypes.NewBigIntStr(account.Balance)
-			apiAccounts[x].Nonce = account.Nonce
-		}
-		return nil
-	}); err != nil {
-		retSQLErr(err, c)
-		return
-	}
 	// Build succesfull response
 	type accountResponse struct {
 		Accounts []historydb.AccountAPI `json:"accounts"`

View File

@@ -9,7 +9,6 @@ import (
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
-	"github.com/hermeznetwork/hermez-node/db/statedb"
 	"github.com/hermeznetwork/tracerr"
 )
@@ -34,7 +33,6 @@ type Status struct {
 type API struct {
 	h  *historydb.HistoryDB
 	cg *configAPI
-	s  *statedb.StateDB
 	l2 *l2db.L2DB
 	status  Status
 	chainID uint16
@@ -46,7 +44,6 @@ func NewAPI(
 	coordinatorEndpoints, explorerEndpoints bool,
 	server *gin.Engine,
 	hdb *historydb.HistoryDB,
-	sdb *statedb.StateDB,
 	l2db *l2db.L2DB,
 	config *Config,
 ) (*API, error) {
@@ -66,7 +63,6 @@ func NewAPI(
 			AuctionConstants:  config.AuctionConstants,
 			WDelayerConstants: config.WDelayerConstants,
 		},
-		s:  sdb,
 		l2: l2db,
 		status:  Status{},
 		chainID: config.ChainID,

View File

@@ -8,6 +8,7 @@ import (
 	"io"
 	"io/ioutil"
 	"math/big"
+	"net"
 	"net/http"
 	"os"
 	"strconv"
@@ -22,7 +23,6 @@ import (
 	"github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
-	"github.com/hermeznetwork/hermez-node/db/statedb"
 	"github.com/hermeznetwork/hermez-node/log"
 	"github.com/hermeznetwork/hermez-node/test"
 	"github.com/hermeznetwork/hermez-node/test/til"
@@ -39,8 +39,8 @@ type Pendinger interface {
 	New() Pendinger
 }
-const apiPort = ":4010"
-const apiURL = "http://localhost" + apiPort + "/"
+const apiAddr = ":4010"
+const apiURL = "http://localhost" + apiAddr + "/"
 var SetBlockchain = `
 Type: Blockchain
@@ -202,7 +202,7 @@ func TestMain(m *testing.M) {
 		panic(err)
 	}
 	apiConnCon := db.NewAPICnnectionController(1, time.Second)
-	hdb := historydb.NewHistoryDB(database, apiConnCon)
+	hdb := historydb.NewHistoryDB(database, database, apiConnCon)
 	if err != nil {
 		panic(err)
 	}
@@ -216,12 +216,8 @@ func TestMain(m *testing.M) {
 			panic(err)
 		}
 	}()
-	sdb, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128, Type: statedb.TypeTxSelector, NLevels: 0})
-	if err != nil {
-		panic(err)
-	}
 	// L2DB
-	l2DB := l2db.NewL2DB(database, 10, 1000, 24*time.Hour, apiConnCon)
+	l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
 	test.WipeDB(l2DB.DB()) // this will clean HistoryDB and L2DB
 	// Config (smart contract constants)
 	chainID := uint16(0)
@@ -239,7 +235,6 @@ func TestMain(m *testing.M) {
 		true,
 		apiGin,
 		hdb,
-		sdb,
 		l2DB,
 		&_config,
 	)
@@ -247,9 +242,14 @@ func TestMain(m *testing.M) {
 		panic(err)
 	}
 	// Start server
-	server := &http.Server{Addr: apiPort, Handler: apiGin}
+	listener, err := net.Listen("tcp", apiAddr) //nolint:gosec
+	if err != nil {
+		panic(err)
+	}
+	server := &http.Server{Handler: apiGin}
 	go func() {
-		if err := server.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed {
+		if err := server.Serve(listener); err != nil &&
+			tracerr.Unwrap(err) != http.ErrServerClosed {
 			panic(err)
 		}
 	}()
@@ -350,19 +350,6 @@ func TestMain(m *testing.M) {
 		}
 	}
-	// lastBlockNum2 := blocksData[len(blocksData)-1].Block.EthBlockNum
-	// Add accounts to StateDB
-	for i := 0; i < len(commonAccounts); i++ {
-		if _, err := api.s.CreateAccount(commonAccounts[i].Idx, &commonAccounts[i]); err != nil {
-			panic(err)
-		}
-	}
-	// Make a checkpoint to make the accounts available in Last
-	if err := api.s.MakeCheckpoint(); err != nil {
-		panic(err)
-	}
 	// Generate Coordinators and add them to HistoryDB
 	const nCoords = 10
 	commonCoords := test.GenCoordinators(nCoords, commonBlocks)
@@ -529,13 +516,41 @@ func TestMain(m *testing.M) {
 	testTxs := genTestTxs(commonL1Txs, commonL2Txs, commonAccounts, testTokens, commonBlocks)
 	testBatches, testFullBatches := genTestBatches(commonBlocks, commonBatches, testTxs)
 	poolTxsToSend, poolTxsToReceive := genTestPoolTxs(commonPoolTxs, testTokens, commonAccounts)
+	// Add balance and nonce to historyDB
+	accounts := genTestAccounts(commonAccounts, testTokens)
+	accUpdates := []common.AccountUpdate{}
+	for i := 0; i < len(accounts); i++ {
+		balance := new(big.Int)
+		balance.SetString(string(*accounts[i].Balance), 10)
+		idx, err := stringToIdx(string(accounts[i].Idx), "foo")
+		if err != nil {
+			panic(err)
+		}
+		accUpdates = append(accUpdates, common.AccountUpdate{
+			EthBlockNum: 0,
+			BatchNum:    1,
+			Idx:         *idx,
+			Nonce:       0,
+			Balance:     balance,
+		})
+		accUpdates = append(accUpdates, common.AccountUpdate{
+			EthBlockNum: 0,
+			BatchNum:    1,
+			Idx:         *idx,
+			Nonce:       accounts[i].Nonce,
+			Balance:     balance,
+		})
+	}
+	if err := api.h.AddAccountUpdates(accUpdates); err != nil {
+		panic(err)
+	}
 	tc = testCommon{
 		blocks:        commonBlocks,
 		tokens:        testTokens,
 		batches:       testBatches,
 		fullBatches:   testFullBatches,
 		coordinators:  testCoords,
-		accounts:      genTestAccounts(commonAccounts, testTokens),
+		accounts:      accounts,
 		txs:           testTxs,
 		exits:         testExits,
 		poolTxsToSend: poolTxsToSend,
@@ -582,10 +597,10 @@ func TestTimeout(t *testing.T) {
 	databaseTO, err := db.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
 	apiConnConTO := db.NewAPICnnectionController(1, 100*time.Millisecond)
-	hdbTO := historydb.NewHistoryDB(databaseTO, apiConnConTO)
+	hdbTO := historydb.NewHistoryDB(databaseTO, databaseTO, apiConnConTO)
 	require.NoError(t, err)
 	// L2DB
-	l2DBTO := l2db.NewL2DB(databaseTO, 10, 1000, 24*time.Hour, apiConnConTO)
+	l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 24*time.Hour, apiConnConTO)
 	// API
 	apiGinTO := gin.Default()
@@ -600,9 +615,12 @@ func TestTimeout(t *testing.T) {
 		<-finishWait
 	})
 	// Start server
-	serverTO := &http.Server{Addr: ":4444", Handler: apiGinTO}
+	serverTO := &http.Server{Handler: apiGinTO}
+	listener, err := net.Listen("tcp", ":4444") //nolint:gosec
+	require.NoError(t, err)
 	go func() {
-		if err := serverTO.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed {
+		if err := serverTO.Serve(listener); err != nil &&
+			tracerr.Unwrap(err) != http.ErrServerClosed {
 			require.NoError(t, err)
 		}
 	}()
@@ -612,7 +630,6 @@ func TestTimeout(t *testing.T) {
 		true,
 		apiGinTO,
 		hdbTO,
-		nil,
 		l2DBTO,
 		&_config,
 	)

View File

@@ -10,6 +10,7 @@ import (
 	"github.com/hermeznetwork/hermez-node/log"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/lib/pq"
+	"github.com/russross/meddler"
 )
 const (
@@ -46,24 +47,33 @@ var (
 func retSQLErr(err error, c *gin.Context) {
 	log.Warnw("HTTP API SQL request error", "err", err)
 	errMsg := tracerr.Unwrap(err).Error()
+	retDupKey := func(errCode pq.ErrorCode) {
+		// https://www.postgresql.org/docs/current/errcodes-appendix.html
+		if errCode == "23505" {
+			c.JSON(http.StatusInternalServerError, errorMsg{
+				Message: errDuplicatedKey,
+			})
+		} else {
+			c.JSON(http.StatusInternalServerError, errorMsg{
+				Message: errMsg,
+			})
+		}
+	}
 	if errMsg == errCtxTimeout {
 		c.JSON(http.StatusServiceUnavailable, errorMsg{
 			Message: errSQLTimeout,
 		})
 	} else if sqlErr, ok := tracerr.Unwrap(err).(*pq.Error); ok {
-		// https://www.postgresql.org/docs/current/errcodes-appendix.html
-		if sqlErr.Code == "23505" {
-			c.JSON(http.StatusInternalServerError, errorMsg{
-				Message: errDuplicatedKey,
-			})
-		}
+		retDupKey(sqlErr.Code)
+	} else if sqlErr, ok := meddler.DriverErr(tracerr.Unwrap(err)); ok {
+		retDupKey(sqlErr.(*pq.Error).Code)
 	} else if tracerr.Unwrap(err) == sql.ErrNoRows {
 		c.JSON(http.StatusNotFound, errorMsg{
-			Message: err.Error(),
+			Message: errMsg,
 		})
 	} else {
 		c.JSON(http.StatusInternalServerError, errorMsg{
-			Message: err.Error(),
+			Message: errMsg,
 		})
 	}
 }

View File

@@ -3,6 +3,7 @@ package api
 import (
 	"database/sql"
 	"fmt"
+	"math"
 	"math/big"
 	"net/http"
 	"time"
@@ -297,10 +298,17 @@ func (a *API) UpdateRecommendedFee() error {
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
+	var minFeeUSD float64
+	if a.l2 != nil {
+		minFeeUSD = a.l2.MinFeeUSD()
+	}
 	a.status.Lock()
-	a.status.RecommendedFee.ExistingAccount = feeExistingAccount
-	a.status.RecommendedFee.CreatesAccount = createAccountExtraFeePercentage * feeExistingAccount
-	a.status.RecommendedFee.CreatesAccountAndRegister = createAccountInternalExtraFeePercentage * feeExistingAccount
+	a.status.RecommendedFee.ExistingAccount =
+		math.Max(feeExistingAccount, minFeeUSD)
+	a.status.RecommendedFee.CreatesAccount =
+		math.Max(createAccountExtraFeePercentage*feeExistingAccount, minFeeUSD)
+	a.status.RecommendedFee.CreatesAccountAndRegister =
+		math.Max(createAccountInternalExtraFeePercentage*feeExistingAccount, minFeeUSD)
 	a.status.Unlock()
 	return nil
 }

View File

@@ -131,7 +131,7 @@ func TestUpdateNetworkInfo(t *testing.T) {
 func TestUpdateMetrics(t *testing.T) {
 	// Update Metrics needs api.status.Network.LastBatch.BatchNum to be updated
 	lastBlock := tc.blocks[3]
-	lastBatchNum := common.BatchNum(3)
+	lastBatchNum := common.BatchNum(12)
 	currentSlotNum := int64(1)
 	err := api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
 	assert.NoError(t, err)
@@ -149,7 +149,11 @@
 func TestUpdateRecommendedFee(t *testing.T) {
 	err := api.UpdateRecommendedFee()
 	assert.NoError(t, err)
-	assert.Greater(t, api.status.RecommendedFee.ExistingAccount, float64(0))
+	var minFeeUSD float64
+	if api.l2 != nil {
+		minFeeUSD = api.l2.MinFeeUSD()
+	}
+	assert.Greater(t, api.status.RecommendedFee.ExistingAccount, minFeeUSD)
 	assert.Equal(t, api.status.RecommendedFee.CreatesAccount,
 		api.status.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
 	assert.Equal(t, api.status.RecommendedFee.CreatesAccountAndRegister,
@@ -158,7 +162,7 @@ func TestUpdateRecommendedFee(t *testing.T) {
 func TestGetState(t *testing.T) {
 	lastBlock := tc.blocks[3]
-	lastBatchNum := common.BatchNum(3)
+	lastBatchNum := common.BatchNum(12)
 	currentSlotNum := int64(1)
 	api.SetRollupVariables(tc.rollupVars)
 	api.SetWDelayerVariables(tc.wdelayerVars)

View File

@@ -2,6 +2,7 @@ package api
 import (
 	"errors"
+	"fmt"
 	"math/big"
 	"net/http"
@@ -27,6 +28,7 @@ func (a *API) postPoolTx(c *gin.Context) {
 		retBadReq(err, c)
 		return
 	}
+	writeTx.ClientIP = c.ClientIP()
 	// Insert to DB
 	if err := a.l2.AddTxAPI(writeTx); err != nil {
 		retSQLErr(err, c)
@@ -169,16 +171,21 @@ func (a *API) verifyPoolL2TxWrite(txw *l2db.PoolL2TxWrite) error {
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
-	// Get public key
-	account, err := a.s.LastGetAccount(poolTx.FromIdx)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
 	// Validate feeAmount
 	_, err = common.CalcFeeAmount(poolTx.Amount, poolTx.Fee)
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
+	// Get public key
+	account, err := a.h.GetCommonAccountAPI(poolTx.FromIdx)
+	if err != nil {
+		return tracerr.Wrap(err)
+	}
+	// Validate TokenID
+	if poolTx.TokenID != account.TokenID {
+		return tracerr.Wrap(fmt.Errorf("tx.TokenID (%v) != account.TokenID (%v)",
+			poolTx.TokenID, account.TokenID))
+	}
 	// Check signature
 	if !poolTx.VerifySignature(a.chainID, account.BJJ) {
 		return tracerr.Wrap(errors.New("wrong signature"))

View File

@@ -10,6 +10,7 @@ import (
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
 // testPoolTxReceive is a struct to be used to assert the response
@@ -170,9 +171,9 @@ func TestPoolTxs(t *testing.T) {
 	fetchedTxID := common.TxID{}
 	for _, tx := range tc.poolTxsToSend {
 		jsonTxBytes, err := json.Marshal(tx)
-		assert.NoError(t, err)
+		require.NoError(t, err)
 		jsonTxReader := bytes.NewReader(jsonTxBytes)
-		assert.NoError(
+		require.NoError(
 			t, doGoodReq(
 				"POST",
 				endpoint,
@@ -187,42 +188,42 @@ func TestPoolTxs(t *testing.T) {
 	badTx.Amount = "99950000000000000"
 	badTx.Fee = 255
 	jsonTxBytes, err := json.Marshal(badTx)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	jsonTxReader := bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	// Wrong signature
 	badTx = tc.poolTxsToSend[0]
 	badTx.FromIdx = "hez:foo:1000"
 	jsonTxBytes, err = json.Marshal(badTx)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	jsonTxReader = bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	// Wrong to
 	badTx = tc.poolTxsToSend[0]
 	ethAddr := "hez:0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
 	badTx.ToEthAddr = &ethAddr
 	badTx.ToIdx = nil
 	jsonTxBytes, err = json.Marshal(badTx)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	jsonTxReader = bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	// Wrong rq
 	badTx = tc.poolTxsToSend[0]
 	rqFromIdx := "hez:foo:30"
 	badTx.RqFromIdx = &rqFromIdx
 	jsonTxBytes, err = json.Marshal(badTx)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	jsonTxReader = bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	// GET
 	endpoint += "/"
 	for _, tx := range tc.poolTxsToReceive {
 		fetchedTx := testPoolTxReceive{}
-		assert.NoError(
+		require.NoError(
 			t, doGoodReq(
 				"GET",
 				endpoint+tx.TxID.String(),
@@ -233,10 +234,10 @@ func TestPoolTxs(t *testing.T) {
 	}
 	// 400, due invalid TxID
 	err = doBadReq("GET", endpoint+"0xG2241b6f2b1dd772dba391f4a1a3407c7c21f598d86e2585a14e616fb4a255f823", nil, 400)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 	// 404, due inexistent TxID in DB
 	err = doBadReq("GET", endpoint+"0x02241b6f2b1dd772dba391f4a1a3407c7c21f598d86e2585a14e616fb4a255f823", nil, 404)
-	assert.NoError(t, err)
+	require.NoError(t, err)
 }
 func assertPoolTx(t *testing.T, expected, actual testPoolTxReceive) {

View File

@@ -64,7 +64,10 @@ func (bb *BatchBuilder) BuildBatch(coordIdxs []common.Idx, configBatch *ConfigBa
 	tp := txprocessor.NewTxProcessor(bbStateDB, configBatch.TxProcessorConfig)
 	ptOut, err := tp.ProcessTxs(coordIdxs, l1usertxs, l1coordinatortxs, pooll2txs)
-	return ptOut.ZKInputs, tracerr.Wrap(err)
+	if err != nil {
+		return nil, tracerr.Wrap(err)
+	}
+	return ptOut.ZKInputs, nil
 }
 // LocalStateDB returns the underlying LocalStateDB

cli/node/.gitignore vendored
View File

@@ -1,2 +1,3 @@
 cfg.example.secret.toml
 cfg.toml
+node

View File

@@ -20,11 +20,16 @@ Path = "/tmp/iden3-test/hermez/statedb"
 Keep = 256
 [PostgreSQL]
-Port = 5432
-Host = "localhost"
-User = "hermez"
-Password = "yourpasswordhere"
-Name = "hermez"
+PortWrite = 5432
+HostWrite = "localhost"
+UserWrite = "hermez"
+PasswordWrite = "yourpasswordhere"
+NameWrite = "hermez"
+# PortRead = 5432
+# HostRead = "localhost"
+# UserRead = "hermez"
+# PasswordRead = "yourpasswordhere"
+# NameRead = "hermez"
 [Web3]
 URL = "http://localhost:8545"
@@ -55,6 +60,7 @@ ForgeRetryInterval = "500ms"
 SyncRetryInterval = "1s"
 ForgeDelay = "10s"
 ForgeNoTxsDelay = "0s"
+PurgeByExtDelInterval = "1m"
 [Coordinator.FeeAccount]
 Address = "0x56232B1c5B10038125Bc7345664B4AFD745bcF8E"
@@ -65,6 +71,7 @@ BJJ = "0x1b176232f78ba0d388ecc5f4896eca2d3b3d4f272092469f559247297f5c0c13"
 [Coordinator.L2DB]
 SafetyPeriod = 10
 MaxTxs = 512
+MinFeeUSD = 0.0
 TTL = "24h"
 PurgeBatchDelay = 10
 InvalidateBatchDelay = 20
@@ -97,6 +104,12 @@ GasPriceIncPerc = 10
 Path = "/tmp/iden3-test/hermez/ethkeystore"
 Password = "yourpasswordhere"
+[Coordinator.EthClient.ForgeBatchGasCost]
+Fixed = 500000
+L1UserTx = 8000
+L1CoordTx = 9000
+L2Tx = 1
 [Coordinator.API]
 Coordinator = true

View File

@@ -17,6 +17,7 @@ import (
 	"github.com/hermeznetwork/hermez-node/node"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/iden3/go-iden3-crypto/babyjub"
+	"github.com/jmoiron/sqlx"
 	"github.com/urfave/cli/v2"
 )
@@ -90,11 +91,11 @@ func cmdWipeSQL(c *cli.Context) error {
 		}
 	}
 	db, err := dbUtils.ConnectSQLDB(
-		cfg.PostgreSQL.Port,
-		cfg.PostgreSQL.Host,
-		cfg.PostgreSQL.User,
-		cfg.PostgreSQL.Password,
-		cfg.PostgreSQL.Name,
+		cfg.PostgreSQL.PortWrite,
+		cfg.PostgreSQL.HostWrite,
+		cfg.PostgreSQL.UserWrite,
+		cfg.PostgreSQL.PasswordWrite,
+		cfg.PostgreSQL.NameWrite,
 	)
 	if err != nil {
 		return tracerr.Wrap(err)
@@ -151,17 +152,36 @@ func cmdDiscard(c *cli.Context) error {
 	blockNum := c.Int64(flagBlock)
 	log.Infof("Discarding all blocks up to block %v...", blockNum)
-	db, err := dbUtils.InitSQLDB(
-		cfg.PostgreSQL.Port,
-		cfg.PostgreSQL.Host,
-		cfg.PostgreSQL.User,
-		cfg.PostgreSQL.Password,
-		cfg.PostgreSQL.Name,
+	dbWrite, err := dbUtils.InitSQLDB(
+		cfg.PostgreSQL.PortWrite,
+		cfg.PostgreSQL.HostWrite,
+		cfg.PostgreSQL.UserWrite,
+		cfg.PostgreSQL.PasswordWrite,
+		cfg.PostgreSQL.NameWrite,
 	)
 	if err != nil {
 		return tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
 	}
-	historyDB := historydb.NewHistoryDB(db, nil)
+	var dbRead *sqlx.DB
+	if cfg.PostgreSQL.HostRead == "" {
+		dbRead = dbWrite
+	} else if cfg.PostgreSQL.HostRead == cfg.PostgreSQL.HostWrite {
+		return tracerr.Wrap(fmt.Errorf(
+			"PostgreSQL.HostRead and PostgreSQL.HostWrite must be different",
+		))
+	} else {
+		dbRead, err = dbUtils.InitSQLDB(
+			cfg.PostgreSQL.PortRead,
+			cfg.PostgreSQL.HostRead,
+			cfg.PostgreSQL.UserRead,
+			cfg.PostgreSQL.PasswordRead,
+			cfg.PostgreSQL.NameRead,
		)
+		if err != nil {
+			return tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
+		}
+	}
+	historyDB := historydb.NewHistoryDB(dbRead, dbWrite, nil)
 	if err := historyDB.Reorg(blockNum); err != nil {
 		return tracerr.Wrap(fmt.Errorf("historyDB.Reorg: %w", err))
 	}
@@ -170,9 +190,10 @@ func cmdDiscard(c *cli.Context) error {
 		return tracerr.Wrap(fmt.Errorf("historyDB.GetLastBatchNum: %w", err))
 	}
 	l2DB := l2db.NewL2DB(
-		db,
+		dbRead, dbWrite,
 		cfg.Coordinator.L2DB.SafetyPeriod,
 		cfg.Coordinator.L2DB.MaxTxs,
+		cfg.Coordinator.L2DB.MinFeeUSD,
 		cfg.Coordinator.L2DB.TTL.Duration,
 		nil,
 	)

View File

@@ -263,3 +263,13 @@ type IdxNonce struct {
 	Idx   Idx   `db:"idx"`
 	Nonce Nonce `db:"nonce"`
 }
+// AccountUpdate represents an account balance and/or nonce update after a
+// processed batch
+type AccountUpdate struct {
+	EthBlockNum int64    `meddler:"eth_block_num"`
+	BatchNum    BatchNum `meddler:"batch_num"`
+	Idx         Idx      `meddler:"idx"`
+	Nonce       Nonce    `meddler:"nonce"`
+	Balance     *big.Int `meddler:"balance,bigint"`
+}

View File

@@ -77,6 +77,7 @@ type BatchData struct {
 	L1CoordinatorTxs []L1Tx
 	L2Txs            []L2Tx
 	CreatedAccounts  []Account
+	UpdatedAccounts  []AccountUpdate
 	ExitTree         []ExitInfo
 	Batch            Batch
 }

View File

@@ -62,3 +62,17 @@ func RmEndingZeroes(siblings []*merkletree.Hash) []*merkletree.Hash {
 	}
 	return siblings[:pos]
 }
+// TokensToUSD is a helper function to calculate the USD value of a certain
+// amount of tokens considering the normalized token price (which is the price
+// commonly reported by exhanges)
+func TokensToUSD(amount *big.Int, decimals uint64, valueUSD float64) float64 {
+	amountF := new(big.Float).SetInt(amount)
+	// Divide by 10^decimals to normalize the amount
+	baseF := new(big.Float).SetInt(new(big.Int).Exp(
+		big.NewInt(10), big.NewInt(int64(decimals)), nil)) //nolint:gomnd
+	amountF.Mul(amountF, big.NewFloat(valueUSD))
+	amountF.Quo(amountF, baseF)
+	amountUSD, _ := amountF.Float64()
+	return amountUSD
+}
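A short usage note for the helper added above, assuming it lives in the `common` package as the surrounding hunk suggests: 3 tokens with 18 decimals at a price of 2.5 USD come out as 7.5 USD.

```go
package main

import (
	"fmt"
	"math/big"

	"github.com/hermeznetwork/hermez-node/common"
)

func main() {
	// 3 * 10^18 base units of an 18-decimal token, priced at 2.5 USD/token.
	amount, _ := new(big.Int).SetString("3000000000000000000", 10)
	fmt.Println(common.TokensToUSD(amount, 18, 2.5)) // 7.5
}
```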

View File

@@ -35,6 +35,15 @@ type ServerProof struct {
 	URL string `validate:"required"`
 }
+// ForgeBatchGasCost is the costs associated to a ForgeBatch transaction, split
+// into different parts to be used in a formula.
+type ForgeBatchGasCost struct {
+	Fixed     uint64 `validate:"required"`
+	L1UserTx  uint64 `validate:"required"`
+	L1CoordTx uint64 `validate:"required"`
+	L2Tx      uint64 `validate:"required"`
+}
 // Coordinator is the coordinator specific configuration.
 type Coordinator struct {
 	// ForgerAddress is the address under which this coordinator is forging
@@ -91,6 +100,10 @@ type Coordinator struct {
 	// SyncRetryInterval is the waiting interval between calls to the main
 	// handler of a synced block after an error
 	SyncRetryInterval Duration `validate:"required"`
+	// PurgeByExtDelInterval is the waiting interval between calls
+	// to the PurgeByExternalDelete function of the l2db which deletes
+	// pending txs externally marked by the column `external_delete`
+	PurgeByExtDelInterval Duration `validate:"required"`
 	// L2DB is the DB that holds the pool of L2Txs
 	L2DB struct {
 		// SafetyPeriod is the number of batches after which
@@ -101,6 +114,10 @@
 		// reached, inserts to the pool will be denied until some of
 		// the pending txs are forged.
 		MaxTxs uint32 `validate:"required"`
+		// MinFeeUSD is the minimum fee in USD that a tx must pay in
+		// order to be accepted into the pool. Txs with lower than
+		// minimum fee will be rejected at the API level.
+		MinFeeUSD float64
 		// TTL is the Time To Live for L2Txs in the pool. Once MaxTxs
 		// L2Txs is reached, L2Txs older than TTL will be deleted.
 		TTL Duration `validate:"required"`
@@ -172,6 +189,9 @@
 		// Password used to decrypt the keys in the keystore
 		Password string `validate:"required"`
 	} `validate:"required"`
+	// ForgeBatchGasCost contains the cost of each action in the
+	// ForgeBatch transaction.
+	ForgeBatchGasCost ForgeBatchGasCost `validate:"required"`
 } `validate:"required"`
 API struct {
 	// Coordinator enables the coordinator API endpoints
@@ -207,17 +227,30 @@ type Node struct {
 	// Keep is the number of checkpoints to keep
 	Keep int `validate:"required"`
 } `validate:"required"`
+// It's possible to use diferentiated SQL connections for read/write.
+// If the read configuration is not provided, the write one it's going to be used
+// for both reads and writes
 PostgreSQL struct {
-	// Port of the PostgreSQL server
-	Port int `validate:"required"`
-	// Host of the PostgreSQL server
-	Host string `validate:"required"`
-	// User of the PostgreSQL server
-	User string `validate:"required"`
-	// Password of the PostgreSQL server
-	Password string `validate:"required"`
-	// Name of the PostgreSQL server database
-	Name string `validate:"required"`
+	// Port of the PostgreSQL write server
+	PortWrite int `validate:"required"`
+	// Host of the PostgreSQL write server
+	HostWrite string `validate:"required"`
+	// User of the PostgreSQL write server
+	UserWrite string `validate:"required"`
+	// Password of the PostgreSQL write server
+	PasswordWrite string `validate:"required"`
+	// Name of the PostgreSQL write server database
+	NameWrite string `validate:"required"`
+	// Port of the PostgreSQL read server
+	PortRead int
+	// Host of the PostgreSQL read server
+	HostRead string
+	// User of the PostgreSQL read server
+	UserRead string
+	// Password of the PostgreSQL read server
+	PasswordRead string
+	// Name of the PostgreSQL read server database
+	NameRead string
 } `validate:"required"`
 Web3 struct {
 	// URL is the URL of the web3 ethereum-node RPC server

View File

@@ -11,6 +11,7 @@ import (
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/hermeznetwork/hermez-node/batchbuilder"
 	"github.com/hermeznetwork/hermez-node/common"
+	"github.com/hermeznetwork/hermez-node/config"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
 	"github.com/hermeznetwork/hermez-node/eth"
@@ -85,6 +86,10 @@ type Config struct {
 	// SyncRetryInterval is the waiting interval between calls to the main
 	// handler of a synced block after an error
 	SyncRetryInterval time.Duration
+	// PurgeByExtDelInterval is the waiting interval between calls
+	// to the PurgeByExternalDelete function of the l2db which deletes
+	// pending txs externally marked by the column `external_delete`
+	PurgeByExtDelInterval time.Duration
 	// EthClientAttemptsDelay is delay between attempts do do an eth client
 	// RPC call
 	EthClientAttemptsDelay time.Duration
@@ -112,6 +117,9 @@
 	// VerifierIdx is the index of the verifier contract registered in the
 	// smart contract
 	VerifierIdx uint8
+	// ForgeBatchGasCost contains the cost of each action in the
+	// ForgeBatch transaction.
+	ForgeBatchGasCost config.ForgeBatchGasCost
 	TxProcessorConfig txprocessor.Config
 }
@@ -153,6 +161,15 @@ type Coordinator struct {
 	wg     sync.WaitGroup
 	cancel context.CancelFunc
+	// mutexL2DBUpdateDelete protects updates to the L2DB so that
+	// these two processes always happen exclusively:
+	// - Pipeline taking pending txs, running through the TxProcessor and
+	//   marking selected txs as forging
+	// - Coordinator deleting pending txs that have been marked with
+	//   `external_delete`.
+	// Without this mutex, the coordinator could delete a pending txs that
+	// has just been selected by the TxProcessor in the pipeline.
+	mutexL2DBUpdateDelete sync.Mutex
 	pipeline *Pipeline
 	lastNonFailedBatchNum common.BatchNum
@@ -248,7 +265,8 @@ func (c *Coordinator) BatchBuilder() *batchbuilder.BatchBuilder {
 func (c *Coordinator) newPipeline(ctx context.Context) (*Pipeline, error) {
 	c.pipelineNum++
 	return NewPipeline(ctx, c.cfg, c.pipelineNum, c.historyDB, c.l2DB, c.txSelector,
-		c.batchBuilder, c.purger, c, c.txManager, c.provers, &c.consts)
+		c.batchBuilder, &c.mutexL2DBUpdateDelete, c.purger, c, c.txManager,
+		c.provers, &c.consts)
 }
 // MsgSyncBlock indicates an update to the Synchronizer stats
@@ -369,11 +387,23 @@ func (c *Coordinator) syncStats(ctx context.Context, stats *synchronizer.Stats)
 			fromBatch.ForgerAddr = c.cfg.ForgerAddress
 			fromBatch.StateRoot = big.NewInt(0)
 		}
+		// Before starting the pipeline make sure we reset any
+		// l2tx from the pool that was forged in a batch that
+		// didn't end up being mined. We are already doing
+		// this in handleStopPipeline, but we do it again as a
+		// failsafe in case the last synced batchnum is
+		// different than in the previous call to l2DB.Reorg,
+		// or in case the node was restarted when there was a
+		// started batch that included l2txs but was not mined.
+		if err := c.l2DB.Reorg(fromBatch.BatchNum); err != nil {
+			return tracerr.Wrap(err)
+		}
 		var err error
 		if c.pipeline, err = c.newPipeline(ctx); err != nil {
 			return tracerr.Wrap(err)
 		}
 		c.pipelineFromBatch = fromBatch
+		// Start the pipeline
 		if err := c.pipeline.Start(fromBatch.BatchNum, stats, &c.vars); err != nil {
 			c.pipeline = nil
 			return tracerr.Wrap(err)
@@ -494,7 +524,7 @@ func (c *Coordinator) Start() {
 	c.wg.Add(1)
 	go func() {
-		waitCh := time.After(longWaitDuration)
+		timer := time.NewTimer(longWaitDuration)
 		for {
 			select {
 			case <-c.ctx.Done():
@@ -506,24 +536,45 @@
 					continue
 				} else if err != nil {
 					log.Errorw("Coordinator.handleMsg", "err", err)
-					waitCh = time.After(c.cfg.SyncRetryInterval)
+					if !timer.Stop() {
+						<-timer.C
+					}
+					timer.Reset(c.cfg.SyncRetryInterval)
 					continue
 				}
-				waitCh = time.After(longWaitDuration)
-			case <-waitCh:
+			case <-timer.C:
+				timer.Reset(longWaitDuration)
 				if !c.stats.Synced() {
-					waitCh = time.After(longWaitDuration)
 					continue
 				}
 				if err := c.syncStats(c.ctx, &c.stats); c.ctx.Err() != nil {
-					waitCh = time.After(longWaitDuration)
 					continue
 				} else if err != nil {
 					log.Errorw("Coordinator.syncStats", "err", err)
-					waitCh = time.After(c.cfg.SyncRetryInterval)
+					if !timer.Stop() {
+						<-timer.C
+					}
+					timer.Reset(c.cfg.SyncRetryInterval)
 					continue
 				}
-				waitCh = time.After(longWaitDuration)
+			}
+		}
+	}()
+	c.wg.Add(1)
+	go func() {
+		for {
+			select {
+			case <-c.ctx.Done():
+				log.Info("Coordinator L2DB.PurgeByExternalDelete loop done")
+				c.wg.Done()
+				return
+			case <-time.After(c.cfg.PurgeByExtDelInterval):
+				c.mutexL2DBUpdateDelete.Lock()
+				if err := c.l2DB.PurgeByExternalDelete(); err != nil {
+					log.Errorw("L2DB.PurgeByExternalDelete", "err", err)
+				}
+				c.mutexL2DBUpdateDelete.Unlock()
 			}
 		}
 	}()

View File

@@ -105,8 +105,8 @@ func newTestModules(t *testing.T) modules {
 	db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
 	test.WipeDB(db)
-	l2DB := l2db.NewL2DB(db, 10, 100, 24*time.Hour, nil)
-	historyDB := historydb.NewHistoryDB(db, nil)
+	l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
+	historyDB := historydb.NewHistoryDB(db, db, nil)
 	txSelDBPath, err = ioutil.TempDir("", "tmpTxSelDB")
 	require.NoError(t, err)

View File

@@ -53,6 +53,7 @@ type Pipeline struct {
 	l2DB         *l2db.L2DB
 	txSelector   *txselector.TxSelector
 	batchBuilder *batchbuilder.BatchBuilder
+	mutexL2DBUpdateDelete *sync.Mutex
 	purger       *Purger
 	stats synchronizer.Stats
@@ -84,6 +85,7 @@ func NewPipeline(ctx context.Context,
 	l2DB *l2db.L2DB,
 	txSelector *txselector.TxSelector,
 	batchBuilder *batchbuilder.BatchBuilder,
+	mutexL2DBUpdateDelete *sync.Mutex,
 	purger *Purger,
 	coord *Coordinator,
 	txManager *TxManager,
@@ -112,6 +114,7 @@ func NewPipeline(ctx context.Context,
 		batchBuilder: batchBuilder,
 		provers:      provers,
 		proversPool:  proversPool,
+		mutexL2DBUpdateDelete: mutexL2DBUpdateDelete,
 		purger:       purger,
 		coord:        coord,
 		txManager:    txManager,
@@ -195,11 +198,34 @@ func (p *Pipeline) syncSCVars(vars synchronizer.SCVariablesPtr) {
 	updateSCVars(&p.vars, vars)
 }
-// handleForgeBatch calls p.forgeBatch to forge the batch and get the zkInputs,
-// and then waits for an available proof server and sends the zkInputs to it so
-// that the proof computation begins.
-func (p *Pipeline) handleForgeBatch(ctx context.Context, batchNum common.BatchNum) (*BatchInfo, error) {
-	batchInfo, err := p.forgeBatch(batchNum)
+// handleForgeBatch waits for an available proof server, calls p.forgeBatch to
+// forge the batch and get the zkInputs, and then sends the zkInputs to the
+// selected proof server so that the proof computation begins.
+func (p *Pipeline) handleForgeBatch(ctx context.Context,
+	batchNum common.BatchNum) (batchInfo *BatchInfo, err error) {
+	// 1. Wait for an available serverProof (blocking call)
+	serverProof, err := p.proversPool.Get(ctx)
+	if ctx.Err() != nil {
+		return nil, ctx.Err()
+	} else if err != nil {
+		log.Errorw("proversPool.Get", "err", err)
+		return nil, err
+	}
+	defer func() {
+		// If we encounter any error (notice that this function returns
+		// errors to notify that a batch is not forged not only because
+		// of unexpected errors but also due to benign causes), add the
+		// serverProof back to the pool
+		if err != nil {
+			p.proversPool.Add(ctx, serverProof)
+		}
+	}()
+	// 2. Forge the batch internally (make a selection of txs and prepare
+	// all the smart contract arguments)
+	p.mutexL2DBUpdateDelete.Lock()
+	batchInfo, err = p.forgeBatch(batchNum)
+	p.mutexL2DBUpdateDelete.Unlock()
 	if ctx.Err() != nil {
 		return nil, ctx.Err()
 	} else if err != nil {
@@ -215,21 +241,13 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context, batchNum common.BatchNu
 		}
 		return nil, err
 	}
-	// 6. Wait for an available server proof (blocking call)
-	serverProof, err := p.proversPool.Get(ctx)
-	if ctx.Err() != nil {
-		return nil, ctx.Err()
-	} else if err != nil {
-		log.Errorw("proversPool.Get", "err", err)
-		return nil, err
-	}
+	// 3. Send the ZKInputs to the proof server
 	batchInfo.ServerProof = serverProof
 	if err := p.sendServerProof(ctx, batchInfo); ctx.Err() != nil {
 		return nil, ctx.Err()
 	} else if err != nil {
 		log.Errorw("sendServerProof", "err", err)
-		batchInfo.ServerProof = nil
-		p.proversPool.Add(ctx, serverProof)
 		return nil, err
 	}
 	return batchInfo, nil
@@ -253,7 +271,7 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
 	p.wg.Add(1)
 	go func() {
-		waitCh := time.After(zeroDuration)
+		timer := time.NewTimer(zeroDuration)
 		for {
 			select {
 			case <-p.ctx.Done():
@@ -263,23 +281,21 @@
 			case statsVars := <-p.statsVarsCh:
 				p.stats = statsVars.Stats
 				p.syncSCVars(statsVars.Vars)
-			case <-waitCh:
+			case <-timer.C:
+				timer.Reset(p.cfg.ForgeRetryInterval)
 				// Once errAtBatchNum != 0, we stop forging
 				// batches because there's been an error and we
 				// wait for the pipeline to be stopped.
 				if p.getErrAtBatchNum() != 0 {
-					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				}
 				batchNum = p.state.batchNum + 1
 				batchInfo, err := p.handleForgeBatch(p.ctx, batchNum)
 				if p.ctx.Err() != nil {
-					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				} else if tracerr.Unwrap(err) == errLastL1BatchNotSynced ||
 					tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
 					tracerr.Unwrap(err) == errForgeBeforeDelay {
-					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				} else if err != nil {
 					p.setErrAtBatchNum(batchNum)
@@ -288,7 +304,6 @@
 							"Pipeline.handleForgBatch: %v", err),
 						FailedBatchNum: batchNum,
 					})
-					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				}
 				p.lastForgeTime = time.Now()
@@ -298,7 +313,10 @@
 				case batchChSentServerProof <- batchInfo:
 				case <-p.ctx.Done():
 				}
-				waitCh = time.After(zeroDuration)
+				if !timer.Stop() {
+					<-timer.C
+				}
+				timer.Reset(zeroDuration)
 			}
 		}
 	}()
@@ -333,7 +351,6 @@
 			}
 			// We are done with this serverProof, add it back to the pool
 			p.proversPool.Add(p.ctx, batchInfo.ServerProof)
-			// batchInfo.ServerProof = nil
 			p.txManager.AddBatch(p.ctx, batchInfo)
 		}
 	}

View File

@@ -21,7 +21,7 @@ func newL2DB(t *testing.T) *l2db.L2DB {
 	db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
 	test.WipeDB(db)
-	return l2db.NewL2DB(db, 10, 100, 24*time.Hour, nil)
+	return l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
 }
 func newStateDB(t *testing.T) *statedb.LocalStateDB {

View File

@@ -2,9 +2,9 @@ package coordinator
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"math/big" "math/big"
"strings"
"time" "time"
"github.com/ethereum/go-ethereum" "github.com/ethereum/go-ethereum"
@@ -123,7 +123,7 @@ func (t *TxManager) syncSCVars(vars synchronizer.SCVariablesPtr) {
} }
// NewAuth generates a new auth object for an ethereum transaction // NewAuth generates a new auth object for an ethereum transaction
func (t *TxManager) NewAuth(ctx context.Context) (*bind.TransactOpts, error) { func (t *TxManager) NewAuth(ctx context.Context, batchInfo *BatchInfo) (*bind.TransactOpts, error) {
gasPrice, err := t.ethClient.EthSuggestGasPrice(ctx) gasPrice, err := t.ethClient.EthSuggestGasPrice(ctx)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
@@ -143,15 +143,12 @@ func (t *TxManager) NewAuth(ctx context.Context) (*bind.TransactOpts, error) {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
auth.Value = big.NewInt(0) // in wei auth.Value = big.NewInt(0) // in wei
// TODO: Calculate GasLimit based on the contents of the ForgeBatchArgs
// This requires a function that estimates the gas usage of the gasLimit := t.cfg.ForgeBatchGasCost.Fixed +
// forgeBatch call based on the contents of the ForgeBatch args: uint64(len(batchInfo.L1UserTxsExtra))*t.cfg.ForgeBatchGasCost.L1UserTx +
// - length of l2txs uint64(len(batchInfo.L1CoordTxs))*t.cfg.ForgeBatchGasCost.L1CoordTx +
// - length of l1Usertxs uint64(len(batchInfo.L2Txs))*t.cfg.ForgeBatchGasCost.L2Tx
// - length of l1CoordTxs with authorization signature auth.GasLimit = gasLimit
// - length of l1CoordTxs without authoriation signature
// - etc.
auth.GasLimit = 1000000
auth.GasPrice = gasPrice auth.GasPrice = gasPrice
auth.Nonce = nil auth.Nonce = nil
@@ -191,7 +188,7 @@ func addPerc(v *big.Int, p int64) *big.Int {
func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchInfo, resend bool) error { func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchInfo, resend bool) error {
var ethTx *types.Transaction var ethTx *types.Transaction
var err error var err error
auth, err := t.NewAuth(ctx) auth, err := t.NewAuth(ctx, batchInfo)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -206,32 +203,35 @@ func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchIn
 }
 // RollupForgeBatch() calls ethclient.SendTransaction()
 ethTx, err = t.ethClient.RollupForgeBatch(batchInfo.ForgeBatchArgs, auth)
-if errors.Is(err, core.ErrNonceTooLow) {
+// We check the errors via strings because we match the
+// definition of the error from geth, with the string returned
+// via RPC obtained by the client.
+if err == nil {
+    break
+} else if strings.Contains(err.Error(), core.ErrNonceTooLow.Error()) {
 log.Warnw("TxManager ethClient.RollupForgeBatch incrementing nonce",
 "err", err, "nonce", auth.Nonce, "batchNum", batchInfo.BatchNum)
 auth.Nonce.Add(auth.Nonce, big.NewInt(1))
 attempt--
-} else if errors.Is(err, core.ErrNonceTooHigh) {
+} else if strings.Contains(err.Error(), core.ErrNonceTooHigh.Error()) {
 log.Warnw("TxManager ethClient.RollupForgeBatch decrementing nonce",
 "err", err, "nonce", auth.Nonce, "batchNum", batchInfo.BatchNum)
 auth.Nonce.Sub(auth.Nonce, big.NewInt(1))
 attempt--
-} else if errors.Is(err, core.ErrUnderpriced) {
+} else if strings.Contains(err.Error(), core.ErrReplaceUnderpriced.Error()) {
 log.Warnw("TxManager ethClient.RollupForgeBatch incrementing gasPrice",
 "err", err, "gasPrice", auth.GasPrice, "batchNum", batchInfo.BatchNum)
 auth.GasPrice = addPerc(auth.GasPrice, 10)
 attempt--
-} else if errors.Is(err, core.ErrReplaceUnderpriced) {
+} else if strings.Contains(err.Error(), core.ErrUnderpriced.Error()) {
 log.Warnw("TxManager ethClient.RollupForgeBatch incrementing gasPrice",
 "err", err, "gasPrice", auth.GasPrice, "batchNum", batchInfo.BatchNum)
 auth.GasPrice = addPerc(auth.GasPrice, 10)
 attempt--
-} else if err != nil {
+} else {
 log.Errorw("TxManager ethClient.RollupForgeBatch",
 "attempt", attempt, "err", err, "block", t.stats.Eth.LastBlock.Num+1,
 "batchNum", batchInfo.BatchNum)
-} else {
-    break
 }
 select {
 case <-ctx.Done():
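
Note: the switch from errors.Is to strings.Contains is needed because the geth error crosses a JSON-RPC boundary: the client rebuilds a fresh error from the response message, so sentinel comparison can never match, while the message text still can. A small hedged sketch of the pattern; isNonceTooLow is an illustrative helper, not a function in the repository.

package main

import (
	"errors"
	"fmt"
	"strings"

	"github.com/ethereum/go-ethereum/core"
)

// isNonceTooLow reports whether err carries the text of geth's
// core.ErrNonceTooLow, which is all that survives an RPC round trip.
func isNonceTooLow(err error) bool {
	return err != nil && strings.Contains(err.Error(), core.ErrNonceTooLow.Error())
}

func main() {
	// Simulate an error rebuilt from an RPC response body.
	rpcErr := errors.New("nonce too low")
	fmt.Println(errors.Is(rpcErr, core.ErrNonceTooLow)) // false: different error value
	fmt.Println(isNonceTooLow(rpcErr))                  // true: same message text
}
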
@@ -416,8 +416,6 @@ func (q *Queue) Push(batchInfo *BatchInfo) {
 // Run the TxManager
 func (t *TxManager) Run(ctx context.Context) {
-waitCh := time.After(longWaitDuration)
 var statsVars statsVars
 select {
 case statsVars = <-t.statsVarsCh:
@@ -428,6 +426,7 @@ func (t *TxManager) Run(ctx context.Context) {
log.Infow("TxManager: received initial statsVars", log.Infow("TxManager: received initial statsVars",
"block", t.stats.Eth.LastBlock.Num, "batch", t.stats.Eth.LastBatchNum) "block", t.stats.Eth.LastBlock.Num, "batch", t.stats.Eth.LastBatchNum)
timer := time.NewTimer(longWaitDuration)
for { for {
select { select {
case <-ctx.Done(): case <-ctx.Done():
@@ -471,13 +470,17 @@ func (t *TxManager) Run(ctx context.Context) {
 continue
 }
 t.queue.Push(batchInfo)
-waitCh = time.After(t.cfg.TxManagerCheckInterval)
-case <-waitCh:
+if !timer.Stop() {
+    <-timer.C
+}
+timer.Reset(t.cfg.TxManagerCheckInterval)
+case <-timer.C:
 queuePosition, batchInfo := t.queue.Next()
 if batchInfo == nil {
-waitCh = time.After(longWaitDuration)
+timer.Reset(longWaitDuration)
 continue
 }
+timer.Reset(t.cfg.TxManagerCheckInterval)
 if err := t.checkEthTransactionReceipt(ctx, batchInfo); ctx.Err() != nil {
 continue
 } else if err != nil { //nolint:staticcheck
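
Note: replacing the per-iteration time.After with one long-lived time.NewTimer means no timer is leaked on each loop turn, but a timer that was not received in the current iteration has to be stopped and drained before Reset. A small self-contained sketch of the idiom; the work channel and durations are purely illustrative.

package main

import (
	"fmt"
	"time"
)

// One time.NewTimer shared by the loop, re-armed with Stop+drain+Reset.
func main() {
	work := make(chan string)
	go func() {
		for i := 0; i < 3; i++ {
			time.Sleep(200 * time.Millisecond)
			work <- fmt.Sprintf("item %d", i)
		}
		close(work)
	}()

	timer := time.NewTimer(time.Second)
	for {
		select {
		case item, ok := <-work:
			if !ok {
				return
			}
			fmt.Println("got", item)
			// The timer value was not received in this branch, so the timer
			// must be stopped and drained before it can be safely re-armed.
			if !timer.Stop() {
				<-timer.C
			}
			timer.Reset(time.Second)
		case <-timer.C:
			fmt.Println("timeout")
			// The timer value was just consumed; Reset alone is enough here.
			timer.Reset(time.Second)
		}
	}
}
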


@@ -34,7 +34,7 @@ func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
 defer hdb.apiConnCon.Release()
 batch := &BatchAPI{}
 return batch, tracerr.Wrap(meddler.QueryRow(
-hdb.db, batch,
+hdb.dbRead, batch,
 `SELECT batch.item_id, batch.batch_num, batch.eth_block_num,
 batch.forger_addr, batch.fees_collected, batch.total_fees_usd, batch.state_root,
 batch.num_accounts, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num,
@@ -133,10 +133,10 @@ func (hdb *HistoryDB) GetBatchesAPI(
queryStr += " DESC " queryStr += " DESC "
} }
queryStr += fmt.Sprintf("LIMIT %d;", *limit) queryStr += fmt.Sprintf("LIMIT %d;", *limit)
query = hdb.db.Rebind(queryStr) query = hdb.dbRead.Rebind(queryStr)
// log.Debug(query) // log.Debug(query)
batchPtrs := []*BatchAPI{} batchPtrs := []*BatchAPI{}
if err := meddler.QueryAll(hdb.db, &batchPtrs, query, args...); err != nil { if err := meddler.QueryAll(hdb.dbRead, &batchPtrs, query, args...); err != nil {
return nil, 0, tracerr.Wrap(err) return nil, 0, tracerr.Wrap(err)
} }
batches := db.SlicePtrsToSlice(batchPtrs).([]BatchAPI) batches := db.SlicePtrsToSlice(batchPtrs).([]BatchAPI)
@@ -156,7 +156,7 @@ func (hdb *HistoryDB) GetBestBidAPI(slotNum *int64) (BidAPI, error) {
 }
 defer hdb.apiConnCon.Release()
 err = meddler.QueryRow(
-hdb.db, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
+hdb.dbRead, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
 FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
 INNER JOIN (
 SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
@@ -212,9 +212,9 @@ func (hdb *HistoryDB) GetBestBidsAPI(
 if limit != nil {
 queryStr += fmt.Sprintf("LIMIT %d;", *limit)
 }
-query = hdb.db.Rebind(queryStr)
+query = hdb.dbRead.Rebind(queryStr)
 bidPtrs := []*BidAPI{}
-if err := meddler.QueryAll(hdb.db, &bidPtrs, query, args...); err != nil {
+if err := meddler.QueryAll(hdb.dbRead, &bidPtrs, query, args...); err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
 // log.Debug(query)
@@ -296,9 +296,9 @@ func (hdb *HistoryDB) GetBidsAPI(
 if err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
-query = hdb.db.Rebind(query)
+query = hdb.dbRead.Rebind(query)
 bids := []*BidAPI{}
-if err := meddler.QueryAll(hdb.db, &bids, query, argsQ...); err != nil {
+if err := meddler.QueryAll(hdb.dbRead, &bids, query, argsQ...); err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
 if len(bids) == 0 {
@@ -384,9 +384,9 @@ func (hdb *HistoryDB) GetTokensAPI(
 if err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
-query = hdb.db.Rebind(query)
+query = hdb.dbRead.Rebind(query)
 tokens := []*TokenWithUSD{}
-if err := meddler.QueryAll(hdb.db, &tokens, query, argsQ...); err != nil {
+if err := meddler.QueryAll(hdb.dbRead, &tokens, query, argsQ...); err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
 if len(tokens) == 0 {
@@ -408,7 +408,7 @@ func (hdb *HistoryDB) GetTxAPI(txID common.TxID) (*TxAPI, error) {
 defer hdb.apiConnCon.Release()
 tx := &TxAPI{}
 err = meddler.QueryRow(
-hdb.db, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
+hdb.dbRead, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
 hez_idx(tx.effective_from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
 hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
 tx.amount, tx.amount_success, tx.token_id, tx.amount_usd,
@@ -541,10 +541,10 @@ func (hdb *HistoryDB) GetTxsAPI(
queryStr += " DESC " queryStr += " DESC "
} }
queryStr += fmt.Sprintf("LIMIT %d;", *limit) queryStr += fmt.Sprintf("LIMIT %d;", *limit)
query = hdb.db.Rebind(queryStr) query = hdb.dbRead.Rebind(queryStr)
// log.Debug(query) // log.Debug(query)
txsPtrs := []*TxAPI{} txsPtrs := []*TxAPI{}
if err := meddler.QueryAll(hdb.db, &txsPtrs, query, args...); err != nil { if err := meddler.QueryAll(hdb.dbRead, &txsPtrs, query, args...); err != nil {
return nil, 0, tracerr.Wrap(err) return nil, 0, tracerr.Wrap(err)
} }
txs := db.SlicePtrsToSlice(txsPtrs).([]TxAPI) txs := db.SlicePtrsToSlice(txsPtrs).([]TxAPI)
@@ -564,7 +564,7 @@ func (hdb *HistoryDB) GetExitAPI(batchNum *uint, idx *common.Idx) (*ExitAPI, err
 defer hdb.apiConnCon.Release()
 exit := &ExitAPI{}
 err = meddler.QueryRow(
-hdb.db, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
+hdb.dbRead, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
 hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
 account.bjj, account.eth_addr,
 exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
@@ -685,10 +685,10 @@ func (hdb *HistoryDB) GetExitsAPI(
queryStr += " DESC " queryStr += " DESC "
} }
queryStr += fmt.Sprintf("LIMIT %d;", *limit) queryStr += fmt.Sprintf("LIMIT %d;", *limit)
query = hdb.db.Rebind(queryStr) query = hdb.dbRead.Rebind(queryStr)
// log.Debug(query) // log.Debug(query)
exits := []*ExitAPI{} exits := []*ExitAPI{}
if err := meddler.QueryAll(hdb.db, &exits, query, args...); err != nil { if err := meddler.QueryAll(hdb.dbRead, &exits, query, args...); err != nil {
return nil, 0, tracerr.Wrap(err) return nil, 0, tracerr.Wrap(err)
} }
if len(exits) == 0 { if len(exits) == 0 {
@@ -707,7 +707,7 @@ func (hdb *HistoryDB) GetBucketUpdatesAPI() ([]BucketUpdateAPI, error) {
 defer hdb.apiConnCon.Release()
 var bucketUpdates []*BucketUpdateAPI
 err = meddler.QueryAll(
-hdb.db, &bucketUpdates,
+hdb.dbRead, &bucketUpdates,
 `SELECT num_bucket, withdrawals FROM bucket_update
 WHERE item_id in(SELECT max(item_id) FROM bucket_update
 group by num_bucket)
@@ -772,10 +772,10 @@ func (hdb *HistoryDB) GetCoordinatorsAPI(
queryStr += " DESC " queryStr += " DESC "
} }
queryStr += fmt.Sprintf("LIMIT %d;", *limit) queryStr += fmt.Sprintf("LIMIT %d;", *limit)
query = hdb.db.Rebind(queryStr) query = hdb.dbRead.Rebind(queryStr)
coordinators := []*CoordinatorAPI{} coordinators := []*CoordinatorAPI{}
if err := meddler.QueryAll(hdb.db, &coordinators, query, args...); err != nil { if err := meddler.QueryAll(hdb.dbRead, &coordinators, query, args...); err != nil {
return nil, 0, tracerr.Wrap(err) return nil, 0, tracerr.Wrap(err)
} }
if len(coordinators) == 0 { if len(coordinators) == 0 {
@@ -795,7 +795,7 @@ func (hdb *HistoryDB) GetAuctionVarsAPI() (*common.AuctionVariables, error) {
 defer hdb.apiConnCon.Release()
 auctionVars := &common.AuctionVariables{}
 err = meddler.QueryRow(
-hdb.db, auctionVars, `SELECT * FROM auction_vars;`,
+hdb.dbRead, auctionVars, `SELECT * FROM auction_vars;`,
 )
 return auctionVars, tracerr.Wrap(err)
 }
@@ -816,7 +816,7 @@ func (hdb *HistoryDB) GetAuctionVarsUntilSetSlotNumAPI(slotNum int64, maxItems i
 ORDER BY default_slot_set_bid_slot_num DESC
 LIMIT $2;
 `
-err = meddler.QueryAll(hdb.db, &auctionVars, query, slotNum, maxItems)
+err = meddler.QueryAll(hdb.dbRead, &auctionVars, query, slotNum, maxItems)
 if err != nil {
 return nil, tracerr.Wrap(err)
 }
@@ -832,11 +832,19 @@ func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
 }
 defer hdb.apiConnCon.Release()
 account := &AccountAPI{}
-err = meddler.QueryRow(hdb.db, account, `SELECT account.item_id, hez_idx(account.idx,
-token.symbol) as idx, account.batch_num, account.bjj, account.eth_addr,
-token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
-token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update
-FROM account INNER JOIN token ON account.token_id = token.token_id WHERE idx = $1;`, idx)
+err = meddler.QueryRow(hdb.dbRead, account, `SELECT account.item_id, hez_idx(account.idx,
+token.symbol) as idx, account.batch_num, account.bjj, account.eth_addr,
+token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
+token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd,
+token.usd_update, account_update.nonce, account_update.balance
+FROM account inner JOIN (
+    SELECT idx, nonce, balance
+    FROM account_update
+    WHERE idx = $1
+    ORDER BY item_id DESC LIMIT 1
+) AS account_update ON account_update.idx = account.idx
+INNER JOIN token ON account.token_id = token.token_id
+WHERE account.idx = $1;`, idx)
 if err != nil {
 return nil, tracerr.Wrap(err)
@@ -864,8 +872,13 @@ func (hdb *HistoryDB) GetAccountsAPI(
 queryStr := `SELECT account.item_id, hez_idx(account.idx, token.symbol) as idx, account.batch_num,
 account.bjj, account.eth_addr, token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
 token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update,
-COUNT(*) OVER() AS total_items
-FROM account INNER JOIN token ON account.token_id = token.token_id `
+account_update.nonce, account_update.balance, COUNT(*) OVER() AS total_items
+FROM account inner JOIN (
+    SELECT DISTINCT idx,
+    first_value(nonce) over(partition by idx ORDER BY item_id DESC) as nonce,
+    first_value(balance) over(partition by idx ORDER BY item_id DESC) as balance
+    FROM account_update
+) AS account_update ON account_update.idx = account.idx INNER JOIN token ON account.token_id = token.token_id `
 // Apply filters
 nextIsAnd := false
 // ethAddr filter
@@ -914,10 +927,10 @@ func (hdb *HistoryDB) GetAccountsAPI(
 if err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
-query = hdb.db.Rebind(query)
+query = hdb.dbRead.Rebind(query)
 accounts := []*AccountAPI{}
-if err := meddler.QueryAll(hdb.db, &accounts, query, argsQ...); err != nil {
+if err := meddler.QueryAll(hdb.dbRead, &accounts, query, argsQ...); err != nil {
 return nil, 0, tracerr.Wrap(err)
 }
 if len(accounts) == 0 {
@@ -939,11 +952,19 @@ func (hdb *HistoryDB) GetMetricsAPI(lastBatchNum common.BatchNum) (*Metrics, err
 metricsTotals := &MetricsTotals{}
 metrics := &Metrics{}
 err = meddler.QueryRow(
-hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
-COALESCE (MIN(tx.batch_num), 0) as batch_num, COALESCE (MIN(block.timestamp),
-NOW()) AS min_timestamp, COALESCE (MAX(block.timestamp), NOW()) AS max_timestamp
-FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
-WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`)
+hdb.dbRead, metricsTotals, `SELECT
+COALESCE (MIN(batch.batch_num), 0) as batch_num,
+COALESCE (MIN(block.timestamp), NOW()) AS min_timestamp,
+COALESCE (MAX(block.timestamp), NOW()) AS max_timestamp
+FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
+WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS' and batch.batch_num <= $1;`, lastBatchNum)
+if err != nil {
+    return nil, tracerr.Wrap(err)
+}
+err = meddler.QueryRow(
+hdb.dbRead, metricsTotals, `SELECT COUNT(*) as total_txs
+FROM tx WHERE tx.batch_num between $1 AND $2;`, metricsTotals.FirstBatchNum, lastBatchNum)
 if err != nil {
 return nil, tracerr.Wrap(err)
 }
@@ -964,12 +985,13 @@ func (hdb *HistoryDB) GetMetricsAPI(lastBatchNum common.BatchNum) (*Metrics, err
 }
 err = meddler.QueryRow(
-hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
+hdb.dbRead, metricsTotals, `SELECT COUNT(*) AS total_batches,
 COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
-WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
+WHERE batch_num between $1 and $2;`, metricsTotals.FirstBatchNum, lastBatchNum)
 if err != nil {
 return nil, tracerr.Wrap(err)
 }
 if metricsTotals.TotalBatches > 0 {
 metrics.BatchFrequency = seconds / float64(metricsTotals.TotalBatches)
 } else {
@@ -981,7 +1003,7 @@ func (hdb *HistoryDB) GetMetricsAPI(lastBatchNum common.BatchNum) (*Metrics, err
 metrics.AvgTransactionFee = 0
 }
 err = meddler.QueryRow(
-hdb.db, metrics,
+hdb.dbRead, metrics,
 `SELECT COUNT(*) AS total_bjjs, COUNT(DISTINCT(bjj)) AS total_accounts FROM account;`)
 if err != nil {
 return nil, tracerr.Wrap(err)
@@ -1000,7 +1022,7 @@ func (hdb *HistoryDB) GetAvgTxFeeAPI() (float64, error) {
 defer hdb.apiConnCon.Release()
 metricsTotals := &MetricsTotals{}
 err = meddler.QueryRow(
-hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
+hdb.dbRead, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
 COALESCE (MIN(tx.batch_num), 0) as batch_num
 FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
 WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`)
@@ -1008,7 +1030,7 @@ func (hdb *HistoryDB) GetAvgTxFeeAPI() (float64, error) {
 return 0, tracerr.Wrap(err)
 }
 err = meddler.QueryRow(
-hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
+hdb.dbRead, metricsTotals, `SELECT COUNT(*) AS total_batches,
 COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
 WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
 if err != nil {
@@ -1024,3 +1046,18 @@ func (hdb *HistoryDB) GetAvgTxFeeAPI() (float64, error) {
 return avgTransactionFee, nil
 }
+// GetCommonAccountAPI returns the account associated to an account idx
+func (hdb *HistoryDB) GetCommonAccountAPI(idx common.Idx) (*common.Account, error) {
+    cancel, err := hdb.apiConnCon.Acquire()
+    defer cancel()
+    if err != nil {
+        return nil, tracerr.Wrap(err)
+    }
+    defer hdb.apiConnCon.Release()
+    account := &common.Account{}
+    err = meddler.QueryRow(
+        hdb.dbRead, account, `SELECT * FROM account WHERE idx = $1;`, idx,
+    )
+    return account, tracerr.Wrap(err)
+}
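
Note: the new GetCommonAccountAPI follows the same Acquire/Release discipline as the other API getters, so API traffic cannot starve the rest of the node of SQL connections. A rough, hypothetical sketch of what such a limiter could look like; this is not the actual db.APIConnectionController implementation, only an illustration of the acquire-with-timeout / release pattern that matches the call shape used above.

package db

import (
	"context"
	"time"
)

// connLimiter is a hypothetical stand-in for an API connection controller:
// a bounded semaphore with an acquire timeout.
type connLimiter struct {
	sem     chan struct{}
	timeout time.Duration
}

func newConnLimiter(maxConns int, timeout time.Duration) *connLimiter {
	return &connLimiter{sem: make(chan struct{}, maxConns), timeout: timeout}
}

// Acquire blocks until a slot is free or the timeout expires; it returns a
// cancel func so callers can write: cancel, err := l.Acquire(); defer cancel().
func (l *connLimiter) Acquire() (context.CancelFunc, error) {
	ctx, cancel := context.WithTimeout(context.Background(), l.timeout)
	select {
	case l.sem <- struct{}{}:
		return cancel, nil
	case <-ctx.Done():
		return cancel, ctx.Err()
	}
}

// Release frees the slot taken by a successful Acquire.
func (l *connLimiter) Release() { <-l.sem }
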


@@ -27,30 +27,35 @@ const (
 // HistoryDB persist the historic of the rollup
 type HistoryDB struct {
-db *sqlx.DB
+dbRead *sqlx.DB
+dbWrite *sqlx.DB
 apiConnCon *db.APIConnectionController
 }
 // NewHistoryDB initialize the DB
-func NewHistoryDB(db *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
-return &HistoryDB{db: db, apiConnCon: apiConnCon}
+func NewHistoryDB(dbRead, dbWrite *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
+return &HistoryDB{
+    dbRead: dbRead,
+    dbWrite: dbWrite,
+    apiConnCon: apiConnCon,
+}
 }
 // DB returns a pointer to the L2DB.db. This method should be used only for
 // internal testing purposes.
 func (hdb *HistoryDB) DB() *sqlx.DB {
-return hdb.db
+return hdb.dbWrite
 }
 // AddBlock insert a block into the DB
-func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }
+func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.dbWrite, block) }
 func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
 return tracerr.Wrap(meddler.Insert(d, "block", block))
 }
 // AddBlocks inserts blocks into the DB
 func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
-return tracerr.Wrap(hdb.addBlocks(hdb.db, blocks))
+return tracerr.Wrap(hdb.addBlocks(hdb.dbWrite, blocks))
 }
 func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
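
Note: splitting the single handle into dbRead and dbWrite lets the synchronizer's inserts and the read-only queries go through separately tuned connections (or even a read replica). A hedged sketch of wiring it up, reusing the InitSQLDB call shape shown in the test diff earlier; the import paths, pool sizes, and environment variable are illustrative assumptions, not configuration from the repository.

package main

import (
	"log"
	"os"

	dbUtils "github.com/hermeznetwork/hermez-node/db"   // assumed import path
	"github.com/hermeznetwork/hermez-node/db/historydb" // assumed import path
)

func main() {
	pass := os.Getenv("POSTGRES_PASS")
	// Two handles against the same database: one for writes, one for reads.
	dbWrite, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
	if err != nil {
		log.Fatal(err)
	}
	dbRead, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
	if err != nil {
		log.Fatal(err)
	}
	dbRead.SetMaxOpenConns(20) // illustrative pool sizes
	dbWrite.SetMaxOpenConns(5)
	hdb := historydb.NewHistoryDB(dbRead, dbWrite, nil) // nil apiConnCon, as in the tests
	_ = hdb
}
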
@@ -61,7 +66,7 @@ func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
 timestamp,
 hash
 ) VALUES %s;`,
-blocks[:],
+blocks,
 ))
 }
@@ -69,7 +74,7 @@ func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
 func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
 block := &common.Block{}
 err := meddler.QueryRow(
-hdb.db, block,
+hdb.dbRead, block,
 "SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
 )
 return block, tracerr.Wrap(err)
@@ -79,7 +84,7 @@ func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
 func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
 var blocks []*common.Block
 err := meddler.QueryAll(
-hdb.db, &blocks,
+hdb.dbRead, &blocks,
 "SELECT * FROM block ORDER BY eth_block_num;",
 )
 return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
@@ -89,7 +94,7 @@ func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
 func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
 var blocks []*common.Block
 err := meddler.QueryAll(
-hdb.db, &blocks,
+hdb.dbRead, &blocks,
 "SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
 from, to,
 )
@@ -100,13 +105,13 @@ func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
 func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
 block := &common.Block{}
 err := meddler.QueryRow(
-hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
+hdb.dbRead, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
 )
 return block, tracerr.Wrap(err)
 }
 // AddBatch insert a Batch into the DB
-func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }
+func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.dbWrite, batch) }
 func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
 // Calculate total collected fees in USD
 // Get IDs of collected tokens for fees
@@ -129,9 +134,9 @@ func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
 if err != nil {
 return tracerr.Wrap(err)
 }
-query = hdb.db.Rebind(query)
+query = hdb.dbWrite.Rebind(query)
 if err := meddler.QueryAll(
-hdb.db, &tokenPrices, query, args...,
+hdb.dbWrite, &tokenPrices, query, args...,
 ); err != nil {
 return tracerr.Wrap(err)
 }
@@ -153,7 +158,7 @@ func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
 // AddBatches insert Bids into the DB
 func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
-return tracerr.Wrap(hdb.addBatches(hdb.db, batches))
+return tracerr.Wrap(hdb.addBatches(hdb.dbWrite, batches))
 }
 func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
 for i := 0; i < len(batches); i++ {
@@ -168,7 +173,7 @@ func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
 func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error) {
 var batch common.Batch
 err := meddler.QueryRow(
-hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
+hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
 batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
 batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
 batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`,
@@ -181,7 +186,7 @@ func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error)
 func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
 var batches []*common.Batch
 err := meddler.QueryAll(
-hdb.db, &batches,
+hdb.dbRead, &batches,
 `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
 batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
 batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
@@ -194,7 +199,7 @@ func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
 func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
 var batches []*common.Batch
 err := meddler.QueryAll(
-hdb.db, &batches,
+hdb.dbRead, &batches,
 `SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
 state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
 FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
@@ -206,7 +211,7 @@ func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, erro
 // GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
 // batch within a slot
 func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
-row := hdb.db.QueryRow(
+row := hdb.dbRead.QueryRow(
 `SELECT eth_block_num FROM batch
 WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
 )
@@ -216,7 +221,7 @@ func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error)
 // GetLastBatchNum returns the BatchNum of the latest forged batch
 func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
-row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
+row := hdb.dbRead.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
 var batchNum common.BatchNum
 return batchNum, tracerr.Wrap(row.Scan(&batchNum))
 }
@@ -225,7 +230,7 @@ func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
 func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
 var batch common.Batch
 err := meddler.QueryRow(
-hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
+hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
 batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
 batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
 batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
@@ -235,7 +240,7 @@ func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
 // GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
 func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
-row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
+row := hdb.dbRead.QueryRow(`SELECT eth_block_num FROM batch
 WHERE forge_l1_txs_num IS NOT NULL
 ORDER BY batch_num DESC LIMIT 1;`)
 var blockNum int64
@@ -245,7 +250,7 @@ func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
 // GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
 // batches. If there's no batch in the DB (nil, nil) is returned.
 func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
-row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
+row := hdb.dbRead.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
 lastL1TxsNum := new(int64)
 return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
 }
@@ -256,15 +261,15 @@ func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
 func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
 var err error
 if lastValidBlock < 0 {
-_, err = hdb.db.Exec("DELETE FROM block;")
+_, err = hdb.dbWrite.Exec("DELETE FROM block;")
 } else {
-_, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
+_, err = hdb.dbWrite.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
 }
 return tracerr.Wrap(err)
 }
 // AddBids insert Bids into the DB
-func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) }
+func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.dbWrite, bids) }
 func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
 if len(bids) == 0 {
 return nil
@@ -273,7 +278,7 @@ func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
 return tracerr.Wrap(db.BulkInsert(
 d,
 "INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
-bids[:],
+bids,
 ))
 }
@@ -281,7 +286,7 @@ func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
 func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
 var bids []*common.Bid
 err := meddler.QueryAll(
-hdb.db, &bids,
+hdb.dbRead, &bids,
 `SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
 ORDER BY item_id;`,
 )
@@ -292,7 +297,7 @@ func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
 func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
 bidCoord := &common.BidCoordinator{}
 err := meddler.QueryRow(
-hdb.db, bidCoord,
+hdb.dbRead, bidCoord,
 `SELECT (
 SELECT default_slot_set_bid
 FROM auction_vars
@@ -315,7 +320,7 @@ func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinat
 // AddCoordinators insert Coordinators into the DB
 func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
-return tracerr.Wrap(hdb.addCoordinators(hdb.db, coordinators))
+return tracerr.Wrap(hdb.addCoordinators(hdb.dbWrite, coordinators))
 }
 func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
 if len(coordinators) == 0 {
@@ -324,13 +329,13 @@ func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordi
 return tracerr.Wrap(db.BulkInsert(
 d,
 "INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
-coordinators[:],
+coordinators,
 ))
 }
 // AddExitTree insert Exit tree into the DB
 func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
-return tracerr.Wrap(hdb.addExitTree(hdb.db, exitTree))
+return tracerr.Wrap(hdb.addExitTree(hdb.dbWrite, exitTree))
 }
 func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
 if len(exitTree) == 0 {
@@ -340,7 +345,7 @@ func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) erro
 d,
 "INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
 "instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
-exitTree[:],
+exitTree,
 ))
 }
@@ -418,11 +423,13 @@ func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
 // AddToken insert a token into the DB
 func (hdb *HistoryDB) AddToken(token *common.Token) error {
-return tracerr.Wrap(meddler.Insert(hdb.db, "token", token))
+return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "token", token))
 }
 // AddTokens insert tokens into the DB
-func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) }
+func (hdb *HistoryDB) AddTokens(tokens []common.Token) error {
+    return hdb.addTokens(hdb.dbWrite, tokens)
+}
 func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
 if len(tokens) == 0 {
 return nil
@@ -443,16 +450,17 @@ func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
 symbol,
 decimals
 ) VALUES %s;`,
-tokens[:],
+tokens,
 ))
 }
-// UpdateTokenValue updates the USD value of a token
+// UpdateTokenValue updates the USD value of a token. Value is the price in
+// USD of a normalized token (1 token = 10^decimals units)
 func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
 // Sanitize symbol
 tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
-_, err := hdb.db.Exec(
+_, err := hdb.dbWrite.Exec(
 "UPDATE token SET usd = $1 WHERE symbol = $2;",
 value, tokenSymbol,
 )
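
Note: the expanded comment pins down the unit of the stored usd value: it is the price of one whole token, i.e. of 10^decimals base units. Converting an on-chain base-unit amount to USD therefore divides by 10^decimals. A small illustrative helper, not part of the repository:

package main

import (
	"fmt"
	"math/big"
)

// amountUSD converts a base-unit amount to USD, given a price quoted per
// whole token (1 token = 10^decimals base units). Illustrative sketch only.
func amountUSD(amount *big.Int, decimals uint, usdPerToken float64) float64 {
	f := new(big.Float).SetInt(amount)
	exp := new(big.Int).Exp(big.NewInt(10), big.NewInt(int64(decimals)), nil)
	norm, _ := new(big.Float).Quo(f, new(big.Float).SetInt(exp)).Float64()
	return norm * usdPerToken
}

func main() {
	// 1.5 tokens with 18 decimals at 2000 USD per token -> 3000 USD.
	amount, _ := new(big.Int).SetString("1500000000000000000", 10)
	fmt.Println(amountUSD(amount, 18, 2000))
}
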
@@ -463,7 +471,7 @@ func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error
 func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
 token := &TokenWithUSD{}
 err := meddler.QueryRow(
-hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
+hdb.dbRead, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
 )
 return token, tracerr.Wrap(err)
 }
@@ -472,7 +480,7 @@ func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
 func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
 var tokens []*TokenWithUSD
 err := meddler.QueryAll(
-hdb.db, &tokens,
+hdb.dbRead, &tokens,
 "SELECT * FROM token ORDER BY token_id;",
 )
 return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
@@ -481,7 +489,7 @@ func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
 // GetTokenSymbols returns all the token symbols from the DB
 func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
 var tokenSymbols []string
-rows, err := hdb.db.Query("SELECT symbol FROM token;")
+rows, err := hdb.dbRead.Query("SELECT symbol FROM token;")
 if err != nil {
 return nil, tracerr.Wrap(err)
 }
@@ -499,7 +507,7 @@ func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
 // AddAccounts insert accounts into the DB
 func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
-return tracerr.Wrap(hdb.addAccounts(hdb.db, accounts))
+return tracerr.Wrap(hdb.addAccounts(hdb.dbWrite, accounts))
 }
 func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
 if len(accounts) == 0 {
@@ -514,7 +522,7 @@ func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error
 bjj,
 eth_addr
 ) VALUES %s;`,
-accounts[:],
+accounts,
 ))
 }
@@ -522,18 +530,49 @@ func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error
 func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
 var accs []*common.Account
 err := meddler.QueryAll(
-hdb.db, &accs,
+hdb.dbRead, &accs,
 "SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;",
 )
 return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
 }
+// AddAccountUpdates inserts accUpdates into the DB
+func (hdb *HistoryDB) AddAccountUpdates(accUpdates []common.AccountUpdate) error {
+    return tracerr.Wrap(hdb.addAccountUpdates(hdb.dbWrite, accUpdates))
+}
+func (hdb *HistoryDB) addAccountUpdates(d meddler.DB, accUpdates []common.AccountUpdate) error {
+    if len(accUpdates) == 0 {
+        return nil
+    }
+    return tracerr.Wrap(db.BulkInsert(
+        d,
+        `INSERT INTO account_update (
+            eth_block_num,
+            batch_num,
+            idx,
+            nonce,
+            balance
+        ) VALUES %s;`,
+        accUpdates,
+    ))
+}
+// GetAllAccountUpdates returns all the AccountUpdate from the DB
+func (hdb *HistoryDB) GetAllAccountUpdates() ([]common.AccountUpdate, error) {
+    var accUpdates []*common.AccountUpdate
+    err := meddler.QueryAll(
+        hdb.dbRead, &accUpdates,
+        "SELECT eth_block_num, batch_num, idx, nonce, balance FROM account_update ORDER BY idx;",
+    )
+    return db.SlicePtrsToSlice(accUpdates).([]common.AccountUpdate), tracerr.Wrap(err)
+}
 // AddL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
 // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
 // BatchNum should be null, and the value will be setted by a trigger when a batch forges the tx.
 // EffectiveAmount and EffectiveDepositAmount are seted with default values by the DB.
 func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
-return tracerr.Wrap(hdb.addL1Txs(hdb.db, l1txs))
+return tracerr.Wrap(hdb.addL1Txs(hdb.dbWrite, l1txs))
 }
 // addL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
@@ -587,7 +626,7 @@ func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
 // AddL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
 func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
-return tracerr.Wrap(hdb.addL2Txs(hdb.db, l2txs))
+return tracerr.Wrap(hdb.addL2Txs(hdb.dbWrite, l2txs))
 }
 // addL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
@@ -646,7 +685,7 @@ func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
 fee,
 nonce
 ) VALUES %s;`,
-txs[:],
+txs,
 ))
 }
@@ -654,7 +693,7 @@ func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
 func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
 var exits []*common.ExitInfo
 err := meddler.QueryAll(
-hdb.db, &exits,
+hdb.dbRead, &exits,
 `SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
 exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
 exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
@@ -666,7 +705,7 @@ func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
 func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
 var txs []*common.L1Tx
 err := meddler.QueryAll(
-hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
+hdb.dbRead, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
 `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
 tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
@@ -683,7 +722,7 @@ func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
 // Since the query specifies that only coordinator txs are returned, it's safe to assume
 // that returned txs will always have effective amounts
 err := meddler.QueryAll(
-hdb.db, &txs,
+hdb.dbRead, &txs,
 `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
 tx.amount, tx.amount AS effective_amount,
@@ -698,7 +737,7 @@ func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
 func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
 var txs []*common.L2Tx
 err := meddler.QueryAll(
-hdb.db, &txs,
+hdb.dbRead, &txs,
 `SELECT tx.id, tx.batch_num, tx.position,
 tx.from_idx, tx.to_idx, tx.amount, tx.token_id,
 tx.fee, tx.nonce, tx.type, tx.eth_block_num
@@ -711,7 +750,7 @@ func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
 func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
 var txs []*common.L1Tx
 err := meddler.QueryAll(
-hdb.db, &txs, // only L1 user txs can have batch_num set to null
+hdb.dbRead, &txs, // only L1 user txs can have batch_num set to null
 `SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
 tx.amount, NULL AS effective_amount,
@@ -728,7 +767,7 @@ func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx
 // GetLastTxsPosition for a given to_forge_l1_txs_num
 func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
-row := hdb.db.QueryRow(
+row := hdb.dbRead.QueryRow(
 "SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
 toForgeL1TxsNum,
 )
@@ -742,15 +781,15 @@ func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVaria
 var rollup common.RollupVariables
 var auction common.AuctionVariables
 var wDelayer common.WDelayerVariables
-if err := meddler.QueryRow(hdb.db, &rollup,
+if err := meddler.QueryRow(hdb.dbRead, &rollup,
 "SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
 return nil, nil, nil, tracerr.Wrap(err)
 }
-if err := meddler.QueryRow(hdb.db, &auction,
+if err := meddler.QueryRow(hdb.dbRead, &auction,
 "SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
 return nil, nil, nil, tracerr.Wrap(err)
 }
-if err := meddler.QueryRow(hdb.db, &wDelayer,
+if err := meddler.QueryRow(hdb.dbRead, &wDelayer,
 "SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
 return nil, nil, nil, tracerr.Wrap(err)
 }
@@ -781,7 +820,7 @@ func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.Buck
 block_stamp,
 withdrawals
 ) VALUES %s;`,
-bucketUpdates[:],
+bucketUpdates,
 ))
 }
@@ -795,7 +834,7 @@ func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.
 func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
 var bucketUpdates []*common.BucketUpdate
 err := meddler.QueryAll(
-hdb.db, &bucketUpdates,
+hdb.dbRead, &bucketUpdates,
 `SELECT eth_block_num, num_bucket, block_stamp, withdrawals
 FROM bucket_update ORDER BY item_id;`,
 )
@@ -813,7 +852,7 @@ func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.To
 eth_addr,
 value_usd
 ) VALUES %s;`,
-tokenExchanges[:],
+tokenExchanges,
 ))
 }
@@ -821,7 +860,7 @@ func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.To
 func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
 var tokenExchanges []*common.TokenExchange
 err := meddler.QueryAll(
-hdb.db, &tokenExchanges,
+hdb.dbRead, &tokenExchanges,
 "SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;",
 )
 return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
@@ -841,7 +880,7 @@ func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
 token_addr,
 amount
 ) VALUES %s;`,
-escapeHatchWithdrawals[:],
+escapeHatchWithdrawals,
 ))
 }
@@ -849,7 +888,7 @@ func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
 func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
 var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
 err := meddler.QueryAll(
-hdb.db, &escapeHatchWithdrawals,
+hdb.dbRead, &escapeHatchWithdrawals,
 "SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;",
 )
 return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
@@ -862,7 +901,7 @@ func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHat
 // exist in the smart contracts.
 func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
 auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
-txn, err := hdb.db.Beginx()
+txn, err := hdb.dbWrite.Beginx()
 if err != nil {
 return tracerr.Wrap(err)
 }
@@ -946,7 +985,7 @@ func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx)
 // the pagination system of the API/DB depends on this. Within blocks, all
 // items should also be in the correct order (Accounts, Tokens, Txs, etc.)
 func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
-txn, err := hdb.db.Beginx()
+txn, err := hdb.dbWrite.Beginx()
 if err != nil {
 return tracerr.Wrap(err)
 }
@@ -1018,6 +1057,11 @@ func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
// Add the batch's account balance updates (if any)
if err := hdb.addAccountUpdates(txn, batch.UpdatedAccounts); err != nil {
return tracerr.Wrap(err)
}
// Set the EffectiveAmount and EffectiveDepositAmount of all the // Set the EffectiveAmount and EffectiveDepositAmount of all the
// L1UserTxs that have been forged in this batch // L1UserTxs that have been forged in this batch
if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil { if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil {
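Each forged batch now also persists per-account balance and nonce snapshots through addAccountUpdates. A minimal sketch of feeding such rows in through the exported API, using only the AccountUpdate fields exercised by the tests below; the block, batch and index numbers are placeholders:

    updates := []common.AccountUpdate{
        {EthBlockNum: 1001, BatchNum: 7, Idx: 256, Nonce: common.Nonce(3), Balance: big.NewInt(500)},
        {EthBlockNum: 1001, BatchNum: 7, Idx: 257, Nonce: common.Nonce(1), Balance: big.NewInt(120)},
    }
    if err := historyDB.AddAccountUpdates(updates); err != nil {
        return tracerr.Wrap(err)
    }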
@@ -1099,7 +1143,7 @@ func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) { func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
coordinator := &CoordinatorAPI{} coordinator := &CoordinatorAPI{}
err := meddler.QueryRow( err := meddler.QueryRow(
hdb.db, coordinator, hdb.dbRead, coordinator,
"SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;", "SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
bidderAddr, bidderAddr,
) )
@@ -1108,14 +1152,14 @@ func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*Coordina
// AddAuctionVars insert auction vars into the DB // AddAuctionVars insert auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error { func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
return tracerr.Wrap(meddler.Insert(hdb.db, "auction_vars", auctionVars)) return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "auction_vars", auctionVars))
} }
// GetTokensTest used to get tokens in a testing context // GetTokensTest used to get tokens in a testing context
func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) { func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
tokens := []*TokenWithUSD{} tokens := []*TokenWithUSD{}
if err := meddler.QueryAll( if err := meddler.QueryAll(
hdb.db, &tokens, hdb.dbRead, &tokens,
"SELECT * FROM TOKEN", "SELECT * FROM TOKEN",
); err != nil { ); err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)


@@ -39,12 +39,12 @@ func TestMain(m *testing.M) {
if err != nil { if err != nil {
panic(err) panic(err)
} }
historyDB = NewHistoryDB(db, nil) historyDB = NewHistoryDB(db, db, nil)
if err != nil { if err != nil {
panic(err) panic(err)
} }
apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second) apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second)
historyDBWithACC = NewHistoryDB(db, apiConnCon) historyDBWithACC = NewHistoryDB(db, db, apiConnCon)
// Run tests // Run tests
result := m.Run() result := m.Run()
// Close DB // Close DB
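NewHistoryDB (and NewL2DB further down) now takes two *sqlx.DB handles so reads and writes can go through different connections; the tests simply pass the same handle twice. A sketch of production wiring, where the DSNs and the use of sqlx.Connect are illustrative assumptions rather than this repository's own setup code:

    // Hypothetical DSNs: reads against a replica, writes against the primary.
    dbRead, err := sqlx.Connect("postgres", dsnRead)
    if err != nil {
        panic(err)
    }
    dbWrite, err := sqlx.Connect("postgres", dsnWrite)
    if err != nil {
        panic(err)
    }
    historyDB := historydb.NewHistoryDB(dbRead, dbWrite, nil) // nil: no API connection controller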
@@ -377,6 +377,22 @@ func TestAccounts(t *testing.T) {
accs[i].Balance = nil accs[i].Balance = nil
assert.Equal(t, accs[i], acc) assert.Equal(t, accs[i], acc)
} }
// Test AccountBalances
accUpdates := make([]common.AccountUpdate, len(accs))
for i, acc := range accs {
accUpdates[i] = common.AccountUpdate{
EthBlockNum: batches[acc.BatchNum-1].EthBlockNum,
BatchNum: acc.BatchNum,
Idx: acc.Idx,
Nonce: common.Nonce(i),
Balance: big.NewInt(int64(i)),
}
}
err = historyDB.AddAccountUpdates(accUpdates)
require.NoError(t, err)
fetchedAccBalances, err := historyDB.GetAllAccountUpdates()
require.NoError(t, err)
assert.Equal(t, accUpdates, fetchedAccBalances)
} }
func TestTxs(t *testing.T) { func TestTxs(t *testing.T) {
@@ -801,11 +817,11 @@ func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
} }
// Add second batch to trigger the update of the batch_num, // Add second batch to trigger the update of the batch_num,
// while avoiding the implicit call of setExtraInfoForgedL1UserTxs // while avoiding the implicit call of setExtraInfoForgedL1UserTxs
err = historyDB.addBlock(historyDB.db, &blocks[1].Block) err = historyDB.addBlock(historyDB.dbWrite, &blocks[1].Block)
require.NoError(t, err) require.NoError(t, err)
err = historyDB.addBatch(historyDB.db, &blocks[1].Rollup.Batches[0].Batch) err = historyDB.addBatch(historyDB.dbWrite, &blocks[1].Rollup.Batches[0].Batch)
require.NoError(t, err) require.NoError(t, err)
err = historyDB.addAccounts(historyDB.db, blocks[1].Rollup.Batches[0].CreatedAccounts) err = historyDB.addAccounts(historyDB.dbWrite, blocks[1].Rollup.Batches[0].CreatedAccounts)
require.NoError(t, err) require.NoError(t, err)
// Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block // Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block
@@ -815,7 +831,7 @@ func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
l1Txs[1].EffectiveAmount = big.NewInt(0) l1Txs[1].EffectiveAmount = big.NewInt(0)
l1Txs[2].EffectiveDepositAmount = big.NewInt(0) l1Txs[2].EffectiveDepositAmount = big.NewInt(0)
l1Txs[2].EffectiveAmount = big.NewInt(0) l1Txs[2].EffectiveAmount = big.NewInt(0)
err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.db, l1Txs) err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.dbWrite, l1Txs)
require.NoError(t, err) require.NoError(t, err)
dbL1Txs, err := historyDB.GetAllL1UserTxs() dbL1Txs, err := historyDB.GetAllL1UserTxs()
@@ -902,10 +918,10 @@ func TestUpdateExitTree(t *testing.T) {
common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false, common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false,
Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr}, Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr},
) )
err = historyDB.addBlock(historyDB.db, &block.Block) err = historyDB.addBlock(historyDB.dbWrite, &block.Block)
require.NoError(t, err) require.NoError(t, err)
err = historyDB.updateExitTree(historyDB.db, block.Block.Num, err = historyDB.updateExitTree(historyDB.dbWrite, block.Block.Num,
block.Rollup.Withdrawals, block.WDelayer.Withdrawals) block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
require.NoError(t, err) require.NoError(t, err)
@@ -935,10 +951,10 @@ func TestUpdateExitTree(t *testing.T) {
Token: tokenAddr, Token: tokenAddr,
Amount: big.NewInt(80), Amount: big.NewInt(80),
}) })
err = historyDB.addBlock(historyDB.db, &block.Block) err = historyDB.addBlock(historyDB.dbWrite, &block.Block)
require.NoError(t, err) require.NoError(t, err)
err = historyDB.updateExitTree(historyDB.db, block.Block.Num, err = historyDB.updateExitTree(historyDB.dbWrite, block.Block.Num,
block.Rollup.Withdrawals, block.WDelayer.Withdrawals) block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
require.NoError(t, err) require.NoError(t, err)
@@ -981,7 +997,7 @@ func TestGetBestBidCoordinator(t *testing.T) {
URL: "bar", URL: "bar",
}, },
} }
err = historyDB.addCoordinators(historyDB.db, coords) err = historyDB.addCoordinators(historyDB.dbWrite, coords)
require.NoError(t, err) require.NoError(t, err)
bids := []common.Bid{ bids := []common.Bid{
@@ -999,7 +1015,7 @@ func TestGetBestBidCoordinator(t *testing.T) {
}, },
} }
err = historyDB.addBids(historyDB.db, bids) err = historyDB.addBids(historyDB.dbWrite, bids)
require.NoError(t, err) require.NoError(t, err)
forger10, err := historyDB.GetBestBidCoordinator(10) forger10, err := historyDB.GetBestBidCoordinator(10)
@@ -1037,7 +1053,7 @@ func TestAddBucketUpdates(t *testing.T) {
Withdrawals: big.NewInt(42), Withdrawals: big.NewInt(42),
}, },
} }
err := historyDB.addBucketUpdates(historyDB.db, bucketUpdates) err := historyDB.addBucketUpdates(historyDB.dbWrite, bucketUpdates)
require.NoError(t, err) require.NoError(t, err)
dbBucketUpdates, err := historyDB.GetAllBucketUpdates() dbBucketUpdates, err := historyDB.GetAllBucketUpdates()
require.NoError(t, err) require.NoError(t, err)
@@ -1062,7 +1078,7 @@ func TestAddTokenExchanges(t *testing.T) {
ValueUSD: 67890, ValueUSD: 67890,
}, },
} }
err := historyDB.addTokenExchanges(historyDB.db, tokenExchanges) err := historyDB.addTokenExchanges(historyDB.dbWrite, tokenExchanges)
require.NoError(t, err) require.NoError(t, err)
dbTokenExchanges, err := historyDB.GetAllTokenExchanges() dbTokenExchanges, err := historyDB.GetAllTokenExchanges()
require.NoError(t, err) require.NoError(t, err)
@@ -1091,7 +1107,7 @@ func TestAddEscapeHatchWithdrawals(t *testing.T) {
Amount: big.NewInt(20003), Amount: big.NewInt(20003),
}, },
} }
err := historyDB.addEscapeHatchWithdrawals(historyDB.db, escapeHatchWithdrawals) err := historyDB.addEscapeHatchWithdrawals(historyDB.dbWrite, escapeHatchWithdrawals)
require.NoError(t, err) require.NoError(t, err)
dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals() dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals()
require.NoError(t, err) require.NoError(t, err)
@@ -1159,16 +1175,12 @@ func TestGetMetricsAPI(t *testing.T) {
res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches)) res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, float64(numTx)/float64(numBatches-1), res.TransactionsPerBatch) assert.Equal(t, float64(numTx)/float64(numBatches), res.TransactionsPerBatch)
// Frequency is not exactly the desired one, some decimals may appear // Frequency is not exactly the desired one, some decimals may appear
assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency)) // There is a -2 as time for first and last batch is not taken into account
assert.Less(t, res.BatchFrequency, float64(frequency+1)) assert.InEpsilon(t, float64(frequency)*float64(numBatches-2)/float64(numBatches), res.BatchFrequency, 0.01)
// Truncate frecuency into an int to do an exact check assert.InEpsilon(t, float64(numTx)/float64(frequency*blockNum-frequency), res.TransactionsPerSecond, 0.01)
assert.Equal(t, frequency, int(res.BatchFrequency))
// This may also be different in some decimals
// Truncate it to the third decimal to compare
assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
assert.Equal(t, int64(3), res.TotalAccounts) assert.Equal(t, int64(3), res.TotalAccounts)
assert.Equal(t, int64(3), res.TotalBJJs) assert.Equal(t, int64(3), res.TotalBJJs)
// Til does not set fees // Til does not set fees
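The rewritten assertions encode simple ratios. With illustrative values (not necessarily this test's actual constants):

    const frequency, numBatches, numTx, blockNum = 15, 10, 20, 12                 // illustrative only
    txPerBatch := float64(numTx) / float64(numBatches)                            // 20/10 = 2.0
    batchFreq := float64(frequency) * float64(numBatches-2) / float64(numBatches) // 15*8/10 = 12.0 s
    tps := float64(numTx) / float64(frequency*blockNum-frequency)                 // 20/165 ≈ 0.121

The numBatches-2 term reflects that the first and last batch do not contribute a full interval, which is what the new comment above refers to.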
@@ -1195,7 +1207,8 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
set = append(set, til.Instruction{Typ: til.TypeNewBlock}) set = append(set, til.Instruction{Typ: til.TypeNewBlock})
// Transfers // Transfers
for x := 0; x < 6000; x++ { const numBlocks int = 30
for x := 0; x < numBlocks; x++ {
set = append(set, til.Instruction{ set = append(set, til.Instruction{
Typ: common.TxTypeTransfer, Typ: common.TxTypeTransfer,
TokenID: common.TokenID(0), TokenID: common.TokenID(0),
@@ -1219,19 +1232,20 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
err = tc.FillBlocksExtra(blocks, &tilCfgExtra) err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
require.NoError(t, err) require.NoError(t, err)
const numBatches int = 6002 const numBatches int = 2 + numBlocks
const numTx int = 6003 const blockNum = 4 + numBlocks
const blockNum = 6005 - 1
// Sanity check // Sanity check
require.Equal(t, blockNum, len(blocks)) require.Equal(t, blockNum, len(blocks))
// Adding one batch per block // Adding one batch per block
// batch frequency can be chosen // batch frequency can be chosen
const frequency int = 15 const blockTime time.Duration = 3600 * time.Second
now := time.Now()
require.NoError(t, err)
for i := range blocks { for i := range blocks {
blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i))) blocks[i].Block.Timestamp = now.Add(-time.Duration(len(blocks)-1-i) * blockTime)
err = historyDB.AddBlockSCData(&blocks[i]) err = historyDB.AddBlockSCData(&blocks[i])
assert.NoError(t, err) assert.NoError(t, err)
} }
@@ -1239,16 +1253,10 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches)) res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, math.Trunc((float64(numTx)/float64(numBatches-1))/0.001)*0.001, math.Trunc(res.TransactionsPerBatch/0.001)*0.001) assert.InEpsilon(t, 1.0, res.TransactionsPerBatch, 0.1)
// Frequency is not exactly the desired one, some decimals may appear assert.InEpsilon(t, res.BatchFrequency, float64(blockTime/time.Second), 0.1)
assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency)) assert.InEpsilon(t, 1.0/float64(blockTime/time.Second), res.TransactionsPerSecond, 0.1)
assert.Less(t, res.BatchFrequency, float64(frequency+1))
// Truncate frecuency into an int to do an exact check
assert.Equal(t, frequency, int(res.BatchFrequency))
// This may also be different in some decimals
// Truncate it to the third decimal to compare
assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
assert.Equal(t, int64(3), res.TotalAccounts) assert.Equal(t, int64(3), res.TotalAccounts)
assert.Equal(t, int64(3), res.TotalBJJs) assert.Equal(t, int64(3), res.TotalBJJs)
// Til does not set fees // Til does not set fees
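Here the 34 generated blocks are spaced blockTime = 1 hour apart, so the data spans well over 24 hours and, with roughly one transfer and one batch per block, the expected metrics collapse to per-block ratios:

    // blockTime is the test's constant (3600 * time.Second).
    batchFreq := float64(blockTime / time.Second) // 3600 s between batches
    tps := 1.0 / batchFreq                        // ≈ 0.000278 txs/s, within the InEpsilon bound above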


@@ -239,8 +239,8 @@ type AccountAPI struct {
BatchNum common.BatchNum `meddler:"batch_num"` BatchNum common.BatchNum `meddler:"batch_num"`
PublicKey apitypes.HezBJJ `meddler:"bjj"` PublicKey apitypes.HezBJJ `meddler:"bjj"`
EthAddr apitypes.HezEthAddr `meddler:"eth_addr"` EthAddr apitypes.HezEthAddr `meddler:"eth_addr"`
Nonce common.Nonce `meddler:"-"` // max of 40 bits used Nonce common.Nonce `meddler:"nonce"` // max of 40 bits used
Balance *apitypes.BigIntStr `meddler:"-"` // max of 192 bits used Balance *apitypes.BigIntStr `meddler:"balance"` // max of 192 bits used
TotalItems uint64 `meddler:"total_items"` TotalItems uint64 `meddler:"total_items"`
FirstItem uint64 `meddler:"first_item"` FirstItem uint64 `meddler:"first_item"`
LastItem uint64 `meddler:"last_item"` LastItem uint64 `meddler:"last_item"`


@@ -1,12 +1,18 @@
package l2db package l2db
import ( import (
"fmt"
ethCommon "github.com/ethereum/go-ethereum/common" ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/common" "github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/tracerr" "github.com/hermeznetwork/tracerr"
"github.com/russross/meddler" "github.com/russross/meddler"
) )
var (
errPoolFull = fmt.Errorf("the pool is at full capacity. More transactions are not accepted currently")
)
// AddAccountCreationAuthAPI inserts an account creation authorization into the DB // AddAccountCreationAuthAPI inserts an account creation authorization into the DB
func (l2db *L2DB) AddAccountCreationAuthAPI(auth *common.AccountCreationAuth) error { func (l2db *L2DB) AddAccountCreationAuthAPI(auth *common.AccountCreationAuth) error {
cancel, err := l2db.apiConnCon.Acquire() cancel, err := l2db.apiConnCon.Acquire()
@@ -28,7 +34,7 @@ func (l2db *L2DB) GetAccountCreationAuthAPI(addr ethCommon.Address) (*AccountCre
defer l2db.apiConnCon.Release() defer l2db.apiConnCon.Release()
auth := new(AccountCreationAuthAPI) auth := new(AccountCreationAuthAPI)
return auth, tracerr.Wrap(meddler.QueryRow( return auth, tracerr.Wrap(meddler.QueryRow(
l2db.db, auth, l2db.dbRead, auth,
"SELECT * FROM account_creation_auth WHERE eth_addr = $1;", "SELECT * FROM account_creation_auth WHERE eth_addr = $1;",
addr, addr,
)) ))
@@ -42,20 +48,54 @@ func (l2db *L2DB) AddTxAPI(tx *PoolL2TxWrite) error {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
defer l2db.apiConnCon.Release() defer l2db.apiConnCon.Release()
row := l2db.db.QueryRow(
"SELECT COUNT(*) FROM tx_pool WHERE state = $1;", row := l2db.dbRead.QueryRow(`SELECT
common.PoolL2TxStatePending, ($1::NUMERIC * token.usd * fee_percentage($2::NUMERIC)) /
) (10.0 ^ token.decimals::NUMERIC)
var totalTxs uint32 FROM token WHERE token.token_id = $3;`,
if err := row.Scan(&totalTxs); err != nil { tx.AmountFloat, tx.Fee, tx.TokenID)
var feeUSD float64
if err := row.Scan(&feeUSD); err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
if totalTxs >= l2db.maxTxs { if feeUSD < l2db.minFeeUSD {
return tracerr.New( return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) < minFeeUSD (%v)",
"The pool is at full capacity. More transactions are not accepted currently", feeUSD, l2db.minFeeUSD))
)
} }
return tracerr.Wrap(meddler.Insert(l2db.db, "tx_pool", tx))
// Prepare insert SQL query argument parameters
namesPart, err := meddler.Default.ColumnsQuoted(tx, false)
if err != nil {
return err
}
valuesPart, err := meddler.Default.PlaceholdersString(tx, false)
if err != nil {
return err
}
values, err := meddler.Default.Values(tx, false)
if err != nil {
return err
}
q := fmt.Sprintf(
`INSERT INTO tx_pool (%s)
SELECT %s
WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = $%v) < $%v;`,
namesPart, valuesPart,
len(values)+1, len(values)+2) //nolint:gomnd
values = append(values, common.PoolL2TxStatePending, l2db.maxTxs)
res, err := l2db.dbWrite.Exec(q, values...)
if err != nil {
return tracerr.Wrap(err)
}
rowsAffected, err := res.RowsAffected()
if err != nil {
return tracerr.Wrap(err)
}
if rowsAffected == 0 {
return tracerr.Wrap(errPoolFull)
}
return nil
} }
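The rewrite above folds the pool-capacity check into the INSERT itself, so the count of pending txs and the insert happen in one statement instead of a read followed by a separate write. A self-contained sketch of the same conditional-insert idiom against a toy table (not the real tx_pool schema, whose column list the real query derives from the meddler tags of PoolL2TxWrite):

    // Toy schema: CREATE TABLE queue (id TEXT PRIMARY KEY, state TEXT NOT NULL);
    // The row is inserted only while fewer than maxItems rows are pending;
    // otherwise zero rows are affected and the caller reports a full pool.
    res, err := dbWrite.Exec(`INSERT INTO queue (id, state)
        SELECT $1, $2
        WHERE (SELECT COUNT(*) FROM queue WHERE state = $2) < $3;`,
        id, "pending", maxItems)
    if err != nil {
        return tracerr.Wrap(err)
    }
    n, err := res.RowsAffected()
    if err != nil {
        return tracerr.Wrap(err)
    }
    if n == 0 {
        return tracerr.Wrap(errPoolFull)
    }
    return nil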
// selectPoolTxAPI select part of queries to get PoolL2TxRead // selectPoolTxAPI select part of queries to get PoolL2TxRead
@@ -78,7 +118,7 @@ func (l2db *L2DB) GetTxAPI(txID common.TxID) (*PoolTxAPI, error) {
defer l2db.apiConnCon.Release() defer l2db.apiConnCon.Release()
tx := new(PoolTxAPI) tx := new(PoolTxAPI)
return tx, tracerr.Wrap(meddler.QueryRow( return tx, tracerr.Wrap(meddler.QueryRow(
l2db.db, tx, l2db.dbRead, tx,
selectPoolTxAPI+"WHERE tx_id = $1;", selectPoolTxAPI+"WHERE tx_id = $1;",
txID, txID,
)) ))


@@ -21,10 +21,12 @@ import (
// L2DB stores L2 txs and authorization registers received by the coordinator and keeps them until they are no longer relevant // L2DB stores L2 txs and authorization registers received by the coordinator and keeps them until they are no longer relevant
// due to them being forged or invalid after a safety period // due to them being forged or invalid after a safety period
type L2DB struct { type L2DB struct {
db *sqlx.DB dbRead *sqlx.DB
dbWrite *sqlx.DB
safetyPeriod common.BatchNum safetyPeriod common.BatchNum
ttl time.Duration ttl time.Duration
maxTxs uint32 // limit of txs that are accepted in the pool maxTxs uint32 // limit of txs that are accepted in the pool
minFeeUSD float64
apiConnCon *db.APIConnectionController apiConnCon *db.APIConnectionController
} }
@@ -32,17 +34,20 @@ type L2DB struct {
// To create it, a DB connection is needed, along with the safety period expressed in batches, // To create it, a DB connection is needed, along with the safety period expressed in batches,
// the maxTxs that the DB should hold and the TTL (time to live) for pending txs. // the maxTxs that the DB should hold and the TTL (time to live) for pending txs.
func NewL2DB( func NewL2DB(
db *sqlx.DB, dbRead, dbWrite *sqlx.DB,
safetyPeriod common.BatchNum, safetyPeriod common.BatchNum,
maxTxs uint32, maxTxs uint32,
minFeeUSD float64,
TTL time.Duration, TTL time.Duration,
apiConnCon *db.APIConnectionController, apiConnCon *db.APIConnectionController,
) *L2DB { ) *L2DB {
return &L2DB{ return &L2DB{
db: db, dbRead: dbRead,
dbWrite: dbWrite,
safetyPeriod: safetyPeriod, safetyPeriod: safetyPeriod,
ttl: TTL, ttl: TTL,
maxTxs: maxTxs, maxTxs: maxTxs,
minFeeUSD: minFeeUSD,
apiConnCon: apiConnCon, apiConnCon: apiConnCon,
} }
} }
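A sketch of how a caller wires the new constructor, following the parameter order defined above; the handles and the 0.5 USD minimum fee are illustrative values, not configuration taken from this repository:

    pool := NewL2DB(
        dbRead, dbWrite, // separate read/write *sqlx.DB handles (tests pass the same one twice)
        10,              // safetyPeriod: batches to keep forged/invalid txs before purging
        1000,            // maxTxs: capacity of the pending tx pool
        0.5,             // minFeeUSD: AddTxAPI rejects txs whose fee converts to less than this
        24*time.Hour,    // TTL for pending txs
        nil,             // *db.APIConnectionController (nil when the API is not served)
    )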
@@ -50,12 +55,18 @@ func NewL2DB(
// DB returns a pointer to the L2DB.db. This method should be used only for // DB returns a pointer to the L2DB.db. This method should be used only for
// internal testing purposes. // internal testing purposes.
func (l2db *L2DB) DB() *sqlx.DB { func (l2db *L2DB) DB() *sqlx.DB {
return l2db.db return l2db.dbWrite
}
// MinFeeUSD returns the minimum fee in USD that is required to accept txs into
// the pool
func (l2db *L2DB) MinFeeUSD() float64 {
return l2db.minFeeUSD
} }
// AddAccountCreationAuth inserts an account creation authorization into the DB // AddAccountCreationAuth inserts an account creation authorization into the DB
func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error { func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error {
_, err := l2db.db.Exec( _, err := l2db.dbWrite.Exec(
`INSERT INTO account_creation_auth (eth_addr, bjj, signature) `INSERT INTO account_creation_auth (eth_addr, bjj, signature)
VALUES ($1, $2, $3);`, VALUES ($1, $2, $3);`,
auth.EthAddr, auth.BJJ, auth.Signature, auth.EthAddr, auth.BJJ, auth.Signature,
@@ -67,30 +78,12 @@ func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error
func (l2db *L2DB) GetAccountCreationAuth(addr ethCommon.Address) (*common.AccountCreationAuth, error) { func (l2db *L2DB) GetAccountCreationAuth(addr ethCommon.Address) (*common.AccountCreationAuth, error) {
auth := new(common.AccountCreationAuth) auth := new(common.AccountCreationAuth)
return auth, tracerr.Wrap(meddler.QueryRow( return auth, tracerr.Wrap(meddler.QueryRow(
l2db.db, auth, l2db.dbRead, auth,
"SELECT * FROM account_creation_auth WHERE eth_addr = $1;", "SELECT * FROM account_creation_auth WHERE eth_addr = $1;",
addr, addr,
)) ))
} }
// AddTx inserts a tx to the pool
func (l2db *L2DB) AddTx(tx *PoolL2TxWrite) error {
row := l2db.db.QueryRow(
"SELECT COUNT(*) FROM tx_pool WHERE state = $1;",
common.PoolL2TxStatePending,
)
var totalTxs uint32
if err := row.Scan(&totalTxs); err != nil {
return tracerr.Wrap(err)
}
if totalTxs >= l2db.maxTxs {
return tracerr.New(
"The pool is at full capacity. More transactions are not accepted currently",
)
}
return tracerr.Wrap(meddler.Insert(l2db.db, "tx_pool", tx))
}
// UpdateTxsInfo updates the parameter Info of the pool transactions // UpdateTxsInfo updates the parameter Info of the pool transactions
func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error { func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
if len(txs) == 0 { if len(txs) == 0 {
@@ -114,7 +107,7 @@ func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
WHERE tx_pool.tx_id = tx_update.id; WHERE tx_pool.tx_id = tx_update.id;
` `
if len(txUpdates) > 0 { if len(txUpdates) > 0 {
if _, err := sqlx.NamedExec(l2db.db, query, txUpdates); err != nil { if _, err := sqlx.NamedExec(l2db.dbWrite, query, txUpdates); err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
} }
@@ -122,9 +115,8 @@ func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
return nil return nil
} }
// AddTxTest inserts a tx into the L2DB. This is useful for test purposes, // NewPoolL2TxWriteFromPoolL2Tx creates a new PoolL2TxWrite from a PoolL2Tx
// but in production txs will only be inserted through the API func NewPoolL2TxWriteFromPoolL2Tx(tx *common.PoolL2Tx) *PoolL2TxWrite {
func (l2db *L2DB) AddTxTest(tx *common.PoolL2Tx) error {
// transform tx from *common.PoolL2Tx to PoolL2TxWrite // transform tx from *common.PoolL2Tx to PoolL2TxWrite
insertTx := &PoolL2TxWrite{ insertTx := &PoolL2TxWrite{
TxID: tx.TxID, TxID: tx.TxID,
@@ -166,8 +158,15 @@ func (l2db *L2DB) AddTxTest(tx *common.PoolL2Tx) error {
f := new(big.Float).SetInt(tx.Amount) f := new(big.Float).SetInt(tx.Amount)
amountF, _ := f.Float64() amountF, _ := f.Float64()
insertTx.AmountFloat = amountF insertTx.AmountFloat = amountF
return insertTx
}
// AddTxTest inserts a tx into the L2DB. This is useful for test purposes,
// but in production txs will only be inserted through the API
func (l2db *L2DB) AddTxTest(tx *common.PoolL2Tx) error {
insertTx := NewPoolL2TxWriteFromPoolL2Tx(tx)
// insert tx // insert tx
return tracerr.Wrap(meddler.Insert(l2db.db, "tx_pool", insertTx)) return tracerr.Wrap(meddler.Insert(l2db.dbWrite, "tx_pool", insertTx))
} }
// selectPoolTxCommon select part of queries to get common.PoolL2Tx // selectPoolTxCommon select part of queries to get common.PoolL2Tx
@@ -176,14 +175,15 @@ tx_pool.to_bjj, tx_pool.token_id, tx_pool.amount, tx_pool.fee, tx_pool.nonce,
tx_pool.state, tx_pool.info, tx_pool.signature, tx_pool.timestamp, rq_from_idx, tx_pool.state, tx_pool.info, tx_pool.signature, tx_pool.timestamp, rq_from_idx,
rq_to_idx, tx_pool.rq_to_eth_addr, tx_pool.rq_to_bjj, tx_pool.rq_token_id, tx_pool.rq_amount, rq_to_idx, tx_pool.rq_to_eth_addr, tx_pool.rq_to_bjj, tx_pool.rq_token_id, tx_pool.rq_amount,
tx_pool.rq_fee, tx_pool.rq_nonce, tx_pool.tx_type, tx_pool.rq_fee, tx_pool.rq_nonce, tx_pool.tx_type,
fee_percentage(tx_pool.fee::NUMERIC) * token.usd * tx_pool.amount_f AS fee_usd, token.usd_update (fee_percentage(tx_pool.fee::NUMERIC) * token.usd * tx_pool.amount_f) /
(10.0 ^ token.decimals::NUMERIC) AS fee_usd, token.usd_update
FROM tx_pool INNER JOIN token ON tx_pool.token_id = token.token_id ` FROM tx_pool INNER JOIN token ON tx_pool.token_id = token.token_id `
// GetTx return the specified Tx in common.PoolL2Tx format // GetTx return the specified Tx in common.PoolL2Tx format
func (l2db *L2DB) GetTx(txID common.TxID) (*common.PoolL2Tx, error) { func (l2db *L2DB) GetTx(txID common.TxID) (*common.PoolL2Tx, error) {
tx := new(common.PoolL2Tx) tx := new(common.PoolL2Tx)
return tx, tracerr.Wrap(meddler.QueryRow( return tx, tracerr.Wrap(meddler.QueryRow(
l2db.db, tx, l2db.dbRead, tx,
selectPoolTxCommon+"WHERE tx_id = $1;", selectPoolTxCommon+"WHERE tx_id = $1;",
txID, txID,
)) ))
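The new division by 10^decimals in selectPoolTxCommon (and in the AddTxAPI fee check) converts amount_f, which is stored in token units, into whole tokens before pricing it in USD. Worked example with the values the l2db tests below use (decimals = 3, token price 1.0 USD, a 6000-unit transfer at fee selector 126 ≈ 10%):

    // fee_usd = fee_percentage(fee) * token.usd * amount_f / 10^decimals
    feeUSD := 0.10 * 1.0 * 6000 / math.Pow(10, 3) // = 0.6 USD, i.e. ~10% of 6 tokens at 1 USD each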
@@ -193,7 +193,7 @@ func (l2db *L2DB) GetTx(txID common.TxID) (*common.PoolL2Tx, error) {
func (l2db *L2DB) GetPendingTxs() ([]common.PoolL2Tx, error) { func (l2db *L2DB) GetPendingTxs() ([]common.PoolL2Tx, error) {
var txs []*common.PoolL2Tx var txs []*common.PoolL2Tx
err := meddler.QueryAll( err := meddler.QueryAll(
l2db.db, &txs, l2db.dbRead, &txs,
selectPoolTxCommon+"WHERE state = $1", selectPoolTxCommon+"WHERE state = $1",
common.PoolL2TxStatePending, common.PoolL2TxStatePending,
) )
@@ -218,8 +218,8 @@ func (l2db *L2DB) StartForging(txIDs []common.TxID, batchNum common.BatchNum) er
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
query = l2db.db.Rebind(query) query = l2db.dbWrite.Rebind(query)
_, err = l2db.db.Exec(query, args...) _, err = l2db.dbWrite.Exec(query, args...)
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -241,8 +241,8 @@ func (l2db *L2DB) DoneForging(txIDs []common.TxID, batchNum common.BatchNum) err
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
query = l2db.db.Rebind(query) query = l2db.dbWrite.Rebind(query)
_, err = l2db.db.Exec(query, args...) _, err = l2db.dbWrite.Exec(query, args...)
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -263,8 +263,8 @@ func (l2db *L2DB) InvalidateTxs(txIDs []common.TxID, batchNum common.BatchNum) e
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
query = l2db.db.Rebind(query) query = l2db.dbWrite.Rebind(query)
_, err = l2db.db.Exec(query, args...) _, err = l2db.dbWrite.Exec(query, args...)
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -272,7 +272,7 @@ func (l2db *L2DB) InvalidateTxs(txIDs []common.TxID, batchNum common.BatchNum) e
// of unique FromIdx // of unique FromIdx
func (l2db *L2DB) GetPendingUniqueFromIdxs() ([]common.Idx, error) { func (l2db *L2DB) GetPendingUniqueFromIdxs() ([]common.Idx, error) {
var idxs []common.Idx var idxs []common.Idx
rows, err := l2db.db.Query(`SELECT DISTINCT from_idx FROM tx_pool rows, err := l2db.dbRead.Query(`SELECT DISTINCT from_idx FROM tx_pool
WHERE state = $1;`, common.PoolL2TxStatePending) WHERE state = $1;`, common.PoolL2TxStatePending)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
@@ -313,7 +313,7 @@ func (l2db *L2DB) InvalidateOldNonces(updatedAccounts []common.IdxNonce, batchNu
// named query which works with slices, and doesn't handle an extra // named query which works with slices, and doesn't handle an extra
// individual argument. // individual argument.
query := fmt.Sprintf(invalidateOldNoncesQuery, batchNum) query := fmt.Sprintf(invalidateOldNoncesQuery, batchNum)
if _, err := sqlx.NamedExec(l2db.db, query, updatedAccounts); err != nil { if _, err := sqlx.NamedExec(l2db.dbWrite, query, updatedAccounts); err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
return nil return nil
@@ -322,7 +322,7 @@ func (l2db *L2DB) InvalidateOldNonces(updatedAccounts []common.IdxNonce, batchNu
// Reorg updates the state of txs that were updated in a batch that has been discarded due to a blockchain reorg. // Reorg updates the state of txs that were updated in a batch that has been discarded due to a blockchain reorg.
// The state of the affected txs can change from Forged -> Pending or from Invalid -> Pending // The state of the affected txs can change from Forged -> Pending or from Invalid -> Pending
func (l2db *L2DB) Reorg(lastValidBatch common.BatchNum) error { func (l2db *L2DB) Reorg(lastValidBatch common.BatchNum) error {
_, err := l2db.db.Exec( _, err := l2db.dbWrite.Exec(
`UPDATE tx_pool SET batch_num = NULL, state = $1 `UPDATE tx_pool SET batch_num = NULL, state = $1
WHERE (state = $2 OR state = $3 OR state = $4) AND batch_num > $5`, WHERE (state = $2 OR state = $3 OR state = $4) AND batch_num > $5`,
common.PoolL2TxStatePending, common.PoolL2TxStatePending,
@@ -338,7 +338,7 @@ func (l2db *L2DB) Reorg(lastValidBatch common.BatchNum) error {
// it also deletes pending txs that have been in the L2DB for longer than the ttl if maxTxs has been exceeded // it also deletes pending txs that have been in the L2DB for longer than the ttl if maxTxs has been exceeded
func (l2db *L2DB) Purge(currentBatchNum common.BatchNum) (err error) { func (l2db *L2DB) Purge(currentBatchNum common.BatchNum) (err error) {
now := time.Now().UTC().Unix() now := time.Now().UTC().Unix()
_, err = l2db.db.Exec( _, err = l2db.dbWrite.Exec(
`DELETE FROM tx_pool WHERE ( `DELETE FROM tx_pool WHERE (
batch_num < $1 AND (state = $2 OR state = $3) batch_num < $1 AND (state = $2 OR state = $3)
) OR ( ) OR (
@@ -354,3 +354,14 @@ func (l2db *L2DB) Purge(currentBatchNum common.BatchNum) (err error) {
) )
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
// PurgeByExternalDelete deletes all pending transactions marked with true in
// the `external_delete` column. An external process can set this column to
// true to instruct the coordinator to delete the tx when possible.
func (l2db *L2DB) PurgeByExternalDelete() error {
_, err := l2db.dbWrite.Exec(
`DELETE from tx_pool WHERE (external_delete = true AND state = $1);`,
common.PoolL2TxStatePending,
)
return tracerr.Wrap(err)
}
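PurgeByExternalDelete pairs with an out-of-band flagging step: an external process only sets external_delete on the rows it wants gone, and the coordinator removes them later, limited to txs that are still pending. A minimal sketch of the flagging side, mirroring what TestPurgeByExternalDelete below does, with placeholder tx IDs:

    // Flag two pending txs for deletion; forged or forging txs keep the flag
    // but are not removed, since the DELETE is restricted to pending state.
    _, err := l2db.dbWrite.Exec(
        `UPDATE tx_pool SET external_delete = true WHERE tx_id IN ($1, $2);`,
        txID1, txID2,
    )
    if err != nil {
        return tracerr.Wrap(err)
    }
    // Later, from the coordinator:
    return tracerr.Wrap(l2db.PurgeByExternalDelete())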


@@ -1,8 +1,8 @@
package l2db package l2db
import ( import (
"math" "database/sql"
"math/big" "fmt"
"os" "os"
"testing" "testing"
"time" "time"
@@ -20,12 +20,14 @@ import (
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
var decimals = uint64(3)
var tokenValue = 1.0 // The price update gives a value of 1.0 USD to the token
var l2DB *L2DB var l2DB *L2DB
var l2DBWithACC *L2DB var l2DBWithACC *L2DB
var historyDB *historydb.HistoryDB var historyDB *historydb.HistoryDB
var tc *til.Context var tc *til.Context
var tokens map[common.TokenID]historydb.TokenWithUSD var tokens map[common.TokenID]historydb.TokenWithUSD
var tokensValue map[common.TokenID]float64
var accs map[common.Idx]common.Account var accs map[common.Idx]common.Account
func TestMain(m *testing.M) { func TestMain(m *testing.M) {
@@ -35,11 +37,11 @@ func TestMain(m *testing.M) {
if err != nil { if err != nil {
panic(err) panic(err)
} }
l2DB = NewL2DB(db, 10, 1000, 24*time.Hour, nil) l2DB = NewL2DB(db, db, 10, 1000, 0.0, 24*time.Hour, nil)
apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second) apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second)
l2DBWithACC = NewL2DB(db, 10, 1000, 24*time.Hour, apiConnCon) l2DBWithACC = NewL2DB(db, db, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
test.WipeDB(l2DB.DB()) test.WipeDB(l2DB.DB())
historyDB = historydb.NewHistoryDB(db, nil) historyDB = historydb.NewHistoryDB(db, db, nil)
// Run tests // Run tests
result := m.Run() result := m.Run()
// Close DB // Close DB
@@ -58,10 +60,10 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
AddToken(1) AddToken(1)
AddToken(2) AddToken(2)
CreateAccountDeposit(1) A: 2000 CreateAccountDeposit(1) A: 20000
CreateAccountDeposit(2) A: 2000 CreateAccountDeposit(2) A: 20000
CreateAccountDeposit(1) B: 1000 CreateAccountDeposit(1) B: 10000
CreateAccountDeposit(2) B: 1000 CreateAccountDeposit(2) B: 10000
> batchL1 > batchL1
> batchL1 > batchL1
> block > block
@@ -82,15 +84,23 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
for i := range blocks {
block := &blocks[i]
for j := range block.Rollup.AddedTokens {
token := &block.Rollup.AddedTokens[j]
token.Name = fmt.Sprintf("Token %d", token.TokenID)
token.Symbol = fmt.Sprintf("TK%d", token.TokenID)
token.Decimals = decimals
}
}
tokens = make(map[common.TokenID]historydb.TokenWithUSD) tokens = make(map[common.TokenID]historydb.TokenWithUSD)
tokensValue = make(map[common.TokenID]float64) // tokensValue = make(map[common.TokenID]float64)
accs = make(map[common.Idx]common.Account) accs = make(map[common.Idx]common.Account)
value := 5 * 5.389329
now := time.Now().UTC() now := time.Now().UTC()
// Add all blocks except for the last one // Add all blocks except for the last one
for i := range blocks[:len(blocks)-1] { for i := range blocks[:len(blocks)-1] {
err = historyDB.AddBlockSCData(&blocks[i]) if err := historyDB.AddBlockSCData(&blocks[i]); err != nil {
if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
for _, batch := range blocks[i].Rollup.Batches { for _, batch := range blocks[i].Rollup.Batches {
@@ -106,39 +116,38 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
Name: token.Name, Name: token.Name,
Symbol: token.Symbol, Symbol: token.Symbol,
Decimals: token.Decimals, Decimals: token.Decimals,
USD: &tokenValue,
USDUpdate: &now,
} }
tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
readToken.USDUpdate = &now
readToken.USD = &value
tokens[token.TokenID] = readToken tokens[token.TokenID] = readToken
} // Set value to the tokens
// Set value to the tokens (tokens have no symbol) err := historyDB.UpdateTokenValue(readToken.Symbol, *readToken.USD)
tokenSymbol := ""
err := historyDB.UpdateTokenValue(tokenSymbol, value)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
} }
}
return nil return nil
} }
func generatePoolL2Txs() ([]common.PoolL2Tx, error) { func generatePoolL2Txs() ([]common.PoolL2Tx, error) {
// Fee = 126 corresponds to ~10%
setPool := ` setPool := `
Type: PoolL2 Type: PoolL2
PoolTransfer(1) A-B: 6 (4) PoolTransfer(1) A-B: 6000 (126)
PoolTransfer(2) A-B: 3 (1) PoolTransfer(2) A-B: 3000 (126)
PoolTransfer(1) B-A: 5 (2) PoolTransfer(1) B-A: 5000 (126)
PoolTransfer(2) B-A: 10 (3) PoolTransfer(2) B-A: 10000 (126)
PoolTransfer(1) A-B: 7 (2) PoolTransfer(1) A-B: 7000 (126)
PoolTransfer(2) A-B: 2 (1) PoolTransfer(2) A-B: 2000 (126)
PoolTransfer(1) B-A: 8 (2) PoolTransfer(1) B-A: 8000 (126)
PoolTransfer(2) B-A: 1 (1) PoolTransfer(2) B-A: 1000 (126)
PoolTransfer(1) A-B: 3 (1) PoolTransfer(1) A-B: 3000 (126)
PoolTransferToEthAddr(2) B-A: 5 (2) PoolTransferToEthAddr(2) B-A: 5000 (126)
PoolTransferToBJJ(2) B-A: 5 (2) PoolTransferToBJJ(2) B-A: 5000 (126)
PoolExit(1) A: 5 (2) PoolExit(1) A: 5000 (126)
PoolExit(2) B: 3 (1) PoolExit(2) B: 3000 (126)
` `
poolL2Txs, err := tc.GeneratePoolL2Txs(setPool) poolL2Txs, err := tc.GeneratePoolL2Txs(setPool)
if err != nil { if err != nil {
@@ -153,25 +162,74 @@ func TestAddTxTest(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID) fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID)
assert.NoError(t, err) require.NoError(t, err)
assertTx(t, &poolL2Txs[i], fetchedTx) assertTx(t, &poolL2Txs[i], fetchedTx)
nameZone, offset := fetchedTx.Timestamp.Zone() nameZone, offset := fetchedTx.Timestamp.Zone()
assert.Equal(t, "UTC", nameZone) assert.Equal(t, "UTC", nameZone)
assert.Equal(t, 0, offset) assert.Equal(t, 0, offset)
} }
} }
func TestAddTxAPI(t *testing.T) {
err := prepareHistoryDB(historyDB)
if err != nil {
log.Error("Error prepare historyDB", err)
}
oldMaxTxs := l2DBWithACC.maxTxs
// set max number of pending txs that can be kept in the pool to 5
l2DBWithACC.maxTxs = 5
poolL2Txs, err := generatePoolL2Txs()
txs := make([]*PoolL2TxWrite, len(poolL2Txs))
for i := range poolL2Txs {
txs[i] = NewPoolL2TxWriteFromPoolL2Tx(&poolL2Txs[i])
}
require.NoError(t, err)
require.GreaterOrEqual(t, len(poolL2Txs), 8)
for i := range txs[:5] {
err := l2DBWithACC.AddTxAPI(txs[i])
require.NoError(t, err)
fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID)
require.NoError(t, err)
assertTx(t, &poolL2Txs[i], fetchedTx)
nameZone, offset := fetchedTx.Timestamp.Zone()
assert.Equal(t, "UTC", nameZone)
assert.Equal(t, 0, offset)
}
err = l2DBWithACC.AddTxAPI(txs[5])
assert.Equal(t, errPoolFull, tracerr.Unwrap(err))
// reset maxTxs to original value
l2DBWithACC.maxTxs = oldMaxTxs
// set minFeeUSD to a higher value than the tx feeUSD to test the error
// of inserting a tx with a fee lower than the minimum
oldMinFeeUSD := l2DBWithACC.minFeeUSD
tx := txs[5]
feeAmount, err := common.CalcFeeAmount(tx.Amount, tx.Fee)
require.NoError(t, err)
feeAmountUSD := common.TokensToUSD(feeAmount, decimals, tokenValue)
// set minFeeUSD higher than the tx fee to trigger the error
l2DBWithACC.minFeeUSD = feeAmountUSD + 1
err = l2DBWithACC.AddTxAPI(tx)
require.Error(t, err)
assert.Regexp(t, "tx.feeUSD (.*) < minFeeUSD (.*)", err.Error())
// reset minFeeUSD to original value
l2DBWithACC.minFeeUSD = oldMinFeeUSD
}
func TestUpdateTxsInfo(t *testing.T) { func TestUpdateTxsInfo(t *testing.T) {
err := prepareHistoryDB(historyDB) err := prepareHistoryDB(historyDB)
if err != nil { if err != nil {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
require.NoError(t, err) require.NoError(t, err)
@@ -185,7 +243,7 @@ func TestUpdateTxsInfo(t *testing.T) {
for i := range poolL2Txs { for i := range poolL2Txs {
fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID) fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID)
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, "test", fetchedTx.Info) assert.Equal(t, "test", fetchedTx.Info)
} }
} }
@@ -203,9 +261,8 @@ func assertTx(t *testing.T, expected, actual *common.PoolL2Tx) {
assert.Less(t, token.USDUpdate.Unix()-3, actual.AbsoluteFeeUpdate.Unix()) assert.Less(t, token.USDUpdate.Unix()-3, actual.AbsoluteFeeUpdate.Unix())
expected.AbsoluteFeeUpdate = actual.AbsoluteFeeUpdate expected.AbsoluteFeeUpdate = actual.AbsoluteFeeUpdate
// Set expected fee // Set expected fee
f := new(big.Float).SetInt(expected.Amount) amountUSD := common.TokensToUSD(expected.Amount, token.Decimals, *token.USD)
amountF, _ := f.Float64() expected.AbsoluteFee = amountUSD * expected.Fee.Percentage()
expected.AbsoluteFee = *token.USD * amountF * expected.Fee.Percentage()
test.AssertUSD(t, &expected.AbsoluteFee, &actual.AbsoluteFee) test.AssertUSD(t, &expected.AbsoluteFee, &actual.AbsoluteFee)
} }
assert.Equal(t, expected, actual) assert.Equal(t, expected, actual)
@@ -230,19 +287,28 @@ func TestGetPending(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
var pendingTxs []*common.PoolL2Tx var pendingTxs []*common.PoolL2Tx
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
pendingTxs = append(pendingTxs, &poolL2Txs[i]) pendingTxs = append(pendingTxs, &poolL2Txs[i])
} }
fetchedTxs, err := l2DB.GetPendingTxs() fetchedTxs, err := l2DB.GetPendingTxs()
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, len(pendingTxs), len(fetchedTxs)) assert.Equal(t, len(pendingTxs), len(fetchedTxs))
for i := range fetchedTxs { for i := range fetchedTxs {
assertTx(t, pendingTxs[i], &fetchedTxs[i]) assertTx(t, pendingTxs[i], &fetchedTxs[i])
} }
// Check AbsoluteFee amount
for i := range fetchedTxs {
tx := &fetchedTxs[i]
feeAmount, err := common.CalcFeeAmount(tx.Amount, tx.Fee)
require.NoError(t, err)
feeAmountUSD := common.TokensToUSD(feeAmount,
tokens[tx.TokenID].Decimals, *tokens[tx.TokenID].USD)
assert.InEpsilon(t, feeAmountUSD, tx.AbsoluteFee, 0.01)
}
} }
func TestStartForging(t *testing.T) { func TestStartForging(t *testing.T) {
@@ -253,13 +319,13 @@ func TestStartForging(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
var startForgingTxIDs []common.TxID var startForgingTxIDs []common.TxID
randomizer := 0 randomizer := 0
// Add txs to DB // Add txs to DB
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 { if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID) startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
} }
@@ -267,11 +333,11 @@ func TestStartForging(t *testing.T) {
} }
// Start forging txs // Start forging txs
err = l2DB.StartForging(startForgingTxIDs, fakeBatchNum) err = l2DB.StartForging(startForgingTxIDs, fakeBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Fetch txs and check that they've been updated correctly // Fetch txs and check that they've been updated correctly
for _, id := range startForgingTxIDs { for _, id := range startForgingTxIDs {
fetchedTx, err := l2DBWithACC.GetTxAPI(id) fetchedTx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, common.PoolL2TxStateForging, fetchedTx.State) assert.Equal(t, common.PoolL2TxStateForging, fetchedTx.State)
assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum) assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum)
} }
@@ -285,13 +351,13 @@ func TestDoneForging(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
var startForgingTxIDs []common.TxID var startForgingTxIDs []common.TxID
randomizer := 0 randomizer := 0
// Add txs to DB // Add txs to DB
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 { if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID) startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
} }
@@ -299,7 +365,7 @@ func TestDoneForging(t *testing.T) {
} }
// Start forging txs // Start forging txs
err = l2DB.StartForging(startForgingTxIDs, fakeBatchNum) err = l2DB.StartForging(startForgingTxIDs, fakeBatchNum)
assert.NoError(t, err) require.NoError(t, err)
var doneForgingTxIDs []common.TxID var doneForgingTxIDs []common.TxID
randomizer = 0 randomizer = 0
@@ -311,12 +377,12 @@ func TestDoneForging(t *testing.T) {
} }
// Done forging txs // Done forging txs
err = l2DB.DoneForging(doneForgingTxIDs, fakeBatchNum) err = l2DB.DoneForging(doneForgingTxIDs, fakeBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Fetch txs and check that they've been updated correctly // Fetch txs and check that they've been updated correctly
for _, id := range doneForgingTxIDs { for _, id := range doneForgingTxIDs {
fetchedTx, err := l2DBWithACC.GetTxAPI(id) fetchedTx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, common.PoolL2TxStateForged, fetchedTx.State) assert.Equal(t, common.PoolL2TxStateForged, fetchedTx.State)
assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum) assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum)
} }
@@ -330,13 +396,13 @@ func TestInvalidate(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
var invalidTxIDs []common.TxID var invalidTxIDs []common.TxID
randomizer := 0 randomizer := 0
// Add txs to DB // Add txs to DB
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
if poolL2Txs[i].State != common.PoolL2TxStateInvalid && randomizer%2 == 0 { if poolL2Txs[i].State != common.PoolL2TxStateInvalid && randomizer%2 == 0 {
randomizer++ randomizer++
invalidTxIDs = append(invalidTxIDs, poolL2Txs[i].TxID) invalidTxIDs = append(invalidTxIDs, poolL2Txs[i].TxID)
@@ -344,11 +410,11 @@ func TestInvalidate(t *testing.T) {
} }
// Invalidate txs // Invalidate txs
err = l2DB.InvalidateTxs(invalidTxIDs, fakeBatchNum) err = l2DB.InvalidateTxs(invalidTxIDs, fakeBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Fetch txs and check that they've been updated correctly // Fetch txs and check that they've been updated correctly
for _, id := range invalidTxIDs { for _, id := range invalidTxIDs {
fetchedTx, err := l2DBWithACC.GetTxAPI(id) fetchedTx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, common.PoolL2TxStateInvalid, fetchedTx.State) assert.Equal(t, common.PoolL2TxStateInvalid, fetchedTx.State)
assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum) assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum)
} }
@@ -362,7 +428,7 @@ func TestInvalidateOldNonces(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
// Update Accounts currentNonce // Update Accounts currentNonce
var updateAccounts []common.IdxNonce var updateAccounts []common.IdxNonce
var currentNonce = common.Nonce(1) var currentNonce = common.Nonce(1)
@@ -379,13 +445,13 @@ func TestInvalidateOldNonces(t *testing.T) {
invalidTxIDs = append(invalidTxIDs, poolL2Txs[i].TxID) invalidTxIDs = append(invalidTxIDs, poolL2Txs[i].TxID)
} }
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
} }
// sanity check // sanity check
require.Greater(t, len(invalidTxIDs), 0) require.Greater(t, len(invalidTxIDs), 0)
err = l2DB.InvalidateOldNonces(updateAccounts, fakeBatchNum) err = l2DB.InvalidateOldNonces(updateAccounts, fakeBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Fetch txs and check that they've been updated correctly // Fetch txs and check that they've been updated correctly
for _, id := range invalidTxIDs { for _, id := range invalidTxIDs {
fetchedTx, err := l2DBWithACC.GetTxAPI(id) fetchedTx, err := l2DBWithACC.GetTxAPI(id)
@@ -407,7 +473,7 @@ func TestReorg(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
reorgedTxIDs := []common.TxID{} reorgedTxIDs := []common.TxID{}
nonReorgedTxIDs := []common.TxID{} nonReorgedTxIDs := []common.TxID{}
@@ -418,7 +484,7 @@ func TestReorg(t *testing.T) {
// Add txs to DB // Add txs to DB
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 { if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID) startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
allTxRandomize = append(allTxRandomize, poolL2Txs[i].TxID) allTxRandomize = append(allTxRandomize, poolL2Txs[i].TxID)
@@ -430,7 +496,7 @@ func TestReorg(t *testing.T) {
} }
// Start forging txs // Start forging txs
err = l2DB.StartForging(startForgingTxIDs, lastValidBatch) err = l2DB.StartForging(startForgingTxIDs, lastValidBatch)
assert.NoError(t, err) require.NoError(t, err)
var doneForgingTxIDs []common.TxID var doneForgingTxIDs []common.TxID
randomizer = 0 randomizer = 0
@@ -455,22 +521,22 @@ func TestReorg(t *testing.T) {
// Invalidate txs BEFORE reorgBatch --> nonReorg // Invalidate txs BEFORE reorgBatch --> nonReorg
err = l2DB.InvalidateTxs(invalidTxIDs, lastValidBatch) err = l2DB.InvalidateTxs(invalidTxIDs, lastValidBatch)
assert.NoError(t, err) require.NoError(t, err)
// Done forging txs in reorgBatch --> Reorg // Done forging txs in reorgBatch --> Reorg
err = l2DB.DoneForging(doneForgingTxIDs, reorgBatch) err = l2DB.DoneForging(doneForgingTxIDs, reorgBatch)
assert.NoError(t, err) require.NoError(t, err)
err = l2DB.Reorg(lastValidBatch) err = l2DB.Reorg(lastValidBatch)
assert.NoError(t, err) require.NoError(t, err)
for _, id := range reorgedTxIDs { for _, id := range reorgedTxIDs {
tx, err := l2DBWithACC.GetTxAPI(id) tx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Nil(t, tx.BatchNum) assert.Nil(t, tx.BatchNum)
assert.Equal(t, common.PoolL2TxStatePending, tx.State) assert.Equal(t, common.PoolL2TxStatePending, tx.State)
} }
for _, id := range nonReorgedTxIDs { for _, id := range nonReorgedTxIDs {
fetchedTx, err := l2DBWithACC.GetTxAPI(id) fetchedTx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, lastValidBatch, *fetchedTx.BatchNum) assert.Equal(t, lastValidBatch, *fetchedTx.BatchNum)
} }
} }
@@ -487,7 +553,7 @@ func TestReorg2(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
reorgedTxIDs := []common.TxID{} reorgedTxIDs := []common.TxID{}
nonReorgedTxIDs := []common.TxID{} nonReorgedTxIDs := []common.TxID{}
@@ -498,7 +564,7 @@ func TestReorg2(t *testing.T) {
// Add txs to DB // Add txs to DB
for i := range poolL2Txs { for i := range poolL2Txs {
err := l2DB.AddTxTest(&poolL2Txs[i]) err := l2DB.AddTxTest(&poolL2Txs[i])
assert.NoError(t, err) require.NoError(t, err)
if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 { if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID) startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
allTxRandomize = append(allTxRandomize, poolL2Txs[i].TxID) allTxRandomize = append(allTxRandomize, poolL2Txs[i].TxID)
@@ -510,7 +576,7 @@ func TestReorg2(t *testing.T) {
} }
// Start forging txs // Start forging txs
err = l2DB.StartForging(startForgingTxIDs, lastValidBatch) err = l2DB.StartForging(startForgingTxIDs, lastValidBatch)
assert.NoError(t, err) require.NoError(t, err)
var doneForgingTxIDs []common.TxID var doneForgingTxIDs []common.TxID
randomizer = 0 randomizer = 0
@@ -532,22 +598,22 @@ func TestReorg2(t *testing.T) {
} }
// Done forging txs BEFORE reorgBatch --> nonReorg // Done forging txs BEFORE reorgBatch --> nonReorg
err = l2DB.DoneForging(doneForgingTxIDs, lastValidBatch) err = l2DB.DoneForging(doneForgingTxIDs, lastValidBatch)
assert.NoError(t, err) require.NoError(t, err)
// Invalidate txs in reorgBatch --> Reorg // Invalidate txs in reorgBatch --> Reorg
err = l2DB.InvalidateTxs(invalidTxIDs, reorgBatch) err = l2DB.InvalidateTxs(invalidTxIDs, reorgBatch)
assert.NoError(t, err) require.NoError(t, err)
err = l2DB.Reorg(lastValidBatch) err = l2DB.Reorg(lastValidBatch)
assert.NoError(t, err) require.NoError(t, err)
for _, id := range reorgedTxIDs { for _, id := range reorgedTxIDs {
tx, err := l2DBWithACC.GetTxAPI(id) tx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Nil(t, tx.BatchNum) assert.Nil(t, tx.BatchNum)
assert.Equal(t, common.PoolL2TxStatePending, tx.State) assert.Equal(t, common.PoolL2TxStatePending, tx.State)
} }
for _, id := range nonReorgedTxIDs { for _, id := range nonReorgedTxIDs {
fetchedTx, err := l2DBWithACC.GetTxAPI(id) fetchedTx, err := l2DBWithACC.GetTxAPI(id)
assert.NoError(t, err) require.NoError(t, err)
assert.Equal(t, lastValidBatch, *fetchedTx.BatchNum) assert.Equal(t, lastValidBatch, *fetchedTx.BatchNum)
} }
} }
@@ -563,7 +629,7 @@ func TestPurge(t *testing.T) {
var poolL2Tx []common.PoolL2Tx var poolL2Tx []common.PoolL2Tx
for i := 0; i < generateTx; i++ { for i := 0; i < generateTx; i++ {
poolL2TxAux, err := generatePoolL2Txs() poolL2TxAux, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
poolL2Tx = append(poolL2Tx, poolL2TxAux...) poolL2Tx = append(poolL2Tx, poolL2TxAux...)
} }
@@ -590,39 +656,39 @@ func TestPurge(t *testing.T) {
deletedIDs = append(deletedIDs, poolL2Tx[i].TxID) deletedIDs = append(deletedIDs, poolL2Tx[i].TxID)
} }
err := l2DB.AddTxTest(&tx) err := l2DB.AddTxTest(&tx)
assert.NoError(t, err) require.NoError(t, err)
} }
// Set batchNum of kept txs // Set batchNum of kept txs
for i := range keepedIDs { for i := range keepedIDs {
_, err = l2DB.db.Exec( _, err = l2DB.dbWrite.Exec(
"UPDATE tx_pool SET batch_num = $1 WHERE tx_id = $2;", "UPDATE tx_pool SET batch_num = $1 WHERE tx_id = $2;",
safeBatchNum, keepedIDs[i], safeBatchNum, keepedIDs[i],
) )
assert.NoError(t, err) require.NoError(t, err)
} }
// Start forging txs and set batchNum // Start forging txs and set batchNum
err = l2DB.StartForging(doneForgingTxIDs, toDeleteBatchNum) err = l2DB.StartForging(doneForgingTxIDs, toDeleteBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Done forging txs and set batchNum // Done forging txs and set batchNum
err = l2DB.DoneForging(doneForgingTxIDs, toDeleteBatchNum) err = l2DB.DoneForging(doneForgingTxIDs, toDeleteBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Invalidate txs and set batchNum // Invalidate txs and set batchNum
err = l2DB.InvalidateTxs(invalidTxIDs, toDeleteBatchNum) err = l2DB.InvalidateTxs(invalidTxIDs, toDeleteBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Update timestamp of afterTTL txs // Update timestamp of afterTTL txs
deleteTimestamp := time.Unix(time.Now().UTC().Unix()-int64(l2DB.ttl.Seconds()+float64(4*time.Second)), 0) deleteTimestamp := time.Unix(time.Now().UTC().Unix()-int64(l2DB.ttl.Seconds()+float64(4*time.Second)), 0)
for _, id := range afterTTLIDs { for _, id := range afterTTLIDs {
// Set timestamp // Set timestamp
_, err = l2DB.db.Exec( _, err = l2DB.dbWrite.Exec(
"UPDATE tx_pool SET timestamp = $1, state = $2 WHERE tx_id = $3;", "UPDATE tx_pool SET timestamp = $1, state = $2 WHERE tx_id = $3;",
deleteTimestamp, common.PoolL2TxStatePending, id, deleteTimestamp, common.PoolL2TxStatePending, id,
) )
assert.NoError(t, err) require.NoError(t, err)
} }
// Purge txs // Purge txs
err = l2DB.Purge(safeBatchNum) err = l2DB.Purge(safeBatchNum)
assert.NoError(t, err) require.NoError(t, err)
// Check results // Check results
for _, id := range deletedIDs { for _, id := range deletedIDs {
_, err := l2DB.GetTx(id) _, err := l2DB.GetTx(id)
@@ -630,7 +696,7 @@ func TestPurge(t *testing.T) {
} }
for _, id := range keepedIDs { for _, id := range keepedIDs {
_, err := l2DB.GetTx(id) _, err := l2DB.GetTx(id)
assert.NoError(t, err) require.NoError(t, err)
} }
} }
@@ -644,10 +710,10 @@ func TestAuth(t *testing.T) {
for i := 0; i < len(auths); i++ { for i := 0; i < len(auths); i++ {
// Add to the DB // Add to the DB
err := l2DB.AddAccountCreationAuth(auths[i]) err := l2DB.AddAccountCreationAuth(auths[i])
assert.NoError(t, err) require.NoError(t, err)
// Fetch from DB // Fetch from DB
auth, err := l2DB.GetAccountCreationAuth(auths[i].EthAddr) auth, err := l2DB.GetAccountCreationAuth(auths[i].EthAddr)
assert.NoError(t, err) require.NoError(t, err)
// Check fetched vs generated // Check fetched vs generated
assert.Equal(t, auths[i].EthAddr, auth.EthAddr) assert.Equal(t, auths[i].EthAddr, auth.EthAddr)
assert.Equal(t, auths[i].BJJ, auth.BJJ) assert.Equal(t, auths[i].BJJ, auth.BJJ)
@@ -665,7 +731,7 @@ func TestAddGet(t *testing.T) {
log.Error("Error prepare historyDB", err) log.Error("Error prepare historyDB", err)
} }
poolL2Txs, err := generatePoolL2Txs() poolL2Txs, err := generatePoolL2Txs()
assert.NoError(t, err) require.NoError(t, err)
// We will work with only 3 txs // We will work with only 3 txs
require.GreaterOrEqual(t, len(poolL2Txs), 3) require.GreaterOrEqual(t, len(poolL2Txs), 3)
@@ -701,3 +767,56 @@ func TestAddGet(t *testing.T) {
assert.Equal(t, txs[i], *dbTx) assert.Equal(t, txs[i], *dbTx)
} }
} }
func TestPurgeByExternalDelete(t *testing.T) {
err := prepareHistoryDB(historyDB)
if err != nil {
log.Error("Error prepare historyDB", err)
}
txs, err := generatePoolL2Txs()
require.NoError(t, err)
// We will work with 8 txs
require.GreaterOrEqual(t, len(txs), 8)
txs = txs[:8]
for i := range txs {
require.NoError(t, l2DB.AddTxTest(&txs[i]))
}
// We will recreate this scenario:
// tx index, status , external_delete
// 0 , pending, false
// 1 , pending, false
// 2 , pending, true // will be deleted
// 3 , pending, true // will be deleted
// 4 , fging , false
// 5 , fging , false
// 6 , fging , true
// 7 , fging , true
require.NoError(t, l2DB.StartForging(
[]common.TxID{txs[4].TxID, txs[5].TxID, txs[6].TxID, txs[7].TxID},
1))
_, err = l2DB.dbWrite.Exec(
`UPDATE tx_pool SET external_delete = true WHERE
tx_id IN ($1, $2, $3, $4)
;`,
txs[2].TxID, txs[3].TxID, txs[6].TxID, txs[7].TxID,
)
require.NoError(t, err)
require.NoError(t, l2DB.PurgeByExternalDelete())
// Query txs that have not been deleted
for _, i := range []int{0, 1, 4, 5, 6, 7} {
txID := txs[i].TxID
_, err := l2DB.GetTx(txID)
require.NoError(t, err)
}
// Query txs that have been deleted
for _, i := range []int{2, 3} {
txID := txs[i].TxID
_, err := l2DB.GetTx(txID)
require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
}
}
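The body of PurgeByExternalDelete is not shown in this hunk; the following is a minimal sketch, assuming the method simply removes pending transactions flagged with external_delete (the behaviour the assertions above check) while transactions that already started forging are kept. The function name and the exact SQL are illustrative only, not repository code.

package l2db

import (
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/tracerr"
	"github.com/jmoiron/sqlx"
)

// purgeByExternalDeleteSketch deletes every pending pool transaction that has
// been flagged for external deletion; txs in other states are left untouched.
func purgeByExternalDeleteSketch(dbWrite *sqlx.DB) error {
	_, err := dbWrite.Exec(
		`DELETE FROM tx_pool WHERE external_delete = true AND state = $1;`,
		common.PoolL2TxStatePending,
	)
	if err != nil {
		return tracerr.Wrap(err)
	}
	return nil
}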

View File

@@ -34,6 +34,7 @@ type PoolL2TxWrite struct {
RqFee *common.FeeSelector `meddler:"rq_fee"` RqFee *common.FeeSelector `meddler:"rq_fee"`
RqNonce *common.Nonce `meddler:"rq_nonce"` RqNonce *common.Nonce `meddler:"rq_nonce"`
Type common.TxType `meddler:"tx_type"` Type common.TxType `meddler:"tx_type"`
ClientIP string `meddler:"client_ip"`
} }
// PoolTxAPI represents a L2 Tx pool with extra metadata used by the API // PoolTxAPI represents a L2 Tx pool with extra metadata used by the API

View File

@@ -47,7 +47,7 @@ CREATE TABLE token (
name VARCHAR(20) NOT NULL, name VARCHAR(20) NOT NULL,
symbol VARCHAR(10) NOT NULL, symbol VARCHAR(10) NOT NULL,
decimals INT NOT NULL, decimals INT NOT NULL,
usd NUMERIC, usd NUMERIC, -- value of a normalized token (1 token = 10^decimals units)
usd_update TIMESTAMP WITHOUT TIME ZONE usd_update TIMESTAMP WITHOUT TIME ZONE
); );
@@ -100,6 +100,15 @@ CREATE TABLE account (
eth_addr BYTEA NOT NULL eth_addr BYTEA NOT NULL
); );
CREATE TABLE account_update (
item_id SERIAL,
eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
batch_num BIGINT NOT NULL REFERENCES batch (batch_num) ON DELETE CASCADE,
idx BIGINT NOT NULL REFERENCES account (idx) ON DELETE CASCADE,
nonce BIGINT NOT NULL,
balance BYTEA NOT NULL
);
CREATE TABLE exit_tree ( CREATE TABLE exit_tree (
item_id SERIAL PRIMARY KEY, item_id SERIAL PRIMARY KEY,
batch_num BIGINT REFERENCES batch (batch_num) ON DELETE CASCADE, batch_num BIGINT REFERENCES batch (batch_num) ON DELETE CASCADE,
@@ -618,7 +627,9 @@ CREATE TABLE tx_pool (
rq_amount BYTEA, rq_amount BYTEA,
rq_fee SMALLINT, rq_fee SMALLINT,
rq_nonce BIGINT, rq_nonce BIGINT,
tx_type VARCHAR(40) NOT NULL tx_type VARCHAR(40) NOT NULL,
client_ip VARCHAR,
external_delete BOOLEAN NOT NULL DEFAULT false
); );
-- +migrate StatementBegin -- +migrate StatementBegin
@@ -651,34 +662,35 @@ CREATE TABLE account_creation_auth (
); );
-- +migrate Down -- +migrate Down
-- drop triggers -- triggers
DROP TRIGGER trigger_token_usd_update ON token; DROP TRIGGER IF EXISTS trigger_token_usd_update ON token;
DROP TRIGGER trigger_set_tx ON tx; DROP TRIGGER IF EXISTS trigger_set_tx ON tx;
DROP TRIGGER trigger_forge_l1_txs ON batch; DROP TRIGGER IF EXISTS trigger_forge_l1_txs ON batch;
DROP TRIGGER trigger_set_pool_tx ON tx_pool; DROP TRIGGER IF EXISTS trigger_set_pool_tx ON tx_pool;
-- drop functions -- functions
DROP FUNCTION hez_idx; DROP FUNCTION IF EXISTS hez_idx;
DROP FUNCTION set_token_usd_update; DROP FUNCTION IF EXISTS set_token_usd_update;
DROP FUNCTION fee_percentage; DROP FUNCTION IF EXISTS fee_percentage;
DROP FUNCTION set_tx; DROP FUNCTION IF EXISTS set_tx;
DROP FUNCTION forge_l1_user_txs; DROP FUNCTION IF EXISTS forge_l1_user_txs;
DROP FUNCTION set_pool_tx; DROP FUNCTION IF EXISTS set_pool_tx;
-- drop tables -- drop tables IF EXISTS
DROP TABLE account_creation_auth; DROP TABLE IF EXISTS account_creation_auth;
DROP TABLE tx_pool; DROP TABLE IF EXISTS tx_pool;
DROP TABLE auction_vars; DROP TABLE IF EXISTS auction_vars;
DROP TABLE rollup_vars; DROP TABLE IF EXISTS rollup_vars;
DROP TABLE escape_hatch_withdrawal; DROP TABLE IF EXISTS escape_hatch_withdrawal;
DROP TABLE bucket_update; DROP TABLE IF EXISTS bucket_update;
DROP TABLE token_exchange; DROP TABLE IF EXISTS token_exchange;
DROP TABLE wdelayer_vars; DROP TABLE IF EXISTS wdelayer_vars;
DROP TABLE tx; DROP TABLE IF EXISTS tx;
DROP TABLE exit_tree; DROP TABLE IF EXISTS exit_tree;
DROP TABLE account; DROP TABLE IF EXISTS account_update;
DROP TABLE token; DROP TABLE IF EXISTS account;
DROP TABLE bid; DROP TABLE IF EXISTS token;
DROP TABLE batch; DROP TABLE IF EXISTS bid;
DROP TABLE coordinator; DROP TABLE IF EXISTS batch;
DROP TABLE block; DROP TABLE IF EXISTS coordinator;
-- drop sequences DROP TABLE IF EXISTS block;
DROP SEQUENCE tx_item_id; -- sequences
DROP SEQUENCE IF EXISTS tx_item_id;
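For reference, the new account_update table maps naturally onto the common.AccountUpdate struct that the synchronizer changes below construct. Its definition is not part of this diff, so the struct here is a hypothetical sketch: the field names come from the synchronizer hunk, while the types and meddler tags are assumptions.

package common

import "math/big"

// AccountUpdate (sketch): one row per account touched by a batch, recording
// the resulting nonce and balance.
type AccountUpdate struct {
	EthBlockNum int64    `meddler:"eth_block_num"`
	BatchNum    BatchNum `meddler:"batch_num"`
	Idx         Idx      `meddler:"idx"`
	Nonce       Nonce    `meddler:"nonce"`
	Balance     *big.Int `meddler:"balance,bigint"`
}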

View File

@@ -275,8 +275,7 @@ func (s *StateDB) GetAccount(idx common.Idx) (*common.Account, error) {
return GetAccountInTreeDB(s.db.DB(), idx) return GetAccountInTreeDB(s.db.DB(), idx)
} }
// AccountsIter iterates over all the accounts in db, calling fn for each one func accountsIter(db db.Storage, fn func(a *common.Account) (bool, error)) error {
func AccountsIter(db db.Storage, fn func(a *common.Account) (bool, error)) error {
idxDB := db.WithPrefix(PrefixKeyIdx) idxDB := db.WithPrefix(PrefixKeyIdx)
if err := idxDB.Iterate(func(k []byte, v []byte) (bool, error) { if err := idxDB.Iterate(func(k []byte, v []byte) (bool, error) {
idx, err := common.IdxFromBytes(k) idx, err := common.IdxFromBytes(k)

View File

@@ -324,5 +324,6 @@ func (c *EthereumClient) EthCall(ctx context.Context, tx *types.Transaction,
Value: tx.Value(), Value: tx.Value(),
Data: tx.Data(), Data: tx.Data(),
} }
return c.client.CallContract(ctx, msg, blockNum) result, err := c.client.CallContract(ctx, msg, blockNum)
return result, tracerr.Wrap(err)
} }

View File

@@ -327,7 +327,7 @@ func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs, auth *bind.T
if auth == nil { if auth == nil {
auth, err = c.client.NewAuth() auth, err = c.client.NewAuth()
if err != nil { if err != nil {
return nil, err return nil, tracerr.Wrap(err)
} }
auth.GasLimit = 1000000 auth.GasLimit = 1000000
} }
@@ -393,7 +393,7 @@ func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs, auth *bind.T
l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch, l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch,
args.ProofA, args.ProofB, args.ProofC) args.ProofA, args.ProofB, args.ProofC)
if err != nil { if err != nil {
return nil, tracerr.Wrap(fmt.Errorf("Failed Hermez.ForgeBatch: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("Hermez.ForgeBatch: %w", err))
} }
return tx, nil return tx, nil
} }

go.mod
View File

@@ -21,6 +21,7 @@ require (
github.com/miguelmota/go-ethereum-hdwallet v0.0.0-20200123000308-a60dcd172b4c github.com/miguelmota/go-ethereum-hdwallet v0.0.0-20200123000308-a60dcd172b4c
github.com/mitchellh/copystructure v1.0.0 github.com/mitchellh/copystructure v1.0.0
github.com/mitchellh/mapstructure v1.3.0 github.com/mitchellh/mapstructure v1.3.0
github.com/prometheus/client_golang v1.3.0
github.com/rubenv/sql-migrate v0.0.0-20200616145509-8d140a17f351 github.com/rubenv/sql-migrate v0.0.0-20200616145509-8d140a17f351
github.com/russross/meddler v1.0.0 github.com/russross/meddler v1.0.0
github.com/stretchr/testify v1.6.1 github.com/stretchr/testify v1.6.1

go.sum
View File

@@ -68,6 +68,7 @@ github.com/aws/aws-sdk-go-v2 v0.18.0/go.mod h1:JWVYvqSMppoMJC0x5wdwiImzgXTI9FuZw
github.com/aymerick/raymond v2.0.3-0.20180322193309-b565731e1464+incompatible/go.mod h1:osfaiScAUVup+UC9Nfq76eWqDhXlp+4UYaA8uhTBO6g= github.com/aymerick/raymond v2.0.3-0.20180322193309-b565731e1464+incompatible/go.mod h1:osfaiScAUVup+UC9Nfq76eWqDhXlp+4UYaA8uhTBO6g=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q= github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8= github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs= github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/btcsuite/btcd v0.0.0-20171128150713-2e60448ffcc6/go.mod h1:Dmm/EzmjnCiweXmzRIAiUWCInVmPgjkzgv5k4tVyXiQ= github.com/btcsuite/btcd v0.0.0-20171128150713-2e60448ffcc6/go.mod h1:Dmm/EzmjnCiweXmzRIAiUWCInVmPgjkzgv5k4tVyXiQ=
@@ -444,6 +445,7 @@ github.com/mattn/go-sqlite3 v1.12.0/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsO
github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U= github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc= github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw= github.com/mattn/goveralls v0.0.2/go.mod h1:8d1ZMHsd7fW6IRPKQh46F2WRpyib5/X4FOpevwGNQEw=
github.com/matttproud/golang_protobuf_extensions v1.0.1 h1:4hp9jkHxhMHkqkrB3Ix0jegS5sx/RkqARlsWZ6pIwiU=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0= github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/mediocregopher/mediocre-go-lib v0.0.0-20181029021733-cb65787f37ed/go.mod h1:dSsfyI2zABAdhcbvkXqgxOxrCsbYeHCPgrZkku60dSg= github.com/mediocregopher/mediocre-go-lib v0.0.0-20181029021733-cb65787f37ed/go.mod h1:dSsfyI2zABAdhcbvkXqgxOxrCsbYeHCPgrZkku60dSg=
github.com/mediocregopher/radix/v3 v3.3.0/go.mod h1:EmfVyvspXz1uZEyPBMyGK+kjWiKQGvsUt6O3Pj+LDCQ= github.com/mediocregopher/radix/v3 v3.3.0/go.mod h1:EmfVyvspXz1uZEyPBMyGK+kjWiKQGvsUt6O3Pj+LDCQ=
@@ -544,23 +546,27 @@ github.com/prometheus/client_golang v0.9.3-0.20190127221311-3c4408c8b829/go.mod
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso= github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo= github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g= github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.3.0 h1:miYCvYqFXtl/J9FIy8eNpBfYthAEFg+Ys0XyUVEcDsc=
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og= github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo= github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.1.0 h1:ElTg5tNp4DqfV7UQjDqv2+RJlNzsDtvNAWccbItceIE=
github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA= github.com/prometheus/client_model v0.1.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro= github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.2.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4= github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc= github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.7.0 h1:L+1lyG48J1zAQXA3RBX/nG/B3gjlHq0zTt2tlbJLyCY=
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA= github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk= github.com/prometheus/procfs v0.0.0-20190117184657-bf6a532e95b1/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA= github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ= github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
github.com/prometheus/procfs v0.0.8 h1:+fpWZdT24pJBiqJdAwYBjPSk+5YmQzYNPYzQsdzLkt8=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A= github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/tsdb v0.6.2-0.20190402121629-4f204dcbc150/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU= github.com/prometheus/tsdb v0.6.2-0.20190402121629-4f204dcbc150/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/prometheus/tsdb v0.7.1 h1:YZcsG11NqnK4czYLrWd9mpEuAJIHVQLwdrleYfszMAA= github.com/prometheus/tsdb v0.7.1 h1:YZcsG11NqnK4czYLrWd9mpEuAJIHVQLwdrleYfszMAA=

View File

@@ -67,6 +67,11 @@ func Init(levelStr string, outputs []string) {
func sprintStackTrace(st []tracerr.Frame) string { func sprintStackTrace(st []tracerr.Frame) string {
builder := strings.Builder{} builder := strings.Builder{}
// Skip deepest frame because it belongs to the go runtime and we don't
// care about it.
if len(st) > 0 {
st = st[:len(st)-1]
}
for _, f := range st { for _, f := range st {
builder.WriteString(fmt.Sprintf("\n%s:%d %s()", f.Path, f.Line, f.Func)) builder.WriteString(fmt.Sprintf("\n%s:%d %s()", f.Path, f.Line, f.Func))
} }

View File

@@ -4,6 +4,7 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
"net"
"net/http" "net/http"
"sync" "sync"
"time" "time"
@@ -64,7 +65,8 @@ type Node struct {
// General // General
cfg *config.Node cfg *config.Node
mode Mode mode Mode
sqlConn *sqlx.DB sqlConnRead *sqlx.DB
sqlConnWrite *sqlx.DB
ctx context.Context ctx context.Context
wg sync.WaitGroup wg sync.WaitGroup
cancel context.CancelFunc cancel context.CancelFunc
@@ -74,15 +76,34 @@ type Node struct {
func NewNode(mode Mode, cfg *config.Node) (*Node, error) { func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
meddler.Debug = cfg.Debug.MeddlerLogs meddler.Debug = cfg.Debug.MeddlerLogs
// Establish DB connection // Establish DB connection
db, err := dbUtils.InitSQLDB( dbWrite, err := dbUtils.InitSQLDB(
cfg.PostgreSQL.Port, cfg.PostgreSQL.PortWrite,
cfg.PostgreSQL.Host, cfg.PostgreSQL.HostWrite,
cfg.PostgreSQL.User, cfg.PostgreSQL.UserWrite,
cfg.PostgreSQL.Password, cfg.PostgreSQL.PasswordWrite,
cfg.PostgreSQL.Name, cfg.PostgreSQL.NameWrite,
) )
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
}
var dbRead *sqlx.DB
if cfg.PostgreSQL.HostRead == "" {
dbRead = dbWrite
} else if cfg.PostgreSQL.HostRead == cfg.PostgreSQL.HostWrite {
return nil, tracerr.Wrap(fmt.Errorf(
"PostgreSQL.HostRead and PostgreSQL.HostWrite must be different",
))
} else {
dbRead, err = dbUtils.InitSQLDB(
cfg.PostgreSQL.PortRead,
cfg.PostgreSQL.HostRead,
cfg.PostgreSQL.UserRead,
cfg.PostgreSQL.PasswordRead,
cfg.PostgreSQL.NameRead,
)
if err != nil {
return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
}
} }
var apiConnCon *dbUtils.APIConnectionController var apiConnCon *dbUtils.APIConnectionController
if cfg.API.Explorer || mode == ModeCoordinator { if cfg.API.Explorer || mode == ModeCoordinator {
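The read/write split above implies a PostgreSQL configuration with separate write and read connection parameters. The repository's real config struct is not shown in this diff; the sketch below only mirrors the field names that NewNode reads (PortWrite, HostWrite, ..., HostRead, ...), and the grouping and types are assumptions.

package config

// PostgreSQLSketch is a hypothetical view of the PostgreSQL node configuration
// implied by NewNode above.
type PostgreSQLSketch struct {
	// Write (primary) connection, always required.
	PortWrite     int
	HostWrite     string
	UserWrite     string
	PasswordWrite string
	NameWrite     string
	// Optional read (replica) connection. If HostRead is empty, the write
	// connection is reused; otherwise it must differ from HostWrite.
	PortRead     int
	HostRead     string
	UserRead     string
	PasswordRead string
	NameRead     string
}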
@@ -92,7 +113,7 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
) )
} }
historyDB := historydb.NewHistoryDB(db, apiConnCon) historyDB := historydb.NewHistoryDB(dbRead, dbWrite, apiConnCon)
ethClient, err := ethclient.Dial(cfg.Web3.URL) ethClient, err := ethclient.Dial(cfg.Web3.URL)
if err != nil { if err != nil {
@@ -201,9 +222,10 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
var l2DB *l2db.L2DB var l2DB *l2db.L2DB
if mode == ModeCoordinator { if mode == ModeCoordinator {
l2DB = l2db.NewL2DB( l2DB = l2db.NewL2DB(
db, dbRead, dbWrite,
cfg.Coordinator.L2DB.SafetyPeriod, cfg.Coordinator.L2DB.SafetyPeriod,
cfg.Coordinator.L2DB.MaxTxs, cfg.Coordinator.L2DB.MaxTxs,
cfg.Coordinator.L2DB.MinFeeUSD,
cfg.Coordinator.L2DB.TTL.Duration, cfg.Coordinator.L2DB.TTL.Duration,
apiConnCon, apiConnCon,
) )
@@ -245,9 +267,6 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
if err != nil {
return nil, tracerr.Wrap(err)
}
serverProofs := make([]prover.Client, len(cfg.Coordinator.ServerProofs)) serverProofs := make([]prover.Client, len(cfg.Coordinator.ServerProofs))
for i, serverProofCfg := range cfg.Coordinator.ServerProofs { for i, serverProofCfg := range cfg.Coordinator.ServerProofs {
serverProofs[i] = prover.NewProofServerClient(serverProofCfg.URL, serverProofs[i] = prover.NewProofServerClient(serverProofCfg.URL,
@@ -302,6 +321,7 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
ForgeDelay: cfg.Coordinator.ForgeDelay.Duration, ForgeDelay: cfg.Coordinator.ForgeDelay.Duration,
ForgeNoTxsDelay: cfg.Coordinator.ForgeNoTxsDelay.Duration, ForgeNoTxsDelay: cfg.Coordinator.ForgeNoTxsDelay.Duration,
SyncRetryInterval: cfg.Coordinator.SyncRetryInterval.Duration, SyncRetryInterval: cfg.Coordinator.SyncRetryInterval.Duration,
PurgeByExtDelInterval: cfg.Coordinator.PurgeByExtDelInterval.Duration,
EthClientAttempts: cfg.Coordinator.EthClient.Attempts, EthClientAttempts: cfg.Coordinator.EthClient.Attempts,
EthClientAttemptsDelay: cfg.Coordinator.EthClient.AttemptsDelay.Duration, EthClientAttemptsDelay: cfg.Coordinator.EthClient.AttemptsDelay.Duration,
EthNoReuseNonce: cfg.Coordinator.EthClient.NoReuseNonce, EthNoReuseNonce: cfg.Coordinator.EthClient.NoReuseNonce,
@@ -316,6 +336,7 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
PurgeBlockDelay: cfg.Coordinator.L2DB.PurgeBlockDelay, PurgeBlockDelay: cfg.Coordinator.L2DB.PurgeBlockDelay,
InvalidateBlockDelay: cfg.Coordinator.L2DB.InvalidateBlockDelay, InvalidateBlockDelay: cfg.Coordinator.L2DB.InvalidateBlockDelay,
}, },
ForgeBatchGasCost: cfg.Coordinator.EthClient.ForgeBatchGasCost,
VerifierIdx: uint8(verifierIdx), VerifierIdx: uint8(verifierIdx),
TxProcessorConfig: txProcessorCfg, TxProcessorConfig: txProcessorCfg,
}, },
@@ -376,7 +397,7 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
} }
var debugAPI *debugapi.DebugAPI var debugAPI *debugapi.DebugAPI
if cfg.Debug.APIAddress != "" { if cfg.Debug.APIAddress != "" {
debugAPI = debugapi.NewDebugAPI(cfg.Debug.APIAddress, historyDB, stateDB, sync) debugAPI = debugapi.NewDebugAPI(cfg.Debug.APIAddress, stateDB, sync)
} }
priceUpdater, err := priceupdater.NewPriceUpdater(cfg.PriceUpdater.URL, priceUpdater, err := priceupdater.NewPriceUpdater(cfg.PriceUpdater.URL,
priceupdater.APIType(cfg.PriceUpdater.Type), historyDB) priceupdater.APIType(cfg.PriceUpdater.Type), historyDB)
@@ -392,7 +413,8 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
sync: sync, sync: sync,
cfg: cfg, cfg: cfg,
mode: mode, mode: mode,
sqlConn: db, sqlConnRead: dbRead,
sqlConnWrite: dbWrite,
ctx: ctx, ctx: ctx,
cancel: cancel, cancel: cancel,
}, nil }, nil
@@ -428,7 +450,6 @@ func NewNodeAPI(
coordinatorEndpoints, explorerEndpoints, coordinatorEndpoints, explorerEndpoints,
engine, engine,
hdb, hdb,
sdb,
l2db, l2db,
config, config,
) )
@@ -446,16 +467,20 @@ func NewNodeAPI(
// cancelation. // cancelation.
func (a *NodeAPI) Run(ctx context.Context) error { func (a *NodeAPI) Run(ctx context.Context) error {
server := &http.Server{ server := &http.Server{
Addr: a.addr,
Handler: a.engine, Handler: a.engine,
// TODO: Figure out best parameters for production // TODO: Figure out best parameters for production
ReadTimeout: 30 * time.Second, //nolint:gomnd ReadTimeout: 30 * time.Second, //nolint:gomnd
WriteTimeout: 30 * time.Second, //nolint:gomnd WriteTimeout: 30 * time.Second, //nolint:gomnd
MaxHeaderBytes: 1 << 20, //nolint:gomnd MaxHeaderBytes: 1 << 20, //nolint:gomnd
} }
go func() { listener, err := net.Listen("tcp", a.addr)
if err != nil {
return tracerr.Wrap(err)
}
log.Infof("NodeAPI is ready at %v", a.addr) log.Infof("NodeAPI is ready at %v", a.addr)
if err := server.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed { go func() {
if err := server.Serve(listener); err != nil &&
tracerr.Unwrap(err) != http.ErrServerClosed {
log.Fatalf("Listen: %s\n", err) log.Fatalf("Listen: %s\n", err)
} }
}() }()
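The pattern adopted above (open the listener synchronously, then call Serve in a goroutine) can be shown with a small self-contained program; the address and handler below are placeholders, not values from the repository.

package main

import (
	"log"
	"net"
	"net/http"
	"time"
)

func main() {
	server := &http.Server{Handler: http.DefaultServeMux}
	// Listening synchronously lets a bind error be returned to the caller
	// before any goroutine starts, and the "ready" log is printed only once
	// the port is actually open.
	listener, err := net.Listen("tcp", "localhost:8080")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	log.Printf("server is ready at %v", listener.Addr())
	go func() {
		if err := server.Serve(listener); err != nil && err != http.ErrServerClosed {
			log.Fatalf("serve: %v", err)
		}
	}()
	time.Sleep(100 * time.Millisecond) // keep the example short, then shut down
	_ = server.Close()
}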
@@ -547,9 +572,6 @@ func (n *Node) syncLoopFn(ctx context.Context, lastBlock *common.Block) (*common
WDelayer: blockData.WDelayer.Vars, WDelayer: blockData.WDelayer.Vars,
} }
n.handleNewBlock(ctx, stats, vars, blockData.Rollup.Batches) n.handleNewBlock(ctx, stats, vars, blockData.Rollup.Batches)
if n.debugAPI != nil {
n.debugAPI.SyncBlockHook()
}
return &blockData.Block, time.Duration(0), nil return &blockData.Block, time.Duration(0), nil
} else { } else {
// case: no block // case: no block
@@ -654,6 +676,10 @@ func (n *Node) StartNodeAPI() {
n.wg.Add(1) n.wg.Add(1)
go func() { go func() {
// Do an initial update on startup
if err := n.nodeAPI.api.UpdateMetrics(); err != nil {
log.Errorw("API.UpdateMetrics", "err", err)
}
for { for {
select { select {
case <-n.ctx.Done(): case <-n.ctx.Done():
@@ -670,6 +696,10 @@ func (n *Node) StartNodeAPI() {
n.wg.Add(1) n.wg.Add(1)
go func() { go func() {
// Do an initial update on startup
if err := n.nodeAPI.api.UpdateRecommendedFee(); err != nil {
log.Errorw("API.UpdateRecommendedFee", "err", err)
}
for { for {
select { select {
case <-n.ctx.Done(): case <-n.ctx.Done():

View File

@@ -20,7 +20,7 @@ func TestPriceUpdater(t *testing.T) {
pass := os.Getenv("POSTGRES_PASS") pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez") db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
assert.NoError(t, err) assert.NoError(t, err)
historyDB := historydb.NewHistoryDB(db, nil) historyDB := historydb.NewHistoryDB(db, db, nil)
// Clean DB // Clean DB
test.WipeDB(historyDB.DB()) test.WipeDB(historyDB.DB())
// Populate DB // Populate DB

synchronizer/metrics.go Normal file
View File

@@ -0,0 +1,44 @@
package synchronizer
import "github.com/prometheus/client_golang/prometheus"
var (
metricReorgsCount = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "sync_reorgs",
Help: "",
},
)
metricSyncedLastBlockNum = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "sync_synced_last_block_num",
Help: "",
},
)
metricEthLastBlockNum = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "sync_eth_last_block_num",
Help: "",
},
)
metricSyncedLastBatchNum = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "sync_synced_last_batch_num",
Help: "",
},
)
metricEthLastBatchNum = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "sync_eth_last_batch_num",
Help: "",
},
)
)
func init() {
prometheus.MustRegister(metricReorgsCount)
prometheus.MustRegister(metricSyncedLastBlockNum)
prometheus.MustRegister(metricEthLastBlockNum)
prometheus.MustRegister(metricSyncedLastBatchNum)
prometheus.MustRegister(metricEthLastBatchNum)
}

View File

@@ -662,12 +662,16 @@ func (s *Synchronizer) Sync2(ctx context.Context,
} }
for _, batchData := range rollupData.Batches { for _, batchData := range rollupData.Batches {
metricSyncedLastBatchNum.Set(float64(batchData.Batch.BatchNum))
metricEthLastBatchNum.Set(float64(s.stats.Eth.LastBatchNum))
log.Debugw("Synced batch", log.Debugw("Synced batch",
"syncLastBatch", batchData.Batch.BatchNum, "syncLastBatch", batchData.Batch.BatchNum,
"syncBatchesPerc", s.stats.batchesPerc(batchData.Batch.BatchNum), "syncBatchesPerc", s.stats.batchesPerc(batchData.Batch.BatchNum),
"ethLastBatch", s.stats.Eth.LastBatchNum, "ethLastBatch", s.stats.Eth.LastBatchNum,
) )
} }
metricSyncedLastBlockNum.Set(float64(s.stats.Sync.LastBlock.Num))
metricEthLastBlockNum.Set(float64(s.stats.Eth.LastBlock.Num))
log.Debugw("Synced block", log.Debugw("Synced block",
"syncLastBlockNum", s.stats.Sync.LastBlock.Num, "syncLastBlockNum", s.stats.Sync.LastBlock.Num,
"syncBlocksPerc", s.stats.blocksPerc(), "syncBlocksPerc", s.stats.blocksPerc(),
@@ -993,6 +997,19 @@ func (s *Synchronizer) rollupSync(ethBlock *common.Block) (*common.RollupData, e
} }
batchData.CreatedAccounts = processTxsOut.CreatedAccounts batchData.CreatedAccounts = processTxsOut.CreatedAccounts
batchData.UpdatedAccounts = make([]common.AccountUpdate, 0,
len(processTxsOut.UpdatedAccounts))
for _, acc := range processTxsOut.UpdatedAccounts {
batchData.UpdatedAccounts = append(batchData.UpdatedAccounts,
common.AccountUpdate{
EthBlockNum: blockNum,
BatchNum: batchNum,
Idx: acc.Idx,
Nonce: acc.Nonce,
Balance: acc.Balance,
})
}
slotNum := int64(0) slotNum := int64(0)
if ethBlock.Num >= s.consts.Auction.GenesisBlockNum { if ethBlock.Num >= s.consts.Auction.GenesisBlockNum {
slotNum = (ethBlock.Num - s.consts.Auction.GenesisBlockNum) / slotNum = (ethBlock.Num - s.consts.Auction.GenesisBlockNum) /

View File

@@ -171,6 +171,8 @@ func checkSyncBlock(t *testing.T, s *Synchronizer, blockNum int, block, syncBloc
*exit = syncBatch.ExitTree[j] *exit = syncBatch.ExitTree[j]
} }
assert.Equal(t, batch.Batch, syncBatch.Batch) assert.Equal(t, batch.Batch, syncBatch.Batch)
// Ignore updated accounts
syncBatch.UpdatedAccounts = nil
assert.Equal(t, batch, syncBatch) assert.Equal(t, batch, syncBatch)
assert.Equal(t, &batch.Batch, dbBatch) //nolint:gosec assert.Equal(t, &batch.Batch, dbBatch) //nolint:gosec
@@ -313,7 +315,7 @@ func newTestModules(t *testing.T) (*statedb.StateDB, *historydb.HistoryDB) {
pass := os.Getenv("POSTGRES_PASS") pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez") db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err) require.NoError(t, err)
historyDB := historydb.NewHistoryDB(db, nil) historyDB := historydb.NewHistoryDB(db, db, nil)
// Clear DB // Clear DB
test.WipeDB(historyDB.DB()) test.WipeDB(historyDB.DB())

View File

@@ -2,20 +2,18 @@ package debugapi
import ( import (
"context" "context"
"fmt" "net"
"math/big"
"net/http" "net/http"
"sync"
"time" "time"
"github.com/gin-contrib/cors" "github.com/gin-contrib/cors"
"github.com/gin-gonic/gin" "github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/common" "github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/hermez-node/db/statedb" "github.com/hermeznetwork/hermez-node/db/statedb"
"github.com/hermeznetwork/hermez-node/log" "github.com/hermeznetwork/hermez-node/log"
"github.com/hermeznetwork/hermez-node/synchronizer" "github.com/hermeznetwork/hermez-node/synchronizer"
"github.com/hermeznetwork/tracerr" "github.com/hermeznetwork/tracerr"
"github.com/prometheus/client_golang/prometheus/promhttp"
) )
func handleNoRoute(c *gin.Context) { func handleNoRoute(c *gin.Context) {
@@ -35,109 +33,22 @@ func badReq(err error, c *gin.Context) {
}) })
} }
const (
statusUpdating = "updating"
statusOK = "ok"
)
type tokenBalances struct {
sync.RWMutex
Value struct {
Status string
Block *common.Block
Batch *common.Batch
Balances map[common.TokenID]*big.Int
}
}
func (t *tokenBalances) Update(historyDB *historydb.HistoryDB, sdb *statedb.StateDB) (err error) {
var block *common.Block
var batch *common.Batch
var balances map[common.TokenID]*big.Int
defer func() {
t.Lock()
if err == nil {
t.Value.Status = statusOK
t.Value.Block = block
t.Value.Batch = batch
t.Value.Balances = balances
} else {
t.Value.Status = fmt.Sprintf("tokenBalances.Update: %v", err)
t.Value.Block = nil
t.Value.Batch = nil
t.Value.Balances = nil
}
t.Unlock()
}()
if block, err = historyDB.GetLastBlock(); err != nil {
return tracerr.Wrap(err)
}
if batch, err = historyDB.GetLastBatch(); err != nil {
return tracerr.Wrap(err)
}
balances = make(map[common.TokenID]*big.Int)
sdb.LastRead(func(sdbLast *statedb.Last) error {
return tracerr.Wrap(
statedb.AccountsIter(sdbLast.DB(), func(a *common.Account) (bool, error) {
if balance, ok := balances[a.TokenID]; !ok {
balances[a.TokenID] = a.Balance
} else {
balance.Add(balance, a.Balance)
}
return true, nil
}),
)
})
return nil
}
// DebugAPI is an http API with debugging endpoints // DebugAPI is an http API with debugging endpoints
type DebugAPI struct { type DebugAPI struct {
addr string addr string
historyDB *historydb.HistoryDB
stateDB *statedb.StateDB // synchronizer statedb stateDB *statedb.StateDB // synchronizer statedb
sync *synchronizer.Synchronizer sync *synchronizer.Synchronizer
tokenBalances tokenBalances
} }
// NewDebugAPI creates a new DebugAPI // NewDebugAPI creates a new DebugAPI
func NewDebugAPI(addr string, historyDB *historydb.HistoryDB, stateDB *statedb.StateDB, func NewDebugAPI(addr string, stateDB *statedb.StateDB, sync *synchronizer.Synchronizer) *DebugAPI {
sync *synchronizer.Synchronizer) *DebugAPI {
return &DebugAPI{ return &DebugAPI{
addr: addr, addr: addr,
historyDB: historyDB,
stateDB: stateDB, stateDB: stateDB,
sync: sync, sync: sync,
} }
} }
// SyncBlockHook is a hook function that the node will call after every new synchronized block
func (a *DebugAPI) SyncBlockHook() {
a.tokenBalances.RLock()
updateTokenBalances := a.tokenBalances.Value.Status == statusUpdating
a.tokenBalances.RUnlock()
if updateTokenBalances {
if err := a.tokenBalances.Update(a.historyDB, a.stateDB); err != nil {
log.Errorw("DebugAPI.tokenBalances.Upate", "err", err)
}
}
}
func (a *DebugAPI) handleTokenBalances(c *gin.Context) {
a.tokenBalances.RLock()
tokenBalances := a.tokenBalances.Value
a.tokenBalances.RUnlock()
c.JSON(http.StatusOK, tokenBalances)
}
func (a *DebugAPI) handlePostTokenBalances(c *gin.Context) {
a.tokenBalances.Lock()
a.tokenBalances.Value.Status = statusUpdating
a.tokenBalances.Unlock()
c.JSON(http.StatusOK, nil)
}
func (a *DebugAPI) handleAccount(c *gin.Context) { func (a *DebugAPI) handleAccount(c *gin.Context) {
uri := struct { uri := struct {
Idx uint32 Idx uint32
@@ -198,6 +109,8 @@ func (a *DebugAPI) Run(ctx context.Context) error {
api.Use(cors.Default()) api.Use(cors.Default())
debugAPI := api.Group("/debug") debugAPI := api.Group("/debug")
debugAPI.GET("/metrics", gin.WrapH(promhttp.Handler()))
debugAPI.GET("sdb/batchnum", a.handleCurrentBatch) debugAPI.GET("sdb/batchnum", a.handleCurrentBatch)
debugAPI.GET("sdb/mtroot", a.handleMTRoot) debugAPI.GET("sdb/mtroot", a.handleMTRoot)
// Accounts returned by these endpoints will always have BatchNum = 0, // Accounts returned by these endpoints will always have BatchNum = 0,
@@ -205,22 +118,24 @@ func (a *DebugAPI) Run(ctx context.Context) error {
// is created. // is created.
debugAPI.GET("sdb/accounts", a.handleAccounts) debugAPI.GET("sdb/accounts", a.handleAccounts)
debugAPI.GET("sdb/accounts/:Idx", a.handleAccount) debugAPI.GET("sdb/accounts/:Idx", a.handleAccount)
debugAPI.POST("sdb/tokenbalances", a.handlePostTokenBalances)
debugAPI.GET("sdb/tokenbalances", a.handleTokenBalances)
debugAPI.GET("sync/stats", a.handleSyncStats) debugAPI.GET("sync/stats", a.handleSyncStats)
debugAPIServer := &http.Server{ debugAPIServer := &http.Server{
Addr: a.addr,
Handler: api, Handler: api,
// Use some hardcoded numberes that are suitable for testing // Use some hardcoded numbers that are suitable for testing
ReadTimeout: 30 * time.Second, //nolint:gomnd ReadTimeout: 30 * time.Second, //nolint:gomnd
WriteTimeout: 30 * time.Second, //nolint:gomnd WriteTimeout: 30 * time.Second, //nolint:gomnd
MaxHeaderBytes: 1 << 20, //nolint:gomnd MaxHeaderBytes: 1 << 20, //nolint:gomnd
} }
go func() { listener, err := net.Listen("tcp", a.addr)
if err != nil {
return tracerr.Wrap(err)
}
log.Infof("DebugAPI is ready at %v", a.addr) log.Infof("DebugAPI is ready at %v", a.addr)
if err := debugAPIServer.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed { go func() {
if err := debugAPIServer.Serve(listener); err != nil &&
tracerr.Unwrap(err) != http.ErrServerClosed {
log.Fatalf("Listen: %s\n", err) log.Fatalf("Listen: %s\n", err)
} }
}() }()

View File

@@ -6,21 +6,13 @@ import (
"io/ioutil" "io/ioutil"
"math/big" "math/big"
"net/http" "net/http"
"os"
"strconv" "strconv"
"sync"
"testing" "testing"
"github.com/dghubble/sling" "github.com/dghubble/sling"
ethCommon "github.com/ethereum/go-ethereum/common"
ethCrypto "github.com/ethereum/go-ethereum/crypto" ethCrypto "github.com/ethereum/go-ethereum/crypto"
"github.com/hermeznetwork/hermez-node/common" "github.com/hermeznetwork/hermez-node/common"
dbUtils "github.com/hermeznetwork/hermez-node/db"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/hermez-node/db/statedb" "github.com/hermeznetwork/hermez-node/db/statedb"
"github.com/hermeznetwork/hermez-node/test"
"github.com/hermeznetwork/hermez-node/test/til"
"github.com/hermeznetwork/hermez-node/txprocessor"
"github.com/iden3/go-iden3-crypto/babyjub" "github.com/iden3/go-iden3-crypto/babyjub"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
@@ -59,15 +51,12 @@ func TestDebugAPI(t *testing.T) {
addr := "localhost:12345" addr := "localhost:12345"
// We won't test the sync/stats endpoint, so we can set the Synchronizer to nil // We won't test the sync/stats endpoint, so we can set the Synchronizer to nil
debugAPI := NewDebugAPI(addr, nil, sdb, nil) debugAPI := NewDebugAPI(addr, sdb, nil)
ctx, cancel := context.WithCancel(context.Background()) ctx, cancel := context.WithCancel(context.Background())
wg := sync.WaitGroup{}
wg.Add(1)
go func() { go func() {
err := debugAPI.Run(ctx) err := debugAPI.Run(ctx)
require.Nil(t, err) require.Nil(t, err)
wg.Done()
}() }()
var accounts []common.Account var accounts []common.Account
@@ -113,130 +102,4 @@ func TestDebugAPI(t *testing.T) {
assert.Equal(t, accounts, accountsAPI) assert.Equal(t, accounts, accountsAPI)
cancel() cancel()
wg.Wait()
}
func TestDebugAPITokenBalances(t *testing.T) {
dir, err := ioutil.TempDir("", "tmpdb")
require.Nil(t, err)
sdb, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128, Type: statedb.TypeSynchronizer, NLevels: 32})
require.Nil(t, err)
// Init History DB
pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
historyDB := historydb.NewHistoryDB(db, nil)
// Clear DB
test.WipeDB(historyDB.DB())
set := `
Type: Blockchain
AddToken(1)
AddToken(2)
AddToken(3)
CreateAccountDeposit(1) A: 1000
CreateAccountDeposit(2) A: 2000
CreateAccountDeposit(1) B: 100
CreateAccountDeposit(2) B: 200
CreateAccountDeposit(2) C: 400
> batchL1 // forge L1UserTxs{nil}, freeze defined L1UserTxs{5}
> batchL1 // forge defined L1UserTxs{5}, freeze L1UserTxs{nil}
> block // blockNum=2
`
chainID := uint16(0)
tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
tilCfgExtra := til.ConfigExtra{
BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
CoordUser: "A",
}
blocks, err := tc.GenerateBlocks(set)
require.NoError(t, err)
err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
require.NoError(t, err)
tc.FillBlocksL1UserTxsBatchNum(blocks)
err = tc.FillBlocksForgedL1UserTxs(blocks)
require.NoError(t, err)
tpc := txprocessor.Config{
NLevels: 32,
MaxTx: 100,
ChainID: chainID,
MaxFeeTx: common.RollupConstMaxFeeIdxCoordinator,
MaxL1Tx: common.RollupConstMaxL1Tx,
}
tp := txprocessor.NewTxProcessor(sdb, tpc)
for _, block := range blocks {
require.NoError(t, historyDB.AddBlockSCData(&block))
for _, batch := range block.Rollup.Batches {
_, err := tp.ProcessTxs(batch.Batch.FeeIdxsCoordinator,
batch.L1UserTxs, batch.L1CoordinatorTxs, []common.PoolL2Tx{})
require.NoError(t, err)
}
}
addr := "localhost:12345"
// We won't test the sync/stats endpoint, so we can se the Syncrhonizer to nil
debugAPI := NewDebugAPI(addr, historyDB, sdb, nil)
ctx, cancel := context.WithCancel(context.Background())
wg := sync.WaitGroup{}
wg.Add(1)
go func() {
err := debugAPI.Run(ctx)
require.Nil(t, err)
wg.Done()
}()
var accounts []common.Account
for i := 0; i < 16; i++ {
account := newAccount(t, i)
accounts = append(accounts, *account)
_, err = sdb.CreateAccount(account.Idx, account)
require.Nil(t, err)
}
// Make a checkpoint (batchNum 2) to make the accounts available in Last
err = sdb.MakeCheckpoint()
require.Nil(t, err)
url := fmt.Sprintf("http://%v/debug/", addr)
var batchNum common.BatchNum
req, err := sling.New().Get(url).Path("sdb/batchnum").ReceiveSuccess(&batchNum)
require.Equal(t, http.StatusOK, req.StatusCode)
require.Nil(t, err)
assert.Equal(t, common.BatchNum(2), batchNum)
var mtroot *big.Int
req, err = sling.New().Get(url).Path("sdb/mtroot").ReceiveSuccess(&mtroot)
require.Equal(t, http.StatusOK, req.StatusCode)
require.Nil(t, err)
// Testing against a hardcoded value obtained by running the test and
// printing the value previously.
assert.Equal(t, "21765339739823365993496282904432398015268846626944509989242908567129545640185",
mtroot.String())
var accountAPI common.Account
req, err = sling.New().Get(url).
Path(fmt.Sprintf("sdb/accounts/%v", accounts[0].Idx)).
ReceiveSuccess(&accountAPI)
require.Equal(t, http.StatusOK, req.StatusCode)
require.Nil(t, err)
assert.Equal(t, accounts[0], accountAPI)
var accountsAPI []common.Account
req, err = sling.New().Get(url).Path("sdb/accounts").ReceiveSuccess(&accountsAPI)
require.Equal(t, http.StatusOK, req.StatusCode)
require.Nil(t, err)
assert.Equal(t, accounts, accountsAPI)
cancel()
wg.Wait()
} }

View File

@@ -4,6 +4,7 @@ import (
"context" "context"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net"
"net/http" "net/http"
"sync" "sync"
"time" "time"
@@ -145,7 +146,7 @@ const longWaitDuration = 999 * time.Hour
// const provingDuration = 2 * time.Second // const provingDuration = 2 * time.Second
func (s *Mock) runProver(ctx context.Context) { func (s *Mock) runProver(ctx context.Context) {
waitCh := time.After(longWaitDuration) timer := time.NewTimer(longWaitDuration)
for { for {
select { select {
case <-ctx.Done(): case <-ctx.Done():
@@ -153,21 +154,27 @@ func (s *Mock) runProver(ctx context.Context) {
case msg := <-s.msgCh: case msg := <-s.msgCh:
switch msg.value { switch msg.value {
case "cancel": case "cancel":
waitCh = time.After(longWaitDuration) if !timer.Stop() {
<-timer.C
}
timer.Reset(longWaitDuration)
s.Lock() s.Lock()
if !s.status.IsReady() { if !s.status.IsReady() {
s.status = prover.StatusCodeAborted s.status = prover.StatusCodeAborted
} }
s.Unlock() s.Unlock()
case "prove": case "prove":
waitCh = time.After(s.provingDuration) if !timer.Stop() {
<-timer.C
}
timer.Reset(s.provingDuration)
s.Lock() s.Lock()
s.status = prover.StatusCodeBusy s.status = prover.StatusCodeBusy
s.Unlock() s.Unlock()
} }
msg.ackCh <- true msg.ackCh <- true
case <-waitCh: case <-timer.C:
waitCh = time.After(longWaitDuration) timer.Reset(longWaitDuration)
s.Lock() s.Lock()
if s.status != prover.StatusCodeBusy { if s.status != prover.StatusCodeBusy {
s.Unlock() s.Unlock()
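The timer handling above follows the standard stop-drain-reset idiom for time.Timer. A generic, self-contained illustration of that idiom (not repository code):

package main

import (
	"fmt"
	"time"
)

// resetTimer stops the timer and, if it had already fired, drains the pending
// value from its channel before resetting it, so a later receive never sees a
// stale tick. The select/default makes the drain safe even if the value was
// already consumed; the code above can use a plain receive because nothing
// else reads timer.C at that point.
func resetTimer(t *time.Timer, d time.Duration) {
	if !t.Stop() {
		select {
		case <-t.C:
		default:
		}
	}
	t.Reset(d)
}

func main() {
	t := time.NewTimer(time.Hour)
	resetTimer(t, 50*time.Millisecond)
	<-t.C
	fmt.Println("timer fired after reset")
}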
@@ -202,16 +209,20 @@ func (s *Mock) Run(ctx context.Context) error {
apiGroup.POST("/cancel", s.handleCancel) apiGroup.POST("/cancel", s.handleCancel)
debugAPIServer := &http.Server{ debugAPIServer := &http.Server{
Addr: s.addr,
Handler: api, Handler: api,
// Use some hardcoded numbers that are suitable for testing // Use some hardcoded numbers that are suitable for testing
ReadTimeout: 30 * time.Second, //nolint:gomnd ReadTimeout: 30 * time.Second, //nolint:gomnd
WriteTimeout: 30 * time.Second, //nolint:gomnd WriteTimeout: 30 * time.Second, //nolint:gomnd
MaxHeaderBytes: 1 << 20, //nolint:gomnd MaxHeaderBytes: 1 << 20, //nolint:gomnd
} }
go func() { listener, err := net.Listen("tcp", s.addr)
if err != nil {
return tracerr.Wrap(err)
}
log.Infof("prover.MockServer is ready at %v", s.addr) log.Infof("prover.MockServer is ready at %v", s.addr)
if err := debugAPIServer.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed { go func() {
if err := debugAPIServer.Serve(listener); err != nil &&
tracerr.Unwrap(err) != http.ErrServerClosed {
log.Fatalf("Listen: %s\n", err) log.Fatalf("Listen: %s\n", err)
} }
}() }()

View File

@@ -38,7 +38,7 @@ func addTokens(t *testing.T, tc *til.Context, db *sqlx.DB) {
}) })
} }
hdb := historydb.NewHistoryDB(db, nil) hdb := historydb.NewHistoryDB(db, db, nil)
assert.NoError(t, hdb.AddBlock(&common.Block{ assert.NoError(t, hdb.AddBlock(&common.Block{
Num: 1, Num: 1,
})) }))
@@ -75,7 +75,7 @@ func initTxSelector(t *testing.T, chainID uint16, hermezContractAddr ethCommon.A
pass := os.Getenv("POSTGRES_PASS") pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez") db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err) require.NoError(t, err)
l2DB := l2db.NewL2DB(db, 10, 100, 24*time.Hour, nil) l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
dir, err := ioutil.TempDir("", "tmpSyncDB") dir, err := ioutil.TempDir("", "tmpSyncDB")
require.NoError(t, err) require.NoError(t, err)

View File

@@ -27,6 +27,9 @@ type TxProcessor struct {
// AccumulatedFees contains the accumulated fees for each token (Coord // AccumulatedFees contains the accumulated fees for each token (Coord
// Idx) in the processed batch // Idx) in the processed batch
AccumulatedFees map[common.Idx]*big.Int AccumulatedFees map[common.Idx]*big.Int
// updatedAccounts stores the last version of the account when it has
// been created/updated by any of the processed transactions.
updatedAccounts map[common.Idx]*common.Account
config Config config Config
} }
@@ -55,6 +58,9 @@ type ProcessTxOutput struct {
CreatedAccounts []common.Account CreatedAccounts []common.Account
CoordinatorIdxsMap map[common.TokenID]common.Idx CoordinatorIdxsMap map[common.TokenID]common.Idx
CollectedFees map[common.TokenID]*big.Int CollectedFees map[common.TokenID]*big.Int
// UpdatedAccounts returns the current state of each account
// created/updated by any of the processed transactions.
UpdatedAccounts map[common.Idx]*common.Account
} }
func newErrorNotEnoughBalance(tx common.Tx) error { func newErrorNotEnoughBalance(tx common.Tx) error {
@@ -127,6 +133,10 @@ func (tp *TxProcessor) ProcessTxs(coordIdxs []common.Idx, l1usertxs, l1coordinat
return nil, tracerr.Wrap(fmt.Errorf("L1UserTx + L1CoordinatorTx (%d) can not be bigger than MaxL1Tx (%d)", len(l1usertxs)+len(l1coordinatortxs), tp.config.MaxTx)) return nil, tracerr.Wrap(fmt.Errorf("L1UserTx + L1CoordinatorTx (%d) can not be bigger than MaxL1Tx (%d)", len(l1usertxs)+len(l1coordinatortxs), tp.config.MaxTx))
} }
if tp.s.Type() == statedb.TypeSynchronizer {
tp.updatedAccounts = make(map[common.Idx]*common.Account)
}
exits := make([]processedExit, nTx) exits := make([]processedExit, nTx)
if tp.s.Type() == statedb.TypeBatchBuilder { if tp.s.Type() == statedb.TypeBatchBuilder {
@@ -198,7 +208,7 @@ func (tp *TxProcessor) ProcessTxs(coordIdxs []common.Idx, l1usertxs, l1coordinat
} }
} }
if tp.s.Type() == statedb.TypeSynchronizer || tp.s.Type() == statedb.TypeBatchBuilder { if tp.s.Type() == statedb.TypeSynchronizer || tp.s.Type() == statedb.TypeBatchBuilder {
if exitIdx != nil && exitTree != nil { if exitIdx != nil && exitTree != nil && exitAccount != nil {
exits[tp.i] = processedExit{ exits[tp.i] = processedExit{
exit: true, exit: true,
newExit: newExit, newExit: newExit,
@@ -382,7 +392,7 @@ func (tp *TxProcessor) ProcessTxs(coordIdxs []common.Idx, l1usertxs, l1coordinat
tp.zki.EthAddr3[iFee] = common.EthAddrToBigInt(accCoord.EthAddr) tp.zki.EthAddr3[iFee] = common.EthAddrToBigInt(accCoord.EthAddr)
} }
accCoord.Balance = new(big.Int).Add(accCoord.Balance, accumulatedFee) accCoord.Balance = new(big.Int).Add(accCoord.Balance, accumulatedFee)
pFee, err := tp.s.UpdateAccount(idx, accCoord) pFee, err := tp.updateAccount(idx, accCoord)
if err != nil { if err != nil {
log.Error(err) log.Error(err)
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
@@ -407,6 +417,7 @@ func (tp *TxProcessor) ProcessTxs(coordIdxs []common.Idx, l1usertxs, l1coordinat
return nil, nil return nil, nil
} }
if tp.s.Type() == statedb.TypeSynchronizer {
// once all txs processed (exitTree root frozen), for each Exit, // once all txs processed (exitTree root frozen), for each Exit,
// generate common.ExitInfo data // generate common.ExitInfo data
var exitInfos []common.ExitInfo var exitInfos []common.ExitInfo
@@ -438,8 +449,7 @@ func (tp *TxProcessor) ProcessTxs(coordIdxs []common.Idx, l1usertxs, l1coordinat
} }
} }
if tp.s.Type() == statedb.TypeSynchronizer { // return exitInfos, createdAccounts and collectedFees, so Synchronizer will
// return exitInfos, createdAccounts and collectedFees, so Synchronizer will
// be able to store it into HistoryDB for the concrete BatchNum // be able to store it into HistoryDB for the concrete BatchNum
return &ProcessTxOutput{ return &ProcessTxOutput{
ZKInputs: nil, ZKInputs: nil,
@@ -447,6 +457,7 @@ func (tp *TxProcessor) ProcessTxs(coordIdxs []common.Idx, l1usertxs, l1coordinat
CreatedAccounts: createdAccounts, CreatedAccounts: createdAccounts,
CoordinatorIdxsMap: coordIdxsMap, CoordinatorIdxsMap: coordIdxsMap,
CollectedFees: collectedFees, CollectedFees: collectedFees,
UpdatedAccounts: tp.updatedAccounts,
}, nil }, nil
} }
@@ -594,7 +605,7 @@ func (tp *TxProcessor) ProcessL1Tx(exitTree *merkletree.MerkleTree, tx *common.L
// execute exit flow // execute exit flow
// coordIdxsMap is 'nil', as at L1Txs there is no L2 fees // coordIdxsMap is 'nil', as at L1Txs there is no L2 fees
exitAccount, newExit, err := tp.applyExit(nil, nil, exitTree, tx.Tx()) exitAccount, newExit, err := tp.applyExit(nil, nil, exitTree, tx.Tx(), tx.Amount)
if err != nil { if err != nil {
log.Error(err) log.Error(err)
return nil, nil, false, nil, tracerr.Wrap(err) return nil, nil, false, nil, tracerr.Wrap(err)
@@ -719,7 +730,7 @@ func (tp *TxProcessor) ProcessL2Tx(coordIdxsMap map[common.TokenID]common.Idx,
} }
case common.TxTypeExit: case common.TxTypeExit:
// execute exit flow // execute exit flow
exitAccount, newExit, err := tp.applyExit(coordIdxsMap, collectedFees, exitTree, tx.Tx()) exitAccount, newExit, err := tp.applyExit(coordIdxsMap, collectedFees, exitTree, tx.Tx(), tx.Amount)
if err != nil { if err != nil {
log.Error(err) log.Error(err)
return nil, nil, false, tracerr.Wrap(err) return nil, nil, false, tracerr.Wrap(err)
@@ -741,7 +752,7 @@ func (tp *TxProcessor) applyCreateAccount(tx *common.L1Tx) error {
EthAddr: tx.FromEthAddr, EthAddr: tx.FromEthAddr,
} }
p, err := tp.s.CreateAccount(common.Idx(tp.s.CurrentIdx()+1), account) p, err := tp.createAccount(common.Idx(tp.s.CurrentIdx()+1), account)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -776,6 +787,28 @@ func (tp *TxProcessor) applyCreateAccount(tx *common.L1Tx) error {
return tp.s.SetCurrentIdx(tp.s.CurrentIdx() + 1) return tp.s.SetCurrentIdx(tp.s.CurrentIdx() + 1)
} }
// createAccount is a wrapper over the StateDB.CreateAccount method that also
// stores the created account in the updatedAccounts map in case the StateDB is
// of TypeSynchronizer
func (tp *TxProcessor) createAccount(idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
if tp.s.Type() == statedb.TypeSynchronizer {
account.Idx = idx
tp.updatedAccounts[idx] = account
}
return tp.s.CreateAccount(idx, account)
}
// updateAccount is a wrapper over the StateDB.UpdateAccount method that also
// stores the updated account in the updatedAccounts map in case the StateDB is
// of TypeSynchronizer
func (tp *TxProcessor) updateAccount(idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
if tp.s.Type() == statedb.TypeSynchronizer {
account.Idx = idx
tp.updatedAccounts[idx] = account
}
return tp.s.UpdateAccount(idx, account)
}
// applyDeposit updates the balance in the account of the depositer, if // applyDeposit updates the balance in the account of the depositer, if
// andTransfer parameter is set to true, the method will also apply the // andTransfer parameter is set to true, the method will also apply the
// Transfer of the L1Tx/DepositTransfer // Transfer of the L1Tx/DepositTransfer
@@ -806,7 +839,7 @@ func (tp *TxProcessor) applyDeposit(tx *common.L1Tx, transfer bool) error {
} }
// update sender account in localStateDB // update sender account in localStateDB
p, err := tp.s.UpdateAccount(tx.FromIdx, accSender) p, err := tp.updateAccount(tx.FromIdx, accSender)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -843,7 +876,7 @@ func (tp *TxProcessor) applyDeposit(tx *common.L1Tx, transfer bool) error {
accReceiver.Balance = new(big.Int).Add(accReceiver.Balance, tx.EffectiveAmount) accReceiver.Balance = new(big.Int).Add(accReceiver.Balance, tx.EffectiveAmount)
// update receiver account in localStateDB // update receiver account in localStateDB
p, err := tp.s.UpdateAccount(tx.ToIdx, accReceiver) p, err := tp.updateAccount(tx.ToIdx, accReceiver)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -926,7 +959,7 @@ func (tp *TxProcessor) applyTransfer(coordIdxsMap map[common.TokenID]common.Idx,
} }
// update sender account in localStateDB // update sender account in localStateDB
pSender, err := tp.s.UpdateAccount(tx.FromIdx, accSender) pSender, err := tp.updateAccount(tx.FromIdx, accSender)
if err != nil { if err != nil {
log.Error(err) log.Error(err)
return tracerr.Wrap(err) return tracerr.Wrap(err)
@@ -965,7 +998,7 @@ func (tp *TxProcessor) applyTransfer(coordIdxsMap map[common.TokenID]common.Idx,
accReceiver.Balance = new(big.Int).Add(accReceiver.Balance, tx.Amount) accReceiver.Balance = new(big.Int).Add(accReceiver.Balance, tx.Amount)
// update receiver account in localStateDB // update receiver account in localStateDB
pReceiver, err := tp.s.UpdateAccount(auxToIdx, accReceiver) pReceiver, err := tp.updateAccount(auxToIdx, accReceiver)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -1008,7 +1041,7 @@ func (tp *TxProcessor) applyCreateAccountDepositTransfer(tx *common.L1Tx) error
} }
// create Account of the Sender // create Account of the Sender
p, err := tp.s.CreateAccount(common.Idx(tp.s.CurrentIdx()+1), accSender) p, err := tp.createAccount(common.Idx(tp.s.CurrentIdx()+1), accSender)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -1056,7 +1089,7 @@ func (tp *TxProcessor) applyCreateAccountDepositTransfer(tx *common.L1Tx) error
accReceiver.Balance = new(big.Int).Add(accReceiver.Balance, tx.EffectiveAmount) accReceiver.Balance = new(big.Int).Add(accReceiver.Balance, tx.EffectiveAmount)
// update receiver account in localStateDB // update receiver account in localStateDB
p, err = tp.s.UpdateAccount(tx.ToIdx, accReceiver) p, err = tp.updateAccount(tx.ToIdx, accReceiver)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -1071,7 +1104,7 @@ func (tp *TxProcessor) applyCreateAccountDepositTransfer(tx *common.L1Tx) error
// new Leaf in the ExitTree. // new Leaf in the ExitTree.
func (tp *TxProcessor) applyExit(coordIdxsMap map[common.TokenID]common.Idx, func (tp *TxProcessor) applyExit(coordIdxsMap map[common.TokenID]common.Idx,
collectedFees map[common.TokenID]*big.Int, exitTree *merkletree.MerkleTree, collectedFees map[common.TokenID]*big.Int, exitTree *merkletree.MerkleTree,
tx common.Tx) (*common.Account, bool, error) { tx common.Tx, originalAmount *big.Int) (*common.Account, bool, error) {
// 0. subtract tx.Amount from current Account in StateMT // 0. subtract tx.Amount from current Account in StateMT
// add the tx.Amount into the Account (tx.FromIdx) in the ExitMT // add the tx.Amount into the Account (tx.FromIdx) in the ExitMT
acc, err := tp.s.GetAccount(tx.FromIdx) acc, err := tp.s.GetAccount(tx.FromIdx)
@@ -1130,7 +1163,7 @@ func (tp *TxProcessor) applyExit(coordIdxsMap map[common.TokenID]common.Idx,
} }
} }
p, err := tp.s.UpdateAccount(tx.FromIdx, acc) p, err := tp.updateAccount(tx.FromIdx, acc)
if err != nil { if err != nil {
return nil, false, tracerr.Wrap(err) return nil, false, tracerr.Wrap(err)
} }
@@ -1141,6 +1174,21 @@ func (tp *TxProcessor) applyExit(coordIdxsMap map[common.TokenID]common.Idx,
if exitTree == nil { if exitTree == nil {
return nil, false, nil return nil, false, nil
} }
// Do not add the Exit when Amount=0 (as opposed to EffectiveAmount=0). In
// txprocessor.applyExit the tx.Amount is in reality the EffectiveAmount,
// which is why the originalAmount parameter is used here: it contains the
// real value of tx.Amount (not tx.EffectiveAmount). This is a
// particularity of the circuit's approach; the idea is to update the
// circuit in the future so that, when Amount>0 but EffectiveAmount=0, the
// Exit is not added to the Exits MerkleTree, but for the moment the Go
// code is adapted to the circuit.
if originalAmount.Cmp(big.NewInt(0)) == 0 { // Amount == 0
// if the Exit Amount==0, the Exit is not added to the ExitTree
return nil, false, nil
}
exitAccount, err := statedb.GetAccountInTreeDB(exitTree.DB(), tx.FromIdx) exitAccount, err := statedb.GetAccountInTreeDB(exitTree.DB(), tx.FromIdx)
if tracerr.Unwrap(err) == db.ErrNotFound { if tracerr.Unwrap(err) == db.ErrNotFound {
// 1a. if idx does not exist in exitTree: // 1a. if idx does not exist in exitTree:
@@ -1149,6 +1197,8 @@ func (tp *TxProcessor) applyExit(coordIdxsMap map[common.TokenID]common.Idx,
exitAccount := &common.Account{ exitAccount := &common.Account{
TokenID: acc.TokenID, TokenID: acc.TokenID,
Nonce: common.Nonce(0), Nonce: common.Nonce(0),
// as this is a common.Tx, the Amount is already an
// EffectiveAmount
Balance: tx.Amount, Balance: tx.Amount,
BJJ: acc.BJJ, BJJ: acc.BJJ,
EthAddr: acc.EthAddr, EthAddr: acc.EthAddr,
@@ -1162,6 +1212,8 @@ func (tp *TxProcessor) applyExit(coordIdxsMap map[common.TokenID]common.Idx,
tp.zki.Sign2[tp.i] = big.NewInt(1) tp.zki.Sign2[tp.i] = big.NewInt(1)
} }
tp.zki.Ay2[tp.i] = accBJJY tp.zki.Ay2[tp.i] = accBJJY
// as this is a common.Tx, the Amount is already an
// EffectiveAmount
tp.zki.Balance2[tp.i] = tx.Amount tp.zki.Balance2[tp.i] = tx.Amount
tp.zki.EthAddr2[tp.i] = common.EthAddrToBigInt(acc.EthAddr) tp.zki.EthAddr2[tp.i] = common.EthAddrToBigInt(acc.EthAddr)
// as Leaf didn't exist in the ExitTree, set NewExit[i]=1 // as Leaf didn't exist in the ExitTree, set NewExit[i]=1
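
The rule added to applyExit can be condensed into a small predicate. The following is only an illustrative, standalone sketch (shouldAddExitLeaf is a hypothetical name, not part of the codebase): the Exit Leaf is skipped only when the original Amount is zero, while a tx with Amount>0 whose EffectiveAmount collapses to zero (for example Amount greater than the sender's balance) still gets a leaf, matching the current circuit behaviour.

package main

import (
    "fmt"
    "math/big"
)

// shouldAddExitLeaf mirrors the check done in applyExit: only the original
// tx.Amount decides whether an Exit Leaf is created, not the EffectiveAmount
// that is actually applied to the balances.
func shouldAddExitLeaf(originalAmount *big.Int) bool {
    return originalAmount.Cmp(big.NewInt(0)) != 0
}

func main() {
    fmt.Println(shouldAddExitLeaf(big.NewInt(0)))  // false: Amount=0, no Exit Leaf
    fmt.Println(shouldAddExitLeaf(big.NewInt(10))) // true: Leaf added even if EffectiveAmount=0
}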


@@ -4,6 +4,7 @@ import (
"io/ioutil" "io/ioutil"
"math/big" "math/big"
"os" "os"
"sort"
"testing" "testing"
ethCommon "github.com/ethereum/go-ethereum/common" ethCommon "github.com/ethereum/go-ethereum/common"
@@ -911,3 +912,198 @@ func TestTwoExits(t *testing.T) {
assert.Equal(t, ptOuts[3].ExitInfos[0].MerkleProof, ptOuts2[3].ExitInfos[0].MerkleProof) assert.Equal(t, ptOuts[3].ExitInfos[0].MerkleProof, ptOuts2[3].ExitInfos[0].MerkleProof)
} }
func TestExitOf0Amount(t *testing.T) {
// Test to check that an Exit with Amount=0 does not change the
// ExitRoot (as no new Exit Leaf is created)
dir, err := ioutil.TempDir("", "tmpdb")
require.NoError(t, err)
defer assert.NoError(t, os.RemoveAll(dir))
sdb, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128,
Type: statedb.TypeBatchBuilder, NLevels: 32})
assert.NoError(t, err)
chainID := uint16(1)
tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
set := `
Type: Blockchain
CreateAccountDeposit(0) A: 100
CreateAccountDeposit(0) B: 100
> batchL1 // batch1: freeze L1User{2}
> batchL1 // batch2: forge L1User{2}
ForceExit(0) A: 10
ForceExit(0) B: 0
> batchL1 // batch3: freeze L1User{2}
> batchL1 // batch4: forge L1User{2}
ForceExit(0) A: 10
> batchL1 // batch5: freeze L1User{1}
> batchL1 // batch6: forge L1User{1}
ForceExit(0) A: 0
> batchL1 // batch7: freeze L1User{1}
> batchL1 // batch8: forge L1User{1}
> block
`
blocks, err := tc.GenerateBlocks(set)
require.NoError(t, err)
err = tc.FillBlocksExtra(blocks, &til.ConfigExtra{})
require.NoError(t, err)
err = tc.FillBlocksForgedL1UserTxs(blocks)
require.NoError(t, err)
// Sanity check
require.Equal(t, 2, len(blocks[0].Rollup.Batches[1].L1UserTxs))
require.Equal(t, 2, len(blocks[0].Rollup.Batches[3].L1UserTxs))
require.Equal(t, big.NewInt(10), blocks[0].Rollup.Batches[3].L1UserTxs[0].Amount)
require.Equal(t, big.NewInt(0), blocks[0].Rollup.Batches[3].L1UserTxs[1].Amount)
config := Config{
NLevels: 32,
MaxFeeTx: 64,
MaxTx: 512,
MaxL1Tx: 16,
ChainID: chainID,
}
tp := NewTxProcessor(sdb, config)
// For this test, only the batches that contain transactions are processed:
// - Batch2, equivalent to Batches[1]
// - Batch4, equivalent to Batches[3]
// - Batch6, equivalent to Batches[5]
// - Batch8, equivalent to Batches[7]
// process Batch2:
_, err = tp.ProcessTxs(nil, blocks[0].Rollup.Batches[1].L1UserTxs, nil, nil)
require.NoError(t, err)
// process Batch4:
ptOut, err := tp.ProcessTxs(nil, blocks[0].Rollup.Batches[3].L1UserTxs, nil, nil)
require.NoError(t, err)
assert.Equal(t, "14329759303391468223438874789317921522067594445474390443816827472846339238908", ptOut.ZKInputs.Metadata.NewExitRootRaw.BigInt().String())
exitRootBatch4 := ptOut.ZKInputs.Metadata.NewExitRootRaw.BigInt().String()
// process Batch6:
ptOut, err = tp.ProcessTxs(nil, blocks[0].Rollup.Batches[5].L1UserTxs, nil, nil)
require.NoError(t, err)
assert.Equal(t, "14329759303391468223438874789317921522067594445474390443816827472846339238908", ptOut.ZKInputs.Metadata.NewExitRootRaw.BigInt().String())
// Expect the ExitRoot of Batch6 to be equal to the one of Batch4, as
// Batch4 & Batch6 contain the same tx with Exit Amount=10, and the
// 2nd tx of Batch4 has Exit Amount=0 (so it adds no Exit Leaf).
assert.Equal(t, exitRootBatch4, ptOut.ZKInputs.Metadata.NewExitRootRaw.BigInt().String())
// For Batch8, as its only exit has Amount=0, the ExitRoot should
// be 0.
// process Batch8:
ptOut, err = tp.ProcessTxs(nil, blocks[0].Rollup.Batches[7].L1UserTxs, nil, nil)
require.NoError(t, err)
assert.Equal(t, "0", ptOut.ZKInputs.Metadata.NewExitRootRaw.BigInt().String())
}
func TestUpdatedAccounts(t *testing.T) {
dir, err := ioutil.TempDir("", "tmpdb")
require.NoError(t, err)
defer assert.NoError(t, os.RemoveAll(dir))
sdb, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128,
Type: statedb.TypeSynchronizer, NLevels: 32})
assert.NoError(t, err)
set := `
Type: Blockchain
AddToken(1)
CreateAccountCoordinator(0) Coord // 256
CreateAccountCoordinator(1) Coord // 257
> batch // 1
CreateAccountDeposit(0) A: 50 // 258
CreateAccountDeposit(0) B: 60 // 259
CreateAccountDeposit(1) A: 70 // 260
CreateAccountDeposit(1) B: 80 // 261
> batchL1 // 2
> batchL1 // 3
Transfer(0) A-B: 5 (126)
> batch // 4
Exit(1) B: 5 (126)
> batch // 5
> block
`
chainID := uint16(0)
tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
blocks, err := tc.GenerateBlocks(set)
require.NoError(t, err)
tilCfgExtra := til.ConfigExtra{
BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
CoordUser: "Coord",
}
err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
require.NoError(t, err)
tc.FillBlocksL1UserTxsBatchNum(blocks)
err = tc.FillBlocksForgedL1UserTxs(blocks)
require.NoError(t, err)
require.Equal(t, 5, len(blocks[0].Rollup.Batches))
config := Config{
NLevels: 32,
MaxFeeTx: 64,
MaxTx: 512,
MaxL1Tx: 16,
ChainID: chainID,
}
tp := NewTxProcessor(sdb, config)
sortedKeys := func(m map[common.Idx]*common.Account) []int {
keys := make([]int, 0)
for k := range m {
keys = append(keys, int(k))
}
sort.Ints(keys)
return keys
}
for _, batch := range blocks[0].Rollup.Batches {
l2Txs := common.L2TxsToPoolL2Txs(batch.L2Txs)
ptOut, err := tp.ProcessTxs(batch.Batch.FeeIdxsCoordinator, batch.L1UserTxs,
batch.L1CoordinatorTxs, l2Txs)
require.NoError(t, err)
switch batch.Batch.BatchNum {
case 1:
assert.Equal(t, 2, len(ptOut.UpdatedAccounts))
assert.Equal(t, []int{256, 257}, sortedKeys(ptOut.UpdatedAccounts))
case 2:
assert.Equal(t, 0, len(ptOut.UpdatedAccounts))
assert.Equal(t, []int{}, sortedKeys(ptOut.UpdatedAccounts))
case 3:
assert.Equal(t, 4, len(ptOut.UpdatedAccounts))
assert.Equal(t, []int{258, 259, 260, 261}, sortedKeys(ptOut.UpdatedAccounts))
case 4:
assert.Equal(t, 2+1, len(ptOut.UpdatedAccounts))
assert.Equal(t, []int{256, 258, 259}, sortedKeys(ptOut.UpdatedAccounts))
case 5:
assert.Equal(t, 1+1, len(ptOut.UpdatedAccounts))
assert.Equal(t, []int{257, 261}, sortedKeys(ptOut.UpdatedAccounts))
}
for idx, updAcc := range ptOut.UpdatedAccounts {
acc, err := sdb.GetAccount(idx)
require.NoError(t, err)
// If acc.Balance is 0, reset it with big.NewInt(0) so
// that the comparison succeeds. Without this the
// comparison fails because acc.Balance was decoded
// from a byte slice, so the internal big.Int buffer
// (big.Int.abs) is an empty non-nil slice, whereas
// big.NewInt(0) leaves it nil.
if acc.Balance.BitLen() == 0 {
acc.Balance = big.NewInt(0)
}
assert.Equal(t, acc, updAcc)
}
}
}
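
The big.Int comparison quirk noted in the comment above can be reproduced in isolation. A minimal, standalone sketch (not part of the repository): a zero value decoded from bytes carries an empty but non-nil internal buffer, so a reflect.DeepEqual-based assertion (as used by testify's assert.Equal) reports it as different from big.NewInt(0), even though Cmp sees them as numerically equal.

package main

import (
    "fmt"
    "math/big"
    "reflect"
)

func main() {
    // Zero decoded from a byte slice: the internal abs buffer is an empty,
    // non-nil slice.
    fromBytes := new(big.Int).SetBytes([]byte{0x00})
    // Zero built with the constructor: the internal abs buffer is nil.
    fromConst := big.NewInt(0)

    fmt.Println(fromBytes.Cmp(fromConst) == 0)           // true: numerically equal
    fmt.Println(reflect.DeepEqual(fromBytes, fromConst)) // false: internal representation differs
}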

File diff suppressed because one or more lines are too long

txselector/metrics.go (new file, 53 lines added)

@@ -0,0 +1,53 @@
package txselector
import "github.com/prometheus/client_golang/prometheus"
var (
metricGetL2TxSelection = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "txsel_get_l2_txselecton_total",
Help: "",
},
)
metricGetL1L2TxSelection = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "txsel_get_l1_l2_txselecton_total",
Help: "",
},
)
metricSelectedL1CoordinatorTxs = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "txsel_selected_l1_coordinator_txs",
Help: "",
},
)
metricSelectedL1UserTxs = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "txsel_selected_l1_user_txs",
Help: "",
},
)
metricSelectedL2Txs = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "txsel_selected_l2_txs",
Help: "",
},
)
metricDiscardedL2Txs = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "txsel_discarded_l2_txs",
Help: "",
},
)
)
func init() {
prometheus.MustRegister(metricGetL2TxSelection)
prometheus.MustRegister(metricGetL1L2TxSelection)
prometheus.MustRegister(metricSelectedL1CoordinatorTxs)
prometheus.MustRegister(metricSelectedL1UserTxs)
prometheus.MustRegister(metricSelectedL2Txs)
prometheus.MustRegister(metricDiscardedL2Txs)
}
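
These collectors only become visible once the process exposes the default Prometheus registry over HTTP. Below is a minimal sketch of such an endpoint (the address and route are placeholders, not taken from the node's configuration), listening first and serving afterwards:

package main

import (
    "log"
    "net"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // promhttp.Handler serves every collector registered via
    // prometheus.MustRegister, including the txsel_* counters and gauges.
    http.Handle("/metrics", promhttp.Handler())
    ln, err := net.Listen("tcp", ":9100")
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("metrics endpoint listening on %s", ln.Addr())
    log.Fatal(http.Serve(ln, nil))
}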


@@ -18,20 +18,6 @@ import (
"github.com/iden3/go-iden3-crypto/babyjub" "github.com/iden3/go-iden3-crypto/babyjub"
) )
// txs implements the interface Sort for an array of Tx, and sorts the txs by
// absolute fee
type txs []common.PoolL2Tx
func (t txs) Len() int {
return len(t)
}
func (t txs) Swap(i, j int) {
t[i], t[j] = t[j], t[i]
}
func (t txs) Less(i, j int) bool {
return t[i].AbsoluteFee > t[j].AbsoluteFee
}
// CoordAccount contains the data of the Coordinator account, that will be used // CoordAccount contains the data of the Coordinator account, that will be used
// to create new transactions of CreateAccountDeposit type to add new TokenID // to create new transactions of CreateAccountDeposit type to add new TokenID
// accounts for the Coordinator to receive the fees. // accounts for the Coordinator to receive the fees.
@@ -147,9 +133,11 @@ func (txsel *TxSelector) coordAccountForTokenID(l1CoordinatorTxs []common.L1Tx,
// included in the next batch. // included in the next batch.
func (txsel *TxSelector) GetL2TxSelection(selectionConfig *SelectionConfig) ([]common.Idx, func (txsel *TxSelector) GetL2TxSelection(selectionConfig *SelectionConfig) ([]common.Idx,
[][]byte, []common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) { [][]byte, []common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) {
coordIdxs, accCreationAuths, _, l1CoordinatorTxs, l2Txs, discardedL2Txs, err := metricGetL2TxSelection.Inc()
txsel.GetL1L2TxSelection(selectionConfig, []common.L1Tx{}) coordIdxs, accCreationAuths, _, l1CoordinatorTxs, l2Txs,
return coordIdxs, accCreationAuths, l1CoordinatorTxs, l2Txs, discardedL2Txs, tracerr.Wrap(err) discardedL2Txs, err := txsel.getL1L2TxSelection(selectionConfig, []common.L1Tx{})
return coordIdxs, accCreationAuths, l1CoordinatorTxs, l2Txs,
discardedL2Txs, tracerr.Wrap(err)
} }
// GetL1L2TxSelection returns the selection of L1 + L2 txs. // GetL1L2TxSelection returns the selection of L1 + L2 txs.
@@ -161,6 +149,16 @@ func (txsel *TxSelector) GetL2TxSelection(selectionConfig *SelectionConfig) ([]c
// creation exists. The L1UserTxs, L1CoordinatorTxs, PoolL2Txs that will be // creation exists. The L1UserTxs, L1CoordinatorTxs, PoolL2Txs that will be
// included in the next batch. // included in the next batch.
func (txsel *TxSelector) GetL1L2TxSelection(selectionConfig *SelectionConfig, func (txsel *TxSelector) GetL1L2TxSelection(selectionConfig *SelectionConfig,
l1UserTxs []common.L1Tx) ([]common.Idx, [][]byte, []common.L1Tx,
[]common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) {
metricGetL1L2TxSelection.Inc()
coordIdxs, accCreationAuths, l1UserTxs, l1CoordinatorTxs, l2Txs,
discardedL2Txs, err := txsel.getL1L2TxSelection(selectionConfig, l1UserTxs)
return coordIdxs, accCreationAuths, l1UserTxs, l1CoordinatorTxs, l2Txs,
discardedL2Txs, tracerr.Wrap(err)
}
func (txsel *TxSelector) getL1L2TxSelection(selectionConfig *SelectionConfig,
l1UserTxs []common.L1Tx) ([]common.Idx, [][]byte, []common.L1Tx, l1UserTxs []common.L1Tx) ([]common.Idx, [][]byte, []common.L1Tx,
[]common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) { []common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) {
// WIP.0: the TxSelector is not optimized and will need a redesign. The // WIP.0: the TxSelector is not optimized and will need a redesign. The
@@ -191,14 +189,16 @@ func (txsel *TxSelector) GetL1L2TxSelection(selectionConfig *SelectionConfig,
} }
// discardedL2Txs contains an array of the L2Txs that have not been selected in this Batch // discardedL2Txs contains an array of the L2Txs that have not been selected in this Batch
var discardedL2Txs []common.PoolL2Tx
var l1CoordinatorTxs []common.L1Tx var l1CoordinatorTxs []common.L1Tx
positionL1 := len(l1UserTxs) positionL1 := len(l1UserTxs)
var accAuths [][]byte var accAuths [][]byte
// sort l2TxsRaw (cropping at MaxTx at this point) // sort l2TxsRaw (cropping at MaxTx at this point)
l2Txs0 := txsel.getL2Profitable(l2TxsRaw, selectionConfig.TxProcessorConfig.MaxTx) l2Txs0, discardedL2Txs := txsel.getL2Profitable(l2TxsRaw, selectionConfig.TxProcessorConfig.MaxTx)
for i := range discardedL2Txs {
discardedL2Txs[i].Info = "Tx not selected due to low absolute fee"
}
noncesMap := make(map[common.Idx]common.Nonce) noncesMap := make(map[common.Idx]common.Nonce)
var l2Txs []common.PoolL2Tx var l2Txs []common.PoolL2Tx
@@ -464,6 +464,11 @@ func (txsel *TxSelector) GetL1L2TxSelection(selectionConfig *SelectionConfig,
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err) return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
} }
metricSelectedL1CoordinatorTxs.Set(float64(len(l1CoordinatorTxs)))
metricSelectedL1UserTxs.Set(float64(len(l1UserTxs)))
metricSelectedL2Txs.Set(float64(len(finalL2Txs)))
metricDiscardedL2Txs.Set(float64(len(discardedL2Txs)))
return coordIdxs, accAuths, l1UserTxs, l1CoordinatorTxs, finalL2Txs, discardedL2Txs, nil return coordIdxs, accAuths, l1UserTxs, l1CoordinatorTxs, finalL2Txs, discardedL2Txs, nil
} }
@@ -590,11 +595,22 @@ func checkAlreadyPendingToCreate(l1CoordinatorTxs []common.L1Tx, tokenID common.
} }
// getL2Profitable returns the profitable selection of L2Txs sorted by Nonce // getL2Profitable returns the profitable selection of L2Txs sorted by Nonce
func (txsel *TxSelector) getL2Profitable(l2Txs []common.PoolL2Tx, max uint32) []common.PoolL2Tx { func (txsel *TxSelector) getL2Profitable(l2Txs []common.PoolL2Tx, max uint32) ([]common.PoolL2Tx,
// Sort by absolute fee []common.PoolL2Tx) {
sort.Sort(txs(l2Txs)) // First sort by nonce so that txs from the same account end up in an
// order in which they can be applied in succession.
sort.Slice(l2Txs, func(i, j int) bool {
return l2Txs[i].Nonce < l2Txs[j].Nonce
})
// Sort by absolute fee with SliceStable, so that txs with the same
// AbsoluteFee are not rearranged and their nonce order is kept in that case
sort.SliceStable(l2Txs, func(i, j int) bool {
return l2Txs[i].AbsoluteFee > l2Txs[j].AbsoluteFee
})
discardedL2Txs := []common.PoolL2Tx{}
if len(l2Txs) > int(max) { if len(l2Txs) > int(max) {
discardedL2Txs = l2Txs[max:]
l2Txs = l2Txs[:max] l2Txs = l2Txs[:max]
} }
@@ -603,9 +619,9 @@ func (txsel *TxSelector) getL2Profitable(l2Txs []common.PoolL2Tx, max uint32) []
// Account is sorted, but the l2Txs cannot be grouped by sender Account // Account is sorted, but the l2Txs cannot be grouped by sender Account
// nor by Fee. This is because later on the Nonces will need to be // nor by Fee. This is because later on the Nonces will need to be
// sequential for the zkproof generation. // sequential for the zkproof generation.
sort.SliceStable(l2Txs, func(i, j int) bool { sort.Slice(l2Txs, func(i, j int) bool {
return l2Txs[i].Nonce < l2Txs[j].Nonce return l2Txs[i].Nonce < l2Txs[j].Nonce
}) })
return l2Txs return l2Txs, discardedL2Txs
} }
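
The effect of the nonce pre-sort followed by sort.SliceStable can be seen with a tiny standalone example (hypothetical struct and values, not from the repository): after the stable sort by fee, transactions that share the same AbsoluteFee keep their relative nonce order, so txs from the same account remain applicable in sequence.

package main

import (
    "fmt"
    "sort"
)

type poolTx struct {
    Nonce       uint64
    AbsoluteFee float64
}

func main() {
    txs := []poolTx{{Nonce: 2, AbsoluteFee: 1}, {Nonce: 0, AbsoluteFee: 1}, {Nonce: 1, AbsoluteFee: 5}}

    // First sort by nonce, so same-account txs are in an applicable order.
    sort.Slice(txs, func(i, j int) bool { return txs[i].Nonce < txs[j].Nonce })
    // Then stable-sort by absolute fee: the fee-5 tx moves to the front, while
    // the two fee-1 txs keep their nonce order (0 before 2).
    sort.SliceStable(txs, func(i, j int) bool { return txs[i].AbsoluteFee > txs[j].AbsoluteFee })

    fmt.Println(txs) // [{1 5} {0 1} {2 1}]
}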


@@ -29,7 +29,7 @@ func initTest(t *testing.T, chainID uint16, hermezContractAddr ethCommon.Address
pass := os.Getenv("POSTGRES_PASS") pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez") db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err) require.NoError(t, err)
l2DB := l2db.NewL2DB(db, 10, 100, 24*time.Hour, nil) l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
dir, err := ioutil.TempDir("", "tmpdb") dir, err := ioutil.TempDir("", "tmpdb")
require.NoError(t, err) require.NoError(t, err)
@@ -106,7 +106,7 @@ func addTokens(t *testing.T, tc *til.Context, db *sqlx.DB) {
}) })
} }
hdb := historydb.NewHistoryDB(db, nil) hdb := historydb.NewHistoryDB(db, db, nil)
assert.NoError(t, hdb.AddBlock(&common.Block{ assert.NoError(t, hdb.AddBlock(&common.Block{
Num: 1, Num: 1,
})) }))
@@ -658,7 +658,7 @@ func TestTransferManyFromSameAccount(t *testing.T) {
tpc := txprocessor.Config{ tpc := txprocessor.Config{
NLevels: 16, NLevels: 16,
MaxFeeTx: 10, MaxFeeTx: 10,
MaxTx: 20, MaxTx: 10,
MaxL1Tx: 10, MaxL1Tx: 10,
ChainID: chainID, ChainID: chainID,
} }
@@ -683,13 +683,17 @@ func TestTransferManyFromSameAccount(t *testing.T) {
PoolTransfer(0) A-B: 10 (126) // 6 PoolTransfer(0) A-B: 10 (126) // 6
PoolTransfer(0) A-B: 10 (126) // 7 PoolTransfer(0) A-B: 10 (126) // 7
PoolTransfer(0) A-B: 10 (126) // 8 PoolTransfer(0) A-B: 10 (126) // 8
PoolTransfer(0) A-B: 10 (126) // 9
PoolTransfer(0) A-B: 10 (126) // 10
PoolTransfer(0) A-B: 10 (126) // 11
` `
poolL2Txs, err := tc.GeneratePoolL2Txs(batchPoolL2) poolL2Txs, err := tc.GeneratePoolL2Txs(batchPoolL2)
require.NoError(t, err) require.NoError(t, err)
require.Equal(t, 11, len(poolL2Txs))
// reorder poolL2Txs so that nonces are not sorted // reorder poolL2Txs so that nonces are not sorted
poolL2Txs[0], poolL2Txs[7] = poolL2Txs[7], poolL2Txs[0] poolL2Txs[0], poolL2Txs[7] = poolL2Txs[7], poolL2Txs[0]
poolL2Txs[1], poolL2Txs[6] = poolL2Txs[6], poolL2Txs[1] poolL2Txs[1], poolL2Txs[10] = poolL2Txs[10], poolL2Txs[1]
// add the PoolL2Txs to the l2DB // add the PoolL2Txs to the l2DB
addL2Txs(t, txsel, poolL2Txs) addL2Txs(t, txsel, poolL2Txs)
@@ -699,8 +703,9 @@ func TestTransferManyFromSameAccount(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
assert.Equal(t, 3, len(oL1UserTxs)) assert.Equal(t, 3, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs)) require.Equal(t, 0, len(oL1CoordTxs))
assert.Equal(t, 8, len(oL2Txs)) assert.Equal(t, 7, len(oL2Txs))
assert.Equal(t, 0, len(discardedL2Txs)) assert.Equal(t, 1, len(discardedL2Txs))
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs), txsel.localAccountsDB.CurrentBatch()) err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs), txsel.localAccountsDB.CurrentBatch())
require.NoError(t, err) require.NoError(t, err)
} }