Compare commits

...

49 Commits

Author SHA1 Message Date
Oleksandr Brezhniev
e68c64db75 Faster synchronization with fewer calls to update Ethereum stats 2021-03-23 15:13:18 +02:00
Danilo Pantani
e9be904c2f Merge pull request #656 from hermeznetwork/feature/license
add AGPLv3 License
2021-03-22 20:45:24 -03:00
Danilo Pantani
1ffe437538 Merge pull request #657 from hermeznetwork/feature/documentserveapi
Add minimum documentation of serveapi command
2021-03-22 16:02:42 -03:00
arnau
e2f9a1d7eb Merge pull request #650 from hermeznetwork/fix/serveapi-flags
fix the application initialization flags
2021-03-22 18:14:03 +01:00
arnau
999dde5621 Merge pull request #662 from hermeznetwork/feature/improveSCinits
Bound the filter of init SC events by blocks
2021-03-22 18:12:53 +01:00
Eduard S
aa82efc868 Bound the filter of init SC events by blocks
When querying the init event for each smart contract, bound the search from
block genesis-24h to block genesis.  Otherwise, this query takes too long
on a geth node synced to mainnet.
2021-03-22 17:55:30 +01:00
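
As a rough illustration of the bounding described in the commit above, a filter query with go-ethereum's `ethclient` might look like the sketch below. The ~6,500-blocks-per-day figure and the function shape are assumptions for illustration, not the repo's actual code.

```go
package eventfilter

import (
	"context"
	"math/big"

	ethereum "github.com/ethereum/go-ethereum"
	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/ethclient"
)

// blocksPerDay approximates 24h of mainnet blocks at ~13s each (assumption).
const blocksPerDay = 6500

// filterInitEvents looks for a contract's init event only in the window
// [genesis-24h, genesis], so a mainnet-synced geth never scans the whole chain.
func filterInitEvents(ctx context.Context, client *ethclient.Client,
	contract ethCommon.Address, genesisBlock int64) ([]types.Log, error) {
	from := genesisBlock - blocksPerDay
	if from < 0 {
		from = 0
	}
	query := ethereum.FilterQuery{
		FromBlock: big.NewInt(from),
		ToBlock:   big.NewInt(genesisBlock),
		Addresses: []ethCommon.Address{contract},
	}
	return client.FilterLogs(ctx, query)
}
```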
a_bennassar
2184084408 Merge pull request #659 from hermeznetwork/feature/move-api-subpackages
Move apitypes & stateapiupdater into api dir
2021-03-22 17:02:38 +01:00
arnaucube
206c8e6e8f Move apitypes & stateapiupdater into api dir 2021-03-22 16:51:10 +01:00
Eduard S
c2c74e14f1 Merge pull request #655 from hermeznetwork/feature/api-versioning
Add API versioning
2021-03-22 16:38:12 +01:00
a_bennassar
80e20f3cf1 Merge pull request #658 from hermeznetwork/feature/update-sql-func
Add account_state view
2021-03-22 16:37:46 +01:00
arnaucube
d9741da43b Add API versioning 2021-03-22 16:33:38 +01:00
Eduard S
8c51cfb3d2 Add minimum documentation of serveapi command 2021-03-22 15:41:35 +01:00
laisolizq
7b297c77da Add account_state view 2021-03-22 14:43:24 +01:00
Pantani
9a863eadc4 add AGPLv3 License 2021-03-22 09:26:58 -03:00
Eduard S
d80e3a8988 Merge pull request #648 from hermeznetwork/feature/txsel-forgablepriority
Update TxSel selection to prioritize the forgable
2021-03-22 13:14:53 +01:00
arnaucube
2873ce875a Update TxSel selection to prioritize the forgable
Previously, when len(nonForgableL2Txs) > maxL2Txs and the nonForgableL2Txs
had bigger fees than the forgableL2Txs, neither the forgableTxs nor the
nonForgableTxs were ever forged.  Now the TxSelector first forges the
forgableTxs (those that are forgable given the initial state of the accounts:
balances & nonces), and then the nonForgableL2Txs, which may be unblocked
once the forgable ones have been processed.
2021-03-22 13:03:47 +01:00
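
A toy sketch of that two-pass ordering follows; the types and the nonce-only notion of "forgable" are hypothetical simplifications, not the node's real TxSelector.

```go
package txsel

// PoolTx and accountState are hypothetical stand-ins for the node's types.
type PoolTx struct {
	FromIdx int    // sender account index
	Nonce   uint64 // account nonce this tx requires
}

type accountState map[int]uint64 // account index -> next expected nonce

// apply processes tx if it is currently forgable, mutating the state so that
// later txs from the same account may become forgable.
func (s accountState) apply(tx PoolTx) bool {
	if s[tx.FromIdx] != tx.Nonce {
		return false
	}
	s[tx.FromIdx]++
	return true
}

// selectTxs forges the txs that are valid against the initial account state
// first, then retries the rest, which pass one may have unblocked.
func selectTxs(txs []PoolTx, state accountState, maxTxs int) []PoolTx {
	var forgable, nonForgable []PoolTx
	for _, tx := range txs {
		if state[tx.FromIdx] == tx.Nonce {
			forgable = append(forgable, tx)
		} else {
			nonForgable = append(nonForgable, tx)
		}
	}
	selected := make([]PoolTx, 0, maxTxs)
	for _, tx := range append(forgable, nonForgable...) {
		if len(selected) == maxTxs {
			break
		}
		if state.apply(tx) {
			selected = append(selected, tx)
		}
	}
	return selected
}
```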
Eduard S
10cfc91250 Merge pull request #651 from hermeznetwork/feature/total-txs-in-pool
API change names and add poolLoad, add maxFeeUSD
2021-03-22 11:59:09 +01:00
Eduard S
f6765f82bf Merge pull request #654 from hermeznetwork/feature/api-pending-l1s
Add pending L1 txs to API
2021-03-22 11:58:48 +01:00
arnaubennassar
90126a03a2 API change names and add poolLoad, add maxFeeUSD 2021-03-22 11:52:23 +01:00
arnaubennassar
3fcec947b4 Add pending L1 txs to API 2021-03-22 11:50:37 +01:00
Pantani
dc198c35ae fix import key command in the example script 2021-03-19 22:44:51 -03:00
Pantani
ae0858df3d fix the serve api flags 2021-03-18 12:02:18 -03:00
arnau
334eecc99e Merge pull request #645 from hermeznetwork/feature/newpolicy
Add config parameter ForgeOncePerSlotIfTxs
2021-03-18 14:02:53 +01:00
arnau
b01d5d50ee Merge pull request #633 from hermeznetwork/feature/makefile
Create a Makefile for build using -ldflags and useful commands
2021-03-18 13:36:51 +01:00
a_bennassar
8018f348a7 Merge pull request #649 from hermeznetwork/feature/dbbigintsdecimal
Store *big.Int as DECIMAL in sql
2021-03-18 13:25:43 +01:00
Eduard S
91ffdc250b Store *big.Int as DECIMAL in sql 2021-03-18 12:53:46 +01:00
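
Round-tripping a `*big.Int` through a DECIMAL column is normally done with the `database/sql` Scanner/Valuer pair; a hedged sketch of that pattern follows (the wrapper type used by the repo may differ).

```go
package dbtypes

import (
	"database/sql/driver"
	"fmt"
	"math/big"
)

// BigInt wraps big.Int for SQL round-trips through a DECIMAL column.
type BigInt struct{ big.Int }

// Value stores the integer as its base-10 string, which DECIMAL accepts.
func (b BigInt) Value() (driver.Value, error) {
	return b.String(), nil
}

// Scan reads a DECIMAL back, accepting the driver's []byte or string forms.
func (b *BigInt) Scan(src interface{}) error {
	var s string
	switch v := src.(type) {
	case []byte:
		s = string(v)
	case string:
		s = v
	default:
		return fmt.Errorf("unexpected type %T for DECIMAL", src)
	}
	if _, ok := b.SetString(s, 10); !ok {
		return fmt.Errorf("invalid DECIMAL value %q", s)
	}
	return nil
}
```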
Eduard S
4ebe285912 Merge pull request #644 from hermeznetwork/feature/external-delete-not-count
Do not count txs marked as external_delete to reach MaxTxsPool
2021-03-18 11:44:08 +01:00
arnaubennassar
c280b21b89 Do not count txs marked as external_delete to reach MaxTxsPool 2021-03-18 11:35:51 +01:00
Eduard S
e78fd613c6 Merge pull request #641 from hermeznetwork/fix/avg-tx-price
Fix average transaction fee calculation
2021-03-18 10:24:25 +01:00
a_bennassar
9edbd135ef Merge pull request #647 from hermeznetwork/fix/apistatecollectedfees
Fix API /state collected fees unmarshal
2021-03-17 17:51:41 +01:00
Eduard S
009d0c5be1 Fix API /state collected fees unmarshal 2021-03-17 15:05:23 +01:00
Eduard S
777ca3d87e Merge pull request #646 from hermeznetwork/fix/forgerCommitment
Fix Synchronizer not setting slot.ForgerCommitment to true
2021-03-17 13:53:07 +01:00
Oleksandr Brezhniev
eb89ab84f2 Rename the variable to match the meaning 2021-03-17 14:41:06 +02:00
Pantani
234268028e create cli command to print the version and the build commit and date 2021-03-17 09:07:03 -03:00
Pantani
d7bab0c524 create a makefile to build, compile and run the binary using ldflags 2021-03-17 09:06:57 -03:00
Oleksandr Brezhniev
36c1ba84df Fix Synchronizer not setting slot.ForgerCommitment to true until fully synced and after reorgs 2021-03-17 13:21:27 +02:00
Eduard S
450fa08d80 Add config parameter ForgeOncePerSlotIfTxs
New Coordinator configuration parameter `ForgeOncePerSlotIfTxs`:
it makes the coordinator forge at most one batch per slot, and only if that
batch includes txs or there are pending l1UserTxs in the smart contract.
Setting this parameter overrides `ForgeDelay`, `ForgeNoTxsDelay`,
`MustForgeAtSlotDeadline` and `IgnoreSlotCommitment`.

Also restructures the functions that check the policies that decide whether
or not to forge a batch.
2021-03-17 11:48:36 +01:00
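
A minimal sketch of the policy described in this commit message; the type and field names are hypothetical, not the node's actual coordinator code.

```go
package coordinator

// forgePolicy and slotStatus are hypothetical names for illustration only.
type forgePolicy struct {
	ForgeOncePerSlotIfTxs bool
}

type slotStatus struct {
	AlreadyForged bool // a batch was already forged in the current slot
	BatchHasTxs   bool // the candidate batch would include txs
	PendingL1Txs  bool // there are pending l1UserTxs in the smart contract
}

// shouldForge decides whether to forge now. When ForgeOncePerSlotIfTxs is
// set it takes precedence, overriding the delay-based policies (ForgeDelay,
// ForgeNoTxsDelay, MustForgeAtSlotDeadline, IgnoreSlotCommitment).
func shouldForge(p forgePolicy, s slotStatus) bool {
	if p.ForgeOncePerSlotIfTxs {
		return !s.AlreadyForged && (s.BatchHasTxs || s.PendingL1Txs)
	}
	// Otherwise fall back to the delay-based policies (elided here).
	return true
}
```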
arnaubennassar
578edc80bc Fix average transaction fee calculation 2021-03-16 11:50:43 +01:00
Eduard S
80f16201a2 Merge pull request #640 from hermeznetwork/feature/BatchCheck
separated the policy to decide if we forge a batch or not
2021-03-15 18:00:41 +01:00
Mikelle
cd74f1fda3 separated the policy to decide if we forge a batch or not 2021-03-15 19:23:28 +03:00
arnau
6c6d1ea7b8 Merge pull request #631 from hermeznetwork/feature/serveapicli2
Allow serving API only via new cli command
2021-03-15 15:41:36 +01:00
Eduard S
d625cc9287 Allow serving API only via new cli command
- Add a new command to the cli/node: `serveapi`, which allows serving the API
  just by connecting to the PostgreSQL database.  The mode flag should be
  passed in order to select whether we are connecting to a synchronizer
  database or a coordinator database.  If `coord` is chosen as mode, the
  coordinator endpoints can be activated in order to allow inserting l2txs and
  authorizations into the L2DB.

Summary of the implementation details
- New SQL table with 3 columns (plus `item_id` pk).  The table only contains a
  single row with `item_id` = 1.  Columns:
    - state: historydb.StateAPI in JSON.  This is the struct that is served via
      the `/state` API endpoint.  The node will periodically update this struct
      and store it in the DB.  The API server will query it from the DB to
      serve it.
    - config: historydb.NodeConfig in JSON.  This struct contains node
      configuration parameters that the API needs to be aware of.  It's updated
      once every time the node starts.
    - constants: historydb.Constants in JSON.  This struct contains all the
      hermez network constants gathered via the ethereum client by the node.
      It's written once every time the node starts.
- The HistoryDB contains methods to get and update each one of these columns
  individually.
- The HistoryDB contains all methods that query the DB and prepare objects that
  will appear in the StateAPI endpoint.
- The configuration used for the `serveapi` cli/node command is defined in
  `config.APIServer`, and is a subset of `node.Config` in order to allow
  reusing the same configuration file of the node if desired.
- A new object is introduced in the api: `StateAPIUpdater`, which contains all
  the necessary information for the node to periodically update the StateAPI
  in the DB.

- Moved the types `SCConsts`, `SCVariables` and `SCVariablesPtr` from
  `synchronizer` to `common` for convenience.
2021-03-15 15:35:20 +01:00
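
To make the single-row design concrete, here is a hedged sketch of how the API server might read the `state` column. The table and column names (`node_info`, `state`, `item_id`) are assumptions based on the description above, not the repo's actual migration.

```go
package apistate

import (
	"database/sql"
	"encoding/json"
)

// StateAPI is a stub for the real historydb.StateAPI struct.
type StateAPI struct{}

// getState reads the single JSON row that the node periodically refreshes;
// the API server only ever queries it, never computes it.
func getState(db *sql.DB) (*StateAPI, error) {
	var raw []byte
	err := db.QueryRow(`SELECT state FROM node_info WHERE item_id = 1;`).Scan(&raw)
	if err != nil {
		return nil, err
	}
	var s StateAPI
	if err := json.Unmarshal(raw, &s); err != nil {
		return nil, err
	}
	return &s, nil
}
```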
Eduard S
ec9b0aadce Merge pull request #617 from hermeznetwork/fix/txsel-discard-processl2txerrs
Fix TxSel discard tx when ProcessL2Tx gives err
2021-03-15 13:45:46 +01:00
arnaucube
5d8579a609 Fix TxSel discard tx when ProcessL2Tx gives err
Refactor getL1L2TxSelection, which fixes some problems for certain
combinations of txs.
2021-03-15 13:36:15 +01:00
arnau
84b84ecc17 Merge pull request #614 from hermeznetwork/feature/forger-config-options
Add the option to forge batch at the slot deadline
2021-03-15 11:29:27 +01:00
Pantani
968fcc207e Add the option to choose whether to force a forgeBatch at the beginning of the slot 2021-03-11 23:26:17 -03:00
Eduard S
a5dcc67a08 Merge pull request #629 from hermeznetwork/feature/indivisual-token-config
Allow price update configuration to be specified per token
2021-03-10 14:11:57 +01:00
arnaubennassar
97062afc90 Allow price update configuration to be specified per token 2021-03-10 14:04:00 +01:00
Danilo Pantani
d361abb8cd Add the config option to forge a batch when the coordinator is not the auction winner and the slot deadline is reached 2021-03-04 13:55:06 -03:00
67 changed files with 3793 additions and 1583 deletions

.gitignore (vendored): 1 line changed

@@ -0,0 +1 @@
bin/

LICENSE (new file): 661 lines added

@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

Makefile (new file): 135 lines added

@@ -0,0 +1,135 @@
#! /usr/bin/make -f

# Project variables.
PACKAGE := github.com/hermeznetwork/hermez-node
VERSION := $(shell git describe --tags --always)
BUILD := $(shell git rev-parse --short HEAD)
BUILD_DATE := $(shell date +%Y-%m-%dT%H:%M:%S%z)
PROJECT_NAME := $(shell basename "$(PWD)")

# Go related variables.
GO_FILES ?= $$(find . -name '*.go' | grep -v vendor)
GOBASE := $(shell pwd)
GOBIN := $(GOBASE)/bin
GOPKG := $(.)
GOENVVARS := GOBIN=$(GOBIN)
GOCMD := $(GOBASE)/cli/node
GOPROOF := $(GOBASE)/test/proofserver/cli
GOBINARY := node

# Project configs.
MODE ?= sync
CONFIG ?= $(GOBASE)/cli/node/cfg.buidler.toml
POSTGRES_PASS ?= yourpasswordhere

# Use linker flags to provide version/build settings.
LDFLAGS=-ldflags "-X=main.Version=$(VERSION) -X=main.Build=$(BUILD) -X=main.Date=$(BUILD_DATE)"

# PID file will keep the process id of the server.
PID_PROOF_MOCK := /tmp/.$(PROJECT_NAME).proof.pid

# Make is verbose in Linux. Make it silent.
MAKEFLAGS += --silent

.PHONY: help
help: Makefile
	@echo
	@echo " Choose a command run in "$(PROJECT_NAME)":"
	@echo
	@sed -n 's/^##//p' $< | column -t -s ':' | sed -e 's/^/ /'
	@echo

## test: Run the application check and all tests.
test: govet gocilint test-unit

## test-unit: Run all unit tests.
test-unit:
	@echo " > Running unit tests"
	$(GOENVVARS) go test -race -p 1 -failfast -timeout 300s -v ./...

## test-api-server: Run the API server using the Go tests.
test-api-server:
	@echo " > Running unit tests"
	$(GOENVVARS) FAKE_SERVER=yes go test -timeout 0 ./api -p 1 -count 1 -v

## gofmt: Run `go fmt` for all go files.
gofmt:
	@echo " > Format all go files"
	$(GOENVVARS) gofmt -w ${GO_FILES}

## govet: Run go vet.
govet:
	@echo " > Running go vet"
	$(GOENVVARS) go vet ./...

## golint: Run default golint.
golint:
	@echo " > Running golint"
	$(GOENVVARS) golint -set_exit_status ./...

## gocilint: Run Golang CI Lint.
gocilint:
	@echo " > Running Golang CI Lint"
	$-golangci-lint run --timeout=5m -E whitespace -E gosec -E gci -E misspell -E gomnd -E gofmt -E goimports -E golint --exclude-use-default=false --max-same-issues 0

## exec: Run given command. e.g; make exec run="go test ./..."
exec:
	GOBIN=$(GOBIN) $(run)

## clean: Clean build files. Runs `go clean` internally.
clean:
	@-rm $(GOBIN)/ 2> /dev/null
	@echo " > Cleaning build cache"
	$(GOENVVARS) go clean

## build: Build the project.
build: install
	@echo " > Building Hermez binary..."
	@bash -c "$(MAKE) migration-pack"
	$(GOENVVARS) go build $(LDFLAGS) -o $(GOBIN)/$(GOBINARY) $(GOCMD)
	@bash -c "$(MAKE) migration-clean"

## install: Install missing dependencies. Runs `go get` internally. e.g; make install get=github.com/foo/bar
install:
	@echo " > Checking if there is any missing dependencies..."
	$(GOENVVARS) go get $(GOCMD)/... $(get)

## run: Run Hermez node.
run:
	@bash -c "$(MAKE) clean build"
	@echo " > Running $(PROJECT_NAME)"
	@$(GOBIN)/$(GOBINARY) --mode $(MODE) --cfg $(CONFIG) run

## run-proof-mock: Run proof server mock API.
run-proof-mock: stop-proof-mock
	@echo " > Running Proof Server Mock"
	$(GOENVVARS) go build -o $(GOBIN)/proof $(GOPROOF)
	@$(GOBIN)/proof 2>&1 & echo $$! > $(PID_PROOF_MOCK)
	@cat $(PID_PROOF_MOCK) | sed "/^/s/^/ \> Proof Server Mock PID: /"

## stop-proof-mock: Stop proof server mock API.
stop-proof-mock:
	@-touch $(PID_PROOF_MOCK)
	@-kill -s INT `cat $(PID_PROOF_MOCK)` 2> /dev/null || true
	@-rm $(PID_PROOF_MOCK) $(GOBIN)/proof 2> /dev/null || true

## migration-pack: Pack the database migrations into the binary.
migration-pack:
	@echo " > Packing the migrations..."
	@cd /tmp && go get -u github.com/gobuffalo/packr/v2/packr2 && cd -
	@cd $(GOBASE)/db && packr2 && cd -

## migration-clean: Clean the database migrations pack.
migration-clean:
	@echo " > Cleaning the migrations..."
	@cd $(GOBASE)/db && packr2 clean && cd -

## run-database-container: Run the Postgres container
run-database-container:
	@echo " > Running the postgreSQL DB..."
	@-docker run --rm --name hermez-db -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$(POSTGRES_PASS)" -d postgres

## stop-database-container: Stop the Postgres container
stop-database-container:
	@echo " > Stopping the postgreSQL DB..."
	@-docker stop hermez-db

README.md

@@ -8,42 +8,75 @@ Go implementation of the Hermez node.
 The `hermez-node` has been tested with go version 1.14
+### Build
+Build the binary and check the current version:
+```shell
+$ make build
+$ bin/node version
+```
+### Run
+First you must edit the default/template config file at [cli/node/cfg.buidler.toml](cli/node/cfg.buidler.toml);
+there is more information about the config file in [cli/node/README.md](cli/node/README.md)
+After setting the config, you can build and run the Hermez Node as a synchronizer:
+```shell
+$ make run
+```
+Or build and run it as a coordinator, also passing the config file from another location:
+```shell
+$ MODE=coord CONFIG=cli/node/cfg.buidler.toml make run
+```
+To check the useful make commands:
+```shell
+$ make help
+```
 ### Unit testing
 Running the unit tests requires a connection to a PostgreSQL database. You can
-start PostgreSQL with docker easily this way (where `yourpasswordhere` should
+run PostgreSQL with docker easily this way (where `yourpasswordhere` should
 be your password):
-```
-POSTGRES_PASS=yourpasswordhere; sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$POSTGRES_PASS" -d postgres
+```shell
+$ POSTGRES_PASS="yourpasswordhere" make run-database-container
 ```
-Afterwards, run the tests with the password as env var:
+Afterward, run the tests with the password as env var:
-```
-POSTGRES_PASS=yourpasswordhere go test -p 1 ./...
+```shell
+$ POSTGRES_PASS="yourpasswordhere" make test
 ```
-NOTE: `-p 1` forces execution of package test in serial. Otherwise they may be
-executed in paralel and the test may find unexpected entries in the SQL databse
+NOTE: `-p 1` forces execution of package tests in serial. Otherwise, they may be
+executed in parallel, and the tests may find unexpected entries in the SQL database
 because it's shared among all tests.
-There is an extra temporary option that allows you to run the API server using
-the Go tests. This will be removed once the API can be properly initialized,
-with data from the synchronizer and so on. To use this, run:
+There is an extra temporary option that allows you to run the API server using the
+Go tests. It will be removed once the API can be properly initialized with data
+from the synchronizer. To use this, run:
-```
-FAKE_SERVER=yes POSTGRES_PASS=yourpasswordhere go test -timeout 0 ./api -p 1 -count 1 -v
+```shell
+$ POSTGRES_PASS="yourpasswordhere" make test-api-server
 ```
 ### Lint
 All Pull Requests need to pass the configured linter.
-To run the linter locally, first install [golangci-lint](https://golangci-lint.run). Afterwards you can check the lints with this command:
+To run the linter locally, first, install [golangci-lint](https://golangci-lint.run).
+Afterward, you can check the lints with this command:
-```
-golangci-lint run --timeout=5m -E whitespace -E gosec -E gci -E misspell -E gomnd -E gofmt -E goimports -E golint --exclude-use-default=false --max-same-issues 0
+```shell
+$ make gocilint
 ```
 ## Usage
@@ -54,13 +87,13 @@ See [cli/node/README.md](cli/node/README.md)
 ### Proof Server
-The node in mode coordinator requires a proof server (a server that is capable
-of calculating proofs from the zkInputs). For testing purposes there is a mock
-proof server cli at `test/proofserver/cli`.
+The node in mode coordinator requires a proof server (a server capable of
+calculating proofs from the zkInputs). There is a mock proof server CLI
+at `test/proofserver/cli` for testing purposes.
 Usage of `test/proofserver/cli`:
-```
+```shell
 USAGE:
     go run ./test/proofserver/cli OPTIONS
@@ -71,11 +104,19 @@ OPTIONS:
     proving time duration (default 2s)
 ```
+Also, the Makefile commands can be used to run and stop the proof server
+in the background:
+```shell
+$ make run-proof-mock
+$ make stop-proof-mock
+```
 ### `/tmp` as tmpfs
 For every processed batch, the node builds a temporary exit tree in a key-value
 DB stored in `/tmp`. It is highly recommended that `/tmp` is mounted as a RAM
-file system in production to avoid unecessary reads an writes to disk. This
+file system in production to avoid unnecessary reads and writes to disk. This
 can be done by mounting `/tmp` as tmpfs; for example, by having this line in
 `/etc/fstab`:
 ```


@@ -5,7 +5,7 @@ import (
"strconv"
"testing"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/mitchellh/copystructure"


@@ -7,7 +7,7 @@ import (
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/iden3/go-iden3-crypto/babyjub"
)


@@ -2,40 +2,19 @@ package api
import (
"errors"
"sync"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/hermez-node/db/l2db"
"github.com/hermeznetwork/tracerr"
)
// TODO: Add correct values to constants
const (
createAccountExtraFeePercentage float64 = 2
createAccountInternalExtraFeePercentage float64 = 2.5
)
// Status define status of the network
type Status struct {
sync.RWMutex
NodeConfig NodeConfig `json:"nodeConfig"`
Network Network `json:"network"`
Metrics historydb.Metrics `json:"metrics"`
Rollup historydb.RollupVariablesAPI `json:"rollup"`
Auction historydb.AuctionVariablesAPI `json:"auction"`
WithdrawalDelayer common.WDelayerVariables `json:"withdrawalDelayer"`
RecommendedFee common.RecommendedFee `json:"recommendedFee"`
}
// API serves HTTP requests to allow external interaction with the Hermez node
type API struct {
h *historydb.HistoryDB
cg *configAPI
l2 *l2db.L2DB
status Status
chainID uint16
hermezAddress ethCommon.Address
}
@@ -46,8 +25,6 @@ func NewAPI(
server *gin.Engine,
hdb *historydb.HistoryDB,
l2db *l2db.L2DB,
config *Config,
nodeConfig *NodeConfig,
) (*API, error) {
// Check input
// TODO: is stateDB only needed for explorer endpoints or for both?
@@ -57,54 +34,56 @@ func NewAPI(
if explorerEndpoints && hdb == nil {
return nil, tracerr.Wrap(errors.New("cannot serve Explorer endpoints without HistoryDB"))
}
consts, err := hdb.GetConstants()
if err != nil {
return nil, err
}
a := &API{
h: hdb,
cg: &configAPI{
RollupConstants: *newRollupConstants(config.RollupConstants),
AuctionConstants: config.AuctionConstants,
WDelayerConstants: config.WDelayerConstants,
RollupConstants: *newRollupConstants(consts.Rollup),
AuctionConstants: consts.Auction,
WDelayerConstants: consts.WDelayer,
},
l2: l2db,
status: Status{
NodeConfig: *nodeConfig,
},
chainID: config.ChainID,
hermezAddress: config.HermezAddress,
chainID: consts.ChainID,
hermezAddress: consts.HermezAddress,
}
v1 := server.Group("/v1")
// Add coordinator endpoints
if coordinatorEndpoints {
// Account
server.POST("/account-creation-authorization", a.postAccountCreationAuth)
server.GET("/account-creation-authorization/:hezEthereumAddress", a.getAccountCreationAuth)
v1.POST("/account-creation-authorization", a.postAccountCreationAuth)
v1.GET("/account-creation-authorization/:hezEthereumAddress", a.getAccountCreationAuth)
// Transaction
server.POST("/transactions-pool", a.postPoolTx)
server.GET("/transactions-pool/:id", a.getPoolTx)
v1.POST("/transactions-pool", a.postPoolTx)
v1.GET("/transactions-pool/:id", a.getPoolTx)
}
// Add explorer endpoints
if explorerEndpoints {
// Account
server.GET("/accounts", a.getAccounts)
server.GET("/accounts/:accountIndex", a.getAccount)
server.GET("/exits", a.getExits)
server.GET("/exits/:batchNum/:accountIndex", a.getExit)
v1.GET("/accounts", a.getAccounts)
v1.GET("/accounts/:accountIndex", a.getAccount)
v1.GET("/exits", a.getExits)
v1.GET("/exits/:batchNum/:accountIndex", a.getExit)
// Transaction
server.GET("/transactions-history", a.getHistoryTxs)
server.GET("/transactions-history/:id", a.getHistoryTx)
v1.GET("/transactions-history", a.getHistoryTxs)
v1.GET("/transactions-history/:id", a.getHistoryTx)
// Status
server.GET("/batches", a.getBatches)
server.GET("/batches/:batchNum", a.getBatch)
server.GET("/full-batches/:batchNum", a.getFullBatch)
server.GET("/slots", a.getSlots)
server.GET("/slots/:slotNum", a.getSlot)
server.GET("/bids", a.getBids)
server.GET("/state", a.getState)
server.GET("/config", a.getConfig)
server.GET("/tokens", a.getTokens)
server.GET("/tokens/:id", a.getToken)
server.GET("/coordinators", a.getCoordinators)
v1.GET("/batches", a.getBatches)
v1.GET("/batches/:batchNum", a.getBatch)
v1.GET("/full-batches/:batchNum", a.getFullBatch)
v1.GET("/slots", a.getSlots)
v1.GET("/slots/:slotNum", a.getSlot)
v1.GET("/bids", a.getBids)
v1.GET("/state", a.getState)
v1.GET("/config", a.getConfig)
v1.GET("/tokens", a.getTokens)
v1.GET("/tokens/:id", a.getToken)
v1.GET("/coordinators", a.getCoordinators)
}
return a, nil


@@ -19,6 +19,7 @@ import (
ethCommon "github.com/ethereum/go-ethereum/common"
swagger "github.com/getkin/kin-openapi/openapi3filter"
"github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/api/stateapiupdater"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db"
"github.com/hermeznetwork/hermez-node/db/historydb"
@@ -39,8 +40,8 @@ type Pendinger interface {
New() Pendinger
}
const apiAddr = ":4010"
const apiURL = "http://localhost" + apiAddr + "/"
const apiPort = "4010"
const apiURL = "http://localhost:" + apiPort + "/v1/"
var SetBlockchain = `
Type: Blockchain
@@ -180,12 +181,13 @@ type testCommon struct {
auctionVars common.AuctionVariables
rollupVars common.RollupVariables
wdelayerVars common.WDelayerVariables
nextForgers []NextForger
nextForgers []historydb.NextForgerAPI
}
var tc testCommon
var config configAPI
var api *API
var stateAPIUpdater *stateapiupdater.Updater
// TestMain initializes the API server and fills HistoryDB and StateDB with fake data,
// emulating the task of the synchronizer in order to have data to be returned
@@ -206,18 +208,8 @@ func TestMain(m *testing.M) {
if err != nil {
panic(err)
}
// StateDB
dir, err := ioutil.TempDir("", "tmpdb")
if err != nil {
panic(err)
}
defer func() {
if err := os.RemoveAll(dir); err != nil {
panic(err)
}
}()
// L2DB
l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 1000.0, 24*time.Hour, apiConnCon)
test.WipeDB(l2DB.DB()) // this will clean HistoryDB and L2DB
// Config (smart contract constants)
chainID := uint16(0)
@@ -230,22 +222,43 @@ func TestMain(m *testing.M) {
// API
apiGin := gin.Default()
// Reset DB
test.WipeDB(hdb.DB())
constants := &historydb.Constants{
SCConsts: common.SCConsts{
Rollup: _config.RollupConstants,
Auction: _config.AuctionConstants,
WDelayer: _config.WDelayerConstants,
},
ChainID: chainID,
HermezAddress: _config.HermezAddress,
}
if err := hdb.SetConstants(constants); err != nil {
panic(err)
}
nodeConfig := &historydb.NodeConfig{
MaxPoolTxs: 10,
MinFeeUSD: 0,
MaxFeeUSD: 10000000000,
}
if err := hdb.SetNodeConfig(nodeConfig); err != nil {
panic(err)
}
api, err = NewAPI(
true,
true,
apiGin,
hdb,
l2DB,
&_config,
&NodeConfig{
ForgeDelay: 180,
},
)
if err != nil {
log.Error(err)
panic(err)
}
// Start server
listener, err := net.Listen("tcp", apiAddr) //nolint:gosec
listener, err := net.Listen("tcp", ":"+apiPort) //nolint:gosec
if err != nil {
panic(err)
}
@@ -257,9 +270,6 @@ func TestMain(m *testing.M) {
}
}()
// Reset DB
test.WipeDB(api.h.DB())
// Generate blockchain data with til
tcc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
tilCfgExtra := til.ConfigExtra{
@@ -306,7 +316,7 @@ func TestMain(m *testing.M) {
USD: &ethUSD,
USDUpdate: &ethNow,
})
err = api.h.UpdateTokenValue(test.EthToken.Symbol, ethUSD)
err = api.h.UpdateTokenValue(common.EmptyAddr, ethUSD)
if err != nil {
panic(err)
}
@@ -333,7 +343,7 @@ func TestMain(m *testing.M) {
token.USD = &value
token.USDUpdate = &now
// Set value in DB
err = api.h.UpdateTokenValue(token.Symbol, value)
err = api.h.UpdateTokenValue(token.EthAddr, value)
if err != nil {
panic(err)
}
@@ -460,19 +470,19 @@ func TestMain(m *testing.M) {
if err = api.h.AddBids(bids); err != nil {
panic(err)
}
bootForger := NextForger{
bootForger := historydb.NextForgerAPI{
Coordinator: historydb.CoordinatorAPI{
Forger: auctionVars.BootCoordinator,
URL: auctionVars.BootCoordinatorURL,
},
}
// Set next forgers: set all as boot coordinator then replace the non boot coordinators
nextForgers := []NextForger{}
nextForgers := []historydb.NextForgerAPI{}
var initBlock int64 = 140
var deltaBlocks int64 = 40
for i := 1; i < int(auctionVars.ClosedAuctionSlots)+2; i++ {
fromBlock := initBlock + deltaBlocks*int64(i-1)
bootForger.Period = Period{
bootForger.Period = historydb.Period{
SlotNum: int64(i),
FromBlock: fromBlock,
ToBlock: fromBlock + deltaBlocks - 1,
@@ -512,7 +522,13 @@ func TestMain(m *testing.M) {
WithdrawalDelay: uint64(3000),
}
// Generate test data, as expected to be received/sent from/to the API
stateAPIUpdater = stateapiupdater.NewUpdater(hdb, nodeConfig, &common.SCVariables{
Rollup: rollupVars,
Auction: auctionVars,
WDelayer: wdelayerVars,
}, constants)
// Generate test data, as expected to be received/sent from/to the API
testCoords := genTestCoordinators(commonCoords)
testBids := genTestBids(commonBlocks, testCoords, bids)
testExits := genTestExits(commonExitTree, testTokens, commonAccounts)
@@ -589,27 +605,24 @@ func TestMain(m *testing.M) {
if err := database.Close(); err != nil {
panic(err)
}
if err := os.RemoveAll(dir); err != nil {
panic(err)
}
os.Exit(result)
}
func TestTimeout(t *testing.T) {
pass := os.Getenv("POSTGRES_PASS")
databaseTO, err := db.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
databaseTO, err := db.ConnectSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
apiConnConTO := db.NewAPIConnectionController(1, 100*time.Millisecond)
hdbTO := historydb.NewHistoryDB(databaseTO, databaseTO, apiConnConTO)
require.NoError(t, err)
// L2DB
l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 24*time.Hour, apiConnConTO)
l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 1000.0, 24*time.Hour, apiConnConTO)
// API
apiGinTO := gin.Default()
finishWait := make(chan interface{})
startWait := make(chan interface{})
apiGinTO.GET("/wait", func(c *gin.Context) {
apiGinTO.GET("/v1/wait", func(c *gin.Context) {
cancel, err := apiConnConTO.Acquire()
defer cancel()
require.NoError(t, err)
@@ -627,24 +640,19 @@ func TestTimeout(t *testing.T) {
require.NoError(t, err)
}
}()
_config := getConfigTest(0)
_, err = NewAPI(
true,
true,
apiGinTO,
hdbTO,
l2DBTO,
&_config,
&NodeConfig{
ForgeDelay: 180,
},
)
require.NoError(t, err)
client := &http.Client{}
httpReq, err := http.NewRequest("GET", "http://localhost:4444/tokens", nil)
httpReq, err := http.NewRequest("GET", "http://localhost:4444/v1/tokens", nil)
require.NoError(t, err)
httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/wait", nil)
httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/v1/wait", nil)
require.NoError(t, err)
// Request that will get timed out
var wg sync.WaitGroup

View File

@@ -4,7 +4,6 @@ import (
"database/sql/driver"
"encoding/base64"
"encoding/hex"
"encoding/json"
"errors"
"fmt"
"math/big"
@@ -19,7 +18,10 @@ import (
// BigIntStr is used to scan/value *big.Int directly into strings from/to sql DBs.
// It assumes that *big.Int are inserted/fetched to/from the DB using the BigIntMeddler meddler
// defined at github.com/hermeznetwork/hermez-node/db
// defined at github.com/hermeznetwork/hermez-node/db. Since *big.Int is
// stored as DECIMAL in SQL, there's no need to implement Scan()/Value()
// because DECIMALS are encoded/decoded as strings by the sql driver, and
// BigIntStr is already a string.
type BigIntStr string
// NewBigIntStr creates a *BigIntStr from a *big.Int.
@@ -32,34 +34,6 @@ func NewBigIntStr(bigInt *big.Int) *BigIntStr {
return &bigIntStr
}
// Scan implements Scanner for database/sql
func (b *BigIntStr) Scan(src interface{}) error {
srcBytes, ok := src.([]byte)
if !ok {
return tracerr.Wrap(fmt.Errorf("can't scan %T into apitypes.BigIntStr", src))
}
// bytes to *big.Int
bigInt := new(big.Int).SetBytes(srcBytes)
// *big.Int to BigIntStr
bigIntStr := NewBigIntStr(bigInt)
if bigIntStr == nil {
return nil
}
*b = *bigIntStr
return nil
}
// Value implements valuer for database/sql
func (b BigIntStr) Value() (driver.Value, error) {
// string to *big.Int
bigInt, ok := new(big.Int).SetString(string(b), 10)
if !ok || bigInt == nil {
return nil, tracerr.Wrap(errors.New("invalid representation of a *big.Int"))
}
// *big.Int to bytes
return bigInt.Bytes(), nil
}
// StrBigInt is used to unmarshal BigIntStr directly into an alias of big.Int
type StrBigInt big.Int
@@ -73,22 +47,16 @@ func (s *StrBigInt) UnmarshalText(text []byte) error {
return nil
}
// CollectedFees is used to retrieve common.batch.CollectedFee from the DB
type CollectedFees map[common.TokenID]BigIntStr
// CollectedFeesAPI is used to send common.batch.CollectedFee through the API
type CollectedFeesAPI map[common.TokenID]BigIntStr
// UnmarshalJSON unmarshals a json representation of map[common.TokenID]*big.Int
func (c *CollectedFees) UnmarshalJSON(text []byte) error {
bigIntMap := make(map[common.TokenID]*big.Int)
if err := json.Unmarshal(text, &bigIntMap); err != nil {
return tracerr.Wrap(err)
// NewCollectedFeesAPI creates a new CollectedFeesAPI from a *big.Int map
func NewCollectedFeesAPI(m map[common.TokenID]*big.Int) CollectedFeesAPI {
c := CollectedFeesAPI(make(map[common.TokenID]BigIntStr))
for k, v := range m {
c[k] = *NewBigIntStr(v)
}
*c = CollectedFees(make(map[common.TokenID]BigIntStr))
for k, v := range bigIntMap {
bStr := NewBigIntStr(v)
(CollectedFees(*c)[k]) = *bStr
}
// *c = CollectedFees(bStrMap)
return nil
return c
}
// HezEthAddr is used to scan/value Ethereum Address directly into strings that follow the Ethereum address hez format (^hez:0x[a-fA-F0-9]{40}$) from/to sql DBs.

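With `Scan`/`Value` removed, conversion to the API string form now happens explicitly at construction time rather than at SQL or JSON decode time. A small self-contained sketch of the two surviving helpers (the printed output is an assumption based on Go's default JSON encoding of integer map keys and string values):

```go
package main

import (
	"encoding/json"
	"fmt"
	"math/big"

	"github.com/hermeznetwork/hermez-node/api/apitypes"
	"github.com/hermeznetwork/hermez-node/common"
)

func main() {
	// NewCollectedFeesAPI replaces the removed CollectedFees.UnmarshalJSON
	// path: each *big.Int is turned into its decimal string form up front.
	fees := apitypes.NewCollectedFeesAPI(map[common.TokenID]*big.Int{
		common.TokenID(0): big.NewInt(42),
		common.TokenID(1): new(big.Int).Lsh(big.NewInt(1), 70), // exceeds uint64
	})
	out, err := json.Marshal(fees)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"0":"42","1":"1180591620717411303424"}
}
```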
View File

@@ -7,10 +7,12 @@ import (
"time"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/mitchellh/copystructure"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type testBatch struct {
@@ -20,7 +22,7 @@ type testBatch struct {
EthBlockHash ethCommon.Hash `json:"ethereumBlockHash"`
Timestamp time.Time `json:"timestamp"`
ForgerAddr ethCommon.Address `json:"forgerAddr"`
CollectedFees map[common.TokenID]string `json:"collectedFees"`
CollectedFees apitypes.CollectedFeesAPI `json:"collectedFees"`
TotalFeesUSD *float64 `json:"historicTotalCollectedFeesUSD"`
StateRoot string `json:"stateRoot"`
NumAccounts int `json:"numAccounts"`
@@ -73,9 +75,9 @@ func genTestBatches(
if !found {
panic("block not found")
}
collectedFees := make(map[common.TokenID]string)
collectedFees := apitypes.CollectedFeesAPI(make(map[common.TokenID]apitypes.BigIntStr))
for k, v := range cBatches[i].CollectedFees {
collectedFees[k] = v.String()
collectedFees[k] = *apitypes.NewBigIntStr(v)
}
forgedTxs := 0
for _, tx := range txs {
@@ -132,7 +134,7 @@ func TestGetBatches(t *testing.T) {
limit := 3
path := fmt.Sprintf("%s?limit=%d", endpoint, limit)
err := doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
assertBatches(t, tc.batches, fetchedBatches)
// minBatchNum
@@ -141,7 +143,7 @@ func TestGetBatches(t *testing.T) {
minBatchNum := tc.batches[len(tc.batches)/2].BatchNum
path = fmt.Sprintf("%s?minBatchNum=%d&limit=%d", endpoint, minBatchNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
minBatchNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].BatchNum > minBatchNum {
@@ -156,7 +158,7 @@ func TestGetBatches(t *testing.T) {
maxBatchNum := tc.batches[len(tc.batches)/2].BatchNum
path = fmt.Sprintf("%s?maxBatchNum=%d&limit=%d", endpoint, maxBatchNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
maxBatchNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].BatchNum < maxBatchNum {
@@ -171,7 +173,7 @@ func TestGetBatches(t *testing.T) {
slotNum := tc.batches[len(tc.batches)/2].SlotNum
path = fmt.Sprintf("%s?slotNum=%d&limit=%d", endpoint, slotNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
slotNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].SlotNum == slotNum {
@@ -186,7 +188,7 @@ func TestGetBatches(t *testing.T) {
forgerAddr := tc.batches[len(tc.batches)/2].ForgerAddr
path = fmt.Sprintf("%s?forgerAddr=%s&limit=%d", endpoint, forgerAddr.String(), limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
forgerAddrBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].ForgerAddr == forgerAddr {
@@ -200,7 +202,7 @@ func TestGetBatches(t *testing.T) {
limit = 6
path = fmt.Sprintf("%s?limit=%d", endpoint, limit)
err = doGoodReqPaginated(path, historydb.OrderDesc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
flippedBatches := []testBatch{}
for i := len(tc.batches) - 1; i >= 0; i-- {
flippedBatches = append(flippedBatches, tc.batches[i])
@@ -214,7 +216,7 @@ func TestGetBatches(t *testing.T) {
minBatchNum = tc.batches[len(tc.batches)/4].BatchNum
path = fmt.Sprintf("%s?minBatchNum=%d&maxBatchNum=%d&limit=%d", endpoint, minBatchNum, maxBatchNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
minMaxBatchNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].BatchNum < maxBatchNum && tc.batches[i].BatchNum > minBatchNum {
@@ -227,25 +229,25 @@ func TestGetBatches(t *testing.T) {
fetchedBatches = []testBatch{}
path = fmt.Sprintf("%s?slotNum=%d&minBatchNum=%d", endpoint, 1, 25)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
assert.NoError(t, err)
require.NoError(t, err)
assertBatches(t, []testBatch{}, fetchedBatches)
// 400
// Invalid minBatchNum
path = fmt.Sprintf("%s?minBatchNum=%d", endpoint, -2)
err = doBadReq("GET", path, nil, 400)
assert.NoError(t, err)
require.NoError(t, err)
// Invalid forgerAddr
path = fmt.Sprintf("%s?forgerAddr=%s", endpoint, "0xG0000001")
err = doBadReq("GET", path, nil, 400)
assert.NoError(t, err)
require.NoError(t, err)
}
func TestGetBatch(t *testing.T) {
endpoint := apiURL + "batches/"
for _, batch := range tc.batches {
fetchedBatch := testBatch{}
assert.NoError(
require.NoError(
t, doGoodReq(
"GET",
endpoint+strconv.Itoa(int(batch.BatchNum)),
@@ -255,16 +257,16 @@ func TestGetBatch(t *testing.T) {
assertBatch(t, batch, fetchedBatch)
}
// 400
assert.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
require.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
// 404
assert.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
require.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
}
func TestGetFullBatch(t *testing.T) {
endpoint := apiURL + "full-batches/"
for _, fullBatch := range tc.fullBatches {
fetchedFullBatch := testFullBatch{}
assert.NoError(
require.NoError(
t, doGoodReq(
"GET",
endpoint+strconv.Itoa(int(fullBatch.Batch.BatchNum)),
@@ -275,9 +277,9 @@ func TestGetFullBatch(t *testing.T) {
assertTxs(t, fullBatch.Txs, fetchedFullBatch.Txs)
}
// 400
assert.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
require.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
// 404
assert.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
require.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
}
func assertBatches(t *testing.T, expected, actual []testBatch) {

View File

@@ -4,7 +4,7 @@ import (
"fmt"
"testing"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/mitchellh/copystructure"

View File

@@ -99,7 +99,9 @@ func TestGetSlot(t *testing.T) {
nil, &fetchedSlot,
),
)
emptySlot := api.getEmptyTestSlot(slotNum, api.status.Network.LastSyncBlock, tc.auctionVars)
// ni, err := api.h.GetNodeInfoAPI()
// assert.NoError(t, err)
emptySlot := api.getEmptyTestSlot(slotNum, 0, tc.auctionVars)
assertSlot(t, emptySlot, fetchedSlot)
// Invalid slotNum
@@ -127,8 +129,10 @@ func TestGetSlots(t *testing.T) {
err := doGoodReqPaginated(path, historydb.OrderAsc, &testSlotsResponse{}, appendIter)
assert.NoError(t, err)
allSlots := tc.slots
// ni, err := api.h.GetNodeInfoAPI()
// assert.NoError(t, err)
for i := tc.slots[len(tc.slots)-1].SlotNum; i < maxSlotNum; i++ {
emptySlot := api.getEmptyTestSlot(i+1, api.status.Network.LastSyncBlock, tc.auctionVars)
emptySlot := api.getEmptyTestSlot(i+1, 0, tc.auctionVars)
allSlots = append(allSlots, emptySlot)
}
assertSlots(t, allSlots, fetchedSlots)

View File

@@ -1,320 +1,16 @@
package api
import (
"database/sql"
"fmt"
"math"
"math/big"
"net/http"
"time"
"github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/tracerr"
)
// Network defines the status of the network
type Network struct {
LastEthBlock int64 `json:"lastEthereumBlock"`
LastSyncBlock int64 `json:"lastSynchedBlock"`
LastBatch *historydb.BatchAPI `json:"lastBatch"`
CurrentSlot int64 `json:"currentSlot"`
NextForgers []NextForger `json:"nextForgers"`
}
// NodeConfig is the configuration of the node that is exposed via API
type NodeConfig struct {
// ForgeDelay in seconds
ForgeDelay float64 `json:"forgeDelay"`
}
// NextForger is a representation of the information of a coordinator and the period it will forge
type NextForger struct {
Coordinator historydb.CoordinatorAPI `json:"coordinator"`
Period Period `json:"period"`
}
// Period is a representation of a period
type Period struct {
SlotNum int64 `json:"slotNum"`
FromBlock int64 `json:"fromBlock"`
ToBlock int64 `json:"toBlock"`
FromTimestamp time.Time `json:"fromTimestamp"`
ToTimestamp time.Time `json:"toTimestamp"`
}
func (a *API) getState(c *gin.Context) {
// TODO: There are no events for the buckets information, so now this information will be 0
a.status.RLock()
status := a.status //nolint
a.status.RUnlock()
c.JSON(http.StatusOK, status) //nolint
}
// SC Vars
// SetRollupVariables set Status.Rollup variables
func (a *API) SetRollupVariables(rollupVariables common.RollupVariables) {
a.status.Lock()
var rollupVAPI historydb.RollupVariablesAPI
rollupVAPI.EthBlockNum = rollupVariables.EthBlockNum
rollupVAPI.FeeAddToken = apitypes.NewBigIntStr(rollupVariables.FeeAddToken)
rollupVAPI.ForgeL1L2BatchTimeout = rollupVariables.ForgeL1L2BatchTimeout
rollupVAPI.WithdrawalDelay = rollupVariables.WithdrawalDelay
for i, bucket := range rollupVariables.Buckets {
var apiBucket historydb.BucketParamsAPI
apiBucket.CeilUSD = apitypes.NewBigIntStr(bucket.CeilUSD)
apiBucket.Withdrawals = apitypes.NewBigIntStr(bucket.Withdrawals)
apiBucket.BlockWithdrawalRate = apitypes.NewBigIntStr(bucket.BlockWithdrawalRate)
apiBucket.MaxWithdrawals = apitypes.NewBigIntStr(bucket.MaxWithdrawals)
rollupVAPI.Buckets[i] = apiBucket
}
rollupVAPI.SafeMode = rollupVariables.SafeMode
a.status.Rollup = rollupVAPI
a.status.Unlock()
}
// SetWDelayerVariables set Status.WithdrawalDelayer variables
func (a *API) SetWDelayerVariables(wDelayerVariables common.WDelayerVariables) {
a.status.Lock()
a.status.WithdrawalDelayer = wDelayerVariables
a.status.Unlock()
}
// SetAuctionVariables set Status.Auction variables
func (a *API) SetAuctionVariables(auctionVariables common.AuctionVariables) {
a.status.Lock()
var auctionAPI historydb.AuctionVariablesAPI
auctionAPI.EthBlockNum = auctionVariables.EthBlockNum
auctionAPI.DonationAddress = auctionVariables.DonationAddress
auctionAPI.BootCoordinator = auctionVariables.BootCoordinator
auctionAPI.BootCoordinatorURL = auctionVariables.BootCoordinatorURL
auctionAPI.DefaultSlotSetBidSlotNum = auctionVariables.DefaultSlotSetBidSlotNum
auctionAPI.ClosedAuctionSlots = auctionVariables.ClosedAuctionSlots
auctionAPI.OpenAuctionSlots = auctionVariables.OpenAuctionSlots
auctionAPI.Outbidding = auctionVariables.Outbidding
auctionAPI.SlotDeadline = auctionVariables.SlotDeadline
for i, slot := range auctionVariables.DefaultSlotSetBid {
auctionAPI.DefaultSlotSetBid[i] = apitypes.NewBigIntStr(slot)
}
for i, ratio := range auctionVariables.AllocationRatio {
auctionAPI.AllocationRatio[i] = ratio
}
a.status.Auction = auctionAPI
a.status.Unlock()
}
// Network
// UpdateNetworkInfoBlock update Status.Network block related information
func (a *API) UpdateNetworkInfoBlock(
lastEthBlock, lastSyncBlock common.Block,
) {
a.status.Network.LastSyncBlock = lastSyncBlock.Num
a.status.Network.LastEthBlock = lastEthBlock.Num
}
// UpdateNetworkInfo update Status.Network information
func (a *API) UpdateNetworkInfo(
lastEthBlock, lastSyncBlock common.Block,
lastBatchNum common.BatchNum, currentSlot int64,
) error {
lastBatch, err := a.h.GetBatchAPI(lastBatchNum)
if tracerr.Unwrap(err) == sql.ErrNoRows {
lastBatch = nil
} else if err != nil {
return tracerr.Wrap(err)
}
lastClosedSlot := currentSlot + int64(a.status.Auction.ClosedAuctionSlots)
nextForgers, err := a.getNextForgers(lastSyncBlock, currentSlot, lastClosedSlot)
if tracerr.Unwrap(err) == sql.ErrNoRows {
nextForgers = nil
} else if err != nil {
return tracerr.Wrap(err)
}
a.status.Lock()
a.status.Network.LastSyncBlock = lastSyncBlock.Num
a.status.Network.LastEthBlock = lastEthBlock.Num
a.status.Network.LastBatch = lastBatch
a.status.Network.CurrentSlot = currentSlot
a.status.Network.NextForgers = nextForgers
// Update buckets withdrawals
bucketsUpdate, err := a.h.GetBucketUpdatesAPI()
if tracerr.Unwrap(err) == sql.ErrNoRows {
bucketsUpdate = nil
} else if err != nil {
return tracerr.Wrap(err)
}
for i, bucketParams := range a.status.Rollup.Buckets {
for _, bucketUpdate := range bucketsUpdate {
if bucketUpdate.NumBucket == i {
bucketParams.Withdrawals = bucketUpdate.Withdrawals
a.status.Rollup.Buckets[i] = bucketParams
break
}
}
}
a.status.Unlock()
return nil
}
// apiSlotToBigInts converts from [6]*apitypes.BigIntStr to [6]*big.Int
func apiSlotToBigInts(defaultSlotSetBid [6]*apitypes.BigIntStr) ([6]*big.Int, error) {
var slots [6]*big.Int
for i, slot := range defaultSlotSetBid {
bigInt, ok := new(big.Int).SetString(string(*slot), 10)
if !ok {
return slots, tracerr.Wrap(fmt.Errorf("can't convert %T into big.Int", slot))
}
slots[i] = bigInt
}
return slots, nil
}
// getNextForgers returns next forgers
func (a *API) getNextForgers(lastBlock common.Block, currentSlot, lastClosedSlot int64) ([]NextForger, error) {
secondsPerBlock := int64(15) //nolint:gomnd
// currentSlot and lastClosedSlot included
limit := uint(lastClosedSlot - currentSlot + 1)
bids, _, err := a.h.GetBestBidsAPI(&currentSlot, &lastClosedSlot, nil, &limit, "ASC")
if err != nil && tracerr.Unwrap(err) != sql.ErrNoRows {
return nil, tracerr.Wrap(err)
}
nextForgers := []NextForger{}
// Get min bid info
var minBidInfo []historydb.MinBidInfo
if currentSlot >= a.status.Auction.DefaultSlotSetBidSlotNum {
// All min bids can be calculated with the last update of AuctionVariables
bigIntSlots, err := apiSlotToBigInts(a.status.Auction.DefaultSlotSetBid)
stateAPI, err := a.h.GetStateAPI()
if err != nil {
return nil, tracerr.Wrap(err)
retBadReq(err, c)
return
}
minBidInfo = []historydb.MinBidInfo{{
DefaultSlotSetBid: bigIntSlots,
DefaultSlotSetBidSlotNum: a.status.Auction.DefaultSlotSetBidSlotNum,
}}
} else {
// Get all the relevant updates from the DB
minBidInfo, err = a.h.GetAuctionVarsUntilSetSlotNumAPI(lastClosedSlot, int(lastClosedSlot-currentSlot)+1)
if err != nil {
return nil, tracerr.Wrap(err)
}
}
// Create nextForger for each slot
for i := currentSlot; i <= lastClosedSlot; i++ {
fromBlock := i*int64(a.cg.AuctionConstants.BlocksPerSlot) + a.cg.AuctionConstants.GenesisBlockNum
toBlock := (i+1)*int64(a.cg.AuctionConstants.BlocksPerSlot) + a.cg.AuctionConstants.GenesisBlockNum - 1
nextForger := NextForger{
Period: Period{
SlotNum: i,
FromBlock: fromBlock,
ToBlock: toBlock,
FromTimestamp: lastBlock.Timestamp.Add(time.Second * time.Duration(secondsPerBlock*(fromBlock-lastBlock.Num))),
ToTimestamp: lastBlock.Timestamp.Add(time.Second * time.Duration(secondsPerBlock*(toBlock-lastBlock.Num))),
},
}
foundForger := false
// If there is a bid for a slot, get forger (coordinator)
for j := range bids {
slotNum := bids[j].SlotNum
if slotNum == i {
// There's a bid for the slot
// Check if the bid is greater than the minimum required
for i := 0; i < len(minBidInfo); i++ {
// Find the most recent update
if slotNum >= minBidInfo[i].DefaultSlotSetBidSlotNum {
// Get min bid
minBidSelector := slotNum % int64(len(a.status.Auction.DefaultSlotSetBid))
minBid := minBidInfo[i].DefaultSlotSetBid[minBidSelector]
// Check if the bid has beaten the minimum
bid, ok := new(big.Int).SetString(string(bids[j].BidValue), 10)
if !ok {
return nil, tracerr.New("Wrong bid value, error parsing it as big.Int")
}
if minBid.Cmp(bid) == 1 {
// Min bid is greater than bid, the slot will be forged by boot coordinator
break
}
foundForger = true
break
}
}
if !foundForger { // There is no bid or it's smaller than the minimum
break
}
coordinator, err := a.h.GetCoordinatorAPI(bids[j].Bidder)
if err != nil {
return nil, tracerr.Wrap(err)
}
nextForger.Coordinator = *coordinator
break
}
}
// If there is no bid, the coordinator that will forge is boot coordinator
if !foundForger {
nextForger.Coordinator = historydb.CoordinatorAPI{
Forger: a.status.Auction.BootCoordinator,
URL: a.status.Auction.BootCoordinatorURL,
}
}
nextForgers = append(nextForgers, nextForger)
}
return nextForgers, nil
}
// Metrics
// UpdateMetrics update Status.Metrics information
func (a *API) UpdateMetrics() error {
a.status.RLock()
if a.status.Network.LastBatch == nil {
a.status.RUnlock()
return nil
}
batchNum := a.status.Network.LastBatch.BatchNum
a.status.RUnlock()
metrics, err := a.h.GetMetricsAPI(batchNum)
if err != nil {
return tracerr.Wrap(err)
}
a.status.Lock()
a.status.Metrics = *metrics
a.status.Unlock()
return nil
}
// Recommended fee
// UpdateRecommendedFee update Status.RecommendedFee information
func (a *API) UpdateRecommendedFee() error {
feeExistingAccount, err := a.h.GetAvgTxFeeAPI()
if err != nil {
return tracerr.Wrap(err)
}
var minFeeUSD float64
if a.l2 != nil {
minFeeUSD = a.l2.MinFeeUSD()
}
a.status.Lock()
a.status.RecommendedFee.ExistingAccount =
math.Max(feeExistingAccount, minFeeUSD)
a.status.RecommendedFee.CreatesAccount =
math.Max(createAccountExtraFeePercentage*feeExistingAccount, minFeeUSD)
a.status.RecommendedFee.CreatesAccountAndRegister =
math.Max(createAccountInternalExtraFeePercentage*feeExistingAccount, minFeeUSD)
a.status.Unlock()
return nil
c.JSON(http.StatusOK, stateAPI)
}

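The hunk above interleaves the deleted per-field updaters with the few lines that were added; pieced together from those fragments, the new `getState` presumably reduces to a plain read of the precomputed state (`retBadReq` being the package's existing error responder):

```go
// getState serves GET /v1/state straight from the HistoryDB: all the
// aggregation now happens ahead of time in the stateapiupdater package.
func (a *API) getState(c *gin.Context) {
	stateAPI, err := a.h.GetStateAPI()
	if err != nil {
		retBadReq(err, c)
		return
	}
	c.JSON(http.StatusOK, stateAPI)
}
```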
View File

@@ -4,7 +4,7 @@ import (
"math/big"
"testing"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/stretchr/testify/assert"
@@ -13,7 +13,7 @@ import (
type testStatus struct {
Network testNetwork `json:"network"`
Metrics historydb.Metrics `json:"metrics"`
Metrics historydb.MetricsAPI `json:"metrics"`
Rollup historydb.RollupVariablesAPI `json:"rollup"`
Auction historydb.AuctionVariablesAPI `json:"auction"`
WithdrawalDelayer common.WDelayerVariables `json:"withdrawalDelayer"`
@@ -25,14 +25,15 @@ type testNetwork struct {
LastSyncBlock int64 `json:"lastSynchedBlock"`
LastBatch testBatch `json:"lastBatch"`
CurrentSlot int64 `json:"currentSlot"`
NextForgers []NextForger `json:"nextForgers"`
NextForgers []historydb.NextForgerAPI `json:"nextForgers"`
}
func TestSetRollupVariables(t *testing.T) {
rollupVars := &common.RollupVariables{}
assertEqualRollupVariables(t, *rollupVars, api.status.Rollup, true)
api.SetRollupVariables(tc.rollupVars)
assertEqualRollupVariables(t, tc.rollupVars, api.status.Rollup, true)
stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{Rollup: &tc.rollupVars})
require.NoError(t, stateAPIUpdater.Store())
ni, err := api.h.GetNodeInfoAPI()
require.NoError(t, err)
assertEqualRollupVariables(t, tc.rollupVars, ni.StateAPI.Rollup, true)
}
func assertEqualRollupVariables(t *testing.T, rollupVariables common.RollupVariables, apiVariables historydb.RollupVariablesAPI, checkBuckets bool) {
@@ -51,17 +52,19 @@ func assertEqualRollupVariables(t *testing.T, rollupVariables common.RollupVaria
}
func TestSetWDelayerVariables(t *testing.T) {
wdelayerVars := &common.WDelayerVariables{}
assert.Equal(t, *wdelayerVars, api.status.WithdrawalDelayer)
api.SetWDelayerVariables(tc.wdelayerVars)
assert.Equal(t, tc.wdelayerVars, api.status.WithdrawalDelayer)
stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{WDelayer: &tc.wdelayerVars})
require.NoError(t, stateAPIUpdater.Store())
ni, err := api.h.GetNodeInfoAPI()
require.NoError(t, err)
assert.Equal(t, tc.wdelayerVars, ni.StateAPI.WithdrawalDelayer)
}
func TestSetAuctionVariables(t *testing.T) {
auctionVars := &common.AuctionVariables{}
assertEqualAuctionVariables(t, *auctionVars, api.status.Auction)
api.SetAuctionVariables(tc.auctionVars)
assertEqualAuctionVariables(t, tc.auctionVars, api.status.Auction)
stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{Auction: &tc.auctionVars})
require.NoError(t, stateAPIUpdater.Store())
ni, err := api.h.GetNodeInfoAPI()
require.NoError(t, err)
assertEqualAuctionVariables(t, tc.auctionVars, ni.StateAPI.Auction)
}
func assertEqualAuctionVariables(t *testing.T, auctionVariables common.AuctionVariables, apiVariables historydb.AuctionVariablesAPI) {
@@ -85,11 +88,6 @@ func assertEqualAuctionVariables(t *testing.T, auctionVariables common.AuctionVa
}
func TestUpdateNetworkInfo(t *testing.T) {
status := &Network{}
assert.Equal(t, status.LastSyncBlock, api.status.Network.LastSyncBlock)
assert.Equal(t, status.LastBatch, api.status.Network.LastBatch)
assert.Equal(t, status.CurrentSlot, api.status.Network.CurrentSlot)
assert.Equal(t, status.NextForgers, api.status.Network.NextForgers)
lastBlock := tc.blocks[3]
lastBatchNum := common.BatchNum(3)
currentSlotNum := int64(1)
@@ -118,14 +116,17 @@ func TestUpdateNetworkInfo(t *testing.T) {
err := api.h.AddBucketUpdatesTest(api.h.DB(), bucketUpdates)
require.NoError(t, err)
err = api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
assert.NoError(t, err)
assert.Equal(t, lastBlock.Num, api.status.Network.LastSyncBlock)
assert.Equal(t, lastBatchNum, api.status.Network.LastBatch.BatchNum)
assert.Equal(t, currentSlotNum, api.status.Network.CurrentSlot)
assert.Equal(t, int(api.status.Auction.ClosedAuctionSlots)+1, len(api.status.Network.NextForgers))
assert.Equal(t, api.status.Rollup.Buckets[0].Withdrawals, apitypes.NewBigIntStr(big.NewInt(123)))
assert.Equal(t, api.status.Rollup.Buckets[2].Withdrawals, apitypes.NewBigIntStr(big.NewInt(43)))
err = stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
require.NoError(t, err)
require.NoError(t, stateAPIUpdater.Store())
ni, err := api.h.GetNodeInfoAPI()
require.NoError(t, err)
assert.Equal(t, lastBlock.Num, ni.StateAPI.Network.LastSyncBlock)
assert.Equal(t, lastBatchNum, ni.StateAPI.Network.LastBatch.BatchNum)
assert.Equal(t, currentSlotNum, ni.StateAPI.Network.CurrentSlot)
assert.Equal(t, int(ni.StateAPI.Auction.ClosedAuctionSlots)+1, len(ni.StateAPI.Network.NextForgers))
assert.Equal(t, ni.StateAPI.Rollup.Buckets[0].Withdrawals, apitypes.NewBigIntStr(big.NewInt(123)))
assert.Equal(t, ni.StateAPI.Rollup.Buckets[2].Withdrawals, apitypes.NewBigIntStr(big.NewInt(43)))
}
func TestUpdateMetrics(t *testing.T) {
@@ -133,51 +134,62 @@ func TestUpdateMetrics(t *testing.T) {
lastBlock := tc.blocks[3]
lastBatchNum := common.BatchNum(12)
currentSlotNum := int64(1)
err := api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
assert.NoError(t, err)
err := stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
require.NoError(t, err)
err = api.UpdateMetrics()
assert.NoError(t, err)
assert.Greater(t, api.status.Metrics.TransactionsPerBatch, float64(0))
assert.Greater(t, api.status.Metrics.BatchFrequency, float64(0))
assert.Greater(t, api.status.Metrics.TransactionsPerSecond, float64(0))
assert.Greater(t, api.status.Metrics.TotalAccounts, int64(0))
assert.Greater(t, api.status.Metrics.TotalBJJs, int64(0))
assert.Greater(t, api.status.Metrics.AvgTransactionFee, float64(0))
err = stateAPIUpdater.UpdateMetrics()
require.NoError(t, err)
require.NoError(t, stateAPIUpdater.Store())
ni, err := api.h.GetNodeInfoAPI()
require.NoError(t, err)
assert.Greater(t, ni.StateAPI.Metrics.TransactionsPerBatch, float64(0))
assert.Greater(t, ni.StateAPI.Metrics.BatchFrequency, float64(0))
assert.Greater(t, ni.StateAPI.Metrics.TransactionsPerSecond, float64(0))
assert.Greater(t, ni.StateAPI.Metrics.TokenAccounts, int64(0))
assert.Greater(t, ni.StateAPI.Metrics.Wallets, int64(0))
assert.Greater(t, ni.StateAPI.Metrics.AvgTransactionFee, float64(0))
}
func TestUpdateRecommendedFee(t *testing.T) {
err := api.UpdateRecommendedFee()
assert.NoError(t, err)
err := stateAPIUpdater.UpdateRecommendedFee()
require.NoError(t, err)
require.NoError(t, stateAPIUpdater.Store())
var minFeeUSD float64
if api.l2 != nil {
minFeeUSD = api.l2.MinFeeUSD()
}
assert.Greater(t, api.status.RecommendedFee.ExistingAccount, minFeeUSD)
assert.Equal(t, api.status.RecommendedFee.CreatesAccount,
api.status.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
assert.Equal(t, api.status.RecommendedFee.CreatesAccountAndRegister,
api.status.RecommendedFee.ExistingAccount*createAccountInternalExtraFeePercentage)
ni, err := api.h.GetNodeInfoAPI()
require.NoError(t, err)
assert.Greater(t, ni.StateAPI.RecommendedFee.ExistingAccount, minFeeUSD)
assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccount,
ni.StateAPI.RecommendedFee.ExistingAccount*
historydb.CreateAccountExtraFeePercentage)
assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccountInternal,
ni.StateAPI.RecommendedFee.ExistingAccount*
historydb.CreateAccountInternalExtraFeePercentage)
}
func TestGetState(t *testing.T) {
lastBlock := tc.blocks[3]
lastBatchNum := common.BatchNum(12)
currentSlotNum := int64(1)
api.SetRollupVariables(tc.rollupVars)
api.SetWDelayerVariables(tc.wdelayerVars)
api.SetAuctionVariables(tc.auctionVars)
err := api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
assert.NoError(t, err)
err = api.UpdateMetrics()
assert.NoError(t, err)
err = api.UpdateRecommendedFee()
assert.NoError(t, err)
stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{
Rollup: &tc.rollupVars,
Auction: &tc.auctionVars,
WDelayer: &tc.wdelayerVars,
})
err := stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
require.NoError(t, err)
err = stateAPIUpdater.UpdateMetrics()
require.NoError(t, err)
err = stateAPIUpdater.UpdateRecommendedFee()
require.NoError(t, err)
require.NoError(t, stateAPIUpdater.Store())
endpoint := apiURL + "state"
var status testStatus
assert.NoError(t, doGoodReq("GET", endpoint, nil, &status))
require.NoError(t, doGoodReq("GET", endpoint, nil, &status))
// SC vars
// UpdateNetworkInfo will overwrite buckets withdrawal values
@@ -198,19 +210,21 @@ func TestGetState(t *testing.T) {
assert.Greater(t, status.Metrics.TransactionsPerBatch, float64(0))
assert.Greater(t, status.Metrics.BatchFrequency, float64(0))
assert.Greater(t, status.Metrics.TransactionsPerSecond, float64(0))
assert.Greater(t, status.Metrics.TotalAccounts, int64(0))
assert.Greater(t, status.Metrics.TotalBJJs, int64(0))
assert.Greater(t, status.Metrics.TokenAccounts, int64(0))
assert.Greater(t, status.Metrics.Wallets, int64(0))
assert.Greater(t, status.Metrics.AvgTransactionFee, float64(0))
// Recommended fee
// TODO: perform real asserts (not just greater than 0)
assert.Greater(t, status.RecommendedFee.ExistingAccount, float64(0))
assert.Equal(t, status.RecommendedFee.CreatesAccount,
status.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
assert.Equal(t, status.RecommendedFee.CreatesAccountAndRegister,
status.RecommendedFee.ExistingAccount*createAccountInternalExtraFeePercentage)
status.RecommendedFee.ExistingAccount*
historydb.CreateAccountExtraFeePercentage)
assert.Equal(t, status.RecommendedFee.CreatesAccountInternal,
status.RecommendedFee.ExistingAccount*
historydb.CreateAccountInternalExtraFeePercentage)
}
func assertNextForgers(t *testing.T, expected, actual []NextForger) {
func assertNextForgers(t *testing.T, expected, actual []historydb.NextForgerAPI) {
assert.Equal(t, len(expected), len(actual))
for i := range expected {
// ignore timestamps and other metadata

View File

@@ -0,0 +1,162 @@
package stateapiupdater
import (
"database/sql"
"sync"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/tracerr"
)
// Updater is a utility object to facilitate updating the StateAPI
type Updater struct {
hdb *historydb.HistoryDB
state historydb.StateAPI
config historydb.NodeConfig
vars common.SCVariablesPtr
consts historydb.Constants
rw sync.RWMutex
}
// NewUpdater creates a new Updater
func NewUpdater(hdb *historydb.HistoryDB, config *historydb.NodeConfig, vars *common.SCVariables,
consts *historydb.Constants) *Updater {
u := Updater{
hdb: hdb,
config: *config,
consts: *consts,
state: historydb.StateAPI{
NodePublicInfo: historydb.NodePublicInfo{
ForgeDelay: config.ForgeDelay,
},
},
}
u.SetSCVars(vars.AsPtr())
return &u
}
// Store the State in the HistoryDB
func (u *Updater) Store() error {
u.rw.RLock()
defer u.rw.RUnlock()
return tracerr.Wrap(u.hdb.SetStateInternalAPI(&u.state))
}
// SetSCVars sets the smart contract vars (only updates those that are not nil)
func (u *Updater) SetSCVars(vars *common.SCVariablesPtr) {
u.rw.Lock()
defer u.rw.Unlock()
if vars.Rollup != nil {
u.vars.Rollup = vars.Rollup
rollupVars := historydb.NewRollupVariablesAPI(u.vars.Rollup)
u.state.Rollup = *rollupVars
}
if vars.Auction != nil {
u.vars.Auction = vars.Auction
auctionVars := historydb.NewAuctionVariablesAPI(u.vars.Auction)
u.state.Auction = *auctionVars
}
if vars.WDelayer != nil {
u.vars.WDelayer = vars.WDelayer
u.state.WithdrawalDelayer = *u.vars.WDelayer
}
}
// UpdateRecommendedFee updates Status.RecommendedFee information
func (u *Updater) UpdateRecommendedFee() error {
recommendedFee, err := u.hdb.GetRecommendedFee(u.config.MinFeeUSD, u.config.MaxFeeUSD)
if err != nil {
return tracerr.Wrap(err)
}
u.rw.Lock()
u.state.RecommendedFee = *recommendedFee
u.rw.Unlock()
return nil
}
// UpdateMetrics updates Status.Metrics information
func (u *Updater) UpdateMetrics() error {
u.rw.RLock()
lastBatch := u.state.Network.LastBatch
u.rw.RUnlock()
if lastBatch == nil {
return nil
}
lastBatchNum := lastBatch.BatchNum
metrics, poolLoad, err := u.hdb.GetMetricsInternalAPI(lastBatchNum)
if err != nil {
return tracerr.Wrap(err)
}
u.rw.Lock()
u.state.Metrics = *metrics
u.state.NodePublicInfo.PoolLoad = poolLoad
u.rw.Unlock()
return nil
}
// UpdateNetworkInfoBlock updates Status.Network block-related information
func (u *Updater) UpdateNetworkInfoBlock(lastEthBlock, lastSyncBlock common.Block) {
u.rw.Lock()
u.state.Network.LastSyncBlock = lastSyncBlock.Num
u.state.Network.LastEthBlock = lastEthBlock.Num
u.rw.Unlock()
}
// UpdateNetworkInfo updates Status.Network information
func (u *Updater) UpdateNetworkInfo(
lastEthBlock, lastSyncBlock common.Block,
lastBatchNum common.BatchNum, currentSlot int64,
) error {
// Get last batch in API format
lastBatch, err := u.hdb.GetBatchInternalAPI(lastBatchNum)
if tracerr.Unwrap(err) == sql.ErrNoRows {
lastBatch = nil
} else if err != nil {
return tracerr.Wrap(err)
}
u.rw.RLock()
auctionVars := u.vars.Auction
u.rw.RUnlock()
// Get next forgers
lastClosedSlot := currentSlot + int64(auctionVars.ClosedAuctionSlots)
nextForgers, err := u.hdb.GetNextForgersInternalAPI(auctionVars, &u.consts.Auction,
lastSyncBlock, currentSlot, lastClosedSlot)
if tracerr.Unwrap(err) == sql.ErrNoRows {
nextForgers = nil
} else if err != nil {
return tracerr.Wrap(err)
}
bucketUpdates, err := u.hdb.GetBucketUpdatesInternalAPI()
if tracerr.Unwrap(err) == sql.ErrNoRows {
bucketUpdates = nil
} else if err != nil {
return tracerr.Wrap(err)
}
u.rw.Lock()
// Update NodeInfo struct
for i, bucketParams := range u.state.Rollup.Buckets {
for _, bucketUpdate := range bucketUpdates {
if bucketUpdate.NumBucket == i {
bucketParams.Withdrawals = bucketUpdate.Withdrawals
u.state.Rollup.Buckets[i] = bucketParams
break
}
}
}
// Update pending L1s
pendingL1s, err := u.hdb.GetUnforgedL1UserTxsCount()
if err != nil {
return tracerr.Wrap(err)
}
u.state.Network.LastSyncBlock = lastSyncBlock.Num
u.state.Network.LastEthBlock = lastEthBlock.Num
u.state.Network.LastBatch = lastBatch
u.state.Network.CurrentSlot = currentSlot
u.state.Network.NextForgers = nextForgers
u.state.Network.PendingL1Txs = pendingL1s
u.rw.Unlock()
return nil
}

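The call pattern the tests above exercise: build one `Updater` at startup with the constants and initial SC variables, feed it variable changes and network/metrics updates as the synchronizer advances, and make the result visible to the API with `Store`. A compilable sketch under that assumption (the caller supplies the blocks, batch and slot values; the function name is illustrative, not from this package):

```go
package nodesync

import (
	"github.com/hermeznetwork/hermez-node/api/stateapiupdater"
	"github.com/hermeznetwork/hermez-node/common"
)

// refreshStateAPI sketches one synchronizer iteration driving the Updater.
func refreshStateAPI(u *stateapiupdater.Updater,
	lastEthBlock, lastSyncBlock common.Block,
	lastBatchNum common.BatchNum, currentSlot int64,
	newVars *common.SCVariablesPtr) error {
	// Only non-nil vars are applied, so partial updates are fine.
	u.SetSCVars(newVars)
	// Heavier update: last batch, next forgers, buckets, pending L1 txs.
	if err := u.UpdateNetworkInfo(lastEthBlock, lastSyncBlock,
		lastBatchNum, currentSlot); err != nil {
		return err
	}
	if err := u.UpdateMetrics(); err != nil {
		return err
	}
	if err := u.UpdateRecommendedFee(); err != nil {
		return err
	}
	// Nothing is visible to /v1/state until the state is written back.
	return u.Store()
}
```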
View File

@@ -60,9 +60,9 @@ externalDocs:
url: 'https://hermez.io'
servers:
- description: Hosted mock up
url: https://apimock.hermez.network
url: https://apimock.hermez.network/v1
- description: Localhost mock Up
url: http://localhost:4010
url: http://localhost:4010/v1
tags:
- name: Coordinator
description: Endpoints used by the nodes running in coordinator mode. They are used to interact with the network.
@@ -2569,9 +2569,9 @@ components:
description: List of next coordinators to forge.
items:
$ref: '#/components/schemas/NextForger'
NodeConfig:
Node:
type: object
description: Configuration of the coordinator node. Note that this is specific for each coordinator.
description: Configuration and metrics of the coordinator node. Note that this is specific to each coordinator.
properties:
forgeDelay:
type: number
@@ -2581,9 +2581,14 @@ components:
forge at the maximum rate. Note that this is a configuration parameter of a node,
so each coordinator may have a different value.
example: 193.4
poolLoad:
type: number
description: Number of pending transactions in the pool
example: 23201
additionalProperties: false
required:
- forgeDelay
- poolLoad
State:
type: object
description: Global variables of the network
@@ -2600,8 +2605,8 @@ components:
$ref: '#/components/schemas/StateWithdrawDelayer'
recommendedFee:
$ref: '#/components/schemas/RecommendedFee'
nodeConfig:
$ref: '#/components/schemas/NodeConfig'
node:
$ref: '#/components/schemas/Node'
additionalProperties: false
required:
- network
@@ -2610,7 +2615,7 @@ components:
- auction
- withdrawalDelayer
- recommendedFee
- nodeConfig
- node
StateNetwork:
type: object
description: Global statistics of the network
@@ -2634,6 +2639,10 @@ components:
- example: 2334
nextForgers:
$ref: '#/components/schemas/NextForgers'
pendingL1Transactions:
type: number
description: Number of pending L1 transactions (added to the smart contract queue but not yet forged).
example: 22
additionalProperties: false
required:
- lastEthereumBlock
@@ -2809,11 +2818,11 @@ components:
type: number
description: Average transactions per second in the last 24 hours.
example: 302.3
totalAccounts:
tokenAccounts:
type: integer
description: Number of created accounts.
example: 90473
totalBJJs:
wallets:
type: integer
description: Number of different registered BJJs.
example: 23067
@@ -2830,8 +2839,8 @@ components:
- transactionsPerBatch
- batchFrequency
- transactionsPerSecond
- totalAccounts
- totalBJJs
- tokenAccounts
- wallets
- avgTransactionFee
- estimatedTimeToForgeL1
PendingItems:

View File

@@ -8,7 +8,7 @@ import (
"testing"
"time"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/hermez-node/test"

View File

@@ -8,7 +8,7 @@ import (
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/l2db"
"github.com/hermeznetwork/tracerr"
@@ -179,7 +179,7 @@ func (a *API) verifyPoolL2TxWrite(txw *l2db.PoolL2TxWrite) error {
// Get public key
account, err := a.h.GetCommonAccountAPI(poolTx.FromIdx)
if err != nil {
return tracerr.Wrap(err)
return tracerr.Wrap(fmt.Errorf("Error getting from account: %w", err))
}
// Validate TokenID
if poolTx.TokenID != account.TokenID {

View File

@@ -8,7 +8,7 @@ The `hermez-node` has been tested with go version 1.14
## Usage
```
```shell
NAME:
hermez-node - A new cli application
@@ -16,18 +16,19 @@ USAGE:
node [global options] command [command options] [arguments...]
VERSION:
0.1.0-alpha
v0.1.0-6-gd8a50c5
COMMANDS:
version Show the application version
importkey Import ethereum private key
genbjj Generate a new BabyJubJub key
wipesql Wipe the SQL DB (HistoryDB and L2DB), leaving the DB in a clean state
wipesql Wipe the SQL DB (HistoryDB and L2DB) and the StateDBs, leaving the DB in a clean state
run Run the hermez-node in the indicated mode
serveapi Serve the API only
discard Discard blocks up to a specified block number
help, h Shows a list of commands or help for one command
GLOBAL OPTIONS:
--mode MODE Set node MODE (can be "sync" or "coord")
--cfg FILE Node configuration FILE
--help, -h show help (default: false)
--version, -v print the version (default: false)
```
@@ -54,6 +55,10 @@ To read the documentation of each configuration parameter, please check the
with `Coordinator` are only used in coord mode, and don't need to be defined
when running the node in sync mode.
When running the API in standalone mode, the required configuration is a subset
of the node configuration. Please check the `type APIServer` at
[config/config.go](../../config/config.go) to learn about all the parameters.
### Notes
- The private key corresponding to the parameter `Coordinator.ForgerAddress` needs to be imported in the ethereum keystore
@@ -75,7 +80,7 @@ when running the coordinator in sync mode
Building the node requires using the packr utility to bundle the database
migrations inside the resulting binary. Install the packr utility with:
```
```shell
cd /tmp && go get -u github.com/gobuffalo/packr/v2/packr2 && cd -
```
@@ -83,7 +88,7 @@ Make sure your `$PATH` contains `$GOPATH/bin`, otherwise the packr utility will
not be found.
Now build the node executable:
```
```shell
cd ../../db && packr2 && cd -
go build .
cd ../../db && packr2 clean && cd -
@@ -98,35 +103,48 @@ run the following examples by replacing `./node` with `go run .` and executing
them in the `cli/node` directory to build from source and run at the same time.
Run the node in synchronizer mode:
```
./node --mode sync --cfg cfg.buidler.toml run
```shell
./node run --mode sync --cfg cfg.buidler.toml
```
Run the node in coordinator mode:
```shell
./node run --mode coord --cfg cfg.buidler.toml
```
./node --mode coord --cfg cfg.buidler.toml run
Serve the API in standalone mode. This command allows serving the API with
just access to the PostgreSQL database that a node is using. Several instances
of `serveapi` can be running at the same time with a single PostgreSQL
database:
```shell
./node serveapi --mode coord --cfg cfg.buidler.toml
```
Import an ethereum private key into the keystore:
```
./node --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x618b35096c477aab18b11a752be619f0023a539bb02dd6c813477a6211916cde
```shell
./node importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x618b35096c477aab18b11a752be619f0023a539bb02dd6c813477a6211916cde
```
Generate a new BabyJubJub key pair:
```shell
./node genbjj
```
./node --mode coord --cfg cfg.buidler.toml genbjj
Check the binary version:
```shell
./node version
```
Wipe the entire SQL database (this will destroy all synchronized and pool
data):
```
./node --mode coord --cfg cfg.buidler.toml wipesql
```shell
./node wipesql --mode coord --cfg cfg.buidler.toml
```
Discard all synchronized blocks and associated state up to a given block
number. This command is useful in case the synchronizer reaches an invalid
state and you want to roll back a few blocks and try again (maybe with some
fixes in the code).
```
./node --mode coord --cfg cfg.buidler.toml discard --block 8061330
```shell
./node discard --mode coord --cfg cfg.buidler.toml --block 8061330
```

cli/node/cfg.api.toml Normal file
View File

@@ -0,0 +1,24 @@
[API]
Address = "localhost:8386"
Explorer = true
MaxSQLConnections = 10
SQLConnectionTimeout = "2s"
[PostgreSQL]
PortWrite = 5432
HostWrite = "localhost"
UserWrite = "hermez"
PasswordWrite = "yourpasswordhere"
NameWrite = "hermez"
[Coordinator.L2DB]
SafetyPeriod = 10
MaxTxs = 512
TTL = "24h"
PurgeBatchDelay = 10
InvalidateBatchDelay = 20
PurgeBlockDelay = 10
InvalidateBlockDelay = 20
[Coordinator.API]
Coordinator = true

View File

@@ -8,10 +8,31 @@ SQLConnectionTimeout = "2s"
[PriceUpdater]
Interval = "10s"
URL = "https://api-pub.bitfinex.com/v2/"
Type = "bitfinexV2"
# URL = "https://api.coingecko.com/api/v3/"
# Type = "coingeckoV3"
URLBitfinexV2 = "https://api-pub.bitfinex.com/v2/"
URLCoinGeckoV3 = "https://api.coingecko.com/api/v3/"
# Available update methods:
# - coingeckoV3 (recommended): get price by SC addr using coingecko API
# - bitfinexV2: get price by token symbol using bitfinex API
# - static (recommended for blacklisting tokens): use the given StaticValue to set the price (if not provided, 0 will be used)
# - ignore: don't update the price; leave it as it is in the DB
DefaultUpdateMethod = "coingeckoV3" # Update method used for all tokens registered on the network that are not listed in [[PriceUpdater.TokensConfig]]
[[PriceUpdater.TokensConfig]]
UpdateMethod = "bitfinexV2"
Symbol = "USDT"
Addr = "0xdac17f958d2ee523a2206206994597c13d831ec7"
[[PriceUpdater.TokensConfig]]
UpdateMethod = "coingeckoV3"
Symbol = "ETH"
Addr = "0x0000000000000000000000000000000000000000"
[[PriceUpdater.TokensConfig]]
UpdateMethod = "static"
Symbol = "UNI"
Addr = "0x1f9840a85d5af5bf1d1762f925bdaddc4201f984"
StaticValue = 30.12
[[PriceUpdater.TokensConfig]]
UpdateMethod = "ignore"
Symbol = "SUSHI"
Addr = "0x6b3595068778dd592e39a122f4f5a5cf09c90fe2"
[Debug]
APIAddress = "localhost:12345"
@@ -65,6 +86,8 @@ SyncRetryInterval = "1s"
ForgeDelay = "10s"
ForgeNoTxsDelay = "0s"
PurgeByExtDelInterval = "1m"
MustForgeAtSlotDeadline = true
IgnoreSlotCommitment = false
[Coordinator.FeeAccount]
Address = "0x56232B1c5B10038125Bc7345664B4AFD745bcF8E"
@@ -76,6 +99,7 @@ BJJ = "0x1b176232f78ba0d388ecc5f4896eca2d3b3d4f272092469f559247297f5c0c13"
SafetyPeriod = 10
MaxTxs = 512
MinFeeUSD = 0.0
MaxFeeUSD = 50.0
TTL = "24h"
PurgeBatchDelay = 10
InvalidateBatchDelay = 20

View File

@@ -1,10 +1,10 @@
#!/bin/sh
# Non-Boot Coordinator
go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3
go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3
# Boot Coordinator
go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563
go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563
# FeeAccount
go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x3a9270c020e169097808da4b02e8d9100be0f8a38cfad3dcfc0b398076381fdd
go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x3a9270c020e169097808da4b02e8d9100be0f8a38cfad3dcfc0b398076381fdd

View File

@@ -34,6 +34,22 @@ const (
modeCoord = "coord"
)
var (
// Version represents the program version, based on the git tag
Version = "v0.1.0"
// Build represents the git commit the program was built from
Build = "dev"
// Date represents the date the application was built
Date = ""
)
func cmdVersion(c *cli.Context) error {
fmt.Printf("Version = \"%v\"\n", Version)
fmt.Printf("Build = \"%v\"\n", Build)
fmt.Printf("Date = \"%v\"\n", Date)
return nil
}
func cmdGenBJJ(c *cli.Context) error {
sk := babyjub.NewRandPrivKey()
skBuf := [32]byte(sk)
@@ -196,17 +212,7 @@ func cmdWipeSQL(c *cli.Context) error {
return nil
}
func cmdRun(c *cli.Context) error {
cfg, err := parseCli(c)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
}
node, err := node.NewNode(cfg.mode, cfg.node)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
}
node.Start()
func waitSigInt() {
stopCh := make(chan interface{})
// catch ^C to send the stop signal
@@ -227,11 +233,40 @@ func cmdRun(c *cli.Context) error {
}
}()
<-stopCh
}
func cmdRun(c *cli.Context) error {
cfg, err := parseCli(c)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
}
node, err := node.NewNode(cfg.mode, cfg.node)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
}
node.Start()
waitSigInt()
node.Stop()
return nil
}
func cmdServeAPI(c *cli.Context) error {
cfg, err := parseCliAPIServer(c)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
}
srv, err := node.NewAPIServer(cfg.mode, cfg.server)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error starting api server: %w", err))
}
srv.Start()
waitSigInt()
srv.Stop()
return nil
}
func cmdDiscard(c *cli.Context) error {
_cfg, err := parseCli(c)
if err != nil {
@@ -283,6 +318,7 @@ func cmdDiscard(c *cli.Context) error {
cfg.Coordinator.L2DB.SafetyPeriod,
cfg.Coordinator.L2DB.MaxTxs,
cfg.Coordinator.L2DB.MinFeeUSD,
cfg.Coordinator.L2DB.MaxFeeUSD,
cfg.Coordinator.L2DB.TTL.Duration,
nil,
)
@@ -319,20 +355,59 @@ func getConfig(c *cli.Context) (*Config, error) {
var cfg Config
mode := c.String(flagMode)
nodeCfgPath := c.String(flagCfg)
if nodeCfgPath == "" {
return nil, tracerr.Wrap(fmt.Errorf("required flag \"%v\" not set", flagCfg))
}
var err error
switch mode {
case modeSync:
cfg.mode = node.ModeSynchronizer
cfg.node, err = config.LoadNode(nodeCfgPath)
cfg.node, err = config.LoadNode(nodeCfgPath, false)
if err != nil {
return nil, tracerr.Wrap(err)
}
case modeCoord:
cfg.mode = node.ModeCoordinator
cfg.node, err = config.LoadCoordinator(nodeCfgPath)
cfg.node, err = config.LoadNode(nodeCfgPath, true)
if err != nil {
return nil, tracerr.Wrap(err)
}
default:
return nil, tracerr.Wrap(fmt.Errorf("invalid mode \"%v\"", mode))
}
return &cfg, nil
}
// ConfigAPIServer is the configuration of the api server execution
type ConfigAPIServer struct {
mode node.Mode
server *config.APIServer
}
func parseCliAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
cfg, err := getConfigAPIServer(c)
if err != nil {
if err := cli.ShowAppHelp(c); err != nil {
panic(err)
}
return nil, tracerr.Wrap(err)
}
return cfg, nil
}
func getConfigAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
var cfg ConfigAPIServer
mode := c.String(flagMode)
nodeCfgPath := c.String(flagCfg)
var err error
switch mode {
case modeSync:
cfg.mode = node.ModeSynchronizer
cfg.server, err = config.LoadAPIServer(nodeCfgPath, false)
if err != nil {
return nil, tracerr.Wrap(err)
}
case modeCoord:
cfg.mode = node.ModeCoordinator
cfg.server, err = config.LoadAPIServer(nodeCfgPath, true)
if err != nil {
return nil, tracerr.Wrap(err)
}
@@ -346,8 +421,8 @@ func getConfig(c *cli.Context) (*Config, error) {
func main() {
app := cli.NewApp()
app.Name = "hermez-node"
app.Version = "0.1.0-alpha"
app.Flags = []cli.Flag{
app.Version = Version
flags := []cli.Flag{
&cli.StringFlag{
Name: flagMode,
Usage: fmt.Sprintf("Set node `MODE` (can be \"%v\" or \"%v\")", modeSync, modeCoord),
@@ -361,17 +436,23 @@ func main() {
}
app.Commands = []*cli.Command{
{
Name: "version",
Aliases: []string{},
Usage: "Show the application version and build",
Action: cmdVersion,
},
{
Name: "importkey",
Aliases: []string{},
Usage: "Import ethereum private key",
Action: cmdImportKey,
Flags: []cli.Flag{
Flags: append(flags,
&cli.StringFlag{
Name: flagSK,
Usage: "ethereum `PRIVATE_KEY` in hex",
Required: true,
}},
}),
},
{
Name: "genbjj",
@@ -385,30 +466,38 @@ func main() {
Usage: "Wipe the SQL DB (HistoryDB and L2DB) and the StateDBs, " +
"leaving the DB in a clean state",
Action: cmdWipeSQL,
Flags: []cli.Flag{
Flags: append(flags,
&cli.BoolFlag{
Name: flagYes,
Usage: "automatic yes to the prompt",
Required: false,
}},
}),
},
{
Name: "run",
Aliases: []string{},
Usage: "Run the hermez-node in the indicated mode",
Action: cmdRun,
Flags: flags,
},
{
Name: "serveapi",
Aliases: []string{},
Usage: "Serve the API only",
Action: cmdServeAPI,
Flags: flags,
},
{
Name: "discard",
Aliases: []string{},
Usage: "Discard blocks up to a specified block number",
Action: cmdDiscard,
Flags: []cli.Flag{
Flags: append(flags,
&cli.Int64Flag{
Name: flagBlock,
Usage: "last block number to keep",
Required: false,
}},
}),
},
}

View File

@@ -11,15 +11,15 @@ import (
"github.com/iden3/go-iden3-crypto/babyjub"
)
// AccountCreationAuthMsg is the message that is signed to authorize a Hermez
// account creation
const AccountCreationAuthMsg = "Account creation"
// EIP712Version is the used version of the EIP-712
const EIP712Version = "1"
// EIP712Provider defines the Provider for the EIP-712
const EIP712Provider = "Hermez Network"
const (
// AccountCreationAuthMsg is the message that is signed to authorize a
// Hermez account creation
AccountCreationAuthMsg = "Account creation"
// EIP712Version is the used version of the EIP-712
EIP712Version = "1"
// EIP712Provider defines the Provider for the EIP-712
EIP712Provider = "Hermez Network"
)
var (
// EmptyEthSignature is an ethereum signature of all zeroes

common/eth.go Normal file
View File

@@ -0,0 +1,33 @@
package common
// SCVariables joins all the smart contract variables in a single struct
type SCVariables struct {
Rollup RollupVariables `validate:"required"`
Auction AuctionVariables `validate:"required"`
WDelayer WDelayerVariables `validate:"required"`
}
// AsPtr returns the SCVariables as a SCVariablesPtr using pointers to the
// original SCVariables
func (v *SCVariables) AsPtr() *SCVariablesPtr {
return &SCVariablesPtr{
Rollup: &v.Rollup,
Auction: &v.Auction,
WDelayer: &v.WDelayer,
}
}
// SCVariablesPtr joins all the smart contract variables as pointers in a single
// struct
type SCVariablesPtr struct {
Rollup *RollupVariables `validate:"required"`
Auction *AuctionVariables `validate:"required"`
WDelayer *WDelayerVariables `validate:"required"`
}
// SCConsts joins all the smart contract constants in a single struct
type SCConsts struct {
Rollup RollupConstants
Auction AuctionConstants
WDelayer WDelayerConstants
}
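A short usage sketch of how these types compose (the apply helper is illustrative; it mirrors the coordinator's updateSCVars further down in this changeset):

// apply overwrites only the fields of vars for which upd carries a
// non-nil pointer, leaving the rest untouched.
func apply(vars *SCVariables, upd SCVariablesPtr) {
	if upd.Rollup != nil {
		vars.Rollup = *upd.Rollup
	}
	if upd.Auction != nil {
		vars.Auction = *upd.Auction
	}
	if upd.WDelayer != nil {
		vars.WDelayer = *upd.WDelayer
	}
}

Note that AsPtr aliases rather than copies: writes through the returned SCVariablesPtr mutate the original SCVariables.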


@@ -24,7 +24,7 @@ var FeeFactorLsh60 [256]*big.Int
type RecommendedFee struct {
ExistingAccount float64 `json:"existingAccount"`
CreatesAccount float64 `json:"createAccount"`
CreatesAccountAndRegister float64 `json:"createAccountInternal"`
CreatesAccountInternal float64 `json:"createAccountInternal"`
}
// FeeSelector is used to select a percentage from the FeePlan.
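Only the Go field name changes; the JSON tag, and therefore the API response shape, is untouched, as a quick marshal shows (values illustrative):

package main

import (
	"encoding/json"
	"fmt"

	"github.com/hermeznetwork/hermez-node/common"
)

func main() {
	b, _ := json.Marshal(common.RecommendedFee{
		ExistingAccount:        0.10,
		CreatesAccount:         0.25,
		CreatesAccountInternal: 0.20,
	})
	fmt.Println(string(b))
	// {"existingAccount":0.1,"createAccount":0.25,"createAccountInternal":0.2}
}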


@@ -9,6 +9,7 @@ import (
"github.com/BurntSushi/toml"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/priceupdater"
"github.com/hermeznetwork/tracerr"
"github.com/iden3/go-iden3-crypto/babyjub"
"gopkg.in/go-playground/validator.v9"
@@ -44,6 +45,13 @@ type ForgeBatchGasCost struct {
L2Tx uint64 `validate:"required"`
}
// CoordinatorAPI specifies the configuration parameters of the API in
// coordinator mode
type CoordinatorAPI struct {
// Coordinator enables the coordinator API endpoints
Coordinator bool
}
// Coordinator is the coordinator specific configuration.
type Coordinator struct {
// ForgerAddress is the address under which this coordinator is forging
@@ -101,6 +109,20 @@ type Coordinator struct {
// to 0s, the coordinator will continuously forge even if the batches
// are empty.
ForgeNoTxsDelay Duration `validate:"-"`
// MustForgeAtSlotDeadline enables the coordinator to forge in a slot it
// has not won once an empty slot reaches the slot deadline.
MustForgeAtSlotDeadline bool
// IgnoreSlotCommitment disables forcing the coordinator to forge a
// slot immediately when the slot is not committed. If set to false,
// the coordinator will immediately forge a batch at the beginning of a
// slot if it's the slot winner.
IgnoreSlotCommitment bool
// ForgeOncePerSlotIfTxs will make the coordinator forge at most one
// batch per slot, only if there are included txs in that batch, or
// pending l1UserTxs in the smart contract. Setting this parameter
// overrides `ForgeDelay`, `ForgeNoTxsDelay`, `MustForgeAtSlotDeadline`
// and `IgnoreSlotCommitment`.
ForgeOncePerSlotIfTxs bool
// SyncRetryInterval is the waiting interval between calls to the main
// handler of a synced block after an error
SyncRetryInterval Duration `validate:"required"`
@@ -122,6 +144,10 @@ type Coordinator struct {
// order to be accepted into the pool. Txs with lower than
// minimum fee will be rejected at the API level.
MinFeeUSD float64
// MaxFeeUSD is the maximum fee in USD that a tx must pay in
// order to be accepted into the pool. Txs with greater than
// maximum fee will be rejected at the API level.
MaxFeeUSD float64 `validate:"required"`
// TTL is the Time To Live for L2Txs in the pool. Once MaxTxs
// L2Txs is reached, L2Txs older than TTL will be deleted.
TTL Duration `validate:"required"`
@@ -197,10 +223,7 @@ type Coordinator struct {
// ForgeBatch transaction.
ForgeBatchGasCost ForgeBatchGasCost `validate:"required"`
} `validate:"required"`
API struct {
// Coordinator enables the coordinator API endpoints
Coordinator bool
} `validate:"required"`
API CoordinatorAPI `validate:"required"`
Debug struct {
// BatchPath if set, specifies the path where batchInfo is stored
// in JSON in every step/update of the pipeline
@@ -215,26 +238,10 @@ type Coordinator struct {
}
}
// Node is the hermez node configuration.
type Node struct {
PriceUpdater struct {
// Interval between price updater calls
Interval Duration `valudate:"required"`
// URL of the token prices provider
URL string `valudate:"required"`
// Type of the API of the token prices provider
Type string `valudate:"required"`
} `validate:"required"`
StateDB struct {
// Path where the synchronizer StateDB is stored
Path string `validate:"required"`
// Keep is the number of checkpoints to keep
Keep int `validate:"required"`
} `validate:"required"`
// It's possible to use diferentiated SQL connections for read/write.
// If the read configuration is not provided, the write one it's going to be used
// for both reads and writes
PostgreSQL struct {
// PostgreSQL holds the PostgreSQL configuration parameters. It's possible to use
// differentiated SQL connections for read/write. If the read configuration is
// not provided, the write one is used for both reads and writes
type PostgreSQL struct {
// Port of the PostgreSQL write server
PortWrite int `validate:"required"`
// Host of the PostgreSQL write server
@@ -255,7 +262,42 @@ type Node struct {
PasswordRead string
// Name of the PostgreSQL read server database
NameRead string
}
// NodeDebug specifies debug configuration parameters
type NodeDebug struct {
// APIAddress is the address where the debugAPI will listen if
// set
APIAddress string
// MeddlerLogs enables meddler debug mode, where unused columns and struct
// fields will be logged
MeddlerLogs bool
// GinDebugMode sets Gin-Gonic (the web framework) to run in
// debug mode
GinDebugMode bool
}
// Node is the hermez node configuration.
type Node struct {
PriceUpdater struct {
// Interval between price updater calls
Interval Duration `validate:"required"`
// URLBitfinexV2 is the URL of bitfinex V2 API
URLBitfinexV2 string `validate:"required"`
// URLCoinGeckoV3 is the URL of coingecko V3 API
URLCoinGeckoV3 string `validate:"required"`
// DefaultUpdateMethod to get token prices
DefaultUpdateMethod priceupdater.UpdateMethodType `validate:"required"`
// TokensConfig specifies how each token gets its price updated
TokensConfig []priceupdater.TokenConfig
} `validate:"required"`
StateDB struct {
// Path where the synchronizer StateDB is stored
Path string `validate:"required"`
// Keep is the number of checkpoints to keep
Keep int `validate:"required"`
} `validate:"required"`
PostgreSQL PostgreSQL `validate:"required"`
Web3 struct {
// URL is the URL of the web3 ethereum-node RPC server
URL string `validate:"required"`
@@ -286,6 +328,7 @@ type Node struct {
// TokenHEZ address
TokenHEZName string `validate:"required"`
} `validate:"required"`
// API specifies the configuration parameters of the API
API struct {
// Address where the API will listen if set
Address string
@@ -303,20 +346,49 @@ type Node struct {
// can wait to establish a SQL connection
SQLConnectionTimeout Duration
} `validate:"required"`
Debug struct {
// APIAddress is the address where the debugAPI will listen if
// set
APIAddress string
// MeddlerLogs enables meddler debug mode, where unused columns and struct
// fields will be logged
MeddlerLogs bool
// GinDebugMode sets Gin-Gonic (the web framework) to run in
// debug mode
GinDebugMode bool
}
Debug NodeDebug `validate:"required"`
Coordinator Coordinator `validate:"-"`
}
// APIServer is the api server configuration parameters
type APIServer struct {
// API specifies the configuration parameters of the API
API struct {
// Address where the API will listen if set
Address string `validate:"required"`
// Explorer enables the Explorer API endpoints
Explorer bool
// Maximum concurrent connections allowed between API and SQL
MaxSQLConnections int `validate:"required"`
// SQLConnectionTimeout is the maximum amount of time that an API request
// can wait to establish a SQL connection
SQLConnectionTimeout Duration
} `validate:"required"`
PostgreSQL PostgreSQL `validate:"required"`
Coordinator struct {
API struct {
// Coordinator enables the coordinator API endpoints
Coordinator bool
} `validate:"required"`
L2DB struct {
// MaxTxs is the maximum number of pending L2Txs that can be
// stored in the pool. Once this number of pending L2Txs is
// reached, inserts to the pool will be denied until some of
// the pending txs are forged.
MaxTxs uint32 `validate:"required"`
// MinFeeUSD is the minimum fee in USD that a tx must pay in
// order to be accepted into the pool. Txs with lower than
// minimum fee will be rejected at the API level.
MinFeeUSD float64
// MaxFeeUSD is the maximum fee in USD that a tx must pay in
// order to be accepted into the pool. Txs with greater than
// maximum fee will be rejected at the API level.
MaxFeeUSD float64 `validate:"required"`
} `validate:"required"`
}
Debug NodeDebug `validate:"required"`
}
// Load loads a generic config.
func Load(path string, cfg interface{}) error {
bs, err := ioutil.ReadFile(path) //nolint:gosec
@@ -330,8 +402,8 @@ func Load(path string, cfg interface{}) error {
return nil
}
// LoadCoordinator loads the Coordinator configuration from path.
func LoadCoordinator(path string) (*Node, error) {
// LoadNode loads the Node configuration from path.
func LoadNode(path string, coordinator bool) (*Node, error) {
var cfg Node
if err := Load(path, &cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
@@ -340,21 +412,28 @@ func LoadCoordinator(path string) (*Node, error) {
if err := validate.Struct(cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
}
if coordinator {
if err := validate.Struct(cfg.Coordinator); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
}
}
return &cfg, nil
}
// LoadNode loads the Node configuration from path.
func LoadNode(path string) (*Node, error) {
var cfg Node
// LoadAPIServer loads the APIServer configuration from path.
func LoadAPIServer(path string, coordinator bool) (*APIServer, error) {
var cfg APIServer
if err := Load(path, &cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
return nil, tracerr.Wrap(fmt.Errorf("error loading apiServer configuration file: %w", err))
}
validate := validator.New()
if err := validate.Struct(cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
}
if coordinator {
if err := validate.Struct(cfg.Coordinator); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
}
}
return &cfg, nil
}
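Both loaders share the two-step validation: the top-level struct is always validated, and the Coordinator section only when the coordinator flag is true. A usage sketch (paths illustrative):

package main

import (
	"log"

	"github.com/hermeznetwork/hermez-node/config"
)

func main() {
	// Coordinator mode: cfg.Coordinator must also pass validation.
	nodeCfg, err := config.LoadNode("/etc/hermez/cfg.toml", true)
	if err != nil {
		log.Fatal(err)
	}
	_ = nodeCfg

	// Synchronizer mode: the Coordinator section may stay zero-valued.
	apiCfg, err := config.LoadAPIServer("/etc/hermez/cfg.toml", false)
	if err != nil {
		log.Fatal(err)
	}
	_ = apiCfg
}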


@@ -85,7 +85,7 @@ type BatchInfo struct {
PublicInputs []*big.Int
L1Batch bool
VerifierIdx uint8
L1UserTxsExtra []common.L1Tx
L1UserTxs []common.L1Tx
L1CoordTxs []common.L1Tx
L1CoordinatorTxsAuths [][]byte
L2Txs []common.L2Tx


@@ -25,9 +25,7 @@ import (
var (
errLastL1BatchNotSynced = fmt.Errorf("last L1Batch not synced yet")
errForgeNoTxsBeforeDelay = fmt.Errorf(
"no txs to forge and we haven't reached the forge no txs delay")
errForgeBeforeDelay = fmt.Errorf("we haven't reached the forge delay")
errSkipBatchByPolicy = fmt.Errorf("skip batch by policy")
)
const (
@@ -84,6 +82,20 @@ type Config struct {
// to 0s, the coordinator will continuously forge even if the batches
// are empty.
ForgeNoTxsDelay time.Duration
// MustForgeAtSlotDeadline enables the coordinator to forge in a slot it
// has not won once an empty slot reaches the slot deadline.
MustForgeAtSlotDeadline bool
// IgnoreSlotCommitment disables forcing the coordinator to forge a
// slot immediately when the slot is not committed. If set to false,
// the coordinator will immediately forge a batch at the beginning of
// a slot if it's the slot winner.
IgnoreSlotCommitment bool
// ForgeOncePerSlotIfTxs will make the coordinator forge at most one
// batch per slot, only if there are included txs in that batch, or
// pending l1UserTxs in the smart contract. Setting this parameter
// overrides `ForgeDelay`, `ForgeNoTxsDelay`, `MustForgeAtSlotDeadline`
// and `IgnoreSlotCommitment`.
ForgeOncePerSlotIfTxs bool
// SyncRetryInterval is the waiting interval between calls to the main
// handler of a synced block after an error
SyncRetryInterval time.Duration
@@ -145,8 +157,8 @@ type Coordinator struct {
pipelineNum int // Pipeline sequential number. The first pipeline is 1
pipelineFromBatch fromBatch // batch from which we started the pipeline
provers []prover.Client
consts synchronizer.SCConsts
vars synchronizer.SCVariables
consts common.SCConsts
vars common.SCVariables
stats synchronizer.Stats
started bool
@@ -186,8 +198,8 @@ func NewCoordinator(cfg Config,
batchBuilder *batchbuilder.BatchBuilder,
serverProofs []prover.Client,
ethClient eth.ClientInterface,
scConsts *synchronizer.SCConsts,
initSCVars *synchronizer.SCVariables,
scConsts *common.SCConsts,
initSCVars *common.SCVariables,
) (*Coordinator, error) {
// nolint reason: hardcoded `1.0`, by design the percentage can't be over 100%
if cfg.L1BatchTimeoutPerc >= 1.0 { //nolint:gomnd
@@ -276,13 +288,13 @@ type MsgSyncBlock struct {
Batches []common.BatchData
// Vars contains each Smart Contract variables if they are updated, or
// nil if they haven't changed.
Vars synchronizer.SCVariablesPtr
Vars common.SCVariablesPtr
}
// MsgSyncReorg indicates a reorg
type MsgSyncReorg struct {
Stats synchronizer.Stats
Vars synchronizer.SCVariablesPtr
Vars common.SCVariablesPtr
}
// MsgStopPipeline indicates a signal to reset the pipeline
@@ -301,7 +313,7 @@ func (c *Coordinator) SendMsg(ctx context.Context, msg interface{}) {
}
}
func updateSCVars(vars *synchronizer.SCVariables, update synchronizer.SCVariablesPtr) {
func updateSCVars(vars *common.SCVariables, update common.SCVariablesPtr) {
if update.Rollup != nil {
vars.Rollup = *update.Rollup
}
@@ -313,12 +325,13 @@ func updateSCVars(vars *synchronizer.SCVariables, update synchronizer.SCVariable
}
}
func (c *Coordinator) syncSCVars(vars synchronizer.SCVariablesPtr) {
func (c *Coordinator) syncSCVars(vars common.SCVariablesPtr) {
updateSCVars(&c.vars, vars)
}
func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.AuctionVariables,
currentSlot *common.Slot, nextSlot *common.Slot, addr ethCommon.Address, blockNum int64) bool {
currentSlot *common.Slot, nextSlot *common.Slot, addr ethCommon.Address, blockNum int64,
mustForgeAtDeadline bool) bool {
if blockNum < auctionConstants.GenesisBlockNum {
log.Infow("canForge: requested blockNum is < genesis", "blockNum", blockNum,
"genesis", auctionConstants.GenesisBlockNum)
@@ -343,7 +356,7 @@ func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.Auc
"block", blockNum)
anyoneForge = true
}
if slot.Forger == addr || anyoneForge {
if slot.Forger == addr || (anyoneForge && mustForgeAtDeadline) {
return true
}
log.Debugw("canForge: can't forge", "slot.Forger", slot.Forger)
@@ -353,14 +366,14 @@ func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.Auc
func (c *Coordinator) canForgeAt(blockNum int64) bool {
return canForge(&c.consts.Auction, &c.vars.Auction,
&c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot,
c.cfg.ForgerAddress, blockNum)
c.cfg.ForgerAddress, blockNum, c.cfg.MustForgeAtSlotDeadline)
}
func (c *Coordinator) canForge() bool {
blockNum := c.stats.Eth.LastBlock.Num + 1
return canForge(&c.consts.Auction, &c.vars.Auction,
&c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot,
c.cfg.ForgerAddress, blockNum)
c.cfg.ForgerAddress, blockNum, c.cfg.MustForgeAtSlotDeadline)
}
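The new mustForgeAtDeadline argument gates only the anyone-can-forge branch; the slot winner still forges unconditionally. Condensed into an illustrative helper (ethCommon is github.com/ethereum/go-ethereum/common, as in the diff):

// mayForge: addr forges when it won the slot, or when the slot passed its
// deadline with no batch forged yet (anyoneForge) and the node opted in
// via MustForgeAtSlotDeadline.
func mayForge(winner, addr ethCommon.Address, anyoneForge, mustForge bool) bool {
	return winner == addr || (anyoneForge && mustForge)
}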
func (c *Coordinator) syncStats(ctx context.Context, stats *synchronizer.Stats) error {


@@ -105,7 +105,7 @@ func newTestModules(t *testing.T) modules {
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
test.WipeDB(db)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
historyDB := historydb.NewHistoryDB(db, db, nil)
txSelDBPath, err = ioutil.TempDir("", "tmpTxSelDB")
@@ -167,6 +167,7 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
EthClientAttemptsDelay: 100 * time.Millisecond,
TxManagerCheckInterval: 300 * time.Millisecond,
DebugBatchPath: debugBatchPath,
MustForgeAtSlotDeadline: true,
Purger: PurgerCfg{
PurgeBatchDelay: 10,
PurgeBlockDelay: 10,
@@ -188,12 +189,12 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
&prover.MockClient{Delay: 400 * time.Millisecond},
}
scConsts := &synchronizer.SCConsts{
scConsts := &common.SCConsts{
Rollup: *ethClientSetup.RollupConstants,
Auction: *ethClientSetup.AuctionConstants,
WDelayer: *ethClientSetup.WDelayerConstants,
}
initSCVars := &synchronizer.SCVariables{
initSCVars := &common.SCVariables{
Rollup: *ethClientSetup.RollupVariables,
Auction: *ethClientSetup.AuctionVariables,
WDelayer: *ethClientSetup.WDelayerVariables,
@@ -391,6 +392,10 @@ func TestCoordCanForge(t *testing.T) {
assert.Equal(t, true, coord.canForge())
assert.Equal(t, true, bootCoord.canForge())
// Anyone can forge but the node has MustForgeAtSlotDeadline set to false
coord.cfg.MustForgeAtSlotDeadline = false
assert.Equal(t, false, coord.canForge())
// Slot 3. coordinator bid, so the winner is the coordinator
stats.Eth.LastBlock.Num = ethClientSetup.AuctionConstants.GenesisBlockNum +
3*int64(ethClientSetup.AuctionConstants.BlocksPerSlot)
@@ -529,7 +534,7 @@ func TestCoordinatorStress(t *testing.T) {
coord.SendMsg(ctx, MsgSyncBlock{
Stats: *stats,
Batches: blockData.Rollup.Batches,
Vars: synchronizer.SCVariablesPtr{
Vars: common.SCVariablesPtr{
Rollup: blockData.Rollup.Vars,
Auction: blockData.Auction.Vars,
WDelayer: blockData.WDelayer.Vars,


@@ -22,7 +22,7 @@ import (
type statsVars struct {
Stats synchronizer.Stats
Vars synchronizer.SCVariablesPtr
Vars common.SCVariablesPtr
}
type state struct {
@@ -36,7 +36,7 @@ type state struct {
type Pipeline struct {
num int
cfg Config
consts synchronizer.SCConsts
consts common.SCConsts
// state
state state
@@ -57,7 +57,7 @@ type Pipeline struct {
purger *Purger
stats synchronizer.Stats
vars synchronizer.SCVariables
vars common.SCVariables
statsVarsCh chan statsVars
ctx context.Context
@@ -90,7 +90,7 @@ func NewPipeline(ctx context.Context,
coord *Coordinator,
txManager *TxManager,
provers []prover.Client,
scConsts *synchronizer.SCConsts,
scConsts *common.SCConsts,
) (*Pipeline, error) {
proversPool := NewProversPool(len(provers))
proversPoolSize := 0
@@ -125,7 +125,7 @@ func NewPipeline(ctx context.Context,
// SetSyncStatsVars is a thread-safe method to set the synchronizer Stats
func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats,
vars *synchronizer.SCVariablesPtr) {
vars *common.SCVariablesPtr) {
select {
case p.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
case <-ctx.Done():
@@ -134,7 +134,7 @@ func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Sta
// reset pipeline state
func (p *Pipeline) reset(batchNum common.BatchNum,
stats *synchronizer.Stats, vars *synchronizer.SCVariables) error {
stats *synchronizer.Stats, vars *common.SCVariables) error {
p.state = state{
batchNum: batchNum,
lastForgeL1TxsNum: stats.Sync.LastForgeL1TxsNum,
@@ -195,7 +195,7 @@ func (p *Pipeline) reset(batchNum common.BatchNum,
return nil
}
func (p *Pipeline) syncSCVars(vars synchronizer.SCVariablesPtr) {
func (p *Pipeline) syncSCVars(vars common.SCVariablesPtr) {
updateSCVars(&p.vars, vars)
}
@@ -224,8 +224,9 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
// 2. Forge the batch internally (make a selection of txs and prepare
// all the smart contract arguments)
var skipReason *string
p.mutexL2DBUpdateDelete.Lock()
batchInfo, err = p.forgeBatch(batchNum)
batchInfo, skipReason, err = p.forgeBatch(batchNum)
p.mutexL2DBUpdateDelete.Unlock()
if ctx.Err() != nil {
return nil, ctx.Err()
@@ -234,13 +235,13 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
log.Warnw("forgeBatch: scheduled L1Batch too early", "err", err,
"lastForgeL1TxsNum", p.state.lastForgeL1TxsNum,
"syncLastForgeL1TxsNum", p.stats.Sync.LastForgeL1TxsNum)
} else if tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
tracerr.Unwrap(err) == errForgeBeforeDelay {
// no log
} else {
log.Errorw("forgeBatch", "err", err)
}
return nil, tracerr.Wrap(err)
} else if skipReason != nil {
log.Debugw("skipping batch", "batch", batchNum, "reason", *skipReason)
return nil, tracerr.Wrap(errSkipBatchByPolicy)
}
// 3. Send the ZKInputs to the proof server
@@ -256,7 +257,7 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
// Start the forging pipeline
func (p *Pipeline) Start(batchNum common.BatchNum,
stats *synchronizer.Stats, vars *synchronizer.SCVariables) error {
stats *synchronizer.Stats, vars *common.SCVariables) error {
if p.started {
log.Fatal("Pipeline already started")
}
@@ -295,8 +296,7 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
if p.ctx.Err() != nil {
continue
} else if tracerr.Unwrap(err) == errLastL1BatchNotSynced ||
tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
tracerr.Unwrap(err) == errForgeBeforeDelay {
tracerr.Unwrap(err) == errSkipBatchByPolicy {
continue
} else if err != nil {
p.setErrAtBatchNum(batchNum)
@@ -389,17 +389,109 @@ func (p *Pipeline) sendServerProof(ctx context.Context, batchInfo *BatchInfo) er
return nil
}
// slotCommitted returns true if the current slot has already been committed
func (p *Pipeline) slotCommitted() bool {
// Synchronizer has synchronized a batch in the current slot (setting
// CurrentSlot.ForgerCommitment) or the pipeline has already
// internally-forged a batch in the current slot
return p.stats.Sync.Auction.CurrentSlot.ForgerCommitment ||
p.stats.Sync.Auction.CurrentSlot.SlotNum == p.state.lastSlotForged
}
// forgePolicySkipPreSelection is called before doing a tx selection in a batch to
// determine by policy if we should forge the batch or not. Returns true and
// the reason when the forging of the batch must be skipped.
func (p *Pipeline) forgePolicySkipPreSelection(now time.Time) (bool, string) {
// Check if the slot is not yet fulfilled
slotCommitted := p.slotCommitted()
if p.cfg.ForgeOncePerSlotIfTxs {
if slotCommitted {
return true, "cfg.ForgeOncePerSlotIfTxs = true and slot already committed"
}
return false, ""
}
// Determine if we must commit the slot
if !p.cfg.IgnoreSlotCommitment && !slotCommitted {
return false, ""
}
// If we haven't reached the ForgeDelay, skip forging the batch
if now.Sub(p.lastForgeTime) < p.cfg.ForgeDelay {
return true, "we haven't reached the forge delay"
}
return false, ""
}
// forgePolicySkipPostSelection is called after doing a tx selection in a batch to
// determine by policy if we should forge the batch or not. Returns true and
// the reason when the forging of the batch must be skipped.
func (p *Pipeline) forgePolicySkipPostSelection(now time.Time, l1UserTxsExtra, l1CoordTxs []common.L1Tx,
poolL2Txs []common.PoolL2Tx, batchInfo *BatchInfo) (bool, string, error) {
// Check if the slot is not yet fulfilled
slotCommitted := p.slotCommitted()
pendingTxs := true
if len(l1UserTxsExtra) == 0 && len(l1CoordTxs) == 0 && len(poolL2Txs) == 0 {
if batchInfo.L1Batch {
// Query the number of unforged L1UserTxs
// (either in a open queue or in a frozen
// not-yet-forged queue).
count, err := p.historyDB.GetUnforgedL1UserTxsCount()
if err != nil {
return false, "", err
}
// If there are future L1UserTxs, we forge a
// batch to advance the queues to be able to
// forge the L1UserTxs in the future.
// Otherwise, skip.
if count == 0 {
pendingTxs = false
}
} else {
pendingTxs = false
}
}
if p.cfg.ForgeOncePerSlotIfTxs {
if slotCommitted {
return true, "cfg.ForgeOncePerSlotIfTxs = true and slot already committed",
nil
}
if pendingTxs {
return false, "", nil
}
return true, "cfg.ForgeOncePerSlotIfTxs = true and no pending txs",
nil
}
// Determine if we must commit the slot
if !p.cfg.IgnoreSlotCommitment && !slotCommitted {
return false, "", nil
}
// check if there are no txs to forge, no l1UserTxs in the open queue to
// freeze, and we haven't reached the ForgeNoTxsDelay
if now.Sub(p.lastForgeTime) < p.cfg.ForgeNoTxsDelay {
if !pendingTxs {
return true, "no txs to forge and we haven't reached the forge no txs delay",
nil
}
}
return false, "", nil
}
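Taken together, the two hooks give this precedence (a sketch, not exhaustive):

ForgeOncePerSlotIfTxs  slotCommitted  pendingTxs  outcome
true                   true           any         skip (slot already committed)
true                   false          true        forge
true                   false          false       skip (no pending txs)
false                  false*         any         forge (commit the slot)
false                  otherwise      false       skip until ForgeNoTxsDelay elapses
false                  otherwise      true        forge once ForgeDelay has elapsed

(*) only when IgnoreSlotCommitment is false; the pre-selection hook additionally enforces ForgeDelay before any tx selection is attempted.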
// forgeBatch forges the batchNum batch.
func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, err error) {
func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
skipReason *string, err error) {
// remove transactions from the pool that have been there for too long
_, err = p.purger.InvalidateMaybe(p.l2DB, p.txSelector.LocalAccountsDB(),
p.stats.Sync.LastBlock.Num, int64(batchNum))
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
_, err = p.purger.PurgeMaybe(p.l2DB, p.stats.Sync.LastBlock.Num, int64(batchNum))
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
// Structure to accumulate data and metadata of the batch
now := time.Now()
@@ -409,79 +501,48 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, e
var poolL2Txs []common.PoolL2Tx
var discardedL2Txs []common.PoolL2Tx
var l1UserTxsExtra, l1CoordTxs []common.L1Tx
var l1UserTxs, l1CoordTxs []common.L1Tx
var auths [][]byte
var coordIdxs []common.Idx
// Check if the slot is not yet fulfilled
slotCommitted := false
if p.stats.Sync.Auction.CurrentSlot.ForgerCommitment ||
p.stats.Sync.Auction.CurrentSlot.SlotNum == p.state.lastSlotForged {
slotCommitted = true
}
// If we haven't reached the ForgeDelay, skip forging the batch
if slotCommitted && now.Sub(p.lastForgeTime) < p.cfg.ForgeDelay {
return nil, tracerr.Wrap(errForgeBeforeDelay)
if skip, reason := p.forgePolicySkipPreSelection(now); skip {
return nil, &reason, nil
}
// 1. Decide if we forge L2Tx or L1+L2Tx
if p.shouldL1L2Batch(batchInfo) {
batchInfo.L1Batch = true
if p.state.lastForgeL1TxsNum != p.stats.Sync.LastForgeL1TxsNum {
return nil, tracerr.Wrap(errLastL1BatchNotSynced)
return nil, nil, tracerr.Wrap(errLastL1BatchNotSynced)
}
// 2a: L1+L2 txs
l1UserTxs, err := p.historyDB.GetUnforgedL1UserTxs(p.state.lastForgeL1TxsNum + 1)
_l1UserTxs, err := p.historyDB.GetUnforgedL1UserTxs(p.state.lastForgeL1TxsNum + 1)
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
coordIdxs, auths, l1UserTxsExtra, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
p.txSelector.GetL1L2TxSelection(p.cfg.TxProcessorConfig, l1UserTxs)
coordIdxs, auths, l1UserTxs, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
p.txSelector.GetL1L2TxSelection(p.cfg.TxProcessorConfig, _l1UserTxs)
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
} else {
// 2b: only L2 txs
coordIdxs, auths, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
p.txSelector.GetL2TxSelection(p.cfg.TxProcessorConfig)
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
l1UserTxsExtra = nil
l1UserTxs = nil
}
// If there are no txs to forge, no l1UserTxs in the open queue to
// freeze, and we haven't reached the ForgeNoTxsDelay, skip forging the
// batch.
if slotCommitted && now.Sub(p.lastForgeTime) < p.cfg.ForgeNoTxsDelay {
noTxs := false
if len(l1UserTxsExtra) == 0 && len(l1CoordTxs) == 0 && len(poolL2Txs) == 0 {
if batchInfo.L1Batch {
// Query the number of unforged L1UserTxs
// (either in a open queue or in a frozen
// not-yet-forged queue).
count, err := p.historyDB.GetUnforgedL1UserTxsCount()
if err != nil {
return nil, tracerr.Wrap(err)
}
// If there are future L1UserTxs, we forge a
// batch to advance the queues to be able to
// forge the L1UserTxs in the future.
// Otherwise, skip.
if count == 0 {
noTxs = true
}
} else {
noTxs = true
}
}
if noTxs {
if skip, reason, err := p.forgePolicySkipPostSelection(now,
l1UserTxs, l1CoordTxs, poolL2Txs, batchInfo); err != nil {
return nil, nil, tracerr.Wrap(err)
} else if skip {
if err := p.txSelector.Reset(batchInfo.BatchNum-1, false); err != nil {
return nil, tracerr.Wrap(err)
}
return nil, tracerr.Wrap(errForgeNoTxsBeforeDelay)
return nil, nil, tracerr.Wrap(err)
}
return nil, &reason, tracerr.Wrap(err)
}
if batchInfo.L1Batch {
@@ -490,7 +551,7 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, e
}
// 3. Save metadata from TxSelector output for BatchNum
batchInfo.L1UserTxsExtra = l1UserTxsExtra
batchInfo.L1UserTxs = l1UserTxs
batchInfo.L1CoordTxs = l1CoordTxs
batchInfo.L1CoordinatorTxsAuths = auths
batchInfo.CoordIdxs = coordIdxs
@@ -498,10 +559,10 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, e
if err := p.l2DB.StartForging(common.TxIDsFromPoolL2Txs(poolL2Txs),
batchInfo.BatchNum); err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
if err := p.l2DB.UpdateTxsInfo(discardedL2Txs); err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
// Invalidate transactions that become invalid because of
@@ -510,21 +571,21 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, e
// all the nonces smaller than the current one)
err = p.l2DB.InvalidateOldNonces(idxsNonceFromPoolL2Txs(poolL2Txs), batchInfo.BatchNum)
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
// 4. Call BatchBuilder with TxSelector output
configBatch := &batchbuilder.ConfigBatch{
TxProcessorConfig: p.cfg.TxProcessorConfig,
}
zkInputs, err := p.batchBuilder.BuildBatch(coordIdxs, configBatch, l1UserTxsExtra,
zkInputs, err := p.batchBuilder.BuildBatch(coordIdxs, configBatch, l1UserTxs,
l1CoordTxs, poolL2Txs)
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
l2Txs, err := common.PoolL2TxsToL2Txs(poolL2Txs) // NOTE: This is a bit ugly, find a better way
if err != nil {
return nil, tracerr.Wrap(err)
return nil, nil, tracerr.Wrap(err)
}
batchInfo.L2Txs = l2Txs
@@ -536,7 +597,7 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, e
p.state.lastSlotForged = p.stats.Sync.Auction.CurrentSlot.SlotNum
return batchInfo, nil
return batchInfo, nil, nil
}
// waitServerProof gets the generated zkProof & sends it to the SmartContract
@@ -581,7 +642,7 @@ func prepareForgeBatchArgs(batchInfo *BatchInfo) *eth.RollupForgeBatchArgs {
NewLastIdx: int64(zki.Metadata.NewLastIdxRaw),
NewStRoot: zki.Metadata.NewStateRootRaw.BigInt(),
NewExitRoot: zki.Metadata.NewExitRootRaw.BigInt(),
L1UserTxs: batchInfo.L1UserTxsExtra,
L1UserTxs: batchInfo.L1UserTxs,
L1CoordinatorTxs: batchInfo.L1CoordTxs,
L1CoordinatorTxsAuths: batchInfo.L1CoordinatorTxsAuths,
L2TxsData: batchInfo.L2Txs,


@@ -206,11 +206,7 @@ PoolTransfer(0) User2-User3: 300 (126)
require.NoError(t, err)
}
err = pipeline.reset(batchNum, syncStats, &synchronizer.SCVariables{
Rollup: *syncSCVars.Rollup,
Auction: *syncSCVars.Auction,
WDelayer: *syncSCVars.WDelayer,
})
err = pipeline.reset(batchNum, syncStats, syncSCVars)
require.NoError(t, err)
// Sanity check
sdbAccounts, err := pipeline.txSelector.LocalAccountsDB().TestGetAccounts()
@@ -228,12 +224,12 @@ PoolTransfer(0) User2-User3: 300 (126)
batchNum++
batchInfo, err := pipeline.forgeBatch(batchNum)
batchInfo, _, err := pipeline.forgeBatch(batchNum)
require.NoError(t, err)
assert.Equal(t, 3, len(batchInfo.L2Txs))
batchNum++
batchInfo, err = pipeline.forgeBatch(batchNum)
batchInfo, _, err = pipeline.forgeBatch(batchNum)
require.NoError(t, err)
assert.Equal(t, 0, len(batchInfo.L2Txs))
}


@@ -21,7 +21,7 @@ func newL2DB(t *testing.T) *l2db.L2DB {
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
test.WipeDB(db)
return l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
return l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
}
func newStateDB(t *testing.T) *statedb.LocalStateDB {


@@ -31,10 +31,10 @@ type TxManager struct {
batchCh chan *BatchInfo
chainID *big.Int
account accounts.Account
consts synchronizer.SCConsts
consts common.SCConsts
stats synchronizer.Stats
vars synchronizer.SCVariables
vars common.SCVariables
statsVarsCh chan statsVars
discardPipelineCh chan int // int refers to the pipelineNum
@@ -55,7 +55,7 @@ type TxManager struct {
// NewTxManager creates a new TxManager
func NewTxManager(ctx context.Context, cfg *Config, ethClient eth.ClientInterface, l2DB *l2db.L2DB,
coord *Coordinator, scConsts *synchronizer.SCConsts, initSCVars *synchronizer.SCVariables) (
coord *Coordinator, scConsts *common.SCConsts, initSCVars *common.SCVariables) (
*TxManager, error) {
chainID, err := ethClient.EthChainID()
if err != nil {
@@ -104,7 +104,7 @@ func (t *TxManager) AddBatch(ctx context.Context, batchInfo *BatchInfo) {
// SetSyncStatsVars is a thread-safe method to set the synchronizer Stats
func (t *TxManager) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats,
vars *synchronizer.SCVariablesPtr) {
vars *common.SCVariablesPtr) {
select {
case t.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
case <-ctx.Done():
@@ -120,7 +120,7 @@ func (t *TxManager) DiscardPipeline(ctx context.Context, pipelineNum int) {
}
}
func (t *TxManager) syncSCVars(vars synchronizer.SCVariablesPtr) {
func (t *TxManager) syncSCVars(vars common.SCVariablesPtr) {
updateSCVars(&t.vars, vars)
}
@@ -147,7 +147,7 @@ func (t *TxManager) NewAuth(ctx context.Context, batchInfo *BatchInfo) (*bind.Tr
auth.Value = big.NewInt(0) // in wei
gasLimit := t.cfg.ForgeBatchGasCost.Fixed +
uint64(len(batchInfo.L1UserTxsExtra))*t.cfg.ForgeBatchGasCost.L1UserTx +
uint64(len(batchInfo.L1UserTxs))*t.cfg.ForgeBatchGasCost.L1UserTx +
uint64(len(batchInfo.L1CoordTxs))*t.cfg.ForgeBatchGasCost.L1CoordTx +
uint64(len(batchInfo.L2Txs))*t.cfg.ForgeBatchGasCost.L2Tx
auth.GasLimit = gasLimit
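// Worked example with illustrative numbers (not the configured costs):
// Fixed=300000, L1UserTx=7000, L1CoordTx=8000, L2Tx=3500 and a batch of
// 4 L1UserTxs, 2 L1CoordTxs and 100 L2Txs gives
//   300000 + 4*7000 + 2*8000 + 100*3500 = 694000 gas.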
@@ -608,7 +608,7 @@ func (t *TxManager) removeBadBatchInfos(ctx context.Context) error {
func (t *TxManager) canForgeAt(blockNum int64) bool {
return canForge(&t.consts.Auction, &t.vars.Auction,
&t.stats.Sync.Auction.CurrentSlot, &t.stats.Sync.Auction.NextSlot,
t.cfg.ForgerAddress, blockNum)
t.cfg.ForgerAddress, blockNum, t.cfg.MustForgeAtSlotDeadline)
}
func (t *TxManager) mustL1L2Batch(blockNum int64) bool {


@@ -1,10 +1,14 @@
package historydb
import (
"database/sql"
"errors"
"fmt"
"math/big"
"time"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db"
"github.com/hermeznetwork/tracerr"
@@ -32,9 +36,18 @@ func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
return hdb.getBatchAPI(hdb.dbRead, batchNum)
}
// GetBatchInternalAPI returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatchInternalAPI(batchNum common.BatchNum) (*BatchAPI, error) {
return hdb.getBatchAPI(hdb.dbRead, batchNum)
}
func (hdb *HistoryDB) getBatchAPI(d meddler.DB, batchNum common.BatchNum) (*BatchAPI, error) {
batch := &BatchAPI{}
return batch, tracerr.Wrap(meddler.QueryRow(
hdb.dbRead, batch,
if err := meddler.QueryRow(
d, batch,
`SELECT batch.item_id, batch.batch_num, batch.eth_block_num,
batch.forger_addr, batch.fees_collected, batch.total_fees_usd, batch.state_root,
batch.num_accounts, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num,
@@ -42,7 +55,11 @@ func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs
FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
WHERE batch_num = $1;`, batchNum,
))
); err != nil {
return nil, tracerr.Wrap(err)
}
batch.CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batch.CollectedFeesDB)
return batch, nil
}
// GetBatchesAPI return the batches applying the given filters
@@ -143,6 +160,9 @@ func (hdb *HistoryDB) GetBatchesAPI(
if len(batches) == 0 {
return batches, 0, nil
}
for i := range batches {
batches[i].CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batches[i].CollectedFeesDB)
}
return batches, batches[0].TotalItems - uint64(len(batches)), nil
}
@@ -180,6 +200,14 @@ func (hdb *HistoryDB) GetBestBidsAPI(
return nil, 0, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
return hdb.getBestBidsAPI(hdb.dbRead, minSlotNum, maxSlotNum, bidderAddr, limit, order)
}
func (hdb *HistoryDB) getBestBidsAPI(
d meddler.DB,
minSlotNum, maxSlotNum *int64,
bidderAddr *ethCommon.Address,
limit *uint, order string,
) ([]BidAPI, uint64, error) {
var query string
var args []interface{}
// JOIN the best bid of each slot with the latest update of each coordinator
@@ -214,7 +242,7 @@ func (hdb *HistoryDB) GetBestBidsAPI(
}
query = hdb.dbRead.Rebind(queryStr)
bidPtrs := []*BidAPI{}
if err := meddler.QueryAll(hdb.dbRead, &bidPtrs, query, args...); err != nil {
if err := meddler.QueryAll(d, &bidPtrs, query, args...); err != nil {
return nil, 0, tracerr.Wrap(err)
}
// log.Debug(query)
@@ -697,25 +725,6 @@ func (hdb *HistoryDB) GetExitsAPI(
return db.SlicePtrsToSlice(exits).([]ExitAPI), exits[0].TotalItems - uint64(len(exits)), nil
}
// GetBucketUpdatesAPI retrieves latest values for each bucket
func (hdb *HistoryDB) GetBucketUpdatesAPI() ([]BucketUpdateAPI, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
var bucketUpdates []*BucketUpdateAPI
err = meddler.QueryAll(
hdb.dbRead, &bucketUpdates,
`SELECT num_bucket, withdrawals FROM bucket_update
WHERE item_id in(SELECT max(item_id) FROM bucket_update
group by num_bucket)
ORDER BY num_bucket ASC;`,
)
return db.SlicePtrsToSlice(bucketUpdates).([]BucketUpdateAPI), tracerr.Wrap(err)
}
// GetCoordinatorsAPI returns a list of coordinators from the DB and pagination info
func (hdb *HistoryDB) GetCoordinatorsAPI(
bidderAddr, forgerAddr *ethCommon.Address,
@@ -800,29 +809,6 @@ func (hdb *HistoryDB) GetAuctionVarsAPI() (*common.AuctionVariables, error) {
return auctionVars, tracerr.Wrap(err)
}
// GetAuctionVarsUntilSetSlotNumAPI returns all the updates of the auction vars
// from the last entry in which DefaultSlotSetBidSlotNum <= slotNum
func (hdb *HistoryDB) GetAuctionVarsUntilSetSlotNumAPI(slotNum int64, maxItems int) ([]MinBidInfo, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
auctionVars := []*MinBidInfo{}
query := `
SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
WHERE default_slot_set_bid_slot_num < $1
ORDER BY default_slot_set_bid_slot_num DESC
LIMIT $2;
`
err = meddler.QueryAll(hdb.dbRead, &auctionVars, query, slotNum, maxItems)
if err != nil {
return nil, tracerr.Wrap(err)
}
return db.SlicePtrsToSlice(auctionVars).([]MinBidInfo), nil
}
// GetAccountAPI returns an account by its index
func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
cancel, err := hdb.apiConnCon.Acquire()
@@ -941,125 +927,6 @@ func (hdb *HistoryDB) GetAccountsAPI(
accounts[0].TotalItems - uint64(len(accounts)), nil
}
// GetMetricsAPI returns metrics
func (hdb *HistoryDB) GetMetricsAPI(lastBatchNum common.BatchNum) (*Metrics, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
metricsTotals := &MetricsTotals{}
metrics := &Metrics{}
err = meddler.QueryRow(
hdb.dbRead, metricsTotals, `SELECT
COALESCE (MIN(batch.batch_num), 0) as batch_num,
COALESCE (MIN(block.timestamp), NOW()) AS min_timestamp,
COALESCE (MAX(block.timestamp), NOW()) AS max_timestamp
FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS' and batch.batch_num <= $1;`, lastBatchNum)
if err != nil {
return nil, tracerr.Wrap(err)
}
err = meddler.QueryRow(
hdb.dbRead, metricsTotals, `SELECT COUNT(*) as total_txs
FROM tx WHERE tx.batch_num between $1 AND $2;`, metricsTotals.FirstBatchNum, lastBatchNum)
if err != nil {
return nil, tracerr.Wrap(err)
}
seconds := metricsTotals.MaxTimestamp.Sub(metricsTotals.MinTimestamp).Seconds()
// Avoid dividing by 0
if seconds == 0 {
seconds++
}
metrics.TransactionsPerSecond = float64(metricsTotals.TotalTransactions) / seconds
if (lastBatchNum - metricsTotals.FirstBatchNum) > 0 {
metrics.TransactionsPerBatch = float64(metricsTotals.TotalTransactions) /
float64(lastBatchNum-metricsTotals.FirstBatchNum+1)
} else {
metrics.TransactionsPerBatch = float64(0)
}
err = meddler.QueryRow(
hdb.dbRead, metricsTotals, `SELECT COUNT(*) AS total_batches,
COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
WHERE batch_num between $1 and $2;`, metricsTotals.FirstBatchNum, lastBatchNum)
if err != nil {
return nil, tracerr.Wrap(err)
}
if metricsTotals.TotalBatches > 0 {
metrics.BatchFrequency = seconds / float64(metricsTotals.TotalBatches)
} else {
metrics.BatchFrequency = 0
}
if metricsTotals.TotalTransactions > 0 {
metrics.AvgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
} else {
metrics.AvgTransactionFee = 0
}
err = meddler.QueryRow(
hdb.dbRead, metrics,
`SELECT COUNT(*) AS total_bjjs, COUNT(DISTINCT(bjj)) AS total_accounts FROM account;`)
if err != nil {
return nil, tracerr.Wrap(err)
}
err = meddler.QueryRow(
hdb.dbRead, metrics,
`SELECT COALESCE (AVG(EXTRACT(EPOCH FROM (forged.timestamp - added.timestamp))), 0)
AS estimated_time_to_forge_l1 FROM tx
INNER JOIN block AS added ON tx.eth_block_num = added.eth_block_num
INNER JOIN batch AS forged_batch ON tx.batch_num = forged_batch.batch_num
INNER JOIN block AS forged ON forged_batch.eth_block_num = forged.eth_block_num
WHERE tx.batch_num between $1 and $2 AND tx.is_l1 AND tx.user_origin;`,
metricsTotals.FirstBatchNum, lastBatchNum,
)
if err != nil {
return nil, tracerr.Wrap(err)
}
return metrics, nil
}
// GetAvgTxFeeAPI returns average transaction fee of the last 1h
func (hdb *HistoryDB) GetAvgTxFeeAPI() (float64, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return 0, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
metricsTotals := &MetricsTotals{}
err = meddler.QueryRow(
hdb.dbRead, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
COALESCE (MIN(tx.batch_num), 0) as batch_num
FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`)
if err != nil {
return 0, tracerr.Wrap(err)
}
err = meddler.QueryRow(
hdb.dbRead, metricsTotals, `SELECT COUNT(*) AS total_batches,
COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
if err != nil {
return 0, tracerr.Wrap(err)
}
var avgTransactionFee float64
if metricsTotals.TotalTransactions > 0 {
avgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
} else {
avgTransactionFee = 0
}
return avgTransactionFee, nil
}
// GetCommonAccountAPI returns the account associated to an account idx
func (hdb *HistoryDB) GetCommonAccountAPI(idx common.Idx) (*common.Account, error) {
cancel, err := hdb.apiConnCon.Acquire()
@@ -1075,3 +942,265 @@ func (hdb *HistoryDB) GetCommonAccountAPI(idx common.Idx) (*common.Account, erro
)
return account, tracerr.Wrap(err)
}
// GetCoordinatorAPI returns a coordinator by its bidderAddr
func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
return hdb.getCoordinatorAPI(hdb.dbRead, bidderAddr)
}
func (hdb *HistoryDB) getCoordinatorAPI(d meddler.DB, bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
coordinator := &CoordinatorAPI{}
err := meddler.QueryRow(
d, coordinator,
"SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
bidderAddr,
)
return coordinator, tracerr.Wrap(err)
}
// GetNodeInfoAPI returns the NodeInfo
func (hdb *HistoryDB) GetNodeInfoAPI() (*NodeInfo, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
return hdb.GetNodeInfo()
}
// GetBucketUpdatesInternalAPI returns the latest bucket updates
func (hdb *HistoryDB) GetBucketUpdatesInternalAPI() ([]BucketUpdateAPI, error) {
var bucketUpdates []*BucketUpdateAPI
err := meddler.QueryAll(
hdb.dbRead, &bucketUpdates,
`SELECT num_bucket, withdrawals FROM bucket_update
WHERE item_id in(SELECT max(item_id) FROM bucket_update
group by num_bucket)
ORDER BY num_bucket ASC;`,
)
return db.SlicePtrsToSlice(bucketUpdates).([]BucketUpdateAPI), tracerr.Wrap(err)
}
// GetNextForgersInternalAPI returns next forgers
func (hdb *HistoryDB) GetNextForgersInternalAPI(auctionVars *common.AuctionVariables,
auctionConsts *common.AuctionConstants,
lastBlock common.Block, currentSlot, lastClosedSlot int64) ([]NextForgerAPI, error) {
secondsPerBlock := int64(15) //nolint:gomnd
// currentSlot and lastClosedSlot included
limit := uint(lastClosedSlot - currentSlot + 1)
bids, _, err := hdb.getBestBidsAPI(hdb.dbRead, &currentSlot, &lastClosedSlot, nil, &limit, "ASC")
if err != nil && tracerr.Unwrap(err) != sql.ErrNoRows {
return nil, tracerr.Wrap(err)
}
nextForgers := []NextForgerAPI{}
// Get min bid info
var minBidInfo []MinBidInfo
if currentSlot >= auctionVars.DefaultSlotSetBidSlotNum {
// All min bids can be calculated with the last update of AuctionVariables
minBidInfo = []MinBidInfo{{
DefaultSlotSetBid: auctionVars.DefaultSlotSetBid,
DefaultSlotSetBidSlotNum: auctionVars.DefaultSlotSetBidSlotNum,
}}
} else {
// Get all the relevant updates from the DB
minBidInfo, err = hdb.getMinBidInfo(hdb.dbRead, currentSlot, lastClosedSlot)
if err != nil {
return nil, tracerr.Wrap(err)
}
}
// Create nextForger for each slot
for i := currentSlot; i <= lastClosedSlot; i++ {
fromBlock := i*int64(auctionConsts.BlocksPerSlot) +
auctionConsts.GenesisBlockNum
toBlock := (i+1)*int64(auctionConsts.BlocksPerSlot) +
auctionConsts.GenesisBlockNum - 1
nextForger := NextForgerAPI{
Period: Period{
SlotNum: i,
FromBlock: fromBlock,
ToBlock: toBlock,
FromTimestamp: lastBlock.Timestamp.Add(time.Second *
time.Duration(secondsPerBlock*(fromBlock-lastBlock.Num))),
ToTimestamp: lastBlock.Timestamp.Add(time.Second *
time.Duration(secondsPerBlock*(toBlock-lastBlock.Num))),
},
}
foundForger := false
// If there is a bid for a slot, get forger (coordinator)
for j := range bids {
slotNum := bids[j].SlotNum
if slotNum == i {
// There's a bid for the slot
// Check if the bid is greater than the minimum required
for i := 0; i < len(minBidInfo); i++ {
// Find the most recent update
if slotNum >= minBidInfo[i].DefaultSlotSetBidSlotNum {
// Get min bid
minBidSelector := slotNum % int64(len(auctionVars.DefaultSlotSetBid))
minBid := minBidInfo[i].DefaultSlotSetBid[minBidSelector]
// Check if the bid has beaten the minimum
bid, ok := new(big.Int).SetString(string(bids[j].BidValue), 10)
if !ok {
return nil, tracerr.New("Wrong bid value, error parsing it as big.Int")
}
if minBid.Cmp(bid) == 1 {
// Min bid is greater than bid, the slot will be forged by the boot coordinator
break
}
foundForger = true
break
}
}
if !foundForger { // There is no bid or it's smaller than the minimum
break
}
coordinator, err := hdb.getCoordinatorAPI(hdb.dbRead, bids[j].Bidder)
if err != nil {
return nil, tracerr.Wrap(err)
}
nextForger.Coordinator = *coordinator
break
}
}
// If there is no bid, the coordinator that will forge is the boot coordinator
if !foundForger {
nextForger.Coordinator = CoordinatorAPI{
Forger: auctionVars.BootCoordinator,
URL: auctionVars.BootCoordinatorURL,
}
}
nextForgers = append(nextForgers, nextForger)
}
return nextForgers, nil
}
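A worked example of the min-bid check above, assuming the auction's DefaultSlotSetBid array has six entries: slot 20 takes DefaultSlotSetBid[20%6] = DefaultSlotSetBid[2] as its minimum. Since minBid.Cmp(bid) == 1 means the minimum is strictly greater than the bid, such a bid loses and the slot falls back to the boot coordinator; a bid at or above the minimum makes the bidder the next forger.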
// GetMetricsInternalAPI returns the MetricsAPI
func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metrics *MetricsAPI, poolLoad int64, err error) {
metrics = &MetricsAPI{}
type period struct {
FromBatchNum common.BatchNum `meddler:"from_batch_num"`
FromTimestamp time.Time `meddler:"from_timestamp"`
ToBatchNum common.BatchNum `meddler:"-"`
ToTimestamp time.Time `meddler:"to_timestamp"`
}
p := &period{
ToBatchNum: lastBatchNum,
}
if err := meddler.QueryRow(
hdb.dbRead, p, `SELECT
COALESCE (MIN(batch.batch_num), 0) as from_batch_num,
COALESCE (MIN(block.timestamp), NOW()) AS from_timestamp,
COALESCE (MAX(block.timestamp), NOW()) AS to_timestamp
FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`,
); err != nil {
return nil, 0, tracerr.Wrap(err)
}
// Get the amount of txs of that period
row := hdb.dbRead.QueryRow(
`SELECT COUNT(*) as total_txs FROM tx WHERE tx.batch_num between $1 AND $2;`,
p.FromBatchNum, p.ToBatchNum,
)
var nTxs int
if err := row.Scan(&nTxs); err != nil {
return nil, 0, tracerr.Wrap(err)
}
// Set txs/s
seconds := p.ToTimestamp.Sub(p.FromTimestamp).Seconds()
if seconds == 0 { // Avoid dividing by 0
seconds++
}
metrics.TransactionsPerSecond = float64(nTxs) / seconds
// Set txs/batch
nBatches := p.ToBatchNum - p.FromBatchNum + 1
if nBatches == 0 { // Avoid dividing by 0
nBatches++
}
if (p.ToBatchNum - p.FromBatchNum) > 0 {
metrics.TransactionsPerBatch = float64(nTxs) /
float64(nBatches)
} else {
metrics.TransactionsPerBatch = 0
}
// Get total fee of that period
row = hdb.dbRead.QueryRow(
`SELECT COALESCE (SUM(total_fees_usd), 0) FROM batch WHERE batch_num between $1 AND $2;`,
p.FromBatchNum, p.ToBatchNum,
)
var totalFee float64
if err := row.Scan(&totalFee); err != nil {
return nil, 0, tracerr.Wrap(err)
}
// Set batch frequency
metrics.BatchFrequency = seconds / float64(nBatches)
// Set avg transaction fee (only L2 txs have fee)
row = hdb.dbRead.QueryRow(
`SELECT COUNT(*) as total_txs FROM tx WHERE tx.batch_num between $1 AND $2 AND NOT is_l1;`,
p.FromBatchNum, p.ToBatchNum,
)
var nL2Txs int
if err := row.Scan(&nL2Txs); err != nil {
return nil, 0, tracerr.Wrap(err)
}
if nL2Txs > 0 {
metrics.AvgTransactionFee = totalFee / float64(nL2Txs)
} else {
metrics.AvgTransactionFee = 0
}
// Get and set amount of registered accounts
type registeredAccounts struct {
TokenAccounts int64 `meddler:"token_accounts"`
Wallets int64 `meddler:"wallets"`
}
ra := &registeredAccounts{}
if err := meddler.QueryRow(
hdb.dbRead, ra,
`SELECT COUNT(*) AS token_accounts, COUNT(DISTINCT(bjj)) AS wallets FROM account;`,
); err != nil {
return nil, 0, tracerr.Wrap(err)
}
metrics.TokenAccounts = ra.TokenAccounts
metrics.Wallets = ra.Wallets
// Get and set estimated time to forge L1 tx
row = hdb.dbRead.QueryRow(
`SELECT COALESCE (AVG(EXTRACT(EPOCH FROM (forged.timestamp - added.timestamp))), 0) FROM tx
INNER JOIN block AS added ON tx.eth_block_num = added.eth_block_num
INNER JOIN batch AS forged_batch ON tx.batch_num = forged_batch.batch_num
INNER JOIN block AS forged ON forged_batch.eth_block_num = forged.eth_block_num
WHERE tx.batch_num between $1 and $2 AND tx.is_l1 AND tx.user_origin;`,
p.FromBatchNum, p.ToBatchNum,
)
var timeToForgeL1 float64
if err := row.Scan(&timeToForgeL1); err != nil {
return nil, 0, tracerr.Wrap(err)
}
metrics.EstimatedTimeToForgeL1 = timeToForgeL1
// Get amount of txs in the pool
row = hdb.dbRead.QueryRow(
`SELECT COUNT(*) FROM tx_pool WHERE state = $1 AND NOT external_delete;`,
common.PoolL2TxStatePending,
)
if err := row.Scan(&poolLoad); err != nil {
return nil, 0, tracerr.Wrap(err)
}
return metrics, poolLoad, nil
}
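A worked pass through the arithmetic (illustrative numbers): with FromBatchNum = 100, ToBatchNum = 196 and 4800 txs in a full 86400-second window, TransactionsPerSecond = 4800/86400 ≈ 0.056, nBatches = 196-100+1 = 97, TransactionsPerBatch = 4800/97 ≈ 49.5 and BatchFrequency = 86400/97 ≈ 890.7 seconds.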
// GetStateAPI returns the StateAPI
func (hdb *HistoryDB) GetStateAPI() (*StateAPI, error) {
cancel, err := hdb.apiConnCon.Acquire()
defer cancel()
if err != nil {
return nil, tracerr.Wrap(err)
}
defer hdb.apiConnCon.Release()
return hdb.getStateAPI(hdb.dbRead)
}


@@ -456,13 +456,10 @@ func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
// UpdateTokenValue updates the USD value of a token. Value is the price in
// USD of a normalized token (1 token = 10^decimals units)
func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
// Sanitize symbol
tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
func (hdb *HistoryDB) UpdateTokenValue(tokenAddr ethCommon.Address, value float64) error {
_, err := hdb.dbWrite.Exec(
"UPDATE token SET usd = $1 WHERE symbol = $2;",
value, tokenSymbol,
"UPDATE token SET usd = $1 WHERE eth_addr = $2;",
value, tokenAddr,
)
return tracerr.Wrap(err)
}
@@ -696,11 +693,11 @@ func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
var txs []*common.L1Tx
err := meddler.QueryAll(
hdb.dbRead, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
hdb.dbRead, &txs,
`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE 0 END) AS effective_amount,
tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE 0 END) AS effective_deposit_amount,
tx.eth_block_num, tx.type, tx.batch_num
FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
)
@@ -842,6 +839,18 @@ func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
}
func (hdb *HistoryDB) getMinBidInfo(d meddler.DB,
currentSlot, lastClosedSlot int64) ([]MinBidInfo, error) {
minBidInfo := []*MinBidInfo{}
query := `
SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
WHERE default_slot_set_bid_slot_num < $1
ORDER BY default_slot_set_bid_slot_num DESC
LIMIT $2;`
err := meddler.QueryAll(d, &minBidInfo, query, lastClosedSlot, int(lastClosedSlot-currentSlot)+1)
return db.SlicePtrsToSlice(minBidInfo).([]MinBidInfo), tracerr.Wrap(err)
}
func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
if len(tokenExchanges) == 0 {
return nil
@@ -1140,17 +1149,6 @@ func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
return tracerr.Wrap(txn.Commit())
}
// GetCoordinatorAPI returns a coordinator by its bidderAddr
func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
coordinator := &CoordinatorAPI{}
err := meddler.QueryRow(
hdb.dbRead, coordinator,
"SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
bidderAddr,
)
return coordinator, tracerr.Wrap(err)
}
// AddAuctionVars insert auction vars into the DB
func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "auction_vars", auctionVars))
@@ -1161,7 +1159,7 @@ func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
tokens := []*TokenWithUSD{}
if err := meddler.QueryAll(
hdb.dbRead, &tokens,
"SELECT * FROM TOKEN",
"SELECT * FROM token ORDER BY token_id ASC",
); err != nil {
return nil, tracerr.Wrap(err)
}
@@ -1170,3 +1168,60 @@ func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
}
return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil
}
const (
// CreateAccountExtraFeePercentage is the multiplication factor over
// the average fee for CreateAccount that is applied to obtain the
// recommended fee for CreateAccount
CreateAccountExtraFeePercentage float64 = 2.5
// CreateAccountInternalExtraFeePercentage is the multiplication factor
// over the average fee for CreateAccountInternal that is applied to
// obtain the recommended fee for CreateAccountInternal
CreateAccountInternalExtraFeePercentage float64 = 2.0
)
// GetRecommendedFee returns the RecommendedFee information
func (hdb *HistoryDB) GetRecommendedFee(minFeeUSD, maxFeeUSD float64) (*common.RecommendedFee, error) {
var recommendedFee common.RecommendedFee
// Get total txs and the batch of the first selected tx of the last hour
type totalTxsSinceBatchNum struct {
TotalTxs int `meddler:"total_txs"`
FirstBatchNum common.BatchNum `meddler:"batch_num"`
}
ttsbn := &totalTxsSinceBatchNum{}
if err := meddler.QueryRow(
hdb.dbRead, ttsbn, `SELECT COUNT(tx.*) as total_txs,
COALESCE (MIN(tx.batch_num), 0) as batch_num
FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`,
); err != nil {
return nil, tracerr.Wrap(err)
}
// Get the number of batches and accumulated fees for the last hour
type totalBatchesAndFee struct {
TotalBatches int `meddler:"total_batches"`
TotalFees float64 `meddler:"total_fees"`
}
tbf := &totalBatchesAndFee{}
if err := meddler.QueryRow(
hdb.dbRead, tbf, `SELECT COUNT(*) AS total_batches,
COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
WHERE batch_num > $1;`, ttsbn.FirstBatchNum,
); err != nil {
return nil, tracerr.Wrap(err)
}
// Update NodeInfo struct
var avgTransactionFee float64
if ttsbn.TotalTxs > 0 {
avgTransactionFee = tbf.TotalFees / float64(ttsbn.TotalTxs)
} else {
avgTransactionFee = 0
}
recommendedFee.ExistingAccount = math.Min(maxFeeUSD,
math.Max(avgTransactionFee, minFeeUSD))
recommendedFee.CreatesAccount = math.Min(maxFeeUSD,
math.Max(CreateAccountExtraFeePercentage*avgTransactionFee, minFeeUSD))
recommendedFee.CreatesAccountInternal = math.Min(maxFeeUSD,
math.Max(CreateAccountInternalExtraFeePercentage*avgTransactionFee, minFeeUSD))
return &recommendedFee, nil
}
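For reference, each recommendation above is the hour's average fee scaled by a per-type factor and clamped into [minFeeUSD, maxFeeUSD]. A minimal self-contained sketch (clampFee is a hypothetical helper, not part of this diff):

package main

import (
	"fmt"
	"math"
)

// clampFee scales the average fee by a per-tx-type factor and clamps the
// result into [minFeeUSD, maxFeeUSD], mirroring GetRecommendedFee above.
func clampFee(avgFee, factor, minFeeUSD, maxFeeUSD float64) float64 {
	return math.Min(maxFeeUSD, math.Max(factor*avgFee, minFeeUSD))
}

func main() {
	avg, minUSD, maxUSD := 0.10, 0.05, 10.0
	fmt.Println(clampFee(avg, 1.0, minUSD, maxUSD)) // ExistingAccount: 0.1
	fmt.Println(clampFee(avg, 2.5, minUSD, maxUSD)) // CreatesAccount: 0.25
	fmt.Println(clampFee(avg, 2.0, minUSD, maxUSD)) // CreatesAccountInternal: 0.2
}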


@@ -11,6 +11,7 @@ import (
"time"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
dbUtils "github.com/hermeznetwork/hermez-node/db"
"github.com/hermeznetwork/hermez-node/log"
@@ -166,7 +167,7 @@ func TestBatches(t *testing.T) {
if i%2 != 0 {
// Set value to the token
value := (float64(i) + 5) * 5.389329
assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
}
}
@@ -276,7 +277,7 @@ func TestTokens(t *testing.T) {
// Update token value
for i, token := range tokens {
value := 1.01 * float64(i)
assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
}
// Fetch tokens
fetchedTokens, err = historyDB.GetTokensTest()
@@ -302,7 +303,7 @@ func TestTokensUTF8(t *testing.T) {
// Generate fake tokens
const nTokens = 5
tokens, ethToken := test.GenTokens(nTokens, blocks)
nonUTFTokens := make([]common.Token, len(tokens)+1)
nonUTFTokens := make([]common.Token, len(tokens))
// Force token.name and token.symbol to be non UTF-8 Strings
for i, token := range tokens {
token.Name = fmt.Sprint("NON-UTF8-NAME-\xc5-", i)
@@ -332,7 +333,7 @@ func TestTokensUTF8(t *testing.T) {
// Update token value
for i, token := range nonUTFTokens {
value := 1.01 * float64(i)
assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
}
// Fetch tokens
fetchedTokens, err = historyDB.GetTokensTest()
@@ -1176,7 +1177,7 @@ func TestGetMetricsAPI(t *testing.T) {
assert.NoError(t, err)
}
res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
res, _, err := historyDB.GetMetricsInternalAPI(common.BatchNum(numBatches))
assert.NoError(t, err)
assert.Equal(t, float64(numTx)/float64(numBatches), res.TransactionsPerBatch)
@@ -1185,8 +1186,8 @@ func TestGetMetricsAPI(t *testing.T) {
// There is a -2 as time for first and last batch is not taken into account
assert.InEpsilon(t, float64(frequency)*float64(numBatches-2)/float64(numBatches), res.BatchFrequency, 0.01)
assert.InEpsilon(t, float64(numTx)/float64(frequency*blockNum-frequency), res.TransactionsPerSecond, 0.01)
assert.Equal(t, int64(3), res.TotalAccounts)
assert.Equal(t, int64(3), res.TotalBJJs)
assert.Equal(t, int64(3), res.TokenAccounts)
assert.Equal(t, int64(3), res.Wallets)
// Til does not set fees
assert.Equal(t, float64(0), res.AvgTransactionFee)
}
@@ -1254,28 +1255,22 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
assert.NoError(t, err)
}
res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
res, _, err := historyDBWithACC.GetMetricsInternalAPI(common.BatchNum(numBatches))
assert.NoError(t, err)
assert.InEpsilon(t, 1.0, res.TransactionsPerBatch, 0.1)
assert.InEpsilon(t, res.BatchFrequency, float64(blockTime/time.Second), 0.1)
assert.InEpsilon(t, 1.0/float64(blockTime/time.Second), res.TransactionsPerSecond, 0.1)
assert.Equal(t, int64(3), res.TotalAccounts)
assert.Equal(t, int64(3), res.TotalBJJs)
assert.Equal(t, int64(3), res.TokenAccounts)
assert.Equal(t, int64(3), res.Wallets)
// Til does not set fees
assert.Equal(t, float64(0), res.AvgTransactionFee)
}
func TestGetMetricsAPIEmpty(t *testing.T) {
test.WipeDB(historyDB.DB())
_, err := historyDBWithACC.GetMetricsAPI(0)
assert.NoError(t, err)
}
func TestGetAvgTxFeeEmpty(t *testing.T) {
test.WipeDB(historyDB.DB())
_, err := historyDBWithACC.GetAvgTxFeeAPI()
_, _, err := historyDBWithACC.GetMetricsInternalAPI(0)
assert.NoError(t, err)
}
@@ -1464,3 +1459,128 @@ func setTestBlocks(from, to int64) []common.Block {
}
return blocks
}
func TestNodeInfo(t *testing.T) {
test.WipeDB(historyDB.DB())
err := historyDB.SetStateInternalAPI(&StateAPI{})
require.NoError(t, err)
clientSetup := test.NewClientSetupExample()
constants := &Constants{
SCConsts: common.SCConsts{
Rollup: *clientSetup.RollupConstants,
Auction: *clientSetup.AuctionConstants,
WDelayer: *clientSetup.WDelayerConstants,
},
ChainID: 42,
HermezAddress: clientSetup.AuctionConstants.HermezRollup,
}
err = historyDB.SetConstants(constants)
require.NoError(t, err)
// Test parameters
var f64 float64 = 1.2
var i64 int64 = 8888
addr := ethCommon.HexToAddress("0x1234")
hash := ethCommon.HexToHash("0x5678")
stateAPI := &StateAPI{
NodePublicInfo: NodePublicInfo{
ForgeDelay: 3.1,
},
Network: NetworkAPI{
LastEthBlock: 12,
LastSyncBlock: 34,
LastBatch: &BatchAPI{
ItemID: 123,
BatchNum: 456,
EthBlockNum: 789,
EthBlockHash: hash,
Timestamp: time.Now(),
ForgerAddr: addr,
// CollectedFeesDB: map[common.TokenID]*big.Int{
// 0: big.NewInt(11111),
// 1: big.NewInt(21111),
// 2: big.NewInt(31111),
// },
CollectedFeesAPI: apitypes.CollectedFeesAPI(map[common.TokenID]apitypes.BigIntStr{
0: apitypes.BigIntStr("11111"),
1: apitypes.BigIntStr("21111"),
2: apitypes.BigIntStr("31111"),
}),
TotalFeesUSD: &f64,
StateRoot: apitypes.BigIntStr("1234"),
NumAccounts: 11,
ExitRoot: apitypes.BigIntStr("5678"),
ForgeL1TxsNum: &i64,
SlotNum: 44,
ForgedTxs: 23,
TotalItems: 0,
FirstItem: 0,
LastItem: 0,
},
CurrentSlot: 22,
NextForgers: []NextForgerAPI{
{
Coordinator: CoordinatorAPI{
ItemID: 111,
Bidder: addr,
Forger: addr,
EthBlockNum: 566,
URL: "asd",
TotalItems: 0,
FirstItem: 0,
LastItem: 0,
},
Period: Period{
SlotNum: 33,
FromBlock: 55,
ToBlock: 66,
FromTimestamp: time.Now(),
ToTimestamp: time.Now(),
},
},
},
},
Metrics: MetricsAPI{
TransactionsPerBatch: 1.1,
TokenAccounts: 42,
},
Rollup: *NewRollupVariablesAPI(clientSetup.RollupVariables),
Auction: *NewAuctionVariablesAPI(clientSetup.AuctionVariables),
WithdrawalDelayer: *clientSetup.WDelayerVariables,
RecommendedFee: common.RecommendedFee{
ExistingAccount: 0.15,
},
}
err = historyDB.SetStateInternalAPI(stateAPI)
require.NoError(t, err)
nodeConfig := &NodeConfig{
MaxPoolTxs: 123,
MinFeeUSD: 0.5,
}
err = historyDB.SetNodeConfig(nodeConfig)
require.NoError(t, err)
dbConstants, err := historyDB.GetConstants()
require.NoError(t, err)
assert.Equal(t, constants, dbConstants)
dbNodeConfig, err := historyDB.GetNodeConfig()
require.NoError(t, err)
assert.Equal(t, nodeConfig, dbNodeConfig)
dbStateAPI, err := historyDB.getStateAPI(historyDB.dbRead)
require.NoError(t, err)
assert.Equal(t, stateAPI.Network.LastBatch.Timestamp.Unix(),
dbStateAPI.Network.LastBatch.Timestamp.Unix())
dbStateAPI.Network.LastBatch.Timestamp = stateAPI.Network.LastBatch.Timestamp
assert.Equal(t, stateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix(),
dbStateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix())
dbStateAPI.Network.NextForgers[0].Period.FromTimestamp = stateAPI.Network.NextForgers[0].Period.FromTimestamp
assert.Equal(t, stateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix(),
dbStateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix())
dbStateAPI.Network.NextForgers[0].Period.ToTimestamp = stateAPI.Network.NextForgers[0].Period.ToTimestamp
assert.Equal(t, stateAPI, dbStateAPI)
}

db/historydb/nodeinfo.go (new file, 173 lines)

@@ -0,0 +1,173 @@
package historydb
import (
"time"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/tracerr"
"github.com/russross/meddler"
)
// Period represents a time period in Ethereum
type Period struct {
SlotNum int64 `json:"slotNum"`
FromBlock int64 `json:"fromBlock"`
ToBlock int64 `json:"toBlock"`
FromTimestamp time.Time `json:"fromTimestamp"`
ToTimestamp time.Time `json:"toTimestamp"`
}
// NextForgerAPI represents the next forger exposed via the API
type NextForgerAPI struct {
Coordinator CoordinatorAPI `json:"coordinator"`
Period Period `json:"period"`
}
// NetworkAPI is the network state exposed via the API
type NetworkAPI struct {
LastEthBlock int64 `json:"lastEthereumBlock"`
LastSyncBlock int64 `json:"lastSynchedBlock"`
LastBatch *BatchAPI `json:"lastBatch"`
CurrentSlot int64 `json:"currentSlot"`
NextForgers []NextForgerAPI `json:"nextForgers"`
PendingL1Txs int `json:"pendingL1Transactions"`
}
// NodePublicInfo is the configuration and metrics of the node that is exposed via API
type NodePublicInfo struct {
// ForgeDelay in seconds
ForgeDelay float64 `json:"forgeDelay"`
// PoolLoad is the number of transactions in the pool
PoolLoad int64 `json:"poolLoad"`
}
// StateAPI is an object representing the node and network state exposed via the API
type StateAPI struct {
NodePublicInfo NodePublicInfo `json:"node"`
Network NetworkAPI `json:"network"`
Metrics MetricsAPI `json:"metrics"`
Rollup RollupVariablesAPI `json:"rollup"`
Auction AuctionVariablesAPI `json:"auction"`
WithdrawalDelayer common.WDelayerVariables `json:"withdrawalDelayer"`
RecommendedFee common.RecommendedFee `json:"recommendedFee"`
}
// Constants contains network constants
type Constants struct {
common.SCConsts
ChainID uint16
HermezAddress ethCommon.Address
}
// NodeConfig contains the node config exposed in the API
type NodeConfig struct {
MaxPoolTxs uint32
MinFeeUSD float64
MaxFeeUSD float64
ForgeDelay float64
}
// NodeInfo contains information about the node used when serving the API
type NodeInfo struct {
ItemID int `meddler:"item_id,pk"`
StateAPI *StateAPI `meddler:"state,json"`
NodeConfig *NodeConfig `meddler:"config,json"`
Constants *Constants `meddler:"constants,json"`
}
// GetNodeInfo returns the NodeInfo
func (hdb *HistoryDB) GetNodeInfo() (*NodeInfo, error) {
ni := &NodeInfo{}
err := meddler.QueryRow(
hdb.dbRead, ni, `SELECT * FROM node_info WHERE item_id = 1;`,
)
return ni, tracerr.Wrap(err)
}
// GetConstants returns the Constants
func (hdb *HistoryDB) GetConstants() (*Constants, error) {
var nodeInfo NodeInfo
err := meddler.QueryRow(
hdb.dbRead, &nodeInfo,
"SELECT constants FROM node_info WHERE item_id = 1;",
)
return nodeInfo.Constants, tracerr.Wrap(err)
}
// SetConstants sets the Constants
func (hdb *HistoryDB) SetConstants(constants *Constants) error {
_constants := struct {
Constants *Constants `meddler:"constants,json"`
}{constants}
values, err := meddler.Default.Values(&_constants, false)
if err != nil {
return tracerr.Wrap(err)
}
_, err = hdb.dbWrite.Exec(
"UPDATE node_info SET constants = $1 WHERE item_id = 1;",
values[0],
)
return tracerr.Wrap(err)
}
// GetStateInternalAPI returns the StateAPI
func (hdb *HistoryDB) GetStateInternalAPI() (*StateAPI, error) {
return hdb.getStateAPI(hdb.dbRead)
}
func (hdb *HistoryDB) getStateAPI(d meddler.DB) (*StateAPI, error) {
var nodeInfo NodeInfo
err := meddler.QueryRow(
d, &nodeInfo,
"SELECT state FROM node_info WHERE item_id = 1;",
)
return nodeInfo.StateAPI, tracerr.Wrap(err)
}
// SetStateInternalAPI sets the StateAPI
func (hdb *HistoryDB) SetStateInternalAPI(stateAPI *StateAPI) error {
if stateAPI.Network.LastBatch != nil {
stateAPI.Network.LastBatch.CollectedFeesAPI =
apitypes.NewCollectedFeesAPI(stateAPI.Network.LastBatch.CollectedFeesDB)
}
_stateAPI := struct {
StateAPI *StateAPI `meddler:"state,json"`
}{stateAPI}
values, err := meddler.Default.Values(&_stateAPI, false)
if err != nil {
return tracerr.Wrap(err)
}
_, err = hdb.dbWrite.Exec(
"UPDATE node_info SET state = $1 WHERE item_id = 1;",
values[0],
)
return tracerr.Wrap(err)
}
// GetNodeConfig returns the NodeConfig
func (hdb *HistoryDB) GetNodeConfig() (*NodeConfig, error) {
var nodeInfo NodeInfo
err := meddler.QueryRow(
hdb.dbRead, &nodeInfo,
"SELECT config FROM node_info WHERE item_id = 1;",
)
return nodeInfo.NodeConfig, tracerr.Wrap(err)
}
// SetNodeConfig sets the NodeConfig
func (hdb *HistoryDB) SetNodeConfig(nodeConfig *NodeConfig) error {
_nodeConfig := struct {
NodeConfig *NodeConfig `meddler:"config,json"`
}{nodeConfig}
values, err := meddler.Default.Values(&_nodeConfig, false)
if err != nil {
return tracerr.Wrap(err)
}
_, err = hdb.dbWrite.Exec(
"UPDATE node_info SET config = $1 WHERE item_id = 1;",
values[0],
)
return tracerr.Wrap(err)
}


@@ -6,7 +6,7 @@ import (
"time"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/iden3/go-iden3-crypto/babyjub"
"github.com/iden3/go-merkletree"
@@ -295,7 +295,8 @@ type BatchAPI struct {
EthBlockHash ethCommon.Hash `json:"ethereumBlockHash" meddler:"hash"`
Timestamp time.Time `json:"timestamp" meddler:"timestamp,utctime"`
ForgerAddr ethCommon.Address `json:"forgerAddr" meddler:"forger_addr"`
CollectedFees apitypes.CollectedFees `json:"collectedFees" meddler:"fees_collected,json"`
CollectedFeesDB map[common.TokenID]*big.Int `json:"-" meddler:"fees_collected,json"`
CollectedFeesAPI apitypes.CollectedFeesAPI `json:"collectedFees" meddler:"-"`
TotalFeesUSD *float64 `json:"historicTotalCollectedFeesUSD" meddler:"total_fees_usd"`
StateRoot apitypes.BigIntStr `json:"stateRoot" meddler:"state_root"`
NumAccounts int `json:"numAccounts" meddler:"num_accounts"`
@@ -308,28 +309,17 @@ type BatchAPI struct {
LastItem uint64 `json:"-" meddler:"last_item"`
}
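The field split above keeps the DB and API encodings apart: the meddler "-" tag stops CollectedFeesAPI from being scanned from the database, while the json "-" tag keeps the raw CollectedFeesDB map out of API responses. A one-line sketch of the conversion between them, mirroring the call SetStateInternalAPI makes later in this diff:

// Re-encode the raw map[common.TokenID]*big.Int read via meddler as decimal
// strings for the JSON API.
batch.CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batch.CollectedFeesDB)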
// Metrics define metrics of the network
type Metrics struct {
// MetricsAPI define metrics of the network
type MetricsAPI struct {
TransactionsPerBatch float64 `json:"transactionsPerBatch"`
BatchFrequency float64 `json:"batchFrequency"`
TransactionsPerSecond float64 `json:"transactionsPerSecond"`
TotalAccounts int64 `json:"totalAccounts" meddler:"total_accounts"`
TotalBJJs int64 `json:"totalBJJs" meddler:"total_bjjs"`
TokenAccounts int64 `json:"tokenAccounts"`
Wallets int64 `json:"wallets"`
AvgTransactionFee float64 `json:"avgTransactionFee"`
EstimatedTimeToForgeL1 float64 `json:"estimatedTimeToForgeL1" meddler:"estimated_time_to_forge_l1"`
}
// MetricsTotals is used to get temporal information from HistoryDB
// to calculate data to be stored into the Metrics struct
type MetricsTotals struct {
TotalTransactions uint64 `meddler:"total_txs"`
FirstBatchNum common.BatchNum `meddler:"batch_num"`
TotalBatches int64 `meddler:"total_batches"`
TotalFeesUSD float64 `meddler:"total_fees"`
MinTimestamp time.Time `meddler:"min_timestamp,utctime"`
MaxTimestamp time.Time `meddler:"max_timestamp,utctime"`
}
// BidAPI is a representation of a bid with additional information
// required by the API
type BidAPI struct {
@@ -380,6 +370,27 @@ type RollupVariablesAPI struct {
SafeMode bool `json:"safeMode" meddler:"safe_mode"`
}
// NewRollupVariablesAPI creates a RollupVariablesAPI from common.RollupVariables
func NewRollupVariablesAPI(rollupVariables *common.RollupVariables) *RollupVariablesAPI {
rollupVars := RollupVariablesAPI{
EthBlockNum: rollupVariables.EthBlockNum,
FeeAddToken: apitypes.NewBigIntStr(rollupVariables.FeeAddToken),
ForgeL1L2BatchTimeout: rollupVariables.ForgeL1L2BatchTimeout,
WithdrawalDelay: rollupVariables.WithdrawalDelay,
SafeMode: rollupVariables.SafeMode,
}
for i, bucket := range rollupVariables.Buckets {
rollupVars.Buckets[i] = BucketParamsAPI{
CeilUSD: apitypes.NewBigIntStr(bucket.CeilUSD),
Withdrawals: apitypes.NewBigIntStr(bucket.Withdrawals),
BlockWithdrawalRate: apitypes.NewBigIntStr(bucket.BlockWithdrawalRate),
MaxWithdrawals: apitypes.NewBigIntStr(bucket.MaxWithdrawals),
}
}
return &rollupVars
}
// AuctionVariablesAPI are the variables of the Auction Smart Contract
type AuctionVariablesAPI struct {
EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
@@ -404,3 +415,28 @@ type AuctionVariablesAPI struct {
// SlotDeadline Number of blocks at the end of a slot in which any coordinator can forge if the winner has not forged one before
SlotDeadline uint8 `json:"slotDeadline" meddler:"slot_deadline" validate:"required"`
}
// NewAuctionVariablesAPI creates a AuctionVariablesAPI from common.AuctionVariables
func NewAuctionVariablesAPI(auctionVariables *common.AuctionVariables) *AuctionVariablesAPI {
auctionVars := AuctionVariablesAPI{
EthBlockNum: auctionVariables.EthBlockNum,
DonationAddress: auctionVariables.DonationAddress,
BootCoordinator: auctionVariables.BootCoordinator,
BootCoordinatorURL: auctionVariables.BootCoordinatorURL,
DefaultSlotSetBidSlotNum: auctionVariables.DefaultSlotSetBidSlotNum,
ClosedAuctionSlots: auctionVariables.ClosedAuctionSlots,
OpenAuctionSlots: auctionVariables.OpenAuctionSlots,
Outbidding: auctionVariables.Outbidding,
SlotDeadline: auctionVariables.SlotDeadline,
}
for i, slot := range auctionVariables.DefaultSlotSetBid {
auctionVars.DefaultSlotSetBid[i] = apitypes.NewBigIntStr(slot)
}
for i, ratio := range auctionVariables.AllocationRatio {
auctionVars.AllocationRatio[i] = ratio
}
return &auctionVars
}


@@ -62,6 +62,10 @@ func (l2db *L2DB) AddTxAPI(tx *PoolL2TxWrite) error {
return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) < minFeeUSD (%v)",
feeUSD, l2db.minFeeUSD))
}
if feeUSD > l2db.maxFeeUSD {
return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) > maxFeeUSD (%v)",
feeUSD, l2db.maxFeeUSD))
}
// Prepare insert SQL query argument parameters
namesPart, err := meddler.Default.ColumnsQuoted(tx, false)
@@ -80,7 +84,7 @@ func (l2db *L2DB) AddTxAPI(tx *PoolL2TxWrite) error {
q := fmt.Sprintf(
`INSERT INTO tx_pool (%s)
SELECT %s
WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = $%v) < $%v;`,
WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = $%v AND NOT external_delete) < $%v;`,
namesPart, valuesPart,
len(values)+1, len(values)+2) //nolint:gomnd
values = append(values, common.PoolL2TxStatePending, l2db.maxTxs)
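After formatting, the statement only inserts while the count of pending, not externally deleted pool txs stays below maxTxs; once the pool is full the SELECT yields no rows and the INSERT becomes a no-op. A sketch of the rendered shape for a hypothetical two-column tx:

// Illustrative only: $1..$2 carry the tx columns, $3 binds
// common.PoolL2TxStatePending and $4 binds l2db.maxTxs.
const exampleInsert = `INSERT INTO tx_pool (tx_id, state)
SELECT $1, $2
WHERE (SELECT COUNT(*) FROM tx_pool
       WHERE state = $3 AND NOT external_delete) < $4;`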


@@ -27,6 +27,7 @@ type L2DB struct {
ttl time.Duration
maxTxs uint32 // limit of txs that are accepted in the pool
minFeeUSD float64
maxFeeUSD float64
apiConnCon *db.APIConnectionController
}
@@ -38,6 +39,7 @@ func NewL2DB(
safetyPeriod common.BatchNum,
maxTxs uint32,
minFeeUSD float64,
maxFeeUSD float64,
TTL time.Duration,
apiConnCon *db.APIConnectionController,
) *L2DB {
@@ -48,6 +50,7 @@ func NewL2DB(
ttl: TTL,
maxTxs: maxTxs,
minFeeUSD: minFeeUSD,
maxFeeUSD: maxFeeUSD,
apiConnCon: apiConnCon,
}
}
@@ -204,7 +207,7 @@ func (l2db *L2DB) GetPendingTxs() ([]common.PoolL2Tx, error) {
var txs []*common.PoolL2Tx
err := meddler.QueryAll(
l2db.dbRead, &txs,
selectPoolTxCommon+"WHERE state = $1",
selectPoolTxCommon+"WHERE state = $1 AND NOT external_delete;",
common.PoolL2TxStatePending,
)
return db.SlicePtrsToSlice(txs).([]common.PoolL2Tx), tracerr.Wrap(err)


@@ -37,9 +37,9 @@ func TestMain(m *testing.M) {
if err != nil {
panic(err)
}
l2DB = NewL2DB(db, db, 10, 1000, 0.0, 24*time.Hour, nil)
l2DB = NewL2DB(db, db, 10, 1000, 0.0, 1000.0, 24*time.Hour, nil)
apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
l2DBWithACC = NewL2DB(db, db, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
l2DBWithACC = NewL2DB(db, db, 10, 1000, 0.0, 1000.0, 24*time.Hour, apiConnCon)
test.WipeDB(l2DB.DB())
historyDB = historydb.NewHistoryDB(db, db, nil)
// Run tests
@@ -121,7 +121,7 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
}
tokens[token.TokenID] = readToken
// Set value to the tokens
err := historyDB.UpdateTokenValue(readToken.Symbol, *readToken.USD)
err := historyDB.UpdateTokenValue(readToken.EthAddr, *readToken.USD)
if err != nil {
return tracerr.Wrap(err)
}


@@ -6,7 +6,7 @@ import (
"time"
ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common"
"github.com/iden3/go-iden3-crypto/babyjub"
)


@@ -1,5 +1,11 @@
-- +migrate Up
-- NOTE: We use "DECIMAL(78,0)" to encode go *big.Int types. All the *big.Int
-- that we deal with represent a value in the SNARK field, which is an integer
-- of 256 bits. `log(2**256, 10) = 77.06`: that is, a 256 bit number can have
-- at most 78 digits, so we use this value to specify the precision in the
-- PostgreSQL DECIMAL guaranteeing that we will never lose precision.
-- History
CREATE TABLE block (
eth_block_num BIGINT PRIMARY KEY,
@@ -22,10 +28,10 @@ CREATE TABLE batch (
forger_addr BYTEA NOT NULL, -- fake foreign key for coordinator
fees_collected BYTEA NOT NULL,
fee_idxs_coordinator BYTEA NOT NULL,
state_root BYTEA NOT NULL,
state_root DECIMAL(78,0) NOT NULL,
num_accounts BIGINT NOT NULL,
last_idx BIGINT NOT NULL,
exit_root BYTEA NOT NULL,
exit_root DECIMAL(78,0) NOT NULL,
forge_l1_txs_num BIGINT,
slot_num BIGINT NOT NULL,
total_fees_usd NUMERIC
@@ -34,7 +40,7 @@ CREATE TABLE batch (
CREATE TABLE bid (
item_id SERIAL PRIMARY KEY,
slot_num BIGINT NOT NULL,
bid_value BYTEA NOT NULL,
bid_value DECIMAL(78,0) NOT NULL,
eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
bidder_addr BYTEA NOT NULL -- fake foreign key for coordinator
);
@@ -106,7 +112,7 @@ CREATE TABLE account_update (
batch_num BIGINT NOT NULL REFERENCES batch (batch_num) ON DELETE CASCADE,
idx BIGINT NOT NULL REFERENCES account (idx) ON DELETE CASCADE,
nonce BIGINT NOT NULL,
balance BYTEA NOT NULL
balance DECIMAL(78,0) NOT NULL
);
CREATE TABLE exit_tree (
@@ -114,7 +120,7 @@ CREATE TABLE exit_tree (
batch_num BIGINT REFERENCES batch (batch_num) ON DELETE CASCADE,
account_idx BIGINT REFERENCES account (idx) ON DELETE CASCADE,
merkle_proof BYTEA NOT NULL,
balance BYTEA NOT NULL,
balance DECIMAL(78,0) NOT NULL,
instant_withdrawn BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL,
delayed_withdraw_request BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL,
owner BYTEA,
@@ -164,7 +170,7 @@ CREATE TABLE tx (
to_idx BIGINT NOT NULL,
to_eth_addr BYTEA,
to_bjj BYTEA,
amount BYTEA NOT NULL,
amount DECIMAL(78,0) NOT NULL,
amount_success BOOLEAN NOT NULL DEFAULT true,
amount_f NUMERIC NOT NULL,
token_id INT NOT NULL REFERENCES token (token_id),
@@ -174,7 +180,7 @@ CREATE TABLE tx (
-- L1
to_forge_l1_txs_num BIGINT,
user_origin BOOLEAN,
deposit_amount BYTEA,
deposit_amount DECIMAL(78,0),
deposit_amount_success BOOLEAN NOT NULL DEFAULT true,
deposit_amount_f NUMERIC,
deposit_amount_usd NUMERIC,
@@ -544,7 +550,7 @@ FOR EACH ROW EXECUTE PROCEDURE forge_l1_user_txs();
CREATE TABLE rollup_vars (
eth_block_num BIGINT PRIMARY KEY REFERENCES block (eth_block_num) ON DELETE CASCADE,
fee_add_token BYTEA NOT NULL,
fee_add_token DECIMAL(78,0) NOT NULL,
forge_l1_timeout BIGINT NOT NULL,
withdrawal_delay BIGINT NOT NULL,
buckets BYTEA NOT NULL,
@@ -556,7 +562,7 @@ CREATE TABLE bucket_update (
eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
num_bucket BIGINT NOT NULL,
block_stamp BIGINT NOT NULL,
withdrawals BYTEA NOT NULL
withdrawals DECIMAL(78,0) NOT NULL
);
CREATE TABLE token_exchange (
@@ -572,7 +578,7 @@ CREATE TABLE escape_hatch_withdrawal (
who_addr BYTEA NOT NULL,
to_addr BYTEA NOT NULL,
token_addr BYTEA NOT NULL,
amount BYTEA NOT NULL
amount DECIMAL(78,0) NOT NULL
);
CREATE TABLE auction_vars (
@@ -610,7 +616,7 @@ CREATE TABLE tx_pool (
effective_to_eth_addr BYTEA,
effective_to_bjj BYTEA,
token_id INT NOT NULL REFERENCES token (token_id) ON DELETE CASCADE,
amount BYTEA NOT NULL,
amount DECIMAL(78,0) NOT NULL,
amount_f NUMERIC NOT NULL,
fee SMALLINT NOT NULL,
nonce BIGINT NOT NULL,
@@ -624,7 +630,7 @@ CREATE TABLE tx_pool (
rq_to_eth_addr BYTEA,
rq_to_bjj BYTEA,
rq_token_id INT,
rq_amount BYTEA,
rq_amount DECIMAL(78,0),
rq_fee SMALLINT,
rq_nonce BIGINT,
tx_type VARCHAR(40) NOT NULL,
@@ -661,12 +667,32 @@ CREATE TABLE account_creation_auth (
timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT timezone('utc', now())
);
CREATE TABLE node_info (
item_id SERIAL PRIMARY KEY,
state BYTEA, -- object returned by GET /state
config BYTEA, -- Node config
-- max_pool_txs BIGINT, -- L2DB config
-- min_fee NUMERIC, -- L2DB config
constants BYTEA -- info of the network that is constant
);
INSERT INTO node_info(item_id) VALUES (1); -- Always have a single row that we will update
CREATE VIEW account_state AS SELECT DISTINCT idx,
first_value(nonce) OVER w AS nonce,
first_value(balance) OVER w AS balance,
first_value(eth_block_num) OVER w AS eth_block_num,
first_value(batch_num) OVER w AS batch_num
FROM account_update
WINDOW w AS (PARTITION BY idx ORDER BY item_id DESC);
-- +migrate Down
-- triggers
DROP TRIGGER IF EXISTS trigger_token_usd_update ON token;
DROP TRIGGER IF EXISTS trigger_set_tx ON tx;
DROP TRIGGER IF EXISTS trigger_forge_l1_txs ON batch;
DROP TRIGGER IF EXISTS trigger_set_pool_tx ON tx_pool;
-- drop views IF EXISTS
DROP VIEW IF EXISTS account_state;
-- functions
DROP FUNCTION IF EXISTS hez_idx;
DROP FUNCTION IF EXISTS set_token_usd_update;
@@ -675,6 +701,7 @@ DROP FUNCTION IF EXISTS set_tx;
DROP FUNCTION IF EXISTS forge_l1_user_txs;
DROP FUNCTION IF EXISTS set_pool_tx;
-- drop tables IF EXISTS
DROP TABLE IF EXISTS node_info;
DROP TABLE IF EXISTS account_creation_auth;
DROP TABLE IF EXISTS tx_pool;
DROP TABLE IF EXISTS auction_vars;
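The account_state view added in this migration keeps, per account idx, the values of its most recent account_update row. A minimal sketch of reading it through meddler (AccountStateRow and getAccountState are hypothetical names for illustration):

// AccountStateRow mirrors the columns exposed by the account_state view.
type AccountStateRow struct {
	Idx         int64  `meddler:"idx"`
	Nonce       int64  `meddler:"nonce"`
	Balance     string `meddler:"balance"` // DECIMAL(78,0), scanned as a string
	EthBlockNum int64  `meddler:"eth_block_num"`
	BatchNum    int64  `meddler:"batch_num"`
}

// getAccountState returns the latest known state of a single account.
func getAccountState(d meddler.DB, idx int64) (*AccountStateRow, error) {
	row := &AccountStateRow{}
	err := meddler.QueryRow(d, row,
		"SELECT * FROM account_state WHERE idx = $1;", idx)
	return row, err
}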


@@ -13,6 +13,9 @@ import (
"github.com/hermeznetwork/hermez-node/log"
"github.com/hermeznetwork/tracerr"
"github.com/jmoiron/sqlx"
//nolint:errcheck // driver for postgres DB
_ "github.com/lib/pq"
migrate "github.com/rubenv/sql-migrate"
"github.com/russross/meddler"
"golang.org/x/sync/semaphore"
@@ -165,7 +168,11 @@ func (b BigIntMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
return tracerr.Wrap(fmt.Errorf("BigIntMeddler.PostRead: nil pointer"))
}
field := fieldPtr.(**big.Int)
*field = new(big.Int).SetBytes([]byte(*ptr))
var ok bool
*field, ok = new(big.Int).SetString(*ptr, 10)
if !ok {
return tracerr.Wrap(fmt.Errorf("big.Int.SetString failed on \"%v\"", *ptr))
}
return nil
}
@@ -173,7 +180,7 @@ func (b BigIntMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
func (b BigIntMeddler) PreWrite(fieldPtr interface{}) (saveValue interface{}, err error) {
field := fieldPtr.(*big.Int)
return field.Bytes(), nil
return field.String(), nil
}
// BigIntNullMeddler encodes or decodes a nullable big.Int field value to or from the DB
@@ -198,7 +205,12 @@ func (b BigIntNullMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
if ptr == nil {
return tracerr.Wrap(fmt.Errorf("BigIntMeddler.PostRead: nil pointer"))
}
*field = new(big.Int).SetBytes(ptr)
var ok bool
*field, ok = new(big.Int).SetString(string(ptr), 10)
if !ok {
return tracerr.Wrap(fmt.Errorf("big.Int.SetString failed on \"%v\"", string(ptr)))
}
return nil
}
@@ -208,7 +220,7 @@ func (b BigIntNullMeddler) PreWrite(fieldPtr interface{}) (saveValue interface{}
if field == nil {
return nil, nil
}
return field.Bytes(), nil
return field.String(), nil
}
// SliceToSlicePtrs converts any []Foo to []*Foo


@@ -1,9 +1,13 @@
package db
import (
"math/big"
"os"
"testing"
"github.com/russross/meddler"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type foo struct {
@@ -33,3 +37,42 @@ func TestSlicePtrsToSlice(t *testing.T) {
assert.Equal(t, *a[i], b[i])
}
}
func TestBigInt(t *testing.T) {
pass := os.Getenv("POSTGRES_PASS")
db, err := InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
defer func() {
_, err := db.Exec("DROP TABLE IF EXISTS test_big_int;")
require.NoError(t, err)
err = db.Close()
require.NoError(t, err)
}()
_, err = db.Exec("DROP TABLE IF EXISTS test_big_int;")
require.NoError(t, err)
_, err = db.Exec(`CREATE TABLE test_big_int (
item_id SERIAL PRIMARY KEY,
value1 DECIMAL(78, 0) NOT NULL,
value2 DECIMAL(78, 0),
value3 DECIMAL(78, 0)
);`)
require.NoError(t, err)
type Entry struct {
ItemID int `meddler:"item_id"`
Value1 *big.Int `meddler:"value1,bigint"`
Value2 *big.Int `meddler:"value2,bigintnull"`
Value3 *big.Int `meddler:"value3,bigintnull"`
}
entry := Entry{ItemID: 1, Value1: big.NewInt(1234567890), Value2: big.NewInt(9876543210), Value3: nil}
err = meddler.Insert(db, "test_big_int", &entry)
require.NoError(t, err)
var dbEntry Entry
err = meddler.QueryRow(db, &dbEntry, "SELECT * FROM test_big_int WHERE item_id = 1;")
require.NoError(t, err)
assert.Equal(t, entry, dbEntry)
}


@@ -260,7 +260,7 @@ type AuctionInterface interface {
AuctionConstants() (*common.AuctionConstants, error)
AuctionEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*AuctionEvents, error)
AuctionEventInit() (*AuctionEventInitialize, int64, error)
AuctionEventInit(genesisBlockNum int64) (*AuctionEventInitialize, int64, error)
}
//
@@ -809,11 +809,13 @@ var (
)
// AuctionEventInit returns the initialize event with its corresponding block number
func (c *AuctionClient) AuctionEventInit() (*AuctionEventInitialize, int64, error) {
func (c *AuctionClient) AuctionEventInit(genesisBlockNum int64) (*AuctionEventInitialize, int64, error) {
query := ethereum.FilterQuery{
Addresses: []ethCommon.Address{
c.address,
},
FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
ToBlock: big.NewInt(genesisBlockNum),
Topics: [][]ethCommon.Hash{{logAuctionInitialize}},
}
logs, err := c.client.client.FilterLogs(context.Background(), query)


@@ -28,7 +28,7 @@ func TestAuctionGetCurrentSlotNumber(t *testing.T) {
}
func TestAuctionEventInit(t *testing.T) {
auctionInit, blockNum, err := auctionClientTest.AuctionEventInit()
auctionInit, blockNum, err := auctionClientTest.AuctionEventInit(genesisBlock)
require.NoError(t, err)
assert.Equal(t, int64(18), blockNum)
assert.Equal(t, donationAddressConst, auctionInit.DonationAddress)


@@ -12,6 +12,17 @@ import (
var errTODO = fmt.Errorf("TODO: Not implemented yet")
const (
blocksPerDay = (3600 * 24) / 15
)
func max(x, y int64) int64 {
if x > y {
return x
}
return y
}
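With the 15-second block time assumed above, blocksPerDay = (3600*24)/15 = 5760, so each init-event query covers roughly one day of blocks ending at the contract's genesis block. A sketch of the resulting range:

// genesisBlockNum is the deployment block of the contract (assumed known).
from := big.NewInt(max(0, genesisBlockNum-blocksPerDay)) // clamped at block 0
to := big.NewInt(genesisBlockNum)
// The FilterQuery built in each *EventInit then scans [from, to], ~5760
// blocks, instead of the whole chain.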
// ClientInterface is the eth Client interface used by hermez-node modules to
// interact with Ethereum Blockchain and smart contracts.
type ClientInterface interface {


@@ -273,7 +273,7 @@ type RollupInterface interface {
RollupConstants() (*common.RollupConstants, error)
RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
RollupEventInit() (*RollupEventInitialize, int64, error)
RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error)
}
//
@@ -749,11 +749,13 @@ var (
)
// RollupEventInit returns the initialize event with its corresponding block number
func (c *RollupClient) RollupEventInit() (*RollupEventInitialize, int64, error) {
func (c *RollupClient) RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error) {
query := ethereum.FilterQuery{
Addresses: []ethCommon.Address{
c.address,
},
FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
ToBlock: big.NewInt(genesisBlockNum),
Topics: [][]ethCommon.Hash{{logHermezInitialize}},
}
logs, err := c.client.client.FilterLogs(context.Background(), query)


@@ -56,7 +56,7 @@ func genKeysBjj(i int64) *keys {
}
func TestRollupEventInit(t *testing.T) {
rollupInit, blockNum, err := rollupClient.RollupEventInit()
rollupInit, blockNum, err := rollupClient.RollupEventInit(genesisBlock)
require.NoError(t, err)
assert.Equal(t, int64(19), blockNum)
assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)


@@ -137,7 +137,7 @@ type WDelayerInterface interface {
WDelayerEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*WDelayerEvents, error)
WDelayerConstants() (*common.WDelayerConstants, error)
WDelayerEventInit() (*WDelayerEventInitialize, int64, error)
WDelayerEventInit(genesisBlockNum int64) (*WDelayerEventInitialize, int64, error)
}
//
@@ -415,11 +415,13 @@ var (
)
// WDelayerEventInit returns the initialize event with its corresponding block number
func (c *WDelayerClient) WDelayerEventInit() (*WDelayerEventInitialize, int64, error) {
func (c *WDelayerClient) WDelayerEventInit(genesisBlockNum int64) (*WDelayerEventInitialize, int64, error) {
query := ethereum.FilterQuery{
Addresses: []ethCommon.Address{
c.address,
},
FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
ToBlock: big.NewInt(genesisBlockNum),
Topics: [][]ethCommon.Hash{{logWDelayerInitialize}},
}
logs, err := c.client.client.FilterLogs(context.Background(), query)


@@ -18,7 +18,7 @@ var maxEmergencyModeTime = time.Hour * 24 * 7 * 26
var maxWithdrawalDelay = time.Hour * 24 * 7 * 2
func TestWDelayerInit(t *testing.T) {
wDelayerInit, blockNum, err := wdelayerClientTest.WDelayerEventInit()
wDelayerInit, blockNum, err := wdelayerClientTest.WDelayerEventInit(genesisBlock)
require.NoError(t, err)
assert.Equal(t, int64(16), blockNum)
assert.Equal(t, uint64(initWithdrawalDelay), wDelayerInit.InitialWithdrawalDelay)


@@ -15,6 +15,7 @@ import (
"github.com/gin-contrib/cors"
"github.com/gin-gonic/gin"
"github.com/hermeznetwork/hermez-node/api"
"github.com/hermeznetwork/hermez-node/api/stateapiupdater"
"github.com/hermeznetwork/hermez-node/batchbuilder"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/config"
@@ -54,6 +55,7 @@ const (
// Node is the Hermez Node
type Node struct {
nodeAPI *NodeAPI
stateAPIUpdater *stateapiupdater.Updater
debugAPI *debugapi.DebugAPI
priceUpdater *priceupdater.PriceUpdater
// Coordinator
@@ -67,6 +69,7 @@ type Node struct {
mode Mode
sqlConnRead *sqlx.DB
sqlConnWrite *sqlx.DB
historyDB *historydb.HistoryDB
ctx context.Context
wg sync.WaitGroup
cancel context.CancelFunc
@@ -227,6 +230,7 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
cfg.Coordinator.L2DB.SafetyPeriod,
cfg.Coordinator.L2DB.MaxTxs,
cfg.Coordinator.L2DB.MinFeeUSD,
cfg.Coordinator.L2DB.MaxFeeUSD,
cfg.Coordinator.L2DB.TTL.Duration,
apiConnCon,
)
@@ -241,12 +245,36 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
}
initSCVars := sync.SCVars()
scConsts := synchronizer.SCConsts{
scConsts := common.SCConsts{
Rollup: *sync.RollupConstants(),
Auction: *sync.AuctionConstants(),
WDelayer: *sync.WDelayerConstants(),
}
hdbNodeCfg := historydb.NodeConfig{
MaxPoolTxs: cfg.Coordinator.L2DB.MaxTxs,
MinFeeUSD: cfg.Coordinator.L2DB.MinFeeUSD,
MaxFeeUSD: cfg.Coordinator.L2DB.MaxFeeUSD,
ForgeDelay: cfg.Coordinator.ForgeDelay.Duration.Seconds(),
}
if err := historyDB.SetNodeConfig(&hdbNodeCfg); err != nil {
return nil, tracerr.Wrap(err)
}
hdbConsts := historydb.Constants{
SCConsts: common.SCConsts{
Rollup: scConsts.Rollup,
Auction: scConsts.Auction,
WDelayer: scConsts.WDelayer,
},
ChainID: chainIDU16,
HermezAddress: cfg.SmartContracts.Rollup,
}
if err := historyDB.SetConstants(&hdbConsts); err != nil {
return nil, tracerr.Wrap(err)
}
stateAPIUpdater := stateapiupdater.NewUpdater(historyDB, &hdbNodeCfg, initSCVars, &hdbConsts)
var coord *coordinator.Coordinator
if mode == ModeCoordinator {
// Unlock FeeAccount EthAddr in the keystore to generate the
@@ -339,6 +367,9 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
L1BatchTimeoutPerc: cfg.Coordinator.L1BatchTimeoutPerc,
ForgeRetryInterval: cfg.Coordinator.ForgeRetryInterval.Duration,
ForgeDelay: cfg.Coordinator.ForgeDelay.Duration,
MustForgeAtSlotDeadline: cfg.Coordinator.MustForgeAtSlotDeadline,
IgnoreSlotCommitment: cfg.Coordinator.IgnoreSlotCommitment,
ForgeOncePerSlotIfTxs: cfg.Coordinator.ForgeOncePerSlotIfTxs,
ForgeNoTxsDelay: cfg.Coordinator.ForgeNoTxsDelay.Duration,
SyncRetryInterval: cfg.Coordinator.SyncRetryInterval.Duration,
PurgeByExtDelInterval: cfg.Coordinator.PurgeByExtDelInterval.Duration,
@@ -367,11 +398,7 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
serverProofs,
client,
&scConsts,
&synchronizer.SCVariables{
Rollup: *initSCVars.Rollup,
Auction: *initSCVars.Auction,
WDelayer: *initSCVars.WDelayer,
},
initSCVars,
)
if err != nil {
return nil, tracerr.Wrap(err)
@@ -403,35 +430,29 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
coord, cfg.API.Explorer,
server,
historyDB,
stateDB,
l2DB,
&api.Config{
RollupConstants: scConsts.Rollup,
AuctionConstants: scConsts.Auction,
WDelayerConstants: scConsts.WDelayer,
ChainID: chainIDU16,
HermezAddress: cfg.SmartContracts.Rollup,
},
cfg.Coordinator.ForgeDelay.Duration,
)
if err != nil {
return nil, tracerr.Wrap(err)
}
nodeAPI.api.SetRollupVariables(*initSCVars.Rollup)
nodeAPI.api.SetAuctionVariables(*initSCVars.Auction)
nodeAPI.api.SetWDelayerVariables(*initSCVars.WDelayer)
}
var debugAPI *debugapi.DebugAPI
if cfg.Debug.APIAddress != "" {
debugAPI = debugapi.NewDebugAPI(cfg.Debug.APIAddress, stateDB, sync)
}
priceUpdater, err := priceupdater.NewPriceUpdater(cfg.PriceUpdater.URL,
priceupdater.APIType(cfg.PriceUpdater.Type), historyDB)
priceUpdater, err := priceupdater.NewPriceUpdater(
cfg.PriceUpdater.DefaultUpdateMethod,
cfg.PriceUpdater.TokensConfig,
historyDB,
cfg.PriceUpdater.URLBitfinexV2,
cfg.PriceUpdater.URLCoinGeckoV3,
)
if err != nil {
return nil, tracerr.Wrap(err)
}
ctx, cancel := context.WithCancel(context.Background())
return &Node{
stateAPIUpdater: stateAPIUpdater,
nodeAPI: nodeAPI,
debugAPI: debugAPI,
priceUpdater: priceUpdater,
@@ -441,11 +462,129 @@ func NewNode(mode Mode, cfg *config.Node) (*Node, error) {
mode: mode,
sqlConnRead: dbRead,
sqlConnWrite: dbWrite,
historyDB: historyDB,
ctx: ctx,
cancel: cancel,
}, nil
}
// APIServer is a server that only runs the API
type APIServer struct {
nodeAPI *NodeAPI
mode Mode
ctx context.Context
wg sync.WaitGroup
cancel context.CancelFunc
}
// NewAPIServer creates a new APIServer
func NewAPIServer(mode Mode, cfg *config.APIServer) (*APIServer, error) {
meddler.Debug = cfg.Debug.MeddlerLogs
// Establish DB connection
dbWrite, err := dbUtils.InitSQLDB(
cfg.PostgreSQL.PortWrite,
cfg.PostgreSQL.HostWrite,
cfg.PostgreSQL.UserWrite,
cfg.PostgreSQL.PasswordWrite,
cfg.PostgreSQL.NameWrite,
)
if err != nil {
return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
}
var dbRead *sqlx.DB
if cfg.PostgreSQL.HostRead == "" {
dbRead = dbWrite
} else if cfg.PostgreSQL.HostRead == cfg.PostgreSQL.HostWrite {
return nil, tracerr.Wrap(fmt.Errorf(
"PostgreSQL.HostRead and PostgreSQL.HostWrite must be different",
))
} else {
dbRead, err = dbUtils.InitSQLDB(
cfg.PostgreSQL.PortRead,
cfg.PostgreSQL.HostRead,
cfg.PostgreSQL.UserRead,
cfg.PostgreSQL.PasswordRead,
cfg.PostgreSQL.NameRead,
)
if err != nil {
return nil, tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
}
}
apiConnCon := dbUtils.NewAPIConnectionController(
cfg.API.MaxSQLConnections,
cfg.API.SQLConnectionTimeout.Duration,
)
historyDB := historydb.NewHistoryDB(dbRead, dbWrite, apiConnCon)
var l2DB *l2db.L2DB
if mode == ModeCoordinator {
l2DB = l2db.NewL2DB(
dbRead, dbWrite,
0,
cfg.Coordinator.L2DB.MaxTxs,
cfg.Coordinator.L2DB.MinFeeUSD,
cfg.Coordinator.L2DB.MaxFeeUSD,
0,
apiConnCon,
)
}
if cfg.Debug.GinDebugMode {
gin.SetMode(gin.DebugMode)
} else {
gin.SetMode(gin.ReleaseMode)
}
server := gin.Default()
coord := false
if mode == ModeCoordinator {
coord = cfg.Coordinator.API.Coordinator
}
nodeAPI, err := NewNodeAPI(
cfg.API.Address,
coord, cfg.API.Explorer,
server,
historyDB,
l2DB,
)
if err != nil {
return nil, tracerr.Wrap(err)
}
ctx, cancel := context.WithCancel(context.Background())
return &APIServer{
nodeAPI: nodeAPI,
mode: mode,
ctx: ctx,
cancel: cancel,
}, nil
}
// Start the APIServer
func (s *APIServer) Start() {
log.Infow("Starting api server...", "mode", s.mode)
log.Info("Starting NodeAPI...")
s.wg.Add(1)
go func() {
defer func() {
log.Info("NodeAPI routine stopped")
s.wg.Done()
}()
if err := s.nodeAPI.Run(s.ctx); err != nil {
if s.ctx.Err() != nil {
return
}
log.Fatalw("NodeAPI.Run", "err", err)
}
}()
}
// Stop the APIServer
func (s *APIServer) Stop() {
log.Infow("Stopping NodeAPI...")
s.cancel()
s.wg.Wait()
}
// NodeAPI holds the node http API
type NodeAPI struct { //nolint:golint
api *api.API
@@ -465,10 +604,7 @@ func NewNodeAPI(
coordinatorEndpoints, explorerEndpoints bool,
server *gin.Engine,
hdb *historydb.HistoryDB,
sdb *statedb.StateDB,
l2db *l2db.L2DB,
config *api.Config,
forgeDelay time.Duration,
) (*NodeAPI, error) {
engine := gin.Default()
engine.NoRoute(handleNoRoute)
@@ -478,10 +614,6 @@ func NewNodeAPI(
engine,
hdb,
l2db,
config,
&api.NodeConfig{
ForgeDelay: forgeDelay.Seconds(),
},
)
if err != nil {
return nil, tracerr.Wrap(err)
@@ -527,58 +659,50 @@ func (a *NodeAPI) Run(ctx context.Context) error {
}
func (n *Node) handleNewBlock(ctx context.Context, stats *synchronizer.Stats,
vars synchronizer.SCVariablesPtr, batches []common.BatchData) {
vars *common.SCVariablesPtr, batches []common.BatchData) error {
if n.mode == ModeCoordinator {
n.coord.SendMsg(ctx, coordinator.MsgSyncBlock{
Stats: *stats,
Vars: vars,
Vars: *vars,
Batches: batches,
})
}
if n.nodeAPI != nil {
if vars.Rollup != nil {
n.nodeAPI.api.SetRollupVariables(*vars.Rollup)
}
if vars.Auction != nil {
n.nodeAPI.api.SetAuctionVariables(*vars.Auction)
}
if vars.WDelayer != nil {
n.nodeAPI.api.SetWDelayerVariables(*vars.WDelayer)
}
n.stateAPIUpdater.SetSCVars(vars)
if stats.Synced() {
if err := n.nodeAPI.api.UpdateNetworkInfo(
if err := n.stateAPIUpdater.UpdateNetworkInfo(
stats.Eth.LastBlock, stats.Sync.LastBlock,
common.BatchNum(stats.Eth.LastBatchNum),
stats.Sync.Auction.CurrentSlot.SlotNum,
); err != nil {
log.Errorw("API.UpdateNetworkInfo", "err", err)
log.Errorw("ApiStateUpdater.UpdateNetworkInfo", "err", err)
}
} else {
n.nodeAPI.api.UpdateNetworkInfoBlock(
n.stateAPIUpdater.UpdateNetworkInfoBlock(
stats.Eth.LastBlock, stats.Sync.LastBlock,
)
}
if err := n.stateAPIUpdater.Store(); err != nil {
return tracerr.Wrap(err)
}
return nil
}
func (n *Node) handleReorg(ctx context.Context, stats *synchronizer.Stats,
vars synchronizer.SCVariablesPtr) {
vars *common.SCVariables) error {
if n.mode == ModeCoordinator {
n.coord.SendMsg(ctx, coordinator.MsgSyncReorg{
Stats: *stats,
Vars: vars,
Vars: *vars.AsPtr(),
})
}
if n.nodeAPI != nil {
vars := n.sync.SCVars()
n.nodeAPI.api.SetRollupVariables(*vars.Rollup)
n.nodeAPI.api.SetAuctionVariables(*vars.Auction)
n.nodeAPI.api.SetWDelayerVariables(*vars.WDelayer)
n.nodeAPI.api.UpdateNetworkInfoBlock(
n.stateAPIUpdater.SetSCVars(vars.AsPtr())
n.stateAPIUpdater.UpdateNetworkInfoBlock(
stats.Eth.LastBlock, stats.Sync.LastBlock,
)
if err := n.stateAPIUpdater.Store(); err != nil {
return tracerr.Wrap(err)
}
return nil
}
// TODO(Edu): Consider keeping the `lastBlock` inside synchronizer so that we
@@ -594,16 +718,20 @@ func (n *Node) syncLoopFn(ctx context.Context, lastBlock *common.Block) (*common
// case: reorg
log.Infow("Synchronizer.Sync reorg", "discarded", *discarded)
vars := n.sync.SCVars()
n.handleReorg(ctx, stats, vars)
if err := n.handleReorg(ctx, stats, vars); err != nil {
return nil, time.Duration(0), tracerr.Wrap(err)
}
return nil, time.Duration(0), nil
} else if blockData != nil {
// case: new block
vars := synchronizer.SCVariablesPtr{
vars := common.SCVariablesPtr{
Rollup: blockData.Rollup.Vars,
Auction: blockData.Auction.Vars,
WDelayer: blockData.WDelayer.Vars,
}
n.handleNewBlock(ctx, stats, vars, blockData.Rollup.Batches)
if err := n.handleNewBlock(ctx, stats, &vars, blockData.Rollup.Batches); err != nil {
return nil, time.Duration(0), tracerr.Wrap(err)
}
return &blockData.Block, time.Duration(0), nil
} else {
// case: no block
@@ -622,7 +750,9 @@ func (n *Node) StartSynchronizer() {
// the last synced one) is synchronized
stats := n.sync.Stats()
vars := n.sync.SCVars()
n.handleNewBlock(n.ctx, stats, vars, []common.BatchData{})
if err := n.handleNewBlock(n.ctx, stats, vars.AsPtr(), []common.BatchData{}); err != nil {
log.Fatalw("Node.handleNewBlock", "err", err)
}
n.wg.Add(1)
go func() {
@@ -709,18 +839,25 @@ func (n *Node) StartNodeAPI() {
n.wg.Add(1)
go func() {
// Do an initial update on startup
if err := n.nodeAPI.api.UpdateMetrics(); err != nil {
log.Errorw("API.UpdateMetrics", "err", err)
if err := n.stateAPIUpdater.UpdateMetrics(); err != nil {
log.Errorw("ApiStateUpdater.UpdateMetrics", "err", err)
}
if err := n.stateAPIUpdater.Store(); err != nil {
log.Errorw("ApiStateUpdater.Store", "err", err)
}
for {
select {
case <-n.ctx.Done():
log.Info("API.UpdateMetrics loop done")
log.Info("ApiStateUpdater.UpdateMetrics loop done")
n.wg.Done()
return
case <-time.After(n.cfg.API.UpdateMetricsInterval.Duration):
if err := n.nodeAPI.api.UpdateMetrics(); err != nil {
log.Errorw("API.UpdateMetrics", "err", err)
if err := n.stateAPIUpdater.UpdateMetrics(); err != nil {
log.Errorw("ApiStateUpdater.UpdateMetrics", "err", err)
continue
}
if err := n.stateAPIUpdater.Store(); err != nil {
log.Errorw("ApiStateUpdater.Store", "err", err)
}
}
}
@@ -729,18 +866,25 @@ func (n *Node) StartNodeAPI() {
n.wg.Add(1)
go func() {
// Do an initial update on startup
if err := n.nodeAPI.api.UpdateRecommendedFee(); err != nil {
log.Errorw("API.UpdateRecommendedFee", "err", err)
if err := n.stateAPIUpdater.UpdateRecommendedFee(); err != nil {
log.Errorw("ApiStateUpdater.UpdateRecommendedFee", "err", err)
}
if err := n.stateAPIUpdater.Store(); err != nil {
log.Errorw("ApiStateUpdater.Store", "err", err)
}
for {
select {
case <-n.ctx.Done():
log.Info("API.UpdateRecommendedFee loop done")
log.Info("ApiStateUpdaterAPI.UpdateRecommendedFee loop done")
n.wg.Done()
return
case <-time.After(n.cfg.API.UpdateRecommendedFeeInterval.Duration):
if err := n.nodeAPI.api.UpdateRecommendedFee(); err != nil {
log.Errorw("API.UpdateRecommendedFee", "err", err)
if err := n.stateAPIUpdater.UpdateRecommendedFee(); err != nil {
log.Errorw("ApiStateUpdaterAPI.UpdateRecommendedFee", "err", err)
continue
}
if err := n.stateAPIUpdater.Store(); err != nil {
log.Errorw("ApiStateUpdater.Store", "err", err)
}
}
}


@@ -20,57 +20,107 @@ const (
defaultIdleConnTimeout = 2 * time.Second
)
// APIType defines the token exchange API
type APIType string
// UpdateMethodType defines the token price update mechanism
type UpdateMethodType string
const (
// APITypeBitFinexV2 is the http API used by bitfinex V2
APITypeBitFinexV2 APIType = "bitfinexV2"
// APITypeCoingeckoV3 is the http API used by coingecko V3
APITypeCoingeckoV3 APIType = "coingeckoV3"
// UpdateMethodTypeBitFinexV2 is the http API used by bitfinex V2
UpdateMethodTypeBitFinexV2 UpdateMethodType = "bitfinexV2"
// UpdateMethodTypeCoingeckoV3 is the http API used by coingecko V3
UpdateMethodTypeCoingeckoV3 UpdateMethodType = "coingeckoV3"
// UpdateMethodTypeStatic is the value given by the configuration
UpdateMethodTypeStatic UpdateMethodType = "static"
// UpdateMethodTypeIgnore indicates that the value should not be updated; to
// set the value to 0, use UpdateMethodTypeStatic instead
UpdateMethodTypeIgnore UpdateMethodType = "ignore"
)
func (t *APIType) valid() bool {
func (t *UpdateMethodType) valid() bool {
switch *t {
case APITypeBitFinexV2:
case UpdateMethodTypeBitFinexV2:
return true
case APITypeCoingeckoV3:
case UpdateMethodTypeCoingeckoV3:
return true
case UpdateMethodTypeStatic:
return true
case UpdateMethodTypeIgnore:
return true
default:
return false
}
}
// TokenConfig specifies how a single token gets its price updated
type TokenConfig struct {
UpdateMethod UpdateMethodType
StaticValue float64 // required by UpdateMethodTypeStatic
Symbol string
Addr ethCommon.Address
}
func (t *TokenConfig) valid() bool {
if (t.Addr == common.EmptyAddr && t.Symbol != "ETH") ||
(t.Symbol == "" && t.UpdateMethod == UpdateMethodTypeBitFinexV2) {
return false
}
return t.UpdateMethod.valid()
}
// PriceUpdater definition
type PriceUpdater struct {
db *historydb.HistoryDB
apiURL string
apiType APIType
tokens []historydb.TokenSymbolAndAddr
defaultUpdateMethod UpdateMethodType
tokensList []historydb.TokenSymbolAndAddr
tokensConfig map[ethCommon.Address]TokenConfig
clientCoingeckoV3 *sling.Sling
clientBitfinexV2 *sling.Sling
}
// NewPriceUpdater is the constructor for the updater
func NewPriceUpdater(apiURL string, apiType APIType, db *historydb.HistoryDB) (*PriceUpdater,
error) {
if !apiType.valid() {
return nil, tracerr.Wrap(fmt.Errorf("Invalid apiType: %v", apiType))
func NewPriceUpdater(
defaultUpdateMethodType UpdateMethodType,
tokensConfig []TokenConfig,
db *historydb.HistoryDB,
bitfinexV2URL, coingeckoV3URL string,
) (*PriceUpdater, error) {
// Validate params
if !defaultUpdateMethodType.valid() || defaultUpdateMethodType == UpdateMethodTypeStatic {
return nil, tracerr.Wrap(
fmt.Errorf("Invalid defaultUpdateMethodType: %v", defaultUpdateMethodType),
)
}
tokensConfigMap := make(map[ethCommon.Address]TokenConfig)
for _, t := range tokensConfig {
if !t.valid() {
return nil, tracerr.Wrap(fmt.Errorf("Invalid tokensConfig, wrong entry: %+v", t))
}
tokensConfigMap[t.Addr] = t
}
// Init
tr := &http.Transport{
MaxIdleConns: defaultMaxIdleConns,
IdleConnTimeout: defaultIdleConnTimeout,
DisableCompression: true,
}
httpClient := &http.Client{Transport: tr}
return &PriceUpdater{
db: db,
apiURL: apiURL,
apiType: apiType,
tokens: []historydb.TokenSymbolAndAddr{},
defaultUpdateMethod: defaultUpdateMethodType,
tokensList: []historydb.TokenSymbolAndAddr{},
tokensConfig: tokensConfigMap,
clientCoingeckoV3: sling.New().Base(coingeckoV3URL).Client(httpClient),
clientBitfinexV2: sling.New().Base(bitfinexV2URL).Client(httpClient),
}, nil
}
func getTokenPriceBitfinex(ctx context.Context, client *sling.Sling,
tokenSymbol string) (float64, error) {
func (p *PriceUpdater) getTokenPriceBitfinex(ctx context.Context, tokenSymbol string) (float64, error) {
state := [10]float64{}
req, err := client.New().Get("ticker/t" + tokenSymbol + "USD").Request()
url := "ticker/t" + tokenSymbol + "USD"
req, err := p.clientBitfinexV2.New().Get(url).Request()
if err != nil {
return 0, tracerr.Wrap(err)
}
res, err := client.Do(req.WithContext(ctx), &state, nil)
res, err := p.clientBitfinexV2.Do(req.WithContext(ctx), &state, nil)
if err != nil {
return 0, tracerr.Wrap(err)
}
@@ -80,8 +130,7 @@ func getTokenPriceBitfinex(ctx context.Context, client *sling.Sling,
return state[6], nil
}
func getTokenPriceCoingecko(ctx context.Context, client *sling.Sling,
tokenAddr ethCommon.Address) (float64, error) {
func (p *PriceUpdater) getTokenPriceCoingecko(ctx context.Context, tokenAddr ethCommon.Address) (float64, error) {
responseObject := make(map[string]map[string]float64)
var url string
var id string
@@ -93,11 +142,11 @@ func getTokenPriceCoingecko(ctx context.Context, client *sling.Sling,
url = "simple/token_price/ethereum?contract_addresses=" +
id + "&vs_currencies=usd"
}
req, err := client.New().Get(url).Request()
req, err := p.clientCoingeckoV3.New().Get(url).Request()
if err != nil {
return 0, tracerr.Wrap(err)
}
res, err := client.Do(req.WithContext(ctx), &responseObject, nil)
res, err := p.clientCoingeckoV3.Do(req.WithContext(ctx), &responseObject, nil)
if err != nil {
return 0, tracerr.Wrap(err)
}
@@ -114,43 +163,50 @@ func getTokenPriceCoingecko(ctx context.Context, client *sling.Sling,
// UpdatePrices is triggered by the Coordinator, and internally will update the
// token prices in the db
func (p *PriceUpdater) UpdatePrices(ctx context.Context) {
tr := &http.Transport{
MaxIdleConns: defaultMaxIdleConns,
IdleConnTimeout: defaultIdleConnTimeout,
DisableCompression: true,
}
httpClient := &http.Client{Transport: tr}
client := sling.New().Base(p.apiURL).Client(httpClient)
for _, token := range p.tokens {
for _, token := range p.tokensConfig {
var tokenPrice float64
var err error
switch p.apiType {
case APITypeBitFinexV2:
tokenPrice, err = getTokenPriceBitfinex(ctx, client, token.Symbol)
case APITypeCoingeckoV3:
tokenPrice, err = getTokenPriceCoingecko(ctx, client, token.Addr)
switch token.UpdateMethod {
case UpdateMethodTypeBitFinexV2:
tokenPrice, err = p.getTokenPriceBitfinex(ctx, token.Symbol)
case UpdateMethodTypeCoingeckoV3:
tokenPrice, err = p.getTokenPriceCoingecko(ctx, token.Addr)
case UpdateMethodTypeStatic:
tokenPrice = token.StaticValue
case UpdateMethodTypeIgnore:
continue
}
if ctx.Err() != nil {
return
}
if err != nil {
log.Warnw("token price not updated (get error)",
"err", err, "token", token.Symbol, "apiType", p.apiType)
"err", err, "token", token.Symbol, "updateMethod", token.UpdateMethod)
}
if err = p.db.UpdateTokenValue(token.Symbol, tokenPrice); err != nil {
if err = p.db.UpdateTokenValue(token.Addr, tokenPrice); err != nil {
log.Errorw("token price not updated (db error)",
"err", err, "token", token.Symbol, "apiType", p.apiType)
"err", err, "token", token.Symbol, "updateMethod", token.UpdateMethod)
}
}
}
// UpdateTokenList gets the registered token symbols and addresses from HistoryDB
func (p *PriceUpdater) UpdateTokenList() error {
tokens, err := p.db.GetTokenSymbolsAndAddrs()
dbTokens, err := p.db.GetTokenSymbolsAndAddrs()
if err != nil {
return tracerr.Wrap(err)
}
p.tokens = tokens
// For each token from the DB
for _, dbToken := range dbTokens {
// If the token doesn't exist in the config list,
// add it with the default update method
if _, ok := p.tokensConfig[dbToken.Addr]; !ok {
p.tokensConfig[dbToken.Addr] = TokenConfig{
UpdateMethod: p.defaultUpdateMethod,
Symbol: dbToken.Symbol,
Addr: dbToken.Addr,
}
}
}
return nil
}
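A sketch of the intended wiring, assuming a caller that refreshes the token list and prices on a ticker (updateInterval is a hypothetical config value):

pu, err := priceupdater.NewPriceUpdater(
	priceupdater.UpdateMethodTypeCoingeckoV3, // default for tokens not in tokensConfig
	tokensConfig, historyDB,
	"https://api-pub.bitfinex.com/v2/",
	"https://api.coingecko.com/api/v3/",
)
if err != nil {
	return tracerr.Wrap(err)
}
ticker := time.NewTicker(updateInterval)
defer ticker.Stop()
for range ticker.C {
	// Pick up tokens registered since the last iteration
	if err := pu.UpdateTokenList(); err != nil {
		log.Errorw("PriceUpdater.UpdateTokenList", "err", err)
		continue
	}
	pu.UpdatePrices(ctx)
}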


@@ -16,7 +16,9 @@ import (
var historyDB *historydb.HistoryDB
func TestMain(m *testing.M) {
const usdtAddr = "0xdac17f958d2ee523a2206206994597c13d831ec7"
func TestPriceUpdaterBitfinex(t *testing.T) {
// Init DB
pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
@@ -29,60 +31,113 @@ func TestMain(m *testing.M) {
// Populate DB
// Gen blocks and add them to DB
blocks := test.GenBlocks(1, 2)
err = historyDB.AddBlocks(blocks)
if err != nil {
panic(err)
}
require.NoError(t, historyDB.AddBlocks(blocks))
// Gen tokens and add them to DB
tokens := []common.Token{}
tokens = append(tokens, common.Token{
tokens := []common.Token{
{
TokenID: 1,
EthBlockNum: blocks[0].Num,
EthAddr: ethCommon.HexToAddress("0x6b175474e89094c44da98b954eedeac495271d0f"),
EthAddr: ethCommon.HexToAddress("0x1"),
Name: "DAI",
Symbol: "DAI",
Decimals: 18,
})
err = historyDB.AddTokens(tokens)
if err != nil {
panic(err)
}, // Used to test get by SC addr
{
TokenID: 2,
EthBlockNum: blocks[0].Num,
EthAddr: ethCommon.HexToAddress(usdtAddr),
Name: "Tether",
Symbol: "USDT",
Decimals: 18,
}, // Used to test get by token symbol
{
TokenID: 3,
EthBlockNum: blocks[0].Num,
EthAddr: ethCommon.HexToAddress("0x2"),
Name: "FOO",
Symbol: "FOO",
Decimals: 18,
}, // Used to test ignore
{
TokenID: 4,
EthBlockNum: blocks[0].Num,
EthAddr: ethCommon.HexToAddress("0x3"),
Name: "BAR",
Symbol: "BAR",
Decimals: 18,
}, // Used to test static
{
TokenID: 5,
EthBlockNum: blocks[0].Num,
EthAddr: ethCommon.HexToAddress("0x1f9840a85d5af5bf1d1762f925bdaddc4201f984"),
Name: "Uniswap",
Symbol: "UNI",
Decimals: 18,
}, // Used to test default
}
require.NoError(t, historyDB.AddTokens(tokens)) // ETH token exists in DB by default
// Update token price used to test ignore
ignoreValue := 44.44
require.NoError(t, historyDB.UpdateTokenValue(tokens[2].EthAddr, ignoreValue))
// Prepare token config
staticValue := 0.12345
tc := []TokenConfig{
// ETH and UNI tokens use default method
{ // DAI uses SC addr
UpdateMethod: UpdateMethodTypeBitFinexV2,
Addr: ethCommon.HexToAddress("0x1"),
Symbol: "DAI",
},
{ // USDT uses symbol
UpdateMethod: UpdateMethodTypeCoingeckoV3,
Addr: ethCommon.HexToAddress(usdtAddr),
},
{ // FOO uses ignore
UpdateMethod: UpdateMethodTypeIgnore,
Addr: ethCommon.HexToAddress("0x2"),
},
{ // BAR uses static
UpdateMethod: UpdateMethodTypeStatic,
Addr: ethCommon.HexToAddress("0x3"),
StaticValue: staticValue,
},
}
result := m.Run()
os.Exit(result)
}
func TestPriceUpdaterBitfinex(t *testing.T) {
bitfinexV2URL := "https://api-pub.bitfinex.com/v2/"
coingeckoV3URL := "https://api.coingecko.com/api/v3/"
// Init price updater
pu, err := NewPriceUpdater("https://api-pub.bitfinex.com/v2/", APITypeBitFinexV2, historyDB)
pu, err := NewPriceUpdater(
UpdateMethodTypeCoingeckoV3,
tc,
historyDB,
bitfinexV2URL,
coingeckoV3URL,
)
require.NoError(t, err)
// Update token list
assert.NoError(t, pu.UpdateTokenList())
require.NoError(t, pu.UpdateTokenList())
// Update prices
pu.UpdatePrices(context.Background())
assertTokenHasPriceAndClean(t)
}
func TestPriceUpdaterCoingecko(t *testing.T) {
// Init price updater
pu, err := NewPriceUpdater("https://api.coingecko.com/api/v3/", APITypeCoingeckoV3, historyDB)
require.NoError(t, err)
// Update token list
assert.NoError(t, pu.UpdateTokenList())
// Update prices
pu.UpdatePrices(context.Background())
assertTokenHasPriceAndClean(t)
}
func assertTokenHasPriceAndClean(t *testing.T) {
// Check that prices have been updated
// Check results: get tokens from DB
fetchedTokens, err := historyDB.GetTokensTest()
require.NoError(t, err)
// TokenID 0 (ETH) is always on the DB
assert.Equal(t, 2, len(fetchedTokens))
for _, token := range fetchedTokens {
require.NotNil(t, token.USD)
require.NotNil(t, token.USDUpdate)
assert.Greater(t, *token.USD, 0.0)
}
// Check that tokens that are updated via API have value:
// ETH
require.NotNil(t, fetchedTokens[0].USDUpdate)
assert.Greater(t, *fetchedTokens[0].USD, 0.0)
// DAI
require.NotNil(t, fetchedTokens[1].USDUpdate)
assert.Greater(t, *fetchedTokens[1].USD, 0.0)
// USDT
require.NotNil(t, fetchedTokens[2].USDUpdate)
assert.Greater(t, *fetchedTokens[2].USD, 0.0)
// UNI
require.NotNil(t, fetchedTokens[5].USDUpdate)
assert.Greater(t, *fetchedTokens[5].USD, 0.0)
// Check ignored token
assert.Equal(t, ignoreValue, *fetchedTokens[3].USD)
// Check static value
assert.Equal(t, staticValue, *fetchedTokens[4].USD)
}
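The per-token configuration above drives which source each price comes from. As a minimal stand-alone sketch of the dispatch that this test exercises (the UpdateMethodType* string values and the nextPrice helper are illustrative assumptions, not the package's actual code):

package main

import "fmt"

type UpdateMethodType string

const (
	UpdateMethodTypeBitFinexV2  UpdateMethodType = "bitfinexV2"
	UpdateMethodTypeCoingeckoV3 UpdateMethodType = "coingeckoV3"
	UpdateMethodTypeIgnore      UpdateMethodType = "ignore"
	UpdateMethodTypeStatic      UpdateMethodType = "static"
)

type TokenConfig struct {
	UpdateMethod UpdateMethodType
	Symbol       string
	StaticValue  float64
}

// nextPrice returns the value the updater would store for one token:
// static tokens use the configured value, ignored tokens keep the
// current DB value, and API-backed tokens use the fetched quote.
func nextPrice(tc TokenConfig, dbValue, apiQuote float64) float64 {
	switch tc.UpdateMethod {
	case UpdateMethodTypeStatic:
		return tc.StaticValue
	case UpdateMethodTypeIgnore:
		return dbValue
	default: // UpdateMethodTypeBitFinexV2, UpdateMethodTypeCoingeckoV3, ...
		return apiQuote
	}
}

func main() {
	bar := TokenConfig{UpdateMethod: UpdateMethodTypeStatic, Symbol: "BAR", StaticValue: 0.12345}
	foo := TokenConfig{UpdateMethod: UpdateMethodTypeIgnore, Symbol: "FOO"}
	fmt.Println(nextPrice(bar, 1.0, 2.0))   // 0.12345 (the staticValue case in the test)
	fmt.Println(nextPrice(foo, 44.44, 2.0)) // 44.44 (the ignoreValue case in the test)
}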

View File

@@ -23,6 +23,12 @@ const (
// errStrUnknownBlock is the string returned by geth when querying an
// unknown block
errStrUnknownBlock = "unknown block"
// updateEthBlockNumThreshold is a threshold on the number of ethereum blocks left to
// synchronize: while more blocks than this remain to be synced, calls to UpdateEth can be
// skipped aggressively
updateEthBlockNumThreshold = 100
// While more blocks than updateEthBlockNumThreshold remain to sync, UpdateEth is called
// only once every updateEthFrequencyDivider blocks
updateEthFrequencyDivider = 100
)
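// A hedged sketch (this helper does not exist in the package) of the
// throttle these two constants implement further down in Sync: near the
// chain head every block refreshes the Ethereum stats, while far behind
// only one block in every updateEthFrequencyDivider does.
func shouldUpdateEth(nextBlockNum, ethLastBlockNum int64) bool {
	return nextBlockNum+updateEthBlockNumThreshold >= ethLastBlockNum ||
		nextBlockNum%updateEthFrequencyDivider == 0
}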
var (
@@ -183,28 +189,6 @@ type StartBlockNums struct {
WDelayer int64
}
// SCVariables joins all the smart contract variables in a single struct
type SCVariables struct {
Rollup common.RollupVariables `validate:"required"`
Auction common.AuctionVariables `validate:"required"`
WDelayer common.WDelayerVariables `validate:"required"`
}
// SCVariablesPtr joins all the smart contract variables as pointers in a single
// struct
type SCVariablesPtr struct {
Rollup *common.RollupVariables `validate:"required"`
Auction *common.AuctionVariables `validate:"required"`
WDelayer *common.WDelayerVariables `validate:"required"`
}
// SCConsts joins all the smart contract constants in a single struct
type SCConsts struct {
Rollup common.RollupConstants
Auction common.AuctionConstants
WDelayer common.WDelayerConstants
}
// Config is the Synchronizer configuration
type Config struct {
StatsRefreshPeriod time.Duration
@@ -214,14 +198,14 @@ type Config struct {
// Synchronizer implements the Synchronizer type
type Synchronizer struct {
ethClient eth.ClientInterface
consts SCConsts
consts common.SCConsts
historyDB *historydb.HistoryDB
l2DB *l2db.L2DB
stateDB *statedb.StateDB
cfg Config
initVars SCVariables
initVars common.SCVariables
startBlockNum int64
vars SCVariables
vars common.SCVariables
stats *StatsHolder
resetStateFailed bool
}
@@ -244,7 +228,7 @@ func NewSynchronizer(ethClient eth.ClientInterface, historyDB *historydb.History
return nil, tracerr.Wrap(fmt.Errorf("NewSynchronizer ethClient.WDelayerConstants(): %w",
err))
}
consts := SCConsts{
consts := common.SCConsts{
Rollup: *rollupConstants,
Auction: *auctionConstants,
WDelayer: *wDelayerConstants,
@@ -310,11 +294,11 @@ func (s *Synchronizer) WDelayerConstants() *common.WDelayerConstants {
}
// SCVars returns a copy of the Smart Contract Variables
func (s *Synchronizer) SCVars() SCVariablesPtr {
return SCVariablesPtr{
Rollup: s.vars.Rollup.Copy(),
Auction: s.vars.Auction.Copy(),
WDelayer: s.vars.WDelayer.Copy(),
func (s *Synchronizer) SCVars() *common.SCVariables {
return &common.SCVariables{
Rollup: *s.vars.Rollup.Copy(),
Auction: *s.vars.Auction.Copy(),
WDelayer: *s.vars.WDelayer.Copy(),
}
}
@@ -357,7 +341,7 @@ func (s *Synchronizer) updateCurrentSlot(slot *common.Slot, reset bool, hasBatch
// We want the next block because the current one is already mined
blockNum := s.stats.Sync.LastBlock.Num + 1
slotNum := s.consts.Auction.SlotNum(blockNum)
firstBatchBlockNum := s.stats.Sync.LastBlock.Num
syncLastBlockNum := s.stats.Sync.LastBlock.Num
if reset {
// Using this query only to know if there is a batch in the current slot
dbFirstBatchBlockNum, err := s.historyDB.GetFirstBatchBlockNumBySlot(slotNum)
@@ -367,7 +351,7 @@ func (s *Synchronizer) updateCurrentSlot(slot *common.Slot, reset bool, hasBatch
hasBatch = false
} else {
hasBatch = true
firstBatchBlockNum = dbFirstBatchBlockNum
syncLastBlockNum = dbFirstBatchBlockNum
}
slot.ForgerCommitment = false
} else if slotNum > slot.SlotNum {
@@ -376,16 +360,14 @@ func (s *Synchronizer) updateCurrentSlot(slot *common.Slot, reset bool, hasBatch
}
slot.SlotNum = slotNum
slot.StartBlock, slot.EndBlock = s.consts.Auction.SlotBlocks(slot.SlotNum)
if hasBatch && s.consts.Auction.RelativeBlock(syncLastBlockNum) < int64(s.vars.Auction.SlotDeadline) {
slot.ForgerCommitment = true
}
// If Synced, update the current coordinator
if s.stats.Synced() && blockNum >= s.consts.Auction.GenesisBlockNum {
if err := s.setSlotCoordinator(slot); err != nil {
return tracerr.Wrap(err)
}
if hasBatch &&
s.consts.Auction.RelativeBlock(firstBatchBlockNum) <
int64(s.vars.Auction.SlotDeadline) {
slot.ForgerCommitment = true
}
// TODO: Remove this SANITY CHECK once this code is tested enough
// BEGIN SANITY CHECK
@@ -552,9 +534,12 @@ func (s *Synchronizer) Sync(ctx context.Context,
log.Debugf("ethBlock: num: %v, parent: %v, hash: %v",
ethBlock.Num, ethBlock.ParentHash.String(), ethBlock.Hash.String())
if nextBlockNum+updateEthBlockNumThreshold >= s.stats.Eth.LastBlock.Num ||
nextBlockNum%updateEthFrequencyDivider == 0 {
if err := s.stats.UpdateEth(s.ethClient); err != nil {
return nil, nil, tracerr.Wrap(err)
}
}
log.Debugw("Syncing...",
"block", nextBlockNum,
@@ -727,23 +712,23 @@ func (s *Synchronizer) reorg(uncleBlock *common.Block) (int64, error) {
}
func getInitialVariables(ethClient eth.ClientInterface,
consts *SCConsts) (*SCVariables, *StartBlockNums, error) {
rollupInit, rollupInitBlock, err := ethClient.RollupEventInit()
consts *common.SCConsts) (*common.SCVariables, *StartBlockNums, error) {
rollupInit, rollupInitBlock, err := ethClient.RollupEventInit(consts.Auction.GenesisBlockNum)
if err != nil {
return nil, nil, tracerr.Wrap(fmt.Errorf("RollupEventInit: %w", err))
}
auctionInit, auctionInitBlock, err := ethClient.AuctionEventInit()
auctionInit, auctionInitBlock, err := ethClient.AuctionEventInit(consts.Auction.GenesisBlockNum)
if err != nil {
return nil, nil, tracerr.Wrap(fmt.Errorf("AuctionEventInit: %w", err))
}
wDelayerInit, wDelayerInitBlock, err := ethClient.WDelayerEventInit()
wDelayerInit, wDelayerInitBlock, err := ethClient.WDelayerEventInit(consts.Auction.GenesisBlockNum)
if err != nil {
return nil, nil, tracerr.Wrap(fmt.Errorf("WDelayerEventInit: %w", err))
}
rollupVars := rollupInit.RollupVariables()
auctionVars := auctionInit.AuctionVariables(consts.Auction.InitialMinimalBidding)
wDelayerVars := wDelayerInit.WDelayerVariables()
return &SCVariables{
return &common.SCVariables{
Rollup: *rollupVars,
Auction: *auctionVars,
WDelayer: *wDelayerVars,
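Bounding the init-event filters is what makes these queries tractable against a mainnet-synced geth: per the commit above, the search window runs from roughly 24 hours before the genesis block up to the genesis block itself, instead of scanning the whole chain. A stand-alone sketch of computing such a window; the 15-second block time, and therefore blocksPerDay, is an illustrative assumption rather than a value taken from the repository:

// blocksPerDay approximates 24h of ethereum blocks at ~15s per block.
const blocksPerDay = 24 * 60 * 60 / 15 // 5760

// initEventWindow returns the [from, to] block range for an init-event
// filter: it ends at the genesis block and starts ~24h earlier,
// clamped at block 0.
func initEventWindow(genesisBlockNum int64) (from, to int64) {
	from = genesisBlockNum - blocksPerDay
	if from < 0 {
		from = 0
	}
	return from, genesisBlockNum
}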

View File

@@ -323,7 +323,7 @@ func newTestModules(t *testing.T) (*statedb.StateDB, *historydb.HistoryDB, *l2db
test.WipeDB(historyDB.DB())
// Init L2 DB
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
return stateDB, historyDB, l2DB
}
@@ -378,9 +378,9 @@ func TestSyncGeneral(t *testing.T) {
assert.Equal(t, int64(1), stats.Eth.LastBlock.Num)
assert.Equal(t, int64(1), stats.Sync.LastBlock.Num)
vars := s.SCVars()
assert.Equal(t, clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, clientSetup.WDelayerVariables, vars.WDelayer)
assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
dbBlocks, err := s.historyDB.GetAllBlocks()
require.NoError(t, err)
@@ -541,9 +541,9 @@ func TestSyncGeneral(t *testing.T) {
assert.Equal(t, int64(4), stats.Eth.LastBlock.Num)
assert.Equal(t, int64(4), stats.Sync.LastBlock.Num)
vars = s.SCVars()
assert.Equal(t, clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, clientSetup.WDelayerVariables, vars.WDelayer)
assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
dbExits, err := s.historyDB.GetAllExits()
require.NoError(t, err)
@@ -673,9 +673,9 @@ func TestSyncGeneral(t *testing.T) {
assert.Equal(t, false, stats.Synced())
assert.Equal(t, int64(6), stats.Eth.LastBlock.Num)
vars = s.SCVars()
assert.Equal(t, clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, clientSetup.WDelayerVariables, vars.WDelayer)
assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
// At this point, the DB only has data up to block 1
dbBlock, err := s.historyDB.GetLastBlock()
@@ -712,9 +712,9 @@ func TestSyncGeneral(t *testing.T) {
}
vars = s.SCVars()
assert.Equal(t, clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, clientSetup.WDelayerVariables, vars.WDelayer)
assert.Equal(t, *clientSetup.RollupVariables, vars.Rollup)
assert.Equal(t, *clientSetup.AuctionVariables, vars.Auction)
assert.Equal(t, *clientSetup.WDelayerVariables, vars.WDelayer)
}
dbBlock, err = s.historyDB.GetLastBlock()

View File

@@ -1152,7 +1152,7 @@ func (c *Client) RollupEventsByBlock(blockNum int64,
}
// RollupEventInit returns the initialize event with its corresponding block number
func (c *Client) RollupEventInit() (*eth.RollupEventInitialize, int64, error) {
func (c *Client) RollupEventInit(genesisBlockNum int64) (*eth.RollupEventInitialize, int64, error) {
vars := c.blocks[0].Rollup.Vars
return &eth.RollupEventInitialize{
ForgeL1L2BatchTimeout: uint8(vars.ForgeL1L2BatchTimeout),
@@ -1628,7 +1628,7 @@ func (c *Client) AuctionEventsByBlock(blockNum int64,
}
// AuctionEventInit returns the initialize event with its corresponding block number
func (c *Client) AuctionEventInit() (*eth.AuctionEventInitialize, int64, error) {
func (c *Client) AuctionEventInit(genesisBlockNum int64) (*eth.AuctionEventInitialize, int64, error) {
vars := c.blocks[0].Auction.Vars
return &eth.AuctionEventInitialize{
DonationAddress: vars.DonationAddress,
@@ -1863,7 +1863,7 @@ func (c *Client) WDelayerConstants() (*common.WDelayerConstants, error) {
}
// WDelayerEventInit returns the initialize event with its corresponding block number
func (c *Client) WDelayerEventInit() (*eth.WDelayerEventInitialize, int64, error) {
func (c *Client) WDelayerEventInit(genesisBlockNum int64) (*eth.WDelayerEventInitialize, int64, error) {
vars := c.blocks[0].WDelayer.Vars
return &eth.WDelayerEventInitialize{
InitialWithdrawalDelay: vars.WithdrawalDelay,

View File

@@ -212,6 +212,17 @@ PoolTransferToBJJ(1) A-C: 3 (1)
// SetBlockchainMinimumFlow0 contains a set of transactions with a minimal flow
var SetBlockchainMinimumFlow0 = `
Type: Blockchain
// Idxs:
// 256: A(0)
// 257: C(1)
// 258: A(1)
// 259: B(0)
// 260: D(0)
// 261: Coord(1)
// 262: Coord(0)
// 263: B(1)
// 264: C(0)
// 265: F(0)
AddToken(1)
@@ -255,10 +266,11 @@ CreateAccountDeposit(0) D: 800
// C(0): 0
// Coordinator creates needed accounts to receive Fees
CreateAccountCoordinator(1) Coord
CreateAccountCoordinator(0) Coord
// Coordinator creates needed 'To' accounts for the L2Txs
// sorted in the way that the TxSelector creates them
CreateAccountCoordinator(1) Coord
CreateAccountCoordinator(1) B
CreateAccountCoordinator(0) Coord
CreateAccountCoordinator(0) C

View File

@@ -77,7 +77,7 @@ func initTxSelector(t *testing.T, chainID uint16, hermezContractAddr ethCommon.A
pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
dir, err := ioutil.TempDir("", "tmpSyncDB")
require.NoError(t, err)
@@ -186,7 +186,7 @@ func TestTxSelectorBatchBuilderZKInputsMinimumFlow0(t *testing.T) {
zki, err := bb.BuildBatch(coordIdxs, configBatch, oL1UserTxs, oL1CoordTxs, oL2Txs)
require.NoError(t, err)
assert.Equal(t,
"3844339393304253264418296322137281996442345663805792718218845145754742722151",
"4392049343656836675348565048374261353937130287163762821533580216441778455298",
bb.LocalStateDB().MT.Root().BigInt().String())
sendProofAndCheckResp(t, zki)
err = l2DBTxSel.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
@@ -215,7 +215,7 @@ func TestTxSelectorBatchBuilderZKInputsMinimumFlow0(t *testing.T) {
zki, err = bb.BuildBatch(coordIdxs, configBatch, oL1UserTxs, oL1CoordTxs, oL2Txs)
require.NoError(t, err)
assert.Equal(t,
"2537294203394018451170116789946369404362093672592091326351037700505720139801",
"8905191229562583213069132470917469035834300549892959854483573322676101624713",
bb.LocalStateDB().MT.Root().BigInt().String())
sendProofAndCheckResp(t, zki)
err = l2DBTxSel.StartForging(common.TxIDsFromPoolL2Txs(l2Txs),
@@ -242,7 +242,7 @@ func TestTxSelectorBatchBuilderZKInputsMinimumFlow0(t *testing.T) {
zki, err = bb.BuildBatch(coordIdxs, configBatch, oL1UserTxs, oL1CoordTxs, oL2Txs)
require.NoError(t, err)
assert.Equal(t,
"13463929859122729344499006353544877221550995454069650137270994940730475267399",
"20593679664586247774284790801579542411781976279024409415159440382607791042723",
bb.LocalStateDB().MT.Root().BigInt().String())
sendProofAndCheckResp(t, zki)
err = l2DBTxSel.StartForging(common.TxIDsFromPoolL2Txs(l2Txs),
@@ -264,7 +264,7 @@ func TestTxSelectorBatchBuilderZKInputsMinimumFlow0(t *testing.T) {
// same root as previous batch, as the L1CoordinatorTxs created by the
// Til set is not created by the TxSelector in this test
assert.Equal(t,
"13463929859122729344499006353544877221550995454069650137270994940730475267399",
"20593679664586247774284790801579542411781976279024409415159440382607791042723",
bb.LocalStateDB().MT.Root().BigInt().String())
sendProofAndCheckResp(t, zki)
err = l2DBTxSel.StartForging(common.TxIDsFromPoolL2Txs(l2Txs),

View File

@@ -732,13 +732,13 @@ func (tp *TxProcessor) ProcessL2Tx(coordIdxsMap map[common.TokenID]common.Idx,
// if tx.ToIdx==0, get toIdx by ToEthAddr or ToBJJ
if tx.ToIdx == common.Idx(0) && tx.AuxToIdx == common.Idx(0) {
if tp.s.Type() == statedb.TypeSynchronizer {
// thisTypeould never be reached
// this in TypeSynchronizer should never be reached
log.Error("WARNING: In StateDB with Synchronizer mode L2.ToIdx can't be 0")
return nil, nil, false,
tracerr.Wrap(fmt.Errorf("In StateDB with Synchronizer mode L2.ToIdx can't be 0"))
}
// case when tx.Type== common.TxTypeTransferToEthAddr or common.TxTypeTransferToBJJ
// case when tx.Type == common.TxTypeTransferToEthAddr or
// common.TxTypeTransferToBJJ:
accSender, err := tp.s.GetAccount(tx.FromIdx)
if err != nil {
return nil, nil, false, tracerr.Wrap(err)
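The invariant documented above can be summarized: a StateDB in Synchronizer mode must always see a resolved ToIdx, while the TxSelector/BatchBuilder path resolves ToEthAddr/ToBJJ into AuxToIdx beforehand. A hedged, self-contained sketch of that rule (effectiveToIdx is a hypothetical helper, not part of the txprocessor API):

package main

import (
	"errors"
	"fmt"
)

// effectiveToIdx models the ToIdx==0 rule above: in Synchronizer mode
// the receiver index must already be set, otherwise the index resolved
// earlier from ToEthAddr/ToBJJ (AuxToIdx) is used.
func effectiveToIdx(toIdx, auxToIdx int64, isSynchronizer bool) (int64, error) {
	if toIdx != 0 {
		return toIdx, nil
	}
	if isSynchronizer {
		return 0, errors.New("in StateDB with Synchronizer mode L2.ToIdx can't be 0")
	}
	return auxToIdx, nil
}

func main() {
	idx, err := effectiveToIdx(0, 257, false) // TxSelector path: resolve via AuxToIdx
	fmt.Println(idx, err)                     // 257 <nil>
	_, err = effectiveToIdx(0, 257, true)     // Synchronizer path: must already be set
	fmt.Println(err)
}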

View File

@@ -218,7 +218,7 @@ func TestProcessTxsBalances(t *testing.T) {
assert.NoError(t, err)
chainID := uint16(0)
// generate test transactions from test.SetBlockchain0 code
// generate test transactions from test.SetBlockchainMinimumFlow0 code
tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
blocks, err := tc.GenerateBlocks(txsets.SetBlockchainMinimumFlow0)
require.NoError(t, err)
@@ -288,7 +288,7 @@ func TestProcessTxsBalances(t *testing.T) {
"9061858435528794221929846392270405504056106238451760714188625065949729889651",
tp.s.MT.Root().BigInt().String())
coordIdxs := []common.Idx{261, 262}
coordIdxs := []common.Idx{261, 263}
log.Debug("block:0 batch:7")
l1UserTxs = til.L1TxsToCommonL1Txs(tc.Queues[*blocks[0].Rollup.Batches[6].Batch.ForgeL1TxsNum])
l2Txs = common.L2TxsToPoolL2Txs(blocks[0].Rollup.Batches[6].L2Txs)
@@ -303,7 +303,7 @@ func TestProcessTxsBalances(t *testing.T) {
checkBalance(t, tc, sdb, "C", 0, "100")
checkBalance(t, tc, sdb, "D", 0, "800")
assert.Equal(t,
"3844339393304253264418296322137281996442345663805792718218845145754742722151",
"4392049343656836675348565048374261353937130287163762821533580216441778455298",
tp.s.MT.Root().BigInt().String())
log.Debug("block:0 batch:8")
@@ -321,7 +321,7 @@ func TestProcessTxsBalances(t *testing.T) {
checkBalance(t, tc, sdb, "C", 1, "100")
checkBalance(t, tc, sdb, "D", 0, "800")
assert.Equal(t,
"2537294203394018451170116789946369404362093672592091326351037700505720139801",
"8905191229562583213069132470917469035834300549892959854483573322676101624713",
tp.s.MT.Root().BigInt().String())
coordIdxs = []common.Idx{262}
@@ -330,8 +330,8 @@ func TestProcessTxsBalances(t *testing.T) {
l2Txs = common.L2TxsToPoolL2Txs(blocks[1].Rollup.Batches[0].L2Txs)
_, err = tp.ProcessTxs(coordIdxs, l1UserTxs, blocks[1].Rollup.Batches[0].L1CoordinatorTxs, l2Txs)
require.NoError(t, err)
checkBalance(t, tc, sdb, "Coord", 0, "75")
checkBalance(t, tc, sdb, "Coord", 1, "30")
checkBalance(t, tc, sdb, "Coord", 0, "35")
checkBalance(t, tc, sdb, "A", 0, "730")
checkBalance(t, tc, sdb, "A", 1, "280")
checkBalance(t, tc, sdb, "B", 0, "380")
@@ -340,7 +340,7 @@ func TestProcessTxsBalances(t *testing.T) {
checkBalance(t, tc, sdb, "C", 1, "100")
checkBalance(t, tc, sdb, "D", 0, "470")
assert.Equal(t,
"13463929859122729344499006353544877221550995454069650137270994940730475267399",
"12063160053709941400160547588624831667157042937323422396363359123696668555050",
tp.s.MT.Root().BigInt().String())
coordIdxs = []common.Idx{}
@@ -350,7 +350,7 @@ func TestProcessTxsBalances(t *testing.T) {
_, err = tp.ProcessTxs(coordIdxs, l1UserTxs, blocks[1].Rollup.Batches[1].L1CoordinatorTxs, l2Txs)
require.NoError(t, err)
assert.Equal(t,
"21058792089669864857092637997959333050678445584244682889041632034478049099916",
"20375835796927052406196249140510136992262283055544831070430919054949353249481",
tp.s.MT.Root().BigInt().String())
// use Set of PoolL2 txs
@@ -359,8 +359,8 @@ func TestProcessTxsBalances(t *testing.T) {
_, err = tp.ProcessTxs(coordIdxs, []common.L1Tx{}, []common.L1Tx{}, poolL2Txs)
require.NoError(t, err)
checkBalance(t, tc, sdb, "Coord", 0, "75")
checkBalance(t, tc, sdb, "Coord", 1, "30")
checkBalance(t, tc, sdb, "Coord", 0, "35")
checkBalance(t, tc, sdb, "A", 0, "510")
checkBalance(t, tc, sdb, "A", 1, "170")
checkBalance(t, tc, sdb, "B", 0, "480")

File diff suppressed because one or more lines are too long

View File

@@ -148,26 +148,49 @@ func (txsel *TxSelector) GetL1L2TxSelection(selectionConfig txprocessor.Config,
discardedL2Txs, tracerr.Wrap(err)
}
// getL1L2TxSelection returns the selection of L1 + L2 txs.
// It returns: the CoordinatorIdxs used to receive the fees of the selected
// L2Txs; an array of byte arrays with the signatures of the
// AccountCreationAuthorization of the users whose accounts do not exist
// yet but have transactions addressed to them and an existing account
// creation authorization, so that the Coordinator creates those accounts
// with L1CoordinatorTxs; and the L1UserTxs, L1CoordinatorTxs and
// PoolL2Txs that will be included in the next batch.
func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
l1UserTxs []common.L1Tx) ([]common.Idx, [][]byte, []common.L1Tx,
[]common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) {
// WIP.0: the TxSelector is not optimized and will need a redesign. The
// current version is implemented in order to have a functional
// implementation that can be used asap.
//
// WIP.1: this method uses a 'cherry-pick' of internal calls of the
// StateDB, a refactor of the StateDB to reorganize it internally is
// planned once the main functionallities are covered, with that
// refactor the TxSelector will be updated also.
// implementation that can be used ASAP.
// get pending l2-tx from tx-pool
l2TxsRaw, err := txsel.l2db.GetPendingTxs()
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
// Steps of this method:
// - ProcessL1Txs (User txs)
// - getPendingTxs (forgable directly with current state & not forgable
// yet)
// - split between l2TxsForgable & l2TxsNonForgable, where:
// - l2TxsForgable are the txs that are directly forgable with the
// current state
// - l2TxsNonForgable are the txs that are not directly forgable
// with the current state, but that may be forgable once the
// l2TxsForgable ones are processed
// - for l2TxsForgable, and if needed, for l2TxsNonForgable:
// - sort by Fee & Nonce
// - loop over l2Txs (txsel.processL2Txs)
// - Fill tx.TokenID tx.Nonce
// - Check enough Balance on sender
// - Check Nonce
// - Create CoordAccount L1CoordTx for TokenID if needed
// - & ProcessL1Tx of L1CoordTx
// - Check validity of receiver Account for ToEthAddr / ToBJJ
// - Create UserAccount L1CoordTx if needed (and possible)
// - If everything is fine, store l2Tx to validTxs & update NoncesMap
// - Prepare coordIdxsMap & AccumulatedFees
// - Distribute AccumulatedFees to CoordIdxs
// - MakeCheckpoint
txselStateDB := txsel.localAccountsDB.StateDB
tp := txprocessor.NewTxProcessor(txselStateDB, selectionConfig)
tp.AccumulatedFees = make(map[common.Idx]*big.Int)
// Process L1UserTxs
for i := 0; i < len(l1UserTxs); i++ {
@@ -178,71 +201,167 @@ func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
}
}
var l1CoordinatorTxs []common.L1Tx
positionL1 := len(l1UserTxs)
l2TxsFromDB, err := txsel.l2db.GetPendingTxs()
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
l2TxsForgable, l2TxsNonForgable := splitL2ForgableAndNonForgable(tp, l2TxsFromDB)
// if the length of l2TxsForgable is 0 there is no need to continue, as
// there are no L2Txs to forge at all
if len(l2TxsForgable) == 0 {
var discardedL2Txs []common.PoolL2Tx
for i := 0; i < len(l2TxsNonForgable); i++ {
l2TxsNonForgable[i].Info =
"Tx not selected due impossibility to be forged with the current state"
discardedL2Txs = append(discardedL2Txs, l2TxsNonForgable[i])
}
err = tp.StateDB().MakeCheckpoint()
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
metricSelectedL1UserTxs.Set(float64(len(l1UserTxs)))
metricSelectedL1CoordinatorTxs.Set(0)
metricSelectedL2Txs.Set(0)
metricDiscardedL2Txs.Set(float64(len(discardedL2Txs)))
return nil, nil, l1UserTxs, nil, nil, discardedL2Txs, nil
}
var accAuths [][]byte
// Sort l2TxsRaw (cropping at MaxTx at this point).
// discardedL2Txs contains an array of the L2Txs that have not been
// selected in this Batch.
l2Txs0, discardedL2Txs := txsel.getL2Profitable(l2TxsRaw, selectionConfig.MaxTx)
for i := range discardedL2Txs {
discardedL2Txs[i].Info = "Tx not selected due to low absolute fee (does not fit inside the profitable set)"
}
noncesMap := make(map[common.Idx]common.Nonce)
var l2Txs []common.PoolL2Tx
// iterate over l2Txs
// - if tx.TokenID does not exist at CoordsIdxDB
// - create new L1CoordinatorTx creating a CoordAccount, for
// Coordinator to receive the fee of the new TokenID
for i := 0; i < len(l2Txs0); i++ {
accSender, err := tp.StateDB().GetAccount(l2Txs0[i].FromIdx)
var l1CoordinatorTxs []common.L1Tx
var validTxs, discardedL2Txs []common.PoolL2Tx
l2TxsForgable = sortL2Txs(l2TxsForgable)
accAuths, l1CoordinatorTxs, validTxs, discardedL2Txs, err =
txsel.processL2Txs(tp, selectionConfig, len(l1UserTxs),
l2TxsForgable, validTxs, discardedL2Txs)
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
l2Txs0[i].TokenID = accSender.TokenID
// populate the noncesMap used at the next iteration
noncesMap[l2Txs0[i].FromIdx] = accSender.Nonce
// if TokenID does not exist yet, create new L1CoordinatorTx to
// create the CoordinatorAccount for that TokenID, to receive
// the fees. Only in the case that there does not exist yet a
// pending L1CoordinatorTx to create the account for the
// Coordinator for that TokenID
var newL1CoordTx *common.L1Tx
newL1CoordTx, positionL1, err =
txsel.coordAccountForTokenID(l1CoordinatorTxs,
accSender.TokenID, positionL1)
// if there is space for more txs, also take the NonForgable txs, which
// may be unblocked once the Forgable ones are processed
if len(validTxs) < int(selectionConfig.MaxTx)-(len(l1UserTxs)+len(l1CoordinatorTxs)) {
l2TxsNonForgable = sortL2Txs(l2TxsNonForgable)
var accAuths2 [][]byte
var l1CoordinatorTxs2 []common.L1Tx
accAuths2, l1CoordinatorTxs2, validTxs, discardedL2Txs, err =
txsel.processL2Txs(tp, selectionConfig,
len(l1UserTxs)+len(l1CoordinatorTxs), l2TxsNonForgable,
validTxs, discardedL2Txs)
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
if newL1CoordTx != nil {
// if there is no space for the L1CoordinatorTx, discard the L2Tx
if len(l1CoordinatorTxs) >= int(selectionConfig.MaxL1Tx)-len(l1UserTxs) {
// discard L2Tx, and update Info parameter of
// the tx, and add it to the discardedTxs array
l2Txs0[i].Info = "Tx not selected because the L2Tx depends on a " +
"L1CoordinatorTx and there is not enough space for L1Coordinator"
discardedL2Txs = append(discardedL2Txs, l2Txs0[i])
continue
accAuths = append(accAuths, accAuths2...)
l1CoordinatorTxs = append(l1CoordinatorTxs, l1CoordinatorTxs2...)
} else {
// if there is no space for the NonForgable txs, put them into the
// discardedL2Txs array
for i := 0; i < len(l2TxsNonForgable); i++ {
l2TxsNonForgable[i].Info =
"Tx not selected due not available slots for L2Txs"
discardedL2Txs = append(discardedL2Txs, l2TxsNonForgable[i])
}
// increase positionL1
positionL1++
l1CoordinatorTxs = append(l1CoordinatorTxs, *newL1CoordTx)
accAuths = append(accAuths, txsel.coordAccount.AccountCreationAuth)
}
l2Txs = append(l2Txs, l2Txs0[i])
}
var validTxs []common.PoolL2Tx
// iterate over l2TxsRaw
// get CoordIdxsMap for the TokenIDs
coordIdxsMap := make(map[common.TokenID]common.Idx)
for i := 0; i < len(validTxs); i++ {
// get TokenID from tx.Sender
accSender, err := tp.StateDB().GetAccount(validTxs[i].FromIdx)
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
tokenID := accSender.TokenID
coordIdx, err := txsel.getCoordIdx(tokenID)
if err != nil {
// if err is db.ErrNotFound, should not happen, as all
// the validTxs.TokenID should have a CoordinatorIdx
// created in the DB at this point
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
coordIdxsMap[tokenID] = coordIdx
}
var coordIdxs []common.Idx
for _, idx := range coordIdxsMap {
coordIdxs = append(coordIdxs, idx)
}
// sort CoordIdxs
sort.SliceStable(coordIdxs, func(i, j int) bool {
return coordIdxs[i] < coordIdxs[j]
})
// distribute the AccumulatedFees from the processed L2Txs into the
// Coordinator Idxs
for idx, accumulatedFee := range tp.AccumulatedFees {
cmp := accumulatedFee.Cmp(big.NewInt(0))
if cmp == 1 { // accumulatedFee>0
// send the fee to the Idx of the Coordinator for the TokenID
accCoord, err := txsel.localAccountsDB.GetAccount(idx)
if err != nil {
log.Errorw("Can not distribute accumulated fees to coordinator "+
"account: No coord Idx to receive fee", "idx", idx)
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
accCoord.Balance = new(big.Int).Add(accCoord.Balance, accumulatedFee)
_, err = txsel.localAccountsDB.UpdateAccount(idx, accCoord)
if err != nil {
log.Error(err)
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
}
}
err = tp.StateDB().MakeCheckpoint()
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
metricSelectedL1UserTxs.Set(float64(len(l1UserTxs)))
metricSelectedL1CoordinatorTxs.Set(float64(len(l1CoordinatorTxs)))
metricSelectedL2Txs.Set(float64(len(validTxs)))
metricDiscardedL2Txs.Set(float64(len(discardedL2Txs)))
return coordIdxs, accAuths, l1UserTxs, l1CoordinatorTxs, validTxs, discardedL2Txs, nil
}
func (txsel *TxSelector) processL2Txs(tp *txprocessor.TxProcessor,
selectionConfig txprocessor.Config, nL1Txs int, l2Txs, validTxs, discardedL2Txs []common.PoolL2Tx) (
[][]byte, []common.L1Tx, []common.PoolL2Tx, []common.PoolL2Tx, error) {
var l1CoordinatorTxs []common.L1Tx
positionL1 := nL1Txs
var accAuths [][]byte
// Iterate over l2Txs
// - check Nonces
// - check enough Balance for the Amount+Fee
// - if needed, create new L1CoordinatorTxs for a nonexistent ToIdx
// - keep used accAuths
// - put the valid txs into validTxs array
for i := 0; i < len(l2Txs); i++ {
// Check if there is space for more L2Txs in the selection
maxL2Txs := int(selectionConfig.MaxTx) - nL1Txs - len(l1CoordinatorTxs)
if len(validTxs) >= maxL2Txs {
// no more available slots for L2Txs, so mark this tx
// and the rest of the remaining txs as discarded
for j := i; j < len(l2Txs); j++ {
l2Txs[j].Info =
"Tx not selected due not available slots for L2Txs"
discardedL2Txs = append(discardedL2Txs, l2Txs[j])
}
break
}
// get Nonce & TokenID from the Account by l2Tx.FromIdx
accSender, err := tp.StateDB().GetAccount(l2Txs[i].FromIdx)
if err != nil {
return nil, nil, nil, nil, tracerr.Wrap(err)
}
l2Txs[i].TokenID = accSender.TokenID
// Check enough Balance on sender
enoughBalance, balance, feeAndAmount := tp.CheckEnoughBalance(l2Txs[i])
if !enoughBalance {
// not valid Amount with current Balance. Discard L2Tx,
@@ -254,18 +373,54 @@ func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
// check if Nonce is correct
nonce := noncesMap[l2Txs[i].FromIdx]
if l2Txs[i].Nonce != nonce {
// Check if Nonce is correct
if l2Txs[i].Nonce != accSender.Nonce {
// not valid Nonce at tx. Discard L2Tx, and update Info
// parameter of the tx, and add it to the discardedTxs
// array
l2Txs[i].Info = fmt.Sprintf("Tx not selected due to not current Nonce. "+
"Tx.Nonce: %d, Account.Nonce: %d", l2Txs[i].Nonce, nonce)
"Tx.Nonce: %d, Account.Nonce: %d", l2Txs[i].Nonce, accSender.Nonce)
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
// if the Coordinator account for this TokenID does not exist yet,
// create a new L1CoordinatorTx to create the CoordinatorAccount for
// that TokenID, so that the Coordinator can receive the fees. Do this
// only if there is not yet a pending L1CoordinatorTx that creates the
// account for the Coordinator for that TokenID
var newL1CoordTx *common.L1Tx
newL1CoordTx, positionL1, err =
txsel.coordAccountForTokenID(l1CoordinatorTxs,
accSender.TokenID, positionL1)
if err != nil {
return nil, nil, nil, nil, tracerr.Wrap(err)
}
if newL1CoordTx != nil {
// if there is no space for the L1CoordinatorTx as MaxL1Tx, or no space
// for L1CoordinatorTx + L2Tx as MaxTx, discard the L2Tx
if len(l1CoordinatorTxs) >= int(selectionConfig.MaxL1Tx)-nL1Txs ||
len(l1CoordinatorTxs)+1 >= int(selectionConfig.MaxTx)-nL1Txs {
// discard L2Tx, and update Info parameter of
// the tx, and add it to the discardedTxs array
l2Txs[i].Info = "Tx not selected because the L2Tx depends on a " +
"L1CoordinatorTx and there is not enough space for L1Coordinator"
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
// increase positionL1
positionL1++
l1CoordinatorTxs = append(l1CoordinatorTxs, *newL1CoordTx)
accAuths = append(accAuths, txsel.coordAccount.AccountCreationAuth)
// process the L1CoordTx
_, _, _, _, err := tp.ProcessL1Tx(nil, newL1CoordTx)
if err != nil {
return nil, nil, nil, nil, tracerr.Wrap(err)
}
}
// If tx.ToIdx>=256, tx.ToIdx should exist in localAccountsDB;
// if so, the tx is used. If tx.ToIdx==0, for an L2Tx this is the
// case of TxToEthAddr or TxToBJJ, check if
@@ -277,7 +432,7 @@ func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
if l2Txs[i].ToIdx == 0 { // ToEthAddr/ToBJJ case
validL2Tx, l1CoordinatorTx, accAuth, err :=
txsel.processTxToEthAddrBJJ(validTxs, selectionConfig,
len(l1UserTxs), l1CoordinatorTxs, positionL1, l2Txs[i])
nL1Txs, l1CoordinatorTxs, positionL1, l2Txs[i])
if err != nil {
log.Debugw("txsel.processTxToEthAddrBJJ", "err", err)
// Discard L2Tx, and update Info parameter of
@@ -287,7 +442,19 @@ func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
if l1CoordinatorTx != nil {
// if there is no space for the L1CoordinatorTx as MaxL1Tx, or no space
// for L1CoordinatorTx + L2Tx as MaxTx, discard the L2Tx
if len(l1CoordinatorTxs) >= int(selectionConfig.MaxL1Tx)-nL1Txs ||
len(l1CoordinatorTxs)+1 >= int(selectionConfig.MaxTx)-nL1Txs {
// discard L2Tx, and update Info parameter of
// the tx, and add it to the discardedTxs array
l2Txs[i].Info = "Tx not selected because the L2Tx depends on a " +
"L1CoordinatorTx and there is not enough space for L1Coordinator"
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
if l1CoordinatorTx != nil && validL2Tx != nil {
// If ToEthAddr == 0xff.. this means that we
// are handling a TransferToBJJ, which doesn't
// require an authorization because it doesn't
@@ -303,9 +470,16 @@ func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
l1CoordinatorTxs = append(l1CoordinatorTxs, *l1CoordinatorTx)
positionL1++
}
// process the L1CoordTx
_, _, _, _, err := tp.ProcessL1Tx(nil, l1CoordinatorTx)
if err != nil {
return nil, nil, nil, nil, tracerr.Wrap(err)
}
if validL2Tx != nil {
validTxs = append(validTxs, *validL2Tx)
}
if validL2Tx == nil {
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
} else if l2Txs[i].ToIdx >= common.IdxUserThreshold {
receiverAcc, err := txsel.localAccountsDB.GetAccount(l2Txs[i].ToIdx)
@@ -352,110 +526,43 @@ func (txsel *TxSelector) getL1L2TxSelection(selectionConfig txprocessor.Config,
continue
}
}
// Account found in the DB, include the l2Tx in the selection
validTxs = append(validTxs, l2Txs[i])
} else if l2Txs[i].ToIdx == common.Idx(1) {
// valid txs (of Exit type)
validTxs = append(validTxs, l2Txs[i])
}
noncesMap[l2Txs[i].FromIdx]++
}
// Process L1CoordinatorTxs
for i := 0; i < len(l1CoordinatorTxs); i++ {
_, _, _, _, err := tp.ProcessL1Tx(nil, &l1CoordinatorTxs[i])
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
}
// get CoordIdxsMap for the TokenIDs
coordIdxsMap := make(map[common.TokenID]common.Idx)
for i := 0; i < len(validTxs); i++ {
// get TokenID from tx.Sender
accSender, err := tp.StateDB().GetAccount(validTxs[i].FromIdx)
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
// get CoordIdxsMap for the TokenID of the current l2Txs[i]
// get TokenID from tx.Sender account
tokenID := accSender.TokenID
coordIdx, err := txsel.getCoordIdx(tokenID)
if err != nil {
// if err is db.ErrNotFound, should not happen, as all
// the validTxs.TokenID should have a CoordinatorIdx
// created in the DB at this point
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
return nil, nil, nil, nil,
tracerr.Wrap(fmt.Errorf("Could not get CoordIdx for TokenID=%d, "+
"due: %s", tokenID, err))
}
coordIdxsMap[tokenID] = coordIdx
// prepare temp coordIdxsMap & AccumulatedFees for the call to
// ProcessL2Tx
coordIdxsMap := map[common.TokenID]common.Idx{tokenID: coordIdx}
// tp.AccumulatedFees = make(map[common.Idx]*big.Int)
if _, ok := tp.AccumulatedFees[coordIdx]; !ok {
tp.AccumulatedFees[coordIdx] = big.NewInt(0)
}
var coordIdxs []common.Idx
tp.AccumulatedFees = make(map[common.Idx]*big.Int)
for _, idx := range coordIdxsMap {
tp.AccumulatedFees[idx] = big.NewInt(0)
coordIdxs = append(coordIdxs, idx)
}
// sort CoordIdxs
sort.SliceStable(coordIdxs, func(i, j int) bool {
return coordIdxs[i] < coordIdxs[j]
})
// get most profitable L2-tx
maxL2Txs := int(selectionConfig.MaxTx) -
len(l1UserTxs) - len(l1CoordinatorTxs)
selectedL2Txs := validTxs
if len(validTxs) > maxL2Txs {
selectedL2Txs = selectedL2Txs[:maxL2Txs]
}
var finalL2Txs []common.PoolL2Tx
for i := 0; i < len(selectedL2Txs); i++ {
_, _, _, err = tp.ProcessL2Tx(coordIdxsMap, nil, nil, &selectedL2Txs[i])
_, _, _, err = tp.ProcessL2Tx(coordIdxsMap, nil, nil, &l2Txs[i])
if err != nil {
// the error can be due not valid tx data, or due other
// cases (such as StateDB error). At this initial
// version of the TxSelector, we discard the L2Tx and
// log the error, assuming that this will be iterated in
// a near future.
return nil, nil, nil, nil, nil, nil,
tracerr.Wrap(fmt.Errorf("TxSelector: txprocessor.ProcessL2Tx: %w", err))
}
finalL2Txs = append(finalL2Txs, selectedL2Txs[i])
log.Debugw("txselector.getL1L2TxSelection at ProcessL2Tx", "err", err)
// Discard L2Tx, and update Info parameter of the tx,
// and add it to the discardedTxs array
l2Txs[i].Info = fmt.Sprintf("Tx not selected (in ProcessL2Tx) due to %s",
err.Error())
discardedL2Txs = append(discardedL2Txs, l2Txs[i])
continue
}
// distribute the AccumulatedFees from the processed L2Txs into the
// Coordinator Idxs
for idx, accumulatedFee := range tp.AccumulatedFees {
cmp := accumulatedFee.Cmp(big.NewInt(0))
if cmp == 1 { // accumulatedFee>0
// send the fee to the Idx of the Coordinator for the TokenID
accCoord, err := txsel.localAccountsDB.GetAccount(idx)
if err != nil {
log.Errorw("Can not distribute accumulated fees to coordinator "+
"account: No coord Idx to receive fee", "idx", idx)
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
accCoord.Balance = new(big.Int).Add(accCoord.Balance, accumulatedFee)
_, err = txsel.localAccountsDB.UpdateAccount(idx, accCoord)
if err != nil {
log.Error(err)
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
}
}
validTxs = append(validTxs, l2Txs[i])
} // after this loop, no checks to discard txs should be done
err = tp.StateDB().MakeCheckpoint()
if err != nil {
return nil, nil, nil, nil, nil, nil, tracerr.Wrap(err)
}
metricSelectedL1CoordinatorTxs.Set(float64(len(l1CoordinatorTxs)))
metricSelectedL1UserTxs.Set(float64(len(l1UserTxs)))
metricSelectedL2Txs.Set(float64(len(finalL2Txs)))
metricDiscardedL2Txs.Set(float64(len(discardedL2Txs)))
return coordIdxs, accAuths, l1UserTxs, l1CoordinatorTxs, finalL2Txs, discardedL2Txs, nil
return accAuths, l1CoordinatorTxs, validTxs, discardedL2Txs, nil
}
// processTxToEthAddrBJJ processes the common.PoolL2Tx in the case where
@@ -567,7 +674,10 @@ func (txsel *TxSelector) processTxToEthAddrBJJ(validTxs []common.PoolL2Tx,
Type: common.TxTypeCreateAccountDeposit,
}
}
if len(l1CoordinatorTxs) >= int(selectionConfig.MaxL1Tx)-nL1UserTxs {
// if there is no space for the L1CoordinatorTx as MaxL1Tx, or no space
// for L1CoordinatorTx + L2Tx as MaxTx, discard the L2Tx
if len(l1CoordinatorTxs) >= int(selectionConfig.MaxL1Tx)-nL1UserTxs ||
len(l1CoordinatorTxs)+1 >= int(selectionConfig.MaxTx)-nL1UserTxs {
// L2Tx discarded
return nil, nil, nil, tracerr.Wrap(fmt.Errorf("L2Tx discarded due to no available slots " +
"for L1CoordinatorTx to create a new account for receiver of L2Tx"))
@@ -588,26 +698,14 @@ func checkAlreadyPendingToCreate(l1CoordinatorTxs []common.L1Tx, tokenID common.
return false
}
// getL2Profitable returns the profitable selection of L2Txssorted by Nonce
func (txsel *TxSelector) getL2Profitable(l2Txs []common.PoolL2Tx, max uint32) ([]common.PoolL2Tx,
[]common.PoolL2Tx) {
// First sort by nonce so that txs from the same account are sorted so
// that they could be applied in succession.
sort.Slice(l2Txs, func(i, j int) bool {
return l2Txs[i].Nonce < l2Txs[j].Nonce
})
// sortL2Txs sorts the PoolL2Txs by AbsoluteFee and then by Nonce
func sortL2Txs(l2Txs []common.PoolL2Tx) []common.PoolL2Tx {
// Sort by absolute fee with SliceStable, so that txs with same
// AbsoluteFee are not rearranged and nonce order is kept in such case
sort.SliceStable(l2Txs, func(i, j int) bool {
return l2Txs[i].AbsoluteFee > l2Txs[j].AbsoluteFee
})
discardedL2Txs := []common.PoolL2Tx{}
if len(l2Txs) > int(max) {
discardedL2Txs = l2Txs[max:]
l2Txs = l2Txs[:max]
}
// sort l2Txs by Nonce. This can be done in many different ways; what
// is needed is to output the l2Txs with the Nonces of each Account's
// l2Txs sorted, but the l2Txs can not be grouped by sender Account
@@ -617,5 +715,29 @@ func (txsel *TxSelector) getL2Profitable(l2Txs []common.PoolL2Tx, max uint32) ([
return l2Txs[i].Nonce < l2Txs[j].Nonce
})
return l2Txs, discardedL2Txs
return l2Txs
}
func splitL2ForgableAndNonForgable(tp *txprocessor.TxProcessor,
l2Txs []common.PoolL2Tx) ([]common.PoolL2Tx, []common.PoolL2Tx) {
var l2TxsForgable, l2TxsNonForgable []common.PoolL2Tx
for i := 0; i < len(l2Txs); i++ {
accSender, err := tp.StateDB().GetAccount(l2Txs[i].FromIdx)
if err != nil {
l2TxsNonForgable = append(l2TxsNonForgable, l2Txs[i])
continue
}
if l2Txs[i].Nonce != accSender.Nonce {
l2TxsNonForgable = append(l2TxsNonForgable, l2Txs[i])
continue
}
enoughBalance, _, _ := tp.CheckEnoughBalance(l2Txs[i])
if !enoughBalance {
l2TxsNonForgable = append(l2TxsNonForgable, l2Txs[i])
continue
}
l2TxsForgable = append(l2TxsForgable, l2Txs[i])
}
return l2TxsForgable, l2TxsNonForgable
}
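Putting the pieces together: the selection now splits the pending pool by forgability against the current state, stable-sorts each half, processes the forgable half first, and only spends leftover slots on the non-forgable half, which the first pass may have unblocked. The following stand-alone sketch models that flow with a deliberately simplified account and tx model; every name in it is illustrative rather than the repository's API. It recreates the scenario from the commit message, where a high-fee tx is non-forgable until a low-fee tx funds its sender:

package main

import (
	"fmt"
	"sort"
)

// tx and account model only the fields the selector's checks use.
type tx struct {
	From, To       string
	Nonce          int64
	Fee            float64 // ordering key, like AbsoluteFee
	Amount, FeeAmt int64
}

type account struct {
	Nonce   int64
	Balance int64
}

// sortTxs mirrors sortL2Txs above: stable-sort by Fee descending, then
// by Nonce, so each sender's nonces stay sequential in the output.
func sortTxs(txs []tx) []tx {
	sort.SliceStable(txs, func(i, j int) bool { return txs[i].Fee > txs[j].Fee })
	sort.SliceStable(txs, func(i, j int) bool { return txs[i].Nonce < txs[j].Nonce })
	return txs
}

// split mirrors splitL2ForgableAndNonForgable: forgable means the
// sender exists, the Nonce matches and the Balance covers Amount+Fee
// against the *current* state.
func split(accs map[string]*account, txs []tx) (forgable, nonForgable []tx) {
	for _, t := range txs {
		acc, ok := accs[t.From]
		if !ok || t.Nonce != acc.Nonce || acc.Balance < t.Amount+t.FeeAmt {
			nonForgable = append(nonForgable, t)
			continue
		}
		forgable = append(forgable, t)
	}
	return forgable, nonForgable
}

// apply processes txs in order against the evolving state, up to maxTxs
// selected txs; failing txs would go to discardedL2Txs with an Info msg.
func apply(accs map[string]*account, txs, selected []tx, maxTxs int) []tx {
	for _, t := range txs {
		if len(selected) >= maxTxs {
			break
		}
		from, to := accs[t.From], accs[t.To]
		if from == nil || to == nil || t.Nonce != from.Nonce ||
			from.Balance < t.Amount+t.FeeAmt {
			continue
		}
		from.Nonce++
		from.Balance -= t.Amount + t.FeeAmt
		to.Balance += t.Amount
		selected = append(selected, t)
	}
	return selected
}

func main() {
	accs := map[string]*account{"A": {Balance: 100}, "B": {}}
	pool := []tx{
		// high fee, but B is unfunded: non-forgable on the first pass
		{From: "B", To: "A", Nonce: 0, Fee: 9, Amount: 40, FeeAmt: 5},
		// low fee, but forgable now; funds B for the second pass
		{From: "A", To: "B", Nonce: 0, Fee: 1, Amount: 50, FeeAmt: 10},
	}
	forgable, nonForgable := split(accs, pool)
	selected := apply(accs, sortTxs(forgable), nil, 10)
	// second pass: the previously non-forgable high-fee tx is unblocked
	selected = apply(accs, sortTxs(nonForgable), selected, 10)
	fmt.Println(len(selected)) // 2: the low-fee A->B first, then B->A
}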

View File

@@ -26,11 +26,13 @@ import (
)
func initTest(t *testing.T, chainID uint16, hermezContractAddr ethCommon.Address,
coordUser *til.User) *TxSelector {
coordUser *til.User) (*TxSelector, *historydb.HistoryDB) {
pass := os.Getenv("POSTGRES_PASS")
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
historyDB := historydb.NewHistoryDB(db, db, nil)
dir, err := ioutil.TempDir("", "tmpdb")
require.NoError(t, err)
@@ -65,7 +67,7 @@ func initTest(t *testing.T, chainID uint16, hermezContractAddr ethCommon.Address
test.WipeDB(txsel.l2db.DB())
return txsel
return txsel, historyDB
}
func addAccCreationAuth(t *testing.T, tc *til.Context, txsel *TxSelector, chainID uint16,
@@ -157,7 +159,7 @@ func TestGetL2TxSelectionMinimumFlow0(t *testing.T) {
assert.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
txsel, _ := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
@@ -276,22 +278,23 @@ func TestGetL2TxSelectionMinimumFlow0(t *testing.T) {
assert.True(t, l2TxsFromDB[0].VerifySignature(chainID, tc.Users["A"].BJJ.Public().Compress()))
assert.True(t, l2TxsFromDB[1].VerifySignature(chainID, tc.Users["B"].BJJ.Public().Compress()))
l1UserTxs = til.L1TxsToCommonL1Txs(tc.Queues[*blocks[0].Rollup.Batches[6].Batch.ForgeL1TxsNum])
coordIdxs, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, _, err :=
coordIdxs, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err :=
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
assert.Equal(t, []common.Idx{261, 262}, coordIdxs)
assert.Equal(t, []common.Idx{261, 263}, coordIdxs)
assert.Equal(t, txsel.coordAccount.AccountCreationAuth, accAuths[0])
assert.Equal(t, txsel.coordAccount.AccountCreationAuth, accAuths[1])
assert.Equal(t, accAuthSig0, accAuths[2])
assert.Equal(t, txsel.coordAccount.AccountCreationAuth, accAuths[2])
assert.Equal(t, accAuthSig0, accAuths[1])
assert.Equal(t, accAuthSig1, accAuths[3])
assert.Equal(t, 1, len(oL1UserTxs))
assert.Equal(t, 4, len(oL1CoordTxs))
assert.Equal(t, 2, len(oL2Txs))
assert.Equal(t, 0, len(discardedL2Txs))
assert.Equal(t, len(oL1CoordTxs), len(accAuths))
assert.Equal(t, common.BatchNum(7), txsel.localAccountsDB.CurrentBatch())
assert.Equal(t, common.Idx(264), txsel.localAccountsDB.CurrentIdx())
checkBalanceByIdx(t, txsel, 261, "20") // CoordIdx for TokenID=1
checkBalanceByIdx(t, txsel, 262, "10") // CoordIdx for TokenID=0
checkBalance(t, tc, txsel, "Coord", 1, "20") // CoordIdx for TokenID=1
checkBalance(t, tc, txsel, "Coord", 0, "10") // CoordIdx for TokenID=1
checkBalance(t, tc, txsel, "A", 0, "600")
checkBalance(t, tc, txsel, "A", 1, "280")
checkBalance(t, tc, txsel, "B", 0, "290")
@@ -324,19 +327,20 @@ func TestGetL2TxSelectionMinimumFlow0(t *testing.T) {
assert.True(t, l2TxsFromDB[2].VerifySignature(chainID, tc.Users["B"].BJJ.Public().Compress()))
assert.True(t, l2TxsFromDB[3].VerifySignature(chainID, tc.Users["A"].BJJ.Public().Compress()))
l1UserTxs = til.L1TxsToCommonL1Txs(tc.Queues[*blocks[0].Rollup.Batches[7].Batch.ForgeL1TxsNum])
coordIdxs, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, _, err =
coordIdxs, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err =
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
assert.Equal(t, []common.Idx{261, 262}, coordIdxs)
assert.Equal(t, []common.Idx{261, 263}, coordIdxs)
assert.Equal(t, 0, len(accAuths))
assert.Equal(t, 0, len(oL1UserTxs))
assert.Equal(t, 0, len(oL1CoordTxs))
assert.Equal(t, 4, len(oL2Txs))
assert.Equal(t, 0, len(discardedL2Txs))
assert.Equal(t, len(oL1CoordTxs), len(accAuths))
assert.Equal(t, common.BatchNum(8), txsel.localAccountsDB.CurrentBatch())
assert.Equal(t, common.Idx(264), txsel.localAccountsDB.CurrentIdx())
checkBalanceByIdx(t, txsel, 261, "30")
checkBalanceByIdx(t, txsel, 262, "35")
checkBalance(t, tc, txsel, "Coord", 1, "30") // CoordIdx for TokenID=1
checkBalance(t, tc, txsel, "Coord", 0, "35") // CoordIdx for TokenID=1
checkBalance(t, tc, txsel, "A", 0, "430")
checkBalance(t, tc, txsel, "A", 1, "280")
checkBalance(t, tc, txsel, "B", 0, "390")
@@ -370,7 +374,7 @@ func TestGetL2TxSelectionMinimumFlow0(t *testing.T) {
coordIdxs, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, _, err =
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
assert.Equal(t, []common.Idx{262}, coordIdxs)
assert.Equal(t, []common.Idx{263}, coordIdxs)
assert.Equal(t, 0, len(accAuths))
assert.Equal(t, 4, len(oL1UserTxs))
assert.Equal(t, 0, len(oL1CoordTxs))
@@ -379,7 +383,7 @@ func TestGetL2TxSelectionMinimumFlow0(t *testing.T) {
assert.Equal(t, common.BatchNum(9), txsel.localAccountsDB.CurrentBatch())
assert.Equal(t, common.Idx(264), txsel.localAccountsDB.CurrentIdx())
checkBalanceByIdx(t, txsel, 261, "30")
checkBalanceByIdx(t, txsel, 262, "75")
checkBalanceByIdx(t, txsel, 263, "75")
checkBalance(t, tc, txsel, "A", 0, "730")
checkBalance(t, tc, txsel, "A", 1, "280")
checkBalance(t, tc, txsel, "B", 0, "380")
@@ -415,7 +419,7 @@ func TestPoolL2TxsWithoutEnoughBalance(t *testing.T) {
assert.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
txsel, _ := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
@@ -468,11 +472,6 @@ func TestPoolL2TxsWithoutEnoughBalance(t *testing.T) {
tc.RestartNonces()
// batch3
// NOTE: this batch will result with 1 L2Tx, as the PoolExit tx is not
// possible, as the PoolTransferToEthAddr is not processed yet when
// checking availability of PoolExit. This, in a near-future iteration
// of the TxSelector will return the 2 transactions as valid and
// selected, as the TxSelector will handle this kind of combinations.
batchPoolL2 = `
Type: PoolL2
PoolTransferToEthAddr(0) A-B: 50 (126)`
@@ -486,11 +485,11 @@ func TestPoolL2TxsWithoutEnoughBalance(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 0, len(oL1UserTxs))
assert.Equal(t, 0, len(oL1CoordTxs))
assert.Equal(t, 1, len(oL2Txs)) // see 'NOTE' at the beginning of 'batch3' of this test
assert.Equal(t, 2, len(discardedL2Txs))
assert.Equal(t, 2, len(oL2Txs))
assert.Equal(t, 1, len(discardedL2Txs))
assert.Equal(t, expectedTxID2, oL2Txs[0].TxID.String())
assert.Equal(t, expectedTxID1, oL2Txs[1].TxID.String())
assert.Equal(t, expectedTxID0, discardedL2Txs[0].TxID.String())
assert.Equal(t, expectedTxID1, discardedL2Txs[1].TxID.String())
assert.Equal(t, common.TxTypeTransferToEthAddr, oL2Txs[0].Type)
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
txsel.localAccountsDB.CurrentBatch())
@@ -505,12 +504,8 @@ func TestPoolL2TxsWithoutEnoughBalance(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 0, len(oL1UserTxs))
assert.Equal(t, 0, len(oL1CoordTxs))
assert.Equal(t, 1, len(oL2Txs))
assert.Equal(t, 0, len(oL2Txs))
assert.Equal(t, 1, len(discardedL2Txs))
// the Exit that was not accepted at the batch2
assert.Equal(t, expectedTxID1, oL2Txs[0].TxID.String())
assert.Equal(t, expectedTxID0, discardedL2Txs[0].TxID.String())
assert.Equal(t, common.TxTypeExit, oL2Txs[0].Type)
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
txsel.localAccountsDB.CurrentBatch())
require.NoError(t, err)
@@ -537,7 +532,7 @@ func TestTransferToBjj(t *testing.T) {
assert.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
txsel, _ := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
@@ -580,7 +575,6 @@ func TestTransferToBjj(t *testing.T) {
require.Equal(t, 1, len(oL1CoordTxs))
assert.Equal(t, poolL2Txs[0].ToEthAddr, oL1CoordTxs[0].FromEthAddr)
assert.Equal(t, poolL2Txs[0].ToBJJ, oL1CoordTxs[0].FromBJJ)
// fmt.Printf("DBG l1CoordTx[0]: %+v\n", oL1CoordTxs[0])
assert.Equal(t, 1, len(oL2Txs))
assert.Equal(t, 0, len(discardedL2Txs))
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
@@ -669,7 +663,7 @@ func TestTransferManyFromSameAccount(t *testing.T) {
assert.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
txsel, _ := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
@@ -712,7 +706,8 @@ func TestTransferManyFromSameAccount(t *testing.T) {
// add the PoolL2Txs to the l2DB
addL2Txs(t, txsel, poolL2Txs)
// batch 2 to crate some accounts with positive balance, and do 8 L2Tx transfers from account A
// batch 2 to create some accounts with positive balance, and do 8 L2Tx
// transfers from account A
l1UserTxs = til.L1TxsToCommonL1Txs(tc.Queues[*blocks[0].Rollup.Batches[1].Batch.ForgeL1TxsNum])
_, _, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err :=
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
@@ -720,7 +715,7 @@ func TestTransferManyFromSameAccount(t *testing.T) {
assert.Equal(t, 3, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs))
assert.Equal(t, 7, len(oL2Txs))
assert.Equal(t, 1, len(discardedL2Txs))
assert.Equal(t, 4, len(discardedL2Txs))
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
txsel.localAccountsDB.CurrentBatch())
@@ -750,7 +745,7 @@ func TestPoolL2TxInvalidNonces(t *testing.T) {
assert.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
txsel, _ := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
@@ -803,8 +798,8 @@ func TestPoolL2TxInvalidNonces(t *testing.T) {
require.NoError(t, err)
require.Equal(t, 3, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs))
require.Equal(t, 2, len(oL2Txs))
require.Equal(t, 8, len(discardedL2Txs))
require.Equal(t, 0, len(oL2Txs))
require.Equal(t, 10, len(discardedL2Txs))
require.Equal(t, 0, len(accAuths))
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
@@ -819,7 +814,7 @@ func TestPoolL2TxInvalidNonces(t *testing.T) {
require.Equal(t, 0, len(oL1UserTxs))
require.Equal(t, 3, len(oL1CoordTxs))
require.Equal(t, 6, len(oL2Txs))
require.Equal(t, 8, len(oL2Txs))
require.Equal(t, 2, len(discardedL2Txs))
require.Equal(t, 3, len(accAuths))
@@ -843,3 +838,203 @@ func TestPoolL2TxInvalidNonces(t *testing.T) {
txsel.localAccountsDB.CurrentBatch())
require.NoError(t, err)
}
func TestProcessL2Selection(t *testing.T) {
set := `
Type: Blockchain
CreateAccountDeposit(0) Coord: 0
CreateAccountDeposit(0) A: 18
CreateAccountDeposit(0) B: 0
> batchL1 // freeze L1User{3}
> batchL1 // forge L1User{3}
> block
`
chainID := uint16(0)
tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
blocks, err := tc.GenerateBlocks(set)
assert.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel, _ := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
tc.RestartNonces()
tpc := txprocessor.Config{
NLevels: 16,
MaxFeeTx: 10,
MaxTx: 10,
MaxL1Tx: 10,
ChainID: chainID,
}
// batch1 to freeze L1UserTxs
l1UserTxs := []common.L1Tx{}
_, _, _, _, _, _, err = txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
// 3 transfers from the same account
batchPoolL2 := `
Type: PoolL2
PoolTransfer(0) A-B: 10 (126)
PoolTransfer(0) A-B: 10 (126) // not enough funds
PoolTransfer(0) A-B: 5 (126) // enough funds
`
poolL2Txs, err := tc.GeneratePoolL2Txs(batchPoolL2)
require.NoError(t, err)
require.Equal(t, 3, len(poolL2Txs))
// add the PoolL2Txs to the l2DB
addL2Txs(t, txsel, poolL2Txs)
// batch 2 to create some accounts with positive balance, and do the 3 L2Tx transfers from account A
l1UserTxs = til.L1TxsToCommonL1Txs(tc.Queues[*blocks[0].Rollup.Batches[1].Batch.ForgeL1TxsNum])
_, _, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err :=
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
assert.Equal(t, 3, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs))
// only the 1st L2Tx should be accepted, as:
// - the 2nd will not be selected because it does not have enough funds
// - the 3rd will not be selected because it has Nonce=2 while the
//   account Nonce==1 (due to the 2nd tx not being accepted)
assert.Equal(t, 1, len(oL2Txs))
assert.Equal(t, 2, len(discardedL2Txs))
assert.Equal(t, common.Nonce(0), oL2Txs[0].Nonce)
assert.Equal(t, common.Nonce(1), discardedL2Txs[0].Nonce)
assert.Equal(t, common.Nonce(2), discardedL2Txs[1].Nonce)
assert.Equal(t, "Tx not selected due to not enough Balance at the sender. "+
"Current sender account Balance: 7, Amount+Fee: 11", discardedL2Txs[0].Info)
assert.Equal(t, "Tx not selected due to not current Nonce. Tx.Nonce: 2, "+
"Account.Nonce: 1", discardedL2Txs[1].Info)
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
txsel.localAccountsDB.CurrentBatch())
require.NoError(t, err)
}
func TestValidTxsWithLowFeeAndInvalidTxsWithHighFee(t *testing.T) {
// This test recreates the case where there are valid txs with low fee
// and invalid (not yet forgable) txs with high fee in the pool
set := `
Type: Blockchain
CreateAccountDeposit(0) Coord: 0
CreateAccountDeposit(0) A: 100
CreateAccountDeposit(0) B: 0
> batchL1 // Batch1: freeze L1User{3}
> batchL1 // Batch2: forge L1User{3}
> block
`
chainID := uint16(0)
tc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
tilCfgExtra := til.ConfigExtra{
BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
CoordUser: "Coord",
}
blocks, err := tc.GenerateBlocks(set)
require.NoError(t, err)
err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
require.NoError(t, err)
err = tc.FillBlocksForgedL1UserTxs(blocks)
require.NoError(t, err)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
txsel, historyDB := initTest(t, chainID, hermezContractAddr, tc.Users["Coord"])
// Insert blocks into DB
for i := range blocks {
err = historyDB.AddBlockSCData(&blocks[i])
assert.NoError(t, err)
}
err = historyDB.UpdateTokenValue(common.EmptyAddr, 1000)
require.NoError(t, err)
// restart nonces of TilContext, as will be set by generating directly
// the PoolL2Txs for each specific batch with tc.GeneratePoolL2Txs
tc.RestartNonces()
tpc := txprocessor.Config{
NLevels: 16,
MaxFeeTx: 5,
MaxTx: 5,
MaxL1Tx: 3,
ChainID: chainID,
}
// batch1 to freeze L1UserTxs
l1UserTxs := []common.L1Tx{}
_, _, _, _, _, _, err = txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
// batch 2 to create the accounts (from L1UserTxs)
l1UserTxs = til.L1TxsToCommonL1Txs(tc.Queues[*blocks[0].Rollup.Batches[1].Batch.ForgeL1TxsNum])
// select L1 & L2 txs
_, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err :=
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
require.Equal(t, 3, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs))
require.Equal(t, 0, len(oL2Txs))
require.Equal(t, 0, len(discardedL2Txs))
require.Equal(t, 0, len(accAuths))
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
txsel.localAccountsDB.CurrentBatch())
require.NoError(t, err)
// batch 3. The A-B txs have a lower fee, but are the only ones possible
// with the current Account Balances. The B-A tx of amount 40 will not
// be included, as it is processed first (the TxSelector sorts by Fee
// and then by Nonce) when there is not yet enough balance at B.
batchPoolL2 := `
Type: PoolL2
PoolTransfer(0) B-A: 40 (130) // B-A txs are only possible once A-B txs are processed
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) B-A: 1 (126)
PoolTransfer(0) A-B: 20 (20)
PoolTransfer(0) A-B: 25 (150)
PoolTransfer(0) A-B: 20 (20)
`
poolL2Txs, err := tc.GeneratePoolL2Txs(batchPoolL2)
require.NoError(t, err)
require.Equal(t, 11, len(poolL2Txs))
// add the PoolL2Txs to the l2DB
addL2Txs(t, txsel, poolL2Txs)
l1UserTxs = []common.L1Tx{}
_, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err =
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
require.Equal(t, 0, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs))
require.Equal(t, 3, len(oL2Txs)) // the 3 txs A-B
require.Equal(t, 8, len(discardedL2Txs)) // the 8 txs B-A
require.Equal(t, 0, len(accAuths))
err = txsel.l2db.StartForging(common.TxIDsFromPoolL2Txs(oL2Txs),
txsel.localAccountsDB.CurrentBatch())
require.NoError(t, err)
// batch 4. In this Batch, account B has enough balance to send the txs
_, accAuths, oL1UserTxs, oL1CoordTxs, oL2Txs, discardedL2Txs, err =
txsel.GetL1L2TxSelection(tpc, l1UserTxs)
require.NoError(t, err)
require.Equal(t, 0, len(oL1UserTxs))
require.Equal(t, 0, len(oL1CoordTxs))
require.Equal(t, 5, len(oL2Txs))
require.Equal(t, 3, len(discardedL2Txs))
require.Equal(t, 0, len(accAuths))
}