Mirror of https://github.com/arnaucube/hermez-node.git (synced 2026-02-07 11:26:44 +01:00)
Compare commits: feature/fa ... feature/to (1 commit)

| Author | SHA1 | Date |
|---|---|---|
| | 5ccea68905 | |
.gitignore (vendored): 1 deletion

@@ -1 +0,0 @@
bin/
LICENSE: 661 deletions

@@ -1,661 +0,0 @@
                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

  A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate.  Many developers of free software are heartened and
encouraged by the resulting cooperation.  However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

  The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community.  It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server.  Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

  An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals.  This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU Affero General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Remote Network Interaction; Use with the GNU General Public License.

  Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
|
|
||||||
interacting with it remotely through a computer network (if your version
|
|
||||||
supports such interaction) an opportunity to receive the Corresponding
|
|
||||||
Source of your version by providing access to the Corresponding Source
|
|
||||||
from a network server at no charge, through some standard or customary
|
|
||||||
means of facilitating copying of software. This Corresponding Source
|
|
||||||
shall include the Corresponding Source for any work covered by version 3
|
|
||||||
of the GNU General Public License that is incorporated pursuant to the
|
|
||||||
following paragraph.
|
|
||||||
|
|
||||||
Notwithstanding any other provision of this License, you have
|
|
||||||
permission to link or combine any covered work with a work licensed
|
|
||||||
under version 3 of the GNU General Public License into a single
|
|
||||||
combined work, and to convey the resulting work. The terms of this
|
|
||||||
License will continue to apply to the part which is the covered work,
|
|
||||||
but the work with which it is combined will remain governed by version
|
|
||||||
3 of the GNU General Public License.
|
|
||||||
|
|
||||||
14. Revised Versions of this License.
|
|
||||||
|
|
||||||
The Free Software Foundation may publish revised and/or new versions of
|
|
||||||
the GNU Affero General Public License from time to time. Such new versions
|
|
||||||
will be similar in spirit to the present version, but may differ in detail to
|
|
||||||
address new problems or concerns.
|
|
||||||
|
|
||||||
Each version is given a distinguishing version number. If the
|
|
||||||
Program specifies that a certain numbered version of the GNU Affero General
|
|
||||||
Public License "or any later version" applies to it, you have the
|
|
||||||
option of following the terms and conditions either of that numbered
|
|
||||||
version or of any later version published by the Free Software
|
|
||||||
Foundation. If the Program does not specify a version number of the
|
|
||||||
GNU Affero General Public License, you may choose any version ever published
|
|
||||||
by the Free Software Foundation.
|
|
||||||
|
|
||||||
If the Program specifies that a proxy can decide which future
|
|
||||||
versions of the GNU Affero General Public License can be used, that proxy's
|
|
||||||
public statement of acceptance of a version permanently authorizes you
|
|
||||||
to choose that version for the Program.
|
|
||||||
|
|
||||||
Later license versions may give you additional or different
|
|
||||||
permissions. However, no additional obligations are imposed on any
|
|
||||||
author or copyright holder as a result of your choosing to follow a
|
|
||||||
later version.
|
|
||||||
|
|
||||||
15. Disclaimer of Warranty.
|
|
||||||
|
|
||||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
|
||||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
|
||||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
|
||||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
|
||||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
|
||||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
|
||||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
|
||||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
|
||||||
|
|
||||||
16. Limitation of Liability.
|
|
||||||
|
|
||||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
|
||||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
|
||||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
|
||||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
|
||||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
|
||||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
|
||||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
|
||||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
|
||||||
SUCH DAMAGES.
|
|
||||||
|
|
||||||
17. Interpretation of Sections 15 and 16.
|
|
||||||
|
|
||||||
If the disclaimer of warranty and limitation of liability provided
|
|
||||||
above cannot be given local legal effect according to their terms,
|
|
||||||
reviewing courts shall apply local law that most closely approximates
|
|
||||||
an absolute waiver of all civil liability in connection with the
|
|
||||||
Program, unless a warranty or assumption of liability accompanies a
|
|
||||||
copy of the Program in return for a fee.
|
|
||||||
|
|
||||||
END OF TERMS AND CONDITIONS
|
|
||||||
|
|
||||||
How to Apply These Terms to Your New Programs
|
|
||||||
|
|
||||||
If you develop a new program, and you want it to be of the greatest
|
|
||||||
possible use to the public, the best way to achieve this is to make it
|
|
||||||
free software which everyone can redistribute and change under these terms.
|
|
||||||
|
|
||||||
To do so, attach the following notices to the program. It is safest
|
|
||||||
to attach them to the start of each source file to most effectively
|
|
||||||
state the exclusion of warranty; and each file should have at least
|
|
||||||
the "copyright" line and a pointer to where the full notice is found.
|
|
||||||
|
|
||||||
<one line to give the program's name and a brief idea of what it does.>
|
|
||||||
Copyright (C) <year> <name of author>
|
|
||||||
|
|
||||||
This program is free software: you can redistribute it and/or modify
|
|
||||||
it under the terms of the GNU Affero General Public License as published
|
|
||||||
by the Free Software Foundation, either version 3 of the License, or
|
|
||||||
(at your option) any later version.
|
|
||||||
|
|
||||||
This program is distributed in the hope that it will be useful,
|
|
||||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
|
||||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
|
||||||
GNU Affero General Public License for more details.
|
|
||||||
|
|
||||||
You should have received a copy of the GNU Affero General Public License
|
|
||||||
along with this program. If not, see <https://www.gnu.org/licenses/>.
|
|
||||||
|
|
||||||
Also add information on how to contact you by electronic and paper mail.
|
|
||||||
|
|
||||||
If your software can interact with users remotely through a computer
|
|
||||||
network, you should also make sure that it provides a way for users to
|
|
||||||
get its source. For example, if your program is a web application, its
|
|
||||||
interface could display a "Source" link that leads users to an archive
|
|
||||||
of the code. There are many ways you could offer source, and different
|
|
||||||
solutions will be better for different programs; see section 13 for the
|
|
||||||
specific requirements.
|
|
||||||
|
|
||||||
You should also get your employer (if you work as a programmer) or school,
|
|
||||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
|
||||||
For more information on this, and how to apply and follow the GNU AGPL, see
|
|
||||||
<https://www.gnu.org/licenses/>.
|
|
||||||
Makefile (deleted, 135 lines)
@@ -1,135 +0,0 @@
#! /usr/bin/make -f

# Project variables.
PACKAGE := github.com/hermeznetwork/hermez-node
VERSION := $(shell git describe --tags --always)
BUILD := $(shell git rev-parse --short HEAD)
BUILD_DATE := $(shell date +%Y-%m-%dT%H:%M:%S%z)
PROJECT_NAME := $(shell basename "$(PWD)")

# Go related variables.
GO_FILES ?= $$(find . -name '*.go' | grep -v vendor)
GOBASE := $(shell pwd)
GOBIN := $(GOBASE)/bin
GOPKG := $(.)
GOENVVARS := GOBIN=$(GOBIN)
GOCMD := $(GOBASE)/cli/node
GOPROOF := $(GOBASE)/test/proofserver/cli
GOBINARY := node

# Project configs.
MODE ?= sync
CONFIG ?= $(GOBASE)/cli/node/cfg.buidler.toml
POSTGRES_PASS ?= yourpasswordhere

# Use linker flags to provide version/build settings.
LDFLAGS=-ldflags "-X=main.Version=$(VERSION) -X=main.Build=$(BUILD) -X=main.Date=$(BUILD_DATE)"

# PID file will keep the process id of the server.
PID_PROOF_MOCK := /tmp/.$(PROJECT_NAME).proof.pid

# Make is verbose in Linux. Make it silent.
MAKEFLAGS += --silent

.PHONY: help
help: Makefile
	@echo
	@echo " Choose a command run in "$(PROJECT_NAME)":"
	@echo
	@sed -n 's/^##//p' $< | column -t -s ':' | sed -e 's/^/ /'
	@echo

## test: Run the application check and all tests.
test: govet gocilint test-unit

## test-unit: Run all unit tests.
test-unit:
	@echo " > Running unit tests"
	$(GOENVVARS) go test -race -p 1 -failfast -timeout 300s -v ./...

## test-api-server: Run the API server using the Go tests.
test-api-server:
	@echo " > Running unit tests"
	$(GOENVVARS) FAKE_SERVER=yes go test -timeout 0 ./api -p 1 -count 1 -v

## gofmt: Run `go fmt` for all go files.
gofmt:
	@echo " > Format all go files"
	$(GOENVVARS) gofmt -w ${GO_FILES}

## govet: Run go vet.
govet:
	@echo " > Running go vet"
	$(GOENVVARS) go vet ./...

## golint: Run default golint.
golint:
	@echo " > Running golint"
	$(GOENVVARS) golint -set_exit_status ./...

## gocilint: Run Golang CI Lint.
gocilint:
	@echo " > Running Golang CI Lint"
	golangci-lint run --timeout=5m -E whitespace -E gosec -E gci -E misspell -E gomnd -E gofmt -E goimports -E golint --exclude-use-default=false --max-same-issues 0

## exec: Run given command. e.g; make exec run="go test ./..."
exec:
	GOBIN=$(GOBIN) $(run)

## clean: Clean build files. Runs `go clean` internally.
clean:
	@-rm $(GOBIN)/ 2> /dev/null
	@echo " > Cleaning build cache"
	$(GOENVVARS) go clean

## build: Build the project.
build: install
	@echo " > Building Hermez binary..."
	@bash -c "$(MAKE) migration-pack"
	$(GOENVVARS) go build $(LDFLAGS) -o $(GOBIN)/$(GOBINARY) $(GOCMD)
	@bash -c "$(MAKE) migration-clean"

## install: Install missing dependencies. Runs `go get` internally. e.g; make install get=github.com/foo/bar
install:
	@echo " > Checking if there is any missing dependencies..."
	$(GOENVVARS) go get $(GOCMD)/... $(get)

## run: Run Hermez node.
run:
	@bash -c "$(MAKE) clean build"
	@echo " > Running $(PROJECT_NAME)"
	@$(GOBIN)/$(GOBINARY) --mode $(MODE) --cfg $(CONFIG) run

## run-proof-mock: Run proof server mock API.
run-proof-mock: stop-proof-mock
	@echo " > Running Proof Server Mock"
	$(GOENVVARS) go build -o $(GOBIN)/proof $(GOPROOF)
	@$(GOBIN)/proof 2>&1 & echo $$! > $(PID_PROOF_MOCK)
	@cat $(PID_PROOF_MOCK) | sed "/^/s/^/ \> Proof Server Mock PID: /"

## stop-proof-mock: Stop proof server mock API.
stop-proof-mock:
	@-touch $(PID_PROOF_MOCK)
	@-kill -s INT `cat $(PID_PROOF_MOCK)` 2> /dev/null || true
	@-rm $(PID_PROOF_MOCK) $(GOBIN)/proof 2> /dev/null || true

## migration-pack: Pack the database migrations into the binary.
migration-pack:
	@echo " > Packing the migrations..."
	@cd /tmp && go get -u github.com/gobuffalo/packr/v2/packr2 && cd -
	@cd $(GOBASE)/db && packr2 && cd -

## migration-clean: Clean the database migrations pack.
migration-clean:
	@echo " > Cleaning the migrations..."
	@cd $(GOBASE)/db && packr2 clean && cd -

## run-database-container: Run the Postgres container
run-database-container:
	@echo " > Running the postgreSQL DB..."
	@-docker run --rm --name hermez-db -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$(POSTGRES_PASS)" -d postgres

## stop-database-container: Stop the Postgres container
stop-database-container:
	@echo " > Stopping the postgreSQL DB..."
	@-docker stop hermez-db
README.md (83 lines changed)
@@ -8,75 +8,42 @@ Go implementation of the Hermez node.
 
 The `hermez-node` has been tested with go version 1.14
 
-### Build
-
-Build the binary and check the current version:
-
-```shell
-$ make build
-$ bin/node version
-```
-
-### Run
-
-First you must edit the default/template config file into [cli/node/cfg.buidler.toml](cli/node/cfg.buidler.toml),
-there are more information about the config file into [cli/node/README.md](cli/node/README.md)
-
-After setting the config, you can build and run the Hermez Node as a synchronizer:
-
-```shell
-$ make run
-```
-
-Or build and run as a coordinator, and also passing the config file from other location:
-
-```shell
-$ MODE=sync CONFIG=cli/node/cfg.buidler.toml make run
-```
-
-To check the useful make commands:
-
-```shell
-$ make help
-```
-
 ### Unit testing
 
 Running the unit tests requires a connection to a PostgreSQL database. You can
-run PostgreSQL with docker easily this way (where `yourpasswordhere` should
+start PostgreSQL with docker easily this way (where `yourpasswordhere` should
 be your password):
 
-```shell
-$ POSTGRES_PASS="yourpasswordhere" make run-database-container
+```
+POSTGRES_PASS=yourpasswordhere sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$POSTGRES_PASS" -d postgres
 ```
 
-Afterward, run the tests with the password as env var:
+Afterwards, run the tests with the password as env var:
 
-```shell
-$ POSTGRES_PASS="yourpasswordhere" make test
+```
+POSTGRES_PASS=yourpasswordhere go test -p 1 ./...
 ```
 
-NOTE: `-p 1` forces execution of package test in serial. Otherwise, they may be
-executed in parallel, and the test may find unexpected entries in the SQL database
+NOTE: `-p 1` forces execution of package test in serial. Otherwise they may be
+executed in paralel and the test may find unexpected entries in the SQL databse
 because it's shared among all tests.
 
-There is an extra temporary option that allows you to run the API server using the
-Go tests. It will be removed once the API can be properly initialized with data
-from the synchronizer. To use this, run:
+There is an extra temporary option that allows you to run the API server using
+the Go tests. This will be removed once the API can be properly initialized,
+with data from the synchronizer and so on. To use this, run:
 
-```shell
-$ POSTGRES_PASS="yourpasswordhere" make test-api-server
+```
+FAKE_SERVER=yes POSTGRES_PASS=yourpasswordhere go test -timeout 0 ./api -p 1 -count 1 -v`
 ```
 
 ### Lint
 
 All Pull Requests need to pass the configured linter.
 
-To run the linter locally, first, install [golangci-lint](https://golangci-lint.run).
-Afterward, you can check the lints with this command:
+To run the linter locally, first install [golangci-lint](https://golangci-lint.run). Afterwards you can check the lints with this command:
 
-```shell
-$ make gocilint
+```
+golangci-lint run --timeout=5m -E whitespace -E gosec -E gci -E misspell -E gomnd -E gofmt -E goimports -E golint --exclude-use-default=false --max-same-issues 0
 ```
 
 ## Usage
@@ -87,13 +54,13 @@ See [cli/node/README.md](cli/node/README.md)
 
 ### Proof Server
 
-The node in mode coordinator requires a proof server (a server capable of
-calculating proofs from the zkInputs). There is a mock proof server CLI
-at `test/proofserver/cli` for testing purposes.
+The node in mode coordinator requires a proof server (a server that is capable
+of calculating proofs from the zkInputs). For testing purposes there is a mock
+proof server cli at `test/proofserver/cli`.
 
 Usage of `test/proofserver/cli`:
 
-```shell
+```
 USAGE:
     go run ./test/proofserver/cli OPTIONS
 
@@ -104,19 +71,11 @@ OPTIONS:
     proving time duration (default 2s)
 ```
 
-Also, the Makefile commands can be used to run and stop the proof server
-in the background:
-
-```shell
-$ make run-proof-mock
-$ make stop-proof-mock
-```
-
 ### `/tmp` as tmpfs
 
 For every processed batch, the node builds a temporary exit tree in a key-value
 DB stored in `/tmp`. It is highly recommended that `/tmp` is mounted as a RAM
-file system in production to avoid unnecessary reads a writes to disk. This
+file system in production to avoid unecessary reads an writes to disk. This
 can be done by mounting `/tmp` as tmpfs; for example, by having this line in
 `/etc/fstab`:
 ```
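The example `/etc/fstab` line itself is cut off in this capture. For illustration only (the mount options and size here are assumptions, not taken from the repository; size the tmpfs for your workload), such an entry typically looks like:

```
# illustrative /etc/fstab entry -- options and size are assumptions
tmpfs /tmp tmpfs defaults,noatime,size=2G 0 0
```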
@@ -4,7 +4,10 @@ import (
 	"net/http"
 
 	"github.com/gin-gonic/gin"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
+	"github.com/hermeznetwork/hermez-node/db/statedb"
+	"github.com/hermeznetwork/tracerr"
 )
 
 func (a *API) getAccount(c *gin.Context) {
@@ -20,6 +23,16 @@ func (a *API) getAccount(c *gin.Context) {
 		return
 	}
 
+	// Get balance from stateDB
+	account, err := a.s.LastGetAccount(*idx)
+	if err != nil {
+		retSQLErr(err, c)
+		return
+	}
+
+	apiAccount.Balance = apitypes.NewBigIntStr(account.Balance)
+	apiAccount.Nonce = account.Nonce
+
 	c.JSON(http.StatusOK, apiAccount)
 }
@@ -44,7 +57,27 @@ func (a *API) getAccounts(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Get balances from stateDB
+	if err := a.s.LastRead(func(sdb *statedb.Last) error {
+		for x, apiAccount := range apiAccounts {
+			idx, err := stringToIdx(string(apiAccount.Idx), "Account Idx")
+			if err != nil {
+				return tracerr.Wrap(err)
+			}
+			account, err := sdb.GetAccount(*idx)
+			if err != nil {
+				return tracerr.Wrap(err)
+			}
+			apiAccounts[x].Balance = apitypes.NewBigIntStr(account.Balance)
+			apiAccounts[x].Nonce = account.Nonce
+		}
+		return nil
+	}); err != nil {
+		retSQLErr(err, c)
+		return
+	}
+
+	// Build succesfull response
 	type accountResponse struct {
 		Accounts     []historydb.AccountAPI `json:"accounts"`
 		PendingItems uint64                 `json:"pendingItems"`
@@ -5,7 +5,7 @@ import (
 	"strconv"
 	"testing"
 
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/mitchellh/copystructure"
@@ -7,7 +7,7 @@ import (
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/gin-gonic/gin"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 )
@@ -47,7 +47,7 @@ func (a *API) getAccountCreationAuth(c *gin.Context) {
 		retSQLErr(err, c)
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	c.JSON(http.StatusOK, auth)
 }
api/api.go (85 lines changed)
@@ -2,19 +2,41 @@ package api
 
 import (
 	"errors"
+	"sync"
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/gin-gonic/gin"
+	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
+	"github.com/hermeznetwork/hermez-node/db/statedb"
 	"github.com/hermeznetwork/tracerr"
 )
 
+// TODO: Add correct values to constants
+const (
+	createAccountExtraFeePercentage         float64 = 2
+	createAccountInternalExtraFeePercentage float64 = 2.5
+)
+
+// Status define status of the network
+type Status struct {
+	sync.RWMutex
+	Network           Network                       `json:"network"`
+	Metrics           historydb.Metrics             `json:"metrics"`
+	Rollup            historydb.RollupVariablesAPI  `json:"rollup"`
+	Auction           historydb.AuctionVariablesAPI `json:"auction"`
+	WithdrawalDelayer common.WDelayerVariables      `json:"withdrawalDelayer"`
+	RecommendedFee    common.RecommendedFee         `json:"recommendedFee"`
+}
+
 // API serves HTTP requests to allow external interaction with the Hermez node
 type API struct {
 	h             *historydb.HistoryDB
 	cg            *configAPI
+	s             *statedb.StateDB
 	l2            *l2db.L2DB
+	status        Status
 	chainID       uint16
 	hermezAddress ethCommon.Address
 }
@@ -24,7 +46,9 @@ func NewAPI(
 	coordinatorEndpoints, explorerEndpoints bool,
 	server *gin.Engine,
 	hdb *historydb.HistoryDB,
+	sdb *statedb.StateDB,
 	l2db *l2db.L2DB,
+	config *Config,
 ) (*API, error) {
 	// Check input
 	// TODO: is stateDB only needed for explorer endpoints or for both?
@@ -34,56 +58,53 @@ func NewAPI(
 	if explorerEndpoints && hdb == nil {
 		return nil, tracerr.Wrap(errors.New("cannot serve Explorer endpoints without HistoryDB"))
 	}
-	consts, err := hdb.GetConstants()
-	if err != nil {
-		return nil, err
-	}
 	a := &API{
 		h: hdb,
 		cg: &configAPI{
-			RollupConstants:   *newRollupConstants(consts.Rollup),
-			AuctionConstants:  consts.Auction,
-			WDelayerConstants: consts.WDelayer,
+			RollupConstants:   *newRollupConstants(config.RollupConstants),
+			AuctionConstants:  config.AuctionConstants,
+			WDelayerConstants: config.WDelayerConstants,
 		},
+		s:  sdb,
 		l2: l2db,
-		chainID:       consts.ChainID,
-		hermezAddress: consts.HermezAddress,
+		status:        Status{},
+		chainID:       config.ChainID,
+		hermezAddress: config.HermezAddress,
 	}
 
-	v1 := server.Group("/v1")
-
 	// Add coordinator endpoints
 	if coordinatorEndpoints {
 		// Account
-		v1.POST("/account-creation-authorization", a.postAccountCreationAuth)
-		v1.GET("/account-creation-authorization/:hezEthereumAddress", a.getAccountCreationAuth)
+		server.POST("/account-creation-authorization", a.postAccountCreationAuth)
+		server.GET("/account-creation-authorization/:hezEthereumAddress", a.getAccountCreationAuth)
 		// Transaction
-		v1.POST("/transactions-pool", a.postPoolTx)
-		v1.GET("/transactions-pool/:id", a.getPoolTx)
+		server.POST("/transactions-pool", a.postPoolTx)
+		server.GET("/transactions-pool/:id", a.getPoolTx)
 	}
 
 	// Add explorer endpoints
 	if explorerEndpoints {
 		// Account
-		v1.GET("/accounts", a.getAccounts)
-		v1.GET("/accounts/:accountIndex", a.getAccount)
-		v1.GET("/exits", a.getExits)
-		v1.GET("/exits/:batchNum/:accountIndex", a.getExit)
+		server.GET("/accounts", a.getAccounts)
+		server.GET("/accounts/:accountIndex", a.getAccount)
+		server.GET("/exits", a.getExits)
+		server.GET("/exits/:batchNum/:accountIndex", a.getExit)
 		// Transaction
-		v1.GET("/transactions-history", a.getHistoryTxs)
-		v1.GET("/transactions-history/:id", a.getHistoryTx)
+		server.GET("/transactions-history", a.getHistoryTxs)
+		server.GET("/transactions-history/:id", a.getHistoryTx)
 		// Status
-		v1.GET("/batches", a.getBatches)
-		v1.GET("/batches/:batchNum", a.getBatch)
-		v1.GET("/full-batches/:batchNum", a.getFullBatch)
-		v1.GET("/slots", a.getSlots)
-		v1.GET("/slots/:slotNum", a.getSlot)
-		v1.GET("/bids", a.getBids)
-		v1.GET("/state", a.getState)
-		v1.GET("/config", a.getConfig)
-		v1.GET("/tokens", a.getTokens)
-		v1.GET("/tokens/:id", a.getToken)
-		v1.GET("/coordinators", a.getCoordinators)
+		server.GET("/batches", a.getBatches)
+		server.GET("/batches/:batchNum", a.getBatch)
+		server.GET("/full-batches/:batchNum", a.getFullBatch)
+		server.GET("/slots", a.getSlots)
+		server.GET("/slots/:slotNum", a.getSlot)
+		server.GET("/bids", a.getBids)
+		server.GET("/state", a.getState)
+		server.GET("/config", a.getConfig)
+		server.GET("/tokens", a.getTokens)
+		server.GET("/tokens/:id", a.getToken)
+		server.GET("/coordinators", a.getCoordinators)
 	}
 
 	return a, nil
157
api/api_test.go
157
api/api_test.go
@@ -8,7 +8,6 @@ import (
|
|||||||
"io"
|
"io"
|
||||||
"io/ioutil"
|
"io/ioutil"
|
||||||
"math/big"
|
"math/big"
|
||||||
"net"
|
|
||||||
"net/http"
|
"net/http"
|
||||||
"os"
|
"os"
|
||||||
"strconv"
|
"strconv"
|
||||||
@@ -19,11 +18,11 @@ import (
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	swagger "github.com/getkin/kin-openapi/openapi3filter"
 	"github.com/gin-gonic/gin"
-	"github.com/hermeznetwork/hermez-node/api/stateapiupdater"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
+	"github.com/hermeznetwork/hermez-node/db/statedb"
 	"github.com/hermeznetwork/hermez-node/log"
 	"github.com/hermeznetwork/hermez-node/test"
 	"github.com/hermeznetwork/hermez-node/test/til"
@@ -40,8 +39,8 @@ type Pendinger interface {
 	New() Pendinger
 }
 
-const apiPort = "4010"
-const apiURL = "http://localhost:" + apiPort + "/v1/"
+const apiPort = ":4010"
+const apiURL = "http://localhost" + apiPort + "/"
 
 var SetBlockchain = `
 Type: Blockchain
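The hunk above moves the colon into `apiPort` (the shape `http.Server.Addr` expects) and drops the `/v1` prefix from the base URL. A minimal sketch of how the two concatenations compose (the port value is taken from the diff; the helper names are hypothetical):

```go
package main

import "fmt"

// oldBaseURL mirrors the removed form: a bare port, with the colon and the
// versioned path added at concatenation time.
func oldBaseURL(port string) string { return "http://localhost:" + port + "/v1/" }

// newBaseURL mirrors the added form: the port already carries the colon, so
// the same string can be reused directly as http.Server.Addr.
func newBaseURL(addr string) string { return "http://localhost" + addr + "/" }

func main() {
	fmt.Println(oldBaseURL("4010")) // http://localhost:4010/v1/
	fmt.Println(newBaseURL(":4010")) // http://localhost:4010/
}
```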
@@ -181,13 +180,12 @@ type testCommon struct {
 	auctionVars  common.AuctionVariables
 	rollupVars   common.RollupVariables
 	wdelayerVars common.WDelayerVariables
-	nextForgers  []historydb.NextForgerAPI
+	nextForgers  []NextForger
 }
 
 var tc testCommon
 var config configAPI
 var api *API
-var stateAPIUpdater *stateapiupdater.Updater
 
 // TestMain initializes the API server, and fill HistoryDB and StateDB with fake data,
 // emulating the task of the synchronizer in order to have data to be returned
@@ -203,13 +201,27 @@ func TestMain(m *testing.M) {
 	if err != nil {
 		panic(err)
 	}
-	apiConnCon := db.NewAPIConnectionController(1, time.Second)
-	hdb := historydb.NewHistoryDB(database, database, apiConnCon)
+	apiConnCon := db.NewAPICnnectionController(1, time.Second)
+	hdb := historydb.NewHistoryDB(database, apiConnCon)
+	if err != nil {
+		panic(err)
+	}
+	// StateDB
+	dir, err := ioutil.TempDir("", "tmpdb")
+	if err != nil {
+		panic(err)
+	}
+	defer func() {
+		if err := os.RemoveAll(dir); err != nil {
+			panic(err)
+		}
+	}()
+	sdb, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128, Type: statedb.TypeTxSelector, NLevels: 0})
 	if err != nil {
 		panic(err)
 	}
 	// L2DB
-	l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 1000.0, 24*time.Hour, apiConnCon)
+	l2DB := l2db.NewL2DB(database, 10, 1000, 24*time.Hour, apiConnCon)
 	test.WipeDB(l2DB.DB()) // this will clean HistoryDB and L2DB
 	// Config (smart contract constants)
 	chainID := uint16(0)
@@ -222,55 +234,30 @@ func TestMain(m *testing.M) {
 
 	// API
 	apiGin := gin.Default()
-	// Reset DB
-	test.WipeDB(hdb.DB())
-
-	constants := &historydb.Constants{
-		SCConsts: common.SCConsts{
-			Rollup:   _config.RollupConstants,
-			Auction:  _config.AuctionConstants,
-			WDelayer: _config.WDelayerConstants,
-		},
-		ChainID:       chainID,
-		HermezAddress: _config.HermezAddress,
-	}
-	if err := hdb.SetConstants(constants); err != nil {
-		panic(err)
-	}
-	nodeConfig := &historydb.NodeConfig{
-		MaxPoolTxs: 10,
-		MinFeeUSD:  0,
-		MaxFeeUSD:  10000000000,
-	}
-	if err := hdb.SetNodeConfig(nodeConfig); err != nil {
-		panic(err)
-	}
 
 	api, err = NewAPI(
 		true,
 		true,
 		apiGin,
 		hdb,
+		sdb,
 		l2DB,
+		&_config,
 	)
 	if err != nil {
-		log.Error(err)
 		panic(err)
 	}
 	// Start server
-	listener, err := net.Listen("tcp", ":"+apiPort) //nolint:gosec
-	if err != nil {
-		panic(err)
-	}
-	server := &http.Server{Handler: apiGin}
+	server := &http.Server{Addr: apiPort, Handler: apiGin}
 	go func() {
-		if err := server.Serve(listener); err != nil &&
-			tracerr.Unwrap(err) != http.ErrServerClosed {
+		if err := server.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed {
 			panic(err)
 		}
 	}()
 
-	// Generate blockchain data with til
+	// Reset DB
+	test.WipeDB(api.h.DB())
+
+	// Genratre blockchain data with til
 	tcc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
 	tilCfgExtra := til.ConfigExtra{
 		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
@@ -316,7 +303,7 @@ func TestMain(m *testing.M) {
 		USD:       &ethUSD,
 		USDUpdate: &ethNow,
 	})
-	err = api.h.UpdateTokenValue(common.EmptyAddr, ethUSD)
+	err = api.h.UpdateTokenValue(test.EthToken.Symbol, ethUSD)
 	if err != nil {
 		panic(err)
 	}
@@ -343,7 +330,7 @@ func TestMain(m *testing.M) {
 		token.USD = &value
 		token.USDUpdate = &now
 		// Set value in DB
-		err = api.h.UpdateTokenValue(token.EthAddr, value)
+		err = api.h.UpdateTokenValue(token.Symbol, value)
 		if err != nil {
 			panic(err)
 		}
@@ -363,6 +350,19 @@ func TestMain(m *testing.M) {
 		}
 	}
 
+	// lastBlockNum2 := blocksData[len(blocksData)-1].Block.EthBlockNum
+
+	// Add accounts to StateDB
+	for i := 0; i < len(commonAccounts); i++ {
+		if _, err := api.s.CreateAccount(commonAccounts[i].Idx, &commonAccounts[i]); err != nil {
+			panic(err)
+		}
+	}
+	// Make a checkpoint to make the accounts available in Last
+	if err := api.s.MakeCheckpoint(); err != nil {
+		panic(err)
+	}
+
 	// Generate Coordinators and add them to HistoryDB
 	const nCoords = 10
 	commonCoords := test.GenCoordinators(nCoords, commonBlocks)
@@ -470,19 +470,19 @@ func TestMain(m *testing.M) {
 	if err = api.h.AddBids(bids); err != nil {
 		panic(err)
 	}
-	bootForger := historydb.NextForgerAPI{
+	bootForger := NextForger{
 		Coordinator: historydb.CoordinatorAPI{
 			Forger: auctionVars.BootCoordinator,
 			URL:    auctionVars.BootCoordinatorURL,
 		},
 	}
 	// Set next forgers: set all as boot coordinator then replace the non boot coordinators
-	nextForgers := []historydb.NextForgerAPI{}
+	nextForgers := []NextForger{}
 	var initBlock int64 = 140
 	var deltaBlocks int64 = 40
 	for i := 1; i < int(auctionVars.ClosedAuctionSlots)+2; i++ {
 		fromBlock := initBlock + deltaBlocks*int64(i-1)
-		bootForger.Period = historydb.Period{
+		bootForger.Period = Period{
 			SlotNum:   int64(i),
 			FromBlock: fromBlock,
 			ToBlock:   fromBlock + deltaBlocks - 1,
@@ -522,12 +522,6 @@ func TestMain(m *testing.M) {
 		WithdrawalDelay: uint64(3000),
 	}
 
-	stateAPIUpdater = stateapiupdater.NewUpdater(hdb, nodeConfig, &common.SCVariables{
-		Rollup:   rollupVars,
-		Auction:  auctionVars,
-		WDelayer: wdelayerVars,
-	}, constants)
-
 	// Generate test data, as expected to be received/sended from/to the API
 	testCoords := genTestCoordinators(commonCoords)
 	testBids := genTestBids(commonBlocks, testCoords, bids)
@@ -535,41 +529,13 @@ func TestMain(m *testing.M) {
 	testTxs := genTestTxs(commonL1Txs, commonL2Txs, commonAccounts, testTokens, commonBlocks)
 	testBatches, testFullBatches := genTestBatches(commonBlocks, commonBatches, testTxs)
 	poolTxsToSend, poolTxsToReceive := genTestPoolTxs(commonPoolTxs, testTokens, commonAccounts)
-	// Add balance and nonce to historyDB
-	accounts := genTestAccounts(commonAccounts, testTokens)
-	accUpdates := []common.AccountUpdate{}
-	for i := 0; i < len(accounts); i++ {
-		balance := new(big.Int)
-		balance.SetString(string(*accounts[i].Balance), 10)
-		idx, err := stringToIdx(string(accounts[i].Idx), "foo")
-		if err != nil {
-			panic(err)
-		}
-		accUpdates = append(accUpdates, common.AccountUpdate{
-			EthBlockNum: 0,
-			BatchNum:    1,
-			Idx:         *idx,
-			Nonce:       0,
-			Balance:     balance,
-		})
-		accUpdates = append(accUpdates, common.AccountUpdate{
-			EthBlockNum: 0,
-			BatchNum:    1,
-			Idx:         *idx,
-			Nonce:       accounts[i].Nonce,
-			Balance:     balance,
-		})
-	}
-	if err := api.h.AddAccountUpdates(accUpdates); err != nil {
-		panic(err)
-	}
 	tc = testCommon{
 		blocks:       commonBlocks,
 		tokens:       testTokens,
 		batches:      testBatches,
 		fullBatches:  testFullBatches,
 		coordinators: testCoords,
-		accounts:     accounts,
+		accounts:     genTestAccounts(commonAccounts, testTokens),
 		txs:          testTxs,
 		exits:        testExits,
 		poolTxsToSend: poolTxsToSend,
@@ -605,24 +571,27 @@ func TestMain(m *testing.M) {
 	if err := database.Close(); err != nil {
 		panic(err)
 	}
+	if err := os.RemoveAll(dir); err != nil {
+		panic(err)
+	}
 	os.Exit(result)
 }
 
 func TestTimeout(t *testing.T) {
 	pass := os.Getenv("POSTGRES_PASS")
-	databaseTO, err := db.ConnectSQLDB(5432, "localhost", "hermez", pass, "hermez")
+	databaseTO, err := db.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
-	apiConnConTO := db.NewAPIConnectionController(1, 100*time.Millisecond)
-	hdbTO := historydb.NewHistoryDB(databaseTO, databaseTO, apiConnConTO)
+	apiConnConTO := db.NewAPICnnectionController(1, 100*time.Millisecond)
+	hdbTO := historydb.NewHistoryDB(databaseTO, apiConnConTO)
 	require.NoError(t, err)
 	// L2DB
-	l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 1000.0, 24*time.Hour, apiConnConTO)
+	l2DBTO := l2db.NewL2DB(databaseTO, 10, 1000, 24*time.Hour, apiConnConTO)
 
 	// API
 	apiGinTO := gin.Default()
 	finishWait := make(chan interface{})
 	startWait := make(chan interface{})
-	apiGinTO.GET("/v1/wait", func(c *gin.Context) {
+	apiGinTO.GET("/wait", func(c *gin.Context) {
 		cancel, err := apiConnConTO.Acquire()
 		defer cancel()
 		require.NoError(t, err)
@@ -631,28 +600,28 @@ func TestTimeout(t *testing.T) {
 		<-finishWait
 	})
 	// Start server
-	serverTO := &http.Server{Handler: apiGinTO}
-	listener, err := net.Listen("tcp", ":4444") //nolint:gosec
-	require.NoError(t, err)
+	serverTO := &http.Server{Addr: ":4444", Handler: apiGinTO}
 	go func() {
-		if err := serverTO.Serve(listener); err != nil &&
-			tracerr.Unwrap(err) != http.ErrServerClosed {
+		if err := serverTO.ListenAndServe(); err != nil && tracerr.Unwrap(err) != http.ErrServerClosed {
 			require.NoError(t, err)
 		}
 	}()
+	_config := getConfigTest(0)
 	_, err = NewAPI(
 		true,
 		true,
 		apiGinTO,
 		hdbTO,
+		nil,
 		l2DBTO,
+		&_config,
 	)
 	require.NoError(t, err)
 
 	client := &http.Client{}
-	httpReq, err := http.NewRequest("GET", "http://localhost:4444/v1/tokens", nil)
+	httpReq, err := http.NewRequest("GET", "http://localhost:4444/tokens", nil)
 	require.NoError(t, err)
-	httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/v1/wait", nil)
+	httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/wait", nil)
 	require.NoError(t, err)
 	// Request that will get timed out
 	var wg sync.WaitGroup
@@ -52,7 +52,7 @@ func (a *API) getBatches(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type batchesResponse struct {
 		Batches      []historydb.BatchAPI `json:"batches"`
 		PendingItems uint64               `json:"pendingItems"`
@@ -7,12 +7,10 @@ import (
 	"time"
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/mitchellh/copystructure"
 	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
 )
 
 type testBatch struct {
@@ -22,7 +20,7 @@ type testBatch struct {
 	EthBlockHash ethCommon.Hash    `json:"ethereumBlockHash"`
 	Timestamp    time.Time         `json:"timestamp"`
 	ForgerAddr   ethCommon.Address `json:"forgerAddr"`
-	CollectedFees apitypes.CollectedFeesAPI `json:"collectedFees"`
+	CollectedFees map[common.TokenID]string `json:"collectedFees"`
 	TotalFeesUSD *float64          `json:"historicTotalCollectedFeesUSD"`
 	StateRoot    string            `json:"stateRoot"`
 	NumAccounts  int               `json:"numAccounts"`
@@ -75,9 +73,9 @@ func genTestBatches(
 		if !found {
 			panic("block not found")
 		}
-		collectedFees := apitypes.CollectedFeesAPI(make(map[common.TokenID]apitypes.BigIntStr))
+		collectedFees := make(map[common.TokenID]string)
 		for k, v := range cBatches[i].CollectedFees {
-			collectedFees[k] = *apitypes.NewBigIntStr(v)
+			collectedFees[k] = v.String()
 		}
 		forgedTxs := 0
 		for _, tx := range txs {
@@ -134,7 +132,7 @@ func TestGetBatches(t *testing.T) {
 	limit := 3
 	path := fmt.Sprintf("%s?limit=%d", endpoint, limit)
 	err := doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assertBatches(t, tc.batches, fetchedBatches)
 
 	// minBatchNum
@@ -143,7 +141,7 @@ func TestGetBatches(t *testing.T) {
 	minBatchNum := tc.batches[len(tc.batches)/2].BatchNum
 	path = fmt.Sprintf("%s?minBatchNum=%d&limit=%d", endpoint, minBatchNum, limit)
 	err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	minBatchNumBatches := []testBatch{}
 	for i := 0; i < len(tc.batches); i++ {
 		if tc.batches[i].BatchNum > minBatchNum {
@@ -158,7 +156,7 @@ func TestGetBatches(t *testing.T) {
 	maxBatchNum := tc.batches[len(tc.batches)/2].BatchNum
 	path = fmt.Sprintf("%s?maxBatchNum=%d&limit=%d", endpoint, maxBatchNum, limit)
 	err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	maxBatchNumBatches := []testBatch{}
 	for i := 0; i < len(tc.batches); i++ {
 		if tc.batches[i].BatchNum < maxBatchNum {
@@ -173,7 +171,7 @@ func TestGetBatches(t *testing.T) {
 	slotNum := tc.batches[len(tc.batches)/2].SlotNum
 	path = fmt.Sprintf("%s?slotNum=%d&limit=%d", endpoint, slotNum, limit)
 	err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	slotNumBatches := []testBatch{}
 	for i := 0; i < len(tc.batches); i++ {
 		if tc.batches[i].SlotNum == slotNum {
@@ -188,7 +186,7 @@ func TestGetBatches(t *testing.T) {
 	forgerAddr := tc.batches[len(tc.batches)/2].ForgerAddr
 	path = fmt.Sprintf("%s?forgerAddr=%s&limit=%d", endpoint, forgerAddr.String(), limit)
 	err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	forgerAddrBatches := []testBatch{}
 	for i := 0; i < len(tc.batches); i++ {
 		if tc.batches[i].ForgerAddr == forgerAddr {
@@ -202,7 +200,7 @@ func TestGetBatches(t *testing.T) {
 	limit = 6
 	path = fmt.Sprintf("%s?limit=%d", endpoint, limit)
 	err = doGoodReqPaginated(path, historydb.OrderDesc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	flippedBatches := []testBatch{}
 	for i := len(tc.batches) - 1; i >= 0; i-- {
 		flippedBatches = append(flippedBatches, tc.batches[i])
@@ -216,7 +214,7 @@ func TestGetBatches(t *testing.T) {
 	minBatchNum = tc.batches[len(tc.batches)/4].BatchNum
 	path = fmt.Sprintf("%s?minBatchNum=%d&maxBatchNum=%d&limit=%d", endpoint, minBatchNum, maxBatchNum, limit)
 	err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	minMaxBatchNumBatches := []testBatch{}
 	for i := 0; i < len(tc.batches); i++ {
 		if tc.batches[i].BatchNum < maxBatchNum && tc.batches[i].BatchNum > minBatchNum {
@@ -229,25 +227,25 @@ func TestGetBatches(t *testing.T) {
 	fetchedBatches = []testBatch{}
 	path = fmt.Sprintf("%s?slotNum=%d&minBatchNum=%d", endpoint, 1, 25)
 	err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assertBatches(t, []testBatch{}, fetchedBatches)
 
 	// 400
 	// Invalid minBatchNum
 	path = fmt.Sprintf("%s?minBatchNum=%d", endpoint, -2)
 	err = doBadReq("GET", path, nil, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Invalid forgerAddr
 	path = fmt.Sprintf("%s?forgerAddr=%s", endpoint, "0xG0000001")
 	err = doBadReq("GET", path, nil, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 }
 
 func TestGetBatch(t *testing.T) {
 	endpoint := apiURL + "batches/"
 	for _, batch := range tc.batches {
 		fetchedBatch := testBatch{}
-		require.NoError(
+		assert.NoError(
 			t, doGoodReq(
 				"GET",
 				endpoint+strconv.Itoa(int(batch.BatchNum)),
@@ -257,16 +255,16 @@ func TestGetBatch(t *testing.T) {
 		assertBatch(t, batch, fetchedBatch)
 	}
 	// 400
-	require.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
+	assert.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
 	// 404
-	require.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
+	assert.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
 }
 
 func TestGetFullBatch(t *testing.T) {
 	endpoint := apiURL + "full-batches/"
 	for _, fullBatch := range tc.fullBatches {
 		fetchedFullBatch := testFullBatch{}
-		require.NoError(
+		assert.NoError(
 			t, doGoodReq(
 				"GET",
 				endpoint+strconv.Itoa(int(fullBatch.Batch.BatchNum)),
@@ -277,9 +275,9 @@ func TestGetFullBatch(t *testing.T) {
 		assertTxs(t, fullBatch.Txs, fetchedFullBatch.Txs)
 	}
 	// 400
-	require.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
+	assert.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
 	// 404
-	require.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
+	assert.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
 }
 
 func assertBatches(t *testing.T, expected, actual []testBatch) {
@@ -34,7 +34,7 @@ func (a *API) getBids(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type bidsResponse struct {
 		Bids         []historydb.BidAPI `json:"bids"`
 		PendingItems uint64             `json:"pendingItems"`
@@ -32,7 +32,7 @@ func (a *API) getCoordinators(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type coordinatorsResponse struct {
 		Coordinators []historydb.CoordinatorAPI `json:"coordinators"`
 		PendingItems uint64                     `json:"pendingItems"`
@@ -43,7 +43,7 @@ func (a *API) getExits(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type exitsResponse struct {
 		Exits        []historydb.ExitAPI `json:"exits"`
 		PendingItems uint64              `json:"pendingItems"`
@@ -72,6 +72,6 @@ func (a *API) getExit(c *gin.Context) {
 		retSQLErr(err, c)
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	c.JSON(http.StatusOK, exit)
 }
@@ -4,7 +4,7 @@ import (
 	"fmt"
 	"testing"
 
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/mitchellh/copystructure"
@@ -10,11 +10,10 @@ import (
 	"github.com/hermeznetwork/hermez-node/log"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/lib/pq"
-	"github.com/russross/meddler"
 )
 
 const (
-	// maxLimit is the max permitted items to be returned in paginated responses
+	// maxLimit is the max permited items to be returned in paginated responses
 	maxLimit uint = 2049
 
 	// dfltOrder indicates how paginated endpoints are ordered if not specified
@@ -40,40 +39,31 @@ const (
|
|||||||
 )
 
 var (
-	// ErrNilBidderAddr is used when a nil bidderAddr is received in the getCoordinator method
-	ErrNilBidderAddr = errors.New("biderAddr can not be nil")
+	// ErrNillBidderAddr is used when a nil bidderAddr is received in the getCoordinator method
+	ErrNillBidderAddr = errors.New("biderAddr can not be nil")
 )
 
 func retSQLErr(err error, c *gin.Context) {
 	log.Warnw("HTTP API SQL request error", "err", err)
 	errMsg := tracerr.Unwrap(err).Error()
-	retDupKey := func(errCode pq.ErrorCode) {
-		// https://www.postgresql.org/docs/current/errcodes-appendix.html
-		if errCode == "23505" {
-			c.JSON(http.StatusInternalServerError, errorMsg{
-				Message: errDuplicatedKey,
-			})
-		} else {
-			c.JSON(http.StatusInternalServerError, errorMsg{
-				Message: errMsg,
-			})
-		}
-	}
 	if errMsg == errCtxTimeout {
 		c.JSON(http.StatusServiceUnavailable, errorMsg{
 			Message: errSQLTimeout,
 		})
 	} else if sqlErr, ok := tracerr.Unwrap(err).(*pq.Error); ok {
-		retDupKey(sqlErr.Code)
-	} else if sqlErr, ok := meddler.DriverErr(tracerr.Unwrap(err)); ok {
-		retDupKey(sqlErr.(*pq.Error).Code)
+		// https://www.postgresql.org/docs/current/errcodes-appendix.html
+		if sqlErr.Code == "23505" {
+			c.JSON(http.StatusInternalServerError, errorMsg{
+				Message: errDuplicatedKey,
+			})
+		}
 	} else if tracerr.Unwrap(err) == sql.ErrNoRows {
 		c.JSON(http.StatusNotFound, errorMsg{
-			Message: errMsg,
+			Message: err.Error(),
 		})
 	} else {
 		c.JSON(http.StatusInternalServerError, errorMsg{
-			Message: errMsg,
+			Message: err.Error(),
 		})
 	}
 }
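Aside from the diff itself: the base branch's `retDupKey` closure in the hunk above is essentially a lookup on the PostgreSQL SQLSTATE code of a failed query. A minimal standalone sketch of that mapping (the `errorCode` type and the message text are stand-ins; the real code uses `pq.ErrorCode` and the `errDuplicatedKey` constant, whose value is not shown in this hunk):

```go
package main

import "fmt"

// errorCode stands in for pq.ErrorCode, a string type carrying the
// PostgreSQL SQLSTATE code of a failed query.
type errorCode string

// dupKeyMessage mirrors the retDupKey logic: SQLSTATE 23505
// (unique_violation) maps to a duplicated-key message, anything else
// falls back to the wrapped error text.
func dupKeyMessage(code errorCode, errMsg string) string {
	// https://www.postgresql.org/docs/current/errcodes-appendix.html
	if code == "23505" {
		return "duplicated key found"
	}
	return errMsg
}

func main() {
	fmt.Println(dupKeyMessage("23505", "pq: duplicate key value"))
	fmt.Println(dupKeyMessage("42P01", "pq: relation does not exist"))
}
```

The refactor in the base branch exists so that both the `*pq.Error` branch and the `meddler.DriverErr` branch share this one code check instead of duplicating it.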
@@ -50,19 +50,19 @@ func parsePagination(c querier) (fromItem *uint, order string, limit *uint, err
 	return fromItem, order, limit, nil
 }
 
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseQueryUint(name string, dflt *uint, min, max uint, c querier) (*uint, error) { //nolint:SA4009
 	str := c.Query(name)
 	return stringToUint(str, name, dflt, min, max)
 }
 
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseQueryInt64(name string, dflt *int64, min, max int64, c querier) (*int64, error) { //nolint:SA4009
 	str := c.Query(name)
 	return stringToInt64(str, name, dflt, min, max)
 }
 
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseQueryBool(name string, dflt *bool, c querier) (*bool, error) { //nolint:SA4009
 	str := c.Query(name)
 	if str == "" {
@@ -295,13 +295,13 @@ func parseParamIdx(c paramer) (*common.Idx, error) {
 	return stringToIdx(idxStr, name)
 }
 
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseParamUint(name string, dflt *uint, min, max uint, c paramer) (*uint, error) { //nolint:SA4009
 	str := c.Param(name)
 	return stringToUint(str, name, dflt, min, max)
 }
 
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseParamInt64(name string, dflt *int64, min, max int64, c paramer) (*int64, error) { //nolint:SA4009
 	str := c.Param(name)
 	return stringToInt64(str, name, dflt, min, max)
@@ -11,7 +11,7 @@ import (
 	"github.com/hermeznetwork/tracerr"
 )
 
-// SlotAPI is a representation of a slot information
+// SlotAPI is a repesentation of a slot information
 type SlotAPI struct {
 	ItemID  uint64 `json:"itemId"`
 	SlotNum int64  `json:"slotNum"`
@@ -316,7 +316,7 @@ func (a *API) getSlots(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type slotsResponse struct {
 		Slots        []SlotAPI `json:"slots"`
 		PendingItems uint64    `json:"pendingItems"`
@@ -99,9 +99,7 @@ func TestGetSlot(t *testing.T) {
 			nil, &fetchedSlot,
 		),
 	)
-	// ni, err := api.h.GetNodeInfoAPI()
-	// assert.NoError(t, err)
-	emptySlot := api.getEmptyTestSlot(slotNum, 0, tc.auctionVars)
+	emptySlot := api.getEmptyTestSlot(slotNum, api.status.Network.LastSyncBlock, tc.auctionVars)
 	assertSlot(t, emptySlot, fetchedSlot)
 
 	// Invalid slotNum
@@ -129,10 +127,8 @@ func TestGetSlots(t *testing.T) {
 	err := doGoodReqPaginated(path, historydb.OrderAsc, &testSlotsResponse{}, appendIter)
 	assert.NoError(t, err)
 	allSlots := tc.slots
-	// ni, err := api.h.GetNodeInfoAPI()
-	// assert.NoError(t, err)
 	for i := tc.slots[len(tc.slots)-1].SlotNum; i < maxSlotNum; i++ {
-		emptySlot := api.getEmptyTestSlot(i+1, 0, tc.auctionVars)
+		emptySlot := api.getEmptyTestSlot(i+1, api.status.Network.LastSyncBlock, tc.auctionVars)
		allSlots = append(allSlots, emptySlot)
 	}
 	assertSlots(t, allSlots, fetchedSlots)
304 api/state.go
@@ -1,16 +1,306 @@
 package api
 
 import (
+	"database/sql"
+	"fmt"
+	"math/big"
 	"net/http"
+	"time"
 
 	"github.com/gin-gonic/gin"
+	"github.com/hermeznetwork/hermez-node/apitypes"
+	"github.com/hermeznetwork/hermez-node/common"
+	"github.com/hermeznetwork/hermez-node/db/historydb"
+	"github.com/hermeznetwork/tracerr"
 )
 
-func (a *API) getState(c *gin.Context) {
-	stateAPI, err := a.h.GetStateAPI()
-	if err != nil {
-		retBadReq(err, c)
-		return
-	}
-	c.JSON(http.StatusOK, stateAPI)
+// Network define status of the network
+type Network struct {
+	LastEthBlock  int64               `json:"lastEthereumBlock"`
+	LastSyncBlock int64               `json:"lastSynchedBlock"`
+	LastBatch     *historydb.BatchAPI `json:"lastBatch"`
+	CurrentSlot   int64               `json:"currentSlot"`
+	NextForgers   []NextForger        `json:"nextForgers"`
+}
+
+// NextForger is a representation of the information of a coordinator and the period will forge
+type NextForger struct {
+	Coordinator historydb.CoordinatorAPI `json:"coordinator"`
+	Period      Period                   `json:"period"`
+}
+
+// Period is a representation of a period
+type Period struct {
+	SlotNum       int64     `json:"slotNum"`
+	FromBlock     int64     `json:"fromBlock"`
+	ToBlock       int64     `json:"toBlock"`
+	FromTimestamp time.Time `json:"fromTimestamp"`
+	ToTimestamp   time.Time `json:"toTimestamp"`
+}
+
+func (a *API) getState(c *gin.Context) {
+	// TODO: There are no events for the buckets information, so now this information will be 0
+	a.status.RLock()
+	status := a.status //nolint
+	a.status.RUnlock()
+	c.JSON(http.StatusOK, status) //nolint
+}
+
+// SC Vars
+
+// SetRollupVariables set Status.Rollup variables
+func (a *API) SetRollupVariables(rollupVariables common.RollupVariables) {
+	a.status.Lock()
+	var rollupVAPI historydb.RollupVariablesAPI
+	rollupVAPI.EthBlockNum = rollupVariables.EthBlockNum
+	rollupVAPI.FeeAddToken = apitypes.NewBigIntStr(rollupVariables.FeeAddToken)
+	rollupVAPI.ForgeL1L2BatchTimeout = rollupVariables.ForgeL1L2BatchTimeout
+	rollupVAPI.WithdrawalDelay = rollupVariables.WithdrawalDelay
+
+	for i, bucket := range rollupVariables.Buckets {
+		var apiBucket historydb.BucketParamsAPI
+		apiBucket.CeilUSD = apitypes.NewBigIntStr(bucket.CeilUSD)
+		apiBucket.Withdrawals = apitypes.NewBigIntStr(bucket.Withdrawals)
+		apiBucket.BlockWithdrawalRate = apitypes.NewBigIntStr(bucket.BlockWithdrawalRate)
+		apiBucket.MaxWithdrawals = apitypes.NewBigIntStr(bucket.MaxWithdrawals)
+		rollupVAPI.Buckets[i] = apiBucket
+	}
+
+	rollupVAPI.SafeMode = rollupVariables.SafeMode
+	a.status.Rollup = rollupVAPI
+	a.status.Unlock()
+}
+
+// SetWDelayerVariables set Status.WithdrawalDelayer variables
+func (a *API) SetWDelayerVariables(wDelayerVariables common.WDelayerVariables) {
+	a.status.Lock()
+	a.status.WithdrawalDelayer = wDelayerVariables
+	a.status.Unlock()
+}
+
+// SetAuctionVariables set Status.Auction variables
+func (a *API) SetAuctionVariables(auctionVariables common.AuctionVariables) {
+	a.status.Lock()
+	var auctionAPI historydb.AuctionVariablesAPI
+
+	auctionAPI.EthBlockNum = auctionVariables.EthBlockNum
+	auctionAPI.DonationAddress = auctionVariables.DonationAddress
+	auctionAPI.BootCoordinator = auctionVariables.BootCoordinator
+	auctionAPI.BootCoordinatorURL = auctionVariables.BootCoordinatorURL
+	auctionAPI.DefaultSlotSetBidSlotNum = auctionVariables.DefaultSlotSetBidSlotNum
+	auctionAPI.ClosedAuctionSlots = auctionVariables.ClosedAuctionSlots
+	auctionAPI.OpenAuctionSlots = auctionVariables.OpenAuctionSlots
+	auctionAPI.Outbidding = auctionVariables.Outbidding
+	auctionAPI.SlotDeadline = auctionVariables.SlotDeadline
+
+	for i, slot := range auctionVariables.DefaultSlotSetBid {
+		auctionAPI.DefaultSlotSetBid[i] = apitypes.NewBigIntStr(slot)
+	}
+
+	for i, ratio := range auctionVariables.AllocationRatio {
+		auctionAPI.AllocationRatio[i] = ratio
+	}
+
+	a.status.Auction = auctionAPI
+	a.status.Unlock()
+}
+
+// Network
+
+// UpdateNetworkInfoBlock update Status.Network block related information
+func (a *API) UpdateNetworkInfoBlock(
+	lastEthBlock, lastSyncBlock common.Block,
+) {
+	a.status.Network.LastSyncBlock = lastSyncBlock.Num
+	a.status.Network.LastEthBlock = lastEthBlock.Num
+}
+
+// UpdateNetworkInfo update Status.Network information
+func (a *API) UpdateNetworkInfo(
+	lastEthBlock, lastSyncBlock common.Block,
+	lastBatchNum common.BatchNum, currentSlot int64,
+) error {
+	lastBatch, err := a.h.GetBatchAPI(lastBatchNum)
+	if tracerr.Unwrap(err) == sql.ErrNoRows {
+		lastBatch = nil
+	} else if err != nil {
+		return tracerr.Wrap(err)
+	}
+	lastClosedSlot := currentSlot + int64(a.status.Auction.ClosedAuctionSlots)
+	nextForgers, err := a.getNextForgers(lastSyncBlock, currentSlot, lastClosedSlot)
+	if tracerr.Unwrap(err) == sql.ErrNoRows {
+		nextForgers = nil
+	} else if err != nil {
+		return tracerr.Wrap(err)
+	}
+	a.status.Lock()
+	a.status.Network.LastSyncBlock = lastSyncBlock.Num
+	a.status.Network.LastEthBlock = lastEthBlock.Num
+	a.status.Network.LastBatch = lastBatch
+	a.status.Network.CurrentSlot = currentSlot
+	a.status.Network.NextForgers = nextForgers
+
+	// Update buckets withdrawals
+	bucketsUpdate, err := a.h.GetBucketUpdatesAPI()
+	if tracerr.Unwrap(err) == sql.ErrNoRows {
+		bucketsUpdate = nil
+	} else if err != nil {
+		return tracerr.Wrap(err)
+	}
+
+	for i, bucketParams := range a.status.Rollup.Buckets {
+		for _, bucketUpdate := range bucketsUpdate {
+			if bucketUpdate.NumBucket == i {
+				bucketParams.Withdrawals = bucketUpdate.Withdrawals
+				a.status.Rollup.Buckets[i] = bucketParams
+				break
+			}
+		}
+	}
+	a.status.Unlock()
+	return nil
+}
+
+// apiSlotToBigInts converts from [6]*apitypes.BigIntStr to [6]*big.Int
+func apiSlotToBigInts(defaultSlotSetBid [6]*apitypes.BigIntStr) ([6]*big.Int, error) {
+	var slots [6]*big.Int
+
+	for i, slot := range defaultSlotSetBid {
+		bigInt, ok := new(big.Int).SetString(string(*slot), 10)
+		if !ok {
+			return slots, tracerr.Wrap(fmt.Errorf("can't convert %T into big.Int", slot))
+		}
+		slots[i] = bigInt
+	}
+
+	return slots, nil
+}
+
+// getNextForgers returns next forgers
+func (a *API) getNextForgers(lastBlock common.Block, currentSlot, lastClosedSlot int64) ([]NextForger, error) {
+	secondsPerBlock := int64(15) //nolint:gomnd
+	// currentSlot and lastClosedSlot included
+	limit := uint(lastClosedSlot - currentSlot + 1)
+	bids, _, err := a.h.GetBestBidsAPI(&currentSlot, &lastClosedSlot, nil, &limit, "ASC")
+	if err != nil && tracerr.Unwrap(err) != sql.ErrNoRows {
+		return nil, tracerr.Wrap(err)
+	}
+	nextForgers := []NextForger{}
+	// Get min bid info
+	var minBidInfo []historydb.MinBidInfo
+	if currentSlot >= a.status.Auction.DefaultSlotSetBidSlotNum {
+		// All min bids can be calculated with the last update of AuctionVariables
+		bigIntSlots, err := apiSlotToBigInts(a.status.Auction.DefaultSlotSetBid)
+		if err != nil {
+			return nil, tracerr.Wrap(err)
+		}
+
+		minBidInfo = []historydb.MinBidInfo{{
+			DefaultSlotSetBid:        bigIntSlots,
+			DefaultSlotSetBidSlotNum: a.status.Auction.DefaultSlotSetBidSlotNum,
+		}}
+	} else {
+		// Get all the relevant updates from the DB
+		minBidInfo, err = a.h.GetAuctionVarsUntilSetSlotNumAPI(lastClosedSlot, int(lastClosedSlot-currentSlot)+1)
+		if err != nil {
+			return nil, tracerr.Wrap(err)
+		}
+	}
+	// Create nextForger for each slot
+	for i := currentSlot; i <= lastClosedSlot; i++ {
+		fromBlock := i*int64(a.cg.AuctionConstants.BlocksPerSlot) + a.cg.AuctionConstants.GenesisBlockNum
+		toBlock := (i+1)*int64(a.cg.AuctionConstants.BlocksPerSlot) + a.cg.AuctionConstants.GenesisBlockNum - 1
+		nextForger := NextForger{
+			Period: Period{
+				SlotNum:       i,
+				FromBlock:     fromBlock,
+				ToBlock:       toBlock,
+				FromTimestamp: lastBlock.Timestamp.Add(time.Second * time.Duration(secondsPerBlock*(fromBlock-lastBlock.Num))),
+				ToTimestamp:   lastBlock.Timestamp.Add(time.Second * time.Duration(secondsPerBlock*(toBlock-lastBlock.Num))),
+			},
+		}
+		foundForger := false
+		// If there is a bid for a slot, get forger (coordinator)
+		for j := range bids {
+			slotNum := bids[j].SlotNum
+			if slotNum == i {
+				// There's a bid for the slot
+				// Check if the bid is greater than the minimum required
+				for i := 0; i < len(minBidInfo); i++ {
+					// Find the most recent update
+					if slotNum >= minBidInfo[i].DefaultSlotSetBidSlotNum {
+						// Get min bid
+						minBidSelector := slotNum % int64(len(a.status.Auction.DefaultSlotSetBid))
+						minBid := minBidInfo[i].DefaultSlotSetBid[minBidSelector]
+						// Check if the bid has beaten the minimum
+						bid, ok := new(big.Int).SetString(string(bids[j].BidValue), 10)
+						if !ok {
+							return nil, tracerr.New("Wrong bid value, error parsing it as big.Int")
+						}
+						if minBid.Cmp(bid) == 1 {
+							// Min bid is greater than bid, the slot will be forged by boot coordinator
+							break
+						}
+						foundForger = true
+						break
+					}
+				}
+				if !foundForger { // There is no bid or it's smaller than the minimum
+					break
+				}
+				coordinator, err := a.h.GetCoordinatorAPI(bids[j].Bidder)
+				if err != nil {
+					return nil, tracerr.Wrap(err)
+				}
+				nextForger.Coordinator = *coordinator
+				break
+			}
+		}
+		// If there is no bid, the coordinator that will forge is boot coordinator
+		if !foundForger {
+			nextForger.Coordinator = historydb.CoordinatorAPI{
+				Forger: a.status.Auction.BootCoordinator,
+				URL:    a.status.Auction.BootCoordinatorURL,
+			}
+		}
+		nextForgers = append(nextForgers, nextForger)
+	}
+	return nextForgers, nil
+}
+
+// Metrics
+
+// UpdateMetrics update Status.Metrics information
+func (a *API) UpdateMetrics() error {
+	a.status.RLock()
+	if a.status.Network.LastBatch == nil {
+		a.status.RUnlock()
+		return nil
+	}
+	batchNum := a.status.Network.LastBatch.BatchNum
+	a.status.RUnlock()
+	metrics, err := a.h.GetMetricsAPI(batchNum)
+	if err != nil {
+		return tracerr.Wrap(err)
+	}
+	a.status.Lock()
+	a.status.Metrics = *metrics
+	a.status.Unlock()
+	return nil
+}
+
+// Recommended fee
+
+// UpdateRecommendedFee update Status.RecommendedFee information
+func (a *API) UpdateRecommendedFee() error {
+	feeExistingAccount, err := a.h.GetAvgTxFeeAPI()
+	if err != nil {
+		return tracerr.Wrap(err)
+	}
+	a.status.Lock()
+	a.status.RecommendedFee.ExistingAccount = feeExistingAccount
+	a.status.RecommendedFee.CreatesAccount = createAccountExtraFeePercentage * feeExistingAccount
+	a.status.RecommendedFee.CreatesAccountAndRegister = createAccountInternalExtraFeePercentage * feeExistingAccount
+	a.status.Unlock()
+	return nil
 }
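The fromBlock/toBlock arithmetic in getNextForgers maps a slot number onto a contiguous range of Ethereum blocks. A standalone sketch of that arithmetic with made-up constants (the real values come from `a.cg.AuctionConstants`, i.e. the auction smart contract):

```go
package main

import "fmt"

// Illustrative stand-ins for AuctionConstants.BlocksPerSlot and
// AuctionConstants.GenesisBlockNum; the real values are read from the
// auction smart contract.
const (
	blocksPerSlot   = 40
	genesisBlockNum = 100
)

// slotBlockRange returns the first and last Ethereum block of a slot,
// following the fromBlock/toBlock expressions in getNextForgers.
func slotBlockRange(slotNum int64) (fromBlock, toBlock int64) {
	fromBlock = slotNum*blocksPerSlot + genesisBlockNum
	toBlock = (slotNum+1)*blocksPerSlot + genesisBlockNum - 1
	return fromBlock, toBlock
}

func main() {
	from, to := slotBlockRange(2)
	fmt.Println(from, to) // slot 2 spans blocks 180..219 with these constants
}
```

The `-1` on toBlock makes consecutive slots contiguous but non-overlapping; the timestamps in `Period` are then estimated from this range assuming roughly 15 seconds per block.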
@@ -4,7 +4,7 @@ import (
 	"math/big"
 	"testing"
 
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/stretchr/testify/assert"
@@ -13,7 +13,7 @@ import (
 
 type testStatus struct {
 	Network           testNetwork                   `json:"network"`
-	Metrics           historydb.MetricsAPI          `json:"metrics"`
+	Metrics           historydb.Metrics             `json:"metrics"`
 	Rollup            historydb.RollupVariablesAPI  `json:"rollup"`
 	Auction           historydb.AuctionVariablesAPI `json:"auction"`
 	WithdrawalDelayer common.WDelayerVariables      `json:"withdrawalDelayer"`
@@ -21,19 +21,18 @@ type testStatus struct {
 }
 
 type testNetwork struct {
 	LastEthBlock  int64     `json:"lastEthereumBlock"`
 	LastSyncBlock int64     `json:"lastSynchedBlock"`
 	LastBatch     testBatch `json:"lastBatch"`
 	CurrentSlot   int64     `json:"currentSlot"`
-	NextForgers   []historydb.NextForgerAPI `json:"nextForgers"`
+	NextForgers   []NextForger `json:"nextForgers"`
 }
 
 func TestSetRollupVariables(t *testing.T) {
-	stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{Rollup: &tc.rollupVars})
-	require.NoError(t, stateAPIUpdater.Store())
-	ni, err := api.h.GetNodeInfoAPI()
-	require.NoError(t, err)
-	assertEqualRollupVariables(t, tc.rollupVars, ni.StateAPI.Rollup, true)
+	rollupVars := &common.RollupVariables{}
+	assertEqualRollupVariables(t, *rollupVars, api.status.Rollup, true)
+	api.SetRollupVariables(tc.rollupVars)
+	assertEqualRollupVariables(t, tc.rollupVars, api.status.Rollup, true)
 }
 
 func assertEqualRollupVariables(t *testing.T, rollupVariables common.RollupVariables, apiVariables historydb.RollupVariablesAPI, checkBuckets bool) {
@@ -52,19 +51,17 @@ func assertEqualRollupVariables(t *testing.T, rollupVariables common.RollupVaria
 }
 
 func TestSetWDelayerVariables(t *testing.T) {
-	stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{WDelayer: &tc.wdelayerVars})
-	require.NoError(t, stateAPIUpdater.Store())
-	ni, err := api.h.GetNodeInfoAPI()
-	require.NoError(t, err)
-	assert.Equal(t, tc.wdelayerVars, ni.StateAPI.WithdrawalDelayer)
+	wdelayerVars := &common.WDelayerVariables{}
+	assert.Equal(t, *wdelayerVars, api.status.WithdrawalDelayer)
+	api.SetWDelayerVariables(tc.wdelayerVars)
+	assert.Equal(t, tc.wdelayerVars, api.status.WithdrawalDelayer)
 }
 
 func TestSetAuctionVariables(t *testing.T) {
-	stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{Auction: &tc.auctionVars})
-	require.NoError(t, stateAPIUpdater.Store())
-	ni, err := api.h.GetNodeInfoAPI()
-	require.NoError(t, err)
-	assertEqualAuctionVariables(t, tc.auctionVars, ni.StateAPI.Auction)
+	auctionVars := &common.AuctionVariables{}
+	assertEqualAuctionVariables(t, *auctionVars, api.status.Auction)
+	api.SetAuctionVariables(tc.auctionVars)
+	assertEqualAuctionVariables(t, tc.auctionVars, api.status.Auction)
 }
 
 func assertEqualAuctionVariables(t *testing.T, auctionVariables common.AuctionVariables, apiVariables historydb.AuctionVariablesAPI) {
@@ -88,6 +85,11 @@ func assertEqualAuctionVa
 }
 
 func TestUpdateNetworkInfo(t *testing.T) {
+	status := &Network{}
+	assert.Equal(t, status.LastSyncBlock, api.status.Network.LastSyncBlock)
+	assert.Equal(t, status.LastBatch, api.status.Network.LastBatch)
+	assert.Equal(t, status.CurrentSlot, api.status.Network.CurrentSlot)
+	assert.Equal(t, status.NextForgers, api.status.Network.NextForgers)
 	lastBlock := tc.blocks[3]
 	lastBatchNum := common.BatchNum(3)
 	currentSlotNum := int64(1)
@@ -116,80 +118,62 @@ func TestUpdateNetworkInfo(t *testing.T) {
 	err := api.h.AddBucketUpdatesTest(api.h.DB(), bucketUpdates)
 	require.NoError(t, err)
 
-	err = stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
-	require.NoError(t, err)
-	require.NoError(t, stateAPIUpdater.Store())
-	ni, err := api.h.GetNodeInfoAPI()
-	require.NoError(t, err)
-	assert.Equal(t, lastBlock.Num, ni.StateAPI.Network.LastSyncBlock)
-	assert.Equal(t, lastBatchNum, ni.StateAPI.Network.LastBatch.BatchNum)
-	assert.Equal(t, currentSlotNum, ni.StateAPI.Network.CurrentSlot)
-	assert.Equal(t, int(ni.StateAPI.Auction.ClosedAuctionSlots)+1, len(ni.StateAPI.Network.NextForgers))
-	assert.Equal(t, ni.StateAPI.Rollup.Buckets[0].Withdrawals, apitypes.NewBigIntStr(big.NewInt(123)))
-	assert.Equal(t, ni.StateAPI.Rollup.Buckets[2].Withdrawals, apitypes.NewBigIntStr(big.NewInt(43)))
+	err = api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
+	assert.NoError(t, err)
+	assert.Equal(t, lastBlock.Num, api.status.Network.LastSyncBlock)
+	assert.Equal(t, lastBatchNum, api.status.Network.LastBatch.BatchNum)
+	assert.Equal(t, currentSlotNum, api.status.Network.CurrentSlot)
+	assert.Equal(t, int(api.status.Auction.ClosedAuctionSlots)+1, len(api.status.Network.NextForgers))
+	assert.Equal(t, api.status.Rollup.Buckets[0].Withdrawals, apitypes.NewBigIntStr(big.NewInt(123)))
+	assert.Equal(t, api.status.Rollup.Buckets[2].Withdrawals, apitypes.NewBigIntStr(big.NewInt(43)))
 }
 
 func TestUpdateMetrics(t *testing.T) {
 	// Update Metrics needs api.status.Network.LastBatch.BatchNum to be updated
 	lastBlock := tc.blocks[3]
-	lastBatchNum := common.BatchNum(12)
+	lastBatchNum := common.BatchNum(3)
 	currentSlotNum := int64(1)
-	err := stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
-	require.NoError(t, err)
+	err := api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
+	assert.NoError(t, err)
 
-	err = stateAPIUpdater.UpdateMetrics()
-	require.NoError(t, err)
-	require.NoError(t, stateAPIUpdater.Store())
-	ni, err := api.h.GetNodeInfoAPI()
-	require.NoError(t, err)
-	assert.Greater(t, ni.StateAPI.Metrics.TransactionsPerBatch, float64(0))
-	assert.Greater(t, ni.StateAPI.Metrics.BatchFrequency, float64(0))
-	assert.Greater(t, ni.StateAPI.Metrics.TransactionsPerSecond, float64(0))
-	assert.Greater(t, ni.StateAPI.Metrics.TokenAccounts, int64(0))
-	assert.Greater(t, ni.StateAPI.Metrics.Wallets, int64(0))
-	assert.Greater(t, ni.StateAPI.Metrics.AvgTransactionFee, float64(0))
+	err = api.UpdateMetrics()
+	assert.NoError(t, err)
+	assert.Greater(t, api.status.Metrics.TransactionsPerBatch, float64(0))
+	assert.Greater(t, api.status.Metrics.BatchFrequency, float64(0))
+	assert.Greater(t, api.status.Metrics.TransactionsPerSecond, float64(0))
+	assert.Greater(t, api.status.Metrics.TotalAccounts, int64(0))
+	assert.Greater(t, api.status.Metrics.TotalBJJs, int64(0))
+	assert.Greater(t, api.status.Metrics.AvgTransactionFee, float64(0))
 }
 
 func TestUpdateRecommendedFee(t *testing.T) {
-	err := stateAPIUpdater.UpdateRecommendedFee()
-	require.NoError(t, err)
-	require.NoError(t, stateAPIUpdater.Store())
-	var minFeeUSD float64
-	if api.l2 != nil {
-		minFeeUSD = api.l2.MinFeeUSD()
-	}
-	ni, err := api.h.GetNodeInfoAPI()
-	require.NoError(t, err)
-	assert.Greater(t, ni.StateAPI.RecommendedFee.ExistingAccount, minFeeUSD)
-	assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccount,
-		ni.StateAPI.RecommendedFee.ExistingAccount*
-			historydb.CreateAccountExtraFeePercentage)
-	assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccountInternal,
-		ni.StateAPI.RecommendedFee.ExistingAccount*
-			historydb.CreateAccountInternalExtraFeePercentage)
+	err := api.UpdateRecommendedFee()
+	assert.NoError(t, err)
+	assert.Greater(t, api.status.RecommendedFee.ExistingAccount, float64(0))
+	assert.Equal(t, api.status.RecommendedFee.CreatesAccount,
+		api.status.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
+	assert.Equal(t, api.status.RecommendedFee.CreatesAccountAndRegister,
+		api.status.RecommendedFee.ExistingAccount*createAccountInternalExtraFeePercentage)
 }
 
 func TestGetState(t *testing.T) {
 	lastBlock := tc.blocks[3]
-	lastBatchNum := common.BatchNum(12)
+	lastBatchNum := common.BatchNum(3)
 	currentSlotNum := int64(1)
-	stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{
-		Rollup:   &tc.rollupVars,
-		Auction:  &tc.auctionVars,
-		WDelayer: &tc.wdelayerVars,
-	})
-	err := stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
-	require.NoError(t, err)
-	err = stateAPIUpdater.UpdateMetrics()
-	require.NoError(t, err)
-	err = stateAPIUpdater.UpdateRecommendedFee()
-	require.NoError(t, err)
-	require.NoError(t, stateAPIUpdater.Store())
+	api.SetRollupVariables(tc.rollupVars)
+	api.SetWDelayerVariables(tc.wdelayerVars)
+	api.SetAuctionVariables(tc.auctionVars)
+	err := api.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
+	assert.NoError(t, err)
+	err = api.UpdateMetrics()
+	assert.NoError(t, err)
+	err = api.UpdateRecommendedFee()
+	assert.NoError(t, err)
 
 	endpoint := apiURL + "state"
 	var status testStatus
 
-	require.NoError(t, doGoodReq("GET", endpoint, nil, &status))
+	assert.NoError(t, doGoodReq("GET", endpoint, nil, &status))
 
 	// SC vars
 	// UpdateNetworkInfo will overwrite buckets withdrawal values
@@ -210,21 +194,19 @@ func TestGetState(t *testing.T) {
 	assert.Greater(t, status.Metrics.TransactionsPerBatch, float64(0))
 	assert.Greater(t, status.Metrics.BatchFrequency, float64(0))
 	assert.Greater(t, status.Metrics.TransactionsPerSecond, float64(0))
-	assert.Greater(t, status.Metrics.TokenAccounts, int64(0))
-	assert.Greater(t, status.Metrics.Wallets, int64(0))
+	assert.Greater(t, status.Metrics.TotalAccounts, int64(0))
+	assert.Greater(t, status.Metrics.TotalBJJs, int64(0))
 	assert.Greater(t, status.Metrics.AvgTransactionFee, float64(0))
 	// Recommended fee
 	// TODO: perform real asserts (not just greater than 0)
 	assert.Greater(t, status.RecommendedFee.ExistingAccount, float64(0))
 	assert.Equal(t, status.RecommendedFee.CreatesAccount,
-		status.RecommendedFee.ExistingAccount*
-			historydb.CreateAccountExtraFeePercentage)
-	assert.Equal(t, status.RecommendedFee.CreatesAccountInternal,
-		status.RecommendedFee.ExistingAccount*
-			historydb.CreateAccountInternalExtraFeePercentage)
+		status.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
+	assert.Equal(t, status.RecommendedFee.CreatesAccountAndRegister,
+		status.RecommendedFee.ExistingAccount*createAccountInternalExtraFeePercentage)
 }
 
-func assertNextForgers(t *testing.T, expected, actual []historydb.NextForgerAPI) {
+func assertNextForgers(t *testing.T, expected, actual []NextForger) {
 	assert.Equal(t, len(expected), len(actual))
 	for i := range expected {
 		// ignore timestamps and other metadata
@@ -1,162 +0,0 @@
-package stateapiupdater
-
-import (
-	"database/sql"
-	"sync"
-
-	"github.com/hermeznetwork/hermez-node/common"
-	"github.com/hermeznetwork/hermez-node/db/historydb"
-	"github.com/hermeznetwork/tracerr"
-)
-
-// Updater is an utility object to facilitate updating the StateAPI
-type Updater struct {
-	hdb    *historydb.HistoryDB
-	state  historydb.StateAPI
-	config historydb.NodeConfig
-	vars   common.SCVariablesPtr
-	consts historydb.Constants
-	rw     sync.RWMutex
-}
-
-// NewUpdater creates a new Updater
-func NewUpdater(hdb *historydb.HistoryDB, config *historydb.NodeConfig, vars *common.SCVariables,
-	consts *historydb.Constants) *Updater {
-	u := Updater{
-		hdb:    hdb,
-		config: *config,
-		consts: *consts,
-		state: historydb.StateAPI{
-			NodePublicInfo: historydb.NodePublicInfo{
-				ForgeDelay: config.ForgeDelay,
-			},
-		},
-	}
-	u.SetSCVars(vars.AsPtr())
-	return &u
-}
-
-// Store the State in the HistoryDB
-func (u *Updater) Store() error {
-	u.rw.RLock()
-	defer u.rw.RUnlock()
-	return tracerr.Wrap(u.hdb.SetStateInternalAPI(&u.state))
-}
-
-// SetSCVars sets the smart contract vars (ony updates those that are not nil)
-func (u *Updater) SetSCVars(vars *common.SCVariablesPtr) {
-	u.rw.Lock()
-	defer u.rw.Unlock()
-	if vars.Rollup != nil {
-		u.vars.Rollup = vars.Rollup
-		rollupVars := historydb.NewRollupVariablesAPI(u.vars.Rollup)
-		u.state.Rollup = *rollupVars
-	}
-	if vars.Auction != nil {
-		u.vars.Auction = vars.Auction
-		auctionVars := historydb.NewAuctionVariablesAPI(u.vars.Auction)
-		u.state.Auction = *auctionVars
-	}
-	if vars.WDelayer != nil {
-		u.vars.WDelayer = vars.WDelayer
-		u.state.WithdrawalDelayer = *u.vars.WDelayer
-	}
-}
-
-// UpdateRecommendedFee update Status.RecommendedFee information
-func (u *Updater) UpdateRecommendedFee() error {
-	recommendedFee, err := u.hdb.GetRecommendedFee(u.config.MinFeeUSD, u.config.MaxFeeUSD)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	u.rw.Lock()
-	u.state.RecommendedFee = *recommendedFee
-	u.rw.Unlock()
-	return nil
-}
-
-// UpdateMetrics update Status.Metrics information
-func (u *Updater) UpdateMetrics() error {
-	u.rw.RLock()
-	lastBatch := u.state.Network.LastBatch
-	u.rw.RUnlock()
-	if lastBatch == nil {
-		return nil
-	}
-	lastBatchNum := lastBatch.BatchNum
-	metrics, poolLoad, err := u.hdb.GetMetricsInternalAPI(lastBatchNum)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	u.rw.Lock()
-	u.state.Metrics = *metrics
-	u.state.NodePublicInfo.PoolLoad = poolLoad
-	u.rw.Unlock()
-	return nil
-}
-
-// UpdateNetworkInfoBlock update Status.Network block related information
-func (u *Updater) UpdateNetworkInfoBlock(lastEthBlock, lastSyncBlock common.Block) {
-	u.rw.Lock()
-	u.state.Network.LastSyncBlock = lastSyncBlock.Num
-	u.state.Network.LastEthBlock = lastEthBlock.Num
-	u.rw.Unlock()
-}
-
-// UpdateNetworkInfo update Status.Network information
-func (u *Updater) UpdateNetworkInfo(
-	lastEthBlock, lastSyncBlock common.Block,
-	lastBatchNum common.BatchNum, currentSlot int64,
-) error {
-	// Get last batch in API format
-	lastBatch, err := u.hdb.GetBatchInternalAPI(lastBatchNum)
-	if tracerr.Unwrap(err) == sql.ErrNoRows {
-		lastBatch = nil
-	} else if err != nil {
-		return tracerr.Wrap(err)
-	}
-	u.rw.RLock()
-	auctionVars := u.vars.Auction
-	u.rw.RUnlock()
-	// Get next forgers
-	lastClosedSlot := currentSlot + int64(auctionVars.ClosedAuctionSlots)
-	nextForgers, err := u.hdb.GetNextForgersInternalAPI(auctionVars, &u.consts.Auction,
-		lastSyncBlock, currentSlot, lastClosedSlot)
-	if tracerr.Unwrap(err) == sql.ErrNoRows {
-		nextForgers = nil
-	} else if err != nil {
-		return tracerr.Wrap(err)
-	}
-
-	bucketUpdates, err := u.hdb.GetBucketUpdatesInternalAPI()
-	if err == sql.ErrNoRows {
-		bucketUpdates = nil
-	} else if err != nil {
-		return tracerr.Wrap(err)
-	}
-
-	u.rw.Lock()
-	// Update NodeInfo struct
-	for i, bucketParams := range u.state.Rollup.Buckets {
-		for _, bucketUpdate := range bucketUpdates {
-			if bucketUpdate.NumBucket == i {
-				bucketParams.Withdrawals = bucketUpdate.Withdrawals
-				u.state.Rollup.Buckets[i] = bucketParams
-				break
-			}
-		}
-	}
-	// Update pending L1s
-	pendingL1s, err := u.hdb.GetUnforgedL1UserTxsCount()
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	u.state.Network.LastSyncBlock = lastSyncBlock.Num
-	u.state.Network.LastEthBlock = lastEthBlock.Num
-	u.state.Network.LastBatch = lastBatch
-	u.state.Network.CurrentSlot = currentSlot
-	u.state.Network.NextForgers = nextForgers
-	u.state.Network.PendingL1Txs = pendingL1s
-	u.rw.Unlock()
-	return nil
-}
@@ -60,9 +60,9 @@ externalDocs:
   url: 'https://hermez.io'
 servers:
   - description: Hosted mock up
-    url: https://apimock.hermez.network/v1
+    url: https://apimock.hermez.network
   - description: Localhost mock Up
-    url: http://localhost:4010/v1
+    url: http://localhost:4010
 tags:
   - name: Coordinator
     description: Endpoints used by the nodes running in coordinator mode. They are used to interact with the network.
@@ -1329,6 +1329,13 @@ components:
         type: string
         description: Moment in which the transaction was added to the pool.
         format: date-time
+      batchNum:
+        type: integer
+        description: Identifier of a batch. Every new forged batch increases by one the batchNum, starting at 0.
+        minimum: 0
+        maximum: 4294967295
+        nullable: true
+        example: null
       requestFromAccountIndex:
         type: string
         description: >-
@@ -1383,6 +1390,7 @@ components:
         $ref: '#/components/schemas/Token'
       example:
         amount: '100000000000000'
+        batchNum:
         fee: 0
         fromAccountIndex: hez:SCC:256
         fromBJJ: hez:r_trOasVEk0zNaalOoS9aLedu6mO7jI5XTIPu_zGXoyn
@@ -1430,6 +1438,7 @@ components:
         - info
         - signature
         - timestamp
+        - batchNum
         - requestFromAccountIndex
         - requestToAccountIndex
         - requestToHezEthereumAddress
@@ -2569,26 +2578,6 @@ components:
       description: List of next coordinators to forge.
       items:
         $ref: '#/components/schemas/NextForger'
-    Node:
-      type: object
-      description: Configuration and metrics of the coordinator node. Note that this is specific for each coordinator.
-      properties:
-        forgeDelay:
-          type: number
-          description: |
-            Delay in seconds after which a batch is forged if the slot is
-            already committed. If set to 0s, the coordinator will continuously
-            forge at the maximum rate. Note that this is a configuration parameter of a node,
-            so each coordinator may have a different value.
-          example: 193.4
-        poolLoad:
-          type: number
-          description: Number of pending transactions in the pool
-          example: 23201
-      additionalProperties: false
-      required:
-        - forgeDelay
-        - poolLoad
     State:
       type: object
       description: Gobal variables of the network
@@ -2605,8 +2594,6 @@ components:
           $ref: '#/components/schemas/StateWithdrawDelayer'
         recommendedFee:
           $ref: '#/components/schemas/RecommendedFee'
-        node:
-          $ref: '#/components/schemas/Node'
       additionalProperties: false
       required:
         - network
@@ -2615,7 +2602,6 @@ components:
         - auction
         - withdrawalDelayer
        - recommendedFee
-        - node
    StateNetwork:
      type: object
      description: Gobal statistics of the network
@@ -2639,10 +2625,6 @@ components:
          - example: 2334
        nextForgers:
          $ref: '#/components/schemas/NextForgers'
-        pendingL1Transactions:
-          type: number
-          description: Number of pending L1 transactions (added in the smart contract queue but not forged).
-          example: 22
        additionalProperties: false
        required:
          - lastEthereumBlock
@@ -2818,11 +2800,11 @@ components:
        type: number
        description: Average transactions per second in the last 24 hours.
        example: 302.3
-      tokenAccounts:
+      totalAccounts:
        type: integer
        description: Number of created accounts.
        example: 90473
-      wallets:
+      totalBJJs:
        type: integer
        description: Number of different registered BJJs.
        example: 23067
@@ -2830,19 +2812,14 @@ components:
        type: number
        description: Average fee percentage paid for L2 transactions in the last 24 hours.
        example: 1.54
-      estimatedTimeToForgeL1:
-        type: number
-        description: Estimated time needed to forge a L1 transaction, from the time it's added on the smart contract, until it's actualy forged. In seconds.
-        example: 193.4
      additionalProperties: false
      required:
        - transactionsPerBatch
        - batchFrequency
        - transactionsPerSecond
-        - tokenAccounts
-        - wallets
+        - totalAccounts
+        - totalBJJs
        - avgTransactionFee
-        - estimatedTimeToForgeL1
    PendingItems:
      type: integer
      description: Amount of items that will be returned in subsequent calls to the endpoint, as long as they are done with same filters. When the value is 0 it means that all items have been sent.
@@ -53,7 +53,7 @@ func (a *API) getTokens(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type tokensResponse struct {
 		Tokens       []historydb.TokenWithUSD `json:"tokens"`
 		PendingItems uint64                   `json:"pendingItems"`
@@ -42,7 +42,7 @@ func (a *API) getHistoryTxs(c *gin.Context) {
 		return
 	}
 
-	// Build successful response
+	// Build succesfull response
 	type txsResponse struct {
 		Txs          []historydb.TxAPI `json:"transactions"`
 		PendingItems uint64            `json:"pendingItems"`
@@ -66,6 +66,6 @@ func (a *API) getHistoryTx(c *gin.Context) {
 		retSQLErr(err, c)
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	c.JSON(http.StatusOK, tx)
 }
@@ -8,7 +8,7 @@ import (
 	"testing"
 	"time"
 
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/test"
@@ -455,7 +455,7 @@ func TestGetHistoryTx(t *testing.T) {
 	// 400, due invalid TxID
 	err := doBadReq("GET", endpoint+"0x001", nil, 400)
 	assert.NoError(t, err)
-	// 404, due nonexistent TxID in DB
+	// 404, due inexistent TxID in DB
 	err = doBadReq("GET", endpoint+"0x00eb5e95e1ce5e9f6c4ed402d415e8d0bdd7664769cfd2064d28da04a2c76be432", nil, 404)
 	assert.NoError(t, err)
 }
@@ -2,13 +2,12 @@ package api
 
 import (
 	"errors"
-	"fmt"
 	"math/big"
 	"net/http"
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/gin-gonic/gin"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
 	"github.com/hermeznetwork/tracerr"
@@ -28,7 +27,6 @@ func (a *API) postPoolTx(c *gin.Context) {
 		retBadReq(err, c)
 		return
 	}
-	writeTx.ClientIP = c.ClientIP()
 	// Insert to DB
 	if err := a.l2.AddTxAPI(writeTx); err != nil {
 		retSQLErr(err, c)
@@ -51,7 +49,7 @@ func (a *API) getPoolTx(c *gin.Context) {
 		retSQLErr(err, c)
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	c.JSON(http.StatusOK, tx)
 }
 
@@ -171,21 +169,16 @@ func (a *API) verifyPoolL2TxWrite(txw *l2db.PoolL2TxWrite) error {
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
+	// Get public key
+	account, err := a.s.LastGetAccount(poolTx.FromIdx)
+	if err != nil {
+		return tracerr.Wrap(err)
+	}
 	// Validate feeAmount
 	_, err = common.CalcFeeAmount(poolTx.Amount, poolTx.Fee)
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
-	// Get public key
-	account, err := a.h.GetCommonAccountAPI(poolTx.FromIdx)
-	if err != nil {
-		return tracerr.Wrap(fmt.Errorf("Error getting from account: %w", err))
-	}
-	// Validate TokenID
-	if poolTx.TokenID != account.TokenID {
-		return tracerr.Wrap(fmt.Errorf("tx.TokenID (%v) != account.TokenID (%v)",
-			poolTx.TokenID, account.TokenID))
-	}
 	// Check signature
 	if !poolTx.VerifySignature(a.chainID, account.BJJ) {
 		return tracerr.Wrap(errors.New("wrong signature"))
@@ -2,20 +2,14 @@ package api
 
 import (
 	"bytes"
-	"crypto/ecdsa"
-	"encoding/binary"
-	"encoding/hex"
 	"encoding/json"
-	"math/big"
 	"testing"
 	"time"
 
-	ethCrypto "github.com/ethereum/go-ethereum/crypto"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
 )
 
 // testPoolTxReceive is a struct to be used to assert the response
@@ -176,9 +170,9 @@ func TestPoolTxs(t *testing.T) {
 	fetchedTxID := common.TxID{}
 	for _, tx := range tc.poolTxsToSend {
 		jsonTxBytes, err := json.Marshal(tx)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		jsonTxReader := bytes.NewReader(jsonTxBytes)
-		require.NoError(
+		assert.NoError(
 			t, doGoodReq(
 				"POST",
 				endpoint,
@@ -193,42 +187,42 @@ func TestPoolTxs(t *testing.T) {
 	badTx.Amount = "99950000000000000"
 	badTx.Fee = 255
 	jsonTxBytes, err := json.Marshal(badTx)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	jsonTxReader := bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Wrong signature
 	badTx = tc.poolTxsToSend[0]
 	badTx.FromIdx = "hez:foo:1000"
 	jsonTxBytes, err = json.Marshal(badTx)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	jsonTxReader = bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Wrong to
 	badTx = tc.poolTxsToSend[0]
 	ethAddr := "hez:0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF"
 	badTx.ToEthAddr = &ethAddr
 	badTx.ToIdx = nil
 	jsonTxBytes, err = json.Marshal(badTx)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	jsonTxReader = bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Wrong rq
 	badTx = tc.poolTxsToSend[0]
 	rqFromIdx := "hez:foo:30"
 	badTx.RqFromIdx = &rqFromIdx
 	jsonTxBytes, err = json.Marshal(badTx)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	jsonTxReader = bytes.NewReader(jsonTxBytes)
 	err = doBadReq("POST", endpoint, jsonTxReader, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// GET
 	endpoint += "/"
 	for _, tx := range tc.poolTxsToReceive {
 		fetchedTx := testPoolTxReceive{}
-		require.NoError(
+		assert.NoError(
 			t, doGoodReq(
 				"GET",
 				endpoint+tx.TxID.String(),
@@ -239,10 +233,10 @@ func TestPoolTxs(t *testing.T) {
 	}
 	// 400, due invalid TxID
 	err = doBadReq("GET", endpoint+"0xG2241b6f2b1dd772dba391f4a1a3407c7c21f598d86e2585a14e616fb4a255f823", nil, 400)
-	require.NoError(t, err)
+	assert.NoError(t, err)
-	// 404, due nonexistent TxID in DB
+	// 404, due inexistent TxID in DB
 	err = doBadReq("GET", endpoint+"0x02241b6f2b1dd772dba391f4a1a3407c7c21f598d86e2585a14e616fb4a255f823", nil, 404)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 }
 
 func assertPoolTx(t *testing.T, expected, actual testPoolTxReceive) {
@@ -262,73 +256,3 @@ func assertPoolTx(t *testing.T, expected, actual testPoolTxReceive) {
 	}
 	assert.Equal(t, expected, actual)
 }
-
-// TestAllTosNull test that the API doesn't accept txs with all the TOs set to null (to eth, to bjj, to idx)
-func TestAllTosNull(t *testing.T) {
-	// Generate account:
-	// Ethereum private key
-	var key ecdsa.PrivateKey
-	key.D = big.NewInt(int64(4444)) // only for testing
-	key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
-	key.Curve = ethCrypto.S256()
-	addr := ethCrypto.PubkeyToAddress(key.PublicKey)
-	// BJJ private key
-	var sk babyjub.PrivateKey
-	var iBytes [8]byte
-	binary.LittleEndian.PutUint64(iBytes[:], 4444)
-	copy(sk[:], iBytes[:]) // only for testing
-	account := common.Account{
-		Idx:      4444,
-		TokenID:  0,
-		BatchNum: 1,
-		BJJ:      sk.Public().Compress(),
-		EthAddr:  addr,
-		Nonce:    0,
-		Balance:  big.NewInt(1000000),
-	}
-	// Add account to history DB (required to verify signature)
-	err := api.h.AddAccounts([]common.Account{account})
-	assert.NoError(t, err)
-	// Genrate tx with all tos set to nil (to eth, to bjj, to idx)
-	tx := common.PoolL2Tx{
-		FromIdx: account.Idx,
-		TokenID: account.TokenID,
-		Amount:  big.NewInt(1000),
-		Fee:     200,
-		Nonce:   0,
-	}
-	// Set idx and type manually, and check that the function doesn't allow it
-	_, err = common.NewPoolL2Tx(&tx)
-	assert.Error(t, err)
-	tx.Type = common.TxTypeTransfer
-	var txID common.TxID
-	txIDRaw, err := hex.DecodeString("02e66e24f7f25272906647c8fd1d7fe8acf3cf3e9b38ffc9f94bbb5090dc275073")
-	assert.NoError(t, err)
-	copy(txID[:], txIDRaw)
-	tx.TxID = txID
-	// Sign tx
-	toSign, err := tx.HashToSign(0)
-	assert.NoError(t, err)
-	sig := sk.SignPoseidon(toSign)
-	tx.Signature = sig.Compress()
-	// Transform common.PoolL2Tx ==> testPoolTxSend
-	txToSend := testPoolTxSend{
-		TxID:      tx.TxID,
-		Type:      tx.Type,
-		TokenID:   tx.TokenID,
-		FromIdx:   idxToHez(tx.FromIdx, "ETH"),
-		Amount:    tx.Amount.String(),
-		Fee:       tx.Fee,
-		Nonce:     tx.Nonce,
-		Signature: tx.Signature,
-	}
-	// Send tx to the API
-	jsonTxBytes, err := json.Marshal(txToSend)
-	require.NoError(t, err)
-	jsonTxReader := bytes.NewReader(jsonTxBytes)
-	err = doBadReq("POST", apiURL+"transactions-pool", jsonTxReader, 400)
-	require.NoError(t, err)
-	// Clean historyDB: the added account shouldn't be there for other tests
-	_, err = api.h.DB().DB.Exec("delete from account where idx = 4444")
-	assert.NoError(t, err)
-}
@@ -4,6 +4,7 @@ import (
|
|||||||
"database/sql/driver"
|
"database/sql/driver"
|
||||||
"encoding/base64"
|
"encoding/base64"
|
||||||
"encoding/hex"
|
"encoding/hex"
|
||||||
|
"encoding/json"
|
||||||
"errors"
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"math/big"
|
"math/big"
|
||||||
@@ -18,10 +19,7 @@ import (
|
|||||||
|
|
||||||
// BigIntStr is used to scan/value *big.Int directly into strings from/to sql DBs.
|
// BigIntStr is used to scan/value *big.Int directly into strings from/to sql DBs.
|
||||||
// It assumes that *big.Int are inserted/fetched to/from the DB using the BigIntMeddler meddler
|
// It assumes that *big.Int are inserted/fetched to/from the DB using the BigIntMeddler meddler
|
||||||
// defined at github.com/hermeznetwork/hermez-node/db. Since *big.Int is
|
// defined at github.com/hermeznetwork/hermez-node/db
|
||||||
// stored as DECIMAL in SQL, there's no need to implement Scan()/Value()
|
|
||||||
// because DECIMALS are encoded/decoded as strings by the sql driver, and
|
|
||||||
// BigIntStr is already a string.
|
|
||||||
type BigIntStr string
|
type BigIntStr string
|
||||||
|
|
||||||
// NewBigIntStr creates a *BigIntStr from a *big.Int.
|
// NewBigIntStr creates a *BigIntStr from a *big.Int.
|
||||||
@@ -34,6 +32,34 @@ func NewBigIntStr(bigInt *big.Int) *BigIntStr {
|
|||||||
return &bigIntStr
|
return &bigIntStr
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// Scan implements Scanner for database/sql
|
||||||
|
func (b *BigIntStr) Scan(src interface{}) error {
|
||||||
|
srcBytes, ok := src.([]byte)
|
||||||
|
if !ok {
|
||||||
|
return tracerr.Wrap(fmt.Errorf("can't scan %T into apitypes.BigIntStr", src))
|
||||||
|
}
|
||||||
|
// bytes to *big.Int
|
||||||
|
bigInt := new(big.Int).SetBytes(srcBytes)
|
||||||
|
// *big.Int to BigIntStr
|
||||||
|
bigIntStr := NewBigIntStr(bigInt)
|
||||||
|
if bigIntStr == nil {
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
*b = *bigIntStr
|
||||||
|
return nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// Value implements valuer for database/sql
|
||||||
|
func (b BigIntStr) Value() (driver.Value, error) {
|
||||||
|
// string to *big.Int
|
||||||
|
bigInt, ok := new(big.Int).SetString(string(b), 10)
|
||||||
|
if !ok || bigInt == nil {
|
||||||
|
return nil, tracerr.Wrap(errors.New("invalid representation of a *big.Int"))
|
||||||
|
}
|
||||||
|
// *big.Int to bytes
|
||||||
|
return bigInt.Bytes(), nil
|
||||||
|
}
|
||||||
|
|
||||||
// StrBigInt is used to unmarshal BigIntStr directly into an alias of big.Int
|
// StrBigInt is used to unmarshal BigIntStr directly into an alias of big.Int
|
||||||
type StrBigInt big.Int
|
type StrBigInt big.Int
|
||||||
|
|
||||||
@@ -47,19 +73,25 @@ func (s *StrBigInt) UnmarshalText(text []byte) error {
 	return nil
 }
 
-// CollectedFeesAPI is send common.batch.CollectedFee through the API
-type CollectedFeesAPI map[common.TokenID]BigIntStr
+// CollectedFees is used to retrieve common.batch.CollectedFee from the DB
+type CollectedFees map[common.TokenID]BigIntStr
 
-// NewCollectedFeesAPI creates a new CollectedFeesAPI from a *big.Int map
-func NewCollectedFeesAPI(m map[common.TokenID]*big.Int) CollectedFeesAPI {
-	c := CollectedFeesAPI(make(map[common.TokenID]BigIntStr))
-	for k, v := range m {
-		c[k] = *NewBigIntStr(v)
+// UnmarshalJSON unmarshals a json representation of map[common.TokenID]*big.Int
+func (c *CollectedFees) UnmarshalJSON(text []byte) error {
+	bigIntMap := make(map[common.TokenID]*big.Int)
+	if err := json.Unmarshal(text, &bigIntMap); err != nil {
+		return tracerr.Wrap(err)
 	}
-	return c
+	*c = CollectedFees(make(map[common.TokenID]BigIntStr))
+	for k, v := range bigIntMap {
+		bStr := NewBigIntStr(v)
+		(CollectedFees(*c)[k]) = *bStr
+	}
+	// *c = CollectedFees(bStrMap)
+	return nil
 }
 
-// HezEthAddr is used to scan/value Ethereum Address directly into strings that follow the Ethereum address hez format (^hez:0x[a-fA-F0-9]{40}$) from/to sql DBs.
+// HezEthAddr is used to scan/value Ethereum Address directly into strings that follow the Ethereum address hez fotmat (^hez:0x[a-fA-F0-9]{40}$) from/to sql DBs.
 // It assumes that Ethereum Address are inserted/fetched to/from the DB using the default Scan/Value interface
 type HezEthAddr string
 
@@ -111,7 +143,7 @@ func (s *StrHezEthAddr) UnmarshalText(text []byte) error {
 	return nil
 }
 
-// HezBJJ is used to scan/value *babyjub.PublicKeyComp directly into strings that follow the BJJ public key hez format (^hez:[A-Za-z0-9_-]{44}$) from/to sql DBs.
+// HezBJJ is used to scan/value *babyjub.PublicKeyComp directly into strings that follow the BJJ public key hez fotmat (^hez:[A-Za-z0-9_-]{44}$) from/to sql DBs.
 // It assumes that *babyjub.PublicKeyComp are inserted/fetched to/from the DB using the default Scan/Value interface
 type HezBJJ string
 
@@ -184,7 +216,7 @@ func (b HezBJJ) Value() (driver.Value, error) {
 // StrHezBJJ is used to unmarshal HezBJJ directly into an alias of babyjub.PublicKeyComp
 type StrHezBJJ babyjub.PublicKeyComp
 
-// UnmarshalText unmarshalls a StrHezBJJ
+// UnmarshalText unmarshals a StrHezBJJ
 func (s *StrHezBJJ) UnmarshalText(text []byte) error {
 	bjj, err := hezStrToBJJ(string(text))
 	if err != nil {
@@ -194,8 +226,8 @@ func (s *StrHezBJJ) UnmarshalText(text []byte) error {
 	return nil
 }
 
-// HezIdx is used to value common.Idx directly into strings that follow the Idx key hez format (hez:tokenSymbol:idx) to sql DBs.
-// Note that this can only be used to insert to DB since there is no way to automatically read from the DB since it needs the tokenSymbol
+// HezIdx is used to value common.Idx directly into strings that follow the Idx key hez fotmat (hez:tokenSymbol:idx) to sql DBs.
+// Note that this can only be used to insert to DB since there is no way to automaticaly read from the DB since it needs the tokenSymbol
 type HezIdx string
 
 // StrHezIdx is used to unmarshal HezIdx directly into an alias of common.Idx
@@ -28,8 +28,7 @@ type ConfigBatch struct {
 
 // NewBatchBuilder constructs a new BatchBuilder, and executes the bb.Reset
 // method
-func NewBatchBuilder(dbpath string, synchronizerStateDB *statedb.StateDB, batchNum common.BatchNum,
-	nLevels uint64) (*BatchBuilder, error) {
+func NewBatchBuilder(dbpath string, synchronizerStateDB *statedb.StateDB, batchNum common.BatchNum, nLevels uint64) (*BatchBuilder, error) {
 	localStateDB, err := statedb.NewLocalStateDB(
 		statedb.Config{
 			Path: dbpath,
@@ -65,10 +64,7 @@ func (bb *BatchBuilder) BuildBatch(coordIdxs []common.Idx, configBatch *ConfigBa
 	tp := txprocessor.NewTxProcessor(bbStateDB, configBatch.TxProcessorConfig)
 
 	ptOut, err := tp.ProcessTxs(coordIdxs, l1usertxs, l1coordinatortxs, pooll2txs)
-	if err != nil {
-		return nil, tracerr.Wrap(err)
-	}
-	return ptOut.ZKInputs, nil
+	return ptOut.ZKInputs, tracerr.Wrap(err)
 }
 
 // LocalStateDB returns the underlying LocalStateDB
@@ -15,8 +15,7 @@ func TestBatchBuilder(t *testing.T) {
 	require.Nil(t, err)
 	defer assert.Nil(t, os.RemoveAll(dir))
 
-	synchDB, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128,
-		Type: statedb.TypeBatchBuilder, NLevels: 0})
+	synchDB, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128, Type: statedb.TypeBatchBuilder, NLevels: 0})
 	assert.Nil(t, err)
 
 	bbDir, err := ioutil.TempDir("", "tmpBatchBuilderDB")
1 cli/node/.gitignore (vendored)
@@ -1,3 +1,2 @@
 cfg.example.secret.toml
 cfg.toml
-node
@@ -8,7 +8,7 @@ The `hermez-node` has been tested with go version 1.14
 
 ## Usage
 
-```shell
+```
 NAME:
    hermez-node - A new cli application
 
@@ -16,19 +16,18 @@ USAGE:
    node [global options] command [command options] [arguments...]
 
 VERSION:
-   v0.1.0-6-gd8a50c5
+   0.1.0-alpha
 
 COMMANDS:
-   version    Show the application version
    importkey  Import ethereum private key
    genbjj     Generate a new BabyJubJub key
-   wipesql    Wipe the SQL DB (HistoryDB and L2DB) and the StateDBs, leaving the DB in a clean state
+   wipesql    Wipe the SQL DB (HistoryDB and L2DB), leaving the DB in a clean state
    run        Run the hermez-node in the indicated mode
-   serveapi   Serve the API only
-   discard    Discard blocks up to a specified block number
    help, h    Shows a list of commands or help for one command
 
 GLOBAL OPTIONS:
+   --mode MODE    Set node MODE (can be "sync" or "coord")
+   --cfg FILE     Node configuration FILE
    --help, -h     show help (default: false)
    --version, -v  print the version (default: false)
 ```
@@ -55,10 +54,6 @@ To read the documentation of each configuration parameter, please check the
 with `Coordinator` are only used in coord mode, and don't need to be defined
 when running the coordinator in sync mode
 
-When running the API in standalone mode, the required configuration is a subset
-of the node configuration. Please, check the `type APIServer` at
-[config/config.go](../../config/config.go) to learn about all the parametes.
-
 ### Notes
 
 - The private key corresponding to the parameter `Coordinator.ForgerAddress` needs to be imported in the ethereum keystore
@@ -80,7 +75,7 @@ of the node configuration. Please, check the `type APIServer` at
 
 Building the node requires using the packr utility to bundle the database
 migrations inside the resulting binary. Install the packr utility with:
-```shell
+```
 cd /tmp && go get -u github.com/gobuffalo/packr/v2/packr2 && cd -
 ```
 
@@ -88,7 +83,7 @@ Make sure your `$PATH` contains `$GOPATH/bin`, otherwise the packr utility will
 not be found.
 
 Now build the node executable:
-```shell
+```
 cd ../../db && packr2 && cd -
 go build .
 cd ../../db && packr2 clean && cd -
@@ -103,48 +98,35 @@ run the following examples by replacing `./node` with `go run .` and executing
 them in the `cli/node` directory to build from source and run at the same time.
 
 Run the node in mode synchronizer:
-```shell
-./node run --mode sync --cfg cfg.buidler.toml
+```
+./node --mode sync --cfg cfg.buidler.toml run
 ```
 
 Run the node in mode coordinator:
-```shell
-./node run --mode coord --cfg cfg.buidler.toml
 ```
+./node --mode coord --cfg cfg.buidler.toml run
-
-Serve the API in standalone mode. This command allows serving the API just
-with access to the PostgreSQL database that a node is using. Several instances
-of `serveapi` can be running at the same time with a single PostgreSQL
-database:
-```shell
-./node serveapi --mode coord --cfg cfg.buidler.toml
 ```
 
 Import an ethereum private key into the keystore:
-```shell
-./node importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x618b35096c477aab18b11a752be619f0023a539bb02dd6c813477a6211916cde
+```
+./node --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x618b35096c477aab18b11a752be619f0023a539bb02dd6c813477a6211916cde
 ```
 
 Generate a new BabyJubJub key pair:
-```shell
-./node genbjj
 ```
+./node --mode coord --cfg cfg.buidler.toml genbjj
-
-Check the binary version:
-```shell
-./node version
 ```
 
 Wipe the entier SQL database (this will destroy all synchronized and pool
 data):
-```shell
-./node wipesql --mode coord --cfg cfg.buidler.toml
+```
+./node --mode coord --cfg cfg.buidler.toml wipesql
 ```
 
 Discard all synchronized blocks and associated state up to a given block
 number. This command is useful in case the synchronizer reaches an invalid
 state and you want to roll back a few blocks and try again (maybe with some
 fixes in the code).
-```shell
-./node discard --mode coord --cfg cfg.buidler.toml --block 8061330
+```
+./node --mode coord --cfg cfg.buidler.toml discard --block 8061330
 ```
@@ -1,24 +0,0 @@
-[API]
-Address = "localhost:8386"
-Explorer = true
-MaxSQLConnections = 10
-SQLConnectionTimeout = "2s"
-
-[PostgreSQL]
-PortWrite = 5432
-HostWrite = "localhost"
-UserWrite = "hermez"
-PasswordWrite = "yourpasswordhere"
-NameWrite = "hermez"
-
-[Coordinator.L2DB]
-SafetyPeriod = 10
-MaxTxs = 512
-TTL = "24h"
-PurgeBatchDelay = 10
-InvalidateBatchDelay = 20
-PurgeBlockDelay = 10
-InvalidateBlockDelay = 20
-
-[Coordinator.API]
-Coordinator = true
@@ -8,52 +8,23 @@ SQLConnectionTimeout = "2s"
 
 [PriceUpdater]
 Interval = "10s"
-URLBitfinexV2 = "https://api-pub.bitfinex.com/v2/"
-URLCoinGeckoV3 = "https://api.coingecko.com/api/v3/"
-# Available update methods:
-# - coingeckoV3 (recommended): get price by SC addr using coingecko API
-# - bitfinexV2: get price by token symbol using bitfinex API
-# - static (recommended for blacklisting tokens): use the given StaticValue to set the price (if not provided 0 will be used)
-# - ignore: don't update the price leave it as it is on the DB
-DefaultUpdateMethod = "coingeckoV3" # Update method used for all the tokens registered on the network, and not listed in [[PriceUpdater.TokensConfig]]
-[[PriceUpdater.TokensConfig]]
-UpdateMethod = "bitfinexV2"
-Symbol = "USDT"
-Addr = "0xdac17f958d2ee523a2206206994597c13d831ec7"
-[[PriceUpdater.TokensConfig]]
-UpdateMethod = "coingeckoV3"
-Symbol = "ETH"
-Addr = "0x0000000000000000000000000000000000000000"
-[[PriceUpdater.TokensConfig]]
-UpdateMethod = "static"
-Symbol = "UNI"
-Addr = "0x1f9840a85d5af5bf1d1762f925bdaddc4201f984"
-StaticValue = 30.12
-[[PriceUpdater.TokensConfig]]
-UpdateMethod = "ignore"
-Symbol = "SUSHI"
-Addr = "0x6b3595068778dd592e39a122f4f5a5cf09c90fe2"
+URL = "https://api-pub.bitfinex.com/v2/"
+Type = "bitfinexV2"
 
 [Debug]
 APIAddress = "localhost:12345"
 MeddlerLogs = true
-GinDebugMode = true
 
 [StateDB]
 Path = "/tmp/iden3-test/hermez/statedb"
 Keep = 256
 
 [PostgreSQL]
-PortWrite = 5432
-HostWrite = "localhost"
-UserWrite = "hermez"
-PasswordWrite = "yourpasswordhere"
-NameWrite = "hermez"
-# PortRead = 5432
-# HostRead = "localhost"
-# UserRead = "hermez"
-# PasswordRead = "yourpasswordhere"
-# NameRead = "hermez"
+Port = 5432
+Host = "localhost"
+User = "hermez"
+Password = "yourpasswordhere"
+Name = "hermez"
 
 [Web3]
 URL = "http://localhost:8545"
@@ -74,7 +45,6 @@ ForgerAddress = "0x05c23b938a85ab26A36E6314a0D02080E9ca6BeD" # Non-Boot Coordina
 # ForgerAddressPrivateKey = "0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3"
 # ForgerAddress = "0xb4124ceb3451635dacedd11767f004d8a28c6ee7" # Boot Coordinator
 # ForgerAddressPrivateKey = "0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563"
-MinimumForgeAddressBalance = "0"
 ConfirmBlocks = 10
 L1BatchTimeoutPerc = 0.6
 StartSlotBlocksDelay = 2
@@ -85,9 +55,6 @@ ForgeRetryInterval = "500ms"
 SyncRetryInterval = "1s"
 ForgeDelay = "10s"
 ForgeNoTxsDelay = "0s"
-PurgeByExtDelInterval = "1m"
-MustForgeAtSlotDeadline = true
-IgnoreSlotCommitment = false
 
 [Coordinator.FeeAccount]
 Address = "0x56232B1c5B10038125Bc7345664B4AFD745bcF8E"
@@ -98,8 +65,6 @@ BJJ = "0x1b176232f78ba0d388ecc5f4896eca2d3b3d4f272092469f559247297f5c0c13"
 [Coordinator.L2DB]
 SafetyPeriod = 10
 MaxTxs = 512
-MinFeeUSD = 0.0
-MaxFeeUSD = 50.0
 TTL = "24h"
 PurgeBatchDelay = 10
 InvalidateBatchDelay = 20
@@ -132,12 +97,6 @@ GasPriceIncPerc = 10
 Path = "/tmp/iden3-test/hermez/ethkeystore"
 Password = "yourpasswordhere"
 
-[Coordinator.EthClient.ForgeBatchGasCost]
-Fixed = 600000
-L1UserTx = 15000
-L1CoordTx = 8000
-L2Tx = 250
-
 [Coordinator.API]
 Coordinator = true
 
@@ -1,10 +1,10 @@
 #!/bin/sh
 
 # Non-Boot Coordinator
-go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3
+go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3
 
 # Boot Coordinator
-go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563
+go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563
 
 # FeeAccount
-go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x3a9270c020e169097808da4b02e8d9100be0f8a38cfad3dcfc0b398076381fdd
+go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x3a9270c020e169097808da4b02e8d9100be0f8a38cfad3dcfc0b398076381fdd
284 cli/node/main.go
@@ -5,22 +5,18 @@ import (
 	"fmt"
 	"os"
 	"os/signal"
-	"path"
 	"strings"
 
 	ethKeystore "github.com/ethereum/go-ethereum/accounts/keystore"
 	"github.com/ethereum/go-ethereum/crypto"
-	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/config"
 	dbUtils "github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
-	"github.com/hermeznetwork/hermez-node/db/kvdb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
 	"github.com/hermeznetwork/hermez-node/log"
 	"github.com/hermeznetwork/hermez-node/node"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/iden3/go-iden3-crypto/babyjub"
-	"github.com/jmoiron/sqlx"
 	"github.com/urfave/cli/v2"
 )
 
@@ -34,22 +30,6 @@ const (
 	modeCoord = "coord"
 )
 
-var (
-	// Version represents the program based on the git tag
-	Version = "v0.1.0"
-	// Build represents the program based on the git commit
-	Build = "dev"
-	// Date represents the date of application was built
-	Date = ""
-)
-
-func cmdVersion(c *cli.Context) error {
-	fmt.Printf("Version = \"%v\"\n", Version)
-	fmt.Printf("Build = \"%v\"\n", Build)
-	fmt.Printf("Date = \"%v\"\n", Date)
-	return nil
-}
-
 func cmdGenBJJ(c *cli.Context) error {
 	sk := babyjub.NewRandPrivKey()
 	skBuf := [32]byte(sk)
@@ -91,86 +71,6 @@ func cmdImportKey(c *cli.Context) error {
 	return nil
 }
 
-func resetStateDBs(cfg *Config, batchNum common.BatchNum) error {
-	log.Infof("Reset Synchronizer StateDB to batchNum %v...", batchNum)
-
-	// Manually make a checkpoint from batchNum to current to force current
-	// to be a valid checkpoint. This is useful because in case of a
-	// crash, current can be corrupted and the first thing that
-	// `kvdb.NewKVDB` does is read the current checkpoint, which wouldn't
-	// succeed in case of corruption.
-	dbPath := cfg.node.StateDB.Path
-	source := path.Join(dbPath, fmt.Sprintf("%s%d", kvdb.PathBatchNum, batchNum))
-	current := path.Join(dbPath, kvdb.PathCurrent)
-	last := path.Join(dbPath, kvdb.PathLast)
-	if err := os.RemoveAll(last); err != nil {
-		return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
-	}
-	if batchNum == 0 {
-		if err := os.RemoveAll(current); err != nil {
-			return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
-		}
-	} else {
-		if err := kvdb.PebbleMakeCheckpoint(source, current); err != nil {
-			return tracerr.Wrap(fmt.Errorf("kvdb.PebbleMakeCheckpoint: %w", err))
-		}
-	}
-	db, err := kvdb.NewKVDB(kvdb.Config{
-		Path:        dbPath,
-		NoGapsCheck: true,
-		NoLast:      true,
-	})
-	if err != nil {
-		return tracerr.Wrap(fmt.Errorf("kvdb.NewKVDB: %w", err))
-	}
-	if err := db.Reset(batchNum); err != nil {
-		return tracerr.Wrap(fmt.Errorf("db.Reset: %w", err))
-	}
-
-	if cfg.mode == node.ModeCoordinator {
-		log.Infof("Wipe Coordinator StateDBs...")
-
-		// We wipe the Coordinator StateDBs entirely (by deleting
-		// current and resetting to batchNum 0) because the Coordinator
-		// StateDBs are always reset from Synchronizer when the
-		// coordinator pipeline starts.
-		dbPath := cfg.node.Coordinator.TxSelector.Path
-		current := path.Join(dbPath, kvdb.PathCurrent)
-		if err := os.RemoveAll(current); err != nil {
-			return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
-		}
-		db, err := kvdb.NewKVDB(kvdb.Config{
-			Path:        dbPath,
-			NoGapsCheck: true,
-			NoLast:      true,
-		})
-		if err != nil {
-			return tracerr.Wrap(fmt.Errorf("kvdb.NewKVDB: %w", err))
-		}
-		if err := db.Reset(0); err != nil {
-			return tracerr.Wrap(fmt.Errorf("db.Reset: %w", err))
-		}
-
-		dbPath = cfg.node.Coordinator.BatchBuilder.Path
-		current = path.Join(dbPath, kvdb.PathCurrent)
-		if err := os.RemoveAll(current); err != nil {
-			return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
-		}
-		db, err = kvdb.NewKVDB(kvdb.Config{
-			Path:        dbPath,
-			NoGapsCheck: true,
-			NoLast:      true,
-		})
-		if err != nil {
-			return tracerr.Wrap(fmt.Errorf("statedb.NewKVDB: %w", err))
-		}
-		if err := db.Reset(0); err != nil {
-			return tracerr.Wrap(fmt.Errorf("db.Reset: %w", err))
-		}
-	}
-	return nil
-}
-
 func cmdWipeSQL(c *cli.Context) error {
 	_cfg, err := parseCli(c)
 	if err != nil {
@@ -179,8 +79,7 @@ func cmdWipeSQL(c *cli.Context) error {
 	cfg := _cfg.node
 	yes := c.Bool(flagYes)
 	if !yes {
-		fmt.Print("*WARNING* Are you sure you want to delete " +
-			"the SQL DB and StateDBs? [y/N]: ")
+		fmt.Print("*WARNING* Are you sure you want to delete the SQL DB? [y/N]: ")
 		var input string
 		if _, err := fmt.Scanln(&input); err != nil {
 			return tracerr.Wrap(err)
@@ -191,28 +90,33 @@ func cmdWipeSQL(c *cli.Context) error {
 		}
 	}
 	db, err := dbUtils.ConnectSQLDB(
-		cfg.PostgreSQL.PortWrite,
-		cfg.PostgreSQL.HostWrite,
-		cfg.PostgreSQL.UserWrite,
-		cfg.PostgreSQL.PasswordWrite,
-		cfg.PostgreSQL.NameWrite,
+		cfg.PostgreSQL.Port,
+		cfg.PostgreSQL.Host,
+		cfg.PostgreSQL.User,
+		cfg.PostgreSQL.Password,
+		cfg.PostgreSQL.Name,
 	)
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
 	log.Info("Wiping SQL DB...")
 	if err := dbUtils.MigrationsDown(db.DB); err != nil {
-		return tracerr.Wrap(fmt.Errorf("dbUtils.MigrationsDown: %w", err))
-	}
-
-	log.Info("Wiping StateDBs...")
-	if err := resetStateDBs(_cfg, 0); err != nil {
-		return tracerr.Wrap(fmt.Errorf("resetStateDBs: %w", err))
+		return tracerr.Wrap(err)
 	}
 	return nil
 }
 
-func waitSigInt() {
+func cmdRun(c *cli.Context) error {
+	cfg, err := parseCli(c)
+	if err != nil {
+		return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
+	}
+	node, err := node.NewNode(cfg.mode, cfg.node)
+	if err != nil {
+		return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
+	}
+	node.Start()
+
 	stopCh := make(chan interface{})
 
 	// catch ^C to send the stop signal
@@ -233,40 +137,11 @@ func waitSigInt() {
 		}
 	}()
 	<-stopCh
-}
-
-func cmdRun(c *cli.Context) error {
-	cfg, err := parseCli(c)
-	if err != nil {
-		return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
-	}
-	node, err := node.NewNode(cfg.mode, cfg.node)
-	if err != nil {
-		return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
-	}
-	node.Start()
-	waitSigInt()
 	node.Stop()
 
 	return nil
 }
 
-func cmdServeAPI(c *cli.Context) error {
-	cfg, err := parseCliAPIServer(c)
-	if err != nil {
-		return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
-	}
-	srv, err := node.NewAPIServer(cfg.mode, cfg.server)
-	if err != nil {
-		return tracerr.Wrap(fmt.Errorf("error starting api server: %w", err))
-	}
-	srv.Start()
-	waitSigInt()
-	srv.Stop()
-
-	return nil
-}
-
 func cmdDiscard(c *cli.Context) error {
 	_cfg, err := parseCli(c)
 	if err != nil {
@@ -276,36 +151,17 @@ func cmdDiscard(c *cli.Context) error {
 	blockNum := c.Int64(flagBlock)
 	log.Infof("Discarding all blocks up to block %v...", blockNum)
 
-	dbWrite, err := dbUtils.InitSQLDB(
-		cfg.PostgreSQL.PortWrite,
-		cfg.PostgreSQL.HostWrite,
-		cfg.PostgreSQL.UserWrite,
-		cfg.PostgreSQL.PasswordWrite,
-		cfg.PostgreSQL.NameWrite,
+	db, err := dbUtils.InitSQLDB(
+		cfg.PostgreSQL.Port,
+		cfg.PostgreSQL.Host,
+		cfg.PostgreSQL.User,
+		cfg.PostgreSQL.Password,
+		cfg.PostgreSQL.Name,
 	)
 	if err != nil {
 		return tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
 	}
-	var dbRead *sqlx.DB
-	if cfg.PostgreSQL.HostRead == "" {
-		dbRead = dbWrite
-	} else if cfg.PostgreSQL.HostRead == cfg.PostgreSQL.HostWrite {
-		return tracerr.Wrap(fmt.Errorf(
-			"PostgreSQL.HostRead and PostgreSQL.HostWrite must be different",
-		))
-	} else {
-		dbRead, err = dbUtils.InitSQLDB(
-			cfg.PostgreSQL.PortRead,
-			cfg.PostgreSQL.HostRead,
-			cfg.PostgreSQL.UserRead,
-			cfg.PostgreSQL.PasswordRead,
-			cfg.PostgreSQL.NameRead,
-		)
-		if err != nil {
-			return tracerr.Wrap(fmt.Errorf("dbUtils.InitSQLDB: %w", err))
-		}
-	}
-	historyDB := historydb.NewHistoryDB(dbRead, dbWrite, nil)
+	historyDB := historydb.NewHistoryDB(db, nil)
 	if err := historyDB.Reorg(blockNum); err != nil {
 		return tracerr.Wrap(fmt.Errorf("historyDB.Reorg: %w", err))
 	}
@@ -314,11 +170,9 @@ func cmdDiscard(c *cli.Context) error {
 		return tracerr.Wrap(fmt.Errorf("historyDB.GetLastBatchNum: %w", err))
 	}
 	l2DB := l2db.NewL2DB(
-		dbRead, dbWrite,
+		db,
 		cfg.Coordinator.L2DB.SafetyPeriod,
 		cfg.Coordinator.L2DB.MaxTxs,
-		cfg.Coordinator.L2DB.MinFeeUSD,
-		cfg.Coordinator.L2DB.MaxFeeUSD,
 		cfg.Coordinator.L2DB.TTL.Duration,
 		nil,
 	)
@@ -326,11 +180,6 @@ func cmdDiscard(c *cli.Context) error {
|
|||||||
return tracerr.Wrap(fmt.Errorf("l2DB.Reorg: %w", err))
|
return tracerr.Wrap(fmt.Errorf("l2DB.Reorg: %w", err))
|
||||||
}
|
}
|
||||||
|
|
||||||
log.Info("Resetting StateDBs...")
|
|
||||||
if err := resetStateDBs(_cfg, batchNum); err != nil {
|
|
||||||
return tracerr.Wrap(fmt.Errorf("resetStateDBs: %w", err))
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -355,59 +204,20 @@ func getConfig(c *cli.Context) (*Config, error) {
|
|||||||
var cfg Config
|
var cfg Config
|
||||||
mode := c.String(flagMode)
|
mode := c.String(flagMode)
|
||||||
nodeCfgPath := c.String(flagCfg)
|
nodeCfgPath := c.String(flagCfg)
|
||||||
|
if nodeCfgPath == "" {
|
||||||
|
return nil, tracerr.Wrap(fmt.Errorf("required flag \"%v\" not set", flagCfg))
|
||||||
|
}
|
||||||
var err error
|
var err error
|
||||||
switch mode {
|
switch mode {
|
||||||
case modeSync:
|
case modeSync:
|
||||||
cfg.mode = node.ModeSynchronizer
|
cfg.mode = node.ModeSynchronizer
|
||||||
cfg.node, err = config.LoadNode(nodeCfgPath, false)
|
cfg.node, err = config.LoadNode(nodeCfgPath)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
case modeCoord:
|
case modeCoord:
|
||||||
cfg.mode = node.ModeCoordinator
|
cfg.mode = node.ModeCoordinator
|
||||||
cfg.node, err = config.LoadNode(nodeCfgPath, true)
|
cfg.node, err = config.LoadCoordinator(nodeCfgPath)
|
||||||
if err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
default:
|
|
||||||
return nil, tracerr.Wrap(fmt.Errorf("invalid mode \"%v\"", mode))
|
|
||||||
}
|
|
||||||
|
|
||||||
return &cfg, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// ConfigAPIServer is the configuration of the api server execution
|
|
||||||
type ConfigAPIServer struct {
|
|
||||||
mode node.Mode
|
|
||||||
server *config.APIServer
|
|
||||||
}
|
|
||||||
|
|
||||||
func parseCliAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
|
|
||||||
cfg, err := getConfigAPIServer(c)
|
|
||||||
if err != nil {
|
|
||||||
if err := cli.ShowAppHelp(c); err != nil {
|
|
||||||
panic(err)
|
|
||||||
}
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
return cfg, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func getConfigAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
|
|
||||||
var cfg ConfigAPIServer
|
|
||||||
mode := c.String(flagMode)
|
|
||||||
nodeCfgPath := c.String(flagCfg)
|
|
||||||
var err error
|
|
||||||
switch mode {
|
|
||||||
case modeSync:
|
|
||||||
cfg.mode = node.ModeSynchronizer
|
|
||||||
cfg.server, err = config.LoadAPIServer(nodeCfgPath, false)
|
|
||||||
if err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
case modeCoord:
|
|
||||||
cfg.mode = node.ModeCoordinator
|
|
||||||
cfg.server, err = config.LoadAPIServer(nodeCfgPath, true)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
@@ -421,8 +231,8 @@ func getConfigAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
|
|||||||
func main() {
|
func main() {
|
||||||
app := cli.NewApp()
|
app := cli.NewApp()
|
||||||
app.Name = "hermez-node"
|
app.Name = "hermez-node"
|
||||||
app.Version = Version
|
app.Version = "0.1.0-alpha"
|
||||||
flags := []cli.Flag{
|
app.Flags = []cli.Flag{
|
||||||
&cli.StringFlag{
|
&cli.StringFlag{
|
||||||
Name: flagMode,
|
Name: flagMode,
|
||||||
Usage: fmt.Sprintf("Set node `MODE` (can be \"%v\" or \"%v\")", modeSync, modeCoord),
|
Usage: fmt.Sprintf("Set node `MODE` (can be \"%v\" or \"%v\")", modeSync, modeCoord),
|
||||||
@@ -436,23 +246,17 @@ func main() {
|
|||||||
}
|
}
|
||||||
|
|
||||||
app.Commands = []*cli.Command{
|
app.Commands = []*cli.Command{
|
||||||
{
|
|
||||||
Name: "version",
|
|
||||||
Aliases: []string{},
|
|
||||||
Usage: "Show the application version and build",
|
|
||||||
Action: cmdVersion,
|
|
||||||
},
|
|
||||||
{
|
{
|
||||||
Name: "importkey",
|
Name: "importkey",
|
||||||
Aliases: []string{},
|
Aliases: []string{},
|
||||||
Usage: "Import ethereum private key",
|
Usage: "Import ethereum private key",
|
||||||
Action: cmdImportKey,
|
Action: cmdImportKey,
|
||||||
Flags: append(flags,
|
Flags: []cli.Flag{
|
||||||
&cli.StringFlag{
|
&cli.StringFlag{
|
||||||
Name: flagSK,
|
Name: flagSK,
|
||||||
Usage: "ethereum `PRIVATE_KEY` in hex",
|
Usage: "ethereum `PRIVATE_KEY` in hex",
|
||||||
Required: true,
|
Required: true,
|
||||||
}),
|
}},
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
Name: "genbjj",
|
Name: "genbjj",
|
||||||
@@ -463,41 +267,33 @@ func main() {
|
|||||||
{
|
{
|
||||||
Name: "wipesql",
|
Name: "wipesql",
|
||||||
Aliases: []string{},
|
Aliases: []string{},
|
||||||
Usage: "Wipe the SQL DB (HistoryDB and L2DB) and the StateDBs, " +
|
Usage: "Wipe the SQL DB (HistoryDB and L2DB), " +
|
||||||
"leaving the DB in a clean state",
|
"leaving the DB in a clean state",
|
||||||
Action: cmdWipeSQL,
|
Action: cmdWipeSQL,
|
||||||
Flags: append(flags,
|
Flags: []cli.Flag{
|
||||||
&cli.BoolFlag{
|
&cli.BoolFlag{
|
||||||
Name: flagYes,
|
Name: flagYes,
|
||||||
Usage: "automatic yes to the prompt",
|
Usage: "automatic yes to the prompt",
|
||||||
Required: false,
|
Required: false,
|
||||||
}),
|
}},
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
Name: "run",
|
Name: "run",
|
||||||
Aliases: []string{},
|
Aliases: []string{},
|
||||||
Usage: "Run the hermez-node in the indicated mode",
|
Usage: "Run the hermez-node in the indicated mode",
|
||||||
Action: cmdRun,
|
Action: cmdRun,
|
||||||
Flags: flags,
|
|
||||||
},
|
|
||||||
{
|
|
||||||
Name: "serveapi",
|
|
||||||
Aliases: []string{},
|
|
||||||
Usage: "Serve the API only",
|
|
||||||
Action: cmdServeAPI,
|
|
||||||
Flags: flags,
|
|
||||||
},
|
},
|
||||||
{
|
{
|
||||||
Name: "discard",
|
Name: "discard",
|
||||||
Aliases: []string{},
|
Aliases: []string{},
|
||||||
Usage: "Discard blocks up to a specified block number",
|
Usage: "Discard blocks up to a specified block number",
|
||||||
Action: cmdDiscard,
|
Action: cmdDiscard,
|
||||||
Flags: append(flags,
|
Flags: []cli.Flag{
|
||||||
&cli.Int64Flag{
|
&cli.Int64Flag{
|
||||||
Name: flagBlock,
|
Name: flagBlock,
|
||||||
Usage: "last block number to keep",
|
Usage: "last block number to keep",
|
||||||
Required: false,
|
Required: false,
|
||||||
}),
|
}},
|
||||||
},
|
},
|
||||||
}
|
}
|
||||||
|
|
||||||
|
|||||||
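The read/write split removed above (kept only on the base branch) follows a small decision rule: an empty `HostRead` falls back to the write connection, and a `HostRead` identical to `HostWrite` is rejected. A stdlib-only sketch of that rule, with illustrative names (`resolveReadHost` is not a function in the repo):

```go
package main

import (
	"errors"
	"fmt"
)

// resolveReadHost mirrors the branch logic from the diff above:
// empty read host -> reuse the write host; identical read host -> error;
// otherwise use the dedicated read host.
func resolveReadHost(hostWrite, hostRead string) (string, error) {
	switch {
	case hostRead == "":
		return hostWrite, nil
	case hostRead == hostWrite:
		return "", errors.New("PostgreSQL.HostRead and PostgreSQL.HostWrite must be different")
	default:
		return hostRead, nil
	}
}

func main() {
	h, err := resolveReadHost("db-write", "")
	fmt.Println(h, err)
}
```

The point of the rule is that a read replica is optional: omitting it degrades gracefully to a single connection, while accidentally pointing both names at the same host is treated as a configuration mistake rather than silently opening two pools.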
@@ -72,8 +72,7 @@ func (idx Idx) BigInt() *big.Int {
 // IdxFromBytes returns Idx from a byte array
 func IdxFromBytes(b []byte) (Idx, error) {
     if len(b) != IdxBytesLen {
-        return 0, tracerr.Wrap(fmt.Errorf("can not parse Idx, bytes len %d, expected %d",
-            len(b), IdxBytesLen))
+        return 0, tracerr.Wrap(fmt.Errorf("can not parse Idx, bytes len %d, expected %d", len(b), IdxBytesLen))
     }
     var idxBytes [8]byte
     copy(idxBytes[2:], b[:])
@@ -195,8 +194,7 @@ func (a *Account) BigInts() ([NLeafElems]*big.Int, error) {
     return e, nil
 }
 
-// HashValue returns the value of the Account, which is the Poseidon hash of its
-// *big.Int representation
+// HashValue returns the value of the Account, which is the Poseidon hash of its *big.Int representation
 func (a *Account) HashValue() (*big.Int, error) {
     bi, err := a.BigInts()
     if err != nil {
@@ -265,13 +263,3 @@ type IdxNonce struct {
     Idx   Idx   `db:"idx"`
     Nonce Nonce `db:"nonce"`
 }
-
-// AccountUpdate represents an account balance and/or nonce update after a
-// processed batch
-type AccountUpdate struct {
-    EthBlockNum int64    `meddler:"eth_block_num"`
-    BatchNum    BatchNum `meddler:"batch_num"`
-    Idx         Idx      `meddler:"idx"`
-    Nonce       Nonce    `meddler:"nonce"`
-    Balance     *big.Int `meddler:"balance,bigint"`
-}
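The `IdxFromBytes` context above shows the decoding pattern: a fixed-width big-endian byte slice is left-padded into an 8-byte word and read as a `uint64`. A self-contained sketch, assuming `IdxBytesLen` is 6 (the width implied by `copy(idxBytes[2:], b[:])`); the lowercase names are illustrative, not the repo's exported API:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// idxBytesLen is assumed to be 6, matching the 2-byte left padding
// into an 8-byte word seen in the diff above.
const idxBytesLen = 6

// idxFromBytes left-pads the 6 big-endian bytes into an 8-byte array
// and decodes the result as a uint64, rejecting other input lengths.
func idxFromBytes(b []byte) (uint64, error) {
	if len(b) != idxBytesLen {
		return 0, fmt.Errorf("can not parse Idx, bytes len %d, expected %d", len(b), idxBytesLen)
	}
	var idxBytes [8]byte
	copy(idxBytes[2:], b[:])
	return binary.BigEndian.Uint64(idxBytes[:]), nil
}

func main() {
	idx, err := idxFromBytes([]byte{0, 0, 0, 0, 1, 2})
	fmt.Println(idx, err)
}
```

Left-padding into the high-order-zero positions keeps the numeric value identical to interpreting the 6 bytes directly as a big-endian integer.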
@@ -76,8 +76,7 @@ func TestNonceParser(t *testing.T) {
 
 func TestAccount(t *testing.T) {
     var sk babyjub.PrivateKey
-    _, err := hex.Decode(sk[:],
-        []byte("0001020304050607080900010203040506070809000102030405060708090001"))
+    _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
     assert.NoError(t, err)
     pk := sk.Public()
 
@@ -116,8 +115,7 @@ func TestAccountLoop(t *testing.T) {
     // check that for different deterministic BabyJubJub keys & random Address there is no problem
     for i := 0; i < 256; i++ {
         var sk babyjub.PrivateKey
-        _, err := hex.Decode(sk[:],
-            []byte("0001020304050607080900010203040506070809000102030405060708090001"))
+        _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
         assert.NoError(t, err)
         pk := sk.Public()
 
@@ -201,8 +199,7 @@ func bigFromStr(h string, u int) *big.Int {
 
 func TestAccountHashValue(t *testing.T) {
     var sk babyjub.PrivateKey
-    _, err := hex.Decode(sk[:],
-        []byte("0001020304050607080900010203040506070809000102030405060708090001"))
+    _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
     assert.NoError(t, err)
     pk := sk.Public()
 
@@ -215,16 +212,13 @@ func TestAccountHashValue(t *testing.T) {
     }
     v, err := account.HashValue()
     assert.NoError(t, err)
-    assert.Equal(t,
-        "447675324273474410516096114710387312413478475468606444107594732044698919451",
-        v.String())
+    assert.Equal(t, "16297758255249203915951182296472515138555043617458222397753168518282206850764", v.String())
 }
 
 func TestAccountHashValueTestVectors(t *testing.T) {
     // values from js test vectors
     ay := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1))
-    assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
-        (hex.EncodeToString(ay.Bytes())))
+    assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", (hex.EncodeToString(ay.Bytes())))
     bjjPoint, err := babyjub.PointFromSignAndY(true, ay)
     require.NoError(t, err)
     bjj := babyjub.PublicKey(*bjjPoint)
@@ -242,22 +236,16 @@ func TestAccountHashValueTestVectors(t *testing.T) {
     assert.NoError(t, err)
     assert.Equal(t, "9444732965739290427391", e[0].String())
     assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895", e[1].String())
-    assert.Equal(t,
-        "14474011154664524427946373126085988481658748083205070504932198000989141204991",
-        e[2].String())
+    assert.Equal(t, "14474011154664524427946373126085988481658748083205070504932198000989141204991", e[2].String())
     assert.Equal(t, "1461501637330902918203684832716283019655932542975", e[3].String())
 
     h, err := poseidon.Hash(e[:])
     assert.NoError(t, err)
-    assert.Equal(t,
-        "13265203488631320682117942952393454767418777767637549409684833552016769103047",
-        h.String())
+    assert.Equal(t, "4550823210217540218403400309533329186487982452461145263910122718498735057257", h.String())
 
     v, err := account.HashValue()
     assert.NoError(t, err)
-    assert.Equal(t,
-        "13265203488631320682117942952393454767418777767637549409684833552016769103047",
-        v.String())
+    assert.Equal(t, "4550823210217540218403400309533329186487982452461145263910122718498735057257", v.String())
 
     // second account
     ay = big.NewInt(0)
@@ -273,9 +261,7 @@ func TestAccountHashValueTestVectors(t *testing.T) {
     }
     v, err = account.HashValue()
     assert.NoError(t, err)
-    assert.Equal(t,
-        "2351654555892372227640888372176282444150254868378439619268573230312091195718",
-        v.String())
+    assert.Equal(t, "7750253361301235345986002241352365187241910378619330147114280396816709365657", v.String())
 
     // third account
     ay = bigFromStr("21b0a1688b37f77b1d1d5539ec3b826db5ac78b2513f574a04c50a7d4f8246d7", 16)
@@ -293,15 +279,11 @@ func TestAccountHashValueTestVectors(t *testing.T) {
     assert.NoError(t, err)
     assert.Equal(t, "554050781187", e[0].String())
     assert.Equal(t, "42000000000000000000", e[1].String())
-    assert.Equal(t,
-        "15238403086306505038849621710779816852318505119327426213168494964113886299863",
-        e[2].String())
+    assert.Equal(t, "15238403086306505038849621710779816852318505119327426213168494964113886299863", e[2].String())
     assert.Equal(t, "935037732739828347587684875151694054123613453305", e[3].String())
     v, err = account.HashValue()
     assert.NoError(t, err)
-    assert.Equal(t,
-        "15036148928138382129196903417666258171042923749783835283230591475172197254845",
-        v.String())
+    assert.Equal(t, "10565754214047872850889045989683221123564392137456000481397520902594455245517", v.String())
 }
 
 func TestAccountErrNotInFF(t *testing.T) {
@@ -330,8 +312,7 @@ func TestAccountErrNotInFF(t *testing.T) {
 
 func TestAccountErrNumOverflowNonce(t *testing.T) {
     var sk babyjub.PrivateKey
-    _, err := hex.Decode(sk[:],
-        []byte("0001020304050607080900010203040506070809000102030405060708090001"))
+    _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
     assert.NoError(t, err)
     pk := sk.Public()
 
@@ -358,8 +339,7 @@ func TestAccountErrNumOverflowNonce(t *testing.T) {
 
 func TestAccountErrNumOverflowBalance(t *testing.T) {
     var sk babyjub.PrivateKey
-    _, err := hex.Decode(sk[:],
-        []byte("0001020304050607080900010203040506070809000102030405060708090001"))
+    _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
     assert.NoError(t, err)
     pk := sk.Public()
 
@@ -371,16 +351,14 @@ func TestAccountErrNumOverflowBalance(t *testing.T) {
         BJJ:     pk.Compress(),
         EthAddr: ethCommon.HexToAddress("0xc58d29fA6e86E4FAe04DDcEd660d45BCf3Cb2370"),
     }
-    assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895",
-        account.Balance.String())
+    assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895", account.Balance.String())
 
     _, err = account.Bytes()
     assert.NoError(t, err)
 
     // force value overflow
     account.Balance = new(big.Int).Exp(big.NewInt(2), big.NewInt(192), nil)
-    assert.Equal(t, "6277101735386680763835789423207666416102355444464034512896",
-        account.Balance.String())
+    assert.Equal(t, "6277101735386680763835789423207666416102355444464034512896", account.Balance.String())
     b, err := account.Bytes()
     assert.NotNil(t, err)
     assert.Equal(t, fmt.Errorf("%s Balance", ErrNumOverflow), tracerr.Unwrap(err))
@@ -1,25 +1,21 @@
 package common
 
 import (
+    "encoding/binary"
+    "strconv"
     "time"
 
     ethCommon "github.com/ethereum/go-ethereum/common"
-    ethMath "github.com/ethereum/go-ethereum/common/math"
     ethCrypto "github.com/ethereum/go-ethereum/crypto"
-    ethSigner "github.com/ethereum/go-ethereum/signer/core"
-    "github.com/hermeznetwork/tracerr"
     "github.com/iden3/go-iden3-crypto/babyjub"
 )
 
-const (
-    // AccountCreationAuthMsg is the message that is signed to authorize a
-    // Hermez account creation
-    AccountCreationAuthMsg = "Account creation"
-    // EIP712Version is the used version of the EIP-712
-    EIP712Version = "1"
-    // EIP712Provider defines the Provider for the EIP-712
-    EIP712Provider = "Hermez Network"
-)
+// AccountCreationAuthMsg is the message that is signed to authorize a Hermez
+// account creation
+const AccountCreationAuthMsg = "I authorize this babyjubjub key for hermez rollup account creation"
+
+// EthMsgPrefix is the prefix for message signing at the Ethereum ecosystem
+const EthMsgPrefix = "\x19Ethereum Signed Message:\n"
 
 var (
     // EmptyEthSignature is an ethereum signature of all zeroes
@@ -35,82 +31,44 @@ type AccountCreationAuth struct {
     Timestamp time.Time       `meddler:"timestamp,utctime"`
 }
 
-// toHash returns a byte array to be hashed from the AccountCreationAuth, which
-// follows the EIP-712 encoding
 func (a *AccountCreationAuth) toHash(chainID uint16,
-    hermezContractAddr ethCommon.Address) ([]byte, error) {
-    chainIDFormatted := ethMath.NewHexOrDecimal256(int64(chainID))
-
-    signerData := ethSigner.TypedData{
-        Types: ethSigner.Types{
-            "EIP712Domain": []ethSigner.Type{
-                {Name: "name", Type: "string"},
-                {Name: "version", Type: "string"},
-                {Name: "chainId", Type: "uint256"},
-                {Name: "verifyingContract", Type: "address"},
-            },
-            "Authorise": []ethSigner.Type{
-                {Name: "Provider", Type: "string"},
-                {Name: "Authorisation", Type: "string"},
-                {Name: "BJJKey", Type: "bytes32"},
-            },
-        },
-        PrimaryType: "Authorise",
-        Domain: ethSigner.TypedDataDomain{
-            Name:              EIP712Provider,
-            Version:           EIP712Version,
-            ChainId:           chainIDFormatted,
-            VerifyingContract: hermezContractAddr.Hex(),
-        },
-        Message: ethSigner.TypedDataMessage{
-            "Provider":      EIP712Provider,
-            "Authorisation": AccountCreationAuthMsg,
-            "BJJKey":        SwapEndianness(a.BJJ[:]),
-        },
-    }
-
-    domainSeparator, err := signerData.HashStruct("EIP712Domain", signerData.Domain.Map())
-    if err != nil {
-        return nil, tracerr.Wrap(err)
-    }
-    typedDataHash, err := signerData.HashStruct(signerData.PrimaryType, signerData.Message)
-    if err != nil {
-        return nil, tracerr.Wrap(err)
-    }
-
-    rawData := []byte{0x19, 0x01} // "\x19\x01"
-    rawData = append(rawData, domainSeparator...)
-    rawData = append(rawData, typedDataHash...)
-    return rawData, nil
+    hermezContractAddr ethCommon.Address) []byte {
+    var chainIDBytes [2]byte
+    binary.BigEndian.PutUint16(chainIDBytes[:], chainID)
+    // [EthPrefix | AccountCreationAuthMsg | compressedBJJ | chainID | hermezContractAddr]
+    var b []byte
+    b = append(b, []byte(AccountCreationAuthMsg)...)
+    b = append(b, SwapEndianness(a.BJJ[:])...) // for js implementation compatibility
+    b = append(b, chainIDBytes[:]...)
+    b = append(b, hermezContractAddr[:]...)
+
+    ethPrefix := EthMsgPrefix + strconv.Itoa(len(b))
+    return append([]byte(ethPrefix), b...)
 }
 
-// HashToSign returns the hash to be signed by the Ethereum address to authorize
-// the account creation, which follows the EIP-712 encoding
+// HashToSign returns the hash to be signed by the Etherum address to authorize
+// the account creation
 func (a *AccountCreationAuth) HashToSign(chainID uint16,
     hermezContractAddr ethCommon.Address) ([]byte, error) {
-    b, err := a.toHash(chainID, hermezContractAddr)
-    if err != nil {
-        return nil, tracerr.Wrap(err)
-    }
-    return ethCrypto.Keccak256(b), nil
+    b := a.toHash(chainID, hermezContractAddr)
+    return ethCrypto.Keccak256Hash(b).Bytes(), nil
 }
 
 // Sign signs the account creation authorization message using the provided
-// `signHash` function, and stores the signature in `a.Signature`. `signHash`
+// `signHash` function, and stores the signaure in `a.Signature`. `signHash`
 // should do an ethereum signature using the account corresponding to
-// `a.EthAddr`. The `signHash` function is used to make signing flexible: in
+// `a.EthAddr`. The `signHash` function is used to make signig flexible: in
 // tests we sign directly using the private key, outside tests we sign using
-// the keystore (which never exposes the private key). Sign follows the EIP-712
-// encoding.
+// the keystore (which never exposes the private key).
 func (a *AccountCreationAuth) Sign(signHash func(hash []byte) ([]byte, error),
     chainID uint16, hermezContractAddr ethCommon.Address) error {
     hash, err := a.HashToSign(chainID, hermezContractAddr)
     if err != nil {
-        return tracerr.Wrap(err)
+        return err
     }
     sig, err := signHash(hash)
     if err != nil {
-        return tracerr.Wrap(err)
+        return err
     }
     sig[64] += 27
     a.Signature = sig
@@ -119,8 +77,7 @@ func (a *AccountCreationAuth) Sign(signHash func(hash []byte) ([]byte, error),
 }
 
 // VerifySignature ensures that the Signature is done with the EthAddr, for the
-// chainID and hermezContractAddress passed by parameter. VerifySignature
-// follows the EIP-712 encoding.
+// chainID and hermezContractAddress passed by parameter
 func (a *AccountCreationAuth) VerifySignature(chainID uint16,
     hermezContractAddr ethCommon.Address) bool {
     // Calculate hash to be signed
@@ -13,8 +13,7 @@ import (
 
 func TestAccountCreationAuthSignVerify(t *testing.T) {
     // Ethereum key
-    ethSk, err :=
-        ethCrypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
+    ethSk, err := ethCrypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
     require.NoError(t, err)
     ethAddr := ethCrypto.PubkeyToAddress(ethSk.PublicKey)
 
@@ -40,7 +39,7 @@ func TestAccountCreationAuthSignVerify(t *testing.T) {
     // Hash and sign manually and compare the generated signature
     hash, err := a.HashToSign(chainID, hermezContractAddr)
     require.NoError(t, err)
-    assert.Equal(t, "9414667457e658dd31949b82996b75c65a055512244c3bbfd22ff56add02ba65",
+    assert.Equal(t, "4f8df75e96fdce1ac90bb2f8d81c42047600f85bfcef80ce3b91c2a2afc58c1e",
         hex.EncodeToString(hash))
     sig, err := ethCrypto.Sign(hash, ethSk)
     require.NoError(t, err)
@@ -70,38 +69,35 @@ func TestAccountCreationAuthJSComp(t *testing.T) {
         sigExpected string
     }
     var tvs []testVector
-    //nolint:lll
     tv0 := testVector{
         ethSk:              "0000000000000000000000000000000000000000000000000000000000000001",
         expectedAddress:    "0x7E5F4552091A69125d5DfCb7b8C2659029395Bdf",
         pkCompStr:          "21b0a1688b37f77b1d1d5539ec3b826db5ac78b2513f574a04c50a7d4f8246d7",
         chainID:            uint16(4),
         hermezContractAddr: "0x7e5f4552091a69125d5dfcb7b8c2659029395bdf",
-        toHashExpected:     "190189658bba487e11c7da602676ee32bc90b77d3f32a305b147e4f3c3b35f19672e5d84ccc38d0ab245c469b719549d837113465c2abf9972c49403ca6fd10ed3dc",
-        hashExpected:       "c56eba41e511df100c804c5c09288f35887efea4f033be956481af335df3bea2",
-        sigExpected:        "dbedcc5ce02db8f48afbdb2feba9a3a31848eaa8fca5f312ce37b01db45d2199208335330d4445bd2f51d1db68dbc0d0bf3585c4a07504b4efbe46a69eaae5a21b",
+        toHashExpected:     "19457468657265756d205369676e6564204d6573736167653a0a3132304920617574686f72697a65207468697320626162796a75626a7562206b657920666f72206865726d657a20726f6c6c7570206163636f756e74206372656174696f6e21b0a1688b37f77b1d1d5539ec3b826db5ac78b2513f574a04c50a7d4f8246d700047e5f4552091a69125d5dfcb7b8c2659029395bdf",
+        hashExpected:       "39afea52d843a4de905b6b5ebb0ee8c678141f711d96d9b429c4aec10ef9911f",
+        sigExpected:        "73d10d6ecf06ee8a5f60ac90f06b78bef9c650f414ba3ac73e176dc32e896159147457e9c86f0b4bd60fdaf2c0b2aec890a7df993d69a4805e242a6b845ebf231c",
     }
-    //nolint:lll
     tv1 := testVector{
         ethSk:              "0000000000000000000000000000000000000000000000000000000000000002",
         expectedAddress:    "0x2B5AD5c4795c026514f8317c7a215E218DcCD6cF",
         pkCompStr:          "093985b1993d9f743f9d7d943ed56f38601cb8b196db025f79650c4007c3054d",
         chainID:            uint16(0),
         hermezContractAddr: "0x2b5ad5c4795c026514f8317c7a215e218dccd6cf",
-        toHashExpected:     "1901dafbc253dedf90d6421dc6e25d5d9efc6985133cb2a8d363d0a081a0e3eddddc65f603a88de36aaeabd3b4cf586538c7f3fd50c94780530a3707c8c14ad9fd11",
-        hashExpected:       "deb9afa479282cf27b442ce8ba86b19448aa87eacef691521a33db5d0feb9959",
-        sigExpected:        "6a0da90ba2d2b1be679a28ebe54ee03082d44b836087391cd7d2607c1e4dafe04476e6e88dccb8707c68312512f16c947524b35c80f26c642d23953e9bb84c701c",
+        toHashExpected:     "19457468657265756d205369676e6564204d6573736167653a0a3132304920617574686f72697a65207468697320626162796a75626a7562206b657920666f72206865726d657a20726f6c6c7570206163636f756e74206372656174696f6e093985b1993d9f743f9d7d943ed56f38601cb8b196db025f79650c4007c3054d00002b5ad5c4795c026514f8317c7a215e218dccd6cf",
+        hashExpected:       "89a3895993a4736232212e59566294feb3da227af44375daf3307dcad5451d5d",
+        sigExpected:        "bb4156156c705494ad5f99030342c64657e51e2994750f92125717c40bf56ad632044aa6bd00979feea92c417b552401e65fe5f531f15010d9d1c278da8be1df1b",
     }
-    //nolint:lll
     tv2 := testVector{
         ethSk:              "c5e8f61d1ab959b397eecc0a37a6517b8e67a0e7cf1f4bce5591f3ed80199122",
         expectedAddress:    "0xc783df8a850f42e7F7e57013759C285caa701eB6",
         pkCompStr:          "22870c1bcc451396202d62f566026eab8e438c6c91decf8ddf63a6c162619b52",
         chainID:            uint16(31337), // =0x7a69
         hermezContractAddr: "0xf4e77E5Da47AC3125140c470c71cBca77B5c638c",
-        toHashExpected:     "190167617949b934d7e01add4009cd3d47415a26727b7d6288e5dce33fb3721d5a1a9ce511b19b694c9aaf8183f4987ed752f24884c54c003d11daa2e98c7547a79e",
-        hashExpected:       "157b570c597e615b8356ce008ac39f43bc9b6d50080bc07d968031b9378acbbb",
-        sigExpected:        "a0766181102428b5672e523dc4b905c10ddf025c10dbd0b3534ef864632a14652737610041c670b302fc7dca28edd5d6eac42b72d69ce58da8ce21287b244e381b",
+        toHashExpected:     "19457468657265756d205369676e6564204d6573736167653a0a3132304920617574686f72697a65207468697320626162796a75626a7562206b657920666f72206865726d657a20726f6c6c7570206163636f756e74206372656174696f6e22870c1bcc451396202d62f566026eab8e438c6c91decf8ddf63a6c162619b527a69f4e77e5da47ac3125140c470c71cbca77b5c638c",
+        hashExpected:       "4f6ead01278ba4597d4720e37482f585a713497cea994a95209f4c57a963b4a7",
+        sigExpected:        "43b5818802a137a72a190c1d8d767ca507f7a4804b1b69b5e055abf31f4f2b476c80bb1ba63260d95610f6f831420d32130e7f22fec5d76e16644ddfcedd0d441c",
     }
     tvs = append(tvs, tv0)
     tvs = append(tvs, tv1)
@@ -126,10 +122,10 @@ func TestAccountCreationAuthJSComp(t *testing.T) {
         BJJ:     pkComp,
     }
 
-    toHash, err := a.toHash(chainID, hermezContractAddr)
-    require.NoError(t, err)
+    toHash := a.toHash(chainID, hermezContractAddr)
     assert.Equal(t, tv.toHashExpected,
         hex.EncodeToString(toHash))
+    assert.Equal(t, 120+len(EthMsgPrefix)+len([]byte("120")), len(toHash))
 
     msg, err := a.HashToSign(chainID, hermezContractAddr)
     require.NoError(t, err)
@@ -13,9 +13,8 @@ const batchNumBytesLen = 8

 // Batch is a struct that represents Hermez network batch
 type Batch struct {
 	BatchNum BatchNum `meddler:"batch_num"`
-	// Ethereum block in which the batch is forged
-	EthBlockNum int64 `meddler:"eth_block_num"`
+	EthBlockNum int64 `meddler:"eth_block_num"` // Ethereum block in which the batch is forged
 	ForgerAddr ethCommon.Address `meddler:"forger_addr"`
 	CollectedFees map[TokenID]*big.Int `meddler:"fees_collected,json"`
 	FeeIdxsCoordinator []Idx `meddler:"fee_idxs_coordinator,json"`
@@ -23,11 +22,9 @@ type Batch struct {
 	NumAccounts int `meddler:"num_accounts"`
 	LastIdx int64 `meddler:"last_idx"`
 	ExitRoot *big.Int `meddler:"exit_root,bigint"`
-	// ForgeL1TxsNum is optional, Only when the batch forges L1 txs. Identifier that corresponds
-	// to the group of L1 txs forged in the current batch.
-	ForgeL1TxsNum *int64 `meddler:"forge_l1_txs_num"`
+	ForgeL1TxsNum *int64 `meddler:"forge_l1_txs_num"` // optional, Only when the batch forges L1 txs. Identifier that corresponds to the group of L1 txs forged in the current batch.
 	SlotNum int64 `meddler:"slot_num"` // Slot in which the batch is forged
 	TotalFeesUSD *float64 `meddler:"total_fees_usd"`
 }

 // NewEmptyBatch creates a new empty batch
@@ -66,9 +63,7 @@ func (bn BatchNum) BigInt() *big.Int {
 // BatchNumFromBytes returns BatchNum from a []byte
 func BatchNumFromBytes(b []byte) (BatchNum, error) {
 	if len(b) != batchNumBytesLen {
-		return 0,
-			tracerr.Wrap(fmt.Errorf("can not parse BatchNumFromBytes, bytes len %d, expected %d",
-				len(b), batchNumBytesLen))
+		return 0, tracerr.Wrap(fmt.Errorf("can not parse BatchNumFromBytes, bytes len %d, expected %d", len(b), batchNumBytesLen))
 	}
 	batchNum := binary.BigEndian.Uint64(b[:batchNumBytesLen])
 	return BatchNum(batchNum), nil
@@ -82,7 +77,6 @@ type BatchData struct {
 	L1CoordinatorTxs []L1Tx
 	L2Txs []L2Tx
 	CreatedAccounts []Account
-	UpdatedAccounts []AccountUpdate
 	ExitTree []ExitInfo
 	Batch Batch
 }
@@ -34,7 +34,7 @@ type Slot struct {
 	// BatchesLen int
 	BidValue *big.Int
 	BootCoord bool
-	// Bidder, Forger and URL correspond to the winner of the slot (which is
+	// Bidder, Forer and URL correspond to the winner of the slot (which is
 	// not always the highest bidder). These are the values of the
 	// coordinator that is able to forge exclusively before the deadline.
 	Bidder ethCommon.Address
@@ -5,15 +5,10 @@ import (
 )

 // Coordinator represents a Hermez network coordinator who wins an auction for an specific slot
-// WARNING: this is strongly based on the previous implementation, once the new spec is done, this
-// may change a lot.
+// WARNING: this is strongly based on the previous implementation, once the new spec is done, this may change a lot.
 type Coordinator struct {
-	// Bidder is the address of the bidder
-	Bidder ethCommon.Address `meddler:"bidder_addr"`
-	// Forger is the address of the forger
-	Forger ethCommon.Address `meddler:"forger_addr"`
-	// EthBlockNum is the block in which the coordinator was registered
-	EthBlockNum int64 `meddler:"eth_block_num"`
-	// URL of the coordinators API
-	URL string `meddler:"url"`
+	Bidder ethCommon.Address `meddler:"bidder_addr"` // address of the bidder
+	Forger ethCommon.Address `meddler:"forger_addr"` // address of the forger
+	EthBlockNum int64 `meddler:"eth_block_num"` // block in which the coordinator was registered
+	URL string `meddler:"url"` // URL of the coordinators API
 }
@@ -1,33 +0,0 @@
-package common
-
-// SCVariables joins all the smart contract variables in a single struct
-type SCVariables struct {
-	Rollup RollupVariables `validate:"required"`
-	Auction AuctionVariables `validate:"required"`
-	WDelayer WDelayerVariables `validate:"required"`
-}
-
-// AsPtr returns the SCVariables as a SCVariablesPtr using pointers to the
-// original SCVariables
-func (v *SCVariables) AsPtr() *SCVariablesPtr {
-	return &SCVariablesPtr{
-		Rollup: &v.Rollup,
-		Auction: &v.Auction,
-		WDelayer: &v.WDelayer,
-	}
-}
-
-// SCVariablesPtr joins all the smart contract variables as pointers in a single
-// struct
-type SCVariablesPtr struct {
-	Rollup *RollupVariables `validate:"required"`
-	Auction *AuctionVariables `validate:"required"`
-	WDelayer *WDelayerVariables `validate:"required"`
-}
-
-// SCConsts joins all the smart contract constants in a single struct
-type SCConsts struct {
-	Rollup RollupConstants
-	Auction AuctionConstants
-	WDelayer WDelayerConstants
-}
@@ -68,13 +68,11 @@ type AuctionVariables struct {
 	ClosedAuctionSlots uint16 `meddler:"closed_auction_slots" validate:"required"`
 	// Distance (#slots) to the farthest slot to which you can bid (30 days = 4320 slots )
 	OpenAuctionSlots uint16 `meddler:"open_auction_slots" validate:"required"`
-	// How the HEZ tokens deposited by the slot winner are distributed (Burn: 40% - Donation:
-	// 40% - HGT: 20%)
+	// How the HEZ tokens deposited by the slot winner are distributed (Burn: 40% - Donation: 40% - HGT: 20%)
 	AllocationRatio [3]uint16 `meddler:"allocation_ratio,json" validate:"required"`
 	// Minimum outbid (percentage) over the previous one to consider it valid
 	Outbidding uint16 `meddler:"outbidding" validate:"required"`
-	// Number of blocks at the end of a slot in which any coordinator can forge if the winner
-	// has not forged one before
+	// Number of blocks at the end of a slot in which any coordinator can forge if the winner has not forged one before
 	SlotDeadline uint8 `meddler:"slot_deadline" validate:"required"`
 }

@@ -20,22 +20,19 @@ const (
 	// RollupConstExitIDx IDX 1 is reserved for exits
 	RollupConstExitIDx = 1
 	// RollupConstLimitTokens Max number of tokens allowed to be registered inside the rollup
-	RollupConstLimitTokens = (1 << 32) //nolint:gomnd
-	// RollupConstL1CoordinatorTotalBytes [4 bytes] token + [32 bytes] babyjub + [65 bytes]
-	// compressedSignature
+	RollupConstLimitTokens = (1 << 32)
+	// RollupConstL1CoordinatorTotalBytes [4 bytes] token + [32 bytes] babyjub + [65 bytes] compressedSignature
 	RollupConstL1CoordinatorTotalBytes = 101
-	// RollupConstL1UserTotalBytes [20 bytes] fromEthAddr + [32 bytes] fromBjj-compressed + [6
-	// bytes] fromIdx + [5 bytes] depositAmountFloat40 + [5 bytes] amountFloat40 + [4 bytes]
-	// tokenId + [6 bytes] toIdx
+	// RollupConstL1UserTotalBytes [20 bytes] fromEthAddr + [32 bytes] fromBjj-compressed + [6 bytes] fromIdx +
+	// [5 bytes] depositAmountFloat40 + [5 bytes] amountFloat40 + [4 bytes] tokenId + [6 bytes] toIdx
 	RollupConstL1UserTotalBytes = 78
 	// RollupConstMaxL1UserTx Maximum L1-user transactions allowed to be queued in a batch
 	RollupConstMaxL1UserTx = 128
 	// RollupConstMaxL1Tx Maximum L1 transactions allowed to be queued in a batch
 	RollupConstMaxL1Tx = 256
-	// RollupConstInputSHAConstantBytes [6 bytes] lastIdx + [6 bytes] newLastIdx + [32 bytes]
-	// stateRoot + [32 bytes] newStRoot + [32 bytes] newExitRoot + [_MAX_L1_TX *
-	// _L1_USER_TOTALBYTES bytes] l1TxsData + totalL2TxsDataLength + feeIdxCoordinatorLength +
-	// [2 bytes] chainID = 18542 bytes + totalL2TxsDataLength + feeIdxCoordinatorLength
+	// RollupConstInputSHAConstantBytes [6 bytes] lastIdx + [6 bytes] newLastIdx + [32 bytes] stateRoot + [32 bytes] newStRoot + [32 bytes] newExitRoot +
+	// [_MAX_L1_TX * _L1_USER_TOTALBYTES bytes] l1TxsData + totalL2TxsDataLength + feeIdxCoordinatorLength + [2 bytes] chainID =
+	// 18542 bytes + totalL2TxsDataLength + feeIdxCoordinatorLength
 	RollupConstInputSHAConstantBytes = 18546
 	// RollupConstNumBuckets Number of buckets
 	RollupConstNumBuckets = 5
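The byte-width breakdown in the `RollupConstL1UserTotalBytes` comment is easy to sanity-check; the field widths below are taken from that comment, and the sum is the constant itself:

```go
package main

import "fmt"

func main() {
	// Serialized L1 user tx layout, per the RollupConstL1UserTotalBytes comment.
	fromEthAddr := 20          // [20 bytes] fromEthAddr
	fromBjjCompressed := 32    // [32 bytes] fromBjj-compressed
	fromIdx := 6               // [6 bytes] fromIdx
	depositAmountFloat40 := 5  // [5 bytes] depositAmountFloat40
	amountFloat40 := 5         // [5 bytes] amountFloat40
	tokenID := 4               // [4 bytes] tokenId
	toIdx := 6                 // [6 bytes] toIdx
	fmt.Println(fromEthAddr + fromBjjCompressed + fromIdx +
		depositAmountFloat40 + amountFloat40 + tokenID + toIdx) // 78

	// Likewise for coordinator txs: token + babyjub + compressedSignature.
	fmt.Println(4 + 32 + 65) // 101
}
```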
@@ -47,18 +44,14 @@ const (

 var (
 	// RollupConstLimitDepositAmount Max deposit amount allowed (depositAmount: L1 --> L2)
-	RollupConstLimitDepositAmount, _ = new(big.Int).SetString(
-		"340282366920938463463374607431768211456", 10)
+	RollupConstLimitDepositAmount, _ = new(big.Int).SetString("340282366920938463463374607431768211456", 10)
 	// RollupConstLimitL2TransferAmount Max amount allowed (amount L2 --> L2)
-	RollupConstLimitL2TransferAmount, _ = new(big.Int).SetString(
-		"6277101735386680763835789423207666416102355444464034512896", 10)
+	RollupConstLimitL2TransferAmount, _ = new(big.Int).SetString("6277101735386680763835789423207666416102355444464034512896", 10)

-	// RollupConstEthAddressInternalOnly This ethereum address is used internally for rollup
-	// accounts that don't have ethereum address, only Babyjubjub.
-	// This non-ethereum accounts can be created by the coordinator and allow users to have a
-	// rollup account without needing an ethereum address
-	RollupConstEthAddressInternalOnly = ethCommon.HexToAddress(
-		"0xFFfFfFffFFfffFFfFFfFFFFFffFFFffffFfFFFfF")
+	// RollupConstEthAddressInternalOnly This ethereum address is used internally for rollup accounts that don't have ethereum address, only Babyjubjub
+	// This non-ethereum accounts can be created by the coordinator and allow users to have a rollup
+	// account without needing an ethereum address
+	RollupConstEthAddressInternalOnly = ethCommon.HexToAddress("0xFFfFfFffFFfffFFfFFfFFFFFffFFFffffFfFFFfF")
 	// RollupConstRfield Modulus zkSNARK
 	RollupConstRfield, _ = new(big.Int).SetString(
 		"21888242871839275222246405745257275088548364400416034343698204186575808495617", 10)
@@ -70,32 +63,24 @@ var (

 	// RollupConstRecipientInterfaceHash ERC777 recipient interface hash
 	RollupConstRecipientInterfaceHash = crypto.Keccak256([]byte("ERC777TokensRecipient"))
-	// RollupConstPerformL1UserTxSignature the signature of the function that can be called thru
-	// an ERC777 `send`
-	RollupConstPerformL1UserTxSignature = crypto.Keccak256([]byte(
-		"addL1Transaction(uint256,uint48,uint16,uint16,uint32,uint48)"))
-	// RollupConstAddTokenSignature the signature of the function that can be called thru an
-	// ERC777 `send`
+	// RollupConstPerformL1UserTxSignature the signature of the function that can be called thru an ERC777 `send`
+	RollupConstPerformL1UserTxSignature = crypto.Keccak256([]byte("addL1Transaction(uint256,uint48,uint16,uint16,uint32,uint48)"))
+	// RollupConstAddTokenSignature the signature of the function that can be called thru an ERC777 `send`
 	RollupConstAddTokenSignature = crypto.Keccak256([]byte("addToken(address)"))
 	// RollupConstSendSignature ERC777 Signature
 	RollupConstSendSignature = crypto.Keccak256([]byte("send(address,uint256,bytes)"))
 	// RollupConstERC777Granularity ERC777 Signature
 	RollupConstERC777Granularity = crypto.Keccak256([]byte("granularity()"))
-	// RollupConstWithdrawalDelayerDeposit This constant are used to deposit tokens from ERC77
-	// tokens into withdrawal delayer
+	// RollupConstWithdrawalDelayerDeposit This constant are used to deposit tokens from ERC77 tokens into withdrawal delayer
 	RollupConstWithdrawalDelayerDeposit = crypto.Keccak256([]byte("deposit(address,address,uint192)"))

 	// ERC20 signature

-	// RollupConstTransferSignature This constant is used in the _safeTransfer internal method
-	// in order to safe GAS.
+	// RollupConstTransferSignature This constant is used in the _safeTransfer internal method in order to safe GAS.
 	RollupConstTransferSignature = crypto.Keccak256([]byte("transfer(address,uint256)"))
-	// RollupConstTransferFromSignature This constant is used in the _safeTransfer internal
-	// method in order to safe GAS.
-	RollupConstTransferFromSignature = crypto.Keccak256([]byte(
-		"transferFrom(address,address,uint256)"))
-	// RollupConstApproveSignature This constant is used in the _safeTransfer internal method in
-	// order to safe GAS.
+	// RollupConstTransferFromSignature This constant is used in the _safeTransfer internal method in order to safe GAS.
+	RollupConstTransferFromSignature = crypto.Keccak256([]byte("transferFrom(address,address,uint256)"))
+	// RollupConstApproveSignature This constant is used in the _safeTransfer internal method in order to safe GAS.
 	RollupConstApproveSignature = crypto.Keccak256([]byte("approve(address,uint256)"))
 	// RollupConstERC20Signature ERC20 decimals signature
 	RollupConstERC20Signature = crypto.Keccak256([]byte("decimals()"))
@@ -156,7 +141,6 @@ type TokenExchange struct {
 }

 // RollupVariables are the variables of the Rollup Smart Contract
-//nolint:lll
 type RollupVariables struct {
 	EthBlockNum int64 `meddler:"eth_block_num"`
 	FeeAddToken *big.Int `meddler:"fee_add_token,bigint" validate:"required"`
@@ -27,7 +27,6 @@ type WDelayerEscapeHatchWithdrawal struct {
 }

 // WDelayerVariables are the variables of the Withdrawal Delayer Smart Contract
-//nolint:lll
 type WDelayerVariables struct {
 	EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
 	// HermezRollupAddress ethCommon.Address `json:"hermezRollupAddress" meddler:"rollup_address"`
@@ -22,9 +22,9 @@ var FeeFactorLsh60 [256]*big.Int
 // the coordinator according to the tx type (if the tx requires to create an
 // account and register, only register or he account already esists)
 type RecommendedFee struct {
 	ExistingAccount float64 `json:"existingAccount"`
 	CreatesAccount float64 `json:"createAccount"`
-	CreatesAccountInternal float64 `json:"createAccountInternal"`
+	CreatesAccountAndRegister float64 `json:"createAccountInternal"`
 }

 // FeeSelector is used to select a percentage from the FeePlan.
@@ -1,4 +1,4 @@
-// Package common float40.go provides methods to work with Hermez custom half
+// Package common Float40 provides methods to work with Hermez custom half
 // float precision, 40 bits, codification internally called Float40 has been
 // adopted to encode large integers. This is done in order to save bits when L2
 // transactions are published.
@@ -32,8 +32,6 @@ var (
 	// ErrFloat40NotEnoughPrecission is used when the given *big.Int can
 	// not be represented as Float40 due not enough precission
 	ErrFloat40NotEnoughPrecission = errors.New("Float40 error, not enough precission")
-
-	thres = big.NewInt(0x08_00_00_00_00)
 )

 // Float40 represents a float in a 64 bit format
@@ -70,7 +68,7 @@ func (f40 Float40) BigInt() (*big.Int, error) {
 	var f40Uint64 uint64 = uint64(f40) & 0x00_00_00_FF_FF_FF_FF_FF
 	f40Bytes, err := f40.Bytes()
 	if err != nil {
-		return nil, tracerr.Wrap(err)
+		return nil, err
 	}

 	e := f40Bytes[0] & 0xF8 >> 3 // take first 5 bits
@@ -88,41 +86,18 @@ func NewFloat40(f *big.Int) (Float40, error) {
 	e := big.NewInt(0)
 	zero := big.NewInt(0)
 	ten := big.NewInt(10)
+	thres := big.NewInt(0x08_00_00_00_00)
 	for new(big.Int).Mod(m, ten).Cmp(zero) == 0 && m.Cmp(thres) >= 0 {
 		m = new(big.Int).Div(m, ten)
 		e = new(big.Int).Add(e, big.NewInt(1))
 	}
 	if e.Int64() > 31 {
-		return 0, tracerr.Wrap(ErrFloat40E31)
+		return 0, ErrFloat40E31
 	}
 	if m.Cmp(thres) >= 0 {
-		return 0, tracerr.Wrap(ErrFloat40NotEnoughPrecission)
+		return 0, ErrFloat40NotEnoughPrecission
 	}
 	r := new(big.Int).Add(m,
 		new(big.Int).Mul(e, thres))
 	return Float40(r.Uint64()), nil
 }
-
-// NewFloat40Floor encodes a *big.Int integer as a Float40, rounding down in
-// case of loss during the encoding. It returns an error in case that the number
-// is too big (e>31). Warning: this method should not be used inside the
-// hermez-node, it's a helper for external usage to generate valid Float40
-// values.
-func NewFloat40Floor(f *big.Int) (Float40, error) {
-	m := f
-	e := big.NewInt(0)
-	// zero := big.NewInt(0)
-	ten := big.NewInt(10)
-	for m.Cmp(thres) >= 0 {
-		m = new(big.Int).Div(m, ten)
-		e = new(big.Int).Add(e, big.NewInt(1))
-	}
-	if e.Int64() > 31 {
-		return 0, tracerr.Wrap(ErrFloat40E31)
-	}
-
-	r := new(big.Int).Add(m,
-		new(big.Int).Mul(e, thres))
-
-	return Float40(r.Uint64()), nil
-}
@@ -1,11 +1,9 @@
 package common

 import (
-	"fmt"
 	"math/big"
 	"testing"

-	"github.com/hermeznetwork/tracerr"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
@@ -57,56 +55,7 @@ func TestExpectError(t *testing.T) {
 		bi, ok := new(big.Int).SetString(test, 10)
 		require.True(t, ok)
 		_, err := NewFloat40(bi)
-		assert.Equal(t, testVector[test], tracerr.Unwrap(err))
+		assert.Equal(t, testVector[test], err)
 	}
 }
-
-func TestNewFloat40Floor(t *testing.T) {
-	testVector := map[string][]string{
-		// []int contains [Float40 value, Flot40 Floor value], when
-		// Float40 value is expected to be 0, is because is expected to
-		// be an error
-		"9922334455000000000000000000000000000000": {
-			"1040714485495", "1040714485495", "9922334455000000000000000000000000000000"},
-		"9922334455000000000000000000000000000001": { // Floor [2] will be same as prev line
-			"0", "1040714485495", "9922334455000000000000000000000000000000"},
-		"9922334454999999999999999999999999999999": {
-			"0", "1040714485494", "9922334454000000000000000000000000000000"},
-		"42949672950000000000000000000000000000000": {
-			"1069446856703", "1069446856703", "42949672950000000000000000000000000000000"},
-		"99223344556573838487575": {
-			"0", "456598933239", "99223344550000000000000"},
-		"992233445500000000000000000000000000000000": {
-			"0", "0", "0"}, // e>31, returns 0, err
-		"343597383670000000000000000000000000000000": {
-			"1099511627775", "1099511627775", "343597383670000000000000000000000000000000"},
-		"343597383680000000000000000000000000000000": {
-			"0", "0", "0"}, // e>31, returns 0, err
-		"1157073197879933027": {
-			"0", "286448638922", "1157073197800000000"},
-	}
-	for test := range testVector {
-		bi, ok := new(big.Int).SetString(test, 10)
-		require.True(t, ok)
-		f40, err := NewFloat40(bi)
-		if f40 == 0 {
-			assert.Error(t, err)
-		} else {
-			assert.NoError(t, err)
-		}
-		assert.Equal(t, testVector[test][0], fmt.Sprint(uint64(f40)))
-
-		f40, err = NewFloat40Floor(bi)
-		if f40 == 0 {
-			assert.Equal(t, ErrFloat40E31, tracerr.Unwrap(err))
-		} else {
-			assert.NoError(t, err)
-		}
-		assert.Equal(t, testVector[test][1], fmt.Sprint(uint64(f40)))
-
-		bi2, err := f40.BigInt()
-		require.NoError(t, err)
-		assert.Equal(t, fmt.Sprint(testVector[test][2]), bi2.String())
-	}
-}
@@ -21,33 +21,25 @@ type L1Tx struct {
|
|||||||
// where type:
|
// where type:
|
||||||
// - L1UserTx: 0
|
// - L1UserTx: 0
|
||||||
// - L1CoordinatorTx: 1
|
// - L1CoordinatorTx: 1
|
||||||
TxID TxID `meddler:"id"`
|
TxID TxID `meddler:"id"`
|
||||||
// ToForgeL1TxsNum indicates in which the tx was forged / will be forged
|
ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"` // toForgeL1TxsNum in which the tx was forged / will be forged
|
||||||
ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"`
|
Position int `meddler:"position"`
|
||||||
Position int `meddler:"position"`
|
UserOrigin bool `meddler:"user_origin"` // true if the tx was originated by a user, false if it was aoriginated by a coordinator. Note that this differ from the spec for implementation simplification purpposes
|
||||||
// UserOrigin is set to true if the tx was originated by a user, false if it was
|
FromIdx Idx `meddler:"from_idx,zeroisnull"` // FromIdx is used by L1Tx/Deposit to indicate the Idx receiver of the L1Tx.DepositAmount (deposit)
|
||||||
// aoriginated by a coordinator. Note that this differ from the spec for implementation
|
|
||||||
// simplification purpposes
|
|
||||||
UserOrigin bool `meddler:"user_origin"`
|
|
||||||
// FromIdx is used by L1Tx/Deposit to indicate the Idx receiver of the L1Tx.DepositAmount
|
|
||||||
// (deposit)
|
|
||||||
FromIdx Idx `meddler:"from_idx,zeroisnull"`
|
|
||||||
EffectiveFromIdx Idx `meddler:"effective_from_idx,zeroisnull"`
|
EffectiveFromIdx Idx `meddler:"effective_from_idx,zeroisnull"`
|
||||||
FromEthAddr ethCommon.Address `meddler:"from_eth_addr,zeroisnull"`
|
FromEthAddr ethCommon.Address `meddler:"from_eth_addr,zeroisnull"`
|
||||||
FromBJJ babyjub.PublicKeyComp `meddler:"from_bjj,zeroisnull"`
|
FromBJJ babyjub.PublicKeyComp `meddler:"from_bjj,zeroisnull"`
|
||||||
// ToIdx is ignored in L1Tx/Deposit, but used in the L1Tx/DepositAndTransfer
|
ToIdx Idx `meddler:"to_idx"` // ToIdx is ignored in L1Tx/Deposit, but used in the L1Tx/DepositAndTransfer
|
||||||
ToIdx Idx `meddler:"to_idx"`
|
TokenID TokenID `meddler:"token_id"`
|
||||||
TokenID TokenID `meddler:"token_id"`
|
Amount *big.Int `meddler:"amount,bigint"`
|
||||||
Amount *big.Int `meddler:"amount,bigint"`
|
|
||||||
// EffectiveAmount only applies to L1UserTx.
|
// EffectiveAmount only applies to L1UserTx.
|
||||||
EffectiveAmount *big.Int `meddler:"effective_amount,bigintnull"`
|
EffectiveAmount *big.Int `meddler:"effective_amount,bigintnull"`
|
||||||
DepositAmount *big.Int `meddler:"deposit_amount,bigint"`
|
DepositAmount *big.Int `meddler:"deposit_amount,bigint"`
|
||||||
// EffectiveDepositAmount only applies to L1UserTx.
|
// EffectiveDepositAmount only applies to L1UserTx.
|
||||||
EffectiveDepositAmount *big.Int `meddler:"effective_deposit_amount,bigintnull"`
|
EffectiveDepositAmount *big.Int `meddler:"effective_deposit_amount,bigintnull"`
|
||||||
// Ethereum Block Number in which this L1Tx was added to the queue
|
EthBlockNum int64 `meddler:"eth_block_num"` // Ethereum Block Number in which this L1Tx was added to the queue
|
||||||
EthBlockNum int64 `meddler:"eth_block_num"`
|
Type TxType `meddler:"type"`
|
||||||
Type TxType `meddler:"type"`
|
BatchNum *BatchNum `meddler:"batch_num"`
|
||||||
BatchNum *BatchNum `meddler:"batch_num"`
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewL1Tx returns the given L1Tx with the TxId & Type parameters calculated
|
// NewL1Tx returns the given L1Tx with the TxId & Type parameters calculated
|
||||||
@@ -259,7 +251,7 @@ func L1TxFromDataAvailability(b []byte, nLevels uint32) (*L1Tx, error) {
 	}
 	l1tx.ToIdx = toIdx
 	l1tx.EffectiveAmount, err = Float40FromBytes(amountBytes).BigInt()
-	return &l1tx, tracerr.Wrap(err)
+	return &l1tx, err
 }

 // BytesGeneric returns the generic representation of a L1Tx. This method is
@@ -339,9 +331,7 @@ func (tx *L1Tx) BytesCoordinatorTx(compressedSignatureBytes []byte) ([]byte, err
 // L1UserTxFromBytes decodes a L1Tx from []byte
 func L1UserTxFromBytes(b []byte) (*L1Tx, error) {
 	if len(b) != RollupConstL1UserTotalBytes {
-		return nil,
-			tracerr.Wrap(fmt.Errorf("Can not parse L1Tx bytes, expected length %d, current: %d",
-				68, len(b)))
+		return nil, tracerr.Wrap(fmt.Errorf("Can not parse L1Tx bytes, expected length %d, current: %d", 68, len(b)))
 	}

 	tx := &L1Tx{
@@ -378,15 +368,19 @@ func L1UserTxFromBytes(b []byte) (*L1Tx, error) {
 	return tx, nil
 }

+func signHash(data []byte) []byte {
+	msg := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(data), data)
+	return ethCrypto.Keccak256([]byte(msg))
+}
+
 // L1CoordinatorTxFromBytes decodes a L1Tx from []byte
-func L1CoordinatorTxFromBytes(b []byte, chainID *big.Int, hermezAddress ethCommon.Address) (*L1Tx,
-	error) {
+func L1CoordinatorTxFromBytes(b []byte, chainID *big.Int, hermezAddress ethCommon.Address) (*L1Tx, error) {
 	if len(b) != RollupConstL1CoordinatorTotalBytes {
-		return nil, tracerr.Wrap(
-			fmt.Errorf("Can not parse L1CoordinatorTx bytes, expected length %d, current: %d",
-				101, len(b)))
+		return nil, tracerr.Wrap(fmt.Errorf("Can not parse L1CoordinatorTx bytes, expected length %d, current: %d", 101, len(b)))
 	}

+	bytesMessage := []byte("I authorize this babyjubjub key for hermez rollup account creation")
+
 	tx := &L1Tx{
 		UserOrigin: false,
 	}
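The `signHash` helper added in this hunk follows the standard Ethereum personal-message convention: the payload is prefixed with `"\x19Ethereum Signed Message:\n"` plus the payload's byte length before hashing. Below is a minimal stdlib-only sketch of just the string construction (the real helper then applies `ethCrypto.Keccak256` from go-ethereum, which is not reproduced here):

```go
package main

import "fmt"

// prefixedMessage builds the personal-message payload that signHash hashes.
// The real code Keccak256-hashes this; here we only show the prefixing.
func prefixedMessage(data []byte) []byte {
	msg := fmt.Sprintf("\x19Ethereum Signed Message:\n%d%s", len(data), data)
	return []byte(msg)
}

func main() {
	// A 5-byte payload gets the prefix "\x19Ethereum Signed Message:\n5".
	fmt.Printf("%q\n", prefixedMessage([]byte("hello")))
}
```

This prefixing is what makes a signature over arbitrary data unambiguous as a "signed message" rather than a raw transaction.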
@@ -407,20 +401,18 @@ func L1CoordinatorTxFromBytes(b []byte, chainID *big.Int, hermezAddress ethCommo
 		// L1CoordinatorTX ETH
 		// Ethereum adds 27 to v
 		v = b[0] - byte(27) //nolint:gomnd
+		chainIDBytes := ethCommon.LeftPadBytes(chainID.Bytes(), 2)
+		var data []byte
+		data = append(data, bytesMessage...)
+		data = append(data, pkCompB...)
+		data = append(data, chainIDBytes[:]...)
+		data = append(data, hermezAddress.Bytes()...)
 		var signature []byte
 		signature = append(signature, r[:]...)
 		signature = append(signature, s[:]...)
 		signature = append(signature, v)
-		accCreationAuth := AccountCreationAuth{
-			BJJ: tx.FromBJJ,
-		}
-		h, err := accCreationAuth.HashToSign(uint16(chainID.Uint64()), hermezAddress)
-		if err != nil {
-			return nil, tracerr.Wrap(err)
-		}
-
-		pubKeyBytes, err := ethCrypto.Ecrecover(h, signature)
+		hash := signHash(data)
+		pubKeyBytes, err := ethCrypto.Ecrecover(hash, signature)
 		if err != nil {
 			return nil, tracerr.Wrap(err)
 		}
@@ -29,8 +29,7 @@ func TestNewL1UserTx(t *testing.T) {
 	}
 	l1Tx, err := NewL1Tx(l1Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x00a6cbae3b8661fb75b0919ca6605a02cfb04d9c6dd16870fa0fcdf01befa32768",
-		l1Tx.TxID.String())
+	assert.Equal(t, "0x00a6cbae3b8661fb75b0919ca6605a02cfb04d9c6dd16870fa0fcdf01befa32768", l1Tx.TxID.String())
 }

 func TestNewL1CoordinatorTx(t *testing.T) {
@@ -47,8 +46,7 @@ func TestNewL1CoordinatorTx(t *testing.T) {
 	}
 	l1Tx, err := NewL1Tx(l1Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x01274482d73df4dab34a1b6740adfca347a462513aa14e82f27b12f818d1b68c84",
-		l1Tx.TxID.String())
+	assert.Equal(t, "0x01274482d73df4dab34a1b6740adfca347a462513aa14e82f27b12f818d1b68c84", l1Tx.TxID.String())
 }

 func TestL1TxCompressedData(t *testing.T) {
@@ -201,8 +199,7 @@ func TestL1userTxByteParsers(t *testing.T) {
 func TestL1TxByteParsersCompatibility(t *testing.T) {
 	// Data from compatibility test
 	var pkComp babyjub.PublicKeyComp
-	pkCompB, err :=
-		hex.DecodeString("0dd02deb2c81068e7a0f7e327df80b4ab79ee1f41a7def613e73a20c32eece5a")
+	pkCompB, err := hex.DecodeString("0dd02deb2c81068e7a0f7e327df80b4ab79ee1f41a7def613e73a20c32eece5a")
 	require.NoError(t, err)
 	pkCompL := SwapEndianness(pkCompB)
 	err = pkComp.UnmarshalText([]byte(hex.EncodeToString(pkCompL)))
@@ -223,17 +220,16 @@ func TestL1TxByteParsersCompatibility(t *testing.T) {

 	encodedData, err := l1Tx.BytesUser()
 	require.NoError(t, err)
-	expected := "85dab5b9e2e361d0c208d77be90efcc0439b0a530dd02deb2c81068e7a0f7e327df80b4ab79e" +
-		"e1f41a7def613e73a20c32eece5a000001c638db52540be400459682f0000020039c0000053cb88d"
+	expected := "85dab5b9e2e361d0c208d77be90efcc0439b0a530dd02deb2c81068e7a0f7e327df80b4ab79ee1f41a7def613e73a20c32eece5a000001c638db52540be400459682f0000020039c0000053cb88d"
 	assert.Equal(t, expected, hex.EncodeToString(encodedData))
 }

 func TestL1CoordinatorTxByteParsers(t *testing.T) {
 	hermezAddress := ethCommon.HexToAddress("0xD6C850aeBFDC46D7F4c207e445cC0d6B0919BDBe")
 	chainID := big.NewInt(1337)
+	chainIDBytes := ethCommon.LeftPadBytes(chainID.Bytes(), 2)

-	privateKey, err :=
-		crypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
+	privateKey, err := crypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
 	require.NoError(t, err)

 	publicKey := privateKey.Public()
@@ -249,16 +245,18 @@ func TestL1CoordinatorTxByteParsers(t *testing.T) {
 	pkCompL := []byte("56ca90f80d7c374ae7485e9bcc47d4ac399460948da6aeeb899311097925a72c")
 	err = pkComp.UnmarshalText(pkCompL)
 	require.NoError(t, err)
+	bytesMessage1 := []byte("\x19Ethereum Signed Message:\n120")
+	bytesMessage2 := []byte("I authorize this babyjubjub key for hermez rollup account creation")

-	accCreationAuth := AccountCreationAuth{
-		EthAddr: fromEthAddr,
-		BJJ:     pkComp,
-	}
-
-	h, err := accCreationAuth.HashToSign(uint16(chainID.Uint64()), hermezAddress)
-	require.NoError(t, err)
-
-	signature, err := crypto.Sign(h, privateKey)
+	babyjubB := SwapEndianness(pkComp[:])
+	var data []byte
+	data = append(data, bytesMessage1...)
+	data = append(data, bytesMessage2...)
+	data = append(data, babyjubB[:]...)
+	data = append(data, chainIDBytes...)
+	data = append(data, hermezAddress.Bytes()...)
+	hash := crypto.Keccak256Hash(data)
+	signature, err := crypto.Sign(hash.Bytes(), privateKey)
 	require.NoError(t, err)
 	// Ethereum adds 27 to v
 	v := int(signature[64])
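The `"\n120"` in `bytesMessage1` above is the byte length of the signed payload: the 66-byte authorization message, the 32-byte compressed BabyJubJub key, the 2-byte chain ID, and the 20-byte Hermez contract address. A quick sketch of that arithmetic (plain Go, independent of the repo's types):

```go
package main

import "fmt"

func main() {
	// Payload pieces appended to `data` in the test above:
	msg := "I authorize this babyjubjub key for hermez rollup account creation" // 66 bytes
	babyjubKeyLen := 32                                                        // compressed BabyJubJub key
	chainIDLen := 2                                                            // left-padded chain ID
	addrLen := 20                                                              // Ethereum address

	total := len(msg) + babyjubKeyLen + chainIDLen + addrLen
	fmt.Println(total) // 120 — the length encoded in the personal-message prefix
}
```

If the message text or any component length changed, the hard-coded `\n120` prefix in the test would have to change with it.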
@@ -305,8 +303,7 @@ func TestL1CoordinatorTxByteParsersCompatibility(t *testing.T) {
 	signature = append(signature, v[:]...)

 	var pkComp babyjub.PublicKeyComp
-	pkCompB, err :=
-		hex.DecodeString("a2c2807ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c")
+	pkCompB, err := hex.DecodeString("a2c2807ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c")
 	require.NoError(t, err)
 	pkCompL := SwapEndianness(pkCompB)
 	err = pkComp.UnmarshalText([]byte(hex.EncodeToString(pkCompL)))
@@ -321,9 +318,7 @@ func TestL1CoordinatorTxByteParsersCompatibility(t *testing.T) {
 	encodeData, err := l1Tx.BytesCoordinatorTx(signature)
 	require.NoError(t, err)

-	expected, err := utils.HexDecode("1b186d7122ff7f654cfed3156719774898d573900c86599a885a706" +
-		"dbdffe5ea8cda71e5eb097e115405d84d1e7b464009b434b32c014a2df502d1f065ced8bc3ba2c28" +
-		"07ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c000000e7")
+	expected, err := utils.HexDecode("1b186d7122ff7f654cfed3156719774898d573900c86599a885a706dbdffe5ea8cda71e5eb097e115405d84d1e7b464009b434b32c014a2df502d1f065ced8bc3ba2c2807ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c000000e7")
 	require.NoError(t, err)

 	assert.Equal(t, expected, encodeData)
@@ -10,7 +10,7 @@ import (

 // L2Tx is a struct that represents an already forged L2 tx
 type L2Tx struct {
-	// Stored in DB: mandatory fields
+	// Stored in DB: mandatory fileds
 	TxID     TxID     `meddler:"id"`
 	BatchNum BatchNum `meddler:"batch_num"` // batchNum in which this tx was forged.
 	Position int      `meddler:"position"`
@@ -21,10 +21,9 @@ type L2Tx struct {
 	Amount *big.Int    `meddler:"amount,bigint"`
 	Fee    FeeSelector `meddler:"fee"`
 	// Nonce is filled by the TxProcessor
 	Nonce Nonce  `meddler:"nonce"`
 	Type  TxType `meddler:"type"`
-	// EthBlockNum in which this L2Tx was added to the queue
-	EthBlockNum int64 `meddler:"eth_block_num"`
+	EthBlockNum int64 `meddler:"eth_block_num"` // EthereumBlockNumber in which this L2Tx was added to the queue
 }

 // NewL2Tx returns the given L2Tx with the TxId & Type parameters calculated
@@ -19,8 +19,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err := NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e", l2Tx.TxID.String())

 	l2Tx = &L2Tx{
 		FromIdx: 87654,
@@ -31,8 +30,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x029e7499a830f8f5eb17c07da48cf91415710f1bcbe0169d363ff91e81faf92fc2",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x029e7499a830f8f5eb17c07da48cf91415710f1bcbe0169d363ff91e81faf92fc2", l2Tx.TxID.String())

 	l2Tx = &L2Tx{
 		FromIdx: 87654,
@@ -44,8 +42,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x0255c70ed20e1b8935232e1b9c5884dbcc88a6e1a3454d24f2d77252eb2bb0b64e",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x0255c70ed20e1b8935232e1b9c5884dbcc88a6e1a3454d24f2d77252eb2bb0b64e", l2Tx.TxID.String())

 	l2Tx = &L2Tx{
 		FromIdx: 87654,
@@ -57,8 +54,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x0206b372f967061d1148bbcff679de38120e075141a80a07326d0f514c2efc6ca9",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x0206b372f967061d1148bbcff679de38120e075141a80a07326d0f514c2efc6ca9", l2Tx.TxID.String())

 	l2Tx = &L2Tx{
 		FromIdx: 1,
@@ -70,8 +66,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x0236f7ea5bccf78ba60baf56c058d235a844f9b09259fd0efa4f5f72a7d4a26618",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x0236f7ea5bccf78ba60baf56c058d235a844f9b09259fd0efa4f5f72a7d4a26618", l2Tx.TxID.String())

 	l2Tx = &L2Tx{
 		FromIdx: 999,
@@ -83,8 +78,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x02ac122f5b709ce190129fecbbe35bfd30c70e6433dbd85a8eb743d110906a1dc1",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x02ac122f5b709ce190129fecbbe35bfd30c70e6433dbd85a8eb743d110906a1dc1", l2Tx.TxID.String())

 	l2Tx = &L2Tx{
 		FromIdx: 4444,
@@ -96,8 +90,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x02c674951a81881b7bc50db3b9e5efd97ac88550c7426ac548720e5057cfba515a",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x02c674951a81881b7bc50db3b9e5efd97ac88550c7426ac548720e5057cfba515a", l2Tx.TxID.String())
 }

 func TestL2TxByteParsers(t *testing.T) {
@@ -16,8 +16,7 @@ import (
 // EmptyBJJComp contains the 32 byte array of a empty BabyJubJub PublicKey
 // Compressed. It is a valid point in the BabyJubJub curve, so does not give
 // errors when being decompressed.
-var EmptyBJJComp = babyjub.PublicKeyComp([32]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-	0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0})
+var EmptyBJJComp = babyjub.PublicKeyComp([32]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0})

 // PoolL2Tx is a struct that represents a L2Tx sent by an account to the
 // coordinator that is waiting to be forged
@@ -74,7 +73,7 @@ func NewPoolL2Tx(tx *PoolL2Tx) (*PoolL2Tx, error) {
 	// If original Type doesn't match the correct one, return error
 	if txTypeOld != "" && txTypeOld != tx.Type {
 		return nil, tracerr.Wrap(fmt.Errorf("L2Tx.Type: %s, should be: %s",
-			txTypeOld, tx.Type))
+			tx.Type, txTypeOld))
 	}

 	txIDOld := tx.TxID
@@ -84,7 +83,7 @@ func NewPoolL2Tx(tx *PoolL2Tx) (*PoolL2Tx, error) {
 	// If original TxID doesn't match the correct one, return error
 	if txIDOld != (TxID{}) && txIDOld != tx.TxID {
 		return tx, tracerr.Wrap(fmt.Errorf("PoolL2Tx.TxID: %s, should be: %s",
-			txIDOld.String(), tx.TxID.String()))
+			tx.TxID.String(), txIDOld.String()))
 	}

 	return tx, nil
@@ -101,8 +100,6 @@ func (tx *PoolL2Tx) SetType() error {
 			tx.Type = TxTypeTransferToBJJ
 		} else if tx.ToEthAddr != FFAddr && tx.ToEthAddr != EmptyAddr {
 			tx.Type = TxTypeTransferToEthAddr
-		} else {
-			return tracerr.Wrap(errors.New("malformed transaction"))
 		}
 	} else {
 		return tracerr.Wrap(errors.New("malformed transaction"))
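The `SetType` hunk above routes a transfer with `ToIdx == 0` by looking at the destination Ethereum address: the all-`0xff` sentinel (`FFAddr`) means the destination is identified by a BabyJubJub key, any other non-empty address means a transfer to an Ethereum address, and the base branch additionally rejected the all-zero address as malformed. A simplified stand-in sketch of that routing (`[20]byte` sentinels here are illustrative, not the real `common` package types):

```go
package main

import "fmt"

// Sentinel destination addresses, mirroring FFAddr and EmptyAddr in the diff.
var (
	emptyAddr [20]byte // 0x00...00: no destination address given
	ffAddr    = func() [20]byte {
		var a [20]byte
		for i := range a {
			a[i] = 0xff // 0xff...ff: destination identified by BJJ key
		}
		return a
	}()
)

// transferType reproduces the base branch's routing for txs with ToIdx == 0.
func transferType(toEthAddr [20]byte) string {
	switch {
	case toEthAddr == ffAddr:
		return "TransferToBJJ"
	case toEthAddr != emptyAddr:
		return "TransferToEthAddr"
	default:
		return "malformed transaction"
	}
}

func main() {
	fmt.Println(transferType(ffAddr)) // TransferToBJJ
}
```

Note the hunk removes the inner `else { return ... }` on the branch side, so there a `ToIdx == 0` transfer with an empty address falls through instead of erroring.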
@@ -306,8 +303,10 @@ func (tx *PoolL2Tx) HashToSign(chainID uint16) (*big.Int, error) {
 		return nil, tracerr.Wrap(err)
 	}
 	copy(e1B[0:5], amountFloat40Bytes)
-	copy(e1B[5:25], tx.ToEthAddr[:])
+	toEthAddr := EthAddrToBigInt(tx.ToEthAddr)
+	copy(e1B[5:25], toEthAddr.Bytes())
 	e1 := new(big.Int).SetBytes(e1B[:])

 	rqToEthAddr := EthAddrToBigInt(tx.RqToEthAddr)

 	_, toBJJY := babyjub.UnpackSignY(tx.ToBJJ)
@@ -319,8 +318,7 @@ func (tx *PoolL2Tx) HashToSign(chainID uint16) (*big.Int, error) {

 	_, rqToBJJY := babyjub.UnpackSignY(tx.RqToBJJ)

-	return poseidon.Hash([]*big.Int{toCompressedData, e1, toBJJY, rqTxCompressedDataV2,
-		rqToEthAddr, rqToBJJY})
+	return poseidon.Hash([]*big.Int{toCompressedData, e1, toBJJY, rqTxCompressedDataV2, rqToEthAddr, rqToBJJY})
 }

 // VerifySignature returns true if the signature verification is correct for the given PublicKeyComp
@@ -21,20 +21,17 @@ func TestNewPoolL2Tx(t *testing.T) {
 	}
 	poolL2Tx, err := NewPoolL2Tx(poolL2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e",
-		poolL2Tx.TxID.String())
+	assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e", poolL2Tx.TxID.String())
 }

 func TestTxCompressedDataAndTxCompressedDataV2JSVectors(t *testing.T) {
 	// test vectors values generated from javascript implementation
 	var skPositive babyjub.PrivateKey // 'Positive' refers to the sign
-	_, err := hex.Decode(skPositive[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
+	_, err := hex.Decode(skPositive[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
 	assert.NoError(t, err)

 	var skNegative babyjub.PrivateKey // 'Negative' refers to the sign
-	_, err = hex.Decode(skNegative[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090002"))
+	_, err = hex.Decode(skNegative[:], []byte("0001020304050607080900010203040506070809000102030405060708090002"))
 	assert.NoError(t, err)

 	amount, ok := new(big.Int).SetString("343597383670000000000000000000000000000000", 10)
@@ -126,8 +123,7 @@ func TestTxCompressedDataAndTxCompressedDataV2JSVectors(t *testing.T) {

 func TestRqTxCompressedDataV2(t *testing.T) {
 	var sk babyjub.PrivateKey
-	_, err := hex.Decode(sk[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
+	_, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
 	assert.NoError(t, err)
 	tx := PoolL2Tx{
 		RqFromIdx: 7,
@@ -146,8 +142,7 @@ func TestRqTxCompressedDataV2(t *testing.T) {
 	expected, ok := new(big.Int).SetString(expectedStr, 10)
 	assert.True(t, ok)
 	assert.Equal(t, expected.Bytes(), txCompressedData.Bytes())
-	assert.Equal(t, "010c000000000b0000000a0000000009000000000008000000000007",
-		hex.EncodeToString(txCompressedData.Bytes()))
+	assert.Equal(t, "010c000000000b0000000a0000000009000000000008000000000007", hex.EncodeToString(txCompressedData.Bytes()))
 }

 func TestHashToSign(t *testing.T) {
@@ -162,15 +157,13 @@ func TestHashToSign(t *testing.T) {
 	}
 	toSign, err := tx.HashToSign(chainID)
 	assert.NoError(t, err)
-	assert.Equal(t, "0b8abaf6b7933464e4450df2514da8b72606c02bf7f89bf6e54816fbda9d9d57",
-		hex.EncodeToString(toSign.Bytes()))
+	assert.Equal(t, "2d49ce1d4136e06f64e3eb1f79a346e6ee3e93ceeac909a57806a8d87005c263", hex.EncodeToString(toSign.Bytes()))
 }

 func TestVerifyTxSignature(t *testing.T) {
 	chainID := uint16(0)
 	var sk babyjub.PrivateKey
-	_, err := hex.Decode(sk[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
+	_, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
 	assert.NoError(t, err)
 	tx := PoolL2Tx{
 		FromIdx: 2,
@@ -184,49 +177,18 @@ func TestVerifyTxSignature(t *testing.T) {
 	}
 	toSign, err := tx.HashToSign(chainID)
 	assert.NoError(t, err)
-	assert.Equal(t,
-		"3144939470626721092564692894890580265754250231349521601298746071096761507003",
-		toSign.String())
+	assert.Equal(t, "1571327027383224465388301747239444557034990637650927918405777653988509342917", toSign.String())

 	sig := sk.SignPoseidon(toSign)
 	tx.Signature = sig.Compress()
 	assert.True(t, tx.VerifySignature(chainID, sk.Public().Compress()))
 }

-func TestVerifyTxSignatureEthAddrWith0(t *testing.T) {
-	chainID := uint16(5)
-	var sk babyjub.PrivateKey
-	_, err := hex.Decode(sk[:],
-		[]byte("02f0b4f87065af3797aaaf934e8b5c31563c17f2272fa71bd0146535bfbb4184"))
-	assert.NoError(t, err)
-	tx := PoolL2Tx{
-		FromIdx:   10659,
-		ToIdx:     0,
-		ToEthAddr: ethCommon.HexToAddress("0x0004308BD15Ead4F1173624dC289DBdcC806a309"),
-		Amount:    big.NewInt(5000),
-		TokenID:   0,
-		Nonce:     946,
-		Fee:       231,
-	}
-	toSign, err := tx.HashToSign(chainID)
-	assert.NoError(t, err)
-
-	sig := sk.SignPoseidon(toSign)
-	assert.Equal(t,
-		"f208b8298d5f37148ac3c0c03703272ea47b9f836851bcf8dd5f7e4e3b336ca1d2f6e92ad85dc25f174daf7a0abfd5f71dead3f059b783f4c4b2f56a18a47000",
-		sig.Compress().String(),
-	)
-	tx.Signature = sig.Compress()
-	assert.True(t, tx.VerifySignature(chainID, sk.Public().Compress()))
-}
-
 func TestDecompressEmptyBJJComp(t *testing.T) {
 	pkComp := EmptyBJJComp
 	pk, err := pkComp.Decompress()
 	require.NoError(t, err)
-	assert.Equal(t,
-		"2957874849018779266517920829765869116077630550401372566248359756137677864698",
-		pk.X.String())
+	assert.Equal(t, "2957874849018779266517920829765869116077630550401372566248359756137677864698", pk.X.String())
 	assert.Equal(t, "0", pk.Y.String())
 }
@@ -15,9 +15,8 @@ const tokenIDBytesLen = 4

 // Token is a struct that represents an Ethereum token that is supported in Hermez network
 type Token struct {
 	TokenID TokenID `json:"id" meddler:"token_id"`
-	// EthBlockNum indicates the Ethereum block number in which this token was registered
-	EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
+	EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"` // Ethereum block number in which this token was registered
 	EthAddr ethCommon.Address `json:"ethereumAddress" meddler:"eth_addr"`
 	Name    string            `json:"name" meddler:"name"`
 	Symbol  string            `json:"symbol" meddler:"symbol"`
@@ -49,8 +48,7 @@ func (t TokenID) BigInt() *big.Int {
 // TokenIDFromBytes returns TokenID from a byte array
 func TokenIDFromBytes(b []byte) (TokenID, error) {
 	if len(b) != tokenIDBytesLen {
-		return 0, tracerr.Wrap(fmt.Errorf("can not parse TokenID, bytes len %d, expected 4",
-			len(b)))
+		return 0, tracerr.Wrap(fmt.Errorf("can not parse TokenID, bytes len %d, expected 4", len(b)))
 	}
 	tid := binary.BigEndian.Uint32(b[:4])
 	return TokenID(tid), nil
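`TokenIDFromBytes` above is a length-checked big-endian `uint32` decode. A minimal stdlib-only sketch of the same logic (names mirror the diff but this is a stand-in, not the real `common` package):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// tokenIDFromBytes reads a TokenID as a big-endian uint32 from exactly 4 bytes,
// matching the tokenIDBytesLen check in the hunk above.
func tokenIDFromBytes(b []byte) (uint32, error) {
	if len(b) != 4 {
		return 0, fmt.Errorf("can not parse TokenID, bytes len %d, expected 4", len(b))
	}
	return binary.BigEndian.Uint32(b[:4]), nil
}

func main() {
	id, err := tokenIDFromBytes([]byte{0x00, 0x00, 0x00, 0x2a})
	fmt.Println(id, err) // 42 <nil>
}
```

Big-endian byte order is what makes the serialized IDs sort the same way numerically and lexicographically.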
54  common/tx.go
@@ -15,12 +15,12 @@ import (
 )

 const (
-	// TxIDPrefixL1UserTx is the prefix that determines that the TxID is for
-	// a L1UserTx
+	// TXIDPrefixL1UserTx is the prefix that determines that the TxID is
+	// for a L1UserTx
 	//nolinter:gomnd
 	TxIDPrefixL1UserTx = byte(0)

-	// TxIDPrefixL1CoordTx is the prefix that determines that the TxID is
+	// TXIDPrefixL1CoordTx is the prefix that determines that the TxID is
 	// for a L1CoordinatorTx
 	//nolinter:gomnd
 	TxIDPrefixL1CoordTx = byte(1)
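These prefix constants make the transaction class readable from a TxID's first byte, which is visible in the test vectors earlier in the diff: L1 user tx IDs start with `0x00`, L1 coordinator tx IDs with `0x01`, and the L2 tx IDs with `0x02` (the `0x02` L2 prefix is inferred from the test data, not from this hunk). A small sketch of reading the class back out of a hex TxID:

```go
package main

import "fmt"

// txClass inspects the first byte of a "0x..."-prefixed hex TxID.
// The 00/01 prefixes come from the constants above; 02 for L2 txs is
// inferred from the TxIDs asserted in the tests.
func txClass(txIDHex string) string {
	switch txIDHex[2:4] {
	case "00":
		return "L1UserTx"
	case "01":
		return "L1CoordinatorTx"
	default:
		return "L2Tx"
	}
}

func main() {
	// TxID taken from TestNewL1UserTx in this diff.
	fmt.Println(txClass("0x00a6cbae3b8661fb75b0919ca6605a02cfb04d9c6dd16870fa0fcdf01befa32768"))
}
```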
@@ -51,8 +51,7 @@ func (txid *TxID) Scan(src interface{}) error {
 		return tracerr.Wrap(fmt.Errorf("can't scan %T into TxID", src))
 	}
 	if len(srcB) != TxIDLen {
-		return tracerr.Wrap(fmt.Errorf("can't scan []byte of len %d into TxID, need %d",
-			len(srcB), TxIDLen))
+		return tracerr.Wrap(fmt.Errorf("can't scan []byte of len %d into TxID, need %d", len(srcB), TxIDLen))
 	}
 	copy(txid[:], srcB)
 	return nil
@@ -88,7 +87,7 @@ func (txid TxID) MarshalText() ([]byte, error) {
 	return []byte(txid.String()), nil
 }

-// UnmarshalText unmarshalls a TxID
+// UnmarshalText unmarshals a TxID
 func (txid *TxID) UnmarshalText(data []byte) error {
 	idStr := string(data)
 	id, err := NewTxIDFromString(idStr)
@@ -103,15 +102,13 @@ func (txid *TxID) UnmarshalText(data []byte) error {
 type TxType string

 const (
-	// TxTypeExit represents L2->L1 token transfer. A leaf for this account appears in the exit
-	// tree of the block
+	// TxTypeExit represents L2->L1 token transfer. A leaf for this account appears in the exit tree of the block
 	TxTypeExit TxType = "Exit"
 	// TxTypeTransfer represents L2->L2 token transfer
 	TxTypeTransfer TxType = "Transfer"
 	// TxTypeDeposit represents L1->L2 transfer
 	TxTypeDeposit TxType = "Deposit"
-	// TxTypeCreateAccountDeposit represents creation of a new leaf in the state tree
-	// (newAcconut) + L1->L2 transfer
+	// TxTypeCreateAccountDeposit represents creation of a new leaf in the state tree (newAcconut) + L1->L2 transfer
 	TxTypeCreateAccountDeposit TxType = "CreateAccountDeposit"
 	// TxTypeCreateAccountDepositTransfer represents L1->L2 transfer + L2->L2 transfer
 	TxTypeCreateAccountDepositTransfer TxType = "CreateAccountDepositTransfer"
@@ -127,31 +124,24 @@ const (
 	TxTypeTransferToBJJ TxType = "TransferToBJJ"
 )
 
-// Tx is a struct used by the TxSelector & BatchBuilder as a generic type generated from L1Tx &
-// PoolL2Tx
+// Tx is a struct used by the TxSelector & BatchBuilder as a generic type generated from L1Tx & PoolL2Tx
 type Tx struct {
 	// Generic
 	IsL1 bool `meddler:"is_l1"`
 	TxID TxID `meddler:"id"`
 	Type TxType `meddler:"type"`
 	Position int `meddler:"position"`
 	FromIdx Idx `meddler:"from_idx"`
 	ToIdx Idx `meddler:"to_idx"`
 	Amount *big.Int `meddler:"amount,bigint"`
 	AmountFloat float64 `meddler:"amount_f"`
 	TokenID TokenID `meddler:"token_id"`
 	USD *float64 `meddler:"amount_usd"`
-	// BatchNum in which this tx was forged. If the tx is L2, this must be != 0
-	BatchNum *BatchNum `meddler:"batch_num"`
-	// Ethereum Block Number in which this L1Tx was added to the queue
-	EthBlockNum int64 `meddler:"eth_block_num"`
+	BatchNum *BatchNum `meddler:"batch_num"` // batchNum in which this tx was forged. If the tx is L2, this must be != 0
+	EthBlockNum int64 `meddler:"eth_block_num"` // Ethereum Block Number in which this L1Tx was added to the queue
 
 	// L1
-	// ToForgeL1TxsNum in which the tx was forged / will be forged
-	ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"`
-	// UserOrigin is set to true if the tx was originated by a user, false if it was aoriginated
-	// by a coordinator. Note that this differ from the spec for implementation simplification
-	// purpposes
-	UserOrigin *bool `meddler:"user_origin"`
+	ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"` // toForgeL1TxsNum in which the tx was forged / will be forged
+	UserOrigin *bool `meddler:"user_origin"` // true if the tx was originated by a user, false if it was aoriginated by a coordinator. Note that this differ from the spec for implementation simplification purpposes
 	FromEthAddr ethCommon.Address `meddler:"from_eth_addr"`
 	FromBJJ babyjub.PublicKeyComp `meddler:"from_bjj"`
 	DepositAmount *big.Int `meddler:"deposit_amount,bigintnull"`
@@ -21,10 +21,8 @@ func TestSignatureConstant(t *testing.T) {
 func TestTxIDScannerValue(t *testing.T) {
 	txid0 := &TxID{}
 	txid1 := &TxID{}
-	txid0B := [TxIDLen]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2,
-		3, 4, 5, 6, 7, 8, 9, 0, 1, 2}
-	txid1B := [TxIDLen]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-		0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
+	txid0B := [TxIDLen]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2}
+	txid1B := [TxIDLen]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
 
 	copy(txid0[:], txid0B[:])
 	copy(txid1[:], txid1B[:])
@@ -62,17 +62,3 @@ func RmEndingZeroes(siblings []*merkletree.Hash) []*merkletree.Hash {
 	}
 	return siblings[:pos]
 }
-
-// TokensToUSD is a helper function to calculate the USD value of a certain
-// amount of tokens considering the normalized token price (which is the price
-// commonly reported by exhanges)
-func TokensToUSD(amount *big.Int, decimals uint64, valueUSD float64) float64 {
-	amountF := new(big.Float).SetInt(amount)
-	// Divide by 10^decimals to normalize the amount
-	baseF := new(big.Float).SetInt(new(big.Int).Exp(
-		big.NewInt(10), big.NewInt(int64(decimals)), nil)) //nolint:gomnd
-	amountF.Mul(amountF, big.NewFloat(valueUSD))
-	amountF.Quo(amountF, baseF)
-	amountUSD, _ := amountF.Float64()
-	return amountUSD
-}
@@ -21,23 +21,16 @@ func TestBJJFromStringWithChecksum(t *testing.T) {
 	assert.NoError(t, err)
 
 	// expected values computed with js implementation
-	assert.Equal(t,
-		"2492816973395423007340226948038371729989170225696553239457870892535792679622",
-		pk.X.String())
-	assert.Equal(t,
-		"15238403086306505038849621710779816852318505119327426213168494964113886299863",
-		pk.Y.String())
+	assert.Equal(t, "2492816973395423007340226948038371729989170225696553239457870892535792679622", pk.X.String())
+	assert.Equal(t, "15238403086306505038849621710779816852318505119327426213168494964113886299863", pk.Y.String())
 }
 
 func TestRmEndingZeroes(t *testing.T) {
-	s0, err :=
-		merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000000")
+	s0, err := merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000000")
 	require.NoError(t, err)
-	s1, err :=
-		merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000001")
+	s1, err := merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000001")
 	require.NoError(t, err)
-	s2, err :=
-		merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000002")
+	s2, err := merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000002")
 	require.NoError(t, err)
 
 	// expect cropped last zeroes
common/zk.go — 20 changes
@@ -1,4 +1,4 @@
-// Package common zk.go contains all the common data structures used at the
+// Package common contains all the common data structures used at the
 // hermez-node, zk.go contains the zkSnark inputs used to generate the proof
 package common
 
@@ -67,7 +67,7 @@ type ZKInputs struct {
 
 	// accumulate fees
 	// FeePlanTokens contains all the tokenIDs for which the fees are being
-	// accumulated and those fees accumulated will be paid to the FeeIdxs
+	// accumulated and those fees accoumulated will be paid to the FeeIdxs
 	// array. The order of FeeIdxs & FeePlanTokens & State3 must match.
 	// Coordinator fees are processed correlated such as:
 	// [FeePlanTokens[i], FeeIdxs[i]]
@@ -130,8 +130,8 @@ type ZKInputs struct {
 	RqOffset []*big.Int `json:"rqOffset"` // uint8 (max 3 bits), len: [maxTx]
 
 	// transaction L2 request data
-	// RqTxCompressedDataV2 big.Int (max 251 bits), len: [maxTx]
-	RqTxCompressedDataV2 []*big.Int `json:"rqTxCompressedDataV2"`
+	// RqTxCompressedDataV2
+	RqTxCompressedDataV2 []*big.Int `json:"rqTxCompressedDataV2"` // big.Int (max 251 bits), len: [maxTx]
 	// RqToEthAddr
 	RqToEthAddr []*big.Int `json:"rqToEthAddr"` // ethCommon.Address, len: [maxTx]
 	// RqToBJJAy
@@ -301,8 +301,7 @@ func (z ZKInputs) MarshalJSON() ([]byte, error) {
 }
 
 // NewZKInputs returns a pointer to an initialized struct of ZKInputs
-func NewZKInputs(chainID uint16, maxTx, maxL1Tx, maxFeeIdxs, nLevels uint32,
-	currentNumBatch *big.Int) *ZKInputs {
+func NewZKInputs(chainID uint16, maxTx, maxL1Tx, maxFeeIdxs, nLevels uint32, currentNumBatch *big.Int) *ZKInputs {
 	zki := &ZKInputs{}
 	zki.Metadata.MaxFeeIdxs = maxFeeIdxs
 	zki.Metadata.MaxLevels = uint32(48) //nolint:gomnd
@@ -481,7 +480,7 @@ func (z ZKInputs) ToHashGlobalData() ([]byte, error) {
 	b = append(b, newExitRoot...)
 
 	// [MAX_L1_TX * (2 * MAX_NLEVELS + 528) bits] L1TxsData
-	l1TxDataLen := (2*z.Metadata.MaxLevels + 528) //nolint:gomnd
+	l1TxDataLen := (2*z.Metadata.MaxLevels + 528)
 	l1TxsDataLen := (z.Metadata.MaxL1Tx * l1TxDataLen)
 	l1TxsData := make([]byte, l1TxsDataLen/8) //nolint:gomnd
 	for i := 0; i < len(z.Metadata.L1TxsData); i++ {
@@ -507,14 +506,11 @@ func (z ZKInputs) ToHashGlobalData() ([]byte, error) {
 		l2TxsData = append(l2TxsData, z.Metadata.L2TxsData[i]...)
 	}
 	if len(l2TxsData) > int(expectedL2TxsDataLen) {
-		return nil, tracerr.Wrap(fmt.Errorf("len(l2TxsData): %d, expected: %d",
-			len(l2TxsData), expectedL2TxsDataLen))
+		return nil, tracerr.Wrap(fmt.Errorf("len(l2TxsData): %d, expected: %d", len(l2TxsData), expectedL2TxsDataLen))
 	}
 
 	b = append(b, l2TxsData...)
-	l2TxsPadding := make([]byte,
-		(int(z.Metadata.MaxTx)-len(z.Metadata.L1TxsDataAvailability)-
-			len(z.Metadata.L2TxsData))*int(l2TxDataLen)/8) //nolint:gomnd
+	l2TxsPadding := make([]byte, (int(z.Metadata.MaxTx)-len(z.Metadata.L1TxsDataAvailability)-len(z.Metadata.L2TxsData))*int(l2TxDataLen)/8) //nolint:gomnd
 	b = append(b, l2TxsPadding...)
 
 	// [NLevels * MAX_TOKENS_FEE bits] feeTxsData
config/config.go — 205 changes
@@ -9,7 +9,6 @@ import (
 	"github.com/BurntSushi/toml"
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/hermeznetwork/hermez-node/common"
-	"github.com/hermeznetwork/hermez-node/priceupdater"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 	"gopkg.in/go-playground/validator.v9"
@@ -36,30 +35,10 @@ type ServerProof struct {
 	URL string `validate:"required"`
 }
 
-// ForgeBatchGasCost is the costs associated to a ForgeBatch transaction, split
-// into different parts to be used in a formula.
-type ForgeBatchGasCost struct {
-	Fixed uint64 `validate:"required"`
-	L1UserTx uint64 `validate:"required"`
-	L1CoordTx uint64 `validate:"required"`
-	L2Tx uint64 `validate:"required"`
-}
-
-// CoordinatorAPI specifies the configuration parameters of the API in mode
-// coordinator
-type CoordinatorAPI struct {
-	// Coordinator enables the coordinator API endpoints
-	Coordinator bool
-}
-
 // Coordinator is the coordinator specific configuration.
 type Coordinator struct {
 	// ForgerAddress is the address under which this coordinator is forging
 	ForgerAddress ethCommon.Address `validate:"required"`
-	// MinimumForgeAddressBalance is the minimum balance the forger address
-	// needs to start the coordinator in wei. Of set to 0, the coordinator
-	// will not check the balance before starting.
-	MinimumForgeAddressBalance *big.Int
 	// FeeAccount is the Hermez account that the coordinator uses to receive fees
 	FeeAccount struct {
 		// Address is the ethereum address of the account to receive fees
@@ -81,7 +60,7 @@ type Coordinator struct {
 	// checking the next block), used to decide when to stop scheduling new
 	// batches (by stopping the pipeline).
 	// For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck
-	// is 5, even though at block 11 we canForge, the pipeline will be
+	// is 5, eventhough at block 11 we canForge, the pipeline will be
 	// stopped if we can't forge at block 15.
 	// This value should be the expected number of blocks it takes between
 	// scheduling a batch and having it mined.
@@ -91,7 +70,7 @@ type Coordinator struct {
 	// from the next block; used to decide when to stop sending batches to
 	// the smart contract.
 	// For example, if we are at block 10 and SendBatchBlocksMarginCheck is
-	// 5, even though at block 11 we canForge, the batch will be discarded
+	// 5, eventhough at block 11 we canForge, the batch will be discarded
 	// if we can't forge at block 15.
 	SendBatchBlocksMarginCheck int64
 	// ProofServerPollInterval is the waiting interval between polling the
@@ -109,27 +88,9 @@ type Coordinator struct {
 	// to 0s, the coordinator will continuously forge even if the batches
 	// are empty.
 	ForgeNoTxsDelay Duration `validate:"-"`
-	// MustForgeAtSlotDeadline enables the coordinator to forge slots if
-	// the empty slots reach the slot deadline.
-	MustForgeAtSlotDeadline bool
-	// IgnoreSlotCommitment disables forcing the coordinator to forge a
-	// slot immediately when the slot is not committed. If set to false,
-	// the coordinator will immediately forge a batch at the beginning of a
-	// slot if it's the slot winner.
-	IgnoreSlotCommitment bool
-	// ForgeOncePerSlotIfTxs will make the coordinator forge at most one
-	// batch per slot, only if there are included txs in that batch, or
-	// pending l1UserTxs in the smart contract. Setting this parameter
-	// overrides `ForgeDelay`, `ForgeNoTxsDelay`, `MustForgeAtSlotDeadline`
-	// and `IgnoreSlotCommitment`.
-	ForgeOncePerSlotIfTxs bool
 	// SyncRetryInterval is the waiting interval between calls to the main
 	// handler of a synced block after an error
 	SyncRetryInterval Duration `validate:"required"`
-	// PurgeByExtDelInterval is the waiting interval between calls
-	// to the PurgeByExternalDelete function of the l2db which deletes
-	// pending txs externally marked by the column `external_delete`
-	PurgeByExtDelInterval Duration `validate:"required"`
 	// L2DB is the DB that holds the pool of L2Txs
 	L2DB struct {
 		// SafetyPeriod is the number of batches after which
@@ -140,19 +101,11 @@ type Coordinator struct {
 		// reached, inserts to the pool will be denied until some of
 		// the pending txs are forged.
 		MaxTxs uint32 `validate:"required"`
-		// MinFeeUSD is the minimum fee in USD that a tx must pay in
-		// order to be accepted into the pool. Txs with lower than
-		// minimum fee will be rejected at the API level.
-		MinFeeUSD float64
-		// MaxFeeUSD is the maximum fee in USD that a tx must pay in
-		// order to be accepted into the pool. Txs with greater than
-		// maximum fee will be rejected at the API level.
-		MaxFeeUSD float64 `validate:"required"`
 		// TTL is the Time To Live for L2Txs in the pool. Once MaxTxs
 		// L2Txs is reached, L2Txs older than TTL will be deleted.
 		TTL Duration `validate:"required"`
 		// PurgeBatchDelay is the delay between batches to purge
-		// outdated transactions. Outdated L2Txs are those that have
+		// outdated transactions. Oudated L2Txs are those that have
 		// been forged or marked as invalid for longer than the
 		// SafetyPeriod and pending L2Txs that have been in the pool
 		// for longer than TTL once there are MaxTxs.
@@ -162,7 +115,7 @@ type Coordinator struct {
 		// nonce.
 		InvalidateBatchDelay int64 `validate:"required"`
 		// PurgeBlockDelay is the delay between blocks to purge
-		// outdated transactions. Outdated L2Txs are those that have
+		// outdated transactions. Oudated L2Txs are those that have
 		// been forged or marked as invalid for longer than the
 		// SafetyPeriod and pending L2Txs that have been in the pool
 		// for longer than TTL once there are MaxTxs.
@@ -194,7 +147,7 @@ type Coordinator struct {
 		MaxGasPrice *big.Int `validate:"required"`
 		// GasPriceIncPerc is the percentage increase of gas price set
 		// in an ethereum transaction from the suggested gas price by
-		// the ethereum node
+		// the ehtereum node
 		GasPriceIncPerc int64
 		// CheckLoopInterval is the waiting interval between receipt
 		// checks of ethereum transactions in the TxManager
@@ -219,11 +172,11 @@ type Coordinator struct {
 		// Password used to decrypt the keys in the keystore
 		Password string `validate:"required"`
 	} `validate:"required"`
-	// ForgeBatchGasCost contains the cost of each action in the
-	// ForgeBatch transaction.
-	ForgeBatchGasCost ForgeBatchGasCost `validate:"required"`
 	} `validate:"required"`
-	API CoordinatorAPI `validate:"required"`
+	API struct {
+		// Coordinator enables the coordinator API endpoints
+		Coordinator bool
+	} `validate:"required"`
 	Debug struct {
 		// BatchPath if set, specifies the path where batchInfo is stored
 		// in JSON in every step/update of the pipeline
@@ -238,58 +191,15 @@ type Coordinator struct {
 	}
 }
 
-// PostgreSQL is the postgreSQL configuration parameters. It's possible to use
-// diferentiated SQL connections for read/write. If the read configuration is
-// not provided, the write one it's going to be used for both reads and writes
-type PostgreSQL struct {
-	// Port of the PostgreSQL write server
-	PortWrite int `validate:"required"`
-	// Host of the PostgreSQL write server
-	HostWrite string `validate:"required"`
-	// User of the PostgreSQL write server
-	UserWrite string `validate:"required"`
-	// Password of the PostgreSQL write server
-	PasswordWrite string `validate:"required"`
-	// Name of the PostgreSQL write server database
-	NameWrite string `validate:"required"`
-	// Port of the PostgreSQL read server
-	PortRead int
-	// Host of the PostgreSQL read server
-	HostRead string
-	// User of the PostgreSQL read server
-	UserRead string
-	// Password of the PostgreSQL read server
-	PasswordRead string
-	// Name of the PostgreSQL read server database
-	NameRead string
-}
-
-// NodeDebug specifies debug configuration parameters
-type NodeDebug struct {
-	// APIAddress is the address where the debugAPI will listen if
-	// set
-	APIAddress string
-	// MeddlerLogs enables meddler debug mode, where unused columns and struct
-	// fields will be logged
-	MeddlerLogs bool
-	// GinDebugMode sets Gin-Gonic (the web framework) to run in
-	// debug mode
-	GinDebugMode bool
-}
-
 // Node is the hermez node configuration.
 type Node struct {
 	PriceUpdater struct {
 		// Interval between price updater calls
-		Interval Duration `validate:"required"`
-		// URLBitfinexV2 is the URL of bitfinex V2 API
-		URLBitfinexV2 string `validate:"required"`
-		// URLCoinGeckoV3 is the URL of coingecko V3 API
-		URLCoinGeckoV3 string `validate:"required"`
-		// DefaultUpdateMethod to get token prices
-		DefaultUpdateMethod priceupdater.UpdateMethodType `validate:"required"`
-		// TokensConfig to specify how each token get it's price updated
-		TokensConfig []priceupdater.TokenConfig
+		Interval Duration `valudate:"required"`
+		// URL of the token prices provider
+		URL string `valudate:"required"`
+		// Type of the API of the token prices provider
+		Type string `valudate:"required"`
 	} `validate:"required"`
 	StateDB struct {
 		// Path where the synchronizer StateDB is stored
@@ -297,8 +207,19 @@ type Node struct {
 		// Keep is the number of checkpoints to keep
 		Keep int `validate:"required"`
 	} `validate:"required"`
-	PostgreSQL PostgreSQL `validate:"required"`
+	PostgreSQL struct {
+		// Port of the PostgreSQL server
+		Port int `validate:"required"`
+		// Host of the PostgreSQL server
+		Host string `validate:"required"`
+		// User of the PostgreSQL server
+		User string `validate:"required"`
+		// Password of the PostgreSQL server
+		Password string `validate:"required"`
+		// Name of the PostgreSQL server database
+		Name string `validate:"required"`
+	} `validate:"required"`
 	Web3 struct {
 		// URL is the URL of the web3 ethereum-node RPC server
 		URL string `validate:"required"`
 	} `validate:"required"`
@@ -328,7 +249,6 @@ type Node struct {
 		// TokenHEZ address
 		TokenHEZName string `validate:"required"`
 	} `validate:"required"`
-	// API specifies the configuration parameters of the API
 	API struct {
 		// Address where the API will listen if set
 		Address string
@@ -346,47 +266,15 @@ type Node struct {
 		// can wait to stablish a SQL connection
 		SQLConnectionTimeout Duration
 	} `validate:"required"`
-	Debug NodeDebug `validate:"required"`
-	Coordinator Coordinator `validate:"-"`
-}
-
-// APIServer is the api server configuration parameters
-type APIServer struct {
-	// NodeAPI specifies the configuration parameters of the API
-	API struct {
-		// Address where the API will listen if set
-		Address string `validate:"required"`
-		// Explorer enables the Explorer API endpoints
-		Explorer bool
-		// Maximum concurrent connections allowed between API and SQL
-		MaxSQLConnections int `validate:"required"`
-		// SQLConnectionTimeout is the maximum amount of time that an API request
-		// can wait to stablish a SQL connection
-		SQLConnectionTimeout Duration
-	} `validate:"required"`
-	PostgreSQL PostgreSQL `validate:"required"`
-	Coordinator struct {
-		API struct {
-			// Coordinator enables the coordinator API endpoints
-			Coordinator bool
-		} `validate:"required"`
-		L2DB struct {
-			// MaxTxs is the maximum number of pending L2Txs that can be
-			// stored in the pool. Once this number of pending L2Txs is
-			// reached, inserts to the pool will be denied until some of
-			// the pending txs are forged.
-			MaxTxs uint32 `validate:"required"`
-			// MinFeeUSD is the minimum fee in USD that a tx must pay in
-			// order to be accepted into the pool. Txs with lower than
-			// minimum fee will be rejected at the API level.
-			MinFeeUSD float64
-			// MaxFeeUSD is the maximum fee in USD that a tx must pay in
-			// order to be accepted into the pool. Txs with greater than
-			// maximum fee will be rejected at the API level.
-			MaxFeeUSD float64 `validate:"required"`
-		} `validate:"required"`
-	}
-	Debug NodeDebug `validate:"required"`
+	Debug struct {
+		// APIAddress is the address where the debugAPI will listen if
+		// set
+		APIAddress string
+		// MeddlerLogs enables meddler debug mode, where unused columns and struct
+		// fields will be logged
+		MeddlerLogs bool
+	}
+	Coordinator Coordinator `validate:"-"`
 }
 
 // Load loads a generic config.
@@ -402,8 +290,8 @@ func Load(path string, cfg interface{}) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// LoadNode loads the Node configuration from path.
|
// LoadCoordinator loads the Coordinator configuration from path.
|
||||||
func LoadNode(path string, coordinator bool) (*Node, error) {
|
func LoadCoordinator(path string) (*Node, error) {
|
||||||
var cfg Node
|
var cfg Node
|
||||||
if err := Load(path, &cfg); err != nil {
|
if err := Load(path, &cfg); err != nil {
|
||||||
return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
|
return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
|
||||||
@@ -412,28 +300,21 @@ func LoadNode(path string, coordinator bool) (*Node, error) {
 	if err := validate.Struct(cfg); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
 	}
-	if coordinator {
-		if err := validate.Struct(cfg.Coordinator); err != nil {
-			return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
-		}
-	}
+	if err := validate.Struct(cfg.Coordinator); err != nil {
+		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
+	}
 	return &cfg, nil
 }
 
-// LoadAPIServer loads the APIServer configuration from path.
-func LoadAPIServer(path string, coordinator bool) (*APIServer, error) {
-	var cfg APIServer
+// LoadNode loads the Node configuration from path.
+func LoadNode(path string) (*Node, error) {
+	var cfg Node
 	if err := Load(path, &cfg); err != nil {
-		return nil, tracerr.Wrap(fmt.Errorf("error loading apiServer configuration file: %w", err))
+		return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
 	}
 	validate := validator.New()
 	if err := validate.Struct(cfg); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
 	}
-	if coordinator {
-		if err := validate.Struct(cfg.Coordinator); err != nil {
-			return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
-		}
-	}
 	return &cfg, nil
 }
@@ -8,7 +8,6 @@ import (
 	"path"
 	"time"
 
-	"github.com/ethereum/go-ethereum/accounts/abi/bind"
 	"github.com/ethereum/go-ethereum/core/types"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/eth"
@@ -85,15 +84,15 @@ type BatchInfo struct {
 	PublicInputs []*big.Int
 	L1Batch      bool
 	VerifierIdx  uint8
-	L1UserTxs             []common.L1Tx
+	L1UserTxsExtra        []common.L1Tx
 	L1CoordTxs            []common.L1Tx
 	L1CoordinatorTxsAuths [][]byte
 	L2Txs                 []common.L2Tx
 	CoordIdxs             []common.Idx
 	ForgeBatchArgs        *eth.RollupForgeBatchArgs
-	Auth *bind.TransactOpts `json:"-"`
+	// FeesInfo
 	EthTx    *types.Transaction
 	EthTxErr error
 	// SendTimestamp the time of batch sent to ethereum
 	SendTimestamp time.Time
 	Receipt       *types.Receipt
@@ -11,7 +11,6 @@ import (
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/hermeznetwork/hermez-node/batchbuilder"
 	"github.com/hermeznetwork/hermez-node/common"
-	"github.com/hermeznetwork/hermez-node/config"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/hermeznetwork/hermez-node/db/l2db"
 	"github.com/hermeznetwork/hermez-node/eth"
@@ -24,8 +23,9 @@ import (
 )
 
 var (
 	errLastL1BatchNotSynced = fmt.Errorf("last L1Batch not synced yet")
-	errSkipBatchByPolicy    = fmt.Errorf("skip batch by policy")
+	errForgeNoTxsBeforeDelay = fmt.Errorf("no txs to forge and we haven't reached the forge no txs delay")
+	errForgeBeforeDelay      = fmt.Errorf("we haven't reached the forge delay")
 )
 
 const (
@@ -52,7 +52,7 @@ type Config struct {
 	// checking the next block), used to decide when to stop scheduling new
 	// batches (by stopping the pipeline).
 	// For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck
-	// is 5, even though at block 11 we canForge, the pipeline will be
+	// is 5, eventhough at block 11 we canForge, the pipeline will be
 	// stopped if we can't forge at block 15.
 	// This value should be the expected number of blocks it takes between
 	// scheduling a batch and having it mined.
@@ -62,7 +62,7 @@ type Config struct {
 	// from the next block; used to decide when to stop sending batches to
 	// the smart contract.
 	// For example, if we are at block 10 and SendBatchBlocksMarginCheck is
-	// 5, even though at block 11 we canForge, the batch will be discarded
+	// 5, eventhough at block 11 we canForge, the batch will be discarded
 	// if we can't forge at block 15.
 	// This value should be the expected number of blocks it takes between
 	// sending a batch and having it mined.
@@ -82,27 +82,9 @@ type Config struct {
 	// to 0s, the coordinator will continuously forge even if the batches
 	// are empty.
 	ForgeNoTxsDelay time.Duration
-	// MustForgeAtSlotDeadline enables the coordinator to forge slots if
-	// the empty slots reach the slot deadline.
-	MustForgeAtSlotDeadline bool
-	// IgnoreSlotCommitment disables forcing the coordinator to forge a
-	// slot immediately when the slot is not committed. If set to false,
-	// the coordinator will immediately forge a batch at the beginning of
-	// a slot if it's the slot winner.
-	IgnoreSlotCommitment bool
-	// ForgeOncePerSlotIfTxs will make the coordinator forge at most one
-	// batch per slot, only if there are included txs in that batch, or
-	// pending l1UserTxs in the smart contract. Setting this parameter
-	// overrides `ForgeDelay`, `ForgeNoTxsDelay`, `MustForgeAtSlotDeadline`
-	// and `IgnoreSlotCommitment`.
-	ForgeOncePerSlotIfTxs bool
 	// SyncRetryInterval is the waiting interval between calls to the main
 	// handler of a synced block after an error
 	SyncRetryInterval time.Duration
-	// PurgeByExtDelInterval is the waiting interval between calls
-	// to the PurgeByExternalDelete function of the l2db which deletes
-	// pending txs externally marked by the column `external_delete`
-	PurgeByExtDelInterval time.Duration
 	// EthClientAttemptsDelay is delay between attempts do do an eth client
 	// RPC call
 	EthClientAttemptsDelay time.Duration
@@ -129,10 +111,7 @@ type Config struct {
 	Purger PurgerCfg
 	// VerifierIdx is the index of the verifier contract registered in the
 	// smart contract
 	VerifierIdx uint8
-	// ForgeBatchGasCost contains the cost of each action in the
-	// ForgeBatch transaction.
-	ForgeBatchGasCost config.ForgeBatchGasCost
 	TxProcessorConfig txprocessor.Config
 }
 
@@ -157,8 +136,8 @@ type Coordinator struct {
 	pipelineNum       int       // Pipeline sequential number. The first pipeline is 1
 	pipelineFromBatch fromBatch // batch from which we started the pipeline
 	provers           []prover.Client
-	consts            common.SCConsts
-	vars              common.SCVariables
+	consts            synchronizer.SCConsts
+	vars              synchronizer.SCVariables
 	stats             synchronizer.Stats
 	started           bool
 
@@ -174,15 +153,6 @@ type Coordinator struct {
 	wg     sync.WaitGroup
 	cancel context.CancelFunc
 
-	// mutexL2DBUpdateDelete protects updates to the L2DB so that
-	// these two processes always happen exclusively:
-	// - Pipeline taking pending txs, running through the TxProcessor and
-	//   marking selected txs as forging
-	// - Coordinator deleting pending txs that have been marked with
-	//   `external_delete`.
-	// Without this mutex, the coordinator could delete a pending txs that
-	// has just been selected by the TxProcessor in the pipeline.
-	mutexL2DBUpdateDelete sync.Mutex
 	pipeline *Pipeline
 
 	lastNonFailedBatchNum common.BatchNum
 
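The `mutexL2DBUpdateDelete` comment on the `-` side of the hunk above describes a classic race: one goroutine selects pending txs for forging while another purges txs marked `external_delete`. A stdlib-only sketch of how a single mutex serializes the two paths (the `pool` type and its methods are illustrative stand-ins, not the real L2DB API):

```go
package main

import (
	"fmt"
	"sync"
)

// pool stands in for the L2DB pending-tx table; mu plays the role of
// mutexL2DBUpdateDelete, making "select for forging" and "purge
// externally-deleted" mutually exclusive.
type pool struct {
	mu      sync.Mutex
	pending map[int]bool // txID -> marked for external delete
	forged  []int
}

// selectForForging picks every pending tx not marked for deletion.
func (p *pool) selectForForging() {
	p.mu.Lock()
	defer p.mu.Unlock()
	for id, deleted := range p.pending {
		if !deleted {
			p.forged = append(p.forged, id)
		}
	}
}

// purgeExternallyDeleted drops txs marked via `external_delete`.
func (p *pool) purgeExternallyDeleted() {
	p.mu.Lock()
	defer p.mu.Unlock()
	for id, deleted := range p.pending {
		if deleted {
			delete(p.pending, id)
		}
	}
}

func main() {
	p := &pool{pending: map[int]bool{1: false, 2: true}}
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); p.selectForForging() }()
	go func() { defer wg.Done(); p.purgeExternallyDeleted() }()
	wg.Wait()
	// Whichever goroutine runs first, only the non-deleted tx is forged.
	fmt.Println(len(p.forged))
}
```

Without the lock, the purge could delete a tx between the pipeline's read and its "mark as forging" write, which is exactly the hazard the removed comment warns about.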
@@ -198,8 +168,8 @@ func NewCoordinator(cfg Config,
 	batchBuilder *batchbuilder.BatchBuilder,
 	serverProofs []prover.Client,
 	ethClient eth.ClientInterface,
-	scConsts *common.SCConsts,
-	initSCVars *common.SCVariables,
+	scConsts *synchronizer.SCConsts,
+	initSCVars *synchronizer.SCVariables,
 ) (*Coordinator, error) {
 	// nolint reason: hardcoded `1.0`, by design the percentage can't be over 100%
 	if cfg.L1BatchTimeoutPerc >= 1.0 { //nolint:gomnd
@@ -278,8 +248,7 @@ func (c *Coordinator) BatchBuilder() *batchbuilder.BatchBuilder {
 func (c *Coordinator) newPipeline(ctx context.Context) (*Pipeline, error) {
 	c.pipelineNum++
 	return NewPipeline(ctx, c.cfg, c.pipelineNum, c.historyDB, c.l2DB, c.txSelector,
-		c.batchBuilder, &c.mutexL2DBUpdateDelete, c.purger, c, c.txManager,
-		c.provers, &c.consts)
+		c.batchBuilder, c.purger, c, c.txManager, c.provers, &c.consts)
 }
 
 // MsgSyncBlock indicates an update to the Synchronizer stats
@@ -288,13 +257,13 @@ type MsgSyncBlock struct {
 	Batches []common.BatchData
 	// Vars contains each Smart Contract variables if they are updated, or
 	// nil if they haven't changed.
-	Vars common.SCVariablesPtr
+	Vars synchronizer.SCVariablesPtr
 }
 
 // MsgSyncReorg indicates a reorg
 type MsgSyncReorg struct {
 	Stats synchronizer.Stats
-	Vars  common.SCVariablesPtr
+	Vars  synchronizer.SCVariablesPtr
 }
 
 // MsgStopPipeline indicates a signal to reset the pipeline
@@ -313,7 +282,7 @@ func (c *Coordinator) SendMsg(ctx context.Context, msg interface{}) {
 	}
 }
 
-func updateSCVars(vars *common.SCVariables, update common.SCVariablesPtr) {
+func updateSCVars(vars *synchronizer.SCVariables, update synchronizer.SCVariablesPtr) {
 	if update.Rollup != nil {
 		vars.Rollup = *update.Rollup
 	}
@@ -325,13 +294,12 @@ func updateSCVars(vars *common.SCVariables, update common.SCVariablesPtr) {
 	}
 }
 
-func (c *Coordinator) syncSCVars(vars common.SCVariablesPtr) {
+func (c *Coordinator) syncSCVars(vars synchronizer.SCVariablesPtr) {
 	updateSCVars(&c.vars, vars)
 }
 
 func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.AuctionVariables,
-	currentSlot *common.Slot, nextSlot *common.Slot, addr ethCommon.Address, blockNum int64,
-	mustForgeAtDeadline bool) bool {
+	currentSlot *common.Slot, nextSlot *common.Slot, addr ethCommon.Address, blockNum int64) bool {
 	if blockNum < auctionConstants.GenesisBlockNum {
 		log.Infow("canForge: requested blockNum is < genesis", "blockNum", blockNum,
 			"genesis", auctionConstants.GenesisBlockNum)
@@ -356,7 +324,7 @@ func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.Auc
 			"block", blockNum)
 		anyoneForge = true
 	}
-	if slot.Forger == addr || (anyoneForge && mustForgeAtDeadline) {
+	if slot.Forger == addr || anyoneForge {
 		return true
 	}
 	log.Debugw("canForge: can't forge", "slot.Forger", slot.Forger)
@@ -366,14 +334,14 @@ func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.Auc
 func (c *Coordinator) canForgeAt(blockNum int64) bool {
 	return canForge(&c.consts.Auction, &c.vars.Auction,
 		&c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot,
-		c.cfg.ForgerAddress, blockNum, c.cfg.MustForgeAtSlotDeadline)
+		c.cfg.ForgerAddress, blockNum)
 }
 
 func (c *Coordinator) canForge() bool {
 	blockNum := c.stats.Eth.LastBlock.Num + 1
 	return canForge(&c.consts.Auction, &c.vars.Auction,
 		&c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot,
-		c.cfg.ForgerAddress, blockNum, c.cfg.MustForgeAtSlotDeadline)
+		c.cfg.ForgerAddress, blockNum)
 }
 
 func (c *Coordinator) syncStats(ctx context.Context, stats *synchronizer.Stats) error {
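The `-` side of the hunk above gates open forging on a `mustForgeAtDeadline` flag: the node forges when it is the slot winner, or when anyone may forge past the deadline *and* the operator opted in. A stripped-down, stdlib-only sketch of that boolean (the `slot` type here is a simplified stand-in, not the real `common.Slot`):

```go
package main

import "fmt"

// slot is a simplified stand-in for common.Slot.
type slot struct {
	forger       string
	pastDeadline bool // true once the slot winner failed to forge in time
}

// canForge mirrors the decision on the newer side of the diff:
// our address wins, or anyone may forge past the deadline and the
// node opted in via mustForgeAtDeadline.
func canForge(s slot, addr string, mustForgeAtDeadline bool) bool {
	anyoneForge := s.pastDeadline
	return s.forger == addr || (anyoneForge && mustForgeAtDeadline)
}

func main() {
	s := slot{forger: "0xwinner", pastDeadline: true}
	fmt.Println(canForge(s, "0xother", true))  // deadline passed, opted in
	fmt.Println(canForge(s, "0xother", false)) // opted out: only the winner forges
}
```

Dropping the flag, as the `+` side does, collapses this back to `slot.Forger == addr || anyoneForge`, so a non-winning node always jumps in once the deadline passes.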
@@ -401,23 +369,11 @@ func (c *Coordinator) syncStats(ctx context.Context, stats *synchronizer.Stats)
 			fromBatch.ForgerAddr = c.cfg.ForgerAddress
 			fromBatch.StateRoot = big.NewInt(0)
 		}
-		// Before starting the pipeline make sure we reset any
-		// l2tx from the pool that was forged in a batch that
-		// didn't end up being mined.  We are already doing
-		// this in handleStopPipeline, but we do it again as a
-		// failsafe in case the last synced batchnum is
-		// different than in the previous call to l2DB.Reorg,
-		// or in case the node was restarted when there was a
-		// started batch that included l2txs but was not mined.
-		if err := c.l2DB.Reorg(fromBatch.BatchNum); err != nil {
-			return tracerr.Wrap(err)
-		}
 		var err error
 		if c.pipeline, err = c.newPipeline(ctx); err != nil {
 			return tracerr.Wrap(err)
 		}
 		c.pipelineFromBatch = fromBatch
-		// Start the pipeline
 		if err := c.pipeline.Start(fromBatch.BatchNum, stats, &c.vars); err != nil {
 			c.pipeline = nil
 			return tracerr.Wrap(err)
@@ -487,8 +443,7 @@ func (c *Coordinator) handleReorg(ctx context.Context, msg *MsgSyncReorg) error
 // handleStopPipeline handles stopping the pipeline. If failedBatchNum is 0,
 // the next pipeline will start from the last state of the synchronizer,
 // otherwise, it will state from failedBatchNum-1.
-func (c *Coordinator) handleStopPipeline(ctx context.Context, reason string,
-	failedBatchNum common.BatchNum) error {
+func (c *Coordinator) handleStopPipeline(ctx context.Context, reason string, failedBatchNum common.BatchNum) error {
 	batchNum := c.stats.Sync.LastBatch.BatchNum
 	if failedBatchNum != 0 {
 		batchNum = failedBatchNum - 1
@@ -539,7 +494,7 @@ func (c *Coordinator) Start() {
 
 	c.wg.Add(1)
 	go func() {
-		timer := time.NewTimer(longWaitDuration)
+		waitCh := time.After(longWaitDuration)
 		for {
 			select {
 			case <-c.ctx.Done():
@@ -551,45 +506,24 @@ func (c *Coordinator) Start() {
 					continue
 				} else if err != nil {
 					log.Errorw("Coordinator.handleMsg", "err", err)
-					if !timer.Stop() {
-						<-timer.C
-					}
-					timer.Reset(c.cfg.SyncRetryInterval)
+					waitCh = time.After(c.cfg.SyncRetryInterval)
 					continue
 				}
-			case <-timer.C:
-				timer.Reset(longWaitDuration)
+				waitCh = time.After(longWaitDuration)
+			case <-waitCh:
 				if !c.stats.Synced() {
+					waitCh = time.After(longWaitDuration)
 					continue
 				}
 				if err := c.syncStats(c.ctx, &c.stats); c.ctx.Err() != nil {
+					waitCh = time.After(longWaitDuration)
 					continue
 				} else if err != nil {
 					log.Errorw("Coordinator.syncStats", "err", err)
-					if !timer.Stop() {
-						<-timer.C
-					}
-					timer.Reset(c.cfg.SyncRetryInterval)
+					waitCh = time.After(c.cfg.SyncRetryInterval)
 					continue
 				}
 				}
-			}
-		}
-	}()
-
-	c.wg.Add(1)
-	go func() {
-		for {
-			select {
-			case <-c.ctx.Done():
-				log.Info("Coordinator L2DB.PurgeByExternalDelete loop done")
-				c.wg.Done()
-				return
-			case <-time.After(c.cfg.PurgeByExtDelInterval):
-				c.mutexL2DBUpdateDelete.Lock()
-				if err := c.l2DB.PurgeByExternalDelete(); err != nil {
-					log.Errorw("L2DB.PurgeByExternalDelete", "err", err)
-				}
-				c.mutexL2DBUpdateDelete.Unlock()
+				waitCh = time.After(longWaitDuration)
 			}
 		}
 	}()
@@ -105,8 +105,8 @@ func newTestModules(t *testing.T) modules {
 	db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
 	test.WipeDB(db)
-	l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
-	historyDB := historydb.NewHistoryDB(db, db, nil)
+	l2DB := l2db.NewL2DB(db, 10, 100, 24*time.Hour, nil)
+	historyDB := historydb.NewHistoryDB(db, nil)
 
 	txSelDBPath, err = ioutil.TempDir("", "tmpTxSelDB")
 	require.NoError(t, err)
@@ -126,8 +126,7 @@ func newTestModules(t *testing.T) modules {
 	batchBuilderDBPath, err = ioutil.TempDir("", "tmpBatchBuilderDB")
 	require.NoError(t, err)
 	deleteme = append(deleteme, batchBuilderDBPath)
-	batchBuilder, err := batchbuilder.NewBatchBuilder(batchBuilderDBPath, syncStateDB, 0,
-		uint64(nLevels))
+	batchBuilder, err := batchbuilder.NewBatchBuilder(batchBuilderDBPath, syncStateDB, 0, uint64(nLevels))
 	assert.NoError(t, err)
 
 	return modules{
@@ -159,15 +158,14 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
 	deleteme = append(deleteme, debugBatchPath)
 
 	conf := Config{
 		ForgerAddress:          forgerAddr,
 		ConfirmBlocks:          5,
 		L1BatchTimeoutPerc:     0.5,
 		EthClientAttempts:      5,
 		SyncRetryInterval:      400 * time.Microsecond,
 		EthClientAttemptsDelay: 100 * time.Millisecond,
 		TxManagerCheckInterval: 300 * time.Millisecond,
 		DebugBatchPath:         debugBatchPath,
-		MustForgeAtSlotDeadline: true,
 		Purger: PurgerCfg{
 			PurgeBatchDelay: 10,
 			PurgeBlockDelay: 10,
@@ -189,12 +187,12 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
 		&prover.MockClient{Delay: 400 * time.Millisecond},
 	}
 
-	scConsts := &common.SCConsts{
+	scConsts := &synchronizer.SCConsts{
 		Rollup:   *ethClientSetup.RollupConstants,
 		Auction:  *ethClientSetup.AuctionConstants,
 		WDelayer: *ethClientSetup.WDelayerConstants,
 	}
-	initSCVars := &common.SCVariables{
+	initSCVars := &synchronizer.SCVariables{
 		Rollup:   *ethClientSetup.RollupVariables,
 		Auction:  *ethClientSetup.AuctionVariables,
 		WDelayer: *ethClientSetup.WDelayerVariables,
@@ -207,7 +205,7 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
 
 func newTestSynchronizer(t *testing.T, ethClient *test.Client, ethClientSetup *test.ClientSetup,
 	modules modules) *synchronizer.Synchronizer {
-	sync, err := synchronizer.NewSynchronizer(ethClient, modules.historyDB, modules.l2DB, modules.stateDB,
+	sync, err := synchronizer.NewSynchronizer(ethClient, modules.historyDB, modules.stateDB,
 		synchronizer.Config{
 			StatsRefreshPeriod: 0 * time.Second,
 		})
@@ -392,10 +390,6 @@ func TestCoordCanForge(t *testing.T) {
 	assert.Equal(t, true, coord.canForge())
 	assert.Equal(t, true, bootCoord.canForge())
 
-	// Anyone can forge but the node MustForgeAtSlotDeadline as set as false
-	coord.cfg.MustForgeAtSlotDeadline = false
-	assert.Equal(t, false, coord.canForge())
-
 	// Slot 3. coordinator bid, so the winner is the coordinator
 	stats.Eth.LastBlock.Num = ethClientSetup.AuctionConstants.GenesisBlockNum +
 		3*int64(ethClientSetup.AuctionConstants.BlocksPerSlot)
@@ -523,7 +517,7 @@ func TestCoordinatorStress(t *testing.T) {
 	wg.Add(1)
 	go func() {
 		for {
-			blockData, _, err := syn.Sync(ctx, nil)
+			blockData, _, err := syn.Sync2(ctx, nil)
 			if ctx.Err() != nil {
 				wg.Done()
 				return
@@ -534,7 +528,7 @@ func TestCoordinatorStress(t *testing.T) {
 			coord.SendMsg(ctx, MsgSyncBlock{
 				Stats:   *stats,
 				Batches: blockData.Rollup.Batches,
-				Vars: common.SCVariablesPtr{
+				Vars: synchronizer.SCVariablesPtr{
 					Rollup:   blockData.Rollup.Vars,
 					Auction:  blockData.Auction.Vars,
 					WDelayer: blockData.WDelayer.Vars,
@@ -22,7 +22,7 @@ import (
 
 type statsVars struct {
 	Stats synchronizer.Stats
-	Vars  common.SCVariablesPtr
+	Vars  synchronizer.SCVariablesPtr
 }
 
 type state struct {
@@ -36,7 +36,7 @@ type state struct {
 type Pipeline struct {
 	num int
 	cfg Config
-	consts common.SCConsts
+	consts synchronizer.SCConsts
 
 	// state
 	state state
@@ -45,19 +45,18 @@ type Pipeline struct {
 	errAtBatchNum common.BatchNum
 	lastForgeTime time.Time
 
 	proversPool  *ProversPool
 	provers      []prover.Client
 	coord        *Coordinator
 	txManager    *TxManager
 	historyDB    *historydb.HistoryDB
 	l2DB         *l2db.L2DB
 	txSelector   *txselector.TxSelector
 	batchBuilder *batchbuilder.BatchBuilder
-	mutexL2DBUpdateDelete *sync.Mutex
-	purger                *Purger
+	purger       *Purger
 
 	stats synchronizer.Stats
-	vars  common.SCVariables
+	vars  synchronizer.SCVariables
 	statsVarsCh chan statsVars
 
 	ctx context.Context
@@ -85,12 +84,11 @@ func NewPipeline(ctx context.Context,
 	l2DB *l2db.L2DB,
 	txSelector *txselector.TxSelector,
 	batchBuilder *batchbuilder.BatchBuilder,
-	mutexL2DBUpdateDelete *sync.Mutex,
 	purger *Purger,
 	coord *Coordinator,
 	txManager *TxManager,
 	provers []prover.Client,
-	scConsts *common.SCConsts,
+	scConsts *synchronizer.SCConsts,
 ) (*Pipeline, error) {
 	proversPool := NewProversPool(len(provers))
 	proversPoolSize := 0
@@ -106,26 +104,24 @@ func NewPipeline(ctx context.Context,
|
|||||||
return nil, tracerr.Wrap(fmt.Errorf("no provers in the pool"))
|
return nil, tracerr.Wrap(fmt.Errorf("no provers in the pool"))
|
||||||
}
|
}
|
||||||
return &Pipeline{
|
return &Pipeline{
|
||||||
num: num,
|
num: num,
|
||||||
cfg: cfg,
|
cfg: cfg,
|
||||||
historyDB: historyDB,
|
historyDB: historyDB,
|
||||||
l2DB: l2DB,
|
l2DB: l2DB,
|
||||||
txSelector: txSelector,
|
txSelector: txSelector,
|
||||||
batchBuilder: batchBuilder,
|
batchBuilder: batchBuilder,
|
||||||
provers: provers,
|
provers: provers,
|
||||||
proversPool: proversPool,
|
proversPool: proversPool,
|
||||||
mutexL2DBUpdateDelete: mutexL2DBUpdateDelete,
|
purger: purger,
|
||||||
purger: purger,
|
coord: coord,
|
||||||
coord: coord,
|
txManager: txManager,
|
||||||
txManager: txManager,
|
consts: *scConsts,
|
||||||
consts: *scConsts,
|
statsVarsCh: make(chan statsVars, queueLen),
|
||||||
statsVarsCh: make(chan statsVars, queueLen),
|
|
||||||
}, nil
|
}, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// SetSyncStatsVars is a thread safe method to sets the synchronizer Stats
|
// SetSyncStatsVars is a thread safe method to sets the synchronizer Stats
|
||||||
func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats,
|
func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats, vars *synchronizer.SCVariablesPtr) {
|
||||||
vars *common.SCVariablesPtr) {
|
|
||||||
select {
|
select {
|
||||||
case p.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
|
case p.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
|
||||||
case <-ctx.Done():
|
case <-ctx.Done():
|
||||||
@@ -134,7 +130,7 @@ func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Sta
|
|||||||
|
|
||||||
// reset pipeline state
|
// reset pipeline state
|
||||||
func (p *Pipeline) reset(batchNum common.BatchNum,
|
func (p *Pipeline) reset(batchNum common.BatchNum,
|
||||||
stats *synchronizer.Stats, vars *common.SCVariables) error {
|
stats *synchronizer.Stats, vars *synchronizer.SCVariables) error {
|
||||||
p.state = state{
|
p.state = state{
|
||||||
batchNum: batchNum,
|
batchNum: batchNum,
|
||||||
lastForgeL1TxsNum: stats.Sync.LastForgeL1TxsNum,
|
lastForgeL1TxsNum: stats.Sync.LastForgeL1TxsNum,
|
||||||
@@ -195,39 +191,15 @@ func (p *Pipeline) reset(batchNum common.BatchNum,
 	return nil
 }
 
-func (p *Pipeline) syncSCVars(vars common.SCVariablesPtr) {
+func (p *Pipeline) syncSCVars(vars synchronizer.SCVariablesPtr) {
 	updateSCVars(&p.vars, vars)
 }
 
-// handleForgeBatch waits for an available proof server, calls p.forgeBatch to
-// forge the batch and get the zkInputs, and then sends the zkInputs to the
-// selected proof server so that the proof computation begins.
-func (p *Pipeline) handleForgeBatch(ctx context.Context,
-	batchNum common.BatchNum) (batchInfo *BatchInfo, err error) {
-	// 1. Wait for an available serverProof (blocking call)
-	serverProof, err := p.proversPool.Get(ctx)
-	if ctx.Err() != nil {
-		return nil, ctx.Err()
-	} else if err != nil {
-		log.Errorw("proversPool.Get", "err", err)
-		return nil, tracerr.Wrap(err)
-	}
-	defer func() {
-		// If we encounter any error (notice that this function returns
-		// errors to notify that a batch is not forged not only because
-		// of unexpected errors but also due to benign causes), add the
-		// serverProof back to the pool
-		if err != nil {
-			p.proversPool.Add(ctx, serverProof)
-		}
-	}()
-
-	// 2. Forge the batch internally (make a selection of txs and prepare
-	// all the smart contract arguments)
-	var skipReason *string
-	p.mutexL2DBUpdateDelete.Lock()
-	batchInfo, skipReason, err = p.forgeBatch(batchNum)
-	p.mutexL2DBUpdateDelete.Unlock()
+// handleForgeBatch calls p.forgeBatch to forge the batch and get the zkInputs,
+// and then waits for an available proof server and sends the zkInputs to it so
+// that the proof computation begins.
+func (p *Pipeline) handleForgeBatch(ctx context.Context, batchNum common.BatchNum) (*BatchInfo, error) {
+	batchInfo, err := p.forgeBatch(batchNum)
 	if ctx.Err() != nil {
 		return nil, ctx.Err()
 	} else if err != nil {
@@ -235,29 +207,37 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
 		log.Warnw("forgeBatch: scheduled L1Batch too early", "err", err,
 			"lastForgeL1TxsNum", p.state.lastForgeL1TxsNum,
 			"syncLastForgeL1TxsNum", p.stats.Sync.LastForgeL1TxsNum)
+	} else if tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
+		tracerr.Unwrap(err) == errForgeBeforeDelay {
+		// no log
 	} else {
 		log.Errorw("forgeBatch", "err", err)
 	}
-		return nil, tracerr.Wrap(err)
-	} else if skipReason != nil {
-		log.Debugw("skipping batch", "batch", batchNum, "reason", *skipReason)
-		return nil, tracerr.Wrap(errSkipBatchByPolicy)
+		return nil, err
+	}
+	// 6. Wait for an available server proof (blocking call)
+	serverProof, err := p.proversPool.Get(ctx)
+	if ctx.Err() != nil {
+		return nil, ctx.Err()
+	} else if err != nil {
+		log.Errorw("proversPool.Get", "err", err)
+		return nil, err
 	}
 
-	// 3. Send the ZKInputs to the proof server
 	batchInfo.ServerProof = serverProof
 	if err := p.sendServerProof(ctx, batchInfo); ctx.Err() != nil {
 		return nil, ctx.Err()
 	} else if err != nil {
 		log.Errorw("sendServerProof", "err", err)
-		return nil, tracerr.Wrap(err)
+		batchInfo.ServerProof = nil
+		p.proversPool.Add(ctx, serverProof)
+		return nil, err
 	}
 	return batchInfo, nil
 }
 
 // Start the forging pipeline
 func (p *Pipeline) Start(batchNum common.BatchNum,
-	stats *synchronizer.Stats, vars *common.SCVariables) error {
+	stats *synchronizer.Stats, vars *synchronizer.SCVariables) error {
 	if p.started {
 		log.Fatal("Pipeline already started")
 	}
@@ -273,7 +253,7 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
 
 	p.wg.Add(1)
 	go func() {
-		timer := time.NewTimer(zeroDuration)
+		waitCh := time.After(zeroDuration)
 		for {
 			select {
 			case <-p.ctx.Done():
@@ -283,20 +263,23 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
 			case statsVars := <-p.statsVarsCh:
 				p.stats = statsVars.Stats
 				p.syncSCVars(statsVars.Vars)
-			case <-timer.C:
-				timer.Reset(p.cfg.ForgeRetryInterval)
+			case <-waitCh:
 				// Once errAtBatchNum != 0, we stop forging
 				// batches because there's been an error and we
 				// wait for the pipeline to be stopped.
 				if p.getErrAtBatchNum() != 0 {
+					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				}
 				batchNum = p.state.batchNum + 1
 				batchInfo, err := p.handleForgeBatch(p.ctx, batchNum)
 				if p.ctx.Err() != nil {
+					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				} else if tracerr.Unwrap(err) == errLastL1BatchNotSynced ||
-					tracerr.Unwrap(err) == errSkipBatchByPolicy {
+					tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
+					tracerr.Unwrap(err) == errForgeBeforeDelay {
+					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				} else if err != nil {
 					p.setErrAtBatchNum(batchNum)
@@ -305,6 +288,7 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
 						"Pipeline.handleForgBatch: %v", err),
 						FailedBatchNum: batchNum,
 					})
+					waitCh = time.After(p.cfg.ForgeRetryInterval)
 					continue
 				}
 				p.lastForgeTime = time.Now()
@@ -314,10 +298,7 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
 				case batchChSentServerProof <- batchInfo:
 				case <-p.ctx.Done():
 				}
-				if !timer.Stop() {
-					<-timer.C
-				}
-				timer.Reset(zeroDuration)
+				waitCh = time.After(zeroDuration)
 			}
 		}
 	}()
@@ -352,6 +333,7 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
 			}
 			// We are done with this serverProof, add it back to the pool
 			p.proversPool.Add(p.ctx, batchInfo.ServerProof)
+			// batchInfo.ServerProof = nil
 			p.txManager.AddBatch(p.ctx, batchInfo)
 		}
 	}
@@ -389,109 +371,17 @@ func (p *Pipeline) sendServerProof(ctx context.Context, batchInfo *BatchInfo) er
 	return nil
 }
 
-// slotCommitted returns true if the current slot has already been committed
-func (p *Pipeline) slotCommitted() bool {
-	// Synchronizer has synchronized a batch in the current slot (setting
-	// CurrentSlot.ForgerCommitment) or the pipeline has already
-	// internally-forged a batch in the current slot
-	return p.stats.Sync.Auction.CurrentSlot.ForgerCommitment ||
-		p.stats.Sync.Auction.CurrentSlot.SlotNum == p.state.lastSlotForged
-}
-
-// forgePolicySkipPreSelection is called before doing a tx selection in a batch to
-// determine by policy if we should forge the batch or not. Returns true and
-// the reason when the forging of the batch must be skipped.
-func (p *Pipeline) forgePolicySkipPreSelection(now time.Time) (bool, string) {
-	// Check if the slot is not yet fulfilled
-	slotCommitted := p.slotCommitted()
-	if p.cfg.ForgeOncePerSlotIfTxs {
-		if slotCommitted {
-			return true, "cfg.ForgeOncePerSlotIfTxs = true and slot already committed"
-		}
-		return false, ""
-	}
-	// Determine if we must commit the slot
-	if !p.cfg.IgnoreSlotCommitment && !slotCommitted {
-		return false, ""
-	}
-
-	// If we haven't reached the ForgeDelay, skip forging the batch
-	if now.Sub(p.lastForgeTime) < p.cfg.ForgeDelay {
-		return true, "we haven't reached the forge delay"
-	}
-	return false, ""
-}
-
-// forgePolicySkipPostSelection is called after doing a tx selection in a batch to
-// determine by policy if we should forge the batch or not. Returns true and
-// the reason when the forging of the batch must be skipped.
-func (p *Pipeline) forgePolicySkipPostSelection(now time.Time, l1UserTxsExtra, l1CoordTxs []common.L1Tx,
-	poolL2Txs []common.PoolL2Tx, batchInfo *BatchInfo) (bool, string, error) {
-	// Check if the slot is not yet fulfilled
-	slotCommitted := p.slotCommitted()
-
-	pendingTxs := true
-	if len(l1UserTxsExtra) == 0 && len(l1CoordTxs) == 0 && len(poolL2Txs) == 0 {
-		if batchInfo.L1Batch {
-			// Query the number of unforged L1UserTxs
-			// (either in a open queue or in a frozen
-			// not-yet-forged queue).
-			count, err := p.historyDB.GetUnforgedL1UserTxsCount()
-			if err != nil {
-				return false, "", err
-			}
-			// If there are future L1UserTxs, we forge a
-			// batch to advance the queues to be able to
-			// forge the L1UserTxs in the future.
-			// Otherwise, skip.
-			if count == 0 {
-				pendingTxs = false
-			}
-		} else {
-			pendingTxs = false
-		}
-	}
-
-	if p.cfg.ForgeOncePerSlotIfTxs {
-		if slotCommitted {
-			return true, "cfg.ForgeOncePerSlotIfTxs = true and slot already committed",
-				nil
-		}
-		if pendingTxs {
-			return false, "", nil
-		}
-		return true, "cfg.ForgeOncePerSlotIfTxs = true and no pending txs",
-			nil
-	}
-
-	// Determine if we must commit the slot
-	if !p.cfg.IgnoreSlotCommitment && !slotCommitted {
-		return false, "", nil
-	}
-
-	// check if there is no txs to forge, no l1UserTxs in the open queue to
-	// freeze and we haven't reached the ForgeNoTxsDelay
-	if now.Sub(p.lastForgeTime) < p.cfg.ForgeNoTxsDelay {
-		if !pendingTxs {
-			return true, "no txs to forge and we haven't reached the forge no txs delay",
-				nil
-		}
-	}
-	return false, "", nil
-}
-
 // forgeBatch forges the batchNum batch.
-func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
-	skipReason *string, err error) {
+func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo, err error) {
 	// remove transactions from the pool that have been there for too long
 	_, err = p.purger.InvalidateMaybe(p.l2DB, p.txSelector.LocalAccountsDB(),
 		p.stats.Sync.LastBlock.Num, int64(batchNum))
 	if err != nil {
-		return nil, nil, tracerr.Wrap(err)
+		return nil, tracerr.Wrap(err)
 	}
 	_, err = p.purger.PurgeMaybe(p.l2DB, p.stats.Sync.LastBlock.Num, int64(batchNum))
 	if err != nil {
-		return nil, nil, tracerr.Wrap(err)
+		return nil, tracerr.Wrap(err)
 	}
 	// Structure to accumulate data and metadata of the batch
 	now := time.Now()
@@ -499,50 +389,85 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
 	batchInfo.Debug.StartTimestamp = now
 	batchInfo.Debug.StartBlockNum = p.stats.Eth.LastBlock.Num + 1
 
+	selectionCfg := &txselector.SelectionConfig{
+		MaxL1UserTxs:      common.RollupConstMaxL1UserTx,
+		TxProcessorConfig: p.cfg.TxProcessorConfig,
+	}
+
 	var poolL2Txs []common.PoolL2Tx
 	var discardedL2Txs []common.PoolL2Tx
-	var l1UserTxs, l1CoordTxs []common.L1Tx
+	var l1UserTxsExtra, l1CoordTxs []common.L1Tx
 	var auths [][]byte
 	var coordIdxs []common.Idx
 
-	if skip, reason := p.forgePolicySkipPreSelection(now); skip {
-		return nil, &reason, nil
+	// Check if the slot is not yet fulfilled
+	slotCommitted := false
+	if p.stats.Sync.Auction.CurrentSlot.ForgerCommitment ||
+		p.stats.Sync.Auction.CurrentSlot.SlotNum == p.state.lastSlotForged {
+		slotCommitted = true
+	}
+
+	// If we haven't reached the ForgeDelay, skip forging the batch
+	if slotCommitted && now.Sub(p.lastForgeTime) < p.cfg.ForgeDelay {
+		return nil, errForgeBeforeDelay
 	}
 
 	// 1. Decide if we forge L2Tx or L1+L2Tx
 	if p.shouldL1L2Batch(batchInfo) {
 		batchInfo.L1Batch = true
 		if p.state.lastForgeL1TxsNum != p.stats.Sync.LastForgeL1TxsNum {
-			return nil, nil, tracerr.Wrap(errLastL1BatchNotSynced)
+			return nil, tracerr.Wrap(errLastL1BatchNotSynced)
 		}
 		// 2a: L1+L2 txs
-		_l1UserTxs, err := p.historyDB.GetUnforgedL1UserTxs(p.state.lastForgeL1TxsNum + 1)
+		l1UserTxs, err := p.historyDB.GetUnforgedL1UserTxs(p.state.lastForgeL1TxsNum + 1)
 		if err != nil {
-			return nil, nil, tracerr.Wrap(err)
+			return nil, tracerr.Wrap(err)
 		}
-		coordIdxs, auths, l1UserTxs, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
-			p.txSelector.GetL1L2TxSelection(p.cfg.TxProcessorConfig, _l1UserTxs)
+		coordIdxs, auths, l1UserTxsExtra, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
+			p.txSelector.GetL1L2TxSelection(selectionCfg, l1UserTxs)
 		if err != nil {
-			return nil, nil, tracerr.Wrap(err)
+			return nil, tracerr.Wrap(err)
 		}
 	} else {
 		// 2b: only L2 txs
 		coordIdxs, auths, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
-			p.txSelector.GetL2TxSelection(p.cfg.TxProcessorConfig)
+			p.txSelector.GetL2TxSelection(selectionCfg)
 		if err != nil {
-			return nil, nil, tracerr.Wrap(err)
+			return nil, tracerr.Wrap(err)
 		}
-		l1UserTxs = nil
+		l1UserTxsExtra = nil
 	}
 
-	if skip, reason, err := p.forgePolicySkipPostSelection(now,
-		l1UserTxs, l1CoordTxs, poolL2Txs, batchInfo); err != nil {
-		return nil, nil, tracerr.Wrap(err)
-	} else if skip {
-		if err := p.txSelector.Reset(batchInfo.BatchNum-1, false); err != nil {
-			return nil, nil, tracerr.Wrap(err)
+	// If there are no txs to forge, no l1UserTxs in the open queue to
+	// freeze, and we haven't reached the ForgeNoTxsDelay, skip forging the
+	// batch.
+	if slotCommitted && now.Sub(p.lastForgeTime) < p.cfg.ForgeNoTxsDelay {
+		noTxs := false
+		if len(l1UserTxsExtra) == 0 && len(l1CoordTxs) == 0 && len(poolL2Txs) == 0 {
+			if batchInfo.L1Batch {
+				// Query the L1UserTxs in the queue following
+				// the one we are trying to forge.
+				nextL1UserTxs, err := p.historyDB.GetUnforgedL1UserTxs(
+					p.state.lastForgeL1TxsNum + 1)
+				if err != nil {
+					return nil, tracerr.Wrap(err)
+				}
+				// If there are future L1UserTxs, we forge a
+				// batch to advance the queues and forge the
+				// L1UserTxs in the future. Otherwise, skip.
+				if len(nextL1UserTxs) == 0 {
+					noTxs = true
+				}
+			} else {
+				noTxs = true
+			}
+		}
+		if noTxs {
+			if err := p.txSelector.Reset(batchInfo.BatchNum-1, false); err != nil {
+				return nil, tracerr.Wrap(err)
+			}
+			return nil, errForgeNoTxsBeforeDelay
 		}
-		return nil, &reason, tracerr.Wrap(err)
 	}
 
 	if batchInfo.L1Batch {
@@ -551,41 +476,40 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
 	}
 
 	// 3. Save metadata from TxSelector output for BatchNum
-	batchInfo.L1UserTxs = l1UserTxs
+	batchInfo.L1UserTxsExtra = l1UserTxsExtra
 	batchInfo.L1CoordTxs = l1CoordTxs
 	batchInfo.L1CoordinatorTxsAuths = auths
 	batchInfo.CoordIdxs = coordIdxs
 	batchInfo.VerifierIdx = p.cfg.VerifierIdx
 
-	if err := p.l2DB.StartForging(common.TxIDsFromPoolL2Txs(poolL2Txs),
-		batchInfo.BatchNum); err != nil {
-		return nil, nil, tracerr.Wrap(err)
+	if err := p.l2DB.StartForging(common.TxIDsFromPoolL2Txs(poolL2Txs), batchInfo.BatchNum); err != nil {
+		return nil, tracerr.Wrap(err)
 	}
 	if err := p.l2DB.UpdateTxsInfo(discardedL2Txs); err != nil {
-		return nil, nil, tracerr.Wrap(err)
+		return nil, tracerr.Wrap(err)
 	}
 
-	// Invalidate transactions that become invalid because of
+	// Invalidate transactions that become invalid beause of
 	// the poolL2Txs selected. Will mark as invalid the txs that have a
 	// (fromIdx, nonce) which already appears in the selected txs (includes
 	// all the nonces smaller than the current one)
 	err = p.l2DB.InvalidateOldNonces(idxsNonceFromPoolL2Txs(poolL2Txs), batchInfo.BatchNum)
 	if err != nil {
-		return nil, nil, tracerr.Wrap(err)
+		return nil, tracerr.Wrap(err)
 	}
 
 	// 4. Call BatchBuilder with TxSelector output
 	configBatch := &batchbuilder.ConfigBatch{
 		TxProcessorConfig: p.cfg.TxProcessorConfig,
 	}
-	zkInputs, err := p.batchBuilder.BuildBatch(coordIdxs, configBatch, l1UserTxs,
+	zkInputs, err := p.batchBuilder.BuildBatch(coordIdxs, configBatch, l1UserTxsExtra,
 		l1CoordTxs, poolL2Txs)
 	if err != nil {
-		return nil, nil, tracerr.Wrap(err)
+		return nil, tracerr.Wrap(err)
 	}
 	l2Txs, err := common.PoolL2TxsToL2Txs(poolL2Txs) // NOTE: This is a big uggly, find a better way
 	if err != nil {
-		return nil, nil, tracerr.Wrap(err)
+		return nil, tracerr.Wrap(err)
 	}
 	batchInfo.L2Txs = l2Txs
 
@@ -597,13 +521,12 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
 
 	p.state.lastSlotForged = p.stats.Sync.Auction.CurrentSlot.SlotNum
 
-	return batchInfo, nil, nil
+	return batchInfo, nil
 }
 
 // waitServerProof gets the generated zkProof & sends it to the SmartContract
 func (p *Pipeline) waitServerProof(ctx context.Context, batchInfo *BatchInfo) error {
-	proof, pubInputs, err := batchInfo.ServerProof.GetProof(ctx) // blocking call,
-	// until not resolved don't continue. Returns when the proof server has calculated the proof
+	proof, pubInputs, err := batchInfo.ServerProof.GetProof(ctx) // blocking call, until not resolved don't continue. Returns when the proof server has calculated the proof
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
@@ -642,7 +565,7 @@ func prepareForgeBatchArgs(batchInfo *BatchInfo) *eth.RollupForgeBatchArgs {
 		NewLastIdx:            int64(zki.Metadata.NewLastIdxRaw),
 		NewStRoot:             zki.Metadata.NewStateRootRaw.BigInt(),
 		NewExitRoot:           zki.Metadata.NewExitRootRaw.BigInt(),
-		L1UserTxs:             batchInfo.L1UserTxs,
+		L1UserTxs:             batchInfo.L1UserTxsExtra,
 		L1CoordinatorTxs:      batchInfo.L1CoordTxs,
 		L1CoordinatorTxsAuths: batchInfo.L1CoordinatorTxsAuths,
 		L2TxsData:             batchInfo.L2Txs,
@@ -140,7 +140,7 @@ func preloadSync(t *testing.T, ethClient *test.Client, sync *synchronizer.Synchr
 	blocks[0].Rollup.Batches[0].Batch.StateRoot =
 		newBigInt("0")
 	blocks[0].Rollup.Batches[1].Batch.StateRoot =
-		newBigInt("6860514559199319426609623120853503165917774887908204288119245630904770452486")
+		newBigInt("10941365282189107056349764238909072001483688090878331371699519307087372995595")
 
 	ethAddTokens(blocks, ethClient)
 	err = ethClient.CtlAddBlocks(blocks)
@@ -148,7 +148,7 @@ func preloadSync(t *testing.T, ethClient *test.Client, sync *synchronizer.Synchr
 
 	ctx := context.Background()
 	for {
-		syncBlock, discards, err := sync.Sync(ctx, nil)
+		syncBlock, discards, err := sync.Sync2(ctx, nil)
 		require.NoError(t, err)
 		require.Nil(t, discards)
 		if syncBlock == nil {
@@ -206,7 +206,11 @@ PoolTransfer(0) User2-User3: 300 (126)
 		require.NoError(t, err)
 	}
 
-	err = pipeline.reset(batchNum, syncStats, syncSCVars)
+	err = pipeline.reset(batchNum, syncStats, &synchronizer.SCVariables{
+		Rollup:   *syncSCVars.Rollup,
+		Auction:  *syncSCVars.Auction,
+		WDelayer: *syncSCVars.WDelayer,
+	})
 	require.NoError(t, err)
 	// Sanity check
 	sdbAccounts, err := pipeline.txSelector.LocalAccountsDB().TestGetAccounts()
@@ -224,12 +228,12 @@ PoolTransfer(0) User2-User3: 300 (126)
 
 	batchNum++
 
-	batchInfo, _, err := pipeline.forgeBatch(batchNum)
+	batchInfo, err := pipeline.forgeBatch(batchNum)
 	require.NoError(t, err)
 	assert.Equal(t, 3, len(batchInfo.L2Txs))
 
 	batchNum++
-	batchInfo, _, err = pipeline.forgeBatch(batchNum)
+	batchInfo, err = pipeline.forgeBatch(batchNum)
 	require.NoError(t, err)
 	assert.Equal(t, 0, len(batchInfo.L2Txs))
 }
@@ -14,7 +14,7 @@ import (
 // PurgerCfg is the purger configuration
 type PurgerCfg struct {
 	// PurgeBatchDelay is the delay between batches to purge outdated
-	// transactions. Outdated L2Txs are those that have been forged or
+	// transactions. Oudated L2Txs are those that have been forged or
 	// marked as invalid for longer than the SafetyPeriod and pending L2Txs
 	// that have been in the pool for longer than TTL once there are
 	// MaxTxs.
@@ -23,7 +23,7 @@ type PurgerCfg struct {
 	// transactions due to nonce lower than the account nonce.
 	InvalidateBatchDelay int64
 	// PurgeBlockDelay is the delay between blocks to purge outdated
-	// transactions. Outdated L2Txs are those that have been forged or
+	// transactions. Oudated L2Txs are those that have been forged or
 	// marked as invalid for longer than the SafetyPeriod and pending L2Txs
 	// that have been in the pool for longer than TTL once there are
 	// MaxTxs.
@@ -21,7 +21,7 @@ func newL2DB(t *testing.T) *l2db.L2DB {
 	db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
 	test.WipeDB(db)
-	return l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
+	return l2db.NewL2DB(db, 10, 100, 24*time.Hour, nil)
 }
 
 func newStateDB(t *testing.T) *statedb.LocalStateDB {
@@ -2,9 +2,9 @@ package coordinator
|
|||||||
|
|
||||||
import (
|
import (
|
||||||
"context"
|
"context"
|
||||||
|
"errors"
|
||||||
"fmt"
|
"fmt"
|
||||||
"math/big"
|
"math/big"
|
||||||
"strings"
|
|
||||||
"time"
|
"time"
|
||||||
|
|
||||||
"github.com/ethereum/go-ethereum"
|
"github.com/ethereum/go-ethereum"
|
||||||
@@ -31,10 +31,10 @@ type TxManager struct {
|
|||||||
batchCh chan *BatchInfo
|
batchCh chan *BatchInfo
|
||||||
chainID *big.Int
|
chainID *big.Int
|
||||||
account accounts.Account
|
account accounts.Account
|
||||||
consts common.SCConsts
|
consts synchronizer.SCConsts
|
||||||
|
|
||||||
stats synchronizer.Stats
|
stats synchronizer.Stats
|
||||||
vars common.SCVariables
|
vars synchronizer.SCVariables
|
||||||
statsVarsCh chan statsVars
|
statsVarsCh chan statsVars
|
||||||
|
|
||||||
discardPipelineCh chan int // int refers to the pipelineNum
|
discardPipelineCh chan int // int refers to the pipelineNum
|
||||||
@@ -55,8 +55,7 @@ type TxManager struct {
|
|||||||
|
|
||||||
// NewTxManager creates a new TxManager
|
// NewTxManager creates a new TxManager
|
||||||
func NewTxManager(ctx context.Context, cfg *Config, ethClient eth.ClientInterface, l2DB *l2db.L2DB,
|
func NewTxManager(ctx context.Context, cfg *Config, ethClient eth.ClientInterface, l2DB *l2db.L2DB,
|
||||||
-	coord *Coordinator, scConsts *common.SCConsts, initSCVars *common.SCVariables) (
-	*TxManager, error) {
+	coord *Coordinator, scConsts *synchronizer.SCConsts, initSCVars *synchronizer.SCVariables) (*TxManager, error) {
 	chainID, err := ethClient.EthChainID()
 	if err != nil {
 		return nil, tracerr.Wrap(err)
@@ -67,7 +66,7 @@ func NewTxManager(ctx context.Context, cfg *Config, ethClient eth.ClientInterfac
 	}
 	accNonce, err := ethClient.EthNonceAt(ctx, *address, nil)
 	if err != nil {
-		return nil, tracerr.Wrap(err)
+		return nil, err
 	}
 	log.Infow("TxManager started", "nonce", accNonce)
 	return &TxManager{
@@ -103,8 +102,7 @@ func (t *TxManager) AddBatch(ctx context.Context, batchInfo *BatchInfo) {
 }
 
 // SetSyncStatsVars is a thread safe method to sets the synchronizer Stats
-func (t *TxManager) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats,
-	vars *common.SCVariablesPtr) {
+func (t *TxManager) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats, vars *synchronizer.SCVariablesPtr) {
 	select {
 	case t.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
 	case <-ctx.Done():
@@ -120,12 +118,12 @@ func (t *TxManager) DiscardPipeline(ctx context.Context, pipelineNum int) {
 	}
 }
 
-func (t *TxManager) syncSCVars(vars common.SCVariablesPtr) {
+func (t *TxManager) syncSCVars(vars synchronizer.SCVariablesPtr) {
 	updateSCVars(&t.vars, vars)
 }
 
 // NewAuth generates a new auth object for an ethereum transaction
-func (t *TxManager) NewAuth(ctx context.Context, batchInfo *BatchInfo) (*bind.TransactOpts, error) {
+func (t *TxManager) NewAuth(ctx context.Context) (*bind.TransactOpts, error) {
 	gasPrice, err := t.ethClient.EthSuggestGasPrice(ctx)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
@@ -145,12 +143,15 @@ func (t *TxManager) NewAuth(ctx context.Context, batchInfo *BatchInfo) (*bind.Tr
 		return nil, tracerr.Wrap(err)
 	}
 	auth.Value = big.NewInt(0) // in wei
-	gasLimit := t.cfg.ForgeBatchGasCost.Fixed +
-		uint64(len(batchInfo.L1UserTxs))*t.cfg.ForgeBatchGasCost.L1UserTx +
-		uint64(len(batchInfo.L1CoordTxs))*t.cfg.ForgeBatchGasCost.L1CoordTx +
-		uint64(len(batchInfo.L2Txs))*t.cfg.ForgeBatchGasCost.L2Tx
-	auth.GasLimit = gasLimit
+	// TODO: Calculate GasLimit based on the contents of the ForgeBatchArgs
+	// This requires a function that estimates the gas usage of the
+	// forgeBatch call based on the contents of the ForgeBatch args:
+	// - length of l2txs
+	// - length of l1Usertxs
+	// - length of l1CoordTxs with authorization signature
+	// - length of l1CoordTxs without authoriation signature
+	// - etc.
+	auth.GasLimit = 1000000
 	auth.GasPrice = gasPrice
 	auth.Nonce = nil
 
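The removed side of the hunk above computes the forge gas limit from a fixed base cost plus per-transaction costs. A minimal runnable sketch of that arithmetic; the `ForgeBatchGasCost` struct definition and the cost values here are hypothetical stand-ins (in the node the fields come from configuration):

```go
package main

import "fmt"

// ForgeBatchGasCost mirrors the field names used in the diff; this
// struct definition is an assumed reconstruction, not the repo's own.
type ForgeBatchGasCost struct {
	Fixed     uint64
	L1UserTx  uint64
	L1CoordTx uint64
	L2Tx      uint64
}

// forgeBatchGasLimit reproduces the deleted computation: a fixed base
// cost plus a per-transaction cost for each transaction type.
func forgeBatchGasLimit(c ForgeBatchGasCost, nL1User, nL1Coord, nL2 int) uint64 {
	return c.Fixed +
		uint64(nL1User)*c.L1UserTx +
		uint64(nL1Coord)*c.L1CoordTx +
		uint64(nL2)*c.L2Tx
}

func main() {
	// Illustrative cost values only.
	c := ForgeBatchGasCost{Fixed: 300000, L1UserTx: 20000, L1CoordTx: 25000, L2Tx: 5000}
	fmt.Println(forgeBatchGasLimit(c, 2, 1, 10)) // 300000 + 40000 + 25000 + 50000
}
```

The older side instead hardcodes `auth.GasLimit = 1000000` and leaves the estimate as a TODO.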
@@ -184,30 +185,19 @@ func addPerc(v *big.Int, p int64) *big.Int {
 	r.Mul(r, big.NewInt(p))
 	// nolint reason: to calculate percentages we divide by 100
 	r.Div(r, big.NewInt(100)) //nolit:gomnd
-	// If the increase is 0, force it to be 1 so that a gas increase
-	// doesn't result in the same value, making the transaction to be equal
-	// than before.
-	if r.Cmp(big.NewInt(0)) == 0 {
-		r = big.NewInt(1)
-	}
 	return r.Add(v, r)
 }
 
-func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchInfo,
-	resend bool) error {
+func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchInfo, resend bool) error {
 	var ethTx *types.Transaction
 	var err error
-	var auth *bind.TransactOpts
+	auth, err := t.NewAuth(ctx)
+	if err != nil {
+		return tracerr.Wrap(err)
+	}
+	auth.Nonce = big.NewInt(int64(t.accNextNonce))
 	if resend {
-		auth = batchInfo.Auth
-		auth.GasPrice = addPerc(auth.GasPrice, 10)
-	} else {
-		auth, err = t.NewAuth(ctx, batchInfo)
-		if err != nil {
-			return tracerr.Wrap(err)
-		}
-		batchInfo.Auth = auth
-		auth.Nonce = big.NewInt(int64(t.accNextNonce))
+		auth.Nonce = big.NewInt(int64(batchInfo.EthTx.Nonce()))
 	}
 	for attempt := 0; attempt < t.cfg.EthClientAttempts; attempt++ {
 		if auth.GasPrice.Cmp(t.cfg.MaxGasPrice) > 0 {
@@ -216,35 +206,32 @@ func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchIn
 		}
 		// RollupForgeBatch() calls ethclient.SendTransaction()
 		ethTx, err = t.ethClient.RollupForgeBatch(batchInfo.ForgeBatchArgs, auth)
-		// We check the errors via strings because we match the
-		// definition of the error from geth, with the string returned
-		// via RPC obtained by the client.
-		if err == nil {
-			break
-		} else if strings.Contains(err.Error(), core.ErrNonceTooLow.Error()) {
+		if errors.Is(err, core.ErrNonceTooLow) {
 			log.Warnw("TxManager ethClient.RollupForgeBatch incrementing nonce",
 				"err", err, "nonce", auth.Nonce, "batchNum", batchInfo.BatchNum)
 			auth.Nonce.Add(auth.Nonce, big.NewInt(1))
 			attempt--
-		} else if strings.Contains(err.Error(), core.ErrNonceTooHigh.Error()) {
+		} else if errors.Is(err, core.ErrNonceTooHigh) {
 			log.Warnw("TxManager ethClient.RollupForgeBatch decrementing nonce",
 				"err", err, "nonce", auth.Nonce, "batchNum", batchInfo.BatchNum)
 			auth.Nonce.Sub(auth.Nonce, big.NewInt(1))
 			attempt--
-		} else if strings.Contains(err.Error(), core.ErrReplaceUnderpriced.Error()) {
+		} else if errors.Is(err, core.ErrUnderpriced) {
 			log.Warnw("TxManager ethClient.RollupForgeBatch incrementing gasPrice",
 				"err", err, "gasPrice", auth.GasPrice, "batchNum", batchInfo.BatchNum)
 			auth.GasPrice = addPerc(auth.GasPrice, 10)
 			attempt--
-		} else if strings.Contains(err.Error(), core.ErrUnderpriced.Error()) {
+		} else if errors.Is(err, core.ErrReplaceUnderpriced) {
 			log.Warnw("TxManager ethClient.RollupForgeBatch incrementing gasPrice",
 				"err", err, "gasPrice", auth.GasPrice, "batchNum", batchInfo.BatchNum)
 			auth.GasPrice = addPerc(auth.GasPrice, 10)
 			attempt--
-		} else {
+		} else if err != nil {
 			log.Errorw("TxManager ethClient.RollupForgeBatch",
 				"attempt", attempt, "err", err, "block", t.stats.Eth.LastBlock.Num+1,
 				"batchNum", batchInfo.BatchNum)
+		} else {
+			break
 		}
 		select {
 		case <-ctx.Done():
@@ -278,8 +265,7 @@ func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchIn
 			t.lastSentL1BatchBlockNum = t.stats.Eth.LastBlock.Num + 1
 		}
 	}
-	if err := t.l2DB.DoneForging(common.TxIDsFromL2Txs(batchInfo.L2Txs),
-		batchInfo.BatchNum); err != nil {
+	if err := t.l2DB.DoneForging(common.TxIDsFromL2Txs(batchInfo.L2Txs), batchInfo.BatchNum); err != nil {
 		return tracerr.Wrap(err)
 	}
 	return nil
@@ -311,9 +297,7 @@ func (t *TxManager) checkEthTransactionReceipt(ctx context.Context, batchInfo *B
 		}
 	}
 	if err != nil {
-		return tracerr.Wrap(
-			fmt.Errorf("reached max attempts for ethClient.EthTransactionReceipt: %w",
-				err))
+		return tracerr.Wrap(fmt.Errorf("reached max attempts for ethClient.EthTransactionReceipt: %w", err))
 	}
 	batchInfo.Receipt = receipt
 	t.cfg.debugBatchStore(batchInfo)
@@ -432,6 +416,8 @@ func (q *Queue) Push(batchInfo *BatchInfo) {
 
 // Run the TxManager
 func (t *TxManager) Run(ctx context.Context) {
+	waitCh := time.After(longWaitDuration)
 
 	var statsVars statsVars
 	select {
 	case statsVars = <-t.statsVarsCh:
@@ -442,7 +428,6 @@ func (t *TxManager) Run(ctx context.Context) {
 	log.Infow("TxManager: received initial statsVars",
 		"block", t.stats.Eth.LastBlock.Num, "batch", t.stats.Eth.LastBatchNum)
 
-	timer := time.NewTimer(longWaitDuration)
 	for {
 		select {
 		case <-ctx.Done():
@@ -486,24 +471,20 @@ func (t *TxManager) Run(ctx context.Context) {
 				continue
 			}
 			t.queue.Push(batchInfo)
-			if !timer.Stop() {
-				<-timer.C
-			}
-			timer.Reset(t.cfg.TxManagerCheckInterval)
-		case <-timer.C:
+			waitCh = time.After(t.cfg.TxManagerCheckInterval)
+		case <-waitCh:
 			queuePosition, batchInfo := t.queue.Next()
 			if batchInfo == nil {
-				timer.Reset(longWaitDuration)
+				waitCh = time.After(longWaitDuration)
 				continue
 			}
-			timer.Reset(t.cfg.TxManagerCheckInterval)
 			if err := t.checkEthTransactionReceipt(ctx, batchInfo); ctx.Err() != nil {
 				continue
 			} else if err != nil { //nolint:staticcheck
 				// Our ethNode is giving an error different
 				// than "not found" when getting the receipt
 				// for the transaction, so we can't figure out
-				// if it was not mined, mined and successful or
+				// if it was not mined, mined and succesfull or
 				// mined and failed. This could be due to the
 				// ethNode failure.
 				t.coord.SendMsg(ctx, MsgStopPipeline{
@@ -568,7 +549,7 @@ func (t *TxManager) removeBadBatchInfos(ctx context.Context) error {
 			// Our ethNode is giving an error different
 			// than "not found" when getting the receipt
 			// for the transaction, so we can't figure out
-			// if it was not mined, mined and successful or
+			// if it was not mined, mined and succesfull or
 			// mined and failed. This could be due to the
 			// ethNode failure.
 			next++
@@ -608,7 +589,7 @@ func (t *TxManager) removeBadBatchInfos(ctx context.Context) error {
 func (t *TxManager) canForgeAt(blockNum int64) bool {
 	return canForge(&t.consts.Auction, &t.vars.Auction,
 		&t.stats.Sync.Auction.CurrentSlot, &t.stats.Sync.Auction.NextSlot,
-		t.cfg.ForgerAddress, blockNum, t.cfg.MustForgeAtSlotDeadline)
+		t.cfg.ForgerAddress, blockNum)
 }
 
 func (t *TxManager) mustL1L2Batch(blockNum int64) bool {

@@ -1,14 +1,10 @@
 package historydb
 
 import (
-	"database/sql"
 	"errors"
 	"fmt"
-	"math/big"
-	"time"
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/tracerr"
@@ -36,18 +32,9 @@ func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
 		return nil, tracerr.Wrap(err)
 	}
 	defer hdb.apiConnCon.Release()
-	return hdb.getBatchAPI(hdb.dbRead, batchNum)
-}
-
-// GetBatchInternalAPI return the batch with the given batchNum
-func (hdb *HistoryDB) GetBatchInternalAPI(batchNum common.BatchNum) (*BatchAPI, error) {
-	return hdb.getBatchAPI(hdb.dbRead, batchNum)
-}
-
-func (hdb *HistoryDB) getBatchAPI(d meddler.DB, batchNum common.BatchNum) (*BatchAPI, error) {
 	batch := &BatchAPI{}
-	if err := meddler.QueryRow(
-		d, batch,
+	return batch, tracerr.Wrap(meddler.QueryRow(
+		hdb.db, batch,
 		`SELECT batch.item_id, batch.batch_num, batch.eth_block_num,
 		batch.forger_addr, batch.fees_collected, batch.total_fees_usd, batch.state_root,
 		batch.num_accounts, batch.exit_root, batch.forge_l1_txs_num, batch.slot_num,
@@ -55,11 +42,7 @@ func (hdb *HistoryDB) getBatchAPI(d meddler.DB, batchNum common.BatchNum) (*Batc
 		COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs
 		FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
 		WHERE batch_num = $1;`, batchNum,
-	); err != nil {
-		return nil, tracerr.Wrap(err)
-	}
-	batch.CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batch.CollectedFeesDB)
-	return batch, nil
+	))
 }
 
 // GetBatchesAPI return the batches applying the given filters
@@ -150,19 +133,16 @@ func (hdb *HistoryDB) GetBatchesAPI(
 		queryStr += " DESC "
 	}
 	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
-	query = hdb.dbRead.Rebind(queryStr)
+	query = hdb.db.Rebind(queryStr)
 	// log.Debug(query)
 	batchPtrs := []*BatchAPI{}
-	if err := meddler.QueryAll(hdb.dbRead, &batchPtrs, query, args...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &batchPtrs, query, args...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	batches := db.SlicePtrsToSlice(batchPtrs).([]BatchAPI)
 	if len(batches) == 0 {
 		return batches, 0, nil
 	}
-	for i := range batches {
-		batches[i].CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batches[i].CollectedFeesDB)
-	}
 	return batches, batches[0].TotalItems - uint64(len(batches)), nil
 }
 
@@ -176,7 +156,7 @@ func (hdb *HistoryDB) GetBestBidAPI(slotNum *int64) (BidAPI, error) {
 	}
 	defer hdb.apiConnCon.Release()
 	err = meddler.QueryRow(
-		hdb.dbRead, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
+		hdb.db, bid, `SELECT bid.*, block.timestamp, coordinator.forger_addr, coordinator.url
 		FROM bid INNER JOIN block ON bid.eth_block_num = block.eth_block_num
 		INNER JOIN (
 			SELECT bidder_addr, MAX(item_id) AS item_id FROM coordinator
@@ -200,14 +180,6 @@ func (hdb *HistoryDB) GetBestBidsAPI(
 		return nil, 0, tracerr.Wrap(err)
 	}
 	defer hdb.apiConnCon.Release()
-	return hdb.getBestBidsAPI(hdb.dbRead, minSlotNum, maxSlotNum, bidderAddr, limit, order)
-}
-
-func (hdb *HistoryDB) getBestBidsAPI(
-	d meddler.DB,
-	minSlotNum, maxSlotNum *int64,
-	bidderAddr *ethCommon.Address,
-	limit *uint, order string,
-) ([]BidAPI, uint64, error) {
 	var query string
 	var args []interface{}
 	// JOIN the best bid of each slot with the latest update of each coordinator
@@ -240,9 +212,9 @@ func (hdb *HistoryDB) getBestBidsAPI(
 	if limit != nil {
 		queryStr += fmt.Sprintf("LIMIT %d;", *limit)
 	}
-	query = hdb.dbRead.Rebind(queryStr)
+	query = hdb.db.Rebind(queryStr)
 	bidPtrs := []*BidAPI{}
-	if err := meddler.QueryAll(d, &bidPtrs, query, args...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &bidPtrs, query, args...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	// log.Debug(query)
@@ -324,9 +296,9 @@ func (hdb *HistoryDB) GetBidsAPI(
 	if err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
-	query = hdb.dbRead.Rebind(query)
+	query = hdb.db.Rebind(query)
 	bids := []*BidAPI{}
-	if err := meddler.QueryAll(hdb.dbRead, &bids, query, argsQ...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &bids, query, argsQ...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	if len(bids) == 0 {
@@ -412,9 +384,9 @@ func (hdb *HistoryDB) GetTokensAPI(
 	if err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
-	query = hdb.dbRead.Rebind(query)
+	query = hdb.db.Rebind(query)
 	tokens := []*TokenWithUSD{}
-	if err := meddler.QueryAll(hdb.dbRead, &tokens, query, argsQ...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &tokens, query, argsQ...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	if len(tokens) == 0 {
@@ -436,7 +408,7 @@ func (hdb *HistoryDB) GetTxAPI(txID common.TxID) (*TxAPI, error) {
 	defer hdb.apiConnCon.Release()
 	tx := &TxAPI{}
 	err = meddler.QueryRow(
-		hdb.dbRead, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
+		hdb.db, tx, `SELECT tx.item_id, tx.is_l1, tx.id, tx.type, tx.position,
 		hez_idx(tx.effective_from_idx, token.symbol) AS from_idx, tx.from_eth_addr, tx.from_bjj,
 		hez_idx(tx.to_idx, token.symbol) AS to_idx, tx.to_eth_addr, tx.to_bjj,
 		tx.amount, tx.amount_success, tx.token_id, tx.amount_usd,
@@ -569,10 +541,10 @@ func (hdb *HistoryDB) GetTxsAPI(
 		queryStr += " DESC "
 	}
 	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
-	query = hdb.dbRead.Rebind(queryStr)
+	query = hdb.db.Rebind(queryStr)
 	// log.Debug(query)
 	txsPtrs := []*TxAPI{}
-	if err := meddler.QueryAll(hdb.dbRead, &txsPtrs, query, args...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &txsPtrs, query, args...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	txs := db.SlicePtrsToSlice(txsPtrs).([]TxAPI)
@@ -592,7 +564,7 @@ func (hdb *HistoryDB) GetExitAPI(batchNum *uint, idx *common.Idx) (*ExitAPI, err
 	defer hdb.apiConnCon.Release()
 	exit := &ExitAPI{}
 	err = meddler.QueryRow(
-		hdb.dbRead, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
+		hdb.db, exit, `SELECT exit_tree.item_id, exit_tree.batch_num,
 		hez_idx(exit_tree.account_idx, token.symbol) AS account_idx,
 		account.bjj, account.eth_addr,
 		exit_tree.merkle_proof, exit_tree.balance, exit_tree.instant_withdrawn,
@@ -713,10 +685,10 @@ func (hdb *HistoryDB) GetExitsAPI(
 		queryStr += " DESC "
 	}
 	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
-	query = hdb.dbRead.Rebind(queryStr)
+	query = hdb.db.Rebind(queryStr)
 	// log.Debug(query)
 	exits := []*ExitAPI{}
-	if err := meddler.QueryAll(hdb.dbRead, &exits, query, args...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &exits, query, args...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	if len(exits) == 0 {
@@ -725,6 +697,25 @@ func (hdb *HistoryDB) GetExitsAPI(
 	return db.SlicePtrsToSlice(exits).([]ExitAPI), exits[0].TotalItems - uint64(len(exits)), nil
 }
 
+// GetBucketUpdatesAPI retrieves latest values for each bucket
+func (hdb *HistoryDB) GetBucketUpdatesAPI() ([]BucketUpdateAPI, error) {
+	cancel, err := hdb.apiConnCon.Acquire()
+	defer cancel()
+	if err != nil {
+		return nil, tracerr.Wrap(err)
+	}
+	defer hdb.apiConnCon.Release()
+	var bucketUpdates []*BucketUpdateAPI
+	err = meddler.QueryAll(
+		hdb.db, &bucketUpdates,
+		`SELECT num_bucket, withdrawals FROM bucket_update
+		WHERE item_id in(SELECT max(item_id) FROM bucket_update
+		group by num_bucket)
+		ORDER BY num_bucket ASC;`,
+	)
+	return db.SlicePtrsToSlice(bucketUpdates).([]BucketUpdateAPI), tracerr.Wrap(err)
+}
+
 // GetCoordinatorsAPI returns a list of coordinators from the DB and pagination info
 func (hdb *HistoryDB) GetCoordinatorsAPI(
 	bidderAddr, forgerAddr *ethCommon.Address,
@@ -781,10 +772,10 @@ func (hdb *HistoryDB) GetCoordinatorsAPI(
 		queryStr += " DESC "
 	}
 	queryStr += fmt.Sprintf("LIMIT %d;", *limit)
-	query = hdb.dbRead.Rebind(queryStr)
+	query = hdb.db.Rebind(queryStr)
 
 	coordinators := []*CoordinatorAPI{}
-	if err := meddler.QueryAll(hdb.dbRead, &coordinators, query, args...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &coordinators, query, args...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	if len(coordinators) == 0 {
@@ -804,11 +795,34 @@ func (hdb *HistoryDB) GetAuctionVarsAPI() (*common.AuctionVariables, error) {
 	defer hdb.apiConnCon.Release()
 	auctionVars := &common.AuctionVariables{}
 	err = meddler.QueryRow(
-		hdb.dbRead, auctionVars, `SELECT * FROM auction_vars;`,
+		hdb.db, auctionVars, `SELECT * FROM auction_vars;`,
 	)
 	return auctionVars, tracerr.Wrap(err)
 }
 
+// GetAuctionVarsUntilSetSlotNumAPI returns all the updates of the auction vars
+// from the last entry in which DefaultSlotSetBidSlotNum <= slotNum
+func (hdb *HistoryDB) GetAuctionVarsUntilSetSlotNumAPI(slotNum int64, maxItems int) ([]MinBidInfo, error) {
+	cancel, err := hdb.apiConnCon.Acquire()
+	defer cancel()
+	if err != nil {
+		return nil, tracerr.Wrap(err)
+	}
+	defer hdb.apiConnCon.Release()
+	auctionVars := []*MinBidInfo{}
+	query := `
+		SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
+		WHERE default_slot_set_bid_slot_num < $1
+		ORDER BY default_slot_set_bid_slot_num DESC
+		LIMIT $2;
+	`
+	err = meddler.QueryAll(hdb.db, &auctionVars, query, slotNum, maxItems)
+	if err != nil {
+		return nil, tracerr.Wrap(err)
+	}
+	return db.SlicePtrsToSlice(auctionVars).([]MinBidInfo), nil
+}
+
 // GetAccountAPI returns an account by its index
 func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
 	cancel, err := hdb.apiConnCon.Acquire()
@@ -818,19 +832,11 @@ func (hdb *HistoryDB) GetAccountAPI(idx common.Idx) (*AccountAPI, error) {
 	}
 	defer hdb.apiConnCon.Release()
 	account := &AccountAPI{}
-	err = meddler.QueryRow(hdb.dbRead, account, `SELECT account.item_id, hez_idx(account.idx,
+	err = meddler.QueryRow(hdb.db, account, `SELECT account.item_id, hez_idx(account.idx,
 	token.symbol) as idx, account.batch_num, account.bjj, account.eth_addr,
 	token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
-	token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd,
-	token.usd_update, account_update.nonce, account_update.balance
-	FROM account inner JOIN (
-		SELECT idx, nonce, balance
-		FROM account_update
-		WHERE idx = $1
-		ORDER BY item_id DESC LIMIT 1
-	) AS account_update ON account_update.idx = account.idx
-	INNER JOIN token ON account.token_id = token.token_id
-	WHERE account.idx = $1;`, idx)
+	token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update
+	FROM account INNER JOIN token ON account.token_id = token.token_id WHERE idx = $1;`, idx)
 
 	if err != nil {
 		return nil, tracerr.Wrap(err)
@@ -858,13 +864,8 @@ func (hdb *HistoryDB) GetAccountsAPI(
 	queryStr := `SELECT account.item_id, hez_idx(account.idx, token.symbol) as idx, account.batch_num,
 	account.bjj, account.eth_addr, token.token_id, token.item_id AS token_item_id, token.eth_block_num AS token_block,
 	token.eth_addr as token_eth_addr, token.name, token.symbol, token.decimals, token.usd, token.usd_update,
-	account_update.nonce, account_update.balance, COUNT(*) OVER() AS total_items
-	FROM account inner JOIN (
-		SELECT DISTINCT idx,
-		first_value(nonce) over(partition by idx ORDER BY item_id DESC) as nonce,
-		first_value(balance) over(partition by idx ORDER BY item_id DESC) as balance
-		FROM account_update
-	) AS account_update ON account_update.idx = account.idx INNER JOIN token ON account.token_id = token.token_id `
+	COUNT(*) OVER() AS total_items
+	FROM account INNER JOIN token ON account.token_id = token.token_id `
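The removed queries above pick, for each account `idx`, the `nonce` and `balance` from the newest `account_update` row (`first_value(...) OVER (PARTITION BY idx ORDER BY item_id DESC)`). A small in-memory Go analogue of that "latest row per group" selection; the `accountUpdate` struct here is a hypothetical stand-in for the DB row:

```go
package main

import "fmt"

// accountUpdate is a simplified stand-in for a row of the
// account_update table referenced by the deleted SQL.
type accountUpdate struct {
	ItemID  int
	Idx     int
	Nonce   int
	Balance int
}

// latestPerIdx keeps, for each Idx, only the update with the highest
// ItemID, mirroring the window-function query.
func latestPerIdx(ups []accountUpdate) map[int]accountUpdate {
	latest := map[int]accountUpdate{}
	for _, u := range ups {
		if cur, ok := latest[u.Idx]; !ok || u.ItemID > cur.ItemID {
			latest[u.Idx] = u
		}
	}
	return latest
}

func main() {
	ups := []accountUpdate{
		{ItemID: 1, Idx: 256, Nonce: 0, Balance: 100},
		{ItemID: 3, Idx: 256, Nonce: 2, Balance: 70},
		{ItemID: 2, Idx: 257, Nonce: 1, Balance: 50},
	}
	l := latestPerIdx(ups)
	fmt.Println(l[256].Balance, l[257].Nonce) // 70 1
}
```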
 	// Apply filters
 	nextIsAnd := false
 	// ethAddr filter
@@ -913,10 +914,10 @@ func (hdb *HistoryDB) GetAccountsAPI(
 	if err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
-	query = hdb.dbRead.Rebind(query)
+	query = hdb.db.Rebind(query)
 
 	accounts := []*AccountAPI{}
-	if err := meddler.QueryAll(hdb.dbRead, &accounts, query, argsQ...); err != nil {
+	if err := meddler.QueryAll(hdb.db, &accounts, query, argsQ...); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	if len(accounts) == 0 {
@@ -927,280 +928,99 @@ func (hdb *HistoryDB) GetAccountsAPI(
 		accounts[0].TotalItems - uint64(len(accounts)), nil
 }
 
-// GetCommonAccountAPI returns the account associated to an account idx
-func (hdb *HistoryDB) GetCommonAccountAPI(idx common.Idx) (*common.Account, error) {
+// GetMetricsAPI returns metrics
+func (hdb *HistoryDB) GetMetricsAPI(lastBatchNum common.BatchNum) (*Metrics, error) {
 	cancel, err := hdb.apiConnCon.Acquire()
 	defer cancel()
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
 	defer hdb.apiConnCon.Release()
-	account := &common.Account{}
+	metricsTotals := &MetricsTotals{}
+	metrics := &Metrics{}
 	err = meddler.QueryRow(
-		hdb.dbRead, account, `SELECT idx, token_id, batch_num, bjj, eth_addr
-		FROM account WHERE idx = $1;`, idx,
-	)
-	return account, tracerr.Wrap(err)
-}
-
-// GetCoordinatorAPI returns a coordinator by its bidderAddr
-func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
-	cancel, err := hdb.apiConnCon.Acquire()
-	defer cancel()
+		hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
+		COALESCE (MIN(tx.batch_num), 0) as batch_num, COALESCE (MIN(block.timestamp),
+		NOW()) AS min_timestamp, COALESCE (MAX(block.timestamp), NOW()) AS max_timestamp
+		FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
+		WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
-	defer hdb.apiConnCon.Release()
-	return hdb.getCoordinatorAPI(hdb.dbRead, bidderAddr)
-}
-
-func (hdb *HistoryDB) getCoordinatorAPI(d meddler.DB, bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
-	coordinator := &CoordinatorAPI{}
-	err := meddler.QueryRow(
-		d, coordinator,
-		"SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
-		bidderAddr,
-	)
-	return coordinator, tracerr.Wrap(err)
-}
-
-// GetNodeInfoAPI retusnt he NodeInfo
-func (hdb *HistoryDB) GetNodeInfoAPI() (*NodeInfo, error) {
+	seconds := metricsTotals.MaxTimestamp.Sub(metricsTotals.MinTimestamp).Seconds()
+	// Avoid dividing by 0
+	if seconds == 0 {
|
|
||||||
cancel, err := hdb.apiConnCon.Acquire()
|
|
||||||
defer cancel()
|
|
||||||
if err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
defer hdb.apiConnCon.Release()
|
|
||||||
return hdb.GetNodeInfo()
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetBucketUpdatesInternalAPI returns the latest bucket updates
|
|
||||||
func (hdb *HistoryDB) GetBucketUpdatesInternalAPI() ([]BucketUpdateAPI, error) {
|
|
||||||
var bucketUpdates []*BucketUpdateAPI
|
|
||||||
err := meddler.QueryAll(
|
|
||||||
hdb.dbRead, &bucketUpdates,
|
|
||||||
`SELECT num_bucket, withdrawals FROM bucket_update
|
|
||||||
WHERE item_id in(SELECT max(item_id) FROM bucket_update
|
|
||||||
group by num_bucket)
|
|
||||||
ORDER BY num_bucket ASC;`,
|
|
||||||
)
|
|
||||||
return db.SlicePtrsToSlice(bucketUpdates).([]BucketUpdateAPI), tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetNextForgersInternalAPI returns next forgers
|
|
||||||
func (hdb *HistoryDB) GetNextForgersInternalAPI(auctionVars *common.AuctionVariables,
|
|
||||||
auctionConsts *common.AuctionConstants,
|
|
||||||
lastBlock common.Block, currentSlot, lastClosedSlot int64) ([]NextForgerAPI, error) {
|
|
||||||
secondsPerBlock := int64(15) //nolint:gomnd
|
|
||||||
// currentSlot and lastClosedSlot included
|
|
||||||
limit := uint(lastClosedSlot - currentSlot + 1)
|
|
||||||
bids, _, err := hdb.getBestBidsAPI(hdb.dbRead, ¤tSlot, &lastClosedSlot, nil, &limit, "ASC")
|
|
||||||
if err != nil && tracerr.Unwrap(err) != sql.ErrNoRows {
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
nextForgers := []NextForgerAPI{}
|
|
||||||
// Get min bid info
|
|
||||||
var minBidInfo []MinBidInfo
|
|
||||||
if currentSlot >= auctionVars.DefaultSlotSetBidSlotNum {
|
|
||||||
// All min bids can be calculated with the last update of AuctionVariables
|
|
||||||
|
|
||||||
minBidInfo = []MinBidInfo{{
|
|
||||||
DefaultSlotSetBid: auctionVars.DefaultSlotSetBid,
|
|
||||||
DefaultSlotSetBidSlotNum: auctionVars.DefaultSlotSetBidSlotNum,
|
|
||||||
}}
|
|
||||||
} else {
|
|
||||||
// Get all the relevant updates from the DB
|
|
||||||
minBidInfo, err = hdb.getMinBidInfo(hdb.dbRead, currentSlot, lastClosedSlot)
|
|
||||||
if err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Create nextForger for each slot
|
|
||||||
for i := currentSlot; i <= lastClosedSlot; i++ {
|
|
||||||
fromBlock := i*int64(auctionConsts.BlocksPerSlot) +
|
|
||||||
auctionConsts.GenesisBlockNum
|
|
||||||
toBlock := (i+1)*int64(auctionConsts.BlocksPerSlot) +
|
|
||||||
auctionConsts.GenesisBlockNum - 1
|
|
||||||
nextForger := NextForgerAPI{
|
|
||||||
Period: Period{
|
|
||||||
SlotNum: i,
|
|
||||||
FromBlock: fromBlock,
|
|
||||||
ToBlock: toBlock,
|
|
||||||
FromTimestamp: lastBlock.Timestamp.Add(time.Second *
|
|
||||||
time.Duration(secondsPerBlock*(fromBlock-lastBlock.Num))),
|
|
||||||
ToTimestamp: lastBlock.Timestamp.Add(time.Second *
|
|
||||||
time.Duration(secondsPerBlock*(toBlock-lastBlock.Num))),
|
|
||||||
},
|
|
||||||
}
|
|
||||||
foundForger := false
|
|
||||||
// If there is a bid for a slot, get forger (coordinator)
|
|
||||||
for j := range bids {
|
|
||||||
slotNum := bids[j].SlotNum
|
|
||||||
if slotNum == i {
|
|
||||||
// There's a bid for the slot
|
|
||||||
// Check if the bid is greater than the minimum required
|
|
||||||
for i := 0; i < len(minBidInfo); i++ {
|
|
||||||
// Find the most recent update
|
|
||||||
if slotNum >= minBidInfo[i].DefaultSlotSetBidSlotNum {
|
|
||||||
// Get min bid
|
|
||||||
minBidSelector := slotNum % int64(len(auctionVars.DefaultSlotSetBid))
|
|
||||||
minBid := minBidInfo[i].DefaultSlotSetBid[minBidSelector]
|
|
||||||
// Check if the bid has beaten the minimum
|
|
||||||
bid, ok := new(big.Int).SetString(string(bids[j].BidValue), 10)
|
|
||||||
if !ok {
|
|
||||||
return nil, tracerr.New("Wrong bid value, error parsing it as big.Int")
|
|
||||||
}
|
|
||||||
if minBid.Cmp(bid) == 1 {
|
|
||||||
// Min bid is greater than bid, the slot will be forged by boot coordinator
|
|
||||||
break
|
|
||||||
}
|
|
||||||
foundForger = true
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if !foundForger { // There is no bid or it's smaller than the minimum
|
|
||||||
break
|
|
||||||
}
|
|
||||||
coordinator, err := hdb.getCoordinatorAPI(hdb.dbRead, bids[j].Bidder)
|
|
||||||
if err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
nextForger.Coordinator = *coordinator
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// If there is no bid, the coordinator that will forge is boot coordinator
|
|
||||||
if !foundForger {
|
|
||||||
nextForger.Coordinator = CoordinatorAPI{
|
|
||||||
Forger: auctionVars.BootCoordinator,
|
|
||||||
URL: auctionVars.BootCoordinatorURL,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
nextForgers = append(nextForgers, nextForger)
|
|
||||||
}
|
|
||||||
return nextForgers, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetMetricsInternalAPI returns the MetricsAPI
|
|
||||||
func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metrics *MetricsAPI, poolLoad int64, err error) {
|
|
||||||
metrics = &MetricsAPI{}
|
|
||||||
type period struct {
|
|
||||||
FromBatchNum common.BatchNum `meddler:"from_batch_num"`
|
|
||||||
FromTimestamp time.Time `meddler:"from_timestamp"`
|
|
||||||
ToBatchNum common.BatchNum `meddler:"-"`
|
|
||||||
ToTimestamp time.Time `meddler:"to_timestamp"`
|
|
||||||
}
|
|
||||||
p := &period{
|
|
||||||
ToBatchNum: lastBatchNum,
|
|
||||||
}
|
|
||||||
if err := meddler.QueryRow(
|
|
||||||
hdb.dbRead, p, `SELECT
|
|
||||||
COALESCE (MIN(batch.batch_num), 0) as from_batch_num,
|
|
||||||
COALESCE (MIN(block.timestamp), NOW()) AS from_timestamp,
|
|
||||||
COALESCE (MAX(block.timestamp), NOW()) AS to_timestamp
|
|
||||||
FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
|
|
||||||
WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`,
|
|
||||||
); err != nil {
|
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
// Get the amount of txs of that period
|
|
||||||
row := hdb.dbRead.QueryRow(
|
|
||||||
`SELECT COUNT(*) as total_txs FROM tx WHERE tx.batch_num between $1 AND $2;`,
|
|
||||||
p.FromBatchNum, p.ToBatchNum,
|
|
||||||
)
|
|
||||||
var nTxs int
|
|
||||||
if err := row.Scan(&nTxs); err != nil {
|
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
// Set txs/s
|
|
||||||
seconds := p.ToTimestamp.Sub(p.FromTimestamp).Seconds()
|
|
||||||
if seconds == 0 { // Avoid dividing by 0
|
|
||||||
seconds++
|
seconds++
|
||||||
}
|
}
|
||||||
metrics.TransactionsPerSecond = float64(nTxs) / seconds
|
|
||||||
// Set txs/batch
|
metrics.TransactionsPerSecond = float64(metricsTotals.TotalTransactions) / seconds
|
||||||
nBatches := p.ToBatchNum - p.FromBatchNum + 1
|
|
||||||
if nBatches == 0 { // Avoid dividing by 0
|
if (lastBatchNum - metricsTotals.FirstBatchNum) > 0 {
|
||||||
nBatches++
|
metrics.TransactionsPerBatch = float64(metricsTotals.TotalTransactions) /
|
||||||
}
|
float64(lastBatchNum-metricsTotals.FirstBatchNum+1)
|
||||||
if (p.ToBatchNum - p.FromBatchNum) > 0 {
|
|
||||||
metrics.TransactionsPerBatch = float64(nTxs) /
|
|
||||||
float64(nBatches)
|
|
||||||
} else {
|
} else {
|
||||||
metrics.TransactionsPerBatch = 0
|
metrics.TransactionsPerBatch = float64(0)
|
||||||
}
|
}
|
||||||
// Get total fee of that period
|
|
||||||
row = hdb.dbRead.QueryRow(
|
err = meddler.QueryRow(
|
||||||
`SELECT COALESCE (SUM(total_fees_usd), 0) FROM batch WHERE batch_num between $1 AND $2;`,
|
hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
|
||||||
p.FromBatchNum, p.ToBatchNum,
|
COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
|
||||||
)
|
WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
|
||||||
var totalFee float64
|
if err != nil {
|
||||||
if err := row.Scan(&totalFee); err != nil {
|
return nil, tracerr.Wrap(err)
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
}
|
||||||
// Set batch frequency
|
if metricsTotals.TotalBatches > 0 {
|
||||||
metrics.BatchFrequency = seconds / float64(nBatches)
|
metrics.BatchFrequency = seconds / float64(metricsTotals.TotalBatches)
|
||||||
// Set avg transaction fee (only L2 txs have fee)
|
} else {
|
||||||
row = hdb.dbRead.QueryRow(
|
metrics.BatchFrequency = 0
|
||||||
`SELECT COUNT(*) as total_txs FROM tx WHERE tx.batch_num between $1 AND $2 AND NOT is_l1;`,
|
|
||||||
p.FromBatchNum, p.ToBatchNum,
|
|
||||||
)
|
|
||||||
var nL2Txs int
|
|
||||||
if err := row.Scan(&nL2Txs); err != nil {
|
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
}
|
||||||
if nL2Txs > 0 {
|
if metricsTotals.TotalTransactions > 0 {
|
||||||
metrics.AvgTransactionFee = totalFee / float64(nL2Txs)
|
metrics.AvgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
|
||||||
} else {
|
} else {
|
||||||
metrics.AvgTransactionFee = 0
|
metrics.AvgTransactionFee = 0
|
||||||
}
|
}
|
||||||
// Get and set amount of registered accounts
|
err = meddler.QueryRow(
|
||||||
type registeredAccounts struct {
|
hdb.db, metrics,
|
||||||
TokenAccounts int64 `meddler:"token_accounts"`
|
`SELECT COUNT(*) AS total_bjjs, COUNT(DISTINCT(bjj)) AS total_accounts FROM account;`)
|
||||||
Wallets int64 `meddler:"wallets"`
|
|
||||||
}
|
|
||||||
ra := ®isteredAccounts{}
|
|
||||||
if err := meddler.QueryRow(
|
|
||||||
hdb.dbRead, ra,
|
|
||||||
`SELECT COUNT(*) AS token_accounts, COUNT(DISTINCT(bjj)) AS wallets FROM account;`,
|
|
||||||
); err != nil {
|
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
metrics.TokenAccounts = ra.TokenAccounts
|
|
||||||
metrics.Wallets = ra.Wallets
|
|
||||||
// Get and set estimated time to forge L1 tx
|
|
||||||
row = hdb.dbRead.QueryRow(
|
|
||||||
`SELECT COALESCE (AVG(EXTRACT(EPOCH FROM (forged.timestamp - added.timestamp))), 0) FROM tx
|
|
||||||
INNER JOIN block AS added ON tx.eth_block_num = added.eth_block_num
|
|
||||||
INNER JOIN batch AS forged_batch ON tx.batch_num = forged_batch.batch_num
|
|
||||||
INNER JOIN block AS forged ON forged_batch.eth_block_num = forged.eth_block_num
|
|
||||||
WHERE tx.batch_num between $1 and $2 AND tx.is_l1 AND tx.user_origin;`,
|
|
||||||
p.FromBatchNum, p.ToBatchNum,
|
|
||||||
)
|
|
||||||
var timeToForgeL1 float64
|
|
||||||
if err := row.Scan(&timeToForgeL1); err != nil {
|
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
metrics.EstimatedTimeToForgeL1 = timeToForgeL1
|
|
||||||
// Get amount of txs in the pool
|
|
||||||
row = hdb.dbRead.QueryRow(
|
|
||||||
`SELECT COUNT(*) FROM tx_pool WHERE state = $1 AND NOT external_delete;`,
|
|
||||||
common.PoolL2TxStatePending,
|
|
||||||
)
|
|
||||||
if err := row.Scan(&poolLoad); err != nil {
|
|
||||||
return nil, 0, tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
return metrics, poolLoad, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetStateAPI returns the StateAPI
|
|
||||||
func (hdb *HistoryDB) GetStateAPI() (*StateAPI, error) {
|
|
||||||
cancel, err := hdb.apiConnCon.Acquire()
|
|
||||||
defer cancel()
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
defer hdb.apiConnCon.Release()
|
|
||||||
return hdb.getStateAPI(hdb.dbRead)
|
return metrics, nil
|
||||||
|
}
|
||||||
|
|
||||||
|
// GetAvgTxFeeAPI returns average transaction fee of the last 1h
|
||||||
|
func (hdb *HistoryDB) GetAvgTxFeeAPI() (float64, error) {
|
||||||
|
cancel, err := hdb.apiConnCon.Acquire()
|
||||||
|
defer cancel()
|
||||||
|
if err != nil {
|
||||||
|
return 0, tracerr.Wrap(err)
|
||||||
|
}
|
||||||
|
defer hdb.apiConnCon.Release()
|
||||||
|
metricsTotals := &MetricsTotals{}
|
||||||
|
err = meddler.QueryRow(
|
||||||
|
hdb.db, metricsTotals, `SELECT COUNT(tx.*) as total_txs,
|
||||||
|
COALESCE (MIN(tx.batch_num), 0) as batch_num
|
||||||
|
FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
|
||||||
|
WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`)
|
||||||
|
if err != nil {
|
||||||
|
return 0, tracerr.Wrap(err)
|
||||||
|
}
|
||||||
|
err = meddler.QueryRow(
|
||||||
|
hdb.db, metricsTotals, `SELECT COUNT(*) AS total_batches,
|
||||||
|
COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
|
||||||
|
WHERE batch_num > $1;`, metricsTotals.FirstBatchNum)
|
||||||
|
if err != nil {
|
||||||
|
return 0, tracerr.Wrap(err)
|
||||||
|
}
|
||||||
|
|
||||||
|
var avgTransactionFee float64
|
||||||
|
if metricsTotals.TotalTransactions > 0 {
|
||||||
|
avgTransactionFee = metricsTotals.TotalFeesUSD / float64(metricsTotals.TotalTransactions)
|
||||||
|
} else {
|
||||||
|
avgTransactionFee = 0
|
||||||
|
}
|
||||||
|
|
||||||
|
return avgTransactionFee, nil
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -27,35 +27,30 @@ const (
|
|||||||
|
|
||||||
// HistoryDB persist the historic of the rollup
|
// HistoryDB persist the historic of the rollup
|
||||||
type HistoryDB struct {
|
type HistoryDB struct {
|
||||||
dbRead *sqlx.DB
|
db *sqlx.DB
|
||||||
dbWrite *sqlx.DB
|
|
||||||
apiConnCon *db.APIConnectionController
|
apiConnCon *db.APIConnectionController
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewHistoryDB initialize the DB
|
// NewHistoryDB initialize the DB
|
||||||
func NewHistoryDB(dbRead, dbWrite *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
|
func NewHistoryDB(db *sqlx.DB, apiConnCon *db.APIConnectionController) *HistoryDB {
|
||||||
return &HistoryDB{
|
return &HistoryDB{db: db, apiConnCon: apiConnCon}
|
||||||
dbRead: dbRead,
|
|
||||||
dbWrite: dbWrite,
|
|
||||||
apiConnCon: apiConnCon,
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// DB returns a pointer to the L2DB.db. This method should be used only for
|
// DB returns a pointer to the L2DB.db. This method should be used only for
|
||||||
// internal testing purposes.
|
// internal testing purposes.
|
||||||
func (hdb *HistoryDB) DB() *sqlx.DB {
|
func (hdb *HistoryDB) DB() *sqlx.DB {
|
||||||
return hdb.dbWrite
|
return hdb.db
|
||||||
}
|
}
|
||||||
|
|
||||||
// AddBlock insert a block into the DB
|
// AddBlock insert a block into the DB
|
||||||
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.dbWrite, block) }
|
func (hdb *HistoryDB) AddBlock(block *common.Block) error { return hdb.addBlock(hdb.db, block) }
|
||||||
func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
|
func (hdb *HistoryDB) addBlock(d meddler.DB, block *common.Block) error {
|
||||||
return tracerr.Wrap(meddler.Insert(d, "block", block))
|
return tracerr.Wrap(meddler.Insert(d, "block", block))
|
||||||
}
|
}
|
||||||
|
|
||||||
// AddBlocks inserts blocks into the DB
|
// AddBlocks inserts blocks into the DB
|
||||||
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
|
func (hdb *HistoryDB) AddBlocks(blocks []common.Block) error {
|
||||||
return tracerr.Wrap(hdb.addBlocks(hdb.dbWrite, blocks))
|
return tracerr.Wrap(hdb.addBlocks(hdb.db, blocks))
|
||||||
}
|
}
|
||||||
|
|
||||||
func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
|
func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
|
||||||
@@ -66,7 +61,7 @@ func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
|
|||||||
timestamp,
|
timestamp,
|
||||||
hash
|
hash
|
||||||
) VALUES %s;`,
|
) VALUES %s;`,
|
||||||
blocks,
|
blocks[:],
|
||||||
))
|
))
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -74,7 +69,7 @@ func (hdb *HistoryDB) addBlocks(d meddler.DB, blocks []common.Block) error {
|
|||||||
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
|
func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
|
||||||
block := &common.Block{}
|
block := &common.Block{}
|
||||||
err := meddler.QueryRow(
|
err := meddler.QueryRow(
|
||||||
hdb.dbRead, block,
|
hdb.db, block,
|
||||||
"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
|
"SELECT * FROM block WHERE eth_block_num = $1;", blockNum,
|
||||||
)
|
)
|
||||||
return block, tracerr.Wrap(err)
|
return block, tracerr.Wrap(err)
|
||||||
@@ -84,7 +79,7 @@ func (hdb *HistoryDB) GetBlock(blockNum int64) (*common.Block, error) {
|
|||||||
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
|
func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
|
||||||
var blocks []*common.Block
|
var blocks []*common.Block
|
||||||
err := meddler.QueryAll(
|
err := meddler.QueryAll(
|
||||||
hdb.dbRead, &blocks,
|
hdb.db, &blocks,
|
||||||
"SELECT * FROM block ORDER BY eth_block_num;",
|
"SELECT * FROM block ORDER BY eth_block_num;",
|
||||||
)
|
)
|
||||||
return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
|
return db.SlicePtrsToSlice(blocks).([]common.Block), tracerr.Wrap(err)
|
||||||
@@ -94,7 +89,7 @@ func (hdb *HistoryDB) GetAllBlocks() ([]common.Block, error) {
|
|||||||
func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
|
func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
|
||||||
var blocks []*common.Block
|
var blocks []*common.Block
|
||||||
err := meddler.QueryAll(
|
err := meddler.QueryAll(
|
||||||
hdb.dbRead, &blocks,
|
hdb.db, &blocks,
|
||||||
"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
|
"SELECT * FROM block WHERE $1 <= eth_block_num AND eth_block_num < $2 ORDER BY eth_block_num;",
|
||||||
from, to,
|
from, to,
|
||||||
)
|
)
|
||||||
@@ -105,13 +100,13 @@ func (hdb *HistoryDB) getBlocks(from, to int64) ([]common.Block, error) {
|
|||||||
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
|
func (hdb *HistoryDB) GetLastBlock() (*common.Block, error) {
|
||||||
block := &common.Block{}
|
block := &common.Block{}
|
||||||
err := meddler.QueryRow(
|
err := meddler.QueryRow(
|
||||||
hdb.dbRead, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
|
hdb.db, block, "SELECT * FROM block ORDER BY eth_block_num DESC LIMIT 1;",
|
||||||
)
|
)
|
||||||
return block, tracerr.Wrap(err)
|
return block, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// AddBatch insert a Batch into the DB
|
// AddBatch insert a Batch into the DB
|
||||||
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.dbWrite, batch) }
|
func (hdb *HistoryDB) AddBatch(batch *common.Batch) error { return hdb.addBatch(hdb.db, batch) }
|
||||||
func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
|
func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
|
||||||
// Calculate total collected fees in USD
|
// Calculate total collected fees in USD
|
||||||
// Get IDs of collected tokens for fees
|
// Get IDs of collected tokens for fees
|
||||||
@@ -134,9 +129,9 @@ func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
query = hdb.dbWrite.Rebind(query)
|
query = hdb.db.Rebind(query)
|
||||||
if err := meddler.QueryAll(
|
if err := meddler.QueryAll(
|
||||||
hdb.dbWrite, &tokenPrices, query, args...,
|
hdb.db, &tokenPrices, query, args...,
|
||||||
); err != nil {
|
); err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
@@ -158,7 +153,7 @@ func (hdb *HistoryDB) addBatch(d meddler.DB, batch *common.Batch) error {
|
|||||||
|
|
||||||
// AddBatches insert Bids into the DB
|
// AddBatches insert Bids into the DB
|
||||||
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
|
func (hdb *HistoryDB) AddBatches(batches []common.Batch) error {
|
||||||
return tracerr.Wrap(hdb.addBatches(hdb.dbWrite, batches))
|
return tracerr.Wrap(hdb.addBatches(hdb.db, batches))
|
||||||
}
|
}
|
||||||
func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
|
func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
|
||||||
for i := 0; i < len(batches); i++ {
|
for i := 0; i < len(batches); i++ {
|
||||||
@@ -173,20 +168,20 @@ func (hdb *HistoryDB) addBatches(d meddler.DB, batches []common.Batch) error {
|
|||||||
func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error) {
|
func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error) {
|
||||||
var batch common.Batch
|
var batch common.Batch
|
||||||
err := meddler.QueryRow(
|
err := meddler.QueryRow(
|
||||||
hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
|
hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
|
||||||
batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
|
batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
|
||||||
batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
|
batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
|
||||||
batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`,
|
batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`,
|
||||||
batchNum,
|
batchNum,
|
||||||
)
|
)
|
||||||
return &batch, tracerr.Wrap(err)
|
return &batch, err
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetAllBatches retrieve all batches from the DB
|
// GetAllBatches retrieve all batches from the DB
|
||||||
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
|
func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
|
||||||
var batches []*common.Batch
|
var batches []*common.Batch
|
||||||
err := meddler.QueryAll(
|
err := meddler.QueryAll(
|
||||||
hdb.dbRead, &batches,
|
hdb.db, &batches,
|
||||||
`SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
|
`SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr, batch.fees_collected,
|
||||||
batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
|
batch.fee_idxs_coordinator, batch.state_root, batch.num_accounts, batch.last_idx, batch.exit_root,
|
||||||
batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
|
batch.forge_l1_txs_num, batch.slot_num, batch.total_fees_usd FROM batch
|
||||||
@@ -199,7 +194,7 @@ func (hdb *HistoryDB) GetAllBatches() ([]common.Batch, error) {
|
|||||||
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
|
func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, error) {
|
||||||
var batches []*common.Batch
|
var batches []*common.Batch
|
||||||
err := meddler.QueryAll(
|
err := meddler.QueryAll(
|
||||||
hdb.dbRead, &batches,
|
hdb.db, &batches,
|
||||||
`SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
|
`SELECT batch_num, eth_block_num, forger_addr, fees_collected, fee_idxs_coordinator,
|
||||||
state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
|
state_root, num_accounts, last_idx, exit_root, forge_l1_txs_num, slot_num, total_fees_usd
|
||||||
FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
|
FROM batch WHERE $1 <= batch_num AND batch_num < $2 ORDER BY batch_num;`,
|
||||||
@@ -211,7 +206,7 @@ func (hdb *HistoryDB) GetBatches(from, to common.BatchNum) ([]common.Batch, erro
|
|||||||
// GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
|
// GetFirstBatchBlockNumBySlot returns the ethereum block number of the first
|
||||||
// batch within a slot
|
// batch within a slot
|
||||||
func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
|
func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error) {
|
||||||
row := hdb.dbRead.QueryRow(
|
row := hdb.db.QueryRow(
|
||||||
`SELECT eth_block_num FROM batch
|
`SELECT eth_block_num FROM batch
|
||||||
WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
|
WHERE slot_num = $1 ORDER BY batch_num ASC LIMIT 1;`, slotNum,
|
||||||
)
|
)
|
||||||
@@ -221,7 +216,7 @@ func (hdb *HistoryDB) GetFirstBatchBlockNumBySlot(slotNum int64) (int64, error)
|
|||||||
|
|
||||||
// GetLastBatchNum returns the BatchNum of the latest forged batch
|
// GetLastBatchNum returns the BatchNum of the latest forged batch
|
||||||
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
|
func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
|
||||||
row := hdb.dbRead.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
|
row := hdb.db.QueryRow("SELECT batch_num FROM batch ORDER BY batch_num DESC LIMIT 1;")
|
||||||
var batchNum common.BatchNum
|
var batchNum common.BatchNum
|
||||||
return batchNum, tracerr.Wrap(row.Scan(&batchNum))
|
return batchNum, tracerr.Wrap(row.Scan(&batchNum))
|
||||||
}
|
}
|
||||||
@@ -230,17 +225,17 @@ func (hdb *HistoryDB) GetLastBatchNum() (common.BatchNum, error) {
|
|||||||
func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
|
func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
|
||||||
var batch common.Batch
|
var batch common.Batch
|
||||||
err := meddler.QueryRow(
|
err := meddler.QueryRow(
|
||||||
hdb.dbRead, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
|
hdb.db, &batch, `SELECT batch.batch_num, batch.eth_block_num, batch.forger_addr,
|
||||||
batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
|
batch.fees_collected, batch.fee_idxs_coordinator, batch.state_root,
|
||||||
batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
|
batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
|
||||||
batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
|
batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
|
||||||
)
|
)
|
||||||
return &batch, tracerr.Wrap(err)
|
return &batch, err
|
||||||
}
|
}
|
||||||
|
|
||||||
// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
|
// GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
|
||||||
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
|
func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
|
||||||
row := hdb.dbRead.QueryRow(`SELECT eth_block_num FROM batch
|
row := hdb.db.QueryRow(`SELECT eth_block_num FROM batch
|
||||||
WHERE forge_l1_txs_num IS NOT NULL
|
WHERE forge_l1_txs_num IS NOT NULL
|
||||||
ORDER BY batch_num DESC LIMIT 1;`)
|
ORDER BY batch_num DESC LIMIT 1;`)
|
||||||
var blockNum int64
|
var blockNum int64
|
||||||
@@ -250,7 +245,7 @@ func (hdb *HistoryDB) GetLastL1BatchBlockNum() (int64, error) {
|
|||||||
// GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
|
// GetLastL1TxsNum returns the greatest ForgeL1TxsNum in the DB from forged
|
||||||
// batches. If there's no batch in the DB (nil, nil) is returned.
|
// batches. If there's no batch in the DB (nil, nil) is returned.
|
||||||
func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
|
func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
|
||||||
row := hdb.dbRead.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
|
row := hdb.db.QueryRow("SELECT MAX(forge_l1_txs_num) FROM batch;")
|
||||||
lastL1TxsNum := new(int64)
|
lastL1TxsNum := new(int64)
|
||||||
return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
|
return lastL1TxsNum, tracerr.Wrap(row.Scan(&lastL1TxsNum))
|
||||||
}
|
}
|
||||||
@@ -261,15 +256,15 @@ func (hdb *HistoryDB) GetLastL1TxsNum() (*int64, error) {
|
|||||||
func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
|
func (hdb *HistoryDB) Reorg(lastValidBlock int64) error {
|
||||||
var err error
|
var err error
|
||||||
if lastValidBlock < 0 {
|
if lastValidBlock < 0 {
|
||||||
_, err = hdb.dbWrite.Exec("DELETE FROM block;")
|
_, err = hdb.db.Exec("DELETE FROM block;")
|
||||||
} else {
|
} else {
|
||||||
_, err = hdb.dbWrite.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
|
_, err = hdb.db.Exec("DELETE FROM block WHERE eth_block_num > $1;", lastValidBlock)
|
||||||
}
|
}
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// AddBids insert Bids into the DB
|
// AddBids insert Bids into the DB
|
||||||
func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.dbWrite, bids) }
|
func (hdb *HistoryDB) AddBids(bids []common.Bid) error { return hdb.addBids(hdb.db, bids) }
|
||||||
func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
|
func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
|
||||||
if len(bids) == 0 {
|
if len(bids) == 0 {
|
||||||
return nil
|
return nil
|
||||||
@@ -278,7 +273,7 @@ func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
|
|||||||
return tracerr.Wrap(db.BulkInsert(
|
return tracerr.Wrap(db.BulkInsert(
|
||||||
d,
|
d,
|
||||||
"INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
|
"INSERT INTO bid (slot_num, bid_value, eth_block_num, bidder_addr) VALUES %s;",
|
||||||
bids,
|
bids[:],
|
||||||
))
|
))
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -286,7 +281,7 @@ func (hdb *HistoryDB) addBids(d meddler.DB, bids []common.Bid) error {
 func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
 	var bids []*common.Bid
 	err := meddler.QueryAll(
-		hdb.dbRead, &bids,
+		hdb.db, &bids,
 		`SELECT bid.slot_num, bid.bid_value, bid.eth_block_num, bid.bidder_addr FROM bid
 		ORDER BY item_id;`,
 	)
@@ -297,7 +292,7 @@ func (hdb *HistoryDB) GetAllBids() ([]common.Bid, error) {
 func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinator, error) {
 	bidCoord := &common.BidCoordinator{}
 	err := meddler.QueryRow(
-		hdb.dbRead, bidCoord,
+		hdb.db, bidCoord,
 		`SELECT (
 			SELECT default_slot_set_bid
 			FROM auction_vars
@@ -320,7 +315,7 @@ func (hdb *HistoryDB) GetBestBidCoordinator(slotNum int64) (*common.BidCoordinat
 
 // AddCoordinators insert Coordinators into the DB
 func (hdb *HistoryDB) AddCoordinators(coordinators []common.Coordinator) error {
-	return tracerr.Wrap(hdb.addCoordinators(hdb.dbWrite, coordinators))
+	return tracerr.Wrap(hdb.addCoordinators(hdb.db, coordinators))
 }
 func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordinator) error {
 	if len(coordinators) == 0 {
@@ -329,13 +324,13 @@ func (hdb *HistoryDB) addCoordinators(d meddler.DB, coordinators []common.Coordi
 	return tracerr.Wrap(db.BulkInsert(
 		d,
 		"INSERT INTO coordinator (bidder_addr, forger_addr, eth_block_num, url) VALUES %s;",
-		coordinators,
+		coordinators[:],
 	))
 }
 
 // AddExitTree insert Exit tree into the DB
 func (hdb *HistoryDB) AddExitTree(exitTree []common.ExitInfo) error {
-	return tracerr.Wrap(hdb.addExitTree(hdb.dbWrite, exitTree))
+	return tracerr.Wrap(hdb.addExitTree(hdb.db, exitTree))
 }
 func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) error {
 	if len(exitTree) == 0 {
@@ -345,7 +340,7 @@ func (hdb *HistoryDB) addExitTree(d meddler.DB, exitTree []common.ExitInfo) erro
 		d,
 		"INSERT INTO exit_tree (batch_num, account_idx, merkle_proof, balance, "+
			"instant_withdrawn, delayed_withdraw_request, delayed_withdrawn) VALUES %s;",
-		exitTree,
+		exitTree[:],
 	))
 }
 
@@ -423,13 +418,11 @@ func (hdb *HistoryDB) updateExitTree(d sqlx.Ext, blockNum int64,
 
 // AddToken insert a token into the DB
 func (hdb *HistoryDB) AddToken(token *common.Token) error {
-	return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "token", token))
+	return tracerr.Wrap(meddler.Insert(hdb.db, "token", token))
 }
 
 // AddTokens insert tokens into the DB
-func (hdb *HistoryDB) AddTokens(tokens []common.Token) error {
-	return hdb.addTokens(hdb.dbWrite, tokens)
-}
+func (hdb *HistoryDB) AddTokens(tokens []common.Token) error { return hdb.addTokens(hdb.db, tokens) }
 func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
 	if len(tokens) == 0 {
 		return nil
@@ -450,16 +443,18 @@ func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
 			symbol,
 			decimals
 		) VALUES %s;`,
-		tokens,
+		tokens[:],
 	))
 }
 
-// UpdateTokenValue updates the USD value of a token. Value is the price in
-// USD of a normalized token (1 token = 10^decimals units)
-func (hdb *HistoryDB) UpdateTokenValue(tokenAddr ethCommon.Address, value float64) error {
-	_, err := hdb.dbWrite.Exec(
-		"UPDATE token SET usd = $1 WHERE eth_addr = $2;",
-		value, tokenAddr,
+// UpdateTokenValue updates the USD value of a token
+func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
+	// Sanitize symbol
+	tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
+
+	_, err := hdb.db.Exec(
+		"UPDATE token SET usd = $1 WHERE symbol = $2;",
+		value, tokenSymbol,
 	)
 	return tracerr.Wrap(err)
 }
@@ -468,7 +463,7 @@ func (hdb *HistoryDB) UpdateTokenValue(tokenAddr ethCommon.Address, value float6
 func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
 	token := &TokenWithUSD{}
 	err := meddler.QueryRow(
-		hdb.dbRead, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
+		hdb.db, token, `SELECT * FROM token WHERE token_id = $1;`, tokenID,
 	)
 	return token, tracerr.Wrap(err)
 }
@@ -477,25 +472,34 @@ func (hdb *HistoryDB) GetToken(tokenID common.TokenID) (*TokenWithUSD, error) {
 func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
 	var tokens []*TokenWithUSD
 	err := meddler.QueryAll(
-		hdb.dbRead, &tokens,
+		hdb.db, &tokens,
 		"SELECT * FROM token ORDER BY token_id;",
 	)
 	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
 }
 
-// GetTokenSymbolsAndAddrs returns all the token symbols and addresses from the DB
-func (hdb *HistoryDB) GetTokenSymbolsAndAddrs() ([]TokenSymbolAndAddr, error) {
-	var tokens []*TokenSymbolAndAddr
-	err := meddler.QueryAll(
-		hdb.dbRead, &tokens,
-		"SELECT symbol, eth_addr FROM token;",
-	)
-	return db.SlicePtrsToSlice(tokens).([]TokenSymbolAndAddr), tracerr.Wrap(err)
+// GetTokenSymbols returns all the token symbols from the DB
+func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
+	var tokenSymbols []string
+	rows, err := hdb.db.Query("SELECT symbol FROM token;")
+	if err != nil {
+		return nil, tracerr.Wrap(err)
+	}
+	defer db.RowsClose(rows)
+	sym := new(string)
+	for rows.Next() {
+		err = rows.Scan(sym)
+		if err != nil {
+			return nil, tracerr.Wrap(err)
+		}
+		tokenSymbols = append(tokenSymbols, *sym)
+	}
+	return tokenSymbols, nil
 }
 
 // AddAccounts insert accounts into the DB
 func (hdb *HistoryDB) AddAccounts(accounts []common.Account) error {
-	return tracerr.Wrap(hdb.addAccounts(hdb.dbWrite, accounts))
+	return tracerr.Wrap(hdb.addAccounts(hdb.db, accounts))
 }
 func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error {
 	if len(accounts) == 0 {
@@ -510,7 +514,7 @@ func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error
 			bjj,
 			eth_addr
 		) VALUES %s;`,
-		accounts,
+		accounts[:],
 	))
 }
 
@@ -518,49 +522,18 @@ func (hdb *HistoryDB) addAccounts(d meddler.DB, accounts []common.Account) error
 func (hdb *HistoryDB) GetAllAccounts() ([]common.Account, error) {
 	var accs []*common.Account
 	err := meddler.QueryAll(
-		hdb.dbRead, &accs,
+		hdb.db, &accs,
 		"SELECT idx, token_id, batch_num, bjj, eth_addr FROM account ORDER BY idx;",
 	)
 	return db.SlicePtrsToSlice(accs).([]common.Account), tracerr.Wrap(err)
 }
 
-// AddAccountUpdates inserts accUpdates into the DB
-func (hdb *HistoryDB) AddAccountUpdates(accUpdates []common.AccountUpdate) error {
-	return tracerr.Wrap(hdb.addAccountUpdates(hdb.dbWrite, accUpdates))
-}
-func (hdb *HistoryDB) addAccountUpdates(d meddler.DB, accUpdates []common.AccountUpdate) error {
-	if len(accUpdates) == 0 {
-		return nil
-	}
-	return tracerr.Wrap(db.BulkInsert(
-		d,
-		`INSERT INTO account_update (
-			eth_block_num,
-			batch_num,
-			idx,
-			nonce,
-			balance
-		) VALUES %s;`,
-		accUpdates,
-	))
-}
-
-// GetAllAccountUpdates returns all the AccountUpdate from the DB
-func (hdb *HistoryDB) GetAllAccountUpdates() ([]common.AccountUpdate, error) {
-	var accUpdates []*common.AccountUpdate
-	err := meddler.QueryAll(
-		hdb.dbRead, &accUpdates,
-		"SELECT eth_block_num, batch_num, idx, nonce, balance FROM account_update ORDER BY idx;",
-	)
-	return db.SlicePtrsToSlice(accUpdates).([]common.AccountUpdate), tracerr.Wrap(err)
-}
-
 // AddL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
 // If the tx is originated by a coordinator, BatchNum must be provided. If it's originated by a user,
 // BatchNum should be null, and the value will be setted by a trigger when a batch forges the tx.
 // EffectiveAmount and EffectiveDepositAmount are seted with default values by the DB.
 func (hdb *HistoryDB) AddL1Txs(l1txs []common.L1Tx) error {
-	return tracerr.Wrap(hdb.addL1Txs(hdb.dbWrite, l1txs))
+	return tracerr.Wrap(hdb.addL1Txs(hdb.db, l1txs))
 }
 
 // addL1Txs inserts L1 txs to the DB. USD and DepositAmountUSD will be set automatically before storing the tx.
@@ -614,7 +587,7 @@ func (hdb *HistoryDB) addL1Txs(d meddler.DB, l1txs []common.L1Tx) error {
 
 // AddL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
 func (hdb *HistoryDB) AddL2Txs(l2txs []common.L2Tx) error {
-	return tracerr.Wrap(hdb.addL2Txs(hdb.dbWrite, l2txs))
+	return tracerr.Wrap(hdb.addL2Txs(hdb.db, l2txs))
 }
 
 // addL2Txs inserts L2 txs to the DB. TokenID, USD and FeeUSD will be set automatically before storing the tx.
@@ -673,7 +646,7 @@ func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
 			fee,
 			nonce
 		) VALUES %s;`,
-		txs,
+		txs[:],
 	))
 }
 
@@ -681,7 +654,7 @@ func (hdb *HistoryDB) addTxs(d meddler.DB, txs []txWrite) error {
 func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
 	var exits []*common.ExitInfo
 	err := meddler.QueryAll(
-		hdb.dbRead, &exits,
+		hdb.db, &exits,
 		`SELECT exit_tree.batch_num, exit_tree.account_idx, exit_tree.merkle_proof,
 		exit_tree.balance, exit_tree.instant_withdrawn, exit_tree.delayed_withdraw_request,
 		exit_tree.delayed_withdrawn FROM exit_tree ORDER BY item_id;`,
@@ -693,11 +666,11 @@ func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
 func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
 	var txs []*common.L1Tx
 	err := meddler.QueryAll(
-		hdb.dbRead, &txs,
+		hdb.db, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
 		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 		tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
-		tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE 0 END) AS effective_amount,
-		tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE 0 END) AS effective_deposit_amount,
+		tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
+		tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
 		tx.eth_block_num, tx.type, tx.batch_num
 		FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
 	)
@@ -710,7 +683,7 @@ func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
 	// Since the query specifies that only coordinator txs are returned, it's safe to assume
 	// that returned txs will always have effective amounts
 	err := meddler.QueryAll(
-		hdb.dbRead, &txs,
+		hdb.db, &txs,
 		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 		tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
 		tx.amount, tx.amount AS effective_amount,
@@ -725,7 +698,7 @@ func (hdb *HistoryDB) GetAllL1CoordinatorTxs() ([]common.L1Tx, error) {
 func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
 	var txs []*common.L2Tx
 	err := meddler.QueryAll(
-		hdb.dbRead, &txs,
+		hdb.db, &txs,
 		`SELECT tx.id, tx.batch_num, tx.position,
 		tx.from_idx, tx.to_idx, tx.amount, tx.token_id,
 		tx.fee, tx.nonce, tx.type, tx.eth_block_num
@@ -738,7 +711,7 @@ func (hdb *HistoryDB) GetAllL2Txs() ([]common.L2Tx, error) {
 func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx, error) {
 	var txs []*common.L1Tx
 	err := meddler.QueryAll(
-		hdb.dbRead, &txs, // only L1 user txs can have batch_num set to null
+		hdb.db, &txs, // only L1 user txs can have batch_num set to null
 		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 		tx.from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
 		tx.amount, NULL AS effective_amount,
@@ -751,21 +724,11 @@ func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx
 	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
 }
 
-// GetUnforgedL1UserTxsCount returns the count of unforged L1Txs (either in
-// open or frozen queues that are not yet forged)
-func (hdb *HistoryDB) GetUnforgedL1UserTxsCount() (int, error) {
-	row := hdb.dbRead.QueryRow(
-		`SELECT COUNT(*) FROM tx WHERE batch_num IS NULL;`,
-	)
-	var count int
-	return count, tracerr.Wrap(row.Scan(&count))
-}
-
 // TODO: Think about chaning all the queries that return a last value, to queries that return the next valid value.
 
 // GetLastTxsPosition for a given to_forge_l1_txs_num
 func (hdb *HistoryDB) GetLastTxsPosition(toForgeL1TxsNum int64) (int, error) {
-	row := hdb.dbRead.QueryRow(
+	row := hdb.db.QueryRow(
 		"SELECT position FROM tx WHERE to_forge_l1_txs_num = $1 ORDER BY position DESC;",
 		toForgeL1TxsNum,
 	)
@@ -779,15 +742,15 @@ func (hdb *HistoryDB) GetSCVars() (*common.RollupVariables, *common.AuctionVaria
 	var rollup common.RollupVariables
 	var auction common.AuctionVariables
 	var wDelayer common.WDelayerVariables
-	if err := meddler.QueryRow(hdb.dbRead, &rollup,
+	if err := meddler.QueryRow(hdb.db, &rollup,
 		"SELECT * FROM rollup_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
 		return nil, nil, nil, tracerr.Wrap(err)
 	}
-	if err := meddler.QueryRow(hdb.dbRead, &auction,
+	if err := meddler.QueryRow(hdb.db, &auction,
 		"SELECT * FROM auction_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
 		return nil, nil, nil, tracerr.Wrap(err)
 	}
-	if err := meddler.QueryRow(hdb.dbRead, &wDelayer,
+	if err := meddler.QueryRow(hdb.db, &wDelayer,
 		"SELECT * FROM wdelayer_vars ORDER BY eth_block_num DESC LIMIT 1;"); err != nil {
 		return nil, nil, nil, tracerr.Wrap(err)
 	}
@@ -818,7 +781,7 @@ func (hdb *HistoryDB) addBucketUpdates(d meddler.DB, bucketUpdates []common.Buck
 			block_stamp,
 			withdrawals
 		) VALUES %s;`,
-		bucketUpdates,
+		bucketUpdates[:],
 	))
 }
 
@@ -832,25 +795,13 @@ func (hdb *HistoryDB) AddBucketUpdatesTest(d meddler.DB, bucketUpdates []common.
 func (hdb *HistoryDB) GetAllBucketUpdates() ([]common.BucketUpdate, error) {
 	var bucketUpdates []*common.BucketUpdate
 	err := meddler.QueryAll(
-		hdb.dbRead, &bucketUpdates,
+		hdb.db, &bucketUpdates,
 		`SELECT eth_block_num, num_bucket, block_stamp, withdrawals
 		FROM bucket_update ORDER BY item_id;`,
 	)
 	return db.SlicePtrsToSlice(bucketUpdates).([]common.BucketUpdate), tracerr.Wrap(err)
 }
 
-func (hdb *HistoryDB) getMinBidInfo(d meddler.DB,
-	currentSlot, lastClosedSlot int64) ([]MinBidInfo, error) {
-	minBidInfo := []*MinBidInfo{}
-	query := `
-		SELECT DISTINCT default_slot_set_bid, default_slot_set_bid_slot_num FROM auction_vars
-		WHERE default_slot_set_bid_slot_num < $1
-		ORDER BY default_slot_set_bid_slot_num DESC
-		LIMIT $2;`
-	err := meddler.QueryAll(d, &minBidInfo, query, lastClosedSlot, int(lastClosedSlot-currentSlot)+1)
-	return db.SlicePtrsToSlice(minBidInfo).([]MinBidInfo), tracerr.Wrap(err)
-}
-
 func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.TokenExchange) error {
 	if len(tokenExchanges) == 0 {
 		return nil
@@ -862,7 +813,7 @@ func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.To
 			eth_addr,
 			value_usd
 		) VALUES %s;`,
-		tokenExchanges,
+		tokenExchanges[:],
 	))
 }
 
@@ -870,7 +821,7 @@ func (hdb *HistoryDB) addTokenExchanges(d meddler.DB, tokenExchanges []common.To
 func (hdb *HistoryDB) GetAllTokenExchanges() ([]common.TokenExchange, error) {
 	var tokenExchanges []*common.TokenExchange
 	err := meddler.QueryAll(
-		hdb.dbRead, &tokenExchanges,
+		hdb.db, &tokenExchanges,
 		"SELECT eth_block_num, eth_addr, value_usd FROM token_exchange ORDER BY item_id;",
 	)
 	return db.SlicePtrsToSlice(tokenExchanges).([]common.TokenExchange), tracerr.Wrap(err)
@@ -890,7 +841,7 @@ func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
 			token_addr,
 			amount
 		) VALUES %s;`,
-		escapeHatchWithdrawals,
+		escapeHatchWithdrawals[:],
 	))
 }
 
@@ -898,7 +849,7 @@ func (hdb *HistoryDB) addEscapeHatchWithdrawals(d meddler.DB,
 func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHatchWithdrawal, error) {
 	var escapeHatchWithdrawals []*common.WDelayerEscapeHatchWithdrawal
 	err := meddler.QueryAll(
-		hdb.dbRead, &escapeHatchWithdrawals,
+		hdb.db, &escapeHatchWithdrawals,
 		"SELECT eth_block_num, who_addr, to_addr, token_addr, amount FROM escape_hatch_withdrawal ORDER BY item_id;",
 	)
 	return db.SlicePtrsToSlice(escapeHatchWithdrawals).([]common.WDelayerEscapeHatchWithdrawal),
@@ -911,7 +862,7 @@ func (hdb *HistoryDB) GetAllEscapeHatchWithdrawals() ([]common.WDelayerEscapeHat
 // exist in the smart contracts.
 func (hdb *HistoryDB) SetInitialSCVars(rollup *common.RollupVariables,
 	auction *common.AuctionVariables, wDelayer *common.WDelayerVariables) error {
-	txn, err := hdb.dbWrite.Beginx()
+	txn, err := hdb.db.Beginx()
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
@@ -995,7 +946,7 @@ func (hdb *HistoryDB) setExtraInfoForgedL1UserTxs(d sqlx.Ext, txs []common.L1Tx)
 // the pagination system of the API/DB depends on this. Within blocks, all
 // items should also be in the correct order (Accounts, Tokens, Txs, etc.)
 func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
-	txn, err := hdb.dbWrite.Beginx()
+	txn, err := hdb.db.Beginx()
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
@@ -1067,11 +1018,6 @@ func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
 			return tracerr.Wrap(err)
 		}
 
-		// Add accountBalances if it exists
-		if err := hdb.addAccountUpdates(txn, batch.UpdatedAccounts); err != nil {
-			return tracerr.Wrap(err)
-		}
-
 		// Set the EffectiveAmount and EffectiveDepositAmount of all the
 		// L1UserTxs that have been forged in this batch
 		if err = hdb.setExtraInfoForgedL1UserTxs(txn, batch.L1UserTxs); err != nil {
@@ -1149,17 +1095,28 @@ func (hdb *HistoryDB) AddBlockSCData(blockData *common.BlockData) (err error) {
 	return tracerr.Wrap(txn.Commit())
 }
 
+// GetCoordinatorAPI returns a coordinator by its bidderAddr
+func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
+	coordinator := &CoordinatorAPI{}
+	err := meddler.QueryRow(
+		hdb.db, coordinator,
+		"SELECT * FROM coordinator WHERE bidder_addr = $1 ORDER BY item_id DESC LIMIT 1;",
+		bidderAddr,
+	)
+	return coordinator, tracerr.Wrap(err)
+}
+
 // AddAuctionVars insert auction vars into the DB
 func (hdb *HistoryDB) AddAuctionVars(auctionVars *common.AuctionVariables) error {
-	return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "auction_vars", auctionVars))
+	return tracerr.Wrap(meddler.Insert(hdb.db, "auction_vars", auctionVars))
 }
 
 // GetTokensTest used to get tokens in a testing context
 func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
 	tokens := []*TokenWithUSD{}
 	if err := meddler.QueryAll(
-		hdb.dbRead, &tokens,
-		"SELECT * FROM token ORDER BY token_id ASC",
+		hdb.db, &tokens,
+		"SELECT * FROM TOKEN",
 	); err != nil {
 		return nil, tracerr.Wrap(err)
 	}
@@ -1168,60 +1125,3 @@ func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
|
|||||||
}
|
}
 	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil
 }
 
-const (
-    // CreateAccountExtraFeePercentage is the multiplication factor over
-    // the average fee for CreateAccount that is applied to obtain the
-    // recommended fee for CreateAccount
-    CreateAccountExtraFeePercentage float64 = 2.5
-    // CreateAccountInternalExtraFeePercentage is the multiplication factor
-    // over the average fee for CreateAccountInternal that is applied to
-    // obtain the recommended fee for CreateAccountInternal
-    CreateAccountInternalExtraFeePercentage float64 = 2.0
-)
-
-// GetRecommendedFee returns the RecommendedFee information
-func (hdb *HistoryDB) GetRecommendedFee(minFeeUSD, maxFeeUSD float64) (*common.RecommendedFee, error) {
-    var recommendedFee common.RecommendedFee
-    // Get total txs and the batch of the first selected tx of the last hour
-    type totalTxsSinceBatchNum struct {
-        TotalTxs      int             `meddler:"total_txs"`
-        FirstBatchNum common.BatchNum `meddler:"batch_num"`
-    }
-    ttsbn := &totalTxsSinceBatchNum{}
-    if err := meddler.QueryRow(
-        hdb.dbRead, ttsbn, `SELECT COUNT(tx.*) as total_txs,
-        COALESCE (MIN(tx.batch_num), 0) as batch_num
-        FROM tx INNER JOIN block ON tx.eth_block_num = block.eth_block_num
-        WHERE block.timestamp >= NOW() - INTERVAL '1 HOURS';`,
-    ); err != nil {
-        return nil, tracerr.Wrap(err)
-    }
-    // Get the amount of batches and accumulated fees for the last hour
-    type totalBatchesAndFee struct {
-        TotalBatches int     `meddler:"total_batches"`
-        TotalFees    float64 `meddler:"total_fees"`
-    }
-    tbf := &totalBatchesAndFee{}
-    if err := meddler.QueryRow(
-        hdb.dbRead, tbf, `SELECT COUNT(*) AS total_batches,
-        COALESCE (SUM(total_fees_usd), 0) AS total_fees FROM batch
-        WHERE batch_num > $1;`, ttsbn.FirstBatchNum,
-    ); err != nil {
-        return nil, tracerr.Wrap(err)
-    }
-    // Update NodeInfo struct
-    var avgTransactionFee float64
-    if ttsbn.TotalTxs > 0 {
-        avgTransactionFee = tbf.TotalFees / float64(ttsbn.TotalTxs)
-    } else {
-        avgTransactionFee = 0
-    }
-    recommendedFee.ExistingAccount = math.Min(maxFeeUSD,
-        math.Max(avgTransactionFee, minFeeUSD))
-    recommendedFee.CreatesAccount = math.Min(maxFeeUSD,
-        math.Max(CreateAccountExtraFeePercentage*avgTransactionFee, minFeeUSD))
-    recommendedFee.CreatesAccountInternal = math.Min(maxFeeUSD,
-        math.Max(CreateAccountInternalExtraFeePercentage*avgTransactionFee, minFeeUSD))
-    return &recommendedFee, nil
-}
@@ -11,7 +11,6 @@ import (
 	"time"
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	dbUtils "github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/hermez-node/log"
@@ -40,12 +39,12 @@ func TestMain(m *testing.M) {
 	if err != nil {
 		panic(err)
 	}
-	historyDB = NewHistoryDB(db, db, nil)
+	historyDB = NewHistoryDB(db, nil)
 	if err != nil {
 		panic(err)
 	}
-	apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
-	historyDBWithACC = NewHistoryDB(db, db, apiConnCon)
+	apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
+	historyDBWithACC = NewHistoryDB(db, apiConnCon)
 	// Run tests
 	result := m.Run()
 	// Close DB
@@ -167,7 +166,7 @@ func TestBatches(t *testing.T) {
 		if i%2 != 0 {
 			// Set value to the token
 			value := (float64(i) + 5) * 5.389329
-			assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
+			assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
 			tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
 		}
 	}
@@ -277,7 +276,7 @@ func TestTokens(t *testing.T) {
 	// Update token value
 	for i, token := range tokens {
 		value := 1.01 * float64(i)
-		assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
+		assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
 	}
 	// Fetch tokens
 	fetchedTokens, err = historyDB.GetTokensTest()
@@ -303,7 +302,7 @@ func TestTokensUTF8(t *testing.T) {
 	// Generate fake tokens
 	const nTokens = 5
 	tokens, ethToken := test.GenTokens(nTokens, blocks)
-	nonUTFTokens := make([]common.Token, len(tokens))
+	nonUTFTokens := make([]common.Token, len(tokens)+1)
 	// Force token.name and token.symbol to be non UTF-8 Strings
 	for i, token := range tokens {
 		token.Name = fmt.Sprint("NON-UTF8-NAME-\xc5-", i)
@@ -333,7 +332,7 @@ func TestTokensUTF8(t *testing.T) {
 	// Update token value
 	for i, token := range nonUTFTokens {
 		value := 1.01 * float64(i)
-		assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
+		assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
 	}
 	// Fetch tokens
 	fetchedTokens, err = historyDB.GetTokensTest()
@@ -378,22 +377,6 @@ func TestAccounts(t *testing.T) {
 		accs[i].Balance = nil
 		assert.Equal(t, accs[i], acc)
 	}
-	// Test AccountBalances
-	accUpdates := make([]common.AccountUpdate, len(accs))
-	for i, acc := range accs {
-		accUpdates[i] = common.AccountUpdate{
-			EthBlockNum: batches[acc.BatchNum-1].EthBlockNum,
-			BatchNum:    acc.BatchNum,
-			Idx:         acc.Idx,
-			Nonce:       common.Nonce(i),
-			Balance:     big.NewInt(int64(i)),
-		}
-	}
-	err = historyDB.AddAccountUpdates(accUpdates)
-	require.NoError(t, err)
-	fetchedAccBalances, err := historyDB.GetAllAccountUpdates()
-	require.NoError(t, err)
-	assert.Equal(t, accUpdates, fetchedAccBalances)
 }
 
 func TestTxs(t *testing.T) {
@@ -721,10 +704,6 @@ func TestGetUnforgedL1UserTxs(t *testing.T) {
 	assert.Equal(t, 5, len(l1UserTxs))
 	assert.Equal(t, blocks[0].Rollup.L1UserTxs, l1UserTxs)
-
-	count, err := historyDB.GetUnforgedL1UserTxsCount()
-	require.NoError(t, err)
-	assert.Equal(t, 5, count)
 
 	// No l1UserTxs for this toForgeL1TxsNum
 	l1UserTxs, err = historyDB.GetUnforgedL1UserTxs(2)
 	require.NoError(t, err)
@@ -822,11 +801,11 @@ func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
 	}
 	// Add second batch to trigger the update of the batch_num,
 	// while avoiding the implicit call of setExtraInfoForgedL1UserTxs
-	err = historyDB.addBlock(historyDB.dbWrite, &blocks[1].Block)
+	err = historyDB.addBlock(historyDB.db, &blocks[1].Block)
 	require.NoError(t, err)
-	err = historyDB.addBatch(historyDB.dbWrite, &blocks[1].Rollup.Batches[0].Batch)
+	err = historyDB.addBatch(historyDB.db, &blocks[1].Rollup.Batches[0].Batch)
 	require.NoError(t, err)
-	err = historyDB.addAccounts(historyDB.dbWrite, blocks[1].Rollup.Batches[0].CreatedAccounts)
+	err = historyDB.addAccounts(historyDB.db, blocks[1].Rollup.Batches[0].CreatedAccounts)
 	require.NoError(t, err)
 
 	// Set the Effective{Amount,DepositAmount} of the L1UserTxs that are forged in the second block
@@ -836,7 +815,7 @@ func TestSetExtraInfoForgedL1UserTxs(t *testing.T) {
 	l1Txs[1].EffectiveAmount = big.NewInt(0)
 	l1Txs[2].EffectiveDepositAmount = big.NewInt(0)
 	l1Txs[2].EffectiveAmount = big.NewInt(0)
-	err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.dbWrite, l1Txs)
+	err = historyDB.setExtraInfoForgedL1UserTxs(historyDB.db, l1Txs)
 	require.NoError(t, err)
 
 	dbL1Txs, err := historyDB.GetAllL1UserTxs()
@@ -923,10 +902,10 @@ func TestUpdateExitTree(t *testing.T) {
 		common.WithdrawInfo{Idx: 259, NumExitRoot: 3, InstantWithdraw: false,
 			Owner: tc.UsersByIdx[259].Addr, Token: tokenAddr},
 	)
-	err = historyDB.addBlock(historyDB.dbWrite, &block.Block)
+	err = historyDB.addBlock(historyDB.db, &block.Block)
 	require.NoError(t, err)
 
-	err = historyDB.updateExitTree(historyDB.dbWrite, block.Block.Num,
+	err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
 		block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
 	require.NoError(t, err)
 
@@ -956,10 +935,10 @@ func TestUpdateExitTree(t *testing.T) {
 		Token:  tokenAddr,
 		Amount: big.NewInt(80),
 	})
-	err = historyDB.addBlock(historyDB.dbWrite, &block.Block)
+	err = historyDB.addBlock(historyDB.db, &block.Block)
 	require.NoError(t, err)
 
-	err = historyDB.updateExitTree(historyDB.dbWrite, block.Block.Num,
+	err = historyDB.updateExitTree(historyDB.db, block.Block.Num,
 		block.Rollup.Withdrawals, block.WDelayer.Withdrawals)
 	require.NoError(t, err)
 
@@ -1002,7 +981,7 @@ func TestGetBestBidCoordinator(t *testing.T) {
 			URL:    "bar",
 		},
 	}
-	err = historyDB.addCoordinators(historyDB.dbWrite, coords)
+	err = historyDB.addCoordinators(historyDB.db, coords)
 	require.NoError(t, err)
 
 	bids := []common.Bid{
@@ -1020,7 +999,7 @@ func TestGetBestBidCoordinator(t *testing.T) {
 		},
 	}
 
-	err = historyDB.addBids(historyDB.dbWrite, bids)
+	err = historyDB.addBids(historyDB.db, bids)
 	require.NoError(t, err)
 
 	forger10, err := historyDB.GetBestBidCoordinator(10)
@@ -1058,7 +1037,7 @@ func TestAddBucketUpdates(t *testing.T) {
 			Withdrawals: big.NewInt(42),
 		},
 	}
-	err := historyDB.addBucketUpdates(historyDB.dbWrite, bucketUpdates)
+	err := historyDB.addBucketUpdates(historyDB.db, bucketUpdates)
 	require.NoError(t, err)
 	dbBucketUpdates, err := historyDB.GetAllBucketUpdates()
 	require.NoError(t, err)
@@ -1083,7 +1062,7 @@ func TestAddTokenExchanges(t *testing.T) {
 			ValueUSD: 67890,
 		},
 	}
-	err := historyDB.addTokenExchanges(historyDB.dbWrite, tokenExchanges)
+	err := historyDB.addTokenExchanges(historyDB.db, tokenExchanges)
 	require.NoError(t, err)
 	dbTokenExchanges, err := historyDB.GetAllTokenExchanges()
 	require.NoError(t, err)
@@ -1112,7 +1091,7 @@ func TestAddEscapeHatchWithdrawals(t *testing.T) {
 			Amount: big.NewInt(20003),
 		},
 	}
-	err := historyDB.addEscapeHatchWithdrawals(historyDB.dbWrite, escapeHatchWithdrawals)
+	err := historyDB.addEscapeHatchWithdrawals(historyDB.db, escapeHatchWithdrawals)
 	require.NoError(t, err)
 	dbEscapeHatchWithdrawals, err := historyDB.GetAllEscapeHatchWithdrawals()
 	require.NoError(t, err)
@@ -1177,17 +1156,21 @@ func TestGetMetricsAPI(t *testing.T) {
 		assert.NoError(t, err)
 	}
 
-	res, _, err := historyDB.GetMetricsInternalAPI(common.BatchNum(numBatches))
+	res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
 	assert.NoError(t, err)
 
-	assert.Equal(t, float64(numTx)/float64(numBatches), res.TransactionsPerBatch)
+	assert.Equal(t, float64(numTx)/float64(numBatches-1), res.TransactionsPerBatch)
 
 	// Frequency is not exactly the desired one, some decimals may appear
-	// There is a -2 as time for first and last batch is not taken into account
-	assert.InEpsilon(t, float64(frequency)*float64(numBatches-2)/float64(numBatches), res.BatchFrequency, 0.01)
-	assert.InEpsilon(t, float64(numTx)/float64(frequency*blockNum-frequency), res.TransactionsPerSecond, 0.01)
-	assert.Equal(t, int64(3), res.TokenAccounts)
-	assert.Equal(t, int64(3), res.Wallets)
+	assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
+	assert.Less(t, res.BatchFrequency, float64(frequency+1))
+	// Truncate frequency into an int to do an exact check
+	assert.Equal(t, frequency, int(res.BatchFrequency))
+	// This may also be different in some decimals
+	// Truncate it to the third decimal to compare
+	assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
+	assert.Equal(t, int64(3), res.TotalAccounts)
+	assert.Equal(t, int64(3), res.TotalBJJs)
 	// Til does not set fees
 	assert.Equal(t, float64(0), res.AvgTransactionFee)
 }
@@ -1212,8 +1195,7 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
 	set = append(set, til.Instruction{Typ: til.TypeNewBlock})
 
 	// Transfers
-	const numBlocks int = 30
-	for x := 0; x < numBlocks; x++ {
+	for x := 0; x < 6000; x++ {
 		set = append(set, til.Instruction{
 			Typ:     common.TxTypeTransfer,
 			TokenID: common.TokenID(0),
@@ -1237,40 +1219,51 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
 	err = tc.FillBlocksExtra(blocks, &tilCfgExtra)
 	require.NoError(t, err)
 
-	const numBatches int = 2 + numBlocks
-	const blockNum = 4 + numBlocks
+	const numBatches int = 6002
+	const numTx int = 6003
+	const blockNum = 6005 - 1
 
 	// Sanity check
 	require.Equal(t, blockNum, len(blocks))
 
 	// Adding one batch per block
 	// batch frequency can be chosen
-	const blockTime time.Duration = 3600 * time.Second
-	now := time.Now()
-	require.NoError(t, err)
+	const frequency int = 15
 
 	for i := range blocks {
-		blocks[i].Block.Timestamp = now.Add(-time.Duration(len(blocks)-1-i) * blockTime)
+		blocks[i].Block.Timestamp = time.Now().Add(-time.Second * time.Duration(frequency*(len(blocks)-i)))
 		err = historyDB.AddBlockSCData(&blocks[i])
 		assert.NoError(t, err)
 	}
 
-	res, _, err := historyDBWithACC.GetMetricsInternalAPI(common.BatchNum(numBatches))
+	res, err := historyDBWithACC.GetMetricsAPI(common.BatchNum(numBatches))
 	assert.NoError(t, err)
 
-	assert.InEpsilon(t, 1.0, res.TransactionsPerBatch, 0.1)
+	assert.Equal(t, math.Trunc((float64(numTx)/float64(numBatches-1))/0.001)*0.001, math.Trunc(res.TransactionsPerBatch/0.001)*0.001)
 
-	assert.InEpsilon(t, res.BatchFrequency, float64(blockTime/time.Second), 0.1)
-	assert.InEpsilon(t, 1.0/float64(blockTime/time.Second), res.TransactionsPerSecond, 0.1)
-	assert.Equal(t, int64(3), res.TokenAccounts)
-	assert.Equal(t, int64(3), res.Wallets)
+	// Frequency is not exactly the desired one, some decimals may appear
+	assert.GreaterOrEqual(t, res.BatchFrequency, float64(frequency))
+	assert.Less(t, res.BatchFrequency, float64(frequency+1))
+	// Truncate frequency into an int to do an exact check
+	assert.Equal(t, frequency, int(res.BatchFrequency))
+	// This may also be different in some decimals
+	// Truncate it to the third decimal to compare
+	assert.Equal(t, math.Trunc((float64(numTx)/float64(frequency*blockNum-frequency))/0.001)*0.001, math.Trunc(res.TransactionsPerSecond/0.001)*0.001)
+	assert.Equal(t, int64(3), res.TotalAccounts)
+	assert.Equal(t, int64(3), res.TotalBJJs)
 	// Til does not set fees
 	assert.Equal(t, float64(0), res.AvgTransactionFee)
 }
 
 func TestGetMetricsAPIEmpty(t *testing.T) {
 	test.WipeDB(historyDB.DB())
-	_, _, err := historyDBWithACC.GetMetricsInternalAPI(0)
+	_, err := historyDBWithACC.GetMetricsAPI(0)
+	assert.NoError(t, err)
+}
+
+func TestGetAvgTxFeeEmpty(t *testing.T) {
+	test.WipeDB(historyDB.DB())
+	_, err := historyDBWithACC.GetAvgTxFeeAPI()
 	assert.NoError(t, err)
 }
 
@@ -1459,128 +1452,3 @@ func setTestBlocks(from, to int64) []common.Block {
 	}
 	return blocks
 }
-
-func TestNodeInfo(t *testing.T) {
-	test.WipeDB(historyDB.DB())
-
-	err := historyDB.SetStateInternalAPI(&StateAPI{})
-	require.NoError(t, err)
-
-	clientSetup := test.NewClientSetupExample()
-	constants := &Constants{
-		SCConsts: common.SCConsts{
-			Rollup:   *clientSetup.RollupConstants,
-			Auction:  *clientSetup.AuctionConstants,
-			WDelayer: *clientSetup.WDelayerConstants,
-		},
-		ChainID:       42,
-		HermezAddress: clientSetup.AuctionConstants.HermezRollup,
-	}
-	err = historyDB.SetConstants(constants)
-	require.NoError(t, err)
-
-	// Test parameters
-	var f64 float64 = 1.2
-	var i64 int64 = 8888
-	addr := ethCommon.HexToAddress("0x1234")
-	hash := ethCommon.HexToHash("0x5678")
-	stateAPI := &StateAPI{
-		NodePublicInfo: NodePublicInfo{
-			ForgeDelay: 3.1,
-		},
-		Network: NetworkAPI{
-			LastEthBlock:  12,
-			LastSyncBlock: 34,
-			LastBatch: &BatchAPI{
-				ItemID:       123,
-				BatchNum:     456,
-				EthBlockNum:  789,
-				EthBlockHash: hash,
-				Timestamp:    time.Now(),
-				ForgerAddr:   addr,
-				// CollectedFeesDB: map[common.TokenID]*big.Int{
-				// 	0: big.NewInt(11111),
-				// 	1: big.NewInt(21111),
-				// 	2: big.NewInt(31111),
-				// },
-				CollectedFeesAPI: apitypes.CollectedFeesAPI(map[common.TokenID]apitypes.BigIntStr{
-					0: apitypes.BigIntStr("11111"),
-					1: apitypes.BigIntStr("21111"),
-					2: apitypes.BigIntStr("31111"),
-				}),
-				TotalFeesUSD:  &f64,
-				StateRoot:     apitypes.BigIntStr("1234"),
-				NumAccounts:   11,
-				ExitRoot:      apitypes.BigIntStr("5678"),
-				ForgeL1TxsNum: &i64,
-				SlotNum:       44,
-				ForgedTxs:     23,
-				TotalItems:    0,
-				FirstItem:     0,
-				LastItem:      0,
-			},
-			CurrentSlot: 22,
-			NextForgers: []NextForgerAPI{
-				{
-					Coordinator: CoordinatorAPI{
-						ItemID:      111,
-						Bidder:      addr,
-						Forger:      addr,
-						EthBlockNum: 566,
-						URL:         "asd",
-						TotalItems:  0,
-						FirstItem:   0,
-						LastItem:    0,
-					},
-					Period: Period{
-						SlotNum:       33,
-						FromBlock:     55,
-						ToBlock:       66,
-						FromTimestamp: time.Now(),
-						ToTimestamp:   time.Now(),
-					},
-				},
-			},
-		},
-		Metrics: MetricsAPI{
-			TransactionsPerBatch: 1.1,
-			TokenAccounts:        42,
-		},
-		Rollup:            *NewRollupVariablesAPI(clientSetup.RollupVariables),
-		Auction:           *NewAuctionVariablesAPI(clientSetup.AuctionVariables),
-		WithdrawalDelayer: *clientSetup.WDelayerVariables,
-		RecommendedFee: common.RecommendedFee{
-			ExistingAccount: 0.15,
-		},
-	}
-	err = historyDB.SetStateInternalAPI(stateAPI)
-	require.NoError(t, err)
-
-	nodeConfig := &NodeConfig{
-		MaxPoolTxs: 123,
-		MinFeeUSD:  0.5,
-	}
-	err = historyDB.SetNodeConfig(nodeConfig)
-	require.NoError(t, err)
-
-	dbConstants, err := historyDB.GetConstants()
-	require.NoError(t, err)
-	assert.Equal(t, constants, dbConstants)
-
-	dbNodeConfig, err := historyDB.GetNodeConfig()
-	require.NoError(t, err)
-	assert.Equal(t, nodeConfig, dbNodeConfig)
-
-	dbStateAPI, err := historyDB.getStateAPI(historyDB.dbRead)
-	require.NoError(t, err)
-	assert.Equal(t, stateAPI.Network.LastBatch.Timestamp.Unix(),
-		dbStateAPI.Network.LastBatch.Timestamp.Unix())
-	dbStateAPI.Network.LastBatch.Timestamp = stateAPI.Network.LastBatch.Timestamp
-	assert.Equal(t, stateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix(),
-		dbStateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix())
-	dbStateAPI.Network.NextForgers[0].Period.FromTimestamp = stateAPI.Network.NextForgers[0].Period.FromTimestamp
-	assert.Equal(t, stateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix(),
-		dbStateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix())
-	dbStateAPI.Network.NextForgers[0].Period.ToTimestamp = stateAPI.Network.NextForgers[0].Period.ToTimestamp
-	assert.Equal(t, stateAPI, dbStateAPI)
-}
@@ -1,173 +0,0 @@
-package historydb
-
-import (
-	"time"
-
-	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
-	"github.com/hermeznetwork/hermez-node/common"
-	"github.com/hermeznetwork/tracerr"
-	"github.com/russross/meddler"
-)
-
-// Period represents a time period in ethereum
-type Period struct {
-	SlotNum       int64     `json:"slotNum"`
-	FromBlock     int64     `json:"fromBlock"`
-	ToBlock       int64     `json:"toBlock"`
-	FromTimestamp time.Time `json:"fromTimestamp"`
-	ToTimestamp   time.Time `json:"toTimestamp"`
-}
-
-// NextForgerAPI represents the next forger exposed via the API
-type NextForgerAPI struct {
-	Coordinator CoordinatorAPI `json:"coordinator"`
-	Period      Period         `json:"period"`
-}
-
-// NetworkAPI is the network state exposed via the API
-type NetworkAPI struct {
-	LastEthBlock  int64           `json:"lastEthereumBlock"`
-	LastSyncBlock int64           `json:"lastSynchedBlock"`
-	LastBatch     *BatchAPI       `json:"lastBatch"`
-	CurrentSlot   int64           `json:"currentSlot"`
-	NextForgers   []NextForgerAPI `json:"nextForgers"`
-	PendingL1Txs  int             `json:"pendingL1Transactions"`
-}
-
-// NodePublicInfo is the configuration and metrics of the node that is exposed via API
-type NodePublicInfo struct {
-	// ForgeDelay in seconds
-	ForgeDelay float64 `json:"forgeDelay"`
-	// PoolLoad amount of transactions in the pool
-	PoolLoad int64 `json:"poolLoad"`
-}
-
-// StateAPI is an object representing the node and network state exposed via the API
-type StateAPI struct {
-	NodePublicInfo    NodePublicInfo           `json:"node"`
-	Network           NetworkAPI               `json:"network"`
-	Metrics           MetricsAPI               `json:"metrics"`
-	Rollup            RollupVariablesAPI       `json:"rollup"`
-	Auction           AuctionVariablesAPI      `json:"auction"`
-	WithdrawalDelayer common.WDelayerVariables `json:"withdrawalDelayer"`
-	RecommendedFee    common.RecommendedFee    `json:"recommendedFee"`
-}
-
-// Constants contains network constants
-type Constants struct {
-	common.SCConsts
-	ChainID       uint16
-	HermezAddress ethCommon.Address
-}
-
-// NodeConfig contains the node config exposed in the API
-type NodeConfig struct {
-	MaxPoolTxs uint32
-	MinFeeUSD  float64
-	MaxFeeUSD  float64
-	ForgeDelay float64
-}
-
-// NodeInfo contains information about the node used when serving the API
-type NodeInfo struct {
-	ItemID     int         `meddler:"item_id,pk"`
-	StateAPI   *StateAPI   `meddler:"state,json"`
-	NodeConfig *NodeConfig `meddler:"config,json"`
-	Constants  *Constants  `meddler:"constants,json"`
-}
-
-// GetNodeInfo returns the NodeInfo
-func (hdb *HistoryDB) GetNodeInfo() (*NodeInfo, error) {
-	ni := &NodeInfo{}
-	err := meddler.QueryRow(
-		hdb.dbRead, ni, `SELECT * FROM node_info WHERE item_id = 1;`,
-	)
-	return ni, tracerr.Wrap(err)
-}
-
-// GetConstants returns the Constants
-func (hdb *HistoryDB) GetConstants() (*Constants, error) {
-	var nodeInfo NodeInfo
-	err := meddler.QueryRow(
-		hdb.dbRead, &nodeInfo,
-		"SELECT constants FROM node_info WHERE item_id = 1;",
-	)
-	return nodeInfo.Constants, tracerr.Wrap(err)
-}
-
-// SetConstants sets the Constants
-func (hdb *HistoryDB) SetConstants(constants *Constants) error {
-	_constants := struct {
-		Constants *Constants `meddler:"constants,json"`
-	}{constants}
-	values, err := meddler.Default.Values(&_constants, false)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	_, err = hdb.dbWrite.Exec(
-		"UPDATE node_info SET constants = $1 WHERE item_id = 1;",
-		values[0],
-	)
-	return tracerr.Wrap(err)
-}
-
-// GetStateInternalAPI returns the StateAPI
-func (hdb *HistoryDB) GetStateInternalAPI() (*StateAPI, error) {
-	return hdb.getStateAPI(hdb.dbRead)
-}
-
-func (hdb *HistoryDB) getStateAPI(d meddler.DB) (*StateAPI, error) {
-	var nodeInfo NodeInfo
-	err := meddler.QueryRow(
-		d, &nodeInfo,
-		"SELECT state FROM node_info WHERE item_id = 1;",
-	)
-	return nodeInfo.StateAPI, tracerr.Wrap(err)
-}
-
-// SetStateInternalAPI sets the StateAPI
-func (hdb *HistoryDB) SetStateInternalAPI(stateAPI *StateAPI) error {
-	if stateAPI.Network.LastBatch != nil {
-		stateAPI.Network.LastBatch.CollectedFeesAPI =
-			apitypes.NewCollectedFeesAPI(stateAPI.Network.LastBatch.CollectedFeesDB)
-	}
-	_stateAPI := struct {
-		StateAPI *StateAPI `meddler:"state,json"`
-	}{stateAPI}
-	values, err := meddler.Default.Values(&_stateAPI, false)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	_, err = hdb.dbWrite.Exec(
-		"UPDATE node_info SET state = $1 WHERE item_id = 1;",
-		values[0],
-	)
-	return tracerr.Wrap(err)
-}
-
-// GetNodeConfig returns the NodeConfig
-func (hdb *HistoryDB) GetNodeConfig() (*NodeConfig, error) {
-	var nodeInfo NodeInfo
-	err := meddler.QueryRow(
-		hdb.dbRead, &nodeInfo,
-		"SELECT config FROM node_info WHERE item_id = 1;",
-	)
-	return nodeInfo.NodeConfig, tracerr.Wrap(err)
-}
-
-// SetNodeConfig sets the NodeConfig
-func (hdb *HistoryDB) SetNodeConfig(nodeConfig *NodeConfig) error {
-	_nodeConfig := struct {
-		NodeConfig *NodeConfig `meddler:"config,json"`
-	}{nodeConfig}
-	values, err := meddler.Default.Values(&_nodeConfig, false)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	_, err = hdb.dbWrite.Exec(
-		"UPDATE node_info SET config = $1 WHERE item_id = 1;",
-		values[0],
-	)
-	return tracerr.Wrap(err)
-}
@@ -6,7 +6,7 @@ import (
 	"time"
 
 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 	"github.com/iden3/go-merkletree"
@@ -147,12 +147,6 @@ type txWrite struct {
 	Nonce *common.Nonce `meddler:"nonce"`
 }
 
-// TokenSymbolAndAddr token representation with only Eth addr and symbol
-type TokenSymbolAndAddr struct {
-	Symbol string            `meddler:"symbol"`
-	Addr   ethCommon.Address `meddler:"eth_addr"`
-}
-
 // TokenWithUSD add USD info to common.Token
 type TokenWithUSD struct {
 	ItemID uint64 `json:"itemId" meddler:"item_id"`
@@ -245,8 +239,8 @@ type AccountAPI struct {
 	BatchNum  common.BatchNum     `meddler:"batch_num"`
 	PublicKey apitypes.HezBJJ     `meddler:"bjj"`
 	EthAddr   apitypes.HezEthAddr `meddler:"eth_addr"`
-	Nonce     common.Nonce        `meddler:"nonce"`   // max of 40 bits used
-	Balance   *apitypes.BigIntStr `meddler:"balance"` // max of 192 bits used
+	Nonce     common.Nonce        `meddler:"-"` // max of 40 bits used
+	Balance   *apitypes.BigIntStr `meddler:"-"` // max of 192 bits used
 	TotalItems uint64             `meddler:"total_items"`
 	FirstItem  uint64             `meddler:"first_item"`
 	LastItem   uint64             `meddler:"last_item"`
@@ -289,35 +283,44 @@ func (account AccountAPI) MarshalJSON() ([]byte, error) {
|
|||||||
// BatchAPI is a representation of a batch with additional information
|
// BatchAPI is a representation of a batch with additional information
|
||||||
// required by the API, and extracted by joining block table
|
// required by the API, and extracted by joining block table
|
||||||
type BatchAPI struct {
|
type BatchAPI struct {
|
||||||
ItemID uint64 `json:"itemId" meddler:"item_id"`
|
ItemID uint64 `json:"itemId" meddler:"item_id"`
|
||||||
BatchNum common.BatchNum `json:"batchNum" meddler:"batch_num"`
|
BatchNum common.BatchNum `json:"batchNum" meddler:"batch_num"`
|
||||||
EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
|
EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
|
||||||
EthBlockHash ethCommon.Hash `json:"ethereumBlockHash" meddler:"hash"`
|
EthBlockHash ethCommon.Hash `json:"ethereumBlockHash" meddler:"hash"`
|
||||||
Timestamp time.Time `json:"timestamp" meddler:"timestamp,utctime"`
|
Timestamp time.Time `json:"timestamp" meddler:"timestamp,utctime"`
|
||||||
ForgerAddr ethCommon.Address `json:"forgerAddr" meddler:"forger_addr"`
|
ForgerAddr ethCommon.Address `json:"forgerAddr" meddler:"forger_addr"`
|
||||||
CollectedFeesDB map[common.TokenID]*big.Int `json:"-" meddler:"fees_collected,json"`
|
CollectedFees apitypes.CollectedFees `json:"collectedFees" meddler:"fees_collected,json"`
|
||||||
CollectedFeesAPI apitypes.CollectedFeesAPI `json:"collectedFees" meddler:"-"`
|
TotalFeesUSD *float64 `json:"historicTotalCollectedFeesUSD" meddler:"total_fees_usd"`
|
||||||
TotalFeesUSD *float64 `json:"historicTotalCollectedFeesUSD" meddler:"total_fees_usd"`
|
StateRoot apitypes.BigIntStr `json:"stateRoot" meddler:"state_root"`
|
||||||
StateRoot apitypes.BigIntStr `json:"stateRoot" meddler:"state_root"`
|
NumAccounts int `json:"numAccounts" meddler:"num_accounts"`
|
||||||
NumAccounts int `json:"numAccounts" meddler:"num_accounts"`
|
ExitRoot apitypes.BigIntStr `json:"exitRoot" meddler:"exit_root"`
|
||||||
ExitRoot apitypes.BigIntStr `json:"exitRoot" meddler:"exit_root"`
|
ForgeL1TxsNum *int64 `json:"forgeL1TransactionsNum" meddler:"forge_l1_txs_num"`
|
||||||
ForgeL1TxsNum *int64 `json:"forgeL1TransactionsNum" meddler:"forge_l1_txs_num"`
|
SlotNum int64 `json:"slotNum" meddler:"slot_num"`
|
||||||
SlotNum int64 `json:"slotNum" meddler:"slot_num"`
|
ForgedTxs int `json:"forgedTransactions" meddler:"forged_txs"`
|
||||||
ForgedTxs int `json:"forgedTransactions" meddler:"forged_txs"`
|
TotalItems uint64 `json:"-" meddler:"total_items"`
|
||||||
TotalItems uint64 `json:"-" meddler:"total_items"`
|
FirstItem uint64 `json:"-" meddler:"first_item"`
|
||||||
FirstItem uint64 `json:"-" meddler:"first_item"`
|
LastItem uint64 `json:"-" meddler:"last_item"`
|
||||||
LastItem uint64 `json:"-" meddler:"last_item"`
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// MetricsAPI define metrics of the network
|
// Metrics define metrics of the network
|
||||||
type MetricsAPI struct {
|
type Metrics struct {
|
||||||
TransactionsPerBatch float64 `json:"transactionsPerBatch"`
|
TransactionsPerBatch float64 `json:"transactionsPerBatch"`
|
||||||
BatchFrequency float64 `json:"batchFrequency"`
|
BatchFrequency float64 `json:"batchFrequency"`
|
||||||
TransactionsPerSecond float64 `json:"transactionsPerSecond"`
|
TransactionsPerSecond float64 `json:"transactionsPerSecond"`
|
||||||
TokenAccounts int64 `json:"tokenAccounts"`
|
TotalAccounts int64 `json:"totalAccounts" meddler:"total_accounts"`
|
||||||
Wallets int64 `json:"wallets"`
|
TotalBJJs int64 `json:"totalBJJs" meddler:"total_bjjs"`
|
||||||
AvgTransactionFee float64 `json:"avgTransactionFee"`
|
AvgTransactionFee float64 `json:"avgTransactionFee"`
|
||||||
EstimatedTimeToForgeL1 float64 `json:"estimatedTimeToForgeL1" meddler:"estimated_time_to_forge_l1"`
|
}
|
||||||
|
|
||||||
|
// MetricsTotals is used to get temporal information from HistoryDB
|
||||||
|
// to calculate data to be stored into the Metrics struct
|
||||||
|
type MetricsTotals struct {
|
||||||
|
TotalTransactions uint64 `meddler:"total_txs"`
|
||||||
|
FirstBatchNum common.BatchNum `meddler:"batch_num"`
|
||||||
|
TotalBatches int64 `meddler:"total_batches"`
|
||||||
|
TotalFeesUSD float64 `meddler:"total_fees"`
|
||||||
|
MinTimestamp time.Time `meddler:"min_timestamp,utctime"`
|
||||||
|
MaxTimestamp time.Time `meddler:"max_timestamp,utctime"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// BidAPI is a representation of a bid with additional information
|
// BidAPI is a representation of a bid with additional information
|
||||||
@@ -370,27 +373,6 @@ type RollupVariablesAPI struct {
|
|||||||
SafeMode bool `json:"safeMode" meddler:"safe_mode"`
|
SafeMode bool `json:"safeMode" meddler:"safe_mode"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewRollupVariablesAPI creates a RollupVariablesAPI from common.RollupVariables
|
|
||||||
func NewRollupVariablesAPI(rollupVariables *common.RollupVariables) *RollupVariablesAPI {
|
|
||||||
rollupVars := RollupVariablesAPI{
|
|
||||||
EthBlockNum: rollupVariables.EthBlockNum,
|
|
||||||
FeeAddToken: apitypes.NewBigIntStr(rollupVariables.FeeAddToken),
|
|
||||||
ForgeL1L2BatchTimeout: rollupVariables.ForgeL1L2BatchTimeout,
|
|
||||||
WithdrawalDelay: rollupVariables.WithdrawalDelay,
|
|
||||||
SafeMode: rollupVariables.SafeMode,
|
|
||||||
}
|
|
||||||
|
|
||||||
for i, bucket := range rollupVariables.Buckets {
|
|
||||||
rollupVars.Buckets[i] = BucketParamsAPI{
|
|
||||||
CeilUSD: apitypes.NewBigIntStr(bucket.CeilUSD),
|
|
||||||
Withdrawals: apitypes.NewBigIntStr(bucket.Withdrawals),
|
|
||||||
BlockWithdrawalRate: apitypes.NewBigIntStr(bucket.BlockWithdrawalRate),
|
|
||||||
MaxWithdrawals: apitypes.NewBigIntStr(bucket.MaxWithdrawals),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return &rollupVars
|
|
||||||
}
|
|
||||||
|
|
||||||
// AuctionVariablesAPI are the variables of the Auction Smart Contract
|
// AuctionVariablesAPI are the variables of the Auction Smart Contract
|
||||||
type AuctionVariablesAPI struct {
|
type AuctionVariablesAPI struct {
|
||||||
EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
|
EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
|
||||||
@@ -415,28 +397,3 @@ type AuctionVariablesAPI struct {
|
|||||||
// SlotDeadline Number of blocks at the end of a slot in which any coordinator can forge if the winner has not forged one before
|
// SlotDeadline Number of blocks at the end of a slot in which any coordinator can forge if the winner has not forged one before
|
||||||
SlotDeadline uint8 `json:"slotDeadline" meddler:"slot_deadline" validate:"required"`
|
SlotDeadline uint8 `json:"slotDeadline" meddler:"slot_deadline" validate:"required"`
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewAuctionVariablesAPI creates a AuctionVariablesAPI from common.AuctionVariables
|
|
||||||
func NewAuctionVariablesAPI(auctionVariables *common.AuctionVariables) *AuctionVariablesAPI {
|
|
||||||
auctionVars := AuctionVariablesAPI{
|
|
||||||
EthBlockNum: auctionVariables.EthBlockNum,
|
|
||||||
DonationAddress: auctionVariables.DonationAddress,
|
|
||||||
BootCoordinator: auctionVariables.BootCoordinator,
|
|
||||||
BootCoordinatorURL: auctionVariables.BootCoordinatorURL,
|
|
||||||
DefaultSlotSetBidSlotNum: auctionVariables.DefaultSlotSetBidSlotNum,
|
|
||||||
ClosedAuctionSlots: auctionVariables.ClosedAuctionSlots,
|
|
||||||
OpenAuctionSlots: auctionVariables.OpenAuctionSlots,
|
|
||||||
Outbidding: auctionVariables.Outbidding,
|
|
||||||
SlotDeadline: auctionVariables.SlotDeadline,
|
|
||||||
}
|
|
||||||
|
|
||||||
for i, slot := range auctionVariables.DefaultSlotSetBid {
|
|
||||||
auctionVars.DefaultSlotSetBid[i] = apitypes.NewBigIntStr(slot)
|
|
||||||
}
|
|
||||||
|
|
||||||
for i, ratio := range auctionVariables.AllocationRatio {
|
|
||||||
auctionVars.AllocationRatio[i] = ratio
|
|
||||||
}
|
|
||||||
|
|
||||||
return &auctionVars
|
|
||||||
}
|
|
||||||
|
@@ -49,8 +49,6 @@ type KVDB struct {
 	CurrentIdx   common.Idx
 	CurrentBatch common.BatchNum
 	m            sync.Mutex
-	mutexDelOld  sync.Mutex
-	wg           sync.WaitGroup
 	last         *Last
 }

@@ -318,7 +316,7 @@ func (k *KVDB) ResetFromSynchronizer(batchNum common.BatchNum, synchronizerKVDB

 	checkpointPath := path.Join(k.cfg.Path, fmt.Sprintf("%s%d", PathBatchNum, batchNum))

-	// copy synchronizer 'BatchNumX' to 'BatchNumX'
+	// copy synchronizer'BatchNumX' to 'BatchNumX'
 	if err := synchronizerKVDB.MakeCheckpointFromTo(batchNum, checkpointPath); err != nil {
 		return tracerr.Wrap(err)
 	}
@@ -446,15 +444,10 @@ func (k *KVDB) MakeCheckpoint() error {
 			return tracerr.Wrap(err)
 		}
 	}
-	k.wg.Add(1)
-	go func() {
-		delErr := k.DeleteOldCheckpoints()
-		if delErr != nil {
-			log.Errorw("delete old checkpoints failed", "err", delErr)
-		}
-		k.wg.Done()
-	}()
-
+	// delete old checkpoints
+	if err := k.deleteOldCheckpoints(); err != nil {
+		return tracerr.Wrap(err)
+	}
 	return nil
 }
@@ -465,7 +458,7 @@ func (k *KVDB) CheckpointExists(batchNum common.BatchNum) (bool, error) {
 	if _, err := os.Stat(source); os.IsNotExist(err) {
 		return false, nil
 	} else if err != nil {
-		return false, tracerr.Wrap(err)
+		return false, err
 	}
 	return true, nil
 }
@@ -516,12 +509,9 @@ func (k *KVDB) ListCheckpoints() ([]int, error) {
 	return checkpoints, nil
 }

-// DeleteOldCheckpoints deletes old checkpoints when there are more than
+// deleteOldCheckpoints deletes old checkpoints when there are more than
 // `s.keep` checkpoints
-func (k *KVDB) DeleteOldCheckpoints() error {
-	k.mutexDelOld.Lock()
-	defer k.mutexDelOld.Unlock()
-
+func (k *KVDB) deleteOldCheckpoints() error {
 	list, err := k.ListCheckpoints()
 	if err != nil {
 		return tracerr.Wrap(err)
@@ -554,12 +544,10 @@ func (k *KVDB) MakeCheckpointFromTo(fromBatchNum common.BatchNum, dest string) e
 	// synchronizer to the same batchNum
 	k.m.Lock()
 	defer k.m.Unlock()
-	return PebbleMakeCheckpoint(source, dest)
+	return pebbleMakeCheckpoint(source, dest)
 }

-// PebbleMakeCheckpoint is a hepler function to make a pebble checkpoint from
-// source to dest.
-func PebbleMakeCheckpoint(source, dest string) error {
+func pebbleMakeCheckpoint(source, dest string) error {
 	// Remove dest folder (if it exists) before doing the checkpoint
 	if _, err := os.Stat(dest); os.IsNotExist(err) {
 	} else if err != nil {
@@ -594,6 +582,4 @@ func (k *KVDB) Close() {
 	if k.last != nil {
 		k.last.close()
 	}
-	// wait for deletion of old checkpoints
-	k.wg.Wait()
 }
@@ -4,7 +4,6 @@ import (
 	"fmt"
 	"io/ioutil"
 	"os"
-	"sync"
 	"testing"

 	"github.com/hermeznetwork/hermez-node/common"
@@ -191,67 +190,12 @@ func TestDeleteOldCheckpoints(t *testing.T) {
 	for i := 0; i < numCheckpoints; i++ {
 		err = db.MakeCheckpoint()
 		require.NoError(t, err)
-		err = db.DeleteOldCheckpoints()
-		require.NoError(t, err)
 		checkpoints, err := db.ListCheckpoints()
 		require.NoError(t, err)
 		assert.LessOrEqual(t, len(checkpoints), keep)
 	}
 }

-func TestConcurrentDeleteOldCheckpoints(t *testing.T) {
-	dir, err := ioutil.TempDir("", "tmpdb")
-	require.NoError(t, err)
-	defer require.NoError(t, os.RemoveAll(dir))
-
-	keep := 16
-	db, err := NewKVDB(Config{Path: dir, Keep: keep})
-	require.NoError(t, err)
-
-	numCheckpoints := 32
-
-	var wg sync.WaitGroup
-	wg.Add(numCheckpoints)
-
-	// do checkpoints and check that we never have more than `keep`
-	// checkpoints.
-	// 1 async DeleteOldCheckpoint after 1 MakeCheckpoint
-	for i := 0; i < numCheckpoints; i++ {
-		err = db.MakeCheckpoint()
-		require.NoError(t, err)
-		go func() {
-			err = db.DeleteOldCheckpoints()
-			require.NoError(t, err)
-			wg.Done()
-		}()
-	}
-	wg.Wait()
-	checkpoints, err := db.ListCheckpoints()
-	require.NoError(t, err)
-	assert.LessOrEqual(t, len(checkpoints), keep)
-
-	wg.Add(numCheckpoints)
-
-	// do checkpoints and check that we never have more than `keep`
-	// checkpoints
-	// 32 concurrent DeleteOldCheckpoint after 32 MakeCheckpoint
-	for i := 0; i < numCheckpoints; i++ {
-		err = db.MakeCheckpoint()
-		require.NoError(t, err)
-	}
-	for i := 0; i < numCheckpoints; i++ {
-		go func() {
-			err = db.DeleteOldCheckpoints()
-			require.NoError(t, err)
-			wg.Done()
-		}()
-	}
-	wg.Wait()
-	checkpoints, err = db.ListCheckpoints()
-	require.NoError(t, err)
-	assert.LessOrEqual(t, len(checkpoints), keep)
-}
-
 func TestGetCurrentIdx(t *testing.T) {
 	dir, err := ioutil.TempDir("", "tmpdb")
 	require.NoError(t, err)
@@ -1,18 +1,12 @@
 package l2db

 import (
-	"fmt"
-
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/russross/meddler"
 )

-var (
-	errPoolFull = fmt.Errorf("the pool is at full capacity. More transactions are not accepted currently")
-)
-
 // AddAccountCreationAuthAPI inserts an account creation authorization into the DB
 func (l2db *L2DB) AddAccountCreationAuthAPI(auth *common.AccountCreationAuth) error {
 	cancel, err := l2db.apiConnCon.Acquire()
@@ -34,7 +28,7 @@ func (l2db *L2DB) GetAccountCreationAuthAPI(addr ethCommon.Address) (*AccountCre
 	defer l2db.apiConnCon.Release()
 	auth := new(AccountCreationAuthAPI)
 	return auth, tracerr.Wrap(meddler.QueryRow(
-		l2db.dbRead, auth,
+		l2db.db, auth,
 		"SELECT * FROM account_creation_auth WHERE eth_addr = $1;",
 		addr,
 	))
@@ -48,58 +42,20 @@ func (l2db *L2DB) AddTxAPI(tx *PoolL2TxWrite) error {
 		return tracerr.Wrap(err)
 	}
 	defer l2db.apiConnCon.Release()
-	row := l2db.dbRead.QueryRow(`SELECT
-	($1::NUMERIC * COALESCE(token.usd, 0) * fee_percentage($2::NUMERIC)) /
-	(10.0 ^ token.decimals::NUMERIC)
-	FROM token WHERE token.token_id = $3;`,
-		tx.AmountFloat, tx.Fee, tx.TokenID)
-	var feeUSD float64
-	if err := row.Scan(&feeUSD); err != nil {
+	row := l2db.db.QueryRow(
+		"SELECT COUNT(*) FROM tx_pool WHERE state = $1;",
+		common.PoolL2TxStatePending,
+	)
+	var totalTxs uint32
+	if err := row.Scan(&totalTxs); err != nil {
 		return tracerr.Wrap(err)
 	}
-	if feeUSD < l2db.minFeeUSD {
-		return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) < minFeeUSD (%v)",
-			feeUSD, l2db.minFeeUSD))
+	if totalTxs >= l2db.maxTxs {
+		return tracerr.New(
+			"The pool is at full capacity. More transactions are not accepted currently",
+		)
 	}
-	if feeUSD > l2db.maxFeeUSD {
-		return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) > maxFeeUSD (%v)",
-			feeUSD, l2db.maxFeeUSD))
-	}
-
-	// Prepare insert SQL query argument parameters
-	namesPart, err := meddler.Default.ColumnsQuoted(tx, false)
-	if err != nil {
-		return err
-	}
-	valuesPart, err := meddler.Default.PlaceholdersString(tx, false)
-	if err != nil {
-		return err
-	}
-	values, err := meddler.Default.Values(tx, false)
-	if err != nil {
-		return err
-	}
-
-	q := fmt.Sprintf(
-		`INSERT INTO tx_pool (%s)
-		SELECT %s
-		WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = $%v AND NOT external_delete) < $%v;`,
-		namesPart, valuesPart,
-		len(values)+1, len(values)+2) //nolint:gomnd
-	values = append(values, common.PoolL2TxStatePending, l2db.maxTxs)
-	res, err := l2db.dbWrite.Exec(q, values...)
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	rowsAffected, err := res.RowsAffected()
-	if err != nil {
-		return tracerr.Wrap(err)
-	}
-	if rowsAffected == 0 {
-		return tracerr.Wrap(errPoolFull)
-	}
-	return nil
+	return tracerr.Wrap(meddler.Insert(l2db.db, "tx_pool", tx))
 }

 // selectPoolTxAPI select part of queries to get PoolL2TxRead
@@ -122,7 +78,7 @@ func (l2db *L2DB) GetTxAPI(txID common.TxID) (*PoolTxAPI, error) {
 	defer l2db.apiConnCon.Release()
 	tx := new(PoolTxAPI)
 	return tx, tracerr.Wrap(meddler.QueryRow(
-		l2db.dbRead, tx,
+		l2db.db, tx,
 		selectPoolTxAPI+"WHERE tx_id = $1;",
 		txID,
 	))
|||||||
112
db/l2db/l2db.go
112
db/l2db/l2db.go
@@ -21,13 +21,10 @@ import (
|
|||||||
// L2DB stores L2 txs and authorization registers received by the coordinator and keeps them until they are no longer relevant
|
// L2DB stores L2 txs and authorization registers received by the coordinator and keeps them until they are no longer relevant
|
||||||
// due to them being forged or invalid after a safety period
|
// due to them being forged or invalid after a safety period
|
||||||
type L2DB struct {
|
type L2DB struct {
|
||||||
dbRead *sqlx.DB
|
db *sqlx.DB
|
||||||
dbWrite *sqlx.DB
|
|
||||||
safetyPeriod common.BatchNum
|
safetyPeriod common.BatchNum
|
||||||
ttl time.Duration
|
ttl time.Duration
|
||||||
maxTxs uint32 // limit of txs that are accepted in the pool
|
maxTxs uint32 // limit of txs that are accepted in the pool
|
||||||
minFeeUSD float64
|
|
||||||
maxFeeUSD float64
|
|
||||||
apiConnCon *db.APIConnectionController
|
apiConnCon *db.APIConnectionController
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -35,22 +32,17 @@ type L2DB struct {
|
|||||||
// To create it, it's needed db connection, safety period expressed in batches,
|
// To create it, it's needed db connection, safety period expressed in batches,
|
||||||
// maxTxs that the DB should have and TTL (time to live) for pending txs.
|
// maxTxs that the DB should have and TTL (time to live) for pending txs.
|
||||||
func NewL2DB(
|
func NewL2DB(
|
||||||
dbRead, dbWrite *sqlx.DB,
|
db *sqlx.DB,
|
||||||
safetyPeriod common.BatchNum,
|
safetyPeriod common.BatchNum,
|
||||||
maxTxs uint32,
|
maxTxs uint32,
|
||||||
minFeeUSD float64,
|
|
||||||
maxFeeUSD float64,
|
|
||||||
TTL time.Duration,
|
TTL time.Duration,
|
||||||
apiConnCon *db.APIConnectionController,
|
apiConnCon *db.APIConnectionController,
|
||||||
) *L2DB {
|
) *L2DB {
|
||||||
return &L2DB{
|
return &L2DB{
|
||||||
dbRead: dbRead,
|
db: db,
|
||||||
dbWrite: dbWrite,
|
|
||||||
safetyPeriod: safetyPeriod,
|
safetyPeriod: safetyPeriod,
|
||||||
ttl: TTL,
|
ttl: TTL,
|
||||||
maxTxs: maxTxs,
|
maxTxs: maxTxs,
|
||||||
minFeeUSD: minFeeUSD,
|
|
||||||
maxFeeUSD: maxFeeUSD,
|
|
||||||
apiConnCon: apiConnCon,
|
apiConnCon: apiConnCon,
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -58,18 +50,12 @@ func NewL2DB(
|
|||||||
// DB returns a pointer to the L2DB.db. This method should be used only for
|
// DB returns a pointer to the L2DB.db. This method should be used only for
|
||||||
// internal testing purposes.
|
// internal testing purposes.
|
||||||
func (l2db *L2DB) DB() *sqlx.DB {
|
func (l2db *L2DB) DB() *sqlx.DB {
|
||||||
return l2db.dbWrite
|
return l2db.db
|
||||||
}
|
|
||||||
|
|
||||||
// MinFeeUSD returns the minimum fee in USD that is required to accept txs into
|
|
||||||
// the pool
|
|
||||||
func (l2db *L2DB) MinFeeUSD() float64 {
|
|
||||||
return l2db.minFeeUSD
|
|
||||||
}
|
}
|
||||||
|
|
||||||
// AddAccountCreationAuth inserts an account creation authorization into the DB
|
// AddAccountCreationAuth inserts an account creation authorization into the DB
|
||||||
func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error {
|
func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error {
|
||||||
_, err := l2db.dbWrite.Exec(
|
_, err := l2db.db.Exec(
|
||||||
`INSERT INTO account_creation_auth (eth_addr, bjj, signature)
|
`INSERT INTO account_creation_auth (eth_addr, bjj, signature)
|
||||||
VALUES ($1, $2, $3);`,
|
VALUES ($1, $2, $3);`,
|
||||||
auth.EthAddr, auth.BJJ, auth.Signature,
|
auth.EthAddr, auth.BJJ, auth.Signature,
|
||||||
@@ -77,26 +63,34 @@ func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error
|
|||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// AddManyAccountCreationAuth inserts a batch of accounts creation authorization
|
|
||||||
// if not exist into the DB
|
|
||||||
func (l2db *L2DB) AddManyAccountCreationAuth(auths []common.AccountCreationAuth) error {
|
|
||||||
_, err := sqlx.NamedExec(l2db.dbWrite,
|
|
||||||
`INSERT INTO account_creation_auth (eth_addr, bjj, signature)
|
|
||||||
VALUES (:ethaddr, :bjj, :signature)
|
|
||||||
ON CONFLICT (eth_addr) DO NOTHING`, auths)
|
|
||||||
return tracerr.Wrap(err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// GetAccountCreationAuth returns an account creation authorization from the DB
|
// GetAccountCreationAuth returns an account creation authorization from the DB
|
||||||
func (l2db *L2DB) GetAccountCreationAuth(addr ethCommon.Address) (*common.AccountCreationAuth, error) {
|
func (l2db *L2DB) GetAccountCreationAuth(addr ethCommon.Address) (*common.AccountCreationAuth, error) {
|
||||||
auth := new(common.AccountCreationAuth)
|
auth := new(common.AccountCreationAuth)
|
||||||
return auth, tracerr.Wrap(meddler.QueryRow(
|
return auth, tracerr.Wrap(meddler.QueryRow(
|
||||||
l2db.dbRead, auth,
|
l2db.db, auth,
|
||||||
"SELECT * FROM account_creation_auth WHERE eth_addr = $1;",
|
"SELECT * FROM account_creation_auth WHERE eth_addr = $1;",
|
||||||
addr,
|
addr,
|
||||||
))
|
))
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// AddTx inserts a tx to the pool
|
||||||
|
func (l2db *L2DB) AddTx(tx *PoolL2TxWrite) error {
|
||||||
|
row := l2db.db.QueryRow(
|
||||||
|
"SELECT COUNT(*) FROM tx_pool WHERE state = $1;",
|
||||||
|
common.PoolL2TxStatePending,
|
||||||
|
)
|
||||||
|
var totalTxs uint32
|
||||||
|
if err := row.Scan(&totalTxs); err != nil {
|
||||||
|
return tracerr.Wrap(err)
|
||||||
|
}
|
||||||
|
if totalTxs >= l2db.maxTxs {
|
||||||
|
return tracerr.New(
|
||||||
|
"The pool is at full capacity. More transactions are not accepted currently",
|
||||||
|
)
|
||||||
|
}
|
||||||
|
return tracerr.Wrap(meddler.Insert(l2db.db, "tx_pool", tx))
|
||||||
|
}
|
||||||
|
|
||||||
// UpdateTxsInfo updates the parameter Info of the pool transactions
|
// UpdateTxsInfo updates the parameter Info of the pool transactions
|
||||||
func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
|
func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
|
||||||
if len(txs) == 0 {
|
if len(txs) == 0 {
|
||||||
@@ -120,7 +114,7 @@ func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
|
|||||||
WHERE tx_pool.tx_id = tx_update.id;
|
WHERE tx_pool.tx_id = tx_update.id;
|
||||||
`
|
`
|
||||||
if len(txUpdates) > 0 {
|
if len(txUpdates) > 0 {
|
||||||
if _, err := sqlx.NamedExec(l2db.dbWrite, query, txUpdates); err != nil {
|
if _, err := sqlx.NamedExec(l2db.db, query, txUpdates); err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -128,8 +122,9 @@ func (l2db *L2DB) UpdateTxsInfo(txs []common.PoolL2Tx) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
// NewPoolL2TxWriteFromPoolL2Tx creates a new PoolL2TxWrite from a PoolL2Tx
|
// AddTxTest inserts a tx into the L2DB. This is useful for test purposes,
|
||||||
func NewPoolL2TxWriteFromPoolL2Tx(tx *common.PoolL2Tx) *PoolL2TxWrite {
|
// but in production txs will only be inserted through the API
|
||||||
|
func (l2db *L2DB) AddTxTest(tx *common.PoolL2Tx) error {
|
||||||
// transform tx from *common.PoolL2Tx to PoolL2TxWrite
|
// transform tx from *common.PoolL2Tx to PoolL2TxWrite
|
||||||
insertTx := &PoolL2TxWrite{
|
insertTx := &PoolL2TxWrite{
|
||||||
TxID: tx.TxID,
|
TxID: tx.TxID,
|
||||||
@@ -171,15 +166,8 @@ func NewPoolL2TxWriteFromPoolL2Tx(tx *common.PoolL2Tx) *PoolL2TxWrite {
|
|||||||
f := new(big.Float).SetInt(tx.Amount)
|
f := new(big.Float).SetInt(tx.Amount)
|
||||||
amountF, _ := f.Float64()
|
amountF, _ := f.Float64()
|
||||||
insertTx.AmountFloat = amountF
|
insertTx.AmountFloat = amountF
|
||||||
return insertTx
|
|
||||||
}
|
|
||||||
|
|
||||||
// AddTxTest inserts a tx into the L2DB. This is useful for test purposes,
|
|
||||||
// but in production txs will only be inserted through the API
|
|
||||||
func (l2db *L2DB) AddTxTest(tx *common.PoolL2Tx) error {
|
|
||||||
insertTx := NewPoolL2TxWriteFromPoolL2Tx(tx)
|
|
||||||
// insert tx
|
// insert tx
|
||||||
return tracerr.Wrap(meddler.Insert(l2db.dbWrite, "tx_pool", insertTx))
|
return tracerr.Wrap(meddler.Insert(l2db.db, "tx_pool", insertTx))
|
||||||
}
|
}
|
||||||
|
|
||||||
// selectPoolTxCommon select part of queries to get common.PoolL2Tx
|
// selectPoolTxCommon select part of queries to get common.PoolL2Tx
|
||||||
@@ -188,15 +176,14 @@ tx_pool.to_bjj, tx_pool.token_id, tx_pool.amount, tx_pool.fee, tx_pool.nonce,
|
|||||||
tx_pool.state, tx_pool.info, tx_pool.signature, tx_pool.timestamp, rq_from_idx,
|
tx_pool.state, tx_pool.info, tx_pool.signature, tx_pool.timestamp, rq_from_idx,
|
||||||
rq_to_idx, tx_pool.rq_to_eth_addr, tx_pool.rq_to_bjj, tx_pool.rq_token_id, tx_pool.rq_amount,
|
rq_to_idx, tx_pool.rq_to_eth_addr, tx_pool.rq_to_bjj, tx_pool.rq_token_id, tx_pool.rq_amount,
|
||||||
tx_pool.rq_fee, tx_pool.rq_nonce, tx_pool.tx_type,
|
tx_pool.rq_fee, tx_pool.rq_nonce, tx_pool.tx_type,
|
||||||
(fee_percentage(tx_pool.fee::NUMERIC) * token.usd * tx_pool.amount_f) /
|
fee_percentage(tx_pool.fee::NUMERIC) * token.usd * tx_pool.amount_f AS fee_usd, token.usd_update
|
||||||
(10.0 ^ token.decimals::NUMERIC) AS fee_usd, token.usd_update
|
|
||||||
FROM tx_pool INNER JOIN token ON tx_pool.token_id = token.token_id `
|
FROM tx_pool INNER JOIN token ON tx_pool.token_id = token.token_id `
|
||||||
|
|
||||||
// GetTx return the specified Tx in common.PoolL2Tx format
|
// GetTx return the specified Tx in common.PoolL2Tx format
|
||||||
func (l2db *L2DB) GetTx(txID common.TxID) (*common.PoolL2Tx, error) {
|
func (l2db *L2DB) GetTx(txID common.TxID) (*common.PoolL2Tx, error) {
|
||||||
tx := new(common.PoolL2Tx)
|
tx := new(common.PoolL2Tx)
|
||||||
return tx, tracerr.Wrap(meddler.QueryRow(
|
return tx, tracerr.Wrap(meddler.QueryRow(
|
||||||
l2db.dbRead, tx,
|
l2db.db, tx,
|
||||||
selectPoolTxCommon+"WHERE tx_id = $1;",
|
selectPoolTxCommon+"WHERE tx_id = $1;",
|
||||||
txID,
|
txID,
|
||||||
))
|
))
|
||||||
@@ -206,8 +193,8 @@ func (l2db *L2DB) GetTx(txID common.TxID) (*common.PoolL2Tx, error) {
|
|||||||
func (l2db *L2DB) GetPendingTxs() ([]common.PoolL2Tx, error) {
|
func (l2db *L2DB) GetPendingTxs() ([]common.PoolL2Tx, error) {
|
||||||
var txs []*common.PoolL2Tx
|
var txs []*common.PoolL2Tx
|
||||||
err := meddler.QueryAll(
|
err := meddler.QueryAll(
|
||||||
l2db.dbRead, &txs,
|
l2db.db, &txs,
|
||||||
selectPoolTxCommon+"WHERE state = $1 AND NOT external_delete;",
|
selectPoolTxCommon+"WHERE state = $1",
|
||||||
common.PoolL2TxStatePending,
|
common.PoolL2TxStatePending,
|
||||||
)
|
)
|
||||||
return db.SlicePtrsToSlice(txs).([]common.PoolL2Tx), tracerr.Wrap(err)
|
return db.SlicePtrsToSlice(txs).([]common.PoolL2Tx), tracerr.Wrap(err)
|
||||||
@@ -231,8 +218,8 @@ func (l2db *L2DB) StartForging(txIDs []common.TxID, batchNum common.BatchNum) er
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
query = l2db.dbWrite.Rebind(query)
|
query = l2db.db.Rebind(query)
|
||||||
_, err = l2db.dbWrite.Exec(query, args...)
|
_, err = l2db.db.Exec(query, args...)
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -254,8 +241,8 @@ func (l2db *L2DB) DoneForging(txIDs []common.TxID, batchNum common.BatchNum) err
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
query = l2db.dbWrite.Rebind(query)
|
query = l2db.db.Rebind(query)
|
||||||
_, err = l2db.dbWrite.Exec(query, args...)
|
_, err = l2db.db.Exec(query, args...)
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -276,8 +263,8 @@ func (l2db *L2DB) InvalidateTxs(txIDs []common.TxID, batchNum common.BatchNum) e
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
-	query = l2db.dbWrite.Rebind(query)
-	_, err = l2db.dbWrite.Exec(query, args...)
+	query = l2db.db.Rebind(query)
+	_, err = l2db.db.Exec(query, args...)
 	return tracerr.Wrap(err)
 }
 
@@ -285,7 +272,7 @@ func (l2db *L2DB) InvalidateTxs(txIDs []common.TxID, batchNum common.BatchNum) e
 // of unique FromIdx
 func (l2db *L2DB) GetPendingUniqueFromIdxs() ([]common.Idx, error) {
 	var idxs []common.Idx
-	rows, err := l2db.dbRead.Query(`SELECT DISTINCT from_idx FROM tx_pool
+	rows, err := l2db.db.Query(`SELECT DISTINCT from_idx FROM tx_pool
 	WHERE state = $1;`, common.PoolL2TxStatePending)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
@@ -323,10 +310,10 @@ func (l2db *L2DB) InvalidateOldNonces(updatedAccounts []common.IdxNonce, batchNu
 		return nil
 	}
 	// Fill the batch_num in the query with Sprintf because we are using a
-	// named query which works with slices, and doesn't handle an extra
+	// named query which works with slices, and doens't handle an extra
 	// individual argument.
 	query := fmt.Sprintf(invalidateOldNoncesQuery, batchNum)
-	if _, err := sqlx.NamedExec(l2db.dbWrite, query, updatedAccounts); err != nil {
+	if _, err := sqlx.NamedExec(l2db.db, query, updatedAccounts); err != nil {
 		return tracerr.Wrap(err)
 	}
 	return nil
@@ -335,7 +322,7 @@ func (l2db *L2DB) InvalidateOldNonces(updatedAccounts []common.IdxNonce, batchNu
 // Reorg updates the state of txs that were updated in a batch that has been discarted due to a blockchain reorg.
 // The state of the affected txs can change form Forged -> Pending or from Invalid -> Pending
 func (l2db *L2DB) Reorg(lastValidBatch common.BatchNum) error {
-	_, err := l2db.dbWrite.Exec(
+	_, err := l2db.db.Exec(
 		`UPDATE tx_pool SET batch_num = NULL, state = $1
 		WHERE (state = $2 OR state = $3 OR state = $4) AND batch_num > $5`,
 		common.PoolL2TxStatePending,
@@ -351,7 +338,7 @@ func (l2db *L2DB) Reorg(lastValidBatch common.BatchNum) error {
 // it also deletes pending txs that have been in the L2DB for longer than the ttl if maxTxs has been exceeded
 func (l2db *L2DB) Purge(currentBatchNum common.BatchNum) (err error) {
 	now := time.Now().UTC().Unix()
-	_, err = l2db.dbWrite.Exec(
+	_, err = l2db.db.Exec(
 		`DELETE FROM tx_pool WHERE (
 		batch_num < $1 AND (state = $2 OR state = $3)
 		) OR (
@@ -367,14 +354,3 @@ func (l2db *L2DB) Purge(currentBatchNum common.BatchNum) (err error) {
 	)
 	return tracerr.Wrap(err)
 }
 
-// PurgeByExternalDelete deletes all pending transactions marked with true in
-// the `external_delete` column. An external process can set this column to
-// true to instruct the coordinator to delete the tx when possible.
-func (l2db *L2DB) PurgeByExternalDelete() error {
-	_, err := l2db.dbWrite.Exec(
-		`DELETE from tx_pool WHERE (external_delete = true AND state = $1);`,
-		common.PoolL2TxStatePending,
-	)
-	return tracerr.Wrap(err)
-}
@@ -1,8 +1,8 @@
 package l2db
 
 import (
-	"database/sql"
-	"fmt"
+	"math"
+	"math/big"
 	"os"
 	"testing"
 	"time"
@@ -20,14 +20,12 @@ import (
 	"github.com/stretchr/testify/require"
 )
 
-var decimals = uint64(3)
-var tokenValue = 1.0 // The price update gives a value of 1.0 USD to the token
 var l2DB *L2DB
 var l2DBWithACC *L2DB
 var historyDB *historydb.HistoryDB
 var tc *til.Context
 var tokens map[common.TokenID]historydb.TokenWithUSD
+var tokensValue map[common.TokenID]float64
 var accs map[common.Idx]common.Account
 
 func TestMain(m *testing.M) {
@@ -37,11 +35,11 @@ func TestMain(m *testing.M) {
 	if err != nil {
 		panic(err)
 	}
-	l2DB = NewL2DB(db, db, 10, 1000, 0.0, 1000.0, 24*time.Hour, nil)
-	apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
-	l2DBWithACC = NewL2DB(db, db, 10, 1000, 0.0, 1000.0, 24*time.Hour, apiConnCon)
+	l2DB = NewL2DB(db, 10, 1000, 24*time.Hour, nil)
+	apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second)
+	l2DBWithACC = NewL2DB(db, 10, 1000, 24*time.Hour, apiConnCon)
 	test.WipeDB(l2DB.DB())
-	historyDB = historydb.NewHistoryDB(db, db, nil)
+	historyDB = historydb.NewHistoryDB(db, nil)
 	// Run tests
 	result := m.Run()
 	// Close DB
@@ -60,10 +58,10 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
 
 	AddToken(1)
 	AddToken(2)
-	CreateAccountDeposit(1) A: 20000
-	CreateAccountDeposit(2) A: 20000
-	CreateAccountDeposit(1) B: 10000
-	CreateAccountDeposit(2) B: 10000
+	CreateAccountDeposit(1) A: 2000
+	CreateAccountDeposit(2) A: 2000
+	CreateAccountDeposit(1) B: 1000
+	CreateAccountDeposit(2) B: 1000
 	> batchL1
 	> batchL1
 	> block
@@ -84,23 +82,15 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
-	for i := range blocks {
-		block := &blocks[i]
-		for j := range block.Rollup.AddedTokens {
-			token := &block.Rollup.AddedTokens[j]
-			token.Name = fmt.Sprintf("Token %d", token.TokenID)
-			token.Symbol = fmt.Sprintf("TK%d", token.TokenID)
-			token.Decimals = decimals
-		}
-	}
 
 	tokens = make(map[common.TokenID]historydb.TokenWithUSD)
-	// tokensValue = make(map[common.TokenID]float64)
+	tokensValue = make(map[common.TokenID]float64)
 	accs = make(map[common.Idx]common.Account)
+	value := 5 * 5.389329
 	now := time.Now().UTC()
 	// Add all blocks except for the last one
 	for i := range blocks[:len(blocks)-1] {
-		if err := historyDB.AddBlockSCData(&blocks[i]); err != nil {
+		err = historyDB.AddBlockSCData(&blocks[i])
+		if err != nil {
 			return tracerr.Wrap(err)
 		}
 		for _, batch := range blocks[i].Rollup.Batches {
@@ -116,38 +106,39 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
 				Name: token.Name,
 				Symbol: token.Symbol,
 				Decimals: token.Decimals,
-				USD: &tokenValue,
-				USDUpdate: &now,
 			}
+			tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
+			readToken.USDUpdate = &now
+			readToken.USD = &value
 			tokens[token.TokenID] = readToken
-			// Set value to the tokens
-			err := historyDB.UpdateTokenValue(readToken.EthAddr, *readToken.USD)
-			if err != nil {
-				return tracerr.Wrap(err)
-			}
+		}
+		// Set value to the tokens (tokens have no symbol)
+		tokenSymbol := ""
+		err := historyDB.UpdateTokenValue(tokenSymbol, value)
+		if err != nil {
+			return tracerr.Wrap(err)
 		}
 	}
 	return nil
 }
 
 func generatePoolL2Txs() ([]common.PoolL2Tx, error) {
-	// Fee = 126 corresponds to ~10%
 	setPool := `
 	Type: PoolL2
-	PoolTransfer(1) A-B: 6000 (126)
-	PoolTransfer(2) A-B: 3000 (126)
-	PoolTransfer(1) B-A: 5000 (126)
-	PoolTransfer(2) B-A: 10000 (126)
-	PoolTransfer(1) A-B: 7000 (126)
-	PoolTransfer(2) A-B: 2000 (126)
-	PoolTransfer(1) B-A: 8000 (126)
-	PoolTransfer(2) B-A: 1000 (126)
-	PoolTransfer(1) A-B: 3000 (126)
-	PoolTransferToEthAddr(2) B-A: 5000 (126)
-	PoolTransferToBJJ(2) B-A: 5000 (126)
+	PoolTransfer(1) A-B: 6 (4)
+	PoolTransfer(2) A-B: 3 (1)
+	PoolTransfer(1) B-A: 5 (2)
+	PoolTransfer(2) B-A: 10 (3)
+	PoolTransfer(1) A-B: 7 (2)
+	PoolTransfer(2) A-B: 2 (1)
+	PoolTransfer(1) B-A: 8 (2)
+	PoolTransfer(2) B-A: 1 (1)
+	PoolTransfer(1) A-B: 3 (1)
+	PoolTransferToEthAddr(2) B-A: 5 (2)
+	PoolTransferToBJJ(2) B-A: 5 (2)
 
-	PoolExit(1) A: 5000 (126)
-	PoolExit(2) B: 3000 (126)
+	PoolExit(1) A: 5 (2)
+	PoolExit(2) B: 3 (1)
 	`
 	poolL2Txs, err := tc.GeneratePoolL2Txs(setPool)
 	if err != nil {
@@ -162,74 +153,25 @@ func TestAddTxTest(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assertTx(t, &poolL2Txs[i], fetchedTx)
 		nameZone, offset := fetchedTx.Timestamp.Zone()
 		assert.Equal(t, "UTC", nameZone)
 		assert.Equal(t, 0, offset)
 	}
 }
 
-func TestAddTxAPI(t *testing.T) {
-	err := prepareHistoryDB(historyDB)
-	if err != nil {
-		log.Error("Error prepare historyDB", err)
-	}
-
-	oldMaxTxs := l2DBWithACC.maxTxs
-	// set max number of pending txs that can be kept in the pool to 5
-	l2DBWithACC.maxTxs = 5
-
-	poolL2Txs, err := generatePoolL2Txs()
-	txs := make([]*PoolL2TxWrite, len(poolL2Txs))
-	for i := range poolL2Txs {
-		txs[i] = NewPoolL2TxWriteFromPoolL2Tx(&poolL2Txs[i])
-	}
-	require.NoError(t, err)
-	require.GreaterOrEqual(t, len(poolL2Txs), 8)
-	for i := range txs[:5] {
-		err := l2DBWithACC.AddTxAPI(txs[i])
-		require.NoError(t, err)
-		fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID)
-		require.NoError(t, err)
-		assertTx(t, &poolL2Txs[i], fetchedTx)
-		nameZone, offset := fetchedTx.Timestamp.Zone()
-		assert.Equal(t, "UTC", nameZone)
-		assert.Equal(t, 0, offset)
-	}
-	err = l2DBWithACC.AddTxAPI(txs[5])
-	assert.Equal(t, errPoolFull, tracerr.Unwrap(err))
-	// reset maxTxs to original value
-	l2DBWithACC.maxTxs = oldMaxTxs
-
-	// set minFeeUSD to a high value than the tx feeUSD to test the error
-	// of inserting a tx with lower than min fee
-	oldMinFeeUSD := l2DBWithACC.minFeeUSD
-	tx := txs[5]
-	feeAmount, err := common.CalcFeeAmount(tx.Amount, tx.Fee)
-	require.NoError(t, err)
-	feeAmountUSD := common.TokensToUSD(feeAmount, decimals, tokenValue)
-	// set minFeeUSD higher than the tx fee to trigger the error
-	l2DBWithACC.minFeeUSD = feeAmountUSD + 1
-	err = l2DBWithACC.AddTxAPI(tx)
-	require.Error(t, err)
-	assert.Regexp(t, "tx.feeUSD (.*) < minFeeUSD (.*)", err.Error())
-	// reset minFeeUSD to original value
-	l2DBWithACC.minFeeUSD = oldMinFeeUSD
-}
-
 func TestUpdateTxsInfo(t *testing.T) {
 	err := prepareHistoryDB(historyDB)
 	if err != nil {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
 		require.NoError(t, err)
@@ -243,7 +185,7 @@ func TestUpdateTxsInfo(t *testing.T) {
 
 	for i := range poolL2Txs {
 		fetchedTx, err := l2DB.GetTx(poolL2Txs[i].TxID)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Equal(t, "test", fetchedTx.Info)
 	}
 }
@@ -261,8 +203,9 @@ func assertTx(t *testing.T, expected, actual *common.PoolL2Tx) {
 		assert.Less(t, token.USDUpdate.Unix()-3, actual.AbsoluteFeeUpdate.Unix())
 		expected.AbsoluteFeeUpdate = actual.AbsoluteFeeUpdate
 		// Set expected fee
-		amountUSD := common.TokensToUSD(expected.Amount, token.Decimals, *token.USD)
-		expected.AbsoluteFee = amountUSD * expected.Fee.Percentage()
+		f := new(big.Float).SetInt(expected.Amount)
+		amountF, _ := f.Float64()
+		expected.AbsoluteFee = *token.USD * amountF * expected.Fee.Percentage()
 		test.AssertUSD(t, &expected.AbsoluteFee, &actual.AbsoluteFee)
 	}
 	assert.Equal(t, expected, actual)
@@ -287,28 +230,19 @@ func TestGetPending(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	var pendingTxs []*common.PoolL2Tx
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		pendingTxs = append(pendingTxs, &poolL2Txs[i])
 	}
 	fetchedTxs, err := l2DB.GetPendingTxs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	assert.Equal(t, len(pendingTxs), len(fetchedTxs))
 	for i := range fetchedTxs {
 		assertTx(t, pendingTxs[i], &fetchedTxs[i])
 	}
-	// Check AbsoluteFee amount
-	for i := range fetchedTxs {
-		tx := &fetchedTxs[i]
-		feeAmount, err := common.CalcFeeAmount(tx.Amount, tx.Fee)
-		require.NoError(t, err)
-		feeAmountUSD := common.TokensToUSD(feeAmount,
-			tokens[tx.TokenID].Decimals, *tokens[tx.TokenID].USD)
-		assert.InEpsilon(t, feeAmountUSD, tx.AbsoluteFee, 0.01)
-	}
 }
 
 func TestStartForging(t *testing.T) {
@@ -319,13 +253,13 @@ func TestStartForging(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	var startForgingTxIDs []common.TxID
 	randomizer := 0
 	// Add txs to DB
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
 			startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
 		}
@@ -333,11 +267,11 @@ func TestStartForging(t *testing.T) {
 	}
 	// Start forging txs
 	err = l2DB.StartForging(startForgingTxIDs, fakeBatchNum)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Fetch txs and check that they've been updated correctly
 	for _, id := range startForgingTxIDs {
 		fetchedTx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Equal(t, common.PoolL2TxStateForging, fetchedTx.State)
 		assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum)
 	}
@@ -351,13 +285,13 @@ func TestDoneForging(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	var startForgingTxIDs []common.TxID
 	randomizer := 0
 	// Add txs to DB
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
 			startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
 		}
@@ -365,7 +299,7 @@ func TestDoneForging(t *testing.T) {
 	}
 	// Start forging txs
 	err = l2DB.StartForging(startForgingTxIDs, fakeBatchNum)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	var doneForgingTxIDs []common.TxID
 	randomizer = 0
@@ -377,12 +311,12 @@ func TestDoneForging(t *testing.T) {
 	}
 	// Done forging txs
 	err = l2DB.DoneForging(doneForgingTxIDs, fakeBatchNum)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	// Fetch txs and check that they've been updated correctly
 	for _, id := range doneForgingTxIDs {
 		fetchedTx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Equal(t, common.PoolL2TxStateForged, fetchedTx.State)
 		assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum)
 	}
@@ -396,13 +330,13 @@ func TestInvalidate(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	var invalidTxIDs []common.TxID
 	randomizer := 0
 	// Add txs to DB
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		if poolL2Txs[i].State != common.PoolL2TxStateInvalid && randomizer%2 == 0 {
 			randomizer++
 			invalidTxIDs = append(invalidTxIDs, poolL2Txs[i].TxID)
@@ -410,11 +344,11 @@ func TestInvalidate(t *testing.T) {
 	}
 	// Invalidate txs
 	err = l2DB.InvalidateTxs(invalidTxIDs, fakeBatchNum)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Fetch txs and check that they've been updated correctly
 	for _, id := range invalidTxIDs {
 		fetchedTx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Equal(t, common.PoolL2TxStateInvalid, fetchedTx.State)
 		assert.Equal(t, &fakeBatchNum, fetchedTx.BatchNum)
 	}
@@ -428,7 +362,7 @@ func TestInvalidateOldNonces(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Update Accounts currentNonce
 	var updateAccounts []common.IdxNonce
 	var currentNonce = common.Nonce(1)
@@ -445,13 +379,13 @@ func TestInvalidateOldNonces(t *testing.T) {
 			invalidTxIDs = append(invalidTxIDs, poolL2Txs[i].TxID)
 		}
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 	}
 	// sanity check
 	require.Greater(t, len(invalidTxIDs), 0)
 
 	err = l2DB.InvalidateOldNonces(updateAccounts, fakeBatchNum)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Fetch txs and check that they've been updated correctly
 	for _, id := range invalidTxIDs {
 		fetchedTx, err := l2DBWithACC.GetTxAPI(id)
@@ -473,7 +407,7 @@ func TestReorg(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	reorgedTxIDs := []common.TxID{}
 	nonReorgedTxIDs := []common.TxID{}
@@ -484,7 +418,7 @@ func TestReorg(t *testing.T) {
 	// Add txs to DB
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
 			startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
 			allTxRandomize = append(allTxRandomize, poolL2Txs[i].TxID)
@@ -496,7 +430,7 @@ func TestReorg(t *testing.T) {
 	}
 	// Start forging txs
 	err = l2DB.StartForging(startForgingTxIDs, lastValidBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	var doneForgingTxIDs []common.TxID
 	randomizer = 0
@@ -521,22 +455,22 @@ func TestReorg(t *testing.T) {
 
 	// Invalidate txs BEFORE reorgBatch --> nonReorg
 	err = l2DB.InvalidateTxs(invalidTxIDs, lastValidBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Done forging txs in reorgBatch --> Reorg
 	err = l2DB.DoneForging(doneForgingTxIDs, reorgBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	err = l2DB.Reorg(lastValidBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	for _, id := range reorgedTxIDs {
 		tx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Nil(t, tx.BatchNum)
 		assert.Equal(t, common.PoolL2TxStatePending, tx.State)
 	}
 	for _, id := range nonReorgedTxIDs {
 		fetchedTx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Equal(t, lastValidBatch, *fetchedTx.BatchNum)
 	}
 }
@@ -553,7 +487,7 @@ func TestReorg2(t *testing.T) {
 		log.Error("Error prepare historyDB", err)
 	}
 	poolL2Txs, err := generatePoolL2Txs()
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	reorgedTxIDs := []common.TxID{}
 	nonReorgedTxIDs := []common.TxID{}
@@ -564,7 +498,7 @@ func TestReorg2(t *testing.T) {
 	// Add txs to DB
 	for i := range poolL2Txs {
 		err := l2DB.AddTxTest(&poolL2Txs[i])
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		if poolL2Txs[i].State == common.PoolL2TxStatePending && randomizer%2 == 0 {
 			startForgingTxIDs = append(startForgingTxIDs, poolL2Txs[i].TxID)
 			allTxRandomize = append(allTxRandomize, poolL2Txs[i].TxID)
@@ -576,7 +510,7 @@ func TestReorg2(t *testing.T) {
 	}
 	// Start forging txs
 	err = l2DB.StartForging(startForgingTxIDs, lastValidBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	var doneForgingTxIDs []common.TxID
 	randomizer = 0
@@ -598,22 +532,22 @@ func TestReorg2(t *testing.T) {
 	}
 	// Done forging txs BEFORE reorgBatch --> nonReorg
 	err = l2DB.DoneForging(doneForgingTxIDs, lastValidBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	// Invalidate txs in reorgBatch --> Reorg
 	err = l2DB.InvalidateTxs(invalidTxIDs, reorgBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 
 	err = l2DB.Reorg(lastValidBatch)
-	require.NoError(t, err)
+	assert.NoError(t, err)
 	for _, id := range reorgedTxIDs {
 		tx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Nil(t, tx.BatchNum)
 		assert.Equal(t, common.PoolL2TxStatePending, tx.State)
 	}
 	for _, id := range nonReorgedTxIDs {
 		fetchedTx, err := l2DBWithACC.GetTxAPI(id)
-		require.NoError(t, err)
+		assert.NoError(t, err)
 		assert.Equal(t, lastValidBatch, *fetchedTx.BatchNum)
 	}
 }
@@ -629,7 +563,7 @@ func TestPurge(t *testing.T) {
|
|||||||
var poolL2Tx []common.PoolL2Tx
|
var poolL2Tx []common.PoolL2Tx
|
||||||
for i := 0; i < generateTx; i++ {
|
for i := 0; i < generateTx; i++ {
|
||||||
poolL2TxAux, err := generatePoolL2Txs()
|
poolL2TxAux, err := generatePoolL2Txs()
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
poolL2Tx = append(poolL2Tx, poolL2TxAux...)
|
poolL2Tx = append(poolL2Tx, poolL2TxAux...)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -656,39 +590,39 @@ func TestPurge(t *testing.T) {
|
|||||||
deletedIDs = append(deletedIDs, poolL2Tx[i].TxID)
|
deletedIDs = append(deletedIDs, poolL2Tx[i].TxID)
|
||||||
}
|
}
|
||||||
err := l2DB.AddTxTest(&tx)
|
err := l2DB.AddTxTest(&tx)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
}
|
}
|
||||||
// Set batchNum keeped txs
|
// Set batchNum keeped txs
|
||||||
for i := range keepedIDs {
|
for i := range keepedIDs {
|
||||||
_, err = l2DB.dbWrite.Exec(
|
_, err = l2DB.db.Exec(
|
||||||
"UPDATE tx_pool SET batch_num = $1 WHERE tx_id = $2;",
|
"UPDATE tx_pool SET batch_num = $1 WHERE tx_id = $2;",
|
||||||
safeBatchNum, keepedIDs[i],
|
safeBatchNum, keepedIDs[i],
|
||||||
)
|
)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
}
|
}
|
||||||
// Start forging txs and set batchNum
|
// Start forging txs and set batchNum
|
||||||
err = l2DB.StartForging(doneForgingTxIDs, toDeleteBatchNum)
|
err = l2DB.StartForging(doneForgingTxIDs, toDeleteBatchNum)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
// Done forging txs and set batchNum
|
// Done forging txs and set batchNum
|
||||||
err = l2DB.DoneForging(doneForgingTxIDs, toDeleteBatchNum)
|
err = l2DB.DoneForging(doneForgingTxIDs, toDeleteBatchNum)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
// Invalidate txs and set batchNum
|
// Invalidate txs and set batchNum
|
||||||
err = l2DB.InvalidateTxs(invalidTxIDs, toDeleteBatchNum)
|
err = l2DB.InvalidateTxs(invalidTxIDs, toDeleteBatchNum)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
// Update timestamp of afterTTL txs
|
// Update timestamp of afterTTL txs
|
||||||
deleteTimestamp := time.Unix(time.Now().UTC().Unix()-int64(l2DB.ttl.Seconds()+float64(4*time.Second)), 0)
|
deleteTimestamp := time.Unix(time.Now().UTC().Unix()-int64(l2DB.ttl.Seconds()+float64(4*time.Second)), 0)
|
||||||
for _, id := range afterTTLIDs {
|
for _, id := range afterTTLIDs {
|
||||||
// Set timestamp
|
// Set timestamp
|
||||||
_, err = l2DB.dbWrite.Exec(
|
_, err = l2DB.db.Exec(
|
||||||
"UPDATE tx_pool SET timestamp = $1, state = $2 WHERE tx_id = $3;",
|
"UPDATE tx_pool SET timestamp = $1, state = $2 WHERE tx_id = $3;",
|
||||||
deleteTimestamp, common.PoolL2TxStatePending, id,
|
deleteTimestamp, common.PoolL2TxStatePending, id,
|
||||||
)
|
)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
}
|
}
|
||||||
|
|
||||||
// Purge txs
|
// Purge txs
|
||||||
err = l2DB.Purge(safeBatchNum)
|
err = l2DB.Purge(safeBatchNum)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
// Check results
|
// Check results
|
||||||
for _, id := range deletedIDs {
|
for _, id := range deletedIDs {
|
||||||
_, err := l2DB.GetTx(id)
|
_, err := l2DB.GetTx(id)
|
||||||
@@ -696,7 +630,7 @@ func TestPurge(t *testing.T) {
|
|||||||
}
|
}
|
||||||
for _, id := range keepedIDs {
|
for _, id := range keepedIDs {
|
||||||
_, err := l2DB.GetTx(id)
|
_, err := l2DB.GetTx(id)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -710,47 +644,10 @@ func TestAuth(t *testing.T) {
|
|||||||
for i := 0; i < len(auths); i++ {
|
for i := 0; i < len(auths); i++ {
|
||||||
// Add to the DB
|
// Add to the DB
|
||||||
err := l2DB.AddAccountCreationAuth(auths[i])
|
err := l2DB.AddAccountCreationAuth(auths[i])
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
// Fetch from DB
|
// Fetch from DB
|
||||||
auth, err := l2DB.GetAccountCreationAuth(auths[i].EthAddr)
|
auth, err := l2DB.GetAccountCreationAuth(auths[i].EthAddr)
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
// Check fetched vs generated
|
|
||||||
assert.Equal(t, auths[i].EthAddr, auth.EthAddr)
|
|
||||||
assert.Equal(t, auths[i].BJJ, auth.BJJ)
|
|
||||||
assert.Equal(t, auths[i].Signature, auth.Signature)
|
|
||||||
assert.Equal(t, auths[i].Timestamp.Unix(), auths[i].Timestamp.Unix())
|
|
||||||
nameZone, offset := auths[i].Timestamp.Zone()
|
|
||||||
assert.Equal(t, "UTC", nameZone)
|
|
||||||
assert.Equal(t, 0, offset)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestManyAuth(t *testing.T) {
|
|
||||||
test.WipeDB(l2DB.DB())
|
|
||||||
const nAuths = 5
|
|
||||||
chainID := uint16(0)
|
|
||||||
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
|
|
||||||
// Generate authorizations
|
|
||||||
genAuths := test.GenAuths(nAuths, chainID, hermezContractAddr)
|
|
||||||
auths := make([]common.AccountCreationAuth, len(genAuths))
|
|
||||||
// Convert to a non-pointer slice
|
|
||||||
for i := 0; i < len(genAuths); i++ {
|
|
||||||
auths[i] = *genAuths[i]
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add a duplicate one to check the not exist condition
|
|
||||||
err := l2DB.AddAccountCreationAuth(genAuths[0])
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
// Add to the DB
|
|
||||||
err = l2DB.AddManyAccountCreationAuth(auths)
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
// Assert the result
|
|
||||||
for i := 0; i < len(auths); i++ {
|
|
||||||
// Fetch from DB
|
|
||||||
auth, err := l2DB.GetAccountCreationAuth(auths[i].EthAddr)
|
|
||||||
require.NoError(t, err)
|
|
||||||
// Check fetched vs generated
|
// Check fetched vs generated
|
||||||
assert.Equal(t, auths[i].EthAddr, auth.EthAddr)
|
assert.Equal(t, auths[i].EthAddr, auth.EthAddr)
|
||||||
assert.Equal(t, auths[i].BJJ, auth.BJJ)
|
assert.Equal(t, auths[i].BJJ, auth.BJJ)
|
||||||
@@ -768,7 +665,7 @@ func TestAddGet(t *testing.T) {
|
|||||||
log.Error("Error prepare historyDB", err)
|
log.Error("Error prepare historyDB", err)
|
||||||
}
|
}
|
||||||
poolL2Txs, err := generatePoolL2Txs()
|
poolL2Txs, err := generatePoolL2Txs()
|
||||||
require.NoError(t, err)
|
assert.NoError(t, err)
|
||||||
|
|
||||||
// We will work with only 3 txs
|
// We will work with only 3 txs
|
||||||
require.GreaterOrEqual(t, len(poolL2Txs), 3)
|
require.GreaterOrEqual(t, len(poolL2Txs), 3)
|
||||||
@@ -804,56 +701,3 @@ func TestAddGet(t *testing.T) {
|
|||||||
assert.Equal(t, txs[i], *dbTx)
|
assert.Equal(t, txs[i], *dbTx)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestPurgeByExternalDelete(t *testing.T) {
|
|
||||||
err := prepareHistoryDB(historyDB)
|
|
||||||
if err != nil {
|
|
||||||
log.Error("Error prepare historyDB", err)
|
|
||||||
}
|
|
||||||
txs, err := generatePoolL2Txs()
|
|
||||||
require.NoError(t, err)
|
|
||||||
|
|
||||||
// We will work with 8 txs
|
|
||||||
require.GreaterOrEqual(t, len(txs), 8)
|
|
||||||
txs = txs[:8]
|
|
||||||
for i := range txs {
|
|
||||||
require.NoError(t, l2DB.AddTxTest(&txs[i]))
|
|
||||||
}
|
|
||||||
|
|
||||||
// We will recreate this scenario:
|
|
||||||
// tx index, status , external_delete
|
|
||||||
// 0 , pending, false
|
|
||||||
// 1 , pending, false
|
|
||||||
// 2 , pending, true // will be deleted
|
|
||||||
// 3 , pending, true // will be deleted
|
|
||||||
// 4 , fging , false
|
|
||||||
// 5 , fging , false
|
|
||||||
// 6 , fging , true
|
|
||||||
// 7 , fging , true
|
|
||||||
|
|
||||||
require.NoError(t, l2DB.StartForging(
|
|
||||||
[]common.TxID{txs[4].TxID, txs[5].TxID, txs[6].TxID, txs[7].TxID},
|
|
||||||
1))
|
|
||||||
_, err = l2DB.dbWrite.Exec(
|
|
||||||
`UPDATE tx_pool SET external_delete = true WHERE
|
|
||||||
tx_id IN ($1, $2, $3, $4)
|
|
||||||
;`,
|
|
||||||
txs[2].TxID, txs[3].TxID, txs[6].TxID, txs[7].TxID,
|
|
||||||
)
|
|
||||||
require.NoError(t, err)
|
|
||||||
require.NoError(t, l2DB.PurgeByExternalDelete())
|
|
||||||
|
|
||||||
// Query txs that are have been not deleted
|
|
||||||
for _, i := range []int{0, 1, 4, 5, 6, 7} {
|
|
||||||
txID := txs[i].TxID
|
|
||||||
_, err := l2DB.GetTx(txID)
|
|
||||||
require.NoError(t, err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Query txs that have been deleted
|
|
||||||
for _, i := range []int{2, 3} {
|
|
||||||
txID := txs[i].TxID
|
|
||||||
_, err := l2DB.GetTx(txID)
|
|
||||||
require.Equal(t, sql.ErrNoRows, tracerr.Unwrap(err))
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|||||||
@@ -6,7 +6,7 @@ import (
 "time"

 ethCommon "github.com/ethereum/go-ethereum/common"
-"github.com/hermeznetwork/hermez-node/api/apitypes"
+"github.com/hermeznetwork/hermez-node/apitypes"
 "github.com/hermeznetwork/hermez-node/common"
 "github.com/iden3/go-iden3-crypto/babyjub"
 )
@@ -34,7 +34,6 @@ type PoolL2TxWrite struct {
 RqFee *common.FeeSelector `meddler:"rq_fee"`
 RqNonce *common.Nonce `meddler:"rq_nonce"`
 Type common.TxType `meddler:"tx_type"`
-ClientIP string `meddler:"client_ip"`
 }

 // PoolTxAPI represents a L2 Tx pool with extra metadata used by the API
@@ -95,6 +94,7 @@ func (tx PoolTxAPI) MarshalJSON() ([]byte, error) {
 "info": tx.Info,
 "signature": tx.Signature,
 "timestamp": tx.Timestamp,
+"batchNum": tx.BatchNum,
 "requestFromAccountIndex": tx.RqFromIdx,
 "requestToAccountIndex": tx.RqToIdx,
 "requestToHezEthereumAddress": tx.RqToEthAddr,
@@ -1,11 +1,5 @@
 -- +migrate Up

--- NOTE: We use "DECIMAL(78,0)" to encode go *big.Int types. All the *big.Int
--- that we deal with represent a value in the SNARK field, which is an integer
--- of 256 bits. `log(2**256, 10) = 77.06`: that is, a 256 bit number can have
--- at most 78 digits, so we use this value to specify the precision in the
--- PostgreSQL DECIMAL guaranteeing that we will never lose precision.
-
 -- History
 CREATE TABLE block (
 eth_block_num BIGINT PRIMARY KEY,
@@ -28,10 +22,10 @@ CREATE TABLE batch (
 forger_addr BYTEA NOT NULL, -- fake foreign key for coordinator
 fees_collected BYTEA NOT NULL,
 fee_idxs_coordinator BYTEA NOT NULL,
-state_root DECIMAL(78,0) NOT NULL,
+state_root BYTEA NOT NULL,
 num_accounts BIGINT NOT NULL,
 last_idx BIGINT NOT NULL,
-exit_root DECIMAL(78,0) NOT NULL,
+exit_root BYTEA NOT NULL,
 forge_l1_txs_num BIGINT,
 slot_num BIGINT NOT NULL,
 total_fees_usd NUMERIC
@@ -40,7 +34,7 @@ CREATE TABLE batch (
 CREATE TABLE bid (
 item_id SERIAL PRIMARY KEY,
 slot_num BIGINT NOT NULL,
-bid_value DECIMAL(78,0) NOT NULL,
+bid_value BYTEA NOT NULL,
 eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
 bidder_addr BYTEA NOT NULL -- fake foreign key for coordinator
 );
@@ -53,7 +47,7 @@ CREATE TABLE token (
 name VARCHAR(20) NOT NULL,
 symbol VARCHAR(10) NOT NULL,
 decimals INT NOT NULL,
-usd NUMERIC, -- value of a normalized token (1 token = 10^decimals units)
+usd NUMERIC,
 usd_update TIMESTAMP WITHOUT TIME ZONE
 );

@@ -106,21 +100,12 @@ CREATE TABLE account (
 eth_addr BYTEA NOT NULL
 );

-CREATE TABLE account_update (
-item_id SERIAL,
-eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
-batch_num BIGINT NOT NULL REFERENCES batch (batch_num) ON DELETE CASCADE,
-idx BIGINT NOT NULL REFERENCES account (idx) ON DELETE CASCADE,
-nonce BIGINT NOT NULL,
-balance DECIMAL(78,0) NOT NULL
-);
-
 CREATE TABLE exit_tree (
 item_id SERIAL PRIMARY KEY,
 batch_num BIGINT REFERENCES batch (batch_num) ON DELETE CASCADE,
 account_idx BIGINT REFERENCES account (idx) ON DELETE CASCADE,
 merkle_proof BYTEA NOT NULL,
-balance DECIMAL(78,0) NOT NULL,
+balance BYTEA NOT NULL,
 instant_withdrawn BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL,
 delayed_withdraw_request BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL,
 owner BYTEA,
@@ -170,7 +155,7 @@ CREATE TABLE tx (
 to_idx BIGINT NOT NULL,
 to_eth_addr BYTEA,
 to_bjj BYTEA,
-amount DECIMAL(78,0) NOT NULL,
+amount BYTEA NOT NULL,
 amount_success BOOLEAN NOT NULL DEFAULT true,
 amount_f NUMERIC NOT NULL,
 token_id INT NOT NULL REFERENCES token (token_id),
@@ -180,7 +165,7 @@ CREATE TABLE tx (
 -- L1
 to_forge_l1_txs_num BIGINT,
 user_origin BOOLEAN,
-deposit_amount DECIMAL(78,0),
+deposit_amount BYTEA,
 deposit_amount_success BOOLEAN NOT NULL DEFAULT true,
 deposit_amount_f NUMERIC,
 deposit_amount_usd NUMERIC,
@@ -550,7 +535,7 @@ FOR EACH ROW EXECUTE PROCEDURE forge_l1_user_txs();

 CREATE TABLE rollup_vars (
 eth_block_num BIGINT PRIMARY KEY REFERENCES block (eth_block_num) ON DELETE CASCADE,
-fee_add_token DECIMAL(78,0) NOT NULL,
+fee_add_token BYTEA NOT NULL,
 forge_l1_timeout BIGINT NOT NULL,
 withdrawal_delay BIGINT NOT NULL,
 buckets BYTEA NOT NULL,
@@ -562,7 +547,7 @@ CREATE TABLE bucket_update (
 eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
 num_bucket BIGINT NOT NULL,
 block_stamp BIGINT NOT NULL,
-withdrawals DECIMAL(78,0) NOT NULL
+withdrawals BYTEA NOT NULL
 );

 CREATE TABLE token_exchange (
@@ -578,7 +563,7 @@ CREATE TABLE escape_hatch_withdrawal (
 who_addr BYTEA NOT NULL,
 to_addr BYTEA NOT NULL,
 token_addr BYTEA NOT NULL,
-amount DECIMAL(78,0) NOT NULL
+amount BYTEA NOT NULL
 );

 CREATE TABLE auction_vars (
@@ -616,7 +601,7 @@ CREATE TABLE tx_pool (
 effective_to_eth_addr BYTEA,
 effective_to_bjj BYTEA,
 token_id INT NOT NULL REFERENCES token (token_id) ON DELETE CASCADE,
-amount DECIMAL(78,0) NOT NULL,
+amount BYTEA NOT NULL,
 amount_f NUMERIC NOT NULL,
 fee SMALLINT NOT NULL,
 nonce BIGINT NOT NULL,
@@ -630,12 +615,10 @@ CREATE TABLE tx_pool (
 rq_to_eth_addr BYTEA,
 rq_to_bjj BYTEA,
 rq_token_id INT,
-rq_amount DECIMAL(78,0),
+rq_amount BYTEA,
 rq_fee SMALLINT,
 rq_nonce BIGINT,
-tx_type VARCHAR(40) NOT NULL,
+tx_type VARCHAR(40) NOT NULL
-client_ip VARCHAR,
-external_delete BOOLEAN NOT NULL DEFAULT false
 );

 -- +migrate StatementBegin
@@ -667,57 +650,35 @@ CREATE TABLE account_creation_auth (
 timestamp TIMESTAMP WITHOUT TIME ZONE NOT NULL DEFAULT timezone('utc', now())
 );

-CREATE TABLE node_info (
-item_id SERIAL PRIMARY KEY,
-state BYTEA, -- object returned by GET /state
-config BYTEA, -- Node config
--- max_pool_txs BIGINT, -- L2DB config
--- min_fee NUMERIC, -- L2DB config
-constants BYTEA -- info of the network that is constant
-);
-INSERT INTO node_info(item_id) VALUES (1); -- Always have a single row that we will update
-
-CREATE VIEW account_state AS SELECT DISTINCT idx,
-first_value(nonce) OVER w AS nonce,
-first_value(balance) OVER w AS balance,
-first_value(eth_block_num) OVER w AS eth_block_num,
-first_value(batch_num) OVER w AS batch_num
-FROM account_update
-window w AS (partition by idx ORDER BY item_id desc);
-
 -- +migrate Down
--- triggers
+-- drop triggers
-DROP TRIGGER IF EXISTS trigger_token_usd_update ON token;
+DROP TRIGGER trigger_token_usd_update ON token;
-DROP TRIGGER IF EXISTS trigger_set_tx ON tx;
+DROP TRIGGER trigger_set_tx ON tx;
-DROP TRIGGER IF EXISTS trigger_forge_l1_txs ON batch;
+DROP TRIGGER trigger_forge_l1_txs ON batch;
-DROP TRIGGER IF EXISTS trigger_set_pool_tx ON tx_pool;
+DROP TRIGGER trigger_set_pool_tx ON tx_pool;
--- drop views IF EXISTS
+-- drop functions
-DROP VIEW IF EXISTS account_state;
+DROP FUNCTION hez_idx;
--- functions
+DROP FUNCTION set_token_usd_update;
-DROP FUNCTION IF EXISTS hez_idx;
+DROP FUNCTION fee_percentage;
-DROP FUNCTION IF EXISTS set_token_usd_update;
+DROP FUNCTION set_tx;
-DROP FUNCTION IF EXISTS fee_percentage;
+DROP FUNCTION forge_l1_user_txs;
-DROP FUNCTION IF EXISTS set_tx;
+DROP FUNCTION set_pool_tx;
-DROP FUNCTION IF EXISTS forge_l1_user_txs;
+-- drop tables
-DROP FUNCTION IF EXISTS set_pool_tx;
+DROP TABLE account_creation_auth;
--- drop tables IF EXISTS
+DROP TABLE tx_pool;
-DROP TABLE IF EXISTS node_info;
+DROP TABLE auction_vars;
-DROP TABLE IF EXISTS account_creation_auth;
+DROP TABLE rollup_vars;
-DROP TABLE IF EXISTS tx_pool;
+DROP TABLE escape_hatch_withdrawal;
-DROP TABLE IF EXISTS auction_vars;
+DROP TABLE bucket_update;
-DROP TABLE IF EXISTS rollup_vars;
+DROP TABLE token_exchange;
-DROP TABLE IF EXISTS escape_hatch_withdrawal;
+DROP TABLE wdelayer_vars;
-DROP TABLE IF EXISTS bucket_update;
+DROP TABLE tx;
-DROP TABLE IF EXISTS token_exchange;
+DROP TABLE exit_tree;
-DROP TABLE IF EXISTS wdelayer_vars;
+DROP TABLE account;
-DROP TABLE IF EXISTS tx;
+DROP TABLE token;
-DROP TABLE IF EXISTS exit_tree;
+DROP TABLE bid;
-DROP TABLE IF EXISTS account_update;
+DROP TABLE batch;
-DROP TABLE IF EXISTS account;
+DROP TABLE coordinator;
-DROP TABLE IF EXISTS token;
+DROP TABLE block;
-DROP TABLE IF EXISTS bid;
+-- drop sequences
-DROP TABLE IF EXISTS batch;
+DROP SEQUENCE tx_item_id;
-DROP TABLE IF EXISTS coordinator;
-DROP TABLE IF EXISTS block;
--- sequences
-DROP SEQUENCE IF EXISTS tx_item_id;
@@ -17,8 +17,7 @@ import (
 var (
 // ErrStateDBWithoutMT is used when a method that requires a MerkleTree
 // is called in a StateDB that does not have a MerkleTree defined
-ErrStateDBWithoutMT = errors.New(
-"Can not call method to use MerkleTree in a StateDB without MerkleTree")
+ErrStateDBWithoutMT = errors.New("Can not call method to use MerkleTree in a StateDB without MerkleTree")

 // ErrAccountAlreadyExists is used when CreateAccount is called and the
 // Account already exists
@@ -29,8 +28,7 @@ var (
 ErrIdxNotFound = errors.New("Idx can not be found")
 // ErrGetIdxNoCase is used when trying to get the Idx from EthAddr &
 // BJJ with not compatible combination
-ErrGetIdxNoCase = errors.New(
-"Can not get Idx due unexpected combination of ethereum Address & BabyJubJub PublicKey")
+ErrGetIdxNoCase = errors.New("Can not get Idx due unexpected combination of ethereum Address & BabyJubJub PublicKey")

 // PrefixKeyIdx is the key prefix for idx in the db
 PrefixKeyIdx = []byte("i:")
@@ -146,8 +144,7 @@ func NewStateDB(cfg Config) (*StateDB, error) {
 }
 }
 if cfg.Type == TypeTxSelector && cfg.NLevels != 0 {
-return nil, tracerr.Wrap(
-fmt.Errorf("invalid StateDB parameters: StateDB type==TypeStateDB can not have nLevels!=0"))
+return nil, tracerr.Wrap(fmt.Errorf("invalid StateDB parameters: StateDB type==TypeStateDB can not have nLevels!=0"))
 }

 return &StateDB{
@@ -227,12 +224,6 @@ func (s *StateDB) MakeCheckpoint() error {
 return s.db.MakeCheckpoint()
 }

-// DeleteOldCheckpoints deletes old checkpoints when there are more than
-// `cfg.keep` checkpoints
-func (s *StateDB) DeleteOldCheckpoints() error {
-return s.db.DeleteOldCheckpoints()
-}
-
 // CurrentBatch returns the current in-memory CurrentBatch of the StateDB.db
 func (s *StateDB) CurrentBatch() common.BatchNum {
 return s.db.CurrentBatch
@@ -284,7 +275,8 @@ func (s *StateDB) GetAccount(idx common.Idx) (*common.Account, error) {
 return GetAccountInTreeDB(s.db.DB(), idx)
 }

-func accountsIter(db db.Storage, fn func(a *common.Account) (bool, error)) error {
+// AccountsIter iterates over all the accounts in db, calling fn for each one
+func AccountsIter(db db.Storage, fn func(a *common.Account) (bool, error)) error {
 idxDB := db.WithPrefix(PrefixKeyIdx)
 if err := idxDB.Iterate(func(k []byte, v []byte) (bool, error) {
 idx, err := common.IdxFromBytes(k)
@@ -356,8 +348,7 @@ func GetAccountInTreeDB(sto db.Storage, idx common.Idx) (*common.Account, error)
 // CreateAccount creates a new Account in the StateDB for the given Idx. If
 // StateDB.MT==nil, MerkleTree is not affected, otherwise updates the
 // MerkleTree, returning a CircomProcessorProof.
-func (s *StateDB) CreateAccount(idx common.Idx, account *common.Account) (
-*merkletree.CircomProcessorProof, error) {
+func (s *StateDB) CreateAccount(idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
 cpp, err := CreateAccountInTreeDB(s.db.DB(), s.MT, idx, account)
 if err != nil {
 return cpp, tracerr.Wrap(err)
@@ -371,8 +362,7 @@ func (s *StateDB) CreateAccount(idx common.Idx, account *common.Account) (
 // from ExitTree. Creates a new Account in the StateDB for the given Idx. If
 // StateDB.MT==nil, MerkleTree is not affected, otherwise updates the
 // MerkleTree, returning a CircomProcessorProof.
-func CreateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx,
-account *common.Account) (*merkletree.CircomProcessorProof, error) {
+func CreateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
 // store at the DB the key: v, and value: leaf.Bytes()
 v, err := account.HashValue()
 if err != nil {
@@ -421,8 +411,7 @@ func CreateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common
 // UpdateAccount updates the Account in the StateDB for the given Idx. If
 // StateDB.mt==nil, MerkleTree is not affected, otherwise updates the
 // MerkleTree, returning a CircomProcessorProof.
-func (s *StateDB) UpdateAccount(idx common.Idx, account *common.Account) (
-*merkletree.CircomProcessorProof, error) {
+func (s *StateDB) UpdateAccount(idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
 return UpdateAccountInTreeDB(s.db.DB(), s.MT, idx, account)
 }

@@ -430,8 +419,7 @@ func (s *StateDB) UpdateAccount(idx common.Idx, account *common.Account) (
 // from ExitTree. Updates the Account in the StateDB for the given Idx. If
 // StateDB.mt==nil, MerkleTree is not affected, otherwise updates the
 // MerkleTree, returning a CircomProcessorProof.
-func UpdateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx,
-account *common.Account) (*merkletree.CircomProcessorProof, error) {
+func UpdateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
 // store at the DB the key: v, and value: account.Bytes()
 v, err := account.HashValue()
 if err != nil {
@@ -516,7 +504,7 @@ func (l *LocalStateDB) CheckpointExists(batchNum common.BatchNum) (bool, error)
 return l.db.CheckpointExists(batchNum)
 }

-// Reset performs a reset in the LocalStateDB. If fromSynchronizer is true, it
+// Reset performs a reset in the LocaStateDB. If fromSynchronizer is true, it
 // gets the state from LocalStateDB.synchronizerStateDB for the given batchNum.
 // If fromSynchronizer is false, get the state from LocalStateDB checkpoints.
 func (l *LocalStateDB) Reset(batchNum common.BatchNum, fromSynchronizer bool) error {
@@ -7,7 +7,6 @@ import (
|
|||||||
"math/big"
|
"math/big"
|
||||||
"os"
|
"os"
|
||||||
"strings"
|
"strings"
|
||||||
"sync"
|
|
||||||
"testing"
|
"testing"
|
||||||
|
|
||||||
ethCommon "github.com/ethereum/go-ethereum/common"
|
ethCommon "github.com/ethereum/go-ethereum/common"
|
||||||
@@ -23,8 +22,7 @@ import (
|
|||||||
|
|
||||||
func newAccount(t *testing.T, i int) *common.Account {
|
func newAccount(t *testing.T, i int) *common.Account {
|
||||||
var sk babyjub.PrivateKey
|
var sk babyjub.PrivateKey
|
||||||
_, err := hex.Decode(sk[:],
|
_, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
|
||||||
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
pk := sk.Public()
|
pk := sk.Public()
|
||||||
|
|
||||||
@@ -373,8 +371,7 @@ func TestCheckpoints(t *testing.T) {
 dirLocal, err := ioutil.TempDir("", "ldb")
 require.NoError(t, err)
 defer require.NoError(t, os.RemoveAll(dirLocal))
-ldb, err := NewLocalStateDB(Config{Path: dirLocal, Keep: 128, Type: TypeBatchBuilder,
-NLevels: 32}, sdb)
+ldb, err := NewLocalStateDB(Config{Path: dirLocal, Keep: 128, Type: TypeBatchBuilder, NLevels: 32}, sdb)
 require.NoError(t, err)
 
 // get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB)
@@ -395,8 +392,7 @@ func TestCheckpoints(t *testing.T) {
 dirLocal2, err := ioutil.TempDir("", "ldb2")
 require.NoError(t, err)
 defer require.NoError(t, os.RemoveAll(dirLocal2))
-ldb2, err := NewLocalStateDB(Config{Path: dirLocal2, Keep: 128, Type: TypeBatchBuilder,
-NLevels: 32}, sdb)
+ldb2, err := NewLocalStateDB(Config{Path: dirLocal2, Keep: 128, Type: TypeBatchBuilder, NLevels: 32}, sdb)
 require.NoError(t, err)
 
 // get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB)
@@ -475,8 +471,7 @@ func TestCheckAccountsTreeTestVectors(t *testing.T) {
 
 ay0 := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1))
 // test value from js version (compatibility-canary)
-assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
-(hex.EncodeToString(ay0.Bytes())))
+assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", (hex.EncodeToString(ay0.Bytes())))
 bjjPoint0Comp := babyjub.PackSignY(true, ay0)
 bjj0 := babyjub.PublicKeyComp(bjjPoint0Comp)
 
@@ -535,9 +530,7 @@ func TestCheckAccountsTreeTestVectors(t *testing.T) {
 require.NoError(t, err)
 }
 // root value generated by js version:
-assert.Equal(t,
-"13174362770971232417413036794215823584762073355951212910715422236001731746065",
-sdb.MT.Root().BigInt().String())
+assert.Equal(t, "17298264051379321456969039521810887093935433569451713402227686942080129181291", sdb.MT.Root().BigInt().String())
 }
 
 // TestListCheckpoints performs almost the same test than kvdb/kvdb_test.go
@@ -589,48 +582,6 @@ func TestDeleteOldCheckpoints(t *testing.T) {
 for i := 0; i < numCheckpoints; i++ {
 err = sdb.MakeCheckpoint()
 require.NoError(t, err)
-err := sdb.DeleteOldCheckpoints()
-require.NoError(t, err)
-checkpoints, err := sdb.db.ListCheckpoints()
-require.NoError(t, err)
-assert.LessOrEqual(t, len(checkpoints), keep)
-}
-}
-
-// TestConcurrentDeleteOldCheckpoints performs almost the same test than
-// kvdb/kvdb_test.go TestConcurrentDeleteOldCheckpoints, but over the StateDB
-func TestConcurrentDeleteOldCheckpoints(t *testing.T) {
-dir, err := ioutil.TempDir("", "tmpdb")
-require.NoError(t, err)
-defer require.NoError(t, os.RemoveAll(dir))
-
-keep := 16
-sdb, err := NewStateDB(Config{Path: dir, Keep: keep, Type: TypeSynchronizer, NLevels: 32})
-require.NoError(t, err)
-
-numCheckpoints := 32
-// do checkpoints and check that we never have more than `keep`
-// checkpoints
-for i := 0; i < numCheckpoints; i++ {
-err = sdb.MakeCheckpoint()
-require.NoError(t, err)
-wg := sync.WaitGroup{}
-n := 10
-wg.Add(n)
-for j := 0; j < n; j++ {
-go func() {
-err := sdb.DeleteOldCheckpoints()
-require.NoError(t, err)
-checkpoints, err := sdb.db.ListCheckpoints()
-require.NoError(t, err)
-assert.LessOrEqual(t, len(checkpoints), keep)
-wg.Done()
-}()
-_, err := sdb.db.ListCheckpoints()
-// only checking here for absence of errors, not the count of checkpoints
-require.NoError(t, err)
-}
-wg.Wait()
 checkpoints, err := sdb.db.ListCheckpoints()
 require.NoError(t, err)
 assert.LessOrEqual(t, len(checkpoints), keep)
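The hunk above removes test loops that assert a single invariant: after calling DeleteOldCheckpoints, at most `keep` checkpoints remain. A minimal toy sketch of that pruning invariant (integers stand in for checkpoints; this is not the kvdb implementation):

```go
package main

import "fmt"

// deleteOld mirrors the invariant the removed tests assert for
// DeleteOldCheckpoints: after pruning, at most `keep` checkpoints remain,
// dropping the oldest first.
func deleteOld(checkpoints []int, keep int) []int {
	if len(checkpoints) <= keep {
		return checkpoints
	}
	return checkpoints[len(checkpoints)-keep:]
}

func main() {
	keep := 16
	var cps []int
	for i := 0; i < 32; i++ { // numCheckpoints := 32, as in the test
		cps = append(cps, i)
		cps = deleteOld(cps, keep)
	}
	fmt.Println(len(cps) <= keep, cps[0]) // true 16: only the newest 16 remain
}
```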
@@ -18,8 +18,7 @@ func concatEthAddrTokenID(addr ethCommon.Address, tokenID common.TokenID) []byte
 b = append(b[:], tokenID.Bytes()[:]...)
 return b
 }
-func concatEthAddrBJJTokenID(addr ethCommon.Address, pk babyjub.PublicKeyComp,
-tokenID common.TokenID) []byte {
+func concatEthAddrBJJTokenID(addr ethCommon.Address, pk babyjub.PublicKeyComp, tokenID common.TokenID) []byte {
 pkComp := pk
 var b []byte
 b = append(b, addr.Bytes()...)
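The concat helpers above build KVDB keys by appending raw byte fields in a fixed order. A standalone sketch of that pattern, with plain byte slices standing in for ethCommon.Address and common.TokenID:

```go
package main

import "fmt"

// concatKey sketches how the statedb builds its lookup keys: the raw bytes of
// each field appended in order (address, then token id; the BJJ variant also
// inserts the compressed public key). Byte slices here are stand-ins for the
// real ethCommon.Address and common.TokenID types.
func concatKey(parts ...[]byte) []byte {
	var b []byte
	for _, p := range parts {
		b = append(b, p...)
	}
	return b
}

func main() {
	addr := make([]byte, 20)      // 20-byte Ethereum address
	tokenID := []byte{0, 0, 0, 1} // 4-byte token id
	fmt.Println(len(concatKey(addr, tokenID))) // 24
}
```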
@@ -33,8 +32,7 @@ func concatEthAddrBJJTokenID(addr ethCommon.Address, pk babyjub.PublicKeyComp,
 // - key: EthAddr & BabyJubJub PublicKey Compressed, value: idx
 // If Idx already exist for the given EthAddr & BJJ, the remaining Idx will be
 // always the smallest one.
-func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address,
-pk babyjub.PublicKeyComp, tokenID common.TokenID) error {
+func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address, pk babyjub.PublicKeyComp, tokenID common.TokenID) error {
 oldIdx, err := s.GetIdxByEthAddrBJJ(addr, pk, tokenID)
 if err == nil {
 // EthAddr & BJJ already have an Idx
@@ -42,8 +40,7 @@ func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address,
 // if new idx is smaller, store the new one
 // if new idx is bigger, don't store and return, as the used one will be the old
 if idx >= oldIdx {
-log.Debug("StateDB.setIdxByEthAddrBJJ: Idx not stored because there " +
-"already exist a smaller Idx for the given EthAddr & BJJ")
+log.Debug("StateDB.setIdxByEthAddrBJJ: Idx not stored because there already exist a smaller Idx for the given EthAddr & BJJ")
 return nil
 }
 }
@@ -83,8 +80,7 @@ func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address,
 // GetIdxByEthAddr returns the smallest Idx in the StateDB for the given
 // Ethereum Address. Will return common.Idx(0) and error in case that Idx is
 // not found in the StateDB.
-func (s *StateDB) GetIdxByEthAddr(addr ethCommon.Address, tokenID common.TokenID) (common.Idx,
-error) {
+func (s *StateDB) GetIdxByEthAddr(addr ethCommon.Address, tokenID common.TokenID) (common.Idx, error) {
 k := concatEthAddrTokenID(addr, tokenID)
 b, err := s.db.DB().Get(append(PrefixKeyAddr, k...))
 if err != nil {
@@ -120,22 +116,18 @@ func (s *StateDB) GetIdxByEthAddrBJJ(addr ethCommon.Address, pk babyjub.PublicKe
 return common.Idx(0), tracerr.Wrap(ErrIdxNotFound)
 } else if err != nil {
 return common.Idx(0),
-tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d",
-ErrIdxNotFound, addr.Hex(), pk, tokenID))
+tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d", ErrIdxNotFound, addr.Hex(), pk, tokenID))
 }
 idx, err := common.IdxFromBytes(b)
 if err != nil {
 return common.Idx(0),
-tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d",
-err, addr.Hex(), pk, tokenID))
+tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d", err, addr.Hex(), pk, tokenID))
 }
 return idx, nil
 }
 // rest of cases (included case ToEthAddr==0) are not possible
 return common.Idx(0),
-tracerr.Wrap(
-fmt.Errorf("GetIdxByEthAddrBJJ: Not found, %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d",
-ErrGetIdxNoCase, addr.Hex(), pk, tokenID))
+tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: Not found, %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d", ErrGetIdxNoCase, addr.Hex(), pk, tokenID))
 }
 
 // GetTokenIDsFromIdxs returns a map containing the common.TokenID with its
@@ -145,9 +137,7 @@ func (s *StateDB) GetTokenIDsFromIdxs(idxs []common.Idx) (map[common.TokenID]com
 for i := 0; i < len(idxs); i++ {
 a, err := s.GetAccount(idxs[i])
 if err != nil {
-return nil,
-tracerr.Wrap(fmt.Errorf("GetTokenIDsFromIdxs error on GetAccount with Idx==%d: %s",
-idxs[i], err.Error()))
+return nil, tracerr.Wrap(fmt.Errorf("GetTokenIDsFromIdxs error on GetAccount with Idx==%d: %s", idxs[i], err.Error()))
 }
 m[a.TokenID] = idxs[i]
 }

24  db/utils.go
@@ -13,9 +13,6 @@ import (
 "github.com/hermeznetwork/hermez-node/log"
 "github.com/hermeznetwork/tracerr"
 "github.com/jmoiron/sqlx"
-
-//nolint:errcheck // driver for postgres DB
-_ "github.com/lib/pq"
 migrate "github.com/rubenv/sql-migrate"
 "github.com/russross/meddler"
 "golang.org/x/sync/semaphore"
@@ -96,8 +93,8 @@ type APIConnectionController struct {
 timeout time.Duration
 }
 
-// NewAPIConnectionController initialize APIConnectionController
-func NewAPIConnectionController(maxConnections int, timeout time.Duration) *APIConnectionController {
+// NewAPICnnectionController initialize APIConnectionController
+func NewAPICnnectionController(maxConnections int, timeout time.Duration) *APIConnectionController {
 return &APIConnectionController{
 smphr: semaphore.NewWeighted(int64(maxConnections)),
 timeout: timeout,
@@ -168,11 +165,7 @@ func (b BigIntMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
 return tracerr.Wrap(fmt.Errorf("BigIntMeddler.PostRead: nil pointer"))
 }
 field := fieldPtr.(**big.Int)
-var ok bool
-*field, ok = new(big.Int).SetString(*ptr, 10)
-if !ok {
-return tracerr.Wrap(fmt.Errorf("big.Int.SetString failed on \"%v\"", *ptr))
-}
+*field = new(big.Int).SetBytes([]byte(*ptr))
 return nil
 }
 
@@ -180,7 +173,7 @@ func (b BigIntMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
 func (b BigIntMeddler) PreWrite(fieldPtr interface{}) (saveValue interface{}, err error) {
 field := fieldPtr.(*big.Int)
 
-return field.String(), nil
+return field.Bytes(), nil
 }
 
 // BigIntNullMeddler encodes or decodes the field value to or from JSON
@@ -205,12 +198,7 @@ func (b BigIntNullMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
 if ptr == nil {
 return tracerr.Wrap(fmt.Errorf("BigIntMeddler.PostRead: nil pointer"))
 }
-var ok bool
-*field, ok = new(big.Int).SetString(string(ptr), 10)
-if !ok {
-return tracerr.Wrap(fmt.Errorf("big.Int.SetString failed on \"%v\"", string(ptr)))
-}
-
+*field = new(big.Int).SetBytes(ptr)
 return nil
 }
 
@@ -220,7 +208,7 @@ func (b BigIntNullMeddler) PreWrite(fieldPtr interface{}) (saveValue interface{}
 if field == nil {
 return nil, nil
 }
-return field.String(), nil
+return field.Bytes(), nil
 }
 
 // SliceToSlicePtrs converts any []Foo to []*Foo
@@ -1,13 +1,9 @@
 package db
 
 import (
-"math/big"
-"os"
 "testing"
 
-"github.com/russross/meddler"
 "github.com/stretchr/testify/assert"
-"github.com/stretchr/testify/require"
 )
 
 type foo struct {
@@ -37,42 +33,3 @@ func TestSlicePtrsToSlice(t *testing.T) {
 assert.Equal(t, *a[i], b[i])
 }
 }
-
-func TestBigInt(t *testing.T) {
-pass := os.Getenv("POSTGRES_PASS")
-db, err := InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
-require.NoError(t, err)
-defer func() {
-_, err := db.Exec("DROP TABLE IF EXISTS test_big_int;")
-require.NoError(t, err)
-err = db.Close()
-require.NoError(t, err)
-}()
-
-_, err = db.Exec("DROP TABLE IF EXISTS test_big_int;")
-require.NoError(t, err)
-
-_, err = db.Exec(`CREATE TABLE test_big_int (
-item_id SERIAL PRIMARY KEY,
-value1 DECIMAL(78, 0) NOT NULL,
-value2 DECIMAL(78, 0),
-value3 DECIMAL(78, 0)
-);`)
-require.NoError(t, err)
-
-type Entry struct {
-ItemID int `meddler:"item_id"`
-Value1 *big.Int `meddler:"value1,bigint"`
-Value2 *big.Int `meddler:"value2,bigintnull"`
-Value3 *big.Int `meddler:"value3,bigintnull"`
-}
-
-entry := Entry{ItemID: 1, Value1: big.NewInt(1234567890), Value2: big.NewInt(9876543210), Value3: nil}
-err = meddler.Insert(db, "test_big_int", &entry)
-require.NoError(t, err)
-
-var dbEntry Entry
-err = meddler.QueryRow(db, &dbEntry, "SELECT * FROM test_big_int WHERE item_id = 1;")
-require.NoError(t, err)
-assert.Equal(t, entry, dbEntry)
-}

194  eth/auction.go
@@ -70,8 +70,7 @@ type AuctionEventInitialize struct {
 }
 
 // AuctionVariables returns the AuctionVariables from the initialize event
-func (ei *AuctionEventInitialize) AuctionVariables(
-InitialMinimalBidding *big.Int) *common.AuctionVariables {
+func (ei *AuctionEventInitialize) AuctionVariables(InitialMinimalBidding *big.Int) *common.AuctionVariables {
 return &common.AuctionVariables{
 EthBlockNum: 0,
 DonationAddress: ei.DonationAddress,
@@ -223,15 +222,12 @@ type AuctionInterface interface {
 AuctionGetAllocationRatio() ([3]uint16, error)
 AuctionSetDonationAddress(newDonationAddress ethCommon.Address) (*types.Transaction, error)
 AuctionGetDonationAddress() (*ethCommon.Address, error)
-AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address,
-newBootCoordinatorURL string) (*types.Transaction, error)
+AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address, newBootCoordinatorURL string) (*types.Transaction, error)
 AuctionGetBootCoordinator() (*ethCommon.Address, error)
-AuctionChangeDefaultSlotSetBid(slotSet int64,
-newInitialMinBid *big.Int) (*types.Transaction, error)
+AuctionChangeDefaultSlotSetBid(slotSet int64, newInitialMinBid *big.Int) (*types.Transaction, error)
 
 // Coordinator Management
-AuctionSetCoordinator(forger ethCommon.Address, coordinatorURL string) (*types.Transaction,
-error)
+AuctionSetCoordinator(forger ethCommon.Address, coordinatorURL string) (*types.Transaction, error)
 
 // Slot Info
 AuctionGetSlotNumber(blockNum int64) (int64, error)
@@ -241,8 +237,7 @@ type AuctionInterface interface {
 AuctionGetSlotSet(slot int64) (*big.Int, error)
 
 // Bidding
-AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int, deadline *big.Int) (
-tx *types.Transaction, err error)
+AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int, deadline *big.Int) (tx *types.Transaction, err error)
 AuctionMultiBid(amount *big.Int, startingSlot, endingSlot int64, slotSets [6]bool,
 maxBid, minBid, deadline *big.Int) (tx *types.Transaction, err error)
 
@@ -260,7 +255,7 @@ type AuctionInterface interface {
 
 AuctionConstants() (*common.AuctionConstants, error)
 AuctionEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*AuctionEvents, error)
-AuctionEventInit(genesisBlockNum int64) (*AuctionEventInitialize, int64, error)
+AuctionEventInit() (*AuctionEventInitialize, int64, error)
 }
 
 //
@@ -280,10 +275,8 @@ type AuctionClient struct {
 }
 
 // NewAuctionClient creates a new AuctionClient. `tokenAddress` is the address of the HEZ tokens.
-func NewAuctionClient(client *EthereumClient, address ethCommon.Address,
-tokenHEZCfg TokenConfig) (*AuctionClient, error) {
-contractAbi, err :=
-abi.JSON(strings.NewReader(string(HermezAuctionProtocol.HermezAuctionProtocolABI)))
+func NewAuctionClient(client *EthereumClient, address ethCommon.Address, tokenHEZCfg TokenConfig) (*AuctionClient, error) {
+contractAbi, err := abi.JSON(strings.NewReader(string(HermezAuctionProtocol.HermezAuctionProtocolABI)))
 if err != nil {
 return nil, tracerr.Wrap(err)
 }
@@ -338,8 +331,7 @@ func (c *AuctionClient) AuctionGetSlotDeadline() (slotDeadline uint8, err error)
 }
 
 // AuctionSetOpenAuctionSlots is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetOpenAuctionSlots(
-newOpenAuctionSlots uint16) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetOpenAuctionSlots(newOpenAuctionSlots uint16) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -363,8 +355,7 @@ func (c *AuctionClient) AuctionGetOpenAuctionSlots() (openAuctionSlots uint16, e
 }
 
 // AuctionSetClosedAuctionSlots is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetClosedAuctionSlots(
-newClosedAuctionSlots uint16) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetClosedAuctionSlots(newClosedAuctionSlots uint16) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -388,8 +379,7 @@ func (c *AuctionClient) AuctionGetClosedAuctionSlots() (closedAuctionSlots uint1
 }
 
 // AuctionSetOutbidding is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetOutbidding(newOutbidding uint16) (tx *types.Transaction,
-err error) {
+func (c *AuctionClient) AuctionSetOutbidding(newOutbidding uint16) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 12500000, //nolint:gomnd
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -413,8 +403,7 @@ func (c *AuctionClient) AuctionGetOutbidding() (outbidding uint16, err error) {
 }
 
 // AuctionSetAllocationRatio is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetAllocationRatio(
-newAllocationRatio [3]uint16) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetAllocationRatio(newAllocationRatio [3]uint16) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -438,8 +427,7 @@ func (c *AuctionClient) AuctionGetAllocationRatio() (allocationRation [3]uint16,
 }
 
 // AuctionSetDonationAddress is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetDonationAddress(
-newDonationAddress ethCommon.Address) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetDonationAddress(newDonationAddress ethCommon.Address) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -452,8 +440,7 @@ func (c *AuctionClient) AuctionSetDonationAddress(
 }
 
 // AuctionGetDonationAddress is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetDonationAddress() (donationAddress *ethCommon.Address,
-err error) {
+func (c *AuctionClient) AuctionGetDonationAddress() (donationAddress *ethCommon.Address, err error) {
 var _donationAddress ethCommon.Address
 if err := c.client.Call(func(ec *ethclient.Client) error {
 _donationAddress, err = c.auction.GetDonationAddress(c.opts)
@@ -465,13 +452,11 @@ func (c *AuctionClient) AuctionGetDonationAddress() (donationAddress *ethCommon.
 }
 
 // AuctionSetBootCoordinator is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address,
-newBootCoordinatorURL string) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address, newBootCoordinatorURL string) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
-return c.auction.SetBootCoordinator(auth, newBootCoordinator,
-newBootCoordinatorURL)
+return c.auction.SetBootCoordinator(auth, newBootCoordinator, newBootCoordinatorURL)
 },
 ); err != nil {
 return nil, tracerr.Wrap(fmt.Errorf("Failed setting bootCoordinator: %w", err))
@@ -480,8 +465,7 @@ func (c *AuctionClient) AuctionSetBootCoordinator(newBootCoordinator ethCommon.A
 }
 
 // AuctionGetBootCoordinator is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetBootCoordinator() (bootCoordinator *ethCommon.Address,
-err error) {
+func (c *AuctionClient) AuctionGetBootCoordinator() (bootCoordinator *ethCommon.Address, err error) {
 var _bootCoordinator ethCommon.Address
 if err := c.client.Call(func(ec *ethclient.Client) error {
 _bootCoordinator, err = c.auction.GetBootCoordinator(c.opts)
@@ -493,8 +477,7 @@ func (c *AuctionClient) AuctionGetBootCoordinator() (bootCoordinator *ethCommon.
 }
 
 // AuctionChangeDefaultSlotSetBid is the interface to call the smart contract function
-func (c *AuctionClient) AuctionChangeDefaultSlotSetBid(slotSet int64,
-newInitialMinBid *big.Int) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionChangeDefaultSlotSetBid(slotSet int64, newInitialMinBid *big.Int) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -508,8 +491,7 @@ func (c *AuctionClient) AuctionChangeDefaultSlotSetBid(slotSet int64,
 }
 
 // AuctionGetClaimableHEZ is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetClaimableHEZ(
-claimAddress ethCommon.Address) (claimableHEZ *big.Int, err error) {
+func (c *AuctionClient) AuctionGetClaimableHEZ(claimAddress ethCommon.Address) (claimableHEZ *big.Int, err error) {
 if err := c.client.Call(func(ec *ethclient.Client) error {
 claimableHEZ, err = c.auction.GetClaimableHEZ(c.opts, claimAddress)
 return tracerr.Wrap(err)
@@ -520,8 +502,7 @@ func (c *AuctionClient) AuctionGetClaimableHEZ(
 }
 
 // AuctionSetCoordinator is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetCoordinator(forger ethCommon.Address,
-coordinatorURL string) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetCoordinator(forger ethCommon.Address, coordinatorURL string) (tx *types.Transaction, err error) {
 if tx, err = c.client.CallAuth(
 0,
 func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -570,8 +551,7 @@ func (c *AuctionClient) AuctionGetSlotSet(slot int64) (slotSet *big.Int, err err
 }
 
 // AuctionGetDefaultSlotSetBid is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetDefaultSlotSetBid(slotSet uint8) (minBidSlotSet *big.Int,
-err error) {
+func (c *AuctionClient) AuctionGetDefaultSlotSetBid(slotSet uint8) (minBidSlotSet *big.Int, err error) {
 if err := c.client.Call(func(ec *ethclient.Client) error {
 minBidSlotSet, err = c.auction.GetDefaultSlotSetBid(c.opts, slotSet)
 return tracerr.Wrap(err)
@@ -594,8 +574,7 @@ func (c *AuctionClient) AuctionGetSlotNumber(blockNum int64) (slot int64, err er
|
|||||||
}
|
}
|
||||||
|
|
||||||
// AuctionBid is the interface to call the smart contract function
|
// AuctionBid is the interface to call the smart contract function
|
||||||
func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int,
|
func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int, deadline *big.Int) (tx *types.Transaction, err error) {
|
||||||
deadline *big.Int) (tx *types.Transaction, err error) {
|
|
||||||
if tx, err = c.client.CallAuth(
|
if tx, err = c.client.CallAuth(
|
||||||
0,
|
0,
|
||||||
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
|
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
|
||||||
@@ -607,8 +586,7 @@ func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.I
|
|||||||
}
|
}
|
||||||
tokenName := c.tokenHEZCfg.Name
|
tokenName := c.tokenHEZCfg.Name
|
||||||
tokenAddr := c.tokenHEZCfg.Address
|
tokenAddr := c.tokenHEZCfg.Address
|
||||||
digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
|
digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, amount, nonce, deadline, tokenName)
|
||||||
amount, nonce, deadline, tokenName)
|
|
||||||
signature, _ := c.client.ks.SignHash(*c.client.account, digest)
|
signature, _ := c.client.ks.SignHash(*c.client.account, digest)
|
||||||
permit := createPermit(owner, spender, amount, deadline, digest, signature)
|
permit := createPermit(owner, spender, amount, deadline, digest, signature)
|
||||||
_slot := big.NewInt(slot)
|
_slot := big.NewInt(slot)
|
||||||
@@ -621,8 +599,8 @@ func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.I
|
|||||||
}
|
}
|
||||||
|
|
||||||
// AuctionMultiBid is the interface to call the smart contract function
|
// AuctionMultiBid is the interface to call the smart contract function
|
||||||
func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlot int64,
|
func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlot int64, slotSets [6]bool,
|
||||||
slotSets [6]bool, maxBid, minBid, deadline *big.Int) (tx *types.Transaction, err error) {
|
maxBid, minBid, deadline *big.Int) (tx *types.Transaction, err error) {
|
||||||
if tx, err = c.client.CallAuth(
|
if tx, err = c.client.CallAuth(
|
||||||
1000000, //nolint:gomnd
|
1000000, //nolint:gomnd
|
||||||
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
|
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
|
||||||
@@ -635,14 +613,12 @@ func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlo
|
|||||||
tokenName := c.tokenHEZCfg.Name
|
tokenName := c.tokenHEZCfg.Name
|
||||||
tokenAddr := c.tokenHEZCfg.Address
|
tokenAddr := c.tokenHEZCfg.Address
|
||||||
|
|
||||||
digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
|
digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, amount, nonce, deadline, tokenName)
|
||||||
amount, nonce, deadline, tokenName)
|
|
||||||
signature, _ := c.client.ks.SignHash(*c.client.account, digest)
|
signature, _ := c.client.ks.SignHash(*c.client.account, digest)
|
||||||
permit := createPermit(owner, spender, amount, deadline, digest, signature)
|
permit := createPermit(owner, spender, amount, deadline, digest, signature)
|
||||||
_startingSlot := big.NewInt(startingSlot)
|
_startingSlot := big.NewInt(startingSlot)
|
||||||
_endingSlot := big.NewInt(endingSlot)
|
_endingSlot := big.NewInt(endingSlot)
|
||||||
return c.auction.ProcessMultiBid(auth, amount, _startingSlot, _endingSlot,
|
return c.auction.ProcessMultiBid(auth, amount, _startingSlot, _endingSlot, slotSets, maxBid, minBid, permit)
|
||||||
slotSets, maxBid, minBid, permit)
|
|
||||||
},
|
},
|
||||||
); err != nil {
|
); err != nil {
|
||||||
return nil, tracerr.Wrap(fmt.Errorf("Failed multibid: %w", err))
|
return nil, tracerr.Wrap(fmt.Errorf("Failed multibid: %w", err))
|
||||||
@@ -651,8 +627,7 @@ func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlo
|
|||||||
}
|
}
|
||||||
|
|
||||||
// AuctionCanForge is the interface to call the smart contract function
|
// AuctionCanForge is the interface to call the smart contract function
|
||||||
func (c *AuctionClient) AuctionCanForge(forger ethCommon.Address, blockNum int64) (canForge bool,
|
func (c *AuctionClient) AuctionCanForge(forger ethCommon.Address, blockNum int64) (canForge bool, err error) {
|
||||||
err error) {
|
|
||||||
if err := c.client.Call(func(ec *ethclient.Client) error {
|
if err := c.client.Call(func(ec *ethclient.Client) error {
|
||||||
canForge, err = c.auction.CanForge(c.opts, forger, big.NewInt(blockNum))
|
canForge, err = c.auction.CanForge(c.opts, forger, big.NewInt(blockNum))
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
@@ -705,8 +680,7 @@ func (c *AuctionClient) AuctionConstants() (auctionConstants *common.AuctionCons
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
auctionConstants.InitialMinimalBidding, err =
|
auctionConstants.InitialMinimalBidding, err = c.auction.INITIALMINIMALBIDDING(c.opts)
|
||||||
c.auction.INITIALMINIMALBIDDING(c.opts)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return tracerr.Wrap(err)
|
return tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
@@ -777,54 +751,37 @@ func (c *AuctionClient) AuctionVariables() (auctionVariables *common.AuctionVari
|
|||||||
}
|
}
|
||||||
|
|
||||||
var (
|
var (
|
||||||
logAuctionNewBid = crypto.Keccak256Hash([]byte(
|
logAuctionNewBid = crypto.Keccak256Hash([]byte("NewBid(uint128,uint128,address)"))
|
||||||
"NewBid(uint128,uint128,address)"))
|
logAuctionNewSlotDeadline = crypto.Keccak256Hash([]byte("NewSlotDeadline(uint8)"))
|
||||||
logAuctionNewSlotDeadline = crypto.Keccak256Hash([]byte(
|
logAuctionNewClosedAuctionSlots = crypto.Keccak256Hash([]byte("NewClosedAuctionSlots(uint16)"))
|
||||||
"NewSlotDeadline(uint8)"))
|
logAuctionNewOutbidding = crypto.Keccak256Hash([]byte("NewOutbidding(uint16)"))
|
||||||
logAuctionNewClosedAuctionSlots = crypto.Keccak256Hash([]byte(
|
logAuctionNewDonationAddress = crypto.Keccak256Hash([]byte("NewDonationAddress(address)"))
|
||||||
"NewClosedAuctionSlots(uint16)"))
|
logAuctionNewBootCoordinator = crypto.Keccak256Hash([]byte("NewBootCoordinator(address,string)"))
|
||||||
logAuctionNewOutbidding = crypto.Keccak256Hash([]byte(
|
logAuctionNewOpenAuctionSlots = crypto.Keccak256Hash([]byte("NewOpenAuctionSlots(uint16)"))
|
||||||
"NewOutbidding(uint16)"))
|
logAuctionNewAllocationRatio = crypto.Keccak256Hash([]byte("NewAllocationRatio(uint16[3])"))
|
||||||
logAuctionNewDonationAddress = crypto.Keccak256Hash([]byte(
|
logAuctionSetCoordinator = crypto.Keccak256Hash([]byte("SetCoordinator(address,address,string)"))
|
||||||
"NewDonationAddress(address)"))
|
logAuctionNewForgeAllocated = crypto.Keccak256Hash([]byte("NewForgeAllocated(address,address,uint128,uint128,uint128,uint128)"))
|
||||||
logAuctionNewBootCoordinator = crypto.Keccak256Hash([]byte(
|
logAuctionNewDefaultSlotSetBid = crypto.Keccak256Hash([]byte("NewDefaultSlotSetBid(uint128,uint128)"))
|
||||||
"NewBootCoordinator(address,string)"))
|
logAuctionNewForge = crypto.Keccak256Hash([]byte("NewForge(address,uint128)"))
|
||||||
logAuctionNewOpenAuctionSlots = crypto.Keccak256Hash([]byte(
|
logAuctionHEZClaimed = crypto.Keccak256Hash([]byte("HEZClaimed(address,uint128)"))
|
||||||
"NewOpenAuctionSlots(uint16)"))
|
logAuctionInitialize = crypto.Keccak256Hash([]byte(
|
||||||
logAuctionNewAllocationRatio = crypto.Keccak256Hash([]byte(
|
"InitializeHermezAuctionProtocolEvent(address,address,string,uint16,uint8,uint16,uint16,uint16[3])"))
|
||||||
"NewAllocationRatio(uint16[3])"))
|
|
||||||
logAuctionSetCoordinator = crypto.Keccak256Hash([]byte(
|
|
||||||
"SetCoordinator(address,address,string)"))
|
|
||||||
logAuctionNewForgeAllocated = crypto.Keccak256Hash([]byte(
|
|
||||||
"NewForgeAllocated(address,address,uint128,uint128,uint128,uint128)"))
|
|
||||||
logAuctionNewDefaultSlotSetBid = crypto.Keccak256Hash([]byte(
|
|
||||||
"NewDefaultSlotSetBid(uint128,uint128)"))
|
|
||||||
logAuctionNewForge = crypto.Keccak256Hash([]byte(
|
|
||||||
"NewForge(address,uint128)"))
|
|
||||||
logAuctionHEZClaimed = crypto.Keccak256Hash([]byte(
|
|
||||||
"HEZClaimed(address,uint128)"))
|
|
||||||
logAuctionInitialize = crypto.Keccak256Hash([]byte(
|
|
||||||
"InitializeHermezAuctionProtocolEvent(address,address,string," +
|
|
||||||
"uint16,uint8,uint16,uint16,uint16[3])"))
|
|
||||||
)
|
)
|
||||||
|
|
||||||
// AuctionEventInit returns the initialize event with its corresponding block number
|
// AuctionEventInit returns the initialize event with its corresponding block number
|
||||||
func (c *AuctionClient) AuctionEventInit(genesisBlockNum int64) (*AuctionEventInitialize, int64, error) {
|
func (c *AuctionClient) AuctionEventInit() (*AuctionEventInitialize, int64, error) {
|
||||||
query := ethereum.FilterQuery{
|
query := ethereum.FilterQuery{
|
||||||
Addresses: []ethCommon.Address{
|
Addresses: []ethCommon.Address{
|
||||||
c.address,
|
c.address,
|
||||||
},
|
},
|
||||||
FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
|
Topics: [][]ethCommon.Hash{{logAuctionInitialize}},
|
||||||
ToBlock: big.NewInt(genesisBlockNum),
|
|
||||||
Topics: [][]ethCommon.Hash{{logAuctionInitialize}},
|
|
||||||
}
|
}
|
||||||
logs, err := c.client.client.FilterLogs(context.Background(), query)
|
logs, err := c.client.client.FilterLogs(context.Background(), query)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, 0, tracerr.Wrap(err)
|
return nil, 0, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
if len(logs) != 1 {
|
if len(logs) != 1 {
|
||||||
return nil, 0,
|
return nil, 0, tracerr.Wrap(fmt.Errorf("no event of type InitializeHermezAuctionProtocolEvent found"))
|
||||||
tracerr.Wrap(fmt.Errorf("no event of type InitializeHermezAuctionProtocolEvent found"))
|
|
||||||
}
|
}
|
||||||
vLog := logs[0]
|
vLog := logs[0]
|
||||||
if vLog.Topics[0] != logAuctionInitialize {
|
if vLog.Topics[0] != logAuctionInitialize {
|
||||||
@@ -872,8 +829,7 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
|
|||||||
|
|
||||||
for _, vLog := range logs {
|
for _, vLog := range logs {
|
||||||
if blockHash != nil && vLog.BlockHash != *blockHash {
|
if blockHash != nil && vLog.BlockHash != *blockHash {
|
||||||
log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got",
|
log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got", vLog.BlockHash.String())
|
||||||
vLog.BlockHash.String())
|
|
||||||
return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
|
return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
|
||||||
}
|
}
|
||||||
switch vLog.Topics[0] {
|
switch vLog.Topics[0] {
|
||||||
@@ -884,8 +840,7 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
|
|||||||
Address ethCommon.Address
|
Address ethCommon.Address
|
||||||
}
|
}
|
||||||
var newBid AuctionEventNewBid
|
var newBid AuctionEventNewBid
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&auxNewBid, "NewBid",
|
if err := c.contractAbi.UnpackIntoInterface(&auxNewBid, "NewBid", vLog.Data); err != nil {
|
||||||
vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
newBid.BidAmount = auxNewBid.BidAmount
|
newBid.BidAmount = auxNewBid.BidAmount
|
||||||
@@ -894,60 +849,48 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
|
|||||||
auctionEvents.NewBid = append(auctionEvents.NewBid, newBid)
|
auctionEvents.NewBid = append(auctionEvents.NewBid, newBid)
|
||||||
case logAuctionNewSlotDeadline:
|
case logAuctionNewSlotDeadline:
|
||||||
var newSlotDeadline AuctionEventNewSlotDeadline
|
var newSlotDeadline AuctionEventNewSlotDeadline
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newSlotDeadline,
|
if err := c.contractAbi.UnpackIntoInterface(&newSlotDeadline, "NewSlotDeadline", vLog.Data); err != nil {
|
||||||
"NewSlotDeadline", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
auctionEvents.NewSlotDeadline = append(auctionEvents.NewSlotDeadline, newSlotDeadline)
|
auctionEvents.NewSlotDeadline = append(auctionEvents.NewSlotDeadline, newSlotDeadline)
|
||||||
case logAuctionNewClosedAuctionSlots:
|
case logAuctionNewClosedAuctionSlots:
|
||||||
var newClosedAuctionSlots AuctionEventNewClosedAuctionSlots
|
var newClosedAuctionSlots AuctionEventNewClosedAuctionSlots
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newClosedAuctionSlots,
|
if err := c.contractAbi.UnpackIntoInterface(&newClosedAuctionSlots, "NewClosedAuctionSlots", vLog.Data); err != nil {
|
||||||
"NewClosedAuctionSlots", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
auctionEvents.NewClosedAuctionSlots =
|
auctionEvents.NewClosedAuctionSlots = append(auctionEvents.NewClosedAuctionSlots, newClosedAuctionSlots)
|
||||||
append(auctionEvents.NewClosedAuctionSlots, newClosedAuctionSlots)
|
|
||||||
case logAuctionNewOutbidding:
|
case logAuctionNewOutbidding:
|
||||||
var newOutbidding AuctionEventNewOutbidding
|
var newOutbidding AuctionEventNewOutbidding
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newOutbidding, "NewOutbidding",
|
if err := c.contractAbi.UnpackIntoInterface(&newOutbidding, "NewOutbidding", vLog.Data); err != nil {
|
||||||
vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
auctionEvents.NewOutbidding = append(auctionEvents.NewOutbidding, newOutbidding)
|
auctionEvents.NewOutbidding = append(auctionEvents.NewOutbidding, newOutbidding)
|
||||||
case logAuctionNewDonationAddress:
|
case logAuctionNewDonationAddress:
|
||||||
var newDonationAddress AuctionEventNewDonationAddress
|
var newDonationAddress AuctionEventNewDonationAddress
|
||||||
newDonationAddress.NewDonationAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
newDonationAddress.NewDonationAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
||||||
auctionEvents.NewDonationAddress = append(auctionEvents.NewDonationAddress,
|
auctionEvents.NewDonationAddress = append(auctionEvents.NewDonationAddress, newDonationAddress)
|
||||||
newDonationAddress)
|
|
||||||
case logAuctionNewBootCoordinator:
|
case logAuctionNewBootCoordinator:
|
||||||
var newBootCoordinator AuctionEventNewBootCoordinator
|
var newBootCoordinator AuctionEventNewBootCoordinator
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newBootCoordinator,
|
if err := c.contractAbi.UnpackIntoInterface(&newBootCoordinator, "NewBootCoordinator", vLog.Data); err != nil {
|
||||||
"NewBootCoordinator", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
newBootCoordinator.NewBootCoordinator = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
newBootCoordinator.NewBootCoordinator = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
||||||
auctionEvents.NewBootCoordinator = append(auctionEvents.NewBootCoordinator,
|
auctionEvents.NewBootCoordinator = append(auctionEvents.NewBootCoordinator, newBootCoordinator)
|
||||||
newBootCoordinator)
|
|
||||||
case logAuctionNewOpenAuctionSlots:
|
case logAuctionNewOpenAuctionSlots:
|
||||||
var newOpenAuctionSlots AuctionEventNewOpenAuctionSlots
|
var newOpenAuctionSlots AuctionEventNewOpenAuctionSlots
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newOpenAuctionSlots,
|
if err := c.contractAbi.UnpackIntoInterface(&newOpenAuctionSlots, "NewOpenAuctionSlots", vLog.Data); err != nil {
|
||||||
"NewOpenAuctionSlots", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
auctionEvents.NewOpenAuctionSlots =
|
auctionEvents.NewOpenAuctionSlots = append(auctionEvents.NewOpenAuctionSlots, newOpenAuctionSlots)
|
||||||
append(auctionEvents.NewOpenAuctionSlots, newOpenAuctionSlots)
|
|
||||||
case logAuctionNewAllocationRatio:
|
case logAuctionNewAllocationRatio:
|
||||||
var newAllocationRatio AuctionEventNewAllocationRatio
|
var newAllocationRatio AuctionEventNewAllocationRatio
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newAllocationRatio,
|
if err := c.contractAbi.UnpackIntoInterface(&newAllocationRatio, "NewAllocationRatio", vLog.Data); err != nil {
|
||||||
"NewAllocationRatio", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
auctionEvents.NewAllocationRatio = append(auctionEvents.NewAllocationRatio,
|
auctionEvents.NewAllocationRatio = append(auctionEvents.NewAllocationRatio, newAllocationRatio)
|
||||||
newAllocationRatio)
|
|
||||||
case logAuctionSetCoordinator:
|
case logAuctionSetCoordinator:
|
||||||
var setCoordinator AuctionEventSetCoordinator
|
var setCoordinator AuctionEventSetCoordinator
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&setCoordinator,
|
if err := c.contractAbi.UnpackIntoInterface(&setCoordinator, "SetCoordinator", vLog.Data); err != nil {
|
||||||
"SetCoordinator", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
setCoordinator.BidderAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
setCoordinator.BidderAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
||||||
@@ -955,29 +898,25 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
|
|||||||
auctionEvents.SetCoordinator = append(auctionEvents.SetCoordinator, setCoordinator)
|
auctionEvents.SetCoordinator = append(auctionEvents.SetCoordinator, setCoordinator)
|
||||||
case logAuctionNewForgeAllocated:
|
case logAuctionNewForgeAllocated:
|
||||||
var newForgeAllocated AuctionEventNewForgeAllocated
|
var newForgeAllocated AuctionEventNewForgeAllocated
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&newForgeAllocated,
|
if err := c.contractAbi.UnpackIntoInterface(&newForgeAllocated, "NewForgeAllocated", vLog.Data); err != nil {
|
||||||
"NewForgeAllocated", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
newForgeAllocated.Bidder = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
newForgeAllocated.Bidder = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
||||||
newForgeAllocated.Forger = ethCommon.BytesToAddress(vLog.Topics[2].Bytes())
|
newForgeAllocated.Forger = ethCommon.BytesToAddress(vLog.Topics[2].Bytes())
|
||||||
newForgeAllocated.SlotToForge = new(big.Int).SetBytes(vLog.Topics[3][:]).Int64()
|
newForgeAllocated.SlotToForge = new(big.Int).SetBytes(vLog.Topics[3][:]).Int64()
|
||||||
auctionEvents.NewForgeAllocated = append(auctionEvents.NewForgeAllocated,
|
auctionEvents.NewForgeAllocated = append(auctionEvents.NewForgeAllocated, newForgeAllocated)
|
||||||
newForgeAllocated)
|
|
||||||
case logAuctionNewDefaultSlotSetBid:
|
case logAuctionNewDefaultSlotSetBid:
|
||||||
var auxNewDefaultSlotSetBid struct {
|
var auxNewDefaultSlotSetBid struct {
|
||||||
SlotSet *big.Int
|
SlotSet *big.Int
|
||||||
NewInitialMinBid *big.Int
|
NewInitialMinBid *big.Int
|
||||||
}
|
}
|
||||||
var newDefaultSlotSetBid AuctionEventNewDefaultSlotSetBid
|
var newDefaultSlotSetBid AuctionEventNewDefaultSlotSetBid
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&auxNewDefaultSlotSetBid,
|
if err := c.contractAbi.UnpackIntoInterface(&auxNewDefaultSlotSetBid, "NewDefaultSlotSetBid", vLog.Data); err != nil {
|
||||||
"NewDefaultSlotSetBid", vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
newDefaultSlotSetBid.NewInitialMinBid = auxNewDefaultSlotSetBid.NewInitialMinBid
|
newDefaultSlotSetBid.NewInitialMinBid = auxNewDefaultSlotSetBid.NewInitialMinBid
|
||||||
newDefaultSlotSetBid.SlotSet = auxNewDefaultSlotSetBid.SlotSet.Int64()
|
newDefaultSlotSetBid.SlotSet = auxNewDefaultSlotSetBid.SlotSet.Int64()
|
||||||
auctionEvents.NewDefaultSlotSetBid =
|
auctionEvents.NewDefaultSlotSetBid = append(auctionEvents.NewDefaultSlotSetBid, newDefaultSlotSetBid)
|
||||||
append(auctionEvents.NewDefaultSlotSetBid, newDefaultSlotSetBid)
|
|
||||||
case logAuctionNewForge:
|
case logAuctionNewForge:
|
||||||
var newForge AuctionEventNewForge
|
var newForge AuctionEventNewForge
|
||||||
newForge.Forger = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
newForge.Forger = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
||||||
@@ -985,8 +924,7 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
|
|||||||
auctionEvents.NewForge = append(auctionEvents.NewForge, newForge)
|
auctionEvents.NewForge = append(auctionEvents.NewForge, newForge)
|
||||||
case logAuctionHEZClaimed:
|
case logAuctionHEZClaimed:
|
||||||
var HEZClaimed AuctionEventHEZClaimed
|
var HEZClaimed AuctionEventHEZClaimed
|
||||||
if err := c.contractAbi.UnpackIntoInterface(&HEZClaimed, "HEZClaimed",
|
if err := c.contractAbi.UnpackIntoInterface(&HEZClaimed, "HEZClaimed", vLog.Data); err != nil {
|
||||||
vLog.Data); err != nil {
|
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
HEZClaimed.Owner = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
HEZClaimed.Owner = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
|
||||||
|
|||||||
@@ -28,7 +28,7 @@ func TestAuctionGetCurrentSlotNumber(t *testing.T) {
 }
 
 func TestAuctionEventInit(t *testing.T) {
-	auctionInit, blockNum, err := auctionClientTest.AuctionEventInit(genesisBlock)
+	auctionInit, blockNum, err := auctionClientTest.AuctionEventInit()
 	require.NoError(t, err)
 	assert.Equal(t, int64(18), blockNum)
 	assert.Equal(t, donationAddressConst, auctionInit.DonationAddress)
@@ -58,8 +58,7 @@ func TestAuctionConstants(t *testing.T) {
 func TestAuctionVariables(t *testing.T) {
 	INITMINBID := new(big.Int)
 	INITMINBID.SetString(minBidStr, 10)
-	defaultSlotSetBid := [6]*big.Int{INITMINBID, INITMINBID, INITMINBID, INITMINBID, INITMINBID,
-		INITMINBID}
+	defaultSlotSetBid := [6]*big.Int{INITMINBID, INITMINBID, INITMINBID, INITMINBID, INITMINBID, INITMINBID}
 
 	auctionVariables, err := auctionClientTest.AuctionVariables()
 	require.Nil(t, err)
@@ -133,8 +132,7 @@ func TestAuctionSetClosedAuctionSlots(t *testing.T) {
 	require.Nil(t, err)
 	auctionEvents, err := auctionClientTest.AuctionEventsByBlock(currentBlockNum, nil)
 	require.Nil(t, err)
-	assert.Equal(t, newClosedAuctionSlots,
-		auctionEvents.NewClosedAuctionSlots[0].NewClosedAuctionSlots)
+	assert.Equal(t, newClosedAuctionSlots, auctionEvents.NewClosedAuctionSlots[0].NewClosedAuctionSlots)
 	_, err = auctionClientTest.AuctionSetClosedAuctionSlots(closedAuctionSlots)
 	require.Nil(t, err)
 }
@@ -230,8 +228,7 @@ func TestAuctionSetBootCoordinator(t *testing.T) {
 	require.Nil(t, err)
 	assert.Equal(t, newBootCoordinator, auctionEvents.NewBootCoordinator[0].NewBootCoordinator)
 	assert.Equal(t, newBootCoordinatorURL, auctionEvents.NewBootCoordinator[0].NewBootCoordinatorURL)
-	_, err = auctionClientTest.AuctionSetBootCoordinator(bootCoordinatorAddressConst,
-		bootCoordinatorURL)
+	_, err = auctionClientTest.AuctionSetBootCoordinator(bootCoordinatorAddressConst, bootCoordinatorURL)
 	require.Nil(t, err)
 }
 
@@ -345,8 +342,7 @@ func TestAuctionMultiBid(t *testing.T) {
 	budget := new(big.Int)
 	budget.SetString("45200000000000000000", 10)
 	bidderAddress := governanceAddressConst
-	_, err = auctionClientTest.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
-		maxBid, minBid, deadline)
+	_, err = auctionClientTest.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet, maxBid, minBid, deadline)
 	require.Nil(t, err)
 	currentBlockNum, err := auctionClientTest.client.EthLastBlock()
 	require.Nil(t, err)
@@ -387,8 +383,7 @@ func TestAuctionClaimHEZ(t *testing.T) {
 }
 
 func TestAuctionForge(t *testing.T) {
-	auctionClientTestHermez, err := NewAuctionClient(ethereumClientHermez,
-		auctionTestAddressConst, tokenHEZ)
+	auctionClientTestHermez, err := NewAuctionClient(ethereumClientHermez, auctionTestAddressConst, tokenHEZ)
 	require.Nil(t, err)
 	slotConst := 4
 	blockNum := int64(int(blocksPerSlot)*slotConst + int(genesisBlock))
@@ -12,17 +12,6 @@ import (
 
 var errTODO = fmt.Errorf("TODO: Not implemented yet")
 
-const (
-	blocksPerDay = (3600 * 24) / 15
-)
-
-func max(x, y int64) int64 {
-	if x > y {
-		return x
-	}
-	return y
-}
-
 // ClientInterface is the eth Client interface used by hermez-node modules to
 // interact with Ethereum Blockchain and smart contracts.
 type ClientInterface interface {
@@ -75,19 +64,16 @@ type ClientConfig struct {
 }
 
 // NewClient creates a new Client to interact with Ethereum and the Hermez smart contracts.
-func NewClient(client *ethclient.Client, account *accounts.Account, ks *ethKeystore.KeyStore,
-	cfg *ClientConfig) (*Client, error) {
+func NewClient(client *ethclient.Client, account *accounts.Account, ks *ethKeystore.KeyStore, cfg *ClientConfig) (*Client, error) {
 	ethereumClient, err := NewEthereumClient(client, account, ks, &cfg.Ethereum)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
-	auctionClient, err := NewAuctionClient(ethereumClient, cfg.Auction.Address,
-		cfg.Auction.TokenHEZ)
+	auctionClient, err := NewAuctionClient(ethereumClient, cfg.Auction.Address, cfg.Auction.TokenHEZ)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
-	rollupClient, err := NewRollupClient(ethereumClient, cfg.Rollup.Address,
-		cfg.Auction.TokenHEZ)
+	rollupClient, err := NewRollupClient(ethereumClient, cfg.Rollup.Address, cfg.Auction.TokenHEZ)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
@@ -64,8 +64,7 @@ type EthereumConfig struct {
 	GasPriceDiv uint64
 }
 
-// EthereumClient is an ethereum client to call Smart Contract methods and check blockchain
-// information.
+// EthereumClient is an ethereum client to call Smart Contract methods and check blockchain information.
 type EthereumClient struct {
 	client  *ethclient.Client
 	chainID *big.Int
@@ -77,8 +76,7 @@ type EthereumClient struct {
 
 // NewEthereumClient creates a EthereumClient instance. The account is not mandatory (it can
 // be nil). If the account is nil, CallAuth will fail with ErrAccountNil.
-func NewEthereumClient(client *ethclient.Client, account *accounts.Account,
-	ks *ethKeystore.KeyStore, config *EthereumConfig) (*EthereumClient, error) {
+func NewEthereumClient(client *ethclient.Client, account *accounts.Account, ks *ethKeystore.KeyStore, config *EthereumConfig) (*EthereumClient, error) {
 	if config == nil {
 		config = &EthereumConfig{
 			CallGasLimit: defaultCallGasLimit,
@@ -168,8 +166,7 @@ func (c *EthereumClient) NewAuth() (*bind.TransactOpts, error) {
 // This call requires a valid account with Ether that can be spend during the
 // call.
 func (c *EthereumClient) CallAuth(gasLimit uint64,
-	fn func(*ethclient.Client, *bind.TransactOpts) (*types.Transaction, error)) (*types.Transaction,
-	error) {
+	fn func(*ethclient.Client, *bind.TransactOpts) (*types.Transaction, error)) (*types.Transaction, error) {
 	if c.account == nil {
 		return nil, tracerr.Wrap(ErrAccountNil)
 	}
@@ -215,8 +212,7 @@ func (c *EthereumClient) Call(fn func(*ethclient.Client) error) error {
 }
 
 // EthTransactionReceipt returns the transaction receipt of the given txHash
-func (c *EthereumClient) EthTransactionReceipt(ctx context.Context,
-	txHash ethCommon.Hash) (*types.Receipt, error) {
+func (c *EthereumClient) EthTransactionReceipt(ctx context.Context, txHash ethCommon.Hash) (*types.Receipt, error) {
 	return c.client.TransactionReceipt(ctx, txHash)
 }
 
@@ -232,28 +228,26 @@ func (c *EthereumClient) EthLastBlock() (int64, error) {
 }
 
 // EthHeaderByNumber internally calls ethclient.Client HeaderByNumber
-// func (c *EthereumClient) EthHeaderByNumber(ctx context.Context, number *big.Int) (*types.Header,
-// 	error) {
+// func (c *EthereumClient) EthHeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error) {
 // 	return c.client.HeaderByNumber(ctx, number)
 // }
 
 // EthBlockByNumber internally calls ethclient.Client BlockByNumber and returns
 // *common.Block. If number == -1, the latests known block is returned.
-func (c *EthereumClient) EthBlockByNumber(ctx context.Context, number int64) (*common.Block,
-	error) {
+func (c *EthereumClient) EthBlockByNumber(ctx context.Context, number int64) (*common.Block, error) {
 	blockNum := big.NewInt(number)
 	if number == -1 {
 		blockNum = nil
 	}
-	header, err := c.client.HeaderByNumber(ctx, blockNum)
+	block, err := c.client.BlockByNumber(ctx, blockNum)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
 	b := &common.Block{
-		Num:        header.Number.Int64(),
-		Timestamp:  time.Unix(int64(header.Time), 0),
-		ParentHash: header.ParentHash,
-		Hash:       header.Hash(),
+		Num:        block.Number().Int64(),
+		Timestamp:  time.Unix(int64(block.Time()), 0),
+		ParentHash: block.ParentHash(),
+		Hash:       block.Hash(),
 	}
 	return b, nil
 }
@@ -330,6 +324,5 @@ func (c *EthereumClient) EthCall(ctx context.Context, tx *types.Transaction,
 		Value: tx.Value(),
||||||
Data: tx.Data(),
|
Data: tx.Data(),
|
||||||
}
|
}
|
||||||
result, err := c.client.CallContract(ctx, msg, blockNum)
|
return c.client.CallContract(ctx, msg, blockNum)
|
||||||
return result, tracerr.Wrap(err)
|
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -14,8 +14,7 @@ import (
|
|||||||
func addBlock(url string) {
|
func addBlock(url string) {
|
||||||
method := "POST"
|
method := "POST"
|
||||||
|
|
||||||
payload := strings.NewReader(
|
payload := strings.NewReader("{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_mine\",\n \"params\":[],\n \"id\":1\n}")
|
||||||
"{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_mine\",\n \"params\":[],\n \"id\":1\n}")
|
|
||||||
|
|
||||||
client := &http.Client{}
|
client := &http.Client{}
|
||||||
req, err := http.NewRequest(method, url, payload)
|
req, err := http.NewRequest(method, url, payload)
|
||||||
@@ -46,9 +45,7 @@ func addTime(seconds float64, url string) {
|
|||||||
secondsStr := strconv.FormatFloat(seconds, 'E', -1, 32)
|
secondsStr := strconv.FormatFloat(seconds, 'E', -1, 32)
|
||||||
|
|
||||||
method := "POST"
|
method := "POST"
|
||||||
payload := strings.NewReader(
|
payload := strings.NewReader("{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_increaseTime\",\n \"params\":[" + secondsStr + "],\n \"id\":1\n}")
|
||||||
"{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_increaseTime\",\n \"params\":[" +
|
|
||||||
secondsStr + "],\n \"id\":1\n}")
|
|
||||||
|
|
||||||
client := &http.Client{}
|
client := &http.Client{}
|
||||||
req, err := http.NewRequest(method, url, payload)
|
req, err := http.NewRequest(method, url, payload)
|
||||||
@@ -69,16 +66,13 @@ func addTime(seconds float64, url string) {
|
|||||||
}()
|
}()
|
||||||
}
|
}
|
||||||
|
|
||||||
func createPermitDigest(tokenAddr, owner, spender ethCommon.Address, chainID, value, nonce,
|
func createPermitDigest(tokenAddr, owner, spender ethCommon.Address, chainID, value, nonce, deadline *big.Int, tokenName string) ([]byte, error) {
|
||||||
deadline *big.Int, tokenName string) ([]byte, error) {
|
|
||||||
// NOTE: We ignore hash.Write errors because we are writing to a memory
|
// NOTE: We ignore hash.Write errors because we are writing to a memory
|
||||||
// buffer and don't expect any errors to occur.
|
// buffer and don't expect any errors to occur.
|
||||||
abiPermit :=
|
abiPermit := []byte("Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)")
|
||||||
[]byte("Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)")
|
|
||||||
hashPermit := sha3.NewLegacyKeccak256()
|
hashPermit := sha3.NewLegacyKeccak256()
|
||||||
hashPermit.Write(abiPermit) //nolint:errcheck,gosec
|
hashPermit.Write(abiPermit) //nolint:errcheck,gosec
|
||||||
abiEIP712Domain :=
|
abiEIP712Domain := []byte("EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)")
|
||||||
[]byte("EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)")
|
|
||||||
hashEIP712Domain := sha3.NewLegacyKeccak256()
|
hashEIP712Domain := sha3.NewLegacyKeccak256()
|
||||||
hashEIP712Domain.Write(abiEIP712Domain) //nolint:errcheck,gosec
|
hashEIP712Domain.Write(abiEIP712Domain) //nolint:errcheck,gosec
|
||||||
var encodeBytes []byte
|
var encodeBytes []byte
|
||||||
@@ -130,8 +124,7 @@ func createPermitDigest(tokenAddr, owner, spender ethCommon.Address, chainID, va
|
|||||||
return hashBytes2.Sum(nil), nil
|
return hashBytes2.Sum(nil), nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func createPermit(owner, spender ethCommon.Address, amount, deadline *big.Int, digest,
|
func createPermit(owner, spender ethCommon.Address, amount, deadline *big.Int, digest, signature []byte) []byte {
|
||||||
signature []byte) []byte {
|
|
||||||
r := signature[0:32]
|
r := signature[0:32]
|
||||||
s := signature[32:64]
|
s := signature[32:64]
|
||||||
v := signature[64] + byte(27) //nolint:gomnd
|
v := signature[64] + byte(27) //nolint:gomnd
|
||||||
|
|||||||
@@ -26,8 +26,7 @@ var (
|
|||||||
mnemonic = "explain tackle mirror kit van hammer degree position ginger unfair soup bonus"
|
mnemonic = "explain tackle mirror kit van hammer degree position ginger unfair soup bonus"
|
||||||
)
|
)
|
||||||
|
|
||||||
func genAcc(w *hdwallet.Wallet, ks *keystore.KeyStore, i int) (*accounts.Account,
|
func genAcc(w *hdwallet.Wallet, ks *keystore.KeyStore, i int) (*accounts.Account, ethCommon.Address) {
|
||||||
ethCommon.Address) {
|
|
||||||
path := hdwallet.MustParseDerivationPath(fmt.Sprintf("m/44'/60'/0'/0/%d", i))
|
path := hdwallet.MustParseDerivationPath(fmt.Sprintf("m/44'/60'/0'/0/%d", i))
|
||||||
account, err := w.Derive(path, false)
|
account, err := w.Derive(path, false)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -112,9 +111,7 @@ func getEnvVariables() {
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatal(errEnvVar)
|
log.Fatal(errEnvVar)
|
||||||
}
|
}
|
||||||
if auctionAddressStr == "" || auctionTestAddressStr == "" || tokenHEZAddressStr == "" ||
|
if auctionAddressStr == "" || auctionTestAddressStr == "" || tokenHEZAddressStr == "" || hermezRollupAddressStr == "" || wdelayerAddressStr == "" || wdelayerTestAddressStr == "" || genesisBlockEnv == "" {
|
||||||
hermezRollupAddressStr == "" || wdelayerAddressStr == "" || wdelayerTestAddressStr == "" ||
|
|
||||||
genesisBlockEnv == "" {
|
|
||||||
log.Fatal(errEnvVar)
|
log.Fatal(errEnvVar)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -192,8 +189,7 @@ func TestMain(m *testing.M) {
|
|||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|
||||||
ethereumClientEmergencyCouncil, err = NewEthereumClient(ethClient,
|
ethereumClientEmergencyCouncil, err = NewEthereumClient(ethClient, emergencyCouncilAccount, ks, nil)
|
||||||
emergencyCouncilAccount, ks, nil)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
log.Fatal(err)
|
log.Fatal(err)
|
||||||
}
|
}
|
||||||
|
|||||||
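The `EthBlockByNumber` hunk above switches from `HeaderByNumber` to `BlockByNumber`, which turns direct field reads (`header.Number`, `header.Time`) into accessor calls (`block.Number()`, `block.Time()`) while still populating the same `common.Block` fields. A minimal stand-alone sketch of that mapping, using hypothetical stand-in types rather than go-ethereum's `types.Block` (whose `Number()` actually returns a `*big.Int`):

```go
package main

import (
	"fmt"
	"time"
)

// block is a hypothetical stand-in for go-ethereum's types.Block: fields are
// private and exposed through accessor methods, as in the diff.
type block struct {
	number     int64
	unixTime   uint64
	hash       string
	parentHash string
}

func (b *block) Number() int64      { return b.number }
func (b *block) Time() uint64       { return b.unixTime }
func (b *block) Hash() string       { return b.hash }
func (b *block) ParentHash() string { return b.parentHash }

// commonBlock mirrors the fields of common.Block that EthBlockByNumber fills.
type commonBlock struct {
	Num        int64
	Timestamp  time.Time
	Hash       string
	ParentHash string
}

// toCommonBlock maps accessor results into the flat struct; the block time is
// a Unix timestamp in seconds, hence time.Unix(..., 0) as in the diff.
func toCommonBlock(b *block) commonBlock {
	return commonBlock{
		Num:        b.Number(),
		Timestamp:  time.Unix(int64(b.Time()), 0),
		Hash:       b.Hash(),
		ParentHash: b.ParentHash(),
	}
}

func main() {
	b := &block{number: 12, unixTime: 1600000000, hash: "0xabc", parentHash: "0xdef"}
	cb := toCommonBlock(b)
	fmt.Println(cb.Num, cb.Timestamp.Unix()) // 12 1600000000
}
```

The accessor style means the conversion no longer depends on header internals, only on the block's public interface.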
eth/rollup.go (180 changed lines)
@@ -243,20 +243,13 @@ type RollupInterface interface {
 	// Public Functions
 
 	RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error)
-	RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
-		deadline *big.Int) (*types.Transaction, error)
+	RollupAddToken(tokenAddress ethCommon.Address, feeAddToken, deadline *big.Int) (*types.Transaction, error)
 
-	RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot,
-		idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction,
-		error)
-	RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32,
-		numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)
+	RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot, idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction, error)
+	RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)
 
-	RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int,
-		amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
-	RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
-		depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
-		deadline *big.Int) (tx *types.Transaction, err error)
+	RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
+	RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64, deadline *big.Int) (tx *types.Transaction, err error)
 
 	// Governance Public Functions
 	RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error)
@@ -273,7 +266,7 @@ type RollupInterface interface {
 	RollupConstants() (*common.RollupConstants, error)
 	RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
 	RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
-	RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error)
+	RollupEventInit() (*RollupEventInitialize, int64, error)
 }
 
 //
@@ -294,8 +287,7 @@ type RollupClient struct {
 }
 
 // NewRollupClient creates a new RollupClient
-func NewRollupClient(client *EthereumClient, address ethCommon.Address,
-	tokenHEZCfg TokenConfig) (*RollupClient, error) {
+func NewRollupClient(client *EthereumClient, address ethCommon.Address, tokenHEZCfg TokenConfig) (*RollupClient, error) {
 	contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI)))
 	if err != nil {
 		return nil, tracerr.Wrap(err)
@@ -331,12 +323,11 @@ func NewRollupClient(client *EthereumClient, address ethCommon.Address,
 }
 
 // RollupForgeBatch is the interface to call the smart contract function
-func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
-	auth *bind.TransactOpts) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs, auth *bind.TransactOpts) (tx *types.Transaction, err error) {
 	if auth == nil {
 		auth, err = c.client.NewAuth()
 		if err != nil {
-			return nil, tracerr.Wrap(err)
+			return nil, err
 		}
 		auth.GasLimit = 1000000
 	}
@@ -402,7 +393,7 @@ func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
 		l1CoordinatorBytes, l1l2TxData, feeIdxCoordinator, args.VerifierIdx, args.L1Batch,
 		args.ProofA, args.ProofB, args.ProofC)
 	if err != nil {
-		return nil, tracerr.Wrap(fmt.Errorf("Hermez.ForgeBatch: %w", err))
+		return nil, tracerr.Wrap(fmt.Errorf("Failed Hermez.ForgeBatch: %w", err))
 	}
 	return tx, nil
 }
@@ -410,8 +401,7 @@ func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
 // RollupAddToken is the interface to call the smart contract function.
 // `feeAddToken` is the amount of HEZ tokens that will be paid to add the
 // token. `feeAddToken` must match the public value of the smart contract.
-func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken,
-	deadline *big.Int) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken, deadline *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -423,11 +413,9 @@ func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToke
 			}
 			tokenName := c.tokenHEZCfg.Name
 			tokenAddr := c.tokenHEZCfg.Address
-			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
-				feeAddToken, nonce, deadline, tokenName)
+			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, feeAddToken, nonce, deadline, tokenName)
 			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
-			permit := createPermit(owner, spender, feeAddToken, deadline, digest,
-				signature)
+			permit := createPermit(owner, spender, feeAddToken, deadline, digest, signature)
 
 			return c.hermez.AddToken(auth, tokenAddress, permit)
 		},
@@ -438,9 +426,7 @@ func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToke
 }
 
 // RollupWithdrawMerkleProof is the interface to call the smart contract function
-func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32,
-	numExitRoot, idx int64, amount *big.Int, siblings []*big.Int,
-	instantWithdraw bool) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32, numExitRoot, idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -448,8 +434,7 @@ func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp,
 			babyPubKey := new(big.Int).SetBytes(pkCompB)
 			numExitRootB := uint32(numExitRoot)
 			idxBig := big.NewInt(idx)
-			return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey,
-				numExitRootB, siblings, idxBig, instantWithdraw)
+			return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey, numExitRootB, siblings, idxBig, instantWithdraw)
 		},
 	); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err))
@@ -458,17 +443,13 @@ func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp,
 }
 
 // RollupWithdrawCircuit is the interface to call the smart contract function
-func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int,
-	tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction,
-	error) {
+func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error) {
 	log.Error("TODO")
 	return nil, tracerr.Wrap(errTODO)
 }
 
 // RollupL1UserTxERC20ETH is the interface to call the smart contract function
-func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
-	depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction,
-	err error) {
+func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -503,9 +484,7 @@ func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fro
 }
 
 // RollupL1UserTxERC20Permit is the interface to call the smart contract function
-func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
-	depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
-	deadline *big.Int) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64, deadline *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -537,12 +516,11 @@ func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp,
 			}
 			tokenName := c.tokenHEZCfg.Name
 			tokenAddr := c.tokenHEZCfg.Address
-			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
-				amount, nonce, deadline, tokenName)
+			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, amount, nonce, deadline, tokenName)
 			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
 			permit := createPermit(owner, spender, amount, deadline, digest, signature)
-			return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig,
-				uint16(depositAmountF), uint16(amountF), tokenID, toIdxBig, permit)
+			return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig, uint16(depositAmountF),
+				uint16(amountF), tokenID, toIdxBig, permit)
 		},
 	); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20Permit: %w", err))
@@ -574,13 +552,11 @@ func (c *RollupClient) RollupLastForgedBatch() (lastForgedBatch int64, err error
 }
 
 // RollupUpdateForgeL1L2BatchTimeout is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
-	newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
-			return c.hermez.UpdateForgeL1L2BatchTimeout(auth,
-				uint8(newForgeL1L2BatchTimeout))
+			return c.hermez.UpdateForgeL1L2BatchTimeout(auth, uint8(newForgeL1L2BatchTimeout))
 		},
 	); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("Failed update ForgeL1L2BatchTimeout: %w", err))
@@ -589,8 +565,7 @@ func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
 }
 
 // RollupUpdateFeeAddToken is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction,
-	err error) {
+func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -625,8 +600,7 @@ func (c *RollupClient) RollupUpdateBucketsParameters(
 }
 
 // RollupUpdateTokenExchange is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address,
-	valueArray []uint64) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address, valueArray []uint64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -639,8 +613,7 @@ func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Addres
 }
 
 // RollupUpdateWithdrawalDelay is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction,
-	err error) {
+func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -666,8 +639,7 @@ func (c *RollupClient) RollupSafeMode() (tx *types.Transaction, err error) {
 }
 
 // RollupInstantWithdrawalViewer is the interface to call the smart contract function
-func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address,
-	amount *big.Int) (instantAllowed bool, err error) {
+func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address, amount *big.Int) (instantAllowed bool, err error) {
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		instantAllowed, err = c.hermez.InstantWithdrawalViewer(c.opts, tokenAddress, amount)
 		return tracerr.Wrap(err)
@@ -702,8 +674,7 @@ func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstant
 		}
 		newRollupVerifier.MaxTx = rollupVerifier.MaxTx.Int64()
 		newRollupVerifier.NLevels = rollupVerifier.NLevels.Int64()
-		rollupConstants.Verifiers = append(rollupConstants.Verifiers,
-			newRollupVerifier)
+		rollupConstants.Verifiers = append(rollupConstants.Verifiers, newRollupVerifier)
 	}
 	rollupConstants.HermezAuctionContract, err = c.hermez.HermezAuctionContract(c.opts)
 	if err != nil {
@@ -722,41 +693,28 @@ func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstant
 }
 
 var (
-	logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte(
-		"L1UserTxEvent(uint32,uint8,bytes)"))
-	logHermezAddToken = crypto.Keccak256Hash([]byte(
-		"AddToken(address,uint32)"))
-	logHermezForgeBatch = crypto.Keccak256Hash([]byte(
-		"ForgeBatch(uint32,uint16)"))
-	logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte(
-		"UpdateForgeL1L2BatchTimeout(uint8)"))
-	logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte(
-		"UpdateFeeAddToken(uint256)"))
-	logHermezWithdrawEvent = crypto.Keccak256Hash([]byte(
-		"WithdrawEvent(uint48,uint32,bool)"))
-	logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte(
-		"UpdateBucketWithdraw(uint8,uint256,uint256)"))
-	logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte(
-		"UpdateWithdrawalDelay(uint64)"))
-	logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte(
-		"UpdateBucketsParameters(uint256[4][" + strconv.Itoa(common.RollupConstNumBuckets) + "])"))
-	logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte(
-		"UpdateTokenExchange(address[],uint64[])"))
-	logHermezSafeMode = crypto.Keccak256Hash([]byte(
-		"SafeMode()"))
-	logHermezInitialize = crypto.Keccak256Hash([]byte(
-		"InitializeHermezEvent(uint8,uint256,uint64)"))
+	logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte("L1UserTxEvent(uint32,uint8,bytes)"))
+	logHermezAddToken = crypto.Keccak256Hash([]byte("AddToken(address,uint32)"))
+	logHermezForgeBatch = crypto.Keccak256Hash([]byte("ForgeBatch(uint32,uint16)"))
+	logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte("UpdateForgeL1L2BatchTimeout(uint8)"))
+	logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte("UpdateFeeAddToken(uint256)"))
+	logHermezWithdrawEvent = crypto.Keccak256Hash([]byte("WithdrawEvent(uint48,uint32,bool)"))
+	logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte("UpdateBucketWithdraw(uint8,uint256,uint256)"))
+	logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte("UpdateWithdrawalDelay(uint64)"))
+	logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte("UpdateBucketsParameters(uint256[4][" +
+		strconv.Itoa(common.RollupConstNumBuckets) + "])"))
+	logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte("UpdateTokenExchange(address[],uint64[])"))
+	logHermezSafeMode = crypto.Keccak256Hash([]byte("SafeMode()"))
+	logHermezInitialize = crypto.Keccak256Hash([]byte("InitializeHermezEvent(uint8,uint256,uint64)"))
 )
 
 // RollupEventInit returns the initialize event with its corresponding block number
-func (c *RollupClient) RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error) {
+func (c *RollupClient) RollupEventInit() (*RollupEventInitialize, int64, error) {
 	query := ethereum.FilterQuery{
 		Addresses: []ethCommon.Address{
 			c.address,
 		},
-		FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
-		ToBlock:   big.NewInt(genesisBlockNum),
-		Topics:    [][]ethCommon.Hash{{logHermezInitialize}},
+		Topics: [][]ethCommon.Hash{{logHermezInitialize}},
 	}
 	logs, err := c.client.client.FilterLogs(context.Background(), query)
 	if err != nil {
@@ -771,8 +729,7 @@ func (c *RollupClient) RollupEventInit(genesisBlockNum int64) (*RollupEventIniti
 	}
 
 	var rollupInit RollupEventInitialize
-	if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent",
-		vLog.Data); err != nil {
+	if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent", vLog.Data); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	return &rollupInit, int64(vLog.BlockNumber), tracerr.Wrap(err)
@@ -853,8 +810,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 		var updateForgeL1L2BatchTimeout struct {
 			NewForgeL1L2BatchTimeout uint8
 		}
-		err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout,
-			"UpdateForgeL1L2BatchTimeout", vLog.Data)
+		err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout, "UpdateForgeL1L2BatchTimeout", vLog.Data)
 		if err != nil {
 			return nil, tracerr.Wrap(err)
 		}
@@ -882,16 +838,14 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 	case logHermezUpdateBucketWithdraw:
 		var updateBucketWithdrawAux rollupEventUpdateBucketWithdrawAux
 		var updateBucketWithdraw RollupEventUpdateBucketWithdraw
-		err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux,
-			"UpdateBucketWithdraw", vLog.Data)
+		err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux, "UpdateBucketWithdraw", vLog.Data)
 		if err != nil {
 			return nil, tracerr.Wrap(err)
 		}
 		updateBucketWithdraw.Withdrawals = updateBucketWithdrawAux.Withdrawals
 		updateBucketWithdraw.NumBucket = int(new(big.Int).SetBytes(vLog.Topics[1][:]).Int64())
 		updateBucketWithdraw.BlockStamp = new(big.Int).SetBytes(vLog.Topics[2][:]).Int64()
-		rollupEvents.UpdateBucketWithdraw =
-			append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
+		rollupEvents.UpdateBucketWithdraw = append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
 
 	case logHermezUpdateWithdrawalDelay:
 		var withdrawalDelay RollupEventUpdateWithdrawalDelay
@@ -903,8 +857,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
|
|||||||
case logHermezUpdateBucketsParameters:
|
case logHermezUpdateBucketsParameters:
|
||||||
var bucketsParametersAux rollupEventUpdateBucketsParametersAux
|
var bucketsParametersAux rollupEventUpdateBucketsParametersAux
|
||||||
var bucketsParameters RollupEventUpdateBucketsParameters
|
var bucketsParameters RollupEventUpdateBucketsParameters
|
||||||
err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux,
|
err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux, "UpdateBucketsParameters", vLog.Data)
|
||||||
"UpdateBucketsParameters", vLog.Data)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, tracerr.Wrap(err)
|
return nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
@@ -914,8 +867,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
|
|||||||
bucketsParameters.ArrayBuckets[i].BlockWithdrawalRate = bucket[2]
|
bucketsParameters.ArrayBuckets[i].BlockWithdrawalRate = bucket[2]
|
||||||
bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket[3]
|
bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket[3]
|
||||||
}
|
}
|
||||||
rollupEvents.UpdateBucketsParameters =
|
rollupEvents.UpdateBucketsParameters = append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
|
||||||
append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
|
|
||||||
case logHermezUpdateTokenExchange:
|
case logHermezUpdateTokenExchange:
|
||||||
var tokensExchange RollupEventUpdateTokenExchange
|
var tokensExchange RollupEventUpdateTokenExchange
|
||||||
err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
|
err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
|
||||||
@@ -947,8 +899,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
|
|||||||
|
|
||||||
// RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
|
// RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
|
||||||
// Rollup Smart Contract in the given transaction, and the sender address.
|
// Rollup Smart Contract in the given transaction, and the sender address.
|
||||||
func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
|
func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash, l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
|
||||||
l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
|
|
||||||
tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
|
tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
|
return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
|
||||||
@@ -963,8 +914,7 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
|
|||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, nil, tracerr.Wrap(err)
|
return nil, nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
sender, err := c.client.client.TransactionSender(context.Background(), tx,
|
sender, err := c.client.client.TransactionSender(context.Background(), tx, receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
|
||||||
receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
|
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return nil, nil, tracerr.Wrap(err)
|
return nil, nil, tracerr.Wrap(err)
|
||||||
}
|
}
|
||||||
@@ -989,7 +939,7 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 		FeeIdxCoordinator: []common.Idx{},
 	}
 	nLevels := c.consts.Verifiers[rollupForgeBatchArgs.VerifierIdx].NLevels
-	lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1) //nolint:gomnd
+	lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1)
 	numBytesL1TxUser := int(l1UserTxsLen) * lenL1L2TxsBytes
 	numTxsL1Coord := len(aux.EncodedL1CoordinatorTx) / common.RollupConstL1CoordinatorTotalBytes
 	numBytesL1TxCoord := numTxsL1Coord * lenL1L2TxsBytes
@@ -999,9 +949,7 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 		l1UserTxsData = aux.L1L2TxsData[:numBytesL1TxUser]
 	}
 	for i := 0; i < int(l1UserTxsLen); i++ {
-		l1Tx, err :=
-			common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
-				uint32(nLevels))
+		l1Tx, err := common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes], uint32(nLevels))
 		if err != nil {
 			return nil, nil, tracerr.Wrap(err)
 		}
@@ -1013,17 +961,14 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 	}
 	numTxsL2 := len(l2TxsData) / lenL1L2TxsBytes
 	for i := 0; i < numTxsL2; i++ {
-		l2Tx, err :=
-			common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
-				int(nLevels))
+		l2Tx, err := common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes], int(nLevels))
 		if err != nil {
 			return nil, nil, tracerr.Wrap(err)
 		}
 		rollupForgeBatchArgs.L2TxsData = append(rollupForgeBatchArgs.L2TxsData, *l2Tx)
 	}
 	for i := 0; i < numTxsL1Coord; i++ {
-		bytesL1Coordinator :=
-			aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes] //nolint:lll
+		bytesL1Coordinator := aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes]
 		var signature []byte
 		v := bytesL1Coordinator[0]
 		s := bytesL1Coordinator[1:33]
@@ -1036,29 +981,24 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 			return nil, nil, tracerr.Wrap(err)
 		}
 		rollupForgeBatchArgs.L1CoordinatorTxs = append(rollupForgeBatchArgs.L1CoordinatorTxs, *l1Tx)
-		rollupForgeBatchArgs.L1CoordinatorTxsAuths =
-			append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
+		rollupForgeBatchArgs.L1CoordinatorTxsAuths = append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
 	}
 	lenFeeIdxCoordinatorBytes := int(nLevels / 8) //nolint:gomnd
 	numFeeIdxCoordinator := len(aux.FeeIdxCoordinator) / lenFeeIdxCoordinatorBytes
 	for i := 0; i < numFeeIdxCoordinator; i++ {
 		var paddedFeeIdx [6]byte
-		// TODO: This check is not necessary: the first case will always work. Test it
-		// before removing the if.
+		// TODO: This check is not necessary: the first case will always work. Test it before removing the if.
 		if lenFeeIdxCoordinatorBytes < common.IdxBytesLen {
-			copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:],
-				aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
+			copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:], aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
 		} else {
-			copy(paddedFeeIdx[:],
-				aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
+			copy(paddedFeeIdx[:], aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
 		}
 		feeIdxCoordinator, err := common.IdxFromBytes(paddedFeeIdx[:])
 		if err != nil {
 			return nil, nil, tracerr.Wrap(err)
 		}
 		if feeIdxCoordinator != common.Idx(0) {
-			rollupForgeBatchArgs.FeeIdxCoordinator =
-				append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
+			rollupForgeBatchArgs.FeeIdxCoordinator = append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
 		}
 	}
 	return &rollupForgeBatchArgs, &sender, nil
@@ -56,7 +56,7 @@ func genKeysBjj(i int64) *keys {
 }

 func TestRollupEventInit(t *testing.T) {
-	rollupInit, blockNum, err := rollupClient.RollupEventInit(genesisBlock)
+	rollupInit, blockNum, err := rollupClient.RollupEventInit()
 	require.NoError(t, err)
 	assert.Equal(t, int64(19), blockNum)
 	assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)
@@ -116,8 +116,7 @@ func TestRollupForgeBatch(t *testing.T) {
 	minBid.SetString("11000000000000000000", 10)
 	budget := new(big.Int)
 	budget.SetString("45200000000000000000", 10)
-	_, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
-		maxBid, minBid, deadline)
+	_, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet, maxBid, minBid, deadline)
 	require.NoError(t, err)

 	// Add Blocks
@@ -129,18 +128,12 @@ func TestRollupForgeBatch(t *testing.T) {

 	// Forge Batch 1
 	args := new(RollupForgeBatchArgs)
-	// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
-	args.FeeIdxCoordinator = []common.Idx{}
-	l1CoordinatorBytes, err := hex.DecodeString(
-		"1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf" +
-			"42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230" +
-			"de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
+	args.FeeIdxCoordinator = []common.Idx{} // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
+	l1CoordinatorBytes, err := hex.DecodeString("1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
 	require.NoError(t, err)
 	numTxsL1 := len(l1CoordinatorBytes) / common.RollupConstL1CoordinatorTotalBytes
 	for i := 0; i < numTxsL1; i++ {
-		bytesL1Coordinator :=
-			l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*
-				common.RollupConstL1CoordinatorTotalBytes]
+		bytesL1Coordinator := l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes]
 		var signature []byte
 		v := bytesL1Coordinator[0]
 		s := bytesL1Coordinator[1:33]
@@ -156,12 +149,9 @@ func TestRollupForgeBatch(t *testing.T) {
 	args.L1UserTxs = []common.L1Tx{}
 	args.L2TxsData = []common.L2Tx{}
 	newStateRoot := new(big.Int)
-	newStateRoot.SetString(
-		"18317824016047294649053625209337295956588174734569560016974612130063629505228",
-		10)
+	newStateRoot.SetString("18317824016047294649053625209337295956588174734569560016974612130063629505228", 10)
 	newExitRoot := new(big.Int)
-	bytesNumExitRoot, err := hex.DecodeString(
-		"10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
+	bytesNumExitRoot, err := hex.DecodeString("10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
 	require.NoError(t, err)
 	newExitRoot.SetBytes(bytesNumExitRoot)
 	args.NewLastIdx = int64(300)
@@ -216,8 +206,7 @@ func TestRollupUpdateForgeL1L2BatchTimeout(t *testing.T) {
 	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
 	require.NoError(t, err)

-	assert.Equal(t, newForgeL1L2BatchTimeout,
-		rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
+	assert.Equal(t, newForgeL1L2BatchTimeout, rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
 }

 func TestRollupUpdateFeeAddToken(t *testing.T) {
@@ -259,8 +248,7 @@ func TestRollupUpdateWithdrawalDelay(t *testing.T) {
 	require.NoError(t, err)
 	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
 	require.NoError(t, err)
-	assert.Equal(t, newWithdrawalDelay,
-		int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
+	assert.Equal(t, newWithdrawalDelay, int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
 }

 func TestRollupUpdateTokenExchange(t *testing.T) {
@@ -299,8 +287,7 @@ func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
+	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -312,13 +299,11 @@ func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
-	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	key := genKeysBjj(1)
 	fromIdxInt64 := int64(0)
@@ -334,8 +319,7 @@ func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
+	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -347,13 +331,11 @@ func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux2.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	key := genKeysBjj(3)
 	fromIdxInt64 := int64(0)
@@ -369,8 +351,7 @@ func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
+	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -382,13 +363,11 @@ func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxETHDeposit(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(256)
 	toIdxInt64 := int64(0)
@@ -404,8 +383,7 @@ func TestRollupL1UserTxETHDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
+	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -416,13 +394,11 @@ func TestRollupL1UserTxETHDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20Deposit(t *testing.T) {
-	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(257)
 	toIdxInt64 := int64(0)
@@ -437,8 +413,7 @@ func TestRollupL1UserTxERC20Deposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
+	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -449,13 +424,11 @@ func TestRollupL1UserTxERC20Deposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux2.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(258)
 	toIdxInt64 := int64(0)
@@ -469,8 +442,7 @@ func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
+	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -481,13 +453,11 @@ func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(256)
 	toIdxInt64 := int64(257)
@@ -503,8 +473,7 @@ func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
+	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -515,13 +484,11 @@ func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
-	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(257)
 	toIdxInt64 := int64(258)
@@ -536,8 +503,7 @@ func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
+	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -548,13 +514,11 @@ func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux2.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(258)
 	toIdxInt64 := int64(259)
@@ -569,8 +533,7 @@ func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)

-	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
+	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
 	require.NoError(t, err)
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -581,13 +544,11 @@ func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
|
func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
|
||||||
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
|
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(256)
|
fromIdxInt64 := int64(256)
|
||||||
toIdxInt64 := int64(257)
|
toIdxInt64 := int64(257)
|
||||||
@@ -603,8 +564,7 @@ func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -615,13 +575,11 @@ func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
|
func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
|
||||||
rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
|
rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(257)
|
fromIdxInt64 := int64(257)
|
||||||
toIdxInt64 := int64(258)
|
toIdxInt64 := int64(258)
|
||||||
@@ -636,8 +594,7 @@ func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -648,13 +605,11 @@ func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux2.client.account.Address,
|
assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
|
func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
|
||||||
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
|
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(258)
|
fromIdxInt64 := int64(258)
|
||||||
toIdxInt64 := int64(259)
|
toIdxInt64 := int64(259)
|
||||||
@@ -669,8 +624,7 @@ func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -681,13 +635,11 @@ func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
|
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
|
||||||
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
|
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(256)
|
fromIdxInt64 := int64(256)
|
||||||
toIdxInt64 := int64(257)
|
toIdxInt64 := int64(257)
|
||||||
@@ -702,8 +654,7 @@ func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -714,13 +665,11 @@ func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
|
func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
|
||||||
rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
|
rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(257)
|
fromIdxInt64 := int64(257)
|
||||||
toIdxInt64 := int64(258)
|
toIdxInt64 := int64(258)
|
||||||
@@ -734,8 +683,7 @@ func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -746,13 +694,11 @@ func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux2.client.account.Address,
|
assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
|
func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
|
||||||
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
|
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(259)
|
fromIdxInt64 := int64(259)
|
||||||
toIdxInt64 := int64(260)
|
toIdxInt64 := int64(260)
|
||||||
@@ -766,8 +712,7 @@ func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -778,13 +723,11 @@ func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxETHForceExit(t *testing.T) {
|
func TestRollupL1UserTxETHForceExit(t *testing.T) {
|
||||||
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
|
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(256)
|
fromIdxInt64 := int64(256)
|
||||||
toIdxInt64 := int64(1)
|
toIdxInt64 := int64(1)
|
||||||
@@ -799,8 +742,7 @@ func TestRollupL1UserTxETHForceExit(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -811,13 +753,11 @@ func TestRollupL1UserTxETHForceExit(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
|
func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
|
||||||
rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
|
rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(257)
|
fromIdxInt64 := int64(257)
|
||||||
toIdxInt64 := int64(1)
|
toIdxInt64 := int64(1)
|
||||||
@@ -831,8 +771,7 @@ func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -843,13 +782,11 @@ func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux2.client.account.Address,
|
assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
|
func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
|
||||||
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
|
rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
|
||||||
tokenHEZ)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
fromIdxInt64 := int64(258)
|
fromIdxInt64 := int64(258)
|
||||||
toIdxInt64 := int64(1)
|
toIdxInt64 := int64(1)
|
||||||
@@ -865,8 +802,7 @@ func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
|
|||||||
}
|
}
|
||||||
L1UserTxs = append(L1UserTxs, l1Tx)
|
L1UserTxs = append(L1UserTxs, l1Tx)
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
|
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
|
||||||
l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
@@ -877,8 +813,7 @@ func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
|
|||||||
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
|
||||||
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
|
||||||
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
|
||||||
assert.Equal(t, rollupClientAux.client.account.Address,
|
assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
||||||
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
func TestRollupForgeBatch2(t *testing.T) {
|
func TestRollupForgeBatch2(t *testing.T) {
|
||||||
@@ -894,8 +829,7 @@ func TestRollupForgeBatch2(t *testing.T) {
|
|||||||
|
|
||||||
// Forge Batch 3
|
// Forge Batch 3
|
||||||
args := new(RollupForgeBatchArgs)
|
args := new(RollupForgeBatchArgs)
|
||||||
// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
|
args.FeeIdxCoordinator = []common.Idx{} // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
|
||||||
args.FeeIdxCoordinator = []common.Idx{}
|
|
||||||
args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
|
args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
|
||||||
args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
|
args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
|
||||||
for i := 0; i < len(L1UserTxs); i++ {
|
for i := 0; i < len(L1UserTxs); i++ {
|
||||||
@@ -903,19 +837,14 @@ func TestRollupForgeBatch2(t *testing.T) {
|
|||||||
l1UserTx.EffectiveAmount = l1UserTx.Amount
|
l1UserTx.EffectiveAmount = l1UserTx.Amount
|
||||||
l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
|
l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes,
|
l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes, uint32(nLevels))
|
||||||
uint32(nLevels))
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
|
args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
|
||||||
}
|
}
|
||||||
newStateRoot := new(big.Int)
|
newStateRoot := new(big.Int)
|
||||||
newStateRoot.SetString(
|
newStateRoot.SetString("18317824016047294649053625209337295956588174734569560016974612130063629505228", 10)
|
||||||
"18317824016047294649053625209337295956588174734569560016974612130063629505228",
|
|
||||||
10)
|
|
||||||
newExitRoot := new(big.Int)
|
newExitRoot := new(big.Int)
|
||||||
newExitRoot.SetString(
|
newExitRoot.SetString("1114281409737474688393837964161044726766678436313681099613347372031079422302", 10)
|
||||||
"1114281409737474688393837964161044726766678436313681099613347372031079422302",
|
|
||||||
10)
|
|
||||||
amount := new(big.Int)
|
amount := new(big.Int)
|
||||||
amount.SetString("79000000", 10)
|
amount.SetString("79000000", 10)
|
||||||
l2Tx := common.L2Tx{
|
l2Tx := common.L2Tx{
|
||||||
@@ -975,8 +904,7 @@ func TestRollupWithdrawMerkleProof(t *testing.T) {
 	require.NoError(t, err)
 
 	var pkComp babyjub.PublicKeyComp
-	pkCompBE, err :=
-		hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
+	pkCompBE, err := hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
 	require.NoError(t, err)
 	pkCompLE := common.SwapEndianness(pkCompBE)
 	copy(pkComp[:], pkCompLE)
@@ -986,20 +914,16 @@ func TestRollupWithdrawMerkleProof(t *testing.T) {
|
|||||||
numExitRoot := int64(3)
|
numExitRoot := int64(3)
|
||||||
fromIdx := int64(256)
|
fromIdx := int64(256)
|
||||||
amount, _ := new(big.Int).SetString("20000000000000000000", 10)
|
amount, _ := new(big.Int).SetString("20000000000000000000", 10)
|
||||||
// siblingBytes0, err := new(big.Int).SetString(
|
// siblingBytes0, err := new(big.Int).SetString("19508838618377323910556678335932426220272947530531646682154552299216398748115", 10)
|
||||||
// "19508838618377323910556678335932426220272947530531646682154552299216398748115",
|
|
||||||
// 10)
|
|
||||||
// require.NoError(t, err)
|
// require.NoError(t, err)
|
||||||
// siblingBytes1, err := new(big.Int).SetString(
|
// siblingBytes1, err := new(big.Int).SetString("15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
|
||||||
// "15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
|
|
||||||
// require.NoError(t, err)
|
// require.NoError(t, err)
|
||||||
var siblings []*big.Int
|
var siblings []*big.Int
|
||||||
// siblings = append(siblings, siblingBytes0)
|
// siblings = append(siblings, siblingBytes0)
|
||||||
// siblings = append(siblings, siblingBytes1)
|
// siblings = append(siblings, siblingBytes1)
|
||||||
instantWithdraw := true
|
instantWithdraw := true
|
||||||
|
|
||||||
_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx,
|
_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx, amount, siblings, instantWithdraw)
|
||||||
amount, siblings, instantWithdraw)
|
|
||||||
require.NoError(t, err)
|
require.NoError(t, err)
|
||||||
|
|
||||||
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
currentBlockNum, err := rollupClient.client.EthLastBlock()
|
||||||
|
|||||||
@@ -132,20 +132,18 @@ type WDelayerInterface interface {
|
|||||||
WDelayerDepositInfo(owner, token ethCommon.Address) (depositInfo DepositState, err error)
|
WDelayerDepositInfo(owner, token ethCommon.Address) (depositInfo DepositState, err error)
|
||||||
WDelayerDeposit(onwer, token ethCommon.Address, amount *big.Int) (*types.Transaction, error)
|
WDelayerDeposit(onwer, token ethCommon.Address, amount *big.Int) (*types.Transaction, error)
|
||||||
WDelayerWithdrawal(owner, token ethCommon.Address) (*types.Transaction, error)
|
WDelayerWithdrawal(owner, token ethCommon.Address) (*types.Transaction, error)
|
||||||
WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address,
|
WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address, amount *big.Int) (*types.Transaction, error)
|
||||||
amount *big.Int) (*types.Transaction, error)
|
|
||||||
|
|
||||||
WDelayerEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*WDelayerEvents, error)
|
WDelayerEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*WDelayerEvents, error)
|
||||||
WDelayerConstants() (*common.WDelayerConstants, error)
|
WDelayerConstants() (*common.WDelayerConstants, error)
|
||||||
WDelayerEventInit(genesisBlockNum int64) (*WDelayerEventInitialize, int64, error)
|
WDelayerEventInit() (*WDelayerEventInitialize, int64, error)
|
||||||
}
|
}
|
||||||
|
|
||||||
//
|
//
|
||||||
// Implementation
|
// Implementation
|
||||||
//
|
//
|
||||||
|
|
||||||
// WDelayerClient is the implementation of the interface to the WithdrawDelayer
|
// WDelayerClient is the implementation of the interface to the WithdrawDelayer Smart Contract in ethereum.
|
||||||
// Smart Contract in ethereum.
|
|
||||||
type WDelayerClient struct {
|
type WDelayerClient struct {
|
||||||
client *EthereumClient
|
client *EthereumClient
|
||||||
address ethCommon.Address
|
address ethCommon.Address
|
||||||
@@ -174,8 +172,7 @@ func NewWDelayerClient(client *EthereumClient, address ethCommon.Address) (*WDel
|
|||||||
}
|
}
|
||||||
|
|
||||||
// WDelayerGetHermezGovernanceAddress is the interface to call the smart contract function
|
// WDelayerGetHermezGovernanceAddress is the interface to call the smart contract function
|
||||||
func (c *WDelayerClient) WDelayerGetHermezGovernanceAddress() (
|
func (c *WDelayerClient) WDelayerGetHermezGovernanceAddress() (hermezGovernanceAddress *ethCommon.Address, err error) {
|
||||||
hermezGovernanceAddress *ethCommon.Address, err error) {
|
|
||||||
var _hermezGovernanceAddress ethCommon.Address
|
var _hermezGovernanceAddress ethCommon.Address
|
||||||
if err := c.client.Call(func(ec *ethclient.Client) error {
|
if err := c.client.Call(func(ec *ethclient.Client) error {
|
||||||
_hermezGovernanceAddress, err = c.wdelayer.GetHermezGovernanceAddress(c.opts)
|
_hermezGovernanceAddress, err = c.wdelayer.GetHermezGovernanceAddress(c.opts)
|
||||||
@@ -187,8 +184,7 @@ func (c *WDelayerClient) WDelayerGetHermezGovernanceAddress() (
|
|||||||
}
|
}
|
||||||
|
|
||||||
// WDelayerTransferGovernance is the interface to call the smart contract function
|
// WDelayerTransferGovernance is the interface to call the smart contract function
|
||||||
func (c *WDelayerClient) WDelayerTransferGovernance(newAddress ethCommon.Address) (
|
func (c *WDelayerClient) WDelayerTransferGovernance(newAddress ethCommon.Address) (tx *types.Transaction, err error) {
|
||||||
tx *types.Transaction, err error) {
|
|
||||||
if tx, err = c.client.CallAuth(
|
if tx, err = c.client.CallAuth(
|
||||||
0,
|
0,
|
||||||
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
|
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
|
||||||
@@ -214,8 +210,7 @@ func (c *WDelayerClient) WDelayerClaimGovernance() (tx *types.Transaction, err e
|
|||||||
}
|
}
|
||||||
|
|
||||||
// WDelayerGetEmergencyCouncil is the interface to call the smart contract function
|
// WDelayerGetEmergencyCouncil is the interface to call the smart contract function
|
||||||
func (c *WDelayerClient) WDelayerGetEmergencyCouncil() (emergencyCouncilAddress *ethCommon.Address,
|
func (c *WDelayerClient) WDelayerGetEmergencyCouncil() (emergencyCouncilAddress *ethCommon.Address, err error) {
|
||||||
err error) {
|
|
||||||
var _emergencyCouncilAddress ethCommon.Address
|
var _emergencyCouncilAddress ethCommon.Address
|
||||||
if err := c.client.Call(func(ec *ethclient.Client) error {
|
if err := c.client.Call(func(ec *ethclient.Client) error {
|
||||||
_emergencyCouncilAddress, err = c.wdelayer.GetEmergencyCouncil(c.opts)
|
_emergencyCouncilAddress, err = c.wdelayer.GetEmergencyCouncil(c.opts)
|
||||||
@@ -227,8 +222,7 @@ func (c *WDelayerClient) WDelayerGetEmergencyCouncil() (emergencyCouncilAddress
 }
 
 // WDelayerTransferEmergencyCouncil is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerTransferEmergencyCouncil(newAddress ethCommon.Address) (
-	tx *types.Transaction, err error) {
+func (c *WDelayerClient) WDelayerTransferEmergencyCouncil(newAddress ethCommon.Address) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -277,8 +271,7 @@ func (c *WDelayerClient) WDelayerGetWithdrawalDelay() (withdrawalDelay int64, er
 }
 
 // WDelayerGetEmergencyModeStartingTime is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerGetEmergencyModeStartingTime() (emergencyModeStartingTime int64,
-	err error) {
+func (c *WDelayerClient) WDelayerGetEmergencyModeStartingTime() (emergencyModeStartingTime int64, err error) {
 	var _emergencyModeStartingTime uint64
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		_emergencyModeStartingTime, err = c.wdelayer.GetEmergencyModeStartingTime(c.opts)
@@ -303,8 +296,7 @@ func (c *WDelayerClient) WDelayerEnableEmergencyMode() (tx *types.Transaction, e
 }
 
 // WDelayerChangeWithdrawalDelay is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerChangeWithdrawalDelay(newWithdrawalDelay uint64) (
-	tx *types.Transaction, err error) {
+func (c *WDelayerClient) WDelayerChangeWithdrawalDelay(newWithdrawalDelay uint64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -317,8 +309,7 @@ func (c *WDelayerClient) WDelayerChangeWithdrawalDelay(newWithdrawalDelay uint64
 }
 
 // WDelayerDepositInfo is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerDepositInfo(owner, token ethCommon.Address) (
-	depositInfo DepositState, err error) {
+func (c *WDelayerClient) WDelayerDepositInfo(owner, token ethCommon.Address) (depositInfo DepositState, err error) {
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		amount, depositTimestamp, err := c.wdelayer.DepositInfo(c.opts, owner, token)
 		depositInfo.Amount = amount
@@ -331,8 +322,7 @@ func (c *WDelayerClient) WDelayerDepositInfo(owner, token ethCommon.Address) (
 }
 
 // WDelayerDeposit is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerDeposit(owner, token ethCommon.Address, amount *big.Int) (
-	tx *types.Transaction, err error) {
+func (c *WDelayerClient) WDelayerDeposit(owner, token ethCommon.Address, amount *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -345,8 +335,7 @@ func (c *WDelayerClient) WDelayerDeposit(owner, token ethCommon.Address, amount
 }
 
 // WDelayerWithdrawal is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerWithdrawal(owner, token ethCommon.Address) (tx *types.Transaction,
-	err error) {
+func (c *WDelayerClient) WDelayerWithdrawal(owner, token ethCommon.Address) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -359,8 +348,7 @@ func (c *WDelayerClient) WDelayerWithdrawal(owner, token ethCommon.Address) (tx
 }
 
 // WDelayerEscapeHatchWithdrawal is the interface to call the smart contract function
-func (c *WDelayerClient) WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address,
-	amount *big.Int) (tx *types.Transaction, err error) {
+func (c *WDelayerClient) WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address, amount *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -396,33 +384,24 @@ func (c *WDelayerClient) WDelayerConstants() (constants *common.WDelayerConstant
 }
 
 var (
-	logWDelayerDeposit = crypto.Keccak256Hash([]byte(
-		"Deposit(address,address,uint192,uint64)"))
-	logWDelayerWithdraw = crypto.Keccak256Hash([]byte(
-		"Withdraw(address,address,uint192)"))
-	logWDelayerEmergencyModeEnabled = crypto.Keccak256Hash([]byte(
-		"EmergencyModeEnabled()"))
-	logWDelayerNewWithdrawalDelay = crypto.Keccak256Hash([]byte(
-		"NewWithdrawalDelay(uint64)"))
-	logWDelayerEscapeHatchWithdrawal = crypto.Keccak256Hash([]byte(
-		"EscapeHatchWithdrawal(address,address,address,uint256)"))
-	logWDelayerNewEmergencyCouncil = crypto.Keccak256Hash([]byte(
-		"NewEmergencyCouncil(address)"))
-	logWDelayerNewHermezGovernanceAddress = crypto.Keccak256Hash([]byte(
-		"NewHermezGovernanceAddress(address)"))
-	logWDelayerInitialize = crypto.Keccak256Hash([]byte(
+	logWDelayerDeposit = crypto.Keccak256Hash([]byte("Deposit(address,address,uint192,uint64)"))
+	logWDelayerWithdraw = crypto.Keccak256Hash([]byte("Withdraw(address,address,uint192)"))
+	logWDelayerEmergencyModeEnabled = crypto.Keccak256Hash([]byte("EmergencyModeEnabled()"))
+	logWDelayerNewWithdrawalDelay = crypto.Keccak256Hash([]byte("NewWithdrawalDelay(uint64)"))
+	logWDelayerEscapeHatchWithdrawal = crypto.Keccak256Hash([]byte("EscapeHatchWithdrawal(address,address,address,uint256)"))
+	logWDelayerNewEmergencyCouncil = crypto.Keccak256Hash([]byte("NewEmergencyCouncil(address)"))
+	logWDelayerNewHermezGovernanceAddress = crypto.Keccak256Hash([]byte("NewHermezGovernanceAddress(address)"))
+	logWDelayerInitialize = crypto.Keccak256Hash([]byte(
 		"InitializeWithdrawalDelayerEvent(uint64,address,address)"))
 )
 
 // WDelayerEventInit returns the initialize event with its corresponding block number
-func (c *WDelayerClient) WDelayerEventInit(genesisBlockNum int64) (*WDelayerEventInitialize, int64, error) {
+func (c *WDelayerClient) WDelayerEventInit() (*WDelayerEventInitialize, int64, error) {
 	query := ethereum.FilterQuery{
 		Addresses: []ethCommon.Address{
 			c.address,
 		},
-		FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
-		ToBlock:   big.NewInt(genesisBlockNum),
-		Topics:    [][]ethCommon.Hash{{logWDelayerInitialize}},
+		Topics: [][]ethCommon.Hash{{logWDelayerInitialize}},
 	}
 	logs, err := c.client.client.FilterLogs(context.Background(), query)
 	if err != nil {
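The hunk above removes the block window that the old `WDelayerEventInit(genesisBlockNum)` variant applied to its `FilterQuery`: it scanned from one day's worth of blocks before the rollup genesis block (clamped at 0) up to the genesis block itself, while the new variant filters the whole chain. A pure-Go sketch of the removed window computation (the `blocksPerDay` value is an assumption here; the repository defines its own constant):

```go
package main

import "fmt"

// blocksPerDay approximates the constant the removed code relied on
// (roughly 6500 blocks per day on Ethereum mainnet); the repository's
// exact value may differ.
const blocksPerDay = 6500

// initEventWindow reproduces the [FromBlock, ToBlock] range the removed
// WDelayerEventInit(genesisBlockNum) variant queried: one day of blocks
// before the genesis block, clamped so FromBlock never goes below 0.
func initEventWindow(genesisBlockNum int64) (from, to int64) {
	from = genesisBlockNum - blocksPerDay
	if from < 0 {
		from = 0
	}
	return from, genesisBlockNum
}

func main() {
	// With a genesis block near the start of the chain (as in the tests,
	// where the event lands in block 16), the window clamps to block 0.
	from, to := initEventWindow(16)
	fmt.Println(from, to) // 0 16
}
```

Dropping the window trades a bounded log scan for a simpler call signature; on a fresh test chain both return the same single `InitializeWithdrawalDelayerEvent` log.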
@@ -504,51 +483,42 @@ func (c *WDelayerClient) WDelayerEventsByBlock(blockNum int64,
 
 		case logWDelayerEmergencyModeEnabled:
 			var emergencyModeEnabled WDelayerEventEmergencyModeEnabled
-			wdelayerEvents.EmergencyModeEnabled =
-				append(wdelayerEvents.EmergencyModeEnabled, emergencyModeEnabled)
+			wdelayerEvents.EmergencyModeEnabled = append(wdelayerEvents.EmergencyModeEnabled, emergencyModeEnabled)
 
 		case logWDelayerNewWithdrawalDelay:
 			var withdrawalDelay WDelayerEventNewWithdrawalDelay
-			err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay,
-				"NewWithdrawalDelay", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, "NewWithdrawalDelay", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
-			wdelayerEvents.NewWithdrawalDelay =
-				append(wdelayerEvents.NewWithdrawalDelay, withdrawalDelay)
+			wdelayerEvents.NewWithdrawalDelay = append(wdelayerEvents.NewWithdrawalDelay, withdrawalDelay)
 
 		case logWDelayerEscapeHatchWithdrawal:
 			var escapeHatchWithdrawal WDelayerEventEscapeHatchWithdrawal
-			err := c.contractAbi.UnpackIntoInterface(&escapeHatchWithdrawal,
-				"EscapeHatchWithdrawal", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&escapeHatchWithdrawal, "EscapeHatchWithdrawal", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
 			escapeHatchWithdrawal.Who = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
 			escapeHatchWithdrawal.To = ethCommon.BytesToAddress(vLog.Topics[2].Bytes())
 			escapeHatchWithdrawal.Token = ethCommon.BytesToAddress(vLog.Topics[3].Bytes())
-			wdelayerEvents.EscapeHatchWithdrawal =
-				append(wdelayerEvents.EscapeHatchWithdrawal, escapeHatchWithdrawal)
+			wdelayerEvents.EscapeHatchWithdrawal = append(wdelayerEvents.EscapeHatchWithdrawal, escapeHatchWithdrawal)
 
 		case logWDelayerNewEmergencyCouncil:
 			var emergencyCouncil WDelayerEventNewEmergencyCouncil
-			err := c.contractAbi.UnpackIntoInterface(&emergencyCouncil,
-				"NewEmergencyCouncil", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&emergencyCouncil, "NewEmergencyCouncil", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
-			wdelayerEvents.NewEmergencyCouncil =
-				append(wdelayerEvents.NewEmergencyCouncil, emergencyCouncil)
+			wdelayerEvents.NewEmergencyCouncil = append(wdelayerEvents.NewEmergencyCouncil, emergencyCouncil)
 
 		case logWDelayerNewHermezGovernanceAddress:
 			var governanceAddress WDelayerEventNewHermezGovernanceAddress
-			err := c.contractAbi.UnpackIntoInterface(&governanceAddress,
-				"NewHermezGovernanceAddress", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&governanceAddress, "NewHermezGovernanceAddress", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
-			wdelayerEvents.NewHermezGovernanceAddress =
-				append(wdelayerEvents.NewHermezGovernanceAddress, governanceAddress)
+			wdelayerEvents.NewHermezGovernanceAddress = append(wdelayerEvents.NewHermezGovernanceAddress, governanceAddress)
 		}
 	}
 	return &wdelayerEvents, nil
@@ -18,7 +18,7 @@ var maxEmergencyModeTime = time.Hour * 24 * 7 * 26
 var maxWithdrawalDelay = time.Hour * 24 * 7 * 2
 
 func TestWDelayerInit(t *testing.T) {
-	wDelayerInit, blockNum, err := wdelayerClientTest.WDelayerEventInit(genesisBlock)
+	wDelayerInit, blockNum, err := wdelayerClientTest.WDelayerEventInit()
 	require.NoError(t, err)
 	assert.Equal(t, int64(16), blockNum)
 	assert.Equal(t, uint64(initWithdrawalDelay), wDelayerInit.InitialWithdrawalDelay)
@@ -54,8 +54,7 @@ func TestWDelayerSetHermezGovernanceAddress(t *testing.T) {
 	require.Nil(t, err)
 	wdelayerEvents, err := wdelayerClientTest.WDelayerEventsByBlock(currentBlockNum, nil)
 	require.Nil(t, err)
-	assert.Equal(t, auxAddressConst,
-		wdelayerEvents.NewHermezGovernanceAddress[0].NewHermezGovernanceAddress)
+	assert.Equal(t, auxAddressConst, wdelayerEvents.NewHermezGovernanceAddress[0].NewHermezGovernanceAddress)
 	_, err = wdelayerClientAux.WDelayerTransferGovernance(governanceAddressConst)
 	require.Nil(t, err)
 	_, err = wdelayerClientTest.WDelayerClaimGovernance()
@@ -69,8 +68,7 @@ func TestWDelayerGetEmergencyCouncil(t *testing.T) {
 }
 
 func TestWDelayerSetEmergencyCouncil(t *testing.T) {
-	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil,
-		wdelayerTestAddressConst)
+	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil, wdelayerTestAddressConst)
 	require.Nil(t, err)
 	wdelayerClientAux, err := NewWDelayerClient(ethereumClientAux, wdelayerTestAddressConst)
 	require.Nil(t, err)
@@ -202,18 +200,13 @@ func TestWDelayerGetEmergencyModeStartingTime(t *testing.T) {
 func TestWDelayerEscapeHatchWithdrawal(t *testing.T) {
 	amount := new(big.Int)
 	amount.SetString("10000000000000000", 10)
-	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil,
-		wdelayerTestAddressConst)
+	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil, wdelayerTestAddressConst)
 	require.Nil(t, err)
-	_, err =
-		wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst,
-			tokenHEZAddressConst, amount)
+	_, err = wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst, tokenHEZAddressConst, amount)
 	require.Contains(t, err.Error(), "NO_MAX_EMERGENCY_MODE_TIME")
 	seconds := maxEmergencyModeTime.Seconds()
 	addTime(seconds, ethClientDialURL)
-	_, err =
-		wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst,
-			tokenHEZAddressConst, amount)
+	_, err = wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst, tokenHEZAddressConst, amount)
 	require.Nil(t, err)
 	currentBlockNum, err := wdelayerClientTest.client.EthLastBlock()
 	require.Nil(t, err)
Some files were not shown because too many files have changed in this diff.