Compare commits


13 Commits

Author          SHA1        Message                              Date
Eduard S        a5ef822c64  WIP3                                 2021-03-08 18:17:15 +01:00
Eduard S        5501f30062  WIP2                                 2021-03-04 14:05:35 +01:00
Eduard S        d4f6926311  WIP                                  2021-03-03 14:37:41 +01:00
Eduard S        bfba1ba2d2  Clean                                2021-03-03 12:50:21 +01:00
arnaubennassar  eed635539f  pull                                 2021-03-02 18:49:34 +01:00
arnaubennassar  87610f6188  wip                                  2021-03-02 18:46:56 +01:00
arnaubennassar  4b596072d2  Add table to decouple API from node  2021-03-02 15:22:02 +01:00
Eduard S        95c4019cb2  WIP                                  2021-03-01 10:51:30 +01:00
Eduard S        c4d5e8a7ab  WIP                                  2021-03-01 10:51:30 +01:00
Eduard S        c1375d9c5f  Serve API only via cli               2021-03-01 10:51:30 +01:00
Eduard S        26e2bbc262  WIP                                  2021-02-26 16:17:06 +01:00
Eduard S        bb4c464200  WIP                                  2021-02-26 13:09:24 +01:00
Eduard S        982899efed  Serve API only via cli               2021-02-26 13:09:24 +01:00
128 changed files with 2204 additions and 5664 deletions


@@ -1,29 +0,0 @@
name: goreleaser
on:
  push:
    tags:
      - '*'
jobs:
  goreleaser:
    runs-on: ubuntu-latest
    steps:
      -
        name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      -
        name: Set up Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.16
      -
        name: Run GoReleaser
        uses: goreleaser/goreleaser-action@v2
        with:
          version: latest
          args: release --rm-dist
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.gitignore (vendored): 1 changed line

@@ -1 +0,0 @@
bin/


@@ -1,36 +0,0 @@
before:
  hooks:
    - go mod download
builds:
  - main: ./cli/node/main.go
    binary: node
    id: node
    goos:
      - linux
      - darwin
      - windows
    hooks:
      pre: make migration-pack
      post: make migration-clean
archives:
  - replacements:
      darwin: Darwin
      linux: Linux
      windows: Windows
      386: i386
      amd64: x86_64
checksum:
  name_template: 'checksums.txt'
snapshot:
  name_template: "{{ .Tag }}-next"
changelog:
  sort: asc
  filters:
    exclude:
      - '^docs:'
      - '^test:'

LICENSE: 661 changed lines

@@ -1,661 +0,0 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

Makefile
@@ -1,135 +0,0 @@
#! /usr/bin/make -f
# Project variables.
PACKAGE := github.com/hermeznetwork/hermez-node
VERSION := $(shell git describe --tags --always)
COMMIT := $(shell git rev-parse --short HEAD)
DATE := $(shell date +%Y-%m-%dT%H:%M:%S%z)
PROJECT_NAME := $(shell basename "$(PWD)")
# Go related variables.
GO_FILES ?= $$(find . -name '*.go' | grep -v vendor)
GOBASE := $(shell pwd)
GOBIN := $(GOBASE)/bin
GOPKG := $(.)
GOENVVARS := GOBIN=$(GOBIN)
GOCMD := $(GOBASE)/cli/node
GOPROOF := $(GOBASE)/test/proofserver/cli
GOBINARY := node
# Project configs.
MODE ?= sync
CONFIG ?= $(GOBASE)/cli/node/cfg.buidler.toml
POSTGRES_PASS ?= yourpasswordhere
# Use linker flags to provide version/build settings.
LDFLAGS=-ldflags "-X main.version=$(VERSION) -X main.commit=$(COMMIT) -X main.date=$(DATE)"
# PID file will keep the process id of the server.
PID_PROOF_MOCK := /tmp/.$(PROJECT_NAME).proof.pid
# Make is verbose in Linux. Make it silent.
MAKEFLAGS += --silent
.PHONY: help
help: Makefile
	@echo
	@echo " Choose a command run in "$(PROJECT_NAME)":"
	@echo
	@sed -n 's/^##//p' $< | column -t -s ':' | sed -e 's/^/ /'
	@echo
## test: Run the application check and all tests.
test: govet gocilint test-unit
## test-unit: Run all unit tests.
test-unit:
	@echo " > Running unit tests"
	$(GOENVVARS) go test -race -p 1 -failfast -timeout 300s -v ./...
## test-api-server: Run the API server using the Go tests.
test-api-server:
	@echo " > Running unit tests"
	$(GOENVVARS) FAKE_SERVER=yes go test -timeout 0 ./api -p 1 -count 1 -v
## gofmt: Run `go fmt` for all go files.
gofmt:
	@echo " > Format all go files"
	$(GOENVVARS) gofmt -w ${GO_FILES}
## govet: Run go vet.
govet:
	@echo " > Running go vet"
	$(GOENVVARS) go vet ./...
## golint: Run default golint.
golint:
	@echo " > Running golint"
	$(GOENVVARS) golint -set_exit_status ./...
## gocilint: Run Golang CI Lint.
gocilint:
	@echo " > Running Golang CI Lint"
	$-golangci-lint run --timeout=5m -E whitespace -E gosec -E gci -E misspell -E gomnd -E gofmt -E goimports -E golint --exclude-use-default=false --max-same-issues 0
## exec: Run given command. e.g; make exec run="go test ./..."
exec:
	GOBIN=$(GOBIN) $(run)
## clean: Clean build files. Runs `go clean` internally.
clean:
	@-rm $(GOBIN)/ 2> /dev/null
	@echo " > Cleaning build cache"
	$(GOENVVARS) go clean
## build: Build the project.
build: install
	@echo " > Building Hermez binary..."
	@bash -c "$(MAKE) migration-pack"
	$(GOENVVARS) go build $(LDFLAGS) -o $(GOBIN)/$(GOBINARY) $(GOCMD)
	@bash -c "$(MAKE) migration-clean"
## install: Install missing dependencies. Runs `go get` internally. e.g; make install get=github.com/foo/bar
install:
	@echo " > Checking if there is any missing dependencies..."
	$(GOENVVARS) go get $(GOCMD)/... $(get)
## run-node: Run Hermez node.
run-node:
	@bash -c "$(MAKE) clean build"
	@echo " > Running $(PROJECT_NAME)"
	@$(GOBIN)/$(GOBINARY) run --mode $(MODE) --cfg $(CONFIG)
## run-proof-mock: Run proof server mock API.
run-proof-mock: stop-proof-mock
	@echo " > Running Proof Server Mock"
	$(GOENVVARS) go build -o $(GOBIN)/proof $(GOPROOF)
	@$(GOBIN)/proof 2>&1 & echo $$! > $(PID_PROOF_MOCK)
	@cat $(PID_PROOF_MOCK) | sed "/^/s/^/ \> Proof Server Mock PID: /"
## stop-proof-mock: Stop proof server mock API.
stop-proof-mock:
	@-touch $(PID_PROOF_MOCK)
	@-kill -s INT `cat $(PID_PROOF_MOCK)` 2> /dev/null || true
	@-rm $(PID_PROOF_MOCK) $(GOBIN)/proof 2> /dev/null || true
## migration-pack: Pack the database migrations into the binary.
migration-pack:
	@echo " > Packing the migrations..."
	@cd /tmp && go get -u github.com/gobuffalo/packr/v2/packr2 && cd -
	@cd $(GOBASE)/db && packr2 && cd -
## migration-clean: Clean the database migrations pack.
migration-clean:
	@echo " > Cleaning the migrations..."
	@cd $(GOBASE)/db && packr2 clean && cd -
## run-database-container: Run the Postgres container
run-database-container:
	@echo " > Running the postgreSQL DB..."
	@-docker run --rm --name hermez-db -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$(POSTGRES_PASS)" -d postgres
## stop-database-container: Stop the Postgres container
stop-database-container:
	@echo " > Stopping the postgreSQL DB..."
	@-docker stop hermez-db
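The Makefile's `LDFLAGS` inject version metadata at build time with `-X main.version=...`, `-X main.commit=...`, and `-X main.date=...`. As a hedged sketch of the Go side of that mechanism (the variable defaults and the `buildInfo` helper are illustrative, not the node's actual `main.go`):

```go
package main

import "fmt"

// Overwritten at link time by the Makefile's LDFLAGS, e.g.:
//   go build -ldflags "-X main.version=v1.2.3 -X main.commit=abc1234 -X main.date=2021-03-08T18:17:15+0100"
var (
	version = "dev"
	commit  = "none"
	date    = "unknown"
)

// buildInfo renders the injected metadata, as a `node version` style
// subcommand would print it.
func buildInfo() string {
	return fmt.Sprintf("%s (commit %s, built %s)", version, commit, date)
}

func main() {
	fmt.Println(buildInfo())
}
```

Without the linker flags the defaults (`dev`, `none`, `unknown`) are printed, which is why the Makefile computes `VERSION`, `COMMIT`, and `DATE` from git and the shell before every build.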
@@ -8,75 +8,42 @@ Go implementation of the Hermez node.
 The `hermez-node` has been tested with go version 1.14
-### Build
-Build the binary and check the current version:
-```shell
-$ make build
-$ bin/node version
-```
-### Run
-First you must edit the default/template config file into [cli/node/cfg.buidler.toml](cli/node/cfg.buidler.toml),
-there are more information about the config file into [cli/node/README.md](cli/node/README.md)
-After setting the config, you can build and run the Hermez Node as a synchronizer:
-```shell
-$ make run-node
-```
-Or build and run as a coordinator, and also passing the config file from other location:
-```shell
-$ MODE=sync CONFIG=cli/node/cfg.buidler.toml make run-node
-```
-To check the useful make commands:
-```shell
-$ make help
-```
 ### Unit testing
 Running the unit tests requires a connection to a PostgreSQL database. You can
-run PostgreSQL with docker easily this way (where `yourpasswordhere` should
+start PostgreSQL with docker easily this way (where `yourpasswordhere` should
 be your password):
-```shell
-$ POSTGRES_PASS="yourpasswordhere" make run-database-container
+```
+POSTGRES_PASS=yourpasswordhere; sudo docker run --rm --name hermez-db-test -p 5432:5432 -e POSTGRES_DB=hermez -e POSTGRES_USER=hermez -e POSTGRES_PASSWORD="$POSTGRES_PASS" -d postgres
 ```
-Afterward, run the tests with the password as env var:
-```shell
-$ POSTGRES_PASS="yourpasswordhere" make test
+Afterwards, run the tests with the password as env var:
+```
+POSTGRES_PASS=yourpasswordhere go test -p 1 ./...
 ```
-NOTE: `-p 1` forces execution of package test in serial. Otherwise, they may be
-executed in parallel, and the test may find unexpected entries in the SQL database
+NOTE: `-p 1` forces execution of package test in serial. Otherwise they may be
+executed in paralel and the test may find unexpected entries in the SQL databse
 because it's shared among all tests.
-There is an extra temporary option that allows you to run the API server using the
-Go tests. It will be removed once the API can be properly initialized with data
-from the synchronizer. To use this, run:
-```shell
-$ POSTGRES_PASS="yourpasswordhere" make test-api-server
+There is an extra temporary option that allows you to run the API server using
+the Go tests. This will be removed once the API can be properly initialized,
+with data from the synchronizer and so on. To use this, run:
+```
+FAKE_SERVER=yes POSTGRES_PASS=yourpasswordhere go test -timeout 0 ./api -p 1 -count 1 -v`
 ```
 ### Lint
 All Pull Requests need to pass the configured linter.
-To run the linter locally, first, install [golangci-lint](https://golangci-lint.run).
-Afterward, you can check the lints with this command:
-```shell
-$ make gocilint
+To run the linter locally, first install [golangci-lint](https://golangci-lint.run). Afterwards you can check the lints with this command:
+```
+golangci-lint run --timeout=5m -E whitespace -E gosec -E gci -E misspell -E gomnd -E gofmt -E goimports -E golint --exclude-use-default=false --max-same-issues 0
 ```
 ## Usage
@@ -87,13 +54,13 @@ See [cli/node/README.md](cli/node/README.md)
 ### Proof Server
-The node in mode coordinator requires a proof server (a server capable of
-calculating proofs from the zkInputs). There is a mock proof server CLI
-at `test/proofserver/cli` for testing purposes.
+The node in mode coordinator requires a proof server (a server that is capable
+of calculating proofs from the zkInputs). For testing purposes there is a mock
+proof server cli at `test/proofserver/cli`.
 Usage of `test/proofserver/cli`:
-```shell
+```
 USAGE:
   go run ./test/proofserver/cli OPTIONS
@@ -104,19 +71,11 @@ OPTIONS:
   proving time duration (default 2s)
 ```
-Also, the Makefile commands can be used to run and stop the proof server
-in the background:
-```shell
-$ make run-proof-mock
-$ make stop-proof-mock
-```
 ### `/tmp` as tmpfs
 For every processed batch, the node builds a temporary exit tree in a key-value
 DB stored in `/tmp`. It is highly recommended that `/tmp` is mounted as a RAM
-file system in production to avoid unnecessary reads a writes to disk. This
+file system in production to avoid unecessary reads an writes to disk. This
 can be done by mounting `/tmp` as tmpfs; for example, by having this line in
 `/etc/fstab`:
 ```
@@ -44,7 +44,7 @@ func (a *API) getAccounts(c *gin.Context) {
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	type accountResponse struct {
 		Accounts     []historydb.AccountAPI `json:"accounts"`
 		PendingItems uint64                 `json:"pendingItems"`
@@ -5,7 +5,7 @@ import (
 	"strconv"
 	"testing"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
 	"github.com/mitchellh/copystructure"
@@ -7,7 +7,7 @@ import (
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	"github.com/gin-gonic/gin"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 )
@@ -47,7 +47,7 @@ func (a *API) getAccountCreationAuth(c *gin.Context) {
 		retSQLErr(err, c)
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	c.JSON(http.StatusOK, auth)
 }
@@ -34,56 +34,54 @@ func NewAPI(
 	if explorerEndpoints && hdb == nil {
 		return nil, tracerr.Wrap(errors.New("cannot serve Explorer endpoints without HistoryDB"))
 	}
-	consts, err := hdb.GetConstants()
+	ni, err := hdb.GetNodeInfo()
 	if err != nil {
 		return nil, err
 	}
 	a := &API{
 		h: hdb,
 		cg: &configAPI{
-			RollupConstants:   *newRollupConstants(consts.Rollup),
-			AuctionConstants:  consts.Auction,
-			WDelayerConstants: consts.WDelayer,
+			RollupConstants:   *newRollupConstants(ni.Constants.RollupConstants),
+			AuctionConstants:  ni.Constants.AuctionConstants,
+			WDelayerConstants: ni.Constants.WDelayerConstants,
 		},
 		l2:            l2db,
-		chainID:       consts.ChainID,
-		hermezAddress: consts.HermezAddress,
+		chainID:       ni.Constants.ChainID,
+		hermezAddress: ni.Constants.HermezAddress,
 	}
-	v1 := server.Group("/v1")
 	// Add coordinator endpoints
 	if coordinatorEndpoints {
 		// Account
-		v1.POST("/account-creation-authorization", a.postAccountCreationAuth)
-		v1.GET("/account-creation-authorization/:hezEthereumAddress", a.getAccountCreationAuth)
+		server.POST("/account-creation-authorization", a.postAccountCreationAuth)
+		server.GET("/account-creation-authorization/:hezEthereumAddress", a.getAccountCreationAuth)
 		// Transaction
-		v1.POST("/transactions-pool", a.postPoolTx)
-		v1.GET("/transactions-pool/:id", a.getPoolTx)
+		server.POST("/transactions-pool", a.postPoolTx)
+		server.GET("/transactions-pool/:id", a.getPoolTx)
 	}
 	// Add explorer endpoints
 	if explorerEndpoints {
 		// Account
-		v1.GET("/accounts", a.getAccounts)
-		v1.GET("/accounts/:accountIndex", a.getAccount)
-		v1.GET("/exits", a.getExits)
-		v1.GET("/exits/:batchNum/:accountIndex", a.getExit)
+		server.GET("/accounts", a.getAccounts)
+		server.GET("/accounts/:accountIndex", a.getAccount)
+		server.GET("/exits", a.getExits)
+		server.GET("/exits/:batchNum/:accountIndex", a.getExit)
 		// Transaction
-		v1.GET("/transactions-history", a.getHistoryTxs)
-		v1.GET("/transactions-history/:id", a.getHistoryTx)
+		server.GET("/transactions-history", a.getHistoryTxs)
+		server.GET("/transactions-history/:id", a.getHistoryTx)
 		// Status
-		v1.GET("/batches", a.getBatches)
-		v1.GET("/batches/:batchNum", a.getBatch)
-		v1.GET("/full-batches/:batchNum", a.getFullBatch)
-		v1.GET("/slots", a.getSlots)
-		v1.GET("/slots/:slotNum", a.getSlot)
-		v1.GET("/bids", a.getBids)
-		v1.GET("/state", a.getState)
-		v1.GET("/config", a.getConfig)
-		v1.GET("/tokens", a.getTokens)
-		v1.GET("/tokens/:id", a.getToken)
-		v1.GET("/coordinators", a.getCoordinators)
+		server.GET("/batches", a.getBatches)
+		server.GET("/batches/:batchNum", a.getBatch)
+		server.GET("/full-batches/:batchNum", a.getFullBatch)
+		server.GET("/slots", a.getSlots)
+		server.GET("/slots/:slotNum", a.getSlot)
+		server.GET("/bids", a.getBids)
+		server.GET("/state", a.getState)
+		server.GET("/config", a.getConfig)
+		server.GET("/tokens", a.getTokens)
+		server.GET("/tokens/:id", a.getToken)
+		server.GET("/coordinators", a.getCoordinators)
 	}
 	return a, nil
@@ -19,7 +19,6 @@ import (
 	ethCommon "github.com/ethereum/go-ethereum/common"
 	swagger "github.com/getkin/kin-openapi/openapi3filter"
 	"github.com/gin-gonic/gin"
-	"github.com/hermeznetwork/hermez-node/api/stateapiupdater"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/hermez-node/db/historydb"
@@ -40,8 +39,8 @@ type Pendinger interface {
 	New() Pendinger
 }
-const apiPort = "4010"
-const apiURL = "http://localhost:" + apiPort + "/v1/"
+const apiAddr = ":4010"
+const apiURL = "http://localhost" + apiAddr + "/"
 var SetBlockchain = `
 	Type: Blockchain
@@ -187,7 +186,6 @@ type testCommon struct {
 var tc testCommon
 var config configAPI
 var api *API
-var stateAPIUpdater *stateapiupdater.Updater
 // TestMain initializes the API server, and fill HistoryDB and StateDB with fake data,
 // emulating the task of the synchronizer in order to have data to be returned
@@ -203,13 +201,13 @@ func TestMain(m *testing.M) {
 	if err != nil {
 		panic(err)
 	}
-	apiConnCon := db.NewAPIConnectionController(1, time.Second)
+	apiConnCon := db.NewAPICnnectionController(1, time.Second)
 	hdb := historydb.NewHistoryDB(database, database, apiConnCon)
 	if err != nil {
 		panic(err)
 	}
 	// L2DB
-	l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 1000.0, 24*time.Hour, apiConnCon)
+	l2DB := l2db.NewL2DB(database, database, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
 	test.WipeDB(l2DB.DB()) // this will clean HistoryDB and L2DB
 	// Config (smart contract constants)
 	chainID := uint16(0)
@@ -224,28 +222,15 @@ func TestMain(m *testing.M) {
 	apiGin := gin.Default()
 	// Reset DB
 	test.WipeDB(hdb.DB())
-	constants := &historydb.Constants{
-		SCConsts: common.SCConsts{
-			Rollup:   _config.RollupConstants,
-			Auction:  _config.AuctionConstants,
-			WDelayer: _config.WDelayerConstants,
-		},
-		ChainID:       chainID,
-		HermezAddress: _config.HermezAddress,
-	}
-	if err := hdb.SetConstants(constants); err != nil {
+	if err := hdb.SetInitialNodeInfo(10, 0.0, &historydb.Constants{
+		RollupConstants:   _config.RollupConstants,
+		AuctionConstants:  _config.AuctionConstants,
+		WDelayerConstants: _config.WDelayerConstants,
+		ChainID:           chainID,
+		HermezAddress:     _config.HermezAddress,
+	}); err != nil {
 		panic(err)
 	}
-	nodeConfig := &historydb.NodeConfig{
-		MaxPoolTxs: 10,
-		MinFeeUSD:  0,
-		MaxFeeUSD:  10000000000,
-	}
-	if err := hdb.SetNodeConfig(nodeConfig); err != nil {
-		panic(err)
-	}
 	api, err = NewAPI(
 		true,
 		true,
@@ -258,7 +243,7 @@
 		panic(err)
 	}
 	// Start server
-	listener, err := net.Listen("tcp", ":"+apiPort) //nolint:gosec
+	listener, err := net.Listen("tcp", apiAddr) //nolint:gosec
 	if err != nil {
 		panic(err)
 	}
@@ -270,7 +255,7 @@
 		}
 	}()
-	// Generate blockchain data with til
+	// Genratre blockchain data with til
 	tcc := til.NewContext(chainID, common.RollupConstMaxL1UserTx)
 	tilCfgExtra := til.ConfigExtra{
 		BootCoordAddr: ethCommon.HexToAddress("0xE39fEc6224708f0772D2A74fd3f9055A90E0A9f2"),
@@ -316,7 +301,7 @@
 		USD:       &ethUSD,
 		USDUpdate: &ethNow,
 	})
-	err = api.h.UpdateTokenValue(common.EmptyAddr, ethUSD)
+	err = api.h.UpdateTokenValue(test.EthToken.Symbol, ethUSD)
 	if err != nil {
 		panic(err)
 	}
@@ -343,7 +328,7 @@
 	token.USD = &value
 	token.USDUpdate = &now
 	// Set value in DB
-	err = api.h.UpdateTokenValue(token.EthAddr, value)
+	err = api.h.UpdateTokenValue(token.Symbol, value)
 	if err != nil {
 		panic(err)
 	}
@@ -522,17 +507,6 @@
 		WithdrawalDelay: uint64(3000),
 	}
-	stateAPIUpdater, err = stateapiupdater.NewUpdater(hdb, nodeConfig, &common.SCVariables{
-		Rollup:   rollupVars,
-		Auction:  auctionVars,
-		WDelayer: wdelayerVars,
-	}, constants, &stateapiupdater.RecommendedFeePolicy{
-		PolicyType: stateapiupdater.RecommendedFeePolicyTypeAvgLastHour,
-	})
-	if err != nil {
-		panic(err)
-	}
 	// Generate test data, as expected to be received/sended from/to the API
 	testCoords := genTestCoordinators(commonCoords)
 	testBids := genTestBids(commonBlocks, testCoords, bids)
@@ -617,17 +591,17 @@ func TestTimeout(t *testing.T) {
 	pass := os.Getenv("POSTGRES_PASS")
 	databaseTO, err := db.ConnectSQLDB(5432, "localhost", "hermez", pass, "hermez")
 	require.NoError(t, err)
-	apiConnConTO := db.NewAPIConnectionController(1, 100*time.Millisecond)
+	apiConnConTO := db.NewAPICnnectionController(1, 100*time.Millisecond)
 	hdbTO := historydb.NewHistoryDB(databaseTO, databaseTO, apiConnConTO)
 	require.NoError(t, err)
 	// L2DB
-	l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 1000.0, 24*time.Hour, apiConnConTO)
+	l2DBTO := l2db.NewL2DB(databaseTO, databaseTO, 10, 1000, 1.0, 24*time.Hour, apiConnConTO)
 	// API
 	apiGinTO := gin.Default()
 	finishWait := make(chan interface{})
 	startWait := make(chan interface{})
-	apiGinTO.GET("/v1/wait", func(c *gin.Context) {
+	apiGinTO.GET("/wait", func(c *gin.Context) {
 		cancel, err := apiConnConTO.Acquire()
 		defer cancel()
 		require.NoError(t, err)
@@ -655,9 +629,9 @@
 	require.NoError(t, err)
 	client := &http.Client{}
-	httpReq, err := http.NewRequest("GET", "http://localhost:4444/v1/tokens", nil)
+	httpReq, err := http.NewRequest("GET", "http://localhost:4444/tokens", nil)
 	require.NoError(t, err)
-	httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/v1/wait", nil)
+	httpReqWait, err := http.NewRequest("GET", "http://localhost:4444/wait", nil)
 	require.NoError(t, err)
 	// Request that will get timed out
 	var wg sync.WaitGroup
@@ -52,7 +52,7 @@ func (a *API) getBatches(c *gin.Context) {
 		return
 	}
-	// Build successful response
+	// Build succesfull response
 	type batchesResponse struct {
 		Batches      []historydb.BatchAPI `json:"batches"`
 		PendingItems uint64               `json:"pendingItems"`
@@ -7,12 +7,10 @@ import (
"time" "time"
ethCommon "github.com/ethereum/go-ethereum/common" ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/api/apitypes"
"github.com/hermeznetwork/hermez-node/common" "github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb" "github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/mitchellh/copystructure" "github.com/mitchellh/copystructure"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
type testBatch struct { type testBatch struct {
@@ -22,7 +20,7 @@ type testBatch struct {
EthBlockHash ethCommon.Hash `json:"ethereumBlockHash"` EthBlockHash ethCommon.Hash `json:"ethereumBlockHash"`
Timestamp time.Time `json:"timestamp"` Timestamp time.Time `json:"timestamp"`
ForgerAddr ethCommon.Address `json:"forgerAddr"` ForgerAddr ethCommon.Address `json:"forgerAddr"`
CollectedFees apitypes.CollectedFeesAPI `json:"collectedFees"` CollectedFees map[common.TokenID]string `json:"collectedFees"`
TotalFeesUSD *float64 `json:"historicTotalCollectedFeesUSD"` TotalFeesUSD *float64 `json:"historicTotalCollectedFeesUSD"`
StateRoot string `json:"stateRoot"` StateRoot string `json:"stateRoot"`
NumAccounts int `json:"numAccounts"` NumAccounts int `json:"numAccounts"`
@@ -75,9 +73,9 @@ func genTestBatches(
if !found { if !found {
panic("block not found") panic("block not found")
} }
collectedFees := apitypes.CollectedFeesAPI(make(map[common.TokenID]apitypes.BigIntStr)) collectedFees := make(map[common.TokenID]string)
for k, v := range cBatches[i].CollectedFees { for k, v := range cBatches[i].CollectedFees {
collectedFees[k] = *apitypes.NewBigIntStr(v) collectedFees[k] = v.String()
} }
forgedTxs := 0 forgedTxs := 0
for _, tx := range txs { for _, tx := range txs {
@@ -134,7 +132,7 @@ func TestGetBatches(t *testing.T) {
limit := 3 limit := 3
path := fmt.Sprintf("%s?limit=%d", endpoint, limit) path := fmt.Sprintf("%s?limit=%d", endpoint, limit)
err := doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter) err := doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
require.NoError(t, err) assert.NoError(t, err)
assertBatches(t, tc.batches, fetchedBatches) assertBatches(t, tc.batches, fetchedBatches)
// minBatchNum // minBatchNum
@@ -143,7 +141,7 @@ func TestGetBatches(t *testing.T) {
minBatchNum := tc.batches[len(tc.batches)/2].BatchNum minBatchNum := tc.batches[len(tc.batches)/2].BatchNum
path = fmt.Sprintf("%s?minBatchNum=%d&limit=%d", endpoint, minBatchNum, limit) path = fmt.Sprintf("%s?minBatchNum=%d&limit=%d", endpoint, minBatchNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter) err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
require.NoError(t, err) assert.NoError(t, err)
minBatchNumBatches := []testBatch{} minBatchNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ { for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].BatchNum > minBatchNum { if tc.batches[i].BatchNum > minBatchNum {
@@ -158,7 +156,7 @@ func TestGetBatches(t *testing.T) {
maxBatchNum := tc.batches[len(tc.batches)/2].BatchNum maxBatchNum := tc.batches[len(tc.batches)/2].BatchNum
path = fmt.Sprintf("%s?maxBatchNum=%d&limit=%d", endpoint, maxBatchNum, limit) path = fmt.Sprintf("%s?maxBatchNum=%d&limit=%d", endpoint, maxBatchNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter) err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
require.NoError(t, err) assert.NoError(t, err)
maxBatchNumBatches := []testBatch{} maxBatchNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ { for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].BatchNum < maxBatchNum { if tc.batches[i].BatchNum < maxBatchNum {
@@ -173,7 +171,7 @@ func TestGetBatches(t *testing.T) {
slotNum := tc.batches[len(tc.batches)/2].SlotNum slotNum := tc.batches[len(tc.batches)/2].SlotNum
path = fmt.Sprintf("%s?slotNum=%d&limit=%d", endpoint, slotNum, limit) path = fmt.Sprintf("%s?slotNum=%d&limit=%d", endpoint, slotNum, limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter) err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
require.NoError(t, err) assert.NoError(t, err)
slotNumBatches := []testBatch{} slotNumBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ { for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].SlotNum == slotNum { if tc.batches[i].SlotNum == slotNum {
@@ -188,7 +186,7 @@ func TestGetBatches(t *testing.T) {
forgerAddr := tc.batches[len(tc.batches)/2].ForgerAddr forgerAddr := tc.batches[len(tc.batches)/2].ForgerAddr
path = fmt.Sprintf("%s?forgerAddr=%s&limit=%d", endpoint, forgerAddr.String(), limit) path = fmt.Sprintf("%s?forgerAddr=%s&limit=%d", endpoint, forgerAddr.String(), limit)
err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter) err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
require.NoError(t, err) assert.NoError(t, err)
forgerAddrBatches := []testBatch{} forgerAddrBatches := []testBatch{}
for i := 0; i < len(tc.batches); i++ { for i := 0; i < len(tc.batches); i++ {
if tc.batches[i].ForgerAddr == forgerAddr { if tc.batches[i].ForgerAddr == forgerAddr {
@@ -202,7 +200,7 @@ func TestGetBatches(t *testing.T) {
limit = 6 limit = 6
path = fmt.Sprintf("%s?limit=%d", endpoint, limit) path = fmt.Sprintf("%s?limit=%d", endpoint, limit)
err = doGoodReqPaginated(path, historydb.OrderDesc, &testBatchesResponse{}, appendIter) err = doGoodReqPaginated(path, historydb.OrderDesc, &testBatchesResponse{}, appendIter)
require.NoError(t, err) assert.NoError(t, err)
flippedBatches := []testBatch{} flippedBatches := []testBatch{}
for i := len(tc.batches) - 1; i >= 0; i-- { for i := len(tc.batches) - 1; i >= 0; i-- {
flippedBatches = append(flippedBatches, tc.batches[i]) flippedBatches = append(flippedBatches, tc.batches[i])
@@ -216,7 +214,7 @@ func TestGetBatches(t *testing.T) {
 minBatchNum = tc.batches[len(tc.batches)/4].BatchNum
 path = fmt.Sprintf("%s?minBatchNum=%d&maxBatchNum=%d&limit=%d", endpoint, minBatchNum, maxBatchNum, limit)
 err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-require.NoError(t, err)
+assert.NoError(t, err)
 minMaxBatchNumBatches := []testBatch{}
 for i := 0; i < len(tc.batches); i++ {
 if tc.batches[i].BatchNum < maxBatchNum && tc.batches[i].BatchNum > minBatchNum {
@@ -229,25 +227,25 @@ func TestGetBatches(t *testing.T) {
 fetchedBatches = []testBatch{}
 path = fmt.Sprintf("%s?slotNum=%d&minBatchNum=%d", endpoint, 1, 25)
 err = doGoodReqPaginated(path, historydb.OrderAsc, &testBatchesResponse{}, appendIter)
-require.NoError(t, err)
+assert.NoError(t, err)
 assertBatches(t, []testBatch{}, fetchedBatches)
 // 400
 // Invalid minBatchNum
 path = fmt.Sprintf("%s?minBatchNum=%d", endpoint, -2)
 err = doBadReq("GET", path, nil, 400)
-require.NoError(t, err)
+assert.NoError(t, err)
 // Invalid forgerAddr
 path = fmt.Sprintf("%s?forgerAddr=%s", endpoint, "0xG0000001")
 err = doBadReq("GET", path, nil, 400)
-require.NoError(t, err)
+assert.NoError(t, err)
 }
 func TestGetBatch(t *testing.T) {
 endpoint := apiURL + "batches/"
 for _, batch := range tc.batches {
 fetchedBatch := testBatch{}
-require.NoError(
+assert.NoError(
 t, doGoodReq(
 "GET",
 endpoint+strconv.Itoa(int(batch.BatchNum)),
@@ -257,16 +255,16 @@ func TestGetBatch(t *testing.T) {
 assertBatch(t, batch, fetchedBatch)
 }
 // 400
-require.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
+assert.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
 // 404
-require.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
+assert.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
 }
 func TestGetFullBatch(t *testing.T) {
 endpoint := apiURL + "full-batches/"
 for _, fullBatch := range tc.fullBatches {
 fetchedFullBatch := testFullBatch{}
-require.NoError(
+assert.NoError(
 t, doGoodReq(
 "GET",
 endpoint+strconv.Itoa(int(fullBatch.Batch.BatchNum)),
@@ -277,9 +275,9 @@ func TestGetFullBatch(t *testing.T) {
 assertTxs(t, fullBatch.Txs, fetchedFullBatch.Txs)
 }
 // 400
-require.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
+assert.NoError(t, doBadReq("GET", endpoint+"foo", nil, 400))
 // 404
-require.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
+assert.NoError(t, doBadReq("GET", endpoint+"99999", nil, 404))
 }
 func assertBatches(t *testing.T, expected, actual []testBatch) {


@@ -34,7 +34,7 @@ func (a *API) getBids(c *gin.Context) {
 return
 }
-// Build successful response
+// Build succesfull response
 type bidsResponse struct {
 Bids []historydb.BidAPI `json:"bids"`
 PendingItems uint64 `json:"pendingItems"`


@@ -32,7 +32,7 @@ func (a *API) getCoordinators(c *gin.Context) {
 return
 }
-// Build successful response
+// Build succesfull response
 type coordinatorsResponse struct {
 Coordinators []historydb.CoordinatorAPI `json:"coordinators"`
 PendingItems uint64 `json:"pendingItems"`


@@ -43,7 +43,7 @@ func (a *API) getExits(c *gin.Context) {
 return
 }
-// Build successful response
+// Build succesfull response
 type exitsResponse struct {
 Exits []historydb.ExitAPI `json:"exits"`
 PendingItems uint64 `json:"pendingItems"`
@@ -72,6 +72,6 @@ func (a *API) getExit(c *gin.Context) {
 retSQLErr(err, c)
 return
 }
-// Build successful response
+// Build succesfull response
 c.JSON(http.StatusOK, exit)
 }


@@ -4,7 +4,7 @@ import (
 "fmt"
 "testing"
-"github.com/hermeznetwork/hermez-node/api/apitypes"
+"github.com/hermeznetwork/hermez-node/apitypes"
 "github.com/hermeznetwork/hermez-node/common"
 "github.com/hermeznetwork/hermez-node/db/historydb"
 "github.com/mitchellh/copystructure"


@@ -14,7 +14,7 @@ import (
 )
 const (
-// maxLimit is the max permitted items to be returned in paginated responses
+// maxLimit is the max permited items to be returned in paginated responses
 maxLimit uint = 2049
 // dfltOrder indicates how paginated endpoints are ordered if not specified
@@ -40,8 +40,8 @@ const (
 )
 var (
-// ErrNilBidderAddr is used when a nil bidderAddr is received in the getCoordinator method
-ErrNilBidderAddr = errors.New("biderAddr can not be nil")
+// ErrNillBidderAddr is used when a nil bidderAddr is received in the getCoordinator method
+ErrNillBidderAddr = errors.New("biderAddr can not be nil")
 )
 func retSQLErr(err error, c *gin.Context) {


@@ -50,19 +50,19 @@ func parsePagination(c querier) (fromItem *uint, order string, limit *uint, err
 return fromItem, order, limit, nil
 }
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseQueryUint(name string, dflt *uint, min, max uint, c querier) (*uint, error) { //nolint:SA4009
 str := c.Query(name)
 return stringToUint(str, name, dflt, min, max)
 }
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseQueryInt64(name string, dflt *int64, min, max int64, c querier) (*int64, error) { //nolint:SA4009
 str := c.Query(name)
 return stringToInt64(str, name, dflt, min, max)
 }
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseQueryBool(name string, dflt *bool, c querier) (*bool, error) { //nolint:SA4009
 str := c.Query(name)
 if str == "" {
@@ -295,13 +295,13 @@ func parseParamIdx(c paramer) (*common.Idx, error) {
 return stringToIdx(idxStr, name)
 }
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseParamUint(name string, dflt *uint, min, max uint, c paramer) (*uint, error) { //nolint:SA4009
 str := c.Param(name)
 return stringToUint(str, name, dflt, min, max)
 }
-// nolint reason: res may be not overwritten
+// nolint reason: res may be not overwriten
 func parseParamInt64(name string, dflt *int64, min, max int64, c paramer) (*int64, error) { //nolint:SA4009
 str := c.Param(name)
 return stringToInt64(str, name, dflt, min, max)

@@ -11,7 +11,7 @@ import (
 "github.com/hermeznetwork/tracerr"
 )
-// SlotAPI is a representation of a slot information
+// SlotAPI is a repesentation of a slot information
 type SlotAPI struct {
 ItemID uint64 `json:"itemId"`
 SlotNum int64 `json:"slotNum"`
@@ -316,7 +316,7 @@ func (a *API) getSlots(c *gin.Context) {
 return
 }
-// Build successful response
+// Build succesfull response
 type slotsResponse struct {
 Slots []SlotAPI `json:"slots"`
 PendingItems uint64 `json:"pendingItems"`


@@ -99,14 +99,14 @@ func TestGetSlot(t *testing.T) {
 nil, &fetchedSlot,
 ),
 )
-// ni, err := api.h.GetNodeInfoAPI()
-// assert.NoError(t, err)
-emptySlot := api.getEmptyTestSlot(slotNum, 0, tc.auctionVars)
+ni, err := api.h.GetNodeInfoAPI()
+assert.NoError(t, err)
+emptySlot := api.getEmptyTestSlot(slotNum, ni.APIState.Network.LastSyncBlock, tc.auctionVars)
 assertSlot(t, emptySlot, fetchedSlot)
 // Invalid slotNum
 path := endpoint + strconv.Itoa(-2)
-err := doBadReq("GET", path, nil, 400)
+err = doBadReq("GET", path, nil, 400)
 assert.NoError(t, err)
 }
@@ -129,10 +129,10 @@ func TestGetSlots(t *testing.T) {
 err := doGoodReqPaginated(path, historydb.OrderAsc, &testSlotsResponse{}, appendIter)
 assert.NoError(t, err)
 allSlots := tc.slots
-// ni, err := api.h.GetNodeInfoAPI()
-// assert.NoError(t, err)
+ni, err := api.h.GetNodeInfoAPI()
+assert.NoError(t, err)
 for i := tc.slots[len(tc.slots)-1].SlotNum; i < maxSlotNum; i++ {
-emptySlot := api.getEmptyTestSlot(i+1, 0, tc.auctionVars)
+emptySlot := api.getEmptyTestSlot(i+1, ni.APIState.Network.LastSyncBlock, tc.auctionVars)
 allSlots = append(allSlots, emptySlot)
 }
 assertSlots(t, allSlots, fetchedSlots)


@@ -1,9 +1,13 @@
 package api
 import (
+"database/sql"
 "net/http"
 "github.com/gin-gonic/gin"
+"github.com/hermeznetwork/hermez-node/common"
+"github.com/hermeznetwork/hermez-node/db/historydb"
+"github.com/hermeznetwork/tracerr"
 )
 func (a *API) getState(c *gin.Context) {
@@ -14,3 +18,106 @@ func (a *API) getState(c *gin.Context) {
 }
 c.JSON(http.StatusOK, stateAPI)
 }
+type APIStateUpdater struct {
+hdb *historydb.HistoryDB
+state historydb.StateAPI
+config historydb.NodeConfig
+vars common.SCVariablesPtr
+consts historydb.Constants
+}
+func NewAPIStateUpdater(hdb *historydb.HistoryDB, config *historydb.NodeConfig, vars *common.SCVariables,
+consts *historydb.Constants) *APIStateUpdater {
+u := APIStateUpdater{
+hdb: hdb,
+config: *config,
+consts: *consts,
+}
+u.SetSCVars(&common.SCVariablesPtr{&vars.Rollup, &vars.Auction, &vars.WDelayer})
+return &u
+}
+func (u *APIStateUpdater) Store() error {
+return tracerr.Wrap(u.hdb.SetAPIState(&u.state))
+}
+func (u *APIStateUpdater) SetSCVars(vars *common.SCVariablesPtr) {
+if vars.Rollup != nil {
+u.vars.Rollup = vars.Rollup
+rollupVars := historydb.NewRollupVariablesAPI(u.vars.Rollup)
+u.state.Rollup = *rollupVars
+}
+if vars.Auction != nil {
+u.vars.Auction = vars.Auction
+auctionVars := historydb.NewAuctionVariablesAPI(u.vars.Auction)
+u.state.Auction = *auctionVars
+}
+if vars.WDelayer != nil {
+u.vars.WDelayer = vars.WDelayer
+u.state.WithdrawalDelayer = *u.vars.WDelayer
+}
+}
+func (u *APIStateUpdater) UpdateMetrics() error {
+if u.state.Network.LastBatch == nil {
+return nil
+}
+lastBatchNum := u.state.Network.LastBatch.BatchNum
+metrics, err := u.hdb.GetMetricsInternalAPI(lastBatchNum)
+if err != nil {
+return tracerr.Wrap(err)
+}
+u.state.Metrics = *metrics
+return nil
+}
+func (u *APIStateUpdater) UpdateNetworkInfoBlock(lastEthBlock, lastSyncBlock common.Block) {
+u.state.Network.LastSyncBlock = lastSyncBlock.Num
+u.state.Network.LastEthBlock = lastEthBlock.Num
+}
+func (u *APIStateUpdater) UpdateNetworkInfo(
+lastEthBlock, lastSyncBlock common.Block,
+lastBatchNum common.BatchNum, currentSlot int64,
+) error {
+// Get last batch in API format
+lastBatch, err := u.hdb.GetBatchInternalAPI(lastBatchNum)
+if tracerr.Unwrap(err) == sql.ErrNoRows {
+lastBatch = nil
+} else if err != nil {
+return tracerr.Wrap(err)
+}
+// Get next forgers
+lastClosedSlot := currentSlot + int64(u.state.Auction.ClosedAuctionSlots)
+nextForgers, err := u.hdb.GetNextForgersInternalAPI(u.vars.Auction, &u.consts.Auction,
+lastSyncBlock, currentSlot, lastClosedSlot)
+if tracerr.Unwrap(err) == sql.ErrNoRows {
+nextForgers = nil
+} else if err != nil {
+return tracerr.Wrap(err)
+}
+bucketUpdates, err := u.hdb.GetBucketUpdatesInternalAPI()
+if err == sql.ErrNoRows {
+bucketUpdates = nil
+} else if err != nil {
+return tracerr.Wrap(err)
+}
+// Update NodeInfo struct
+for i, bucketParams := range u.state.Rollup.Buckets {
+for _, bucketUpdate := range bucketUpdates {
+if bucketUpdate.NumBucket == i {
+bucketParams.Withdrawals = bucketUpdate.Withdrawals
+u.state.Rollup.Buckets[i] = bucketParams
+break
+}
+}
+}
+u.state.Network.LastSyncBlock = lastSyncBlock.Num
+u.state.Network.LastEthBlock = lastEthBlock.Num
+u.state.Network.LastBatch = lastBatch
+u.state.Network.CurrentSlot = currentSlot
+u.state.Network.NextForgers = nextForgers
+return nil
+}


@@ -4,7 +4,7 @@ import (
 "math/big"
 "testing"
-"github.com/hermeznetwork/hermez-node/api/apitypes"
+"github.com/hermeznetwork/hermez-node/apitypes"
 "github.com/hermeznetwork/hermez-node/common"
 "github.com/hermeznetwork/hermez-node/db/historydb"
 "github.com/stretchr/testify/assert"
@@ -29,11 +29,10 @@ type testNetwork struct {
 }
 func TestSetRollupVariables(t *testing.T) {
-stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{Rollup: &tc.rollupVars})
-require.NoError(t, stateAPIUpdater.Store())
+api.h.SetRollupVariables(&tc.rollupVars)
 ni, err := api.h.GetNodeInfoAPI()
-require.NoError(t, err)
-assertEqualRollupVariables(t, tc.rollupVars, ni.StateAPI.Rollup, true)
+assert.NoError(t, err)
+assertEqualRollupVariables(t, tc.rollupVars, ni.APIState.Rollup, true)
 }
 func assertEqualRollupVariables(t *testing.T, rollupVariables common.RollupVariables, apiVariables historydb.RollupVariablesAPI, checkBuckets bool) {
@@ -52,19 +51,17 @@ func assertEqualRollupVariables(t *testing.T, rollupVariables common.RollupVaria
 }
 func TestSetWDelayerVariables(t *testing.T) {
-stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{WDelayer: &tc.wdelayerVars})
-require.NoError(t, stateAPIUpdater.Store())
+api.h.SetWDelayerVariables(&tc.wdelayerVars)
 ni, err := api.h.GetNodeInfoAPI()
-require.NoError(t, err)
-assert.Equal(t, tc.wdelayerVars, ni.StateAPI.WithdrawalDelayer)
+assert.NoError(t, err)
+assert.Equal(t, tc.wdelayerVars, ni.APIState.WithdrawalDelayer)
 }
 func TestSetAuctionVariables(t *testing.T) {
-stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{Auction: &tc.auctionVars})
-require.NoError(t, stateAPIUpdater.Store())
+api.h.SetAuctionVariables(&tc.auctionVars)
 ni, err := api.h.GetNodeInfoAPI()
-require.NoError(t, err)
-assertEqualAuctionVariables(t, tc.auctionVars, ni.StateAPI.Auction)
+assert.NoError(t, err)
+assertEqualAuctionVariables(t, tc.auctionVars, ni.APIState.Auction)
 }
 func assertEqualAuctionVariables(t *testing.T, auctionVariables common.AuctionVariables, apiVariables historydb.AuctionVariablesAPI) {
@@ -116,17 +113,16 @@ func TestUpdateNetworkInfo(t *testing.T) {
 err := api.h.AddBucketUpdatesTest(api.h.DB(), bucketUpdates)
 require.NoError(t, err)
-err = stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
-require.NoError(t, err)
-require.NoError(t, stateAPIUpdater.Store())
+err = api.h.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
+assert.NoError(t, err)
 ni, err := api.h.GetNodeInfoAPI()
-require.NoError(t, err)
-assert.Equal(t, lastBlock.Num, ni.StateAPI.Network.LastSyncBlock)
-assert.Equal(t, lastBatchNum, ni.StateAPI.Network.LastBatch.BatchNum)
-assert.Equal(t, currentSlotNum, ni.StateAPI.Network.CurrentSlot)
-assert.Equal(t, int(ni.StateAPI.Auction.ClosedAuctionSlots)+1, len(ni.StateAPI.Network.NextForgers))
-assert.Equal(t, ni.StateAPI.Rollup.Buckets[0].Withdrawals, apitypes.NewBigIntStr(big.NewInt(123)))
-assert.Equal(t, ni.StateAPI.Rollup.Buckets[2].Withdrawals, apitypes.NewBigIntStr(big.NewInt(43)))
+assert.NoError(t, err)
+assert.Equal(t, lastBlock.Num, ni.APIState.Network.LastSyncBlock)
+assert.Equal(t, lastBatchNum, ni.APIState.Network.LastBatch.BatchNum)
+assert.Equal(t, currentSlotNum, ni.APIState.Network.CurrentSlot)
+assert.Equal(t, int(ni.APIState.Auction.ClosedAuctionSlots)+1, len(ni.APIState.Network.NextForgers))
+assert.Equal(t, ni.APIState.Rollup.Buckets[0].Withdrawals, apitypes.NewBigIntStr(big.NewInt(123)))
+assert.Equal(t, ni.APIState.Rollup.Buckets[2].Withdrawals, apitypes.NewBigIntStr(big.NewInt(43)))
 }
 func TestUpdateMetrics(t *testing.T) {
@@ -134,62 +130,55 @@ func TestUpdateMetrics(t *testing.T) {
 lastBlock := tc.blocks[3]
 lastBatchNum := common.BatchNum(12)
 currentSlotNum := int64(1)
-err := stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
-require.NoError(t, err)
-err = stateAPIUpdater.UpdateMetrics()
-require.NoError(t, err)
-require.NoError(t, stateAPIUpdater.Store())
+err := api.h.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
+assert.NoError(t, err)
+err = api.h.UpdateMetrics()
+assert.NoError(t, err)
 ni, err := api.h.GetNodeInfoAPI()
-require.NoError(t, err)
-assert.Greater(t, ni.StateAPI.Metrics.TransactionsPerBatch, float64(0))
-assert.Greater(t, ni.StateAPI.Metrics.BatchFrequency, float64(0))
-assert.Greater(t, ni.StateAPI.Metrics.TransactionsPerSecond, float64(0))
-assert.Greater(t, ni.StateAPI.Metrics.TokenAccounts, int64(0))
-assert.Greater(t, ni.StateAPI.Metrics.Wallets, int64(0))
-assert.Greater(t, ni.StateAPI.Metrics.AvgTransactionFee, float64(0))
+assert.NoError(t, err)
+assert.Greater(t, ni.APIState.Metrics.TransactionsPerBatch, float64(0))
+assert.Greater(t, ni.APIState.Metrics.BatchFrequency, float64(0))
+assert.Greater(t, ni.APIState.Metrics.TransactionsPerSecond, float64(0))
+assert.Greater(t, ni.APIState.Metrics.TotalAccounts, int64(0))
+assert.Greater(t, ni.APIState.Metrics.TotalBJJs, int64(0))
+assert.Greater(t, ni.APIState.Metrics.AvgTransactionFee, float64(0))
 }
 func TestUpdateRecommendedFee(t *testing.T) {
-err := stateAPIUpdater.UpdateRecommendedFee()
-require.NoError(t, err)
-require.NoError(t, stateAPIUpdater.Store())
+err := api.h.UpdateRecommendedFee()
+assert.NoError(t, err)
 var minFeeUSD float64
 if api.l2 != nil {
 minFeeUSD = api.l2.MinFeeUSD()
 }
 ni, err := api.h.GetNodeInfoAPI()
-require.NoError(t, err)
-assert.Greater(t, ni.StateAPI.RecommendedFee.ExistingAccount, minFeeUSD)
-assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccount,
-ni.StateAPI.RecommendedFee.ExistingAccount*
-historydb.CreateAccountExtraFeePercentage)
-assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccountInternal,
-ni.StateAPI.RecommendedFee.ExistingAccount*
-historydb.CreateAccountInternalExtraFeePercentage)
+assert.NoError(t, err)
+assert.Greater(t, ni.APIState.RecommendedFee.ExistingAccount, minFeeUSD)
+// assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccount,
+// ni.StateAPI.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
+// assert.Equal(t, ni.StateAPI.RecommendedFee.CreatesAccountAndRegister,
+// ni.StateAPI.RecommendedFee.ExistingAccount*createAccountInternalExtraFeePercentage)
 }
 func TestGetState(t *testing.T) {
 lastBlock := tc.blocks[3]
 lastBatchNum := common.BatchNum(12)
 currentSlotNum := int64(1)
-stateAPIUpdater.SetSCVars(&common.SCVariablesPtr{
-Rollup: &tc.rollupVars,
-Auction: &tc.auctionVars,
-WDelayer: &tc.wdelayerVars,
-})
-err := stateAPIUpdater.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
-require.NoError(t, err)
-err = stateAPIUpdater.UpdateMetrics()
-require.NoError(t, err)
-err = stateAPIUpdater.UpdateRecommendedFee()
-require.NoError(t, err)
-require.NoError(t, stateAPIUpdater.Store())
+api.h.SetRollupVariables(&tc.rollupVars)
+api.h.SetWDelayerVariables(&tc.wdelayerVars)
+api.h.SetAuctionVariables(&tc.auctionVars)
+err := api.h.UpdateNetworkInfo(lastBlock, lastBlock, lastBatchNum, currentSlotNum)
+assert.NoError(t, err)
+err = api.h.UpdateMetrics()
+assert.NoError(t, err)
+err = api.h.UpdateRecommendedFee()
+assert.NoError(t, err)
 endpoint := apiURL + "state"
 var status testStatus
-require.NoError(t, doGoodReq("GET", endpoint, nil, &status))
+assert.NoError(t, doGoodReq("GET", endpoint, nil, &status))
 // SC vars
 // UpdateNetworkInfo will overwrite buckets withdrawal values
@@ -210,18 +199,16 @@ func TestGetState(t *testing.T) {
 assert.Greater(t, status.Metrics.TransactionsPerBatch, float64(0))
 assert.Greater(t, status.Metrics.BatchFrequency, float64(0))
 assert.Greater(t, status.Metrics.TransactionsPerSecond, float64(0))
-assert.Greater(t, status.Metrics.TokenAccounts, int64(0))
-assert.Greater(t, status.Metrics.Wallets, int64(0))
+assert.Greater(t, status.Metrics.TotalAccounts, int64(0))
+assert.Greater(t, status.Metrics.TotalBJJs, int64(0))
 assert.Greater(t, status.Metrics.AvgTransactionFee, float64(0))
 // Recommended fee
 // TODO: perform real asserts (not just greater than 0)
 assert.Greater(t, status.RecommendedFee.ExistingAccount, float64(0))
-assert.Equal(t, status.RecommendedFee.CreatesAccount,
-status.RecommendedFee.ExistingAccount*
-historydb.CreateAccountExtraFeePercentage)
-assert.Equal(t, status.RecommendedFee.CreatesAccountInternal,
-status.RecommendedFee.ExistingAccount*
-historydb.CreateAccountInternalExtraFeePercentage)
+// assert.Equal(t, status.RecommendedFee.CreatesAccount,
+// status.RecommendedFee.ExistingAccount*createAccountExtraFeePercentage)
+// assert.Equal(t, status.RecommendedFee.CreatesAccountAndRegister,
+// status.RecommendedFee.ExistingAccount*createAccountInternalExtraFeePercentage)
 }
 func assertNextForgers(t *testing.T, expected, actual []historydb.NextForgerAPI) {


@@ -1,212 +0,0 @@
package stateapiupdater
import (
"database/sql"
"sync"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/hermez-node/log"
"github.com/hermeznetwork/tracerr"
)
// Updater is an utility object to facilitate updating the StateAPI
type Updater struct {
hdb *historydb.HistoryDB
state historydb.StateAPI
config historydb.NodeConfig
vars common.SCVariablesPtr
consts historydb.Constants
rw sync.RWMutex
rfp *RecommendedFeePolicy
}
// RecommendedFeePolicy describes how the recommended fee is calculated
type RecommendedFeePolicy struct {
PolicyType RecommendedFeePolicyType
StaticValue float64
}
// RecommendedFeePolicyType describes the different available recommended fee strategies
type RecommendedFeePolicyType string
const (
// Always give the same StaticValue as recommended fee
RecommendedFeePolicyTypeStatic RecommendedFeePolicyType = "Static"
// Set the recommended fee using the average fee of the last hour
RecommendedFeePolicyTypeAvgLastHour RecommendedFeePolicyType = "AvgLastHour"
)
func (rfp *RecommendedFeePolicy) valid() bool {
switch rfp.PolicyType {
case RecommendedFeePolicyTypeStatic:
if rfp.StaticValue == 0 {
log.Warn("RcommendedFee is set to 0 USD, and the policy is static")
}
return true
case RecommendedFeePolicyTypeAvgLastHour:
return true
default:
return false
}
}
// NewUpdater creates a new Updater
func NewUpdater(hdb *historydb.HistoryDB, config *historydb.NodeConfig, vars *common.SCVariables,
consts *historydb.Constants, rfp *RecommendedFeePolicy) (*Updater, error) {
if ok := rfp.valid(); !ok {
return nil, tracerr.New("Invalid recommende fee policy")
}
u := Updater{
hdb: hdb,
config: *config,
consts: *consts,
state: historydb.StateAPI{
NodePublicInfo: historydb.NodePublicInfo{
ForgeDelay: config.ForgeDelay,
},
},
rfp: rfp,
}
u.SetSCVars(vars.AsPtr())
return &u, nil
}
// Store the State in the HistoryDB
func (u *Updater) Store() error {
u.rw.RLock()
defer u.rw.RUnlock()
return tracerr.Wrap(u.hdb.SetStateInternalAPI(&u.state))
}
// SetSCVars sets the smart contract vars (ony updates those that are not nil)
func (u *Updater) SetSCVars(vars *common.SCVariablesPtr) {
u.rw.Lock()
defer u.rw.Unlock()
if vars.Rollup != nil {
u.vars.Rollup = vars.Rollup
rollupVars := historydb.NewRollupVariablesAPI(u.vars.Rollup)
u.state.Rollup = *rollupVars
}
if vars.Auction != nil {
u.vars.Auction = vars.Auction
auctionVars := historydb.NewAuctionVariablesAPI(u.vars.Auction)
u.state.Auction = *auctionVars
}
if vars.WDelayer != nil {
u.vars.WDelayer = vars.WDelayer
u.state.WithdrawalDelayer = *u.vars.WDelayer
}
}
// UpdateRecommendedFee update Status.RecommendedFee information
func (u *Updater) UpdateRecommendedFee() error {
switch u.rfp.PolicyType {
case RecommendedFeePolicyTypeStatic:
u.rw.Lock()
u.state.RecommendedFee = common.RecommendedFee{
ExistingAccount: u.rfp.StaticValue,
CreatesAccount: u.rfp.StaticValue,
CreatesAccountInternal: u.rfp.StaticValue,
}
u.rw.Unlock()
case RecommendedFeePolicyTypeAvgLastHour:
recommendedFee, err := u.hdb.GetRecommendedFee(u.config.MinFeeUSD, u.config.MaxFeeUSD)
if err != nil {
return tracerr.Wrap(err)
}
u.rw.Lock()
u.state.RecommendedFee = *recommendedFee
u.rw.Unlock()
default:
return tracerr.New("Invalid recommende fee policy")
}
return nil
}
// UpdateMetrics update Status.Metrics information
func (u *Updater) UpdateMetrics() error {
u.rw.RLock()
lastBatch := u.state.Network.LastBatch
u.rw.RUnlock()
if lastBatch == nil {
return nil
}
lastBatchNum := lastBatch.BatchNum
metrics, poolLoad, err := u.hdb.GetMetricsInternalAPI(lastBatchNum)
if err != nil {
return tracerr.Wrap(err)
}
u.rw.Lock()
u.state.Metrics = *metrics
u.state.NodePublicInfo.PoolLoad = poolLoad
u.rw.Unlock()
return nil
}
// UpdateNetworkInfoBlock update Status.Network block related information
func (u *Updater) UpdateNetworkInfoBlock(lastEthBlock, lastSyncBlock common.Block) {
u.rw.Lock()
u.state.Network.LastSyncBlock = lastSyncBlock.Num
u.state.Network.LastEthBlock = lastEthBlock.Num
u.rw.Unlock()
}
// UpdateNetworkInfo update Status.Network information
func (u *Updater) UpdateNetworkInfo(
lastEthBlock, lastSyncBlock common.Block,
lastBatchNum common.BatchNum, currentSlot int64,
) error {
// Get last batch in API format
lastBatch, err := u.hdb.GetBatchInternalAPI(lastBatchNum)
if tracerr.Unwrap(err) == sql.ErrNoRows {
lastBatch = nil
} else if err != nil {
return tracerr.Wrap(err)
}
u.rw.RLock()
auctionVars := u.vars.Auction
u.rw.RUnlock()
// Get next forgers
lastClosedSlot := currentSlot + int64(auctionVars.ClosedAuctionSlots)
nextForgers, err := u.hdb.GetNextForgersInternalAPI(auctionVars, &u.consts.Auction,
lastSyncBlock, currentSlot, lastClosedSlot)
if tracerr.Unwrap(err) == sql.ErrNoRows {
nextForgers = nil
} else if err != nil {
return tracerr.Wrap(err)
}
bucketUpdates, err := u.hdb.GetBucketUpdatesInternalAPI()
if err == sql.ErrNoRows {
bucketUpdates = nil
} else if err != nil {
return tracerr.Wrap(err)
}
u.rw.Lock()
// Update NodeInfo struct
for i, bucketParams := range u.state.Rollup.Buckets {
for _, bucketUpdate := range bucketUpdates {
if bucketUpdate.NumBucket == i {
bucketParams.Withdrawals = bucketUpdate.Withdrawals
u.state.Rollup.Buckets[i] = bucketParams
break
}
}
}
// Update pending L1s
pendingL1s, err := u.hdb.GetUnforgedL1UserTxsCount()
if err != nil {
return tracerr.Wrap(err)
}
u.state.Network.LastSyncBlock = lastSyncBlock.Num
u.state.Network.LastEthBlock = lastEthBlock.Num
u.state.Network.LastBatch = lastBatch
u.state.Network.CurrentSlot = currentSlot
u.state.Network.NextForgers = nextForgers
u.state.Network.PendingL1Txs = pendingL1s
u.rw.Unlock()
return nil
}


@@ -59,21 +59,17 @@ externalDocs:
 description: Find out more about Hermez network.
 url: 'https://hermez.io'
 servers:
-- description: Hosted mock up, returns fake data useful for development
+- description: Hosted mock up
 url: https://apimock.hermez.network
-- description: Localhost mock up, returns fake data useful for development
+- description: Localhost mock Up
 url: http://localhost:4010
-- description: Testnet (Rinkeby) server
-url: https://api.testnet.hermez.io
-- description: Mainnet (Ethereum) server, use it carefully, specially if attempting to send transactions. You could lose money!
-url: https://api.hermez.io
 tags:
 - name: Coordinator
 description: Endpoints used by the nodes running in coordinator mode. They are used to interact with the network.
 - name: Explorer
 description: Endpoints used by the nodes running in explorer mode. They are used to get information of the netwrok.
 paths:
-'/v1/account-creation-authorization':
+'/account-creation-authorization':
 post:
 tags:
 - Coordinator
@@ -103,7 +99,7 @@ paths:
 application/json:
 schema:
 $ref: '#/components/schemas/Error500'
-'/v1/account-creation-authorization/{hezEthereumAddress}':
+'/account-creation-authorization/{hezEthereumAddress}':
 get:
 tags:
 - Coordinator
@@ -143,7 +139,7 @@ paths:
 application/json:
 schema:
 $ref: '#/components/schemas/Error500'
-'/v1/accounts':
+'/accounts':
 get:
 tags:
 - Explorer
@@ -214,7 +210,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/accounts/{accountIndex}': '/accounts/{accountIndex}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -253,7 +249,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/exits': '/exits':
get: get:
tags: tags:
- Explorer - Explorer
@@ -340,7 +336,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/exits/{batchNum}/{accountIndex}': '/exits/{batchNum}/{accountIndex}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -385,7 +381,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/transactions-pool': '/transactions-pool':
post: post:
tags: tags:
- Coordinator - Coordinator
@@ -419,7 +415,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/transactions-pool/{id}': '/transactions-pool/{id}':
get: get:
tags: tags:
- Coordinator - Coordinator
@@ -462,7 +458,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/transactions-history': '/transactions-history':
get: get:
tags: tags:
- Explorer - Explorer
@@ -552,7 +548,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/transactions-history/{id}': '/transactions-history/{id}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -592,7 +588,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/batches': '/batches':
get: get:
tags: tags:
- Explorer - Explorer
@@ -668,7 +664,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/batches/{batchNum}': '/batches/{batchNum}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -708,7 +704,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/full-batches/{batchNum}': '/full-batches/{batchNum}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -749,7 +745,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/slots': '/slots':
get: get:
tags: tags:
- Explorer - Explorer
@@ -825,7 +821,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/slots/{slotNum}': '/slots/{slotNum}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -865,7 +861,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/bids': '/bids':
get: get:
tags: tags:
- Explorer - Explorer
@@ -929,7 +925,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/state': '/state':
get: get:
tags: tags:
- Explorer - Explorer
@@ -955,7 +951,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/config': '/config':
get: get:
tags: tags:
- Explorer - Explorer
@@ -975,7 +971,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/tokens': '/tokens':
get: get:
tags: tags:
- Explorer - Explorer
@@ -1048,7 +1044,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/tokens/{id}': '/tokens/{id}':
get: get:
tags: tags:
- Explorer - Explorer
@@ -1087,7 +1083,7 @@ paths:
application/json: application/json:
schema: schema:
$ref: '#/components/schemas/Error500' $ref: '#/components/schemas/Error500'
'/v1/coordinators': '/coordinators':
get: get:
tags: tags:
- Explorer - Explorer
@@ -2573,9 +2569,9 @@ components:
description: List of next coordinators to forge. description: List of next coordinators to forge.
items: items:
$ref: '#/components/schemas/NextForger' $ref: '#/components/schemas/NextForger'
Node: NodeConfig:
type: object type: object
description: Configuration and metrics of the coordinator node. Note that this is specific for each coordinator. description: Configuration of the coordinator node. Note that this is specific for each coordinator.
properties: properties:
forgeDelay: forgeDelay:
type: number type: number
@@ -2585,14 +2581,9 @@ components:
forge at the maximum rate. Note that this is a configuration parameter of a node, forge at the maximum rate. Note that this is a configuration parameter of a node,
so each coordinator may have a different value. so each coordinator may have a different value.
example: 193.4 example: 193.4
poolLoad:
type: number
description: Number of pending transactions in the pool
example: 23201
additionalProperties: false additionalProperties: false
required: required:
- forgeDelay - forgeDelay
- poolLoad
State: State:
type: object type: object
description: Gobal variables of the network description: Gobal variables of the network
@@ -2609,8 +2600,8 @@ components:
$ref: '#/components/schemas/StateWithdrawDelayer' $ref: '#/components/schemas/StateWithdrawDelayer'
recommendedFee: recommendedFee:
$ref: '#/components/schemas/RecommendedFee' $ref: '#/components/schemas/RecommendedFee'
node: nodeConfig:
$ref: '#/components/schemas/Node' $ref: '#/components/schemas/NodeConfig'
additionalProperties: false additionalProperties: false
required: required:
- network - network
@@ -2619,7 +2610,7 @@ components:
- auction - auction
- withdrawalDelayer - withdrawalDelayer
- recommendedFee - recommendedFee
- node - nodeConfig
StateNetwork: StateNetwork:
type: object type: object
description: Gobal statistics of the network description: Gobal statistics of the network
@@ -2643,10 +2634,6 @@ components:
- example: 2334 - example: 2334
nextForgers: nextForgers:
$ref: '#/components/schemas/NextForgers' $ref: '#/components/schemas/NextForgers'
pendingL1Transactions:
type: number
description: Number of pending L1 transactions (added in the smart contract queue but not forged).
example: 22
additionalProperties: false additionalProperties: false
required: required:
- lastEthereumBlock - lastEthereumBlock
@@ -2822,11 +2809,11 @@ components:
type: number type: number
description: Average transactions per second in the last 24 hours. description: Average transactions per second in the last 24 hours.
example: 302.3 example: 302.3
tokenAccounts: totalAccounts:
type: integer type: integer
description: Number of created accounts. description: Number of created accounts.
example: 90473 example: 90473
wallets: totalBJJs:
type: integer type: integer
description: Number of different registered BJJs. description: Number of different registered BJJs.
example: 23067 example: 23067
@@ -2843,8 +2830,8 @@ components:
- transactionsPerBatch - transactionsPerBatch
- batchFrequency - batchFrequency
- transactionsPerSecond - transactionsPerSecond
- tokenAccounts - totalAccounts
- wallets - totalBJJs
- avgTransactionFee - avgTransactionFee
- estimatedTimeToForgeL1 - estimatedTimeToForgeL1
PendingItems: PendingItems:


@@ -53,7 +53,7 @@ func (a *API) getTokens(c *gin.Context) {
 return
 }
-// Build successful response
+// Build succesfull response
 type tokensResponse struct {
 Tokens []historydb.TokenWithUSD `json:"tokens"`
 PendingItems uint64 `json:"pendingItems"`


@@ -42,7 +42,7 @@ func (a *API) getHistoryTxs(c *gin.Context) {
 return
 }
-// Build successful response
+// Build succesfull response
 type txsResponse struct {
 Txs []historydb.TxAPI `json:"transactions"`
 PendingItems uint64 `json:"pendingItems"`
@@ -66,6 +66,6 @@ func (a *API) getHistoryTx(c *gin.Context) {
 retSQLErr(err, c)
 return
 }
-// Build successful response
+// Build succesfull response
 c.JSON(http.StatusOK, tx)
 }


@@ -8,7 +8,7 @@ import (
 "testing"
 "time"
-"github.com/hermeznetwork/hermez-node/api/apitypes"
+"github.com/hermeznetwork/hermez-node/apitypes"
 "github.com/hermeznetwork/hermez-node/common"
 "github.com/hermeznetwork/hermez-node/db/historydb"
 "github.com/hermeznetwork/hermez-node/test"
@@ -455,7 +455,7 @@ func TestGetHistoryTx(t *testing.T) {
 // 400, due invalid TxID
 err := doBadReq("GET", endpoint+"0x001", nil, 400)
 assert.NoError(t, err)
-// 404, due nonexistent TxID in DB
+// 404, due inexistent TxID in DB
 err = doBadReq("GET", endpoint+"0x00eb5e95e1ce5e9f6c4ed402d415e8d0bdd7664769cfd2064d28da04a2c76be432", nil, 404)
 assert.NoError(t, err)
 }


@@ -8,7 +8,7 @@ import (
 ethCommon "github.com/ethereum/go-ethereum/common"
 "github.com/gin-gonic/gin"
-"github.com/hermeznetwork/hermez-node/api/apitypes"
+"github.com/hermeznetwork/hermez-node/apitypes"
 "github.com/hermeznetwork/hermez-node/common"
 "github.com/hermeznetwork/hermez-node/db/l2db"
 "github.com/hermeznetwork/tracerr"
@@ -51,7 +51,7 @@ func (a *API) getPoolTx(c *gin.Context) {
 retSQLErr(err, c)
 return
 }
-// Build successful response
+// Build succesfull response
 c.JSON(http.StatusOK, tx)
 }
@@ -179,7 +179,7 @@ func (a *API) verifyPoolL2TxWrite(txw *l2db.PoolL2TxWrite) error {
 // Get public key
 account, err := a.h.GetCommonAccountAPI(poolTx.FromIdx)
 if err != nil {
-return tracerr.Wrap(fmt.Errorf("Error getting from account: %w", err))
+return tracerr.Wrap(err)
 }
 // Validate TokenID
 if poolTx.TokenID != account.TokenID {


@@ -2,15 +2,10 @@ package api
 import (
 "bytes"
-"crypto/ecdsa"
-"encoding/binary"
-"encoding/hex"
 "encoding/json"
-"math/big"
 "testing"
 "time"
-ethCrypto "github.com/ethereum/go-ethereum/crypto"
 "github.com/hermeznetwork/hermez-node/common"
 "github.com/hermeznetwork/hermez-node/db/historydb"
 "github.com/iden3/go-iden3-crypto/babyjub"
@@ -240,7 +235,7 @@ func TestPoolTxs(t *testing.T) {
 // 400, due invalid TxID
 err = doBadReq("GET", endpoint+"0xG2241b6f2b1dd772dba391f4a1a3407c7c21f598d86e2585a14e616fb4a255f823", nil, 400)
 require.NoError(t, err)
-// 404, due nonexistent TxID in DB
+// 404, due inexistent TxID in DB
 err = doBadReq("GET", endpoint+"0x02241b6f2b1dd772dba391f4a1a3407c7c21f598d86e2585a14e616fb4a255f823", nil, 404)
 require.NoError(t, err)
 }
@@ -262,73 +257,3 @@ func assertPoolTx(t *testing.T, expected, actual testPoolTxReceive) {
 }
 assert.Equal(t, expected, actual)
 }
-// TestAllTosNull test that the API doesn't accept txs with all the TOs set to null (to eth, to bjj, to idx)
-func TestAllTosNull(t *testing.T) {
-// Generate account:
-// Ethereum private key
-var key ecdsa.PrivateKey
-key.D = big.NewInt(int64(4444)) // only for testing
-key.PublicKey.X, key.PublicKey.Y = ethCrypto.S256().ScalarBaseMult(key.D.Bytes())
-key.Curve = ethCrypto.S256()
-addr := ethCrypto.PubkeyToAddress(key.PublicKey)
-// BJJ private key
-var sk babyjub.PrivateKey
-var iBytes [8]byte
-binary.LittleEndian.PutUint64(iBytes[:], 4444)
-copy(sk[:], iBytes[:]) // only for testing
-account := common.Account{
-Idx: 4444,
-TokenID: 0,
-BatchNum: 1,
-BJJ: sk.Public().Compress(),
-EthAddr: addr,
-Nonce: 0,
-Balance: big.NewInt(1000000),
-}
-// Add account to history DB (required to verify signature)
-err := api.h.AddAccounts([]common.Account{account})
-assert.NoError(t, err)
-// Genrate tx with all tos set to nil (to eth, to bjj, to idx)
-tx := common.PoolL2Tx{
-FromIdx: account.Idx,
-TokenID: account.TokenID,
-Amount: big.NewInt(1000),
-Fee: 200,
-Nonce: 0,
-}
-// Set idx and type manually, and check that the function doesn't allow it
-_, err = common.NewPoolL2Tx(&tx)
-assert.Error(t, err)
-tx.Type = common.TxTypeTransfer
-var txID common.TxID
-txIDRaw, err := hex.DecodeString("02e66e24f7f25272906647c8fd1d7fe8acf3cf3e9b38ffc9f94bbb5090dc275073")
-assert.NoError(t, err)
-copy(txID[:], txIDRaw)
-tx.TxID = txID
-// Sign tx
-toSign, err := tx.HashToSign(0)
-assert.NoError(t, err)
-sig := sk.SignPoseidon(toSign)
-tx.Signature = sig.Compress()
-// Transform common.PoolL2Tx ==> testPoolTxSend
-txToSend := testPoolTxSend{
-TxID: tx.TxID,
-Type: tx.Type,
-TokenID: tx.TokenID,
-FromIdx: idxToHez(tx.FromIdx, "ETH"),
-Amount: tx.Amount.String(),
-Fee: tx.Fee,
-Nonce: tx.Nonce,
-Signature: tx.Signature,
-}
-// Send tx to the API
-jsonTxBytes, err := json.Marshal(txToSend)
-require.NoError(t, err)
-jsonTxReader := bytes.NewReader(jsonTxBytes)
-err = doBadReq("POST", apiURL+"transactions-pool", jsonTxReader, 400)
-require.NoError(t, err)
-// Clean historyDB: the added account shouldn't be there for other tests
-_, err = api.h.DB().DB.Exec("delete from account where idx = 4444")
-assert.NoError(t, err)
-}


@@ -4,6 +4,7 @@ import (
 "database/sql/driver"
 "encoding/base64"
 "encoding/hex"
+"encoding/json"
 "errors"
 "fmt"
 "math/big"
@@ -18,10 +19,7 @@ import (
 // BigIntStr is used to scan/value *big.Int directly into strings from/to sql DBs.
 // It assumes that *big.Int are inserted/fetched to/from the DB using the BigIntMeddler meddler
-// defined at github.com/hermeznetwork/hermez-node/db. Since *big.Int is
-// stored as DECIMAL in SQL, there's no need to implement Scan()/Value()
-// because DECIMALS are encoded/decoded as strings by the sql driver, and
-// BigIntStr is already a string.
+// defined at github.com/hermeznetwork/hermez-node/db
 type BigIntStr string
 // NewBigIntStr creates a *BigIntStr from a *big.Int.
@@ -34,6 +32,34 @@ func NewBigIntStr(bigInt *big.Int) *BigIntStr {
 return &bigIntStr
 }
+// Scan implements Scanner for database/sql
+func (b *BigIntStr) Scan(src interface{}) error {
+srcBytes, ok := src.([]byte)
+if !ok {
+return tracerr.Wrap(fmt.Errorf("can't scan %T into apitypes.BigIntStr", src))
+}
+// bytes to *big.Int
+bigInt := new(big.Int).SetBytes(srcBytes)
+// *big.Int to BigIntStr
+bigIntStr := NewBigIntStr(bigInt)
+if bigIntStr == nil {
+return nil
+}
+*b = *bigIntStr
+return nil
+}
+// Value implements valuer for database/sql
+func (b BigIntStr) Value() (driver.Value, error) {
+// string to *big.Int
+bigInt, ok := new(big.Int).SetString(string(b), 10)
+if !ok || bigInt == nil {
+return nil, tracerr.Wrap(errors.New("invalid representation of a *big.Int"))
+}
+// *big.Int to bytes
+return bigInt.Bytes(), nil
+}
 // StrBigInt is used to unmarshal BigIntStr directly into an alias of big.Int
 type StrBigInt big.Int
@@ -47,19 +73,25 @@ func (s *StrBigInt) UnmarshalText(text []byte) error {
 return nil
 }
-// CollectedFeesAPI is send common.batch.CollectedFee through the API
-type CollectedFeesAPI map[common.TokenID]BigIntStr
-// NewCollectedFeesAPI creates a new CollectedFeesAPI from a *big.Int map
-func NewCollectedFeesAPI(m map[common.TokenID]*big.Int) CollectedFeesAPI {
-c := CollectedFeesAPI(make(map[common.TokenID]BigIntStr))
-for k, v := range m {
-c[k] = *NewBigIntStr(v)
-}
-return c
+// CollectedFees is used to retrieve common.batch.CollectedFee from the DB
+type CollectedFees map[common.TokenID]BigIntStr
+// UnmarshalJSON unmarshals a json representation of map[common.TokenID]*big.Int
+func (c *CollectedFees) UnmarshalJSON(text []byte) error {
+bigIntMap := make(map[common.TokenID]*big.Int)
+if err := json.Unmarshal(text, &bigIntMap); err != nil {
+return tracerr.Wrap(err)
+}
+*c = CollectedFees(make(map[common.TokenID]BigIntStr))
+for k, v := range bigIntMap {
+bStr := NewBigIntStr(v)
+(CollectedFees(*c)[k]) = *bStr
+}
+// *c = CollectedFees(bStrMap)
+return nil
 }
-// HezEthAddr is used to scan/value Ethereum Address directly into strings that follow the Ethereum address hez format (^hez:0x[a-fA-F0-9]{40}$) from/to sql DBs.
+// HezEthAddr is used to scan/value Ethereum Address directly into strings that follow the Ethereum address hez fotmat (^hez:0x[a-fA-F0-9]{40}$) from/to sql DBs.
 // It assumes that Ethereum Address are inserted/fetched to/from the DB using the default Scan/Value interface
 type HezEthAddr string
@@ -111,7 +143,7 @@ func (s *StrHezEthAddr) UnmarshalText(text []byte) error {
 return nil
 }
-// HezBJJ is used to scan/value *babyjub.PublicKeyComp directly into strings that follow the BJJ public key hez format (^hez:[A-Za-z0-9_-]{44}$) from/to sql DBs.
+// HezBJJ is used to scan/value *babyjub.PublicKeyComp directly into strings that follow the BJJ public key hez fotmat (^hez:[A-Za-z0-9_-]{44}$) from/to sql DBs.
 // It assumes that *babyjub.PublicKeyComp are inserted/fetched to/from the DB using the default Scan/Value interface
 type HezBJJ string
@@ -184,7 +216,7 @@ func (b HezBJJ) Value() (driver.Value, error) {
 // StrHezBJJ is used to unmarshal HezBJJ directly into an alias of babyjub.PublicKeyComp
 type StrHezBJJ babyjub.PublicKeyComp
-// UnmarshalText unmarshalls a StrHezBJJ
+// UnmarshalText unmarshals a StrHezBJJ
 func (s *StrHezBJJ) UnmarshalText(text []byte) error {
 bjj, err := hezStrToBJJ(string(text))
 if err != nil {
@@ -194,8 +226,8 @@ func (s *StrHezBJJ) UnmarshalText(text []byte) error {
 return nil
 }
-// HezIdx is used to value common.Idx directly into strings that follow the Idx key hez format (hez:tokenSymbol:idx) to sql DBs.
-// Note that this can only be used to insert to DB since there is no way to automatically read from the DB since it needs the tokenSymbol
+// HezIdx is used to value common.Idx directly into strings that follow the Idx key hez fotmat (hez:tokenSymbol:idx) to sql DBs.
+// Note that this can only be used to insert to DB since there is no way to automaticaly read from the DB since it needs the tokenSymbol
 type HezIdx string
 // StrHezIdx is used to unmarshal HezIdx directly into an alias of common.Idx


@@ -28,8 +28,7 @@ type ConfigBatch struct {
 // NewBatchBuilder constructs a new BatchBuilder, and executes the bb.Reset
 // method
-func NewBatchBuilder(dbpath string, synchronizerStateDB *statedb.StateDB, batchNum common.BatchNum,
-nLevels uint64) (*BatchBuilder, error) {
+func NewBatchBuilder(dbpath string, synchronizerStateDB *statedb.StateDB, batchNum common.BatchNum, nLevels uint64) (*BatchBuilder, error) {
 localStateDB, err := statedb.NewLocalStateDB(
 statedb.Config{
 Path: dbpath,


@@ -15,8 +15,7 @@ func TestBatchBuilder(t *testing.T) {
 require.Nil(t, err)
 defer assert.Nil(t, os.RemoveAll(dir))
-synchDB, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128,
-Type: statedb.TypeBatchBuilder, NLevels: 0})
+synchDB, err := statedb.NewStateDB(statedb.Config{Path: dir, Keep: 128, Type: statedb.TypeBatchBuilder, NLevels: 0})
 assert.Nil(t, err)
 bbDir, err := ioutil.TempDir("", "tmpBatchBuilderDB")


@@ -8,7 +8,7 @@ The `hermez-node` has been tested with go version 1.14
 ## Usage
-```shell
+```
 NAME:
 hermez-node - A new cli application
@@ -16,19 +16,18 @@ USAGE:
 node [global options] command [command options] [arguments...]
 VERSION:
-v0.1.0-6-gd8a50c5
+0.1.0-alpha
 COMMANDS:
-version Show the application version
 importkey Import ethereum private key
 genbjj Generate a new BabyJubJub key
-wipesql Wipe the SQL DB (HistoryDB and L2DB) and the StateDBs, leaving the DB in a clean state
+wipesql Wipe the SQL DB (HistoryDB and L2DB), leaving the DB in a clean state
 run Run the hermez-node in the indicated mode
-serveapi Serve the API only
-discard Discard blocks up to a specified block number
 help, h Shows a list of commands or help for one command
 GLOBAL OPTIONS:
---mode MODE Set node MODE (can be "sync" or "coord")
---cfg FILE Node configuration FILE
 --help, -h show help (default: false)
 --version, -v print the version (default: false)
 ```
@@ -55,10 +54,6 @@ To read the documentation of each configuration parameter, please check the
 with `Coordinator` are only used in coord mode, and don't need to be defined
 when running the coordinator in sync mode
-When running the API in standalone mode, the required configuration is a subset
-of the node configuration. Please, check the `type APIServer` at
-[config/config.go](../../config/config.go) to learn about all the parametes.
 ### Notes
 - The private key corresponding to the parameter `Coordinator.ForgerAddress` needs to be imported in the ethereum keystore
@@ -73,9 +68,6 @@ of the node configuration. Please, check the `type APIServer` at
 monitor the size of the folder to avoid running out of space.
 - The node requires a PostgreSQL database. The parameters of the server and
 database must be set in the `PostgreSQL` section.
-- The node requires a web3 RPC server to work. The node has only been tested
-with geth and may not work correctly with other ethereum nodes
-implementations.
 ## Building
@@ -83,7 +75,7 @@ of the node configuration. Please, check the `type APIServer` at
 Building the node requires using the packr utility to bundle the database
 migrations inside the resulting binary. Install the packr utility with:
-```shell
+```
 cd /tmp && go get -u github.com/gobuffalo/packr/v2/packr2 && cd -
 ```
@@ -91,7 +83,7 @@ Make sure your `$PATH` contains `$GOPATH/bin`, otherwise the packr utility will
 not be found.
 Now build the node executable:
-```shell
+```
 cd ../../db && packr2 && cd -
 go build .
 cd ../../db && packr2 clean && cd -
@@ -106,48 +98,35 @@ run the following examples by replacing `./node` with `go run .` and executing
 them in the `cli/node` directory to build from source and run at the same time.
 Run the node in mode synchronizer:
-```shell
-./node run --mode sync --cfg cfg.buidler.toml
-```
+```
+./node --mode sync --cfg cfg.buidler.toml run
+```
 Run the node in mode coordinator:
-```shell
-./node run --mode coord --cfg cfg.buidler.toml
-```
+```
+./node --mode coord --cfg cfg.buidler.toml run
+```
-Serve the API in standalone mode. This command allows serving the API just
-with access to the PostgreSQL database that a node is using. Several instances
-of `serveapi` can be running at the same time with a single PostgreSQL
-database:
-```shell
-./node serveapi --mode coord --cfg cfg.buidler.toml
-```
 Import an ethereum private key into the keystore:
-```shell
-./node importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x618b35096c477aab18b11a752be619f0023a539bb02dd6c813477a6211916cde
-```
+```
+./node --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x618b35096c477aab18b11a752be619f0023a539bb02dd6c813477a6211916cde
+```
 Generate a new BabyJubJub key pair:
-```shell
-./node genbjj
-```
+```
+./node --mode coord --cfg cfg.buidler.toml genbjj
+```
-Check the binary version:
-```shell
-./node version
-```
 Wipe the entier SQL database (this will destroy all synchronized and pool
 data):
-```shell
-./node wipesql --mode coord --cfg cfg.buidler.toml
-```
+```
+./node --mode coord --cfg cfg.buidler.toml wipesql
+```
 Discard all synchronized blocks and associated state up to a given block
 number. This command is useful in case the synchronizer reaches an invalid
 state and you want to roll back a few blocks and try again (maybe with some
 fixes in the code).
-```shell
-./node discard --mode coord --cfg cfg.buidler.toml --block 8061330
-```
+```
+./node --mode coord --cfg cfg.buidler.toml discard --block 8061330
+```


@@ -1,24 +0,0 @@
-[API]
-Address = "localhost:8386"
-Explorer = true
-MaxSQLConnections = 10
-SQLConnectionTimeout = "2s"
-
-[PostgreSQL]
-PortWrite = 5432
-HostWrite = "localhost"
-UserWrite = "hermez"
-PasswordWrite = "yourpasswordhere"
-NameWrite = "hermez"
-
-[Coordinator.L2DB]
-SafetyPeriod = 10
-MaxTxs = 512
-TTL = "24h"
-PurgeBatchDelay = 10
-InvalidateBatchDelay = 20
-PurgeBlockDelay = 10
-InvalidateBlockDelay = 20
-
-[Coordinator.API]
-Coordinator = true


@@ -8,31 +8,8 @@ SQLConnectionTimeout = "2s"
[PriceUpdater] [PriceUpdater]
Interval = "10s" Interval = "10s"
URLBitfinexV2 = "https://api-pub.bitfinex.com/v2/" URL = "https://api-pub.bitfinex.com/v2/"
URLCoinGeckoV3 = "https://api.coingecko.com/api/v3/" Type = "bitfinexV2"
# Available update methods:
# - coingeckoV3 (recommended): get price by SC addr using coingecko API
# - bitfinexV2: get price by token symbol using bitfinex API
# - static (recommended for blacklisting tokens): use the given StaticValue to set the price (if not provided 0 will be used)
# - ignore: don't update the price leave it as it is on the DB
DefaultUpdateMethod = "coingeckoV3" # Update method used for all the tokens registered on the network, and not listed in [[PriceUpdater.TokensConfig]]
[[PriceUpdater.TokensConfig]]
UpdateMethod = "bitfinexV2"
Symbol = "USDT"
Addr = "0xdac17f958d2ee523a2206206994597c13d831ec7"
[[PriceUpdater.TokensConfig]]
UpdateMethod = "coingeckoV3"
Symbol = "ETH"
Addr = "0x0000000000000000000000000000000000000000"
[[PriceUpdater.TokensConfig]]
UpdateMethod = "static"
Symbol = "UNI"
Addr = "0x1f9840a85d5af5bf1d1762f925bdaddc4201f984"
StaticValue = 30.12
[[PriceUpdater.TokensConfig]]
UpdateMethod = "ignore"
Symbol = "SUSHI"
Addr = "0x6b3595068778dd592e39a122f4f5a5cf09c90fe2"
[Debug] [Debug]
APIAddress = "localhost:12345" APIAddress = "localhost:12345"
@@ -74,7 +51,7 @@ ForgerAddress = "0x05c23b938a85ab26A36E6314a0D02080E9ca6BeD" # Non-Boot Coordina
# ForgerAddressPrivateKey = "0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3" # ForgerAddressPrivateKey = "0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3"
# ForgerAddress = "0xb4124ceb3451635dacedd11767f004d8a28c6ee7" # Boot Coordinator # ForgerAddress = "0xb4124ceb3451635dacedd11767f004d8a28c6ee7" # Boot Coordinator
# ForgerAddressPrivateKey = "0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563" # ForgerAddressPrivateKey = "0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563"
MinimumForgeAddressBalance = "0" MinimumForgeAddressBalance = 0
ConfirmBlocks = 10 ConfirmBlocks = 10
L1BatchTimeoutPerc = 0.6 L1BatchTimeoutPerc = 0.6
StartSlotBlocksDelay = 2 StartSlotBlocksDelay = 2
@@ -86,8 +63,6 @@ SyncRetryInterval = "1s"
ForgeDelay = "10s" ForgeDelay = "10s"
ForgeNoTxsDelay = "0s" ForgeNoTxsDelay = "0s"
PurgeByExtDelInterval = "1m" PurgeByExtDelInterval = "1m"
MustForgeAtSlotDeadline = true
IgnoreSlotCommitment = false
[Coordinator.FeeAccount] [Coordinator.FeeAccount]
Address = "0x56232B1c5B10038125Bc7345664B4AFD745bcF8E" Address = "0x56232B1c5B10038125Bc7345664B4AFD745bcF8E"
@@ -99,7 +74,6 @@ BJJ = "0x1b176232f78ba0d388ecc5f4896eca2d3b3d4f272092469f559247297f5c0c13"
SafetyPeriod = 10 SafetyPeriod = 10
MaxTxs = 512 MaxTxs = 512
MinFeeUSD = 0.0 MinFeeUSD = 0.0
MaxFeeUSD = 50.0
TTL = "24h" TTL = "24h"
PurgeBatchDelay = 10 PurgeBatchDelay = 10
InvalidateBatchDelay = 20 InvalidateBatchDelay = 20
@@ -133,10 +107,10 @@ Path = "/tmp/iden3-test/hermez/ethkeystore"
Password = "yourpasswordhere" Password = "yourpasswordhere"
[Coordinator.EthClient.ForgeBatchGasCost] [Coordinator.EthClient.ForgeBatchGasCost]
Fixed = 600000 Fixed = 500000
L1UserTx = 15000 L1UserTx = 8000
L1CoordTx = 8000 L1CoordTx = 9000
L2Tx = 250 L2Tx = 1
[Coordinator.API] [Coordinator.API]
Coordinator = true Coordinator = true
@@ -145,11 +119,3 @@ Coordinator = true
BatchPath = "/tmp/iden3-test/hermez/batchesdebug" BatchPath = "/tmp/iden3-test/hermez/batchesdebug"
LightScrypt = true LightScrypt = true
# RollupVerifierIndex = 0 # RollupVerifierIndex = 0
[RecommendedFeePolicy]
# Strategy used to calculate the recommended fee that the API will expose.
# Available options:
# - Static: always return the same value (StaticValue) in USD
# - AvgLastHour: calculate using the average fee of the forged transactions during the last hour
PolicyType = "Static"
StaticValue = 0.99
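The `[RecommendedFeePolicy]` block above describes two strategies for the fee the API exposes. A minimal sketch of how such a selector could dispatch is below; the type and method names are assumptions based on the TOML keys, not the actual hermez-node types, and the hourly average is passed in by the caller rather than computed from the HistoryDB.

```go
package main

import "fmt"

// PolicyType mirrors the options listed in the config comment above.
type PolicyType string

const (
	PolicyStatic      PolicyType = "Static"
	PolicyAvgLastHour PolicyType = "AvgLastHour"
)

// RecommendedFeePolicy holds the decoded config values.
type RecommendedFeePolicy struct {
	PolicyType  PolicyType
	StaticValue float64
}

// RecommendedFee returns the fee in USD that the API would expose.
// avgLastHour is the average fee of transactions forged in the last hour.
func (p RecommendedFeePolicy) RecommendedFee(avgLastHour float64) float64 {
	switch p.PolicyType {
	case PolicyStatic:
		return p.StaticValue
	case PolicyAvgLastHour:
		return avgLastHour
	default:
		return 0
	}
}

func main() {
	cfg := RecommendedFeePolicy{PolicyType: PolicyStatic, StaticValue: 0.99}
	// Static ignores the observed average and always returns StaticValue.
	fmt.Println(cfg.RecommendedFee(1.50))
}
```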


@@ -1,10 +1,10 @@
#!/bin/sh #!/bin/sh
# Non-Boot Coordinator # Non-Boot Coordinator
go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3 go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x30f5fddb34cd4166adb2c6003fa6b18f380fd2341376be42cf1c7937004ac7a3
# Boot Coordinator # Boot Coordinator
go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563 go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0xa8a54b2d8197bc0b19bb8a084031be71835580a01e70a45a13babd16c9bc1563
# FeeAccount # FeeAccount
go run . importkey --mode coord --cfg cfg.buidler.toml --privatekey 0x3a9270c020e169097808da4b02e8d9100be0f8a38cfad3dcfc0b398076381fdd go run . --mode coord --cfg cfg.buidler.toml importkey --privatekey 0x3a9270c020e169097808da4b02e8d9100be0f8a38cfad3dcfc0b398076381fdd


@@ -5,16 +5,13 @@ import (
"fmt" "fmt"
"os" "os"
"os/signal" "os/signal"
"path"
"strings" "strings"
ethKeystore "github.com/ethereum/go-ethereum/accounts/keystore" ethKeystore "github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/crypto" "github.com/ethereum/go-ethereum/crypto"
"github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/config" "github.com/hermeznetwork/hermez-node/config"
dbUtils "github.com/hermeznetwork/hermez-node/db" dbUtils "github.com/hermeznetwork/hermez-node/db"
"github.com/hermeznetwork/hermez-node/db/historydb" "github.com/hermeznetwork/hermez-node/db/historydb"
"github.com/hermeznetwork/hermez-node/db/kvdb"
"github.com/hermeznetwork/hermez-node/db/l2db" "github.com/hermeznetwork/hermez-node/db/l2db"
"github.com/hermeznetwork/hermez-node/log" "github.com/hermeznetwork/hermez-node/log"
"github.com/hermeznetwork/hermez-node/node" "github.com/hermeznetwork/hermez-node/node"
@@ -34,22 +31,6 @@ const (
modeCoord = "coord" modeCoord = "coord"
) )
var (
// version represents the program based on the git tag
version = "v0.1.0"
// commit represents the program based on the git commit
commit = "dev"
// date represents the date of application was built
date = ""
)
func cmdVersion(c *cli.Context) error {
fmt.Printf("Version = \"%v\"\n", version)
fmt.Printf("Build = \"%v\"\n", commit)
fmt.Printf("Date = \"%v\"\n", date)
return nil
}
func cmdGenBJJ(c *cli.Context) error { func cmdGenBJJ(c *cli.Context) error {
sk := babyjub.NewRandPrivKey() sk := babyjub.NewRandPrivKey()
skBuf := [32]byte(sk) skBuf := [32]byte(sk)
@@ -91,86 +72,6 @@ func cmdImportKey(c *cli.Context) error {
return nil return nil
} }
func resetStateDBs(cfg *Config, batchNum common.BatchNum) error {
log.Infof("Reset Synchronizer StateDB to batchNum %v...", batchNum)
// Manually make a checkpoint from batchNum to current to force current
// to be a valid checkpoint. This is useful because in case of a
// crash, current can be corrupted and the first thing that
// `kvdb.NewKVDB` does is read the current checkpoint, which wouldn't
// succeed in case of corruption.
dbPath := cfg.node.StateDB.Path
source := path.Join(dbPath, fmt.Sprintf("%s%d", kvdb.PathBatchNum, batchNum))
current := path.Join(dbPath, kvdb.PathCurrent)
last := path.Join(dbPath, kvdb.PathLast)
if err := os.RemoveAll(last); err != nil {
return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
}
if batchNum == 0 {
if err := os.RemoveAll(current); err != nil {
return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
}
} else {
if err := kvdb.PebbleMakeCheckpoint(source, current); err != nil {
return tracerr.Wrap(fmt.Errorf("kvdb.PebbleMakeCheckpoint: %w", err))
}
}
db, err := kvdb.NewKVDB(kvdb.Config{
Path: dbPath,
NoGapsCheck: true,
NoLast: true,
})
if err != nil {
return tracerr.Wrap(fmt.Errorf("kvdb.NewKVDB: %w", err))
}
if err := db.Reset(batchNum); err != nil {
return tracerr.Wrap(fmt.Errorf("db.Reset: %w", err))
}
if cfg.mode == node.ModeCoordinator {
log.Infof("Wipe Coordinator StateDBs...")
// We wipe the Coordinator StateDBs entirely (by deleting
// current and resetting to batchNum 0) because the Coordinator
// StateDBs are always reset from Synchronizer when the
// coordinator pipeline starts.
dbPath := cfg.node.Coordinator.TxSelector.Path
current := path.Join(dbPath, kvdb.PathCurrent)
if err := os.RemoveAll(current); err != nil {
return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
}
db, err := kvdb.NewKVDB(kvdb.Config{
Path: dbPath,
NoGapsCheck: true,
NoLast: true,
})
if err != nil {
return tracerr.Wrap(fmt.Errorf("kvdb.NewKVDB: %w", err))
}
if err := db.Reset(0); err != nil {
return tracerr.Wrap(fmt.Errorf("db.Reset: %w", err))
}
dbPath = cfg.node.Coordinator.BatchBuilder.Path
current = path.Join(dbPath, kvdb.PathCurrent)
if err := os.RemoveAll(current); err != nil {
return tracerr.Wrap(fmt.Errorf("os.RemoveAll: %w", err))
}
db, err = kvdb.NewKVDB(kvdb.Config{
Path: dbPath,
NoGapsCheck: true,
NoLast: true,
})
if err != nil {
return tracerr.Wrap(fmt.Errorf("statedb.NewKVDB: %w", err))
}
if err := db.Reset(0); err != nil {
return tracerr.Wrap(fmt.Errorf("db.Reset: %w", err))
}
}
return nil
}
func cmdWipeSQL(c *cli.Context) error { func cmdWipeSQL(c *cli.Context) error {
_cfg, err := parseCli(c) _cfg, err := parseCli(c)
if err != nil { if err != nil {
@@ -179,8 +80,7 @@ func cmdWipeSQL(c *cli.Context) error {
cfg := _cfg.node cfg := _cfg.node
yes := c.Bool(flagYes) yes := c.Bool(flagYes)
if !yes { if !yes {
fmt.Print("*WARNING* Are you sure you want to delete " + fmt.Print("*WARNING* Are you sure you want to delete the SQL DB? [y/N]: ")
"the SQL DB and StateDBs? [y/N]: ")
var input string var input string
if _, err := fmt.Scanln(&input); err != nil { if _, err := fmt.Scanln(&input); err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
@@ -202,17 +102,22 @@ func cmdWipeSQL(c *cli.Context) error {
} }
log.Info("Wiping SQL DB...") log.Info("Wiping SQL DB...")
if err := dbUtils.MigrationsDown(db.DB); err != nil { if err := dbUtils.MigrationsDown(db.DB); err != nil {
return tracerr.Wrap(fmt.Errorf("dbUtils.MigrationsDown: %w", err)) return tracerr.Wrap(err)
}
log.Info("Wiping StateDBs...")
if err := resetStateDBs(_cfg, 0); err != nil {
return tracerr.Wrap(fmt.Errorf("resetStateDBs: %w", err))
} }
return nil return nil
} }
func waitSigInt() { func cmdRun(c *cli.Context) error {
cfg, err := parseCli(c)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
}
node, err := node.NewNode(cfg.mode, cfg.node)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
}
node.Start()
stopCh := make(chan interface{}) stopCh := make(chan interface{})
// catch ^C to send the stop signal // catch ^C to send the stop signal
@@ -233,36 +138,48 @@ func waitSigInt() {
} }
}() }()
<-stopCh <-stopCh
}
func cmdRun(c *cli.Context) error {
cfg, err := parseCli(c)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
}
node, err := node.NewNode(cfg.mode, cfg.node)
if err != nil {
return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
}
node.Start()
waitSigInt()
node.Stop() node.Stop()
return nil return nil
} }
func cmdServeAPI(c *cli.Context) error { func cmdServeAPI(c *cli.Context) error {
cfg, err := parseCliAPIServer(c) cfgPath := c.String(flagCfg)
cfg, err := config.LoadAPIServer(cfgPath)
if err != nil { if err != nil {
if err := cli.ShowAppHelp(c); err != nil {
panic(err)
}
return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err)) return tracerr.Wrap(fmt.Errorf("error parsing flags and config: %w", err))
} }
srv, err := node.NewAPIServer(cfg.mode, cfg.server)
node, err := node.NewNode(cfg.mode, cfg.node)
if err != nil { if err != nil {
return tracerr.Wrap(fmt.Errorf("error starting api server: %w", err)) return tracerr.Wrap(fmt.Errorf("error starting node: %w", err))
} }
srv.Start() node.Start()
waitSigInt()
srv.Stop() stopCh := make(chan interface{})
// catch ^C to send the stop signal
ossig := make(chan os.Signal, 1)
signal.Notify(ossig, os.Interrupt)
const forceStopCount = 3
go func() {
n := 0
for sig := range ossig {
if sig == os.Interrupt {
log.Info("Received Interrupt Signal")
stopCh <- nil
n++
if n == forceStopCount {
log.Fatalf("Received %v Interrupt Signals", forceStopCount)
}
}
}
}()
<-stopCh
node.Stop()
return nil return nil
} }
@@ -318,7 +235,6 @@ func cmdDiscard(c *cli.Context) error {
cfg.Coordinator.L2DB.SafetyPeriod, cfg.Coordinator.L2DB.SafetyPeriod,
cfg.Coordinator.L2DB.MaxTxs, cfg.Coordinator.L2DB.MaxTxs,
cfg.Coordinator.L2DB.MinFeeUSD, cfg.Coordinator.L2DB.MinFeeUSD,
cfg.Coordinator.L2DB.MaxFeeUSD,
cfg.Coordinator.L2DB.TTL.Duration, cfg.Coordinator.L2DB.TTL.Duration,
nil, nil,
) )
@@ -326,11 +242,6 @@ func cmdDiscard(c *cli.Context) error {
return tracerr.Wrap(fmt.Errorf("l2DB.Reorg: %w", err)) return tracerr.Wrap(fmt.Errorf("l2DB.Reorg: %w", err))
} }
log.Info("Resetting StateDBs...")
if err := resetStateDBs(_cfg, batchNum); err != nil {
return tracerr.Wrap(fmt.Errorf("resetStateDBs: %w", err))
}
return nil return nil
} }
@@ -359,56 +270,13 @@ func getConfig(c *cli.Context) (*Config, error) {
switch mode { switch mode {
case modeSync: case modeSync:
cfg.mode = node.ModeSynchronizer cfg.mode = node.ModeSynchronizer
cfg.node, err = config.LoadNode(nodeCfgPath, false) cfg.node, err = config.LoadNode(nodeCfgPath)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
case modeCoord: case modeCoord:
cfg.mode = node.ModeCoordinator cfg.mode = node.ModeCoordinator
fmt.Println("LOADING CFG") cfg.node, err = config.LoadCoordinator(nodeCfgPath)
cfg.node, err = config.LoadNode(nodeCfgPath, true)
if err != nil {
return nil, tracerr.Wrap(err)
}
default:
return nil, tracerr.Wrap(fmt.Errorf("invalid mode \"%v\"", mode))
}
return &cfg, nil
}
// ConfigAPIServer is the configuration of the api server execution
type ConfigAPIServer struct {
mode node.Mode
server *config.APIServer
}
func parseCliAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
cfg, err := getConfigAPIServer(c)
if err != nil {
if err := cli.ShowAppHelp(c); err != nil {
panic(err)
}
return nil, tracerr.Wrap(err)
}
return cfg, nil
}
func getConfigAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
var cfg ConfigAPIServer
mode := c.String(flagMode)
nodeCfgPath := c.String(flagCfg)
var err error
switch mode {
case modeSync:
cfg.mode = node.ModeSynchronizer
cfg.server, err = config.LoadAPIServer(nodeCfgPath, false)
if err != nil {
return nil, tracerr.Wrap(err)
}
case modeCoord:
cfg.mode = node.ModeCoordinator
cfg.server, err = config.LoadAPIServer(nodeCfgPath, true)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
@@ -422,8 +290,8 @@ func getConfigAPIServer(c *cli.Context) (*ConfigAPIServer, error) {
func main() { func main() {
app := cli.NewApp() app := cli.NewApp()
app.Name = "hermez-node" app.Name = "hermez-node"
app.Version = version app.Version = "0.1.0-alpha"
flags := []cli.Flag{ app.Flags = []cli.Flag{
&cli.StringFlag{ &cli.StringFlag{
Name: flagMode, Name: flagMode,
Usage: fmt.Sprintf("Set node `MODE` (can be \"%v\" or \"%v\")", modeSync, modeCoord), Usage: fmt.Sprintf("Set node `MODE` (can be \"%v\" or \"%v\")", modeSync, modeCoord),
@@ -437,23 +305,17 @@ func main() {
} }
app.Commands = []*cli.Command{ app.Commands = []*cli.Command{
{
Name: "version",
Aliases: []string{},
Usage: "Show the application version and build",
Action: cmdVersion,
},
{ {
Name: "importkey", Name: "importkey",
Aliases: []string{}, Aliases: []string{},
Usage: "Import ethereum private key", Usage: "Import ethereum private key",
Action: cmdImportKey, Action: cmdImportKey,
Flags: append(flags, Flags: []cli.Flag{
&cli.StringFlag{ &cli.StringFlag{
Name: flagSK, Name: flagSK,
Usage: "ethereum `PRIVATE_KEY` in hex", Usage: "ethereum `PRIVATE_KEY` in hex",
Required: true, Required: true,
}), }},
}, },
{ {
Name: "genbjj", Name: "genbjj",
@@ -464,41 +326,39 @@ func main() {
{ {
Name: "wipesql", Name: "wipesql",
Aliases: []string{}, Aliases: []string{},
Usage: "Wipe the SQL DB (HistoryDB and L2DB) and the StateDBs, " + Usage: "Wipe the SQL DB (HistoryDB and L2DB), " +
"leaving the DB in a clean state", "leaving the DB in a clean state",
Action: cmdWipeSQL, Action: cmdWipeSQL,
Flags: append(flags, Flags: []cli.Flag{
&cli.BoolFlag{ &cli.BoolFlag{
Name: flagYes, Name: flagYes,
Usage: "automatic yes to the prompt", Usage: "automatic yes to the prompt",
Required: false, Required: false,
}), }},
}, },
{ {
Name: "run", Name: "run",
Aliases: []string{}, Aliases: []string{},
Usage: "Run the hermez-node in the indicated mode", Usage: "Run the hermez-node in the indicated mode",
Action: cmdRun, Action: cmdRun,
Flags: flags,
}, },
{ {
Name: "serveapi", Name: "serveapi",
Aliases: []string{}, Aliases: []string{},
Usage: "Serve the API only", Usage: "Serve the API only",
Action: cmdServeAPI, Action: cmdServeAPI,
Flags: flags,
}, },
{ {
Name: "discard", Name: "discard",
Aliases: []string{}, Aliases: []string{},
Usage: "Discard blocks up to a specified block number", Usage: "Discard blocks up to a specified block number",
Action: cmdDiscard, Action: cmdDiscard,
Flags: append(flags, Flags: []cli.Flag{
&cli.Int64Flag{ &cli.Int64Flag{
Name: flagBlock, Name: flagBlock,
Usage: "last block number to keep", Usage: "last block number to keep",
Required: false, Required: false,
}), }},
}, },
} }


@@ -72,8 +72,7 @@ func (idx Idx) BigInt() *big.Int {
// IdxFromBytes returns Idx from a byte array // IdxFromBytes returns Idx from a byte array
func IdxFromBytes(b []byte) (Idx, error) { func IdxFromBytes(b []byte) (Idx, error) {
if len(b) != IdxBytesLen { if len(b) != IdxBytesLen {
return 0, tracerr.Wrap(fmt.Errorf("can not parse Idx, bytes len %d, expected %d", return 0, tracerr.Wrap(fmt.Errorf("can not parse Idx, bytes len %d, expected %d", len(b), IdxBytesLen))
len(b), IdxBytesLen))
} }
var idxBytes [8]byte var idxBytes [8]byte
copy(idxBytes[2:], b[:]) copy(idxBytes[2:], b[:])
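The `copy(idxBytes[2:], b[:])` trick above right-aligns the 6 serialized bytes inside an 8-byte buffer so they can be read as a big-endian uint64. A self-contained sketch of the same parse is below; `idxFromBytes` and the 6-byte length constant are local stand-ins for `IdxFromBytes`/`IdxBytesLen`, and the final conversion to the `Idx` type is omitted.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const idxBytesLen = 6 // an Idx is serialized as 6 big-endian bytes

// idxFromBytes pads the 6 input bytes into the low end of an 8-byte buffer
// and decodes it as a big-endian uint64.
func idxFromBytes(b []byte) (uint64, error) {
	if len(b) != idxBytesLen {
		return 0, fmt.Errorf("can not parse Idx, bytes len %d, expected %d",
			len(b), idxBytesLen)
	}
	var idxBytes [8]byte
	copy(idxBytes[2:], b)
	return binary.BigEndian.Uint64(idxBytes[:]), nil
}

func main() {
	idx, err := idxFromBytes([]byte{0, 0, 0, 0, 1, 2})
	fmt.Println(idx, err) // 258 <nil>  (0x0102 big-endian)
}
```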
@@ -195,8 +194,7 @@ func (a *Account) BigInts() ([NLeafElems]*big.Int, error) {
return e, nil return e, nil
} }
// HashValue returns the value of the Account, which is the Poseidon hash of its // HashValue returns the value of the Account, which is the Poseidon hash of its *big.Int representation
// *big.Int representation
func (a *Account) HashValue() (*big.Int, error) { func (a *Account) HashValue() (*big.Int, error) {
bi, err := a.BigInts() bi, err := a.BigInts()
if err != nil { if err != nil {


@@ -76,8 +76,7 @@ func TestNonceParser(t *testing.T) {
func TestAccount(t *testing.T) { func TestAccount(t *testing.T) {
var sk babyjub.PrivateKey var sk babyjub.PrivateKey
_, err := hex.Decode(sk[:], _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
assert.NoError(t, err) assert.NoError(t, err)
pk := sk.Public() pk := sk.Public()
@@ -116,8 +115,7 @@ func TestAccountLoop(t *testing.T) {
// check that for different deterministic BabyJubJub keys & random Address there is no problem // check that for different deterministic BabyJubJub keys & random Address there is no problem
for i := 0; i < 256; i++ { for i := 0; i < 256; i++ {
var sk babyjub.PrivateKey var sk babyjub.PrivateKey
_, err := hex.Decode(sk[:], _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
assert.NoError(t, err) assert.NoError(t, err)
pk := sk.Public() pk := sk.Public()
@@ -201,8 +199,7 @@ func bigFromStr(h string, u int) *big.Int {
func TestAccountHashValue(t *testing.T) { func TestAccountHashValue(t *testing.T) {
var sk babyjub.PrivateKey var sk babyjub.PrivateKey
_, err := hex.Decode(sk[:], _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
assert.NoError(t, err) assert.NoError(t, err)
pk := sk.Public() pk := sk.Public()
@@ -215,16 +212,13 @@ func TestAccountHashValue(t *testing.T) {
} }
v, err := account.HashValue() v, err := account.HashValue()
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, assert.Equal(t, "16297758255249203915951182296472515138555043617458222397753168518282206850764", v.String())
"447675324273474410516096114710387312413478475468606444107594732044698919451",
v.String())
} }
func TestAccountHashValueTestVectors(t *testing.T) { func TestAccountHashValueTestVectors(t *testing.T) {
// values from js test vectors // values from js test vectors
ay := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1)) ay := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1))
assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", (hex.EncodeToString(ay.Bytes())))
(hex.EncodeToString(ay.Bytes())))
bjjPoint, err := babyjub.PointFromSignAndY(true, ay) bjjPoint, err := babyjub.PointFromSignAndY(true, ay)
require.NoError(t, err) require.NoError(t, err)
bjj := babyjub.PublicKey(*bjjPoint) bjj := babyjub.PublicKey(*bjjPoint)
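The js test vector above starts from ay = 2^253 - 1, and the assertion pins its hex encoding. That constant is easy to re-derive with only the standard library: 253 set bits serialize to a top byte of 0x1f followed by 31 bytes of 0xff. A small check, with `ayHex` as a local helper name:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"math/big"
)

// ayHex recomputes the test-vector constant ay = 2^253 - 1 and returns its
// hex encoding: 0x1f (the 5 leading bits) followed by 31 bytes of 0xff.
func ayHex() string {
	two253 := new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil)
	ay := new(big.Int).Sub(two253, big.NewInt(1))
	return hex.EncodeToString(ay.Bytes())
}

func main() {
	fmt.Println(ayHex())
}
```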
@@ -242,22 +236,16 @@ func TestAccountHashValueTestVectors(t *testing.T) {
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "9444732965739290427391", e[0].String()) assert.Equal(t, "9444732965739290427391", e[0].String())
assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895", e[1].String()) assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895", e[1].String())
assert.Equal(t, assert.Equal(t, "14474011154664524427946373126085988481658748083205070504932198000989141204991", e[2].String())
"14474011154664524427946373126085988481658748083205070504932198000989141204991",
e[2].String())
assert.Equal(t, "1461501637330902918203684832716283019655932542975", e[3].String()) assert.Equal(t, "1461501637330902918203684832716283019655932542975", e[3].String())
h, err := poseidon.Hash(e[:]) h, err := poseidon.Hash(e[:])
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, assert.Equal(t, "4550823210217540218403400309533329186487982452461145263910122718498735057257", h.String())
"13265203488631320682117942952393454767418777767637549409684833552016769103047",
h.String())
v, err := account.HashValue() v, err := account.HashValue()
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, assert.Equal(t, "4550823210217540218403400309533329186487982452461145263910122718498735057257", v.String())
"13265203488631320682117942952393454767418777767637549409684833552016769103047",
v.String())
// second account // second account
ay = big.NewInt(0) ay = big.NewInt(0)
@@ -273,9 +261,7 @@ func TestAccountHashValueTestVectors(t *testing.T) {
} }
v, err = account.HashValue() v, err = account.HashValue()
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, assert.Equal(t, "7750253361301235345986002241352365187241910378619330147114280396816709365657", v.String())
"2351654555892372227640888372176282444150254868378439619268573230312091195718",
v.String())
// third account // third account
ay = bigFromStr("21b0a1688b37f77b1d1d5539ec3b826db5ac78b2513f574a04c50a7d4f8246d7", 16) ay = bigFromStr("21b0a1688b37f77b1d1d5539ec3b826db5ac78b2513f574a04c50a7d4f8246d7", 16)
@@ -293,15 +279,11 @@ func TestAccountHashValueTestVectors(t *testing.T) {
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "554050781187", e[0].String()) assert.Equal(t, "554050781187", e[0].String())
assert.Equal(t, "42000000000000000000", e[1].String()) assert.Equal(t, "42000000000000000000", e[1].String())
assert.Equal(t, assert.Equal(t, "15238403086306505038849621710779816852318505119327426213168494964113886299863", e[2].String())
"15238403086306505038849621710779816852318505119327426213168494964113886299863",
e[2].String())
assert.Equal(t, "935037732739828347587684875151694054123613453305", e[3].String()) assert.Equal(t, "935037732739828347587684875151694054123613453305", e[3].String())
v, err = account.HashValue() v, err = account.HashValue()
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, assert.Equal(t, "10565754214047872850889045989683221123564392137456000481397520902594455245517", v.String())
"15036148928138382129196903417666258171042923749783835283230591475172197254845",
v.String())
} }
func TestAccountErrNotInFF(t *testing.T) { func TestAccountErrNotInFF(t *testing.T) {
@@ -330,8 +312,7 @@ func TestAccountErrNotInFF(t *testing.T) {
func TestAccountErrNumOverflowNonce(t *testing.T) { func TestAccountErrNumOverflowNonce(t *testing.T) {
var sk babyjub.PrivateKey var sk babyjub.PrivateKey
_, err := hex.Decode(sk[:], _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
assert.NoError(t, err) assert.NoError(t, err)
pk := sk.Public() pk := sk.Public()
@@ -358,8 +339,7 @@ func TestAccountErrNumOverflowNonce(t *testing.T) {
func TestAccountErrNumOverflowBalance(t *testing.T) { func TestAccountErrNumOverflowBalance(t *testing.T) {
var sk babyjub.PrivateKey var sk babyjub.PrivateKey
_, err := hex.Decode(sk[:], _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
assert.NoError(t, err) assert.NoError(t, err)
pk := sk.Public() pk := sk.Public()
@@ -371,16 +351,14 @@ func TestAccountErrNumOverflowBalance(t *testing.T) {
BJJ: pk.Compress(), BJJ: pk.Compress(),
EthAddr: ethCommon.HexToAddress("0xc58d29fA6e86E4FAe04DDcEd660d45BCf3Cb2370"), EthAddr: ethCommon.HexToAddress("0xc58d29fA6e86E4FAe04DDcEd660d45BCf3Cb2370"),
} }
assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895", assert.Equal(t, "6277101735386680763835789423207666416102355444464034512895", account.Balance.String())
account.Balance.String())
_, err = account.Bytes() _, err = account.Bytes()
assert.NoError(t, err) assert.NoError(t, err)
// force value overflow // force value overflow
account.Balance = new(big.Int).Exp(big.NewInt(2), big.NewInt(192), nil) account.Balance = new(big.Int).Exp(big.NewInt(2), big.NewInt(192), nil)
assert.Equal(t, "6277101735386680763835789423207666416102355444464034512896", assert.Equal(t, "6277101735386680763835789423207666416102355444464034512896", account.Balance.String())
account.Balance.String())
b, err := account.Bytes() b, err := account.Bytes()
assert.NotNil(t, err) assert.NotNil(t, err)
assert.Equal(t, fmt.Errorf("%s Balance", ErrNumOverflow), tracerr.Unwrap(err)) assert.Equal(t, fmt.Errorf("%s Balance", ErrNumOverflow), tracerr.Unwrap(err))


@@ -11,15 +11,15 @@ import (
"github.com/iden3/go-iden3-crypto/babyjub" "github.com/iden3/go-iden3-crypto/babyjub"
) )
const ( // AccountCreationAuthMsg is the message that is signed to authorize a Hermez
// AccountCreationAuthMsg is the message that is signed to authorize a // account creation
// Hermez account creation const AccountCreationAuthMsg = "Account creation"
AccountCreationAuthMsg = "Account creation"
// EIP712Version is the used version of the EIP-712 // EIP712Version is the used version of the EIP-712
EIP712Version = "1" const EIP712Version = "1"
// EIP712Provider defines the Provider for the EIP-712
EIP712Provider = "Hermez Network" // EIP712Provider defines the Provider for the EIP-712
) const EIP712Provider = "Hermez Network"
var ( var (
// EmptyEthSignature is an ethereum signature of all zeroes // EmptyEthSignature is an ethereum signature of all zeroes
@@ -84,7 +84,7 @@ func (a *AccountCreationAuth) toHash(chainID uint16,
return rawData, nil return rawData, nil
} }
// HashToSign returns the hash to be signed by the Ethereum address to authorize // HashToSign returns the hash to be signed by the Etherum address to authorize
// the account creation, which follows the EIP-712 encoding // the account creation, which follows the EIP-712 encoding
func (a *AccountCreationAuth) HashToSign(chainID uint16, func (a *AccountCreationAuth) HashToSign(chainID uint16,
hermezContractAddr ethCommon.Address) ([]byte, error) { hermezContractAddr ethCommon.Address) ([]byte, error) {
@@ -96,9 +96,9 @@ func (a *AccountCreationAuth) HashToSign(chainID uint16,
} }
// Sign signs the account creation authorization message using the provided // Sign signs the account creation authorization message using the provided
// `signHash` function, and stores the signature in `a.Signature`. `signHash` // `signHash` function, and stores the signaure in `a.Signature`. `signHash`
// should do an ethereum signature using the account corresponding to // should do an ethereum signature using the account corresponding to
// `a.EthAddr`. The `signHash` function is used to make signing flexible: in // `a.EthAddr`. The `signHash` function is used to make signig flexible: in
// tests we sign directly using the private key, outside tests we sign using // tests we sign directly using the private key, outside tests we sign using
// the keystore (which never exposes the private key). Sign follows the EIP-712 // the keystore (which never exposes the private key). Sign follows the EIP-712
// encoding. // encoding.


@@ -13,8 +13,7 @@ import (
func TestAccountCreationAuthSignVerify(t *testing.T) { func TestAccountCreationAuthSignVerify(t *testing.T) {
// Ethereum key // Ethereum key
ethSk, err := ethSk, err := ethCrypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
ethCrypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
require.NoError(t, err) require.NoError(t, err)
ethAddr := ethCrypto.PubkeyToAddress(ethSk.PublicKey) ethAddr := ethCrypto.PubkeyToAddress(ethSk.PublicKey)
@@ -70,7 +69,6 @@ func TestAccountCreationAuthJSComp(t *testing.T) {
sigExpected string sigExpected string
} }
var tvs []testVector var tvs []testVector
//nolint:lll
tv0 := testVector{ tv0 := testVector{
ethSk: "0000000000000000000000000000000000000000000000000000000000000001", ethSk: "0000000000000000000000000000000000000000000000000000000000000001",
expectedAddress: "0x7E5F4552091A69125d5DfCb7b8C2659029395Bdf", expectedAddress: "0x7E5F4552091A69125d5DfCb7b8C2659029395Bdf",
@@ -81,7 +79,6 @@ func TestAccountCreationAuthJSComp(t *testing.T) {
hashExpected: "c56eba41e511df100c804c5c09288f35887efea4f033be956481af335df3bea2", hashExpected: "c56eba41e511df100c804c5c09288f35887efea4f033be956481af335df3bea2",
sigExpected: "dbedcc5ce02db8f48afbdb2feba9a3a31848eaa8fca5f312ce37b01db45d2199208335330d4445bd2f51d1db68dbc0d0bf3585c4a07504b4efbe46a69eaae5a21b", sigExpected: "dbedcc5ce02db8f48afbdb2feba9a3a31848eaa8fca5f312ce37b01db45d2199208335330d4445bd2f51d1db68dbc0d0bf3585c4a07504b4efbe46a69eaae5a21b",
} }
//nolint:lll
tv1 := testVector{ tv1 := testVector{
ethSk: "0000000000000000000000000000000000000000000000000000000000000002", ethSk: "0000000000000000000000000000000000000000000000000000000000000002",
expectedAddress: "0x2B5AD5c4795c026514f8317c7a215E218DcCD6cF", expectedAddress: "0x2B5AD5c4795c026514f8317c7a215E218DcCD6cF",
@@ -92,7 +89,6 @@ func TestAccountCreationAuthJSComp(t *testing.T) {
hashExpected: "deb9afa479282cf27b442ce8ba86b19448aa87eacef691521a33db5d0feb9959", hashExpected: "deb9afa479282cf27b442ce8ba86b19448aa87eacef691521a33db5d0feb9959",
sigExpected: "6a0da90ba2d2b1be679a28ebe54ee03082d44b836087391cd7d2607c1e4dafe04476e6e88dccb8707c68312512f16c947524b35c80f26c642d23953e9bb84c701c", sigExpected: "6a0da90ba2d2b1be679a28ebe54ee03082d44b836087391cd7d2607c1e4dafe04476e6e88dccb8707c68312512f16c947524b35c80f26c642d23953e9bb84c701c",
} }
//nolint:lll
tv2 := testVector{ tv2 := testVector{
ethSk: "c5e8f61d1ab959b397eecc0a37a6517b8e67a0e7cf1f4bce5591f3ed80199122", ethSk: "c5e8f61d1ab959b397eecc0a37a6517b8e67a0e7cf1f4bce5591f3ed80199122",
expectedAddress: "0xc783df8a850f42e7F7e57013759C285caa701eB6", expectedAddress: "0xc783df8a850f42e7F7e57013759C285caa701eB6",


@@ -13,9 +13,8 @@ const batchNumBytesLen = 8
// Batch is a struct that represents Hermez network batch // Batch is a struct that represents Hermez network batch
type Batch struct { type Batch struct {
BatchNum BatchNum `meddler:"batch_num"` BatchNum BatchNum `meddler:"batch_num"`
// Ethereum block in which the batch is forged EthBlockNum int64 `meddler:"eth_block_num"` // Ethereum block in which the batch is forged
EthBlockNum int64 `meddler:"eth_block_num"`
ForgerAddr ethCommon.Address `meddler:"forger_addr"` ForgerAddr ethCommon.Address `meddler:"forger_addr"`
CollectedFees map[TokenID]*big.Int `meddler:"fees_collected,json"` CollectedFees map[TokenID]*big.Int `meddler:"fees_collected,json"`
FeeIdxsCoordinator []Idx `meddler:"fee_idxs_coordinator,json"` FeeIdxsCoordinator []Idx `meddler:"fee_idxs_coordinator,json"`
@@ -23,11 +22,9 @@ type Batch struct {
NumAccounts int `meddler:"num_accounts"` NumAccounts int `meddler:"num_accounts"`
LastIdx int64 `meddler:"last_idx"` LastIdx int64 `meddler:"last_idx"`
ExitRoot *big.Int `meddler:"exit_root,bigint"` ExitRoot *big.Int `meddler:"exit_root,bigint"`
// ForgeL1TxsNum is optional, Only when the batch forges L1 txs. Identifier that corresponds ForgeL1TxsNum *int64 `meddler:"forge_l1_txs_num"` // optional, Only when the batch forges L1 txs. Identifier that corresponds to the group of L1 txs forged in the current batch.
// to the group of L1 txs forged in the current batch. SlotNum int64 `meddler:"slot_num"` // Slot in which the batch is forged
ForgeL1TxsNum *int64 `meddler:"forge_l1_txs_num"` TotalFeesUSD *float64 `meddler:"total_fees_usd"`
SlotNum int64 `meddler:"slot_num"` // Slot in which the batch is forged
TotalFeesUSD *float64 `meddler:"total_fees_usd"`
} }
// NewEmptyBatch creates a new empty batch // NewEmptyBatch creates a new empty batch
@@ -66,9 +63,7 @@ func (bn BatchNum) BigInt() *big.Int {
// BatchNumFromBytes returns BatchNum from a []byte // BatchNumFromBytes returns BatchNum from a []byte
func BatchNumFromBytes(b []byte) (BatchNum, error) { func BatchNumFromBytes(b []byte) (BatchNum, error) {
if len(b) != batchNumBytesLen { if len(b) != batchNumBytesLen {
return 0, return 0, tracerr.Wrap(fmt.Errorf("can not parse BatchNumFromBytes, bytes len %d, expected %d", len(b), batchNumBytesLen))
tracerr.Wrap(fmt.Errorf("can not parse BatchNumFromBytes, bytes len %d, expected %d",
len(b), batchNumBytesLen))
} }
batchNum := binary.BigEndian.Uint64(b[:batchNumBytesLen]) batchNum := binary.BigEndian.Uint64(b[:batchNumBytesLen])
return BatchNum(batchNum), nil return BatchNum(batchNum), nil


@@ -34,7 +34,7 @@ type Slot struct {
// BatchesLen int // BatchesLen int
BidValue *big.Int BidValue *big.Int
BootCoord bool BootCoord bool
// Bidder, Forger and URL correspond to the winner of the slot (which is // Bidder, Forer and URL correspond to the winner of the slot (which is
// not always the highest bidder). These are the values of the // not always the highest bidder). These are the values of the
// coordinator that is able to forge exclusively before the deadline. // coordinator that is able to forge exclusively before the deadline.
Bidder ethCommon.Address Bidder ethCommon.Address


@@ -5,15 +5,10 @@ import (
) )
// Coordinator represents a Hermez network coordinator who wins an auction for an specific slot // Coordinator represents a Hermez network coordinator who wins an auction for an specific slot
// WARNING: this is strongly based on the previous implementation, once the new spec is done, this // WARNING: this is strongly based on the previous implementation, once the new spec is done, this may change a lot.
// may change a lot.
type Coordinator struct { type Coordinator struct {
// Bidder is the address of the bidder Bidder ethCommon.Address `meddler:"bidder_addr"` // address of the bidder
Bidder ethCommon.Address `meddler:"bidder_addr"` Forger ethCommon.Address `meddler:"forger_addr"` // address of the forger
// Forger is the address of the forger EthBlockNum int64 `meddler:"eth_block_num"` // block in which the coordinator was registered
Forger ethCommon.Address `meddler:"forger_addr"` URL string `meddler:"url"` // URL of the coordinators API
// EthBlockNum is the block in which the coordinator was registered
EthBlockNum int64 `meddler:"eth_block_num"`
// URL of the coordinators API
URL string `meddler:"url"`
} }


@@ -7,8 +7,6 @@ type SCVariables struct {
WDelayer WDelayerVariables `validate:"required"` WDelayer WDelayerVariables `validate:"required"`
} }
// AsPtr returns the SCVariables as a SCVariablesPtr using pointers to the
// original SCVariables
func (v *SCVariables) AsPtr() *SCVariablesPtr { func (v *SCVariables) AsPtr() *SCVariablesPtr {
return &SCVariablesPtr{ return &SCVariablesPtr{
Rollup: &v.Rollup, Rollup: &v.Rollup,


@@ -68,13 +68,11 @@ type AuctionVariables struct {
ClosedAuctionSlots uint16 `meddler:"closed_auction_slots" validate:"required"` ClosedAuctionSlots uint16 `meddler:"closed_auction_slots" validate:"required"`
// Distance (#slots) to the farthest slot to which you can bid (30 days = 4320 slots ) // Distance (#slots) to the farthest slot to which you can bid (30 days = 4320 slots )
OpenAuctionSlots uint16 `meddler:"open_auction_slots" validate:"required"` OpenAuctionSlots uint16 `meddler:"open_auction_slots" validate:"required"`
// How the HEZ tokens deposited by the slot winner are distributed (Burn: 40% - Donation: // How the HEZ tokens deposited by the slot winner are distributed (Burn: 40% - Donation: 40% - HGT: 20%)
// 40% - HGT: 20%)
AllocationRatio [3]uint16 `meddler:"allocation_ratio,json" validate:"required"` AllocationRatio [3]uint16 `meddler:"allocation_ratio,json" validate:"required"`
// Minimum outbid (percentage) over the previous one to consider it valid // Minimum outbid (percentage) over the previous one to consider it valid
Outbidding uint16 `meddler:"outbidding" validate:"required"` Outbidding uint16 `meddler:"outbidding" validate:"required"`
// Number of blocks at the end of a slot in which any coordinator can forge if the winner // Number of blocks at the end of a slot in which any coordinator can forge if the winner has not forged one before
// has not forged one before
SlotDeadline uint8 `meddler:"slot_deadline" validate:"required"` SlotDeadline uint8 `meddler:"slot_deadline" validate:"required"`
} }


@@ -20,22 +20,19 @@ const (
// RollupConstExitIDx IDX 1 is reserved for exits // RollupConstExitIDx IDX 1 is reserved for exits
RollupConstExitIDx = 1 RollupConstExitIDx = 1
// RollupConstLimitTokens Max number of tokens allowed to be registered inside the rollup // RollupConstLimitTokens Max number of tokens allowed to be registered inside the rollup
RollupConstLimitTokens = (1 << 32) //nolint:gomnd RollupConstLimitTokens = (1 << 32)
// RollupConstL1CoordinatorTotalBytes [4 bytes] token + [32 bytes] babyjub + [65 bytes] // RollupConstL1CoordinatorTotalBytes [4 bytes] token + [32 bytes] babyjub + [65 bytes] compressedSignature
// compressedSignature
RollupConstL1CoordinatorTotalBytes = 101 RollupConstL1CoordinatorTotalBytes = 101
// RollupConstL1UserTotalBytes [20 bytes] fromEthAddr + [32 bytes] fromBjj-compressed + [6 // RollupConstL1UserTotalBytes [20 bytes] fromEthAddr + [32 bytes] fromBjj-compressed + [6 bytes] fromIdx +
// bytes] fromIdx + [5 bytes] depositAmountFloat40 + [5 bytes] amountFloat40 + [4 bytes] // [5 bytes] depositAmountFloat40 + [5 bytes] amountFloat40 + [4 bytes] tokenId + [6 bytes] toIdx
// tokenId + [6 bytes] toIdx
RollupConstL1UserTotalBytes = 78 RollupConstL1UserTotalBytes = 78
// RollupConstMaxL1UserTx Maximum L1-user transactions allowed to be queued in a batch // RollupConstMaxL1UserTx Maximum L1-user transactions allowed to be queued in a batch
RollupConstMaxL1UserTx = 128 RollupConstMaxL1UserTx = 128
// RollupConstMaxL1Tx Maximum L1 transactions allowed to be queued in a batch // RollupConstMaxL1Tx Maximum L1 transactions allowed to be queued in a batch
RollupConstMaxL1Tx = 256 RollupConstMaxL1Tx = 256
// RollupConstInputSHAConstantBytes [6 bytes] lastIdx + [6 bytes] newLastIdx + [32 bytes] // RollupConstInputSHAConstantBytes [6 bytes] lastIdx + [6 bytes] newLastIdx + [32 bytes] stateRoot + [32 bytes] newStRoot + [32 bytes] newExitRoot +
// stateRoot + [32 bytes] newStRoot + [32 bytes] newExitRoot + [_MAX_L1_TX * // [_MAX_L1_TX * _L1_USER_TOTALBYTES bytes] l1TxsData + totalL2TxsDataLength + feeIdxCoordinatorLength + [2 bytes] chainID =
// _L1_USER_TOTALBYTES bytes] l1TxsData + totalL2TxsDataLength + feeIdxCoordinatorLength + // 18542 bytes + totalL2TxsDataLength + feeIdxCoordinatorLength
// [2 bytes] chainID = 18542 bytes + totalL2TxsDataLength + feeIdxCoordinatorLength
RollupConstInputSHAConstantBytes = 18546 RollupConstInputSHAConstantBytes = 18546
// RollupConstNumBuckets Number of buckets // RollupConstNumBuckets Number of buckets
RollupConstNumBuckets = 5 RollupConstNumBuckets = 5
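The byte-layout comments above can be sanity-checked with a small sketch; the field sizes are taken directly from those comments, and the helper names are illustrative:

```go
package main

import "fmt"

// l1UserTotalBytes adds up the field sizes listed for an L1 user tx:
// fromEthAddr + fromBjj-compressed + fromIdx + depositAmountFloat40 +
// amountFloat40 + tokenId + toIdx.
func l1UserTotalBytes() int {
	return 20 + 32 + 6 + 5 + 5 + 4 + 6
}

// l1CoordinatorTotalBytes adds up token + babyjub + compressedSignature.
func l1CoordinatorTotalBytes() int {
	return 4 + 32 + 65
}

func main() {
	fmt.Println(l1UserTotalBytes())        // 78, matching RollupConstL1UserTotalBytes
	fmt.Println(l1CoordinatorTotalBytes()) // 101, matching RollupConstL1CoordinatorTotalBytes
}
```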
@@ -47,18 +44,14 @@ const (
var ( var (
// RollupConstLimitDepositAmount Max deposit amount allowed (depositAmount: L1 --> L2) // RollupConstLimitDepositAmount Max deposit amount allowed (depositAmount: L1 --> L2)
RollupConstLimitDepositAmount, _ = new(big.Int).SetString( RollupConstLimitDepositAmount, _ = new(big.Int).SetString("340282366920938463463374607431768211456", 10)
"340282366920938463463374607431768211456", 10)
// RollupConstLimitL2TransferAmount Max amount allowed (amount L2 --> L2) // RollupConstLimitL2TransferAmount Max amount allowed (amount L2 --> L2)
RollupConstLimitL2TransferAmount, _ = new(big.Int).SetString( RollupConstLimitL2TransferAmount, _ = new(big.Int).SetString("6277101735386680763835789423207666416102355444464034512896", 10)
"6277101735386680763835789423207666416102355444464034512896", 10)
// RollupConstEthAddressInternalOnly This ethereum address is used internally for rollup // RollupConstEthAddressInternalOnly This ethereum address is used internally for rollup accounts that don't have ethereum address, only Babyjubjub
	// accounts that don't have ethereum address, only Babyjubjub.	// These non-ethereum accounts can be created by the coordinator and allow users to have a rollup
	// These non-ethereum accounts can be created by the coordinator and allow users to have a	// account without needing an ethereum address
// rollup account without needing an ethereum address RollupConstEthAddressInternalOnly = ethCommon.HexToAddress("0xFFfFfFffFFfffFFfFFfFFFFFffFFFffffFfFFFfF")
RollupConstEthAddressInternalOnly = ethCommon.HexToAddress(
"0xFFfFfFffFFfffFFfFFfFFFFFffFFFffffFfFFFfF")
// RollupConstRfield Modulus zkSNARK // RollupConstRfield Modulus zkSNARK
RollupConstRfield, _ = new(big.Int).SetString( RollupConstRfield, _ = new(big.Int).SetString(
"21888242871839275222246405745257275088548364400416034343698204186575808495617", 10) "21888242871839275222246405745257275088548364400416034343698204186575808495617", 10)
@@ -70,32 +63,24 @@ var (
// RollupConstRecipientInterfaceHash ERC777 recipient interface hash // RollupConstRecipientInterfaceHash ERC777 recipient interface hash
RollupConstRecipientInterfaceHash = crypto.Keccak256([]byte("ERC777TokensRecipient")) RollupConstRecipientInterfaceHash = crypto.Keccak256([]byte("ERC777TokensRecipient"))
// RollupConstPerformL1UserTxSignature the signature of the function that can be called thru // RollupConstPerformL1UserTxSignature the signature of the function that can be called thru an ERC777 `send`
// an ERC777 `send` RollupConstPerformL1UserTxSignature = crypto.Keccak256([]byte("addL1Transaction(uint256,uint48,uint16,uint16,uint32,uint48)"))
RollupConstPerformL1UserTxSignature = crypto.Keccak256([]byte( // RollupConstAddTokenSignature the signature of the function that can be called thru an ERC777 `send`
"addL1Transaction(uint256,uint48,uint16,uint16,uint32,uint48)"))
// RollupConstAddTokenSignature the signature of the function that can be called thru an
// ERC777 `send`
RollupConstAddTokenSignature = crypto.Keccak256([]byte("addToken(address)")) RollupConstAddTokenSignature = crypto.Keccak256([]byte("addToken(address)"))
// RollupConstSendSignature ERC777 Signature // RollupConstSendSignature ERC777 Signature
RollupConstSendSignature = crypto.Keccak256([]byte("send(address,uint256,bytes)")) RollupConstSendSignature = crypto.Keccak256([]byte("send(address,uint256,bytes)"))
// RollupConstERC777Granularity ERC777 Signature // RollupConstERC777Granularity ERC777 Signature
RollupConstERC777Granularity = crypto.Keccak256([]byte("granularity()")) RollupConstERC777Granularity = crypto.Keccak256([]byte("granularity()"))
	// RollupConstWithdrawalDelayerDeposit This constant is used to deposit tokens from ERC777	// RollupConstWithdrawalDelayerDeposit This constant is used to deposit tokens from ERC777 tokens into withdrawal delayer
// tokens into withdrawal delayer
RollupConstWithdrawalDelayerDeposit = crypto.Keccak256([]byte("deposit(address,address,uint192)")) RollupConstWithdrawalDelayerDeposit = crypto.Keccak256([]byte("deposit(address,address,uint192)"))
// ERC20 signature // ERC20 signature
	// RollupConstTransferSignature This constant is used in the _safeTransfer internal method	// RollupConstTransferSignature This constant is used in the _safeTransfer internal method in order to save GAS.
	// in order to save GAS.	RollupConstTransferSignature = crypto.Keccak256([]byte("transfer(address,uint256)"))
RollupConstTransferSignature = crypto.Keccak256([]byte("transfer(address,uint256)")) RollupConstTransferSignature = crypto.Keccak256([]byte("transfer(address,uint256)"))
	// RollupConstTransferFromSignature This constant is used in the _safeTransfer internal	// RollupConstTransferFromSignature This constant is used in the _safeTransfer internal method in order to save GAS.
	// method in order to save GAS.	RollupConstTransferFromSignature = crypto.Keccak256([]byte("transferFrom(address,address,uint256)"))
	RollupConstTransferFromSignature = crypto.Keccak256([]byte(	// RollupConstApproveSignature This constant is used in the _safeTransfer internal method in order to save GAS.
		"transferFrom(address,address,uint256)"))
	// RollupConstApproveSignature This constant is used in the _safeTransfer internal method in
	// order to save GAS.
RollupConstApproveSignature = crypto.Keccak256([]byte("approve(address,uint256)")) RollupConstApproveSignature = crypto.Keccak256([]byte("approve(address,uint256)"))
// RollupConstERC20Signature ERC20 decimals signature // RollupConstERC20Signature ERC20 decimals signature
RollupConstERC20Signature = crypto.Keccak256([]byte("decimals()")) RollupConstERC20Signature = crypto.Keccak256([]byte("decimals()"))
@@ -156,7 +141,6 @@ type TokenExchange struct {
} }
// RollupVariables are the variables of the Rollup Smart Contract // RollupVariables are the variables of the Rollup Smart Contract
//nolint:lll
type RollupVariables struct { type RollupVariables struct {
EthBlockNum int64 `meddler:"eth_block_num"` EthBlockNum int64 `meddler:"eth_block_num"`
FeeAddToken *big.Int `meddler:"fee_add_token,bigint" validate:"required"` FeeAddToken *big.Int `meddler:"fee_add_token,bigint" validate:"required"`


@@ -27,7 +27,6 @@ type WDelayerEscapeHatchWithdrawal struct {
} }
// WDelayerVariables are the variables of the Withdrawal Delayer Smart Contract // WDelayerVariables are the variables of the Withdrawal Delayer Smart Contract
//nolint:lll
type WDelayerVariables struct { type WDelayerVariables struct {
EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"` EthBlockNum int64 `json:"ethereumBlockNum" meddler:"eth_block_num"`
// HermezRollupAddress ethCommon.Address `json:"hermezRollupAddress" meddler:"rollup_address"` // HermezRollupAddress ethCommon.Address `json:"hermezRollupAddress" meddler:"rollup_address"`


@@ -22,9 +22,9 @@ var FeeFactorLsh60 [256]*big.Int
// the coordinator according to the tx type (if the tx requires to create an // the coordinator according to the tx type (if the tx requires to create an
// account and register, only register or the account already exists) // account and register, only register or the account already exists)
type RecommendedFee struct { type RecommendedFee struct {
ExistingAccount float64 `json:"existingAccount"` ExistingAccount float64 `json:"existingAccount"`
CreatesAccount float64 `json:"createAccount"` CreatesAccount float64 `json:"createAccount"`
CreatesAccountInternal float64 `json:"createAccountInternal"` CreatesAccountAndRegister float64 `json:"createAccountInternal"`
} }
// FeeSelector is used to select a percentage from the FeePlan. // FeeSelector is used to select a percentage from the FeePlan.


@@ -1,4 +1,4 @@
// Package common float40.go provides methods to work with Hermez custom half // Package common Float40 provides methods to work with Hermez custom half
// float precision, 40 bits, codification internally called Float40 has been // float precision, 40 bits, codification internally called Float40 has been
// adopted to encode large integers. This is done in order to save bits when L2 // adopted to encode large integers. This is done in order to save bits when L2
// transactions are published. // transactions are published.
@@ -32,8 +32,6 @@ var (
// ErrFloat40NotEnoughPrecission is used when the given *big.Int can // ErrFloat40NotEnoughPrecission is used when the given *big.Int can
	// not be represented as Float40 due to not enough precision	// not be represented as Float40 due to not enough precision
ErrFloat40NotEnoughPrecission = errors.New("Float40 error, not enough precission") ErrFloat40NotEnoughPrecission = errors.New("Float40 error, not enough precission")
thres = big.NewInt(0x08_00_00_00_00)
) )
// Float40 represents a float in a 64 bit format // Float40 represents a float in a 64 bit format
@@ -70,7 +68,7 @@ func (f40 Float40) BigInt() (*big.Int, error) {
var f40Uint64 uint64 = uint64(f40) & 0x00_00_00_FF_FF_FF_FF_FF var f40Uint64 uint64 = uint64(f40) & 0x00_00_00_FF_FF_FF_FF_FF
f40Bytes, err := f40.Bytes() f40Bytes, err := f40.Bytes()
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, err
} }
e := f40Bytes[0] & 0xF8 >> 3 // take first 5 bits e := f40Bytes[0] & 0xF8 >> 3 // take first 5 bits
@@ -88,41 +86,18 @@ func NewFloat40(f *big.Int) (Float40, error) {
e := big.NewInt(0) e := big.NewInt(0)
zero := big.NewInt(0) zero := big.NewInt(0)
ten := big.NewInt(10) ten := big.NewInt(10)
thres := big.NewInt(0x08_00_00_00_00)
for new(big.Int).Mod(m, ten).Cmp(zero) == 0 && m.Cmp(thres) >= 0 { for new(big.Int).Mod(m, ten).Cmp(zero) == 0 && m.Cmp(thres) >= 0 {
m = new(big.Int).Div(m, ten) m = new(big.Int).Div(m, ten)
e = new(big.Int).Add(e, big.NewInt(1)) e = new(big.Int).Add(e, big.NewInt(1))
} }
if e.Int64() > 31 { if e.Int64() > 31 {
return 0, tracerr.Wrap(ErrFloat40E31) return 0, ErrFloat40E31
} }
if m.Cmp(thres) >= 0 { if m.Cmp(thres) >= 0 {
return 0, tracerr.Wrap(ErrFloat40NotEnoughPrecission) return 0, ErrFloat40NotEnoughPrecission
} }
r := new(big.Int).Add(m, r := new(big.Int).Add(m,
new(big.Int).Mul(e, thres)) new(big.Int).Mul(e, thres))
return Float40(r.Uint64()), nil return Float40(r.Uint64()), nil
} }
// NewFloat40Floor encodes a *big.Int integer as a Float40, rounding down in
// case of loss during the encoding. It returns an error in case that the number
// is too big (e>31). Warning: this method should not be used inside the
// hermez-node, it's a helper for external usage to generate valid Float40
// values.
func NewFloat40Floor(f *big.Int) (Float40, error) {
m := f
e := big.NewInt(0)
// zero := big.NewInt(0)
ten := big.NewInt(10)
for m.Cmp(thres) >= 0 {
m = new(big.Int).Div(m, ten)
e = new(big.Int).Add(e, big.NewInt(1))
}
if e.Int64() > 31 {
return 0, tracerr.Wrap(ErrFloat40E31)
}
r := new(big.Int).Add(m,
new(big.Int).Mul(e, thres))
return Float40(r.Uint64()), nil
}


@@ -1,11 +1,9 @@
package common package common
import ( import (
"fmt"
"math/big" "math/big"
"testing" "testing"
"github.com/hermeznetwork/tracerr"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@@ -57,56 +55,7 @@ func TestExpectError(t *testing.T) {
bi, ok := new(big.Int).SetString(test, 10) bi, ok := new(big.Int).SetString(test, 10)
require.True(t, ok) require.True(t, ok)
_, err := NewFloat40(bi) _, err := NewFloat40(bi)
assert.Equal(t, testVector[test], tracerr.Unwrap(err)) assert.Equal(t, testVector[test], err)
}
}
func TestNewFloat40Floor(t *testing.T) {
testVector := map[string][]string{
		// []string contains [Float40 value, Float40 Floor value]; when
		// the Float40 value is expected to be 0, it is because an error
		// is expected
"9922334455000000000000000000000000000000": {
"1040714485495", "1040714485495", "9922334455000000000000000000000000000000"},
"9922334455000000000000000000000000000001": { // Floor [2] will be same as prev line
"0", "1040714485495", "9922334455000000000000000000000000000000"},
"9922334454999999999999999999999999999999": {
"0", "1040714485494", "9922334454000000000000000000000000000000"},
"42949672950000000000000000000000000000000": {
"1069446856703", "1069446856703", "42949672950000000000000000000000000000000"},
"99223344556573838487575": {
"0", "456598933239", "99223344550000000000000"},
"992233445500000000000000000000000000000000": {
"0", "0", "0"}, // e>31, returns 0, err
"343597383670000000000000000000000000000000": {
"1099511627775", "1099511627775", "343597383670000000000000000000000000000000"},
"343597383680000000000000000000000000000000": {
"0", "0", "0"}, // e>31, returns 0, err
"1157073197879933027": {
"0", "286448638922", "1157073197800000000"},
}
for test := range testVector {
bi, ok := new(big.Int).SetString(test, 10)
require.True(t, ok)
f40, err := NewFloat40(bi)
if f40 == 0 {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
assert.Equal(t, testVector[test][0], fmt.Sprint(uint64(f40)))
f40, err = NewFloat40Floor(bi)
if f40 == 0 {
assert.Equal(t, ErrFloat40E31, tracerr.Unwrap(err))
} else {
assert.NoError(t, err)
}
assert.Equal(t, testVector[test][1], fmt.Sprint(uint64(f40)))
bi2, err := f40.BigInt()
require.NoError(t, err)
assert.Equal(t, fmt.Sprint(testVector[test][2]), bi2.String())
} }
} }


@@ -21,33 +21,25 @@ type L1Tx struct {
// where type: // where type:
// - L1UserTx: 0 // - L1UserTx: 0
// - L1CoordinatorTx: 1 // - L1CoordinatorTx: 1
TxID TxID `meddler:"id"` TxID TxID `meddler:"id"`
// ToForgeL1TxsNum indicates in which the tx was forged / will be forged ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"` // toForgeL1TxsNum in which the tx was forged / will be forged
ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"` Position int `meddler:"position"`
	Position int `meddler:"position"`	UserOrigin bool `meddler:"user_origin"` // true if the tx was originated by a user, false if it was originated by a coordinator. Note that this differs from the spec for implementation simplification purposes
	// UserOrigin is set to true if the tx was originated by a user, false if it was	FromIdx Idx `meddler:"from_idx,zeroisnull"` // FromIdx is used by L1Tx/Deposit to indicate the Idx receiver of the L1Tx.DepositAmount (deposit)
	// originated by a coordinator. Note that this differs from the spec for implementation
	// simplification purposes
UserOrigin bool `meddler:"user_origin"`
// FromIdx is used by L1Tx/Deposit to indicate the Idx receiver of the L1Tx.DepositAmount
// (deposit)
FromIdx Idx `meddler:"from_idx,zeroisnull"`
EffectiveFromIdx Idx `meddler:"effective_from_idx,zeroisnull"` EffectiveFromIdx Idx `meddler:"effective_from_idx,zeroisnull"`
FromEthAddr ethCommon.Address `meddler:"from_eth_addr,zeroisnull"` FromEthAddr ethCommon.Address `meddler:"from_eth_addr,zeroisnull"`
FromBJJ babyjub.PublicKeyComp `meddler:"from_bjj,zeroisnull"` FromBJJ babyjub.PublicKeyComp `meddler:"from_bjj,zeroisnull"`
// ToIdx is ignored in L1Tx/Deposit, but used in the L1Tx/DepositAndTransfer ToIdx Idx `meddler:"to_idx"` // ToIdx is ignored in L1Tx/Deposit, but used in the L1Tx/DepositAndTransfer
ToIdx Idx `meddler:"to_idx"` TokenID TokenID `meddler:"token_id"`
TokenID TokenID `meddler:"token_id"` Amount *big.Int `meddler:"amount,bigint"`
Amount *big.Int `meddler:"amount,bigint"`
// EffectiveAmount only applies to L1UserTx. // EffectiveAmount only applies to L1UserTx.
EffectiveAmount *big.Int `meddler:"effective_amount,bigintnull"` EffectiveAmount *big.Int `meddler:"effective_amount,bigintnull"`
DepositAmount *big.Int `meddler:"deposit_amount,bigint"` DepositAmount *big.Int `meddler:"deposit_amount,bigint"`
// EffectiveDepositAmount only applies to L1UserTx. // EffectiveDepositAmount only applies to L1UserTx.
EffectiveDepositAmount *big.Int `meddler:"effective_deposit_amount,bigintnull"` EffectiveDepositAmount *big.Int `meddler:"effective_deposit_amount,bigintnull"`
// Ethereum Block Number in which this L1Tx was added to the queue EthBlockNum int64 `meddler:"eth_block_num"` // Ethereum Block Number in which this L1Tx was added to the queue
EthBlockNum int64 `meddler:"eth_block_num"` Type TxType `meddler:"type"`
Type TxType `meddler:"type"` BatchNum *BatchNum `meddler:"batch_num"`
BatchNum *BatchNum `meddler:"batch_num"`
} }
// NewL1Tx returns the given L1Tx with the TxId & Type parameters calculated // NewL1Tx returns the given L1Tx with the TxId & Type parameters calculated
@@ -259,7 +251,7 @@ func L1TxFromDataAvailability(b []byte, nLevels uint32) (*L1Tx, error) {
} }
l1tx.ToIdx = toIdx l1tx.ToIdx = toIdx
l1tx.EffectiveAmount, err = Float40FromBytes(amountBytes).BigInt() l1tx.EffectiveAmount, err = Float40FromBytes(amountBytes).BigInt()
return &l1tx, tracerr.Wrap(err) return &l1tx, err
} }
// BytesGeneric returns the generic representation of a L1Tx. This method is // BytesGeneric returns the generic representation of a L1Tx. This method is
@@ -339,9 +331,7 @@ func (tx *L1Tx) BytesCoordinatorTx(compressedSignatureBytes []byte) ([]byte, err
// L1UserTxFromBytes decodes a L1Tx from []byte // L1UserTxFromBytes decodes a L1Tx from []byte
func L1UserTxFromBytes(b []byte) (*L1Tx, error) { func L1UserTxFromBytes(b []byte) (*L1Tx, error) {
if len(b) != RollupConstL1UserTotalBytes { if len(b) != RollupConstL1UserTotalBytes {
return nil, return nil, tracerr.Wrap(fmt.Errorf("Can not parse L1Tx bytes, expected length %d, current: %d", 68, len(b)))
tracerr.Wrap(fmt.Errorf("Can not parse L1Tx bytes, expected length %d, current: %d",
68, len(b)))
} }
tx := &L1Tx{ tx := &L1Tx{
@@ -379,12 +369,9 @@ func L1UserTxFromBytes(b []byte) (*L1Tx, error) {
} }
// L1CoordinatorTxFromBytes decodes a L1Tx from []byte // L1CoordinatorTxFromBytes decodes a L1Tx from []byte
func L1CoordinatorTxFromBytes(b []byte, chainID *big.Int, hermezAddress ethCommon.Address) (*L1Tx, func L1CoordinatorTxFromBytes(b []byte, chainID *big.Int, hermezAddress ethCommon.Address) (*L1Tx, error) {
error) {
if len(b) != RollupConstL1CoordinatorTotalBytes { if len(b) != RollupConstL1CoordinatorTotalBytes {
return nil, tracerr.Wrap( return nil, tracerr.Wrap(fmt.Errorf("Can not parse L1CoordinatorTx bytes, expected length %d, current: %d", 101, len(b)))
fmt.Errorf("Can not parse L1CoordinatorTx bytes, expected length %d, current: %d",
101, len(b)))
} }
tx := &L1Tx{ tx := &L1Tx{


@@ -29,8 +29,7 @@ func TestNewL1UserTx(t *testing.T) {
} }
l1Tx, err := NewL1Tx(l1Tx) l1Tx, err := NewL1Tx(l1Tx)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "0x00a6cbae3b8661fb75b0919ca6605a02cfb04d9c6dd16870fa0fcdf01befa32768", assert.Equal(t, "0x00a6cbae3b8661fb75b0919ca6605a02cfb04d9c6dd16870fa0fcdf01befa32768", l1Tx.TxID.String())
l1Tx.TxID.String())
} }
func TestNewL1CoordinatorTx(t *testing.T) { func TestNewL1CoordinatorTx(t *testing.T) {
@@ -47,8 +46,7 @@ func TestNewL1CoordinatorTx(t *testing.T) {
} }
l1Tx, err := NewL1Tx(l1Tx) l1Tx, err := NewL1Tx(l1Tx)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "0x01274482d73df4dab34a1b6740adfca347a462513aa14e82f27b12f818d1b68c84", assert.Equal(t, "0x01274482d73df4dab34a1b6740adfca347a462513aa14e82f27b12f818d1b68c84", l1Tx.TxID.String())
l1Tx.TxID.String())
} }
func TestL1TxCompressedData(t *testing.T) { func TestL1TxCompressedData(t *testing.T) {
@@ -201,8 +199,7 @@ func TestL1userTxByteParsers(t *testing.T) {
func TestL1TxByteParsersCompatibility(t *testing.T) { func TestL1TxByteParsersCompatibility(t *testing.T) {
// Data from compatibility test // Data from compatibility test
var pkComp babyjub.PublicKeyComp var pkComp babyjub.PublicKeyComp
pkCompB, err := pkCompB, err := hex.DecodeString("0dd02deb2c81068e7a0f7e327df80b4ab79ee1f41a7def613e73a20c32eece5a")
hex.DecodeString("0dd02deb2c81068e7a0f7e327df80b4ab79ee1f41a7def613e73a20c32eece5a")
require.NoError(t, err) require.NoError(t, err)
pkCompL := SwapEndianness(pkCompB) pkCompL := SwapEndianness(pkCompB)
err = pkComp.UnmarshalText([]byte(hex.EncodeToString(pkCompL))) err = pkComp.UnmarshalText([]byte(hex.EncodeToString(pkCompL)))
@@ -223,8 +220,7 @@ func TestL1TxByteParsersCompatibility(t *testing.T) {
encodedData, err := l1Tx.BytesUser() encodedData, err := l1Tx.BytesUser()
require.NoError(t, err) require.NoError(t, err)
expected := "85dab5b9e2e361d0c208d77be90efcc0439b0a530dd02deb2c81068e7a0f7e327df80b4ab79e" + expected := "85dab5b9e2e361d0c208d77be90efcc0439b0a530dd02deb2c81068e7a0f7e327df80b4ab79ee1f41a7def613e73a20c32eece5a000001c638db52540be400459682f0000020039c0000053cb88d"
"e1f41a7def613e73a20c32eece5a000001c638db52540be400459682f0000020039c0000053cb88d"
assert.Equal(t, expected, hex.EncodeToString(encodedData)) assert.Equal(t, expected, hex.EncodeToString(encodedData))
} }
@@ -232,8 +228,7 @@ func TestL1CoordinatorTxByteParsers(t *testing.T) {
hermezAddress := ethCommon.HexToAddress("0xD6C850aeBFDC46D7F4c207e445cC0d6B0919BDBe") hermezAddress := ethCommon.HexToAddress("0xD6C850aeBFDC46D7F4c207e445cC0d6B0919BDBe")
chainID := big.NewInt(1337) chainID := big.NewInt(1337)
privateKey, err := privateKey, err := crypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
crypto.HexToECDSA("fad9c8855b740a0b7ed4c221dbad0f33a83a49cad6b3fe8d5817ac83d38b6a19")
require.NoError(t, err) require.NoError(t, err)
publicKey := privateKey.Public() publicKey := privateKey.Public()
@@ -305,8 +300,7 @@ func TestL1CoordinatorTxByteParsersCompatibility(t *testing.T) {
signature = append(signature, v[:]...) signature = append(signature, v[:]...)
var pkComp babyjub.PublicKeyComp var pkComp babyjub.PublicKeyComp
pkCompB, err := pkCompB, err := hex.DecodeString("a2c2807ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c")
hex.DecodeString("a2c2807ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c")
require.NoError(t, err) require.NoError(t, err)
pkCompL := SwapEndianness(pkCompB) pkCompL := SwapEndianness(pkCompB)
err = pkComp.UnmarshalText([]byte(hex.EncodeToString(pkCompL))) err = pkComp.UnmarshalText([]byte(hex.EncodeToString(pkCompL)))
@@ -321,9 +315,7 @@ func TestL1CoordinatorTxByteParsersCompatibility(t *testing.T) {
encodeData, err := l1Tx.BytesCoordinatorTx(signature) encodeData, err := l1Tx.BytesCoordinatorTx(signature)
require.NoError(t, err) require.NoError(t, err)
expected, err := utils.HexDecode("1b186d7122ff7f654cfed3156719774898d573900c86599a885a706" + expected, err := utils.HexDecode("1b186d7122ff7f654cfed3156719774898d573900c86599a885a706dbdffe5ea8cda71e5eb097e115405d84d1e7b464009b434b32c014a2df502d1f065ced8bc3ba2c2807ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c000000e7")
"dbdffe5ea8cda71e5eb097e115405d84d1e7b464009b434b32c014a2df502d1f065ced8bc3ba2c28" +
"07ee39c3b3378738cff85a46a9465bb8fcf44ea597c33da9719be7c259c000000e7")
require.NoError(t, err) require.NoError(t, err)
assert.Equal(t, expected, encodeData) assert.Equal(t, expected, encodeData)


@@ -10,7 +10,7 @@ import (
// L2Tx is a struct that represents an already forged L2 tx // L2Tx is a struct that represents an already forged L2 tx
type L2Tx struct { type L2Tx struct {
	// Stored in DB: mandatory fields	// Stored in DB: mandatory fields
TxID TxID `meddler:"id"` TxID TxID `meddler:"id"`
BatchNum BatchNum `meddler:"batch_num"` // batchNum in which this tx was forged. BatchNum BatchNum `meddler:"batch_num"` // batchNum in which this tx was forged.
Position int `meddler:"position"` Position int `meddler:"position"`
@@ -21,10 +21,9 @@ type L2Tx struct {
Amount *big.Int `meddler:"amount,bigint"` Amount *big.Int `meddler:"amount,bigint"`
Fee FeeSelector `meddler:"fee"` Fee FeeSelector `meddler:"fee"`
// Nonce is filled by the TxProcessor // Nonce is filled by the TxProcessor
Nonce Nonce `meddler:"nonce"` Nonce Nonce `meddler:"nonce"`
Type TxType `meddler:"type"` Type TxType `meddler:"type"`
// EthBlockNum in which this L2Tx was added to the queue EthBlockNum int64 `meddler:"eth_block_num"` // EthereumBlockNumber in which this L2Tx was added to the queue
EthBlockNum int64 `meddler:"eth_block_num"`
} }
// NewL2Tx returns the given L2Tx with the TxId & Type parameters calculated // NewL2Tx returns the given L2Tx with the TxId & Type parameters calculated


@@ -19,8 +19,7 @@ func TestNewL2Tx(t *testing.T) {
} }
l2Tx, err := NewL2Tx(l2Tx) l2Tx, err := NewL2Tx(l2Tx)
assert.NoError(t, err) assert.NoError(t, err)
assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e", assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e", l2Tx.TxID.String())
l2Tx.TxID.String())
l2Tx = &L2Tx{ l2Tx = &L2Tx{
FromIdx: 87654, FromIdx: 87654,
@@ -31,8 +30,7 @@ func TestNewL2Tx(t *testing.T) {
} }
l2Tx, err = NewL2Tx(l2Tx) l2Tx, err = NewL2Tx(l2Tx)
assert.NoError(t, err) assert.NoError(t, err)
-	assert.Equal(t, "0x029e7499a830f8f5eb17c07da48cf91415710f1bcbe0169d363ff91e81faf92fc2",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x029e7499a830f8f5eb17c07da48cf91415710f1bcbe0169d363ff91e81faf92fc2", l2Tx.TxID.String())
 	l2Tx = &L2Tx{
 		FromIdx: 87654,
@@ -44,8 +42,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x0255c70ed20e1b8935232e1b9c5884dbcc88a6e1a3454d24f2d77252eb2bb0b64e",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x0255c70ed20e1b8935232e1b9c5884dbcc88a6e1a3454d24f2d77252eb2bb0b64e", l2Tx.TxID.String())
 	l2Tx = &L2Tx{
 		FromIdx: 87654,
@@ -57,8 +54,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x0206b372f967061d1148bbcff679de38120e075141a80a07326d0f514c2efc6ca9",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x0206b372f967061d1148bbcff679de38120e075141a80a07326d0f514c2efc6ca9", l2Tx.TxID.String())
 	l2Tx = &L2Tx{
 		FromIdx: 1,
@@ -70,8 +66,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x0236f7ea5bccf78ba60baf56c058d235a844f9b09259fd0efa4f5f72a7d4a26618",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x0236f7ea5bccf78ba60baf56c058d235a844f9b09259fd0efa4f5f72a7d4a26618", l2Tx.TxID.String())
 	l2Tx = &L2Tx{
 		FromIdx: 999,
@@ -83,8 +78,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x02ac122f5b709ce190129fecbbe35bfd30c70e6433dbd85a8eb743d110906a1dc1",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x02ac122f5b709ce190129fecbbe35bfd30c70e6433dbd85a8eb743d110906a1dc1", l2Tx.TxID.String())
 	l2Tx = &L2Tx{
 		FromIdx: 4444,
@@ -96,8 +90,7 @@ func TestNewL2Tx(t *testing.T) {
 	}
 	l2Tx, err = NewL2Tx(l2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x02c674951a81881b7bc50db3b9e5efd97ac88550c7426ac548720e5057cfba515a",
-		l2Tx.TxID.String())
+	assert.Equal(t, "0x02c674951a81881b7bc50db3b9e5efd97ac88550c7426ac548720e5057cfba515a", l2Tx.TxID.String())
 }

 func TestL2TxByteParsers(t *testing.T) {

View File

@@ -16,8 +16,7 @@ import (
 // EmptyBJJComp contains the 32 byte array of a empty BabyJubJub PublicKey
 // Compressed. It is a valid point in the BabyJubJub curve, so does not give
 // errors when being decompressed.
-var EmptyBJJComp = babyjub.PublicKeyComp([32]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-	0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0})
+var EmptyBJJComp = babyjub.PublicKeyComp([32]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0})

 // PoolL2Tx is a struct that represents a L2Tx sent by an account to the
 // coordinator that is waiting to be forged
@@ -101,8 +100,6 @@ func (tx *PoolL2Tx) SetType() error {
 			tx.Type = TxTypeTransferToBJJ
 		} else if tx.ToEthAddr != FFAddr && tx.ToEthAddr != EmptyAddr {
 			tx.Type = TxTypeTransferToEthAddr
-		} else {
-			return tracerr.Wrap(errors.New("malformed transaction"))
 		}
 	} else {
 		return tracerr.Wrap(errors.New("malformed transaction"))
@@ -306,8 +303,10 @@ func (tx *PoolL2Tx) HashToSign(chainID uint16) (*big.Int, error) {
 		return nil, tracerr.Wrap(err)
 	}
 	copy(e1B[0:5], amountFloat40Bytes)
-	copy(e1B[5:25], tx.ToEthAddr[:])
+	toEthAddr := EthAddrToBigInt(tx.ToEthAddr)
+	copy(e1B[5:25], toEthAddr.Bytes())
 	e1 := new(big.Int).SetBytes(e1B[:])
 	rqToEthAddr := EthAddrToBigInt(tx.RqToEthAddr)
 	_, toBJJY := babyjub.UnpackSignY(tx.ToBJJ)
@@ -319,8 +318,7 @@ func (tx *PoolL2Tx) HashToSign(chainID uint16) (*big.Int, error) {
 	_, rqToBJJY := babyjub.UnpackSignY(tx.RqToBJJ)
-	return poseidon.Hash([]*big.Int{toCompressedData, e1, toBJJY, rqTxCompressedDataV2,
-		rqToEthAddr, rqToBJJY})
+	return poseidon.Hash([]*big.Int{toCompressedData, e1, toBJJY, rqTxCompressedDataV2, rqToEthAddr, rqToBJJY})
 }

 // VerifySignature returns true if the signature verification is correct for the given PublicKeyComp

View File

@@ -21,20 +21,17 @@ func TestNewPoolL2Tx(t *testing.T) {
 	}
 	poolL2Tx, err := NewPoolL2Tx(poolL2Tx)
 	assert.NoError(t, err)
-	assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e",
-		poolL2Tx.TxID.String())
+	assert.Equal(t, "0x022669acda59b827d20ef5354a3eebd1dffb3972b0a6bf89d18bfd2efa0ab9f41e", poolL2Tx.TxID.String())
 }

 func TestTxCompressedDataAndTxCompressedDataV2JSVectors(t *testing.T) {
 	// test vectors values generated from javascript implementation
 	var skPositive babyjub.PrivateKey // 'Positive' refers to the sign
-	_, err := hex.Decode(skPositive[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
+	_, err := hex.Decode(skPositive[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
 	assert.NoError(t, err)
 	var skNegative babyjub.PrivateKey // 'Negative' refers to the sign
-	_, err = hex.Decode(skNegative[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090002"))
+	_, err = hex.Decode(skNegative[:], []byte("0001020304050607080900010203040506070809000102030405060708090002"))
 	assert.NoError(t, err)
 	amount, ok := new(big.Int).SetString("343597383670000000000000000000000000000000", 10)
@@ -126,8 +123,7 @@ func TestTxCompressedDataAndTxCompressedDataV2JSVectors(t *testing.T) {

 func TestRqTxCompressedDataV2(t *testing.T) {
 	var sk babyjub.PrivateKey
-	_, err := hex.Decode(sk[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
+	_, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
 	assert.NoError(t, err)
 	tx := PoolL2Tx{
 		RqFromIdx: 7,
@@ -146,8 +142,7 @@ func TestRqTxCompressedDataV2(t *testing.T) {
 	expected, ok := new(big.Int).SetString(expectedStr, 10)
 	assert.True(t, ok)
 	assert.Equal(t, expected.Bytes(), txCompressedData.Bytes())
-	assert.Equal(t, "010c000000000b0000000a0000000009000000000008000000000007",
-		hex.EncodeToString(txCompressedData.Bytes()))
+	assert.Equal(t, "010c000000000b0000000a0000000009000000000008000000000007", hex.EncodeToString(txCompressedData.Bytes()))
 }

 func TestHashToSign(t *testing.T) {
@@ -162,15 +157,13 @@ func TestHashToSign(t *testing.T) {
 	}
 	toSign, err := tx.HashToSign(chainID)
 	assert.NoError(t, err)
-	assert.Equal(t, "0b8abaf6b7933464e4450df2514da8b72606c02bf7f89bf6e54816fbda9d9d57",
-		hex.EncodeToString(toSign.Bytes()))
+	assert.Equal(t, "2d49ce1d4136e06f64e3eb1f79a346e6ee3e93ceeac909a57806a8d87005c263", hex.EncodeToString(toSign.Bytes()))
 }

 func TestVerifyTxSignature(t *testing.T) {
 	chainID := uint16(0)
 	var sk babyjub.PrivateKey
-	_, err := hex.Decode(sk[:],
-		[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
+	_, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
 	assert.NoError(t, err)
 	tx := PoolL2Tx{
 		FromIdx: 2,
@@ -184,49 +177,18 @@ func TestVerifyTxSignature(t *testing.T) {
 	}
 	toSign, err := tx.HashToSign(chainID)
 	assert.NoError(t, err)
-	assert.Equal(t,
-		"3144939470626721092564692894890580265754250231349521601298746071096761507003",
-		toSign.String())
+	assert.Equal(t, "1571327027383224465388301747239444557034990637650927918405777653988509342917", toSign.String())
 	sig := sk.SignPoseidon(toSign)
 	tx.Signature = sig.Compress()
 	assert.True(t, tx.VerifySignature(chainID, sk.Public().Compress()))
 }

-func TestVerifyTxSignatureEthAddrWith0(t *testing.T) {
-	chainID := uint16(5)
-	var sk babyjub.PrivateKey
-	_, err := hex.Decode(sk[:],
-		[]byte("02f0b4f87065af3797aaaf934e8b5c31563c17f2272fa71bd0146535bfbb4184"))
-	assert.NoError(t, err)
-	tx := PoolL2Tx{
-		FromIdx:   10659,
-		ToIdx:     0,
-		ToEthAddr: ethCommon.HexToAddress("0x0004308BD15Ead4F1173624dC289DBdcC806a309"),
-		Amount:    big.NewInt(5000),
-		TokenID:   0,
-		Nonce:     946,
-		Fee:       231,
-	}
-	toSign, err := tx.HashToSign(chainID)
-	assert.NoError(t, err)
-	sig := sk.SignPoseidon(toSign)
-	assert.Equal(t,
-		"f208b8298d5f37148ac3c0c03703272ea47b9f836851bcf8dd5f7e4e3b336ca1d2f6e92ad85dc25f174daf7a0abfd5f71dead3f059b783f4c4b2f56a18a47000",
-		sig.Compress().String(),
-	)
-	tx.Signature = sig.Compress()
-	assert.True(t, tx.VerifySignature(chainID, sk.Public().Compress()))
-}
-
 func TestDecompressEmptyBJJComp(t *testing.T) {
 	pkComp := EmptyBJJComp
 	pk, err := pkComp.Decompress()
 	require.NoError(t, err)
-	assert.Equal(t,
-		"2957874849018779266517920829765869116077630550401372566248359756137677864698",
-		pk.X.String())
+	assert.Equal(t, "2957874849018779266517920829765869116077630550401372566248359756137677864698", pk.X.String())
 	assert.Equal(t, "0", pk.Y.String())
 }

View File

@@ -15,9 +15,8 @@ const tokenIDBytesLen = 4
 // Token is a struct that represents an Ethereum token that is supported in Hermez network
 type Token struct {
 	TokenID     TokenID           `json:"id" meddler:"token_id"`
-	// EthBlockNum indicates the Ethereum block number in which this token was registered
-	EthBlockNum int64             `json:"ethereumBlockNum" meddler:"eth_block_num"`
+	EthBlockNum int64             `json:"ethereumBlockNum" meddler:"eth_block_num"` // Ethereum block number in which this token was registered
 	EthAddr     ethCommon.Address `json:"ethereumAddress" meddler:"eth_addr"`
 	Name        string            `json:"name" meddler:"name"`
 	Symbol      string            `json:"symbol" meddler:"symbol"`
@@ -49,8 +48,7 @@ func (t TokenID) BigInt() *big.Int {
 // TokenIDFromBytes returns TokenID from a byte array
 func TokenIDFromBytes(b []byte) (TokenID, error) {
 	if len(b) != tokenIDBytesLen {
-		return 0, tracerr.Wrap(fmt.Errorf("can not parse TokenID, bytes len %d, expected 4",
-			len(b)))
+		return 0, tracerr.Wrap(fmt.Errorf("can not parse TokenID, bytes len %d, expected 4", len(b)))
 	}
 	tid := binary.BigEndian.Uint32(b[:4])
 	return TokenID(tid), nil

View File

@@ -15,12 +15,12 @@ import (
 )

 const (
-	// TxIDPrefixL1UserTx is the prefix that determines that the TxID is for
-	// a L1UserTx
+	// TXIDPrefixL1UserTx is the prefix that determines that the TxID is
+	// for a L1UserTx
 	//nolinter:gomnd
 	TxIDPrefixL1UserTx = byte(0)
-	// TxIDPrefixL1CoordTx is the prefix that determines that the TxID is
+	// TXIDPrefixL1CoordTx is the prefix that determines that the TxID is
 	// for a L1CoordinatorTx
 	//nolinter:gomnd
 	TxIDPrefixL1CoordTx = byte(1)
@@ -51,8 +51,7 @@ func (txid *TxID) Scan(src interface{}) error {
 		return tracerr.Wrap(fmt.Errorf("can't scan %T into TxID", src))
 	}
 	if len(srcB) != TxIDLen {
-		return tracerr.Wrap(fmt.Errorf("can't scan []byte of len %d into TxID, need %d",
-			len(srcB), TxIDLen))
+		return tracerr.Wrap(fmt.Errorf("can't scan []byte of len %d into TxID, need %d", len(srcB), TxIDLen))
 	}
 	copy(txid[:], srcB)
 	return nil
@@ -88,7 +87,7 @@ func (txid TxID) MarshalText() ([]byte, error) {
 	return []byte(txid.String()), nil
 }

-// UnmarshalText unmarshalls a TxID
+// UnmarshalText unmarshals a TxID
 func (txid *TxID) UnmarshalText(data []byte) error {
 	idStr := string(data)
 	id, err := NewTxIDFromString(idStr)
@@ -103,15 +102,13 @@ func (txid *TxID) UnmarshalText(data []byte) error {
 type TxType string

 const (
-	// TxTypeExit represents L2->L1 token transfer. A leaf for this account appears in the exit
-	// tree of the block
+	// TxTypeExit represents L2->L1 token transfer. A leaf for this account appears in the exit tree of the block
 	TxTypeExit TxType = "Exit"
 	// TxTypeTransfer represents L2->L2 token transfer
 	TxTypeTransfer TxType = "Transfer"
 	// TxTypeDeposit represents L1->L2 transfer
 	TxTypeDeposit TxType = "Deposit"
-	// TxTypeCreateAccountDeposit represents creation of a new leaf in the state tree
-	// (newAcconut) + L1->L2 transfer
+	// TxTypeCreateAccountDeposit represents creation of a new leaf in the state tree (newAcconut) + L1->L2 transfer
 	TxTypeCreateAccountDeposit TxType = "CreateAccountDeposit"
 	// TxTypeCreateAccountDepositTransfer represents L1->L2 transfer + L2->L2 transfer
 	TxTypeCreateAccountDepositTransfer TxType = "CreateAccountDepositTransfer"
@@ -127,31 +124,24 @@ const (
 	TxTypeTransferToBJJ TxType = "TransferToBJJ"
 )

-// Tx is a struct used by the TxSelector & BatchBuilder as a generic type generated from L1Tx &
-// PoolL2Tx
+// Tx is a struct used by the TxSelector & BatchBuilder as a generic type generated from L1Tx & PoolL2Tx
 type Tx struct {
 	// Generic
 	IsL1        bool      `meddler:"is_l1"`
 	TxID        TxID      `meddler:"id"`
 	Type        TxType    `meddler:"type"`
 	Position    int       `meddler:"position"`
 	FromIdx     Idx       `meddler:"from_idx"`
 	ToIdx       Idx       `meddler:"to_idx"`
 	Amount      *big.Int  `meddler:"amount,bigint"`
 	AmountFloat float64   `meddler:"amount_f"`
 	TokenID     TokenID   `meddler:"token_id"`
 	USD         *float64  `meddler:"amount_usd"`
-	// BatchNum in which this tx was forged. If the tx is L2, this must be != 0
-	BatchNum *BatchNum `meddler:"batch_num"`
-	// Ethereum Block Number in which this L1Tx was added to the queue
-	EthBlockNum int64 `meddler:"eth_block_num"`
+	BatchNum    *BatchNum `meddler:"batch_num"`     // batchNum in which this tx was forged. If the tx is L2, this must be != 0
+	EthBlockNum int64     `meddler:"eth_block_num"` // Ethereum Block Number in which this L1Tx was added to the queue
 	// L1
-	// ToForgeL1TxsNum in which the tx was forged / will be forged
-	ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"`
-	// UserOrigin is set to true if the tx was originated by a user, false if it was aoriginated
-	// by a coordinator. Note that this differ from the spec for implementation simplification
-	// purpposes
-	UserOrigin *bool `meddler:"user_origin"`
+	ToForgeL1TxsNum *int64 `meddler:"to_forge_l1_txs_num"` // toForgeL1TxsNum in which the tx was forged / will be forged
+	UserOrigin      *bool  `meddler:"user_origin"`         // true if the tx was originated by a user, false if it was aoriginated by a coordinator. Note that this differ from the spec for implementation simplification purpposes
 	FromEthAddr   ethCommon.Address     `meddler:"from_eth_addr"`
 	FromBJJ       babyjub.PublicKeyComp `meddler:"from_bjj"`
 	DepositAmount *big.Int              `meddler:"deposit_amount,bigintnull"`

View File

@@ -21,10 +21,8 @@ func TestSignatureConstant(t *testing.T) {
 func TestTxIDScannerValue(t *testing.T) {
 	txid0 := &TxID{}
 	txid1 := &TxID{}
-	txid0B := [TxIDLen]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2,
-		3, 4, 5, 6, 7, 8, 9, 0, 1, 2}
-	txid1B := [TxIDLen]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
-		0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
+	txid0B := [TxIDLen]byte{0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2}
+	txid1B := [TxIDLen]byte{0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0}
 	copy(txid0[:], txid0B[:])
 	copy(txid1[:], txid1B[:])

View File

@@ -21,23 +21,16 @@ func TestBJJFromStringWithChecksum(t *testing.T) {
 	assert.NoError(t, err)
 	// expected values computed with js implementation
-	assert.Equal(t,
-		"2492816973395423007340226948038371729989170225696553239457870892535792679622",
-		pk.X.String())
-	assert.Equal(t,
-		"15238403086306505038849621710779816852318505119327426213168494964113886299863",
-		pk.Y.String())
+	assert.Equal(t, "2492816973395423007340226948038371729989170225696553239457870892535792679622", pk.X.String())
+	assert.Equal(t, "15238403086306505038849621710779816852318505119327426213168494964113886299863", pk.Y.String())
 }

 func TestRmEndingZeroes(t *testing.T) {
-	s0, err :=
-		merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000000")
+	s0, err := merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000000")
 	require.NoError(t, err)
-	s1, err :=
-		merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000001")
+	s1, err := merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000001")
 	require.NoError(t, err)
-	s2, err :=
-		merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000002")
+	s2, err := merkletree.NewHashFromHex("0x0000000000000000000000000000000000000000000000000000000000000002")
 	require.NoError(t, err)
 	// expect cropped last zeroes

View File

@@ -1,4 +1,4 @@
-// Package common zk.go contains all the common data structures used at the
+// Package common contains all the common data structures used at the
 // hermez-node, zk.go contains the zkSnark inputs used to generate the proof
 package common
@@ -67,7 +67,7 @@ type ZKInputs struct {
 	// accumulate fees
 	// FeePlanTokens contains all the tokenIDs for which the fees are being
-	// accumulated and those fees accumulated will be paid to the FeeIdxs
+	// accumulated and those fees accoumulated will be paid to the FeeIdxs
 	// array. The order of FeeIdxs & FeePlanTokens & State3 must match.
 	// Coordinator fees are processed correlated such as:
 	// [FeePlanTokens[i], FeeIdxs[i]]
@@ -130,8 +130,8 @@ type ZKInputs struct {
 	RqOffset []*big.Int `json:"rqOffset"` // uint8 (max 3 bits), len: [maxTx]
 	// transaction L2 request data
-	// RqTxCompressedDataV2 big.Int (max 251 bits), len: [maxTx]
-	RqTxCompressedDataV2 []*big.Int `json:"rqTxCompressedDataV2"`
+	// RqTxCompressedDataV2
+	RqTxCompressedDataV2 []*big.Int `json:"rqTxCompressedDataV2"` // big.Int (max 251 bits), len: [maxTx]
 	// RqToEthAddr
 	RqToEthAddr []*big.Int `json:"rqToEthAddr"` // ethCommon.Address, len: [maxTx]
 	// RqToBJJAy
@@ -301,8 +301,7 @@ func (z ZKInputs) MarshalJSON() ([]byte, error) {
 }

 // NewZKInputs returns a pointer to an initialized struct of ZKInputs
-func NewZKInputs(chainID uint16, maxTx, maxL1Tx, maxFeeIdxs, nLevels uint32,
-	currentNumBatch *big.Int) *ZKInputs {
+func NewZKInputs(chainID uint16, maxTx, maxL1Tx, maxFeeIdxs, nLevels uint32, currentNumBatch *big.Int) *ZKInputs {
 	zki := &ZKInputs{}
 	zki.Metadata.MaxFeeIdxs = maxFeeIdxs
 	zki.Metadata.MaxLevels = uint32(48) //nolint:gomnd
@@ -481,7 +480,7 @@ func (z ZKInputs) ToHashGlobalData() ([]byte, error) {
 	b = append(b, newExitRoot...)

 	// [MAX_L1_TX * (2 * MAX_NLEVELS + 528) bits] L1TxsData
-	l1TxDataLen := (2*z.Metadata.MaxLevels + 528) //nolint:gomnd
+	l1TxDataLen := (2*z.Metadata.MaxLevels + 528)
 	l1TxsDataLen := (z.Metadata.MaxL1Tx * l1TxDataLen)
 	l1TxsData := make([]byte, l1TxsDataLen/8) //nolint:gomnd
 	for i := 0; i < len(z.Metadata.L1TxsData); i++ {
@@ -507,14 +506,11 @@ func (z ZKInputs) ToHashGlobalData() ([]byte, error) {
 		l2TxsData = append(l2TxsData, z.Metadata.L2TxsData[i]...)
 	}
 	if len(l2TxsData) > int(expectedL2TxsDataLen) {
-		return nil, tracerr.Wrap(fmt.Errorf("len(l2TxsData): %d, expected: %d",
-			len(l2TxsData), expectedL2TxsDataLen))
+		return nil, tracerr.Wrap(fmt.Errorf("len(l2TxsData): %d, expected: %d", len(l2TxsData), expectedL2TxsDataLen))
 	}
 	b = append(b, l2TxsData...)
-	l2TxsPadding := make([]byte,
-		(int(z.Metadata.MaxTx)-len(z.Metadata.L1TxsDataAvailability)-
-			len(z.Metadata.L2TxsData))*int(l2TxDataLen)/8) //nolint:gomnd
+	l2TxsPadding := make([]byte, (int(z.Metadata.MaxTx)-len(z.Metadata.L1TxsDataAvailability)-len(z.Metadata.L2TxsData))*int(l2TxDataLen)/8) //nolint:gomnd
 	b = append(b, l2TxsPadding...)

 	// [NLevels * MAX_TOKENS_FEE bits] feeTxsData

View File

@@ -8,9 +8,7 @@ import (
 	"github.com/BurntSushi/toml"
 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/stateapiupdater"
 	"github.com/hermeznetwork/hermez-node/common"
-	"github.com/hermeznetwork/hermez-node/priceupdater"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 	"gopkg.in/go-playground/validator.v9"
@@ -82,7 +80,7 @@ type Coordinator struct {
 	// checking the next block), used to decide when to stop scheduling new
 	// batches (by stopping the pipeline).
 	// For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck
-	// is 5, even though at block 11 we canForge, the pipeline will be
+	// is 5, eventhough at block 11 we canForge, the pipeline will be
 	// stopped if we can't forge at block 15.
 	// This value should be the expected number of blocks it takes between
 	// scheduling a batch and having it mined.
@@ -92,7 +90,7 @@ type Coordinator struct {
 	// from the next block; used to decide when to stop sending batches to
 	// the smart contract.
 	// For example, if we are at block 10 and SendBatchBlocksMarginCheck is
-	// 5, even though at block 11 we canForge, the batch will be discarded
+	// 5, eventhough at block 11 we canForge, the batch will be discarded
 	// if we can't forge at block 15.
 	SendBatchBlocksMarginCheck int64
 	// ProofServerPollInterval is the waiting interval between polling the
@@ -110,20 +108,6 @@ type Coordinator struct {
 	// to 0s, the coordinator will continuously forge even if the batches
 	// are empty.
 	ForgeNoTxsDelay Duration `validate:"-"`
-	// MustForgeAtSlotDeadline enables the coordinator to forge slots if
-	// the empty slots reach the slot deadline.
-	MustForgeAtSlotDeadline bool
-	// IgnoreSlotCommitment disables forcing the coordinator to forge a
-	// slot immediately when the slot is not committed. If set to false,
-	// the coordinator will immediately forge a batch at the beginning of a
-	// slot if it's the slot winner.
-	IgnoreSlotCommitment bool
-	// ForgeOncePerSlotIfTxs will make the coordinator forge at most one
-	// batch per slot, only if there are included txs in that batch, or
-	// pending l1UserTxs in the smart contract. Setting this parameter
-	// overrides `ForgeDelay`, `ForgeNoTxsDelay`, `MustForgeAtSlotDeadline`
-	// and `IgnoreSlotCommitment`.
-	ForgeOncePerSlotIfTxs bool
 	// SyncRetryInterval is the waiting interval between calls to the main
 	// handler of a synced block after an error
 	SyncRetryInterval Duration `validate:"required"`
@@ -145,15 +129,11 @@ type Coordinator struct {
 	// order to be accepted into the pool. Txs with lower than
 	// minimum fee will be rejected at the API level.
 	MinFeeUSD float64
-	// MaxFeeUSD is the maximum fee in USD that a tx must pay in
-	// order to be accepted into the pool. Txs with greater than
-	// maximum fee will be rejected at the API level.
-	MaxFeeUSD float64 `validate:"required"`
 	// TTL is the Time To Live for L2Txs in the pool. Once MaxTxs
 	// L2Txs is reached, L2Txs older than TTL will be deleted.
 	TTL Duration `validate:"required"`
 	// PurgeBatchDelay is the delay between batches to purge
-	// outdated transactions. Outdated L2Txs are those that have
+	// outdated transactions. Oudated L2Txs are those that have
 	// been forged or marked as invalid for longer than the
 	// SafetyPeriod and pending L2Txs that have been in the pool
 	// for longer than TTL once there are MaxTxs.
@@ -163,7 +143,7 @@ type Coordinator struct {
 	// nonce.
 	InvalidateBatchDelay int64 `validate:"required"`
 	// PurgeBlockDelay is the delay between blocks to purge
-	// outdated transactions. Outdated L2Txs are those that have
+	// outdated transactions. Oudated L2Txs are those that have
 	// been forged or marked as invalid for longer than the
 	// SafetyPeriod and pending L2Txs that have been in the pool
 	// for longer than TTL once there are MaxTxs.
@@ -195,7 +175,7 @@ type Coordinator struct {
 	MaxGasPrice *big.Int `validate:"required"`
 	// GasPriceIncPerc is the percentage increase of gas price set
 	// in an ethereum transaction from the suggested gas price by
-	// the ethereum node
+	// the ehtereum node
 	GasPriceIncPerc int64
 	// CheckLoopInterval is the waiting interval between receipt
 	// checks of ethereum transactions in the TxManager
@@ -239,9 +219,28 @@ type Coordinator struct {
 	}
 }

-// PostgreSQL is the postgreSQL configuration parameters. It's possible to use
-// diferentiated SQL connections for read/write. If the read configuration is
-// not provided, the write one it's going to be used for both reads and writes
+// NodeAPI specifies the configuration parameters of the API
+type NodeAPI struct {
+	// Address where the API will listen if set
+	Address string
+	// Explorer enables the Explorer API endpoints
+	Explorer bool
+	// UpdateMetricsInterval is the interval between updates of the
+	// API metrics
+	UpdateMetricsInterval Duration
+	// UpdateRecommendedFeeInterval is the interval between updates of the
+	// recommended fees
+	UpdateRecommendedFeeInterval Duration
+	// Maximum concurrent connections allowed between API and SQL
+	MaxSQLConnections int `validate:"required"`
+	// SQLConnectionTimeout is the maximum amount of time that an API request
+	// can wait to stablish a SQL connection
+	SQLConnectionTimeout Duration
+}
+
+// It's possible to use diferentiated SQL connections for read/write.
+// If the read configuration is not provided, the write one it's going to be used
+// for both reads and writes
 type PostgreSQL struct {
 	// Port of the PostgreSQL write server
 	PortWrite int `validate:"required"`
@@ -282,15 +281,11 @@ type NodeDebug struct {
 type Node struct {
 	PriceUpdater struct {
 		// Interval between price updater calls
-		Interval Duration `validate:"required"`
-		// URLBitfinexV2 is the URL of bitfinex V2 API
-		URLBitfinexV2 string `validate:"required"`
-		// URLCoinGeckoV3 is the URL of coingecko V3 API
-		URLCoinGeckoV3 string `validate:"required"`
-		// DefaultUpdateMethod to get token prices
-		DefaultUpdateMethod priceupdater.UpdateMethodType `validate:"required"`
-		// TokensConfig to specify how each token get it's price updated
-		TokensConfig []priceupdater.TokenConfig
+		Interval Duration `valudate:"required"`
+		// URL of the token prices provider
+		URL string `valudate:"required"`
+		// Type of the API of the token prices provider
+		Type string `valudate:"required"`
 	} `validate:"required"`
 	StateDB struct {
 		// Path where the synchronizer StateDB is stored
@@ -300,8 +295,7 @@ type Node struct {
 	} `validate:"required"`
 	PostgreSQL PostgreSQL `validate:"required"`
 	Web3 struct {
-		// URL is the URL of the web3 ethereum-node RPC server. Only
-		// geth is officially supported.
+		// URL is the URL of the web3 ethereum-node RPC server
 		URL string `validate:"required"`
 	} `validate:"required"`
 	Synchronizer struct {
@@ -330,65 +324,31 @@ type Node struct {
// TokenHEZ address // TokenHEZ address
TokenHEZName string `validate:"required"` TokenHEZName string `validate:"required"`
} `validate:"required"` } `validate:"required"`
// API specifies the configuration parameters of the API API NodeAPI `validate:"required"`
API struct { Debug NodeDebug `validate:"required"`
// Address where the API will listen if set Coordinator Coordinator `validate:"-"`
Address string
// Explorer enables the Explorer API endpoints
Explorer bool
// UpdateMetricsInterval is the interval between updates of the
// API metrics
UpdateMetricsInterval Duration
// UpdateRecommendedFeeInterval is the interval between updates of the
// recommended fees
UpdateRecommendedFeeInterval Duration
// Maximum concurrent connections allowed between API and SQL
MaxSQLConnections int `validate:"required"`
// SQLConnectionTimeout is the maximum amount of time that an API request
// can wait to establish a SQL connection
SQLConnectionTimeout Duration
} `validate:"required"`
RecommendedFeePolicy stateapiupdater.RecommendedFeePolicy `validate:"required"`
Debug NodeDebug `validate:"required"`
Coordinator Coordinator `validate:"-"`
} }
// APIServer is the api server configuration parameters
type APIServer struct { type APIServer struct {
// NodeAPI specifies the configuration parameters of the API API NodeAPI `validate:"required"`
API struct {
// Address where the API will listen if set
Address string `validate:"required"`
// Explorer enables the Explorer API endpoints
Explorer bool
// Maximum concurrent connections allowed between API and SQL
MaxSQLConnections int `validate:"required"`
// SQLConnectionTimeout is the maximum amount of time that an API request
// can wait to establish a SQL connection
SQLConnectionTimeout Duration
} `validate:"required"`
PostgreSQL PostgreSQL `validate:"required"` PostgreSQL PostgreSQL `validate:"required"`
Coordinator struct { Coordinator struct {
API struct { API struct {
// Coordinator enables the coordinator API endpoints // Coordinator enables the coordinator API endpoints
Coordinator bool Coordinator bool
} `validate:"required"` } `validate:"required"`
L2DB struct { } `validate:"required"`
// MaxTxs is the maximum number of pending L2Txs that can be L2DB struct {
// stored in the pool. Once this number of pending L2Txs is // MaxTxs is the maximum number of pending L2Txs that can be
// reached, inserts to the pool will be denied until some of // stored in the pool. Once this number of pending L2Txs is
// the pending txs are forged. // reached, inserts to the pool will be denied until some of
MaxTxs uint32 `validate:"required"` // the pending txs are forged.
// MinFeeUSD is the minimum fee in USD that a tx must pay in MaxTxs uint32 `validate:"required"`
// order to be accepted into the pool. Txs with lower than // MinFeeUSD is the minimum fee in USD that a tx must pay in
// minimum fee will be rejected at the API level. // order to be accepted into the pool. Txs with lower than
MinFeeUSD float64 // minimum fee will be rejected at the API level.
// MaxFeeUSD is the maximum fee in USD that a tx must pay in MinFeeUSD float64
// order to be accepted into the pool. Txs with greater than } `validate:"required"`
// maximum fee will be rejected at the API level.
MaxFeeUSD float64 `validate:"required"`
} `validate:"required"`
}
Debug NodeDebug `validate:"required"` Debug NodeDebug `validate:"required"`
} }
@@ -405,8 +365,8 @@ func Load(path string, cfg interface{}) error {
return nil return nil
} }
// LoadNode loads the Node configuration from path. // LoadCoordinator loads the Coordinator configuration from path.
func LoadNode(path string, coordinator bool) (*Node, error) { func LoadCoordinator(path string) (*Node, error) {
var cfg Node var cfg Node
if err := Load(path, &cfg); err != nil { if err := Load(path, &cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
@@ -415,16 +375,27 @@ func LoadNode(path string, coordinator bool) (*Node, error) {
if err := validate.Struct(cfg); err != nil { if err := validate.Struct(cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
} }
if coordinator { if err := validate.Struct(cfg.Coordinator); err != nil {
if err := validate.Struct(cfg.Coordinator); err != nil { return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err)) }
} return &cfg, nil
}
// LoadNode loads the Node configuration from path.
func LoadNode(path string) (*Node, error) {
var cfg Node
if err := Load(path, &cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error loading node configuration file: %w", err))
}
validate := validator.New()
if err := validate.Struct(cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
} }
return &cfg, nil return &cfg, nil
} }
// LoadAPIServer loads the APIServer configuration from path. // LoadAPIServer loads the APIServer configuration from path.
func LoadAPIServer(path string, coordinator bool) (*APIServer, error) { func LoadAPIServer(path string) (*APIServer, error) {
var cfg APIServer var cfg APIServer
if err := Load(path, &cfg); err != nil { if err := Load(path, &cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error loading apiServer configuration file: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("error loading apiServer configuration file: %w", err))
@@ -433,10 +404,5 @@ func LoadAPIServer(path string, coordinator bool) (*APIServer, error) {
if err := validate.Struct(cfg); err != nil { if err := validate.Struct(cfg); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
} }
if coordinator {
if err := validate.Struct(cfg.Coordinator); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("error validating configuration file: %w", err))
}
}
return &cfg, nil return &cfg, nil
} }


@@ -8,7 +8,6 @@ import (
"path" "path"
"time" "time"
"github.com/ethereum/go-ethereum/accounts/abi/bind"
"github.com/ethereum/go-ethereum/core/types" "github.com/ethereum/go-ethereum/core/types"
"github.com/hermeznetwork/hermez-node/common" "github.com/hermeznetwork/hermez-node/common"
"github.com/hermeznetwork/hermez-node/eth" "github.com/hermeznetwork/hermez-node/eth"
@@ -85,15 +84,15 @@ type BatchInfo struct {
PublicInputs []*big.Int PublicInputs []*big.Int
L1Batch bool L1Batch bool
VerifierIdx uint8 VerifierIdx uint8
L1UserTxs []common.L1Tx L1UserTxsExtra []common.L1Tx
L1CoordTxs []common.L1Tx L1CoordTxs []common.L1Tx
L1CoordinatorTxsAuths [][]byte L1CoordinatorTxsAuths [][]byte
L2Txs []common.L2Tx L2Txs []common.L2Tx
CoordIdxs []common.Idx CoordIdxs []common.Idx
ForgeBatchArgs *eth.RollupForgeBatchArgs ForgeBatchArgs *eth.RollupForgeBatchArgs
Auth *bind.TransactOpts `json:"-"` // FeesInfo
EthTx *types.Transaction EthTx *types.Transaction
EthTxErr error EthTxErr error
// SendTimestamp the time of batch sent to ethereum // SendTimestamp the time of batch sent to ethereum
SendTimestamp time.Time SendTimestamp time.Time
Receipt *types.Receipt Receipt *types.Receipt


@@ -1,43 +1,3 @@
/*
Package coordinator handles all the logic related to forging batches as a
coordinator in the hermez network.
The forging of batches is done with a pipeline in order to allow multiple
batches to be forged in parallel. The maximum number of batches that can be
forged in parallel is determined by the number of available proof servers.
The Coordinator begins with the pipeline stopped. The main Coordinator
goroutine keeps listening for synchronizer events sent by the node package,
which allow the coordinator to determine if the configured forger address is
allowed to forge at the current block or not. When the forger address becomes
allowed to forge, the pipeline is started, and when it stops being allowed
to forge, the pipeline is stopped.
The Pipeline consists of two goroutines. The first one is in charge of
preparing a batch internally, which involves making a selection of transactions
and calculating the ZKInputs for the batch proof, and sending these ZKInputs to
an idle proof server. This goroutine will keep preparing batches while there
are idle proof servers, if the forging policy determines that a batch should be
forged in the current state. The second goroutine is in charge of waiting for
the proof server to finish computing the proof, retrieving it, preparing the
arguments for the `forgeBatch` Rollup transaction, and sending the result to
the TxManager. All the batch information moves between functions and
goroutines via the BatchInfo struct.
Finally, the TxManager contains a single goroutine that makes forgeBatch
ethereum transactions for the batches sent by the Pipeline, and keeps them in a
list to check them periodically. In the periodic checks, the ethereum
transaction is checked for success, and it's only forgotten after a
number of confirmation blocks have passed after being successfully mined. At
any point if a transaction failure is detected, the TxManager can signal the
Coordinator to reset the Pipeline in order to reforge the failed batches.
The Coordinator goroutine acts as a manager. The synchronizer events (which
notify about new blocks and associated new state) that it receives are
broadcasted to the Pipeline and the TxManager. This allows the Coordinator,
Pipeline and TxManager to have a copy of the current hermez network state
required to perform their duties.
*/
package coordinator package coordinator
import ( import (
@@ -64,8 +24,9 @@ import (
) )
var ( var (
errLastL1BatchNotSynced = fmt.Errorf("last L1Batch not synced yet") errLastL1BatchNotSynced = fmt.Errorf("last L1Batch not synced yet")
errSkipBatchByPolicy = fmt.Errorf("skip batch by policy") errForgeNoTxsBeforeDelay = fmt.Errorf("no txs to forge and we haven't reached the forge no txs delay")
errForgeBeforeDelay = fmt.Errorf("we haven't reached the forge delay")
) )
const ( const (
@@ -92,7 +53,7 @@ type Config struct {
// checking the next block), used to decide when to stop scheduling new // checking the next block), used to decide when to stop scheduling new
// batches (by stopping the pipeline). // batches (by stopping the pipeline).
// For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck // For example, if we are at block 10 and ScheduleBatchBlocksAheadCheck
// is 5, even though at block 11 we canForge, the pipeline will be // is 5, eventhough at block 11 we canForge, the pipeline will be
// stopped if we can't forge at block 15. // stopped if we can't forge at block 15.
// This value should be the expected number of blocks it takes between // This value should be the expected number of blocks it takes between
// scheduling a batch and having it mined. // scheduling a batch and having it mined.
@@ -102,7 +63,7 @@ type Config struct {
// from the next block; used to decide when to stop sending batches to // from the next block; used to decide when to stop sending batches to
// the smart contract. // the smart contract.
// For example, if we are at block 10 and SendBatchBlocksMarginCheck is // For example, if we are at block 10 and SendBatchBlocksMarginCheck is
// 5, even though at block 11 we canForge, the batch will be discarded // 5, eventhough at block 11 we canForge, the batch will be discarded
// if we can't forge at block 15. // if we can't forge at block 15.
// This value should be the expected number of blocks it takes between // This value should be the expected number of blocks it takes between
// sending a batch and having it mined. // sending a batch and having it mined.
@@ -122,20 +83,6 @@ type Config struct {
// to 0s, the coordinator will continuously forge even if the batches // to 0s, the coordinator will continuously forge even if the batches
// are empty. // are empty.
ForgeNoTxsDelay time.Duration ForgeNoTxsDelay time.Duration
// MustForgeAtSlotDeadline enables the coordinator to forge slots if
// the empty slots reach the slot deadline.
MustForgeAtSlotDeadline bool
// IgnoreSlotCommitment disables forcing the coordinator to forge a
// slot immediately when the slot is not committed. If set to false,
// the coordinator will immediately forge a batch at the beginning of
// a slot if it's the slot winner.
IgnoreSlotCommitment bool
// ForgeOncePerSlotIfTxs will make the coordinator forge at most one
// batch per slot, only if there are included txs in that batch, or
// pending l1UserTxs in the smart contract. Setting this parameter
// overrides `ForgeDelay`, `ForgeNoTxsDelay`, `MustForgeAtSlotDeadline`
// and `IgnoreSlotCommitment`.
ForgeOncePerSlotIfTxs bool
// SyncRetryInterval is the waiting interval between calls to the main // SyncRetryInterval is the waiting interval between calls to the main
// handler of a synced block after an error // handler of a synced block after an error
SyncRetryInterval time.Duration SyncRetryInterval time.Duration
@@ -197,8 +144,8 @@ type Coordinator struct {
pipelineNum int // Pipeline sequential number. The first pipeline is 1 pipelineNum int // Pipeline sequential number. The first pipeline is 1
pipelineFromBatch fromBatch // batch from which we started the pipeline pipelineFromBatch fromBatch // batch from which we started the pipeline
provers []prover.Client provers []prover.Client
consts common.SCConsts consts synchronizer.SCConsts
vars common.SCVariables vars synchronizer.SCVariables
stats synchronizer.Stats stats synchronizer.Stats
started bool started bool
@@ -328,13 +275,13 @@ type MsgSyncBlock struct {
Batches []common.BatchData Batches []common.BatchData
// Vars contains each Smart Contract variables if they are updated, or // Vars contains each Smart Contract variables if they are updated, or
// nil if they haven't changed. // nil if they haven't changed.
Vars common.SCVariablesPtr Vars synchronizer.SCVariablesPtr
} }
// MsgSyncReorg indicates a reorg // MsgSyncReorg indicates a reorg
type MsgSyncReorg struct { type MsgSyncReorg struct {
Stats synchronizer.Stats Stats synchronizer.Stats
Vars common.SCVariablesPtr Vars synchronizer.SCVariablesPtr
} }
// MsgStopPipeline indicates a signal to reset the pipeline // MsgStopPipeline indicates a signal to reset the pipeline
@@ -353,7 +300,7 @@ func (c *Coordinator) SendMsg(ctx context.Context, msg interface{}) {
} }
} }
func updateSCVars(vars *common.SCVariables, update common.SCVariablesPtr) { func updateSCVars(vars *synchronizer.SCVariables, update synchronizer.SCVariablesPtr) {
if update.Rollup != nil { if update.Rollup != nil {
vars.Rollup = *update.Rollup vars.Rollup = *update.Rollup
} }
@@ -365,13 +312,12 @@ func updateSCVars(vars *common.SCVariables, update common.SCVariablesPtr) {
} }
} }
func (c *Coordinator) syncSCVars(vars common.SCVariablesPtr) { func (c *Coordinator) syncSCVars(vars synchronizer.SCVariablesPtr) {
updateSCVars(&c.vars, vars) updateSCVars(&c.vars, vars)
} }
func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.AuctionVariables, func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.AuctionVariables,
currentSlot *common.Slot, nextSlot *common.Slot, addr ethCommon.Address, blockNum int64, currentSlot *common.Slot, nextSlot *common.Slot, addr ethCommon.Address, blockNum int64) bool {
mustForgeAtDeadline bool) bool {
if blockNum < auctionConstants.GenesisBlockNum { if blockNum < auctionConstants.GenesisBlockNum {
log.Infow("canForge: requested blockNum is < genesis", "blockNum", blockNum, log.Infow("canForge: requested blockNum is < genesis", "blockNum", blockNum,
"genesis", auctionConstants.GenesisBlockNum) "genesis", auctionConstants.GenesisBlockNum)
@@ -396,7 +342,7 @@ func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.Auc
"block", blockNum) "block", blockNum)
anyoneForge = true anyoneForge = true
} }
if slot.Forger == addr || (anyoneForge && mustForgeAtDeadline) { if slot.Forger == addr || anyoneForge {
return true return true
} }
log.Debugw("canForge: can't forge", "slot.Forger", slot.Forger) log.Debugw("canForge: can't forge", "slot.Forger", slot.Forger)
@@ -406,14 +352,14 @@ func canForge(auctionConstants *common.AuctionConstants, auctionVars *common.Auc
func (c *Coordinator) canForgeAt(blockNum int64) bool { func (c *Coordinator) canForgeAt(blockNum int64) bool {
return canForge(&c.consts.Auction, &c.vars.Auction, return canForge(&c.consts.Auction, &c.vars.Auction,
&c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot, &c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot,
c.cfg.ForgerAddress, blockNum, c.cfg.MustForgeAtSlotDeadline) c.cfg.ForgerAddress, blockNum)
} }
func (c *Coordinator) canForge() bool { func (c *Coordinator) canForge() bool {
blockNum := c.stats.Eth.LastBlock.Num + 1 blockNum := c.stats.Eth.LastBlock.Num + 1
return canForge(&c.consts.Auction, &c.vars.Auction, return canForge(&c.consts.Auction, &c.vars.Auction,
&c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot, &c.stats.Sync.Auction.CurrentSlot, &c.stats.Sync.Auction.NextSlot,
c.cfg.ForgerAddress, blockNum, c.cfg.MustForgeAtSlotDeadline) c.cfg.ForgerAddress, blockNum)
} }
func (c *Coordinator) syncStats(ctx context.Context, stats *synchronizer.Stats) error { func (c *Coordinator) syncStats(ctx context.Context, stats *synchronizer.Stats) error {
@@ -527,8 +473,7 @@ func (c *Coordinator) handleReorg(ctx context.Context, msg *MsgSyncReorg) error
// handleStopPipeline handles stopping the pipeline. If failedBatchNum is 0, // handleStopPipeline handles stopping the pipeline. If failedBatchNum is 0,
// the next pipeline will start from the last state of the synchronizer, // the next pipeline will start from the last state of the synchronizer,
// otherwise, it will start from failedBatchNum-1. // otherwise, it will start from failedBatchNum-1.
func (c *Coordinator) handleStopPipeline(ctx context.Context, reason string, func (c *Coordinator) handleStopPipeline(ctx context.Context, reason string, failedBatchNum common.BatchNum) error {
failedBatchNum common.BatchNum) error {
batchNum := c.stats.Sync.LastBatch.BatchNum batchNum := c.stats.Sync.LastBatch.BatchNum
if failedBatchNum != 0 { if failedBatchNum != 0 {
batchNum = failedBatchNum - 1 batchNum = failedBatchNum - 1


@@ -105,7 +105,7 @@ func newTestModules(t *testing.T) modules {
db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez") db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
require.NoError(t, err) require.NoError(t, err)
test.WipeDB(db) test.WipeDB(db)
l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil) l2DB := l2db.NewL2DB(db, db, 10, 100, 0.0, 24*time.Hour, nil)
historyDB := historydb.NewHistoryDB(db, db, nil) historyDB := historydb.NewHistoryDB(db, db, nil)
txSelDBPath, err = ioutil.TempDir("", "tmpTxSelDB") txSelDBPath, err = ioutil.TempDir("", "tmpTxSelDB")
@@ -126,8 +126,7 @@ func newTestModules(t *testing.T) modules {
batchBuilderDBPath, err = ioutil.TempDir("", "tmpBatchBuilderDB") batchBuilderDBPath, err = ioutil.TempDir("", "tmpBatchBuilderDB")
require.NoError(t, err) require.NoError(t, err)
deleteme = append(deleteme, batchBuilderDBPath) deleteme = append(deleteme, batchBuilderDBPath)
batchBuilder, err := batchbuilder.NewBatchBuilder(batchBuilderDBPath, syncStateDB, 0, batchBuilder, err := batchbuilder.NewBatchBuilder(batchBuilderDBPath, syncStateDB, 0, uint64(nLevels))
uint64(nLevels))
assert.NoError(t, err) assert.NoError(t, err)
return modules{ return modules{
@@ -159,15 +158,14 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
deleteme = append(deleteme, debugBatchPath) deleteme = append(deleteme, debugBatchPath)
conf := Config{ conf := Config{
ForgerAddress: forgerAddr, ForgerAddress: forgerAddr,
ConfirmBlocks: 5, ConfirmBlocks: 5,
L1BatchTimeoutPerc: 0.5, L1BatchTimeoutPerc: 0.5,
EthClientAttempts: 5, EthClientAttempts: 5,
SyncRetryInterval: 400 * time.Microsecond, SyncRetryInterval: 400 * time.Microsecond,
EthClientAttemptsDelay: 100 * time.Millisecond, EthClientAttemptsDelay: 100 * time.Millisecond,
TxManagerCheckInterval: 300 * time.Millisecond, TxManagerCheckInterval: 300 * time.Millisecond,
DebugBatchPath: debugBatchPath, DebugBatchPath: debugBatchPath,
MustForgeAtSlotDeadline: true,
Purger: PurgerCfg{ Purger: PurgerCfg{
PurgeBatchDelay: 10, PurgeBatchDelay: 10,
PurgeBlockDelay: 10, PurgeBlockDelay: 10,
@@ -189,12 +187,12 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
&prover.MockClient{Delay: 400 * time.Millisecond}, &prover.MockClient{Delay: 400 * time.Millisecond},
} }
scConsts := &common.SCConsts{ scConsts := &synchronizer.SCConsts{
Rollup: *ethClientSetup.RollupConstants, Rollup: *ethClientSetup.RollupConstants,
Auction: *ethClientSetup.AuctionConstants, Auction: *ethClientSetup.AuctionConstants,
WDelayer: *ethClientSetup.WDelayerConstants, WDelayer: *ethClientSetup.WDelayerConstants,
} }
initSCVars := &common.SCVariables{ initSCVars := &synchronizer.SCVariables{
Rollup: *ethClientSetup.RollupVariables, Rollup: *ethClientSetup.RollupVariables,
Auction: *ethClientSetup.AuctionVariables, Auction: *ethClientSetup.AuctionVariables,
WDelayer: *ethClientSetup.WDelayerVariables, WDelayer: *ethClientSetup.WDelayerVariables,
@@ -207,7 +205,7 @@ func newTestCoordinator(t *testing.T, forgerAddr ethCommon.Address, ethClient *t
func newTestSynchronizer(t *testing.T, ethClient *test.Client, ethClientSetup *test.ClientSetup, func newTestSynchronizer(t *testing.T, ethClient *test.Client, ethClientSetup *test.ClientSetup,
modules modules) *synchronizer.Synchronizer { modules modules) *synchronizer.Synchronizer {
sync, err := synchronizer.NewSynchronizer(ethClient, modules.historyDB, modules.l2DB, modules.stateDB, sync, err := synchronizer.NewSynchronizer(ethClient, modules.historyDB, modules.stateDB,
synchronizer.Config{ synchronizer.Config{
StatsRefreshPeriod: 0 * time.Second, StatsRefreshPeriod: 0 * time.Second,
}) })
@@ -392,10 +390,6 @@ func TestCoordCanForge(t *testing.T) {
assert.Equal(t, true, coord.canForge()) assert.Equal(t, true, coord.canForge())
assert.Equal(t, true, bootCoord.canForge()) assert.Equal(t, true, bootCoord.canForge())
// Anyone can forge but the node MustForgeAtSlotDeadline as set as false
coord.cfg.MustForgeAtSlotDeadline = false
assert.Equal(t, false, coord.canForge())
// Slot 3. coordinator bid, so the winner is the coordinator // Slot 3. coordinator bid, so the winner is the coordinator
stats.Eth.LastBlock.Num = ethClientSetup.AuctionConstants.GenesisBlockNum + stats.Eth.LastBlock.Num = ethClientSetup.AuctionConstants.GenesisBlockNum +
3*int64(ethClientSetup.AuctionConstants.BlocksPerSlot) 3*int64(ethClientSetup.AuctionConstants.BlocksPerSlot)
@@ -534,7 +528,7 @@ func TestCoordinatorStress(t *testing.T) {
coord.SendMsg(ctx, MsgSyncBlock{ coord.SendMsg(ctx, MsgSyncBlock{
Stats: *stats, Stats: *stats,
Batches: blockData.Rollup.Batches, Batches: blockData.Rollup.Batches,
Vars: common.SCVariablesPtr{ Vars: synchronizer.SCVariablesPtr{
Rollup: blockData.Rollup.Vars, Rollup: blockData.Rollup.Vars,
Auction: blockData.Auction.Vars, Auction: blockData.Auction.Vars,
WDelayer: blockData.WDelayer.Vars, WDelayer: blockData.WDelayer.Vars,


@@ -22,7 +22,7 @@ import (
type statsVars struct { type statsVars struct {
Stats synchronizer.Stats Stats synchronizer.Stats
Vars common.SCVariablesPtr Vars synchronizer.SCVariablesPtr
} }
type state struct { type state struct {
@@ -36,7 +36,7 @@ type state struct {
type Pipeline struct { type Pipeline struct {
num int num int
cfg Config cfg Config
consts common.SCConsts consts synchronizer.SCConsts
// state // state
state state state state
@@ -57,7 +57,7 @@ type Pipeline struct {
purger *Purger purger *Purger
stats synchronizer.Stats stats synchronizer.Stats
vars common.SCVariables vars synchronizer.SCVariables
statsVarsCh chan statsVars statsVarsCh chan statsVars
ctx context.Context ctx context.Context
@@ -90,7 +90,7 @@ func NewPipeline(ctx context.Context,
coord *Coordinator, coord *Coordinator,
txManager *TxManager, txManager *TxManager,
provers []prover.Client, provers []prover.Client,
scConsts *common.SCConsts, scConsts *synchronizer.SCConsts,
) (*Pipeline, error) { ) (*Pipeline, error) {
proversPool := NewProversPool(len(provers)) proversPool := NewProversPool(len(provers))
proversPoolSize := 0 proversPoolSize := 0
@@ -124,8 +124,7 @@ func NewPipeline(ctx context.Context,
} }
// SetSyncStatsVars is a thread safe method to set the synchronizer Stats // SetSyncStatsVars is a thread safe method to set the synchronizer Stats func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats, vars *synchronizer.SCVariablesPtr) {
func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats, func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats, vars *synchronizer.SCVariablesPtr) {
vars *common.SCVariablesPtr) {
select { select {
case p.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}: case p.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
case <-ctx.Done(): case <-ctx.Done():
@@ -134,7 +133,7 @@ func (p *Pipeline) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Sta
// reset pipeline state // reset pipeline state
func (p *Pipeline) reset(batchNum common.BatchNum, func (p *Pipeline) reset(batchNum common.BatchNum,
stats *synchronizer.Stats, vars *common.SCVariables) error { stats *synchronizer.Stats, vars *synchronizer.SCVariables) error {
p.state = state{ p.state = state{
batchNum: batchNum, batchNum: batchNum,
lastForgeL1TxsNum: stats.Sync.LastForgeL1TxsNum, lastForgeL1TxsNum: stats.Sync.LastForgeL1TxsNum,
@@ -195,7 +194,7 @@ func (p *Pipeline) reset(batchNum common.BatchNum,
return nil return nil
} }
func (p *Pipeline) syncSCVars(vars common.SCVariablesPtr) { func (p *Pipeline) syncSCVars(vars synchronizer.SCVariablesPtr) {
updateSCVars(&p.vars, vars) updateSCVars(&p.vars, vars)
} }
@@ -210,7 +209,7 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
return nil, ctx.Err() return nil, ctx.Err()
} else if err != nil { } else if err != nil {
log.Errorw("proversPool.Get", "err", err) log.Errorw("proversPool.Get", "err", err)
return nil, tracerr.Wrap(err) return nil, err
} }
defer func() { defer func() {
// If we encounter any error (notice that this function returns // If we encounter any error (notice that this function returns
@@ -224,9 +223,8 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
// 2. Forge the batch internally (make a selection of txs and prepare // 2. Forge the batch internally (make a selection of txs and prepare
// all the smart contract arguments) // all the smart contract arguments)
var skipReason *string
p.mutexL2DBUpdateDelete.Lock() p.mutexL2DBUpdateDelete.Lock()
batchInfo, skipReason, err = p.forgeBatch(batchNum) batchInfo, err = p.forgeBatch(batchNum)
p.mutexL2DBUpdateDelete.Unlock() p.mutexL2DBUpdateDelete.Unlock()
if ctx.Err() != nil { if ctx.Err() != nil {
return nil, ctx.Err() return nil, ctx.Err()
@@ -235,13 +233,13 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
log.Warnw("forgeBatch: scheduled L1Batch too early", "err", err, log.Warnw("forgeBatch: scheduled L1Batch too early", "err", err,
"lastForgeL1TxsNum", p.state.lastForgeL1TxsNum, "lastForgeL1TxsNum", p.state.lastForgeL1TxsNum,
"syncLastForgeL1TxsNum", p.stats.Sync.LastForgeL1TxsNum) "syncLastForgeL1TxsNum", p.stats.Sync.LastForgeL1TxsNum)
} else if tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
tracerr.Unwrap(err) == errForgeBeforeDelay {
// no log
} else { } else {
log.Errorw("forgeBatch", "err", err) log.Errorw("forgeBatch", "err", err)
} }
return nil, tracerr.Wrap(err) return nil, err
} else if skipReason != nil {
log.Debugw("skipping batch", "batch", batchNum, "reason", *skipReason)
return nil, tracerr.Wrap(errSkipBatchByPolicy)
} }
// 3. Send the ZKInputs to the proof server // 3. Send the ZKInputs to the proof server
@@ -250,14 +248,14 @@ func (p *Pipeline) handleForgeBatch(ctx context.Context,
return nil, ctx.Err() return nil, ctx.Err()
} else if err != nil { } else if err != nil {
log.Errorw("sendServerProof", "err", err) log.Errorw("sendServerProof", "err", err)
return nil, tracerr.Wrap(err) return nil, err
} }
return batchInfo, nil return batchInfo, nil
} }
// Start the forging pipeline // Start the forging pipeline
func (p *Pipeline) Start(batchNum common.BatchNum, func (p *Pipeline) Start(batchNum common.BatchNum,
stats *synchronizer.Stats, vars *common.SCVariables) error { stats *synchronizer.Stats, vars *synchronizer.SCVariables) error {
if p.started { if p.started {
log.Fatal("Pipeline already started") log.Fatal("Pipeline already started")
} }
@@ -296,7 +294,8 @@ func (p *Pipeline) Start(batchNum common.BatchNum,
if p.ctx.Err() != nil { if p.ctx.Err() != nil {
continue continue
} else if tracerr.Unwrap(err) == errLastL1BatchNotSynced || } else if tracerr.Unwrap(err) == errLastL1BatchNotSynced ||
tracerr.Unwrap(err) == errSkipBatchByPolicy { tracerr.Unwrap(err) == errForgeNoTxsBeforeDelay ||
tracerr.Unwrap(err) == errForgeBeforeDelay {
continue continue
} else if err != nil { } else if err != nil {
p.setErrAtBatchNum(batchNum) p.setErrAtBatchNum(batchNum)
@@ -389,109 +388,17 @@ func (p *Pipeline) sendServerProof(ctx context.Context, batchInfo *BatchInfo) er
return nil return nil
} }
// slotCommitted returns true if the current slot has already been committed
func (p *Pipeline) slotCommitted() bool {
// Synchronizer has synchronized a batch in the current slot (setting
// CurrentSlot.ForgerCommitment) or the pipeline has already
// internally-forged a batch in the current slot
return p.stats.Sync.Auction.CurrentSlot.ForgerCommitment ||
p.stats.Sync.Auction.CurrentSlot.SlotNum == p.state.lastSlotForged
}
// forgePolicySkipPreSelection is called before doing a tx selection in a batch to
// determine by policy if we should forge the batch or not. Returns true and
// the reason when the forging of the batch must be skipped.
func (p *Pipeline) forgePolicySkipPreSelection(now time.Time) (bool, string) {
// Check if the slot is not yet fulfilled
slotCommitted := p.slotCommitted()
if p.cfg.ForgeOncePerSlotIfTxs {
if slotCommitted {
return true, "cfg.ForgeOncePerSlotIfTxs = true and slot already committed"
}
return false, ""
}
// Determine if we must commit the slot
if !p.cfg.IgnoreSlotCommitment && !slotCommitted {
return false, ""
}
// If we haven't reached the ForgeDelay, skip forging the batch
if now.Sub(p.lastForgeTime) < p.cfg.ForgeDelay {
return true, "we haven't reached the forge delay"
}
return false, ""
}
// forgePolicySkipPostSelection is called after doing a tx selection in a batch to
// determine by policy if we should forge the batch or not. Returns true and
// the reason when the forging of the batch must be skipped.
func (p *Pipeline) forgePolicySkipPostSelection(now time.Time, l1UserTxsExtra, l1CoordTxs []common.L1Tx,
poolL2Txs []common.PoolL2Tx, batchInfo *BatchInfo) (bool, string, error) {
// Check if the slot is not yet fulfilled
slotCommitted := p.slotCommitted()
pendingTxs := true
if len(l1UserTxsExtra) == 0 && len(l1CoordTxs) == 0 && len(poolL2Txs) == 0 {
if batchInfo.L1Batch {
// Query the number of unforged L1UserTxs
// (either in a open queue or in a frozen
// not-yet-forged queue).
count, err := p.historyDB.GetUnforgedL1UserTxsCount()
if err != nil {
return false, "", err
}
// If there are future L1UserTxs, we forge a
// batch to advance the queues to be able to
// forge the L1UserTxs in the future.
// Otherwise, skip.
if count == 0 {
pendingTxs = false
}
} else {
pendingTxs = false
}
}
if p.cfg.ForgeOncePerSlotIfTxs {
if slotCommitted {
return true, "cfg.ForgeOncePerSlotIfTxs = true and slot already committed",
nil
}
if pendingTxs {
return false, "", nil
}
return true, "cfg.ForgeOncePerSlotIfTxs = true and no pending txs",
nil
}
// Determine if we must commit the slot
if !p.cfg.IgnoreSlotCommitment && !slotCommitted {
return false, "", nil
}
// check if there are no txs to forge, no l1UserTxs in the open queue to
// freeze and we haven't reached the ForgeNoTxsDelay
if now.Sub(p.lastForgeTime) < p.cfg.ForgeNoTxsDelay {
if !pendingTxs {
return true, "no txs to forge and we haven't reached the forge no txs delay",
nil
}
}
return false, "", nil
}
// forgeBatch forges the batchNum batch.
func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
	skipReason *string, err error) {
	// remove transactions from the pool that have been there for too long
	_, err = p.purger.InvalidateMaybe(p.l2DB, p.txSelector.LocalAccountsDB(),
		p.stats.Sync.LastBlock.Num, int64(batchNum))
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	_, err = p.purger.PurgeMaybe(p.l2DB, p.stats.Sync.LastBlock.Num, int64(batchNum))
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	// Structure to accumulate data and metadata of the batch
	now := time.Now()
@@ -499,50 +406,85 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
	batchInfo.Debug.StartTimestamp = now
	batchInfo.Debug.StartBlockNum = p.stats.Eth.LastBlock.Num + 1
	var poolL2Txs []common.PoolL2Tx
	var discardedL2Txs []common.PoolL2Tx
	var l1UserTxs, l1CoordTxs []common.L1Tx
	var auths [][]byte
	var coordIdxs []common.Idx
	if skip, reason := p.forgePolicySkipPreSelection(now); skip {
		return nil, &reason, nil
	}
	// 1. Decide if we forge L2Tx or L1+L2Tx
	if p.shouldL1L2Batch(batchInfo) {
		batchInfo.L1Batch = true
		if p.state.lastForgeL1TxsNum != p.stats.Sync.LastForgeL1TxsNum {
			return nil, nil, tracerr.Wrap(errLastL1BatchNotSynced)
		}
		// 2a: L1+L2 txs
		_l1UserTxs, err := p.historyDB.GetUnforgedL1UserTxs(p.state.lastForgeL1TxsNum + 1)
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		coordIdxs, auths, l1UserTxs, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
			p.txSelector.GetL1L2TxSelection(p.cfg.TxProcessorConfig, _l1UserTxs)
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
	} else {
		// 2b: only L2 txs
		coordIdxs, auths, l1CoordTxs, poolL2Txs, discardedL2Txs, err =
			p.txSelector.GetL2TxSelection(p.cfg.TxProcessorConfig)
		if err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		l1UserTxs = nil
	}
	if skip, reason, err := p.forgePolicySkipPostSelection(now,
		l1UserTxs, l1CoordTxs, poolL2Txs, batchInfo); err != nil {
		return nil, nil, tracerr.Wrap(err)
	} else if skip {
		if err := p.txSelector.Reset(batchInfo.BatchNum-1, false); err != nil {
			return nil, nil, tracerr.Wrap(err)
		}
		return nil, &reason, tracerr.Wrap(err)
	}
	if batchInfo.L1Batch {
@@ -551,41 +493,40 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
	}
	// 3. Save metadata from TxSelector output for BatchNum
	batchInfo.L1UserTxs = l1UserTxs
	batchInfo.L1CoordTxs = l1CoordTxs
	batchInfo.L1CoordinatorTxsAuths = auths
	batchInfo.CoordIdxs = coordIdxs
	batchInfo.VerifierIdx = p.cfg.VerifierIdx
	if err := p.l2DB.StartForging(common.TxIDsFromPoolL2Txs(poolL2Txs),
		batchInfo.BatchNum); err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	if err := p.l2DB.UpdateTxsInfo(discardedL2Txs); err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	// Invalidate transactions that become invalid because of
	// the poolL2Txs selected. Will mark as invalid the txs that have a
	// (fromIdx, nonce) which already appears in the selected txs (includes
	// all the nonces smaller than the current one)
	err = p.l2DB.InvalidateOldNonces(idxsNonceFromPoolL2Txs(poolL2Txs), batchInfo.BatchNum)
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	// 4. Call BatchBuilder with TxSelector output
	configBatch := &batchbuilder.ConfigBatch{
		TxProcessorConfig: p.cfg.TxProcessorConfig,
	}
	zkInputs, err := p.batchBuilder.BuildBatch(coordIdxs, configBatch, l1UserTxs,
		l1CoordTxs, poolL2Txs)
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	l2Txs, err := common.PoolL2TxsToL2Txs(poolL2Txs) // NOTE: This is a bit ugly, find a better way
	if err != nil {
		return nil, nil, tracerr.Wrap(err)
	}
	batchInfo.L2Txs = l2Txs

@@ -597,13 +538,12 @@ func (p *Pipeline) forgeBatch(batchNum common.BatchNum) (batchInfo *BatchInfo,
	p.state.lastSlotForged = p.stats.Sync.Auction.CurrentSlot.SlotNum
	return batchInfo, nil, nil
}
// waitServerProof gets the generated zkProof & sends it to the SmartContract
func (p *Pipeline) waitServerProof(ctx context.Context, batchInfo *BatchInfo) error {
	proof, pubInputs, err := batchInfo.ServerProof.GetProof(ctx) // blocking call,
	// until not resolved don't continue. Returns when the proof server has calculated the proof
	if err != nil {
		return tracerr.Wrap(err)
	}

@@ -642,7 +582,7 @@ func prepareForgeBatchArgs(batchInfo *BatchInfo) *eth.RollupForgeBatchArgs {
		NewLastIdx:            int64(zki.Metadata.NewLastIdxRaw),
		NewStRoot:             zki.Metadata.NewStateRootRaw.BigInt(),
		NewExitRoot:           zki.Metadata.NewExitRootRaw.BigInt(),
		L1UserTxs:             batchInfo.L1UserTxs,
		L1CoordinatorTxs:      batchInfo.L1CoordTxs,
		L1CoordinatorTxsAuths: batchInfo.L1CoordinatorTxsAuths,
		L2TxsData:             batchInfo.L2Txs,


@@ -140,7 +140,7 @@ func preloadSync(t *testing.T, ethClient *test.Client, sync *synchronizer.Synchr
	blocks[0].Rollup.Batches[0].Batch.StateRoot =
		newBigInt("0")
	blocks[0].Rollup.Batches[1].Batch.StateRoot =
		newBigInt("6860514559199319426609623120853503165917774887908204288119245630904770452486")
	ethAddTokens(blocks, ethClient)
	err = ethClient.CtlAddBlocks(blocks)
@@ -206,7 +206,11 @@ PoolTransfer(0) User2-User3: 300 (126)
		require.NoError(t, err)
	}
	err = pipeline.reset(batchNum, syncStats, syncSCVars)
	require.NoError(t, err)
	// Sanity check
	sdbAccounts, err := pipeline.txSelector.LocalAccountsDB().TestGetAccounts()

@@ -224,12 +228,12 @@ PoolTransfer(0) User2-User3: 300 (126)
	batchNum++
	batchInfo, _, err := pipeline.forgeBatch(batchNum)
	require.NoError(t, err)
	assert.Equal(t, 3, len(batchInfo.L2Txs))
	batchNum++
	batchInfo, _, err = pipeline.forgeBatch(batchNum)
	require.NoError(t, err)
	assert.Equal(t, 0, len(batchInfo.L2Txs))
}


@@ -14,7 +14,7 @@ import (
// PurgerCfg is the purger configuration
type PurgerCfg struct {
	// PurgeBatchDelay is the delay between batches to purge outdated
	// transactions. Outdated L2Txs are those that have been forged or
	// marked as invalid for longer than the SafetyPeriod and pending L2Txs
	// that have been in the pool for longer than TTL once there are
	// MaxTxs.

@@ -23,7 +23,7 @@ type PurgerCfg struct {
	// transactions due to nonce lower than the account nonce.
	InvalidateBatchDelay int64
	// PurgeBlockDelay is the delay between blocks to purge outdated
	// transactions. Outdated L2Txs are those that have been forged or
	// marked as invalid for longer than the SafetyPeriod and pending L2Txs
	// that have been in the pool for longer than TTL once there are
	// MaxTxs.


@@ -21,7 +21,7 @@ func newL2DB(t *testing.T) *l2db.L2DB {
	db, err := dbUtils.InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
	require.NoError(t, err)
	test.WipeDB(db)
	return l2db.NewL2DB(db, db, 10, 100, 0.0, 1000.0, 24*time.Hour, nil)
}

func newStateDB(t *testing.T) *statedb.LocalStateDB {


@@ -31,10 +31,10 @@ type TxManager struct {
	batchCh           chan *BatchInfo
	chainID           *big.Int
	account           accounts.Account
	consts            common.SCConsts
	stats             synchronizer.Stats
	vars              common.SCVariables
	statsVarsCh       chan statsVars
	discardPipelineCh chan int // int refers to the pipelineNum

@@ -55,8 +55,7 @@ type TxManager struct {
// NewTxManager creates a new TxManager
func NewTxManager(ctx context.Context, cfg *Config, ethClient eth.ClientInterface, l2DB *l2db.L2DB,
	coord *Coordinator, scConsts *common.SCConsts, initSCVars *common.SCVariables) (
	*TxManager, error) {
	chainID, err := ethClient.EthChainID()
	if err != nil {
		return nil, tracerr.Wrap(err)
	}

@@ -67,7 +66,7 @@ func NewTxManager(ctx context.Context, cfg *Config, ethClient eth.ClientInterfac
	}
	accNonce, err := ethClient.EthNonceAt(ctx, *address, nil)
	if err != nil {
		return nil, tracerr.Wrap(err)
	}
	log.Infow("TxManager started", "nonce", accNonce)
	return &TxManager{

@@ -103,8 +102,7 @@ func (t *TxManager) AddBatch(ctx context.Context, batchInfo *BatchInfo) {
	}

// SetSyncStatsVars is a thread safe method to set the synchronizer Stats
func (t *TxManager) SetSyncStatsVars(ctx context.Context, stats *synchronizer.Stats,
	vars *common.SCVariablesPtr) {
	select {
	case t.statsVarsCh <- statsVars{Stats: *stats, Vars: *vars}:
	case <-ctx.Done():

@@ -120,7 +118,7 @@ func (t *TxManager) DiscardPipeline(ctx context.Context, pipelineNum int) {
	}
}

func (t *TxManager) syncSCVars(vars common.SCVariablesPtr) {
	updateSCVars(&t.vars, vars)
}

@@ -147,7 +145,7 @@ func (t *TxManager) NewAuth(ctx context.Context, batchInfo *BatchInfo) (*bind.Tr
	auth.Value = big.NewInt(0) // in wei
	gasLimit := t.cfg.ForgeBatchGasCost.Fixed +
		uint64(len(batchInfo.L1UserTxs))*t.cfg.ForgeBatchGasCost.L1UserTx +
		uint64(len(batchInfo.L1CoordTxs))*t.cfg.ForgeBatchGasCost.L1CoordTx +
		uint64(len(batchInfo.L2Txs))*t.cfg.ForgeBatchGasCost.L2Tx
	auth.GasLimit = gasLimit

@@ -184,30 +182,19 @@ func addPerc(v *big.Int, p int64) *big.Int {
	r.Mul(r, big.NewInt(p))
	// nolint reason: to calculate percentages we divide by 100
	r.Div(r, big.NewInt(100)) //nolint:gomnd
	// If the increase is 0, force it to be 1 so that a gas increase
	// doesn't result in the same value, making the transaction equal
	// to the previous one.
	if r.Cmp(big.NewInt(0)) == 0 {
		r = big.NewInt(1)
	}
	return r.Add(v, r)
}
func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchInfo,
	resend bool) error {
	var ethTx *types.Transaction
	var err error
	var auth *bind.TransactOpts
	if resend {
		auth = batchInfo.Auth
		auth.GasPrice = addPerc(auth.GasPrice, 10)
	} else {
		auth, err = t.NewAuth(ctx, batchInfo)
		if err != nil {
			return tracerr.Wrap(err)
		}
		batchInfo.Auth = auth
		auth.Nonce = big.NewInt(int64(t.accNextNonce))
	}
	for attempt := 0; attempt < t.cfg.EthClientAttempts; attempt++ {
		if auth.GasPrice.Cmp(t.cfg.MaxGasPrice) > 0 {

@@ -278,8 +265,7 @@ func (t *TxManager) sendRollupForgeBatch(ctx context.Context, batchInfo *BatchIn
			t.lastSentL1BatchBlockNum = t.stats.Eth.LastBlock.Num + 1
		}
	}
	if err := t.l2DB.DoneForging(common.TxIDsFromL2Txs(batchInfo.L2Txs),
		batchInfo.BatchNum); err != nil {
		return tracerr.Wrap(err)
	}
	return nil

@@ -311,9 +297,7 @@ func (t *TxManager) checkEthTransactionReceipt(ctx context.Context, batchInfo *B
		}
	}
	if err != nil {
		return tracerr.Wrap(
			fmt.Errorf("reached max attempts for ethClient.EthTransactionReceipt: %w",
				err))
	}
	batchInfo.Receipt = receipt
	t.cfg.debugBatchStore(batchInfo)

@@ -503,7 +487,7 @@ func (t *TxManager) Run(ctx context.Context) {
			// Our ethNode is giving an error different
			// than "not found" when getting the receipt
			// for the transaction, so we can't figure out
			// if it was not mined, mined and successful or
			// mined and failed. This could be due to the
			// ethNode failure.
			t.coord.SendMsg(ctx, MsgStopPipeline{

@@ -568,7 +552,7 @@ func (t *TxManager) removeBadBatchInfos(ctx context.Context) error {
			// Our ethNode is giving an error different
			// than "not found" when getting the receipt
			// for the transaction, so we can't figure out
			// if it was not mined, mined and successful or
			// mined and failed. This could be due to the
			// ethNode failure.
			next++

@@ -608,7 +592,7 @@ func (t *TxManager) removeBadBatchInfos(ctx context.Context) error {
func (t *TxManager) canForgeAt(blockNum int64) bool {
	return canForge(&t.consts.Auction, &t.vars.Auction,
		&t.stats.Sync.Auction.CurrentSlot, &t.stats.Sync.Auction.NextSlot,
		t.cfg.ForgerAddress, blockNum, t.cfg.MustForgeAtSlotDeadline)
}

func (t *TxManager) mustL1L2Batch(blockNum int64) bool {


@@ -8,7 +8,6 @@ import (
	"time"

	ethCommon "github.com/ethereum/go-ethereum/common"
	"github.com/hermeznetwork/hermez-node/api/apitypes"
	"github.com/hermeznetwork/hermez-node/common"
	"github.com/hermeznetwork/hermez-node/db"
	"github.com/hermeznetwork/tracerr"

@@ -39,14 +38,14 @@ func (hdb *HistoryDB) GetBatchAPI(batchNum common.BatchNum) (*BatchAPI, error) {
	return hdb.getBatchAPI(hdb.dbRead, batchNum)
}

// GetBatchInternalAPI returns the batch with the given batchNum
func (hdb *HistoryDB) GetBatchInternalAPI(batchNum common.BatchNum) (*BatchAPI, error) {
	return hdb.getBatchAPI(hdb.dbRead, batchNum)
}

func (hdb *HistoryDB) getBatchAPI(d meddler.DB, batchNum common.BatchNum) (*BatchAPI, error) {
	batch := &BatchAPI{}
	if err := meddler.QueryRow(
		d, batch,
		`SELECT batch.item_id, batch.batch_num, batch.eth_block_num,
		batch.forger_addr, batch.fees_collected, batch.total_fees_usd, batch.state_root,

@@ -55,11 +54,7 @@ func (hdb *HistoryDB) getBatchAPI(d meddler.DB, batchNum common.BatchNum) (*Batc
		COALESCE ((SELECT COUNT(*) FROM tx WHERE batch_num = batch.batch_num), 0) AS forged_txs
		FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
		WHERE batch_num = $1;`, batchNum,
	); err != nil {
		return nil, tracerr.Wrap(err)
	}
	batch.CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batch.CollectedFeesDB)
	return batch, nil
}

// GetBatchesAPI returns the batches applying the given filters
@@ -160,9 +155,6 @@ func (hdb *HistoryDB) GetBatchesAPI(
	if len(batches) == 0 {
		return batches, 0, nil
	}
	for i := range batches {
		batches[i].CollectedFeesAPI = apitypes.NewCollectedFeesAPI(batches[i].CollectedFeesDB)
	}
	return batches, batches[0].TotalItems - uint64(len(batches)), nil
}

@@ -937,8 +929,7 @@ func (hdb *HistoryDB) GetCommonAccountAPI(idx common.Idx) (*common.Account, erro
	defer hdb.apiConnCon.Release()
	account := &common.Account{}
	err = meddler.QueryRow(
		hdb.dbRead, account, `SELECT idx, token_id, batch_num, bjj, eth_addr
		FROM account WHERE idx = $1;`, idx,
	)
	return account, tracerr.Wrap(err)
}

@@ -953,7 +944,6 @@ func (hdb *HistoryDB) GetCoordinatorAPI(bidderAddr ethCommon.Address) (*Coordina
	defer hdb.apiConnCon.Release()
	return hdb.getCoordinatorAPI(hdb.dbRead, bidderAddr)
}

func (hdb *HistoryDB) getCoordinatorAPI(d meddler.DB, bidderAddr ethCommon.Address) (*CoordinatorAPI, error) {
	coordinator := &CoordinatorAPI{}
	err := meddler.QueryRow(

@@ -964,7 +954,6 @@ func (hdb *HistoryDB) getCoordinatorAPI(d meddler.DB, bidderAddr ethCommon.Addre
	return coordinator, tracerr.Wrap(err)
}

// GetNodeInfoAPI returns the NodeInfo
func (hdb *HistoryDB) GetNodeInfoAPI() (*NodeInfo, error) {
	cancel, err := hdb.apiConnCon.Acquire()
	defer cancel()

@@ -975,9 +964,9 @@ func (hdb *HistoryDB) GetNodeInfoAPI() (*NodeInfo, error) {
	return hdb.GetNodeInfo()
}

// GetBucketUpdatesInternalAPI returns the latest bucket updates
func (hdb *HistoryDB) GetBucketUpdatesInternalAPI() ([]BucketUpdateAPI, error) {
	var bucketUpdates []*BucketUpdateAPI
	err := meddler.QueryAll(
		hdb.dbRead, &bucketUpdates,
		`SELECT num_bucket, withdrawals FROM bucket_update

@@ -988,7 +977,7 @@ func (hdb *HistoryDB) GetBucketUpdatesInternalAPI() ([]BucketUpdateAPI, error) {
	return db.SlicePtrsToSlice(bucketUpdates).([]BucketUpdateAPI), tracerr.Wrap(err)
}

// GetNextForgersInternalAPI returns the next forgers
func (hdb *HistoryDB) GetNextForgersInternalAPI(auctionVars *common.AuctionVariables,
	auctionConsts *common.AuctionConstants,
	lastBlock common.Block, currentSlot, lastClosedSlot int64) ([]NextForgerAPI, error) {

@@ -1082,9 +1071,13 @@ func (hdb *HistoryDB) GetNextForgersInternalAPI(auctionVars *common.AuctionVaria
	return nextForgers, nil
}

// GetMetricsInternalAPI returns the MetricsAPI
func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metrics *MetricsAPI, poolLoad int64, err error) {
	metrics = &MetricsAPI{}
	// Get the first and last batch of the last 24h and their timestamps
	type period struct {
		FromBatchNum  common.BatchNum `meddler:"from_batch_num"`
		FromTimestamp time.Time       `meddler:"from_timestamp"`

@@ -1102,7 +1095,7 @@ func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metri
		FROM batch INNER JOIN block ON batch.eth_block_num = block.eth_block_num
		WHERE block.timestamp >= NOW() - INTERVAL '24 HOURS';`,
	); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	// Get the amount of txs of that period
	row := hdb.dbRead.QueryRow(

@@ -1111,7 +1104,7 @@ func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metri
	)
	var nTxs int
	if err := row.Scan(&nTxs); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	// Set txs/s
	seconds := p.ToTimestamp.Sub(p.FromTimestamp).Seconds()

@@ -1125,6 +1118,7 @@ func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metri
		nBatches++
	}
	if (p.ToBatchNum - p.FromBatchNum) > 0 {
		metrics.TransactionsPerBatch = float64(nTxs) /
			float64(nBatches)
	} else {

@@ -1137,38 +1131,29 @@ func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metri
	)
	var totalFee float64
	if err := row.Scan(&totalFee); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	// Set batch frequency
	metrics.BatchFrequency = seconds / float64(nBatches)
	// Set avg transaction fee (only L2 txs have fee)
	row = hdb.dbRead.QueryRow(
		`SELECT COUNT(*) as total_txs FROM tx WHERE tx.batch_num between $1 AND $2 AND NOT is_l1;`,
		p.FromBatchNum, p.ToBatchNum,
	)
	var nL2Txs int
	if err := row.Scan(&nL2Txs); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	if nL2Txs > 0 {
		metrics.AvgTransactionFee = totalFee / float64(nL2Txs)
	} else {
		metrics.AvgTransactionFee = 0
	}
	// Get and set amount of registered accounts
	type registeredAccounts struct {
		TokenAccounts int64 `meddler:"token_accounts"`
		Wallets       int64 `meddler:"wallets"`
	}
	ra := &registeredAccounts{}
	if err := meddler.QueryRow(
		hdb.dbRead, ra,
		`SELECT COUNT(*) AS token_accounts, COUNT(DISTINCT(bjj)) AS wallets FROM account;`,
	); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	metrics.TokenAccounts = ra.TokenAccounts
	metrics.Wallets = ra.Wallets
	// Get and set estimated time to forge L1 tx
	row = hdb.dbRead.QueryRow(
		`SELECT COALESCE (AVG(EXTRACT(EPOCH FROM (forged.timestamp - added.timestamp))), 0) FROM tx

@@ -1180,21 +1165,12 @@ func (hdb *HistoryDB) GetMetricsInternalAPI(lastBatchNum common.BatchNum) (metri
	)
	var timeToForgeL1 float64
	if err := row.Scan(&timeToForgeL1); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	metrics.EstimatedTimeToForgeL1 = timeToForgeL1
	// Get amount of txs in the pool
	row = hdb.dbRead.QueryRow(
		`SELECT COUNT(*) FROM tx_pool WHERE state = $1 AND NOT external_delete;`,
		common.PoolL2TxStatePending,
	)
	if err := row.Scan(&poolLoad); err != nil {
		return nil, 0, tracerr.Wrap(err)
	}
	return metrics, poolLoad, nil
}
// GetStateAPI returns the StateAPI
func (hdb *HistoryDB) GetStateAPI() (*StateAPI, error) { func (hdb *HistoryDB) GetStateAPI() (*StateAPI, error) {
cancel, err := hdb.apiConnCon.Acquire() cancel, err := hdb.apiConnCon.Acquire()
defer cancel() defer cancel()
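The base side of the metrics hunk above averages the fee only over L2 transactions and guards against a zero count. A minimal sketch of that guard (the helper name is illustrative, not from the repository):

```go
package main

import "fmt"

// avgFee mirrors the guarded division in the diff above: only L2 txs carry
// a fee, so the average is the total fee over the L2-tx count, falling back
// to 0 when there are no L2 txs.
func avgFee(totalFee float64, nL2Txs int) float64 {
	if nL2Txs > 0 {
		return totalFee / float64(nL2Txs)
	}
	return 0
}

func main() {
	fmt.Println(avgFee(10, 4)) // 2.5
	fmt.Println(avgFee(10, 0)) // 0 (no division by zero)
}
```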


@@ -179,7 +179,7 @@ func (hdb *HistoryDB) GetBatch(batchNum common.BatchNum) (*common.Batch, error)
 		batch.slot_num, batch.total_fees_usd FROM batch WHERE batch_num = $1;`,
 		batchNum,
 	)
-	return &batch, tracerr.Wrap(err)
+	return &batch, err
 }

 // GetAllBatches retrieve all batches from the DB
@@ -235,7 +235,7 @@ func (hdb *HistoryDB) GetLastBatch() (*common.Batch, error) {
 		batch.num_accounts, batch.last_idx, batch.exit_root, batch.forge_l1_txs_num,
 		batch.slot_num, batch.total_fees_usd FROM batch ORDER BY batch_num DESC LIMIT 1;`,
 	)
-	return &batch, tracerr.Wrap(err)
+	return &batch, err
 }

 // GetLastL1BatchBlockNum returns the blockNum of the latest forged l1Batch
@@ -456,10 +456,13 @@ func (hdb *HistoryDB) addTokens(d meddler.DB, tokens []common.Token) error {
 // UpdateTokenValue updates the USD value of a token. Value is the price in
 // USD of a normalized token (1 token = 10^decimals units)
-func (hdb *HistoryDB) UpdateTokenValue(tokenAddr ethCommon.Address, value float64) error {
+func (hdb *HistoryDB) UpdateTokenValue(tokenSymbol string, value float64) error {
+	// Sanitize symbol
+	tokenSymbol = strings.ToValidUTF8(tokenSymbol, " ")
 	_, err := hdb.dbWrite.Exec(
-		"UPDATE token SET usd = $1 WHERE eth_addr = $2;",
-		value, tokenAddr,
+		"UPDATE token SET usd = $1 WHERE symbol = $2;",
+		value, tokenSymbol,
 	)
 	return tracerr.Wrap(err)
 }
@@ -483,14 +486,23 @@ func (hdb *HistoryDB) GetAllTokens() ([]TokenWithUSD, error) {
 	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), tracerr.Wrap(err)
 }

-// GetTokenSymbolsAndAddrs returns all the token symbols and addresses from the DB
-func (hdb *HistoryDB) GetTokenSymbolsAndAddrs() ([]TokenSymbolAndAddr, error) {
-	var tokens []*TokenSymbolAndAddr
-	err := meddler.QueryAll(
-		hdb.dbRead, &tokens,
-		"SELECT symbol, eth_addr FROM token;",
-	)
-	return db.SlicePtrsToSlice(tokens).([]TokenSymbolAndAddr), tracerr.Wrap(err)
+// GetTokenSymbols returns all the token symbols from the DB
+func (hdb *HistoryDB) GetTokenSymbols() ([]string, error) {
+	var tokenSymbols []string
+	rows, err := hdb.dbRead.Query("SELECT symbol FROM token;")
+	if err != nil {
+		return nil, tracerr.Wrap(err)
+	}
+	defer db.RowsClose(rows)
+	sym := new(string)
+	for rows.Next() {
+		err = rows.Scan(sym)
+		if err != nil {
+			return nil, tracerr.Wrap(err)
+		}
+		tokenSymbols = append(tokenSymbols, *sym)
+	}
+	return tokenSymbols, nil
 }

 // AddAccounts insert accounts into the DB
@@ -693,11 +705,11 @@ func (hdb *HistoryDB) GetAllExits() ([]common.ExitInfo, error) {
 func (hdb *HistoryDB) GetAllL1UserTxs() ([]common.L1Tx, error) {
 	var txs []*common.L1Tx
 	err := meddler.QueryAll(
-		hdb.dbRead, &txs,
+		hdb.dbRead, &txs, // Note that '\x' gets parsed as a big.Int with value = 0
 		`SELECT tx.id, tx.to_forge_l1_txs_num, tx.position, tx.user_origin,
 		tx.from_idx, tx.effective_from_idx, tx.from_eth_addr, tx.from_bjj, tx.to_idx, tx.token_id,
-		tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE 0 END) AS effective_amount,
-		tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE 0 END) AS effective_deposit_amount,
+		tx.amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.amount_success THEN tx.amount ELSE '\x' END) AS effective_amount,
+		tx.deposit_amount, (CASE WHEN tx.batch_num IS NULL THEN NULL WHEN tx.deposit_amount_success THEN tx.deposit_amount ELSE '\x' END) AS effective_deposit_amount,
 		tx.eth_block_num, tx.type, tx.batch_num
 		FROM tx WHERE is_l1 = TRUE AND user_origin = TRUE ORDER BY item_id;`,
 	)
@@ -751,16 +763,6 @@ func (hdb *HistoryDB) GetUnforgedL1UserTxs(toForgeL1TxsNum int64) ([]common.L1Tx
 	return db.SlicePtrsToSlice(txs).([]common.L1Tx), tracerr.Wrap(err)
 }

-// GetUnforgedL1UserTxsCount returns the count of unforged L1Txs (either in
-// open or frozen queues that are not yet forged)
-func (hdb *HistoryDB) GetUnforgedL1UserTxsCount() (int, error) {
-	row := hdb.dbRead.QueryRow(
-		`SELECT COUNT(*) FROM tx WHERE batch_num IS NULL;`,
-	)
-	var count int
-	return count, tracerr.Wrap(row.Scan(&count))
-}
-
 // TODO: Think about chaning all the queries that return a last value, to queries that return the next valid value.
 // GetLastTxsPosition for a given to_forge_l1_txs_num
@@ -1159,7 +1161,7 @@ func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
 	tokens := []*TokenWithUSD{}
 	if err := meddler.QueryAll(
 		hdb.dbRead, &tokens,
-		"SELECT * FROM token ORDER BY token_id ASC",
+		"SELECT * FROM TOKEN",
 	); err != nil {
 		return nil, tracerr.Wrap(err)
 	}
@@ -1169,19 +1171,8 @@ func (hdb *HistoryDB) GetTokensTest() ([]TokenWithUSD, error) {
 	return db.SlicePtrsToSlice(tokens).([]TokenWithUSD), nil
 }

-const (
-	// CreateAccountExtraFeePercentage is the multiplication factor over
-	// the average fee for CreateAccount that is applied to obtain the
-	// recommended fee for CreateAccount
-	CreateAccountExtraFeePercentage float64 = 2.5
-	// CreateAccountInternalExtraFeePercentage is the multiplication factor
-	// over the average fee for CreateAccountInternal that is applied to
-	// obtain the recommended fee for CreateAccountInternal
-	CreateAccountInternalExtraFeePercentage float64 = 2.0
-)
-
-// GetRecommendedFee returns the RecommendedFee information
-func (hdb *HistoryDB) GetRecommendedFee(minFeeUSD, maxFeeUSD float64) (*common.RecommendedFee, error) {
+// UpdateRecommendedFee update Status.RecommendedFee information
+func (hdb *HistoryDB) GetRecommendedFee(minFeeUSD float64) (*common.RecommendedFee, error) {
 	var recommendedFee common.RecommendedFee
 	// Get total txs and the batch of the first selected tx of the last hour
 	type totalTxsSinceBatchNum struct {
@@ -1217,11 +1208,11 @@ func (hdb *HistoryDB) GetRecommendedFee(minFeeUSD, maxFeeUSD float64) (*common.R
 	} else {
 		avgTransactionFee = 0
 	}
-	recommendedFee.ExistingAccount = math.Min(maxFeeUSD,
-		math.Max(avgTransactionFee, minFeeUSD))
-	recommendedFee.CreatesAccount = math.Min(maxFeeUSD,
-		math.Max(CreateAccountExtraFeePercentage*avgTransactionFee, minFeeUSD))
-	recommendedFee.CreatesAccountInternal = math.Min(maxFeeUSD,
-		math.Max(CreateAccountInternalExtraFeePercentage*avgTransactionFee, minFeeUSD))
+	recommendedFee.ExistingAccount =
+		math.Max(avgTransactionFee, minFeeUSD)
+	recommendedFee.CreatesAccount =
+		math.Max(createAccountExtraFeePercentage*avgTransactionFee, minFeeUSD)
+	recommendedFee.CreatesAccountAndRegister =
+		math.Max(createAccountInternalExtraFeePercentage*avgTransactionFee, minFeeUSD)
 	return &recommendedFee, nil
 }
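The base side of the hunk above adds an upper bound with `math.Min` on top of the `math.Max` floor. A runnable sketch of that clamping pattern (helper name and parameters are illustrative, not from the repository):

```go
package main

import (
	"fmt"
	"math"
)

// clampFee scales the average fee by a per-tx-type factor, then bounds the
// result to the interval [minFeeUSD, maxFeeUSD], as the base side does for
// each recommended-fee field.
func clampFee(avgFee, factor, minFeeUSD, maxFeeUSD float64) float64 {
	return math.Min(maxFeeUSD, math.Max(factor*avgFee, minFeeUSD))
}

func main() {
	fmt.Println(clampFee(0.10, 2.5, 0.05, 0.20)) // 0.25 capped to maxFeeUSD
	fmt.Println(clampFee(0.01, 1.0, 0.05, 0.20)) // raised to the minFeeUSD floor
}
```

Without the `math.Min` cap (the head side), a spike in the average fee propagates directly into the recommendation, which is what the added `maxFeeUSD` parameter guards against.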


@@ -11,7 +11,6 @@ import (
 	"time"

 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	dbUtils "github.com/hermeznetwork/hermez-node/db"
 	"github.com/hermeznetwork/hermez-node/log"
@@ -44,7 +43,7 @@ func TestMain(m *testing.M) {
 	if err != nil {
 		panic(err)
 	}
-	apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second)
+	apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second)
 	historyDBWithACC = NewHistoryDB(db, db, apiConnCon)
 	// Run tests
 	result := m.Run()
@@ -167,7 +166,7 @@ func TestBatches(t *testing.T) {
 		if i%2 != 0 {
 			// Set value to the token
 			value := (float64(i) + 5) * 5.389329
-			assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
+			assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
 			tokensValue[token.TokenID] = value / math.Pow(10, float64(token.Decimals))
 		}
 	}
@@ -277,7 +276,7 @@ func TestTokens(t *testing.T) {
 	// Update token value
 	for i, token := range tokens {
 		value := 1.01 * float64(i)
-		assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
+		assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
 	}
 	// Fetch tokens
 	fetchedTokens, err = historyDB.GetTokensTest()
@@ -303,7 +302,7 @@ func TestTokensUTF8(t *testing.T) {
 	// Generate fake tokens
 	const nTokens = 5
 	tokens, ethToken := test.GenTokens(nTokens, blocks)
-	nonUTFTokens := make([]common.Token, len(tokens))
+	nonUTFTokens := make([]common.Token, len(tokens)+1)
 	// Force token.name and token.symbol to be non UTF-8 Strings
 	for i, token := range tokens {
 		token.Name = fmt.Sprint("NON-UTF8-NAME-\xc5-", i)
@@ -333,7 +332,7 @@ func TestTokensUTF8(t *testing.T) {
 	// Update token value
 	for i, token := range nonUTFTokens {
 		value := 1.01 * float64(i)
-		assert.NoError(t, historyDB.UpdateTokenValue(token.EthAddr, value))
+		assert.NoError(t, historyDB.UpdateTokenValue(token.Symbol, value))
 	}
 	// Fetch tokens
 	fetchedTokens, err = historyDB.GetTokensTest()
@@ -721,10 +720,6 @@ func TestGetUnforgedL1UserTxs(t *testing.T) {
 	assert.Equal(t, 5, len(l1UserTxs))
 	assert.Equal(t, blocks[0].Rollup.L1UserTxs, l1UserTxs)

-	count, err := historyDB.GetUnforgedL1UserTxsCount()
-	require.NoError(t, err)
-	assert.Equal(t, 5, count)
-
 	// No l1UserTxs for this toForgeL1TxsNum
 	l1UserTxs, err = historyDB.GetUnforgedL1UserTxs(2)
 	require.NoError(t, err)
@@ -1177,7 +1172,16 @@ func TestGetMetricsAPI(t *testing.T) {
 		assert.NoError(t, err)
 	}

-	res, _, err := historyDB.GetMetricsInternalAPI(common.BatchNum(numBatches))
+	// clientSetupExample := test.NewClientSetupExample()
+	// apiStateUpdater := NewAPIStateUpdater(historyDB, &NodeConfig{1000, 0.5},
+	// 	&Constants{
+	// 		RollupConstants:   *clientSetupExample.RollupConstants,
+	// 		AuctionConstants:  *clientSetupExample.AuctionConstants,
+	// 		WDelayerConstants: *clientSetupExample.WDelayerConstants,
+	// 		ChainID:           uint16(clientSetupExample.ChainID.Int64()),
+	// 		HermezAddress:     clientSetupExample.AuctionConstants.HermezRollup,
+	// 	})
+	res, err := historyDB.GetMetricsInternalAPI(common.BatchNum(numBatches))
 	assert.NoError(t, err)

 	assert.Equal(t, float64(numTx)/float64(numBatches), res.TransactionsPerBatch)
@@ -1186,8 +1190,8 @@ func TestGetMetricsAPI(t *testing.T) {
 	// There is a -2 as time for first and last batch is not taken into account
 	assert.InEpsilon(t, float64(frequency)*float64(numBatches-2)/float64(numBatches), res.BatchFrequency, 0.01)
 	assert.InEpsilon(t, float64(numTx)/float64(frequency*blockNum-frequency), res.TransactionsPerSecond, 0.01)
-	assert.Equal(t, int64(3), res.TokenAccounts)
-	assert.Equal(t, int64(3), res.Wallets)
+	assert.Equal(t, int64(3), res.TotalAccounts)
+	assert.Equal(t, int64(3), res.TotalBJJs)
 	// Til does not set fees
 	assert.Equal(t, float64(0), res.AvgTransactionFee)
 }
@@ -1255,22 +1259,22 @@ func TestGetMetricsAPIMoreThan24Hours(t *testing.T) {
 		assert.NoError(t, err)
 	}

-	res, _, err := historyDBWithACC.GetMetricsInternalAPI(common.BatchNum(numBatches))
+	res, err := historyDBWithACC.GetMetricsInternalAPI(common.BatchNum(numBatches))
 	assert.NoError(t, err)

 	assert.InEpsilon(t, 1.0, res.TransactionsPerBatch, 0.1)
 	assert.InEpsilon(t, res.BatchFrequency, float64(blockTime/time.Second), 0.1)
 	assert.InEpsilon(t, 1.0/float64(blockTime/time.Second), res.TransactionsPerSecond, 0.1)
-	assert.Equal(t, int64(3), res.TokenAccounts)
-	assert.Equal(t, int64(3), res.Wallets)
+	assert.Equal(t, int64(3), res.TotalAccounts)
+	assert.Equal(t, int64(3), res.TotalBJJs)
 	// Til does not set fees
 	assert.Equal(t, float64(0), res.AvgTransactionFee)
 }

 func TestGetMetricsAPIEmpty(t *testing.T) {
 	test.WipeDB(historyDB.DB())
-	_, _, err := historyDBWithACC.GetMetricsInternalAPI(0)
+	_, err := historyDBWithACC.GetMetricsInternalAPI(0)
 	assert.NoError(t, err)
 }
@@ -1463,7 +1467,7 @@ func setTestBlocks(from, to int64) []common.Block {
 func TestNodeInfo(t *testing.T) {
 	test.WipeDB(historyDB.DB())

-	err := historyDB.SetStateInternalAPI(&StateAPI{})
+	err := historyDB.SetAPIState(&StateAPI{})
 	require.NoError(t, err)

 	clientSetup := test.NewClientSetupExample()
@@ -1480,71 +1484,17 @@ func TestNodeInfo(t *testing.T) {
 	require.NoError(t, err)

 	// Test parameters
-	var f64 float64 = 1.2
-	var i64 int64 = 8888
-	addr := ethCommon.HexToAddress("0x1234")
-	hash := ethCommon.HexToHash("0x5678")
 	stateAPI := &StateAPI{
-		NodePublicInfo: NodePublicInfo{
+		NodePublicConfig: NodePublicConfig{
 			ForgeDelay: 3.1,
 		},
 		Network: NetworkAPI{
 			LastEthBlock:  12,
 			LastSyncBlock: 34,
-			LastBatch: &BatchAPI{
-				ItemID:       123,
-				BatchNum:     456,
-				EthBlockNum:  789,
-				EthBlockHash: hash,
-				Timestamp:    time.Now(),
-				ForgerAddr:   addr,
-				// CollectedFeesDB: map[common.TokenID]*big.Int{
-				// 	0: big.NewInt(11111),
-				// 	1: big.NewInt(21111),
-				// 	2: big.NewInt(31111),
-				// },
-				CollectedFeesAPI: apitypes.CollectedFeesAPI(map[common.TokenID]apitypes.BigIntStr{
-					0: apitypes.BigIntStr("11111"),
-					1: apitypes.BigIntStr("21111"),
-					2: apitypes.BigIntStr("31111"),
-				}),
-				TotalFeesUSD:  &f64,
-				StateRoot:     apitypes.BigIntStr("1234"),
-				NumAccounts:   11,
-				ExitRoot:      apitypes.BigIntStr("5678"),
-				ForgeL1TxsNum: &i64,
-				SlotNum:       44,
-				ForgedTxs:     23,
-				TotalItems:    0,
-				FirstItem:     0,
-				LastItem:      0,
-			},
-			CurrentSlot: 22,
-			NextForgers: []NextForgerAPI{
-				{
-					Coordinator: CoordinatorAPI{
-						ItemID:      111,
-						Bidder:      addr,
-						Forger:      addr,
-						EthBlockNum: 566,
-						URL:         "asd",
-						TotalItems:  0,
-						FirstItem:   0,
-						LastItem:    0,
-					},
-					Period: Period{
-						SlotNum:       33,
-						FromBlock:     55,
-						ToBlock:       66,
-						FromTimestamp: time.Now(),
-						ToTimestamp:   time.Now(),
-					},
-				},
-			},
 		},
 		Metrics: MetricsAPI{
 			TransactionsPerBatch: 1.1,
-			TokenAccounts:        42,
+			TotalAccounts:        42,
 		},
 		Rollup:  *NewRollupVariablesAPI(clientSetup.RollupVariables),
 		Auction: *NewAuctionVariablesAPI(clientSetup.AuctionVariables),
@@ -1553,7 +1503,7 @@ func TestNodeInfo(t *testing.T) {
 			ExistingAccount: 0.15,
 		},
 	}
-	err = historyDB.SetStateInternalAPI(stateAPI)
+	err = historyDB.SetAPIState(stateAPI)
 	require.NoError(t, err)

 	nodeConfig := &NodeConfig{
@@ -1571,16 +1521,7 @@ func TestNodeInfo(t *testing.T) {
 	require.NoError(t, err)
 	assert.Equal(t, nodeConfig, dbNodeConfig)

-	dbStateAPI, err := historyDB.getStateAPI(historyDB.dbRead)
+	dbStateAPI, err := historyDB.GetStateAPI()
 	require.NoError(t, err)
-	assert.Equal(t, stateAPI.Network.LastBatch.Timestamp.Unix(),
-		dbStateAPI.Network.LastBatch.Timestamp.Unix())
-	dbStateAPI.Network.LastBatch.Timestamp = stateAPI.Network.LastBatch.Timestamp
-	assert.Equal(t, stateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix(),
-		dbStateAPI.Network.NextForgers[0].Period.FromTimestamp.Unix())
-	dbStateAPI.Network.NextForgers[0].Period.FromTimestamp = stateAPI.Network.NextForgers[0].Period.FromTimestamp
-	assert.Equal(t, stateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix(),
-		dbStateAPI.Network.NextForgers[0].Period.ToTimestamp.Unix())
-	dbStateAPI.Network.NextForgers[0].Period.ToTimestamp = stateAPI.Network.NextForgers[0].Period.ToTimestamp
 	assert.Equal(t, stateAPI, dbStateAPI)
 }


@@ -4,13 +4,16 @@ import (
 	"time"

 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/russross/meddler"
 )

-// Period represents a time period in ethereum
+const (
+	createAccountExtraFeePercentage         float64 = 2
+	createAccountInternalExtraFeePercentage float64 = 2.5
+)
+
 type Period struct {
 	SlotNum   int64 `json:"slotNum"`
 	FromBlock int64 `json:"fromBlock"`
@@ -19,33 +22,28 @@ type Period struct {
 	ToTimestamp   time.Time `json:"toTimestamp"`
 }

-// NextForgerAPI represents the next forger exposed via the API
 type NextForgerAPI struct {
 	Coordinator CoordinatorAPI `json:"coordinator"`
 	Period      Period         `json:"period"`
 }

-// NetworkAPI is the network state exposed via the API
 type NetworkAPI struct {
 	LastEthBlock  int64           `json:"lastEthereumBlock"`
 	LastSyncBlock int64           `json:"lastSynchedBlock"`
 	LastBatch     *BatchAPI       `json:"lastBatch"`
 	CurrentSlot   int64           `json:"currentSlot"`
 	NextForgers   []NextForgerAPI `json:"nextForgers"`
-	PendingL1Txs  int             `json:"pendingL1Transactions"`
 }

-// NodePublicInfo is the configuration and metrics of the node that is exposed via API
-type NodePublicInfo struct {
+// NodePublicConfig is the configuration of the node that is exposed via API
+type NodePublicConfig struct {
 	// ForgeDelay in seconds
 	ForgeDelay float64 `json:"forgeDelay"`
-	// PoolLoad amount of transactions in the pool
-	PoolLoad int64 `json:"poolLoad"`
 }

-// StateAPI is an object representing the node and network state exposed via the API
 type StateAPI struct {
-	NodePublicInfo NodePublicInfo `json:"node"`
+	// NodePublicConfig is the configuration of the node that is exposed via API
+	NodePublicConfig NodePublicConfig `json:"nodeConfig"`
 	Network        NetworkAPI         `json:"network"`
 	Metrics        MetricsAPI         `json:"metrics"`
 	Rollup         RollupVariablesAPI `json:"rollup"`
@@ -54,30 +52,27 @@ type StateAPI struct {
 	RecommendedFee common.RecommendedFee `json:"recommendedFee"`
 }

-// Constants contains network constants
 type Constants struct {
+	// RollupConstants   common.RollupConstants
+	// AuctionConstants  common.AuctionConstants
+	// WDelayerConstants common.WDelayerConstants
 	common.SCConsts
 	ChainID       uint16
 	HermezAddress ethCommon.Address
 }

-// NodeConfig contains the node config exposed in the API
 type NodeConfig struct {
-	MaxPoolTxs uint32
-	MinFeeUSD  float64
-	MaxFeeUSD  float64
-	ForgeDelay float64
+	MaxPoolTxs uint32  `meddler:"max_pool_txs"`
+	MinFeeUSD  float64 `meddler:"min_fee"`
 }

-// NodeInfo contains information about he node used when serving the API
 type NodeInfo struct {
 	ItemID     int         `meddler:"item_id,pk"`
-	StateAPI   *StateAPI   `meddler:"state,json"`
+	APIState   *StateAPI   `meddler:"state,json"`
 	NodeConfig *NodeConfig `meddler:"config,json"`
 	Constants  *Constants  `meddler:"constants,json"`
 }

-// GetNodeInfo returns the NodeInfo
 func (hdb *HistoryDB) GetNodeInfo() (*NodeInfo, error) {
 	ni := &NodeInfo{}
 	err := meddler.QueryRow(
@@ -86,7 +81,6 @@ func (hdb *HistoryDB) GetNodeInfo() (*NodeInfo, error) {
 	return ni, tracerr.Wrap(err)
 }

-// GetConstants returns the Constats
 func (hdb *HistoryDB) GetConstants() (*Constants, error) {
 	var nodeInfo NodeInfo
 	err := meddler.QueryRow(
@@ -96,7 +90,6 @@ func (hdb *HistoryDB) GetConstants() (*Constants, error) {
 	return nodeInfo.Constants, tracerr.Wrap(err)
 }

-// SetConstants sets the Constants
 func (hdb *HistoryDB) SetConstants(constants *Constants) error {
 	_constants := struct {
 		Constants *Constants `meddler:"constants,json"`
@@ -112,7 +105,6 @@ func (hdb *HistoryDB) SetConstants(constants *Constants) error {
 	return tracerr.Wrap(err)
 }

-// GetStateInternalAPI returns the StateAPI
 func (hdb *HistoryDB) GetStateInternalAPI() (*StateAPI, error) {
 	return hdb.getStateAPI(hdb.dbRead)
 }
@@ -123,19 +115,14 @@ func (hdb *HistoryDB) getStateAPI(d meddler.DB) (*StateAPI, error) {
 		d, &nodeInfo,
 		"SELECT state FROM node_info WHERE item_id = 1;",
 	)
-	return nodeInfo.StateAPI, tracerr.Wrap(err)
+	return nodeInfo.APIState, tracerr.Wrap(err)
 }

-// SetStateInternalAPI sets the StateAPI
-func (hdb *HistoryDB) SetStateInternalAPI(stateAPI *StateAPI) error {
-	if stateAPI.Network.LastBatch != nil {
-		stateAPI.Network.LastBatch.CollectedFeesAPI =
-			apitypes.NewCollectedFeesAPI(stateAPI.Network.LastBatch.CollectedFeesDB)
-	}
-	_stateAPI := struct {
-		StateAPI *StateAPI `meddler:"state,json"`
-	}{stateAPI}
-	values, err := meddler.Default.Values(&_stateAPI, false)
+func (hdb *HistoryDB) SetAPIState(apiState *StateAPI) error {
+	_apiState := struct {
+		APIState *StateAPI `meddler:"state,json"`
+	}{apiState}
+	values, err := meddler.Default.Values(&_apiState, false)
 	if err != nil {
 		return tracerr.Wrap(err)
 	}
@@ -146,7 +133,6 @@ func (hdb *HistoryDB) SetStateInternalAPI(stateAPI *StateAPI) error {
 	return tracerr.Wrap(err)
 }

-// GetNodeConfig returns the NodeConfig
 func (hdb *HistoryDB) GetNodeConfig() (*NodeConfig, error) {
 	var nodeInfo NodeInfo
 	err := meddler.QueryRow(
@@ -156,7 +142,6 @@ func (hdb *HistoryDB) GetNodeConfig() (*NodeConfig, error) {
 	return nodeInfo.NodeConfig, tracerr.Wrap(err)
 }

-// SetNodeConfig sets the NodeConfig
 func (hdb *HistoryDB) SetNodeConfig(nodeConfig *NodeConfig) error {
 	_nodeConfig := struct {
 		NodeConfig *NodeConfig `meddler:"config,json"`
@@ -166,8 +151,65 @@ func (hdb *HistoryDB) SetNodeConfig(nodeConfig *NodeConfig) error {
 		return tracerr.Wrap(err)
 	}
 	_, err = hdb.dbWrite.Exec(
-		"UPDATE node_info SET config = $1 WHERE item_id = 1;",
+		"UPDATE config SET state = $1 WHERE item_id = 1;",
 		values[0],
 	)
 	return tracerr.Wrap(err)
 }
+
+// func (hdb *HistoryDB) SetInitialNodeInfo(maxPoolTxs uint32, minFeeUSD float64, constants *Constants) error {
+// 	ni := &NodeInfo{
+// 		MaxPoolTxs: &maxPoolTxs,
+// 		MinFeeUSD:  &minFeeUSD,
+// 		Constants:  constants,
+// 	}
+// 	return tracerr.Wrap(meddler.Insert(hdb.dbWrite, "node_info", ni))
+// }
+
+// apiSlotToBigInts converts from [6]*apitypes.BigIntStr to [6]*big.Int
+// func apiSlotToBigInts(defaultSlotSetBid [6]*apitypes.BigIntStr) ([6]*big.Int, error) {
+// 	var slots [6]*big.Int
+//
+// 	for i, slot := range defaultSlotSetBid {
+// 		bigInt, ok := new(big.Int).SetString(string(*slot), 10)
+// 		if !ok {
+// 			return slots, tracerr.Wrap(fmt.Errorf("can't convert %T into big.Int", slot))
+// 		}
+// 		slots[i] = bigInt
+// 	}
+//
+// 	return slots, nil
+// }
+
+// func (hdb *HistoryDB) updateNodeInfo(setUpdatedNodeInfo func(*sqlx.Tx, *NodeInfo) error) error {
+// 	// Create a SQL transaction or read and update atomicaly
+// 	txn, err := hdb.dbWrite.Beginx()
+// 	if err != nil {
+// 		return tracerr.Wrap(err)
+// 	}
+// 	defer func() {
+// 		if err != nil {
+// 			db.Rollback(txn)
+// 		}
+// 	}()
+// 	// Read current node info
+// 	ni := &NodeInfo{}
+// 	if err := meddler.QueryRow(
+// 		txn, ni, "SELECT * FROM node_info;",
+// 	); err != nil {
+// 		return tracerr.Wrap(err)
+// 	}
+// 	// Update NodeInfo struct
+// 	if err := setUpdatedNodeInfo(txn, ni); err != nil {
+// 		return tracerr.Wrap(err)
+// 	}
+// 	// Update NodeInfo at DB
+// 	if _, err := txn.Exec("DELETE FROM node_info;"); err != nil {
+// 		return tracerr.Wrap(err)
+// 	}
+// 	if err := meddler.Insert(txn, "node_info", ni); err != nil {
+// 		return tracerr.Wrap(err)
+// 	}
+// 	// Commit NodeInfo update
+// 	return tracerr.Wrap(txn.Commit())
+// }


@@ -6,7 +6,7 @@ import (
 	"time"

 	ethCommon "github.com/ethereum/go-ethereum/common"
-	"github.com/hermeznetwork/hermez-node/api/apitypes"
+	"github.com/hermeznetwork/hermez-node/apitypes"
 	"github.com/hermeznetwork/hermez-node/common"
 	"github.com/iden3/go-iden3-crypto/babyjub"
 	"github.com/iden3/go-merkletree"
@@ -147,12 +147,6 @@ type txWrite struct {
 	Nonce *common.Nonce `meddler:"nonce"`
 }

-// TokenSymbolAndAddr token representation with only Eth addr and symbol
-type TokenSymbolAndAddr struct {
-	Symbol string            `meddler:"symbol"`
-	Addr   ethCommon.Address `meddler:"eth_addr"`
-}
-
 // TokenWithUSD add USD info to common.Token
 type TokenWithUSD struct {
 	ItemID uint64 `json:"itemId" meddler:"item_id"`
@@ -289,24 +283,23 @@ func (account AccountAPI) MarshalJSON() ([]byte, error) {
 // BatchAPI is a representation of a batch with additional information
 // required by the API, and extracted by joining block table
 type BatchAPI struct {
 	ItemID       uint64            `json:"itemId" meddler:"item_id"`
 	BatchNum     common.BatchNum   `json:"batchNum" meddler:"batch_num"`
 	EthBlockNum  int64             `json:"ethereumBlockNum" meddler:"eth_block_num"`
 	EthBlockHash ethCommon.Hash    `json:"ethereumBlockHash" meddler:"hash"`
 	Timestamp    time.Time         `json:"timestamp" meddler:"timestamp,utctime"`
 	ForgerAddr   ethCommon.Address `json:"forgerAddr" meddler:"forger_addr"`
-	CollectedFeesDB  map[common.TokenID]*big.Int `json:"-" meddler:"fees_collected,json"`
-	CollectedFeesAPI apitypes.CollectedFeesAPI   `json:"collectedFees" meddler:"-"`
+	CollectedFees apitypes.CollectedFees `json:"collectedFees" meddler:"fees_collected,json"`
 	TotalFeesUSD  *float64           `json:"historicTotalCollectedFeesUSD" meddler:"total_fees_usd"`
 	StateRoot     apitypes.BigIntStr `json:"stateRoot" meddler:"state_root"`
 	NumAccounts   int                `json:"numAccounts" meddler:"num_accounts"`
 	ExitRoot      apitypes.BigIntStr `json:"exitRoot" meddler:"exit_root"`
 	ForgeL1TxsNum *int64             `json:"forgeL1TransactionsNum" meddler:"forge_l1_txs_num"`
 	SlotNum       int64              `json:"slotNum" meddler:"slot_num"`
 	ForgedTxs     int                `json:"forgedTransactions" meddler:"forged_txs"`
 	TotalItems    uint64             `json:"-" meddler:"total_items"`
 	FirstItem     uint64             `json:"-" meddler:"first_item"`
 	LastItem      uint64             `json:"-" meddler:"last_item"`
 }

 // MetricsAPI define metrics of the network
@@ -314,10 +307,10 @@ type MetricsAPI struct {
TransactionsPerBatch float64 `json:"transactionsPerBatch"` TransactionsPerBatch float64 `json:"transactionsPerBatch"`
BatchFrequency float64 `json:"batchFrequency"` BatchFrequency float64 `json:"batchFrequency"`
TransactionsPerSecond float64 `json:"transactionsPerSecond"` TransactionsPerSecond float64 `json:"transactionsPerSecond"`
TokenAccounts int64 `json:"tokenAccounts"` TotalAccounts int64 `json:"totalAccounts" meddler:"total_accounts"`
Wallets int64 `json:"wallets"` TotalBJJs int64 `json:"totalBJJs" meddler:"total_bjjs"`
AvgTransactionFee float64 `json:"avgTransactionFee"` AvgTransactionFee float64 `json:"avgTransactionFee"`
EstimatedTimeToForgeL1 float64 `json:"estimatedTimeToForgeL1" meddler:"estimated_time_to_forge_l1"` EstimatedTimeToForgeL1 float64 `json:"estimatedTimeToForgeL1" meddler:"estimatedTimeToForgeL1"`
} }
// BidAPI is a representation of a bid with additional information // BidAPI is a representation of a bid with additional information
View File
@@ -316,7 +316,7 @@ func (k *KVDB) ResetFromSynchronizer(batchNum common.BatchNum, synchronizerKVDB
checkpointPath := path.Join(k.cfg.Path, fmt.Sprintf("%s%d", PathBatchNum, batchNum)) checkpointPath := path.Join(k.cfg.Path, fmt.Sprintf("%s%d", PathBatchNum, batchNum))
// copy synchronizer 'BatchNumX' to 'BatchNumX' // copy synchronizer'BatchNumX' to 'BatchNumX'
if err := synchronizerKVDB.MakeCheckpointFromTo(batchNum, checkpointPath); err != nil { if err := synchronizerKVDB.MakeCheckpointFromTo(batchNum, checkpointPath); err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -458,7 +458,7 @@ func (k *KVDB) CheckpointExists(batchNum common.BatchNum) (bool, error) {
if _, err := os.Stat(source); os.IsNotExist(err) { if _, err := os.Stat(source); os.IsNotExist(err) {
return false, nil return false, nil
} else if err != nil { } else if err != nil {
return false, tracerr.Wrap(err) return false, err
} }
return true, nil return true, nil
} }
@@ -544,12 +544,10 @@ func (k *KVDB) MakeCheckpointFromTo(fromBatchNum common.BatchNum, dest string) e
// synchronizer to the same batchNum // synchronizer to the same batchNum
k.m.Lock() k.m.Lock()
defer k.m.Unlock() defer k.m.Unlock()
return PebbleMakeCheckpoint(source, dest) return pebbleMakeCheckpoint(source, dest)
} }
// PebbleMakeCheckpoint is a helper function to make a pebble checkpoint from func pebbleMakeCheckpoint(source, dest string) error {
// source to dest.
func PebbleMakeCheckpoint(source, dest string) error {
// Remove dest folder (if it exists) before doing the checkpoint // Remove dest folder (if it exists) before doing the checkpoint
if _, err := os.Stat(dest); os.IsNotExist(err) { if _, err := os.Stat(dest); os.IsNotExist(err) {
} else if err != nil { } else if err != nil {
View File
@@ -62,10 +62,6 @@ func (l2db *L2DB) AddTxAPI(tx *PoolL2TxWrite) error {
return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) < minFeeUSD (%v)", return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) < minFeeUSD (%v)",
feeUSD, l2db.minFeeUSD)) feeUSD, l2db.minFeeUSD))
} }
if feeUSD > l2db.maxFeeUSD {
return tracerr.Wrap(fmt.Errorf("tx.feeUSD (%v) > maxFeeUSD (%v)",
feeUSD, l2db.maxFeeUSD))
}
// Prepare insert SQL query argument parameters // Prepare insert SQL query argument parameters
namesPart, err := meddler.Default.ColumnsQuoted(tx, false) namesPart, err := meddler.Default.ColumnsQuoted(tx, false)
@@ -84,7 +80,7 @@ func (l2db *L2DB) AddTxAPI(tx *PoolL2TxWrite) error {
q := fmt.Sprintf( q := fmt.Sprintf(
`INSERT INTO tx_pool (%s) `INSERT INTO tx_pool (%s)
SELECT %s SELECT %s
WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = $%v AND NOT external_delete) < $%v;`, WHERE (SELECT COUNT(*) FROM tx_pool WHERE state = $%v) < $%v;`,
namesPart, valuesPart, namesPart, valuesPart,
len(values)+1, len(values)+2) //nolint:gomnd len(values)+1, len(values)+2) //nolint:gomnd
values = append(values, common.PoolL2TxStatePending, l2db.maxTxs) values = append(values, common.PoolL2TxStatePending, l2db.maxTxs)
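The hunk above removes the upper fee bound from `AddTxAPI`, leaving only the minimum check. The bounds logic being removed can be sketched as a standalone function (illustrative values; not the repository's actual API):

```go
package main

import "fmt"

// checkFee mirrors the fee validation in AddTxAPI before this change: a pool
// transaction is rejected when its fee in USD falls outside
// [minFeeUSD, maxFeeUSD]. The concrete numbers below are made up.
func checkFee(feeUSD, minFeeUSD, maxFeeUSD float64) error {
	if feeUSD < minFeeUSD {
		return fmt.Errorf("tx.feeUSD (%v) < minFeeUSD (%v)", feeUSD, minFeeUSD)
	}
	if feeUSD > maxFeeUSD {
		return fmt.Errorf("tx.feeUSD (%v) > maxFeeUSD (%v)", feeUSD, maxFeeUSD)
	}
	return nil
}

func main() {
	fmt.Println(checkFee(0.5, 0.0, 1000.0))  // <nil>: fee within bounds
	fmt.Println(checkFee(2000, 0.0, 1000.0)) // error: fee above maxFeeUSD
}
```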
View File
@@ -27,7 +27,6 @@ type L2DB struct {
ttl time.Duration ttl time.Duration
maxTxs uint32 // limit of txs that are accepted in the pool maxTxs uint32 // limit of txs that are accepted in the pool
minFeeUSD float64 minFeeUSD float64
maxFeeUSD float64
apiConnCon *db.APIConnectionController apiConnCon *db.APIConnectionController
} }
@@ -39,7 +38,6 @@ func NewL2DB(
safetyPeriod common.BatchNum, safetyPeriod common.BatchNum,
maxTxs uint32, maxTxs uint32,
minFeeUSD float64, minFeeUSD float64,
maxFeeUSD float64,
TTL time.Duration, TTL time.Duration,
apiConnCon *db.APIConnectionController, apiConnCon *db.APIConnectionController,
) *L2DB { ) *L2DB {
@@ -50,7 +48,6 @@ func NewL2DB(
ttl: TTL, ttl: TTL,
maxTxs: maxTxs, maxTxs: maxTxs,
minFeeUSD: minFeeUSD, minFeeUSD: minFeeUSD,
maxFeeUSD: maxFeeUSD,
apiConnCon: apiConnCon, apiConnCon: apiConnCon,
} }
} }
@@ -77,16 +74,6 @@ func (l2db *L2DB) AddAccountCreationAuth(auth *common.AccountCreationAuth) error
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
// AddManyAccountCreationAuth inserts a batch of accounts creation authorization
// if not exist into the DB
func (l2db *L2DB) AddManyAccountCreationAuth(auths []common.AccountCreationAuth) error {
_, err := sqlx.NamedExec(l2db.dbWrite,
`INSERT INTO account_creation_auth (eth_addr, bjj, signature)
VALUES (:ethaddr, :bjj, :signature)
ON CONFLICT (eth_addr) DO NOTHING`, auths)
return tracerr.Wrap(err)
}
// GetAccountCreationAuth returns an account creation authorization from the DB // GetAccountCreationAuth returns an account creation authorization from the DB
func (l2db *L2DB) GetAccountCreationAuth(addr ethCommon.Address) (*common.AccountCreationAuth, error) { func (l2db *L2DB) GetAccountCreationAuth(addr ethCommon.Address) (*common.AccountCreationAuth, error) {
auth := new(common.AccountCreationAuth) auth := new(common.AccountCreationAuth)
@@ -207,7 +194,7 @@ func (l2db *L2DB) GetPendingTxs() ([]common.PoolL2Tx, error) {
var txs []*common.PoolL2Tx var txs []*common.PoolL2Tx
err := meddler.QueryAll( err := meddler.QueryAll(
l2db.dbRead, &txs, l2db.dbRead, &txs,
selectPoolTxCommon+"WHERE state = $1 AND NOT external_delete;", selectPoolTxCommon+"WHERE state = $1",
common.PoolL2TxStatePending, common.PoolL2TxStatePending,
) )
return db.SlicePtrsToSlice(txs).([]common.PoolL2Tx), tracerr.Wrap(err) return db.SlicePtrsToSlice(txs).([]common.PoolL2Tx), tracerr.Wrap(err)
@@ -323,7 +310,7 @@ func (l2db *L2DB) InvalidateOldNonces(updatedAccounts []common.IdxNonce, batchNu
return nil return nil
} }
// Fill the batch_num in the query with Sprintf because we are using a // Fill the batch_num in the query with Sprintf because we are using a
// named query which works with slices, and doesn't handle an extra // named query which works with slices, and doens't handle an extra
// individual argument. // individual argument.
query := fmt.Sprintf(invalidateOldNoncesQuery, batchNum) query := fmt.Sprintf(invalidateOldNoncesQuery, batchNum)
if _, err := sqlx.NamedExec(l2db.dbWrite, query, updatedAccounts); err != nil { if _, err := sqlx.NamedExec(l2db.dbWrite, query, updatedAccounts); err != nil {
View File
@@ -37,9 +37,9 @@ func TestMain(m *testing.M) {
if err != nil { if err != nil {
panic(err) panic(err)
} }
l2DB = NewL2DB(db, db, 10, 1000, 0.0, 1000.0, 24*time.Hour, nil) l2DB = NewL2DB(db, db, 10, 1000, 0.0, 24*time.Hour, nil)
apiConnCon := dbUtils.NewAPIConnectionController(1, time.Second) apiConnCon := dbUtils.NewAPICnnectionController(1, time.Second)
l2DBWithACC = NewL2DB(db, db, 10, 1000, 0.0, 1000.0, 24*time.Hour, apiConnCon) l2DBWithACC = NewL2DB(db, db, 10, 1000, 0.0, 24*time.Hour, apiConnCon)
test.WipeDB(l2DB.DB()) test.WipeDB(l2DB.DB())
historyDB = historydb.NewHistoryDB(db, db, nil) historyDB = historydb.NewHistoryDB(db, db, nil)
// Run tests // Run tests
@@ -121,7 +121,7 @@ func prepareHistoryDB(historyDB *historydb.HistoryDB) error {
} }
tokens[token.TokenID] = readToken tokens[token.TokenID] = readToken
// Set value to the tokens // Set value to the tokens
err := historyDB.UpdateTokenValue(readToken.EthAddr, *readToken.USD) err := historyDB.UpdateTokenValue(readToken.Symbol, *readToken.USD)
if err != nil { if err != nil {
return tracerr.Wrap(err) return tracerr.Wrap(err)
} }
@@ -725,43 +725,6 @@ func TestAuth(t *testing.T) {
} }
} }
func TestManyAuth(t *testing.T) {
test.WipeDB(l2DB.DB())
const nAuths = 5
chainID := uint16(0)
hermezContractAddr := ethCommon.HexToAddress("0xc344E203a046Da13b0B4467EB7B3629D0C99F6E6")
// Generate authorizations
genAuths := test.GenAuths(nAuths, chainID, hermezContractAddr)
auths := make([]common.AccountCreationAuth, len(genAuths))
// Convert to a non-pointer slice
for i := 0; i < len(genAuths); i++ {
auths[i] = *genAuths[i]
}
// Add a duplicate one to check the not exist condition
err := l2DB.AddAccountCreationAuth(genAuths[0])
require.NoError(t, err)
// Add to the DB
err = l2DB.AddManyAccountCreationAuth(auths)
require.NoError(t, err)
// Assert the result
for i := 0; i < len(auths); i++ {
// Fetch from DB
auth, err := l2DB.GetAccountCreationAuth(auths[i].EthAddr)
require.NoError(t, err)
// Check fetched vs generated
assert.Equal(t, auths[i].EthAddr, auth.EthAddr)
assert.Equal(t, auths[i].BJJ, auth.BJJ)
assert.Equal(t, auths[i].Signature, auth.Signature)
assert.Equal(t, auths[i].Timestamp.Unix(), auths[i].Timestamp.Unix())
nameZone, offset := auths[i].Timestamp.Zone()
assert.Equal(t, "UTC", nameZone)
assert.Equal(t, 0, offset)
}
}
func TestAddGet(t *testing.T) { func TestAddGet(t *testing.T) {
err := prepareHistoryDB(historyDB) err := prepareHistoryDB(historyDB)
if err != nil { if err != nil {
View File
@@ -6,7 +6,7 @@ import (
"time" "time"
ethCommon "github.com/ethereum/go-ethereum/common" ethCommon "github.com/ethereum/go-ethereum/common"
"github.com/hermeznetwork/hermez-node/api/apitypes" "github.com/hermeznetwork/hermez-node/apitypes"
"github.com/hermeznetwork/hermez-node/common" "github.com/hermeznetwork/hermez-node/common"
"github.com/iden3/go-iden3-crypto/babyjub" "github.com/iden3/go-iden3-crypto/babyjub"
) )
View File
@@ -1,11 +1,5 @@
-- +migrate Up -- +migrate Up
-- NOTE: We use "DECIMAL(78,0)" to encode go *big.Int types. All the *big.Int
-- that we deal with represent a value in the SNARK field, which is an integer
-- of 256 bits. `log(2**256, 10) = 77.06`: that is, a 256 bit number can have
-- at most 78 digits, so we use this value to specify the precision in the
-- PostgreSQL DECIMAL guaranteeing that we will never lose precision.
-- History -- History
CREATE TABLE block ( CREATE TABLE block (
eth_block_num BIGINT PRIMARY KEY, eth_block_num BIGINT PRIMARY KEY,
@@ -28,10 +22,10 @@ CREATE TABLE batch (
forger_addr BYTEA NOT NULL, -- fake foreign key for coordinator forger_addr BYTEA NOT NULL, -- fake foreign key for coordinator
fees_collected BYTEA NOT NULL, fees_collected BYTEA NOT NULL,
fee_idxs_coordinator BYTEA NOT NULL, fee_idxs_coordinator BYTEA NOT NULL,
state_root DECIMAL(78,0) NOT NULL, state_root BYTEA NOT NULL,
num_accounts BIGINT NOT NULL, num_accounts BIGINT NOT NULL,
last_idx BIGINT NOT NULL, last_idx BIGINT NOT NULL,
exit_root DECIMAL(78,0) NOT NULL, exit_root BYTEA NOT NULL,
forge_l1_txs_num BIGINT, forge_l1_txs_num BIGINT,
slot_num BIGINT NOT NULL, slot_num BIGINT NOT NULL,
total_fees_usd NUMERIC total_fees_usd NUMERIC
@@ -40,7 +34,7 @@ CREATE TABLE batch (
CREATE TABLE bid ( CREATE TABLE bid (
item_id SERIAL PRIMARY KEY, item_id SERIAL PRIMARY KEY,
slot_num BIGINT NOT NULL, slot_num BIGINT NOT NULL,
bid_value DECIMAL(78,0) NOT NULL, bid_value BYTEA NOT NULL,
eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE, eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
bidder_addr BYTEA NOT NULL -- fake foreign key for coordinator bidder_addr BYTEA NOT NULL -- fake foreign key for coordinator
); );
@@ -112,7 +106,7 @@ CREATE TABLE account_update (
batch_num BIGINT NOT NULL REFERENCES batch (batch_num) ON DELETE CASCADE, batch_num BIGINT NOT NULL REFERENCES batch (batch_num) ON DELETE CASCADE,
idx BIGINT NOT NULL REFERENCES account (idx) ON DELETE CASCADE, idx BIGINT NOT NULL REFERENCES account (idx) ON DELETE CASCADE,
nonce BIGINT NOT NULL, nonce BIGINT NOT NULL,
balance DECIMAL(78,0) NOT NULL balance BYTEA NOT NULL
); );
CREATE TABLE exit_tree ( CREATE TABLE exit_tree (
@@ -120,7 +114,7 @@ CREATE TABLE exit_tree (
batch_num BIGINT REFERENCES batch (batch_num) ON DELETE CASCADE, batch_num BIGINT REFERENCES batch (batch_num) ON DELETE CASCADE,
account_idx BIGINT REFERENCES account (idx) ON DELETE CASCADE, account_idx BIGINT REFERENCES account (idx) ON DELETE CASCADE,
merkle_proof BYTEA NOT NULL, merkle_proof BYTEA NOT NULL,
balance DECIMAL(78,0) NOT NULL, balance BYTEA NOT NULL,
instant_withdrawn BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL, instant_withdrawn BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL,
delayed_withdraw_request BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL, delayed_withdraw_request BIGINT REFERENCES block (eth_block_num) ON DELETE SET NULL,
owner BYTEA, owner BYTEA,
@@ -170,7 +164,7 @@ CREATE TABLE tx (
to_idx BIGINT NOT NULL, to_idx BIGINT NOT NULL,
to_eth_addr BYTEA, to_eth_addr BYTEA,
to_bjj BYTEA, to_bjj BYTEA,
amount DECIMAL(78,0) NOT NULL, amount BYTEA NOT NULL,
amount_success BOOLEAN NOT NULL DEFAULT true, amount_success BOOLEAN NOT NULL DEFAULT true,
amount_f NUMERIC NOT NULL, amount_f NUMERIC NOT NULL,
token_id INT NOT NULL REFERENCES token (token_id), token_id INT NOT NULL REFERENCES token (token_id),
@@ -180,7 +174,7 @@ CREATE TABLE tx (
-- L1 -- L1
to_forge_l1_txs_num BIGINT, to_forge_l1_txs_num BIGINT,
user_origin BOOLEAN, user_origin BOOLEAN,
deposit_amount DECIMAL(78,0), deposit_amount BYTEA,
deposit_amount_success BOOLEAN NOT NULL DEFAULT true, deposit_amount_success BOOLEAN NOT NULL DEFAULT true,
deposit_amount_f NUMERIC, deposit_amount_f NUMERIC,
deposit_amount_usd NUMERIC, deposit_amount_usd NUMERIC,
@@ -550,7 +544,7 @@ FOR EACH ROW EXECUTE PROCEDURE forge_l1_user_txs();
CREATE TABLE rollup_vars ( CREATE TABLE rollup_vars (
eth_block_num BIGINT PRIMARY KEY REFERENCES block (eth_block_num) ON DELETE CASCADE, eth_block_num BIGINT PRIMARY KEY REFERENCES block (eth_block_num) ON DELETE CASCADE,
fee_add_token DECIMAL(78,0) NOT NULL, fee_add_token BYTEA NOT NULL,
forge_l1_timeout BIGINT NOT NULL, forge_l1_timeout BIGINT NOT NULL,
withdrawal_delay BIGINT NOT NULL, withdrawal_delay BIGINT NOT NULL,
buckets BYTEA NOT NULL, buckets BYTEA NOT NULL,
@@ -562,7 +556,7 @@ CREATE TABLE bucket_update (
eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE, eth_block_num BIGINT NOT NULL REFERENCES block (eth_block_num) ON DELETE CASCADE,
num_bucket BIGINT NOT NULL, num_bucket BIGINT NOT NULL,
block_stamp BIGINT NOT NULL, block_stamp BIGINT NOT NULL,
withdrawals DECIMAL(78,0) NOT NULL withdrawals BYTEA NOT NULL
); );
CREATE TABLE token_exchange ( CREATE TABLE token_exchange (
@@ -578,7 +572,7 @@ CREATE TABLE escape_hatch_withdrawal (
who_addr BYTEA NOT NULL, who_addr BYTEA NOT NULL,
to_addr BYTEA NOT NULL, to_addr BYTEA NOT NULL,
token_addr BYTEA NOT NULL, token_addr BYTEA NOT NULL,
amount DECIMAL(78,0) NOT NULL amount BYTEA NOT NULL
); );
CREATE TABLE auction_vars ( CREATE TABLE auction_vars (
@@ -616,7 +610,7 @@ CREATE TABLE tx_pool (
effective_to_eth_addr BYTEA, effective_to_eth_addr BYTEA,
effective_to_bjj BYTEA, effective_to_bjj BYTEA,
token_id INT NOT NULL REFERENCES token (token_id) ON DELETE CASCADE, token_id INT NOT NULL REFERENCES token (token_id) ON DELETE CASCADE,
amount DECIMAL(78,0) NOT NULL, amount BYTEA NOT NULL,
amount_f NUMERIC NOT NULL, amount_f NUMERIC NOT NULL,
fee SMALLINT NOT NULL, fee SMALLINT NOT NULL,
nonce BIGINT NOT NULL, nonce BIGINT NOT NULL,
@@ -630,7 +624,7 @@ CREATE TABLE tx_pool (
rq_to_eth_addr BYTEA, rq_to_eth_addr BYTEA,
rq_to_bjj BYTEA, rq_to_bjj BYTEA,
rq_token_id INT, rq_token_id INT,
rq_amount DECIMAL(78,0), rq_amount BYTEA,
rq_fee SMALLINT, rq_fee SMALLINT,
rq_nonce BIGINT, rq_nonce BIGINT,
tx_type VARCHAR(40) NOT NULL, tx_type VARCHAR(40) NOT NULL,
@@ -677,22 +671,12 @@ CREATE TABLE node_info (
); );
INSERT INTO node_info(item_id) VALUES (1); -- Always have a single row that we will update INSERT INTO node_info(item_id) VALUES (1); -- Always have a single row that we will update
CREATE VIEW account_state AS SELECT DISTINCT idx,
first_value(nonce) OVER w AS nonce,
first_value(balance) OVER w AS balance,
first_value(eth_block_num) OVER w AS eth_block_num,
first_value(batch_num) OVER w AS batch_num
FROM account_update
window w AS (partition by idx ORDER BY item_id desc);
-- +migrate Down -- +migrate Down
-- triggers -- triggers
DROP TRIGGER IF EXISTS trigger_token_usd_update ON token; DROP TRIGGER IF EXISTS trigger_token_usd_update ON token;
DROP TRIGGER IF EXISTS trigger_set_tx ON tx; DROP TRIGGER IF EXISTS trigger_set_tx ON tx;
DROP TRIGGER IF EXISTS trigger_forge_l1_txs ON batch; DROP TRIGGER IF EXISTS trigger_forge_l1_txs ON batch;
DROP TRIGGER IF EXISTS trigger_set_pool_tx ON tx_pool; DROP TRIGGER IF EXISTS trigger_set_pool_tx ON tx_pool;
-- drop views IF EXISTS
DROP VIEW IF EXISTS account_state;
-- functions -- functions
DROP FUNCTION IF EXISTS hez_idx; DROP FUNCTION IF EXISTS hez_idx;
DROP FUNCTION IF EXISTS set_token_usd_update; DROP FUNCTION IF EXISTS set_token_usd_update;
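The `account_state` view added above resolves, per account `idx`, the nonce and balance from the most recent `account_update` row (highest `item_id`) via a window function. Its effect can be sketched by replaying rows in order and letting later rows win (hypothetical sample data, not from the repository):

```go
package main

import "fmt"

// accountUpdate mirrors the columns of the account_update table that the
// account_state view reads. The rows below are invented for illustration.
type accountUpdate struct {
	itemID  int
	idx     int64
	nonce   int64
	balance int64
}

func main() {
	updates := []accountUpdate{
		{itemID: 1, idx: 256, nonce: 0, balance: 100},
		{itemID: 2, idx: 257, nonce: 0, balance: 50},
		{itemID: 3, idx: 256, nonce: 1, balance: 75},
	}
	// The view partitions by idx and orders by item_id DESC, taking
	// first_value of each column: i.e. each idx keeps its latest update.
	// Iterating in item_id order and overwriting computes the same state.
	state := map[int64]accountUpdate{}
	for _, u := range updates {
		state[u.idx] = u
	}
	fmt.Println(state[256].nonce, state[256].balance) // 1 75
	fmt.Println(state[257].nonce, state[257].balance) // 0 50
}
```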
View File
@@ -17,8 +17,7 @@ import (
var ( var (
// ErrStateDBWithoutMT is used when a method that requires a MerkleTree // ErrStateDBWithoutMT is used when a method that requires a MerkleTree
// is called in a StateDB that does not have a MerkleTree defined // is called in a StateDB that does not have a MerkleTree defined
ErrStateDBWithoutMT = errors.New( ErrStateDBWithoutMT = errors.New("Can not call method to use MerkleTree in a StateDB without MerkleTree")
"Can not call method to use MerkleTree in a StateDB without MerkleTree")
// ErrAccountAlreadyExists is used when CreateAccount is called and the // ErrAccountAlreadyExists is used when CreateAccount is called and the
// Account already exists // Account already exists
@@ -29,8 +28,7 @@ var (
ErrIdxNotFound = errors.New("Idx can not be found") ErrIdxNotFound = errors.New("Idx can not be found")
// ErrGetIdxNoCase is used when trying to get the Idx from EthAddr & // ErrGetIdxNoCase is used when trying to get the Idx from EthAddr &
// BJJ with not compatible combination // BJJ with not compatible combination
ErrGetIdxNoCase = errors.New( ErrGetIdxNoCase = errors.New("Can not get Idx due unexpected combination of ethereum Address & BabyJubJub PublicKey")
"Can not get Idx due unexpected combination of ethereum Address & BabyJubJub PublicKey")
// PrefixKeyIdx is the key prefix for idx in the db // PrefixKeyIdx is the key prefix for idx in the db
PrefixKeyIdx = []byte("i:") PrefixKeyIdx = []byte("i:")
@@ -146,8 +144,7 @@ func NewStateDB(cfg Config) (*StateDB, error) {
} }
} }
if cfg.Type == TypeTxSelector && cfg.NLevels != 0 { if cfg.Type == TypeTxSelector && cfg.NLevels != 0 {
return nil, tracerr.Wrap( return nil, tracerr.Wrap(fmt.Errorf("invalid StateDB parameters: StateDB type==TypeStateDB can not have nLevels!=0"))
fmt.Errorf("invalid StateDB parameters: StateDB type==TypeStateDB can not have nLevels!=0"))
} }
return &StateDB{ return &StateDB{
@@ -350,8 +347,7 @@ func GetAccountInTreeDB(sto db.Storage, idx common.Idx) (*common.Account, error)
// CreateAccount creates a new Account in the StateDB for the given Idx. If // CreateAccount creates a new Account in the StateDB for the given Idx. If
// StateDB.MT==nil, MerkleTree is not affected, otherwise updates the // StateDB.MT==nil, MerkleTree is not affected, otherwise updates the
// MerkleTree, returning a CircomProcessorProof. // MerkleTree, returning a CircomProcessorProof.
func (s *StateDB) CreateAccount(idx common.Idx, account *common.Account) ( func (s *StateDB) CreateAccount(idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
*merkletree.CircomProcessorProof, error) {
cpp, err := CreateAccountInTreeDB(s.db.DB(), s.MT, idx, account) cpp, err := CreateAccountInTreeDB(s.db.DB(), s.MT, idx, account)
if err != nil { if err != nil {
return cpp, tracerr.Wrap(err) return cpp, tracerr.Wrap(err)
@@ -365,8 +361,7 @@ func (s *StateDB) CreateAccount(idx common.Idx, account *common.Account) (
// from ExitTree. Creates a new Account in the StateDB for the given Idx. If // from ExitTree. Creates a new Account in the StateDB for the given Idx. If
// StateDB.MT==nil, MerkleTree is not affected, otherwise updates the // StateDB.MT==nil, MerkleTree is not affected, otherwise updates the
// MerkleTree, returning a CircomProcessorProof. // MerkleTree, returning a CircomProcessorProof.
func CreateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx, func CreateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
account *common.Account) (*merkletree.CircomProcessorProof, error) {
// store at the DB the key: v, and value: leaf.Bytes() // store at the DB the key: v, and value: leaf.Bytes()
v, err := account.HashValue() v, err := account.HashValue()
if err != nil { if err != nil {
@@ -415,8 +410,7 @@ func CreateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common
// UpdateAccount updates the Account in the StateDB for the given Idx. If // UpdateAccount updates the Account in the StateDB for the given Idx. If
// StateDB.mt==nil, MerkleTree is not affected, otherwise updates the // StateDB.mt==nil, MerkleTree is not affected, otherwise updates the
// MerkleTree, returning a CircomProcessorProof. // MerkleTree, returning a CircomProcessorProof.
func (s *StateDB) UpdateAccount(idx common.Idx, account *common.Account) ( func (s *StateDB) UpdateAccount(idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
*merkletree.CircomProcessorProof, error) {
return UpdateAccountInTreeDB(s.db.DB(), s.MT, idx, account) return UpdateAccountInTreeDB(s.db.DB(), s.MT, idx, account)
} }
@@ -424,8 +418,7 @@ func (s *StateDB) UpdateAccount(idx common.Idx, account *common.Account) (
// from ExitTree. Updates the Account in the StateDB for the given Idx. If // from ExitTree. Updates the Account in the StateDB for the given Idx. If
// StateDB.mt==nil, MerkleTree is not affected, otherwise updates the // StateDB.mt==nil, MerkleTree is not affected, otherwise updates the
// MerkleTree, returning a CircomProcessorProof. // MerkleTree, returning a CircomProcessorProof.
func UpdateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx, func UpdateAccountInTreeDB(sto db.Storage, mt *merkletree.MerkleTree, idx common.Idx, account *common.Account) (*merkletree.CircomProcessorProof, error) {
account *common.Account) (*merkletree.CircomProcessorProof, error) {
// store at the DB the key: v, and value: account.Bytes() // store at the DB the key: v, and value: account.Bytes()
v, err := account.HashValue() v, err := account.HashValue()
if err != nil { if err != nil {
@@ -510,7 +503,7 @@ func (l *LocalStateDB) CheckpointExists(batchNum common.BatchNum) (bool, error)
return l.db.CheckpointExists(batchNum) return l.db.CheckpointExists(batchNum)
} }
// Reset performs a reset in the LocalStateDB. If fromSynchronizer is true, it // Reset performs a reset in the LocaStateDB. If fromSynchronizer is true, it
// gets the state from LocalStateDB.synchronizerStateDB for the given batchNum. // gets the state from LocalStateDB.synchronizerStateDB for the given batchNum.
// If fromSynchronizer is false, get the state from LocalStateDB checkpoints. // If fromSynchronizer is false, get the state from LocalStateDB checkpoints.
func (l *LocalStateDB) Reset(batchNum common.BatchNum, fromSynchronizer bool) error { func (l *LocalStateDB) Reset(batchNum common.BatchNum, fromSynchronizer bool) error {
View File
@@ -22,8 +22,7 @@ import (
func newAccount(t *testing.T, i int) *common.Account { func newAccount(t *testing.T, i int) *common.Account {
var sk babyjub.PrivateKey var sk babyjub.PrivateKey
_, err := hex.Decode(sk[:], _, err := hex.Decode(sk[:], []byte("0001020304050607080900010203040506070809000102030405060708090001"))
[]byte("0001020304050607080900010203040506070809000102030405060708090001"))
require.NoError(t, err) require.NoError(t, err)
pk := sk.Public() pk := sk.Public()
@@ -372,8 +371,7 @@ func TestCheckpoints(t *testing.T) {
dirLocal, err := ioutil.TempDir("", "ldb") dirLocal, err := ioutil.TempDir("", "ldb")
require.NoError(t, err) require.NoError(t, err)
defer require.NoError(t, os.RemoveAll(dirLocal)) defer require.NoError(t, os.RemoveAll(dirLocal))
ldb, err := NewLocalStateDB(Config{Path: dirLocal, Keep: 128, Type: TypeBatchBuilder, ldb, err := NewLocalStateDB(Config{Path: dirLocal, Keep: 128, Type: TypeBatchBuilder, NLevels: 32}, sdb)
NLevels: 32}, sdb)
require.NoError(t, err) require.NoError(t, err)
// get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB) // get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB)
@@ -394,8 +392,7 @@ func TestCheckpoints(t *testing.T) {
dirLocal2, err := ioutil.TempDir("", "ldb2") dirLocal2, err := ioutil.TempDir("", "ldb2")
require.NoError(t, err) require.NoError(t, err)
defer require.NoError(t, os.RemoveAll(dirLocal2)) defer require.NoError(t, os.RemoveAll(dirLocal2))
ldb2, err := NewLocalStateDB(Config{Path: dirLocal2, Keep: 128, Type: TypeBatchBuilder, ldb2, err := NewLocalStateDB(Config{Path: dirLocal2, Keep: 128, Type: TypeBatchBuilder, NLevels: 32}, sdb)
NLevels: 32}, sdb)
require.NoError(t, err) require.NoError(t, err)
// get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB) // get checkpoint 4 from sdb (StateDB) to ldb (LocalStateDB)
@@ -474,8 +471,7 @@ func TestCheckAccountsTreeTestVectors(t *testing.T) {
ay0 := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1)) ay0 := new(big.Int).Sub(new(big.Int).Exp(big.NewInt(2), big.NewInt(253), nil), big.NewInt(1))
// test value from js version (compatibility-canary) // test value from js version (compatibility-canary)
assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", assert.Equal(t, "1fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff", (hex.EncodeToString(ay0.Bytes())))
(hex.EncodeToString(ay0.Bytes())))
bjjPoint0Comp := babyjub.PackSignY(true, ay0) bjjPoint0Comp := babyjub.PackSignY(true, ay0)
bjj0 := babyjub.PublicKeyComp(bjjPoint0Comp) bjj0 := babyjub.PublicKeyComp(bjjPoint0Comp)
@@ -534,9 +530,7 @@ func TestCheckAccountsTreeTestVectors(t *testing.T) {
require.NoError(t, err) require.NoError(t, err)
} }
// root value generated by js version: // root value generated by js version:
assert.Equal(t, assert.Equal(t, "17298264051379321456969039521810887093935433569451713402227686942080129181291", sdb.MT.Root().BigInt().String())
"13174362770971232417413036794215823584762073355951212910715422236001731746065",
sdb.MT.Root().BigInt().String())
} }
// TestListCheckpoints performs almost the same test than kvdb/kvdb_test.go // TestListCheckpoints performs almost the same test than kvdb/kvdb_test.go
View File
@@ -18,8 +18,7 @@ func concatEthAddrTokenID(addr ethCommon.Address, tokenID common.TokenID) []byte
b = append(b[:], tokenID.Bytes()[:]...) b = append(b[:], tokenID.Bytes()[:]...)
return b return b
} }
func concatEthAddrBJJTokenID(addr ethCommon.Address, pk babyjub.PublicKeyComp, func concatEthAddrBJJTokenID(addr ethCommon.Address, pk babyjub.PublicKeyComp, tokenID common.TokenID) []byte {
tokenID common.TokenID) []byte {
pkComp := pk pkComp := pk
var b []byte var b []byte
b = append(b, addr.Bytes()...) b = append(b, addr.Bytes()...)
@@ -33,8 +32,7 @@ func concatEthAddrBJJTokenID(addr ethCommon.Address, pk babyjub.PublicKeyComp,
// - key: EthAddr & BabyJubJub PublicKey Compressed, value: idx // - key: EthAddr & BabyJubJub PublicKey Compressed, value: idx
// If Idx already exist for the given EthAddr & BJJ, the remaining Idx will be // If Idx already exist for the given EthAddr & BJJ, the remaining Idx will be
// always the smallest one. // always the smallest one.
func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address, func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address, pk babyjub.PublicKeyComp, tokenID common.TokenID) error {
pk babyjub.PublicKeyComp, tokenID common.TokenID) error {
oldIdx, err := s.GetIdxByEthAddrBJJ(addr, pk, tokenID) oldIdx, err := s.GetIdxByEthAddrBJJ(addr, pk, tokenID)
if err == nil { if err == nil {
// EthAddr & BJJ already have an Idx // EthAddr & BJJ already have an Idx
@@ -42,8 +40,7 @@ func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address,
// if new idx is smaller, store the new one // if new idx is smaller, store the new one
// if new idx is bigger, don't store and return, as the used one will be the old // if new idx is bigger, don't store and return, as the used one will be the old
if idx >= oldIdx { if idx >= oldIdx {
log.Debug("StateDB.setIdxByEthAddrBJJ: Idx not stored because there " + log.Debug("StateDB.setIdxByEthAddrBJJ: Idx not stored because there already exist a smaller Idx for the given EthAddr & BJJ")
"already exist a smaller Idx for the given EthAddr & BJJ")
return nil return nil
} }
} }
@@ -83,8 +80,7 @@ func (s *StateDB) setIdxByEthAddrBJJ(idx common.Idx, addr ethCommon.Address,
// GetIdxByEthAddr returns the smallest Idx in the StateDB for the given // GetIdxByEthAddr returns the smallest Idx in the StateDB for the given
 // Ethereum Address. Will return common.Idx(0) and error in case that Idx is
 // not found in the StateDB.
-func (s *StateDB) GetIdxByEthAddr(addr ethCommon.Address, tokenID common.TokenID) (common.Idx,
-	error) {
+func (s *StateDB) GetIdxByEthAddr(addr ethCommon.Address, tokenID common.TokenID) (common.Idx, error) {
 	k := concatEthAddrTokenID(addr, tokenID)
 	b, err := s.db.DB().Get(append(PrefixKeyAddr, k...))
 	if err != nil {
@@ -120,22 +116,18 @@ func (s *StateDB) GetIdxByEthAddrBJJ(addr ethCommon.Address, pk babyjub.PublicKe
 			return common.Idx(0), tracerr.Wrap(ErrIdxNotFound)
 		} else if err != nil {
 			return common.Idx(0),
-				tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d",
-					ErrIdxNotFound, addr.Hex(), pk, tokenID))
+				tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d", ErrIdxNotFound, addr.Hex(), pk, tokenID))
 		}
 		idx, err := common.IdxFromBytes(b)
 		if err != nil {
 			return common.Idx(0),
-				tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d",
-					err, addr.Hex(), pk, tokenID))
+				tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d", err, addr.Hex(), pk, tokenID))
 		}
 		return idx, nil
 	}
 	// rest of cases (included case ToEthAddr==0) are not possible
 	return common.Idx(0),
-		tracerr.Wrap(
-			fmt.Errorf("GetIdxByEthAddrBJJ: Not found, %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d",
-				ErrGetIdxNoCase, addr.Hex(), pk, tokenID))
+		tracerr.Wrap(fmt.Errorf("GetIdxByEthAddrBJJ: Not found, %s: ToEthAddr: %s, ToBJJ: %s, TokenID: %d", ErrGetIdxNoCase, addr.Hex(), pk, tokenID))
 }

 // GetTokenIDsFromIdxs returns a map containing the common.TokenID with its
@@ -145,9 +137,7 @@ func (s *StateDB) GetTokenIDsFromIdxs(idxs []common.Idx) (map[common.TokenID]com
 	for i := 0; i < len(idxs); i++ {
 		a, err := s.GetAccount(idxs[i])
 		if err != nil {
-			return nil,
-				tracerr.Wrap(fmt.Errorf("GetTokenIDsFromIdxs error on GetAccount with Idx==%d: %s",
-					idxs[i], err.Error()))
+			return nil, tracerr.Wrap(fmt.Errorf("GetTokenIDsFromIdxs error on GetAccount with Idx==%d: %s", idxs[i], err.Error()))
 		}
 		m[a.TokenID] = idxs[i]
 	}

View File

@@ -13,9 +13,6 @@ import (
 	"github.com/hermeznetwork/hermez-node/log"
 	"github.com/hermeznetwork/tracerr"
 	"github.com/jmoiron/sqlx"
-	//nolint:errcheck // driver for postgres DB
-	_ "github.com/lib/pq"
 	migrate "github.com/rubenv/sql-migrate"
 	"github.com/russross/meddler"
 	"golang.org/x/sync/semaphore"
@@ -96,8 +93,8 @@ type APIConnectionController struct {
 	timeout time.Duration
 }

-// NewAPIConnectionController initialize APIConnectionController
-func NewAPIConnectionController(maxConnections int, timeout time.Duration) *APIConnectionController {
+// NewAPICnnectionController initialize APIConnectionController
+func NewAPICnnectionController(maxConnections int, timeout time.Duration) *APIConnectionController {
 	return &APIConnectionController{
 		smphr:   semaphore.NewWeighted(int64(maxConnections)),
 		timeout: timeout,
@@ -168,11 +165,7 @@ func (b BigIntMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
 		return tracerr.Wrap(fmt.Errorf("BigIntMeddler.PostRead: nil pointer"))
 	}
 	field := fieldPtr.(**big.Int)
-	var ok bool
-	*field, ok = new(big.Int).SetString(*ptr, 10)
-	if !ok {
-		return tracerr.Wrap(fmt.Errorf("big.Int.SetString failed on \"%v\"", *ptr))
-	}
+	*field = new(big.Int).SetBytes([]byte(*ptr))
 	return nil
 }
@@ -180,7 +173,7 @@ func (b BigIntMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
 func (b BigIntMeddler) PreWrite(fieldPtr interface{}) (saveValue interface{}, err error) {
 	field := fieldPtr.(*big.Int)
-	return field.String(), nil
+	return field.Bytes(), nil
 }

 // BigIntNullMeddler encodes or decodes the field value to or from JSON
@@ -205,12 +198,7 @@ func (b BigIntNullMeddler) PostRead(fieldPtr, scanTarget interface{}) error {
 	if ptr == nil {
 		return tracerr.Wrap(fmt.Errorf("BigIntMeddler.PostRead: nil pointer"))
 	}
-	var ok bool
-	*field, ok = new(big.Int).SetString(string(ptr), 10)
-	if !ok {
-		return tracerr.Wrap(fmt.Errorf("big.Int.SetString failed on \"%v\"", string(ptr)))
-	}
+	*field = new(big.Int).SetBytes(ptr)
 	return nil
 }
@@ -220,7 +208,7 @@ func (b BigIntNullMeddler) PreWrite(fieldPtr interface{}) (saveValue interface{}
 	if field == nil {
 		return nil, nil
 	}
-	return field.String(), nil
+	return field.Bytes(), nil
 }

 // SliceToSlicePtrs converts any []Foo to []*Foo

View File

@@ -1,13 +1,9 @@
 package db

 import (
-	"math/big"
-	"os"
 	"testing"

-	"github.com/russross/meddler"
 	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
 )

 type foo struct {
@@ -37,42 +33,3 @@ func TestSlicePtrsToSlice(t *testing.T) {
 		assert.Equal(t, *a[i], b[i])
 	}
 }
-
-func TestBigInt(t *testing.T) {
-	pass := os.Getenv("POSTGRES_PASS")
-	db, err := InitSQLDB(5432, "localhost", "hermez", pass, "hermez")
-	require.NoError(t, err)
-	defer func() {
-		_, err := db.Exec("DROP TABLE IF EXISTS test_big_int;")
-		require.NoError(t, err)
-		err = db.Close()
-		require.NoError(t, err)
-	}()
-	_, err = db.Exec("DROP TABLE IF EXISTS test_big_int;")
-	require.NoError(t, err)
-	_, err = db.Exec(`CREATE TABLE test_big_int (
-		item_id SERIAL PRIMARY KEY,
-		value1 DECIMAL(78, 0) NOT NULL,
-		value2 DECIMAL(78, 0),
-		value3 DECIMAL(78, 0)
-	);`)
-	require.NoError(t, err)
-	type Entry struct {
-		ItemID int      `meddler:"item_id"`
-		Value1 *big.Int `meddler:"value1,bigint"`
-		Value2 *big.Int `meddler:"value2,bigintnull"`
-		Value3 *big.Int `meddler:"value3,bigintnull"`
-	}
-	entry := Entry{ItemID: 1, Value1: big.NewInt(1234567890), Value2: big.NewInt(9876543210), Value3: nil}
-	err = meddler.Insert(db, "test_big_int", &entry)
-	require.NoError(t, err)
-	var dbEntry Entry
-	err = meddler.QueryRow(db, &dbEntry, "SELECT * FROM test_big_int WHERE item_id = 1;")
-	require.NoError(t, err)
-	assert.Equal(t, entry, dbEntry)
-}

View File

@@ -70,8 +70,7 @@ type AuctionEventInitialize struct {
 }

 // AuctionVariables returns the AuctionVariables from the initialize event
-func (ei *AuctionEventInitialize) AuctionVariables(
-	InitialMinimalBidding *big.Int) *common.AuctionVariables {
+func (ei *AuctionEventInitialize) AuctionVariables(InitialMinimalBidding *big.Int) *common.AuctionVariables {
 	return &common.AuctionVariables{
 		EthBlockNum:     0,
 		DonationAddress: ei.DonationAddress,
@@ -223,15 +222,12 @@ type AuctionInterface interface {
 	AuctionGetAllocationRatio() ([3]uint16, error)
 	AuctionSetDonationAddress(newDonationAddress ethCommon.Address) (*types.Transaction, error)
 	AuctionGetDonationAddress() (*ethCommon.Address, error)
-	AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address,
-		newBootCoordinatorURL string) (*types.Transaction, error)
+	AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address, newBootCoordinatorURL string) (*types.Transaction, error)
 	AuctionGetBootCoordinator() (*ethCommon.Address, error)
-	AuctionChangeDefaultSlotSetBid(slotSet int64,
-		newInitialMinBid *big.Int) (*types.Transaction, error)
+	AuctionChangeDefaultSlotSetBid(slotSet int64, newInitialMinBid *big.Int) (*types.Transaction, error)

 	// Coordinator Management
-	AuctionSetCoordinator(forger ethCommon.Address, coordinatorURL string) (*types.Transaction,
-		error)
+	AuctionSetCoordinator(forger ethCommon.Address, coordinatorURL string) (*types.Transaction, error)

 	// Slot Info
 	AuctionGetSlotNumber(blockNum int64) (int64, error)
@@ -241,8 +237,7 @@ type AuctionInterface interface {
 	AuctionGetSlotSet(slot int64) (*big.Int, error)

 	// Bidding
-	AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int, deadline *big.Int) (
-		tx *types.Transaction, err error)
+	AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int, deadline *big.Int) (tx *types.Transaction, err error)
 	AuctionMultiBid(amount *big.Int, startingSlot, endingSlot int64, slotSets [6]bool,
 		maxBid, minBid, deadline *big.Int) (tx *types.Transaction, err error)
@@ -260,7 +255,7 @@ type AuctionInterface interface {
 	AuctionConstants() (*common.AuctionConstants, error)
 	AuctionEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*AuctionEvents, error)
-	AuctionEventInit(genesisBlockNum int64) (*AuctionEventInitialize, int64, error)
+	AuctionEventInit() (*AuctionEventInitialize, int64, error)
 }

 //
@@ -280,10 +275,8 @@ type AuctionClient struct {
 }

 // NewAuctionClient creates a new AuctionClient. `tokenAddress` is the address of the HEZ tokens.
-func NewAuctionClient(client *EthereumClient, address ethCommon.Address,
-	tokenHEZCfg TokenConfig) (*AuctionClient, error) {
-	contractAbi, err :=
-		abi.JSON(strings.NewReader(string(HermezAuctionProtocol.HermezAuctionProtocolABI)))
+func NewAuctionClient(client *EthereumClient, address ethCommon.Address, tokenHEZCfg TokenConfig) (*AuctionClient, error) {
+	contractAbi, err := abi.JSON(strings.NewReader(string(HermezAuctionProtocol.HermezAuctionProtocolABI)))
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
@@ -338,8 +331,7 @@ func (c *AuctionClient) AuctionGetSlotDeadline() (slotDeadline uint8, err error)
 }

 // AuctionSetOpenAuctionSlots is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetOpenAuctionSlots(
-	newOpenAuctionSlots uint16) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetOpenAuctionSlots(newOpenAuctionSlots uint16) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -363,8 +355,7 @@ func (c *AuctionClient) AuctionGetOpenAuctionSlots() (openAuctionSlots uint16, e
 }

 // AuctionSetClosedAuctionSlots is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetClosedAuctionSlots(
-	newClosedAuctionSlots uint16) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetClosedAuctionSlots(newClosedAuctionSlots uint16) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -388,8 +379,7 @@ func (c *AuctionClient) AuctionGetClosedAuctionSlots() (closedAuctionSlots uint1
 }

 // AuctionSetOutbidding is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetOutbidding(newOutbidding uint16) (tx *types.Transaction,
-	err error) {
+func (c *AuctionClient) AuctionSetOutbidding(newOutbidding uint16) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		12500000, //nolint:gomnd
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -413,8 +403,7 @@ func (c *AuctionClient) AuctionGetOutbidding() (outbidding uint16, err error) {
 }

 // AuctionSetAllocationRatio is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetAllocationRatio(
-	newAllocationRatio [3]uint16) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetAllocationRatio(newAllocationRatio [3]uint16) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -438,8 +427,7 @@ func (c *AuctionClient) AuctionGetAllocationRatio() (allocationRation [3]uint16,
 }

 // AuctionSetDonationAddress is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetDonationAddress(
-	newDonationAddress ethCommon.Address) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetDonationAddress(newDonationAddress ethCommon.Address) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -452,8 +440,7 @@ func (c *AuctionClient) AuctionSetDonationAddress(
 }

 // AuctionGetDonationAddress is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetDonationAddress() (donationAddress *ethCommon.Address,
-	err error) {
+func (c *AuctionClient) AuctionGetDonationAddress() (donationAddress *ethCommon.Address, err error) {
 	var _donationAddress ethCommon.Address
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		_donationAddress, err = c.auction.GetDonationAddress(c.opts)
@@ -465,13 +452,11 @@ func (c *AuctionClient) AuctionGetDonationAddress() (donationAddress *ethCommon.
 }

 // AuctionSetBootCoordinator is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address,
-	newBootCoordinatorURL string) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetBootCoordinator(newBootCoordinator ethCommon.Address, newBootCoordinatorURL string) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
-			return c.auction.SetBootCoordinator(auth, newBootCoordinator,
-				newBootCoordinatorURL)
+			return c.auction.SetBootCoordinator(auth, newBootCoordinator, newBootCoordinatorURL)
 		},
 	); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("Failed setting bootCoordinator: %w", err))
@@ -480,8 +465,7 @@ func (c *AuctionClient) AuctionSetBootCoordinator(newBootCoordinator ethCommon.A
 }

 // AuctionGetBootCoordinator is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetBootCoordinator() (bootCoordinator *ethCommon.Address,
-	err error) {
+func (c *AuctionClient) AuctionGetBootCoordinator() (bootCoordinator *ethCommon.Address, err error) {
 	var _bootCoordinator ethCommon.Address
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		_bootCoordinator, err = c.auction.GetBootCoordinator(c.opts)
@@ -493,8 +477,7 @@ func (c *AuctionClient) AuctionGetBootCoordinator() (bootCoordinator *ethCommon.
 }

 // AuctionChangeDefaultSlotSetBid is the interface to call the smart contract function
-func (c *AuctionClient) AuctionChangeDefaultSlotSetBid(slotSet int64,
-	newInitialMinBid *big.Int) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionChangeDefaultSlotSetBid(slotSet int64, newInitialMinBid *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -508,8 +491,7 @@ func (c *AuctionClient) AuctionChangeDefaultSlotSetBid(slotSet int64,
 }

 // AuctionGetClaimableHEZ is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetClaimableHEZ(
-	claimAddress ethCommon.Address) (claimableHEZ *big.Int, err error) {
+func (c *AuctionClient) AuctionGetClaimableHEZ(claimAddress ethCommon.Address) (claimableHEZ *big.Int, err error) {
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		claimableHEZ, err = c.auction.GetClaimableHEZ(c.opts, claimAddress)
 		return tracerr.Wrap(err)
@@ -520,8 +502,7 @@ func (c *AuctionClient) AuctionGetClaimableHEZ(
 }

 // AuctionSetCoordinator is the interface to call the smart contract function
-func (c *AuctionClient) AuctionSetCoordinator(forger ethCommon.Address,
-	coordinatorURL string) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionSetCoordinator(forger ethCommon.Address, coordinatorURL string) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -570,8 +551,7 @@ func (c *AuctionClient) AuctionGetSlotSet(slot int64) (slotSet *big.Int, err err
 }

 // AuctionGetDefaultSlotSetBid is the interface to call the smart contract function
-func (c *AuctionClient) AuctionGetDefaultSlotSetBid(slotSet uint8) (minBidSlotSet *big.Int,
-	err error) {
+func (c *AuctionClient) AuctionGetDefaultSlotSetBid(slotSet uint8) (minBidSlotSet *big.Int, err error) {
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		minBidSlotSet, err = c.auction.GetDefaultSlotSetBid(c.opts, slotSet)
 		return tracerr.Wrap(err)
@@ -594,8 +574,7 @@ func (c *AuctionClient) AuctionGetSlotNumber(blockNum int64) (slot int64, err er
 }

 // AuctionBid is the interface to call the smart contract function
-func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int,
-	deadline *big.Int) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.Int, deadline *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -607,8 +586,7 @@ func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.I
 			}
 			tokenName := c.tokenHEZCfg.Name
 			tokenAddr := c.tokenHEZCfg.Address
-			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
-				amount, nonce, deadline, tokenName)
+			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, amount, nonce, deadline, tokenName)
 			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
 			permit := createPermit(owner, spender, amount, deadline, digest, signature)
 			_slot := big.NewInt(slot)
@@ -621,8 +599,8 @@ func (c *AuctionClient) AuctionBid(amount *big.Int, slot int64, bidAmount *big.I
 }

 // AuctionMultiBid is the interface to call the smart contract function
-func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlot int64,
-	slotSets [6]bool, maxBid, minBid, deadline *big.Int) (tx *types.Transaction, err error) {
+func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlot int64, slotSets [6]bool,
+	maxBid, minBid, deadline *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		1000000, //nolint:gomnd
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -635,14 +613,12 @@ func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlo
 			tokenName := c.tokenHEZCfg.Name
 			tokenAddr := c.tokenHEZCfg.Address
-			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID,
-				amount, nonce, deadline, tokenName)
+			digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, amount, nonce, deadline, tokenName)
 			signature, _ := c.client.ks.SignHash(*c.client.account, digest)
 			permit := createPermit(owner, spender, amount, deadline, digest, signature)
 			_startingSlot := big.NewInt(startingSlot)
 			_endingSlot := big.NewInt(endingSlot)
-			return c.auction.ProcessMultiBid(auth, amount, _startingSlot, _endingSlot,
-				slotSets, maxBid, minBid, permit)
+			return c.auction.ProcessMultiBid(auth, amount, _startingSlot, _endingSlot, slotSets, maxBid, minBid, permit)
 		},
 	); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("Failed multibid: %w", err))
@@ -651,8 +627,7 @@ func (c *AuctionClient) AuctionMultiBid(amount *big.Int, startingSlot, endingSlo
 }

 // AuctionCanForge is the interface to call the smart contract function
-func (c *AuctionClient) AuctionCanForge(forger ethCommon.Address, blockNum int64) (canForge bool,
-	err error) {
+func (c *AuctionClient) AuctionCanForge(forger ethCommon.Address, blockNum int64) (canForge bool, err error) {
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		canForge, err = c.auction.CanForge(c.opts, forger, big.NewInt(blockNum))
 		return tracerr.Wrap(err)
@@ -705,8 +680,7 @@ func (c *AuctionClient) AuctionConstants() (auctionConstants *common.AuctionCons
 		if err != nil {
 			return tracerr.Wrap(err)
 		}
-		auctionConstants.InitialMinimalBidding, err =
-			c.auction.INITIALMINIMALBIDDING(c.opts)
+		auctionConstants.InitialMinimalBidding, err = c.auction.INITIALMINIMALBIDDING(c.opts)
 		if err != nil {
 			return tracerr.Wrap(err)
 		}
@@ -777,54 +751,37 @@ func (c *AuctionClient) AuctionVariables() (auctionVariables *common.AuctionVari
 }

 var (
-	logAuctionNewBid = crypto.Keccak256Hash([]byte(
-		"NewBid(uint128,uint128,address)"))
-	logAuctionNewSlotDeadline = crypto.Keccak256Hash([]byte(
-		"NewSlotDeadline(uint8)"))
-	logAuctionNewClosedAuctionSlots = crypto.Keccak256Hash([]byte(
-		"NewClosedAuctionSlots(uint16)"))
-	logAuctionNewOutbidding = crypto.Keccak256Hash([]byte(
-		"NewOutbidding(uint16)"))
-	logAuctionNewDonationAddress = crypto.Keccak256Hash([]byte(
-		"NewDonationAddress(address)"))
-	logAuctionNewBootCoordinator = crypto.Keccak256Hash([]byte(
-		"NewBootCoordinator(address,string)"))
-	logAuctionNewOpenAuctionSlots = crypto.Keccak256Hash([]byte(
-		"NewOpenAuctionSlots(uint16)"))
-	logAuctionNewAllocationRatio = crypto.Keccak256Hash([]byte(
-		"NewAllocationRatio(uint16[3])"))
-	logAuctionSetCoordinator = crypto.Keccak256Hash([]byte(
-		"SetCoordinator(address,address,string)"))
-	logAuctionNewForgeAllocated = crypto.Keccak256Hash([]byte(
-		"NewForgeAllocated(address,address,uint128,uint128,uint128,uint128)"))
-	logAuctionNewDefaultSlotSetBid = crypto.Keccak256Hash([]byte(
-		"NewDefaultSlotSetBid(uint128,uint128)"))
-	logAuctionNewForge = crypto.Keccak256Hash([]byte(
-		"NewForge(address,uint128)"))
-	logAuctionHEZClaimed = crypto.Keccak256Hash([]byte(
-		"HEZClaimed(address,uint128)"))
-	logAuctionInitialize = crypto.Keccak256Hash([]byte(
-		"InitializeHermezAuctionProtocolEvent(address,address,string," +
-			"uint16,uint8,uint16,uint16,uint16[3])"))
+	logAuctionNewBid                = crypto.Keccak256Hash([]byte("NewBid(uint128,uint128,address)"))
+	logAuctionNewSlotDeadline       = crypto.Keccak256Hash([]byte("NewSlotDeadline(uint8)"))
+	logAuctionNewClosedAuctionSlots = crypto.Keccak256Hash([]byte("NewClosedAuctionSlots(uint16)"))
+	logAuctionNewOutbidding         = crypto.Keccak256Hash([]byte("NewOutbidding(uint16)"))
+	logAuctionNewDonationAddress    = crypto.Keccak256Hash([]byte("NewDonationAddress(address)"))
+	logAuctionNewBootCoordinator    = crypto.Keccak256Hash([]byte("NewBootCoordinator(address,string)"))
+	logAuctionNewOpenAuctionSlots   = crypto.Keccak256Hash([]byte("NewOpenAuctionSlots(uint16)"))
+	logAuctionNewAllocationRatio    = crypto.Keccak256Hash([]byte("NewAllocationRatio(uint16[3])"))
+	logAuctionSetCoordinator        = crypto.Keccak256Hash([]byte("SetCoordinator(address,address,string)"))
+	logAuctionNewForgeAllocated     = crypto.Keccak256Hash([]byte("NewForgeAllocated(address,address,uint128,uint128,uint128,uint128)"))
+	logAuctionNewDefaultSlotSetBid  = crypto.Keccak256Hash([]byte("NewDefaultSlotSetBid(uint128,uint128)"))
+	logAuctionNewForge              = crypto.Keccak256Hash([]byte("NewForge(address,uint128)"))
+	logAuctionHEZClaimed            = crypto.Keccak256Hash([]byte("HEZClaimed(address,uint128)"))
+	logAuctionInitialize            = crypto.Keccak256Hash([]byte(
+		"InitializeHermezAuctionProtocolEvent(address,address,string,uint16,uint8,uint16,uint16,uint16[3])"))
 )

 // AuctionEventInit returns the initialize event with its corresponding block number
-func (c *AuctionClient) AuctionEventInit(genesisBlockNum int64) (*AuctionEventInitialize, int64, error) {
+func (c *AuctionClient) AuctionEventInit() (*AuctionEventInitialize, int64, error) {
 	query := ethereum.FilterQuery{
 		Addresses: []ethCommon.Address{
 			c.address,
 		},
-		FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
-		ToBlock:   big.NewInt(genesisBlockNum),
-		Topics:    [][]ethCommon.Hash{{logAuctionInitialize}},
+		Topics: [][]ethCommon.Hash{{logAuctionInitialize}},
 	}
 	logs, err := c.client.client.FilterLogs(context.Background(), query)
 	if err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	if len(logs) != 1 {
-		return nil, 0,
-			tracerr.Wrap(fmt.Errorf("no event of type InitializeHermezAuctionProtocolEvent found"))
+		return nil, 0, tracerr.Wrap(fmt.Errorf("no event of type InitializeHermezAuctionProtocolEvent found"))
 	}
 	vLog := logs[0]
 	if vLog.Topics[0] != logAuctionInitialize {
@@ -872,8 +829,7 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
 	for _, vLog := range logs {
 		if blockHash != nil && vLog.BlockHash != *blockHash {
-			log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got",
-				vLog.BlockHash.String())
+			log.Errorw("Block hash mismatch", "expected", blockHash.String(), "got", vLog.BlockHash.String())
 			return nil, tracerr.Wrap(ErrBlockHashMismatchEvent)
 		}
 		switch vLog.Topics[0] {
@@ -884,8 +840,7 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
 				Address   ethCommon.Address
 			}
 			var newBid AuctionEventNewBid
-			if err := c.contractAbi.UnpackIntoInterface(&auxNewBid, "NewBid",
-				vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&auxNewBid, "NewBid", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
 			newBid.BidAmount = auxNewBid.BidAmount
@@ -894,60 +849,48 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
 			auctionEvents.NewBid = append(auctionEvents.NewBid, newBid)
 		case logAuctionNewSlotDeadline:
 			var newSlotDeadline AuctionEventNewSlotDeadline
-			if err := c.contractAbi.UnpackIntoInterface(&newSlotDeadline,
-				"NewSlotDeadline", vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&newSlotDeadline, "NewSlotDeadline", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
 			auctionEvents.NewSlotDeadline = append(auctionEvents.NewSlotDeadline, newSlotDeadline)
 		case logAuctionNewClosedAuctionSlots:
 			var newClosedAuctionSlots AuctionEventNewClosedAuctionSlots
-			if err := c.contractAbi.UnpackIntoInterface(&newClosedAuctionSlots,
-				"NewClosedAuctionSlots", vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&newClosedAuctionSlots, "NewClosedAuctionSlots", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
-			auctionEvents.NewClosedAuctionSlots =
-				append(auctionEvents.NewClosedAuctionSlots, newClosedAuctionSlots)
+			auctionEvents.NewClosedAuctionSlots = append(auctionEvents.NewClosedAuctionSlots, newClosedAuctionSlots)
 		case logAuctionNewOutbidding:
 			var newOutbidding AuctionEventNewOutbidding
-			if err := c.contractAbi.UnpackIntoInterface(&newOutbidding, "NewOutbidding",
-				vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&newOutbidding, "NewOutbidding", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
 			auctionEvents.NewOutbidding = append(auctionEvents.NewOutbidding, newOutbidding)
 		case logAuctionNewDonationAddress:
 			var newDonationAddress AuctionEventNewDonationAddress
 			newDonationAddress.NewDonationAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
-			auctionEvents.NewDonationAddress = append(auctionEvents.NewDonationAddress,
-				newDonationAddress)
+			auctionEvents.NewDonationAddress = append(auctionEvents.NewDonationAddress, newDonationAddress)
 		case logAuctionNewBootCoordinator:
 			var newBootCoordinator AuctionEventNewBootCoordinator
-			if err := c.contractAbi.UnpackIntoInterface(&newBootCoordinator,
-				"NewBootCoordinator", vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&newBootCoordinator, "NewBootCoordinator", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
 			newBootCoordinator.NewBootCoordinator = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
-			auctionEvents.NewBootCoordinator = append(auctionEvents.NewBootCoordinator,
-				newBootCoordinator)
+			auctionEvents.NewBootCoordinator = append(auctionEvents.NewBootCoordinator, newBootCoordinator)
 		case logAuctionNewOpenAuctionSlots:
 			var newOpenAuctionSlots AuctionEventNewOpenAuctionSlots
-			if err := c.contractAbi.UnpackIntoInterface(&newOpenAuctionSlots,
-				"NewOpenAuctionSlots", vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&newOpenAuctionSlots, "NewOpenAuctionSlots", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
-			auctionEvents.NewOpenAuctionSlots =
-				append(auctionEvents.NewOpenAuctionSlots, newOpenAuctionSlots)
+			auctionEvents.NewOpenAuctionSlots = append(auctionEvents.NewOpenAuctionSlots, newOpenAuctionSlots)
 		case logAuctionNewAllocationRatio:
 			var newAllocationRatio AuctionEventNewAllocationRatio
-			if err := c.contractAbi.UnpackIntoInterface(&newAllocationRatio,
-				"NewAllocationRatio", vLog.Data); err != nil {
+			if err := c.contractAbi.UnpackIntoInterface(&newAllocationRatio, "NewAllocationRatio", vLog.Data); err != nil {
 				return nil, tracerr.Wrap(err)
 			}
-			auctionEvents.NewAllocationRatio = append(auctionEvents.NewAllocationRatio,
-				newAllocationRatio)
+			auctionEvents.NewAllocationRatio = append(auctionEvents.NewAllocationRatio, newAllocationRatio)
 		case logAuctionSetCoordinator:
 			var setCoordinator AuctionEventSetCoordinator
-			if err := c.contractAbi.UnpackIntoInterface(&setCoordinator,
+			if err := c.contractAbi.UnpackIntoInterface(&setCoordinator, "SetCoordinator", vLog.Data); err != nil {
"SetCoordinator", vLog.Data); err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
setCoordinator.BidderAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes()) setCoordinator.BidderAddress = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
@@ -955,29 +898,25 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
auctionEvents.SetCoordinator = append(auctionEvents.SetCoordinator, setCoordinator) auctionEvents.SetCoordinator = append(auctionEvents.SetCoordinator, setCoordinator)
case logAuctionNewForgeAllocated: case logAuctionNewForgeAllocated:
var newForgeAllocated AuctionEventNewForgeAllocated var newForgeAllocated AuctionEventNewForgeAllocated
if err := c.contractAbi.UnpackIntoInterface(&newForgeAllocated, if err := c.contractAbi.UnpackIntoInterface(&newForgeAllocated, "NewForgeAllocated", vLog.Data); err != nil {
"NewForgeAllocated", vLog.Data); err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
newForgeAllocated.Bidder = ethCommon.BytesToAddress(vLog.Topics[1].Bytes()) newForgeAllocated.Bidder = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
newForgeAllocated.Forger = ethCommon.BytesToAddress(vLog.Topics[2].Bytes()) newForgeAllocated.Forger = ethCommon.BytesToAddress(vLog.Topics[2].Bytes())
newForgeAllocated.SlotToForge = new(big.Int).SetBytes(vLog.Topics[3][:]).Int64() newForgeAllocated.SlotToForge = new(big.Int).SetBytes(vLog.Topics[3][:]).Int64()
auctionEvents.NewForgeAllocated = append(auctionEvents.NewForgeAllocated, auctionEvents.NewForgeAllocated = append(auctionEvents.NewForgeAllocated, newForgeAllocated)
newForgeAllocated)
case logAuctionNewDefaultSlotSetBid: case logAuctionNewDefaultSlotSetBid:
var auxNewDefaultSlotSetBid struct { var auxNewDefaultSlotSetBid struct {
SlotSet *big.Int SlotSet *big.Int
NewInitialMinBid *big.Int NewInitialMinBid *big.Int
} }
var newDefaultSlotSetBid AuctionEventNewDefaultSlotSetBid var newDefaultSlotSetBid AuctionEventNewDefaultSlotSetBid
if err := c.contractAbi.UnpackIntoInterface(&auxNewDefaultSlotSetBid, if err := c.contractAbi.UnpackIntoInterface(&auxNewDefaultSlotSetBid, "NewDefaultSlotSetBid", vLog.Data); err != nil {
"NewDefaultSlotSetBid", vLog.Data); err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
newDefaultSlotSetBid.NewInitialMinBid = auxNewDefaultSlotSetBid.NewInitialMinBid newDefaultSlotSetBid.NewInitialMinBid = auxNewDefaultSlotSetBid.NewInitialMinBid
newDefaultSlotSetBid.SlotSet = auxNewDefaultSlotSetBid.SlotSet.Int64() newDefaultSlotSetBid.SlotSet = auxNewDefaultSlotSetBid.SlotSet.Int64()
auctionEvents.NewDefaultSlotSetBid = auctionEvents.NewDefaultSlotSetBid = append(auctionEvents.NewDefaultSlotSetBid, newDefaultSlotSetBid)
append(auctionEvents.NewDefaultSlotSetBid, newDefaultSlotSetBid)
case logAuctionNewForge: case logAuctionNewForge:
var newForge AuctionEventNewForge var newForge AuctionEventNewForge
newForge.Forger = ethCommon.BytesToAddress(vLog.Topics[1].Bytes()) newForge.Forger = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
@@ -985,8 +924,7 @@ func (c *AuctionClient) AuctionEventsByBlock(blockNum int64,
auctionEvents.NewForge = append(auctionEvents.NewForge, newForge) auctionEvents.NewForge = append(auctionEvents.NewForge, newForge)
case logAuctionHEZClaimed: case logAuctionHEZClaimed:
var HEZClaimed AuctionEventHEZClaimed var HEZClaimed AuctionEventHEZClaimed
if err := c.contractAbi.UnpackIntoInterface(&HEZClaimed, "HEZClaimed", if err := c.contractAbi.UnpackIntoInterface(&HEZClaimed, "HEZClaimed", vLog.Data); err != nil {
vLog.Data); err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
HEZClaimed.Owner = ethCommon.BytesToAddress(vLog.Topics[1].Bytes()) HEZClaimed.Owner = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())


@@ -28,7 +28,7 @@ func TestAuctionGetCurrentSlotNumber(t *testing.T) {
 }
 func TestAuctionEventInit(t *testing.T) {
-	auctionInit, blockNum, err := auctionClientTest.AuctionEventInit(genesisBlock)
+	auctionInit, blockNum, err := auctionClientTest.AuctionEventInit()
 	require.NoError(t, err)
 	assert.Equal(t, int64(18), blockNum)
 	assert.Equal(t, donationAddressConst, auctionInit.DonationAddress)
@@ -58,8 +58,7 @@ func TestAuctionConstants(t *testing.T) {
 func TestAuctionVariables(t *testing.T) {
 	INITMINBID := new(big.Int)
 	INITMINBID.SetString(minBidStr, 10)
-	defaultSlotSetBid := [6]*big.Int{INITMINBID, INITMINBID, INITMINBID, INITMINBID, INITMINBID,
-		INITMINBID}
+	defaultSlotSetBid := [6]*big.Int{INITMINBID, INITMINBID, INITMINBID, INITMINBID, INITMINBID, INITMINBID}
 	auctionVariables, err := auctionClientTest.AuctionVariables()
 	require.Nil(t, err)
@@ -133,8 +132,7 @@ func TestAuctionSetClosedAuctionSlots(t *testing.T) {
 	require.Nil(t, err)
 	auctionEvents, err := auctionClientTest.AuctionEventsByBlock(currentBlockNum, nil)
 	require.Nil(t, err)
-	assert.Equal(t, newClosedAuctionSlots,
-		auctionEvents.NewClosedAuctionSlots[0].NewClosedAuctionSlots)
+	assert.Equal(t, newClosedAuctionSlots, auctionEvents.NewClosedAuctionSlots[0].NewClosedAuctionSlots)
 	_, err = auctionClientTest.AuctionSetClosedAuctionSlots(closedAuctionSlots)
 	require.Nil(t, err)
 }
@@ -230,8 +228,7 @@ func TestAuctionSetBootCoordinator(t *testing.T) {
 	require.Nil(t, err)
 	assert.Equal(t, newBootCoordinator, auctionEvents.NewBootCoordinator[0].NewBootCoordinator)
 	assert.Equal(t, newBootCoordinatorURL, auctionEvents.NewBootCoordinator[0].NewBootCoordinatorURL)
-	_, err = auctionClientTest.AuctionSetBootCoordinator(bootCoordinatorAddressConst,
-		bootCoordinatorURL)
+	_, err = auctionClientTest.AuctionSetBootCoordinator(bootCoordinatorAddressConst, bootCoordinatorURL)
 	require.Nil(t, err)
 }
@@ -345,8 +342,7 @@ func TestAuctionMultiBid(t *testing.T) {
 	budget := new(big.Int)
 	budget.SetString("45200000000000000000", 10)
 	bidderAddress := governanceAddressConst
-	_, err = auctionClientTest.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
-		maxBid, minBid, deadline)
+	_, err = auctionClientTest.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet, maxBid, minBid, deadline)
 	require.Nil(t, err)
 	currentBlockNum, err := auctionClientTest.client.EthLastBlock()
 	require.Nil(t, err)
@@ -387,8 +383,7 @@ func TestAuctionClaimHEZ(t *testing.T) {
 }
 func TestAuctionForge(t *testing.T) {
-	auctionClientTestHermez, err := NewAuctionClient(ethereumClientHermez,
-		auctionTestAddressConst, tokenHEZ)
+	auctionClientTestHermez, err := NewAuctionClient(ethereumClientHermez, auctionTestAddressConst, tokenHEZ)
 	require.Nil(t, err)
 	slotConst := 4
 	blockNum := int64(int(blocksPerSlot)*slotConst + int(genesisBlock))
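The `TestAuctionForge` hunk computes the first block of a slot as genesis plus `slot * blocksPerSlot`. The slot/block mapping can be sketched in isolation; the constant values below are illustrative, not the deployed ones:

```go
package main

import "fmt"

// firstBlockOfSlot sketches the arithmetic in TestAuctionForge:
// slot s starts blocksPerSlot*s blocks after the auction's genesis block.
func firstBlockOfSlot(genesisBlock, blocksPerSlot, slot int64) int64 {
	return genesisBlock + blocksPerSlot*slot
}

func main() {
	// Illustrative values: genesis at block 18, 40 blocks per slot.
	fmt.Println(firstBlockOfSlot(18, 40, 4)) // prints 178
}
```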


@@ -12,17 +12,6 @@ import (
 var errTODO = fmt.Errorf("TODO: Not implemented yet")
-const (
-	blocksPerDay = (3600 * 24) / 15
-)
-func max(x, y int64) int64 {
-	if x > y {
-		return x
-	}
-	return y
-}
 // ClientInterface is the eth Client interface used by hermez-node modules to
 // interact with Ethereum Blockchain and smart contracts.
 type ClientInterface interface {
@@ -75,19 +64,16 @@ type ClientConfig struct {
 }
 // NewClient creates a new Client to interact with Ethereum and the Hermez smart contracts.
-func NewClient(client *ethclient.Client, account *accounts.Account, ks *ethKeystore.KeyStore,
-	cfg *ClientConfig) (*Client, error) {
+func NewClient(client *ethclient.Client, account *accounts.Account, ks *ethKeystore.KeyStore, cfg *ClientConfig) (*Client, error) {
 	ethereumClient, err := NewEthereumClient(client, account, ks, &cfg.Ethereum)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
-	auctionClient, err := NewAuctionClient(ethereumClient, cfg.Auction.Address,
-		cfg.Auction.TokenHEZ)
+	auctionClient, err := NewAuctionClient(ethereumClient, cfg.Auction.Address, cfg.Auction.TokenHEZ)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}
-	rollupClient, err := NewRollupClient(ethereumClient, cfg.Rollup.Address,
-		cfg.Auction.TokenHEZ)
+	rollupClient, err := NewRollupClient(ethereumClient, cfg.Rollup.Address, cfg.Auction.TokenHEZ)
 	if err != nil {
 		return nil, tracerr.Wrap(err)
 	}


@@ -64,8 +64,7 @@ type EthereumConfig struct {
 	GasPriceDiv uint64
 }
-// EthereumClient is an ethereum client to call Smart Contract methods and check blockchain
-// information.
+// EthereumClient is an ethereum client to call Smart Contract methods and check blockchain information.
 type EthereumClient struct {
 	client  *ethclient.Client
 	chainID *big.Int
@@ -77,8 +76,7 @@ type EthereumClient struct {
 // NewEthereumClient creates a EthereumClient instance. The account is not mandatory (it can
 // be nil). If the account is nil, CallAuth will fail with ErrAccountNil.
-func NewEthereumClient(client *ethclient.Client, account *accounts.Account,
-	ks *ethKeystore.KeyStore, config *EthereumConfig) (*EthereumClient, error) {
+func NewEthereumClient(client *ethclient.Client, account *accounts.Account, ks *ethKeystore.KeyStore, config *EthereumConfig) (*EthereumClient, error) {
 	if config == nil {
 		config = &EthereumConfig{
 			CallGasLimit: defaultCallGasLimit,
@@ -168,8 +166,7 @@ func (c *EthereumClient) NewAuth() (*bind.TransactOpts, error) {
 // This call requires a valid account with Ether that can be spend during the
 // call.
 func (c *EthereumClient) CallAuth(gasLimit uint64,
-	fn func(*ethclient.Client, *bind.TransactOpts) (*types.Transaction, error)) (*types.Transaction,
-	error) {
+	fn func(*ethclient.Client, *bind.TransactOpts) (*types.Transaction, error)) (*types.Transaction, error) {
 	if c.account == nil {
 		return nil, tracerr.Wrap(ErrAccountNil)
 	}
@@ -215,8 +212,7 @@ func (c *EthereumClient) Call(fn func(*ethclient.Client) error) error {
 }
 // EthTransactionReceipt returns the transaction receipt of the given txHash
-func (c *EthereumClient) EthTransactionReceipt(ctx context.Context,
-	txHash ethCommon.Hash) (*types.Receipt, error) {
+func (c *EthereumClient) EthTransactionReceipt(ctx context.Context, txHash ethCommon.Hash) (*types.Receipt, error) {
 	return c.client.TransactionReceipt(ctx, txHash)
 }
@@ -232,15 +228,13 @@ func (c *EthereumClient) EthLastBlock() (int64, error) {
 }
 // EthHeaderByNumber internally calls ethclient.Client HeaderByNumber
-// func (c *EthereumClient) EthHeaderByNumber(ctx context.Context, number *big.Int) (*types.Header,
-// 	error) {
+// func (c *EthereumClient) EthHeaderByNumber(ctx context.Context, number *big.Int) (*types.Header, error) {
 // 	return c.client.HeaderByNumber(ctx, number)
 // }
 // EthBlockByNumber internally calls ethclient.Client BlockByNumber and returns
 // *common.Block. If number == -1, the latests known block is returned.
-func (c *EthereumClient) EthBlockByNumber(ctx context.Context, number int64) (*common.Block,
-	error) {
+func (c *EthereumClient) EthBlockByNumber(ctx context.Context, number int64) (*common.Block, error) {
 	blockNum := big.NewInt(number)
 	if number == -1 {
 		blockNum = nil
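The `EthBlockByNumber` hunk keeps the convention that `number == -1` selects the latest block, by passing a nil block number through to ethclient. That mapping can be sketched on its own (the helper name is ours, not from the repo):

```go
package main

import (
	"fmt"
	"math/big"
)

// blockNumArg sketches the convention used by EthBlockByNumber:
// a number of -1 becomes a nil *big.Int, which ethclient interprets
// as "latest block"; any other value is passed through as a big.Int.
func blockNumArg(number int64) *big.Int {
	if number == -1 {
		return nil
	}
	return big.NewInt(number)
}

func main() {
	fmt.Println(blockNumArg(-1) == nil)  // prints true (latest block)
	fmt.Println(blockNumArg(42).Int64()) // prints 42
}
```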


@@ -14,8 +14,7 @@ import (
 func addBlock(url string) {
 	method := "POST"
-	payload := strings.NewReader(
-		"{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_mine\",\n \"params\":[],\n \"id\":1\n}")
+	payload := strings.NewReader("{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_mine\",\n \"params\":[],\n \"id\":1\n}")
 	client := &http.Client{}
 	req, err := http.NewRequest(method, url, payload)
@@ -46,9 +45,7 @@ func addTime(seconds float64, url string) {
 	secondsStr := strconv.FormatFloat(seconds, 'E', -1, 32)
 	method := "POST"
-	payload := strings.NewReader(
-		"{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_increaseTime\",\n \"params\":[" +
-			secondsStr + "],\n \"id\":1\n}")
+	payload := strings.NewReader("{\n \"jsonrpc\":\"2.0\",\n \"method\":\"evm_increaseTime\",\n \"params\":[" + secondsStr + "],\n \"id\":1\n}")
 	client := &http.Client{}
 	req, err := http.NewRequest(method, url, payload)
@@ -69,16 +66,13 @@ func addTime(seconds float64, url string) {
 	}()
 }
-func createPermitDigest(tokenAddr, owner, spender ethCommon.Address, chainID, value, nonce,
-	deadline *big.Int, tokenName string) ([]byte, error) {
+func createPermitDigest(tokenAddr, owner, spender ethCommon.Address, chainID, value, nonce, deadline *big.Int, tokenName string) ([]byte, error) {
 	// NOTE: We ignore hash.Write errors because we are writing to a memory
 	// buffer and don't expect any errors to occur.
-	abiPermit :=
-		[]byte("Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)")
+	abiPermit := []byte("Permit(address owner,address spender,uint256 value,uint256 nonce,uint256 deadline)")
 	hashPermit := sha3.NewLegacyKeccak256()
 	hashPermit.Write(abiPermit) //nolint:errcheck,gosec
-	abiEIP712Domain :=
-		[]byte("EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)")
+	abiEIP712Domain := []byte("EIP712Domain(string name,string version,uint256 chainId,address verifyingContract)")
 	hashEIP712Domain := sha3.NewLegacyKeccak256()
 	hashEIP712Domain.Write(abiEIP712Domain) //nolint:errcheck,gosec
 	var encodeBytes []byte
@@ -130,8 +124,7 @@ func createPermitDigest(tokenAddr, owner, spender ethCommon.Address, chainID, va
 	return hashBytes2.Sum(nil), nil
 }
-func createPermit(owner, spender ethCommon.Address, amount, deadline *big.Int, digest,
-	signature []byte) []byte {
+func createPermit(owner, spender ethCommon.Address, amount, deadline *big.Int, digest, signature []byte) []byte {
 	r := signature[0:32]
 	s := signature[32:64]
 	v := signature[64] + byte(27) //nolint:gomnd


@@ -26,8 +26,7 @@ var (
 	mnemonic = "explain tackle mirror kit van hammer degree position ginger unfair soup bonus"
 )
-func genAcc(w *hdwallet.Wallet, ks *keystore.KeyStore, i int) (*accounts.Account,
-	ethCommon.Address) {
+func genAcc(w *hdwallet.Wallet, ks *keystore.KeyStore, i int) (*accounts.Account, ethCommon.Address) {
 	path := hdwallet.MustParseDerivationPath(fmt.Sprintf("m/44'/60'/0'/0/%d", i))
 	account, err := w.Derive(path, false)
 	if err != nil {
@@ -112,9 +111,7 @@ func getEnvVariables() {
 	if err != nil {
 		log.Fatal(errEnvVar)
 	}
-	if auctionAddressStr == "" || auctionTestAddressStr == "" || tokenHEZAddressStr == "" ||
-		hermezRollupAddressStr == "" || wdelayerAddressStr == "" || wdelayerTestAddressStr == "" ||
-		genesisBlockEnv == "" {
+	if auctionAddressStr == "" || auctionTestAddressStr == "" || tokenHEZAddressStr == "" || hermezRollupAddressStr == "" || wdelayerAddressStr == "" || wdelayerTestAddressStr == "" || genesisBlockEnv == "" {
 		log.Fatal(errEnvVar)
 	}
@@ -192,8 +189,7 @@ func TestMain(m *testing.M) {
 		log.Fatal(err)
 	}
-	ethereumClientEmergencyCouncil, err = NewEthereumClient(ethClient,
-		emergencyCouncilAccount, ks, nil)
+	ethereumClientEmergencyCouncil, err = NewEthereumClient(ethClient, emergencyCouncilAccount, ks, nil)
 	if err != nil {
 		log.Fatal(err)
 	}


@@ -243,20 +243,13 @@ type RollupInterface interface {
// Public Functions // Public Functions
RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error) RollupForgeBatch(*RollupForgeBatchArgs, *bind.TransactOpts) (*types.Transaction, error)
RollupAddToken(tokenAddress ethCommon.Address, feeAddToken, RollupAddToken(tokenAddress ethCommon.Address, feeAddToken, deadline *big.Int) (*types.Transaction, error)
deadline *big.Int) (*types.Transaction, error)
RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot, RollupWithdrawMerkleProof(babyPubKey babyjub.PublicKeyComp, tokenID uint32, numExitRoot, idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction, error)
idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (*types.Transaction, RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)
error)
RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32,
numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error)
RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error)
amount *big.Int, tokenID uint32, toIdx int64) (*types.Transaction, error) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64, deadline *big.Int) (tx *types.Transaction, err error)
RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64,
depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
deadline *big.Int) (tx *types.Transaction, err error)
// Governance Public Functions // Governance Public Functions
RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error) RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (*types.Transaction, error)
@@ -273,7 +266,7 @@ type RollupInterface interface {
RollupConstants() (*common.RollupConstants, error) RollupConstants() (*common.RollupConstants, error)
RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error) RollupEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*RollupEvents, error)
RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) RollupForgeBatchArgs(ethCommon.Hash, uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error)
RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error) RollupEventInit() (*RollupEventInitialize, int64, error)
} }
// //
@@ -294,8 +287,7 @@ type RollupClient struct {
} }
// NewRollupClient creates a new RollupClient // NewRollupClient creates a new RollupClient
func NewRollupClient(client *EthereumClient, address ethCommon.Address, func NewRollupClient(client *EthereumClient, address ethCommon.Address, tokenHEZCfg TokenConfig) (*RollupClient, error) {
tokenHEZCfg TokenConfig) (*RollupClient, error) {
contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI))) contractAbi, err := abi.JSON(strings.NewReader(string(Hermez.HermezABI)))
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
@@ -331,8 +323,7 @@ func NewRollupClient(client *EthereumClient, address ethCommon.Address,
} }
// RollupForgeBatch is the interface to call the smart contract function // RollupForgeBatch is the interface to call the smart contract function
func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs, func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs, auth *bind.TransactOpts) (tx *types.Transaction, err error) {
auth *bind.TransactOpts) (tx *types.Transaction, err error) {
if auth == nil { if auth == nil {
auth, err = c.client.NewAuth() auth, err = c.client.NewAuth()
if err != nil { if err != nil {
@@ -410,8 +401,7 @@ func (c *RollupClient) RollupForgeBatch(args *RollupForgeBatchArgs,
// RollupAddToken is the interface to call the smart contract function. // RollupAddToken is the interface to call the smart contract function.
// `feeAddToken` is the amount of HEZ tokens that will be paid to add the // `feeAddToken` is the amount of HEZ tokens that will be paid to add the
// token. `feeAddToken` must match the public value of the smart contract. // token. `feeAddToken` must match the public value of the smart contract.
func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken, func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToken, deadline *big.Int) (tx *types.Transaction, err error) {
deadline *big.Int) (tx *types.Transaction, err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -423,11 +413,9 @@ func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToke
} }
tokenName := c.tokenHEZCfg.Name tokenName := c.tokenHEZCfg.Name
tokenAddr := c.tokenHEZCfg.Address tokenAddr := c.tokenHEZCfg.Address
digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, feeAddToken, nonce, deadline, tokenName)
feeAddToken, nonce, deadline, tokenName)
signature, _ := c.client.ks.SignHash(*c.client.account, digest) signature, _ := c.client.ks.SignHash(*c.client.account, digest)
permit := createPermit(owner, spender, feeAddToken, deadline, digest, permit := createPermit(owner, spender, feeAddToken, deadline, digest, signature)
signature)
return c.hermez.AddToken(auth, tokenAddress, permit) return c.hermez.AddToken(auth, tokenAddress, permit)
}, },
@@ -438,9 +426,7 @@ func (c *RollupClient) RollupAddToken(tokenAddress ethCommon.Address, feeAddToke
} }
// RollupWithdrawMerkleProof is the interface to call the smart contract function // RollupWithdrawMerkleProof is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32, func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp, tokenID uint32, numExitRoot, idx int64, amount *big.Int, siblings []*big.Int, instantWithdraw bool) (tx *types.Transaction, err error) {
numExitRoot, idx int64, amount *big.Int, siblings []*big.Int,
instantWithdraw bool) (tx *types.Transaction, err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -448,8 +434,7 @@ func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp,
babyPubKey := new(big.Int).SetBytes(pkCompB) babyPubKey := new(big.Int).SetBytes(pkCompB)
numExitRootB := uint32(numExitRoot) numExitRootB := uint32(numExitRoot)
idxBig := big.NewInt(idx) idxBig := big.NewInt(idx)
return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey, return c.hermez.WithdrawMerkleProof(auth, tokenID, amount, babyPubKey, numExitRootB, siblings, idxBig, instantWithdraw)
numExitRootB, siblings, idxBig, instantWithdraw)
}, },
); err != nil { ); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("Failed update WithdrawMerkleProof: %w", err))
@@ -458,17 +443,13 @@ func (c *RollupClient) RollupWithdrawMerkleProof(fromBJJ babyjub.PublicKeyComp,
} }
// RollupWithdrawCircuit is the interface to call the smart contract function // RollupWithdrawCircuit is the interface to call the smart contract function
func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, func (c *RollupClient) RollupWithdrawCircuit(proofA, proofC [2]*big.Int, proofB [2][2]*big.Int, tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction, error) {
tokenID uint32, numExitRoot, idx int64, amount *big.Int, instantWithdraw bool) (*types.Transaction,
error) {
log.Error("TODO") log.Error("TODO")
return nil, tracerr.Wrap(errTODO) return nil, tracerr.Wrap(errTODO)
} }
// RollupL1UserTxERC20ETH is the interface to call the smart contract function // RollupL1UserTxERC20ETH is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction, err error) {
depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64) (tx *types.Transaction,
err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -503,9 +484,7 @@ func (c *RollupClient) RollupL1UserTxERC20ETH(fromBJJ babyjub.PublicKeyComp, fro
} }
// RollupL1UserTxERC20Permit is the interface to call the smart contract function // RollupL1UserTxERC20Permit is the interface to call the smart contract function
func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64, func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp, fromIdx int64, depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64, deadline *big.Int) (tx *types.Transaction, err error) {
depositAmount *big.Int, amount *big.Int, tokenID uint32, toIdx int64,
deadline *big.Int) (tx *types.Transaction, err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -537,12 +516,11 @@ func (c *RollupClient) RollupL1UserTxERC20Permit(fromBJJ babyjub.PublicKeyComp,
} }
tokenName := c.tokenHEZCfg.Name tokenName := c.tokenHEZCfg.Name
tokenAddr := c.tokenHEZCfg.Address tokenAddr := c.tokenHEZCfg.Address
digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, digest, _ := createPermitDigest(tokenAddr, owner, spender, c.chainID, amount, nonce, deadline, tokenName)
amount, nonce, deadline, tokenName)
signature, _ := c.client.ks.SignHash(*c.client.account, digest) signature, _ := c.client.ks.SignHash(*c.client.account, digest)
permit := createPermit(owner, spender, amount, deadline, digest, signature) permit := createPermit(owner, spender, amount, deadline, digest, signature)
return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig, return c.hermez.AddL1Transaction(auth, babyPubKey, fromIdxBig, uint16(depositAmountF),
uint16(depositAmountF), uint16(amountF), tokenID, toIdxBig, permit) uint16(amountF), tokenID, toIdxBig, permit)
}, },
); err != nil { ); err != nil {
return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20Permit: %w", err)) return nil, tracerr.Wrap(fmt.Errorf("Failed add L1 Tx ERC20Permit: %w", err))
@@ -574,13 +552,11 @@ func (c *RollupClient) RollupLastForgedBatch() (lastForgedBatch int64, err error
 }

 // RollupUpdateForgeL1L2BatchTimeout is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
-	newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(newForgeL1L2BatchTimeout int64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
-			return c.hermez.UpdateForgeL1L2BatchTimeout(auth,
-				uint8(newForgeL1L2BatchTimeout))
+			return c.hermez.UpdateForgeL1L2BatchTimeout(auth, uint8(newForgeL1L2BatchTimeout))
 		},
 	); err != nil {
 		return nil, tracerr.Wrap(fmt.Errorf("Failed update ForgeL1L2BatchTimeout: %w", err))
@@ -589,8 +565,7 @@ func (c *RollupClient) RollupUpdateForgeL1L2BatchTimeout(
 }

 // RollupUpdateFeeAddToken is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction,
-	err error) {
+func (c *RollupClient) RollupUpdateFeeAddToken(newFeeAddToken *big.Int) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -625,8 +600,7 @@ func (c *RollupClient) RollupUpdateBucketsParameters(
 }

 // RollupUpdateTokenExchange is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address,
-	valueArray []uint64) (tx *types.Transaction, err error) {
+func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Address, valueArray []uint64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -639,8 +613,7 @@ func (c *RollupClient) RollupUpdateTokenExchange(addressArray []ethCommon.Addres
 }

 // RollupUpdateWithdrawalDelay is the interface to call the smart contract function
-func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction,
-	err error) {
+func (c *RollupClient) RollupUpdateWithdrawalDelay(newWithdrawalDelay int64) (tx *types.Transaction, err error) {
 	if tx, err = c.client.CallAuth(
 		0,
 		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -666,8 +639,7 @@ func (c *RollupClient) RollupSafeMode() (tx *types.Transaction, err error) {
 }

 // RollupInstantWithdrawalViewer is the interface to call the smart contract function
-func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address,
-	amount *big.Int) (instantAllowed bool, err error) {
+func (c *RollupClient) RollupInstantWithdrawalViewer(tokenAddress ethCommon.Address, amount *big.Int) (instantAllowed bool, err error) {
 	if err := c.client.Call(func(ec *ethclient.Client) error {
 		instantAllowed, err = c.hermez.InstantWithdrawalViewer(c.opts, tokenAddress, amount)
 		return tracerr.Wrap(err)
@@ -702,8 +674,7 @@ func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstant
 		}
 		newRollupVerifier.MaxTx = rollupVerifier.MaxTx.Int64()
 		newRollupVerifier.NLevels = rollupVerifier.NLevels.Int64()
-		rollupConstants.Verifiers = append(rollupConstants.Verifiers,
-			newRollupVerifier)
+		rollupConstants.Verifiers = append(rollupConstants.Verifiers, newRollupVerifier)
 	}
 	rollupConstants.HermezAuctionContract, err = c.hermez.HermezAuctionContract(c.opts)
 	if err != nil {
@@ -722,41 +693,28 @@ func (c *RollupClient) RollupConstants() (rollupConstants *common.RollupConstant
 }

 var (
-	logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte(
-		"L1UserTxEvent(uint32,uint8,bytes)"))
-	logHermezAddToken = crypto.Keccak256Hash([]byte(
-		"AddToken(address,uint32)"))
-	logHermezForgeBatch = crypto.Keccak256Hash([]byte(
-		"ForgeBatch(uint32,uint16)"))
-	logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte(
-		"UpdateForgeL1L2BatchTimeout(uint8)"))
-	logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte(
-		"UpdateFeeAddToken(uint256)"))
-	logHermezWithdrawEvent = crypto.Keccak256Hash([]byte(
-		"WithdrawEvent(uint48,uint32,bool)"))
-	logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte(
-		"UpdateBucketWithdraw(uint8,uint256,uint256)"))
-	logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte(
-		"UpdateWithdrawalDelay(uint64)"))
-	logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte(
-		"UpdateBucketsParameters(uint256[4][" + strconv.Itoa(common.RollupConstNumBuckets) + "])"))
-	logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte(
-		"UpdateTokenExchange(address[],uint64[])"))
-	logHermezSafeMode = crypto.Keccak256Hash([]byte(
-		"SafeMode()"))
-	logHermezInitialize = crypto.Keccak256Hash([]byte(
-		"InitializeHermezEvent(uint8,uint256,uint64)"))
+	logHermezL1UserTxEvent = crypto.Keccak256Hash([]byte("L1UserTxEvent(uint32,uint8,bytes)"))
+	logHermezAddToken = crypto.Keccak256Hash([]byte("AddToken(address,uint32)"))
+	logHermezForgeBatch = crypto.Keccak256Hash([]byte("ForgeBatch(uint32,uint16)"))
+	logHermezUpdateForgeL1L2BatchTimeout = crypto.Keccak256Hash([]byte("UpdateForgeL1L2BatchTimeout(uint8)"))
+	logHermezUpdateFeeAddToken = crypto.Keccak256Hash([]byte("UpdateFeeAddToken(uint256)"))
+	logHermezWithdrawEvent = crypto.Keccak256Hash([]byte("WithdrawEvent(uint48,uint32,bool)"))
+	logHermezUpdateBucketWithdraw = crypto.Keccak256Hash([]byte("UpdateBucketWithdraw(uint8,uint256,uint256)"))
+	logHermezUpdateWithdrawalDelay = crypto.Keccak256Hash([]byte("UpdateWithdrawalDelay(uint64)"))
+	logHermezUpdateBucketsParameters = crypto.Keccak256Hash([]byte("UpdateBucketsParameters(uint256[4][" +
+		strconv.Itoa(common.RollupConstNumBuckets) + "])"))
+	logHermezUpdateTokenExchange = crypto.Keccak256Hash([]byte("UpdateTokenExchange(address[],uint64[])"))
+	logHermezSafeMode = crypto.Keccak256Hash([]byte("SafeMode()"))
+	logHermezInitialize = crypto.Keccak256Hash([]byte("InitializeHermezEvent(uint8,uint256,uint64)"))
 )

 // RollupEventInit returns the initialize event with its corresponding block number
-func (c *RollupClient) RollupEventInit(genesisBlockNum int64) (*RollupEventInitialize, int64, error) {
+func (c *RollupClient) RollupEventInit() (*RollupEventInitialize, int64, error) {
 	query := ethereum.FilterQuery{
 		Addresses: []ethCommon.Address{
 			c.address,
 		},
-		FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)),
-		ToBlock:   big.NewInt(genesisBlockNum),
-		Topics:    [][]ethCommon.Hash{{logHermezInitialize}},
+		Topics: [][]ethCommon.Hash{{logHermezInitialize}},
 	}
 	logs, err := c.client.client.FilterLogs(context.Background(), query)
 	if err != nil {
@@ -771,8 +729,7 @@ func (c *RollupClient) RollupEventInit(genesisBlockNum int64) (*RollupEventIniti
 	}
 	var rollupInit RollupEventInitialize
-	if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent",
-		vLog.Data); err != nil {
+	if err := c.contractAbi.UnpackIntoInterface(&rollupInit, "InitializeHermezEvent", vLog.Data); err != nil {
 		return nil, 0, tracerr.Wrap(err)
 	}
 	return &rollupInit, int64(vLog.BlockNumber), tracerr.Wrap(err)
@@ -853,8 +810,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 			var updateForgeL1L2BatchTimeout struct {
 				NewForgeL1L2BatchTimeout uint8
 			}
-			err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout,
-				"UpdateForgeL1L2BatchTimeout", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&updateForgeL1L2BatchTimeout, "UpdateForgeL1L2BatchTimeout", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
@@ -882,16 +838,14 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 		case logHermezUpdateBucketWithdraw:
 			var updateBucketWithdrawAux rollupEventUpdateBucketWithdrawAux
 			var updateBucketWithdraw RollupEventUpdateBucketWithdraw
-			err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux,
-				"UpdateBucketWithdraw", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&updateBucketWithdrawAux, "UpdateBucketWithdraw", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
 			updateBucketWithdraw.Withdrawals = updateBucketWithdrawAux.Withdrawals
 			updateBucketWithdraw.NumBucket = int(new(big.Int).SetBytes(vLog.Topics[1][:]).Int64())
 			updateBucketWithdraw.BlockStamp = new(big.Int).SetBytes(vLog.Topics[2][:]).Int64()
-			rollupEvents.UpdateBucketWithdraw =
-				append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)
+			rollupEvents.UpdateBucketWithdraw = append(rollupEvents.UpdateBucketWithdraw, updateBucketWithdraw)

 		case logHermezUpdateWithdrawalDelay:
 			var withdrawalDelay RollupEventUpdateWithdrawalDelay
@@ -903,8 +857,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 		case logHermezUpdateBucketsParameters:
 			var bucketsParametersAux rollupEventUpdateBucketsParametersAux
 			var bucketsParameters RollupEventUpdateBucketsParameters
-			err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux,
-				"UpdateBucketsParameters", vLog.Data)
+			err := c.contractAbi.UnpackIntoInterface(&bucketsParametersAux, "UpdateBucketsParameters", vLog.Data)
 			if err != nil {
 				return nil, tracerr.Wrap(err)
 			}
@@ -914,8 +867,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 				bucketsParameters.ArrayBuckets[i].BlockWithdrawalRate = bucket[2]
 				bucketsParameters.ArrayBuckets[i].MaxWithdrawals = bucket[3]
 			}
-			rollupEvents.UpdateBucketsParameters =
-				append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
+			rollupEvents.UpdateBucketsParameters = append(rollupEvents.UpdateBucketsParameters, bucketsParameters)
 		case logHermezUpdateTokenExchange:
 			var tokensExchange RollupEventUpdateTokenExchange
 			err := c.contractAbi.UnpackIntoInterface(&tokensExchange, "UpdateTokenExchange", vLog.Data)
@@ -947,8 +899,7 @@ func (c *RollupClient) RollupEventsByBlock(blockNum int64,
 // RollupForgeBatchArgs returns the arguments used in a ForgeBatch call in the
 // Rollup Smart Contract in the given transaction, and the sender address.
-func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
-	l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
+func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash, l1UserTxsLen uint16) (*RollupForgeBatchArgs, *ethCommon.Address, error) {
 	tx, _, err := c.client.client.TransactionByHash(context.Background(), ethTxHash)
 	if err != nil {
 		return nil, nil, tracerr.Wrap(fmt.Errorf("TransactionByHash: %w", err))
@@ -963,8 +914,7 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 	if err != nil {
 		return nil, nil, tracerr.Wrap(err)
 	}
-	sender, err := c.client.client.TransactionSender(context.Background(), tx,
-		receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
+	sender, err := c.client.client.TransactionSender(context.Background(), tx, receipt.Logs[0].BlockHash, receipt.Logs[0].Index)
 	if err != nil {
 		return nil, nil, tracerr.Wrap(err)
 	}
@@ -989,7 +939,7 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 		FeeIdxCoordinator: []common.Idx{},
 	}
 	nLevels := c.consts.Verifiers[rollupForgeBatchArgs.VerifierIdx].NLevels
-	lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1) //nolint:gomnd
+	lenL1L2TxsBytes := int((nLevels/8)*2 + common.Float40BytesLength + 1)
 	numBytesL1TxUser := int(l1UserTxsLen) * lenL1L2TxsBytes
 	numTxsL1Coord := len(aux.EncodedL1CoordinatorTx) / common.RollupConstL1CoordinatorTotalBytes
 	numBytesL1TxCoord := numTxsL1Coord * lenL1L2TxsBytes
@@ -999,9 +949,7 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 		l1UserTxsData = aux.L1L2TxsData[:numBytesL1TxUser]
 	}
 	for i := 0; i < int(l1UserTxsLen); i++ {
-		l1Tx, err :=
-			common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
-				uint32(nLevels))
+		l1Tx, err := common.L1TxFromDataAvailability(l1UserTxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes], uint32(nLevels))
 		if err != nil {
 			return nil, nil, tracerr.Wrap(err)
 		}
@@ -1013,17 +961,14 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 	}
 	numTxsL2 := len(l2TxsData) / lenL1L2TxsBytes
 	for i := 0; i < numTxsL2; i++ {
-		l2Tx, err :=
-			common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes],
-				int(nLevels))
+		l2Tx, err := common.L2TxFromBytesDataAvailability(l2TxsData[i*lenL1L2TxsBytes:(i+1)*lenL1L2TxsBytes], int(nLevels))
 		if err != nil {
 			return nil, nil, tracerr.Wrap(err)
 		}
 		rollupForgeBatchArgs.L2TxsData = append(rollupForgeBatchArgs.L2TxsData, *l2Tx)
 	}
 	for i := 0; i < numTxsL1Coord; i++ {
-		bytesL1Coordinator :=
-			aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes] //nolint:lll
+		bytesL1Coordinator := aux.EncodedL1CoordinatorTx[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes]
 		var signature []byte
 		v := bytesL1Coordinator[0]
 		s := bytesL1Coordinator[1:33]
@@ -1036,29 +981,24 @@ func (c *RollupClient) RollupForgeBatchArgs(ethTxHash ethCommon.Hash,
 			return nil, nil, tracerr.Wrap(err)
 		}
 		rollupForgeBatchArgs.L1CoordinatorTxs = append(rollupForgeBatchArgs.L1CoordinatorTxs, *l1Tx)
-		rollupForgeBatchArgs.L1CoordinatorTxsAuths =
-			append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
+		rollupForgeBatchArgs.L1CoordinatorTxsAuths = append(rollupForgeBatchArgs.L1CoordinatorTxsAuths, signature)
 	}
 	lenFeeIdxCoordinatorBytes := int(nLevels / 8) //nolint:gomnd
 	numFeeIdxCoordinator := len(aux.FeeIdxCoordinator) / lenFeeIdxCoordinatorBytes
 	for i := 0; i < numFeeIdxCoordinator; i++ {
 		var paddedFeeIdx [6]byte
-		// TODO: This check is not necessary: the first case will always work. Test it
-		// before removing the if.
+		// TODO: This check is not necessary: the first case will always work. Test it before removing the if.
 		if lenFeeIdxCoordinatorBytes < common.IdxBytesLen {
-			copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:],
-				aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
+			copy(paddedFeeIdx[6-lenFeeIdxCoordinatorBytes:], aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
 		} else {
-			copy(paddedFeeIdx[:],
-				aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
+			copy(paddedFeeIdx[:], aux.FeeIdxCoordinator[i*lenFeeIdxCoordinatorBytes:(i+1)*lenFeeIdxCoordinatorBytes])
 		}
 		feeIdxCoordinator, err := common.IdxFromBytes(paddedFeeIdx[:])
 		if err != nil {
 			return nil, nil, tracerr.Wrap(err)
 		}
 		if feeIdxCoordinator != common.Idx(0) {
-			rollupForgeBatchArgs.FeeIdxCoordinator =
-				append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
+			rollupForgeBatchArgs.FeeIdxCoordinator = append(rollupForgeBatchArgs.FeeIdxCoordinator, feeIdxCoordinator)
 		}
 	}
 	return &rollupForgeBatchArgs, &sender, nil

View File

@@ -56,7 +56,7 @@ func genKeysBjj(i int64) *keys {
 }

 func TestRollupEventInit(t *testing.T) {
-	rollupInit, blockNum, err := rollupClient.RollupEventInit(genesisBlock)
+	rollupInit, blockNum, err := rollupClient.RollupEventInit()
 	require.NoError(t, err)
 	assert.Equal(t, int64(19), blockNum)
 	assert.Equal(t, uint8(10), rollupInit.ForgeL1L2BatchTimeout)
@@ -116,8 +116,7 @@ func TestRollupForgeBatch(t *testing.T) {
 	minBid.SetString("11000000000000000000", 10)
 	budget := new(big.Int)
 	budget.SetString("45200000000000000000", 10)
-	_, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet,
-		maxBid, minBid, deadline)
+	_, err = auctionClient.AuctionMultiBid(budget, currentSlot+4, currentSlot+10, slotSet, maxBid, minBid, deadline)
 	require.NoError(t, err)

 	// Add Blocks
@@ -129,18 +128,12 @@ func TestRollupForgeBatch(t *testing.T) {
 	// Forge Batch 1
 	args := new(RollupForgeBatchArgs)
-	// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
-	args.FeeIdxCoordinator = []common.Idx{}
-	l1CoordinatorBytes, err := hex.DecodeString(
-		"1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf" +
-			"42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230" +
-			"de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
+	args.FeeIdxCoordinator = []common.Idx{} // When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
+	l1CoordinatorBytes, err := hex.DecodeString("1c660323607bb113e586183609964a333d07ebe4bef3be82ec13af453bae9590bd7711cdb6abf42f176eadfbe5506fbef5e092e5543733f91b0061d9a7747fa10694a915a6470fa230de387b51e6f4db0b09787867778687b55197ad6d6a86eac000000001")
 	require.NoError(t, err)
 	numTxsL1 := len(l1CoordinatorBytes) / common.RollupConstL1CoordinatorTotalBytes
 	for i := 0; i < numTxsL1; i++ {
-		bytesL1Coordinator :=
-			l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*
-				common.RollupConstL1CoordinatorTotalBytes]
+		bytesL1Coordinator := l1CoordinatorBytes[i*common.RollupConstL1CoordinatorTotalBytes : (i+1)*common.RollupConstL1CoordinatorTotalBytes]
 		var signature []byte
 		v := bytesL1Coordinator[0]
 		s := bytesL1Coordinator[1:33]
@@ -156,12 +149,9 @@ func TestRollupForgeBatch(t *testing.T) {
 	args.L1UserTxs = []common.L1Tx{}
 	args.L2TxsData = []common.L2Tx{}
 	newStateRoot := new(big.Int)
-	newStateRoot.SetString(
-		"18317824016047294649053625209337295956588174734569560016974612130063629505228",
-		10)
+	newStateRoot.SetString("18317824016047294649053625209337295956588174734569560016974612130063629505228", 10)
 	newExitRoot := new(big.Int)
-	bytesNumExitRoot, err := hex.DecodeString(
-		"10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
+	bytesNumExitRoot, err := hex.DecodeString("10a89d5fe8d488eda1ba371d633515739933c706c210c604f5bd209180daa43b")
 	require.NoError(t, err)
 	newExitRoot.SetBytes(bytesNumExitRoot)
 	args.NewLastIdx = int64(300)
@@ -216,8 +206,7 @@ func TestRollupUpdateForgeL1L2BatchTimeout(t *testing.T) {
 	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
 	require.NoError(t, err)
-	assert.Equal(t, newForgeL1L2BatchTimeout,
-		rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
+	assert.Equal(t, newForgeL1L2BatchTimeout, rollupEvents.UpdateForgeL1L2BatchTimeout[0].NewForgeL1L2BatchTimeout)
 }

 func TestRollupUpdateFeeAddToken(t *testing.T) {
@@ -259,8 +248,7 @@ func TestRollupUpdateWithdrawalDelay(t *testing.T) {
 	require.NoError(t, err)
 	rollupEvents, err := rollupClient.RollupEventsByBlock(currentBlockNum, nil)
 	require.NoError(t, err)
-	assert.Equal(t, newWithdrawalDelay,
-		int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
+	assert.Equal(t, newWithdrawalDelay, int64(rollupEvents.UpdateWithdrawalDelay[0].NewWithdrawalDelay))
 }

 func TestRollupUpdateTokenExchange(t *testing.T) {
@@ -299,8 +287,7 @@ func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)
-	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
+	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -312,13 +299,11 @@ func TestRollupL1UserTxETHCreateAccountDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
-	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	key := genKeysBjj(1)
 	fromIdxInt64 := int64(0)
@@ -334,8 +319,7 @@ func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)
-	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
+	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -347,13 +331,11 @@ func TestRollupL1UserTxERC20CreateAccountDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux2.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	key := genKeysBjj(3)
 	fromIdxInt64 := int64(0)
@@ -369,8 +351,7 @@ func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)
-	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
+	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -382,13 +363,11 @@ func TestRollupL1UserTxERC20PermitCreateAccountDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxETHDeposit(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(256)
 	toIdxInt64 := int64(0)
@@ -404,8 +383,7 @@ func TestRollupL1UserTxETHDeposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)
-	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
+	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -416,13 +394,11 @@ func TestRollupL1UserTxETHDeposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20Deposit(t *testing.T) {
-	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(257)
 	toIdxInt64 := int64(0)
@@ -437,8 +413,7 @@ func TestRollupL1UserTxERC20Deposit(t *testing.T) {
 	}
 	L1UserTxs = append(L1UserTxs, l1Tx)
-	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64,
-		l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
+	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
 	require.NoError(t, err)

 	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -449,13 +424,11 @@ func TestRollupL1UserTxERC20Deposit(t *testing.T) {
 	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
 	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
 	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
-	assert.Equal(t, rollupClientAux2.client.account.Address,
-		rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
+	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
 }

 func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
-	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst,
-		tokenHEZ)
+	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
 	require.NoError(t, err)
 	fromIdxInt64 := int64(258)
 	toIdxInt64 := int64(0)
@@ -469,8 +442,7 @@ func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
} }
L1UserTxs = append(L1UserTxs, l1Tx) L1UserTxs = append(L1UserTxs, l1Tx)
_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, _, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
require.NoError(t, err) require.NoError(t, err)
currentBlockNum, err := rollupClient.client.EthLastBlock() currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -481,13 +453,11 @@ func TestRollupL1UserTxERC20PermitDeposit(t *testing.T) {
assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount) assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID) assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount) assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
assert.Equal(t, rollupClientAux.client.account.Address, assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
} }
func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
@@ -503,8 +473,7 @@ func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -515,13 +484,11 @@ func TestRollupL1UserTxETHDepositTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
@@ -536,8 +503,7 @@ func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -548,13 +514,11 @@ func TestRollupL1UserTxERC20DepositTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
@@ -569,8 +533,7 @@ func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -581,13 +544,11 @@ func TestRollupL1UserTxERC20PermitDepositTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
@@ -603,8 +564,7 @@ func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -615,13 +575,11 @@ func TestRollupL1UserTxETHCreateAccountDepositTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
@@ -636,8 +594,7 @@ func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -648,13 +605,11 @@ func TestRollupL1UserTxERC20CreateAccountDepositTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(259)
@@ -669,8 +624,7 @@ func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -681,13 +635,11 @@ func TestRollupL1UserTxERC20PermitCreateAccountDepositTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(257)
@@ -702,8 +654,7 @@ func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -714,13 +665,11 @@ func TestRollupL1UserTxETHForceTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(258)
@@ -734,8 +683,7 @@ func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -746,13 +694,11 @@ func TestRollupL1UserTxERC20ForceTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(259)
	toIdxInt64 := int64(260)
@@ -766,8 +712,7 @@ func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -778,13 +723,11 @@ func TestRollupL1UserTxERC20PermitForceTransfer(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxETHForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(256)
	toIdxInt64 := int64(1)
@@ -799,8 +742,7 @@ func TestRollupL1UserTxETHForceExit(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDUint32, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -811,13 +753,11 @@ func TestRollupL1UserTxETHForceExit(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	rollupClientAux2, err := NewRollupClient(ethereumClientAux2, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(257)
	toIdxInt64 := int64(1)
@@ -831,8 +771,7 @@ func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux2.RollupL1UserTxERC20ETH(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenHEZID, toIdxInt64)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -843,13 +782,11 @@ func TestRollupL1UserTxERC20ForceExit(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux2.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}

func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	rollupClientAux, err := NewRollupClient(ethereumClientAux, hermezRollupAddressConst, tokenHEZ)
	require.NoError(t, err)
	fromIdxInt64 := int64(258)
	toIdxInt64 := int64(1)
@@ -865,8 +802,7 @@ func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	}
	L1UserTxs = append(L1UserTxs, l1Tx)
	_, err = rollupClientAux.RollupL1UserTxERC20Permit(l1Tx.FromBJJ, fromIdxInt64, l1Tx.DepositAmount, l1Tx.Amount, tokenIDERC777, toIdxInt64, deadline)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()
@@ -877,8 +813,7 @@ func TestRollupL1UserTxERC20PermitForceExit(t *testing.T) {
	assert.Equal(t, l1Tx.DepositAmount, rollupEvents.L1UserTx[0].L1UserTx.DepositAmount)
	assert.Equal(t, l1Tx.TokenID, rollupEvents.L1UserTx[0].L1UserTx.TokenID)
	assert.Equal(t, l1Tx.Amount, rollupEvents.L1UserTx[0].L1UserTx.Amount)
	assert.Equal(t, rollupClientAux.client.account.Address, rollupEvents.L1UserTx[0].L1UserTx.FromEthAddr)
}
func TestRollupForgeBatch2(t *testing.T) {
@@ -894,8 +829,7 @@ func TestRollupForgeBatch2(t *testing.T) {
	// Forge Batch 3
	args := new(RollupForgeBatchArgs)
	// When encoded, 64 times the 0 idx means that no idx to collect fees is specified.
	args.FeeIdxCoordinator = []common.Idx{}
	args.L1CoordinatorTxs = argsForge.L1CoordinatorTxs
	args.L1CoordinatorTxsAuths = argsForge.L1CoordinatorTxsAuths
	for i := 0; i < len(L1UserTxs); i++ {
@@ -903,19 +837,14 @@ func TestRollupForgeBatch2(t *testing.T) {
		l1UserTx.EffectiveAmount = l1UserTx.Amount
		l1Bytes, err := l1UserTx.BytesDataAvailability(uint32(nLevels))
		require.NoError(t, err)
		l1UserTxDataAvailability, err := common.L1TxFromDataAvailability(l1Bytes, uint32(nLevels))
		require.NoError(t, err)
		args.L1UserTxs = append(args.L1UserTxs, *l1UserTxDataAvailability)
	}
	newStateRoot := new(big.Int)
	newStateRoot.SetString("18317824016047294649053625209337295956588174734569560016974612130063629505228", 10)
	newExitRoot := new(big.Int)
	newExitRoot.SetString("1114281409737474688393837964161044726766678436313681099613347372031079422302", 10)
	amount := new(big.Int)
	amount.SetString("79000000", 10)
	l2Tx := common.L2Tx{
@@ -975,8 +904,7 @@ func TestRollupWithdrawMerkleProof(t *testing.T) {
	require.NoError(t, err)
	var pkComp babyjub.PublicKeyComp
	pkCompBE, err := hex.DecodeString("adc3b754f8da621967b073a787bef8eec7052f2ba712b23af57d98f65beea8b2")
	require.NoError(t, err)
	pkCompLE := common.SwapEndianness(pkCompBE)
	copy(pkComp[:], pkCompLE)
@@ -986,20 +914,16 @@ func TestRollupWithdrawMerkleProof(t *testing.T) {
	numExitRoot := int64(3)
	fromIdx := int64(256)
	amount, _ := new(big.Int).SetString("20000000000000000000", 10)
	// siblingBytes0, err := new(big.Int).SetString("19508838618377323910556678335932426220272947530531646682154552299216398748115", 10)
	// require.NoError(t, err)
	// siblingBytes1, err := new(big.Int).SetString("15198806719713909654457742294233381653226080862567104272457668857208564789571", 10)
	// require.NoError(t, err)
	var siblings []*big.Int
	// siblings = append(siblings, siblingBytes0)
	// siblings = append(siblings, siblingBytes1)
	instantWithdraw := true
	_, err = rollupClientAux.RollupWithdrawMerkleProof(pkComp, tokenID, numExitRoot, fromIdx, amount, siblings, instantWithdraw)
	require.NoError(t, err)
	currentBlockNum, err := rollupClient.client.EthLastBlock()


@@ -132,20 +132,18 @@ type WDelayerInterface interface {
	WDelayerDepositInfo(owner, token ethCommon.Address) (depositInfo DepositState, err error)
	WDelayerDeposit(onwer, token ethCommon.Address, amount *big.Int) (*types.Transaction, error)
	WDelayerWithdrawal(owner, token ethCommon.Address) (*types.Transaction, error)
	WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address, amount *big.Int) (*types.Transaction, error)
	WDelayerEventsByBlock(blockNum int64, blockHash *ethCommon.Hash) (*WDelayerEvents, error)
	WDelayerConstants() (*common.WDelayerConstants, error)
-	WDelayerEventInit(genesisBlockNum int64) (*WDelayerEventInitialize, int64, error)
+	WDelayerEventInit() (*WDelayerEventInitialize, int64, error)
}

//
// Implementation
//

// WDelayerClient is the implementation of the interface to the WithdrawDelayer Smart Contract in ethereum.
type WDelayerClient struct {
	client  *EthereumClient
	address ethCommon.Address
@@ -174,8 +172,7 @@ func NewWDelayerClient(client *EthereumClient, address ethCommon.Address) (*WDel
}

// WDelayerGetHermezGovernanceAddress is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerGetHermezGovernanceAddress() (hermezGovernanceAddress *ethCommon.Address, err error) {
	var _hermezGovernanceAddress ethCommon.Address
	if err := c.client.Call(func(ec *ethclient.Client) error {
		_hermezGovernanceAddress, err = c.wdelayer.GetHermezGovernanceAddress(c.opts)
@@ -187,8 +184,7 @@ func (c *WDelayerClient) WDelayerGetHermezGovernanceAddress() (
}

// WDelayerTransferGovernance is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerTransferGovernance(newAddress ethCommon.Address) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -214,8 +210,7 @@ func (c *WDelayerClient) WDelayerClaimGovernance() (tx *types.Transaction, err e
}

// WDelayerGetEmergencyCouncil is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerGetEmergencyCouncil() (emergencyCouncilAddress *ethCommon.Address, err error) {
	var _emergencyCouncilAddress ethCommon.Address
	if err := c.client.Call(func(ec *ethclient.Client) error {
		_emergencyCouncilAddress, err = c.wdelayer.GetEmergencyCouncil(c.opts)
@@ -227,8 +222,7 @@ func (c *WDelayerClient) WDelayerGetEmergencyCouncil() (emergencyCouncilAddress
}

// WDelayerTransferEmergencyCouncil is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerTransferEmergencyCouncil(newAddress ethCommon.Address) (tx *types.Transaction, err error) {
	if tx, err = c.client.CallAuth(
		0,
		func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -277,8 +271,7 @@ func (c *WDelayerClient) WDelayerGetWithdrawalDelay() (withdrawalDelay int64, er
}

// WDelayerGetEmergencyModeStartingTime is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerGetEmergencyModeStartingTime() (emergencyModeStartingTime int64, err error) {
	var _emergencyModeStartingTime uint64
	if err := c.client.Call(func(ec *ethclient.Client) error {
_emergencyModeStartingTime, err = c.wdelayer.GetEmergencyModeStartingTime(c.opts) _emergencyModeStartingTime, err = c.wdelayer.GetEmergencyModeStartingTime(c.opts)
@@ -303,8 +296,7 @@ func (c *WDelayerClient) WDelayerEnableEmergencyMode() (tx *types.Transaction, e
} }
// WDelayerChangeWithdrawalDelay is the interface to call the smart contract function // WDelayerChangeWithdrawalDelay is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerChangeWithdrawalDelay(newWithdrawalDelay uint64) ( func (c *WDelayerClient) WDelayerChangeWithdrawalDelay(newWithdrawalDelay uint64) (tx *types.Transaction, err error) {
tx *types.Transaction, err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -317,8 +309,7 @@ func (c *WDelayerClient) WDelayerChangeWithdrawalDelay(newWithdrawalDelay uint64
} }
// WDelayerDepositInfo is the interface to call the smart contract function // WDelayerDepositInfo is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerDepositInfo(owner, token ethCommon.Address) ( func (c *WDelayerClient) WDelayerDepositInfo(owner, token ethCommon.Address) (depositInfo DepositState, err error) {
depositInfo DepositState, err error) {
if err := c.client.Call(func(ec *ethclient.Client) error { if err := c.client.Call(func(ec *ethclient.Client) error {
amount, depositTimestamp, err := c.wdelayer.DepositInfo(c.opts, owner, token) amount, depositTimestamp, err := c.wdelayer.DepositInfo(c.opts, owner, token)
depositInfo.Amount = amount depositInfo.Amount = amount
@@ -331,8 +322,7 @@ func (c *WDelayerClient) WDelayerDepositInfo(owner, token ethCommon.Address) (
} }
// WDelayerDeposit is the interface to call the smart contract function // WDelayerDeposit is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerDeposit(owner, token ethCommon.Address, amount *big.Int) ( func (c *WDelayerClient) WDelayerDeposit(owner, token ethCommon.Address, amount *big.Int) (tx *types.Transaction, err error) {
tx *types.Transaction, err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -345,8 +335,7 @@ func (c *WDelayerClient) WDelayerDeposit(owner, token ethCommon.Address, amount
} }
// WDelayerWithdrawal is the interface to call the smart contract function // WDelayerWithdrawal is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerWithdrawal(owner, token ethCommon.Address) (tx *types.Transaction, func (c *WDelayerClient) WDelayerWithdrawal(owner, token ethCommon.Address) (tx *types.Transaction, err error) {
err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -359,8 +348,7 @@ func (c *WDelayerClient) WDelayerWithdrawal(owner, token ethCommon.Address) (tx
} }
// WDelayerEscapeHatchWithdrawal is the interface to call the smart contract function // WDelayerEscapeHatchWithdrawal is the interface to call the smart contract function
func (c *WDelayerClient) WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address, func (c *WDelayerClient) WDelayerEscapeHatchWithdrawal(to, token ethCommon.Address, amount *big.Int) (tx *types.Transaction, err error) {
amount *big.Int) (tx *types.Transaction, err error) {
if tx, err = c.client.CallAuth( if tx, err = c.client.CallAuth(
0, 0,
func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) { func(ec *ethclient.Client, auth *bind.TransactOpts) (*types.Transaction, error) {
@@ -396,33 +384,24 @@ func (c *WDelayerClient) WDelayerConstants() (constants *common.WDelayerConstant
} }
var ( var (
logWDelayerDeposit = crypto.Keccak256Hash([]byte( logWDelayerDeposit = crypto.Keccak256Hash([]byte("Deposit(address,address,uint192,uint64)"))
"Deposit(address,address,uint192,uint64)")) logWDelayerWithdraw = crypto.Keccak256Hash([]byte("Withdraw(address,address,uint192)"))
logWDelayerWithdraw = crypto.Keccak256Hash([]byte( logWDelayerEmergencyModeEnabled = crypto.Keccak256Hash([]byte("EmergencyModeEnabled()"))
"Withdraw(address,address,uint192)")) logWDelayerNewWithdrawalDelay = crypto.Keccak256Hash([]byte("NewWithdrawalDelay(uint64)"))
logWDelayerEmergencyModeEnabled = crypto.Keccak256Hash([]byte( logWDelayerEscapeHatchWithdrawal = crypto.Keccak256Hash([]byte("EscapeHatchWithdrawal(address,address,address,uint256)"))
"EmergencyModeEnabled()")) logWDelayerNewEmergencyCouncil = crypto.Keccak256Hash([]byte("NewEmergencyCouncil(address)"))
logWDelayerNewWithdrawalDelay = crypto.Keccak256Hash([]byte( logWDelayerNewHermezGovernanceAddress = crypto.Keccak256Hash([]byte("NewHermezGovernanceAddress(address)"))
"NewWithdrawalDelay(uint64)")) logWDelayerInitialize = crypto.Keccak256Hash([]byte(
logWDelayerEscapeHatchWithdrawal = crypto.Keccak256Hash([]byte(
"EscapeHatchWithdrawal(address,address,address,uint256)"))
logWDelayerNewEmergencyCouncil = crypto.Keccak256Hash([]byte(
"NewEmergencyCouncil(address)"))
logWDelayerNewHermezGovernanceAddress = crypto.Keccak256Hash([]byte(
"NewHermezGovernanceAddress(address)"))
logWDelayerInitialize = crypto.Keccak256Hash([]byte(
"InitializeWithdrawalDelayerEvent(uint64,address,address)")) "InitializeWithdrawalDelayerEvent(uint64,address,address)"))
) )
// WDelayerEventInit returns the initialize event with its corresponding block number // WDelayerEventInit returns the initialize event with its corresponding block number
func (c *WDelayerClient) WDelayerEventInit(genesisBlockNum int64) (*WDelayerEventInitialize, int64, error) { func (c *WDelayerClient) WDelayerEventInit() (*WDelayerEventInitialize, int64, error) {
query := ethereum.FilterQuery{ query := ethereum.FilterQuery{
Addresses: []ethCommon.Address{ Addresses: []ethCommon.Address{
c.address, c.address,
}, },
FromBlock: big.NewInt(max(0, genesisBlockNum-blocksPerDay)), Topics: [][]ethCommon.Hash{{logWDelayerInitialize}},
ToBlock: big.NewInt(genesisBlockNum),
Topics: [][]ethCommon.Hash{{logWDelayerInitialize}},
} }
logs, err := c.client.client.FilterLogs(context.Background(), query) logs, err := c.client.client.FilterLogs(context.Background(), query)
if err != nil { if err != nil {
@@ -504,51 +483,42 @@ func (c *WDelayerClient) WDelayerEventsByBlock(blockNum int64,
case logWDelayerEmergencyModeEnabled: case logWDelayerEmergencyModeEnabled:
var emergencyModeEnabled WDelayerEventEmergencyModeEnabled var emergencyModeEnabled WDelayerEventEmergencyModeEnabled
wdelayerEvents.EmergencyModeEnabled = wdelayerEvents.EmergencyModeEnabled = append(wdelayerEvents.EmergencyModeEnabled, emergencyModeEnabled)
append(wdelayerEvents.EmergencyModeEnabled, emergencyModeEnabled)
case logWDelayerNewWithdrawalDelay: case logWDelayerNewWithdrawalDelay:
var withdrawalDelay WDelayerEventNewWithdrawalDelay var withdrawalDelay WDelayerEventNewWithdrawalDelay
err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, err := c.contractAbi.UnpackIntoInterface(&withdrawalDelay, "NewWithdrawalDelay", vLog.Data)
"NewWithdrawalDelay", vLog.Data)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
wdelayerEvents.NewWithdrawalDelay = wdelayerEvents.NewWithdrawalDelay = append(wdelayerEvents.NewWithdrawalDelay, withdrawalDelay)
append(wdelayerEvents.NewWithdrawalDelay, withdrawalDelay)
case logWDelayerEscapeHatchWithdrawal: case logWDelayerEscapeHatchWithdrawal:
var escapeHatchWithdrawal WDelayerEventEscapeHatchWithdrawal var escapeHatchWithdrawal WDelayerEventEscapeHatchWithdrawal
err := c.contractAbi.UnpackIntoInterface(&escapeHatchWithdrawal, err := c.contractAbi.UnpackIntoInterface(&escapeHatchWithdrawal, "EscapeHatchWithdrawal", vLog.Data)
"EscapeHatchWithdrawal", vLog.Data)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
escapeHatchWithdrawal.Who = ethCommon.BytesToAddress(vLog.Topics[1].Bytes()) escapeHatchWithdrawal.Who = ethCommon.BytesToAddress(vLog.Topics[1].Bytes())
escapeHatchWithdrawal.To = ethCommon.BytesToAddress(vLog.Topics[2].Bytes()) escapeHatchWithdrawal.To = ethCommon.BytesToAddress(vLog.Topics[2].Bytes())
escapeHatchWithdrawal.Token = ethCommon.BytesToAddress(vLog.Topics[3].Bytes()) escapeHatchWithdrawal.Token = ethCommon.BytesToAddress(vLog.Topics[3].Bytes())
wdelayerEvents.EscapeHatchWithdrawal = wdelayerEvents.EscapeHatchWithdrawal = append(wdelayerEvents.EscapeHatchWithdrawal, escapeHatchWithdrawal)
append(wdelayerEvents.EscapeHatchWithdrawal, escapeHatchWithdrawal)
case logWDelayerNewEmergencyCouncil: case logWDelayerNewEmergencyCouncil:
var emergencyCouncil WDelayerEventNewEmergencyCouncil var emergencyCouncil WDelayerEventNewEmergencyCouncil
err := c.contractAbi.UnpackIntoInterface(&emergencyCouncil, err := c.contractAbi.UnpackIntoInterface(&emergencyCouncil, "NewEmergencyCouncil", vLog.Data)
"NewEmergencyCouncil", vLog.Data)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
wdelayerEvents.NewEmergencyCouncil = wdelayerEvents.NewEmergencyCouncil = append(wdelayerEvents.NewEmergencyCouncil, emergencyCouncil)
append(wdelayerEvents.NewEmergencyCouncil, emergencyCouncil)
case logWDelayerNewHermezGovernanceAddress: case logWDelayerNewHermezGovernanceAddress:
var governanceAddress WDelayerEventNewHermezGovernanceAddress var governanceAddress WDelayerEventNewHermezGovernanceAddress
err := c.contractAbi.UnpackIntoInterface(&governanceAddress, err := c.contractAbi.UnpackIntoInterface(&governanceAddress, "NewHermezGovernanceAddress", vLog.Data)
"NewHermezGovernanceAddress", vLog.Data)
if err != nil { if err != nil {
return nil, tracerr.Wrap(err) return nil, tracerr.Wrap(err)
} }
wdelayerEvents.NewHermezGovernanceAddress = wdelayerEvents.NewHermezGovernanceAddress = append(wdelayerEvents.NewHermezGovernanceAddress, governanceAddress)
append(wdelayerEvents.NewHermezGovernanceAddress, governanceAddress)
} }
} }
return &wdelayerEvents, nil return &wdelayerEvents, nil
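For readers skimming the hunks above: every `WDelayer*` method delegates to a generic `Call`/`CallAuth` helper that injects the underlying client and centralizes error handling, and the diff only re-joins the wrapped signatures onto single lines. A minimal stdlib-only sketch of that delegation pattern follows; the `client` type and `GetWithdrawalDelay` method are illustrative stand-ins, not the Hermez API.

```go
package main

import (
	"errors"
	"fmt"
)

// client stands in for the *EthereumClient wrapper seen in the diff:
// it owns the connection and funnels every contract call through Call,
// so transport and error handling live in one place.
type client struct {
	connected bool
}

// Call runs fn against the underlying connection, mirroring how
// c.client.Call(func(ec *ethclient.Client) error { ... }) is used above.
func (c *client) Call(fn func() error) error {
	if !c.connected {
		return errors.New("not connected")
	}
	return fn()
}

// GetWithdrawalDelay shows the named-return style from the diff: the
// closure assigns into the named result, Call wraps the transport, and
// the named results are returned at the end.
func (c *client) GetWithdrawalDelay() (withdrawalDelay int64, err error) {
	if err := c.Call(func() error {
		withdrawalDelay = 60 // stands in for the on-chain read
		return nil
	}); err != nil {
		return 0, err
	}
	return withdrawalDelay, nil
}

func main() {
	c := &client{connected: true}
	d, err := c.GetWithdrawalDelay()
	fmt.Println(d, err) // 60 <nil>
}
```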


@@ -18,7 +18,7 @@ var maxEmergencyModeTime = time.Hour * 24 * 7 * 26
 var maxWithdrawalDelay = time.Hour * 24 * 7 * 2
 
 func TestWDelayerInit(t *testing.T) {
-	wDelayerInit, blockNum, err := wdelayerClientTest.WDelayerEventInit(genesisBlock)
+	wDelayerInit, blockNum, err := wdelayerClientTest.WDelayerEventInit()
 	require.NoError(t, err)
 	assert.Equal(t, int64(16), blockNum)
 	assert.Equal(t, uint64(initWithdrawalDelay), wDelayerInit.InitialWithdrawalDelay)
@@ -54,8 +54,7 @@ func TestWDelayerSetHermezGovernanceAddress(t *testing.T) {
 	require.Nil(t, err)
 	wdelayerEvents, err := wdelayerClientTest.WDelayerEventsByBlock(currentBlockNum, nil)
 	require.Nil(t, err)
-	assert.Equal(t, auxAddressConst,
-		wdelayerEvents.NewHermezGovernanceAddress[0].NewHermezGovernanceAddress)
+	assert.Equal(t, auxAddressConst, wdelayerEvents.NewHermezGovernanceAddress[0].NewHermezGovernanceAddress)
 	_, err = wdelayerClientAux.WDelayerTransferGovernance(governanceAddressConst)
 	require.Nil(t, err)
 	_, err = wdelayerClientTest.WDelayerClaimGovernance()
@@ -69,8 +68,7 @@ func TestWDelayerGetEmergencyCouncil(t *testing.T) {
 }
 
 func TestWDelayerSetEmergencyCouncil(t *testing.T) {
-	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil,
-		wdelayerTestAddressConst)
+	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil, wdelayerTestAddressConst)
 	require.Nil(t, err)
 	wdelayerClientAux, err := NewWDelayerClient(ethereumClientAux, wdelayerTestAddressConst)
 	require.Nil(t, err)
@@ -202,18 +200,13 @@ func TestWDelayerGetEmergencyModeStartingTime(t *testing.T) {
 func TestWDelayerEscapeHatchWithdrawal(t *testing.T) {
 	amount := new(big.Int)
 	amount.SetString("10000000000000000", 10)
-	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil,
-		wdelayerTestAddressConst)
+	wdelayerClientEmergencyCouncil, err := NewWDelayerClient(ethereumClientEmergencyCouncil, wdelayerTestAddressConst)
 	require.Nil(t, err)
-	_, err =
-		wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst,
-			tokenHEZAddressConst, amount)
+	_, err = wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst, tokenHEZAddressConst, amount)
 	require.Contains(t, err.Error(), "NO_MAX_EMERGENCY_MODE_TIME")
 	seconds := maxEmergencyModeTime.Seconds()
 	addTime(seconds, ethClientDialURL)
-	_, err =
-		wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst,
-			tokenHEZAddressConst, amount)
+	_, err = wdelayerClientEmergencyCouncil.WDelayerEscapeHatchWithdrawal(governanceAddressConst, tokenHEZAddressConst, amount)
 	require.Nil(t, err)
 	currentBlockNum, err := wdelayerClientTest.client.EthLastBlock()
 	require.Nil(t, err)

go.mod

@@ -11,8 +11,8 @@ require (
 	github.com/gin-gonic/gin v1.5.0
 	github.com/gobuffalo/packr/v2 v2.8.1
 	github.com/hermeznetwork/tracerr v0.3.1-0.20210120162744-5da60b576169
-	github.com/iden3/go-iden3-crypto v0.0.6-0.20210308142348-8f85683b2cef
-	github.com/iden3/go-merkletree v0.0.0-20210308143313-8b63ca866189
+	github.com/iden3/go-iden3-crypto v0.0.6-0.20201221160344-58e589b6eb4c
+	github.com/iden3/go-merkletree v0.0.0-20210119155851-bb53e6ad1a12
 	github.com/jinzhu/copier v0.0.0-20190924061706-b57f9002281a
 	github.com/jmoiron/sqlx v1.2.1-0.20200615141059-0794cb1f47ee
 	github.com/joho/godotenv v1.3.0
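The go.mod hunk pins the iden3 dependencies to module pseudo-versions, which encode a base version, a UTC commit timestamp, and a 12-character commit hash prefix. A small sketch of how to read one apart (the helper name is illustrative; Go's own `golang.org/x/mod/module` package does this properly):

```go
package main

import (
	"fmt"
	"strings"
)

// splitPseudoVersion pulls apart a go.mod pseudo-version such as
// v0.0.6-0.20201221160344-58e589b6eb4c into its commit timestamp
// (UTC, yyyymmddhhmmss) and 12-char commit hash prefix.
func splitPseudoVersion(v string) (timestamp, hash string) {
	parts := strings.Split(v, "-")
	// The last dash-separated element is the hash; the one before it
	// ends with the timestamp (possibly prefixed by "0." for pre-releases).
	hash = parts[len(parts)-1]
	prev := parts[len(parts)-2]
	timestamp = prev[strings.LastIndex(prev, ".")+1:]
	return timestamp, hash
}

func main() {
	ts, h := splitPseudoVersion("v0.0.6-0.20201221160344-58e589b6eb4c")
	fmt.Println(ts, h)
}
```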

Some files were not shown because too many files have changed in this diff.