# IPA polynomial commitment by hand

*2023-04-06*

*This post was originally written in July 2022 during the '0xPARC Halo2 learning group'; posting it now as it was forgotten in a hackmd.*

## Context

During this past month (June 13 - July 8, 2022) I've been attending the [Halo2 Learning Group](https://0xparc.org/blog/halo2-learning-group) that [0xPARC](https://0xparc.org) organized. It has been an amazing experience and I've learned a lot from awesome people. For the last two weeks, the participants were encouraged to do a project related to the contents of the sessions; after some hesitation, I chose to do mine on the Inner Product Argument used in Halo2 for the polynomial commitments, which is described in the [Halo paper](https://eprint.iacr.org/2019/1021.pdf).

I've used this opportunity to study how IPA works, reading it from the [Halo paper](https://eprint.iacr.org/2019/1021.pdf) but also from the [Bulletproofs paper](https://eprint.iacr.org/2017/1066.pdf) and the good explanations in the [Dalek documentation](https://doc-internal.dalek.rs/bulletproofs/notes/inner_product_proof/index.html) and the [Dankrad Feist article](https://dankradfeist.de/ethereum/2021/07/27/inner-product-arguments.html).

Thanks to [Ye Zhang](https://twitter.com/yezhang1998), [Ying Tong](https://twitter.com/therealyingtong) and [Haichen Shen](https://twitter.com/shenhaichen) for their advice on the papers and resources to study. Also thanks to [David Nevado](https://github.com/davidnevadoc) for finding typos in this post.
## Intro

This article overviews the IPA construction described in the [Halo paper](https://eprint.iacr.org/2019/1021.pdf) by doing a step-by-step example with small numbers, following the style of the *["PLONK by Hand" series](https://research.metastate.dev/plonk-by-hand-part-1/)* by [Joshua Fitzgerald](https://twitter.com/lopeetall). This post does not cover the amortization technique proposed in the Halo paper.

Together with this by-hand step-by-step example, I've implemented it in Sage and also in Rust using arkworks:

- Sage: https://github.com/arnaucube/math/blob/master/ipa.sage
- Rust: https://github.com/arnaucube/ipa-rs

Also, if you're looking for other Sage/Python IPA implementations, you can find one in the [Darkfi/research repo](https://github.com/darkrenaissance/darkfi/tree/master/script/research/zk/bltprf) and another one in the [Ethereum/research repo](https://github.com/ethereum/research/tree/master/bulletproofs).

This post is divided into 3 parts:

1. [IPA overview](#ipa-overview)
2. [IPA by hand](#ipa-by%20hand)
3. [IPA Sage implementation](#now-let%E2%80%99s%20do%20a%20simple%20implementation)
## IPA overview

*This section provides an overview of the IPA scheme; for more details it is recommended to go directly to the previously mentioned papers and articles.*

The objective of the scheme is to allow the prover to prove that the polynomial $p(X)$ behind the commitment $P$ evaluates to $v$ at $x$, and that $deg(p(X)) \leq d-1$.

<div style="font-size:80%;background:#f9f9f9;padding-left:10px;">
Notation:
<ul>
<li>Scalar mul: $[a]G$, where $a$ is a scalar and $G \in \mathbb{G}$</li>
<li>Inner product: $\langle \overrightarrow{a}, \overrightarrow{b} \rangle = a_0 b_0 + a_1 b_1 + \ldots + a_{n-1} b_{n-1}$</li>
<li>Multiscalar mul: $\langle \overrightarrow{a}, \overrightarrow{G} \rangle = [a_0] G_0 + [a_1] G_1 + \ldots + [a_{n-1}] G_{n-1}$</li>
</ul>
</div>
We have a transparent setup consisting of a random vector of points $\overrightarrow{G} \in^r \mathbb{G}^d$ and a random point $H \in^r \mathbb{G}$.

The prover commits to the polynomial $p(X) = \sum_{i=0}^{d-1} a_i X^i$ by a Pedersen vector commitment

$$P=\langle \overrightarrow{a}, \overrightarrow{G} \rangle + [r]H$$

where $\overrightarrow{a}$ is the vector of the coefficients of $p(X)$ and $r$ is a blinding factor. And sets $v$ such that

$$v=\langle \overrightarrow{a}, \overrightarrow{b} \rangle = \langle \overrightarrow{a}, \{1, x, x^2, \ldots, x^{d-1} \} \rangle$$

We can see that computing $v$ is equivalent to evaluating $p(X)$ at $x$ (i.e. $p(x)=v$).
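As a quick sanity check of this equivalence, here is a minimal plain-Python sketch (the names and values are just illustrative, not part of the protocol):

```python
# Minimal sketch: the inner product <a, (1, x, x^2, ...)> equals p(x).
# Plain Python over the integers, just to illustrate the identity.
def inner_product(a, b):
    assert len(a) == len(b)
    return sum(ai * bi for ai, bi in zip(a, b))

a = [1, 2, 3, 4, 5, 6, 7, 8]        # coefficients of p(X)
x = 3
b = [x**i for i in range(len(a))]   # b = (1, x, x^2, ..., x^(d-1))

v = inner_product(a, b)
assert v == sum(ai * x**i for i, ai in enumerate(a))  # v = p(x)
```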
Both parties know $P$, the point $x$ and the claimed evaluation $v$. Additionally, a random point $U \in^r \mathbb{G}$ is used.

Now, for $k$ rounds ($d=2^k$, from $j=k$ to $j=1$):

- Prover sets random blinding factors: $l_j, r_j \in \mathbb{F}_p$
- Prover computes

$$L_j = \langle \overrightarrow{a}_{lo}, \overrightarrow{G}_{hi} \rangle + [l_j] H + [\langle \overrightarrow{a}_{lo}, \overrightarrow{b}_{hi} \rangle] U\\
R_j = \langle \overrightarrow{a}_{hi}, \overrightarrow{G}_{lo} \rangle + [r_j] H + [\langle \overrightarrow{a}_{hi}, \overrightarrow{b}_{lo} \rangle] U$$

- Verifier sends a random challenge $u_j$
- Prover computes the halved vectors for the next round:

$$\overrightarrow{a} \leftarrow \overrightarrow{a}_{hi} \cdot u_j^{-1} + \overrightarrow{a}_{lo} \cdot u_j\\
\overrightarrow{b} \leftarrow \overrightarrow{b}_{lo} \cdot u_j^{-1} + \overrightarrow{b}_{hi} \cdot u_j\\
\overrightarrow{G} \leftarrow \overrightarrow{G}_{lo} \cdot u_j^{-1} + \overrightarrow{G}_{hi} \cdot u_j$$

After the final round, $\overrightarrow{a}, \overrightarrow{b}, \overrightarrow{G}$ are each of length 1.
The verifier can compute $G = \overrightarrow{G}_0 = \langle \overrightarrow{s}, \overrightarrow{G} \rangle$ and $b = \overrightarrow{b}_0 = \langle \overrightarrow{s}, \overrightarrow{b} \rangle$,
where $\overrightarrow{s}$ is the binary counting structure:

$$
s = (u_1^{-1} ~ u_2^{-1} \cdots ~u_k^{-1},\\
~~~~~~~~~u_1 ~~~ u_2^{-1} ~\cdots ~u_k^{-1},\\
~~~~~~~~~u_1^{-1} ~~ u_2 ~~\cdots ~u_k^{-1},\\
~~~~~~~~~~~~~~~~~\vdots\\
~~~~~~~~~u_1 ~~~~ u_2 ~~\cdots ~u_k)
$$
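For instance, with $k=2$ rounds, $\overrightarrow{s}$ has $2^2=4$ entries:

$$
s = (u_1^{-1} u_2^{-1}, ~~ u_1 u_2^{-1}, ~~ u_1^{-1} u_2, ~~ u_1 u_2)
$$

In general, the $i$-th entry (counting from $0$) multiplies $u_j$ when bit $j-1$ of $i$ is set, and $u_j^{-1}$ otherwise, which matches the folding of $\overrightarrow{G}$ and $\overrightarrow{b}$ shown above.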
Also, the verifier can compute $P'$ as

$$P' = P + [v] U$$

which under the hood is equivalent to $P'= \langle \overrightarrow{a}, \overrightarrow{G} \rangle + [r]H + [v] U$.

Then, the verifier checks:

$$[a]G + [r'] H + [ab] U == P' + \sum_{j=1}^k ( [u_j^2] L_j + [u_j^{-2}] R_j)$$

where $r' = r + \sum_{j=1}^k (l_j u_j^2 + r_j u_j^{-2})$.

We can see how the two sides match if we unfold them (colors added to provide a bit of intuition on the relation between the values):
$$
\textcolor{brown}{[a]G} + \textcolor{cyan}{[r'] H} + \textcolor{magenta}{[ab] U}
==
\textcolor{blue}{P'} + \sum_{j=1}^k ( \textcolor{violet}{[u_j^2] L_j} + \textcolor{orange}{[u_j^{-2}] R_j})
$$

$$
LHS = \textcolor{brown}{[a]G} + \textcolor{cyan}{[r'] H} + \textcolor{magenta}{[ab] U}\\
= \textcolor{brown}{\langle \overrightarrow{a}, \overrightarrow{G} \rangle}\\
+ \textcolor{cyan}{[r + \sum_{j=1}^k (l_j \cdot u_j^2 + r_j \cdot u_j^{-2})] \cdot H}\\
+ \textcolor{magenta}{[\langle \overrightarrow{a}, \overrightarrow{b} \rangle] U}
$$

$$
RHS = \textcolor{blue}{P'} + \sum_{j=1}^k ( \textcolor{violet}{[u_j^2] L_j} + \textcolor{orange}{[u_j^{-2}] R_j})\\
= \textcolor{blue}{\langle \overrightarrow{a}, \overrightarrow{G} \rangle}\\
+ \textcolor{blue}{[r] H}\\
+ \sum_{j=1}^k (
\textcolor{violet}{[u_j^2] \cdot (\langle \overrightarrow{a}_{lo}, \overrightarrow{G}_{hi} \rangle + [l_j] H + [\langle \overrightarrow{a}_{lo}, \overrightarrow{b}_{hi} \rangle] U)}\\
\textcolor{orange}{+ [u_j^{-2}] \cdot (\langle \overrightarrow{a}_{hi}, \overrightarrow{G}_{lo} \rangle + [r_j] H + [\langle \overrightarrow{a}_{hi}, \overrightarrow{b}_{lo} \rangle] U)})\\
+ \textcolor{blue}{[\langle \overrightarrow{a}, \overrightarrow{b} \rangle] U}
$$
<br><br>

In the following diagram we can see a high-level overview of the steps of the protocol:

![](img/posts/ipa/sequence.png)
<!--
<pre class="mermaid" style="width:50%;background:#ffffff!important;margin-left:auto;margin-right:auto;">
%%{init:{'theme':'neutral'}}%%
sequenceDiagram
participant P as Prover
participant V as Verifier
P->>P: knows p(X)
P->>P: commit to p(X), P
P->>V: P
V->>V: rand x, U, u
V->>P: x, U, u
P->>P: eval v=p(x), gen proof
P->>V: proof, a, Lⱼ, Rⱼ, v
V->>V: verify(proof, P, a, x, v, Lⱼ, Rⱼ)
</pre>
-->
## IPA by hand

*This section provides a step-by-step example of IPA with small values, following the style of the ["PLONK by Hand" series](https://research.metastate.dev/plonk-by-hand-part-1/) by [Joshua Fitzgerald](https://twitter.com/lopeetall).*

We will use the elliptic curve $E(\mathbb{F}_{19}): y^2 = x^3 + 3$.

In the same way as was done in the [_Plonk by hand series_](https://research.metastate.dev/plonk-by-hand-part-1/), let's compute all the points from our generator $G=(1,2) \in E(\mathbb{F}_{19})$. We'll start with $2G$:

![](img/posts/ipa/00.png)

Combining point doubling and inverses for $4G, 6G, 8G, \ldots$, together with [point addition](https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplication#Point_addition), we obtain all the points of our curve:

![](img/posts/ipa/01.png)

As we find out, $14G = G$, thus $ord(G)_{E(\mathbb{F}_{19})} = 13$, so we will work in $\mathbb{F}_{13}$.
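If you want to double-check these by-hand computations, a couple of lines of Sage reproduce them (this duplicates part of the setup code shown later in the implementation section):

```python
# Quick check of the by-hand curve computations
E = EllipticCurve(GF(19), [0, 3])   # y^2 = x^3 + 3 over F_19
g = E(1, 2)                         # our generator G
assert 14 * g == g                  # 14G = G
assert g.order() == 13              # so scalars live in F_13
print(E.points())                   # all 13 points, including the point at infinity
```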
Before we start the interaction of the protocol, we need to set up some values. First of all, we need to fix the vector length $d$ we will work with (so that $deg(p(X)) \leq d-1$); we will use $d=8$ to keep the manual computations short.

We set up the vector $\overrightarrow{G}$ with random points (without a known discrete log relation between them), and also a random point $H$:

![](img/posts/ipa/02.png)

We define our polynomial $p(X)$, to which we want to commit, and let $\overrightarrow{a}$ be its coefficients:

![](img/posts/ipa/03.png)
The prover chooses a random blinding factor $r$ for the commitment:
![](img/posts/ipa/04.png)

Prover commits to $\overrightarrow{a}$:

![](img/posts/ipa/05.png)

We are following the Halo paper, which describes an IPA variant in which the second vector $\overrightarrow{b}$ is fixed for the given choice of $x$, namely $\overrightarrow{b} = \{1, x, x^2, x^3, \ldots, x^{d-1}\}$.

We will use $x=3$, so $\overrightarrow{b}$ is:

![](img/posts/ipa/06.png)

Now the prover computes the *inner product* between $\overrightarrow{a}$ and $\overrightarrow{b}$, using

![](img/posts/ipa/07.png)

which, given our choice of $\overrightarrow{b}$, is equivalent to evaluating $p(X)$ at $x=3$:

![](img/posts/ipa/08.png)

Now the verifier generates the random challenges $u_j \in \mathbb{I}$ and $U \in E$:

![](img/posts/ipa/09.png)

In a non-interactive version of the protocol, these values would be obtained by hashing the transcript ([Fiat-Shamir](https://en.wikipedia.org/wiki/Fiat%E2%80%93Shamir_heuristic)).
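To give an idea of what that looks like, here is a minimal sketch of deriving such a challenge by hashing the transcript. The serialization and the `point_to_bytes` helper are illustrative assumptions only; they are not the encoding used by Halo/Halo2.

```python
import hashlib

# Illustrative Fiat-Shamir sketch: derive a nonzero field element from bytes.
def fs_challenge(Fq, transcript_bytes):
    h = hashlib.sha256(transcript_bytes).digest()
    c = Fq(int.from_bytes(h, "big"))
    while c == 0:  # challenges must be invertible, so re-hash until nonzero
        h = hashlib.sha256(h).digest()
        c = Fq(int.from_bytes(h, "big"))
    return c

# e.g. u_j could be derived from the commitments of the current round
# (point_to_bytes is a hypothetical serialization helper):
# u_j = fs_challenge(Fq, point_to_bytes(P) + point_to_bytes(L_j) + point_to_bytes(R_j))
```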
The prover computes $P'$:

![](img/posts/ipa/10.png)

followed by $k=\log_2(d)=3$ rounds, explained below.

#### Rounds

For each round, from $j=k-1$ down to $j=0$, the prover will compute:

$$
L_j = \langle \overrightarrow{a}_{lo}, \overrightarrow{G}_{hi} \rangle + [l_j] H + [\langle \overrightarrow{a}_{lo}, \overrightarrow{b}_{hi} \rangle] U
$$

$$
R_j = \langle \overrightarrow{a}_{hi}, \overrightarrow{G}_{lo} \rangle + [r_j] H + [\langle \overrightarrow{a}_{hi}, \overrightarrow{b}_{lo} \rangle] U
$$

##### Round j=2:

First we split $\overrightarrow{a}, \overrightarrow{b}, \overrightarrow{G}$ into their respective left and right halves ($\overrightarrow{v}_{lo}, \overrightarrow{v}_{hi}$):

![](img/posts/ipa/11.png)

Set the random blinding factors $l_2, r_2$:

![](img/posts/ipa/12.png)

And let's calculate $L_2$:

![](img/posts/ipa/13.png)

And the same with $R_2$:

![](img/posts/ipa/14.png)

Now, we compute the new $\overrightarrow{a}, \overrightarrow{b}, \overrightarrow{G}$ to be used in the next round by:

$$
\overrightarrow{a} \longleftarrow \overrightarrow{a}_{hi} \cdot u_j^{-1} + \overrightarrow{a}_{lo} \cdot u_j\\
\overrightarrow{b} \longleftarrow \overrightarrow{b}_{lo} \cdot u_j^{-1} + \overrightarrow{b}_{hi} \cdot u_j
$$

![](img/posts/ipa/15.png)

![](img/posts/ipa/16.png)

And similarly for $\overrightarrow{G} \longleftarrow \overrightarrow{G}_{lo} \cdot u_j^{-1} + \overrightarrow{G}_{hi} \cdot u_j$:

![](img/posts/ipa/17.png)

We can see that $\overrightarrow{a}, \overrightarrow{b}, \overrightarrow{G}$ have halved in length.
##### Round j=1:

From the previous round we have:

![](img/posts/ipa/18.png)

Choose the random blinding factors:

![](img/posts/ipa/19.png)

And now, in the same fashion as in the previous round, we compute $L_1, R_1$:

![](img/posts/ipa/20.png)

![](img/posts/ipa/21.png)

Now, we compute the new $\overrightarrow{a}, \overrightarrow{b}, \overrightarrow{G}$ to be used in the next round:

![](img/posts/ipa/22.png)

![](img/posts/ipa/23.png)

![](img/posts/ipa/24.png)

##### Round j=0:

![](img/posts/ipa/25.png)

Set the random blinding factors $l_0, r_0$:

![](img/posts/ipa/26.png)

Compute $L_0, R_0$:

![](img/posts/ipa/27.png)

And compute the new halved $\overrightarrow{a}, \overrightarrow{b}, \overrightarrow{G}$:

![](img/posts/ipa/28.png)

![](img/posts/ipa/29.png)

![](img/posts/ipa/30.png)

The prover ends up with the outputs $a=\overrightarrow{a}_0, ~~b=\overrightarrow{b}_0, ~~G=\overrightarrow{G}_0$, the random blinding factors $\overrightarrow{l}, \overrightarrow{r}$, and the cross-terms $\overrightarrow{L}, \overrightarrow{R}$:

![](img/posts/ipa/31.png)
### Verify

First, the verifier recomputes $b$ and $G$. This can be done in more efficient ways described in the Halo paper, but for the sake of simplicity we will assume that the verifier computes $b$ and $G$ in the *naive way* (that is, running a loop similar to the prover's, starting from the original $\overrightarrow{b}, \overrightarrow{G}$ and halving them each round until obtaining $b, G$). In the Sage implementation provided in the next section, we will use the efficient approach based on $\overrightarrow{s}$, which was shown in the overview section.
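As a reference, here is a minimal sketch of that naive recomputation (the names are illustrative); it mirrors the prover's folding but only touches the public vectors:

```python
# Naive verifier-side recomputation of b and G (illustrative sketch).
# b_vec = (1, x, x^2, ..., x^(d-1)), G_vec = the public basis points,
# u = the challenges, with u[j] the challenge of round j, so the loop
# consumes them in the same reversed order as the prover.
def naive_recompute_b_G(b_vec, G_vec, u):
    b, G = b_vec[:], G_vec[:]
    for j in reversed(range(len(u))):
        m = len(b) // 2
        u_inv = u[j]^(-1)
        b = [b[i] * u_inv + b[m + i] * u[j] for i in range(m)]           # b_lo·u⁻¹ + b_hi·u
        G = [int(u_inv) * G[i] + int(u[j]) * G[m + i] for i in range(m)] # G_lo·u⁻¹ + G_hi·u
    return b[0], G[0]
```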
Now, the verifier can compute $P'$ in the same way that the prover did ($P'=P + [v] U$), and from there computes $Q_0 = P' + \sum_{j=1}^k ( [u_j^2] L_j + [u_j^{-2}] R_j)$:

![](img/posts/ipa/32.png)

As we did previously, for an easier by-hand computation, we can 'translate' the points into multiples of the generator $G$, to operate with them more easily without a computer:

![](img/posts/ipa/33.png)

We also need to compute $r' = r + \sum_{j=1}^k (l_j \cdot u_j^2 + r_j \cdot u_j^{-2})$:

![](img/posts/ipa/34.png)

And then compute $Q_1 = [a] G + [r'] H + [ab] U$:

![](img/posts/ipa/35.png)

By checking that $Q_0 == Q_1$, the verifier finishes the verification.
## Now let's do a simple implementation

We will do a simple implementation of the protocol in Sage.

First of all, we set up our curve:

```python
p = 19
Fp = GF(p)
E = EllipticCurve(Fp, [0, 3])
g = E(1, 2)
q = g.order()
Fq = GF(q)
```
Now let's create the 'IPA' class:

```python
def random_values(G, d):
    # returns a vector of d random points of the group G
    r = [None] * d
    for i in range(d):
        r[i] = G.random_element()
    return r

class IPA_halo(object):
    def __init__(self, F, E, g, d):
        self.g = g
        self.F = F
        self.E = E
        self.d = d
        self.h = E.random_element()
        self.gs = random_values(E, d)
        self.hs = random_values(E, d)
```
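The snippets below use an `ipa` instance of this class. Its instantiation is not shown in the excerpts, so here is the one assumed in the rest of the post (with the same $d=8$ as in the by-hand section):

```python
d = 8                        # vector length, so deg(p(X)) <= d-1
ipa = IPA_halo(Fq, E, g, d)  # scalars in F_13, points in E(F_19)
```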
Now we can generate all the elliptic curve points in a similar way to when we computed them by hand in the previous section:

```python
print("\nlet's generate all the points:")
P = g + g
print("P_0:", g)
i = 1
while P != g:
    print("P_%d:" % i, P)
    # repeated doubling visits every nonzero multiple of g here,
    # since 2 generates the multiplicative group mod 13
    P = P + P
    i += 1
print("P_%d:" % i, E(0), "(point at infinity)")
```
Alternatively, we can use Sage's methods:

```python
# alternatively:
print("points", E.points())
print("number of points", len(E.points()))
```
Now the Prover sets the polynomial $p(X)$ and the Verifier sets the evaluation point $x$:

```python
print("\ndefine p(X) = 1 + 2x + 3x² + 4x³ + 5x⁴ + 6x⁵ + 7x⁶ + 8x⁷")
a = [ipa.F(1), ipa.F(2), ipa.F(3), ipa.F(4),
     ipa.F(5), ipa.F(6), ipa.F(7), ipa.F(8)]
x = ipa.F(3)
x_powers = powers_of(x, ipa.d)  # = b
```
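The helper `powers_of` is not shown in these excerpts; a minimal version consistent with how it is used here could be:

```python
def powers_of(x, d):
    # b = (1, x, x^2, ..., x^(d-1)) as field elements
    return [x^i for i in range(d)]
```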
Prover sets the blinding factor $r$:

```python
r = int(ipa.F.random_element())
```
We will implement some utility functions that we'll use in the rest of the implementation:

```python
def inner_product_field(a, b):
    # <a, b> for two vectors of field elements
    assert len(a) == len(b)
    c = 0
    for i in range(len(a)):
        c = c + a[i] * b[i]
    return c

def inner_product_point(a, b):
    # <a, G> for a vector of field elements and a vector of curve points
    assert len(a) == len(b)
    c = 0 * b[0]  # start from the point at infinity (group identity)
    for i in range(len(a)):
        c = c + int(a[i]) * b[i]
    return c

def vec_add(a, b):
    assert len(a) == len(b)
    return [x + y for x, y in zip(a, b)]

def vec_mul(a, b):
    assert len(a) == len(b)
    return [x * y for x, y in zip(a, b)]

def vec_scalar_mul_field(a, n):
    r = [None] * len(a)
    for i in range(len(a)):
        r[i] = a[i] * n
    return r

def vec_scalar_mul_point(a, n):
    r = [None] * len(a)
    for i in range(len(a)):
        r[i] = a[i] * int(n)
    return r
```
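As a quick check that these helpers behave as expected, we can verify that the inner product of $\overrightarrow{a}$ with the powers of $x$ really is the evaluation $p(x)$ (this snippet is just an illustrative check, not part of the protocol):

```python
# sanity check: <a, (1, x, x^2, ...)> == p(x)
b_check = powers_of(x, ipa.d)
v1 = inner_product_field(a, b_check)
v2 = sum([a[i] * x^i for i in range(len(a))])  # evaluate p(X) at x directly
assert v1 == v2
```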
And we will implement the commit and evaluate methods in the `IPA` class:

```python
class IPA_halo:
    # [...]
    def commit(self, a, r):
        # P = <a, G> + [r] H
        P = inner_product_point(a, self.gs) + r * self.h
        return P

    def evaluate(self, a, x_powers):
        # v = <a, b>, with b the powers of x
        return inner_product_field(a, x_powers)
```
So now the prover can commit to $\overrightarrow{a}$ and evaluate it at the powers of $x$:

```python
print("\nProver commits to a:")
P = ipa.commit(a, r)
print(" commit: P = <a, G> + r * H =", P)
print("Evaluates <a, b> with b = {1, x, x², …, xᵈ⁻¹}:")
v = ipa.evaluate(a, x_powers)
print(" v =", v)
print(" (which is equivalent to evaluating p(X) at x=3)")
```
The verifier generates the random challenges $u_j \in \mathbb{I}$ and $U \in \mathbb{G}$:

```python
import math

print("\nVerifier generates random challenges {uⱼ} ∈ 𝕀 and U ∈ 𝔾")
U = ipa.E.random_element()
k = int(math.log(ipa.d, 2))
u = [None] * k
for j in reversed(range(0, k)):
    u[j] = ipa.F.random_element()
    while (u[j] == 0):  # make sure u[j] is nonzero, i.e. invertible
        u[j] = ipa.F.random_element()
print(" U =", U)
print(" u =", u)
```
Now let's implement the `ipa` proof generation method in the `IPA` class:

```python
class IPA_halo:
    # [...]
    def ipa(self, a_, x_powers, u, U):  # prove
        G = self.gs
        a = a_
        b = x_powers
        k = int(math.log(self.d, 2))
        l = [None] * k
        r = [None] * k
        L = [None] * k
        R = [None] * k

        for j in reversed(range(0, k)):
            m = len(a) // 2
            a_lo = a[:m]
            a_hi = a[m:]
            b_lo = b[:m]
            b_hi = b[m:]
            G_lo = G[:m]
            G_hi = G[m:]

            l[j] = self.F.random_element()  # random blinding factor
            r[j] = self.F.random_element()  # random blinding factor

            # Lⱼ = <a'ₗₒ, G'ₕᵢ> + [lⱼ] H + [<a'ₗₒ, b'ₕᵢ>] U
            L[j] = inner_product_point(a_lo, G_hi) + int(l[j]) * self.h + int(inner_product_field(a_lo, b_hi)) * U
            # Rⱼ = <a'ₕᵢ, G'ₗₒ> + [rⱼ] H + [<a'ₕᵢ, b'ₗₒ>] U
            R[j] = inner_product_point(a_hi, G_lo) + int(r[j]) * self.h + int(inner_product_field(a_hi, b_lo)) * U

            # use the random challenge uⱼ ∈ 𝕀 generated by the verifier
            u_ = u[j]                  # uⱼ
            u_inv = self.F(u[j])^(-1)  # uⱼ⁻¹

            # fold the vectors, halving their length
            a = vec_add(vec_scalar_mul_field(a_lo, u_), vec_scalar_mul_field(a_hi, u_inv))
            b = vec_add(vec_scalar_mul_field(b_lo, u_inv), vec_scalar_mul_field(b_hi, u_))
            G = vec_add(vec_scalar_mul_point(G_lo, u_inv), vec_scalar_mul_point(G_hi, u_))

        assert len(a) == 1
        assert len(b) == 1
        assert len(G) == 1

        # a, b, G have length 1
        # l, r are the random blinding factors
        # L, R are the "cross-terms" of the inner product
        return a[0], b[0], G[0], l, r, L, R
```
Now the prover computes the Inner Product Argument using the `ipa` method that we just implemented:

```python
print("\nProver computes the Inner Product Argument:")
a_ipa, b_ipa, G_ipa, lj, rj, L, R = ipa.ipa(a, x_powers, u, U)
print(" a =", a_ipa)
print(" b =", b_ipa)
print(" G =", G_ipa)
print(" l_j =", lj)
print(" r_j =", rj)
print(" L =", L)
print(" R =", R)
```
Now let's implement the verification:

```python
class IPA_halo:
    # [...]
    def verify(self, P, a, v, x_powers, r, u, U, lj, rj, L, R):
        # compute P' = P + [v] U
        P = P + int(v) * U

        # recompute b and G from the binary counting structure s
        s = build_s_from_us(u, self.d)
        b = inner_product_field(s, x_powers)
        G = inner_product_point(s, self.gs)

        # synthetic blinding factor
        # r' = r + ∑ ( lⱼ uⱼ² + rⱼ uⱼ⁻²)
        r_ = r
        # Q_0 = P' + ∑ ( [uⱼ²] Lⱼ + [uⱼ⁻²] Rⱼ)
        Q_0 = P
        for j in range(len(u)):
            u_ = u[j]          # uⱼ
            u_inv = u[j]^(-1)  # uⱼ⁻¹
            # ∑ ( [uⱼ²] Lⱼ + [uⱼ⁻²] Rⱼ)
            Q_0 = Q_0 + int(u[j]^2) * L[j] + int(u_inv^2) * R[j]
            r_ = r_ + lj[j] * (u_^2) + rj[j] * (u_inv^2)

        Q_1 = int(a) * G + int(r_) * self.h + int(a * b) * U
        return Q_0 == Q_1
```
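The `verify` method uses a helper `build_s_from_us`, which builds the binary counting structure $\overrightarrow{s}$ described in the overview section. It is not shown in these excerpts; a minimal version consistent with the folding order used in the `ipa` method could be:

```python
def build_s_from_us(u, d):
    # s_i = ∏_j (u[j] if the j-th bit of i is set, else u[j]⁻¹),
    # matching the folding in ipa(): u[0] is the challenge of the last round.
    k = len(u)
    assert d == 2^k
    s = [None] * d
    for i in range(d):
        s[i] = 1
        for j in range(k):
            if (i >> j) & 1:
                s[i] = s[i] * u[j]
            else:
                s[i] = s[i] * u[j]^(-1)
    return s
```

With this helper, the verifier recomputes $b$ and $G$ with two multiscalar operations instead of re-running the folding loop.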
And the verifier uses this method to verify the proof:

```python
verif = ipa.verify(P, a_ipa, v, x_powers, r, u, U, lj, rj, L, R)
assert verif == True
```

Now that we have the implementation of the protocol in Sage, we could replace the elliptic curve with a more 'real' curve to see it working with more realistic values.
## Final note

As mentioned at the beginning, this post is not intended to provide the full description of the scheme and its proofs, but to give an overview with a by-hand step-by-step example and a simple Sage implementation. You can find the [complete Sage implementation here](https://github.com/arnaucube/math/blob/master/ipa.sage). Additionally, here you can find [a Rust implementation using arkworks](https://github.com/arnaucube/ipa-rs).

One thing worth mentioning is that the IPA proving cost is $\mathcal{O}(n)$, the proof size is $\mathcal{O}(log(n))$, and the verification cost is $\mathcal{O}(n)$. An interesting technique not covered in this post is the one presented in the [Halo](https://eprint.iacr.org/2019/1021.pdf) paper, in which the cost of verification is amortized, achieving practical $\mathcal{O}(log(n))$ verification.

IPA is used as the polynomial commitment scheme in places like the already mentioned [Halo](https://eprint.iacr.org/2019/1021.pdf) (and [Halo2](https://zcash.github.io/halo2/index.html)), but it can also be combined with other schemes such as [Marlin](https://iacr.org/archive/eurocrypt2020/12105185/12105185.pdf).

I'm still amazed by the existence of all the 'magic' of polynomial commitments, and how they can be combined with polynomial IOPs to achieve practical SNARKs.
<style>
p > img{
    max-width: 70%!important;
    display: block;
    margin-left: auto;
    margin-right: auto;
}
</style>