19 Commits

Author SHA1 Message Date
Eduard S
4349ce584f Merge pull request #24 from iden3/feature/update
Update deps
2020-06-03 12:17:41 +02:00
Eduard S
9555517797 Update deps 2020-06-03 12:11:07 +02:00
arnau
ec6920aa11 Merge pull request #22 from iden3/feature/bugfix
Fix binary parser bug, make go.bin format deterministic
2020-05-22 17:30:11 +02:00
Eduard S
6e31deb5b8 Fix binary parser bug, make go.bin format deterministic 2020-05-22 16:50:26 +02:00
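A common source of nondeterministic output in Go serializers is map iteration order, which is randomized between runs; the sortedKeys helper that appears in the parsers.go diff further down iterates the PolsA/PolsB coefficient maps in sorted key order for exactly this reason. A minimal, self-contained sketch of the pattern (the example map and main function are illustrative only):

```
package main

import (
	"fmt"
	"math/big"
	"sort"
)

// sortedKeys returns the map keys in ascending order so that serializing
// the map is byte-for-byte deterministic; ranging over the map directly
// would visit the keys in a different order on every run.
func sortedKeys(m map[int]*big.Int) []int {
	keys := make([]int, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Ints(keys)
	return keys
}

func main() {
	pol := map[int]*big.Int{3: big.NewInt(7), 0: big.NewInt(1), 5: big.NewInt(2)}
	for _, k := range sortedKeys(pol) {
		fmt.Println(k, pol[k]) // always 0, 3, 5, regardless of map order
	}
}
```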
Eduard S
94dc934f62 Merge pull request #21 from iden3/feature/rm-unnecessary
Remove unnecessary structs & polsC
2020-05-22 13:43:38 +02:00
arnaucube
9f2587151f Remove unnecessary structs & polsC 2020-05-22 13:02:30 +02:00
Eduard S
e652f34753 Merge pull request #19 from iden3/fix/pk-parse-benchmarks
Fix pk parser benchmarks
2020-05-21 16:19:07 +02:00
arnaucube
42961f6b94 Fix pk parser benchmarks
Parsers were working correctly, but the benchmarks had errors.

The benchmarks in commit d1b3979eb6 are incorrect; the correct ones are:

```
BenchmarkParsePk/ParsePkJson_circuit1k-4         	       2	 529437960 ns/op
BenchmarkParsePk/ParsePkBin_circuit1k-4          	       2	 607792597 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit1k-4        	       2	 540594611 ns/op
BenchmarkParsePk/ParsePkJson_circuit5k-4         	       1	2769819086 ns/op
BenchmarkParsePk/ParsePkBin_circuit5k-4          	       1	3094913319 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit5k-4        	       1	2404651389 ns/op
BenchmarkParsePk/ParsePkJson_circuit10k-4        	       1	5374917709 ns/op
BenchmarkParsePk/ParsePkBin_circuit10k-4         	       1	5756633515 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit10k-4       	       1	4782081310 ns/op
BenchmarkParsePk/ParsePkJson_circuit20k-4        	       1	10374987398 ns/op
BenchmarkParsePk/ParsePkBin_circuit20k-4         	       1	11528361584 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit20k-4       	       1	9541829245 ns/op
BenchmarkParsePk/ParsePkJson_circuit50k-4         	       1	25979727146 ns/op
BenchmarkParsePk/ParsePkBin_circuit50k-4          	       1	28434810627 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit50k-4        	       1	23860248412 ns/op
```

The size of the ProvingKey file for circuits of 20k and 50k constraints:
```
circuit 20k constraints:
10097876 bytes of proving_key.go.bin
10097876 bytes of proving_key.bin
29760049 bytes of proving_key.json

circuit 50k constraints:
24195028 bytes of proving_key.go.bin
24194964 bytes of proving_key.bin
71484081 bytes of proving_key.json
```
2020-05-21 13:19:45 +02:00
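The corrected numbers above match the benchmark code added in the parser_test.go diff below, which rewinds the *os.File with Seek(0, 0) before every iteration; without the rewind, later iterations read an already-exhausted reader and finish almost instantly. A hedged sketch of that pattern (benchmarkParse and the parse callback are illustrative names, not part of the repository):

```
package parsers

import (
	"os"
	"testing"
)

// benchmarkParse times a file-based parser correctly: the parser consumes
// the *os.File, so the offset must be reset before each iteration or the
// measured time collapses to the cost of reading EOF.
func benchmarkParse(b *testing.B, path string, parse func(*os.File) error) {
	f, err := os.Open(path)
	if err != nil {
		b.Fatal(err)
	}
	defer f.Close()
	for i := 0; i < b.N; i++ {
		if _, err := f.Seek(0, 0); err != nil { // rewind before each parse
			b.Fatal(err)
		}
		if err := parse(f); err != nil {
			b.Fatal(err)
		}
	}
}
```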
Eduard S
386758370e Merge pull request #18 from iden3/feature/pk-own-format
Add own go-circom ProvingKey Bin format
2020-05-21 11:25:53 +02:00
arnaucube
2b8a15ca1a Add cli to convert Pk to go-circom binary file 2020-05-20 18:07:10 +02:00
arnaucube
d1b3979eb6 Add own go-circom ProvingKey Bin format
Add parsers from bin to pk and from pk to bin.

```
BenchmarkParsePk/ParsePkJson_circuit1k-4         	  595215	      1954 ns/op
BenchmarkParsePk/ParsePkBin_circuit1k-4          	       2	 522174688 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit1k-4        	  623182	      1901 ns/op
BenchmarkParsePk/ParsePkJson_circuit5k-4         	       1	2861959141 ns/op
BenchmarkParsePk/ParsePkBin_circuit5k-4          	       1	2610638932 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit5k-4        	  612906	      1907 ns/op
BenchmarkParsePk/ParsePkJson_circuit10k-4        	       1	5793696755 ns/op
BenchmarkParsePk/ParsePkBin_circuit10k-4         	       1	5204251056 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit10k-4       	  603478	      1913 ns/op
BenchmarkParsePk/ParsePkJson_circuit20k-4        	       1	11404022165 ns/op
BenchmarkParsePk/ParsePkBin_circuit20k-4         	       1	10392012613 ns/op
BenchmarkParsePk/ParsePkGoBin_circuit20k-4       	  620488	      1921 ns/op
```

For the `10k` constraints circuit this is around a `1_368_976x` improvement.
For the `20k` constraints circuit this is around a `5_409_689x` improvement.
2020-05-20 13:35:06 +02:00
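For orientation, the go.bin layout implied by PkToGoBin/ParsePkGoBin in the diff below starts with a fixed 40-byte header: nVars, nPublic and domainSize as little-endian uint32 values, followed by byte offsets of the PolsA/PolsB coefficient sections and of the A, B1, B2, C and HExps point sections. A hedged sketch of reading just that header (the pkGoBinHeader type and readHeader function are illustrative, not part of the library):

```
package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// pkGoBinHeader mirrors the fixed-size header written by PkToGoBin:
// 12 bytes of sizes, 8 bytes of polynomial-section offsets and 20 bytes
// of point-section offsets, all little-endian uint32.
type pkGoBinHeader struct {
	NVars, NPublic, DomainSize           uint32
	PolsA, PolsB                         uint32
	PointsA, PointsB1, PointsB2, PointsC uint32
	PointsHExps                          uint32
}

func readHeader(path string) (*pkGoBinHeader, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	var h pkGoBinHeader
	// binary.Read fills the struct fields in declaration order.
	if err := binary.Read(f, binary.LittleEndian, &h); err != nil {
		return nil, err
	}
	return &h, nil
}

func main() {
	h, err := readHeader("proving_key.go.bin")
	if err != nil {
		fmt.Println("Error:", err)
		return
	}
	fmt.Printf("%+v\n", h)
}
```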
Eduard S
de95d7a3e4 Merge pull request #16 from iden3/feature/pk-bin
Add ProvingKey binary parser
2020-05-20 12:31:36 +02:00
arnaucube
5c2aaec1ca Add ProvingKey binary parser
Parsing time doesn't improve, as the data from the binary
file needs to be converted to `affine` representation and then parsed
into the `bn256.G1` & `bn256.G2` formats (Montgomery). But the
ProvingKey files are much smaller in binary, so they will be better
handled by smartphones.

- Parse time benchmarks:
```
BenchmarkParsePk/Parse_Pk_bin_circuit1k-4         	       2	 563729994 ns/op
BenchmarkParsePk/Parse_Pk_json_circuit1k-4        	  635641	      1941 ns/op
BenchmarkParsePk/Parse_Pk_bin_circuit5k-4         	       1	2637941658 ns/op
BenchmarkParsePk/Parse_Pk_json_circuit5k-4        	       1	2986560185 ns/op
BenchmarkParsePk/Parse_Pk_bin_circuit10k-4        	       1	5337639150 ns/op
BenchmarkParsePk/Parse_Pk_json_circuit10k-4       	       1	6149568824 ns/op
BenchmarkParsePk/Parse_Pk_bin_circuit20k-4        	       1	10533654623 ns/op
BenchmarkParsePk/Parse_Pk_json_circuit20k-4       	       1	11657326851 ns/op
```

- Size of the ProvingKey file for circuits of 20k and 50k constraints:
```
circuit 20k constraints:
10097812 bytes of proving_key.bin
29760049 bytes of proving_key.json

circuit 50k constraints:
24194964 bytes of proving_key.bin
71484081 bytes of proving_key.json
```
2020-05-20 12:25:27 +02:00
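The conversion cost described above is a Montgomery reduction per coordinate: the circom binary stores x·R mod q with R = 2^256, and coordFromMont (added in the parsers.go diff below) multiplies by the modular inverse of R to recover x before the point can be unmarshalled into bn256. A standalone toy example of the same operation (small modulus chosen only for illustration):

```
package main

import (
	"fmt"
	"math/big"
)

// coordFromMont recovers x from its Montgomery representation u = x*R mod q,
// with R = 2^256, by multiplying by the modular inverse of R.
func coordFromMont(u, q *big.Int) *big.Int {
	rInv := new(big.Int).ModInverse(new(big.Int).Lsh(big.NewInt(1), 256), q)
	return new(big.Int).Mod(new(big.Int).Mul(u, rInv), q)
}

func main() {
	// Toy round trip with a small prime; the real code uses the bn256 modulus Q.
	q := big.NewInt(97)
	r := new(big.Int).Lsh(big.NewInt(1), 256)
	x := big.NewInt(42)
	u := new(big.Int).Mod(new(big.Int).Mul(x, r), q) // into Montgomery form
	fmt.Println(coordFromMont(u, q))                 // prints 42
}
```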
arnau
bedd64cc70 Merge pull request #15 from iden3/feature/proofjson
Add JSON Marshal/Unmarshal to Proof
2020-05-15 12:00:33 +02:00
Eduard S
0d4e2581bd Add JSON Marshal/Unmarshal to Proof 2020-05-15 11:43:47 +02:00
Eduard S
1aa316cbd2 Merge pull request #11 from iden3/feature/proof-parsers
Add proof parsers to string (decimal & hex)
2020-05-07 12:15:08 +02:00
arnaucube
0f48cfa2a5 Add proof parsers to string (decimal & hex)
Also adds ProofToSmartContractFormat, which returns a ProofString, since after
swapping the proof.B elements the result is not a valid point in the bn256.G2 format.

Also unexports internal structs and methods of the prover package, and applies golint.
2020-05-06 14:45:44 +02:00
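The swap mentioned above only exchanges the two components inside each element of proof.B (see ProofStringToSmartContractFormat in the parsers.go diff below); A, C and the protocol tag pass through unchanged, and since the swapped pair is no longer a valid bn256.G2 encoding the result stays in string form. A minimal sketch of just the swap (swapB is an illustrative helper, not part of the package):

```
package main

import "fmt"

// swapB reorders the inner pairs of the proof's B element, mirroring what
// ProofStringToSmartContractFormat does; everything else is copied as-is.
func swapB(b [2][2]string) [2][2]string {
	return [2][2]string{
		{b[0][1], b[0][0]},
		{b[1][1], b[1][0]},
	}
}

func main() {
	b := [2][2]string{{"b00", "b01"}, {"b10", "b11"}}
	fmt.Println(swapB(b)) // [[b01 b00] [b11 b10]]
}
```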
Eduard S
6ec118d4e2 Merge pull request #10 from iden3/feature/tables2
Add G1/G2 table calculation functionality
2020-05-06 11:43:06 +02:00
druiz0992
423d5f0ce7 Add G1/G2 table calculation functionality 2020-05-06 09:27:15 +02:00
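The tables added in prover/tables.go (see the diff below) precompute, for each group of gsize base points, every subset sum indexed by its bitmask (Table[0] = Inf, Table[1] = a[0], Table[2] = a[1], Table[3] = a[0]+a[1], and so on), so a multi-scalar multiplication only needs one table lookup and addition per bit position. A toy, integer-only sketch of the same indexing, with plain int addition standing in for EC point addition (assumes len(a) == gsize):

```
package main

import "fmt"

// buildTable mimics newTableG1 over the integers: entry i holds the sum of
// the base elements whose bit is set in i, computed with a single addition
// per entry by reusing the largest power of two already filled in.
func buildTable(a []int, gsize int) []int {
	table := make([]int, 1<<gsize)
	lastPow2, nelems := 1, 0
	for i := 1; i < 1<<gsize; i++ {
		if i&(i-1) == 0 { // power of two: copy the next base element
			lastPow2 = i
			table[i] = a[nelems]
			nelems++
		} else {
			table[i] = table[lastPow2] + table[i-lastPow2]
		}
	}
	return table
}

func main() {
	t := buildTable([]int{3, 5, 7}, 3)
	fmt.Println(t)        // [0 3 5 8 7 10 12 15]
	fmt.Println(t[0b101]) // a[0]+a[2] = 10
}
```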
16 changed files with 1602 additions and 572 deletions


@@ -23,6 +23,6 @@ jobs:
      - name: Compile circuits and execute Go tests
        run: |
          cd testdata && sh ./compile-circuits.sh && cd ..
-         go run cli/cli.go -prove -provingkey=testdata/circuit1k/proving_key.json -witness=testdata/circuit1k/witness.json -proof=testdata/circuit1k/proof.json -public=testdata/circuit1k/public.json
+         go run cli/cli.go -prove -pk=testdata/circuit1k/proving_key.json -witness=testdata/circuit1k/witness.json -proof=testdata/circuit1k/proof.json -public=testdata/circuit1k/public.json
-         go run cli/cli.go -prove -provingkey=testdata/circuit5k/proving_key.json -witness=testdata/circuit5k/witness.json -proof=testdata/circuit5k/proof.json -public=testdata/circuit5k/public.json
+         go run cli/cli.go -prove -pk=testdata/circuit5k/proving_key.json -witness=testdata/circuit5k/witness.json -proof=testdata/circuit5k/proof.json -public=testdata/circuit5k/public.json
          go test ./...

cli/.gitignore (vendored, new file)

@@ -0,0 +1 @@
cli


@@ -21,12 +21,14 @@ func main() {
	prove := flag.Bool("prove", false, "prover mode")
	verify := flag.Bool("verify", false, "verifier mode")
+	convert := flag.Bool("convert", false, "convert mode, to convert between proving_key.json to proving_key.go.bin")
-	provingKeyPath := flag.String("provingkey", "proving_key.json", "provingKey path")
+	provingKeyPath := flag.String("pk", "proving_key.json", "provingKey path")
	witnessPath := flag.String("witness", "witness.json", "witness path")
	proofPath := flag.String("proof", "proof.json", "proof path")
-	verificationKeyPath := flag.String("verificationkey", "verification_key.json", "verificationKey path")
+	verificationKeyPath := flag.String("vk", "verification_key.json", "verificationKey path")
	publicPath := flag.String("public", "public.json", "public signals path")
+	provingKeyBinPath := flag.String("pkbin", "proving_key.go.bin", "provingKey Bin path")
	flag.Parse()
@@ -42,6 +44,12 @@ func main() {
		fmt.Println("Error:", err)
	}
	os.Exit(0)
+	} else if *convert {
+		err := cmdConvert(*provingKeyPath, *provingKeyBinPath)
+		if err != nil {
+			fmt.Println("Error:", err)
+		}
+		os.Exit(0)
	}
	flag.PrintDefaults()
}
@@ -133,3 +141,27 @@ func cmdVerify(proofPath, verificationKeyPath, publicPath string) error {
	fmt.Println("verification:", v)
	return nil
}
func cmdConvert(provingKeyPath, provingKeyBinPath string) error {
fmt.Println("Convertion tool")
provingKeyJson, err := ioutil.ReadFile(provingKeyPath)
if err != nil {
return err
}
pk, err := parsers.ParsePk(provingKeyJson)
if err != nil {
return err
}
fmt.Printf("Converting proving key json (%s)\nto go proving key binary (%s)\n", provingKeyPath, provingKeyBinPath)
pkGBin, err := parsers.PkToGoBin(pk)
if err != nil {
return err
}
if err = ioutil.WriteFile(provingKeyBinPath, pkGBin, 0644); err != nil {
return err
}
return nil
}

go.mod

@@ -4,6 +4,6 @@ go 1.14
require (
	github.com/ethereum/go-ethereum v1.9.13
-	github.com/iden3/go-iden3-crypto v0.0.5-0.20200421133134-14c3144613d4
+	github.com/iden3/go-iden3-crypto v0.0.5
	github.com/stretchr/testify v1.4.0
)

go.sum

@@ -68,6 +68,8 @@ github.com/iden3/go-iden3-crypto v0.0.4 h1:rGQEFBvX6d4fDxqkQTizVq5UefB+xdZAg8j5F
github.com/iden3/go-iden3-crypto v0.0.4/go.mod h1:LLcgB7DLWAUs+8eBSKne+ZHy5z7xtAmlYlEz0M9M8gE=
github.com/iden3/go-iden3-crypto v0.0.5-0.20200421133134-14c3144613d4 h1:C+WGAJM9G5MxU62cAVrcwivFLk1muyENjGD5DGADk5o=
github.com/iden3/go-iden3-crypto v0.0.5-0.20200421133134-14c3144613d4/go.mod h1:XKw1oDwYn2CIxKOtr7m/mL5jMn4mLOxAxtZBRxQBev8=
+github.com/iden3/go-iden3-crypto v0.0.5 h1:inCSm5a+ry+nbpVTL/9+m6UcIwSv6nhUm0tnIxEbcps=
+github.com/iden3/go-iden3-crypto v0.0.5/go.mod h1:XKw1oDwYn2CIxKOtr7m/mL5jMn4mLOxAxtZBRxQBev8=
github.com/influxdata/influxdb v1.2.3-0.20180221223340-01288bdb0883/go.mod h1:qZna6X/4elxqT3yI9iZYdZrWWdeFOOprn86kgg4+IzY=
github.com/jackpal/go-nat-pmp v1.0.2-0.20160603034137-1fa385a6f458/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jmespath/go-jmespath v0.0.0-20180206201540-c2b33e8439af/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=


@@ -3,12 +3,14 @@ package parsers
import (
	"bufio"
	"bytes"
+	"encoding/binary"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"io"
	"math/big"
	"os"
+	"sort"
	"strconv"
	"strings"
@@ -33,7 +35,6 @@ type PkString struct {
	DomainSize int `json:"domainSize"`
	PolsA []map[string]string `json:"polsA"`
	PolsB []map[string]string `json:"polsB"`
-	PolsC []map[string]string `json:"polsC"`
}
// WitnessString contains the Witness in string representation
@@ -148,10 +149,6 @@ func pkStringToPk(ps PkString) (*types.Pk, error) {
	if err != nil {
		return nil, err
	}
-	p.PolsC, err = polsStringToBigInt(ps.PolsC)
-	if err != nil {
-		return nil, err
-	}
	return &p, nil
}
@@ -308,6 +305,13 @@ func stringToBigInt(s string) (*big.Int, error) {
	return n, nil
}
+func addPadding32(b []byte) []byte {
+	if len(b) != 32 {
+		b = addZPadding(b)
+	}
+	return b
+}
func addZPadding(b []byte) []byte {
	var z [32]byte
	var r []byte
@@ -467,8 +471,35 @@ func stringToG2(h [][]string) (*bn256.G2, error) {
	return p, err
}
-// ProofToJson outputs the Proof i Json format
-func ProofToJson(p *types.Proof) ([]byte, error) {
+// ProofStringToSmartContractFormat converts the ProofString to a ProofString in the SmartContract format in a ProofString structure
+func ProofStringToSmartContractFormat(s ProofString) ProofString {
var rs ProofString
rs.A = make([]string, 2)
rs.B = make([][]string, 2)
rs.B[0] = make([]string, 2)
rs.B[1] = make([]string, 2)
rs.C = make([]string, 2)
rs.A[0] = s.A[0]
rs.A[1] = s.A[1]
rs.B[0][0] = s.B[0][1]
rs.B[0][1] = s.B[0][0]
rs.B[1][0] = s.B[1][1]
rs.B[1][1] = s.B[1][0]
rs.C[0] = s.C[0]
rs.C[1] = s.C[1]
rs.Protocol = s.Protocol
return rs
}
// ProofToSmartContractFormat converts the *types.Proof to a ProofString in the SmartContract format in a ProofString structure
func ProofToSmartContractFormat(p *types.Proof) ProofString {
s := ProofToString(p)
return ProofStringToSmartContractFormat(s)
}
// ProofToString converts the Proof to ProofString
func ProofToString(p *types.Proof) ProofString {
	var ps ProofString
	ps.A = make([]string, 3)
	ps.B = make([][]string, 3)
@@ -497,10 +528,55 @@ func ProofToJson(p *types.Proof) ([]byte, error) {
	ps.Protocol = "groth"
+	return ps
+}
+// ProofToJson outputs the Proof i Json format
+func ProofToJson(p *types.Proof) ([]byte, error) {
+	ps := ProofToString(p)
	return json.Marshal(ps)
}
-// ParseWitness parses binary file representation of the Witness into the Witness struct
+// ProofToHex converts the Proof to ProofString with hexadecimal strings
func ProofToHex(p *types.Proof) ProofString {
var ps ProofString
ps.A = make([]string, 3)
ps.B = make([][]string, 3)
ps.B[0] = make([]string, 2)
ps.B[1] = make([]string, 2)
ps.B[2] = make([]string, 2)
ps.C = make([]string, 3)
a := p.A.Marshal()
ps.A[0] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(a[:32]).Bytes())
ps.A[1] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(a[32:64]).Bytes())
ps.A[2] = "1"
b := p.B.Marshal()
ps.B[0][1] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(b[:32]).Bytes())
ps.B[0][0] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(b[32:64]).Bytes())
ps.B[1][1] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(b[64:96]).Bytes())
ps.B[1][0] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(b[96:128]).Bytes())
ps.B[2][0] = "1"
ps.B[2][1] = "0"
c := p.C.Marshal()
ps.C[0] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(c[:32]).Bytes())
ps.C[1] = "0x" + hex.EncodeToString(new(big.Int).SetBytes(c[32:64]).Bytes())
ps.C[2] = "1"
ps.Protocol = "groth"
return ps
}
// ProofToJsonHex outputs the Proof i Json format with hexadecimal strings
func ProofToJsonHex(p *types.Proof) ([]byte, error) {
ps := ProofToHex(p)
return json.Marshal(ps)
}
// ParseWitnessBin parses binary file representation of the Witness into the Witness struct
func ParseWitnessBin(f *os.File) (types.Witness, error) {
	var w types.Witness
	r := bufio.NewReader(f)
@@ -527,3 +603,726 @@
	}
	return o
}
func readNBytes(r io.Reader, n int) ([]byte, error) {
b := make([]byte, n)
_, err := io.ReadFull(r, b)
if err != nil {
return b, err
}
return b, nil
}
// ParsePkBin parses binary file representation of the ProvingKey into the ProvingKey struct
func ParsePkBin(f *os.File) (*types.Pk, error) {
o := 0
var pk types.Pk
r := bufio.NewReader(f)
b, err := readNBytes(r, 12)
if err != nil {
return nil, err
}
pk.NVars = int(binary.LittleEndian.Uint32(b[:4]))
pk.NPublic = int(binary.LittleEndian.Uint32(b[4:8]))
pk.DomainSize = int(binary.LittleEndian.Uint32(b[8:12]))
o += 12
b, err = readNBytes(r, 8)
if err != nil {
return nil, err
}
pPolsA := int(binary.LittleEndian.Uint32(b[:4]))
pPolsB := int(binary.LittleEndian.Uint32(b[4:8]))
o += 8
b, err = readNBytes(r, 20)
if err != nil {
return nil, err
}
pPointsA := int(binary.LittleEndian.Uint32(b[:4]))
pPointsB1 := int(binary.LittleEndian.Uint32(b[4:8]))
pPointsB2 := int(binary.LittleEndian.Uint32(b[8:12]))
pPointsC := int(binary.LittleEndian.Uint32(b[12:16]))
pPointsHExps := int(binary.LittleEndian.Uint32(b[16:20]))
o += 20
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
pk.VkAlpha1 = new(bn256.G1)
_, err = pk.VkAlpha1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
pk.VkBeta1 = new(bn256.G1)
_, err = pk.VkBeta1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
pk.VkDelta1 = new(bn256.G1)
_, err = pk.VkDelta1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
b, err = readNBytes(r, 128)
if err != nil {
return nil, err
}
pk.VkBeta2 = new(bn256.G2)
_, err = pk.VkBeta2.Unmarshal(fromMont2Q(b))
if err != nil {
return nil, err
}
b, err = readNBytes(r, 128)
if err != nil {
return nil, err
}
pk.VkDelta2 = new(bn256.G2)
_, err = pk.VkDelta2.Unmarshal(fromMont2Q(b))
if err != nil {
return nil, err
}
o += 448
if o != pPolsA {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPolsA, o)
}
// PolsA
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 4)
if err != nil {
return nil, err
}
keysLength := int(binary.LittleEndian.Uint32(b[:4]))
o += 4
polsMap := make(map[int]*big.Int)
for j := 0; j < keysLength; j++ {
bK, err := readNBytes(r, 4)
if err != nil {
return nil, err
}
key := int(binary.LittleEndian.Uint32(bK[:4]))
o += 4
b, err := readNBytes(r, 32)
if err != nil {
return nil, err
}
polsMap[key] = new(big.Int).SetBytes(fromMont1R(b[:32]))
o += 32
}
pk.PolsA = append(pk.PolsA, polsMap)
}
if o != pPolsB {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPolsB, o)
}
// PolsB
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 4)
if err != nil {
return nil, err
}
keysLength := int(binary.LittleEndian.Uint32(b[:4]))
o += 4
polsMap := make(map[int]*big.Int)
for j := 0; j < keysLength; j++ {
bK, err := readNBytes(r, 4)
if err != nil {
return nil, err
}
key := int(binary.LittleEndian.Uint32(bK[:4]))
o += 4
b, err := readNBytes(r, 32)
if err != nil {
return nil, err
}
polsMap[key] = new(big.Int).SetBytes(fromMont1R(b[:32]))
o += 32
}
pk.PolsB = append(pk.PolsB, polsMap)
}
if o != pPointsA {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsA, o)
}
// A
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
pk.A = append(pk.A, p1)
o += 64
}
if o != pPointsB1 {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsB1, o)
}
// B1
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
pk.B1 = append(pk.B1, p1)
o += 64
}
if o != pPointsB2 {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsB2, o)
}
// B2
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 128)
if err != nil {
return nil, err
}
p2 := new(bn256.G2)
_, err = p2.Unmarshal(fromMont2Q(b))
if err != nil {
return nil, err
}
pk.B2 = append(pk.B2, p2)
o += 128
}
if o != pPointsC {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsC, o)
}
// C
zb := make([]byte, 64)
z := new(bn256.G1)
_, err = z.Unmarshal(zb)
if err != nil {
return nil, err
}
for i := 0; i < pk.NPublic+1; i++ {
pk.C = append(pk.C, z)
}
for i := pk.NPublic + 1; i < pk.NVars; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
pk.C = append(pk.C, p1)
o += 64
}
if o != pPointsHExps {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsHExps, o)
}
// HExps
for i := 0; i < pk.DomainSize; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(fromMont1Q(b))
if err != nil {
return nil, err
}
pk.HExps = append(pk.HExps, p1)
}
return &pk, nil
}
func fromMont1Q(m []byte) []byte {
a := new(big.Int).SetBytes(swapEndianness(m[:32]))
b := new(big.Int).SetBytes(swapEndianness(m[32:64]))
x := coordFromMont(a, types.Q)
y := coordFromMont(b, types.Q)
if bytes.Equal(x.Bytes(), big.NewInt(1).Bytes()) {
x = big.NewInt(0)
}
if bytes.Equal(y.Bytes(), big.NewInt(1).Bytes()) {
y = big.NewInt(0)
}
xBytes := x.Bytes()
yBytes := y.Bytes()
if len(xBytes) != 32 {
xBytes = addZPadding(xBytes)
}
if len(yBytes) != 32 {
yBytes = addZPadding(yBytes)
}
var p []byte
p = append(p, xBytes...)
p = append(p, yBytes...)
return p
}
func fromMont2Q(m []byte) []byte {
a := new(big.Int).SetBytes(swapEndianness(m[:32]))
b := new(big.Int).SetBytes(swapEndianness(m[32:64]))
c := new(big.Int).SetBytes(swapEndianness(m[64:96]))
d := new(big.Int).SetBytes(swapEndianness(m[96:128]))
x := coordFromMont(a, types.Q)
y := coordFromMont(b, types.Q)
z := coordFromMont(c, types.Q)
t := coordFromMont(d, types.Q)
if bytes.Equal(x.Bytes(), big.NewInt(1).Bytes()) {
x = big.NewInt(0)
}
if bytes.Equal(y.Bytes(), big.NewInt(1).Bytes()) {
y = big.NewInt(0)
}
if bytes.Equal(z.Bytes(), big.NewInt(1).Bytes()) {
z = big.NewInt(0)
}
if bytes.Equal(t.Bytes(), big.NewInt(1).Bytes()) {
t = big.NewInt(0)
}
xBytes := x.Bytes()
yBytes := y.Bytes()
zBytes := z.Bytes()
tBytes := t.Bytes()
if len(xBytes) != 32 {
xBytes = addZPadding(xBytes)
}
if len(yBytes) != 32 {
yBytes = addZPadding(yBytes)
}
if len(zBytes) != 32 {
zBytes = addZPadding(zBytes)
}
if len(tBytes) != 32 {
tBytes = addZPadding(tBytes)
}
var p []byte
p = append(p, yBytes...) // swap
p = append(p, xBytes...)
p = append(p, tBytes...)
p = append(p, zBytes...)
return p
}
func fromMont1R(m []byte) []byte {
a := new(big.Int).SetBytes(swapEndianness(m[:32]))
x := coordFromMont(a, types.R)
return x.Bytes()
}
func fromMont2R(m []byte) []byte {
a := new(big.Int).SetBytes(swapEndianness(m[:32]))
b := new(big.Int).SetBytes(swapEndianness(m[32:64]))
c := new(big.Int).SetBytes(swapEndianness(m[64:96]))
d := new(big.Int).SetBytes(swapEndianness(m[96:128]))
x := coordFromMont(a, types.R)
y := coordFromMont(b, types.R)
z := coordFromMont(c, types.R)
t := coordFromMont(d, types.R)
var p []byte
p = append(p, y.Bytes()...) // swap
p = append(p, x.Bytes()...)
p = append(p, t.Bytes()...)
p = append(p, z.Bytes()...)
return p
}
func coordFromMont(u, q *big.Int) *big.Int {
return new(big.Int).Mod(
new(big.Int).Mul(
u,
new(big.Int).ModInverse(
new(big.Int).Lsh(big.NewInt(1), 256),
q,
),
),
q,
)
}
func sortedKeys(m map[int]*big.Int) []int {
keys := make([]int, 0, len(m))
for k, _ := range m {
keys = append(keys, k)
}
sort.Ints(keys)
return keys
}
// PkToGoBin converts the ProvingKey (*types.Pk) into binary format defined by
// go-circom-prover-verifier. PkGoBin is a own go-circom-prover-verifier
// binary format that allows to go faster when parsing.
func PkToGoBin(pk *types.Pk) ([]byte, error) {
var r []byte
o := 0
var b [4]byte
binary.LittleEndian.PutUint32(b[:], uint32(pk.NVars))
r = append(r, b[:]...)
binary.LittleEndian.PutUint32(b[:], uint32(pk.NPublic))
r = append(r, b[:]...)
binary.LittleEndian.PutUint32(b[:], uint32(pk.DomainSize))
r = append(r, b[:]...)
o += 12
// reserve space for pols (A, B) pos
b = [4]byte{}
r = append(r, b[:]...) // 12:16
r = append(r, b[:]...) // 16:20
o += 8
// reserve space for points (A, B1, B2, C, HExps) pos
r = append(r, b[:]...) // 20:24
r = append(r, b[:]...) // 24
r = append(r, b[:]...) // 28
r = append(r, b[:]...) // 32
r = append(r, b[:]...) // 36:40
o += 20
pb1 := pk.VkAlpha1.Marshal()
r = append(r, pb1[:]...)
pb1 = pk.VkBeta1.Marshal()
r = append(r, pb1[:]...)
pb1 = pk.VkDelta1.Marshal()
r = append(r, pb1[:]...)
pb2 := pk.VkBeta2.Marshal()
r = append(r, pb2[:]...)
pb2 = pk.VkDelta2.Marshal()
r = append(r, pb2[:]...)
o += 448
// polsA
binary.LittleEndian.PutUint32(r[12:16], uint32(o))
for i := 0; i < pk.NVars; i++ {
binary.LittleEndian.PutUint32(b[:], uint32(len(pk.PolsA[i])))
r = append(r, b[:]...)
o += 4
for _, j := range sortedKeys(pk.PolsA[i]) {
v := pk.PolsA[i][j]
binary.LittleEndian.PutUint32(b[:], uint32(j))
r = append(r, b[:]...)
r = append(r, addPadding32(v.Bytes())...)
o += 32 + 4
}
}
// polsB
binary.LittleEndian.PutUint32(r[16:20], uint32(o))
for i := 0; i < pk.NVars; i++ {
binary.LittleEndian.PutUint32(b[:], uint32(len(pk.PolsB[i])))
r = append(r, b[:]...)
o += 4
for _, j := range sortedKeys(pk.PolsB[i]) {
v := pk.PolsB[i][j]
binary.LittleEndian.PutUint32(b[:], uint32(j))
r = append(r, b[:]...)
r = append(r, addPadding32(v.Bytes())...)
o += 32 + 4
}
}
// A
binary.LittleEndian.PutUint32(r[20:24], uint32(o))
for i := 0; i < pk.NVars; i++ {
pb1 = pk.A[i].Marshal()
r = append(r, pb1[:]...)
o += 64
}
// B1
binary.LittleEndian.PutUint32(r[24:28], uint32(o))
for i := 0; i < pk.NVars; i++ {
pb1 = pk.B1[i].Marshal()
r = append(r, pb1[:]...)
o += 64
}
// B2
binary.LittleEndian.PutUint32(r[28:32], uint32(o))
for i := 0; i < pk.NVars; i++ {
pb2 = pk.B2[i].Marshal()
r = append(r, pb2[:]...)
o += 128
}
// C
binary.LittleEndian.PutUint32(r[32:36], uint32(o))
for i := pk.NPublic + 1; i < pk.NVars; i++ {
pb1 = pk.C[i].Marshal()
r = append(r, pb1[:]...)
o += 64
}
// HExps
binary.LittleEndian.PutUint32(r[36:40], uint32(o))
for i := 0; i < pk.DomainSize+1; i++ {
pb1 = pk.HExps[i].Marshal()
r = append(r, pb1[:]...)
o += 64
}
return r[:], nil
}
// ParsePkGoBin parses go-circom-prover-verifier binary file representation of
// the ProvingKey into ProvingKey struct (*types.Pk). PkGoBin is a own
// go-circom-prover-verifier binary format that allows to go faster when
// parsing.
func ParsePkGoBin(f *os.File) (*types.Pk, error) {
o := 0
var pk types.Pk
r := bufio.NewReader(f)
b, err := readNBytes(r, 12)
if err != nil {
return nil, err
}
pk.NVars = int(binary.LittleEndian.Uint32(b[:4]))
pk.NPublic = int(binary.LittleEndian.Uint32(b[4:8]))
pk.DomainSize = int(binary.LittleEndian.Uint32(b[8:12]))
o += 12
b, err = readNBytes(r, 8)
if err != nil {
return nil, err
}
pPolsA := int(binary.LittleEndian.Uint32(b[:4]))
pPolsB := int(binary.LittleEndian.Uint32(b[4:8]))
o += 8
b, err = readNBytes(r, 20)
if err != nil {
return nil, err
}
pPointsA := int(binary.LittleEndian.Uint32(b[:4]))
pPointsB1 := int(binary.LittleEndian.Uint32(b[4:8]))
pPointsB2 := int(binary.LittleEndian.Uint32(b[8:12]))
pPointsC := int(binary.LittleEndian.Uint32(b[12:16]))
pPointsHExps := int(binary.LittleEndian.Uint32(b[16:20]))
o += 20
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
pk.VkAlpha1 = new(bn256.G1)
_, err = pk.VkAlpha1.Unmarshal(b)
if err != nil {
return &pk, err
}
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
pk.VkBeta1 = new(bn256.G1)
_, err = pk.VkBeta1.Unmarshal(b)
if err != nil {
return &pk, err
}
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
pk.VkDelta1 = new(bn256.G1)
_, err = pk.VkDelta1.Unmarshal(b)
if err != nil {
return &pk, err
}
b, err = readNBytes(r, 128)
if err != nil {
return nil, err
}
pk.VkBeta2 = new(bn256.G2)
_, err = pk.VkBeta2.Unmarshal(b)
if err != nil {
return &pk, err
}
b, err = readNBytes(r, 128)
if err != nil {
return nil, err
}
pk.VkDelta2 = new(bn256.G2)
_, err = pk.VkDelta2.Unmarshal(b)
if err != nil {
return &pk, err
}
o += 448
if o != pPolsA {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPolsA, o)
}
// PolsA
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 4)
if err != nil {
return nil, err
}
keysLength := int(binary.LittleEndian.Uint32(b[:4]))
o += 4
polsMap := make(map[int]*big.Int)
for j := 0; j < keysLength; j++ {
bK, err := readNBytes(r, 4)
if err != nil {
return nil, err
}
key := int(binary.LittleEndian.Uint32(bK[:4]))
o += 4
b, err := readNBytes(r, 32)
if err != nil {
return nil, err
}
polsMap[key] = new(big.Int).SetBytes(b[:32])
o += 32
}
pk.PolsA = append(pk.PolsA, polsMap)
}
if o != pPolsB {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPolsB, o)
}
// PolsB
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 4)
if err != nil {
return nil, err
}
keysLength := int(binary.LittleEndian.Uint32(b[:4]))
o += 4
polsMap := make(map[int]*big.Int)
for j := 0; j < keysLength; j++ {
bK, err := readNBytes(r, 4)
if err != nil {
return nil, err
}
key := int(binary.LittleEndian.Uint32(bK[:4]))
o += 4
b, err := readNBytes(r, 32)
if err != nil {
return nil, err
}
polsMap[key] = new(big.Int).SetBytes(b[:32])
o += 32
}
pk.PolsB = append(pk.PolsB, polsMap)
}
if o != pPointsA {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsA, o)
}
// A
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(b)
if err != nil {
return nil, err
}
pk.A = append(pk.A, p1)
o += 64
}
if o != pPointsB1 {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsB1, o)
}
// B1
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(b)
if err != nil {
return nil, err
}
pk.B1 = append(pk.B1, p1)
o += 64
}
if o != pPointsB2 {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsB2, o)
}
// B2
for i := 0; i < pk.NVars; i++ {
b, err = readNBytes(r, 128)
if err != nil {
return nil, err
}
p2 := new(bn256.G2)
_, err = p2.Unmarshal(b)
if err != nil {
return nil, err
}
pk.B2 = append(pk.B2, p2)
o += 128
}
if o != pPointsC {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsC, o)
}
// C
zb := make([]byte, 64)
z := new(bn256.G1)
_, err = z.Unmarshal(zb)
if err != nil {
return nil, err
}
for i := 0; i < pk.NPublic+1; i++ {
pk.C = append(pk.C, z)
}
for i := pk.NPublic + 1; i < pk.NVars; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(b)
if err != nil {
return nil, err
}
pk.C = append(pk.C, p1)
o += 64
}
if o != pPointsHExps {
return nil, fmt.Errorf("Unexpected offset, expected: %v, actual: %v", pPointsHExps, o)
}
// HExps
for i := 0; i < pk.DomainSize+1; i++ {
b, err = readNBytes(r, 64)
if err != nil {
return nil, err
}
p1 := new(bn256.G1)
_, err = p1.Unmarshal(b)
if err != nil {
return nil, err
}
pk.HExps = append(pk.HExps, p1)
}
return &pk, nil
}


@@ -1,10 +1,12 @@
package parsers
import (
+	"encoding/json"
	"io/ioutil"
	"os"
	"testing"
+	"github.com/iden3/go-circom-prover-verifier/types"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
@@ -172,3 +174,160 @@ func TestParseWitnessBin(t *testing.T) {
	testCircuitParseWitnessBin(t, "circuit1k")
	testCircuitParseWitnessBin(t, "circuit5k")
}
func TestProofSmartContractFormat(t *testing.T) {
proofJson, err := ioutil.ReadFile("../testdata/circuit1k/proof.json")
require.Nil(t, err)
proof, err := ParseProof(proofJson)
require.Nil(t, err)
pS := ProofToString(proof)
pSC := ProofToSmartContractFormat(proof)
assert.Nil(t, err)
assert.Equal(t, pS.A[0], pSC.A[0])
assert.Equal(t, pS.A[1], pSC.A[1])
assert.Equal(t, pS.B[0][0], pSC.B[0][1])
assert.Equal(t, pS.B[0][1], pSC.B[0][0])
assert.Equal(t, pS.B[1][0], pSC.B[1][1])
assert.Equal(t, pS.B[1][1], pSC.B[1][0])
assert.Equal(t, pS.C[0], pSC.C[0])
assert.Equal(t, pS.C[1], pSC.C[1])
assert.Equal(t, pS.Protocol, pSC.Protocol)
pSC2 := ProofStringToSmartContractFormat(pS)
assert.Equal(t, pSC, pSC2)
}
func TestProofJSON(t *testing.T) {
proofJson, err := ioutil.ReadFile("../testdata/circuit1k/proof.json")
require.Nil(t, err)
proof, err := ParseProof(proofJson)
require.Nil(t, err)
proof1JSON, err := json.Marshal(proof)
require.Nil(t, err)
var proof1 types.Proof
err = json.Unmarshal(proof1JSON, &proof1)
require.Nil(t, err)
require.Equal(t, *proof, proof1)
}
func testCircuitParsePkBin(t *testing.T, circuit string) {
pkBinFile, err := os.Open("../testdata/" + circuit + "/proving_key.bin")
require.Nil(t, err)
defer pkBinFile.Close()
pk, err := ParsePkBin(pkBinFile)
require.Nil(t, err)
pkJson, err := ioutil.ReadFile("../testdata/" + circuit + "/proving_key.json")
require.Nil(t, err)
pkJ, err := ParsePk(pkJson)
require.Nil(t, err)
assert.Equal(t, pkJ.NVars, pk.NVars)
assert.Equal(t, pkJ.NPublic, pk.NPublic)
assert.Equal(t, pkJ.DomainSize, pk.DomainSize)
assert.Equal(t, pkJ.VkAlpha1, pk.VkAlpha1)
assert.Equal(t, pkJ.VkBeta1, pk.VkBeta1)
assert.Equal(t, pkJ.VkDelta1, pk.VkDelta1)
assert.Equal(t, pkJ.VkDelta2, pk.VkDelta2)
assert.Equal(t, pkJ.PolsA, pk.PolsA)
assert.Equal(t, pkJ.PolsB, pk.PolsB)
assert.Equal(t, pkJ.A, pk.A)
assert.Equal(t, pkJ.B1, pk.B1)
assert.Equal(t, pkJ.B2, pk.B2)
assert.Equal(t, pkJ.C, pk.C)
assert.Equal(t, pkJ.HExps[:pkJ.DomainSize], pk.HExps[:pk.DomainSize]) // circom behaviour
assert.Equal(t, pkJ.NVars, pk.NVars)
assert.Equal(t, pkJ.NPublic, pk.NPublic)
assert.Equal(t, pkJ.DomainSize, pk.DomainSize)
}
func TestParsePkBin(t *testing.T) {
testCircuitParsePkBin(t, "circuit1k")
testCircuitParsePkBin(t, "circuit5k")
}
func testGoCircomPkFormat(t *testing.T, circuit string) {
pkJson, err := ioutil.ReadFile("../testdata/" + circuit + "/proving_key.json")
require.Nil(t, err)
pk, err := ParsePk(pkJson)
require.Nil(t, err)
pkGBin, err := PkToGoBin(pk)
require.Nil(t, err)
err = ioutil.WriteFile("../testdata/"+circuit+"/proving_key.go.bin", pkGBin, 0644)
assert.Nil(t, err)
// parse ProvingKeyGo
pkGoBinFile, err := os.Open("../testdata/" + circuit + "/proving_key.go.bin")
require.Nil(t, err)
defer pkGoBinFile.Close()
pkG, err := ParsePkGoBin(pkGoBinFile)
require.Nil(t, err)
assert.Equal(t, pk.VkAlpha1, pkG.VkAlpha1)
assert.Equal(t, pk.VkBeta1, pkG.VkBeta1)
assert.Equal(t, pk.VkDelta1, pkG.VkDelta1)
assert.Equal(t, pk.VkBeta2, pkG.VkBeta2)
assert.Equal(t, pk.VkDelta2, pkG.VkDelta2)
assert.Equal(t, pk.A, pkG.A)
assert.Equal(t, pk.B1, pkG.B1)
assert.Equal(t, pk.B2, pkG.B2)
assert.Equal(t, pk.C, pkG.C)
assert.Equal(t, pk.HExps, pkG.HExps)
assert.Equal(t, pk.PolsA, pkG.PolsA)
assert.Equal(t, pk.PolsB, pkG.PolsB)
assert.Equal(t, pk.NVars, pkG.NVars)
assert.Equal(t, pk.NPublic, pkG.NPublic)
assert.Equal(t, pk.DomainSize, pkG.DomainSize)
}
func TestGoCircomPkFormat(t *testing.T) {
testGoCircomPkFormat(t, "circuit1k")
testGoCircomPkFormat(t, "circuit5k")
// testGoCircomPkFormat(t, "circuit10k")
// testGoCircomPkFormat(t, "circuit20k")
}
func benchmarkParsePk(b *testing.B, circuit string) {
pkJson, err := ioutil.ReadFile("../testdata/" + circuit + "/proving_key.json")
require.Nil(b, err)
pkBinFile, err := os.Open("../testdata/" + circuit + "/proving_key.bin")
require.Nil(b, err)
defer pkBinFile.Close()
pkGoBinFile, err := os.Open("../testdata/" + circuit + "/proving_key.go.bin")
require.Nil(b, err)
defer pkGoBinFile.Close()
b.Run("ParsePkJson "+circuit, func(b *testing.B) {
for i := 0; i < b.N; i++ {
_, err = ParsePk(pkJson)
require.Nil(b, err)
}
})
b.Run("ParsePkBin "+circuit, func(b *testing.B) {
for i := 0; i < b.N; i++ {
pkBinFile.Seek(0, 0)
_, err = ParsePkBin(pkBinFile)
require.Nil(b, err)
}
})
b.Run("ParsePkGoBin "+circuit, func(b *testing.B) {
for i := 0; i < b.N; i++ {
pkGoBinFile.Seek(0, 0)
_, err = ParsePkGoBin(pkGoBinFile)
require.Nil(b, err)
}
})
}
func BenchmarkParsePk(b *testing.B) {
benchmarkParsePk(b, "circuit1k")
benchmarkParsePk(b, "circuit5k")
// benchmarkParsePk(b, "circuit10k")
// benchmarkParsePk(b, "circuit20k")
}


@@ -2,20 +2,19 @@ package prover
import ( import (
"math/big" "math/big"
bn256 "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare" bn256 "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare"
cryptoConstants "github.com/iden3/go-iden3-crypto/constants" cryptoConstants "github.com/iden3/go-iden3-crypto/constants"
) )
type TableG1 struct{ type tableG1 struct {
data []*bn256.G1 data []*bn256.G1
} }
func (t tableG1) getData() []*bn256.G1 {
func (t TableG1) GetData() []*bn256.G1 {
return t.data return t.data
} }
// Compute table of gsize elements as :: // Compute table of gsize elements as ::
// Table[0] = Inf // Table[0] = Inf
// Table[1] = a[0] // Table[1] = a[0]
@@ -23,77 +22,91 @@ func (t TableG1) GetData() []*bn256.G1 {
// Table[3] = a[0]+a[1] // Table[3] = a[0]+a[1]
// ..... // .....
// Table[(1<<gsize)-1] = a[0]+a[1]+...+a[gsize-1] // Table[(1<<gsize)-1] = a[0]+a[1]+...+a[gsize-1]
func (t *TableG1) NewTableG1(a []*bn256.G1, gsize int){ func (t *tableG1) newTableG1(a []*bn256.G1, gsize int, toaffine bool) {
// EC table // EC table
table := make([]*bn256.G1, 0) table := make([]*bn256.G1, 0)
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
a_ext := make([]*bn256.G1, 0) aExt := make([]*bn256.G1, 0)
a_ext = append(a_ext, a...) aExt = append(aExt, a...)
for i := len(a); i < gsize; i++ { for i := len(a); i < gsize; i++ {
a_ext = append(a_ext,new(bn256.G1).ScalarBaseMult(big.NewInt(0))) aExt = append(aExt, new(bn256.G1).ScalarBaseMult(big.NewInt(0)))
} }
elG1 := new(bn256.G1).ScalarBaseMult(big.NewInt(0)) elG1 := new(bn256.G1).ScalarBaseMult(big.NewInt(0))
table = append(table, elG1) table = append(table, elG1)
last_pow2 := 1 lastPow2 := 1
nelems := 0 nelems := 0
for i := 1; i < 1<<gsize; i++ { for i := 1; i < 1<<gsize; i++ {
elG1 := new(bn256.G1) elG1 := new(bn256.G1)
// if power of 2 // if power of 2
if i&(i-1) == 0 { if i&(i-1) == 0 {
last_pow2 = i lastPow2 = i
elG1.Set(a_ext[nelems]) elG1.Set(aExt[nelems])
nelems++ nelems++
} else { } else {
elG1.Add(table[last_pow2], table[i-last_pow2]) elG1.Add(table[lastPow2], table[i-lastPow2])
// TODO bn256 doesn't export MakeAffine function. We need to fork repo // TODO bn256 doesn't export MakeAffine function. We need to fork repo
//table[i].MakeAffine() //table[i].MakeAffine()
} }
table = append(table, elG1) table = append(table, elG1)
} }
if toaffine {
for i := 0; i < len(table); i++ {
info := table[i].Marshal()
table[i].Unmarshal(info)
}
}
t.data = table t.data = table
} }
func (t tableG1) Marshal() []byte {
info := make([]byte, 0)
for _, el := range t.data {
info = append(info, el.Marshal()...)
}
return info
}
// Multiply scalar by precomputed table of G1 elements // Multiply scalar by precomputed table of G1 elements
func (t *TableG1) MulTableG1(k []*big.Int, Q_prev *bn256.G1, gsize int) *bn256.G1 { func (t *tableG1) mulTableG1(k []*big.Int, qPrev *bn256.G1, gsize int) *bn256.G1 {
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
k_ext := make([]*big.Int, 0) kExt := make([]*big.Int, 0)
k_ext = append(k_ext, k...) kExt = append(kExt, k...)
for i := len(k); i < gsize; i++ { for i := len(k); i < gsize; i++ {
k_ext = append(k_ext,new(big.Int).SetUint64(0)) kExt = append(kExt, new(big.Int).SetUint64(0))
} }
Q := new(bn256.G1).ScalarBaseMult(big.NewInt(0)) Q := new(bn256.G1).ScalarBaseMult(big.NewInt(0))
msb := getMsb(k_ext) msb := getMsb(kExt)
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
// TODO. bn256 doesn't export double operation. We will need to fork repo and export it // TODO. bn256 doesn't export double operation. We will need to fork repo and export it
Q = new(bn256.G1).Add(Q, Q) Q = new(bn256.G1).Add(Q, Q)
b := getBit(k_ext,i) b := getBit(kExt, i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q.Add(Q, t.data[b]) Q.Add(Q, t.data[b])
} }
} }
if Q_prev != nil { if qPrev != nil {
return Q.Add(Q,Q_prev) return Q.Add(Q, qPrev)
} else {
return Q
} }
return Q
} }
// Multiply scalar by precomputed table of G1 elements without intermediate doubling // Multiply scalar by precomputed table of G1 elements without intermediate doubling
func MulTableNoDoubleG1(t []TableG1, k []*big.Int, Q_prev *bn256.G1, gsize int) *bn256.G1 { func mulTableNoDoubleG1(t []tableG1, k []*big.Int, qPrev *bn256.G1, gsize int) *bn256.G1 {
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
min_nelems := len(t) * gsize minNElems := len(t) * gsize
k_ext := make([]*big.Int, 0) kExt := make([]*big.Int, 0)
k_ext = append(k_ext, k...) kExt = append(kExt, k...)
for i := len(k); i < min_nelems; i++ { for i := len(k); i < minNElems; i++ {
k_ext = append(k_ext,new(big.Int).SetUint64(0)) kExt = append(kExt, new(big.Int).SetUint64(0))
} }
// Init Adders // Init Adders
nbitsQ := cryptoConstants.Q.BitLen() nbitsQ := cryptoConstants.Q.BitLen()
@@ -105,10 +118,10 @@ func MulTableNoDoubleG1(t []TableG1, k []*big.Int, Q_prev *bn256.G1, gsize int)
// Perform bitwise addition // Perform bitwise addition
for j := 0; j < len(t); j++ { for j := 0; j < len(t); j++ {
msb := getMsb(k_ext[j*gsize:(j+1)*gsize]) msb := getMsb(kExt[j*gsize : (j+1)*gsize])
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
b := getBit(k_ext[j*gsize:(j+1)*gsize],i) b := getBit(kExt[j*gsize:(j+1)*gsize], i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q[i].Add(Q[i], t[j].data[b]) Q[i].Add(Q[i], t[j].data[b])
@@ -124,45 +137,43 @@ func MulTableNoDoubleG1(t []TableG1, k []*big.Int, Q_prev *bn256.G1, gsize int)
R.Add(R, Q[i-1]) R.Add(R, Q[i-1])
} }
if Q_prev != nil { if qPrev != nil {
return R.Add(R,Q_prev) return R.Add(R, qPrev)
} else {
return R
} }
return R
} }
// Compute tables within function. This solution should still be faster than std multiplication // Compute tables within function. This solution should still be faster than std multiplication
// for gsize = 7 // for gsize = 7
func ScalarMultG1(a []*bn256.G1, k []*big.Int, Q_prev *bn256.G1, gsize int) *bn256.G1 { func scalarMultG1(a []*bn256.G1, k []*big.Int, qPrev *bn256.G1, gsize int) *bn256.G1 {
ntables := int((len(a) + gsize - 1) / gsize) ntables := int((len(a) + gsize - 1) / gsize)
table := TableG1{} table := tableG1{}
Q := new(bn256.G1).ScalarBaseMult(new(big.Int)) Q := new(bn256.G1).ScalarBaseMult(new(big.Int))
for i := 0; i < ntables-1; i++ { for i := 0; i < ntables-1; i++ {
table.NewTableG1( a[i*gsize:(i+1)*gsize], gsize) table.newTableG1(a[i*gsize:(i+1)*gsize], gsize, false)
Q = table.MulTableG1(k[i*gsize:(i+1)*gsize], Q, gsize) Q = table.mulTableG1(k[i*gsize:(i+1)*gsize], Q, gsize)
} }
table.NewTableG1( a[(ntables-1)*gsize:], gsize) table.newTableG1(a[(ntables-1)*gsize:], gsize, false)
Q = table.MulTableG1(k[(ntables-1)*gsize:], Q, gsize) Q = table.mulTableG1(k[(ntables-1)*gsize:], Q, gsize)
if Q_prev != nil { if qPrev != nil {
return Q.Add(Q,Q_prev) return Q.Add(Q, qPrev)
} else {
return Q
} }
return Q
} }
// Multiply scalar by precomputed table of G1 elements without intermediate doubling // Multiply scalar by precomputed table of G1 elements without intermediate doubling
func ScalarMultNoDoubleG1(a []*bn256.G1, k []*big.Int, Q_prev *bn256.G1, gsize int) *bn256.G1 { func scalarMultNoDoubleG1(a []*bn256.G1, k []*big.Int, qPrev *bn256.G1, gsize int) *bn256.G1 {
ntables := int((len(a) + gsize - 1) / gsize) ntables := int((len(a) + gsize - 1) / gsize)
table := TableG1{} table := tableG1{}
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
min_nelems := ntables * gsize minNElems := ntables * gsize
k_ext := make([]*big.Int, 0) kExt := make([]*big.Int, 0)
k_ext = append(k_ext, k...) kExt = append(kExt, k...)
for i := len(k); i < min_nelems; i++ { for i := len(k); i < minNElems; i++ {
k_ext = append(k_ext,new(big.Int).SetUint64(0)) kExt = append(kExt, new(big.Int).SetUint64(0))
} }
// Init Adders // Init Adders
nbitsQ := cryptoConstants.Q.BitLen() nbitsQ := cryptoConstants.Q.BitLen()
@@ -174,22 +185,22 @@ func ScalarMultNoDoubleG1(a []*bn256.G1, k []*big.Int, Q_prev *bn256.G1, gsize i
// Perform bitwise addition // Perform bitwise addition
for j := 0; j < ntables-1; j++ { for j := 0; j < ntables-1; j++ {
table.NewTableG1( a[j*gsize:(j+1)*gsize], gsize) table.newTableG1(a[j*gsize:(j+1)*gsize], gsize, false)
msb := getMsb(k_ext[j*gsize:(j+1)*gsize]) msb := getMsb(kExt[j*gsize : (j+1)*gsize])
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
b := getBit(k_ext[j*gsize:(j+1)*gsize],i) b := getBit(kExt[j*gsize:(j+1)*gsize], i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q[i].Add(Q[i], table.data[b]) Q[i].Add(Q[i], table.data[b])
} }
} }
} }
table.NewTableG1( a[(ntables-1)*gsize:], gsize) table.newTableG1(a[(ntables-1)*gsize:], gsize, false)
msb := getMsb(k_ext[(ntables-1)*gsize:]) msb := getMsb(kExt[(ntables-1)*gsize:])
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
b := getBit(k_ext[(ntables-1)*gsize:],i) b := getBit(kExt[(ntables-1)*gsize:], i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q[i].Add(Q[i], table.data[b]) Q[i].Add(Q[i], table.data[b])
@@ -203,11 +214,10 @@ func ScalarMultNoDoubleG1(a []*bn256.G1, k []*big.Int, Q_prev *bn256.G1, gsize i
R = new(bn256.G1).Add(R, R) R = new(bn256.G1).Add(R, R)
R.Add(R, Q[i-1]) R.Add(R, Q[i-1])
} }
if Q_prev != nil { if qPrev != nil {
return R.Add(R,Q_prev) return R.Add(R, qPrev)
} else {
return R
} }
return R
} }
///// /////
@@ -215,16 +225,14 @@ func ScalarMultNoDoubleG1(a []*bn256.G1, k []*big.Int, Q_prev *bn256.G1, gsize i
// TODO - How can avoid replicating code in G2? // TODO - How can avoid replicating code in G2?
//G2 //G2
type TableG2 struct{ type tableG2 struct {
data []*bn256.G2 data []*bn256.G2
} }
func (t tableG2) getData() []*bn256.G2 {
func (t TableG2) GetData() []*bn256.G2 {
return t.data return t.data
} }
// Compute table of gsize elements as :: // Compute table of gsize elements as ::
// Table[0] = Inf // Table[0] = Inf
// Table[1] = a[0] // Table[1] = a[0]
@@ -232,77 +240,92 @@ func (t TableG2) GetData() []*bn256.G2 {
// Table[3] = a[0]+a[1] // Table[3] = a[0]+a[1]
// ..... // .....
// Table[(1<<gsize)-1] = a[0]+a[1]+...+a[gsize-1] // Table[(1<<gsize)-1] = a[0]+a[1]+...+a[gsize-1]
func (t *TableG2) NewTableG2(a []*bn256.G2, gsize int){ // TODO -> toaffine = True doesnt work. Problem with Marshal/Unmarshal
func (t *tableG2) newTableG2(a []*bn256.G2, gsize int, toaffine bool) {
// EC table // EC table
table := make([]*bn256.G2, 0) table := make([]*bn256.G2, 0)
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
a_ext := make([]*bn256.G2, 0) aExt := make([]*bn256.G2, 0)
a_ext = append(a_ext, a...) aExt = append(aExt, a...)
for i := len(a); i < gsize; i++ { for i := len(a); i < gsize; i++ {
a_ext = append(a_ext,new(bn256.G2).ScalarBaseMult(big.NewInt(0))) aExt = append(aExt, new(bn256.G2).ScalarBaseMult(big.NewInt(0)))
} }
elG2 := new(bn256.G2).ScalarBaseMult(big.NewInt(0)) elG2 := new(bn256.G2).ScalarBaseMult(big.NewInt(0))
table = append(table, elG2) table = append(table, elG2)
last_pow2 := 1 lastPow2 := 1
nelems := 0 nelems := 0
for i := 1; i < 1<<gsize; i++ { for i := 1; i < 1<<gsize; i++ {
elG2 := new(bn256.G2) elG2 := new(bn256.G2)
// if power of 2 // if power of 2
if i&(i-1) == 0 { if i&(i-1) == 0 {
last_pow2 = i lastPow2 = i
elG2.Set(a_ext[nelems]) elG2.Set(aExt[nelems])
nelems++ nelems++
} else { } else {
elG2.Add(table[last_pow2], table[i-last_pow2]) elG2.Add(table[lastPow2], table[i-lastPow2])
// TODO bn256 doesn't export MakeAffine function. We need to fork repo // TODO bn256 doesn't export MakeAffine function. We need to fork repo
//table[i].MakeAffine() //table[i].MakeAffine()
} }
table = append(table, elG2) table = append(table, elG2)
} }
if toaffine {
for i := 0; i < len(table); i++ {
info := table[i].Marshal()
table[i].Unmarshal(info)
}
}
t.data = table t.data = table
} }
func (t tableG2) Marshal() []byte {
info := make([]byte, 0)
for _, el := range t.data {
info = append(info, el.Marshal()...)
}
return info
}
// Multiply scalar by precomputed table of G2 elements // Multiply scalar by precomputed table of G2 elements
func (t *TableG2) MulTableG2(k []*big.Int, Q_prev *bn256.G2, gsize int) *bn256.G2 { func (t *tableG2) mulTableG2(k []*big.Int, qPrev *bn256.G2, gsize int) *bn256.G2 {
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
k_ext := make([]*big.Int, 0) kExt := make([]*big.Int, 0)
k_ext = append(k_ext, k...) kExt = append(kExt, k...)
for i := len(k); i < gsize; i++ { for i := len(k); i < gsize; i++ {
k_ext = append(k_ext,new(big.Int).SetUint64(0)) kExt = append(kExt, new(big.Int).SetUint64(0))
} }
Q := new(bn256.G2).ScalarBaseMult(big.NewInt(0)) Q := new(bn256.G2).ScalarBaseMult(big.NewInt(0))
msb := getMsb(k_ext) msb := getMsb(kExt)
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
// TODO. bn256 doesn't export double operation. We will need to fork repo and export it // TODO. bn256 doesn't export double operation. We will need to fork repo and export it
Q = new(bn256.G2).Add(Q, Q) Q = new(bn256.G2).Add(Q, Q)
b := getBit(k_ext,i) b := getBit(kExt, i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q.Add(Q, t.data[b]) Q.Add(Q, t.data[b])
} }
} }
if Q_prev != nil { if qPrev != nil {
return Q.Add(Q, Q_prev) return Q.Add(Q, qPrev)
} else {
return Q
} }
return Q
} }
// Multiply scalar by precomputed table of G2 elements without intermediate doubling // Multiply scalar by precomputed table of G2 elements without intermediate doubling
func MulTableNoDoubleG2(t []TableG2, k []*big.Int, Q_prev *bn256.G2, gsize int) *bn256.G2 { func mulTableNoDoubleG2(t []tableG2, k []*big.Int, qPrev *bn256.G2, gsize int) *bn256.G2 {
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
min_nelems := len(t) * gsize minNElems := len(t) * gsize
k_ext := make([]*big.Int, 0) kExt := make([]*big.Int, 0)
k_ext = append(k_ext, k...) kExt = append(kExt, k...)
for i := len(k); i < min_nelems; i++ { for i := len(k); i < minNElems; i++ {
k_ext = append(k_ext,new(big.Int).SetUint64(0)) kExt = append(kExt, new(big.Int).SetUint64(0))
} }
// Init Adders // Init Adders
nbitsQ := cryptoConstants.Q.BitLen() nbitsQ := cryptoConstants.Q.BitLen()
@@ -314,10 +337,10 @@ func MulTableNoDoubleG2(t []TableG2, k []*big.Int, Q_prev *bn256.G2, gsize int)
// Perform bitwise addition // Perform bitwise addition
for j := 0; j < len(t); j++ { for j := 0; j < len(t); j++ {
msb := getMsb(k_ext[j*gsize:(j+1)*gsize]) msb := getMsb(kExt[j*gsize : (j+1)*gsize])
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
b := getBit(k_ext[j*gsize:(j+1)*gsize],i) b := getBit(kExt[j*gsize:(j+1)*gsize], i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q[i].Add(Q[i], t[j].data[b]) Q[i].Add(Q[i], t[j].data[b])
@@ -332,45 +355,43 @@ func MulTableNoDoubleG2(t []TableG2, k []*big.Int, Q_prev *bn256.G2, gsize int)
R = new(bn256.G2).Add(R, R) R = new(bn256.G2).Add(R, R)
R.Add(R, Q[i-1]) R.Add(R, Q[i-1])
} }
if Q_prev != nil { if qPrev != nil {
return R.Add(R,Q_prev) return R.Add(R, qPrev)
} else {
return R
} }
return R
} }
// Compute tables within function. This solution should still be faster than std multiplication // Compute tables within function. This solution should still be faster than std multiplication
// for gsize = 7 // for gsize = 7
func ScalarMultG2(a []*bn256.G2, k []*big.Int, Q_prev *bn256.G2, gsize int) *bn256.G2 { func scalarMultG2(a []*bn256.G2, k []*big.Int, qPrev *bn256.G2, gsize int) *bn256.G2 {
ntables := int((len(a) + gsize - 1) / gsize) ntables := int((len(a) + gsize - 1) / gsize)
table := TableG2{} table := tableG2{}
Q := new(bn256.G2).ScalarBaseMult(new(big.Int)) Q := new(bn256.G2).ScalarBaseMult(new(big.Int))
for i := 0; i < ntables-1; i++ { for i := 0; i < ntables-1; i++ {
table.NewTableG2( a[i*gsize:(i+1)*gsize], gsize) table.newTableG2(a[i*gsize:(i+1)*gsize], gsize, false)
Q = table.MulTableG2(k[i*gsize:(i+1)*gsize], Q, gsize) Q = table.mulTableG2(k[i*gsize:(i+1)*gsize], Q, gsize)
} }
table.NewTableG2( a[(ntables-1)*gsize:], gsize) table.newTableG2(a[(ntables-1)*gsize:], gsize, false)
Q = table.MulTableG2(k[(ntables-1)*gsize:], Q, gsize) Q = table.mulTableG2(k[(ntables-1)*gsize:], Q, gsize)
if Q_prev != nil { if qPrev != nil {
return Q.Add(Q,Q_prev) return Q.Add(Q, qPrev)
} else {
return Q
} }
return Q
} }
// Multiply scalar by precomputed table of G2 elements without intermediate doubling // Multiply scalar by precomputed table of G2 elements without intermediate doubling
func ScalarMultNoDoubleG2(a []*bn256.G2, k []*big.Int, Q_prev *bn256.G2, gsize int) *bn256.G2 { func scalarMultNoDoubleG2(a []*bn256.G2, k []*big.Int, qPrev *bn256.G2, gsize int) *bn256.G2 {
ntables := int((len(a) + gsize - 1) / gsize) ntables := int((len(a) + gsize - 1) / gsize)
table := TableG2{} table := tableG2{}
// We need at least gsize elements. If not enough, fill with 0 // We need at least gsize elements. If not enough, fill with 0
min_nelems := ntables * gsize minNElems := ntables * gsize
k_ext := make([]*big.Int, 0) kExt := make([]*big.Int, 0)
k_ext = append(k_ext, k...) kExt = append(kExt, k...)
for i := len(k); i < min_nelems; i++ { for i := len(k); i < minNElems; i++ {
k_ext = append(k_ext,new(big.Int).SetUint64(0)) kExt = append(kExt, new(big.Int).SetUint64(0))
} }
// Init Adders // Init Adders
nbitsQ := cryptoConstants.Q.BitLen() nbitsQ := cryptoConstants.Q.BitLen()
@@ -382,22 +403,22 @@ func ScalarMultNoDoubleG2(a []*bn256.G2, k []*big.Int, Q_prev *bn256.G2, gsize i
// Perform bitwise addition // Perform bitwise addition
for j := 0; j < ntables-1; j++ { for j := 0; j < ntables-1; j++ {
table.NewTableG2( a[j*gsize:(j+1)*gsize], gsize) table.newTableG2(a[j*gsize:(j+1)*gsize], gsize, false)
msb := getMsb(k_ext[j*gsize:(j+1)*gsize]) msb := getMsb(kExt[j*gsize : (j+1)*gsize])
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
b := getBit(k_ext[j*gsize:(j+1)*gsize],i) b := getBit(kExt[j*gsize:(j+1)*gsize], i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q[i].Add(Q[i], table.data[b]) Q[i].Add(Q[i], table.data[b])
} }
} }
} }
table.NewTableG2( a[(ntables-1)*gsize:], gsize) table.newTableG2(a[(ntables-1)*gsize:], gsize, false)
msb := getMsb(k_ext[(ntables-1)*gsize:]) msb := getMsb(kExt[(ntables-1)*gsize:])
for i := msb - 1; i >= 0; i-- { for i := msb - 1; i >= 0; i-- {
b := getBit(k_ext[(ntables-1)*gsize:],i) b := getBit(kExt[(ntables-1)*gsize:], i)
if b != 0 { if b != 0 {
// TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient. // TODO. bn256 doesn't export mixed addition (Jacobian + Affine), which is more efficient.
Q[i].Add(Q[i], table.data[b]) Q[i].Add(Q[i], table.data[b])
@@ -411,21 +432,20 @@ func ScalarMultNoDoubleG2(a []*bn256.G2, k []*big.Int, Q_prev *bn256.G2, gsize i
		R = new(bn256.G2).Add(R, R)
		R.Add(R, Q[i-1])
	}

	if qPrev != nil {
		return R.Add(R, qPrev)
	}
	return R
}
// Return most significant bit position in a group of Big Integers
func getMsb(k []*big.Int) int {
	msb := 0

	for _, el := range k {
		tmpMsb := el.BitLen()
		if tmpMsb > msb {
			msb = tmpMsb
		}
	}
	return msb
@@ -433,11 +453,11 @@ func getMsb(k []*big.Int) int{
// Return ith bit in group of Big Integers
func getBit(k []*big.Int, i int) uint {
	tableIdx := uint(0)

	for idx, el := range k {
		b := el.Bit(i)
		tableIdx += (b << idx)
	}
	return tableIdx
}
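As a reference for how the no-doubling scan uses these helpers, here is a small standalone sketch (the local `getBit` copy and the toy scalars are illustrative only; in the real code `table.data[idx]` holds the corresponding subset sum of the group's points):

```go
package main

import (
	"fmt"
	"math/big"
)

// Local copy of getBit for illustration: bit i of every scalar in the group
// is packed into a single table index (scalar 0 is the least significant bit).
func getBit(k []*big.Int, i int) uint {
	tableIdx := uint(0)
	for idx, el := range k {
		tableIdx += el.Bit(i) << idx
	}
	return tableIdx
}

func main() {
	// Group of 3 scalars: 5 = 101b, 6 = 110b, 3 = 011b.
	k := []*big.Int{big.NewInt(5), big.NewInt(6), big.NewInt(3)}

	// bit 0 of (5,6,3) = (1,0,1) -> index 0b101 = 5, i.e. the subset sum a[0]+a[2]
	// bit 1 of (5,6,3) = (0,1,1) -> index 0b110 = 6, i.e. a[1]+a[2]
	// bit 2 of (5,6,3) = (1,1,0) -> index 0b011 = 3, i.e. a[0]+a[1]
	for i := 2; i >= 0; i-- {
		fmt.Printf("bit %d -> table index %d\n", i, getBit(k, i))
	}
}
```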

View File

@@ -1,13 +1,14 @@
package prover

import (
	"bytes"
	"crypto/rand"
	"fmt"
	"math/big"
	"testing"
	"time"

	bn256 "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare"
)
const (
@@ -43,8 +44,6 @@ func randomG2Array(n int) []*bn256.G2 {
	return arrayG2
}

func TestTableG1(t *testing.T) {
	n := N1
@@ -62,34 +61,33 @@ func TestTableG1(t *testing.T){
	for gsize := 2; gsize < 10; gsize++ {
		ntables := int((n + gsize - 1) / gsize)
		table := make([]tableG1, ntables)

		for i := 0; i < ntables-1; i++ {
			table[i].newTableG1(arrayG1[i*gsize:(i+1)*gsize], gsize, true)
		}
		table[ntables-1].newTableG1(arrayG1[(ntables-1)*gsize:], gsize, true)

		beforeT = time.Now()
		Q2 := new(bn256.G1).ScalarBaseMult(new(big.Int))
		for i := 0; i < ntables-1; i++ {
			Q2 = table[i].mulTableG1(arrayW[i*gsize:(i+1)*gsize], Q2, gsize)
		}
		Q2 = table[ntables-1].mulTableG1(arrayW[(ntables-1)*gsize:], Q2, gsize)
		fmt.Printf("Gsize : %d, TMult time elapsed: %s\n", gsize, time.Since(beforeT))

		beforeT = time.Now()
		Q3 := scalarMultG1(arrayG1, arrayW, nil, gsize)
		fmt.Printf("Gsize : %d, TMult time elapsed (inc table comp): %s\n", gsize, time.Since(beforeT))

		beforeT = time.Now()
		Q4 := mulTableNoDoubleG1(table, arrayW, nil, gsize)
		fmt.Printf("Gsize : %d, TMultNoDouble time elapsed: %s\n", gsize, time.Since(beforeT))

		beforeT = time.Now()
		Q5 := scalarMultNoDoubleG1(arrayG1, arrayW, nil, gsize)
		fmt.Printf("Gsize : %d, TMultNoDouble time elapsed (inc table comp): %s\n", gsize, time.Since(beforeT))

		if bytes.Compare(Q1.Marshal(), Q2.Marshal()) != 0 {
			t.Error("Error in TMult")
		}
@@ -122,34 +120,33 @@ func TestTableG2(t *testing.T){
	for gsize := 2; gsize < 10; gsize++ {
		ntables := int((n + gsize - 1) / gsize)
		table := make([]tableG2, ntables)

		for i := 0; i < ntables-1; i++ {
			table[i].newTableG2(arrayG2[i*gsize:(i+1)*gsize], gsize, false)
		}
		table[ntables-1].newTableG2(arrayG2[(ntables-1)*gsize:], gsize, false)

		beforeT = time.Now()
		Q2 := new(bn256.G2).ScalarBaseMult(new(big.Int))
		for i := 0; i < ntables-1; i++ {
			Q2 = table[i].mulTableG2(arrayW[i*gsize:(i+1)*gsize], Q2, gsize)
		}
		Q2 = table[ntables-1].mulTableG2(arrayW[(ntables-1)*gsize:], Q2, gsize)
		fmt.Printf("Gsize : %d, TMult time elapsed: %s\n", gsize, time.Since(beforeT))

		beforeT = time.Now()
		Q3 := scalarMultG2(arrayG2, arrayW, nil, gsize)
		fmt.Printf("Gsize : %d, TMult time elapsed (inc table comp): %s\n", gsize, time.Since(beforeT))

		beforeT = time.Now()
		Q4 := mulTableNoDoubleG2(table, arrayW, nil, gsize)
		fmt.Printf("Gsize : %d, TMultNoDouble time elapsed: %s\n", gsize, time.Since(beforeT))

		beforeT = time.Now()
		Q5 := scalarMultNoDoubleG2(arrayG2, arrayW, nil, gsize)
		fmt.Printf("Gsize : %d, TMultNoDouble time elapsed (inc table comp): %s\n", gsize, time.Since(beforeT))

		if bytes.Compare(Q1.Marshal(), Q2.Marshal()) != 0 {
			t.Error("Error in TMult")
		}

View File

@@ -13,36 +13,6 @@ import (
//"fmt" //"fmt"
) )
// Proof is the data structure of the Groth16 zkSNARK proof
type Proof struct {
	A *bn256.G1
	B *bn256.G2
	C *bn256.G1
}

// Pk holds the data structure of the ProvingKey
type Pk struct {
	A []*bn256.G1
	B2 []*bn256.G2
	B1 []*bn256.G1
	C []*bn256.G1
	NVars int
	NPublic int
	VkAlpha1 *bn256.G1
	VkDelta1 *bn256.G1
	VkBeta1 *bn256.G1
	VkBeta2 *bn256.G2
	VkDelta2 *bn256.G2
	HExps []*bn256.G1
	DomainSize int
	PolsA []map[int]*big.Int
	PolsB []map[int]*big.Int
	PolsC []map[int]*big.Int
}

// Witness contains the witness
type Witness []*big.Int
// Group Size
const (
	GSIZE = 6
@@ -87,25 +57,25 @@ func GenerateProof(pk *types.Pk, w types.Witness) (*types.Proof, []*big.Int, err
	for _cpu, _ranges := range ranges(pk.NVars, numcpu) {
		// split 1
		go func(cpu int, ranges [2]int) {
			proofA[cpu] = scalarMultNoDoubleG1(pk.A[ranges[0]:ranges[1]],
				w[ranges[0]:ranges[1]],
				proofA[cpu],
				gsize)
			proofB[cpu] = scalarMultNoDoubleG2(pk.B2[ranges[0]:ranges[1]],
				w[ranges[0]:ranges[1]],
				proofB[cpu],
				gsize)
			proofBG1[cpu] = scalarMultNoDoubleG1(pk.B1[ranges[0]:ranges[1]],
				w[ranges[0]:ranges[1]],
				proofBG1[cpu],
				gsize)

			minLim := pk.NPublic + 1
			if ranges[0] > pk.NPublic+1 {
				minLim = ranges[0]
			}
			if ranges[1] > pk.NPublic+1 {
				proofC[cpu] = scalarMultNoDoubleG1(pk.C[minLim:ranges[1]],
					w[minLim:ranges[1]],
					proofC[cpu],
					gsize)
			}
@@ -142,7 +112,7 @@ func GenerateProof(pk *types.Pk, w types.Witness) (*types.Proof, []*big.Int, err
	for _cpu, _ranges := range ranges(len(h), numcpu) {
		// split 2
		go func(cpu int, ranges [2]int) {
			proofC[cpu] = scalarMultNoDoubleG1(pk.HExps[ranges[0]:ranges[1]],
				h[ranges[0]:ranges[1]],
				proofC[cpu],
				gsize)
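The `ranges(n, numcpu)` helper that drives both splits is not shown in this diff; a minimal sketch of how such a partition into contiguous `[start, end)` intervals could look (an assumption about its shape, not the repository's actual implementation):

```go
package main

import "fmt"

// ranges splits n items into numcpu contiguous [start, end) intervals,
// giving the remainder of the division to the last interval.
func ranges(n, numcpu int) [][2]int {
	out := make([][2]int, numcpu)
	chunk := n / numcpu
	for cpu := 0; cpu < numcpu; cpu++ {
		start := cpu * chunk
		end := start + chunk
		if cpu == numcpu-1 {
			end = n // last worker takes the remainder
		}
		out[cpu] = [2]int{start, end}
	}
	return out
}

func main() {
	fmt.Println(ranges(10, 4)) // [[0 2] [2 4] [4 6] [6 10]]
}
```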

View File

@@ -4,6 +4,7 @@ import (
"encoding/json" "encoding/json"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"os"
"testing" "testing"
"time" "time"
@@ -21,20 +22,40 @@ func TestCircuitsGenerateProof(t *testing.T) {
}

func testCircuitGenerateProof(t *testing.T, circuit string) {
	// Using json provingKey file:
	// provingKeyJson, err := ioutil.ReadFile("../testdata/" + circuit + "/proving_key.json")
	// require.Nil(t, err)
	// pk, err := parsers.ParsePk(provingKeyJson)
	// require.Nil(t, err)
	// witnessJson, err := ioutil.ReadFile("../testdata/" + circuit + "/witness.json")
	// require.Nil(t, err)
	// w, err := parsers.ParseWitness(witnessJson)
	// require.Nil(t, err)

	// Using bin provingKey file:
	// pkBinFile, err := os.Open("../testdata/" + circuit + "/proving_key.bin")
	// require.Nil(t, err)
	// defer pkBinFile.Close()
	// pk, err := parsers.ParsePkBin(pkBinFile)
	// require.Nil(t, err)

	// Using go bin provingKey file:
	pkGoBinFile, err := os.Open("../testdata/" + circuit + "/proving_key.go.bin")
	require.Nil(t, err)
	defer pkGoBinFile.Close()
	pk, err := parsers.ParsePkGoBin(pkGoBinFile)
	require.Nil(t, err)

	witnessBinFile, err := os.Open("../testdata/" + circuit + "/witness.bin")
	require.Nil(t, err)
	defer witnessBinFile.Close()
	w, err := parsers.ParseWitnessBin(witnessBinFile)
	require.Nil(t, err)

	beforeT := time.Now()
	proof, pubSignals, err := GenerateProof(pk, w)
	assert.Nil(t, err)
	fmt.Println("proof generation time for "+circuit+" elapsed:", time.Since(beforeT))

	proofStr, err := parsers.ProofToJson(proof)
	assert.Nil(t, err)

View File

@@ -16,7 +16,7 @@ There may be some concern on the additional size of the tables since they need t
| Algorithm (G1) | GS 2 | GS 3 | GS 4 | GS 5 | GS 6 | GS 7 | GS 8 | GS 9 |
|---|---|---|---|---|---|---|---|---|
| Naive | 6.63s | - | - | - | - | - | - | - |
| Strauss | 13.16s | 9.03s | 6.95s | 5.61s | 4.91s | 4.26s | 3.88s | 3.54s |
| Strauss + Table Computation | 16.13s | 11.32s | 8.47s | 7.10s | 6.2s | 5.94s | 6.01s | 6.69s |
@@ -26,7 +26,7 @@ There may be some concern on the additional size of the tables since they need t
There are 5000 G2 elliptic curve points, and the scalars are 254 bits (BN256 curve).

| Algorithm (G2) | GS 2 | GS 3 | GS 4 | GS 5 | GS 6 | GS 7 | GS 8 | GS 9 |
|---|---|---|---|---|---|---|---|---|
| Naive | 3.55s | | | | | | | |
| Strauss | 3.55s | 2.54s | 1.96s | 1.58s | 1.38s | 1.20s | 1.03s | 937ms |
| Strauss + Table Computation | 3.59s | 2.58s | 2.04s | 1.71s | 1.51s | 1.46s | 1.51s | 1.82s |

View File

@@ -1,25 +0,0 @@
# Tables Pre-calculation

The most time consuming part of a zkSNARK proof computation is the scalar multiplication of elliptic curve points. The naive mechanism accumulates each multiplication separately, although the prover only needs the total accumulation.

There are two potential improvements to the naive approach:
1. Apply the Strauss-Shamir method (https://stackoverflow.com/questions/50993471/ec-scalar-multiplication-with-strauss-shamir-method).
2. Leave the doubling operation for the last step.

Both options can be combined.

In the following table, we show the results of using the naive method, Strauss-Shamir and Strauss-Shamir + no doubling. These last two options are repeated for different table grouping orders.

There are 5000 G1 elliptic curve points, and the scalars are 254 bits (BN256 curve).

There may be some concern about the additional size of the tables, since they need to be loaded into a smartphone during the proof, and the time required to load these tables may exceed the benefits. If this is a problem, another alternative is to compute the tables during the proof itself. Depending on the group size, timing may still be better than the naive approach.

| Algorithm | GS 2 | GS 3 | GS 4 | GS 5 | GS 6 | GS 7 | GS 8 | GS 9 |
|---|---|---|---|---|---|---|---|---|
| Naive | 6.63s | | | | | | | |
| Strauss | 13.16s | 9.033s | 6.95s | 5.61s | 4.91s | 4.26s | 3.88s | 3.54s |
| Strauss + Table Computation | 16.13s | 11.32s | 8.47s | 7.10s | 6.2s | 5.94s | 6.01s | 6.69s |
| No Doubling | 3.74s | 3.00s | 2.38s | 1.96s | 1.79s | 1.54s | 1.50s | 1.44s |
| No Doubling + Table Computation | 6.83s | 5.1s | 4.16s | 3.52s | 3.22s | 3.21s | 3.57s | 4.56s |
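To make the grouping idea concrete, here is a minimal sketch of the Strauss-style scan for a single group of G1 points, using the same bn256 library (illustrative only; the real code uses the tableG1/tableG2 types and splits the input into several groups of size GS):

```go
package main

import (
	"bytes"
	"fmt"
	"math/big"

	bn256 "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare"
)

// strauss computes sum_i k[i]*a[i] for one small group: it precomputes the
// 2^len(a) subset sums, then does a single left-to-right scan with one
// doubling and one table addition per scalar bit.
func strauss(a []*bn256.G1, k []*big.Int) *bn256.G1 {
	// table[b] = sum of a[j] for every bit j set in b (table[0] = identity).
	table := make([]*bn256.G1, 1<<len(a))
	table[0] = new(bn256.G1).ScalarBaseMult(new(big.Int))
	for b := 1; b < len(table); b++ {
		j := 0
		for (b>>j)&1 == 0 {
			j++
		}
		table[b] = new(bn256.G1).Add(table[b&(b-1)], a[j])
	}

	// Highest bit position over all scalars in the group.
	msb := 0
	for _, el := range k {
		if el.BitLen() > msb {
			msb = el.BitLen()
		}
	}

	// Left-to-right scan: double the accumulator, then add the table entry
	// selected by the current bit of every scalar in the group.
	Q := new(bn256.G1).ScalarBaseMult(new(big.Int))
	for i := msb - 1; i >= 0; i-- {
		Q = new(bn256.G1).Add(Q, Q)
		idx := uint(0)
		for j, el := range k {
			idx += el.Bit(i) << j
		}
		Q = new(bn256.G1).Add(Q, table[idx])
	}
	return Q
}

func main() {
	a := []*bn256.G1{
		new(bn256.G1).ScalarBaseMult(big.NewInt(7)),
		new(bn256.G1).ScalarBaseMult(big.NewInt(11)),
	}
	k := []*big.Int{big.NewInt(5), big.NewInt(3)}

	got := strauss(a, k)
	want := new(bn256.G1).ScalarBaseMult(big.NewInt(7*5 + 11*3)) // 68*G
	fmt.Println(bytes.Equal(got.Marshal(), want.Marshal()))      // true
}
```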

View File

@@ -7,3 +7,4 @@ rm */*.cpp
rm */*.sym
rm */*.r1cs
rm */*.sol
rm */*.bin

View File

@@ -45,19 +45,23 @@ compile_and_ts_and_witness
cd ../

echo "convert witness & pk of circuit1k to bin & go bin"
node node_modules/wasmsnark/tools/buildwitness.js -i circuit1k/witness.json -o circuit1k/witness.bin
node node_modules/wasmsnark/tools/buildpkey.js -i circuit1k/proving_key.json -o circuit1k/proving_key.bin
go run ../cli/cli.go -convert -pk circuit1k/proving_key.json -pkbin circuit1k/proving_key.go.bin

echo "convert witness & pk of circuit5k to bin & go bin"
node node_modules/wasmsnark/tools/buildwitness.js -i circuit5k/witness.json -o circuit5k/witness.bin
node node_modules/wasmsnark/tools/buildpkey.js -i circuit5k/proving_key.json -o circuit5k/proving_key.bin
go run ../cli/cli.go -convert -pk circuit5k/proving_key.json -pkbin circuit5k/proving_key.go.bin

# echo "convert witness & pk of circuit10k to bin & go bin"
# node node_modules/wasmsnark/tools/buildwitness.js -i circuit10k/witness.json -o circuit10k/witness.bin
# node node_modules/wasmsnark/tools/buildpkey.js -i circuit10k/proving_key.json -o circuit10k/proving_key.bin
# go run ../cli/cli.go -convert -pk circuit10k/proving_key.json -pkbin circuit10k/proving_key.go.bin
#
# echo "convert witness & pk of circuit20k to bin & go bin"
# node node_modules/wasmsnark/tools/buildwitness.js -i circuit20k/witness.json -o circuit20k/witness.bin
# node node_modules/wasmsnark/tools/buildpkey.js -i circuit20k/proving_key.json -o circuit20k/proving_key.bin
# go run ../cli/cli.go -convert -pk circuit20k/proving_key.json -pkbin circuit20k/proving_key.go.bin

View File

@@ -1,11 +1,15 @@
package types

import (
	"encoding/hex"
	"encoding/json"
	"math/big"

	bn256 "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare"
)

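// Q is the base field modulus of the bn256 curve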
var Q, _ = new(big.Int).SetString("21888242871839275222246405745257275088696311157297823662689037894645226208583", 10)

// R is the mod of the finite field
var R, _ = new(big.Int).SetString("21888242871839275222246405745257275088548364400416034343698204186575808495617", 10)
@@ -16,6 +20,52 @@ type Proof struct {
	C *bn256.G1
}
type proofAux struct {
	A string `json:"pi_a"`
	B string `json:"pi_b"`
	C string `json:"pi_c"`
}

func (p Proof) MarshalJSON() ([]byte, error) {
	var pa proofAux
	pa.A = hex.EncodeToString(p.A.Marshal())
	pa.B = hex.EncodeToString(p.B.Marshal())
	pa.C = hex.EncodeToString(p.C.Marshal())
	return json.Marshal(pa)
}

func (p *Proof) UnmarshalJSON(data []byte) error {
	var pa proofAux
	if err := json.Unmarshal(data, &pa); err != nil {
		return err
	}

	aBytes, err := hex.DecodeString(pa.A)
	if err != nil {
		return err
	}
	p.A = new(bn256.G1)
	if _, err := p.A.Unmarshal(aBytes); err != nil {
		return err
	}

	bBytes, err := hex.DecodeString(pa.B)
	if err != nil {
		return err
	}
	p.B = new(bn256.G2)
	if _, err := p.B.Unmarshal(bBytes); err != nil {
		return err
	}

	cBytes, err := hex.DecodeString(pa.C)
	if err != nil {
		return err
	}
	p.C = new(bn256.G1)
	if _, err := p.C.Unmarshal(cBytes); err != nil {
		return err
	}
	return nil
}
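A minimal usage sketch of the JSON round-trip above (it assumes the package import path github.com/iden3/go-circom-prover-verifier/types; the points are placeholders, not a valid proof):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"math/big"

	bn256 "github.com/ethereum/go-ethereum/crypto/bn256/cloudflare"
	"github.com/iden3/go-circom-prover-verifier/types"
)

func main() {
	// Dummy points just to exercise MarshalJSON / UnmarshalJSON
	// (not a valid Groth16 proof).
	p := types.Proof{
		A: new(bn256.G1).ScalarBaseMult(big.NewInt(1)),
		B: new(bn256.G2).ScalarBaseMult(big.NewInt(2)),
		C: new(bn256.G1).ScalarBaseMult(big.NewInt(3)),
	}

	raw, err := json.Marshal(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(raw)) // {"pi_a":"...","pi_b":"...","pi_c":"..."}

	var back types.Proof
	if err := json.Unmarshal(raw, &back); err != nil {
		panic(err)
	}
	fmt.Println(bytes.Equal(p.A.Marshal(), back.A.Marshal())) // true
}
```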
// Pk holds the data structure of the ProvingKey
type Pk struct {
	A []*bn256.G1
@@ -33,7 +83,6 @@ type Pk struct {
	DomainSize int
	PolsA []map[int]*big.Int
	PolsB []map[int]*big.Int
	PolsC []map[int]*big.Int
}

// Witness contains the witness