Compare commits

...

48 Commits

Author SHA1 Message Date
Péter Szilágyi
dea1ce052a params: release go-ethereum v1.8.11 2018-06-12 17:02:14 +03:00
Felföldi Zsolt
25982375a8 les: fix retriever logic (#16776)
This PR fixes a retriever logic bug. When a peer had a soft timeout
and a response then arrived, the code always assumed the response came
from that timed-out peer, even though it could have come from a
later-requested peer that had not timed out yet. In that case the logic
entered an illegal state and deadlocked, causing a goroutine leak.

Fixes #16243 and replaces #16359.
Thanks to @riceke for finding the bug in the logic.
2018-06-12 15:58:47 +02:00
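A minimal sketch of the corrected bookkeeping (hypothetical types, not
the actual les retriever code): replies are only credited to peers that
still have a request in flight, rather than being attributed to
whichever peer soft-timed out last.

    package main

    import (
        "fmt"
        "time"
    )

    // retriever tracks which peers still owe us a response.
    type retriever struct {
        inflight map[string]time.Time // peer ID -> request send time
    }

    // deliver accepts a reply only from a peer with a request in flight,
    // so a late reply cannot be mistaken for the soft-timed-out one.
    func (r *retriever) deliver(peerID string) error {
        if _, ok := r.inflight[peerID]; !ok {
            return fmt.Errorf("unrequested reply from %s", peerID)
        }
        delete(r.inflight, peerID)
        return nil
    }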
Felföldi Zsolt
049f5b3572 core, eth, les: more efficient hash-based header chain retrieval (#16946) 2018-06-12 16:52:54 +03:00
Felix Lange
0255951587 crypto: replace ToECDSAPub with error-checking func UnmarshalPubkey (#16932)
ToECDSAPub was unsafe because it returned a non-nil key with nil X, Y in
case of invalid input. This change replaces ToECDSAPub with
UnmarshalPubkey across the codebase.
2018-06-12 15:26:08 +02:00
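The new call-site pattern, as it appears in the cmd/wnode diff further
down this page:

    var err error
    if pub, err = crypto.UnmarshalPubkey(common.FromHex(*argPub)); err != nil {
        utils.Fatalf("invalid public key")
    }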
Péter Szilágyi
85cd64df0e Merge pull request #16958 from karalabe/pending-account-fast
internal/ethapi: reduce pendingTransactions to O(txs+accs) from O(txs*accs)
2018-06-12 14:07:21 +03:00
Péter Szilágyi
9608ccf106 Merge pull request #16959 from karalabe/fix-linters
metrics: fix gofmt linter warnings
2018-06-12 14:04:59 +03:00
Péter Szilágyi
3f06da7b5f metrics: fix gofmt linter warnings 2018-06-12 14:02:36 +03:00
Felföldi Zsolt
546d42179e les: pass server pool to protocol manager (#16947) 2018-06-12 14:00:52 +03:00
Péter Szilágyi
90829a04bf internal/ethapi: reduce pendingTransactions to O(txs+accs) from O(txs*accs) 2018-06-12 13:49:43 +03:00
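Sketched in isolation (an illustrative helper, not the exact
internal/ethapi code), the technique is to build the set of managed
accounts once and then make a single pass over the pending
transactions, instead of looping over every (transaction, account)
pair:

    import (
        "github.com/ethereum/go-ethereum/accounts"
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/core/types"
    )

    // pendingOwnedBy filters pending transactions down to those sent from
    // a managed account: O(accs) to build the set, O(txs) to filter.
    func pendingOwnedBy(am *accounts.Manager, signer types.Signer, pending []*types.Transaction) []*types.Transaction {
        owned := make(map[common.Address]struct{})
        for _, wallet := range am.Wallets() {
            for _, account := range wallet.Accounts() {
                owned[account.Address] = struct{}{}
            }
        }
        var result []*types.Transaction
        for _, tx := range pending {
            if from, err := types.Sender(signer, tx); err == nil {
                if _, ok := owned[from]; ok {
                    result = append(result, tx)
                }
            }
        }
        return result
    }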
gary rong
f991995918 ethdb: gracefully handle quit channel (#16794)
* ethdb: gracefully handle quit channel

* ethdb: minor polish
2018-06-11 16:11:48 +03:00
Wenbiao Zheng
aab7ab04b0 core/rawdb: wrap db key creations (#16914)
* core/rawdb: use wrapped helper to assemble key

* core/rawdb: wrapped helper to assemble key

* core/rawdb: rewrite the wrapper, pass common.Hash
2018-06-11 16:06:26 +03:00
Péter Szilágyi
43b940ec5a Merge pull request #16945 from karalabe/triedb-spurious-warning
trie: don't report the root flushlist as an alloc
2018-06-11 15:02:37 +03:00
Clayton Jacobs
b487bdf0ba metrics: removed repetitive calculations (#16944) 2018-06-11 14:45:25 +03:00
Péter Szilágyi
a3267ed929 trie: don't report the root flushlist as an alloc 2018-06-11 14:32:13 +03:00
Péter Szilágyi
9f7592c802 Merge pull request #16942 from karalabe/rpc-nil-reply
rpc: support returning nil pointer big.Ints (null)
2018-06-11 13:58:17 +03:00
Péter Szilágyi
99483e85b9 rpc: support returning nil pointer big.Ints (null) 2018-06-11 13:56:22 +03:00
xincaosu
1d666cf27e rpc: fix a comment typo (#16929) 2018-06-11 12:01:13 +03:00
Martin Holst Swende
eac16f9824 core: improve getBadBlocks to return full block rlp (#16902)
* core: improve getBadBlocks to return full block rlp

* core, eth, ethapi: changes to getBadBlocks formatting

* ethapi: address review concerns
2018-06-11 11:03:40 +03:00
Steven Roose
69c52bde3f ethclient: fix RPC parse error of Parity response (#16924)
The error produced when using a Parity RPC was the following:

ERROR: transaction did not get mined: failed to get tx for txid 0xbdeb094b3278019383c8da148ff1cb5b5dbd61bf8731bc2310ac1b8ed0235226: json: cannot unmarshal non-string into Go struct field txExtraInfo.blockHash of type common.Hash
2018-06-11 10:41:09 +03:00
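The failure comes from Parity returning a JSON null blockHash for
pending transactions, which cannot be decoded into a value-typed
common.Hash. Declaring the field as a pointer lets null decode to nil;
a sketch of the approach (the exact ethclient field set may differ):

    type txExtraInfo struct {
        BlockNumber *string         `json:"blockNumber,omitempty"`
        BlockHash   *common.Hash    `json:"blockHash,omitempty"` // nil while pending
        From        *common.Address `json:"from,omitempty"`
    }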
Felföldi Zsolt
2977538ac0 light: new CHTs for mainnet and ropsten (#16926) 2018-06-11 10:38:05 +03:00
Anton Evangelatov
7f0726f706 metrics: return an empty snapshot for NilResettingTimer (#16930) 2018-06-11 10:31:55 +03:00
Steven Roose
13af276418 cmd/ethkey: add command to change key passphrase (#16516)
This change introduces 

    ethkey changepassphrase <keyfile>

to change the passphrase of a key file.
2018-06-08 15:07:07 +02:00
Sarlor
ea06da0892 trie: avoid unnecessary slicing on shortnode decoding (#16917)
Code optimization.
2018-06-07 11:48:36 +03:00
ledgerwatch
feb6620c34 core: relax type requirement for bc in ApplyTransaction (#16901) 2018-06-07 10:34:24 +02:00
Bruno Škvorc
90b22773e9 cmd/puppeth: fixed a typo in a wizard input query (#16910) 2018-06-06 12:17:41 +03:00
Guillaume Ballet
9e4f96a1a6 whisper: re-insert #16757 that has been lost during a merge (#16889) 2018-06-05 16:25:16 +02:00
Péter Szilágyi
01a7e267dc Merge pull request #16882 from karalabe/streaming-ecrecover
core: concurrent background transaction sender ecrecover
2018-06-05 17:13:43 +03:00
Felix Lange
e8ea5aa0d5 trie: reduce hasher allocations (#16896)
* trie: reduce hasher allocations

name    old time/op    new time/op    delta
Hash-8    4.05µs ±12%    3.56µs ± 9%  -12.13%  (p=0.000 n=20+19)

name    old alloc/op   new alloc/op   delta
Hash-8    1.30kB ± 0%    0.66kB ± 0%  -49.15%  (p=0.000 n=20+20)

name    old allocs/op  new allocs/op  delta
Hash-8      11.0 ± 0%       8.0 ± 0%  -27.27%  (p=0.000 n=20+20)

* trie: bump initial buffer cap in hasher
2018-06-05 15:06:29 +03:00
Elad
5bee5d69d7 vendor: added vendor packages necessary for the swarm-network-rewrite merge (#16792)
* vendor: added vendor packages necessary for the swarm-network-rewrite merge into ethereum master

* vendor: removed multihash deps
2018-06-05 12:40:21 +02:00
kiel barry
cbfb40b0aa params: fix golint warnings (#16853)
params: fix golint warnings
2018-06-05 12:31:34 +02:00
Antonio Salazar Cardozo
4cf2b4110e cmd/abigen: support for reading solc output from stdin (#16683)
Allow the --abi flag to be given '-' to indicate that it should read
the ABI information from standard input. It expects to read solc output
produced with the --combined-json flag providing bin, abi, userdoc,
devdoc, and metadata, and works very similarly to the internal solc
invocation, except that solc is invoked externally.

This facilitates integration with more complex solc invocations, such
as invocations that require path remapping or --allow-paths tweaks.

Simple usage example:

    solc --combined-json bin,abi,userdoc,devdoc,metadata *.sol | abigen --abi -
2018-06-05 12:22:02 +02:00
Mark
0029a869f0 miner: don't call commitNewWork if it's a side block (#16751) 2018-06-05 12:10:09 +02:00
Péter Szilágyi
2ab24a2a8f core: concurrent background transaction sender ecrecover 2018-06-05 11:03:55 +03:00
Martin Holst Swende
400332b99d eth/tracers: fix minor off-by-one error (#16879)
* tracing: fix minor off-by-one error

* tracers: go generate
2018-06-05 10:27:07 +03:00
Felföldi Zsolt
a5237a27ea les: add Skip overflow check to GetBlockHeadersMsg handler (#16891) 2018-06-05 10:23:00 +03:00
Péter Szilágyi
7a22e89080 Merge pull request #16894 from hadv/master
core: fix typo in code comment
2018-06-05 10:11:42 +03:00
hadv
e3a993d774 core: fix typo in code comment 2018-06-05 09:56:45 +07:00
Péter Szilágyi
ed40767355 Merge pull request #16800 from rjl493456442/memory_allowance_warining
cmd: cap cache size if it exceeds a reasonable range
2018-06-04 17:40:51 +03:00
rjl493456442
a20cc75b4a cmd/geth: cap cache allowance 2018-06-04 13:46:39 +03:00
Péter Szilágyi
b659718fd0 Merge pull request #16880 from holiman/http_timeouts
rpc: set timeouts for http server, see #16859
2018-06-04 13:38:23 +03:00
Anton Evangelatov
be2aec092d metrics: expvar support for ResettingTimer (#16878)
* metrics: expvar support for ResettingTimer

* metrics: use integers for percentiles; remove Overall

* metrics: fix edge-case panic for index-out-of-range
2018-06-04 13:05:16 +03:00
Martin Holst Swende
17f80cc2e2 rpc: set timeouts for http server, see #16859 2018-06-04 11:41:55 +02:00
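The substance of the change: Go's zero-value http.Server never times
out connections, so idle or stalled clients pin resources forever. A
sketch of the shape, with illustrative durations rather than the exact
values chosen in the PR:

    srv := &http.Server{
        Handler:      handler,
        ReadTimeout:  30 * time.Second,  // limit reading the request
        WriteTimeout: 30 * time.Second,  // limit writing the response
        IdleTimeout:  120 * time.Second, // limit keep-alive idling
    }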
Péter Szilágyi
143c4341d8 core, eth, trie: streaming GC for the trie cache (#16810)
* core, eth, trie: streaming GC for the trie cache

* trie: track memcache statistics
2018-06-04 10:47:43 +03:00
Felix Lange
3f33a7c8ce consensus/ethash: reduce keccak hash allocations (#16857)
Use Read instead of Sum to avoid internal allocations and
copying the state.

name                      old time/op  new time/op  delta
CacheGeneration-8          764ms ± 1%   579ms ± 1%  -24.22%  (p=0.000 n=20+17)
SmallDatasetGeneration-8  75.2ms ±12%  60.6ms ±10%  -19.37%  (p=0.000 n=20+20)
HashimotoLight-8          1.58ms ±11%  1.55ms ± 8%     ~     (p=0.322 n=20+19)
HashimotoFullSmall-8      4.90µs ± 1%  4.88µs ± 1%   -0.31%  (p=0.013 n=19+18)
2018-06-04 10:32:32 +03:00
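The saving comes from how the two calls expose the sponge state: Sum
copies the state so the caller can keep hashing, then allocates an
output slice, while Read squeezes output bytes directly. A minimal
illustration (assuming the concrete sha3 state implements Read, as the
makeHasher diff below asserts via its readerHash interface):

    import (
        "io"

        "github.com/ethereum/go-ethereum/crypto/sha3"
    )

    func hashOnce(data []byte) [32]byte {
        var out [32]byte
        h := sha3.NewKeccak256()
        h.Write(data)
        h.(io.Reader).Read(out[:]) // no state copy, no fresh allocation
        return out
    }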
Ryan Schneider
c8dcb9584e rpc: use HTTP request context as top-level context (#16861) 2018-06-02 12:26:47 +02:00
kiel barry
af28d12847 console: squash golint warnings (#16836) 2018-05-31 13:59:08 +02:00
kiel barry
0ad32d3be7 ethstats: fix last golint warning (#16837) 2018-05-30 11:36:02 +03:00
Péter Szilágyi
68b0d30d4a VERSION, params: begin 1.8.11 release cycle 2018-05-30 11:04:45 +03:00
111 changed files with 31040 additions and 1162 deletions

View File

@@ -1 +1 @@
1.8.10
1.8.11

View File

@@ -29,7 +29,7 @@ import (
)
var (
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind")
abiFlag = flag.String("abi", "", "Path to the Ethereum contract ABI json to bind, - for STDIN")
binFlag = flag.String("bin", "", "Path to the Ethereum contract bytecode (generate deploy method)")
typFlag = flag.String("type", "", "Struct name for the binding (default = package name)")
@@ -75,16 +75,27 @@ func main() {
bins []string
types []string
)
if *solFlag != "" {
if *solFlag != "" || *abiFlag == "-" {
// Generate the list of types to exclude from binding
exclude := make(map[string]bool)
for _, kind := range strings.Split(*excFlag, ",") {
exclude[strings.ToLower(kind)] = true
}
contracts, err := compiler.CompileSolidity(*solcFlag, *solFlag)
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
var contracts map[string]*compiler.Contract
var err error
if *solFlag != "" {
contracts, err = compiler.CompileSolidity(*solcFlag, *solFlag)
if err != nil {
fmt.Printf("Failed to build Solidity contract: %v\n", err)
os.Exit(-1)
}
} else {
contracts, err = contractsFromStdin()
if err != nil {
fmt.Printf("Failed to read input ABIs from STDIN: %v\n", err)
os.Exit(-1)
}
}
// Gather all non-excluded contract for binding
for name, contract := range contracts {
@@ -138,3 +149,12 @@ func main() {
os.Exit(-1)
}
}
func contractsFromStdin() (map[string]*compiler.Contract, error) {
bytes, err := ioutil.ReadAll(os.Stdin)
if err != nil {
return nil, err
}
return compiler.ParseCombinedJSON(bytes, "", "", "", "")
}

View File

@@ -0,0 +1,72 @@
package main
import (
"fmt"
"io/ioutil"
"strings"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
"gopkg.in/urfave/cli.v1"
)
var newPassphraseFlag = cli.StringFlag{
Name: "newpasswordfile",
Usage: "the file that contains the new passphrase for the keyfile",
}
var commandChangePassphrase = cli.Command{
Name: "changepassphrase",
Usage: "change the passphrase on a keyfile",
ArgsUsage: "<keyfile>",
Description: `
Change the passphrase of a keyfile.`,
Flags: []cli.Flag{
passphraseFlag,
newPassphraseFlag,
},
Action: func(ctx *cli.Context) error {
keyfilepath := ctx.Args().First()
// Read key from file.
keyjson, err := ioutil.ReadFile(keyfilepath)
if err != nil {
utils.Fatalf("Failed to read the keyfile at '%s': %v", keyfilepath, err)
}
// Decrypt key with passphrase.
passphrase := getPassphrase(ctx)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err != nil {
utils.Fatalf("Error decrypting key: %v", err)
}
// Get a new passphrase.
fmt.Println("Please provide a new passphrase")
var newPhrase string
if passFile := ctx.String(newPassphraseFlag.Name); passFile != "" {
content, err := ioutil.ReadFile(passFile)
if err != nil {
utils.Fatalf("Failed to read new passphrase file '%s': %v", passFile, err)
}
newPhrase = strings.TrimRight(string(content), "\r\n")
} else {
newPhrase = promptPassphrase(true)
}
// Encrypt the key with the new passphrase.
newJson, err := keystore.EncryptKey(key, newPhrase, keystore.StandardScryptN, keystore.StandardScryptP)
if err != nil {
utils.Fatalf("Error encrypting with new passphrase: %v", err)
}
// Then write the new keyfile in place of the old one.
if err := ioutil.WriteFile(keyfilepath, newJson, 0600); err != nil {
utils.Fatalf("Error writing new keyfile to disk: %v", err)
}
// Don't print anything. Just return successfully,
// producing a positive exit code.
return nil
},
}

View File

@@ -90,7 +90,7 @@ If you want to encrypt an existing private key, it can be specified by setting
}
// Encrypt key with passphrase.
passphrase := getPassPhrase(ctx, true)
passphrase := promptPassphrase(true)
keyjson, err := keystore.EncryptKey(key, passphrase, keystore.StandardScryptN, keystore.StandardScryptP)
if err != nil {
utils.Fatalf("Error encrypting key: %v", err)

View File

@@ -60,7 +60,7 @@ make sure to use this feature with great caution!`,
}
// Decrypt key with passphrase.
passphrase := getPassPhrase(ctx, false)
passphrase := getPassphrase(ctx)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err != nil {
utils.Fatalf("Error decrypting key: %v", err)

View File

@@ -38,6 +38,7 @@ func init() {
app.Commands = []cli.Command{
commandGenerate,
commandInspect,
commandChangePassphrase,
commandSignMessage,
commandVerifyMessage,
}

View File

@@ -62,7 +62,7 @@ To sign a message contained in a file, use the --msgfile flag.
}
// Decrypt key with passphrase.
passphrase := getPassPhrase(ctx, false)
passphrase := getPassphrase(ctx)
key, err := keystore.DecryptKey(keyjson, passphrase)
if err != nil {
utils.Fatalf("Error decrypting key: %v", err)

View File

@@ -28,11 +28,32 @@ import (
"gopkg.in/urfave/cli.v1"
)
// getPassPhrase obtains a passphrase given by the user. It first checks the
// --passphrase command line flag and ultimately prompts the user for a
// promptPassphrase prompts the user for a passphrase. Set confirmation to true
// to require the user to confirm the passphrase.
func promptPassphrase(confirmation bool) string {
passphrase, err := console.Stdin.PromptPassword("Passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase: %v", err)
}
if confirmation {
confirm, err := console.Stdin.PromptPassword("Repeat passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase confirmation: %v", err)
}
if passphrase != confirm {
utils.Fatalf("Passphrases do not match")
}
}
return passphrase
}
// getPassphrase obtains a passphrase given by the user. It first checks the
// --passfile command line flag and ultimately prompts the user for a
// passphrase.
func getPassPhrase(ctx *cli.Context, confirmation bool) string {
// Look for the --passphrase flag.
func getPassphrase(ctx *cli.Context) string {
// Look for the --passwordfile flag.
passphraseFile := ctx.String(passphraseFlag.Name)
if passphraseFile != "" {
content, err := ioutil.ReadFile(passphraseFile)
@@ -44,20 +65,7 @@ func getPassPhrase(ctx *cli.Context, confirmation bool) string {
}
// Otherwise prompt the user for the passphrase.
passphrase, err := console.Stdin.PromptPassword("Passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase: %v", err)
}
if confirmation {
confirm, err := console.Stdin.PromptPassword("Repeat passphrase: ")
if err != nil {
utils.Fatalf("Failed to read passphrase confirmation: %v", err)
}
if passphrase != confirm {
utils.Fatalf("Passphrases do not match")
}
}
return passphrase
return promptPassphrase(false)
}
// signHash is a helper function that calculates a hash for the given message

View File

@@ -474,7 +474,7 @@ func (f *faucet) apiHandler(conn *websocket.Conn) {
amount = new(big.Int).Div(amount, new(big.Int).Exp(big.NewInt(2), big.NewInt(int64(msg.Tier)), nil))
tx := types.NewTransaction(f.nonce+uint64(len(f.reqs)), address, amount, 21000, f.price, nil)
signed, err := f.keystore.SignTx(f.account, tx, f.config.ChainId)
signed, err := f.keystore.SignTx(f.account, tx, f.config.ChainID)
if err != nil {
f.lock.Unlock()
if err = sendError(conn, err); err != nil {

View File

@@ -19,12 +19,16 @@ package main
import (
"fmt"
"math"
"os"
"runtime"
godebug "runtime/debug"
"sort"
"strconv"
"strings"
"time"
"github.com/elastic/gosigar"
"github.com/ethereum/go-ethereum/accounts"
"github.com/ethereum/go-ethereum/accounts/keystore"
"github.com/ethereum/go-ethereum/cmd/utils"
@@ -188,6 +192,22 @@ func init() {
if err := debug.Setup(ctx); err != nil {
return err
}
// Cap the cache allowance and tune the garbage collector
var mem gosigar.Mem
if err := mem.Get(); err == nil {
allowance := int(mem.Total / 1024 / 1024 / 3)
if cache := ctx.GlobalInt(utils.CacheFlag.Name); cache > allowance {
log.Warn("Sanitizing cache to Go's GC limits", "provided", cache, "updated", allowance)
ctx.GlobalSet(utils.CacheFlag.Name, strconv.Itoa(allowance))
}
}
// Ensure Go's GC ignores the database cache for trigger percentage
cache := ctx.GlobalInt(utils.CacheFlag.Name)
gogc := math.Max(20, math.Min(100, 100/(float64(cache)/1024)))
log.Debug("Sanitizing Go's GC trigger", "percent", int(gogc))
godebug.SetGCPercent(int(gogc))
// Start system runtime metrics collection
go metrics.CollectProcessMetrics(3 * time.Second)
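Worked through for a few --cache values (MiB), the trigger formula
above yields:

    cache = 1024 -> 100/(1024/1024) = 100  -> GOGC 100 (Go's default)
    cache = 2048 -> 100/(2048/1024) = 50   -> GOGC 50
    cache = 4096 -> 100/(4096/1024) = 25   -> GOGC 25
    cache = 8192 -> 100/(8192/1024) = 12.5 -> clamped up to the floor of 20

In other words, the larger the database cache, the less relative heap
growth the garbage collector tolerates before running, so Go's heap
does not balloon on top of an already large cache.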

View File

@@ -103,8 +103,8 @@ func newCppEthereumGenesisSpec(network string, genesis *core.Genesis) (*cppEther
spec.Params.ByzantiumForkBlock = (hexutil.Uint64)(genesis.Config.ByzantiumBlock.Uint64())
spec.Params.ConstantinopleForkBlock = (hexutil.Uint64)(math.MaxUint64)
spec.Params.NetworkID = (hexutil.Uint64)(genesis.Config.ChainId.Uint64())
spec.Params.ChainID = (hexutil.Uint64)(genesis.Config.ChainId.Uint64())
spec.Params.NetworkID = (hexutil.Uint64)(genesis.Config.ChainID.Uint64())
spec.Params.ChainID = (hexutil.Uint64)(genesis.Config.ChainID.Uint64())
spec.Params.MaximumExtraDataSize = (hexutil.Uint64)(params.MaximumExtraDataSize)
spec.Params.MinGasLimit = (hexutil.Uint64)(params.MinGasLimit)
@@ -284,7 +284,7 @@ func newParityChainSpec(network string, genesis *core.Genesis, bootnodes []strin
spec.Params.MaximumExtraDataSize = (hexutil.Uint64)(params.MaximumExtraDataSize)
spec.Params.MinGasLimit = (hexutil.Uint64)(params.MinGasLimit)
spec.Params.GasLimitBoundDivisor = (hexutil.Uint64)(params.GasLimitBoundDivisor)
spec.Params.NetworkID = (hexutil.Uint64)(genesis.Config.ChainId.Uint64())
spec.Params.NetworkID = (hexutil.Uint64)(genesis.Config.ChainID.Uint64())
spec.Params.MaxCodeSize = params.MaxCodeSize
spec.Params.EIP155Transition = genesis.Config.EIP155Block.Uint64()
spec.Params.EIP98Transition = math.MaxUint64

View File

@@ -609,7 +609,7 @@ func deployDashboard(client *sshClient, network string, conf *config, config *da
}
template.Must(template.New("").Parse(dashboardContent)).Execute(indexfile, map[string]interface{}{
"Network": network,
"NetworkID": conf.Genesis.Config.ChainId,
"NetworkID": conf.Genesis.Config.ChainID,
"NetworkTitle": strings.Title(network),
"EthstatsPage": config.ethstats,
"ExplorerPage": config.explorer,

View File

@@ -49,7 +49,7 @@ func (w *wizard) deployFaucet() {
existed := err == nil
infos.node.genesis, _ = json.MarshalIndent(w.conf.Genesis, "", " ")
infos.node.network = w.conf.Genesis.Config.ChainId.Int64()
infos.node.network = w.conf.Genesis.Config.ChainID.Int64()
// Figure out which port to listen on
fmt.Println()

View File

@@ -121,7 +121,7 @@ func (w *wizard) makeGenesis() {
// Query the user for some custom extras
fmt.Println()
fmt.Println("Specify your chain/network ID if you want an explicit one (default = random)")
genesis.Config.ChainId = new(big.Int).SetUint64(uint64(w.readDefaultInt(rand.Intn(65536))))
genesis.Config.ChainID = new(big.Int).SetUint64(uint64(w.readDefaultInt(rand.Intn(65536))))
// All done, store the genesis and flush to disk
log.Info("Configured new genesis block")

View File

@@ -56,7 +56,7 @@ func (w *wizard) deployNode(boot bool) {
existed := err == nil
infos.genesis, _ = json.MarshalIndent(w.conf.Genesis, "", " ")
infos.network = w.conf.Genesis.Config.ChainId.Int64()
infos.network = w.conf.Genesis.Config.ChainID.Int64()
// Figure out where the user wants to store the persistent data
fmt.Println()
@@ -107,7 +107,7 @@ func (w *wizard) deployNode(boot bool) {
// Ethash based miners only need an etherbase to mine against
fmt.Println()
if infos.etherbase == "" {
fmt.Printf("What address should the miner user?\n")
fmt.Printf("What address should the miner use?\n")
for {
if address := w.readAddress(); address != nil {
infos.etherbase = address.Hex()
@@ -115,7 +115,7 @@ func (w *wizard) deployNode(boot bool) {
}
}
} else {
fmt.Printf("What address should the miner user? (default = %s)\n", infos.etherbase)
fmt.Printf("What address should the miner use? (default = %s)\n", infos.etherbase)
infos.etherbase = w.readDefaultAddress(common.HexToAddress(infos.etherbase)).Hex()
}
} else if w.conf.Genesis.Config.Clique != nil {

View File

@@ -52,7 +52,7 @@ func (w *wizard) deployWallet() {
existed := err == nil
infos.genesis, _ = json.MarshalIndent(w.conf.Genesis, "", " ")
infos.network = w.conf.Genesis.Config.ChainId.Int64()
infos.network = w.conf.Genesis.Config.ChainID.Int64()
// Figure out which port to listen on
fmt.Println()

View File

@@ -140,8 +140,8 @@ func processArgs() {
}
if *asymmetricMode && len(*argPub) > 0 {
pub = crypto.ToECDSAPub(common.FromHex(*argPub))
if !isKeyValid(pub) {
var err error
if pub, err = crypto.UnmarshalPubkey(common.FromHex(*argPub)); err != nil {
utils.Fatalf("invalid public key")
}
}
@@ -321,10 +321,6 @@ func startServer() error {
return nil
}
func isKeyValid(k *ecdsa.PublicKey) bool {
return k.X != nil && k.Y != nil
}
func configureNode() {
var err error
var p2pAccept bool
@@ -340,9 +336,8 @@ func configureNode() {
if b == nil {
utils.Fatalf("Error: can not convert hexadecimal string")
}
pub = crypto.ToECDSAPub(b)
if !isKeyValid(pub) {
utils.Fatalf("Error: invalid public key")
if pub, err = crypto.UnmarshalPubkey(b); err != nil {
utils.Fatalf("Error: invalid peer public key")
}
}
}

View File

@@ -31,11 +31,17 @@ import (
var versionRegexp = regexp.MustCompile(`([0-9]+)\.([0-9]+)\.([0-9]+)`)
// Contract contains information about a compiled contract, alongside its code.
type Contract struct {
Code string `json:"code"`
Info ContractInfo `json:"info"`
}
// ContractInfo contains information about a compiled contract, including access
// to the ABI definition, user and developer docs, and metadata.
//
// Depending on the source, language version, compiler version, and compiler
// options will provide information about how the contract was compiled.
type ContractInfo struct {
Source string `json:"source"`
Language string `json:"language"`
@@ -142,8 +148,22 @@ func (s *Solidity) run(cmd *exec.Cmd, source string) (map[string]*Contract, erro
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("solc: %v\n%s", err, stderr.Bytes())
}
return ParseCombinedJSON(stdout.Bytes(), source, s.Version, s.Version, strings.Join(s.makeArgs(), " "))
}
// ParseCombinedJSON takes the direct output of a solc --combined-output run and
// parses it into a map of string contract name to Contract structs. The
// provided source, language and compiler version, and compiler options are all
// passed through into the Contract structs.
//
// The solc output is expected to contain ABI, user docs, and dev docs.
//
// Returns an error if the JSON is malformed or missing data, or if the JSON
// embedded within the JSON is malformed.
func ParseCombinedJSON(combinedJSON []byte, source string, languageVersion string, compilerVersion string, compilerOptions string) (map[string]*Contract, error) {
var output solcOutput
if err := json.Unmarshal(stdout.Bytes(), &output); err != nil {
if err := json.Unmarshal(combinedJSON, &output); err != nil {
return nil, err
}
@@ -168,9 +188,9 @@ func (s *Solidity) run(cmd *exec.Cmd, source string) (map[string]*Contract, erro
Info: ContractInfo{
Source: source,
Language: "Solidity",
LanguageVersion: s.Version,
CompilerVersion: s.Version,
CompilerOptions: strings.Join(s.makeArgs(), " "),
LanguageVersion: languageVersion,
CompilerVersion: compilerVersion,
CompilerOptions: compilerOptions,
AbiDefinition: abi,
UserDoc: userdoc,
DeveloperDoc: devdoc,

View File

@@ -94,14 +94,25 @@ func calcDatasetSize(epoch int) uint64 {
// reused between hash runs instead of requiring new ones to be created.
type hasher func(dest []byte, data []byte)
// makeHasher creates a repetitive hasher, allowing the same hash data structures
// to be reused between hash runs instead of requiring new ones to be created.
// The returned function is not thread safe!
// makeHasher creates a repetitive hasher, allowing the same hash data structures to
// be reused between hash runs instead of requiring new ones to be created. The returned
// function is not thread safe!
func makeHasher(h hash.Hash) hasher {
// sha3.state supports Read to get the sum, use it to avoid the overhead of Sum.
// Read alters the state but we reset the hash before every operation.
type readerHash interface {
hash.Hash
Read([]byte) (int, error)
}
rh, ok := h.(readerHash)
if !ok {
panic("can't find Read method on hash")
}
outputLen := rh.Size()
return func(dest []byte, data []byte) {
h.Write(data)
h.Sum(dest[:0])
h.Reset()
rh.Reset()
rh.Write(data)
rh.Read(dest[:outputLen])
}
}

View File

@@ -271,7 +271,7 @@ func (b *bridge) SleepBlocks(call otto.FunctionCall) (response otto.Value) {
}
type jsonrpcCall struct {
Id int64
ID int64
Method string
Params []interface{}
}
@@ -304,7 +304,7 @@ func (b *bridge) Send(call otto.FunctionCall) (response otto.Value) {
resps, _ := call.Otto.Object("new Array()")
for _, req := range reqs {
resp, _ := call.Otto.Object(`({"jsonrpc":"2.0"})`)
resp.Set("id", req.Id)
resp.Set("id", req.ID)
var result json.RawMessage
err = b.client.Call(&result, req.Method, req.Params...)
switch err := err.(type) {

View File

@@ -73,6 +73,8 @@ type Console struct {
printer io.Writer // Output writer to serialize any display strings to
}
// New initializes a JavaScript interpreted runtime environment and sets defaults
// with the config struct.
func New(config Config) (*Console, error) {
// Handle unset config values gracefully
if config.Prompter == nil {

View File

@@ -674,7 +674,7 @@ func (bc *BlockChain) Stop() {
for !bc.triegc.Empty() {
triedb.Dereference(bc.triegc.PopItem().(common.Hash), common.Hash{})
}
if size := triedb.Size(); size != 0 {
if size, _ := triedb.Size(); size != 0 {
log.Error("Dangling trie nodes after full cleanup")
}
}
@@ -916,33 +916,29 @@ func (bc *BlockChain) WriteBlockWithState(block *types.Block, receipts []*types.
bc.triegc.Push(root, -float32(block.NumberU64()))
if current := block.NumberU64(); current > triesInMemory {
// If we exceeded our memory allowance, flush matured singleton nodes to disk
var (
nodes, imgs = triedb.Size()
limit = common.StorageSize(bc.cacheConfig.TrieNodeLimit) * 1024 * 1024
)
if nodes > limit || imgs > 4*1024*1024 {
triedb.Cap(limit - ethdb.IdealBatchSize)
}
// Find the next state trie we need to commit
header := bc.GetHeaderByNumber(current - triesInMemory)
chosen := header.Number.Uint64()
// Only write to disk if we exceeded our memory allowance *and* also have at
// least a given number of tries gapped.
var (
size = triedb.Size()
limit = common.StorageSize(bc.cacheConfig.TrieNodeLimit) * 1024 * 1024
)
if size > limit || bc.gcproc > bc.cacheConfig.TrieTimeLimit {
// If we exceeded out time allowance, flush an entire trie to disk
if bc.gcproc > bc.cacheConfig.TrieTimeLimit {
// If we're exceeding limits but haven't reached a large enough memory gap,
// warn the user that the system is becoming unstable.
if chosen < lastWrite+triesInMemory {
switch {
case size >= 2*limit:
log.Warn("State memory usage too high, committing", "size", size, "limit", limit, "optimum", float64(chosen-lastWrite)/triesInMemory)
case bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit:
log.Info("State in memory for too long, committing", "time", bc.gcproc, "allowance", bc.cacheConfig.TrieTimeLimit, "optimum", float64(chosen-lastWrite)/triesInMemory)
}
}
// If optimum or critical limits reached, write to disk
if chosen >= lastWrite+triesInMemory || size >= 2*limit || bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit {
triedb.Commit(header.Root, true)
lastWrite = chosen
bc.gcproc = 0
if chosen < lastWrite+triesInMemory && bc.gcproc >= 2*bc.cacheConfig.TrieTimeLimit {
log.Info("State in memory for too long, committing", "time", bc.gcproc, "allowance", bc.cacheConfig.TrieTimeLimit, "optimum", float64(chosen-lastWrite)/triesInMemory)
}
// Flush an entire trie and restart the counters
triedb.Commit(header.Root, true)
lastWrite = chosen
bc.gcproc = 0
}
// Garbage collect anything below our required write retention
for !bc.triegc.Empty() {
@@ -1009,6 +1005,10 @@ func (bc *BlockChain) InsertChain(chain types.Blocks) (int, error) {
// only reason this method exists as a separate one is to make locking cleaner
// with deferred statements.
func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*types.Log, error) {
// Sanity check that we have something meaningful to import
if len(chain) == 0 {
return 0, nil, nil, nil
}
// Do a sanity check that the provided chain is actually ordered and linked
for i := 1; i < len(chain); i++ {
if chain[i].NumberU64() != chain[i-1].NumberU64()+1 || chain[i].ParentHash() != chain[i-1].Hash() {
@@ -1047,6 +1047,9 @@ func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*ty
abort, results := bc.engine.VerifyHeaders(bc, headers, seals)
defer close(abort)
// Start a parallel signature recovery (signer will fluke on fork transition, minimal perf loss)
senderCacher.recoverFromBlocks(types.MakeSigner(bc.chainConfig, chain[0].Number()), chain)
// Iterate over the blocks and insert when the verifier permits
for i, block := range chain {
// If the chain is terminating, stop processing blocks
@@ -1181,7 +1184,9 @@ func (bc *BlockChain) insertChain(chain types.Blocks) (int, []interface{}, []*ty
}
stats.processed++
stats.usedGas += usedGas
stats.report(chain, i, bc.stateCache.TrieDB().Size())
cache, _ := bc.stateCache.TrieDB().Size()
stats.report(chain, i, cache)
}
// Append a single chain head event if we've progressed the chain
if lastCanon != nil && bc.CurrentBlock().Hash() == lastCanon.Hash() {
@@ -1387,27 +1392,21 @@ func (bc *BlockChain) update() {
}
}
// BadBlockArgs represents the entries in the list returned when bad blocks are queried.
type BadBlockArgs struct {
Hash common.Hash `json:"hash"`
Header *types.Header `json:"header"`
}
// BadBlocks returns a list of the last 'bad blocks' that the client has seen on the network
func (bc *BlockChain) BadBlocks() ([]BadBlockArgs, error) {
headers := make([]BadBlockArgs, 0, bc.badBlocks.Len())
func (bc *BlockChain) BadBlocks() []*types.Block {
blocks := make([]*types.Block, 0, bc.badBlocks.Len())
for _, hash := range bc.badBlocks.Keys() {
if hdr, exist := bc.badBlocks.Peek(hash); exist {
header := hdr.(*types.Header)
headers = append(headers, BadBlockArgs{header.Hash(), header})
if blk, exist := bc.badBlocks.Peek(hash); exist {
block := blk.(*types.Block)
blocks = append(blocks, block)
}
}
return headers, nil
return blocks
}
// addBadBlock adds a bad block to the bad-block LRU cache
func (bc *BlockChain) addBadBlock(block *types.Block) {
bc.badBlocks.Add(block.Header().Hash(), block.Header())
bc.badBlocks.Add(block.Hash(), block)
}
// reportBlock logs a bad block error.
@@ -1525,6 +1524,18 @@ func (bc *BlockChain) GetBlockHashesFromHash(hash common.Hash, max uint64) []com
return bc.hc.GetBlockHashesFromHash(hash, max)
}
// GetAncestor retrieves the Nth ancestor of a given block. It assumes that either the given block or
// a close ancestor of it is canonical. maxNonCanonical points to a downwards counter limiting the
// number of blocks to be individually checked before we reach the canonical chain.
//
// Note: ancestor == 0 returns the same block, 1 returns its parent and so on.
func (bc *BlockChain) GetAncestor(hash common.Hash, number, ancestor uint64, maxNonCanonical *uint64) (common.Hash, uint64) {
bc.chainmu.Lock()
defer bc.chainmu.Unlock()
return bc.hc.GetAncestor(hash, number, ancestor, maxNonCanonical)
}
// GetHeaderByNumber retrieves a block header from the database by number,
// caching it (associated with its hash) if found.
func (bc *BlockChain) GetHeaderByNumber(number uint64) *types.Header {

View File

@@ -578,7 +578,7 @@ func TestFastVsFullChains(t *testing.T) {
Alloc: GenesisAlloc{address: {Balance: funds}},
}
genesis = gspec.MustCommit(gendb)
signer = types.NewEIP155Signer(gspec.Config.ChainId)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
)
blocks, receipts := GenerateChain(gspec.Config, genesis, ethash.NewFaker(), gendb, 1024, func(i int, block *BlockGen) {
block.SetCoinbase(common.Address{0x00})
@@ -753,7 +753,7 @@ func TestChainTxReorgs(t *testing.T) {
},
}
genesis = gspec.MustCommit(db)
signer = types.NewEIP155Signer(gspec.Config.ChainId)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
)
// Create two transactions shared between the chains:
@@ -859,7 +859,7 @@ func TestLogReorgs(t *testing.T) {
code = common.Hex2Bytes("60606040525b7f24ec1d3ff24c2f6ff210738839dbc339cd45a5294d85c79361016243157aae7b60405180905060405180910390a15b600a8060416000396000f360606040526008565b00")
gspec = &Genesis{Config: params.TestChainConfig, Alloc: GenesisAlloc{addr1: {Balance: big.NewInt(10000000000000)}}}
genesis = gspec.MustCommit(db)
signer = types.NewEIP155Signer(gspec.Config.ChainId)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
)
blockchain, _ := NewBlockChain(db, nil, gspec.Config, ethash.NewFaker(), vm.Config{})
@@ -906,7 +906,7 @@ func TestReorgSideEvent(t *testing.T) {
Alloc: GenesisAlloc{addr1: {Balance: big.NewInt(10000000000000)}},
}
genesis = gspec.MustCommit(db)
signer = types.NewEIP155Signer(gspec.Config.ChainId)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
)
blockchain, _ := NewBlockChain(db, nil, gspec.Config, ethash.NewFaker(), vm.Config{})
@@ -1032,7 +1032,7 @@ func TestEIP155Transition(t *testing.T) {
funds = big.NewInt(1000000000)
deleteAddr = common.Address{1}
gspec = &Genesis{
Config: &params.ChainConfig{ChainId: big.NewInt(1), EIP155Block: big.NewInt(2), HomesteadBlock: new(big.Int)},
Config: &params.ChainConfig{ChainID: big.NewInt(1), EIP155Block: big.NewInt(2), HomesteadBlock: new(big.Int)},
Alloc: GenesisAlloc{address: {Balance: funds}, deleteAddr: {Balance: new(big.Int)}},
}
genesis = gspec.MustCommit(db)
@@ -1063,7 +1063,7 @@ func TestEIP155Transition(t *testing.T) {
}
block.AddTx(tx)
tx, err = basicTx(types.NewEIP155Signer(gspec.Config.ChainId))
tx, err = basicTx(types.NewEIP155Signer(gspec.Config.ChainID))
if err != nil {
t.Fatal(err)
}
@@ -1075,7 +1075,7 @@ func TestEIP155Transition(t *testing.T) {
}
block.AddTx(tx)
tx, err = basicTx(types.NewEIP155Signer(gspec.Config.ChainId))
tx, err = basicTx(types.NewEIP155Signer(gspec.Config.ChainID))
if err != nil {
t.Fatal(err)
}
@@ -1103,7 +1103,7 @@ func TestEIP155Transition(t *testing.T) {
}
// generate an invalid chain id transaction
config := &params.ChainConfig{ChainId: big.NewInt(2), EIP155Block: big.NewInt(2), HomesteadBlock: new(big.Int)}
config := &params.ChainConfig{ChainID: big.NewInt(2), EIP155Block: big.NewInt(2), HomesteadBlock: new(big.Int)}
blocks, _ = GenerateChain(config, blocks[len(blocks)-1], ethash.NewFaker(), db, 4, func(i int, block *BlockGen) {
var (
tx *types.Transaction
@@ -1137,7 +1137,7 @@ func TestEIP161AccountRemoval(t *testing.T) {
theAddr = common.Address{1}
gspec = &Genesis{
Config: &params.ChainConfig{
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: new(big.Int),
EIP155Block: new(big.Int),
EIP158Block: big.NewInt(2),
@@ -1153,7 +1153,7 @@ func TestEIP161AccountRemoval(t *testing.T) {
var (
tx *types.Transaction
err error
signer = types.NewEIP155Signer(gspec.Config.ChainId)
signer = types.NewEIP155Signer(gspec.Config.ChainID)
)
switch i {
case 0:

View File

@@ -307,6 +307,43 @@ func (hc *HeaderChain) GetBlockHashesFromHash(hash common.Hash, max uint64) []co
return chain
}
// GetAncestor retrieves the Nth ancestor of a given block. It assumes that either the given block or
// a close ancestor of it is canonical. maxNonCanonical points to a downwards counter limiting the
// number of blocks to be individually checked before we reach the canonical chain.
//
// Note: ancestor == 0 returns the same block, 1 returns its parent and so on.
func (hc *HeaderChain) GetAncestor(hash common.Hash, number, ancestor uint64, maxNonCanonical *uint64) (common.Hash, uint64) {
if ancestor > number {
return common.Hash{}, 0
}
if ancestor == 1 {
// in this case it is cheaper to just read the header
if header := hc.GetHeader(hash, number); header != nil {
return header.ParentHash, number - 1
} else {
return common.Hash{}, 0
}
}
for ancestor != 0 {
if rawdb.ReadCanonicalHash(hc.chainDb, number) == hash {
number -= ancestor
return rawdb.ReadCanonicalHash(hc.chainDb, number), number
}
if *maxNonCanonical == 0 {
return common.Hash{}, 0
}
*maxNonCanonical--
ancestor--
header := hc.GetHeader(hash, number)
if header == nil {
return common.Hash{}, 0
}
hash = header.ParentHash
number--
}
return hash, number
}
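An illustrative call (hypothetical values): to find the 100th ancestor
of a block, individually checking at most 16 non-canonical headers
before falling back on the canonical-hash shortcut:

    maxNonCanonical := uint64(16)
    ancestorHash, ancestorNumber := hc.GetAncestor(hash, number, 100, &maxNonCanonical)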
// GetTd retrieves a block's total difficulty in the canonical chain from the
// database by hash and number, caching it if found.
func (hc *HeaderChain) GetTd(hash common.Hash, number uint64) *big.Int {

View File

@@ -29,7 +29,7 @@ import (
// ReadCanonicalHash retrieves the hash assigned to a canonical block number.
func ReadCanonicalHash(db DatabaseReader, number uint64) common.Hash {
data, _ := db.Get(append(append(headerPrefix, encodeBlockNumber(number)...), headerHashSuffix...))
data, _ := db.Get(headerHashKey(number))
if len(data) == 0 {
return common.Hash{}
}
@@ -38,22 +38,21 @@ func ReadCanonicalHash(db DatabaseReader, number uint64) common.Hash {
// WriteCanonicalHash stores the hash assigned to a canonical block number.
func WriteCanonicalHash(db DatabaseWriter, hash common.Hash, number uint64) {
key := append(append(headerPrefix, encodeBlockNumber(number)...), headerHashSuffix...)
if err := db.Put(key, hash.Bytes()); err != nil {
if err := db.Put(headerHashKey(number), hash.Bytes()); err != nil {
log.Crit("Failed to store number to hash mapping", "err", err)
}
}
// DeleteCanonicalHash removes the number to hash canonical mapping.
func DeleteCanonicalHash(db DatabaseDeleter, number uint64) {
if err := db.Delete(append(append(headerPrefix, encodeBlockNumber(number)...), headerHashSuffix...)); err != nil {
if err := db.Delete(headerHashKey(number)); err != nil {
log.Crit("Failed to delete number to hash mapping", "err", err)
}
}
// ReadHeaderNumber returns the header number assigned to a hash.
func ReadHeaderNumber(db DatabaseReader, hash common.Hash) *uint64 {
data, _ := db.Get(append(headerNumberPrefix, hash.Bytes()...))
data, _ := db.Get(headerNumberKey(hash))
if len(data) != 8 {
return nil
}
@@ -129,14 +128,13 @@ func WriteFastTrieProgress(db DatabaseWriter, count uint64) {
// ReadHeaderRLP retrieves a block header in its raw RLP database encoding.
func ReadHeaderRLP(db DatabaseReader, hash common.Hash, number uint64) rlp.RawValue {
data, _ := db.Get(append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...))
data, _ := db.Get(headerKey(number, hash))
return data
}
// HasHeader verifies the existence of a block header corresponding to the hash.
func HasHeader(db DatabaseReader, hash common.Hash, number uint64) bool {
key := append(append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...))
if has, err := db.Has(key); !has || err != nil {
if has, err := db.Has(headerKey(number, hash)); !has || err != nil {
return false
}
return true
@@ -161,11 +159,11 @@ func ReadHeader(db DatabaseReader, hash common.Hash, number uint64) *types.Heade
func WriteHeader(db DatabaseWriter, header *types.Header) {
// Write the hash -> number mapping
var (
hash = header.Hash().Bytes()
hash = header.Hash()
number = header.Number.Uint64()
encoded = encodeBlockNumber(number)
)
key := append(headerNumberPrefix, hash...)
key := headerNumberKey(hash)
if err := db.Put(key, encoded); err != nil {
log.Crit("Failed to store hash to number mapping", "err", err)
}
@@ -174,7 +172,7 @@ func WriteHeader(db DatabaseWriter, header *types.Header) {
if err != nil {
log.Crit("Failed to RLP encode header", "err", err)
}
key = append(append(headerPrefix, encoded...), hash...)
key = headerKey(number, hash)
if err := db.Put(key, data); err != nil {
log.Crit("Failed to store header", "err", err)
}
@@ -182,32 +180,30 @@ func WriteHeader(db DatabaseWriter, header *types.Header) {
// DeleteHeader removes all block header data associated with a hash.
func DeleteHeader(db DatabaseDeleter, hash common.Hash, number uint64) {
if err := db.Delete(append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...)); err != nil {
if err := db.Delete(headerKey(number, hash)); err != nil {
log.Crit("Failed to delete header", "err", err)
}
if err := db.Delete(append(headerNumberPrefix, hash.Bytes()...)); err != nil {
if err := db.Delete(headerNumberKey(hash)); err != nil {
log.Crit("Failed to delete hash to number mapping", "err", err)
}
}
// ReadBodyRLP retrieves the block body (transactions and uncles) in RLP encoding.
func ReadBodyRLP(db DatabaseReader, hash common.Hash, number uint64) rlp.RawValue {
data, _ := db.Get(append(append(blockBodyPrefix, encodeBlockNumber(number)...), hash.Bytes()...))
data, _ := db.Get(blockBodyKey(number, hash))
return data
}
// WriteBodyRLP stores an RLP encoded block body into the database.
func WriteBodyRLP(db DatabaseWriter, hash common.Hash, number uint64, rlp rlp.RawValue) {
key := append(append(blockBodyPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
if err := db.Put(key, rlp); err != nil {
if err := db.Put(blockBodyKey(number, hash), rlp); err != nil {
log.Crit("Failed to store block body", "err", err)
}
}
// HasBody verifies the existence of a block body corresponding to the hash.
func HasBody(db DatabaseReader, hash common.Hash, number uint64) bool {
key := append(append(blockBodyPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
if has, err := db.Has(key); !has || err != nil {
if has, err := db.Has(blockBodyKey(number, hash)); !has || err != nil {
return false
}
return true
@@ -238,14 +234,14 @@ func WriteBody(db DatabaseWriter, hash common.Hash, number uint64, body *types.B
// DeleteBody removes all block body data associated with a hash.
func DeleteBody(db DatabaseDeleter, hash common.Hash, number uint64) {
if err := db.Delete(append(append(blockBodyPrefix, encodeBlockNumber(number)...), hash.Bytes()...)); err != nil {
if err := db.Delete(blockBodyKey(number, hash)); err != nil {
log.Crit("Failed to delete block body", "err", err)
}
}
// ReadTd retrieves a block's total difficulty corresponding to the hash.
func ReadTd(db DatabaseReader, hash common.Hash, number uint64) *big.Int {
data, _ := db.Get(append(append(append(headerPrefix, encodeBlockNumber(number)...), hash[:]...), headerTDSuffix...))
data, _ := db.Get(headerTDKey(number, hash))
if len(data) == 0 {
return nil
}
@@ -263,15 +259,14 @@ func WriteTd(db DatabaseWriter, hash common.Hash, number uint64, td *big.Int) {
if err != nil {
log.Crit("Failed to RLP encode block total difficulty", "err", err)
}
key := append(append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...), headerTDSuffix...)
if err := db.Put(key, data); err != nil {
if err := db.Put(headerTDKey(number, hash), data); err != nil {
log.Crit("Failed to store block total difficulty", "err", err)
}
}
// DeleteTd removes all block total difficulty data associated with a hash.
func DeleteTd(db DatabaseDeleter, hash common.Hash, number uint64) {
if err := db.Delete(append(append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...), headerTDSuffix...)); err != nil {
if err := db.Delete(headerTDKey(number, hash)); err != nil {
log.Crit("Failed to delete block total difficulty", "err", err)
}
}
@@ -279,7 +274,7 @@ func DeleteTd(db DatabaseDeleter, hash common.Hash, number uint64) {
// ReadReceipts retrieves all the transaction receipts belonging to a block.
func ReadReceipts(db DatabaseReader, hash common.Hash, number uint64) types.Receipts {
// Retrieve the flattened receipt slice
data, _ := db.Get(append(append(blockReceiptsPrefix, encodeBlockNumber(number)...), hash[:]...))
data, _ := db.Get(blockReceiptsKey(number, hash))
if len(data) == 0 {
return nil
}
@@ -308,15 +303,14 @@ func WriteReceipts(db DatabaseWriter, hash common.Hash, number uint64, receipts
log.Crit("Failed to encode block receipts", "err", err)
}
// Store the flattened receipt slice
key := append(append(blockReceiptsPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
if err := db.Put(key, bytes); err != nil {
if err := db.Put(blockReceiptsKey(number, hash), bytes); err != nil {
log.Crit("Failed to store block receipts", "err", err)
}
}
// DeleteReceipts removes all receipt data associated with a block hash.
func DeleteReceipts(db DatabaseDeleter, hash common.Hash, number uint64) {
if err := db.Delete(append(append(blockReceiptsPrefix, encodeBlockNumber(number)...), hash.Bytes()...)); err != nil {
if err := db.Delete(blockReceiptsKey(number, hash)); err != nil {
log.Crit("Failed to delete block receipts", "err", err)
}
}

View File

@@ -17,8 +17,6 @@
package rawdb
import (
"encoding/binary"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/log"
@@ -28,7 +26,7 @@ import (
// ReadTxLookupEntry retrieves the positional metadata associated with a transaction
// hash to allow retrieving the transaction or receipt by hash.
func ReadTxLookupEntry(db DatabaseReader, hash common.Hash) (common.Hash, uint64, uint64) {
data, _ := db.Get(append(txLookupPrefix, hash.Bytes()...))
data, _ := db.Get(txLookupKey(hash))
if len(data) == 0 {
return common.Hash{}, 0, 0
}
@@ -53,7 +51,7 @@ func WriteTxLookupEntries(db DatabaseWriter, block *types.Block) {
if err != nil {
log.Crit("Failed to encode transaction lookup entry", "err", err)
}
if err := db.Put(append(txLookupPrefix, tx.Hash().Bytes()...), data); err != nil {
if err := db.Put(txLookupKey(tx.Hash()), data); err != nil {
log.Crit("Failed to store transaction lookup entry", "err", err)
}
}
@@ -61,7 +59,7 @@ func WriteTxLookupEntries(db DatabaseWriter, block *types.Block) {
// DeleteTxLookupEntry removes all transaction data associated with a hash.
func DeleteTxLookupEntry(db DatabaseDeleter, hash common.Hash) {
db.Delete(append(txLookupPrefix, hash.Bytes()...))
db.Delete(txLookupKey(hash))
}
// ReadTransaction retrieves a specific transaction from the database, along with
@@ -97,23 +95,13 @@ func ReadReceipt(db DatabaseReader, hash common.Hash) (*types.Receipt, common.Ha
// ReadBloomBits retrieves the compressed bloom bit vector belonging to the given
// section and bit index from the database.
func ReadBloomBits(db DatabaseReader, bit uint, section uint64, head common.Hash) ([]byte, error) {
key := append(append(bloomBitsPrefix, make([]byte, 10)...), head.Bytes()...)
binary.BigEndian.PutUint16(key[1:], uint16(bit))
binary.BigEndian.PutUint64(key[3:], section)
return db.Get(key)
return db.Get(bloomBitsKey(bit, section, head))
}
// WriteBloomBits stores the compressed bloom bits vector belonging to the given
// section and bit index.
func WriteBloomBits(db DatabaseWriter, bit uint, section uint64, head common.Hash, bits []byte) {
key := append(append(bloomBitsPrefix, make([]byte, 10)...), head.Bytes()...)
binary.BigEndian.PutUint16(key[1:], uint16(bit))
binary.BigEndian.PutUint64(key[3:], section)
if err := db.Put(key, bits); err != nil {
if err := db.Put(bloomBitsKey(bit, section, head), bits); err != nil {
log.Crit("Failed to store bloom bits", "err", err)
}
}

View File

@@ -45,7 +45,7 @@ func WriteDatabaseVersion(db DatabaseWriter, version int) {
// ReadChainConfig retrieves the consensus settings based on the given genesis hash.
func ReadChainConfig(db DatabaseReader, hash common.Hash) *params.ChainConfig {
data, _ := db.Get(append(configPrefix, hash[:]...))
data, _ := db.Get(configKey(hash))
if len(data) == 0 {
return nil
}
@@ -66,14 +66,14 @@ func WriteChainConfig(db DatabaseWriter, hash common.Hash, cfg *params.ChainConf
if err != nil {
log.Crit("Failed to JSON encode chain config", "err", err)
}
if err := db.Put(append(configPrefix, hash[:]...), data); err != nil {
if err := db.Put(configKey(hash), data); err != nil {
log.Crit("Failed to store chain config", "err", err)
}
}
// ReadPreimage retrieves a single preimage of the provided hash.
func ReadPreimage(db DatabaseReader, hash common.Hash) []byte {
data, _ := db.Get(append(preimagePrefix, hash.Bytes()...))
data, _ := db.Get(preimageKey(hash))
return data
}
@@ -81,7 +81,7 @@ func ReadPreimage(db DatabaseReader, hash common.Hash) []byte {
// current block number, and is used for debug messages only.
func WritePreimages(db DatabaseWriter, number uint64, preimages map[common.Hash][]byte) {
for hash, preimage := range preimages {
if err := db.Put(append(preimagePrefix, hash.Bytes()...), preimage); err != nil {
if err := db.Put(preimageKey(hash), preimage); err != nil {
log.Crit("Failed to store trie preimage", "err", err)
}
}

View File

@@ -77,3 +77,58 @@ func encodeBlockNumber(number uint64) []byte {
binary.BigEndian.PutUint64(enc, number)
return enc
}
// headerKey = headerPrefix + num (uint64 big endian) + hash
func headerKey(number uint64, hash common.Hash) []byte {
return append(append(headerPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}
// headerTDKey = headerPrefix + num (uint64 big endian) + hash + headerTDSuffix
func headerTDKey(number uint64, hash common.Hash) []byte {
return append(headerKey(number, hash), headerTDSuffix...)
}
// headerHashKey = headerPrefix + num (uint64 big endian) + headerHashSuffix
func headerHashKey(number uint64) []byte {
return append(append(headerPrefix, encodeBlockNumber(number)...), headerHashSuffix...)
}
// headerNumberKey = headerNumberPrefix + hash
func headerNumberKey(hash common.Hash) []byte {
return append(headerNumberPrefix, hash.Bytes()...)
}
// blockBodyKey = blockBodyPrefix + num (uint64 big endian) + hash
func blockBodyKey(number uint64, hash common.Hash) []byte {
return append(append(blockBodyPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}
// blockReceiptsKey = blockReceiptsPrefix + num (uint64 big endian) + hash
func blockReceiptsKey(number uint64, hash common.Hash) []byte {
return append(append(blockReceiptsPrefix, encodeBlockNumber(number)...), hash.Bytes()...)
}
// txLookupKey = txLookupPrefix + hash
func txLookupKey(hash common.Hash) []byte {
return append(txLookupPrefix, hash.Bytes()...)
}
// bloomBitsKey = bloomBitsPrefix + bit (uint16 big endian) + section (uint64 big endian) + hash
func bloomBitsKey(bit uint, section uint64, hash common.Hash) []byte {
key := append(append(bloomBitsPrefix, make([]byte, 10)...), hash.Bytes()...)
binary.BigEndian.PutUint16(key[1:], uint16(bit))
binary.BigEndian.PutUint64(key[3:], section)
return key
}
// preimageKey = preimagePrefix + hash
func preimageKey(hash common.Hash) []byte {
return append(preimagePrefix, hash.Bytes()...)
}
// configKey = configPrefix + hash
func configKey(hash common.Hash) []byte {
return append(configPrefix, hash.Bytes()...)
}
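As a concrete illustration of the resulting layouts (assuming geth's
single-byte 'h' headerPrefix and 'n' headerHashSuffix; number 4096
encodes as 8 big-endian bytes):

    headerHashKey(4096)   = 'h' + 0x0000000000001000 + 'n'  // 10 bytes
    headerKey(4096, hash) = 'h' + 0x0000000000001000 + hash // 41 bytes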

View File

@@ -219,7 +219,7 @@ func (self *stateObject) updateRoot(db Database) {
self.data.Root = self.trie.Hash()
}
// CommitTrie the storage trie of the object to dwb.
// CommitTrie the storage trie of the object to db.
// This updates the trie root.
func (self *stateObject) CommitTrie(db Database) error {
self.updateTrie(db)

View File

@@ -85,7 +85,7 @@ func (p *StateProcessor) Process(block *types.Block, statedb *state.StateDB, cfg
// and uses the input parameters for its environment. It returns the receipt
// for the transaction, gas used and an error if the transaction failed,
// indicating the block was invalid.
func ApplyTransaction(config *params.ChainConfig, bc *BlockChain, author *common.Address, gp *GasPool, statedb *state.StateDB, header *types.Header, tx *types.Transaction, usedGas *uint64, cfg vm.Config) (*types.Receipt, uint64, error) {
func ApplyTransaction(config *params.ChainConfig, bc ChainContext, author *common.Address, gp *GasPool, statedb *state.StateDB, header *types.Header, tx *types.Transaction, usedGas *uint64, cfg vm.Config) (*types.Receipt, uint64, error) {
msg, err := tx.AsMessage(types.MakeSigner(config, header.Number))
if err != nil {
return nil, 0, err
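With bc typed as the core.ChainContext interface rather than the
concrete *BlockChain, any value supplying header lookup and a consensus
engine can drive ApplyTransaction. A hedged sketch of a minimal test
stub (hypothetical helper, not part of the PR):

    import (
        "github.com/ethereum/go-ethereum/common"
        "github.com/ethereum/go-ethereum/consensus"
        "github.com/ethereum/go-ethereum/core/types"
    )

    // chainStub satisfies core.ChainContext for tests.
    type chainStub struct {
        engine  consensus.Engine
        headers map[common.Hash]*types.Header
    }

    func (c chainStub) Engine() consensus.Engine { return c.engine }

    func (c chainStub) GetHeader(h common.Hash, n uint64) *types.Header {
        return c.headers[h]
    }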

core/tx_cacher.go (new file, 105 lines)
View File

@@ -0,0 +1,105 @@
// Copyright 2018 The go-ethereum Authors
// This file is part of the go-ethereum library.
//
// The go-ethereum library is free software: you can redistribute it and/or modify
// it under the terms of the GNU Lesser General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// The go-ethereum library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU Lesser General Public License for more details.
//
// You should have received a copy of the GNU Lesser General Public License
// along with the go-ethereum library. If not, see <http://www.gnu.org/licenses/>.
package core
import (
"runtime"
"github.com/ethereum/go-ethereum/core/types"
)
// senderCacher is a concurrent transaction sender recoverer and cacher.
var senderCacher = newTxSenderCacher(runtime.NumCPU())
// txSenderCacherRequest is a request for recovering transaction senders with a
// specific signature scheme and caching it into the transactions themselves.
//
// The inc field defines the number of transactions to skip after each recovery,
// which is used to feed the same underlying input array to different threads but
// ensure they process the early transactions fast.
type txSenderCacherRequest struct {
signer types.Signer
txs []*types.Transaction
inc int
}
// txSenderCacher is a helper structure to concurrently ecrecover transaction
// senders from digital signatures on background threads.
type txSenderCacher struct {
threads int
tasks chan *txSenderCacherRequest
}
// newTxSenderCacher creates a new transaction sender background cacher and starts
// as many processing goroutines as allowed by GOMAXPROCS on construction.
func newTxSenderCacher(threads int) *txSenderCacher {
cacher := &txSenderCacher{
tasks: make(chan *txSenderCacherRequest, threads),
threads: threads,
}
for i := 0; i < threads; i++ {
go cacher.cache()
}
return cacher
}
// cache is an infinite loop, caching transaction senders from various forms of
// data structures.
func (cacher *txSenderCacher) cache() {
for task := range cacher.tasks {
for i := 0; i < len(task.txs); i += task.inc {
types.Sender(task.signer, task.txs[i])
}
}
}
// recover recovers the senders from a batch of transactions and caches them
// back into the same data structures. There is no validation being done, nor
// any reaction to invalid signatures. That is up to calling code later.
func (cacher *txSenderCacher) recover(signer types.Signer, txs []*types.Transaction) {
// If there's nothing to recover, abort
if len(txs) == 0 {
return
}
// Ensure we have meaningful task sizes and schedule the recoveries
tasks := cacher.threads
if len(txs) < tasks*4 {
tasks = (len(txs) + 3) / 4
}
for i := 0; i < tasks; i++ {
cacher.tasks <- &txSenderCacherRequest{
signer: signer,
txs: txs[i:],
inc: tasks,
}
}
}
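Worked example of the scheduling above: with threads = 2 and
len(txs) = 8, tasks stays 2, and the slices interleave as

    task 0: txs[0:] with inc 2 -> indices 0, 2, 4, 6
    task 1: txs[1:] with inc 2 -> indices 1, 3, 5, 7

so both goroutines reach the earliest (soonest-needed) transactions
quickly instead of one goroutine owning the entire front half.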
// recoverFromBlocks recovers the senders from a batch of blocks and caches them
// back into the same data structures. There is no validation being done, nor
// any reaction to invalid signatures. That is up to calling code later.
func (cacher *txSenderCacher) recoverFromBlocks(signer types.Signer, blocks []*types.Block) {
count := 0
for _, block := range blocks {
count += len(block.Transactions())
}
txs := make([]*types.Transaction, 0, count)
for _, block := range blocks {
txs = append(txs, block.Transactions()...)
}
cacher.recover(signer, txs)
}

View File

@@ -222,7 +222,7 @@ func NewTxPool(config TxPoolConfig, chainconfig *params.ChainConfig, chain block
config: config,
chainconfig: chainconfig,
chain: chain,
signer: types.NewEIP155Signer(chainconfig.ChainId),
signer: types.NewEIP155Signer(chainconfig.ChainID),
pending: make(map[common.Address]*txList),
queue: make(map[common.Address]*txList),
beats: make(map[common.Address]time.Time),
@@ -411,6 +411,7 @@ func (pool *TxPool) reset(oldHead, newHead *types.Header) {
// Inject any transactions discarded due to reorgs
log.Debug("Reinjecting stale transactions", "count", len(reinject))
senderCacher.recover(pool.signer, reinject)
pool.addTxsLocked(reinject, false)
// validate the pool of pending transactions, this will remove

View File

@@ -43,7 +43,7 @@ func MakeSigner(config *params.ChainConfig, blockNumber *big.Int) Signer {
var signer Signer
switch {
case config.IsEIP155(blockNumber):
signer = NewEIP155Signer(config.ChainId)
signer = NewEIP155Signer(config.ChainID)
case config.IsHomestead(blockNumber):
signer = HomesteadSigner{}
default:

View File

@@ -52,7 +52,7 @@ type Config struct {
func setDefaults(cfg *Config) {
if cfg.ChainConfig == nil {
cfg.ChainConfig = &params.ChainConfig{
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: new(big.Int),
DAOForkBlock: new(big.Int),
DAOForkSupport: false,

View File

@@ -39,6 +39,8 @@ var (
secp256k1halfN = new(big.Int).Div(secp256k1N, big.NewInt(2))
)
var errInvalidPubkey = errors.New("invalid secp256k1 public key")
// Keccak256 calculates and returns the Keccak256 hash of the input data.
func Keccak256(data ...[]byte) []byte {
d := sha3.NewKeccak256()
@@ -122,12 +124,13 @@ func FromECDSA(priv *ecdsa.PrivateKey) []byte {
return math.PaddedBigBytes(priv.D, priv.Params().BitSize/8)
}
func ToECDSAPub(pub []byte) *ecdsa.PublicKey {
if len(pub) == 0 {
return nil
}
// UnmarshalPubkey converts bytes to a secp256k1 public key.
func UnmarshalPubkey(pub []byte) (*ecdsa.PublicKey, error) {
x, y := elliptic.Unmarshal(S256(), pub)
return &ecdsa.PublicKey{Curve: S256(), X: x, Y: y}
if x == nil {
return nil, errInvalidPubkey
}
return &ecdsa.PublicKey{Curve: S256(), X: x, Y: y}, nil
}
func FromECDSAPub(pub *ecdsa.PublicKey) []byte {

View File

@@ -23,9 +23,11 @@ import (
"io/ioutil"
"math/big"
"os"
"reflect"
"testing"
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/common/hexutil"
)
var testAddrHex = "970e8128ab834e8eac17ab8e3812f010678cf791"
@@ -56,6 +58,33 @@ func BenchmarkSha3(b *testing.B) {
}
}
func TestUnmarshalPubkey(t *testing.T) {
key, err := UnmarshalPubkey(nil)
if err != errInvalidPubkey || key != nil {
t.Fatalf("expected error, got %v, %v", err, key)
}
key, err = UnmarshalPubkey([]byte{1, 2, 3})
if err != errInvalidPubkey || key != nil {
t.Fatalf("expected error, got %v, %v", err, key)
}
var (
enc, _ = hex.DecodeString("04760c4460e5336ac9bbd87952a3c7ec4363fc0a97bd31c86430806e287b437fd1b01abc6e1db640cf3106b520344af1d58b00b57823db3e1407cbc433e1b6d04d")
dec = &ecdsa.PublicKey{
Curve: S256(),
X: hexutil.MustDecodeBig("0x760c4460e5336ac9bbd87952a3c7ec4363fc0a97bd31c86430806e287b437fd1"),
Y: hexutil.MustDecodeBig("0xb01abc6e1db640cf3106b520344af1d58b00b57823db3e1407cbc433e1b6d04d"),
}
)
key, err = UnmarshalPubkey(enc)
if err != nil {
t.Fatalf("expected no error, got %v", err)
}
if !reflect.DeepEqual(key, dec) {
t.Fatal("wrong result")
}
}
func TestSign(t *testing.T) {
key, _ := HexToECDSA(testPrivHex)
addr := common.HexToAddress(testAddrHex)
@@ -69,7 +98,7 @@ func TestSign(t *testing.T) {
if err != nil {
t.Errorf("ECRecover error: %s", err)
}
pubKey := ToECDSAPub(recoveredPub)
pubKey, _ := UnmarshalPubkey(recoveredPub)
recoveredAddr := PubkeyToAddress(*pubKey)
if addr != recoveredAddr {
t.Errorf("Address mismatch: want: %x have: %x", addr, recoveredAddr)

View File

@@ -32,6 +32,7 @@ import (
"github.com/ethereum/go-ethereum/core/rawdb"
"github.com/ethereum/go-ethereum/core/state"
"github.com/ethereum/go-ethereum/core/types"
"github.com/ethereum/go-ethereum/internal/ethapi"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/miner"
"github.com/ethereum/go-ethereum/params"
@@ -351,10 +352,34 @@ func (api *PrivateDebugAPI) Preimage(ctx context.Context, hash common.Hash) (hex
return nil, errors.New("unknown preimage")
}
// BadBlockArgs represents the entries in the list returned when bad blocks are queried.
type BadBlockArgs struct {
Hash common.Hash `json:"hash"`
Block map[string]interface{} `json:"block"`
RLP string `json:"rlp"`
}
// GetBadBlocks returns a list of the last 'bad blocks' that the client has seen on the
// network, including each block's hash, RLP encoding and JSON representation
func (api *PrivateDebugAPI) GetBadBlocks(ctx context.Context) ([]core.BadBlockArgs, error) {
return api.eth.BlockChain().BadBlocks()
func (api *PrivateDebugAPI) GetBadBlocks(ctx context.Context) ([]*BadBlockArgs, error) {
blocks := api.eth.BlockChain().BadBlocks()
results := make([]*BadBlockArgs, len(blocks))
var err error
for i, block := range blocks {
results[i] = &BadBlockArgs{
Hash: block.Hash(),
}
if rlpBytes, err := rlp.EncodeToBytes(block); err != nil {
results[i].RLP = err.Error() // Hacky, but hey, it works
} else {
results[i].RLP = fmt.Sprintf("0x%x", rlpBytes)
}
if results[i].Block, err = ethapi.RPCMarshalBlock(block, true, true); err != nil {
results[i].Block = map[string]interface{}{"error": err.Error()}
}
}
return results, nil
}
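
Below is a minimal sketch of calling the reworked endpoint from Go, assuming a local node that exposes the debug namespace; the IPC path and the client-side struct are illustrative, with JSON tags matching BadBlockArgs above.

package main

import (
	"fmt"
	"log"

	"github.com/ethereum/go-ethereum/rpc"
)

// badBlock mirrors the BadBlockArgs JSON shape for client-side decoding.
type badBlock struct {
	Hash  string                 `json:"hash"`
	Block map[string]interface{} `json:"block"`
	RLP   string                 `json:"rlp"`
}

func main() {
	client, err := rpc.Dial("/tmp/geth.ipc") // assumed IPC endpoint
	if err != nil {
		log.Fatal(err)
	}
	var blocks []badBlock
	if err := client.Call(&blocks, "debug_getBadBlocks"); err != nil {
		log.Fatal(err)
	}
	for _, b := range blocks {
		// hex string minus the 0x prefix, two characters per byte
		fmt.Printf("bad block %s, %d bytes of RLP\n", b.Hash, (len(b.RLP)-2)/2)
	}
}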
// StorageRangeResult is the result of a debug_storageRangeAt API call.

View File

@@ -251,7 +251,8 @@ func (api *PrivateDebugAPI) traceChain(ctx context.Context, start, end *types.Bl
// Print progress logs if enough time has elapsed
if time.Since(logged) > 8*time.Second {
if number > origin {
log.Info("Tracing chain segment", "start", origin, "end", end.NumberU64(), "current", number, "transactions", traced, "elapsed", time.Since(begin), "memory", database.TrieDB().Size())
nodes, imgs := database.TrieDB().Size()
log.Info("Tracing chain segment", "start", origin, "end", end.NumberU64(), "current", number, "transactions", traced, "elapsed", time.Since(begin), "memory", nodes+imgs)
} else {
log.Info("Preparing state for chain trace", "block", number, "start", origin, "elapsed", time.Since(begin))
}
@@ -298,6 +299,8 @@ func (api *PrivateDebugAPI) traceChain(ctx context.Context, start, end *types.Bl
// Dereference all past tries we ourselves are done working with
database.TrieDB().Dereference(proot, common.Hash{})
proot = root
// TODO(karalabe): Do we need the preimages? Won't they accumulate too much?
}
}()
@@ -526,7 +529,8 @@ func (api *PrivateDebugAPI) computeStateDB(block *types.Block, reexec uint64) (*
database.TrieDB().Dereference(proot, common.Hash{})
proot = root
}
log.Info("Historical state regenerated", "block", block.NumberU64(), "elapsed", time.Since(start), "size", database.TrieDB().Size())
nodes, imgs := database.TrieDB().Size()
log.Info("Historical state regenerated", "block", block.NumberU64(), "elapsed", time.Since(start), "nodes", nodes, "preimages", imgs)
return statedb, nil
}

View File

@@ -47,7 +47,7 @@ var DefaultConfig = Config{
LightPeers: 100,
DatabaseCache: 768,
TrieCache: 256,
TrieTimeout: 5 * time.Minute,
TrieTimeout: 60 * time.Minute,
GasPrice: big.NewInt(18 * params.Shannon),
TxPool: core.DefaultTxPoolConfig,

View File

@@ -340,6 +340,8 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
return errResp(ErrDecode, "%v: %v", msg, err)
}
hashMode := query.Origin.Hash != (common.Hash{})
first := true
maxNonCanonical := uint64(100)
// Gather headers until the fetch or network limits are reached
var (
@@ -351,31 +353,36 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
// Retrieve the next header satisfying the query
var origin *types.Header
if hashMode {
origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
if first {
first = false
origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
if origin != nil {
query.Origin.Number = origin.Number.Uint64()
}
} else {
origin = pm.blockchain.GetHeader(query.Origin.Hash, query.Origin.Number)
}
} else {
origin = pm.blockchain.GetHeaderByNumber(query.Origin.Number)
}
if origin == nil {
break
}
number := origin.Number.Uint64()
headers = append(headers, origin)
bytes += estHeaderRlpSize
// Advance to the next header of the query
switch {
case query.Origin.Hash != (common.Hash{}) && query.Reverse:
case hashMode && query.Reverse:
// Hash based traversal towards the genesis block
for i := 0; i < int(query.Skip)+1; i++ {
if header := pm.blockchain.GetHeader(query.Origin.Hash, number); header != nil {
query.Origin.Hash = header.ParentHash
number--
} else {
unknown = true
break
}
ancestor := query.Skip + 1
if ancestor == 0 {
unknown = true
} else {
query.Origin.Hash, query.Origin.Number = pm.blockchain.GetAncestor(query.Origin.Hash, query.Origin.Number, ancestor, &maxNonCanonical)
unknown = (query.Origin.Hash == common.Hash{})
}
case query.Origin.Hash != (common.Hash{}) && !query.Reverse:
case hashMode && !query.Reverse:
// Hash based traversal towards the leaf block
var (
current = origin.Number.Uint64()
@@ -387,8 +394,10 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
unknown = true
} else {
if header := pm.blockchain.GetHeaderByNumber(next); header != nil {
if pm.blockchain.GetBlockHashesFromHash(header.Hash(), query.Skip+1)[query.Skip] == query.Origin.Hash {
query.Origin.Hash = header.Hash()
nextHash := header.Hash()
expOldHash, _ := pm.blockchain.GetAncestor(nextHash, next, query.Skip+1, &maxNonCanonical)
if expOldHash == query.Origin.Hash {
query.Origin.Hash, query.Origin.Number = nextHash, next
} else {
unknown = true
}

View File

@@ -78,7 +78,7 @@
// the final result of the tracing.
result: function(ctx) {
// Save the outer calldata also
if (ctx.input.length > 4) {
if (ctx.input.length >= 4) {
this.store(slice(ctx.input, 0, 4), ctx.input.length-4)
}
return this.ids;

File diff suppressed because one or more lines are too long

View File

@@ -141,7 +141,9 @@ func (ec *Client) getBlock(ctx context.Context, method string, args ...interface
// Fill the sender cache of transactions in the block.
txs := make([]*types.Transaction, len(body.Transactions))
for i, tx := range body.Transactions {
setSenderFromServer(tx.tx, tx.From, body.Hash)
if tx.From != nil {
setSenderFromServer(tx.tx, *tx.From, body.Hash)
}
txs[i] = tx.tx
}
return types.NewBlockWithHeader(head).WithBody(txs, uncles), nil
@@ -174,9 +176,9 @@ type rpcTransaction struct {
}
type txExtraInfo struct {
BlockNumber *string
BlockHash common.Hash
From common.Address
BlockNumber *string `json:"blockNumber,omitempty"`
BlockHash *common.Hash `json:"blockHash,omitempty"`
From *common.Address `json:"from,omitempty"`
}
func (tx *rpcTransaction) UnmarshalJSON(msg []byte) error {
@@ -197,7 +199,9 @@ func (ec *Client) TransactionByHash(ctx context.Context, hash common.Hash) (tx *
} else if _, r, _ := json.tx.RawSignatureValues(); r == nil {
return nil, false, fmt.Errorf("server returned transaction without signature")
}
setSenderFromServer(json.tx, json.From, json.BlockHash)
if json.From != nil && json.BlockHash != nil {
setSenderFromServer(json.tx, *json.From, *json.BlockHash)
}
return json.tx, json.BlockNumber == nil, nil
}
@@ -244,7 +248,9 @@ func (ec *Client) TransactionInBlock(ctx context.Context, blockHash common.Hash,
return nil, fmt.Errorf("server returned transaction without signature")
}
}
setSenderFromServer(json.tx, json.From, json.BlockHash)
if json.From != nil && json.BlockHash != nil {
setSenderFromServer(json.tx, *json.From, *json.BlockHash)
}
return json.tx, err
}
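
The switch to pointer-typed fields is what makes the nil checks above possible: encoding/json leaves a pointer nil when a key is absent or null, instead of silently yielding a zero value. A small self-contained sketch of the pattern, with hypothetical payloads:

package main

import (
	"encoding/json"
	"fmt"
)

// extra mimics txExtraInfo: pointers distinguish "missing" from "zero".
type extra struct {
	BlockHash *string `json:"blockHash,omitempty"`
	From      *string `json:"from,omitempty"`
}

func main() {
	payloads := []string{
		`{"blockHash":null,"from":null}`,       // Parity-style pending tx
		`{"blockHash":"0xabc","from":"0xdef"}`, // mined tx
	}
	for _, p := range payloads {
		var e extra
		if err := json.Unmarshal([]byte(p), &e); err != nil {
			fmt.Println("decode error:", err)
			continue
		}
		if e.From == nil || e.BlockHash == nil {
			fmt.Println("pending: skipping sender cache")
			continue
		}
		fmt.Printf("mined in %s from %s\n", *e.BlockHash, *e.From)
	}
}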

View File

@@ -141,6 +141,7 @@ func (db *LDBDatabase) Close() {
if err := <-errc; err != nil {
db.log.Error("Metrics collection failed", "err", err)
}
db.quitChan = nil
}
err := db.db.Close()
if err == nil {
@@ -189,7 +190,7 @@ func (db *LDBDatabase) Meter(prefix string) {
// 3 | 570 | 1113.18458 | 0.00000 | 0.00000 | 0.00000
//
// This is how the write delay looks (currently):
// DelayN:5 Delay:406.604657ms
// DelayN:5 Delay:406.604657ms Paused: false
//
// This is how the iostats look (currently):
// Read(MB):3895.04860 Write(MB):3654.64712
@@ -210,13 +211,19 @@ func (db *LDBDatabase) meter(refresh time.Duration) {
lastWritePaused time.Time
)
var (
errc chan error
merr error
)
// Iterate ad infinitum and collect the stats
for i := 1; ; i++ {
for i := 1; errc == nil && merr == nil; i++ {
// Retrieve the database stats
stats, err := db.db.GetProperty("leveldb.stats")
if err != nil {
db.log.Error("Failed to read database stats", "err", err)
return
merr = err
continue
}
// Find the compaction table, skip the header
lines := strings.Split(stats, "\n")
@@ -225,7 +232,8 @@ func (db *LDBDatabase) meter(refresh time.Duration) {
}
if len(lines) <= 3 {
db.log.Error("Compaction table not found")
return
merr = errors.New("compaction table not found")
continue
}
lines = lines[3:]
@@ -242,7 +250,8 @@ func (db *LDBDatabase) meter(refresh time.Duration) {
value, err := strconv.ParseFloat(strings.TrimSpace(counter), 64)
if err != nil {
db.log.Error("Compaction entry parsing failed", "err", err)
return
merr = err
continue
}
compactions[i%2][idx] += value
}
@@ -262,7 +271,8 @@ func (db *LDBDatabase) meter(refresh time.Duration) {
writedelay, err := db.db.GetProperty("leveldb.writedelay")
if err != nil {
db.log.Error("Failed to read database write delay statistic", "err", err)
return
merr = err
continue
}
var (
delayN int64
@@ -272,12 +282,14 @@ func (db *LDBDatabase) meter(refresh time.Duration) {
)
if n, err := fmt.Sscanf(writedelay, "DelayN:%d Delay:%s Paused:%t", &delayN, &delayDuration, &paused); n != 3 || err != nil {
db.log.Error("Write delay statistic not found")
return
merr = err
continue
}
duration, err = time.ParseDuration(delayDuration)
if err != nil {
db.log.Error("Failed to parse delay duration", "err", err)
return
merr = err
continue
}
if db.writeDelayNMeter != nil {
db.writeDelayNMeter.Mark(delayN - delaystats[0])
@@ -317,53 +329,47 @@ func (db *LDBDatabase) meter(refresh time.Duration) {
ioStats, err := db.db.GetProperty("leveldb.iostats")
if err != nil {
db.log.Error("Failed to read database iostats", "err", err)
return
merr = err
continue
}
var nRead, nWrite float64
parts := strings.Split(ioStats, " ")
if len(parts) < 2 {
db.log.Error("Bad syntax of ioStats", "ioStats", ioStats)
return
merr = fmt.Errorf("bad syntax of ioStats %s", ioStats)
continue
}
r := strings.Split(parts[0], ":")
if len(r) < 2 {
if n, err := fmt.Sscanf(parts[0], "Read(MB):%f", &nRead); n != 1 || err != nil {
db.log.Error("Bad syntax of read entry", "entry", parts[0])
return
merr = err
continue
}
read, err := strconv.ParseFloat(r[1], 64)
if err != nil {
db.log.Error("Read entry parsing failed", "err", err)
return
}
w := strings.Split(parts[1], ":")
if len(w) < 2 {
if n, err := fmt.Sscanf(parts[1], "Write(MB):%f", &nWrite); n != 1 || err != nil {
db.log.Error("Bad syntax of write entry", "entry", parts[1])
return
}
write, err := strconv.ParseFloat(w[1], 64)
if err != nil {
db.log.Error("Write entry parsing failed", "err", err)
return
merr = err
continue
}
if db.diskReadMeter != nil {
db.diskReadMeter.Mark(int64((read - iostats[0]) * 1024 * 1024))
db.diskReadMeter.Mark(int64((nRead - iostats[0]) * 1024 * 1024))
}
if db.diskWriteMeter != nil {
db.diskWriteMeter.Mark(int64((write - iostats[1]) * 1024 * 1024))
db.diskWriteMeter.Mark(int64((nWrite - iostats[1]) * 1024 * 1024))
}
iostats[0] = read
iostats[1] = write
iostats[0], iostats[1] = nRead, nWrite
// Sleep a bit, then repeat the stats collection
select {
case errc := <-db.quitChan:
case errc = <-db.quitChan:
// Quit requesting, stop hammering the database
errc <- nil
return
case <-time.After(refresh):
// Timeout, gather a new set of stats
}
}
if errc == nil {
errc = <-db.quitChan
}
errc <- merr
}
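
The restructuring above turns each failure into a recorded merr instead of an early return, so the goroutine always stays around to answer the quit channel and Close never blocks. A stripped-down sketch of that shutdown contract, with hypothetical names and a simulated failure:

package main

import (
	"errors"
	"fmt"
	"time"
)

// meter keeps collecting until an error occurs or a quit request arrives;
// either way it answers exactly one quit request before exiting.
func meter(quit chan chan error) {
	var (
		errc chan error
		merr error
	)
	for i := 1; errc == nil && merr == nil; i++ {
		if i == 3 {
			merr = errors.New("stats unavailable") // simulated failure
			continue
		}
		select {
		case errc = <-quit: // quit requested: stop collecting
		case <-time.After(10 * time.Millisecond): // next collection round
		}
	}
	if errc == nil {
		errc = <-quit // we failed first: still wait for the quit request
	}
	errc <- merr // report the deferred error (or nil) to the closer
}

func main() {
	quit := make(chan chan error)
	go meter(quit)

	time.Sleep(50 * time.Millisecond) // let the simulated failure happen first
	errc := make(chan error)
	quit <- errc
	fmt.Println("meter stopped:", <-errc)
}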
func (db *LDBDatabase) NewBatch() Batch {

View File

@@ -362,7 +362,7 @@ type nodeInfo struct {
// authMsg is the authentication info needed to log in to a monitoring server.
type authMsg struct {
Id string `json:"id"`
ID string `json:"id"`
Info nodeInfo `json:"info"`
Secret string `json:"secret"`
}
@@ -381,7 +381,7 @@ func (s *Service) login(conn *websocket.Conn) error {
protocol = fmt.Sprintf("les/%d", les.ClientProtocolVersions[0])
}
auth := &authMsg{
Id: s.node,
ID: s.node,
Info: nodeInfo{
Name: s.node,
Node: infos.Name,

View File

@@ -62,8 +62,9 @@ func NewPublicEthereumAPI(b Backend) *PublicEthereumAPI {
}
// GasPrice returns a suggestion for a gas price.
func (s *PublicEthereumAPI) GasPrice(ctx context.Context) (*big.Int, error) {
return s.b.SuggestPrice(ctx)
func (s *PublicEthereumAPI) GasPrice(ctx context.Context) (*hexutil.Big, error) {
price, err := s.b.SuggestPrice(ctx)
return (*hexutil.Big)(price), err
}
// ProtocolVersion returns the current Ethereum protocol version this node supports
@@ -354,7 +355,7 @@ func (s *PrivateAccountAPI) signTransaction(ctx context.Context, args SendTxArgs
var chainID *big.Int
if config := s.b.ChainConfig(); config.IsEIP155(s.b.CurrentBlock().Number()) {
chainID = config.ChainId
chainID = config.ChainID
}
return wallet.SignTxWithPassphrase(account, passwd, tx, chainID)
}
@@ -460,13 +461,11 @@ func (s *PrivateAccountAPI) EcRecover(ctx context.Context, data, sig hexutil.Byt
}
sig[64] -= 27 // Transform yellow paper V from 27/28 to 0/1
rpk, err := crypto.Ecrecover(signHash(data), sig)
rpk, err := crypto.SigToPub(signHash(data), sig)
if err != nil {
return common.Address{}, err
}
pubKey := crypto.ToECDSAPub(rpk)
recoveredAddr := crypto.PubkeyToAddress(*pubKey)
return recoveredAddr, nil
return crypto.PubkeyToAddress(*rpk), nil
}
// SignAndSendTransaction was renamed to SendTransaction. This method is deprecated
@@ -487,21 +486,20 @@ func NewPublicBlockChainAPI(b Backend) *PublicBlockChainAPI {
}
// BlockNumber returns the block number of the chain head.
func (s *PublicBlockChainAPI) BlockNumber() *big.Int {
func (s *PublicBlockChainAPI) BlockNumber() hexutil.Uint64 {
header, _ := s.b.HeaderByNumber(context.Background(), rpc.LatestBlockNumber) // latest header should always be available
return header.Number
return hexutil.Uint64(header.Number.Uint64())
}
// GetBalance returns the amount of wei for the given address in the state of the
// given block number. The rpc.LatestBlockNumber and rpc.PendingBlockNumber meta
// block numbers are also allowed.
func (s *PublicBlockChainAPI) GetBalance(ctx context.Context, address common.Address, blockNr rpc.BlockNumber) (*big.Int, error) {
func (s *PublicBlockChainAPI) GetBalance(ctx context.Context, address common.Address, blockNr rpc.BlockNumber) (*hexutil.Big, error) {
state, _, err := s.b.StateAndHeaderByNumber(ctx, blockNr)
if state == nil || err != nil {
return nil, err
}
b := state.GetBalance(address)
return b, state.Error()
return (*hexutil.Big)(state.GetBalance(address)), state.Error()
}
// GetBlockByNumber returns the requested block. When blockNr is -1 the chain head is returned. When fullTx is true all
@@ -792,10 +790,10 @@ func FormatLogs(logs []vm.StructLog) []StructLogRes {
return formatted
}
// rpcOutputBlock converts the given block to the RPC output which depends on fullTx. If inclTx is true transactions are
// RPCMarshalBlock converts the given block to the RPC output which depends on fullTx. If inclTx is true transactions are
// returned. When fullTx is true the returned block contains full transaction details, otherwise it will only contain
// transaction hashes.
func (s *PublicBlockChainAPI) rpcOutputBlock(b *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
func RPCMarshalBlock(b *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
head := b.Header() // copies the header once
fields := map[string]interface{}{
"number": (*hexutil.Big)(head.Number),
@@ -808,7 +806,6 @@ func (s *PublicBlockChainAPI) rpcOutputBlock(b *types.Block, inclTx bool, fullTx
"stateRoot": head.Root,
"miner": head.Coinbase,
"difficulty": (*hexutil.Big)(head.Difficulty),
"totalDifficulty": (*hexutil.Big)(s.b.GetTd(b.Hash())),
"extraData": hexutil.Bytes(head.Extra),
"size": hexutil.Uint64(b.Size()),
"gasLimit": hexutil.Uint64(head.GasLimit),
@@ -822,17 +819,15 @@ func (s *PublicBlockChainAPI) rpcOutputBlock(b *types.Block, inclTx bool, fullTx
formatTx := func(tx *types.Transaction) (interface{}, error) {
return tx.Hash(), nil
}
if fullTx {
formatTx = func(tx *types.Transaction) (interface{}, error) {
return newRPCTransactionFromBlockHash(b, tx.Hash()), nil
}
}
txs := b.Transactions()
transactions := make([]interface{}, len(txs))
var err error
for i, tx := range b.Transactions() {
for i, tx := range txs {
if transactions[i], err = formatTx(tx); err != nil {
return nil, err
}
@@ -850,6 +845,17 @@ func (s *PublicBlockChainAPI) rpcOutputBlock(b *types.Block, inclTx bool, fullTx
return fields, nil
}
// rpcOutputBlock uses the generalized output filler, then adds the total difficulty field, which requires
// a `PublicBlockchainAPI`.
func (s *PublicBlockChainAPI) rpcOutputBlock(b *types.Block, inclTx bool, fullTx bool) (map[string]interface{}, error) {
fields, err := RPCMarshalBlock(b, inclTx, fullTx)
if err != nil {
return nil, err
}
fields["totalDifficulty"] = (*hexutil.Big)(s.b.GetTd(b.Hash()))
return fields, err
}
// RPCTransaction represents a transaction that will serialize to the RPC representation of a transaction
type RPCTransaction struct {
BlockHash common.Hash `json:"blockHash"`
@@ -1096,7 +1102,7 @@ func (s *PublicTransactionPoolAPI) sign(addr common.Address, tx *types.Transacti
// Request the wallet to sign the transaction
var chainID *big.Int
if config := s.b.ChainConfig(); config.IsEIP155(s.b.CurrentBlock().Number()) {
chainID = config.ChainId
chainID = config.ChainID
}
return wallet.SignTx(account, tx, chainID)
}
@@ -1216,7 +1222,7 @@ func (s *PublicTransactionPoolAPI) SendTransaction(ctx context.Context, args Sen
var chainID *big.Int
if config := s.b.ChainConfig(); config.IsEIP155(s.b.CurrentBlock().Number()) {
chainID = config.ChainId
chainID = config.ChainID
}
signed, err := wallet.SignTx(account, tx, chainID)
if err != nil {
@@ -1293,14 +1299,19 @@ func (s *PublicTransactionPoolAPI) SignTransaction(ctx context.Context, args Sen
return &SignTransactionResult{data, tx}, nil
}
// PendingTransactions returns the transactions that are in the transaction pool and have a from address that is one of
// the accounts this node manages.
// PendingTransactions returns the transactions that are in the transaction pool
// and have a from address that is one of the accounts this node manages.
func (s *PublicTransactionPoolAPI) PendingTransactions() ([]*RPCTransaction, error) {
pending, err := s.b.GetPoolTransactions()
if err != nil {
return nil, err
}
accounts := make(map[common.Address]struct{})
for _, wallet := range s.b.AccountManager().Wallets() {
for _, account := range wallet.Accounts() {
accounts[account.Address] = struct{}{}
}
}
transactions := make([]*RPCTransaction, 0, len(pending))
for _, tx := range pending {
var signer types.Signer = types.HomesteadSigner{}
@@ -1308,7 +1319,7 @@ func (s *PublicTransactionPoolAPI) PendingTransactions() ([]*RPCTransaction, err
signer = types.NewEIP155Signer(tx.ChainId())
}
from, _ := types.Sender(signer, tx)
if _, err := s.b.AccountManager().Find(accounts.Account{Address: from}); err == nil {
if _, exists := accounts[from]; exists {
transactions = append(transactions, newRPCPendingTransaction(tx))
}
}
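
Replacing the per-transaction AccountManager.Find call with a one-off address set is what turns the cost from O(txs*accs) into O(txs+accs). The pattern in isolation, with illustrative values:

package main

import "fmt"

func main() {
	owned := []string{"0xa1", "0xb2"}           // accounts this node manages
	senders := []string{"0xb2", "0xc3", "0xa1"} // pool transaction senders

	// Build the set once: O(accs).
	set := make(map[string]struct{}, len(owned))
	for _, a := range owned {
		set[a] = struct{}{}
	}
	// Each membership test is then O(1), so filtering is O(txs).
	for _, s := range senders {
		if _, ok := set[s]; ok {
			fmt.Println("pending tx from managed account", s)
		}
	}
}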

View File

@@ -127,7 +127,7 @@ func New(ctx *node.ServiceContext, config *eth.Config) (*LightEthereum, error) {
}
leth.txPool = light.NewTxPool(leth.chainConfig, leth.blockchain, leth.relay)
if leth.protocolManager, err = NewProtocolManager(leth.chainConfig, true, ClientProtocolVersions, config.NetworkId, leth.eventMux, leth.engine, leth.peers, leth.blockchain, nil, chainDb, leth.odr, leth.relay, quitSync, &leth.wg); err != nil {
if leth.protocolManager, err = NewProtocolManager(leth.chainConfig, true, ClientProtocolVersions, config.NetworkId, leth.eventMux, leth.engine, leth.peers, leth.blockchain, nil, chainDb, leth.odr, leth.relay, leth.serverPool, quitSync, &leth.wg); err != nil {
return nil, err
}
leth.ApiBackend = &LesApiBackend{leth, nil}

View File

@@ -19,6 +19,7 @@ package les
import (
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"math/big"
@@ -82,7 +83,7 @@ type BlockChain interface {
InsertHeaderChain(chain []*types.Header, checkFreq int) (int, error)
Rollback(chain []common.Hash)
GetHeaderByNumber(number uint64) *types.Header
GetBlockHashesFromHash(hash common.Hash, max uint64) []common.Hash
GetAncestor(hash common.Hash, number, ancestor uint64, maxNonCanonical *uint64) (common.Hash, uint64)
Genesis() *types.Block
SubscribeChainHeadEvent(ch chan<- core.ChainHeadEvent) event.Subscription
}
@@ -128,7 +129,7 @@ type ProtocolManager struct {
// NewProtocolManager returns a new Ethereum sub-protocol manager. The Ethereum sub-protocol manages peers capable
// of communicating with the Ethereum network.
func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, protocolVersions []uint, networkId uint64, mux *event.TypeMux, engine consensus.Engine, peers *peerSet, blockchain BlockChain, txpool txPool, chainDb ethdb.Database, odr *LesOdr, txrelay *LesTxRelay, quitSync chan struct{}, wg *sync.WaitGroup) (*ProtocolManager, error) {
func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, protocolVersions []uint, networkId uint64, mux *event.TypeMux, engine consensus.Engine, peers *peerSet, blockchain BlockChain, txpool txPool, chainDb ethdb.Database, odr *LesOdr, txrelay *LesTxRelay, serverPool *serverPool, quitSync chan struct{}, wg *sync.WaitGroup) (*ProtocolManager, error) {
// Create the protocol manager with the base fields
manager := &ProtocolManager{
lightSync: lightSync,
@@ -140,6 +141,7 @@ func NewProtocolManager(chainConfig *params.ChainConfig, lightSync bool, protoco
networkId: networkId,
txpool: txpool,
txrelay: txrelay,
serverPool: serverPool,
peers: peers,
newPeerCh: make(chan *peer),
quitSync: quitSync,
@@ -417,6 +419,8 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
}
hashMode := query.Origin.Hash != (common.Hash{})
first := true
maxNonCanonical := uint64(100)
// Gather headers until the fetch or network limits are reached
var (
@@ -428,40 +432,57 @@ func (pm *ProtocolManager) handleMsg(p *peer) error {
// Retrieve the next header satisfying the query
var origin *types.Header
if hashMode {
origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
if first {
first = false
origin = pm.blockchain.GetHeaderByHash(query.Origin.Hash)
if origin != nil {
query.Origin.Number = origin.Number.Uint64()
}
} else {
origin = pm.blockchain.GetHeader(query.Origin.Hash, query.Origin.Number)
}
} else {
origin = pm.blockchain.GetHeaderByNumber(query.Origin.Number)
}
if origin == nil {
break
}
number := origin.Number.Uint64()
headers = append(headers, origin)
bytes += estHeaderRlpSize
// Advance to the next header of the query
switch {
case query.Origin.Hash != (common.Hash{}) && query.Reverse:
case hashMode && query.Reverse:
// Hash based traversal towards the genesis block
for i := 0; i < int(query.Skip)+1; i++ {
if header := pm.blockchain.GetHeader(query.Origin.Hash, number); header != nil {
query.Origin.Hash = header.ParentHash
number--
} else {
unknown = true
break
}
}
case query.Origin.Hash != (common.Hash{}) && !query.Reverse:
// Hash based traversal towards the leaf block
if header := pm.blockchain.GetHeaderByNumber(origin.Number.Uint64() + query.Skip + 1); header != nil {
if pm.blockchain.GetBlockHashesFromHash(header.Hash(), query.Skip+1)[query.Skip] == query.Origin.Hash {
query.Origin.Hash = header.Hash()
} else {
unknown = true
}
} else {
ancestor := query.Skip + 1
if ancestor == 0 {
unknown = true
} else {
query.Origin.Hash, query.Origin.Number = pm.blockchain.GetAncestor(query.Origin.Hash, query.Origin.Number, ancestor, &maxNonCanonical)
unknown = (query.Origin.Hash == common.Hash{})
}
case hashMode && !query.Reverse:
// Hash based traversal towards the leaf block
var (
current = origin.Number.Uint64()
next = current + query.Skip + 1
)
if next <= current {
infos, _ := json.MarshalIndent(p.Peer.Info(), "", " ")
p.Log().Warn("GetBlockHeaders skip overflow attack", "current", current, "skip", query.Skip, "next", next, "attacker", infos)
unknown = true
} else {
if header := pm.blockchain.GetHeaderByNumber(next); header != nil {
nextHash := header.Hash()
expOldHash, _ := pm.blockchain.GetAncestor(nextHash, next, query.Skip+1, &maxNonCanonical)
if expOldHash == query.Origin.Hash {
query.Origin.Hash, query.Origin.Number = nextHash, next
} else {
unknown = true
}
} else {
unknown = true
}
}
case query.Reverse:
// Number based traversal towards the genesis block

View File

@@ -178,7 +178,7 @@ func newTestProtocolManager(lightSync bool, blocks int, generator func(int, *cor
} else {
protocolVersions = ServerProtocolVersions
}
pm, err := NewProtocolManager(gspec.Config, lightSync, protocolVersions, NetworkId, evmux, engine, peers, chain, nil, db, odr, nil, make(chan struct{}), new(sync.WaitGroup))
pm, err := NewProtocolManager(gspec.Config, lightSync, protocolVersions, NetworkId, evmux, engine, peers, chain, nil, db, odr, nil, nil, make(chan struct{}), new(sync.WaitGroup))
if err != nil {
return nil, err
}

View File

@@ -69,9 +69,9 @@ type sentReq struct {
lock sync.RWMutex // protect access to sentTo map
sentTo map[distPeer]sentReqToPeer
reqQueued bool // a request has been queued but not sent
reqSent bool // a request has been sent but not timed out
reqSrtoCount int // number of requests that reached soft (but not hard) timeout
lastReqQueued bool // last request has been queued but not sent
lastReqSentTo distPeer // if not nil then last request has been sent to given peer but not timed out
reqSrtoCount int // number of requests that reached soft (but not hard) timeout
}
// sentReqToPeer notifies the request-from-peer goroutine (tryRequest) about a response
@@ -180,7 +180,7 @@ type reqStateFn func() reqStateFn
// retrieveLoop is the retrieval state machine event loop
func (r *sentReq) retrieveLoop() {
go r.tryRequest()
r.reqQueued = true
r.lastReqQueued = true
state := r.stateRequesting
for state != nil {
@@ -214,7 +214,7 @@ func (r *sentReq) stateRequesting() reqStateFn {
case rpSoftTimeout:
// last request timed out, try asking a new peer
go r.tryRequest()
r.reqQueued = true
r.lastReqQueued = true
return r.stateRequesting
case rpDeliveredValid:
r.stop(nil)
@@ -233,7 +233,7 @@ func (r *sentReq) stateNoMorePeers() reqStateFn {
select {
case <-time.After(retryQueue):
go r.tryRequest()
r.reqQueued = true
r.lastReqQueued = true
return r.stateRequesting
case ev := <-r.eventsCh:
r.update(ev)
@@ -260,22 +260,26 @@ func (r *sentReq) stateStopped() reqStateFn {
func (r *sentReq) update(ev reqPeerEvent) {
switch ev.event {
case rpSent:
r.reqQueued = false
if ev.peer != nil {
r.reqSent = true
}
r.lastReqQueued = false
r.lastReqSentTo = ev.peer
case rpSoftTimeout:
r.reqSent = false
r.lastReqSentTo = nil
r.reqSrtoCount++
case rpHardTimeout, rpDeliveredValid, rpDeliveredInvalid:
case rpHardTimeout:
r.reqSrtoCount--
case rpDeliveredValid, rpDeliveredInvalid:
if ev.peer == r.lastReqSentTo {
r.lastReqSentTo = nil
} else {
r.reqSrtoCount--
}
}
}
// waiting returns true if the retrieval mechanism is waiting for an answer from
// any peer
func (r *sentReq) waiting() bool {
return r.reqQueued || r.reqSent || r.reqSrtoCount > 0
return r.lastReqQueued || r.lastReqSentTo != nil || r.reqSrtoCount > 0
}
// tryRequest tries to send the request to a new peer and waits for it to either

View File

@@ -52,7 +52,7 @@ type LesServer struct {
func NewLesServer(eth *eth.Ethereum, config *eth.Config) (*LesServer, error) {
quitSync := make(chan struct{})
pm, err := NewProtocolManager(eth.BlockChain().Config(), false, ServerProtocolVersions, config.NetworkId, eth.EventMux(), eth.Engine(), newPeerSet(), eth.BlockChain(), eth.TxPool(), eth.ChainDb(), nil, nil, quitSync, new(sync.WaitGroup))
pm, err := NewProtocolManager(eth.BlockChain().Config(), false, ServerProtocolVersions, config.NetworkId, eth.EventMux(), eth.Engine(), newPeerSet(), eth.BlockChain(), eth.TxPool(), eth.ChainDb(), nil, nil, nil, quitSync, new(sync.WaitGroup))
if err != nil {
return nil, err
}

View File

@@ -433,6 +433,18 @@ func (self *LightChain) GetBlockHashesFromHash(hash common.Hash, max uint64) []c
return self.hc.GetBlockHashesFromHash(hash, max)
}
// GetAncestor retrieves the Nth ancestor of a given block. It assumes that either the given block or
// a close ancestor of it is canonical. maxNonCanonical points to a downwards counter limiting the
// number of blocks to be individually checked before we reach the canonical chain.
//
// Note: ancestor == 0 returns the same block, 1 returns its parent and so on.
func (bc *LightChain) GetAncestor(hash common.Hash, number, ancestor uint64, maxNonCanonical *uint64) (common.Hash, uint64) {
bc.chainmu.Lock()
defer bc.chainmu.Unlock()
return bc.hc.GetAncestor(hash, number, ancestor, maxNonCanonical)
}
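
As a hedged usage sketch (a fragment, assuming the les BlockChain interface above and the usual common/types imports are in scope): resolving a fixed-depth ancestor while bounding how many potentially non-canonical blocks are checked one by one.

// ancestorOf resolves the depth-th ancestor of head; depth == 0 would
// return head itself. At most 100 blocks are checked individually before
// the lookup falls back to the canonical number index.
func ancestorOf(chain BlockChain, head *types.Header, depth uint64) (common.Hash, uint64) {
	maxNonCanonical := uint64(100)
	return chain.GetAncestor(head.Hash(), head.Number.Uint64(), depth, &maxNonCanonical)
}

A zero return hash signals that the ancestor is unknown, matching the unknown = (query.Origin.Hash == common.Hash{}) checks in the handlers above.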
// GetHeaderByNumber retrieves a block header from the database by number,
// caching it (associated with its hash) if found.
func (self *LightChain) GetHeaderByNumber(number uint64) *types.Header {

View File

@@ -59,18 +59,18 @@ type trustedCheckpoint struct {
var (
mainnetCheckpoint = trustedCheckpoint{
name: "mainnet",
sectionIdx: 170,
sectionHead: common.HexToHash("3bb2c28bcce463d57968f14f56cdb3fbf35349ab7a701f44c1afb57349c9a356"),
chtRoot: common.HexToHash("d92b6d0853455f8439086292338e87f69781921680dd7aa072fb71547b87415e"),
bloomTrieRoot: common.HexToHash("e4e8250a2fefddead7ae42daecd848cbf9b66d748a8270f8bbd4370b764bb9e9"),
sectionIdx: 174,
sectionHead: common.HexToHash("a3ef48cd8f1c3a08419f0237fc7763491fe89497b3144b17adf87c1c43664613"),
chtRoot: common.HexToHash("dcbeed9f4dea1b3cb75601bb27c51b9960c28e5850275402ac49a150a667296e"),
bloomTrieRoot: common.HexToHash("6b7497a4a03e33870a2383cb6f5e70570f12b1bf5699063baf8c71d02ca90b02"),
}
ropstenCheckpoint = trustedCheckpoint{
name: "ropsten",
sectionIdx: 97,
sectionHead: common.HexToHash("719448c67c01eb5b9f27833a36a4e34612f66801316d7ff37daf9e77fb4cd095"),
chtRoot: common.HexToHash("a7857afc15930ca6e583b6c3d563a025144011655843d52d28e2fdaadd417bea"),
bloomTrieRoot: common.HexToHash("9c71d4b50cbec86dfeaa8e08992de8a4667b81d13c54d6522b17ce2fc5d36416"),
sectionIdx: 102,
sectionHead: common.HexToHash("9017ab08465cb2b2dee035ee5b817bbd7fa28e2c8d2cd903e0aed1cccb249e89"),
chtRoot: common.HexToHash("f61c10a7a787a5ef15f0ae1ae6c13c64331e57e79d0466d2bd9b0c06833fe956"),
bloomTrieRoot: common.HexToHash("69f2ad19aa46d5213a90137b3d2c9bff8a7c9483f7170f0125096ff450c9a873"),
}
)

View File

@@ -89,7 +89,7 @@ type TxRelayBackend interface {
func NewTxPool(config *params.ChainConfig, chain *LightChain, relay TxRelayBackend) *TxPool {
pool := &TxPool{
config: config,
signer: types.NewEIP155Signer(config.ChainId),
signer: types.NewEIP155Signer(config.ChainID),
nonce: make(map[common.Address]uint64),
pending: make(map[common.Hash]*types.Transaction),
mined: make(map[common.Hash][]*types.Transaction),

View File

@@ -134,6 +134,17 @@ func (exp *exp) publishTimer(name string, metric metrics.Timer) {
exp.getFloat(name + ".mean-rate").Set(t.RateMean())
}
func (exp *exp) publishResettingTimer(name string, metric metrics.ResettingTimer) {
t := metric.Snapshot()
ps := t.Percentiles([]float64{50, 75, 95, 99})
exp.getInt(name + ".count").Set(int64(len(t.Values())))
exp.getFloat(name + ".mean").Set(t.Mean())
exp.getInt(name + ".50-percentile").Set(ps[0])
exp.getInt(name + ".75-percentile").Set(ps[1])
exp.getInt(name + ".95-percentile").Set(ps[2])
exp.getInt(name + ".99-percentile").Set(ps[3])
}
func (exp *exp) syncToExpvar() {
exp.registry.Each(func(name string, i interface{}) {
switch i.(type) {
@@ -149,6 +160,8 @@ func (exp *exp) syncToExpvar() {
exp.publishMeter(name, i.(metrics.Meter))
case metrics.Timer:
exp.publishTimer(name, i.(metrics.Timer))
case metrics.ResettingTimer:
exp.publishResettingTimer(name, i.(metrics.ResettingTimer))
default:
panic(fmt.Sprintf("unsupported type for '%s': %T", name, i))
}

View File

@@ -68,17 +68,20 @@ func CollectProcessMetrics(refresh time.Duration) {
}
// Iterate loading the different stats and updating the meters
for i := 1; ; i++ {
runtime.ReadMemStats(memstats[i%2])
memAllocs.Mark(int64(memstats[i%2].Mallocs - memstats[(i-1)%2].Mallocs))
memFrees.Mark(int64(memstats[i%2].Frees - memstats[(i-1)%2].Frees))
memInuse.Mark(int64(memstats[i%2].Alloc - memstats[(i-1)%2].Alloc))
memPauses.Mark(int64(memstats[i%2].PauseTotalNs - memstats[(i-1)%2].PauseTotalNs))
location1 := i % 2
location2 := (i - 1) % 2
if ReadDiskStats(diskstats[i%2]) == nil {
diskReads.Mark(diskstats[i%2].ReadCount - diskstats[(i-1)%2].ReadCount)
diskReadBytes.Mark(diskstats[i%2].ReadBytes - diskstats[(i-1)%2].ReadBytes)
diskWrites.Mark(diskstats[i%2].WriteCount - diskstats[(i-1)%2].WriteCount)
diskWriteBytes.Mark(diskstats[i%2].WriteBytes - diskstats[(i-1)%2].WriteBytes)
runtime.ReadMemStats(memstats[location1])
memAllocs.Mark(int64(memstats[location1].Mallocs - memstats[location2].Mallocs))
memFrees.Mark(int64(memstats[location1].Frees - memstats[location2].Frees))
memInuse.Mark(int64(memstats[location1].Alloc - memstats[location2].Alloc))
memPauses.Mark(int64(memstats[location1].PauseTotalNs - memstats[location2].PauseTotalNs))
if ReadDiskStats(diskstats[location1]) == nil {
diskReads.Mark(diskstats[location1].ReadCount - diskstats[location2].ReadCount)
diskReadBytes.Mark(diskstats[location1].ReadBytes - diskstats[location2].ReadBytes)
diskWrites.Mark(diskstats[location1].WriteCount - diskstats[location2].WriteCount)
diskWriteBytes.Mark(diskstats[location1].WriteBytes - diskstats[location2].WriteBytes)
}
time.Sleep(refresh)
}
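
The hoisted location1/location2 indices implement a two-slot ring buffer: each pass writes the fresh cumulative sample into slot i%2 and differences it against the previous slot, without allocating. A self-contained sketch with made-up counter values:

package main

import "fmt"

func main() {
	var samples [2]uint64 // two-slot ring of cumulative counters
	cumulative := []uint64{100, 140, 210, 230}

	for i := 1; i <= len(cumulative); i++ {
		cur, prev := i%2, (i-1)%2
		samples[cur] = cumulative[i-1]
		// The delta between consecutive samples is what gets marked
		// on the meter each round.
		fmt.Println("delta:", samples[cur]-samples[prev])
	}
}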

View File

@@ -58,7 +58,11 @@ type NilResettingTimer struct {
func (NilResettingTimer) Values() []int64 { return nil }
// Snapshot is a no-op.
func (NilResettingTimer) Snapshot() ResettingTimer { return NilResettingTimer{} }
func (NilResettingTimer) Snapshot() ResettingTimer {
return &ResettingTimerSnapshot{
values: []int64{},
}
}
// Time is a no-op.
func (NilResettingTimer) Time(func()) {}
@@ -210,7 +214,7 @@ func (t *ResettingTimerSnapshot) calc(percentiles []float64) {
// poor man's math.Round(x):
// math.Floor(x + 0.5)
indexOfPerc := int(math.Floor(((abs / 100.0) * float64(count)) + 0.5))
if pct >= 0 {
if pct >= 0 && indexOfPerc > 0 {
indexOfPerc -= 1 // index offset=0
}
thresholdBoundary = t.values[indexOfPerc]
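
The added indexOfPerc > 0 guard matters for tiny samples: with two values and pct = 5, floor((5/100)*2 + 0.5) = floor(0.6) = 0, and the old unconditional decrement would have indexed position -1. A worked check of the arithmetic (non-negative percentiles only; the real code first takes math.Abs(pct)):

package main

import (
	"fmt"
	"math"
)

// percIndex reproduces the index computation from calc above.
func percIndex(pct float64, count int) int {
	idx := int(math.Floor((pct/100.0)*float64(count) + 0.5))
	if pct >= 0 && idx > 0 {
		idx-- // shift to zero-based only when there is room
	}
	return idx
}

func main() {
	fmt.Println(percIndex(5, 2))  // 0 (previously -1: out of range)
	fmt.Println(percIndex(50, 2)) // 0
	fmt.Println(percIndex(95, 2)) // 1
}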

View File

@@ -104,3 +104,113 @@ func TestResettingTimer(t *testing.T) {
}
}
}
func TestResettingTimerWithFivePercentiles(t *testing.T) {
tests := []struct {
values []int64
start int
end int
wantP05 int64
wantP20 int64
wantP50 int64
wantP95 int64
wantP99 int64
wantMean float64
wantMin int64
wantMax int64
}{
{
values: []int64{},
start: 1,
end: 11,
wantP05: 1, wantP20: 2, wantP50: 5, wantP95: 10, wantP99: 10,
wantMin: 1, wantMax: 10, wantMean: 5.5,
},
{
values: []int64{},
start: 1,
end: 101,
wantP05: 5, wantP20: 20, wantP50: 50, wantP95: 95, wantP99: 99,
wantMin: 1, wantMax: 100, wantMean: 50.5,
},
{
values: []int64{1},
start: 0,
end: 0,
wantP05: 1, wantP20: 1, wantP50: 1, wantP95: 1, wantP99: 1,
wantMin: 1, wantMax: 1, wantMean: 1,
},
{
values: []int64{0},
start: 0,
end: 0,
wantP05: 0, wantP20: 0, wantP50: 0, wantP95: 0, wantP99: 0,
wantMin: 0, wantMax: 0, wantMean: 0,
},
{
values: []int64{},
start: 0,
end: 0,
wantP05: 0, wantP20: 0, wantP50: 0, wantP95: 0, wantP99: 0,
wantMin: 0, wantMax: 0, wantMean: 0,
},
{
values: []int64{1, 10},
start: 0,
end: 0,
wantP05: 1, wantP20: 1, wantP50: 1, wantP95: 10, wantP99: 10,
wantMin: 1, wantMax: 10, wantMean: 5.5,
},
}
for ind, tt := range tests {
timer := NewResettingTimer()
for i := tt.start; i < tt.end; i++ {
tt.values = append(tt.values, int64(i))
}
for _, v := range tt.values {
timer.Update(time.Duration(v))
}
snap := timer.Snapshot()
ps := snap.Percentiles([]float64{5, 20, 50, 95, 99})
val := snap.Values()
if len(val) > 0 {
if tt.wantMin != val[0] {
t.Fatalf("%d: min: got %d, want %d", ind, val[0], tt.wantMin)
}
if tt.wantMax != val[len(val)-1] {
t.Fatalf("%d: max: got %d, want %d", ind, val[len(val)-1], tt.wantMax)
}
}
if tt.wantMean != snap.Mean() {
t.Fatalf("%d: mean: got %.2f, want %.2f", ind, snap.Mean(), tt.wantMean)
}
if tt.wantP05 != ps[0] {
t.Fatalf("%d: p05: got %d, want %d", ind, ps[0], tt.wantP05)
}
if tt.wantP20 != ps[1] {
t.Fatalf("%d: p20: got %d, want %d", ind, ps[1], tt.wantP20)
}
if tt.wantP50 != ps[2] {
t.Fatalf("%d: p50: got %d, want %d", ind, ps[2], tt.wantP50)
}
if tt.wantP95 != ps[3] {
t.Fatalf("%d: p95: got %d, want %d", ind, ps[3], tt.wantP95)
}
if tt.wantP99 != ps[4] {
t.Fatalf("%d: p99: got %d, want %d", ind, ps[4], tt.wantP99)
}
}
}

View File

@@ -297,7 +297,6 @@ func (self *worker) update() {
func (self *worker) wait() {
for {
mustCommitNewWork := true
for result := range self.recv {
atomic.AddInt32(&self.atWork, -1)
@@ -322,11 +321,6 @@ func (self *worker) wait() {
log.Error("Failed writing block to chain", "err", err)
continue
}
// check if canon block and write transactions
if stat == core.CanonStatTy {
// implicit by posting ChainHeadEvent
mustCommitNewWork = false
}
// Broadcast the block and announce chain insertion event
self.mux.Post(core.NewMinedBlockEvent{Block: block})
var (
@@ -341,10 +335,6 @@ func (self *worker) wait() {
// Insert the block into the set of pending ones to wait for confirmations
self.unconfirmed.Insert(block.NumberU64(), block.Hash())
if mustCommitNewWork {
self.commitNewWork()
}
}
}
}
@@ -370,7 +360,7 @@ func (self *worker) makeCurrent(parent *types.Block, header *types.Header) error
}
work := &Work{
config: self.config,
signer: types.NewEIP155Signer(self.config.ChainId),
signer: types.NewEIP155Signer(self.config.ChainID),
state: state,
ancestors: set.New(),
family: set.New(),

View File

@@ -338,6 +338,21 @@ func (api *PublicDebugAPI) Metrics(raw bool) (map[string]interface{}, error) {
},
}
case metrics.ResettingTimer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{5, 20, 50, 80, 95})
root[name] = map[string]interface{}{
"Measurements": len(t.Values()),
"Mean": time.Duration(t.Mean()).String(),
"Percentiles": map[string]interface{}{
"5": time.Duration(ps[0]).String(),
"20": time.Duration(ps[1]).String(),
"50": time.Duration(ps[2]).String(),
"80": time.Duration(ps[3]).String(),
"95": time.Duration(ps[4]).String(),
},
}
default:
root[name] = "Unknown metric type"
}
@@ -373,6 +388,21 @@ func (api *PublicDebugAPI) Metrics(raw bool) (map[string]interface{}, error) {
},
}
case metrics.ResettingTimer:
t := metric.Snapshot()
ps := t.Percentiles([]float64{5, 20, 50, 80, 95})
root[name] = map[string]interface{}{
"Measurements": len(t.Values()),
"Mean": time.Duration(t.Mean()).String(),
"Percentiles": map[string]interface{}{
"5": time.Duration(ps[0]).String(),
"20": time.Duration(ps[1]).String(),
"50": time.Duration(ps[2]).String(),
"80": time.Duration(ps[3]).String(),
"95": time.Duration(ps[4]).String(),
},
}
default:
root[name] = "Unknown metric type"
}

View File

@@ -528,9 +528,9 @@ func importPublicKey(pubKey []byte) (*ecies.PublicKey, error) {
return nil, fmt.Errorf("invalid public key length %v (expect 64/65)", len(pubKey))
}
// TODO: fewer pointless conversions
pub := crypto.ToECDSAPub(pubKey65)
if pub.X == nil {
return nil, fmt.Errorf("invalid public key")
pub, err := crypto.UnmarshalPubkey(pubKey65)
if err != nil {
return nil, err
}
return ecies.ImportECDSAPublic(pub), nil
}

View File

@@ -23,15 +23,16 @@ import (
"github.com/ethereum/go-ethereum/common"
)
// Genesis hashes to enforce below configs on.
var (
MainnetGenesisHash = common.HexToHash("0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3") // Mainnet genesis hash to enforce below configs on
TestnetGenesisHash = common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d") // Testnet genesis hash to enforce below configs on
MainnetGenesisHash = common.HexToHash("0xd4e56740f876aef8c010b86a40d5f56745a118d0906a34e69aec8c0db1cb8fa3")
TestnetGenesisHash = common.HexToHash("0x41941023680923e0fe4d74a34bdac8141f2540e3ae90623718e47d66d1ca4a2d")
)
var (
// MainnetChainConfig is the chain parameters to run a node on the main network.
MainnetChainConfig = &ChainConfig{
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(1150000),
DAOForkBlock: big.NewInt(1920000),
DAOForkSupport: true,
@@ -46,7 +47,7 @@ var (
// TestnetChainConfig contains the chain parameters to run a node on the Ropsten test network.
TestnetChainConfig = &ChainConfig{
ChainId: big.NewInt(3),
ChainID: big.NewInt(3),
HomesteadBlock: big.NewInt(0),
DAOForkBlock: nil,
DAOForkSupport: true,
@@ -61,7 +62,7 @@ var (
// RinkebyChainConfig contains the chain parameters to run a node on the Rinkeby test network.
RinkebyChainConfig = &ChainConfig{
ChainId: big.NewInt(4),
ChainID: big.NewInt(4),
HomesteadBlock: big.NewInt(1),
DAOForkBlock: nil,
DAOForkSupport: true,
@@ -101,7 +102,7 @@ var (
// that any network, identified by its genesis block, can have its own
// set of configuration options.
type ChainConfig struct {
ChainId *big.Int `json:"chainId"` // Chain id identifies the current chain and is used for replay protection
ChainID *big.Int `json:"chainId"` // chainId identifies the current chain and is used for replay protection
HomesteadBlock *big.Int `json:"homesteadBlock,omitempty"` // Homestead switch block (nil = no fork, 0 = already homestead)
@@ -154,7 +155,7 @@ func (c *ChainConfig) String() string {
engine = "unknown"
}
return fmt.Sprintf("{ChainID: %v Homestead: %v DAO: %v DAOSupport: %v EIP150: %v EIP155: %v EIP158: %v Byzantium: %v Constantinople: %v Engine: %v}",
c.ChainId,
c.ChainID,
c.HomesteadBlock,
c.DAOForkBlock,
c.DAOForkSupport,
@@ -172,27 +173,32 @@ func (c *ChainConfig) IsHomestead(num *big.Int) bool {
return isForked(c.HomesteadBlock, num)
}
// IsDAO returns whether num is either equal to the DAO fork block or greater.
// IsDAOFork returns whether num is either equal to the DAO fork block or greater.
func (c *ChainConfig) IsDAOFork(num *big.Int) bool {
return isForked(c.DAOForkBlock, num)
}
// IsEIP150 returns whether num is either equal to the EIP150 fork block or greater.
func (c *ChainConfig) IsEIP150(num *big.Int) bool {
return isForked(c.EIP150Block, num)
}
// IsEIP155 returns whether num is either equal to the EIP155 fork block or greater.
func (c *ChainConfig) IsEIP155(num *big.Int) bool {
return isForked(c.EIP155Block, num)
}
// IsEIP158 returns whether num is either equal to the EIP158 fork block or greater.
func (c *ChainConfig) IsEIP158(num *big.Int) bool {
return isForked(c.EIP158Block, num)
}
// IsByzantium returns whether num is either equal to the Byzantium fork block or greater.
func (c *ChainConfig) IsByzantium(num *big.Int) bool {
return isForked(c.ByzantiumBlock, num)
}
// IsConstantinople returns whether num is either equal to the Constantinople fork block or greater.
func (c *ChainConfig) IsConstantinople(num *big.Int) bool {
return isForked(c.ConstantinopleBlock, num)
}
@@ -251,7 +257,7 @@ func (c *ChainConfig) checkCompatible(newcfg *ChainConfig, head *big.Int) *Confi
if isForkIncompatible(c.EIP158Block, newcfg.EIP158Block, head) {
return newCompatError("EIP158 fork block", c.EIP158Block, newcfg.EIP158Block)
}
if c.IsEIP158(head) && !configNumEqual(c.ChainId, newcfg.ChainId) {
if c.IsEIP158(head) && !configNumEqual(c.ChainID, newcfg.ChainID) {
return newCompatError("EIP158 chain ID", c.EIP158Block, newcfg.EIP158Block)
}
if isForkIncompatible(c.ByzantiumBlock, newcfg.ByzantiumBlock, head) {
@@ -324,15 +330,16 @@ func (err *ConfigCompatError) Error() string {
// Rules is a one-time interface, meaning that it shouldn't be used in between transition
// phases.
type Rules struct {
ChainId *big.Int
ChainID *big.Int
IsHomestead, IsEIP150, IsEIP155, IsEIP158 bool
IsByzantium bool
}
// Rules ensures c's ChainID is not nil.
func (c *ChainConfig) Rules(num *big.Int) Rules {
chainId := c.ChainId
if chainId == nil {
chainId = new(big.Int)
chainID := c.ChainID
if chainID == nil {
chainID = new(big.Int)
}
return Rules{ChainId: new(big.Int).Set(chainId), IsHomestead: c.IsHomestead(num), IsEIP150: c.IsEIP150(num), IsEIP155: c.IsEIP155(num), IsEIP158: c.IsEIP158(num), IsByzantium: c.IsByzantium(num)}
return Rules{ChainID: new(big.Int).Set(chainID), IsHomestead: c.IsHomestead(num), IsEIP150: c.IsEIP150(num), IsEIP155: c.IsEIP155(num), IsEIP158: c.IsEIP158(num), IsByzantium: c.IsByzantium(num)}
}

View File

@@ -16,12 +16,12 @@
package params
// These are the multipliers for ether denominations.
// Example: To get the wei value of an amount in 'douglas', use
//
// new(big.Int).Mul(value, big.NewInt(params.Douglas))
//
const (
// These are the multipliers for ether denominations.
// Example: To get the wei value of an amount in 'douglas', use
//
// new(big.Int).Mul(value, big.NewInt(params.Douglas))
//
Wei = 1
Ada = 1e3
Babbage = 1e6

View File

@@ -16,6 +16,7 @@
package params
// GasTable organizes gas prices for different ethereum phases.
type GasTable struct {
ExtcodeSize uint64
ExtcodeCopy uint64
@@ -34,6 +35,7 @@ type GasTable struct {
CreateBySuicide uint64
}
// Variables containing gas prices for different ethereum phases.
var (
// GasTableHomestead contains the gas prices for
// the homestead phase.
@@ -47,8 +49,8 @@ var (
ExpByte: 10,
}
// GasTableHomestead contain the gas re-prices for
// the homestead phase.
// GasTableEIP150 contains the gas re-prices for
// the EIP150 phase.
GasTableEIP150 = GasTable{
ExtcodeSize: 700,
ExtcodeCopy: 700,
@@ -60,7 +62,8 @@ var (
CreateBySuicide: 25000,
}
// GasTableEIP158 contains the gas re-prices for
// the EIP15* phase.
GasTableEIP158 = GasTable{
ExtcodeSize: 700,
ExtcodeCopy: 700,

View File

@@ -19,7 +19,7 @@ package params
import "math/big"
var (
TargetGasLimit uint64 = GenesisGasLimit // The artificial target
TargetGasLimit = GenesisGasLimit // The artificial target
)
const (

View File

@@ -23,7 +23,7 @@ import (
const (
VersionMajor = 1 // Major version component of the current release
VersionMinor = 8 // Minor version component of the current release
VersionPatch = 10 // Patch version component of the current release
VersionPatch = 11 // Patch version component of the current release
VersionMeta = "stable" // Version metadata to append to the version string
)

View File

@@ -165,7 +165,12 @@ func NewHTTPServer(cors []string, vhosts []string, srv *Server) *http.Server {
// Wrap the CORS-handler within a host-handler
handler := newCorsHandler(srv, cors)
handler = newVHostHandler(vhosts, handler)
return &http.Server{Handler: handler}
return &http.Server{
Handler: handler,
ReadTimeout: 5 * time.Second,
WriteTimeout: 10 * time.Second,
IdleTimeout: 120 * time.Second,
}
}
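
These timeouts bound how long a slow reader, slow writer, or idle keep-alive connection can pin a file descriptor. A minimal standalone server configured the same way (the handler and port are hypothetical):

package main

import (
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "ok")
	})
	srv := &http.Server{
		Addr:         ":8545", // illustrative port
		Handler:      mux,
		ReadTimeout:  5 * time.Second,   // full request must be read in time
		WriteTimeout: 10 * time.Second,  // response must be written in time
		IdleTimeout:  120 * time.Second, // idle keep-alive connections are dropped
	}
	log.Fatal(srv.ListenAndServe())
}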
// ServeHTTP serves JSON-RPC requests over HTTP.
@@ -181,7 +186,7 @@ func (srv *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
// All checks passed, create a codec that reads directly from the request body
// until EOF, writes the response to w, and orders the server to process a
// single request.
ctx := context.Background()
ctx := r.Context()
ctx = context.WithValue(ctx, "remote", r.RemoteAddr)
ctx = context.WithValue(ctx, "scheme", r.Proto)
ctx = context.WithValue(ctx, "local", r.Host)

View File

@@ -320,9 +320,6 @@ func parsePositionalArguments(rawArgs json.RawMessage, types []reflect.Type) ([]
// CreateResponse will create a JSON-RPC success response with the given id and reply as result.
func (c *jsonCodec) CreateResponse(id interface{}, reply interface{}) interface{} {
if isHexNum(reflect.TypeOf(reply)) {
return &jsonSuccessResponse{Version: jsonrpcVersion, Id: id, Result: fmt.Sprintf(`%#x`, reply)}
}
return &jsonSuccessResponse{Version: jsonrpcVersion, Id: id, Result: reply}
}
@@ -340,11 +337,6 @@ func (c *jsonCodec) CreateErrorResponseWithInfo(id interface{}, err Error, info
// CreateNotification will create a JSON-RPC notification with the given subscription id and event as params.
func (c *jsonCodec) CreateNotification(subid, namespace string, event interface{}) interface{} {
if isHexNum(reflect.TypeOf(event)) {
return &jsonNotification{Version: jsonrpcVersion, Method: namespace + notificationMethodSuffix,
Params: jsonSubscription{Subscription: subid, Result: fmt.Sprintf(`%#x`, event)}}
}
return &jsonNotification{Version: jsonrpcVersion, Method: namespace + notificationMethodSuffix,
Params: jsonSubscription{Subscription: subid, Result: event}}
}

View File

@@ -145,7 +145,7 @@ func (s *Server) serveRequest(ctx context.Context, codec ServerCodec, singleShot
defer cancel()
// if the codec supports notification include a notifier that callbacks can use
// to send notification to clients. It is thight to the codec/connection. If the
// to send notification to clients. It is tied to the codec/connection. If the
// connection is closed the notifier will stop and cancels all active subscriptions.
if options&OptionSubscriptions == OptionSubscriptions {
ctx = context.WithValue(ctx, notifierKey{}, newNotifier(codec))
@@ -313,7 +313,6 @@ func (s *Server) handle(ctx context.Context, codec ServerCodec, req *serverReque
if len(reply) == 0 {
return codec.CreateResponse(req.id, nil), nil
}
if req.callb.errPos >= 0 { // test if method returned an error
if !reply[req.callb.errPos].IsNil() {
e := reply[req.callb.errPos].Interface().(error)

View File

@@ -22,7 +22,6 @@ import (
crand "crypto/rand"
"encoding/binary"
"encoding/hex"
"math/big"
"math/rand"
"reflect"
"strings"
@@ -105,20 +104,6 @@ func formatName(name string) string {
return string(ret)
}
var bigIntType = reflect.TypeOf((*big.Int)(nil)).Elem()
// Indication if this type should be serialized in hex
func isHexNum(t reflect.Type) bool {
if t == nil {
return false
}
for t.Kind() == reflect.Ptr {
t = t.Elem()
}
return t == bigIntType
}
// suitableCallbacks iterates over the methods of the given type. It will determine if a method satisfies the criteria
// for an RPC callback or a subscription callback and adds it to the collection of callbacks or subscriptions. See server
// documentation for a summary of these criteria.

View File

@@ -432,13 +432,11 @@ func (api *SignerAPI) EcRecover(ctx context.Context, data, sig hexutil.Bytes) (c
}
sig[64] -= 27 // Transform yellow paper V from 27/28 to 0/1
hash, _ := SignHash(data)
rpk, err := crypto.Ecrecover(hash, sig)
rpk, err := crypto.SigToPub(hash, sig)
if err != nil {
return common.Address{}, err
}
pubKey := crypto.ToECDSAPub(rpk)
recoveredAddr := crypto.PubkeyToAddress(*pubKey)
return recoveredAddr, nil
return crypto.PubkeyToAddress(*rpk), nil
}
// SignHash is a helper function that calculates a hash for the given message that can be

View File

@@ -19,6 +19,7 @@ package swap
import (
"context"
"crypto/ecdsa"
"errors"
"fmt"
"math/big"
"os"
@@ -134,6 +135,11 @@ func NewSwap(local *SwapParams, remote *SwapProfile, backend chequebook.Backend,
out *chequebook.Outbox
)
remotekey, err := crypto.UnmarshalPubkey(common.FromHex(remote.PublicKey))
if err != nil {
return nil, errors.New("invalid remote public key")
}
// check if remote chequebook is valid
// insolvent chequebooks suicide so will signal as invalid
// TODO: monitoring a chequebooks events
@@ -142,7 +148,7 @@ func NewSwap(local *SwapParams, remote *SwapProfile, backend chequebook.Backend,
log.Info(fmt.Sprintf("invalid contract %v for peer %v: %v)", remote.Contract.Hex()[:8], proto, err))
} else {
// remote contract valid, create inbox
in, err = chequebook.NewInbox(local.privateKey, remote.Contract, local.Beneficiary, crypto.ToECDSAPub(common.FromHex(remote.PublicKey)), backend)
in, err = chequebook.NewInbox(local.privateKey, remote.Contract, local.Beneficiary, remotekey, backend)
if err != nil {
log.Warn(fmt.Sprintf("unable to set up inbox for chequebook contract %v for peer %v: %v)", remote.Contract.Hex()[:8], proto, err))
}

View File

@@ -42,6 +42,7 @@ type BlockTest struct {
json btJSON
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (t *BlockTest) UnmarshalJSON(in []byte) error {
return json.Unmarshal(in, &t.json)
}

View File

@@ -27,7 +27,7 @@ import (
var (
mainnetChainConfig = params.ChainConfig{
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(1150000),
DAOForkBlock: big.NewInt(1920000),
DAOForkSupport: true,

View File

@@ -26,26 +26,26 @@ import (
// Forks table defines supported forks and their chain config.
var Forks = map[string]*params.ChainConfig{
"Frontier": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
},
"Homestead": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
},
"EIP150": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(0),
},
"EIP158": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(0),
EIP155Block: big.NewInt(0),
EIP158Block: big.NewInt(0),
},
"Byzantium": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(0),
EIP155Block: big.NewInt(0),
@@ -54,22 +54,22 @@ var Forks = map[string]*params.ChainConfig{
ByzantiumBlock: big.NewInt(0),
},
"FrontierToHomesteadAt5": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(5),
},
"HomesteadToEIP150At5": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(5),
},
"HomesteadToDaoAt5": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
DAOForkBlock: big.NewInt(5),
DAOForkSupport: true,
},
"EIP158ToByzantiumAt5": {
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
HomesteadBlock: big.NewInt(0),
EIP150Block: big.NewInt(0),
EIP155Block: big.NewInt(0),

View File

@@ -35,7 +35,7 @@ func TestTransaction(t *testing.T) {
EIP150Block: big.NewInt(0),
EIP155Block: big.NewInt(0),
EIP158Block: big.NewInt(0),
ChainId: big.NewInt(1),
ChainID: big.NewInt(1),
})
txt.config(`^Byzantium/`, params.ChainConfig{
HomesteadBlock: big.NewInt(0),


@@ -23,6 +23,21 @@ import (
"github.com/ethereum/go-ethereum/common"
"github.com/ethereum/go-ethereum/ethdb"
"github.com/ethereum/go-ethereum/log"
"github.com/ethereum/go-ethereum/metrics"
)
var (
memcacheFlushTimeTimer = metrics.NewRegisteredResettingTimer("trie/memcache/flush/time", nil)
memcacheFlushNodesMeter = metrics.NewRegisteredMeter("trie/memcache/flush/nodes", nil)
memcacheFlushSizeMeter = metrics.NewRegisteredMeter("trie/memcache/flush/size", nil)
memcacheGCTimeTimer = metrics.NewRegisteredResettingTimer("trie/memcache/gc/time", nil)
memcacheGCNodesMeter = metrics.NewRegisteredMeter("trie/memcache/gc/nodes", nil)
memcacheGCSizeMeter = metrics.NewRegisteredMeter("trie/memcache/gc/size", nil)
memcacheCommitTimeTimer = metrics.NewRegisteredResettingTimer("trie/memcache/commit/time", nil)
memcacheCommitNodesMeter = metrics.NewRegisteredMeter("trie/memcache/commit/nodes", nil)
memcacheCommitSizeMeter = metrics.NewRegisteredMeter("trie/memcache/commit/size", nil)
)
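
A hedged sketch of how such instruments are typically fed and read; the metric names mirror the diff, and Enabled, Mark, Update and Count are standard go-ethereum metrics calls:

package main

import (
	"fmt"
	"time"

	"github.com/ethereum/go-ethereum/metrics"
)

func main() {
	metrics.Enabled = true // must be set before registration, else nil sinks are returned

	flushNodes := metrics.NewRegisteredMeter("trie/memcache/flush/nodes", nil) // nil = default registry
	flushTime := metrics.NewRegisteredResettingTimer("trie/memcache/flush/time", nil)

	start := time.Now()
	flushNodes.Mark(128)                // e.g. nodes written out by one Cap round
	flushTime.Update(time.Since(start)) // duration of that round

	fmt.Println(flushNodes.Count()) // 128
}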
// secureKeyPrefix is the database key prefix used to store trie node preimages.
@@ -46,15 +61,22 @@ type DatabaseReader interface {
type Database struct {
diskdb ethdb.Database // Persistent storage for matured trie nodes
nodes map[common.Hash]*cachedNode // Data and references relationships of a node
preimages map[common.Hash][]byte // Preimages of nodes from the secure trie
seckeybuf [secureKeyLength]byte // Ephemeral buffer for calculating preimage keys
nodes map[common.Hash]*cachedNode // Data and references relationships of a node
oldest common.Hash // Oldest tracked node, flush-list head
newest common.Hash // Newest tracked node, flush-list tail
preimages map[common.Hash][]byte // Preimages of nodes from the secure trie
seckeybuf [secureKeyLength]byte // Ephemeral buffer for calculating preimage keys
gctime time.Duration // Time spent on garbage collection since last commit
gcnodes uint64 // Nodes garbage collected since last commit
gcsize common.StorageSize // Data storage garbage collected since last commit
nodesSize common.StorageSize // Storage size of the nodes cache
flushtime time.Duration // Time spent on data flushing since last commit
flushnodes uint64 // Nodes flushed since last commit
flushsize common.StorageSize // Data storage flushed since last commit
nodesSize common.StorageSize // Storage size of the nodes cache (exc. flushlist)
preimagesSize common.StorageSize // Storage size of the preimages cache
lock sync.RWMutex
@@ -66,6 +88,9 @@ type cachedNode struct {
blob []byte // Cached data block of the trie node
parents int // Number of live nodes referencing this one
children map[common.Hash]int // Children referenced by this node
flushPrev common.Hash // Previous node in the flush-list
flushNext common.Hash // Next node in the flush-list
}
// NewDatabase creates a new trie database to store ephemeral trie content before
@@ -96,12 +121,20 @@ func (db *Database) Insert(hash common.Hash, blob []byte) {
// insert is the private locked version of Insert.
func (db *Database) insert(hash common.Hash, blob []byte) {
// If the node's already cached, skip
if _, ok := db.nodes[hash]; ok {
return
}
db.nodes[hash] = &cachedNode{
blob: common.CopyBytes(blob),
children: make(map[common.Hash]int),
blob: common.CopyBytes(blob),
children: make(map[common.Hash]int),
flushPrev: db.newest,
}
// Update the flush-list endpoints
if db.oldest == (common.Hash{}) {
db.oldest, db.newest = hash, hash
} else {
db.nodes[db.newest].flushNext, db.newest = hash, hash
}
db.nodesSize += common.StorageSize(common.HashLength + len(blob))
}
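
The flushPrev/flushNext fields turn the node cache into an intrusive doubly linked list ordered by insertion, so the oldest still-referenced node is found in O(1). A minimal standalone sketch of that bookkeeping (string keys stand in for common.Hash, and the newest-end unlink is made explicit):

package main

import "fmt"

type node struct {
	blob                 []byte
	flushPrev, flushNext string // zero value marks the ends of the list
}

type cache struct {
	nodes          map[string]*node
	oldest, newest string
}

// insert appends a new node at the tail of the flush-list.
func (c *cache) insert(hash string, blob []byte) {
	if _, ok := c.nodes[hash]; ok {
		return // already tracked, keep its position
	}
	c.nodes[hash] = &node{blob: blob, flushPrev: c.newest}
	if c.oldest == "" {
		c.oldest, c.newest = hash, hash // first entry seeds both ends
	} else {
		c.nodes[c.newest].flushNext, c.newest = hash, hash
	}
}

// remove unlinks a node from the list, fixing up whichever end it touches.
func (c *cache) remove(hash string) {
	n := c.nodes[hash]
	if hash == c.oldest {
		c.oldest = n.flushNext
	} else {
		c.nodes[n.flushPrev].flushNext = n.flushNext
	}
	if hash == c.newest {
		c.newest = n.flushPrev
	} else {
		c.nodes[n.flushNext].flushPrev = n.flushPrev
	}
	delete(c.nodes, hash)
}

func main() {
	c := &cache{nodes: make(map[string]*node)}
	c.insert("a", []byte{1})
	c.insert("b", []byte{2})
	c.insert("c", []byte{3})
	c.remove("b")
	fmt.Println(c.oldest, c.newest) // a c
}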
@@ -208,6 +241,10 @@ func (db *Database) Dereference(child common.Hash, parent common.Hash) {
db.gcsize += storage - db.nodesSize
db.gctime += time.Since(start)
memcacheGCTimeTimer.Update(time.Since(start))
memcacheGCSizeMeter.Mark(int64(storage - db.nodesSize))
memcacheGCNodesMeter.Mark(int64(nodes - len(db.nodes)))
log.Debug("Dereferenced trie from memory database", "nodes", nodes-len(db.nodes), "size", storage-db.nodesSize, "time", time.Since(start),
"gcnodes", db.gcnodes, "gcsize", db.gcsize, "gctime", db.gctime, "livenodes", len(db.nodes), "livesize", db.nodesSize)
}
@@ -221,7 +258,7 @@ func (db *Database) dereference(child common.Hash, parent common.Hash) {
if node.children[child] == 0 {
delete(node.children, child)
}
// If the node does not exist, it's a previously committed node.
// If the child does not exist, it's a previously committed node.
node, ok := db.nodes[child]
if !ok {
return
@@ -229,6 +266,14 @@ func (db *Database) dereference(child common.Hash, parent common.Hash) {
// If there are no more references to the child, delete it and cascade
node.parents--
if node.parents == 0 {
// Remove the node from the flush-list
if child == db.oldest {
db.oldest = node.flushNext
} else {
db.nodes[node.flushPrev].flushNext = node.flushNext
db.nodes[node.flushNext].flushPrev = node.flushPrev
}
// Dereference all children and delete the node
for hash := range node.children {
db.dereference(hash, child)
}
@@ -237,6 +282,107 @@ func (db *Database) dereference(child common.Hash, parent common.Hash) {
}
}
// Cap iteratively flushes old but still referenced trie nodes until the total
// memory usage goes below the given threshold.
func (db *Database) Cap(limit common.StorageSize) error {
// Create a database batch to flush persistent data out. It is important that
// outside code doesn't see an inconsistent state (referenced data removed from
// memory cache during commit but not yet in persistent storage). This is ensured
// by only uncaching existing data when the database write finalizes.
db.lock.RLock()
nodes, storage, start := len(db.nodes), db.nodesSize, time.Now()
batch := db.diskdb.NewBatch()
// db.nodesSize only contains the useful data in the cache, but when reporting
// the total memory consumption, the maintenance metadata also needs to be
// counted. For every useful node, we track 2 extra hashes as the flushlist.
size := db.nodesSize + common.StorageSize((len(db.nodes)-1)*2*common.HashLength)
// If the preimage cache got large enough, push to disk. If it's still small
// leave for later to deduplicate writes.
flushPreimages := db.preimagesSize > 4*1024*1024
if flushPreimages {
for hash, preimage := range db.preimages {
if err := batch.Put(db.secureKey(hash[:]), preimage); err != nil {
log.Error("Failed to commit preimage from trie database", "err", err)
db.lock.RUnlock()
return err
}
if batch.ValueSize() > ethdb.IdealBatchSize {
if err := batch.Write(); err != nil {
db.lock.RUnlock()
return err
}
batch.Reset()
}
}
}
// Keep committing nodes from the flush-list until we're below allowance
oldest := db.oldest
for size > limit && oldest != (common.Hash{}) {
// Fetch the oldest referenced node and push into the batch
node := db.nodes[oldest]
if err := batch.Put(oldest[:], node.blob); err != nil {
db.lock.RUnlock()
return err
}
// If we exceeded the ideal batch size, commit and reset
if batch.ValueSize() >= ethdb.IdealBatchSize {
if err := batch.Write(); err != nil {
log.Error("Failed to write flush list to disk", "err", err)
db.lock.RUnlock()
return err
}
batch.Reset()
}
// Iterate to the next flush item, or abort if the size cap was achieved. Size
// is the total size, including both the useful cached data (hash -> blob), as
// well as the flushlist metadata (2*hash). When flushing items from the cache,
// we need to reduce both.
size -= common.StorageSize(3*common.HashLength + len(node.blob))
oldest = node.flushNext
}
// Flush out any remainder data from the last batch
if err := batch.Write(); err != nil {
log.Error("Failed to write flush list to disk", "err", err)
db.lock.RUnlock()
return err
}
db.lock.RUnlock()
// Write successful, clear out the flushed data
db.lock.Lock()
defer db.lock.Unlock()
if flushPreimages {
db.preimages = make(map[common.Hash][]byte)
db.preimagesSize = 0
}
for db.oldest != oldest {
node := db.nodes[db.oldest]
delete(db.nodes, db.oldest)
db.oldest = node.flushNext
db.nodesSize -= common.StorageSize(common.HashLength + len(node.blob))
}
if db.oldest != (common.Hash{}) {
db.nodes[db.oldest].flushPrev = common.Hash{}
}
db.flushnodes += uint64(nodes - len(db.nodes))
db.flushsize += storage - db.nodesSize
db.flushtime += time.Since(start)
memcacheFlushTimeTimer.Update(time.Since(start))
memcacheFlushSizeMeter.Mark(int64(storage - db.nodesSize))
memcacheFlushNodesMeter.Mark(int64(nodes - len(db.nodes)))
log.Debug("Persisted nodes from memory database", "nodes", nodes-len(db.nodes), "size", storage-db.nodesSize, "time", time.Since(start),
"flushnodes", db.flushnodes, "flushsize", db.flushsize, "flushtime", db.flushtime, "livenodes", len(db.nodes), "livesize", db.nodesSize)
return nil
}
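
The size accounting in Cap deserves a worked example. The numbers below are hypothetical, but the formulas are the ones used by insert and Cap:

package main

import "fmt"

func main() {
	// Hypothetical cache shape: 1000 nodes averaging 150 bytes of RLP each.
	const (
		hashLen = 32
		count   = 1000
		avgBlob = 150
	)
	// insert() charges one key hash plus the blob per node.
	nodesSize := count * (hashLen + avgBlob)
	// Cap() additionally counts two flush-list hashes per node, except for
	// the root, whose list pointers are deliberately not reported.
	reported := nodesSize + (count-1)*2*hashLen
	fmt.Println(nodesSize, reported) // 182000 245936
	// Evicting one node subtracts blob + 3 hashes: the key and both pointers.
	fmt.Println(reported - (3*hashLen + avgBlob)) // 245690
}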
// Commit iterates over all the children of a particular node, writes them out
// to disk, forcefully tearing down all references in both directions.
//
@@ -266,7 +412,7 @@ func (db *Database) Commit(node common.Hash, report bool) error {
}
}
// Move the trie itself into the batch, flushing if enough data is accumulated
nodes, storage := len(db.nodes), db.nodesSize+db.preimagesSize
nodes, storage := len(db.nodes), db.nodesSize
if err := db.commit(node, batch); err != nil {
log.Error("Failed to commit trie from trie database", "err", err)
db.lock.RUnlock()
@@ -289,15 +435,20 @@ func (db *Database) Commit(node common.Hash, report bool) error {
db.uncache(node)
memcacheCommitTimeTimer.Update(time.Since(start))
memcacheCommitSizeMeter.Mark(int64(storage - db.nodesSize))
memcacheCommitNodesMeter.Mark(int64(nodes - len(db.nodes)))
logger := log.Info
if !report {
logger = log.Debug
}
logger("Persisted trie from memory database", "nodes", nodes-len(db.nodes), "size", storage-db.nodesSize, "time", time.Since(start),
logger("Persisted trie from memory database", "nodes", nodes-len(db.nodes)+int(db.flushnodes), "size", storage-db.nodesSize+db.flushsize, "time", time.Since(start)+db.flushtime,
"gcnodes", db.gcnodes, "gcsize", db.gcsize, "gctime", db.gctime, "livenodes", len(db.nodes), "livesize", db.nodesSize)
// Reset the garbage collection statistics
db.gcnodes, db.gcsize, db.gctime = 0, 0, 0
db.flushnodes, db.flushsize, db.flushtime = 0, 0, 0
return nil
}
@@ -317,7 +468,7 @@ func (db *Database) commit(hash common.Hash, batch ethdb.Batch) error {
if err := batch.Put(hash[:], node.blob); err != nil {
return err
}
// If we've reached an optimal match size, commit and start over
// If we've reached an optimal batch size, commit and start over
if batch.ValueSize() >= ethdb.IdealBatchSize {
if err := batch.Write(); err != nil {
return err
@@ -337,7 +488,14 @@ func (db *Database) uncache(hash common.Hash) {
if !ok {
return
}
// Otherwise uncache the node's subtries and remove the node itself too
// Node still exists, remove it from the flush-list
if hash == db.oldest {
db.oldest = node.flushNext
} else {
db.nodes[node.flushPrev].flushNext = node.flushNext
db.nodes[node.flushNext].flushPrev = node.flushPrev
}
// Uncache the node's subtries and remove the node itself too
for child := range node.children {
db.uncache(child)
}
@@ -347,9 +505,13 @@ func (db *Database) uncache(hash common.Hash) {
// Size returns the current storage size of the memory cache in front of the
// persistent database layer.
func (db *Database) Size() common.StorageSize {
func (db *Database) Size() (common.StorageSize, common.StorageSize) {
db.lock.RLock()
defer db.lock.RUnlock()
return db.nodesSize + db.preimagesSize
// db.nodesSize only contains the useful data in the cache, but when reporting
// the total memory consumption, the maintenance metadata also needs to be
// counted. For every useful node, we track 2 extra hashes as the flushlist.
var flushlistSize = common.StorageSize((len(db.nodes) - 1) * 2 * common.HashLength)
return db.nodesSize + flushlistSize, db.preimagesSize
}
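
Callers of Size must now split the two figures. A hypothetical caller-side helper (trie, common and ethdb are the usual go-ethereum packages; limit is an assumed budget, not part of this diff) might look like:

// capIfNeeded flushes matured nodes once the combined cache footprint
// exceeds the caller's budget.
func capIfNeeded(triedb *trie.Database, limit common.StorageSize) error {
	nodesMem, preimagesMem := triedb.Size()
	if nodesMem+preimagesMem > limit {
		// Leave one batch of headroom so the next write doesn't re-trigger.
		return triedb.Cap(limit - ethdb.IdealBatchSize)
	}
	return nil
}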


@@ -53,10 +53,9 @@ func hexToCompact(hex []byte) []byte {
func compactToHex(compact []byte) []byte {
base := keybytesToHex(compact)
base = base[:len(base)-1]
// apply terminator flag
if base[0] >= 2 {
base = append(base, 16)
// delete terminator flag
if base[0] < 2 {
base = base[:len(base)-1]
}
// apply odd flag
chop := 2 - base[0]&1
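
To make the terminator fix concrete, here is a self-contained sketch of the hex-prefix decoding with the corrected branch, plus a worked round trip (nibble 16 is the terminator marker; in the flag nibble, bit 1 means odd length and bit 2 means leaf):

package main

import "fmt"

// keybytesToHex splits each byte into two nibbles and appends a terminator.
func keybytesToHex(str []byte) []byte {
	l := len(str)*2 + 1
	nibbles := make([]byte, l)
	for i, b := range str {
		nibbles[i*2] = b / 16
		nibbles[i*2+1] = b % 16
	}
	nibbles[l-1] = 16
	return nibbles
}

// compactToHex mirrors the fixed logic: expand to nibbles, drop the bogus
// terminator only when the flag nibble says "extension", then chop the flags.
func compactToHex(compact []byte) []byte {
	base := keybytesToHex(compact)
	// delete terminator flag
	if base[0] < 2 {
		base = base[:len(base)-1]
	}
	// apply odd flag
	chop := 2 - base[0]&1
	return base[chop:]
}

func main() {
	// 0x20 = even-length leaf flag; payload nibbles 1,2,3,4.
	fmt.Println(compactToHex([]byte{0x20, 0x12, 0x34})) // [1 2 3 4 16]
	// 0x00 = even-length extension flag; no terminator in the result.
	fmt.Println(compactToHex([]byte{0x00, 0x12, 0x34})) // [1 2 3 4]
}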


@@ -17,7 +17,6 @@
package trie
import (
"bytes"
"hash"
"sync"
@@ -27,17 +26,39 @@ import (
)
type hasher struct {
tmp *bytes.Buffer
sha hash.Hash
tmp sliceBuffer
sha keccakState
cachegen uint16
cachelimit uint16
onleaf LeafCallback
}
// keccakState wraps sha3.state. In addition to the usual hash methods, it also supports
// Read to get a variable amount of data from the hash state. Read is faster than Sum
// because it doesn't copy the internal state, but also modifies the internal state.
type keccakState interface {
hash.Hash
Read([]byte) (int, error)
}
type sliceBuffer []byte
func (b *sliceBuffer) Write(data []byte) (n int, err error) {
*b = append(*b, data...)
return len(data), nil
}
func (b *sliceBuffer) Reset() {
*b = (*b)[:0]
}
// hashers live in a global pool.
var hasherPool = sync.Pool{
New: func() interface{} {
return &hasher{tmp: new(bytes.Buffer), sha: sha3.NewKeccak256()}
return &hasher{
tmp: make(sliceBuffer, 0, 550), // cap is as large as a full fullNode.
sha: sha3.NewKeccak256().(keccakState),
}
},
}
@@ -157,26 +178,23 @@ func (h *hasher) store(n node, db *Database, force bool) (node, error) {
}
// Generate the RLP encoding of the node
h.tmp.Reset()
if err := rlp.Encode(h.tmp, n); err != nil {
if err := rlp.Encode(&h.tmp, n); err != nil {
panic("encode error: " + err.Error())
}
if h.tmp.Len() < 32 && !force {
if len(h.tmp) < 32 && !force {
return n, nil // Nodes smaller than 32 bytes are stored inside their parent
}
// Larger nodes are replaced by their hash and stored in the database.
hash, _ := n.cache()
if hash == nil {
h.sha.Reset()
h.sha.Write(h.tmp.Bytes())
hash = hashNode(h.sha.Sum(nil))
hash = h.makeHashNode(h.tmp)
}
if db != nil {
// We are pooling the trie nodes into an intermediate memory cache
db.lock.Lock()
hash := common.BytesToHash(hash)
db.insert(hash, h.tmp.Bytes())
db.insert(hash, h.tmp)
// Track all direct parent->child node references
switch n := n.(type) {
case *shortNode:
@@ -210,3 +228,11 @@ func (h *hasher) store(n node, db *Database, force bool) (node, error) {
}
return hash, nil
}
func (h *hasher) makeHashNode(data []byte) hashNode {
n := make(hashNode, h.sha.Size())
h.sha.Reset()
h.sha.Write(data)
h.sha.Read(n)
return n
}
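
The Read-instead-of-Sum trick can be tried outside the trie package. A sketch assuming golang.org/x/crypto/sha3, whose Keccak-256 state is assumed to satisfy the same Read method as the vendored sha3.NewKeccak256() used in this diff:

package main

import (
	"fmt"
	"hash"

	"golang.org/x/crypto/sha3"
)

// keccakState mirrors the interface from the diff: a hash whose sponge can
// be read directly, skipping the defensive state copy that Sum performs.
type keccakState interface {
	hash.Hash
	Read([]byte) (int, error)
}

func main() {
	h := sha3.NewLegacyKeccak256().(keccakState) // assumption: the assertion holds
	h.Write([]byte("trie node rlp"))

	out := make([]byte, h.Size())
	h.Read(out) // one allocation and copy fewer than h.Sum(nil)
	fmt.Printf("%x\n", out)
}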


@@ -61,20 +61,26 @@ func (cfg *Config) NewEncoder(w io.Writer) *Encoder {
// Encode writes the TOML of v to the stream.
// See the documentation for Marshal for details about the conversion of Go values to TOML.
func (e *Encoder) Encode(v interface{}) error {
rv := reflect.ValueOf(v)
var (
buf = &tableBuf{typ: ast.TableTypeNormal}
rv = reflect.ValueOf(v)
err error
)
for rv.Kind() == reflect.Ptr {
if rv.IsNil() {
return &marshalNilError{rv.Type()}
}
rv = rv.Elem()
}
buf := &tableBuf{typ: ast.TableTypeNormal}
var err error
switch rv.Kind() {
case reflect.Struct:
err = buf.structFields(e.cfg, rv)
case reflect.Map:
err = buf.mapFields(e.cfg, rv)
case reflect.Interface:
return e.Encode(rv.Interface())
default:
err = &marshalTableError{rv.Type()}
}
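
A usage sketch for the reworked encoder, assuming the vendored github.com/naoina/toml import path and its exported DefaultConfig:

package main

import (
	"os"

	"github.com/naoina/toml"
)

type Config struct {
	DataDir string
	Cache   int
}

func main() {
	cfg := &Config{DataDir: "/tmp/geth", Cache: 768}

	// Pointers are now followed in a loop before dispatch, and a nil pointer
	// yields a marshalNilError instead of a panic further down.
	if err := toml.DefaultConfig.NewEncoder(os.Stdout).Encode(cfg); err != nil {
		panic(err)
	}
}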


@@ -97,6 +97,7 @@ type toml struct {
currentTable *ast.Table
s string
key string
tableKeys []string
val ast.Value
arr *array
stack []*stack
@@ -180,12 +181,12 @@ func (p *tomlParser) SetArray(begin, end int) {
}
func (p *toml) SetTable(buf []rune, begin, end int) {
p.setTable(p.table, buf, begin, end)
rawName := string(buf[begin:end])
p.setTable(p.table, rawName, p.tableKeys)
p.tableKeys = nil
}
func (p *toml) setTable(parent *ast.Table, buf []rune, begin, end int) {
name := string(buf[begin:end])
names := splitTableKey(name)
func (p *toml) setTable(parent *ast.Table, name string, names []string) {
parent, err := p.lookupTable(parent, names[:len(names)-1])
if err != nil {
p.Error(err)
@@ -230,12 +231,12 @@ func (p *tomlParser) SetTableString(begin, end int) {
}
func (p *toml) SetArrayTable(buf []rune, begin, end int) {
p.setArrayTable(p.table, buf, begin, end)
rawName := string(buf[begin:end])
p.setArrayTable(p.table, rawName, p.tableKeys)
p.tableKeys = nil
}
func (p *toml) setArrayTable(parent *ast.Table, buf []rune, begin, end int) {
name := string(buf[begin:end])
names := splitTableKey(name)
func (p *toml) setArrayTable(parent *ast.Table, name string, names []string) {
parent, err := p.lookupTable(parent, names[:len(names)-1])
if err != nil {
p.Error(err)
@@ -260,11 +261,11 @@ func (p *toml) setArrayTable(parent *ast.Table, buf []rune, begin, end int) {
func (p *toml) StartInlineTable() {
p.skip = false
p.stack = append(p.stack, &stack{p.key, p.currentTable})
buf := []rune(p.key)
names := []string{p.key}
if p.arr == nil {
p.setTable(p.currentTable, buf, 0, len(buf))
p.setTable(p.currentTable, names[0], names)
} else {
p.setArrayTable(p.currentTable, buf, 0, len(buf))
p.setArrayTable(p.currentTable, names[0], names)
}
}
@@ -282,6 +283,13 @@ func (p *toml) AddLineCount(i int) {
func (p *toml) SetKey(buf []rune, begin, end int) {
p.key = string(buf[begin:end])
if len(p.key) > 0 && p.key[0] == '"' {
p.key = p.unquote(p.key)
}
}
func (p *toml) AddTableKey() {
p.tableKeys = append(p.tableKeys, p.key)
}
func (p *toml) AddKeyValue() {
@@ -352,25 +360,3 @@ func (p *toml) lookupTable(t *ast.Table, keys []string) (*ast.Table, error) {
}
return t, nil
}
func splitTableKey(tk string) []string {
key := make([]byte, 0, 1)
keys := make([]string, 0, 1)
inQuote := false
for i := 0; i < len(tk); i++ {
k := tk[i]
switch {
case k == tableSeparator && !inQuote:
keys = append(keys, string(key))
key = key[:0] // reuse buffer.
case k == '"':
inQuote = !inQuote
case (k == ' ' || k == '\t') && !inQuote:
// skip.
default:
key = append(key, k)
}
}
keys = append(keys, string(key))
return keys
}


@@ -29,7 +29,7 @@ key <- bareKey / quotedKey
bareKey <- <[0-9A-Za-z\-_]+> { p.SetKey(p.buffer, begin, end) }
quotedKey <- '"' <basicChar+> '"' { p.SetKey(p.buffer, begin-1, end+1) }
quotedKey <- < '"' basicChar* '"' > { p.SetKey(p.buffer, begin, end) }
val <- (
<datetime> { p.SetTime(begin, end) }
@@ -55,7 +55,9 @@ inlineTable <- (
inlineTableKeyValues <- (keyval inlineTableValSep?)*
tableKey <- key (tableKeySep key)*
tableKey <- tableKeyComp (tableKeySep tableKeyComp)*
tableKeyComp <- key { p.AddTableKey() }
tableKeySep <- ws '.' ws
@@ -117,7 +119,7 @@ timeNumoffset <- [\-+] timeHour ':' timeMinute
timeOffset <- 'Z' / timeNumoffset
partialTime <- timeHour ':' timeMinute ':' timeSecond timeSecfrac?
fullDate <- dateFullYear '-' dateMonth '-' dateMDay
fullTime <- partialTime timeOffset
fullTime <- partialTime timeOffset?
datetime <- (fullDate ('T' fullTime)?) / partialTime
digit <- [0-9]
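
Two user-visible effects of the grammar change: table keys are now split into components by the parser itself via AddTableKey (so quoted components may contain dots and spaces, which the removed splitTableKey scanner handled itself), and the time offset in a datetime is optional. A sketch of the latter, assuming the vendored github.com/naoina/toml Unmarshal helper:

package main

import (
	"fmt"
	"time"

	"github.com/naoina/toml"
)

func main() {
	// With `timeOffset?`, a local datetime without a zone suffix parses.
	doc := []byte("released = 2018-06-12T17:02:14\n")

	var v struct{ Released time.Time }
	if err := toml.Unmarshal(doc, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.Released)
}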

File diff suppressed because it is too large.

vendor/golang.org/x/net/idna/idna.go (generated, vendored, new file, 732 lines)

@@ -0,0 +1,732 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package idna implements IDNA2008 using the compatibility processing
// defined by UTS (Unicode Technical Standard) #46, which defines a standard to
// deal with the transition from IDNA2003.
//
// IDNA2008 (Internationalized Domain Names for Applications) is defined in RFC
// 5890, RFC 5891, RFC 5892, RFC 5893 and RFC 5894.
// UTS #46 is defined in http://www.unicode.org/reports/tr46.
// See http://unicode.org/cldr/utility/idna.jsp for a visualization of the
// differences between these two standards.
package idna // import "golang.org/x/net/idna"
import (
"fmt"
"strings"
"unicode/utf8"
"golang.org/x/text/secure/bidirule"
"golang.org/x/text/unicode/bidi"
"golang.org/x/text/unicode/norm"
)
// NOTE: Unlike common practice in Go APIs, the functions will return a
// sanitized domain name in case of errors. Browsers sometimes use a partially
// evaluated string as lookup.
// TODO: the current error handling is, in my opinion, the least opinionated.
// Other strategies are also viable, though:
// Option 1) Return an empty string in case of error, but allow the user to
// specify explicitly which errors to ignore.
// Option 2) Return the partially evaluated string if it is itself a valid
// string, otherwise return the empty string in case of error.
// Option 3) Option 1 and 2.
// Option 4) Always return an empty string for now and implement Option 1 as
// needed, and document that the return string may not be empty in case of
// error in the future.
// I think Option 1 is best, but it is quite opinionated.
// ToASCII is a wrapper for Punycode.ToASCII.
func ToASCII(s string) (string, error) {
return Punycode.process(s, true)
}
// ToUnicode is a wrapper for Punycode.ToUnicode.
func ToUnicode(s string) (string, error) {
return Punycode.process(s, false)
}
// An Option configures a Profile at creation time.
type Option func(*options)
// Transitional sets a Profile to use the Transitional mapping as defined in UTS
// #46. This will cause, for example, "ß" to be mapped to "ss". Using the
// transitional mapping provides a compromise between IDNA2003 and IDNA2008
// compatibility. It is used by most browsers when resolving domain names. This
// option is only meaningful if combined with MapForLookup.
func Transitional(transitional bool) Option {
return func(o *options) { o.transitional = transitional }
}
// VerifyDNSLength sets whether a Profile should fail if any of the IDN parts
// are longer than allowed by the RFC.
func VerifyDNSLength(verify bool) Option {
return func(o *options) { o.verifyDNSLength = verify }
}
// RemoveLeadingDots removes leading label separators. Leading runes that map to
// dots, such as U+3002 IDEOGRAPHIC FULL STOP, are removed as well.
//
// This is the behavior suggested by the UTS #46 and is adopted by some
// browsers.
func RemoveLeadingDots(remove bool) Option {
return func(o *options) { o.removeLeadingDots = remove }
}
// ValidateLabels sets whether to check the mandatory label validation criteria
// as defined in Section 5.4 of RFC 5891. This includes testing for correct use
// of hyphens ('-'), normalization, validity of runes, and the context rules.
func ValidateLabels(enable bool) Option {
return func(o *options) {
// Don't override existing mappings, but set one that at least checks
// normalization if it is not set.
if o.mapping == nil && enable {
o.mapping = normalize
}
o.trie = trie
o.validateLabels = enable
o.fromPuny = validateFromPunycode
}
}
// StrictDomainName limits the set of permissible ASCII characters to those
// allowed in domain names as defined in RFC 1034 (A-Z, a-z, 0-9 and the
// hyphen). This is set by default for MapForLookup and ValidateForRegistration.
//
// This option is useful, for instance, for browsers that allow characters
// outside this range, for example a '_' (U+005F LOW LINE). See
// http://www.rfc-editor.org/std/std3.txt for more details. This option
// corresponds to the UseSTD3ASCIIRules option in UTS #46.
func StrictDomainName(use bool) Option {
return func(o *options) {
o.trie = trie
o.useSTD3Rules = use
o.fromPuny = validateFromPunycode
}
}
// NOTE: the following options pull in tables. The tables should not be linked
// in as long as the options are not used.
// BidiRule enables the Bidi rule as defined in RFC 5893. Any application
// that relies on proper validation of labels should include this rule.
func BidiRule() Option {
return func(o *options) { o.bidirule = bidirule.ValidString }
}
// ValidateForRegistration sets validation options to verify that a given IDN is
// properly formatted for registration as defined by Section 4 of RFC 5891.
func ValidateForRegistration() Option {
return func(o *options) {
o.mapping = validateRegistration
StrictDomainName(true)(o)
ValidateLabels(true)(o)
VerifyDNSLength(true)(o)
BidiRule()(o)
}
}
// MapForLookup sets validation and mapping options such that a given IDN is
// transformed for domain name lookup according to the requirements set out in
// Section 5 of RFC 5891. The mappings follow the recommendations of RFC 5894,
// RFC 5895 and UTS 46. It does not add the Bidi Rule. Use the BidiRule option
// to add this check.
//
// The mappings include normalization and mapping case, width and other
// compatibility mappings.
func MapForLookup() Option {
return func(o *options) {
o.mapping = validateAndMap
StrictDomainName(true)(o)
ValidateLabels(true)(o)
}
}
type options struct {
transitional bool
useSTD3Rules bool
validateLabels bool
verifyDNSLength bool
removeLeadingDots bool
trie *idnaTrie
// fromPuny calls validation rules when converting A-labels to U-labels.
fromPuny func(p *Profile, s string) error
// mapping implements a validation and mapping step as defined in RFC 5895
// or UTS 46, tailored to, for example, domain registration or lookup.
mapping func(p *Profile, s string) (mapped string, isBidi bool, err error)
// bidirule, if specified, checks whether s conforms to the Bidi Rule
// defined in RFC 5893.
bidirule func(s string) bool
}
// A Profile defines the configuration of an IDNA mapper.
type Profile struct {
options
}
func apply(o *options, opts []Option) {
for _, f := range opts {
f(o)
}
}
// New creates a new Profile.
//
// With no options, the returned Profile is the most permissive and equals the
// Punycode Profile. Options can be passed to further restrict the Profile. The
// MapForLookup and ValidateForRegistration options set a collection of options,
// for lookup and registration purposes respectively, which can be tailored by
// adding more fine-grained options, where later options override earlier
// options.
func New(o ...Option) *Profile {
p := &Profile{}
apply(&p.options, o)
return p
}
// ToASCII converts a domain or domain label to its ASCII form. For example,
// ToASCII("bücher.example.com") is "xn--bcher-kva.example.com", and
// ToASCII("golang") is "golang". If an error is encountered it will return
// an error and a (partially) processed result.
func (p *Profile) ToASCII(s string) (string, error) {
return p.process(s, true)
}
// ToUnicode converts a domain or domain label to its Unicode form. For example,
// ToUnicode("xn--bcher-kva.example.com") is "bücher.example.com", and
// ToUnicode("golang") is "golang". If an error is encountered it will return
// an error and a (partially) processed result.
func (p *Profile) ToUnicode(s string) (string, error) {
pp := *p
pp.transitional = false
return pp.process(s, false)
}
// String reports a string with a description of the profile for debugging
// purposes. The string format may change with different versions.
func (p *Profile) String() string {
s := ""
if p.transitional {
s = "Transitional"
} else {
s = "NonTransitional"
}
if p.useSTD3Rules {
s += ":UseSTD3Rules"
}
if p.validateLabels {
s += ":ValidateLabels"
}
if p.verifyDNSLength {
s += ":VerifyDNSLength"
}
return s
}
var (
// Punycode is a Profile that does raw punycode processing with a minimum
// of validation.
Punycode *Profile = punycode
// Lookup is the recommended profile for looking up domain names, according
// to Section 5 of RFC 5891. The exact configuration of this profile may
// change over time.
Lookup *Profile = lookup
// Display is the recommended profile for displaying domain names.
// The configuration of this profile may change over time.
Display *Profile = display
// Registration is the recommended profile for checking whether a given
// IDN is valid for registration, according to Section 4 of RFC 5891.
Registration *Profile = registration
punycode = &Profile{}
lookup = &Profile{options{
transitional: true,
useSTD3Rules: true,
validateLabels: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateAndMap,
bidirule: bidirule.ValidString,
}}
display = &Profile{options{
useSTD3Rules: true,
validateLabels: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateAndMap,
bidirule: bidirule.ValidString,
}}
registration = &Profile{options{
useSTD3Rules: true,
validateLabels: true,
verifyDNSLength: true,
trie: trie,
fromPuny: validateFromPunycode,
mapping: validateRegistration,
bidirule: bidirule.ValidString,
}}
// TODO: profiles
// Register: recommended for approving domain names: don't do any mappings
// but rather reject on invalid input. Bundle or block deviation characters.
)
type labelError struct{ label, code_ string }
func (e labelError) code() string { return e.code_ }
func (e labelError) Error() string {
return fmt.Sprintf("idna: invalid label %q", e.label)
}
type runeError rune
func (e runeError) code() string { return "P1" }
func (e runeError) Error() string {
return fmt.Sprintf("idna: disallowed rune %U", e)
}
// process implements the algorithm described in section 4 of UTS #46,
// see http://www.unicode.org/reports/tr46.
func (p *Profile) process(s string, toASCII bool) (string, error) {
var err error
var isBidi bool
if p.mapping != nil {
s, isBidi, err = p.mapping(p, s)
}
// Remove leading empty labels.
if p.removeLeadingDots {
for ; len(s) > 0 && s[0] == '.'; s = s[1:] {
}
}
// TODO: allow for a quick check of the tables data.
// It seems like we should only create this error on ToASCII, but the
// UTS 46 conformance tests suggest we should always check this.
if err == nil && p.verifyDNSLength && s == "" {
err = &labelError{s, "A4"}
}
labels := labelIter{orig: s}
for ; !labels.done(); labels.next() {
label := labels.label()
if label == "" {
// Empty labels are not okay. The label iterator skips the last
// label if it is empty.
if err == nil && p.verifyDNSLength {
err = &labelError{s, "A4"}
}
continue
}
if strings.HasPrefix(label, acePrefix) {
u, err2 := decode(label[len(acePrefix):])
if err2 != nil {
if err == nil {
err = err2
}
// Spec says keep the old label.
continue
}
isBidi = isBidi || bidirule.DirectionString(u) != bidi.LeftToRight
labels.set(u)
if err == nil && p.validateLabels {
err = p.fromPuny(p, u)
}
if err == nil {
// This should be called on NonTransitional, according to the
// spec, but that currently does not have any effect. Use the
// original profile to preserve options.
err = p.validateLabel(u)
}
} else if err == nil {
err = p.validateLabel(label)
}
}
if isBidi && p.bidirule != nil && err == nil {
for labels.reset(); !labels.done(); labels.next() {
if !p.bidirule(labels.label()) {
err = &labelError{s, "B"}
break
}
}
}
if toASCII {
for labels.reset(); !labels.done(); labels.next() {
label := labels.label()
if !ascii(label) {
a, err2 := encode(acePrefix, label)
if err == nil {
err = err2
}
label = a
labels.set(a)
}
n := len(label)
if p.verifyDNSLength && err == nil && (n == 0 || n > 63) {
err = &labelError{label, "A4"}
}
}
}
s = labels.result()
if toASCII && p.verifyDNSLength && err == nil {
// Compute the length of the domain name minus the root label and its dot.
n := len(s)
if n > 0 && s[n-1] == '.' {
n--
}
if len(s) < 1 || n > 253 {
err = &labelError{s, "A4"}
}
}
return s, err
}
func normalize(p *Profile, s string) (mapped string, isBidi bool, err error) {
// TODO: consider first doing a quick check to see if any of these checks
// need to be done. This will make it slower in the general case, but
// faster in the common case.
mapped = norm.NFC.String(s)
isBidi = bidirule.DirectionString(mapped) == bidi.RightToLeft
return mapped, isBidi, nil
}
func validateRegistration(p *Profile, s string) (idem string, bidi bool, err error) {
// TODO: filter need for normalization in loop below.
if !norm.NFC.IsNormalString(s) {
return s, false, &labelError{s, "V1"}
}
for i := 0; i < len(s); {
v, sz := trie.lookupString(s[i:])
if sz == 0 {
return s, bidi, runeError(utf8.RuneError)
}
bidi = bidi || info(v).isBidi(s[i:])
// Copy bytes not copied so far.
switch p.simplify(info(v).category()) {
// TODO: handle the NV8 defined in the Unicode idna data set to allow
// for strict conformance to IDNA2008.
case valid, deviation:
case disallowed, mapped, unknown, ignored:
r, _ := utf8.DecodeRuneInString(s[i:])
return s, bidi, runeError(r)
}
i += sz
}
return s, bidi, nil
}
func (c info) isBidi(s string) bool {
if !c.isMapped() {
return c&attributesMask == rtl
}
// TODO: also store bidi info for mapped data. This is possible, but a bit
// cumbersome and not for the common case.
p, _ := bidi.LookupString(s)
switch p.Class() {
case bidi.R, bidi.AL, bidi.AN:
return true
}
return false
}
func validateAndMap(p *Profile, s string) (vm string, bidi bool, err error) {
var (
b []byte
k int
)
// combinedInfoBits contains the or-ed bits of all runes. We use this
// to derive the mayNeedNorm bit later. This may trigger normalization
// overeagerly, but it will not do so in the common case. The end result
// is another 10% saving on BenchmarkProfile for the common case.
var combinedInfoBits info
for i := 0; i < len(s); {
v, sz := trie.lookupString(s[i:])
if sz == 0 {
b = append(b, s[k:i]...)
b = append(b, "\ufffd"...)
k = len(s)
if err == nil {
err = runeError(utf8.RuneError)
}
break
}
combinedInfoBits |= info(v)
bidi = bidi || info(v).isBidi(s[i:])
start := i
i += sz
// Copy bytes not copied so far.
switch p.simplify(info(v).category()) {
case valid:
continue
case disallowed:
if err == nil {
r, _ := utf8.DecodeRuneInString(s[start:])
err = runeError(r)
}
continue
case mapped, deviation:
b = append(b, s[k:start]...)
b = info(v).appendMapping(b, s[start:i])
case ignored:
b = append(b, s[k:start]...)
// drop the rune
case unknown:
b = append(b, s[k:start]...)
b = append(b, "\ufffd"...)
}
k = i
}
if k == 0 {
// No changes so far.
if combinedInfoBits&mayNeedNorm != 0 {
s = norm.NFC.String(s)
}
} else {
b = append(b, s[k:]...)
if norm.NFC.QuickSpan(b) != len(b) {
b = norm.NFC.Bytes(b)
}
// TODO: the punycode converters require strings as input.
s = string(b)
}
return s, bidi, err
}
// A labelIter allows iterating over domain name labels.
type labelIter struct {
orig string
slice []string
curStart int
curEnd int
i int
}
func (l *labelIter) reset() {
l.curStart = 0
l.curEnd = 0
l.i = 0
}
func (l *labelIter) done() bool {
return l.curStart >= len(l.orig)
}
func (l *labelIter) result() string {
if l.slice != nil {
return strings.Join(l.slice, ".")
}
return l.orig
}
func (l *labelIter) label() string {
if l.slice != nil {
return l.slice[l.i]
}
p := strings.IndexByte(l.orig[l.curStart:], '.')
l.curEnd = l.curStart + p
if p == -1 {
l.curEnd = len(l.orig)
}
return l.orig[l.curStart:l.curEnd]
}
// next sets the value to the next label. It skips the last label if it is empty.
func (l *labelIter) next() {
l.i++
if l.slice != nil {
if l.i >= len(l.slice) || l.i == len(l.slice)-1 && l.slice[l.i] == "" {
l.curStart = len(l.orig)
}
} else {
l.curStart = l.curEnd + 1
if l.curStart == len(l.orig)-1 && l.orig[l.curStart] == '.' {
l.curStart = len(l.orig)
}
}
}
func (l *labelIter) set(s string) {
if l.slice == nil {
l.slice = strings.Split(l.orig, ".")
}
l.slice[l.i] = s
}
// acePrefix is the ASCII Compatible Encoding prefix.
const acePrefix = "xn--"
func (p *Profile) simplify(cat category) category {
switch cat {
case disallowedSTD3Mapped:
if p.useSTD3Rules {
cat = disallowed
} else {
cat = mapped
}
case disallowedSTD3Valid:
if p.useSTD3Rules {
cat = disallowed
} else {
cat = valid
}
case deviation:
if !p.transitional {
cat = valid
}
case validNV8, validXV8:
// TODO: handle V2008
cat = valid
}
return cat
}
func validateFromPunycode(p *Profile, s string) error {
if !norm.NFC.IsNormalString(s) {
return &labelError{s, "V1"}
}
// TODO: detect whether string may have to be normalized in the following
// loop.
for i := 0; i < len(s); {
v, sz := trie.lookupString(s[i:])
if sz == 0 {
return runeError(utf8.RuneError)
}
if c := p.simplify(info(v).category()); c != valid && c != deviation {
return &labelError{s, "V6"}
}
i += sz
}
return nil
}
const (
zwnj = "\u200c"
zwj = "\u200d"
)
type joinState int8
const (
stateStart joinState = iota
stateVirama
stateBefore
stateBeforeVirama
stateAfter
stateFAIL
)
var joinStates = [][numJoinTypes]joinState{
stateStart: {
joiningL: stateBefore,
joiningD: stateBefore,
joinZWNJ: stateFAIL,
joinZWJ: stateFAIL,
joinVirama: stateVirama,
},
stateVirama: {
joiningL: stateBefore,
joiningD: stateBefore,
},
stateBefore: {
joiningL: stateBefore,
joiningD: stateBefore,
joiningT: stateBefore,
joinZWNJ: stateAfter,
joinZWJ: stateFAIL,
joinVirama: stateBeforeVirama,
},
stateBeforeVirama: {
joiningL: stateBefore,
joiningD: stateBefore,
joiningT: stateBefore,
},
stateAfter: {
joiningL: stateFAIL,
joiningD: stateBefore,
joiningT: stateAfter,
joiningR: stateStart,
joinZWNJ: stateFAIL,
joinZWJ: stateFAIL,
joinVirama: stateAfter, // no-op as we can't accept joiners here
},
stateFAIL: {
0: stateFAIL,
joiningL: stateFAIL,
joiningD: stateFAIL,
joiningT: stateFAIL,
joiningR: stateFAIL,
joinZWNJ: stateFAIL,
joinZWJ: stateFAIL,
joinVirama: stateFAIL,
},
}
// validateLabel validates the criteria from Section 4.1. Item 1, 4, and 6 are
// already implicitly satisfied by the overall implementation.
func (p *Profile) validateLabel(s string) (err error) {
if s == "" {
if p.verifyDNSLength {
return &labelError{s, "A4"}
}
return nil
}
if !p.validateLabels {
return nil
}
trie := p.trie // p.validateLabels is only set if trie is set.
if len(s) > 4 && s[2] == '-' && s[3] == '-' {
return &labelError{s, "V2"}
}
if s[0] == '-' || s[len(s)-1] == '-' {
return &labelError{s, "V3"}
}
// TODO: merge the use of this in the trie.
v, sz := trie.lookupString(s)
x := info(v)
if x.isModifier() {
return &labelError{s, "V5"}
}
// Quickly return in the absence of zero-width (non) joiners.
if strings.Index(s, zwj) == -1 && strings.Index(s, zwnj) == -1 {
return nil
}
st := stateStart
for i := 0; ; {
jt := x.joinType()
if s[i:i+sz] == zwj {
jt = joinZWJ
} else if s[i:i+sz] == zwnj {
jt = joinZWNJ
}
st = joinStates[st][jt]
if x.isViramaModifier() {
st = joinStates[st][joinVirama]
}
if i += sz; i == len(s) {
break
}
v, sz = trie.lookupString(s[i:])
x = info(v)
}
if st == stateFAIL || st == stateAfter {
return &labelError{s, "C"}
}
return nil
}
func ascii(s string) bool {
for i := 0; i < len(s); i++ {
if s[i] >= utf8.RuneSelf {
return false
}
}
return true
}
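
For orientation, the two pre-baked profiles most callers reach for, using the package's public API:

package main

import (
	"fmt"

	"golang.org/x/net/idna"
)

func main() {
	// Lookup applies the mapping, STD3 and label validation described above.
	a, err := idna.Lookup.ToASCII("bücher.example.com")
	fmt.Println(a, err) // xn--bcher-kva.example.com <nil>

	// Display converts A-labels back to Unicode for presentation.
	u, err := idna.Display.ToUnicode("xn--bcher-kva.example.com")
	fmt.Println(u, err) // bücher.example.com <nil>
}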

vendor/golang.org/x/net/idna/punycode.go (generated, vendored, new file, 203 lines)

@@ -0,0 +1,203 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package idna
// This file implements the Punycode algorithm from RFC 3492.
import (
"math"
"strings"
"unicode/utf8"
)
// These parameter values are specified in section 5.
//
// All computation is done with int32s, so that overflow behavior is identical
// regardless of whether int is 32-bit or 64-bit.
const (
base int32 = 36
damp int32 = 700
initialBias int32 = 72
initialN int32 = 128
skew int32 = 38
tmax int32 = 26
tmin int32 = 1
)
func punyError(s string) error { return &labelError{s, "A3"} }
// decode decodes a string as specified in section 6.2.
func decode(encoded string) (string, error) {
if encoded == "" {
return "", nil
}
pos := 1 + strings.LastIndex(encoded, "-")
if pos == 1 {
return "", punyError(encoded)
}
if pos == len(encoded) {
return encoded[:len(encoded)-1], nil
}
output := make([]rune, 0, len(encoded))
if pos != 0 {
for _, r := range encoded[:pos-1] {
output = append(output, r)
}
}
i, n, bias := int32(0), initialN, initialBias
for pos < len(encoded) {
oldI, w := i, int32(1)
for k := base; ; k += base {
if pos == len(encoded) {
return "", punyError(encoded)
}
digit, ok := decodeDigit(encoded[pos])
if !ok {
return "", punyError(encoded)
}
pos++
i += digit * w
if i < 0 {
return "", punyError(encoded)
}
t := k - bias
if t < tmin {
t = tmin
} else if t > tmax {
t = tmax
}
if digit < t {
break
}
w *= base - t
if w >= math.MaxInt32/base {
return "", punyError(encoded)
}
}
x := int32(len(output) + 1)
bias = adapt(i-oldI, x, oldI == 0)
n += i / x
i %= x
if n > utf8.MaxRune || len(output) >= 1024 {
return "", punyError(encoded)
}
output = append(output, 0)
copy(output[i+1:], output[i:])
output[i] = n
i++
}
return string(output), nil
}
// encode encodes a string as specified in section 6.3 and prepends prefix to
// the result.
//
// The "while h < length(input)" line in the specification becomes "for
// remaining != 0" in the Go code, because len(s) in Go is in bytes, not runes.
func encode(prefix, s string) (string, error) {
output := make([]byte, len(prefix), len(prefix)+1+2*len(s))
copy(output, prefix)
delta, n, bias := int32(0), initialN, initialBias
b, remaining := int32(0), int32(0)
for _, r := range s {
if r < 0x80 {
b++
output = append(output, byte(r))
} else {
remaining++
}
}
h := b
if b > 0 {
output = append(output, '-')
}
for remaining != 0 {
m := int32(0x7fffffff)
for _, r := range s {
if m > r && r >= n {
m = r
}
}
delta += (m - n) * (h + 1)
if delta < 0 {
return "", punyError(s)
}
n = m
for _, r := range s {
if r < n {
delta++
if delta < 0 {
return "", punyError(s)
}
continue
}
if r > n {
continue
}
q := delta
for k := base; ; k += base {
t := k - bias
if t < tmin {
t = tmin
} else if t > tmax {
t = tmax
}
if q < t {
break
}
output = append(output, encodeDigit(t+(q-t)%(base-t)))
q = (q - t) / (base - t)
}
output = append(output, encodeDigit(q))
bias = adapt(delta, h+1, h == b)
delta = 0
h++
remaining--
}
delta++
n++
}
return string(output), nil
}
func decodeDigit(x byte) (digit int32, ok bool) {
switch {
case '0' <= x && x <= '9':
return int32(x - ('0' - 26)), true
case 'A' <= x && x <= 'Z':
return int32(x - 'A'), true
case 'a' <= x && x <= 'z':
return int32(x - 'a'), true
}
return 0, false
}
func encodeDigit(digit int32) byte {
switch {
case 0 <= digit && digit < 26:
return byte(digit + 'a')
case 26 <= digit && digit < 36:
return byte(digit + ('0' - 26))
}
panic("idna: internal error in punycode encoding")
}
// adapt is the bias adaptation function specified in section 6.1.
func adapt(delta, numPoints int32, firstTime bool) int32 {
if firstTime {
delta /= damp
} else {
delta /= 2
}
delta += delta / numPoints
k := int32(0)
for delta > ((base-tmin)*tmax)/2 {
delta /= base - tmin
k += base
}
return k + (base-tmin+1)*delta/(delta+skew)
}
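
The package-level helpers wrap the raw Punycode profile, i.e. essentially just this file's algorithm plus the "xn--" ACE prefix. A quick round trip:

package main

import (
	"fmt"

	"golang.org/x/net/idna"
)

func main() {
	a, _ := idna.ToASCII("bücher") // raw Punycode profile, minimal validation
	fmt.Println(a)                 // xn--bcher-kva

	u, _ := idna.ToUnicode("xn--bcher-kva")
	fmt.Println(u) // bücher
}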

vendor/golang.org/x/net/idna/tables.go (generated, vendored, new file, 4557 lines)

File diff suppressed because it is too large.

vendor/golang.org/x/net/idna/trie.go (generated, vendored, new file, 72 lines)

@@ -0,0 +1,72 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package idna
// appendMapping appends the mapping for the respective rune. isMapped must be
// true. A mapping is a categorization of a rune as defined in UTS #46.
func (c info) appendMapping(b []byte, s string) []byte {
index := int(c >> indexShift)
if c&xorBit == 0 {
s := mappings[index:]
return append(b, s[1:s[0]+1]...)
}
b = append(b, s...)
if c&inlineXOR == inlineXOR {
// TODO: support and handle two-byte inline masks
b[len(b)-1] ^= byte(index)
} else {
for p := len(b) - int(xorData[index]); p < len(b); p++ {
index++
b[p] ^= xorData[index]
}
}
return b
}
// Sparse block handling code.
type valueRange struct {
value uint16 // header: value:stride
lo, hi byte // header: lo:n
}
type sparseBlocks struct {
values []valueRange
offset []uint16
}
var idnaSparse = sparseBlocks{
values: idnaSparseValues[:],
offset: idnaSparseOffset[:],
}
// Don't use newIdnaTrie to avoid unconditional linking in of the table.
var trie = &idnaTrie{}
// lookup determines the type of block n and looks up the value for b.
// For n < t.cutoff, the block is a simple lookup table. Otherwise, the block
// is a list of ranges with an accompanying value. Given a matching range r,
// the value for b is by r.value + (b - r.lo) * stride.
func (t *sparseBlocks) lookup(n uint32, b byte) uint16 {
offset := t.offset[n]
header := t.values[offset]
lo := offset + 1
hi := lo + uint16(header.lo)
for lo < hi {
m := lo + (hi-lo)/2
r := t.values[m]
if r.lo <= b && b <= r.hi {
return r.value + uint16(b-r.lo)*header.value
}
if b < r.lo {
hi = m
} else {
lo = m + 1
}
}
return 0
}
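
The range encoding is easiest to see with toy data. A simplified sketch of the same binary search, where the stride is passed in explicitly rather than read from the block header:

package main

import "fmt"

type valueRange struct {
	value  uint16 // value for lo, increasing by stride per byte
	lo, hi byte
}

// lookup binary-searches ranges (sorted by lo) that share one stride.
func lookup(ranges []valueRange, stride uint16, b byte) uint16 {
	lo, hi := 0, len(ranges)
	for lo < hi {
		m := lo + (hi-lo)/2
		r := ranges[m]
		if r.lo <= b && b <= r.hi {
			return r.value + uint16(b-r.lo)*stride
		}
		if b < r.lo {
			hi = m
		} else {
			lo = m + 1
		}
	}
	return 0
}

func main() {
	ranges := []valueRange{{value: 100, lo: 0x20, hi: 0x2f}, {value: 300, lo: 0x40, hi: 0x4f}}
	fmt.Println(lookup(ranges, 2, 0x22)) // 104 = 100 + (0x22-0x20)*2
	fmt.Println(lookup(ranges, 2, 0x30)) // 0 (no matching range)
}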

vendor/golang.org/x/net/idna/trieval.go (generated, vendored, new file, 119 lines)

@@ -0,0 +1,119 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package idna
// This file contains definitions for interpreting the trie value of the idna
// trie generated by "go run gen*.go". It is shared by both the generator
// program and the resultant package. Sharing is achieved by the generator
// copying gen_trieval.go to trieval.go and changing what's above this comment.
// info holds information from the IDNA mapping table for a single rune. It is
// the value returned by a trie lookup. In most cases, all information fits in
// a 16-bit value. For mappings, this value may contain an index into a slice
// with the mapped string. Such mappings can consist of the actual mapped value
// or an XOR pattern to be applied to the bytes of the UTF8 encoding of the
// input rune. This technique is used by the cases packages and reduces the
// table size significantly.
//
// The per-rune values have the following format:
//
// if mapped {
// if inlinedXOR {
// 15..13 inline XOR marker
// 12..11 unused
// 10..3 inline XOR mask
// } else {
// 15..3 index into xor or mapping table
// }
// } else {
// 15..14 unused
// 13 mayNeedNorm
// 12..11 attributes
// 10..8 joining type
// 7..3 category type
// }
// 2 use xor pattern
// 1..0 mapped category
//
// See the definitions below for a more detailed description of the various
// bits.
type info uint16
const (
catSmallMask = 0x3
catBigMask = 0xF8
indexShift = 3
xorBit = 0x4 // interpret the index as an xor pattern
inlineXOR = 0xE000 // These bits are set if the XOR pattern is inlined.
joinShift = 8
joinMask = 0x07
// Attributes
attributesMask = 0x1800
viramaModifier = 0x1800
modifier = 0x1000
rtl = 0x0800
mayNeedNorm = 0x2000
)
// A category corresponds to a category defined in the IDNA mapping table.
type category uint16
const (
unknown category = 0 // not currently defined in unicode.
mapped category = 1
disallowedSTD3Mapped category = 2
deviation category = 3
)
const (
valid category = 0x08
validNV8 category = 0x18
validXV8 category = 0x28
disallowed category = 0x40
disallowedSTD3Valid category = 0x80
ignored category = 0xC0
)
// join types and additional rune information
const (
joiningL = (iota + 1)
joiningD
joiningT
joiningR
// the following types are derived during processing
joinZWJ
joinZWNJ
joinVirama
numJoinTypes
)
func (c info) isMapped() bool {
return c&0x3 != 0
}
func (c info) category() category {
small := c & catSmallMask
if small != 0 {
return category(small)
}
return category(c & catBigMask)
}
func (c info) joinType() info {
if c.isMapped() {
return 0
}
return (c >> joinShift) & joinMask
}
func (c info) isModifier() bool {
return c&(modifier|catSmallMask) == modifier
}
func (c info) isViramaModifier() bool {
return c&(attributesMask|catSmallMask) == viramaModifier
}
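
A small self-contained sketch of decoding such a value, reusing the masks above on a hypothetical entry for a valid right-to-left rune:

package main

import "fmt"

type info uint16

const (
	catSmallMask   info = 0x3
	catBigMask     info = 0xF8
	attributesMask info = 0x1800
	rtl            info = 0x0800
	valid          info = 0x08
)

func (c info) isMapped() bool { return c&catSmallMask != 0 }

func (c info) category() info {
	if small := c & catSmallMask; small != 0 {
		return small
	}
	return c & catBigMask
}

func main() {
	v := rtl | valid                     // hypothetical trie value: valid, RTL
	fmt.Println(v.isMapped())            // false: small category bits are zero
	fmt.Println(v.category() == valid)   // true: the big mask applies
	fmt.Println(v&attributesMask == rtl) // true: attribute bits carry the class
}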

vendor/golang.org/x/text/secure/bidirule/bidirule.go (generated, vendored, new file, 336 lines)

@@ -0,0 +1,336 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package bidirule implements the Bidi Rule defined by RFC 5893.
//
// This package is under development. The API may change without notice and
// without preserving backward compatibility.
package bidirule
import (
"errors"
"unicode/utf8"
"golang.org/x/text/transform"
"golang.org/x/text/unicode/bidi"
)
// This file contains an implementation of RFC 5893: Right-to-Left Scripts for
// Internationalized Domain Names for Applications (IDNA)
//
// A label is an individual component of a domain name. Labels are usually
// shown separated by dots; for example, the domain name "www.example.com" is
// composed of three labels: "www", "example", and "com".
//
// An RTL label is a label that contains at least one character of class R, AL,
// or AN. An LTR label is any label that is not an RTL label.
//
// A "Bidi domain name" is a domain name that contains at least one RTL label.
//
// The following guarantees can be made based on the above:
//
// o In a domain name consisting of only labels that satisfy the rule,
// the requirements of Section 3 are satisfied. Note that even LTR
// labels and pure ASCII labels have to be tested.
//
// o In a domain name consisting of only LDH labels (as defined in the
// Definitions document [RFC5890]) and labels that satisfy the rule,
// the requirements of Section 3 are satisfied as long as a label
// that starts with an ASCII digit does not come after a
// right-to-left label.
//
// No guarantee is given for other combinations.
// ErrInvalid indicates a label is invalid according to the Bidi Rule.
var ErrInvalid = errors.New("bidirule: failed Bidi Rule")
type ruleState uint8
const (
ruleInitial ruleState = iota
ruleLTR
ruleLTRFinal
ruleRTL
ruleRTLFinal
ruleInvalid
)
type ruleTransition struct {
next ruleState
mask uint16
}
var transitions = [...][2]ruleTransition{
// [2.1] The first character must be a character with Bidi property L, R, or
// AL. If it has the R or AL property, it is an RTL label; if it has the L
// property, it is an LTR label.
ruleInitial: {
{ruleLTRFinal, 1 << bidi.L},
{ruleRTLFinal, 1<<bidi.R | 1<<bidi.AL},
},
ruleRTL: {
// [2.3] In an RTL label, the end of the label must be a character with
// Bidi property R, AL, EN, or AN, followed by zero or more characters
// with Bidi property NSM.
{ruleRTLFinal, 1<<bidi.R | 1<<bidi.AL | 1<<bidi.EN | 1<<bidi.AN},
// [2.2] In an RTL label, only characters with the Bidi properties R,
// AL, AN, EN, ES, CS, ET, ON, BN, or NSM are allowed.
// We exclude the entries from [2.3]
{ruleRTL, 1<<bidi.ES | 1<<bidi.CS | 1<<bidi.ET | 1<<bidi.ON | 1<<bidi.BN | 1<<bidi.NSM},
},
ruleRTLFinal: {
// [2.3] In an RTL label, the end of the label must be a character with
// Bidi property R, AL, EN, or AN, followed by zero or more characters
// with Bidi property NSM.
{ruleRTLFinal, 1<<bidi.R | 1<<bidi.AL | 1<<bidi.EN | 1<<bidi.AN | 1<<bidi.NSM},
// [2.2] In an RTL label, only characters with the Bidi properties R,
// AL, AN, EN, ES, CS, ET, ON, BN, or NSM are allowed.
// We exclude the entries from [2.3] and NSM.
{ruleRTL, 1<<bidi.ES | 1<<bidi.CS | 1<<bidi.ET | 1<<bidi.ON | 1<<bidi.BN},
},
ruleLTR: {
// [2.6] In an LTR label, the end of the label must be a character with
// Bidi property L or EN, followed by zero or more characters with Bidi
// property NSM.
{ruleLTRFinal, 1<<bidi.L | 1<<bidi.EN},
// [2.5] In an LTR label, only characters with the Bidi properties L,
// EN, ES, CS, ET, ON, BN, or NSM are allowed.
// We exclude the entries from [2.6].
{ruleLTR, 1<<bidi.ES | 1<<bidi.CS | 1<<bidi.ET | 1<<bidi.ON | 1<<bidi.BN | 1<<bidi.NSM},
},
ruleLTRFinal: {
// [2.6] In an LTR label, the end of the label must be a character with
// Bidi property L or EN, followed by zero or more characters with Bidi
// property NSM.
{ruleLTRFinal, 1<<bidi.L | 1<<bidi.EN | 1<<bidi.NSM},
// [2.5] In an LTR label, only characters with the Bidi properties L,
// EN, ES, CS, ET, ON, BN, or NSM are allowed.
// We exclude the entries from [2.6].
{ruleLTR, 1<<bidi.ES | 1<<bidi.CS | 1<<bidi.ET | 1<<bidi.ON | 1<<bidi.BN},
},
ruleInvalid: {
{ruleInvalid, 0},
{ruleInvalid, 0},
},
}
// [2.4] In an RTL label, if an EN is present, no AN may be present, and
// vice versa.
const exclusiveRTL = uint16(1<<bidi.EN | 1<<bidi.AN)
// From RFC 5893
// An RTL label is a label that contains at least one character of type
// R, AL, or AN.
//
// An LTR label is any label that is not an RTL label.
// Direction reports the direction of the given label as defined by RFC 5893.
// The Bidi Rule does not have to be applied to labels of the category
// LeftToRight.
func Direction(b []byte) bidi.Direction {
for i := 0; i < len(b); {
e, sz := bidi.Lookup(b[i:])
if sz == 0 {
i++
}
c := e.Class()
if c == bidi.R || c == bidi.AL || c == bidi.AN {
return bidi.RightToLeft
}
i += sz
}
return bidi.LeftToRight
}
// DirectionString reports the direction of the given label as defined by RFC
// 5893. The Bidi Rule does not have to be applied to labels of the category
// LeftToRight.
func DirectionString(s string) bidi.Direction {
for i := 0; i < len(s); {
e, sz := bidi.LookupString(s[i:])
if sz == 0 {
i++
continue
}
c := e.Class()
if c == bidi.R || c == bidi.AL || c == bidi.AN {
return bidi.RightToLeft
}
i += sz
}
return bidi.LeftToRight
}
// Valid reports whether b conforms to the BiDi rule.
func Valid(b []byte) bool {
var t Transformer
if n, ok := t.advance(b); !ok || n < len(b) {
return false
}
return t.isFinal()
}
// ValidString reports whether s conforms to the BiDi rule.
func ValidString(s string) bool {
var t Transformer
if n, ok := t.advanceString(s); !ok || n < len(s) {
return false
}
return t.isFinal()
}
// New returns a Transformer that verifies that input adheres to the Bidi Rule.
func New() *Transformer {
return &Transformer{}
}
// Transformer implements transform.Transform.
type Transformer struct {
state ruleState
hasRTL bool
seen uint16
}
// A rule can only be violated for "Bidi Domain names", meaning if one of the
// following categories has been observed.
func (t *Transformer) isRTL() bool {
const isRTL = 1<<bidi.R | 1<<bidi.AL | 1<<bidi.AN
return t.seen&isRTL != 0
}
// Reset implements transform.Transformer.
func (t *Transformer) Reset() { *t = Transformer{} }
// Transform implements transform.Transformer. This Transformer has state and
// needs to be reset between uses.
func (t *Transformer) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error) {
if len(dst) < len(src) {
src = src[:len(dst)]
atEOF = false
err = transform.ErrShortDst
}
n, err1 := t.Span(src, atEOF)
copy(dst, src[:n])
if err == nil || err1 != nil && err1 != transform.ErrShortSrc {
err = err1
}
return n, n, err
}
// Span returns the first n bytes of src that conform to the Bidi rule.
func (t *Transformer) Span(src []byte, atEOF bool) (n int, err error) {
if t.state == ruleInvalid && t.isRTL() {
return 0, ErrInvalid
}
n, ok := t.advance(src)
switch {
case !ok:
err = ErrInvalid
case n < len(src):
if !atEOF {
err = transform.ErrShortSrc
break
}
err = ErrInvalid
case !t.isFinal():
err = ErrInvalid
}
return n, err
}
// Precomputing the ASCII values decreases running time for the ASCII fast path
// by about 30%.
var asciiTable [128]bidi.Properties
func init() {
for i := range asciiTable {
p, _ := bidi.LookupRune(rune(i))
asciiTable[i] = p
}
}
func (t *Transformer) advance(s []byte) (n int, ok bool) {
var e bidi.Properties
var sz int
for n < len(s) {
if s[n] < utf8.RuneSelf {
e, sz = asciiTable[s[n]], 1
} else {
e, sz = bidi.Lookup(s[n:])
if sz <= 1 {
if sz == 1 {
// We always consider invalid UTF-8 to be invalid, even if
// the string has not yet been determined to be RTL.
// TODO: is this correct?
return n, false
}
return n, true // incomplete UTF-8 encoding
}
}
// TODO: using CompactClass would result in noticeable speedup.
// See unicode/bidi/prop.go:Properties.CompactClass.
c := uint16(1 << e.Class())
t.seen |= c
if t.seen&exclusiveRTL == exclusiveRTL {
t.state = ruleInvalid
return n, false
}
switch tr := transitions[t.state]; {
case tr[0].mask&c != 0:
t.state = tr[0].next
case tr[1].mask&c != 0:
t.state = tr[1].next
default:
t.state = ruleInvalid
if t.isRTL() {
return n, false
}
}
n += sz
}
return n, true
}
func (t *Transformer) advanceString(s string) (n int, ok bool) {
var e bidi.Properties
var sz int
for n < len(s) {
if s[n] < utf8.RuneSelf {
e, sz = asciiTable[s[n]], 1
} else {
e, sz = bidi.LookupString(s[n:])
if sz <= 1 {
if sz == 1 {
return n, false // invalid UTF-8
}
return n, true // incomplete UTF-8 encoding
}
}
// TODO: using CompactClass results in noticeable speedup.
// See unicode/bidi/prop.go:Properties.CompactClass.
c := uint16(1 << e.Class())
t.seen |= c
if t.seen&exclusiveRTL == exclusiveRTL {
t.state = ruleInvalid
return n, false
}
switch tr := transitions[t.state]; {
case tr[0].mask&c != 0:
t.state = tr[0].next
case tr[1].mask&c != 0:
t.state = tr[1].next
default:
t.state = ruleInvalid
if t.isRTL() {
return n, false
}
}
n += sz
}
return n, true
}
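
End-to-end, the exported entry points behave as follows; ValidString is the function the idna profiles above plug in as their bidirule hook:

package main

import (
	"fmt"

	"golang.org/x/text/secure/bidirule"
)

func main() {
	fmt.Println(bidirule.ValidString("abc"))    // true: pure LTR label
	fmt.Println(bidirule.ValidString("\u05d0")) // true: a single R-class rune
	// Rule 2.1: an RTL label may not start with a European digit, and the
	// presence of an R-class rune makes this a Bidi label, so it fails.
	fmt.Println(bidirule.ValidString("0\u05d0")) // false
}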


@@ -0,0 +1,11 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build go1.10
package bidirule
func (t *Transformer) isFinal() bool {
return t.state == ruleLTRFinal || t.state == ruleRTLFinal || t.state == ruleInitial
}


@@ -0,0 +1,14 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !go1.10
package bidirule
func (t *Transformer) isFinal() bool {
if !t.isRTL() {
return true
}
return t.state == ruleLTRFinal || t.state == ruleRTLFinal || t.state == ruleInitial
}

198 vendor/golang.org/x/text/unicode/bidi/bidi.go generated vendored Normal file

@@ -0,0 +1,198 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run gen.go gen_trieval.go gen_ranges.go
// Package bidi contains functionality for bidirectional text support.
//
// See http://www.unicode.org/reports/tr9.
//
// NOTE: UNDER CONSTRUCTION. This API may change in backwards incompatible ways
// and without notice.
package bidi // import "golang.org/x/text/unicode/bidi"
// TODO:
// The following functionality would not be hard to implement, but hinges on
// the definition of a Segmenter interface. For now this is up to the user.
// - Iterate over paragraphs
// - Segmenter to iterate over runs directly from a given text.
// Also:
// - Transformer for reordering?
// - Transformer (validator, really) for Bidi Rule.
// This API tries to avoid dealing with embedding levels for now. Under the hood
// these will be computed, but the question is to which extent the user should
// know they exist. We should at some point allow the user to specify an
// embedding hierarchy, though.
// A Direction indicates the overall flow of text.
type Direction int
const (
// LeftToRight indicates the text contains no right-to-left characters and
// that either there are some left-to-right characters or the option
// DefaultDirection(LeftToRight) was passed.
LeftToRight Direction = iota
// RightToLeft indicates the text contains no left-to-right characters and
// that either there are some right-to-left characters or the option
// DefaultDirection(RightToLeft) was passed.
RightToLeft
// Mixed indicates text contains both left-to-right and right-to-left
// characters.
Mixed
// Neutral means that text contains no left-to-right or right-to-left
// characters and that no default direction has been set.
Neutral
)
type options struct{}
// An Option is an option for Bidi processing.
type Option func(*options)
// ICU allows the user to define embedding levels. This may be used, for example,
// to use the hierarchical structure of markup languages to define embeddings.
// The following option may be a way to expose this functionality in this API.
// // LevelFunc sets a function that associates nesting levels with the given text.
// // The levels function will be called with monotonically increasing values for p.
// func LevelFunc(levels func(p int) int) Option {
// panic("unimplemented")
// }
// DefaultDirection sets the default direction for a Paragraph. The direction is
// overridden if the text contains directional characters.
func DefaultDirection(d Direction) Option {
panic("unimplemented")
}
// A Paragraph holds a single Paragraph for Bidi processing.
type Paragraph struct {
// buffers
}
// SetBytes configures p for the given paragraph text. It replaces text
// previously set by SetBytes or SetString. If b contains a paragraph separator
// it will only process the first paragraph and report the number of bytes
// consumed from b including this separator. Error may be non-nil if options are
// given.
func (p *Paragraph) SetBytes(b []byte, opts ...Option) (n int, err error) {
panic("unimplemented")
}
// SetString configures p for the given paragraph text. It replaces text
// previously set by SetBytes or SetString. If b contains a paragraph separator
// it will only process the first paragraph and report the number of bytes
// consumed from b including this separator. Error may be non-nil if options are
// given.
func (p *Paragraph) SetString(s string, opts ...Option) (n int, err error) {
panic("unimplemented")
}
// IsLeftToRight reports whether the principal direction of rendering for this
// paragraph is left-to-right. If this returns false, the principal direction
// of rendering is right-to-left.
func (p *Paragraph) IsLeftToRight() bool {
panic("unimplemented")
}
// Direction returns the direction of the text of this paragraph.
//
// The direction may be LeftToRight, RightToLeft, Mixed, or Neutral.
func (p *Paragraph) Direction() Direction {
panic("unimplemented")
}
// RunAt reports the Run at the given position of the input text.
//
// This method can be used for computing line breaks on paragraphs.
func (p *Paragraph) RunAt(pos int) Run {
panic("unimplemented")
}
// Order computes the visual ordering of all the runs in a Paragraph.
func (p *Paragraph) Order() (Ordering, error) {
panic("unimplemented")
}
// Line computes the visual ordering of runs for a single line starting and
// ending at the given positions in the original text.
func (p *Paragraph) Line(start, end int) (Ordering, error) {
panic("unimplemented")
}
// An Ordering holds the computed visual order of runs of a Paragraph. Calling
// SetBytes or SetString on the originating Paragraph invalidates an Ordering.
// The methods of an Ordering should only be called by one goroutine at a time.
type Ordering struct{}
// Direction reports the directionality of the runs.
//
// The direction may be LeftToRight, RightToLeft, Mixed, or Neutral.
func (o *Ordering) Direction() Direction {
panic("unimplemented")
}
// NumRuns returns the number of runs.
func (o *Ordering) NumRuns() int {
panic("unimplemented")
}
// Run returns the ith run within the ordering.
func (o *Ordering) Run(i int) Run {
panic("unimplemented")
}
// TODO: perhaps with options.
// // Reorder creates a reader that reads the runes in visual order per character.
// // Modifiers remain after the runes they modify.
// func (l *Runs) Reorder() io.Reader {
// panic("unimplemented")
// }
// A Run is a continuous sequence of characters of a single direction.
type Run struct {
}
// String returns the text of the run in its original order.
func (r *Run) String() string {
panic("unimplemented")
}
// Bytes returns the text of the run in its original order.
func (r *Run) Bytes() []byte {
panic("unimplemented")
}
// TODO: methods for
// - Display order
// - headers and footers
// - bracket replacement.
// Direction reports the direction of the run.
func (r *Run) Direction() Direction {
panic("unimplemented")
}
// Position of the Run within the text passed to SetBytes or SetString of the
// originating Paragraph value.
func (r *Run) Pos() (start, end int) {
panic("unimplemented")
}
// AppendReverse reverses the order of characters of in, appends them to out,
// and returns the result. Modifiers will still follow the runes they modify.
// Brackets are replaced with their counterparts.
func AppendReverse(out, in []byte) []byte {
panic("unimplemented")
}
// ReverseString reverses the order of characters in s and returns a new string.
// Modifiers will still follow the runes they modify. Brackets are replaced with
// their counterparts.
func ReverseString(s string) string {
panic("unimplemented")
}
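Most of this API panics as unimplemented in this snapshot; the Direction type and its constants are the usable part. A minimal, hypothetical sketch (the describe helper is not part of the package):

package main

import (
	"fmt"

	"golang.org/x/text/unicode/bidi"
)

// describe maps the Direction enum to a human-readable label.
func describe(d bidi.Direction) string {
	switch d {
	case bidi.LeftToRight:
		return "left-to-right"
	case bidi.RightToLeft:
		return "right-to-left"
	case bidi.Mixed:
		return "mixed"
	default:
		return "neutral"
	}
}

func main() {
	fmt.Println(describe(bidi.Mixed)) // mixed
}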

335 vendor/golang.org/x/text/unicode/bidi/bracket.go generated vendored Normal file

@@ -0,0 +1,335 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bidi
import (
"container/list"
"fmt"
"sort"
)
// This file contains a port of the reference implementation of the
// Bidi Parentheses Algorithm:
// http://www.unicode.org/Public/PROGRAMS/BidiReferenceJava/BidiPBAReference.java
//
// The implementation in this file covers definitions BD14-BD16 and rule N0
// of UAX#9.
//
// Some preprocessing is done for each rune before data is passed to this
// algorithm:
// - opening and closing brackets are identified
// - a bracket pair type, like '(' and ')', is assigned a unique identifier that
// is identical for the opening and closing bracket. It is left to the caller to
// do these mappings.
// - The BPA algorithm requires that bracket characters that are canonical
// equivalents of each other be able to be substituted for each other.
// It is the responsibility of the caller to do this canonicalization.
//
// In implementing BD16, this implementation departs slightly from the "logical"
// algorithm defined in UAX#9. In particular, the stack referenced there
// supports operations that go beyond a "basic" stack. An equivalent
// implementation based on a linked list is used here.
// Bidi_Paired_Bracket_Type
// BD14. An opening paired bracket is a character whose
// Bidi_Paired_Bracket_Type property value is Open.
//
// BD15. A closing paired bracket is a character whose
// Bidi_Paired_Bracket_Type property value is Close.
type bracketType byte
const (
bpNone bracketType = iota
bpOpen
bpClose
)
// bracketPair holds a pair of index values for opening and closing bracket
// location of a bracket pair.
type bracketPair struct {
opener int
closer int
}
func (b *bracketPair) String() string {
return fmt.Sprintf("(%v, %v)", b.opener, b.closer)
}
// bracketPairs is a slice of bracketPairs with a sort.Interface implementation.
type bracketPairs []bracketPair
func (b bracketPairs) Len() int { return len(b) }
func (b bracketPairs) Swap(i, j int) { b[i], b[j] = b[j], b[i] }
func (b bracketPairs) Less(i, j int) bool { return b[i].opener < b[j].opener }
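A quick in-package illustration (hypothetical, since these types are unexported) of how sort.Sort orders pairs by opening-bracket position; fmt and sort are already imported by this file:

// examplePairSort shows that Less orders bracket pairs by opener index.
func examplePairSort() {
	pairs := bracketPairs{{opener: 7, closer: 9}, {opener: 2, closer: 11}}
	sort.Sort(pairs)
	fmt.Println(pairs) // [{2 11} {7 9}]
}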
// resolvePairedBrackets runs the paired bracket part of the UBA algorithm.
//
// For each rune, it takes the indexes into the original string, the class, the
// bracket type (in pairTypes) and the bracket identifier (pairValues). It also
// takes the direction type for the start-of-sequence and the embedding level.
//
// The identifiers for bracket types are the rune of the canonicalized opening
// bracket for brackets (open or close) or 0 for runes that are not brackets.
func resolvePairedBrackets(s *isolatingRunSequence) {
p := bracketPairer{
sos: s.sos,
openers: list.New(),
codesIsolatedRun: s.types,
indexes: s.indexes,
}
dirEmbed := L
if s.level&1 != 0 {
dirEmbed = R
}
p.locateBrackets(s.p.pairTypes, s.p.pairValues)
p.resolveBrackets(dirEmbed, s.p.initialTypes)
}
type bracketPairer struct {
sos Class // direction corresponding to start of sequence
// The following is a restatement of BD 16 using non-algorithmic language.
//
// A bracket pair is a pair of characters consisting of an opening
// paired bracket and a closing paired bracket such that the
// Bidi_Paired_Bracket property value of the former equals the latter,
// subject to the following constraints.
// - both characters of a pair occur in the same isolating run sequence
// - the closing character of a pair follows the opening character
// - any bracket character can belong at most to one pair, the earliest possible one
// - any bracket character not part of a pair is treated like an ordinary character
// - pairs may nest properly, but their spans may not overlap otherwise
// Bracket characters with canonical decompositions are supposed to be
// treated as if they had been normalized, to allow normalized and non-
// normalized text to give the same result. In this implementation that step
// is pushed out to the caller. The caller has to ensure that the pairValue
// slices contain the rune of the opening bracket after normalization for
// any opening or closing bracket.
openers *list.List // list of positions for opening brackets
// bracket pair positions sorted by location of opening bracket
pairPositions bracketPairs
codesIsolatedRun []Class // directional bidi codes for an isolated run
indexes []int // array of index values into the original string
}
// matchOpener reports whether characters at given positions form a matching
// bracket pair.
func (p *bracketPairer) matchOpener(pairValues []rune, opener, closer int) bool {
return pairValues[p.indexes[opener]] == pairValues[p.indexes[closer]]
}
const maxPairingDepth = 63
// locateBrackets locates matching bracket pairs according to BD16.
//
// This implementation uses a linked list instead of a stack, because, while
// elements are added at the front (like a push) they are not generally removed
// in atomic 'pop' operations, reducing the benefit of the stack archetype.
func (p *bracketPairer) locateBrackets(pairTypes []bracketType, pairValues []rune) {
// traverse the run
// do that explicitly (not in a for-each) so we can record position
for i, index := range p.indexes {
// look at the bracket type for each character
if pairTypes[index] == bpNone || p.codesIsolatedRun[i] != ON {
// continue scanning
continue
}
switch pairTypes[index] {
case bpOpen:
// check if maximum pairing depth reached
if p.openers.Len() == maxPairingDepth {
p.openers.Init()
return
}
// remember opener location, most recent first
p.openers.PushFront(i)
case bpClose:
// see if there is a match
count := 0
for elem := p.openers.Front(); elem != nil; elem = elem.Next() {
count++
opener := elem.Value.(int)
if p.matchOpener(pairValues, opener, i) {
// if the opener matches, add nested pair to the ordered list
p.pairPositions = append(p.pairPositions, bracketPair{opener, i})
// remove up to and including matched opener
for ; count > 0; count-- {
p.openers.Remove(p.openers.Front())
}
break
}
}
sort.Sort(p.pairPositions)
// a closing bracket that matched no opener is simply ignored; the
// sort above keeps the pair list ordered by opening bracket position
}
}
}
// Bracket pairs within an isolating run sequence are processed as units so
// that both the opening and the closing paired bracket in a pair resolve to
// the same direction.
//
// N0. Process bracket pairs in an isolating run sequence sequentially in
// the logical order of the text positions of the opening paired brackets
// using the logic given below. Within this scope, bidirectional types EN
// and AN are treated as R.
//
// Identify the bracket pairs in the current isolating run sequence
// according to BD16. For each bracket-pair element in the list of pairs of
// text positions:
//
// a Inspect the bidirectional types of the characters enclosed within the
// bracket pair.
//
// b If any strong type (either L or R) matching the embedding direction is
// found, set the type for both brackets in the pair to match the embedding
// direction.
//
// o [ e ] o -> o e e e o
//
// o [ o e ] -> o e o e e
//
// o [ NI e ] -> o e NI e e
//
// c Otherwise, if a strong type (opposite the embedding direction) is
// found, test for adjacent strong types as follows: 1 First, check
// backwards before the opening paired bracket until the first strong type
// (L, R, or sos) is found. If that first preceding strong type is opposite
// the embedding direction, then set the type for both brackets in the pair
// to that type. 2 Otherwise, set the type for both brackets in the pair to
// the embedding direction.
//
// o [ o ] e -> o o o o e
//
// o [ o NI ] o -> o o o NI o o
//
// e [ o ] o -> e e o e o
//
// e [ o ] e -> e e o e e
//
// e ( o [ o ] NI ) e -> e e o o o o NI e e
//
// d Otherwise, do not set the type for the current bracket pair. Note that
// if the enclosed text contains no strong types the paired brackets will
// both resolve to the same level when resolved individually using rules N1
// and N2.
//
// e ( NI ) o -> e ( NI ) o
// getStrongTypeN0 maps character's directional code to strong type as required
// by rule N0.
//
// TODO: have separate type for "strong" directionality.
func (p *bracketPairer) getStrongTypeN0(index int) Class {
switch p.codesIsolatedRun[index] {
// in the scope of N0, number types are treated as R
case EN, AN, AL, R:
return R
case L:
return L
default:
return ON
}
}
// classifyPairContent reports the strong types contained inside a Bracket Pair,
// assuming the given embedding direction.
//
// It returns ON if no strong type is found. If a single strong type is found,
// it returns this type. Otherwise it returns the embedding direction.
//
// TODO: use separate type for "strong" directionality.
func (p *bracketPairer) classifyPairContent(loc bracketPair, dirEmbed Class) Class {
dirOpposite := ON
for i := loc.opener + 1; i < loc.closer; i++ {
dir := p.getStrongTypeN0(i)
if dir == ON {
continue
}
if dir == dirEmbed {
return dir // type matching embedding direction found
}
dirOpposite = dir
}
// return ON if no strong type found, or class opposite to dirEmbed
return dirOpposite
}
// classBeforePair determines which strong types are present before a Bracket
// Pair. It returns R or L if a strong type is found, otherwise sos.
func (p *bracketPairer) classBeforePair(loc bracketPair) Class {
for i := loc.opener - 1; i >= 0; i-- {
if dir := p.getStrongTypeN0(i); dir != ON {
return dir
}
}
// no strong types found, return sos
return p.sos
}
// assignBracketType implements rule N0 for a single bracket pair.
func (p *bracketPairer) assignBracketType(loc bracketPair, dirEmbed Class, initialTypes []Class) {
// rule "N0, a", inspect contents of pair
dirPair := p.classifyPairContent(loc, dirEmbed)
// dirPair is now L, R, or N (no strong type found)
// the following logical tests are performed out of order compared to
// the statement of the rules but yield the same results
if dirPair == ON {
return // case "d" - nothing to do
}
if dirPair != dirEmbed {
// case "c": strong type found, opposite - check before (c.1)
dirPair = p.classBeforePair(loc)
if dirPair == dirEmbed || dirPair == ON {
// no strong opposite type found before - use embedding (c.2)
dirPair = dirEmbed
}
}
// else: case "b", strong type found matching embedding,
// no explicit action needed, as dirPair is already set to embedding
// direction
// set the bracket types to the type found
p.setBracketsToType(loc, dirPair, initialTypes)
}
func (p *bracketPairer) setBracketsToType(loc bracketPair, dirPair Class, initialTypes []Class) {
p.codesIsolatedRun[loc.opener] = dirPair
p.codesIsolatedRun[loc.closer] = dirPair
for i := loc.opener + 1; i < loc.closer; i++ {
index := p.indexes[i]
if initialTypes[index] != NSM {
break
}
p.codesIsolatedRun[i] = dirPair
}
for i := loc.closer + 1; i < len(p.indexes); i++ {
index := p.indexes[i]
if initialTypes[index] != NSM {
break
}
p.codesIsolatedRun[i] = dirPair
}
}
// resolveBrackets implements rule N0 for a list of pairs.
func (p *bracketPairer) resolveBrackets(dirEmbed Class, initialTypes []Class) {
for _, loc := range p.pairPositions {
p.assignBracketType(loc, dirEmbed, initialTypes)
}
}

1058 vendor/golang.org/x/text/unicode/bidi/core.go generated vendored Normal file

File diff suppressed because it is too large.

206 vendor/golang.org/x/text/unicode/bidi/prop.go generated vendored Normal file

@@ -0,0 +1,206 @@
// Copyright 2016 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package bidi
import "unicode/utf8"
// Properties provides access to BiDi properties of runes.
type Properties struct {
entry uint8
last uint8
}
var trie = newBidiTrie(0)
// TODO: using this for bidirule reduces the running time by about 5%. Consider
// if this is worth exposing or if we can find a way to speed up the Class
// method.
//
// // CompactClass is like Class, but maps all of the BiDi control classes
// // (LRO, RLO, LRE, RLE, PDF, LRI, RLI, FSI, PDI) to the class Control.
// func (p Properties) CompactClass() Class {
// return Class(p.entry & 0x0F)
// }
// Class returns the Bidi class for p.
func (p Properties) Class() Class {
c := Class(p.entry & 0x0F)
if c == Control {
c = controlByteToClass[p.last&0xF]
}
return c
}
// IsBracket reports whether the rune is a bracket.
func (p Properties) IsBracket() bool { return p.entry&0xF0 != 0 }
// IsOpeningBracket reports whether the rune is an opening bracket.
// It is only meaningful if IsBracket returns true.
func (p Properties) IsOpeningBracket() bool { return p.entry&openMask != 0 }
// TODO: find a better API and expose.
func (p Properties) reverseBracket(r rune) rune {
return xorMasks[p.entry>>xorMaskShift] ^ r
}
var controlByteToClass = [16]Class{
0xD: LRO, // U+202D LeftToRightOverride,
0xE: RLO, // U+202E RightToLeftOverride,
0xA: LRE, // U+202A LeftToRightEmbedding,
0xB: RLE, // U+202B RightToLeftEmbedding,
0xC: PDF, // U+202C PopDirectionalFormat,
0x6: LRI, // U+2066 LeftToRightIsolate,
0x7: RLI, // U+2067 RightToLeftIsolate,
0x8: FSI, // U+2068 FirstStrongIsolate,
0x9: PDI, // U+2069 PopDirectionalIsolate,
}
// LookupRune returns properties for r.
func LookupRune(r rune) (p Properties, size int) {
var buf [4]byte
n := utf8.EncodeRune(buf[:], r)
return Lookup(buf[:n])
}
// TODO: these lookup methods are based on the generated trie code. The returned
// sizes have slightly different semantics from the generated code, in that it
// always returns size==1 for an illegal UTF-8 byte (instead of the length
// of the maximum invalid subsequence). Most Transformers, like unicode/norm,
// leave invalid UTF-8 untouched, in which case it has performance benefits to
// do so (without changing the semantics). Bidi requires the semantics used here
// for the bidirule implementation to be compatible with the Go semantics.
// They ultimately should perhaps be adopted by all trie implementations, for
// convenience's sake.
// This unrolled code also boosts performance of the secure/bidirule package by
// about 30%.
// So, to remove this code:
// - add option to trie generator to define return type.
// - always return 1 byte size for ill-formed UTF-8 runes.
// Lookup returns properties for the first rune in s and the width in bytes of
// its encoding. The size will be 0 if s does not hold enough bytes to complete
// the encoding.
func Lookup(s []byte) (p Properties, sz int) {
c0 := s[0]
switch {
case c0 < 0x80: // is ASCII
return Properties{entry: bidiValues[c0]}, 1
case c0 < 0xC2:
return Properties{}, 1
case c0 < 0xE0: // 2-byte UTF-8
if len(s) < 2 {
return Properties{}, 0
}
i := bidiIndex[c0]
c1 := s[1]
if c1 < 0x80 || 0xC0 <= c1 {
return Properties{}, 1
}
return Properties{entry: trie.lookupValue(uint32(i), c1)}, 2
case c0 < 0xF0: // 3-byte UTF-8
if len(s) < 3 {
return Properties{}, 0
}
i := bidiIndex[c0]
c1 := s[1]
if c1 < 0x80 || 0xC0 <= c1 {
return Properties{}, 1
}
o := uint32(i)<<6 + uint32(c1)
i = bidiIndex[o]
c2 := s[2]
if c2 < 0x80 || 0xC0 <= c2 {
return Properties{}, 1
}
return Properties{entry: trie.lookupValue(uint32(i), c2), last: c2}, 3
case c0 < 0xF8: // 4-byte UTF-8
if len(s) < 4 {
return Properties{}, 0
}
i := bidiIndex[c0]
c1 := s[1]
if c1 < 0x80 || 0xC0 <= c1 {
return Properties{}, 1
}
o := uint32(i)<<6 + uint32(c1)
i = bidiIndex[o]
c2 := s[2]
if c2 < 0x80 || 0xC0 <= c2 {
return Properties{}, 1
}
o = uint32(i)<<6 + uint32(c2)
i = bidiIndex[o]
c3 := s[3]
if c3 < 0x80 || 0xC0 <= c3 {
return Properties{}, 1
}
return Properties{entry: trie.lookupValue(uint32(i), c3)}, 4
}
// Illegal rune
return Properties{}, 1
}
// LookupString returns properties for the first rune in s and the width in
// bytes of its encoding. The size will be 0 if s does not hold enough bytes to
// complete the encoding.
func LookupString(s string) (p Properties, sz int) {
c0 := s[0]
switch {
case c0 < 0x80: // is ASCII
return Properties{entry: bidiValues[c0]}, 1
case c0 < 0xC2:
return Properties{}, 1
case c0 < 0xE0: // 2-byte UTF-8
if len(s) < 2 {
return Properties{}, 0
}
i := bidiIndex[c0]
c1 := s[1]
if c1 < 0x80 || 0xC0 <= c1 {
return Properties{}, 1
}
return Properties{entry: trie.lookupValue(uint32(i), c1)}, 2
case c0 < 0xF0: // 3-byte UTF-8
if len(s) < 3 {
return Properties{}, 0
}
i := bidiIndex[c0]
c1 := s[1]
if c1 < 0x80 || 0xC0 <= c1 {
return Properties{}, 1
}
o := uint32(i)<<6 + uint32(c1)
i = bidiIndex[o]
c2 := s[2]
if c2 < 0x80 || 0xC0 <= c2 {
return Properties{}, 1
}
return Properties{entry: trie.lookupValue(uint32(i), c2), last: c2}, 3
case c0 < 0xF8: // 4-byte UTF-8
if len(s) < 4 {
return Properties{}, 0
}
i := bidiIndex[c0]
c1 := s[1]
if c1 < 0x80 || 0xC0 <= c1 {
return Properties{}, 1
}
o := uint32(i)<<6 + uint32(c1)
i = bidiIndex[o]
c2 := s[2]
if c2 < 0x80 || 0xC0 <= c2 {
return Properties{}, 1
}
o = uint32(i)<<6 + uint32(c2)
i = bidiIndex[o]
c3 := s[3]
if c3 < 0x80 || 0xC0 <= c3 {
return Properties{}, 1
}
return Properties{entry: trie.lookupValue(uint32(i), c3)}, 4
}
// Illegal rune
return Properties{}, 1
}
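A hedged sketch of walking a string with LookupString; each call returns the properties of the next rune and the width of its encoding (import path golang.org/x/text/unicode/bidi assumed):

package main

import (
	"fmt"

	"golang.org/x/text/unicode/bidi"
)

func main() {
	s := "a(\u05d0)" // 'a', '(', Hebrew Alef, ')'
	for i := 0; i < len(s); {
		p, sz := bidi.LookupString(s[i:])
		if sz == 0 {
			break // input ends with an incomplete encoding
		}
		fmt.Printf("%q class=%d bracket=%v\n", s[i:i+sz], p.Class(), p.IsBracket())
		i += sz
	}
}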

1815 vendor/golang.org/x/text/unicode/bidi/tables10.0.0.go generated vendored Normal file

File diff suppressed because it is too large.

1781 vendor/golang.org/x/text/unicode/bidi/tables9.0.0.go generated vendored Normal file

File diff suppressed because it is too large.

60 vendor/golang.org/x/text/unicode/bidi/trieval.go generated vendored Normal file

@@ -0,0 +1,60 @@
// Code generated by running "go generate" in golang.org/x/text. DO NOT EDIT.
package bidi
// Class is the Unicode BiDi class. Each rune has a single class.
type Class uint
const (
L Class = iota // LeftToRight
R // RightToLeft
EN // EuropeanNumber
ES // EuropeanSeparator
ET // EuropeanTerminator
AN // ArabicNumber
CS // CommonSeparator
B // ParagraphSeparator
S // SegmentSeparator
WS // WhiteSpace
ON // OtherNeutral
BN // BoundaryNeutral
NSM // NonspacingMark
AL // ArabicLetter
Control // Control LRO - PDI
numClass
LRO // LeftToRightOverride
RLO // RightToLeftOverride
LRE // LeftToRightEmbedding
RLE // RightToLeftEmbedding
PDF // PopDirectionalFormat
LRI // LeftToRightIsolate
RLI // RightToLeftIsolate
FSI // FirstStrongIsolate
PDI // PopDirectionalIsolate
unknownClass = ^Class(0)
)
var controlToClass = map[rune]Class{
0x202D: LRO, // LeftToRightOverride,
0x202E: RLO, // RightToLeftOverride,
0x202A: LRE, // LeftToRightEmbedding,
0x202B: RLE, // RightToLeftEmbedding,
0x202C: PDF, // PopDirectionalFormat,
0x2066: LRI, // LeftToRightIsolate,
0x2067: RLI, // RightToLeftIsolate,
0x2068: FSI, // FirstStrongIsolate,
0x2069: PDI, // PopDirectionalIsolate,
}
// A trie entry has the following bits:
// 7..5 XOR mask for brackets
// 4 1: Bracket open, 0: Bracket close
// 3..0 Class type
const (
openMask = 0x10
xorMaskShift = 5
)
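Given the bit layout above, decoding an entry is a handful of mask-and-shift operations. A hypothetical in-package helper mirroring what Properties.Class, Properties.IsBracket and Properties.IsOpeningBracket do in prop.go:

// decodeEntry unpacks a trie entry per the layout documented above.
func decodeEntry(entry uint8) (c Class, isBracket, isOpen bool, xorIndex uint8) {
	c = Class(entry & 0x0F)          // bits 3..0: class type
	isBracket = entry&0xF0 != 0      // any of the upper bits set
	isOpen = entry&openMask != 0     // bit 4: open (1) vs close (0)
	xorIndex = entry >> xorMaskShift // bits 7..5: XOR mask for brackets
	return
}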

508 vendor/golang.org/x/text/unicode/norm/composition.go generated vendored Normal file

@@ -0,0 +1,508 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
import "unicode/utf8"
const (
maxNonStarters = 30
// The maximum number of characters needed for a buffer is
// maxNonStarters + 1 for the starter + 1 for the GCJ
maxBufferSize = maxNonStarters + 2
maxNFCExpansion = 3 // NFC(0x1D160)
maxNFKCExpansion = 18 // NFKC(0xFDFA)
maxByteBufferSize = utf8.UTFMax * maxBufferSize // 128
)
// ssState is used for reporting the segment state after inserting a rune.
// It is returned by streamSafe.next.
type ssState int
const (
// Indicates a rune was successfully added to the segment.
ssSuccess ssState = iota
// Indicates a rune starts a new segment and should not be added.
ssStarter
// Indicates a rune caused a segment overflow and a CGJ should be inserted.
ssOverflow
)
// streamSafe implements the policy of when a CGJ should be inserted.
type streamSafe uint8
// first inserts the first rune of a segment. It is a faster version of next if
// it is known p represents the first rune in a segment.
func (ss *streamSafe) first(p Properties) {
*ss = streamSafe(p.nTrailingNonStarters())
}
// next returns a ssState value to indicate whether a rune represented by p
// can be inserted.
func (ss *streamSafe) next(p Properties) ssState {
if *ss > maxNonStarters {
panic("streamSafe was not reset")
}
n := p.nLeadingNonStarters()
if *ss += streamSafe(n); *ss > maxNonStarters {
*ss = 0
return ssOverflow
}
// The Stream-Safe Text Processing prescribes that the counting can stop
// as soon as a starter is encountered. However, there are some starters,
// like Jamo V and T, that can combine with other runes, leaving their
// successive non-starters appended to the previous, possibly causing an
// overflow. We will therefore consider any rune with a non-zero nLead to
// be a non-starter. Note that it always holds that if nLead > 0 then
// nLead == nTrail.
if n == 0 {
*ss = streamSafe(p.nTrailingNonStarters())
return ssStarter
}
return ssSuccess
}
// backwards is used for checking for overflow and segment starts
// when traversing a string backwards. Users do not need to call first
// for the first rune. The state of the streamSafe retains the count of
// the non-starters loaded.
func (ss *streamSafe) backwards(p Properties) ssState {
if *ss > maxNonStarters {
panic("streamSafe was not reset")
}
c := *ss + streamSafe(p.nTrailingNonStarters())
if c > maxNonStarters {
return ssOverflow
}
*ss = c
if p.nLeadingNonStarters() == 0 {
return ssStarter
}
return ssSuccess
}
func (ss streamSafe) isMax() bool {
return ss == maxNonStarters
}
// GraphemeJoiner is inserted after maxNonStarters non-starter runes.
const GraphemeJoiner = "\u034F"
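The practical effect of the stream-safe policy is observable through the public API: normalizing a run of more than maxNonStarters combining marks should yield a CGJ in the output. A hedged sketch (norm.NFC is defined outside this excerpt):

package main

import (
	"fmt"
	"strings"

	"golang.org/x/text/unicode/norm"
)

func main() {
	// 31 combining acute accents exceed the 30 non-starter budget, so a
	// CGJ (U+034F) is expected to be inserted to keep the text stream-safe.
	s := "e" + strings.Repeat("\u0301", 31)
	out := norm.NFC.String(s)
	fmt.Println(strings.Contains(out, norm.GraphemeJoiner)) // expected: true
}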
// reorderBuffer is used to normalize a single segment. Characters inserted with
// insert are decomposed and reordered based on CCC. The compose method can
// be used to recombine characters. Note that the byte buffer does not hold
// the UTF-8 characters in order. Only the rune array is maintained in sorted
// order. flush writes the resulting segment to a byte array.
type reorderBuffer struct {
rune [maxBufferSize]Properties // Per character info.
byte [maxByteBufferSize]byte // UTF-8 buffer. Referenced by runeInfo.pos.
nbyte uint8 // Number of bytes.
ss streamSafe // For limiting length of non-starter sequence.
nrune int // Number of runeInfos.
f formInfo
src input
nsrc int
tmpBytes input
out []byte
flushF func(*reorderBuffer) bool
}
func (rb *reorderBuffer) init(f Form, src []byte) {
rb.f = *formTable[f]
rb.src.setBytes(src)
rb.nsrc = len(src)
rb.ss = 0
}
func (rb *reorderBuffer) initString(f Form, src string) {
rb.f = *formTable[f]
rb.src.setString(src)
rb.nsrc = len(src)
rb.ss = 0
}
func (rb *reorderBuffer) setFlusher(out []byte, f func(*reorderBuffer) bool) {
rb.out = out
rb.flushF = f
}
// reset discards all characters from the buffer.
func (rb *reorderBuffer) reset() {
rb.nrune = 0
rb.nbyte = 0
}
func (rb *reorderBuffer) doFlush() bool {
if rb.f.composing {
rb.compose()
}
res := rb.flushF(rb)
rb.reset()
return res
}
// appendFlush appends the normalized segment to rb.out.
func appendFlush(rb *reorderBuffer) bool {
for i := 0; i < rb.nrune; i++ {
start := rb.rune[i].pos
end := start + rb.rune[i].size
rb.out = append(rb.out, rb.byte[start:end]...)
}
return true
}
// flush appends the normalized segment to out and resets rb.
func (rb *reorderBuffer) flush(out []byte) []byte {
for i := 0; i < rb.nrune; i++ {
start := rb.rune[i].pos
end := start + rb.rune[i].size
out = append(out, rb.byte[start:end]...)
}
rb.reset()
return out
}
// flushCopy copies the normalized segment to buf and resets rb.
// It returns the number of bytes written to buf.
func (rb *reorderBuffer) flushCopy(buf []byte) int {
p := 0
for i := 0; i < rb.nrune; i++ {
runep := rb.rune[i]
p += copy(buf[p:], rb.byte[runep.pos:runep.pos+runep.size])
}
rb.reset()
return p
}
// insertOrdered inserts a rune in the buffer, ordered by Canonical Combining
// Class. The caller must ensure the buffer is large enough to hold the rune.
// It is used internally by insert and insertString only.
func (rb *reorderBuffer) insertOrdered(info Properties) {
n := rb.nrune
b := rb.rune[:]
cc := info.ccc
if cc > 0 {
// Find insertion position + move elements to make room.
for ; n > 0; n-- {
if b[n-1].ccc <= cc {
break
}
b[n] = b[n-1]
}
}
rb.nrune += 1
pos := uint8(rb.nbyte)
rb.nbyte += utf8.UTFMax
info.pos = pos
b[n] = info
}
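This CCC-ordered insert is what produces canonical ordering in the output. A small end-to-end check through the public API (norm.NFD is defined outside this excerpt):

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	// U+0301 (acute, ccc 230) written before U+0316 (grave below, ccc 220)
	// is reordered so that the lower combining class comes first.
	fmt.Printf("%+q\n", norm.NFD.String("a\u0301\u0316")) // "a\u0316\u0301"
}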
// insertErr is an error code returned by insert. Using this type instead
// of error improves performance up to 20% for many of the benchmarks.
type insertErr int
const (
iSuccess insertErr = -iota
iShortDst
iShortSrc
)
// insertFlush inserts the given rune in the buffer ordered by CCC.
// If a decomposition with multiple segments is encountered, the leading
// ones are flushed.
// It returns a non-zero error code if the rune was not inserted.
func (rb *reorderBuffer) insertFlush(src input, i int, info Properties) insertErr {
if rune := src.hangul(i); rune != 0 {
rb.decomposeHangul(rune)
return iSuccess
}
if info.hasDecomposition() {
return rb.insertDecomposed(info.Decomposition())
}
rb.insertSingle(src, i, info)
return iSuccess
}
// insertUnsafe inserts the given rune in the buffer ordered by CCC.
// It is assumed there is sufficient space to hold the runes. It is the
// responsibility of the caller to ensure this. This can be done by checking
// the state returned by the streamSafe type.
func (rb *reorderBuffer) insertUnsafe(src input, i int, info Properties) {
if rune := src.hangul(i); rune != 0 {
rb.decomposeHangul(rune)
}
if info.hasDecomposition() {
// TODO: inline.
rb.insertDecomposed(info.Decomposition())
} else {
rb.insertSingle(src, i, info)
}
}
// insertDecomposed inserts an entry into the reorderBuffer for each rune
// in dcomp. dcomp must be a sequence of decomposed UTF-8-encoded runes.
// It flushes the buffer on each new segment start.
func (rb *reorderBuffer) insertDecomposed(dcomp []byte) insertErr {
rb.tmpBytes.setBytes(dcomp)
// As the streamSafe accounting already handles the counting for modifiers,
// we don't have to call next. However, we do need to keep the accounting
// intact when flushing the buffer.
for i := 0; i < len(dcomp); {
info := rb.f.info(rb.tmpBytes, i)
if info.BoundaryBefore() && rb.nrune > 0 && !rb.doFlush() {
return iShortDst
}
i += copy(rb.byte[rb.nbyte:], dcomp[i:i+int(info.size)])
rb.insertOrdered(info)
}
return iSuccess
}
// insertSingle inserts an entry in the reorderBuffer for the rune at
// position i. info is the runeInfo for the rune at position i.
func (rb *reorderBuffer) insertSingle(src input, i int, info Properties) {
src.copySlice(rb.byte[rb.nbyte:], i, i+int(info.size))
rb.insertOrdered(info)
}
// insertCGJ inserts a Combining Grapheme Joiner (0x034f) into rb.
func (rb *reorderBuffer) insertCGJ() {
rb.insertSingle(input{str: GraphemeJoiner}, 0, Properties{size: uint8(len(GraphemeJoiner))})
}
// appendRune inserts a rune at the end of the buffer. It is used for Hangul.
func (rb *reorderBuffer) appendRune(r rune) {
bn := rb.nbyte
sz := utf8.EncodeRune(rb.byte[bn:], rune(r))
rb.nbyte += utf8.UTFMax
rb.rune[rb.nrune] = Properties{pos: bn, size: uint8(sz)}
rb.nrune++
}
// assignRune sets a rune at position pos. It is used for Hangul and recomposition.
func (rb *reorderBuffer) assignRune(pos int, r rune) {
bn := rb.rune[pos].pos
sz := utf8.EncodeRune(rb.byte[bn:], rune(r))
rb.rune[pos] = Properties{pos: bn, size: uint8(sz)}
}
// runeAt returns the rune at position n. It is used for Hangul and recomposition.
func (rb *reorderBuffer) runeAt(n int) rune {
inf := rb.rune[n]
r, _ := utf8.DecodeRune(rb.byte[inf.pos : inf.pos+inf.size])
return r
}
// bytesAt returns the UTF-8 encoding of the rune at position n.
// It is used for Hangul and recomposition.
func (rb *reorderBuffer) bytesAt(n int) []byte {
inf := rb.rune[n]
return rb.byte[inf.pos : int(inf.pos)+int(inf.size)]
}
// For Hangul we combine algorithmically, instead of using tables.
const (
hangulBase = 0xAC00 // UTF-8(hangulBase) -> EA B0 80
hangulBase0 = 0xEA
hangulBase1 = 0xB0
hangulBase2 = 0x80
hangulEnd = hangulBase + jamoLVTCount // UTF-8(0xD7A4) -> ED 9E A4
hangulEnd0 = 0xED
hangulEnd1 = 0x9E
hangulEnd2 = 0xA4
jamoLBase = 0x1100 // UTF-8(jamoLBase) -> E1 84 00
jamoLBase0 = 0xE1
jamoLBase1 = 0x84
jamoLEnd = 0x1113
jamoVBase = 0x1161
jamoVEnd = 0x1176
jamoTBase = 0x11A7
jamoTEnd = 0x11C3
jamoTCount = 28
jamoVCount = 21
jamoVTCount = 21 * 28
jamoLVTCount = 19 * 21 * 28
)
const hangulUTF8Size = 3
func isHangul(b []byte) bool {
if len(b) < hangulUTF8Size {
return false
}
b0 := b[0]
if b0 < hangulBase0 {
return false
}
b1 := b[1]
switch {
case b0 == hangulBase0:
return b1 >= hangulBase1
case b0 < hangulEnd0:
return true
case b0 > hangulEnd0:
return false
case b1 < hangulEnd1:
return true
}
return b1 == hangulEnd1 && b[2] < hangulEnd2
}
func isHangulString(b string) bool {
if len(b) < hangulUTF8Size {
return false
}
b0 := b[0]
if b0 < hangulBase0 {
return false
}
b1 := b[1]
switch {
case b0 == hangulBase0:
return b1 >= hangulBase1
case b0 < hangulEnd0:
return true
case b0 > hangulEnd0:
return false
case b1 < hangulEnd1:
return true
}
return b1 == hangulEnd1 && b[2] < hangulEnd2
}
// Caller must ensure len(b) >= 2.
func isJamoVT(b []byte) bool {
// True if (rune & 0xff00) == jamoLBase
return b[0] == jamoLBase0 && (b[1]&0xFC) == jamoLBase1
}
func isHangulWithoutJamoT(b []byte) bool {
c, _ := utf8.DecodeRune(b)
c -= hangulBase
return c < jamoLVTCount && c%jamoTCount == 0
}
// decomposeHangul writes the decomposed Hangul to buf and returns the number
// of bytes written. len(buf) should be at least 9.
func decomposeHangul(buf []byte, r rune) int {
const JamoUTF8Len = 3
r -= hangulBase
x := r % jamoTCount
r /= jamoTCount
utf8.EncodeRune(buf, jamoLBase+r/jamoVCount)
utf8.EncodeRune(buf[JamoUTF8Len:], jamoVBase+r%jamoVCount)
if x != 0 {
utf8.EncodeRune(buf[2*JamoUTF8Len:], jamoTBase+x)
return 3 * JamoUTF8Len
}
return 2 * JamoUTF8Len
}
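A worked example of the arithmetic above: for U+AC01 (각), r = 0xAC01 - hangulBase = 1, so the T index is 1 and the L and V indexes are 0, giving U+1100 U+1161 U+11A8. A hedged check via the public API:

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	fmt.Printf("%+q\n", norm.NFD.String("\uac01")) // "\u1100\u1161\u11a8"
}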
// decomposeHangul algorithmically decomposes a Hangul rune into
// its Jamo components.
// See http://unicode.org/reports/tr15/#Hangul for details on decomposing Hangul.
func (rb *reorderBuffer) decomposeHangul(r rune) {
r -= hangulBase
x := r % jamoTCount
r /= jamoTCount
rb.appendRune(jamoLBase + r/jamoVCount)
rb.appendRune(jamoVBase + r%jamoVCount)
if x != 0 {
rb.appendRune(jamoTBase + x)
}
}
// combineHangul algorithmically combines Jamo character components into Hangul.
// See http://unicode.org/reports/tr15/#Hangul for details on combining Hangul.
func (rb *reorderBuffer) combineHangul(s, i, k int) {
b := rb.rune[:]
bn := rb.nrune
for ; i < bn; i++ {
cccB := b[k-1].ccc
cccC := b[i].ccc
if cccB == 0 {
s = k - 1
}
if s != k-1 && cccB >= cccC {
// b[i] is blocked by greater-equal cccX below it
b[k] = b[i]
k++
} else {
l := rb.runeAt(s) // also used to compare to hangulBase
v := rb.runeAt(i) // also used to compare to jamoT
switch {
case jamoLBase <= l && l < jamoLEnd &&
jamoVBase <= v && v < jamoVEnd:
// 11xx plus 116x to LV
rb.assignRune(s, hangulBase+
(l-jamoLBase)*jamoVTCount+(v-jamoVBase)*jamoTCount)
case hangulBase <= l && l < hangulEnd &&
jamoTBase < v && v < jamoTEnd &&
((l-hangulBase)%jamoTCount) == 0:
// ACxx plus 11Ax to LVT
rb.assignRune(s, l+v-jamoTBase)
default:
b[k] = b[i]
k++
}
}
}
rb.nrune = k
}
// compose recombines the runes in the buffer.
// It should only be used to recompose a single segment, as it will not
// handle alternations between Hangul and non-Hangul characters correctly.
func (rb *reorderBuffer) compose() {
// UAX #15, section X5, including Corrigendum #5
// "In any character sequence beginning with starter S, a character C is
// blocked from S if and only if there is some character B between S
// and C, and either B is a starter or it has the same or higher
// combining class as C."
bn := rb.nrune
if bn == 0 {
return
}
k := 1
b := rb.rune[:]
for s, i := 0, 1; i < bn; i++ {
if isJamoVT(rb.bytesAt(i)) {
// Redo from start in Hangul mode. Necessary to support
// U+320E..U+321E in NFKC mode.
rb.combineHangul(s, i, k)
return
}
ii := b[i]
// We can only use combineForward as a filter if we later
// get the info for the combined character. This is more
// expensive than using the filter. Using combinesBackward()
// is safe.
if ii.combinesBackward() {
cccB := b[k-1].ccc
cccC := ii.ccc
blocked := false // b[i] blocked by starter or greater or equal CCC?
if cccB == 0 {
s = k - 1
} else {
blocked = s != k-1 && cccB >= cccC
}
if !blocked {
combined := combine(rb.runeAt(s), rb.runeAt(i))
if combined != 0 {
rb.assignRune(s, combined)
continue
}
}
}
b[k] = b[i]
k++
}
rb.nrune = k
}
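Recomposition is the inverse of decomposition: compose combines a starter with following non-blocked runes, and combineHangul handles the algorithmic Jamo cases. A hedged round-trip sketch through the public API:

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	fmt.Printf("%+q\n", norm.NFC.String("e\u0301"))            // "\u00e9" (é)
	fmt.Printf("%+q\n", norm.NFC.String("\u1100\u1161\u11a8")) // "\uac01" (각)
}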

259 vendor/golang.org/x/text/unicode/norm/forminfo.go generated vendored Normal file

@@ -0,0 +1,259 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package norm
// This file contains Form-specific logic and wrappers for data in tables.go.
// Rune info is stored in a separate trie per composing form. A composing form
// and its corresponding decomposing form share the same trie. Each trie maps
// a rune to a uint16. The values take two forms. For v >= 0x8000:
// bits
// 15: 1 (inverse of NFD_QC bit of qcInfo)
// 13..7: qcInfo (see below). isYesD is always true (no decomposition).
// 6..0: ccc (compressed CCC value).
// For v < 0x8000, the respective rune has a decomposition and v is an index
// into a byte array of UTF-8 decomposition sequences and additional info and
// has the form:
// <header> <decomp_byte>* [<tccc> [<lccc>]]
// The header contains the number of bytes in the decomposition (excluding this
// length byte). The two most significant bits of this length byte correspond
// to bit 5 and 4 of qcInfo (see below). The byte sequence itself starts at v+1.
// The byte sequence is followed by a trailing and leading CCC if the values
// for these are not zero. The value of v determines which ccc are appended
// to the sequences. For v < firstCCC, there are none, for v >= firstCCC,
// the sequence is followed by a trailing ccc, and for v >= firstLeadingCC
// there is an additional leading ccc. The value of tccc itself is the
// trailing CCC shifted left 2 bits. The two least-significant bits of tccc
// are the number of trailing non-starters.
const (
qcInfoMask = 0x3F // to clear all but the relevant bits in a qcInfo
headerLenMask = 0x3F // extract the length value from the header byte
headerFlagsMask = 0xC0 // extract the qcInfo bits from the header byte
)
// Properties provides access to normalization properties of a rune.
type Properties struct {
pos uint8 // start position in reorderBuffer; used in composition.go
size uint8 // length of UTF-8 encoding of this rune
ccc uint8 // leading canonical combining class (ccc if not decomposition)
tccc uint8 // trailing canonical combining class (ccc if not decomposition)
nLead uint8 // number of leading non-starters.
flags qcInfo // quick check flags
index uint16
}
// functions dispatchable per form
type lookupFunc func(b input, i int) Properties
// formInfo holds Form-specific functions and tables.
type formInfo struct {
form Form
composing, compatibility bool // form type
info lookupFunc
nextMain iterFunc
}
var formTable = []*formInfo{{
form: NFC,
composing: true,
compatibility: false,
info: lookupInfoNFC,
nextMain: nextComposed,
}, {
form: NFD,
composing: false,
compatibility: false,
info: lookupInfoNFC,
nextMain: nextDecomposed,
}, {
form: NFKC,
composing: true,
compatibility: true,
info: lookupInfoNFKC,
nextMain: nextComposed,
}, {
form: NFKD,
composing: false,
compatibility: true,
info: lookupInfoNFKC,
nextMain: nextDecomposed,
}}
// We do not distinguish between boundaries for NFC, NFD, etc. to avoid
// unexpected behavior for the user. For example, in NFD, there is a boundary
// after 'a'. However, 'a' might combine with modifiers, so from the application's
// perspective it is not a good boundary. We will therefore always use the
// boundaries for the combining variants.
// BoundaryBefore returns true if this rune starts a new segment and
// cannot combine with any rune on the left.
func (p Properties) BoundaryBefore() bool {
if p.ccc == 0 && !p.combinesBackward() {
return true
}
// We assume that the CCC of the first character in a decomposition
// is always non-zero if different from info.ccc and that we can return
// false at this point. This is verified by maketables.
return false
}
// BoundaryAfter returns true if runes cannot combine with or otherwise
// interact with this or previous runes.
func (p Properties) BoundaryAfter() bool {
// TODO: loosen these conditions.
return p.isInert()
}
// We pack quick check data in 4 bits:
// 5: Combines forward (0 == false, 1 == true)
// 4..3: NFC_QC Yes(00), No (10), or Maybe (11)
// 2: NFD_QC Yes (0) or No (1). No also means there is a decomposition.
// 1..0: Number of trailing non-starters.
//
// When all 4 bits are zero, the character is inert, meaning it is never
// influenced by normalization.
type qcInfo uint8
func (p Properties) isYesC() bool { return p.flags&0x10 == 0 }
func (p Properties) isYesD() bool { return p.flags&0x4 == 0 }
func (p Properties) combinesForward() bool { return p.flags&0x20 != 0 }
func (p Properties) combinesBackward() bool { return p.flags&0x8 != 0 } // == isMaybe
func (p Properties) hasDecomposition() bool { return p.flags&0x4 != 0 } // == isNoD
func (p Properties) isInert() bool {
return p.flags&qcInfoMask == 0 && p.ccc == 0
}
func (p Properties) multiSegment() bool {
return p.index >= firstMulti && p.index < endMulti
}
func (p Properties) nLeadingNonStarters() uint8 {
return p.nLead
}
func (p Properties) nTrailingNonStarters() uint8 {
return uint8(p.flags & 0x03)
}
// Decomposition returns the decomposition for the underlying rune
// or nil if there is none.
func (p Properties) Decomposition() []byte {
// TODO: create the decomposition for Hangul?
if p.index == 0 {
return nil
}
i := p.index
n := decomps[i] & headerLenMask
i++
return decomps[i : i+uint16(n)]
}
// Size returns the length of UTF-8 encoding of the rune.
func (p Properties) Size() int {
return int(p.size)
}
// CCC returns the canonical combining class of the underlying rune.
func (p Properties) CCC() uint8 {
if p.index >= firstCCCZeroExcept {
return 0
}
return ccc[p.ccc]
}
// LeadCCC returns the CCC of the first rune in the decomposition.
// If there is no decomposition, LeadCCC equals CCC.
func (p Properties) LeadCCC() uint8 {
return ccc[p.ccc]
}
// TrailCCC returns the CCC of the last rune in the decomposition.
// If there is no decomposition, TrailCCC equals CCC.
func (p Properties) TrailCCC() uint8 {
return ccc[p.tccc]
}
// Recomposition
// We use 32-bit keys instead of 64-bit for the two codepoint keys.
// This clips off the bits of three entries, but we know this will not
// result in a collision. In the unlikely event that changes to
// UnicodeData.txt introduce collisions, the compiler will catch it.
// Note that the recomposition map for NFC and NFKC are identical.
// combine returns the combined rune or 0 if it doesn't exist.
func combine(a, b rune) rune {
key := uint32(uint16(a))<<16 + uint32(uint16(b))
return recompMap[key]
}
func lookupInfoNFC(b input, i int) Properties {
v, sz := b.charinfoNFC(i)
return compInfo(v, sz)
}
func lookupInfoNFKC(b input, i int) Properties {
v, sz := b.charinfoNFKC(i)
return compInfo(v, sz)
}
// Properties returns properties for the first rune in s.
func (f Form) Properties(s []byte) Properties {
if f == NFC || f == NFD {
return compInfo(nfcData.lookup(s))
}
return compInfo(nfkcData.lookup(s))
}
// PropertiesString returns properties for the first rune in s.
func (f Form) PropertiesString(s string) Properties {
if f == NFC || f == NFD {
return compInfo(nfcData.lookupString(s))
}
return compInfo(nfkcData.lookupString(s))
}
// compInfo converts the information contained in v and sz
// to a Properties. See the comment at the top of the file
// for more information on the format.
func compInfo(v uint16, sz int) Properties {
if v == 0 {
return Properties{size: uint8(sz)}
} else if v >= 0x8000 {
p := Properties{
size: uint8(sz),
ccc: uint8(v),
tccc: uint8(v),
flags: qcInfo(v >> 8),
}
if p.ccc > 0 || p.combinesBackward() {
p.nLead = uint8(p.flags & 0x3)
}
return p
}
// has decomposition
h := decomps[v]
f := (qcInfo(h&headerFlagsMask) >> 2) | 0x4
p := Properties{size: uint8(sz), flags: f, index: v}
if v >= firstCCC {
v += uint16(h&headerLenMask) + 1
c := decomps[v]
p.tccc = c >> 2
p.flags |= qcInfo(c & 0x3)
if v >= firstLeadingCCC {
p.nLead = c & 0x3
if v >= firstStarterWithNLead {
// We were tricked. Remove the decomposition.
p.flags &= 0x03
p.index = 0
return p
}
p.ccc = decomps[v+1]
}
}
return p
}
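A hedged sketch of reading these properties through the exported accessors defined above (Size, CCC, TrailCCC, Decomposition):

package main

import (
	"fmt"

	"golang.org/x/text/unicode/norm"
)

func main() {
	p := norm.NFC.PropertiesString("\u00e9") // é, which decomposes to e + U+0301
	fmt.Println(p.Size(), p.CCC(), p.TrailCCC()) // expected: 2 0 230
	fmt.Printf("%+q\n", p.Decomposition())       // expected: "e\u0301"
}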

Some files were not shown because too many files have changed in this diff.