
Consul and etcd autoclustering (#957)

This PR introduces new node-discovery integration with Consul and etcd. By using one of those systems with rqlite, automatic clustering of rqlite is much easier.
master
Philip O'Toole 3 years ago committed by GitHub
parent 8990900fd8
commit 4aea326959

@ -59,6 +59,8 @@ jobs:
working_directory: /go/src/github.com/rqlite/rqlite
docker:
- image: circleci/golang@sha256:bf91f089cecab7fcd329193022794e7d13c42ee4570b1ac2920875c1b948eb63
- image: consul
- image: gcr.io/etcd-development/etcd:v3.5.1
steps:
- checkout
- run: sudo apt-get install python3

@ -1,3 +1,20 @@
## 7.0.0 (unreleased)
This release introduces new node-discovery integration with [Consul](https://www.consul.io/) and [etcd](https://etcd.io/). By using one of those systems with rqlite, automatic clustering of rqlite is much easier. The [legacy Discovery mode](https://github.com/rqlite/rqlite/blob/master/DOC/DISCOVERY.md) is not supported by release 7.0, but may be supported in a future release. So, for now, if you wish to continue using legacy Discovery, you will need to run rqlite 6.x, or earlier.
See the [new documentation](https://github.com/rqlite/rqlite/blob/master/DOC/AUTO_CLUSTERING.md) for full details on using Consul and etcd.
### Upgrading
There are some breaking changes in release 7.0, but most users of rqlite can upgrade to 7.0 without doing anything special -- simply download and run the 7.0 release.
- The disco-related command-line arguments have changed to support Consul and etcd. If you wish to continue to use legacy Discovery you cannot upgrade to 7.0; alternatively, consider switching to Consul or etcd for node-discovery.
- The command-line argument `-raft-leader-wait` has been removed. If you need to wait for a node to have a Leader, poll the `/readyz` endpoint instead (see the sketch below).
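A minimal sketch of such a readiness check, assuming the node's HTTP API is listening on `localhost:4001` and that `/readyz` returns a non-2xx status until the node is ready:
```bash
# Poll the node's /readyz endpoint until it reports ready.
until curl -sf http://localhost:4001/readyz > /dev/null; do
  sleep 1
done
```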
### New features
- [PR #957](https://github.com/rqlite/rqlite/pull/957): Support autoclustering via [Consul](https://www.consul.io/) and [etcd](https://etcd.io/).
- [PR #947](https://github.com/rqlite/rqlite/pull/947): CLI takes list of hosts, so it can try another node if first node is unresponsive. Fixes [issue #157](https://github.com/rqlite/rqlite/issues/157). Thanks @chermehdi
### Implementation changes and bug fixes
- [PR #957](https://github.com/rqlite/rqlite/pull/957): Refactor `rqlited` command-line argument code.
## 6.10.2 (January 13th 2022)
### Implementation changes and bug fixes
- [PR #959](https://github.com/rqlite/rqlite/pull/959): Return clearer error if no database results set.

@ -0,0 +1,76 @@
# Automatic clustering via node-discovery
This document describes how to use [Consul](https://www.consul.io/) and [etcd](https://etcd.io/) to automatically form rqlite clusters.
> :warning: **This functionality was introduced in version 7.x. It does not exist in earlier releases.**
## Contents
* [Quickstart](#quickstart)
* [Consul](#consul)
* [etcd](#etcd)
* [More details](#more-details)
* [Controlling configuration](#controlling-configuration)
* [Running multiple different clusters](#running-multiple-different-clusters)
## Quickstart
### Consul
Let's assume each of your rqlite nodes can reach Consul at `localhost:8500` (for example, via a local Consul agent on each machine). Let's also assume that you are going to run 3 rqlite nodes, each node on a different machine. Launch your rqlite nodes as follows:
Node 1:
```bash
rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
-disco-mode consul -disco-config '{"address": "localhost:8500"}' data
```
Node 2:
```bash
rqlited -http-addr=$IP2:$HTTP_PORT -raft-addr=$IP2:$RAFT_PORT \
-disco-mode consul -disco-config '{"address": "localhost:8500"}' data
```
Node 3:
```bash
rqlited -http-addr=$IP3:$HTTP_PORT -raft-addr=$IP3:$RAFT_PORT \
-disco-mode consul -disco-config '{"address": "localhost:8500"}' data
```
These three nodes will automatically find each other and form a cluster. You can start the nodes in any order and at any time. Furthermore, the cluster Leader will continually update Consul with its address. This means other nodes can be launched later and automatically join the cluster, even if the Leader changes.
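Because the Leader keeps Consul up to date, a later node can be launched with exactly the same pattern and will join the existing cluster automatically. A sketch, using the same placeholder variables as above:
```bash
rqlited -http-addr=$IP4:$HTTP_PORT -raft-addr=$IP4:$RAFT_PORT \
-disco-mode consul -disco-config '{"address": "localhost:8500"}' data
```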
#### Docker
It's even easier with Docker, as you can launch every node identically:
```bash
docker run rqlite/rqlite -disco-mode=consul -disco-config '{"address": "localhost:8500"}'
```
### etcd
Autoclustering with etcd is very similar. Let's assume etcd is available at `localhost:2379`.
Node 1:
```bash
rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
-disco-mode etcd -disco-config '{"endpoints": ["localhost:2379"]}' data
```
Node 2:
```bash
rqlited -http-addr=$IP2:$HTTP_PORT -raft-addr=$IP2:$RAFT_PORT \
-disco-mode etcd -disco-config '{"endpoints": ["localhost:2379"]}' data
```
Node 3:
```bash
rqlited -http-addr=$IP3:$HTTP_PORT -raft-addr=$IP3:$RAFT_PORT \
-disco-mode etcd -disco-config '{"endpoints": ["localhost:2379"]}' data
```
Like with Consul autoclustering, the cluster Leader will continually report its address to etcd.
#### Docker
```bash
docker run rqlite/rqlite -disco-mode=etcd -disco-config '{"endpoints": ["localhost:2379"]}'
```
## More Details
### Controlling configuration
For both Consul and etcd, `-disco-config` can either be an actual JSON string, or a path to a file containing a JSON-formatted configuration. The former option may be more convenient if the configuration you need to supply is very short, as in the examples above.
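For example, a sketch of supplying the Consul configuration via a file instead of an inline string; the file name `consul-disco.json` is just an illustration:
```bash
# Write the discovery configuration to a file...
echo '{"address": "localhost:8500"}' > consul-disco.json

# ...then point -disco-config at that file.
rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
-disco-mode consul -disco-config consul-disco.json data
```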
- [Full Consul configuration description](https://github.com/rqlite/rqlite-disco-clients/blob/main/consul/config.go)
- [Full etcd configuration description](https://github.com/rqlite/rqlite-disco-clients/blob/main/etcd/config.go)
### Running multiple different clusters
If you wish a single Consul or etcd system to support multiple rqlite clusters, set the `-disco-key` command-line argument to a different value for each cluster (the default key prefix is `rqlite`).
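For example, a sketch of two clusters sharing a single etcd system, distinguished only by their discovery keys (the key names here are arbitrary):
```bash
# Nodes of the first cluster register under one key prefix...
rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
-disco-mode etcd -disco-config '{"endpoints": ["localhost:2379"]}' -disco-key rqlite-blue data

# ...while nodes of the second cluster use a different prefix, so the two never mix.
rqlited -http-addr=$IP2:$HTTP_PORT -raft-addr=$IP2:$RAFT_PORT \
-disco-mode etcd -disco-config '{"endpoints": ["localhost:2379"]}' -disco-key rqlite-green data
```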

@ -27,6 +27,8 @@ Specifically, a majority is defined as `(N/2)+1` where `N` is the number of node
A 4-node cluster is therefore no more fault-tolerant than a 3-node cluster, so running a 4-node cluster provides no advantage over a 3-node cluster. Only a 5-node cluster can tolerate the failure of 2 nodes. An analogous argument applies to 5-node vs. 6-node clusters, and so on.
# Creating a cluster
_This section describes manually creating a cluster. If you wish rqlite nodes to automatically find each other and form a cluster, check out [auto-clustering with Consul and etcd](https://github.com/rqlite/rqlite/blob/master/DOC/AUTO_CLUSTERING.md)._
Let's say you have 3 host machines, _host1_, _host2_, and _host3_, and that each of these hostnames resolves to an IP address reachable from the other hosts. Instead of hostnames you can use the IP address directly if you wish.
To create a cluster you must first launch a node that can act as the initial leader. Do this as follows on _host1_:

@ -1,6 +1,8 @@
# rqlite Cluster Discovery Service
_For full details on how the Discovery Service is implemented using AWS Lambda and DynamoDB check out [this blog post](http://www.philipotoole.com/building-a-cluster-discovery-service-with-aws-lambda-and-dynamodb/)._
> :warning: **rqlite 7.0 and later does not support this legacy Discovery service.** If you wish to use the legacy Discovery service, you must run rqlite 6.x or earlier.
To form a rqlite cluster, the joining node must be supplied with the network address of some other node in the cluster. This requirement -- that one must know the network address of other nodes to join a cluster -- can be inconvenient in various environments. For example, if you do not know which network addresses will be assigned ahead of time, creating a cluster for the first time requires the following steps:
* First start one node and specify its network address.

@ -21,7 +21,7 @@ rqlite uses [Raft](https://raft.github.io/) to achieve consensus across all the
- Fully replicated production-grade SQL database.
- [Production-grade](https://github.com/hashicorp/raft) distributed consensus system.
- An easy-to-use [HTTP(S) API](https://github.com/rqlite/rqlite/blob/master/DOC/DATA_API.md). A [command-line interface is also available](https://github.com/rqlite/rqlite/tree/master/cmd/rqlite), as are various [client libraries](https://github.com/rqlite).
- [Discovery Service support](https://github.com/rqlite/rqlite/blob/master/DOC/DISCOVERY.md), allowing clusters to be dynamically created.
- [Node-discovery and automatic clustering via Consul and etcd](https://github.com/rqlite/rqlite/blob/master/DOC/AUTO_CLUSTERING.md), allowing clusters to be dynamically created.
- [Extensive security and encryption support](https://github.com/rqlite/rqlite/blob/master/DOC/SECURITY.md), including node-to-node encryption.
- Choice of [read consistency levels](https://github.com/rqlite/rqlite/blob/master/DOC/CONSISTENCY.md).
- Optional [read-only (non-voting) nodes](https://github.com/rqlite/rqlite/blob/master/DOC/READ_ONLY_NODES.md), which can add read scalability to the system.

@ -0,0 +1,296 @@
package main
import (
"flag"
"fmt"
"net"
"os"
"runtime"
"strings"
"time"
)
// Config represents the configuration as set by command-line flags.
// All variables will be set, unless explicitly noted.
type Config struct {
// DataPath is path to node data. Always set.
DataPath string
// HTTPAddr is the bind network address for the HTTP Server.
// It never includes a protocol prefix (http:// or https://).
HTTPAddr string
// HTTPAdv is the advertised HTTP server network address.
HTTPAdv string
// TLS1011 indicates whether the node should support deprecated
// encryption standards.
TLS1011 bool
// AuthFile is the path to the authentication file. May not be set.
AuthFile string
// X509CACert is the path to the root-CA certificate file used when this
// node contacts other nodes' HTTP servers. May not be set.
X509CACert string
// X509Cert is the path to the X509 cert for the HTTP server. May not be set.
X509Cert string
// X509Key is the path to the private key for the HTTP server. May not be set.
X509Key string
// NodeEncrypt indicates whether node encryption should be enabled.
NodeEncrypt bool
// NodeX509CACert is the path to the root-CA certificate file used when this
// node contacts other nodes' Raft servers. May not be set.
NodeX509CACert string
// NodeX509Cert is the path to the X509 cert for the Raft server. May not be set.
NodeX509Cert string
// NodeX509Key is the path to the X509 key for the Raft server. May not be set.
NodeX509Key string
// NodeID is the Raft ID for the node.
NodeID string
// RaftAddr is the bind network address for the Raft server.
RaftAddr string
// RaftAdv is the advertised Raft server address.
RaftAdv string
// JoinSrcIP sets the source IP address during Join request. May not be set.
JoinSrcIP string
// JoinAddr is the list of addresses to use for a join attempt. Each address
// will include the proto (HTTP or HTTPS) and will never include the node's
// own HTTP server address. May not be set.
JoinAddr string
// JoinAs sets the user that join attempts should be performed as. May not be set.
JoinAs string
// JoinAttempts is the number of times a node should attempt to join using a
// given address.
JoinAttempts int
// JoinInterval is the time between retrying failed join operations.
JoinInterval time.Duration
// NoHTTPVerify disables checking other nodes' HTTP X509 certs for validity.
NoHTTPVerify bool
// NoNodeVerify disables checking other nodes' Node X509 certs for validity.
NoNodeVerify bool
// DiscoMode sets the discovery mode. May not be set.
DiscoMode string
// DiscoKey sets the discovery prefix key.
DiscoKey string
// DiscoConfig sets the path to any discovery configuration file. May not be set.
DiscoConfig string
// Expvar enables go/expvar information. Defaults to true.
Expvar bool
// PprofEnabled enables Go PProf information. Defaults to true.
PprofEnabled bool
// OnDisk enables on-disk mode.
OnDisk bool
// OnDiskPath sets the path to the SQLite file. May not be set.
OnDiskPath string
// OnDiskStartup disables the in-memory on-disk startup optimization.
OnDiskStartup bool
// FKConstraints enables SQLite foreign key constraints.
FKConstraints bool
// RaftLogLevel sets the minimum logging level for the Raft subsystem.
RaftLogLevel string
// RaftNonVoter controls whether this node is a non-voting, read-only node.
RaftNonVoter bool
// RaftSnapThreshold is the number of outstanding log entries that trigger snapshot.
RaftSnapThreshold uint64
// RaftSnapInterval sets the threshold check interval.
RaftSnapInterval time.Duration
// RaftLeaderLeaseTimeout sets the leader lease timeout.
RaftLeaderLeaseTimeout time.Duration
// RaftHeartbeatTimeout sets the heartbeat timeout.
RaftHeartbeatTimeout time.Duration
// RaftElectionTimeout sets the election timeout.
RaftElectionTimeout time.Duration
// RaftApplyTimeout sets the Log-apply timeout.
RaftApplyTimeout time.Duration
// RaftOpenTimeout sets the Raft store open timeout.
RaftOpenTimeout time.Duration
// RaftShutdownOnRemove sets whether Raft should be shut down if the node is removed.
RaftShutdownOnRemove bool
// CompressionSize sets the request query size for compression attempts.
CompressionSize int
// CompressionBatch sets the request batch threshold for compression attempts.
CompressionBatch int
// ShowVersion instructs the node to show version information and exit.
ShowVersion bool
// CPUProfile enables CPU profiling.
CPUProfile string
// MemProfile enables memory profiling.
MemProfile string
}
// HTTPURL returns the fully-formed, advertised HTTP API address for this config, including
// protocol, host and port.
func (c *Config) HTTPURL() string {
apiProto := "http"
if c.X509Cert != "" {
apiProto = "https"
}
return fmt.Sprintf("%s://%s", apiProto, c.HTTPAdv)
}
// BuildInfo is build information for display at command line.
type BuildInfo struct {
Version string
Commit string
Branch string
}
// ParseFlags parses the command line, and returns the configuration.
func ParseFlags(name, desc string, build *BuildInfo) (*Config, error) {
if flag.Parsed() {
return nil, fmt.Errorf("command-line flags already parsed")
}
config := Config{}
flag.StringVar(&config.NodeID, "node-id", "", "Unique name for node. If not set, set to advertised Raft address")
flag.StringVar(&config.HTTPAddr, "http-addr", "localhost:4001", "HTTP server bind address. To enable HTTPS, set X.509 cert and key")
flag.StringVar(&config.HTTPAdv, "http-adv-addr", "", "Advertised HTTP address. If not set, same as HTTP server bind")
flag.BoolVar(&config.TLS1011, "tls1011", false, "Support deprecated TLS versions 1.0 and 1.1")
flag.StringVar(&config.X509CACert, "http-ca-cert", "", "Path to root X.509 certificate for HTTP endpoint")
flag.StringVar(&config.X509Cert, "http-cert", "", "Path to X.509 certificate for HTTP endpoint")
flag.StringVar(&config.X509Key, "http-key", "", "Path to X.509 private key for HTTP endpoint")
flag.BoolVar(&config.NoHTTPVerify, "http-no-verify", false, "Skip verification of remote HTTPS cert when joining cluster")
flag.BoolVar(&config.NodeEncrypt, "node-encrypt", false, "Enable node-to-node encryption")
flag.StringVar(&config.NodeX509CACert, "node-ca-cert", "", "Path to root X.509 certificate for node-to-node encryption")
flag.StringVar(&config.NodeX509Cert, "node-cert", "cert.pem", "Path to X.509 certificate for node-to-node encryption")
flag.StringVar(&config.NodeX509Key, "node-key", "key.pem", "Path to X.509 private key for node-to-node encryption")
flag.BoolVar(&config.NoNodeVerify, "node-no-verify", false, "Skip verification of a remote node cert")
flag.StringVar(&config.AuthFile, "auth", "", "Path to authentication and authorization file. If not set, not enabled")
flag.StringVar(&config.RaftAddr, "raft-addr", "localhost:4002", "Raft communication bind address")
flag.StringVar(&config.RaftAdv, "raft-adv-addr", "", "Advertised Raft communication address. If not set, same as Raft bind")
flag.StringVar(&config.JoinSrcIP, "join-source-ip", "", "Set source IP address during Join request")
flag.StringVar(&config.JoinAddr, "join", "", "Comma-delimited list of nodes, through which a cluster can be joined (proto://host:port)")
flag.StringVar(&config.JoinAs, "join-as", "", "Username in authentication file to join as. If not set, joins anonymously")
flag.IntVar(&config.JoinAttempts, "join-attempts", 5, "Number of join attempts to make")
flag.DurationVar(&config.JoinInterval, "join-interval", 5*time.Second, "Period between join attempts")
flag.StringVar(&config.DiscoMode, "disco-mode", "", "Choose cluster discovery service. If not set, not used")
flag.StringVar(&config.DiscoKey, "disco-key", "rqlite", "Key prefix for cluster discovery service")
flag.StringVar(&config.DiscoConfig, "disco-config", "", "Set path to cluster discovery config file")
flag.BoolVar(&config.Expvar, "expvar", true, "Serve expvar data on HTTP server")
flag.BoolVar(&config.PprofEnabled, "pprof", true, "Serve pprof data on HTTP server")
flag.BoolVar(&config.OnDisk, "on-disk", false, "Use an on-disk SQLite database")
flag.StringVar(&config.OnDiskPath, "on-disk-path", "", "Path for SQLite on-disk database file. If not set, use file in data directory")
flag.BoolVar(&config.OnDiskStartup, "on-disk-startup", false, "Do not initialize on-disk database in memory first at startup")
flag.BoolVar(&config.FKConstraints, "fk", false, "Enable SQLite foreign key constraints")
flag.BoolVar(&config.ShowVersion, "version", false, "Show version information and exit")
flag.BoolVar(&config.RaftNonVoter, "raft-non-voter", false, "Configure as non-voting node")
flag.DurationVar(&config.RaftHeartbeatTimeout, "raft-timeout", time.Second, "Raft heartbeat timeout")
flag.DurationVar(&config.RaftElectionTimeout, "raft-election-timeout", time.Second, "Raft election timeout")
flag.DurationVar(&config.RaftApplyTimeout, "raft-apply-timeout", 10*time.Second, "Raft apply timeout")
flag.DurationVar(&config.RaftOpenTimeout, "raft-open-timeout", 120*time.Second, "Time for initial Raft logs to be applied. Use 0s duration to skip wait")
flag.Uint64Var(&config.RaftSnapThreshold, "raft-snap", 8192, "Number of outstanding log entries that trigger snapshot")
flag.DurationVar(&config.RaftSnapInterval, "raft-snap-int", 30*time.Second, "Snapshot threshold check interval")
flag.DurationVar(&config.RaftLeaderLeaseTimeout, "raft-leader-lease-timeout", 0, "Raft leader lease timeout. Use 0s for Raft default")
flag.BoolVar(&config.RaftShutdownOnRemove, "raft-remove-shutdown", false, "Shutdown Raft if node removed")
flag.StringVar(&config.RaftLogLevel, "raft-log-level", "INFO", "Minimum log level for Raft module")
flag.IntVar(&config.CompressionSize, "compression-size", 150, "Request query size for compression attempt")
flag.IntVar(&config.CompressionBatch, "compression-batch", 5, "Request batch threshold for compression attempt")
flag.StringVar(&config.CPUProfile, "cpu-profile", "", "Path to file for CPU profiling information")
flag.StringVar(&config.MemProfile, "mem-profile", "", "Path to file for memory profiling information")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "\n%s\n\n", desc)
fmt.Fprintf(os.Stderr, "Usage: %s [flags] <data directory>\n", name)
flag.PrintDefaults()
}
flag.Parse()
if config.ShowVersion {
msg := fmt.Sprintf("%s %s %s %s %s (commit %s, branch %s, compiler %s)\n",
name, build.Version, runtime.GOOS, runtime.GOARCH, runtime.Version(), build.Commit, build.Branch, runtime.Compiler)
errorExit(0, msg)
}
if config.OnDiskPath != "" && !config.OnDisk {
errorExit(1, "fatal: on-disk-path is set, but on-disk is not\n")
}
// Ensure the data path is set.
if flag.NArg() < 1 {
errorExit(1, "fatal: no data directory set\n")
}
config.DataPath = flag.Arg(0)
// Ensure no args come after the data directory.
if flag.NArg() > 1 {
errorExit(1, "fatal: arguments after data directory are not accepted\n")
}
// Enforce policies regarding addresses
if config.RaftAdv == "" {
config.RaftAdv = config.RaftAddr
}
if config.HTTPAdv == "" {
config.HTTPAdv = config.HTTPAddr
}
// Perform some address validity checks.
if strings.HasPrefix(strings.ToLower(config.HTTPAddr), "http") ||
strings.HasPrefix(strings.ToLower(config.HTTPAdv), "http") {
errorExit(1, "fatal: HTTP options should not include protocol (http:// or https://)\n")
}
if _, _, err := net.SplitHostPort(config.HTTPAddr); err != nil {
errorExit(1, "HTTP bind address not valid")
}
if _, _, err := net.SplitHostPort(config.HTTPAdv); err != nil {
errorExit(1, "HTTP advertised address not valid")
}
if _, _, err := net.SplitHostPort(config.RaftAddr); err != nil {
errorExit(1, "Raft bind address not valid")
}
if _, _, err := net.SplitHostPort(config.RaftAdv); err != nil {
errorExit(1, "Raft advertised address not valid")
}
// Node ID policy
if config.NodeID == "" {
config.NodeID = config.RaftAdv
}
return &config, nil
}
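// errorExit writes the given message to stderr and exits the process with the given code.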
func errorExit(code int, msg string) {
fmt.Fprintf(os.Stderr, msg)
os.Exit(code)
}

@ -2,10 +2,11 @@
package main
import (
"bytes"
"crypto/tls"
"crypto/x509"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"net"
@ -13,10 +14,11 @@ import (
"os/signal"
"path/filepath"
"runtime"
"runtime/pprof"
"strings"
"time"
consul "github.com/rqlite/rqlite-disco-clients/consul"
etcd "github.com/rqlite/rqlite-disco-clients/etcd"
"github.com/rqlite/rqlite/auth"
"github.com/rqlite/rqlite/cluster"
"github.com/rqlite/rqlite/cmd"
@ -37,163 +39,43 @@ const logo = `
|_|
`
var httpAddr string
var httpAdv string
var joinSrcIP string
var tls1011 bool
var authFile string
var x509CACert string
var x509Cert string
var x509Key string
var nodeEncrypt bool
var nodeX509CACert string
var nodeX509Cert string
var nodeX509Key string
var nodeID string
var raftAddr string
var raftAdv string
var joinAddr string
var joinAs string
var joinAttempts int
var joinInterval string
var noVerify bool
var noNodeVerify bool
var discoURL string
var discoID string
var expvar bool
var pprofEnabled bool
var onDisk bool
var onDiskPath string
var onDiskStartup bool
var fkConstraints bool
var raftLogLevel string
var raftNonVoter bool
var raftSnapThreshold uint64
var raftSnapInterval time.Duration
var raftLeaderLeaseTimeout time.Duration
var raftHeartbeatTimeout time.Duration
var raftElectionTimeout time.Duration
var raftApplyTimeout time.Duration
var raftOpenTimeout time.Duration
var raftWaitForLeader bool
var raftShutdownOnRemove bool
var compressionSize int
var compressionBatch int
var showVersion bool
var cpuProfile string
var memProfile string
const name = `rqlited`
const desc = `rqlite is a lightweight, distributed relational database, which uses SQLite as its
storage engine. It provides an easy-to-use, fault-tolerant store for relational data.`
func init() {
flag.StringVar(&nodeID, "node-id", "", "Unique name for node. If not set, set to Raft address")
flag.StringVar(&httpAddr, "http-addr", "localhost:4001", "HTTP server bind address. For HTTPS, set X.509 cert and key")
flag.StringVar(&httpAdv, "http-adv-addr", "", "Advertised HTTP address. If not set, same as HTTP server")
flag.StringVar(&joinSrcIP, "join-source-ip", "", "Set source IP address during Join request")
flag.BoolVar(&tls1011, "tls1011", false, "Support deprecated TLS versions 1.0 and 1.1")
flag.StringVar(&x509CACert, "http-ca-cert", "", "Path to root X.509 certificate for HTTP endpoint")
flag.StringVar(&x509Cert, "http-cert", "", "Path to X.509 certificate for HTTP endpoint")
flag.StringVar(&x509Key, "http-key", "", "Path to X.509 private key for HTTP endpoint")
flag.BoolVar(&noVerify, "http-no-verify", false, "Skip verification of remote HTTPS cert when joining cluster")
flag.BoolVar(&nodeEncrypt, "node-encrypt", false, "Enable node-to-node encryption")
flag.StringVar(&nodeX509CACert, "node-ca-cert", "", "Path to root X.509 certificate for node-to-node encryption")
flag.StringVar(&nodeX509Cert, "node-cert", "cert.pem", "Path to X.509 certificate for node-to-node encryption")
flag.StringVar(&nodeX509Key, "node-key", "key.pem", "Path to X.509 private key for node-to-node encryption")
flag.BoolVar(&noNodeVerify, "node-no-verify", false, "Skip verification of a remote node cert")
flag.StringVar(&authFile, "auth", "", "Path to authentication and authorization file. If not set, not enabled")
flag.StringVar(&raftAddr, "raft-addr", "localhost:4002", "Raft communication bind address")
flag.StringVar(&raftAdv, "raft-adv-addr", "", "Advertised Raft communication address. If not set, same as Raft bind")
flag.StringVar(&joinAddr, "join", "", "Comma-delimited list of nodes, through which a cluster can be joined (proto://host:port)")
flag.StringVar(&joinAs, "join-as", "", "Username in authentication file to join as. If not set, joins anonymously")
flag.IntVar(&joinAttempts, "join-attempts", 5, "Number of join attempts to make")
flag.StringVar(&joinInterval, "join-interval", "5s", "Period between join attempts")
flag.StringVar(&discoURL, "disco-url", "http://discovery.rqlite.com", "Set Discovery Service URL")
flag.StringVar(&discoID, "disco-id", "", "Set Discovery ID. If not set, Discovery Service not used")
flag.BoolVar(&expvar, "expvar", true, "Serve expvar data on HTTP server")
flag.BoolVar(&pprofEnabled, "pprof", true, "Serve pprof data on HTTP server")
flag.BoolVar(&onDisk, "on-disk", false, "Use an on-disk SQLite database")
flag.StringVar(&onDiskPath, "on-disk-path", "", "Path for SQLite on-disk database file. If not set, use file in data directory")
flag.BoolVar(&onDiskStartup, "on-disk-startup", false, "Do not initialize on-disk database in memory first at startup")
flag.BoolVar(&fkConstraints, "fk", false, "Enable SQLite foreign key constraints")
flag.BoolVar(&showVersion, "version", false, "Show version information and exit")
flag.BoolVar(&raftNonVoter, "raft-non-voter", false, "Configure as non-voting node")
flag.DurationVar(&raftHeartbeatTimeout, "raft-timeout", time.Second, "Raft heartbeat timeout")
flag.DurationVar(&raftElectionTimeout, "raft-election-timeout", time.Second, "Raft election timeout")
flag.DurationVar(&raftApplyTimeout, "raft-apply-timeout", 10*time.Second, "Raft apply timeout")
flag.DurationVar(&raftOpenTimeout, "raft-open-timeout", 120*time.Second, "Time for initial Raft logs to be applied. Use 0s duration to skip wait")
flag.BoolVar(&raftWaitForLeader, "raft-leader-wait", true, "Node waits for a leader before answering requests")
flag.Uint64Var(&raftSnapThreshold, "raft-snap", 8192, "Number of outstanding log entries that trigger snapshot")
flag.DurationVar(&raftSnapInterval, "raft-snap-int", 30*time.Second, "Snapshot threshold check interval")
flag.DurationVar(&raftLeaderLeaseTimeout, "raft-leader-lease-timeout", 0, "Raft leader lease timeout. Use 0s for Raft default")
flag.BoolVar(&raftShutdownOnRemove, "raft-remove-shutdown", false, "Shutdown Raft if node removed")
flag.StringVar(&raftLogLevel, "raft-log-level", "INFO", "Minimum log level for Raft module")
flag.IntVar(&compressionSize, "compression-size", 150, "Request query size for compression attempt")
flag.IntVar(&compressionBatch, "compression-batch", 5, "Request batch threshold for compression attempt")
flag.StringVar(&cpuProfile, "cpu-profile", "", "Path to file for CPU profiling information")
flag.StringVar(&memProfile, "mem-profile", "", "Path to file for memory profiling information")
flag.Usage = func() {
fmt.Fprintf(os.Stderr, "\n%s\n\n", desc)
fmt.Fprintf(os.Stderr, "Usage: %s [flags] <data directory>\n", name)
flag.PrintDefaults()
}
log.SetFlags(log.LstdFlags)
log.SetOutput(os.Stderr)
log.SetPrefix(fmt.Sprintf("[%s] ", name))
}
func main() {
flag.Parse()
if showVersion {
fmt.Printf("%s %s %s %s %s (commit %s, branch %s, compiler %s)\n",
name, cmd.Version, runtime.GOOS, runtime.GOARCH, runtime.Version(), cmd.Commit, cmd.Branch, runtime.Compiler)
os.Exit(0)
}
if onDiskPath != "" && !onDisk {
fmt.Fprintf(os.Stderr, "fatal: on-disk-path is set, but on-disk is not\n")
os.Exit(1)
}
// Ensure the data path is set.
if flag.NArg() < 1 {
fmt.Fprintf(os.Stderr, "fatal: no data directory set\n")
os.Exit(1)
}
// Ensure no args come after the data directory.
if flag.NArg() > 1 {
fmt.Fprintf(os.Stderr, "fatal: arguments after data directory are not accepted\n")
os.Exit(1)
}
// Ensure Raft adv address is set as per policy.
if raftAdv == "" {
raftAdv = raftAddr
cfg, err := ParseFlags(name, desc, &BuildInfo{
Version: cmd.Version,
Commit: cmd.Commit,
Branch: cmd.Branch,
})
if err != nil {
log.Fatalf("failed to parse command-line flags: %s", err.Error())
}
dataPath := flag.Arg(0)
// Display logo.
fmt.Println(logo)
// Configure logging and pump out initial message.
log.SetFlags(log.LstdFlags)
log.SetOutput(os.Stderr)
log.SetPrefix(fmt.Sprintf("[%s] ", name))
log.Printf("%s starting, version %s, commit %s, branch %s, compiler %s", name, cmd.Version, cmd.Commit, cmd.Branch, runtime.Compiler)
log.Printf("%s, target architecture is %s, operating system target is %s", runtime.Version(), runtime.GOARCH, runtime.GOOS)
log.Printf("launch command: %s", strings.Join(os.Args, " "))
// Start requested profiling.
startProfile(cpuProfile, memProfile)
startProfile(cfg.CPUProfile, cfg.MemProfile)
// Create internode network mux and configure.
muxLn, err := net.Listen("tcp", raftAddr)
muxLn, err := net.Listen("tcp", cfg.RaftAddr)
if err != nil {
log.Fatalf("failed to listen on %s: %s", raftAddr, err.Error())
log.Fatalf("failed to listen on %s: %s", cfg.RaftAddr, err.Error())
}
mux, err := startNodeMux(muxLn)
mux, err := startNodeMux(cfg, muxLn)
if err != nil {
log.Fatalf("failed to start node mux: %s", err.Error())
}
@ -201,14 +83,14 @@ func main() {
log.Printf("Raft TCP mux Listener registered with %d", cluster.MuxRaftHeader)
// Create the store.
str, isNew, err := createStore(raftTn, dataPath)
str, err := createStore(cfg, raftTn)
if err != nil {
log.Fatalf("failed to create store: %s", err.Error())
}
// Determine join addresses
var joins []string
joins, err = determineJoinAddresses(isNew)
joins, err = determineJoinAddresses(cfg)
if err != nil {
log.Fatalf("unable to determine join addresses: %s", err.Error())
}
@ -219,25 +101,25 @@ func main() {
}
// Get any credential store.
credStr, err := credentialStore()
credStr, err := credentialStore(cfg)
if err != nil {
log.Fatalf("failed to get credential store: %s", err.Error())
}
// Create cluster service now, so nodes will be able to learn information about each other.
clstr, err := clusterService(mux.Listen(cluster.MuxClusterHeader), str)
clstr, err := clusterService(cfg, mux.Listen(cluster.MuxClusterHeader), str)
if err != nil {
log.Fatalf("failed to create cluster service: %s", err.Error())
}
log.Printf("Cluster TCP mux Listener registered with %d", cluster.MuxClusterHeader)
// Start the HTTP API server.
clstrDialer := tcp.NewDialer(cluster.MuxClusterHeader, nodeEncrypt, noNodeVerify)
clstrDialer := tcp.NewDialer(cluster.MuxClusterHeader, cfg.NodeEncrypt, cfg.NoNodeVerify)
clstrClient := cluster.NewClient(clstrDialer)
if err := clstrClient.SetLocal(raftAdv, clstr); err != nil {
if err := clstrClient.SetLocal(cfg.RaftAdv, clstr); err != nil {
log.Fatalf("failed to set cluster client local parameters: %s", err.Error())
}
httpServ, err := startHTTPService(str, clstrClient, credStr)
httpServ, err := startHTTPService(cfg, str, clstrClient, credStr)
if err != nil {
log.Fatalf("failed to start HTTP server: %s", err.Error())
}
@ -245,74 +127,31 @@ func main() {
// Register remaining status providers.
httpServ.RegisterStatus("cluster", clstr)
// Execute any requested join operation.
if len(joins) > 0 {
log.Println("join addresses are:", joins)
joinDur, err := time.ParseDuration(joinInterval)
tlsConfig := tls.Config{InsecureSkipVerify: cfg.NoHTTPVerify}
if cfg.X509CACert != "" {
asn1Data, err := ioutil.ReadFile(cfg.X509CACert)
if err != nil {
log.Fatalf("failed to parse Join interval %s: %s", joinInterval, err.Error())
}
tlsConfig := tls.Config{InsecureSkipVerify: noVerify}
if x509CACert != "" {
asn1Data, err := ioutil.ReadFile(x509CACert)
if err != nil {
log.Fatalf("ioutil.ReadFile failed: %s", err.Error())
}
tlsConfig.RootCAs = x509.NewCertPool()
ok := tlsConfig.RootCAs.AppendCertsFromPEM(asn1Data)
if !ok {
log.Fatalf("failed to parse root CA certificate(s) in %q", x509CACert)
}
}
// Add credentials to any join addresses, if necessary.
if credStr != nil && joinAs != "" {
var err error
pw, ok := credStr.Password(joinAs)
if !ok {
log.Fatalf("user %s does not exist in credential store", joinAs)
}
for i := range joins {
joins[i], err = cluster.AddUserInfo(joins[i], joinAs, pw)
if err != nil {
log.Fatalf("failed to use credential store join_as: %s", err.Error())
}
}
log.Println("added join_as identity from credential store")
}
if j, err := cluster.Join(joinSrcIP, joins, str.ID(), raftAdv, !raftNonVoter,
joinAttempts, joinDur, &tlsConfig); err != nil {
log.Fatalf("failed to join cluster at %s: %s", joins, err.Error())
} else {
log.Println("successfully joined cluster at", j)
log.Fatalf("ioutil.ReadFile failed: %s", err.Error())
}
} else if isNew {
// No prexisting state, and no joins to do. Node needs bootstrap itself.
log.Println("bootstraping single new node")
if err := str.Bootstrap(store.NewServer(str.ID(), str.Addr(), true)); err != nil {
log.Fatalf("failed to bootstrap single new node: %s", err.Error())
tlsConfig.RootCAs = x509.NewCertPool()
ok := tlsConfig.RootCAs.AppendCertsFromPEM(asn1Data)
if !ok {
log.Fatalf("failed to parse root CA certificate(s) in %q", cfg.X509CACert)
}
}
// Friendly final log message.
apiProto := "http"
if httpServ.HTTPS() {
apiProto = "https"
// Create the cluster!
nodes, err := str.Nodes()
if err != nil {
log.Fatalf("failed to get nodes %s", err.Error())
}
apiAdv := httpAddr
if httpAdv != "" {
apiAdv = httpAdv
if err := createCluster(cfg, joins, &tlsConfig, len(nodes) > 0, str, httpServ, credStr); err != nil {
log.Fatalf("clustering failure: %s", err.Error())
}
// Tell the user the node is ready, giving some advice on how to connect.
log.Printf("node HTTP API available at %s://%s", apiProto, apiAdv)
h, p, err := net.SplitHostPort(apiAdv)
if err != nil {
log.Fatalf("advertised address is not valid: %s", err.Error())
}
// Tell the user the node is ready for HTTP, giving some advice on how to connect.
log.Printf("node HTTP API available at %s", cfg.HTTPURL())
h, p, _ := net.SplitHostPort(cfg.HTTPAdv)
log.Printf("connect using the command-line tool via 'rqlite -H %s -P %s'", h, p)
// Block until signalled.
@ -328,37 +167,16 @@ func main() {
log.Println("rqlite server stopped")
}
func determineJoinAddresses(isNew bool) ([]string, error) {
apiAdv := httpAddr
if httpAdv != "" {
apiAdv = httpAdv
}
func determineJoinAddresses(cfg *Config) ([]string, error) {
var addrs []string
if joinAddr != "" {
// Explicit join addresses are first priority.
addrs = strings.Split(joinAddr, ",")
}
if discoID != "" {
if !isNew {
log.Printf("node has preexisting state, ignoring Discovery ID %s", discoID)
} else {
log.Printf("registering with Discovery Service at %s with ID %s", discoURL, discoID)
c := disco.New(discoURL)
r, err := c.Register(discoID, apiAdv)
if err != nil {
return nil, err
}
log.Println("Discovery Service responded with nodes:", r.Nodes)
addrs = append(addrs, r.Nodes...)
}
if cfg.JoinAddr != "" {
addrs = strings.Split(cfg.JoinAddr, ",")
}
// It won't work to attempt a self-join, so remove any such address.
var validAddrs []string
for i := range addrs {
if addrs[i] == apiAdv || addrs[i] == fmt.Sprintf("http://%s", apiAdv) {
if addrs[i] == cfg.HTTPAdv || addrs[i] == cfg.HTTPAddr {
log.Printf("ignoring join address %s equal to this node's address", addrs[i])
continue
}
@ -368,34 +186,32 @@ func determineJoinAddresses(isNew bool) ([]string, error) {
return validAddrs, nil
}
func createStore(ln *tcp.Layer, dataPath string) (*store.Store, bool, error) {
var err error
dataPath, err = filepath.Abs(dataPath)
func createStore(cfg *Config, ln *tcp.Layer) (*store.Store, error) {
dataPath, err := filepath.Abs(cfg.DataPath)
if err != nil {
return nil, false, fmt.Errorf("failed to determine absolute data path: %s", err.Error())
return nil, fmt.Errorf("failed to determine absolute data path: %s", err.Error())
}
dbConf := store.NewDBConfig(!onDisk)
dbConf.FKConstraints = fkConstraints
dbConf.OnDiskPath = onDiskPath
dbConf := store.NewDBConfig(!cfg.OnDisk)
dbConf.OnDiskPath = cfg.OnDiskPath
dbConf.FKConstraints = cfg.FKConstraints
str := store.New(ln, &store.Config{
DBConf: dbConf,
Dir: dataPath,
ID: idOrRaftAddr(),
Dir: cfg.DataPath,
ID: cfg.NodeID,
})
// Set optional parameters on store.
str.StartupOnDisk = onDiskStartup
str.SetRequestCompression(compressionBatch, compressionSize)
str.RaftLogLevel = raftLogLevel
str.ShutdownOnRemove = raftShutdownOnRemove
str.SnapshotThreshold = raftSnapThreshold
str.SnapshotInterval = raftSnapInterval
str.LeaderLeaseTimeout = raftLeaderLeaseTimeout
str.HeartbeatTimeout = raftHeartbeatTimeout
str.ElectionTimeout = raftElectionTimeout
str.ApplyTimeout = raftApplyTimeout
str.StartupOnDisk = cfg.OnDiskStartup
str.SetRequestCompression(cfg.CompressionBatch, cfg.CompressionSize)
str.RaftLogLevel = cfg.RaftLogLevel
str.ShutdownOnRemove = cfg.RaftShutdownOnRemove
str.SnapshotThreshold = cfg.RaftSnapThreshold
str.SnapshotInterval = cfg.RaftSnapInterval
str.LeaderLeaseTimeout = cfg.RaftLeaderLeaseTimeout
str.HeartbeatTimeout = cfg.RaftHeartbeatTimeout
str.ElectionTimeout = cfg.RaftElectionTimeout
str.ApplyTimeout = cfg.RaftApplyTimeout
isNew := store.IsNewNode(dataPath)
if isNew {
@ -404,23 +220,72 @@ func createStore(ln *tcp.Layer, dataPath string) (*store.Store, bool, error) {
log.Printf("preexisting node state detected in %s", dataPath)
}
return str, isNew, nil
return str, nil
}
func startHTTPService(str *store.Store, cltr *cluster.Client, credStr *auth.CredentialsStore) (*httpd.Service, error) {
func createDiscoService(cfg *Config, str *store.Store) (*disco.Service, error) {
var c disco.Client
var err error
var reader io.Reader
if cfg.DiscoConfig != "" {
// Open config file. If opening fails, assume the config is a JSON string.
cfgFile, err := os.Open(cfg.DiscoConfig)
if err != nil {
reader = bytes.NewReader([]byte(cfg.DiscoConfig))
} else {
reader = cfgFile
defer cfgFile.Close()
}
}
if cfg.DiscoMode == "consul" {
var consulCfg *consul.Config
if reader != nil {
consulCfg, err = consul.NewConfigFromReader(reader)
if err != nil {
return nil, err
}
}
c, err = consul.New(cfg.DiscoKey, consulCfg)
if err != nil {
return nil, err
}
} else if cfg.DiscoMode == "etcd" {
var etcdCfg *etcd.Config
if reader != nil {
etcdCfg, err = etcd.NewConfigFromReader(reader)
if err != nil {
return nil, err
}
}
c, err = etcd.New(cfg.DiscoKey, etcdCfg)
if err != nil {
return nil, err
}
} else {
return nil, fmt.Errorf("invalid disco service: %s", cfg.DiscoMode)
}
return disco.NewService(c, str), nil
}
func startHTTPService(cfg *Config, str *store.Store, cltr *cluster.Client, credStr *auth.CredentialsStore) (*httpd.Service, error) {
// Create HTTP server and load authentication information if required.
var s *httpd.Service
if credStr != nil {
s = httpd.New(httpAddr, str, cltr, credStr)
s = httpd.New(cfg.HTTPAddr, str, cltr, credStr)
} else {
s = httpd.New(httpAddr, str, cltr, nil)
s = httpd.New(cfg.HTTPAddr, str, cltr, nil)
}
s.CertFile = x509Cert
s.KeyFile = x509Key
s.TLS1011 = tls1011
s.Expvar = expvar
s.Pprof = pprofEnabled
s.CertFile = cfg.X509Cert
s.KeyFile = cfg.X509Key
s.TLS1011 = cfg.TLS1011
s.Expvar = cfg.Expvar
s.Pprof = cfg.PprofEnabled
s.BuildInfo = map[string]interface{}{
"commit": cmd.Commit,
"branch": cmd.Branch,
@ -433,40 +298,39 @@ func startHTTPService(str *store.Store, cltr *cluster.Client, credStr *auth.Cred
// startNodeMux starts the TCP mux on the given listener, which should be already
// bound to the relevant interface.
func startNodeMux(ln net.Listener) (*tcp.Mux, error) {
func startNodeMux(cfg *Config, ln net.Listener) (*tcp.Mux, error) {
var adv net.Addr
var err error
if raftAdv != "" {
adv, err = net.ResolveTCPAddr("tcp", raftAdv)
if err != nil {
return nil, fmt.Errorf("failed to resolve advertise address %s: %s", raftAdv, err.Error())
}
adv, err = net.ResolveTCPAddr("tcp", cfg.RaftAdv)
if err != nil {
return nil, fmt.Errorf("failed to resolve advertise address %s: %s", cfg.RaftAdv, err.Error())
}
var mux *tcp.Mux
if nodeEncrypt {
log.Printf("enabling node-to-node encryption with cert: %s, key: %s", nodeX509Cert, nodeX509Key)
mux, err = tcp.NewTLSMux(ln, adv, nodeX509Cert, nodeX509Key, nodeX509CACert)
if cfg.NodeEncrypt {
log.Printf("enabling node-to-node encryption with cert: %s, key: %s", cfg.NodeX509Cert, cfg.NodeX509Key)
mux, err = tcp.NewTLSMux(ln, adv, cfg.NodeX509Cert, cfg.NodeX509Key, cfg.NodeX509CACert)
} else {
mux, err = tcp.NewMux(ln, adv)
}
if err != nil {
return nil, fmt.Errorf("failed to create node-to-node mux: %s", err.Error())
}
mux.InsecureSkipVerify = noNodeVerify
mux.InsecureSkipVerify = cfg.NoNodeVerify
go mux.Serve()
return mux, nil
}
func credentialStore() (*auth.CredentialsStore, error) {
if authFile == "" {
func credentialStore(cfg *Config) (*auth.CredentialsStore, error) {
if cfg.AuthFile == "" {
return nil, nil
}
f, err := os.Open(authFile)
f, err := os.Open(cfg.AuthFile)
if err != nil {
return nil, fmt.Errorf("failed to open authentication file %s: %s", authFile, err.Error())
return nil, fmt.Errorf("failed to open authentication file %s: %s", cfg.AuthFile, err.Error())
}
cs := auth.NewCredentialsStore()
@ -476,14 +340,10 @@ func credentialStore() (*auth.CredentialsStore, error) {
return cs, nil
}
func clusterService(tn cluster.Transport, db cluster.Database) (*cluster.Service, error) {
func clusterService(cfg *Config, tn cluster.Transport, db cluster.Database) (*cluster.Service, error) {
c := cluster.New(tn, db)
apiAddr := httpAddr
if httpAdv != "" {
apiAddr = httpAdv
}
c.SetAPIAddr(apiAddr)
c.EnableHTTPS(x509Cert != "" && x509Key != "") // Conditions met for an HTTPS API
c.SetAPIAddr(cfg.HTTPAdv)
c.EnableHTTPS(cfg.X509Cert != "" && cfg.X509Key != "") // Conditions met for an HTTPS API
if err := c.Open(); err != nil {
return nil, err
@ -491,52 +351,107 @@ func clusterService(tn cluster.Transport, db cluster.Database) (*cluster.Service
return c, nil
}
func idOrRaftAddr() string {
if nodeID != "" {
return nodeID
func createCluster(cfg *Config, joins []string, tlsConfig *tls.Config, hasPeers bool, str *store.Store,
httpServ *httpd.Service, credStr *auth.CredentialsStore) error {
if len(joins) == 0 && cfg.DiscoMode == "" && !hasPeers {
// Brand new node, told to bootstrap itself. So do it.
log.Println("bootstraping single new node")
if err := str.Bootstrap(store.NewServer(str.ID(), str.Addr(), true)); err != nil {
return fmt.Errorf("failed to bootstrap single new node: %s", err.Error())
}
return nil
}
return raftAdv
}
// prof stores the file locations of active profiles.
var prof struct {
cpu *os.File
mem *os.File
}
if len(joins) > 0 {
// Explicit join addresses supplied, so use them.
log.Println("explicit join addresses are:", joins)
// startProfile initializes the CPU and memory profile, if specified.
func startProfile(cpuprofile, memprofile string) {
if cpuprofile != "" {
f, err := os.Create(cpuprofile)
if err := addJoinCreds(joins, cfg.JoinAs, credStr); err != nil {
return fmt.Errorf("failed too add auth creds: %s", err.Error())
}
j, err := cluster.Join(cfg.JoinSrcIP, joins, str.ID(), cfg.RaftAdv, !cfg.RaftNonVoter,
cfg.JoinAttempts, cfg.JoinInterval, tlsConfig)
if err != nil {
log.Fatalf("failed to create CPU profile file at %s: %s", cpuprofile, err.Error())
return fmt.Errorf("failed to join cluster at %s: %s", joins, err.Error())
}
log.Printf("writing CPU profile to: %s\n", cpuprofile)
prof.cpu = f
pprof.StartCPUProfile(prof.cpu)
log.Println("successfully joined cluster at", j)
return nil
}
if cfg.DiscoMode == "" {
// No more clustering techniques to try. Node will just sit, probably using
// existing Raft state.
return nil
}
log.Printf("discovery mode: %s", cfg.DiscoMode)
discoService, err := createDiscoService(cfg, str)
if err != nil {
return fmt.Errorf("failed to start discovery service: %s", err.Error())
}
if memprofile != "" {
f, err := os.Create(memprofile)
if !hasPeers {
log.Println("no preexisting nodes, registering with discovery service")
leader, addr, err := discoService.Register(str.ID(), cfg.HTTPURL(), cfg.RaftAdv)
if err != nil {
log.Fatalf("failed to create memory profile file at %s: %s", cpuprofile, err.Error())
return fmt.Errorf("failed to register with discovery service: %s", err.Error())
}
log.Printf("writing memory profile to: %s\n", memprofile)
prof.mem = f
runtime.MemProfileRate = 4096
if leader {
log.Println("node registered as leader using discovery service")
if err := str.Bootstrap(store.NewServer(str.ID(), str.Addr(), true)); err != nil {
return fmt.Errorf("failed to bootstrap single new node: %s", err.Error())
}
} else {
for {
log.Printf("discovery service returned %s as join address", addr)
if err := addJoinCreds([]string{addr}, cfg.JoinAs, credStr); err != nil {
return fmt.Errorf("failed too add auth creds: %s", err.Error())
}
if j, err := cluster.Join(cfg.JoinSrcIP, []string{addr}, str.ID(), cfg.RaftAdv, !cfg.RaftNonVoter,
cfg.JoinAttempts, cfg.JoinInterval, tlsConfig); err != nil {
log.Printf("failed to join cluster at %s: %s", addr, err.Error())
time.Sleep(time.Second)
_, addr, err = discoService.Register(str.ID(), cfg.HTTPURL(), cfg.RaftAdv)
if err != nil {
log.Printf("failed to get updated leader: %s", err.Error())
}
continue
} else {
log.Println("successfully joined cluster at", j)
break
}
}
}
} else {
log.Println("preexisting node configuration detected, not registering with discovery service")
}
go discoService.StartReporting(cfg.NodeID, cfg.HTTPURL(), cfg.RaftAdv)
httpServ.RegisterStatus("disco", discoService)
return nil
}
// stopProfile closes the CPU and memory profiles if they are running.
func stopProfile() {
if prof.cpu != nil {
pprof.StopCPUProfile()
prof.cpu.Close()
log.Println("CPU profiling stopped")
// addJoinCreds adds credentials to any join addresses, if necessary.
func addJoinCreds(joins []string, joinAs string, credStr *auth.CredentialsStore) error {
if credStr == nil || joinAs == "" {
return nil
}
if prof.mem != nil {
pprof.Lookup("heap").WriteTo(prof.mem, 0)
prof.mem.Close()
log.Println("memory profiling stopped")
pw, ok := credStr.Password(joinAs)
if !ok {
return fmt.Errorf("user %s does not exist in credential store", joinAs)
}
var err error
for i := range joins {
joins[i], err = cluster.AddUserInfo(joins[i], joinAs, pw)
if err != nil {
return fmt.Errorf("failed to use credential store join_as: %s", err.Error())
}
}
return nil
}

@ -0,0 +1,51 @@
package main
import (
"log"
"os"
"runtime"
"runtime/pprof"
)
// prof stores the file locations of active profiles.
var prof struct {
cpu *os.File
mem *os.File
}
// startProfile initializes the CPU and memory profile, if specified.
func startProfile(cpuprofile, memprofile string) {
if cpuprofile != "" {
f, err := os.Create(cpuprofile)
if err != nil {
log.Fatalf("failed to create CPU profile file at %s: %s", cpuprofile, err.Error())
}
log.Printf("writing CPU profile to: %s\n", cpuprofile)
prof.cpu = f
pprof.StartCPUProfile(prof.cpu)
}
if memprofile != "" {
f, err := os.Create(memprofile)
if err != nil {
log.Fatalf("failed to create memory profile file at %s: %s", cpuprofile, err.Error())
}
log.Printf("writing memory profile to: %s\n", memprofile)
prof.mem = f
runtime.MemProfileRate = 4096
}
}
// stopProfile closes the CPU and memory profiles if they are running.
func stopProfile() {
if prof.cpu != nil {
pprof.StopCPUProfile()
prof.cpu.Close()
log.Println("CPU profiling stopped")
}
if prof.mem != nil {
pprof.Lookup("heap").WriteTo(prof.mem, 0)
prof.mem.Close()
log.Println("memory profiling stopped")
}
}

@ -3,7 +3,7 @@ package cmd
// These variables are populated via the Go linker.
var (
// Version of rqlite.
Version = "6"
Version = "7"
// Commit this code was built at.
Commit = "unknown"

@ -1,92 +0,0 @@
// Package disco controls interaction with the rqlite Discovery service
package disco
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"net/http"
"os"
)
// Response represents the response returned by a Discovery Service.
type Response struct {
CreatedAt string `json:"created_at"`
DiscoID string `json:"disco_id"`
Nodes []string `json:"nodes"`
}
// Client provides a Discovery Service client.
type Client struct {
url string
logger *log.Logger
}
// New returns an initialized Discovery Service client.
func New(url string) *Client {
return &Client{
url: url,
logger: log.New(os.Stderr, "[discovery] ", log.LstdFlags),
}
}
// URL returns the Discovery Service URL used by this client.
func (c *Client) URL() string {
return c.url
}
// Register attempts to register with the Discovery Service, using the given
// address.
func (c *Client) Register(id, addr string) (*Response, error) {
m := map[string]string{
"addr": addr,
}
url := c.registrationURL(c.url, id)
client := &http.Client{}
client.CheckRedirect = func(req *http.Request, via []*http.Request) error {
return http.ErrUseLastResponse
}
for {
b, err := json.Marshal(m)
if err != nil {
return nil, err
}
c.logger.Printf("discovery client attempting registration of %s at %s", addr, url)
resp, err := client.Post(url, "application/json", bytes.NewReader(b))
if err != nil {
return nil, err
}
defer resp.Body.Close()
b, err = ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
switch resp.StatusCode {
case http.StatusOK:
r := &Response{}
if err := json.Unmarshal(b, r); err != nil {
return nil, err
}
c.logger.Printf("discovery client successfully registered %s at %s", addr, url)
return r, nil
case http.StatusMovedPermanently:
url = resp.Header.Get("location")
c.logger.Println("discovery client redirecting to", url)
continue
default:
return nil, errors.New(resp.Status)
}
}
}
func (c *Client) registrationURL(url, id string) string {
return fmt.Sprintf("%s/%s", url, id)
}

@ -1,206 +0,0 @@
package disco
import (
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"testing"
)
// Test_NewClient tests that a new disco client can be instantiated. Nothing more.
func Test_NewClient(t *testing.T) {
c := New("https://discovery.rqlite.com")
if c == nil {
t.Fatal("failed to create new disco client")
}
if c.URL() != "https://discovery.rqlite.com" {
t.Fatal("configured address of disco service is incorrect")
}
}
// Test_ClientRegisterBadRequest tests how the client responds to a 400 from the Discovery Service.
func Test_ClientRegisterBadRequest(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
w.WriteHeader(http.StatusBadRequest)
}))
defer ts.Close()
c := New(ts.URL)
_, err := c.Register("1234", "http://127.0.0.1")
if err == nil {
t.Fatalf("failed to receive error on 400 from server")
}
}
// Test_ClientRegisterNotFound tests how the client responds to a 404 from the Discovery Service.
func Test_ClientRegisterNotFound(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
w.WriteHeader(http.StatusNotFound)
}))
defer ts.Close()
c := New(ts.URL)
_, err := c.Register("1234", "http://127.0.0.1")
if err == nil {
t.Fatalf("failed to receive error on 404 from server")
}
}
// Test_ClientRegisterForbidden tests how the client responds to a 403 from the Discovery Service.
func Test_ClientRegisterForbidden(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
w.WriteHeader(http.StatusForbidden)
}))
defer ts.Close()
c := New(ts.URL)
_, err := c.Register("1234", "http://127.0.0.1")
if err == nil {
t.Fatalf("failed to receive error on 403 from server")
}
}
// Test_ClientRegisterRequestOK tests how the client responds to a 200 from the Discovery Service.
func Test_ClientRegisterRequestOK(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
if r.URL.String() != "/1234" {
t.Fatalf("Request URL is wrong, got: %s", r.URL.String())
}
b, err := ioutil.ReadAll(r.Body)
if err != nil {
t.Fatalf("failed to read request from client: %s", err.Error())
}
m := map[string]string{}
if err := json.Unmarshal(b, &m); err != nil {
t.Fatalf("failed to unmarshal request from client: %s", err.Error())
}
if m["addr"] != "http://127.0.0.1" {
t.Fatalf("incorrect join address supplied by client: %s", m["addr"])
}
fmt.Fprintln(w, `{"created_at": "2017-02-17 04:49:05.079125", "disco_id": "68d6c7cc-f4cc-11e6-a170-2e79ea0be7b1", "nodes": ["http://127.0.0.1"]}`)
}))
defer ts.Close()
c := New(ts.URL)
disco, err := c.Register("1234", "http://127.0.0.1")
if err != nil {
t.Fatalf("failed to register: %s", err.Error())
}
if len(disco.Nodes) != 1 {
t.Fatalf("failed to receive correct list of nodes, got %v", disco.Nodes)
}
}
// Test_ClientRegisterRequestOK tests how the client responds to a redirect from the Discovery Service.
func Test_ClientRegisterRequestRedirectOK(t *testing.T) {
ts1 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
if r.URL.String() != "/1234" {
t.Fatalf("Request URL is wrong, got: %s", r.URL.String())
}
b, err := ioutil.ReadAll(r.Body)
if err != nil {
t.Fatalf("failed to read request from client: %s", err.Error())
}
m := map[string]string{}
if err := json.Unmarshal(b, &m); err != nil {
t.Fatalf("failed to unmarshal request from client: %s", err.Error())
}
if m["addr"] != "http://127.0.0.1" {
t.Fatalf("incorrect join address supplied by client: %s", m["addr"])
}
fmt.Fprintln(w, `{"created_at": "2017-02-17 04:49:05.079125", "disco_id": "68d6c7cc-f4cc-11e6-a170-2e79ea0be7b1", "nodes": ["http://127.0.0.1"]}`)
}))
defer ts1.Close()
ts2 := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
http.Redirect(w, r, ts1.URL+"/1234", http.StatusMovedPermanently)
}))
c := New(ts2.URL)
disco, err := c.Register("1234", "http://127.0.0.1")
if err != nil {
t.Fatalf("failed to register: %s", err.Error())
}
if len(disco.Nodes) != 1 {
t.Fatalf("failed to receive correct list of nodes, got %v", disco.Nodes)
}
}
// Test_ClientRegisterFollowerOK tests how the client responds to getting a list of nodes it can join.
func Test_ClientRegisterFollowerOK(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
fmt.Fprintln(w, `{"created_at": "2017-02-17 04:49:05.079125", "disco_id": "68d6c7cc-f4cc-11e6-a170-2e79ea0be7b1", "nodes": ["http://1.1.1.1", "http://2.2.2.2"]}`)
}))
defer ts.Close()
c := New(ts.URL)
disco, err := c.Register("1234", "http://2.2.2.2")
if err != nil {
t.Fatalf("failed to register: %s", err.Error())
}
if len(disco.Nodes) != 2 {
t.Fatalf("failed to receive non-empty list of nodes")
}
if disco.Nodes[0] != `http://1.1.1.1` {
t.Fatalf("got incorrect node, got %v", disco.Nodes[0])
}
}
// Test_ClientRegisterFollowerMultiOK tests how the client responds to getting a list of nodes it can join.
func Test_ClientRegisterFollowerMultiOK(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
if r.Method != "POST" {
t.Fatalf("Client did not use POST")
}
fmt.Fprintln(w, `{"created_at": "2017-02-17 04:49:05.079125", "disco_id": "68d6c7cc-f4cc-11e6-a170-2e79ea0be7b1", "nodes": ["http://1.1.1.1", "http://2.2.2.2", "http://3.3.3.3"]}`)
}))
defer ts.Close()
c := New(ts.URL)
disco, err := c.Register("1234", "http://3.3.3.3")
if err != nil {
t.Fatalf("failed to register: %s", err.Error())
}
if len(disco.Nodes) != 3 {
t.Fatalf("failed to receive non-empty list of nodes")
}
if disco.Nodes[0] != `http://1.1.1.1` {
t.Fatalf("got incorrect first node, got %v", disco.Nodes[0])
}
if disco.Nodes[1] != `http://2.2.2.2` {
t.Fatalf("got incorrect second node, got %v", disco.Nodes[1])
}
if disco.Nodes[2] != `http://3.3.3.3` {
t.Fatalf("got incorrect third node, got %v", disco.Nodes[1])
}
}

@ -0,0 +1,147 @@
package disco
import (
"fmt"
"log"
"math/rand"
"os"
"sync"
"time"
)
func init() {
rand.Seed(time.Now().UnixNano())
}
const (
leaderChanLen = 5 // Support any fast back-to-back leadership changes.
)
// Client is the interface discovery clients must implement.
type Client interface {
GetLeader() (id string, apiAddr string, addr string, ok bool, e error)
InitializeLeader(id, apiAddr, addr string) (bool, error)
SetLeader(id, apiAddr, addr string) error
fmt.Stringer
}
// Store is the interface the consensus system must implement.
type Store interface {
IsLeader() bool
RegisterLeaderChange(c chan<- struct{})
}
// Service represents a Discovery Service instance.
type Service struct {
RegisterInterval time.Duration
ReportInterval time.Duration
c Client
s Store
logger *log.Logger
mu sync.Mutex
lastContact time.Time
}
// NewService returns an instantiated Discovery Service.
func NewService(c Client, s Store) *Service {
return &Service{
c: c,
s: s,
RegisterInterval: 3 * time.Second,
ReportInterval: 30 * time.Second,
logger: log.New(os.Stderr, "[disco] ", log.LstdFlags),
}
}
// Register registers this node with the discovery service. It will block
// until a) the node registers itself as leader, b) learns of another node
// it can use to join the cluster, or c) an unrecoverable error occurs.
func (s *Service) Register(id, apiAddr, addr string) (bool, string, error) {
for {
_, cAPIAddr, _, ok, err := s.c.GetLeader()
if err != nil {
s.logger.Printf("failed to get leader: %s", err.Error())
}
if ok {
return false, cAPIAddr, nil
}
ok, err = s.c.InitializeLeader(id, apiAddr, addr)
if err != nil {
s.logger.Printf("failed to initialize as Leader: %s", err.Error())
}
if ok {
s.updateContact(time.Now())
return true, apiAddr, nil
}
time.Sleep(jitter(s.RegisterInterval))
}
}
// StartReporting reports the details of this node to the discovery service,
// if, and only if, this node is the Leader. The service reports whenever
// a leadership change is detected, and also periodically, to deal with any
// intermittent issues that may have caused the Leadership information
// to go stale.
func (s *Service) StartReporting(id, apiAddr, addr string) chan struct{} {
ticker := time.NewTicker(10 * s.ReportInterval)
obCh := make(chan struct{}, leaderChanLen)
s.s.RegisterLeaderChange(obCh)
update := func(changed bool) {
if s.s.IsLeader() {
if err := s.c.SetLeader(id, apiAddr, addr); err != nil {
s.logger.Printf("failed to update discovery service with Leader details: %s",
err.Error())
}
if changed {
s.logger.Printf("updated Leader API address to %s due to leadership change",
apiAddr)
}
s.updateContact(time.Now())
}
}
done := make(chan struct{})
go func() {
for {
select {
case <-ticker.C:
update(false)
case <-obCh:
update(true)
case <-done:
return
}
}
}()
return done
}
// Stats returns diagnostic information on the disco service.
func (s *Service) Stats() (map[string]interface{}, error) {
s.mu.Lock()
defer s.mu.Unlock()
return map[string]interface{}{
"name": s.c.String(),
"register_interval": s.RegisterInterval,
"report_interval": s.ReportInterval,
"last_contact": s.lastContact,
}, nil
}
func (s *Service) updateContact(t time.Time) {
s.mu.Lock()
defer s.mu.Unlock()
s.lastContact = t
}
func jitter(duration time.Duration) time.Duration {
return duration + time.Duration(rand.Float64()*float64(duration))
}
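
The `Service` above is deliberately generic: anything satisfying `Client` (such as the Consul and etcd clients pulled in via `go.mod` below) and `Store` (the Raft store, modified later in this diff) can be plugged in. The following is a minimal sketch, not part of the PR, of how a caller might wire the pieces together; the `example` package, the `runDisco` helper, and the assumption that the package is importable as `github.com/rqlite/rqlite/disco` are all illustrative.

```go
package example

import (
	"log"

	"github.com/rqlite/rqlite/disco" // assumed import path for the package shown above
)

// runDisco is a hypothetical helper: it registers the node with the
// discovery service and, if the node claimed leadership, starts reporting
// its details. Concrete Client and Store implementations must be supplied.
func runDisco(c disco.Client, s disco.Store, id, apiAddr, raftAddr string) (chan struct{}, error) {
	svc := disco.NewService(c, s)

	// Blocks until this node becomes the Leader, or learns of an existing Leader.
	leader, leaderAPIAddr, err := svc.Register(id, apiAddr, raftAddr)
	if err != nil {
		return nil, err
	}

	if leader {
		// This node is the Leader; keep the discovery service updated.
		return svc.StartReporting(id, apiAddr, raftAddr), nil
	}

	// Another node is Leader; the caller should join the cluster via its API address.
	log.Printf("existing cluster found, join via %s", leaderAPIAddr)
	return nil, nil
}
```

Note that `Register` returns the Leader's API address rather than performing the join itself, which keeps the discovery logic independent of the cluster-join transport.
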

@ -0,0 +1,172 @@
package disco
import (
"sync"
"testing"
"time"
)
func Test_NewService(t *testing.T) {
s := NewService(&mockClient{}, &mockStore{})
if s == nil {
t.Fatalf("service is nil")
}
}
func Test_RegisterGetLeaderOK(t *testing.T) {
m := &mockClient{}
m.getLeaderFn = func() (id string, apiAddr string, addr string, ok bool, e error) {
return "2", "localhost:4003", "localhost:4004", true, nil
}
m.initializeLeaderFn = func(tID, tAPIAddr, tAddr string) (bool, error) {
t.Fatalf("Leader initialized unexpectedly")
return false, nil
}
c := &mockStore{}
s := NewService(m, c)
s.RegisterInterval = 10 * time.Millisecond
ok, addr, err := s.Register("1", "localhost:4001", "localhost:4002")
if err != nil {
t.Fatalf("error registering with disco: %s", err.Error())
}
if ok {
t.Fatalf("registered as leader unexpectedly")
}
if exp, got := "localhost:4003", addr; exp != got {
t.Fatalf("returned addressed incorrect, exp %s, got %s", exp, got)
}
}
func Test_RegisterInitializeLeader(t *testing.T) {
m := &mockClient{}
m.getLeaderFn = func() (id string, apiAddr string, addr string, ok bool, e error) {
return "", "", "", false, nil
}
m.initializeLeaderFn = func(tID, tAPIAddr, tAddr string) (bool, error) {
if tID != "1" || tAPIAddr != "localhost:4001" || tAddr != "localhost:4002" {
t.Fatalf("wrong values passed to InitializeLeader")
}
return true, nil
}
c := &mockStore{}
s := NewService(m, c)
s.RegisterInterval = 10 * time.Millisecond
ok, addr, err := s.Register("1", "localhost:4001", "localhost:4002")
if err != nil {
t.Fatalf("error registering with disco: %s", err.Error())
}
if !ok {
t.Fatalf("failed to register as expected")
}
if exp, got := "localhost:4001", addr; exp != got {
t.Fatalf("returned addressed incorrect, exp %s, got %s", exp, got)
}
}
func Test_StartReportingTimer(t *testing.T) {
// WaitGroups won't work because we don't know how many times
// setLeaderFn will be called.
calledCh := make(chan struct{}, 10)
m := &mockClient{}
m.setLeaderFn = func(id, apiAddr, addr string) error {
if id != "1" || apiAddr != "localhost:4001" || addr != "localhost:4002" {
t.Fatalf("wrong values passed to SetLeader")
}
calledCh <- struct{}{}
return nil
}
c := &mockStore{}
c.isLeaderFn = func() bool {
return true
}
s := NewService(m, c)
s.ReportInterval = 10 * time.Millisecond
go s.StartReporting("1", "localhost:4001", "localhost:4002")
<-calledCh
}
func Test_StartReportingChange(t *testing.T) {
var wg sync.WaitGroup
m := &mockClient{}
m.setLeaderFn = func(id, apiAddr, addr string) error {
defer wg.Done()
if id != "1" || apiAddr != "localhost:4001" || addr != "localhost:4002" {
t.Fatalf("wrong values passed to SetLeader")
}
return nil
}
c := &mockStore{}
c.isLeaderFn = func() bool {
return true
}
var ch chan<- struct{}
c.registerLeaderChangeFn = func(c chan<- struct{}) {
ch = c
}
wg.Add(1)
s := NewService(m, c)
s.ReportInterval = 10 * time.Minute // Nothing will happen due to timer.
done := s.StartReporting("1", "localhost:4001", "localhost:4002")
// Signal a leadership change.
ch <- struct{}{}
wg.Wait()
close(done)
}
type mockClient struct {
getLeaderFn func() (id string, apiAddr string, addr string, ok bool, e error)
initializeLeaderFn func(id, apiAddr, addr string) (bool, error)
setLeaderFn func(id, apiAddr, addr string) error
}
func (m *mockClient) GetLeader() (id string, apiAddr string, addr string, ok bool, e error) {
if m.getLeaderFn != nil {
return m.getLeaderFn()
}
return
}
func (m *mockClient) InitializeLeader(id, apiAddr, addr string) (bool, error) {
if m.initializeLeaderFn != nil {
return m.initializeLeaderFn(id, apiAddr, addr)
}
return false, nil
}
func (m *mockClient) SetLeader(id, apiAddr, addr string) error {
if m.setLeaderFn != nil {
return m.setLeaderFn(id, apiAddr, addr)
}
return nil
}
func (m *mockClient) String() string {
return "mock"
}
type mockStore struct {
isLeaderFn func() bool
registerLeaderChangeFn func(c chan<- struct{})
}
func (m *mockStore) IsLeader() bool {
if m.isLeaderFn != nil {
return m.isLeaderFn()
}
return false
}
func (m *mockStore) RegisterLeaderChange(c chan<- struct{}) {
if m.registerLeaderChangeFn != nil {
m.registerLeaderChangeFn(c)
}
}

@ -7,20 +7,30 @@ require (
github.com/armon/go-metrics v0.3.10 // indirect
github.com/fatih/color v1.13.0 // indirect
github.com/golang/protobuf v1.5.2
github.com/hashicorp/go-hclog v1.0.0 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v1.1.0 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
github.com/hashicorp/go-msgpack v1.1.5 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/hashicorp/raft v1.3.3
github.com/hashicorp/serf v0.9.7 // indirect
github.com/labstack/gommon v0.3.1 // indirect
github.com/mattn/go-colorable v0.1.12 // indirect
github.com/mitchellh/mapstructure v1.4.3 // indirect
github.com/mkideal/cli v0.2.7
github.com/mkideal/pkg v0.1.3
github.com/rqlite/go-sqlite3 v1.22.0
github.com/rqlite/raft-boltdb v0.0.0-20211018013422-771de01086ce
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e // indirect
github.com/rqlite/rqlite-disco-clients v0.0.0-20220119143504-8b316f2862ba
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.7.0 // indirect
go.uber.org/zap v1.20.0 // indirect
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce
golang.org/x/net v0.0.0-20220114011407-0dd24b26b47d
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
golang.org/x/text v0.3.7 // indirect
google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5 // indirect
google.golang.org/grpc v1.43.0 // indirect
google.golang.org/protobuf v1.27.1
)

go.sum

@ -1,88 +1,202 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
github.com/Bowery/prompt v0.0.0-20190916142128-fa8279994f75 h1:xGHheKK44eC6K0u5X+DZW/fRaR1LnDdqPHMZMWx5fv8=
github.com/Bowery/prompt v0.0.0-20190916142128-fa8279994f75/go.mod h1:4/6eNcqZ09BZ9wLK3tZOjBA1nDj+B0728nlX5YRlSmQ=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/DataDog/datadog-go v2.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/DataDog/datadog-go v3.2.0+incompatible/go.mod h1:LButxg5PwREeZtORoXG3tL4fMGNddJ+vMq1mwgfaqoQ=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/armon/circbuf v0.0.0-20150827004946-bbbad097214e/go.mod h1:3U/XgcO3hCbHZ8TKRvWD2dDTCfh9M9ya+I9JpbB7O8o=
github.com/armon/go-metrics v0.0.0-20180917152333-f0300d1749da/go.mod h1:Q73ZrmVTwzkszR9V5SSuryQ31EELlFMUz1kKyl939pY=
github.com/armon/go-metrics v0.0.0-20190430140413-ec5e00d3c878/go.mod h1:3AMJUQhVx52RsWOnlkpikZr01T/yAVN2gn0861vByNg=
github.com/armon/go-metrics v0.3.10 h1:FR+drcQStOe+32sYyJYyZ7FIdgoGGBnwLl+flodp8Uo=
github.com/armon/go-metrics v0.3.10/go.mod h1:4O98XIr/9W0sxpJ8UaYkvjk10Iff7SnFrb4QAOwNTFc=
github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/armon/go-radix v1.0.0/go.mod h1:ufUuZ+zHj4x4TnLV4JWEpy2hxWSpsRywHrMgIH9cCH8=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
github.com/boltdb/bolt v1.3.1/go.mod h1:clJnj/oiGkjum5o1McbSZDSLxVThjynRyGBgiAx27Ps=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/circonus-labs/circonus-gometrics v2.3.1+incompatible/go.mod h1:nmEj6Dob7S7YxXgwXpfOuvO54S+tGdZdw9fuRZt25Ag=
github.com/circonus-labs/circonusllhist v0.1.3/go.mod h1:kMXHVDlOchFAehlya5ePtbp5jckzBHf4XRpQvBOLI+I=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/coreos/go-semver v0.3.0 h1:wkHLiw0WNATZnSG7epLsujiMCgPAc9xhjJ4tgnAxmfM=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-systemd/v22 v22.3.2 h1:D9/bQk5vlXQFZ6Kwuu6zaiXJ9oTPe68++AzAJc1DzSI=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/go-control-plane v0.9.9-0.20201210154907-fd9021fe5dad/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.mod h1:cXg6YxExXjJnVBQHBLXeUAgxn2UodCpnH306RInaBQk=
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/fatih/color v1.9.0/go.mod h1:eQcE1qtQxscV5RaZvpXrrb8Drkc3/DdQ+uUYCNjL+zU=
github.com/fatih/color v1.13.0 h1:8LOYc1KYPPmyKMuN8QV2DNRWNbLo6LZ0iLs8+mlH53w=
github.com/fatih/color v1.13.0/go.mod h1:kLAiJbzzSOZDVNGyDpeOxJ47H46qBXwg5ILebYFFOfk=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.3/go.mod h1:vzj43D7+SQXF/4pzW/hwtAqwc6iTitCiVSaWz5lYuqw=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.2/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c h1:964Od4U6p2jUkFxvCydnIczKteheJEzHRToSGK3Bnlw=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.4/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.5 h1:Khx7svrCpmxxtHBq5j2mp/xVjsi8hQMfNLvJFAlrGgU=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgfV/d3M/q6VIi02HzZEHgUlZvzk=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/hashicorp/consul/api v1.12.0 h1:k3y1FYv6nuKyNTqj6w9gXOx5r5CfLj/k/euUeBXj1OY=
github.com/hashicorp/consul/api v1.12.0/go.mod h1:6pVBMo0ebnYdt2S3H87XhekM/HHrUoTD2XXb/VrZVy0=
github.com/hashicorp/consul/sdk v0.8.0 h1:OJtKBtEjboEZvG6AOUdh4Z1Zbyu0WcxQ0qatRrZHTVU=
github.com/hashicorp/consul/sdk v0.8.0/go.mod h1:GBvyrGALthsZObzUGsfgHZQDXjg4lOjagTIwIR1vPms=
github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.0/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-cleanhttp v0.5.2 h1:035FKYIWjmULyFRBKPs8TBQoi0x6d9G4xc9neXJWAZQ=
github.com/hashicorp/go-cleanhttp v0.5.2/go.mod h1:kO/YDlP8L1346E6Sodw+PrpBSV4/SoxCXGY6BqNFT48=
github.com/hashicorp/go-hclog v0.9.1/go.mod h1:5CU+agLiy3J7N7QjHK5d05KxGsuXiQLrjA0H7acj2lQ=
github.com/hashicorp/go-hclog v1.0.0 h1:bkKf0BeBXcSYa7f5Fyi9gMuQ8gNsxeiNpZjR6VxNZeo=
github.com/hashicorp/go-hclog v1.0.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v0.12.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-hclog v1.1.0 h1:QsGcniKx5/LuX2eYoeL+Np3UKYPNaN7YKpTh29h8rbw=
github.com/hashicorp/go-hclog v1.1.0/go.mod h1:whpDNt7SSdeAju8AWKIWsul05p54N/39EeqMAyrmvFQ=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-immutable-radix v1.3.1 h1:DKHmCUm2hRBK510BaiZlwvpD40f8bJFeZnpfm2KLowc=
github.com/hashicorp/go-immutable-radix v1.3.1/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-msgpack v0.5.5/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
github.com/hashicorp/go-msgpack v1.1.5 h1:9byZdVjKTe5mce63pRVNP1L7UAmdHOTEMGehn6KvJWs=
github.com/hashicorp/go-msgpack v1.1.5/go.mod h1:gWVc3sv/wbDmR3rQsj1CAktEZzoz1YNK9NfGLXJ69/4=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
github.com/hashicorp/go-multierror v1.1.0 h1:B9UzwGQJehnUY1yNrnwREHc3fGbC2xefo8g4TbElacI=
github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-retryablehttp v0.5.3/go.mod h1:9B5zBasrRhHXnJnui7y6sL7es7NDiJgTc6Er0maI1Xs=
github.com/hashicorp/go-uuid v1.0.0 h1:RS8zrF7PhGwyNPOtxSClXXj9HA8feRnJzgnI1RJCSnM=
github.com/hashicorp/go-rootcerts v1.0.2 h1:jzhAVGtqPKbwpyCPELlgNWhE1znq+qwJtW5Oi2viEzc=
github.com/hashicorp/go-rootcerts v1.0.2/go.mod h1:pqUvnprVnM5bf7AOirdbb01K4ccR319Vf4pU3K5EGc8=
github.com/hashicorp/go-sockaddr v1.0.0 h1:GeH6tui99pF4NJgfnhp+L6+FfobzVW3Ah46sLo0ICXs=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
github.com/hashicorp/go-uuid v1.0.0/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/go-uuid v1.0.1 h1:fv1ep09latC32wFoVwnqcnKJGnMSdBanPczbHAYm1BE=
github.com/hashicorp/go-uuid v1.0.1/go.mod h1:6SBZvOh/SIDV7/2o3Jml5SYk/TvGqwFJ/bN7x4byOro=
github.com/hashicorp/golang-lru v0.5.0/go.mod h1:/m3WP610KZHVQ1SGc6re/UDhFvYD7pJ4Ao+sR/qLZy8=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/logutils v1.0.0/go.mod h1:QIAnNjmIWmVIIkWDTG1z5v++HQmx9WQRO+LraFDTW64=
github.com/hashicorp/mdns v1.0.4/go.mod h1:mtBihi+LeNXGtG8L9dX59gAEa12BDtBQSp4v/YAJqrc=
github.com/hashicorp/memberlist v0.3.0 h1:8+567mCcFDnS5ADl7lrpxPMWiFCElyUEeW0gtj34fMA=
github.com/hashicorp/memberlist v0.3.0/go.mod h1:MS2lj3INKhZjWNqd3N0m3J+Jxf3DAOnAH9VT3Sh9MUE=
github.com/hashicorp/raft v1.1.0/go.mod h1:4Ak7FSPnuvmb0GV6vgIAJ4vYT4bek9bb6Q+7HVbyzqM=
github.com/hashicorp/raft v1.3.3 h1:Xr6DSHC5cIM8kzxu+IgoT/+MeNeUNeWin3ie6nlSrMg=
github.com/hashicorp/raft v1.3.3/go.mod h1:4Ak7FSPnuvmb0GV6vgIAJ4vYT4bek9bb6Q+7HVbyzqM=
github.com/hashicorp/serf v0.9.6/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
github.com/hashicorp/serf v0.9.7 h1:hkdgbqizGQHuU5IPqYM1JdSMV8nKfpuOnZYXssk9muY=
github.com/hashicorp/serf v0.9.7/go.mod h1:TXZNMjZQijwlDvp+r0b63xZ45H7JmCmgg4gpTwn9UV4=
github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX5e0EB2j4=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/json-iterator/go v1.1.9/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/labstack/gommon v0.3.0/go.mod h1:MULnywXg0yavhxWKc+lOruYdAhDwPK9wf0OL7NoOu+k=
github.com/labstack/gommon v0.3.1 h1:OomWaJXm7xR6L1HmEtGyQf26TEn7V6X88mktX9kee9o=
github.com/labstack/gommon v0.3.1/go.mod h1:uW6kP17uPlLJsD3ijUYn3/M5bAxtlZhMI6m3MFxTMTM=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.4/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-colorable v0.1.6/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.7/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.9/go.mod h1:u6P/XSegPjTcexA+o6vUJrdnUu04hMope9wVRipJSqc=
github.com/mattn/go-colorable v0.1.11/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-colorable v0.1.12 h1:jF+Du6AlPIjs2BiUiQlKOX0rt3SujHxPnksPKZbaA40=
github.com/mattn/go-colorable v0.1.12/go.mod h1:u5H1YNBxpqRaxsYJYSkiCWKzEfiAb1Gb520KVy5xxl4=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hdxcsrc5s=
github.com/mattn/go-isatty v0.0.9/go.mod h1:YNRxwqDuOph6SZLI9vUUz6OYw3QyUt7WiY2yME+cCiQ=
github.com/mattn/go-isatty v0.0.10/go.mod h1:qgIWMr58cqv1PHHyhnkY9lrL7etaEgOFcMEpPG5Rm84=
github.com/mattn/go-isatty v0.0.11/go.mod h1:PhnuNfih5lzO57/f3n+odYbM4JtupLOxQOAqxQCu2WE=
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-isatty v0.0.14 h1:yVuAays6BHfxijgZPzw+3Zlu5yQgKGP2/hcQbHb7S9Y=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/miekg/dns v1.1.26/go.mod h1:bPDLeHnStXmXAq1m/Ch/hvfNHr14JKNPMBo3VZKjuso=
github.com/miekg/dns v1.1.41 h1:WMszZWJG0XmzbK9FEmzH2TVcqYzFesusSIB41b8KHxY=
github.com/miekg/dns v1.1.41/go.mod h1:p6aan82bvRIyn+zDIv9xYNUpwa73JcSh9BKwknJysuI=
github.com/mitchellh/cli v1.1.0/go.mod h1:xcISNoH86gajksDmfB23e/pu+B+GeFRMYmoHXxx3xhI=
github.com/mitchellh/go-homedir v1.1.0 h1:lukF9ziXFxDFPkA1vsr5zpc1XuPDn/wFntq5mG+4E0Y=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-testing-interface v1.0.0 h1:fzU/JVNcaqHQEcVFAKeR41fkiLdIPrefOvVG1VZ96U0=
github.com/mitchellh/go-testing-interface v1.0.0/go.mod h1:kRemZodwjscx+RGhAo8eIhFbs2+BFgRtFPeD/KE+zxI=
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.3 h1:OVowDSCllw/YjdLkam3/sm7wEtOy59d8ndGgCcyj8cs=
github.com/mitchellh/mapstructure v1.4.3/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
github.com/mkideal/cli v0.2.7 h1:mB/XrMzuddmTJ8f7KY1c+KzfYoM149tYGAnzmqRdvOU=
github.com/mkideal/cli v0.2.7/go.mod h1:efaTeFI4jdPqzAe0bv3myLB2NW5yzMBLvWB70a6feco=
github.com/mkideal/expr v0.1.0 h1:fzborV9TeSUmLm0aEQWTWcexDURFFo4v5gHSc818Kl8=
@ -94,63 +208,142 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJ
github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pascaldekloe/goe v0.1.0 h1:cBOtyMzM9HTpWjXfbbunk26uA6nG3a8n06Wieeh0MwY=
github.com/pascaldekloe/goe v0.1.0/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
github.com/posener/complete v1.2.3/go.mod h1:WZIdtGGp+qx0sLrYKtIRAruyNpv6hFCicSgv7Sy7s/s=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.2/go.mod h1:OsXs2jCmiKlQ1lTBmv21f2mNfw4xf/QclQDMrYNZzcM=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/common v0.0.0-20181126121408-4724e9255275/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181204211112-1dc9a6cbc91a/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rqlite/go-sqlite3 v1.22.0 h1:twqvKzylJXG62Qe0rcqdy5ClGhc0YRc2vvA3nEXwmes=
github.com/rqlite/go-sqlite3 v1.22.0/go.mod h1:ml55MVv28UP7V8zrxILd2EsrI6Wfsz76YSskpg08Ut4=
github.com/rqlite/raft-boltdb v0.0.0-20211018013422-771de01086ce h1:sVlzmCJiaM0LGK3blAHOD/43QxJZ8bLCDcsqZRatnFE=
github.com/rqlite/raft-boltdb v0.0.0-20211018013422-771de01086ce/go.mod h1:mc+WNDHyskdViYAoPnaMXEBnSKBmoUgiEZjrlAj6G34=
github.com/rqlite/rqlite-disco-clients v0.0.0-20220119143504-8b316f2862ba h1:85k9Qkc0B76RVBXWhahVrjb9ShfZU1m0SqEO7RRcqBU=
github.com/rqlite/rqlite-disco-clients v0.0.0-20220119143504-8b316f2862ba/go.mod h1:pym85nj6JnCI7rM9RxTZ4cubkTQyyg7uLwVydso9B80=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1 h1:2vfRuCMp5sSVIDSqO8oNnWJq7mPa6KVP3iPIwFBuy8A=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.7.0 h1:nwc3DEeHmmLAfoZucVR881uASk0Mfjw8xYJ99tb5CcY=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/tv42/httpunix v0.0.0-20150427012821-b75d8614f926/go.mod h1:9ESjWnEqriFuLhtthL60Sar/7RFoluCcXsuvEwTV5KM=
github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
github.com/valyala/fasttemplate v1.0.1/go.mod h1:UQGH1tvbgY+Nz5t2n7tXsz52dQxojPUpymEIMZ47gx8=
github.com/valyala/fasttemplate v1.2.1/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd/api/v3 v3.5.1 h1:v28cktvBq+7vGyJXF8G+rWJmj+1XUmMtqcLnH8hDocM=
go.etcd.io/etcd/api/v3 v3.5.1/go.mod h1:cbVKeC6lCfl7j/8jBhAK6aIYO9XOjdptoxU/nLQcPvs=
go.etcd.io/etcd/client/pkg/v3 v3.5.1 h1:XIQcHCFSG53bJETYeRJtIxdLv2EWRGxcfzR8lSnTH4E=
go.etcd.io/etcd/client/pkg/v3 v3.5.1/go.mod h1:IJHfcCEKxYu1Os13ZdwCwIUTUVGYTSAM3YSwc9/Ac1g=
go.etcd.io/etcd/client/v3 v3.5.1 h1:oImGuV5LGKjCqXdjkMHCyWa5OO1gYKCnC/1sgdfj1Uk=
go.etcd.io/etcd/client/v3 v3.5.1/go.mod h1:OnjH4M8OnAotwaB2l9bVgZzRFKru7/ZMoS46OtKyd3Q=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.9.0 h1:ECmE8Bn/WFTYwEW/bpKD3M8VtR/zQVbavAoalC1PYyE=
go.uber.org/atomic v1.9.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.1.11 h1:wy28qYRKZgnJTxGxvye5/wgWr1EKjmUDGYox5mGlRlI=
go.uber.org/goleak v1.1.11/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.7.0 h1:zaiO/rmgFjbmCXdSYJWQcdvOCsthmdaHfr3Gm2Kx4Ec=
go.uber.org/multierr v1.7.0/go.mod h1:7EAYxJLBy9rStEaz58O2t4Uvip6FSURkq8/ppBp95ak=
go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.20.0 h1:N4oPlghZwYG55MlU6LXk/Zp00FVNE9X9wrYO8CEs4lc=
go.uber.org/zap v1.20.0/go.mod h1:wjWOCqI0f2ZZrJF/UufIOkiC8ii6tm1iqIsLo76RfJw=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3H3cr1v9wB50oz8l4C4h62xy7jSTY=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3 h1:0es+/5331RGQPcXlMfP+WrnIIS6dNnNRe0WB02W0F4M=
golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce h1:Roh6XWxHFKrPgC/EQhVubSAGQ6Ozk6IdxHSzt1mR0EI=
golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190923162816-aa69164e4478/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2 h1:CIJ76btIcR3eFI5EgSo6k1qKw9KJexJuRLI9G7Hp5wE=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220114011407-0dd24b26b47d h1:1n1fc535VhN8SYtD4cDUyNlfpAF2ROMM9+11equK3hs=
golang.org/x/net v0.0.0-20220114011407-0dd24b26b47d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181116152217-5ac8a444bdc5/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@ -158,43 +351,110 @@ golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5h
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190813064441-fde4db37ae7a/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190922100055-0a153f010e69/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190924154521-2837fb4f24fe/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200223170610-d5e6a3e2c0ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210303074136-134d130e1a04/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e h1:fLOSk5Q00efkSvAm+4xcoXD+RRmLmmulPn5I3Y9F2EM=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.6 h1:aRYxNxv6iGQlyVaZmk6ZgYEDa+Jg18DxebPSrd6bg1M=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190424220101-1e8e1cfdf96b/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543 h1:E7g+9GITq07hpfrRu66IVDexMakfv52eLZ2CXBWiKr4=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190907020128-2ca718005c18/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1 h1:go1bK/D/BFZV2I8cIQd1NKEZ+0owSTG1fDTci4IqFcE=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5 h1:zzNejm+EgrbLfDZ6lu9Uud2IVvHySPl8vQzf04laR5Q=
google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.1/go.mod h1:fr5YgcSWrqhRRxogOsw7RzIpsmvOZ6IcH4kBYTpR3n0=
google.golang.org/grpc v1.36.0/go.mod h1:qjiiYl8FncCW8feJPdyg3v6XW24KsRHe+dy9BAGRRjU=
google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQdJfM=
google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34=
google.golang.org/grpc v1.43.0 h1:Eeu7bZtDZ2DpRCsLhUlcrLnvYaMK1Gz86a+hMVvELmM=
google.golang.org/grpc v1.43.0/go.mod h1:k+4IHHFw41K8+bbowsex27ge2rCb65oeWqe4jJ590SU=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.3/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.5/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU=
gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=

@ -60,6 +60,7 @@ const (
connectionTimeout = 10 * time.Second
raftLogCacheSize = 512
trailingScale = 1.25
observerChanLen = 5 // Support any fast back-to-back leadership changes.
)
const (
@ -76,6 +77,8 @@ const (
snapshotPersistDuration = "snapshot_persist_duration"
snapshotDBSerializedSize = "snapshot_db_serialized_size"
snapshotDBOnDiskSize = "snapshot_db_ondisk_size"
leaderChangesObserved = "leader_changes_observed"
leaderChangesDropped = "leader_changes_dropped"
)
// BackupFormat represents the format of database backup.
@ -107,6 +110,8 @@ func init() {
stats.Add(snapshotPersistDuration, 0)
stats.Add(snapshotDBSerializedSize, 0)
stats.Add(snapshotDBOnDiskSize, 0)
stats.Add(leaderChangesObserved, 0)
stats.Add(leaderChangesDropped, 0)
}
// ClusterState defines the possible Raft states the current node can be in
@ -123,6 +128,7 @@ const (
// Store is a SQLite database, where all changes are made via Raft consensus.
type Store struct {
open bool
raftDir string
peersPath string
peersInfoPath string
@ -150,6 +156,15 @@ type Store struct {
raftStable raft.StableStore // Persistent k-v store.
boltStore *rlog.Log // Physical store.
// Leader change observer
leaderObserversMu sync.RWMutex
leaderObservers []chan<- struct{}
observerClose chan struct{}
observerDone chan struct{}
observerChan chan raft.Observation
observer *raft.Observer
observerWg sync.WaitGroup
onDiskCreated bool // On disk database actually created?
snapsExistOnOpen bool // Any snaps present when store opens?
firstIdxOnOpen uint64 // First index on log when Store opens.
@ -216,21 +231,32 @@ func New(ln Listener, c *Config) *Store {
}
return &Store{
ln: ln,
raftDir: c.Dir,
peersPath: filepath.Join(c.Dir, peersPath),
peersInfoPath: filepath.Join(c.Dir, peersInfoPath),
raftID: c.ID,
dbConf: c.DBConf,
dbPath: dbPath,
reqMarshaller: command.NewRequestMarshaler(),
logger: logger,
ApplyTimeout: applyTimeout,
ln: ln,
raftDir: c.Dir,
peersPath: filepath.Join(c.Dir, peersPath),
peersInfoPath: filepath.Join(c.Dir, peersInfoPath),
raftID: c.ID,
dbConf: c.DBConf,
dbPath: dbPath,
leaderObservers: make([]chan<- struct{}, 0),
reqMarshaller: command.NewRequestMarshaler(),
logger: logger,
ApplyTimeout: applyTimeout,
}
}
// Open opens the Store.
func (s *Store) Open() error {
func (s *Store) Open() (retErr error) {
defer func() {
if retErr == nil {
s.open = true
}
}()
if s.open {
return fmt.Errorf("store already open")
}
s.openT = time.Now()
s.logger.Printf("opening store with node ID %s", s.raftID)
@ -340,6 +366,17 @@ func (s *Store) Open() error {
}
s.raft = ra
// Open the observer channels.
s.observerChan = make(chan raft.Observation, observerChanLen)
s.observer = raft.NewObserver(s.observerChan, false, func(o *raft.Observation) bool {
_, ok := o.Data.(raft.LeaderObservation)
return ok
})
// Register and listen for leader changes.
s.raft.RegisterObserver(s.observer)
s.observerClose, s.observerDone = s.observe()
return nil
}
@ -361,7 +398,20 @@ func (s *Store) Bootstrap(servers ...*Server) error {
}
// Close closes the store. If wait is true, waits for a graceful shutdown.
func (s *Store) Close(wait bool) error {
func (s *Store) Close(wait bool) (retErr error) {
defer func() {
if retErr == nil {
s.open = false
}
}()
if !s.open {
// Protect against closing an already-closed resource, such as channels.
return nil
}
close(s.observerClose)
<-s.observerDone
f := s.raft.Shutdown()
if wait {
if e := f.(raft.Future); e.Error() != nil {
@ -375,6 +425,7 @@ func (s *Store) Close(wait bool) error {
if err := s.boltStore.Close(); err != nil {
return err
}
return nil
}
@ -626,6 +677,10 @@ func (s *Store) Stats() (map[string]interface{}, error) {
"node_id": leaderID,
"addr": leaderAddr,
},
"observer": map[string]uint64{
"observed": s.observer.GetNumObserved(),
"dropped": s.observer.GetNumDropped(),
},
"startup_on_disk": s.StartupOnDisk,
"apply_timeout": s.ApplyTimeout.String(),
"heartbeat_timeout": s.HeartbeatTimeout.String(),
@ -1133,6 +1188,41 @@ func (s *Store) DeregisterObserver(o *raft.Observer) {
s.raft.DeregisterObserver(o)
}
// RegisterLeaderChange registers the given channel which will
// receive a signal when this node detects that the Leader changes.
func (s *Store) RegisterLeaderChange(c chan<- struct{}) {
s.leaderObserversMu.Lock()
defer s.leaderObserversMu.Unlock()
s.leaderObservers = append(s.leaderObservers, c)
}
func (s *Store) observe() (closeCh, doneCh chan struct{}) {
closeCh = make(chan struct{})
doneCh = make(chan struct{})
go func() {
defer close(doneCh)
for {
select {
case <-s.observerChan:
s.leaderObserversMu.RLock()
for i := range s.leaderObservers {
select {
case s.leaderObservers[i] <- struct{}{}:
stats.Add(leaderChangesObserved, 1)
default:
stats.Add(leaderChangesDropped, 1)
}
}
s.leaderObserversMu.RUnlock()
case <-closeCh:
return
}
}
}()
return closeCh, doneCh
}
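
The `observe()` loop above fans each raft leadership observation out to every registered channel with a non-blocking send, so a consumer that is not ready simply misses that signal (counted as `leader_changes_dropped`). A minimal sketch of the consumer side follows; `leaderNotifier` and `watchLeader` are hypothetical names outside rqlite, and the buffer of 1 keeps a pending notification from being dropped while the goroutine is busy:

```go
// Package observerexample is a sketch of consuming the new leader-change
// notifications; *store.Store satisfies leaderNotifier via RegisterLeaderChange.
package observerexample

import "log"

type leaderNotifier interface {
	RegisterLeaderChange(c chan<- struct{})
}

// watchLeader registers a buffered channel and logs each leadership change
// until done is closed. The buffer of 1 matters because the Store performs a
// non-blocking send and would otherwise drop the signal for a busy receiver.
func watchLeader(n leaderNotifier, done <-chan struct{}) {
	ch := make(chan struct{}, 1)
	n.RegisterLeaderChange(ch)
	go func() {
		for {
			select {
			case <-ch:
				log.Println("leadership change observed")
			case <-done:
				return
			}
		}
	}()
}
```
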
// logSize returns the size of the Raft log on disk.
func (s *Store) logSize() (int64, error) {
fi, err := os.Stat(filepath.Join(s.raftDir, raftDBPath))

@ -59,6 +59,10 @@ func Test_OpenStoreCloseSingleNode(t *testing.T) {
if err := s.Open(); err != nil {
t.Fatalf("failed to open single-node store: %s", err.Error())
}
if !s.open {
t.Fatalf("store not marked as open")
}
if err := s.Bootstrap(NewServer(s.ID(), s.Addr(), true)); err != nil {
t.Fatalf("failed to bootstrap single-node store: %s", err.Error())
}
@ -82,6 +86,9 @@ func Test_OpenStoreCloseSingleNode(t *testing.T) {
if err := s.Close(true); err != nil {
t.Fatalf("failed to close single-node store: %s", err.Error())
}
if s.open {
t.Fatalf("store still marked as open")
}
// Reopen it and confirm data still there.
if err := s.Open(); err != nil {
@ -114,6 +121,49 @@ func Test_OpenStoreCloseSingleNode(t *testing.T) {
}
}
func Test_StoreLeaderObservation(t *testing.T) {
s, ln := mustNewStore(true)
defer s.Close(true)
defer os.RemoveAll(s.Path())
defer ln.Close()
if err := s.Open(); err != nil {
t.Fatalf("failed to open single-node store: %s", err.Error())
}
ch1 := make(chan struct{})
ch2 := make(chan struct{})
countCh := make(chan int, 2)
s.RegisterLeaderChange(ch1)
s.RegisterLeaderChange(ch2)
go func() {
<-ch1
countCh <- 1
}()
go func() {
<-ch2
countCh <- 2
}()
if err := s.Bootstrap(NewServer(s.ID(), s.Addr(), true)); err != nil {
t.Fatalf("failed to bootstrap single-node store: %s", err.Error())
}
count := 0
for {
select {
case <-countCh:
count++
if count == 2 {
return
}
case <-time.After(10 * time.Second):
t.Fatalf("timeout waiting for all observations")
}
}
}
func Test_SingleNodeInMemExecuteQuery(t *testing.T) {
s, ln := mustNewStore(true)
defer os.RemoveAll(s.Path())

@ -13,10 +13,12 @@ import subprocess
import requests
import json
import os
import random
import shutil
import time
import socket
import sqlite3
import string
import sys
import unittest
@ -28,14 +30,23 @@ TIMEOUT=10
def d_(s):
return ast.literal_eval(s.replace("'", "\""))
def random_string(n):
letters = string.ascii_lowercase
return ''.join(random.choice(letters) for i in range(n))
def write_random_file(data):
f = tempfile.NamedTemporaryFile('w', delete=False)
f.write(data)
f.close()
return f.name
class Node(object):
def __init__(self, path, node_id,
api_addr=None, api_adv=None,
raft_addr=None, raft_adv=None,
raft_voter=True,
raft_snap_threshold=8192, raft_snap_int="1s",
auth=None, join_as=None,
dir=None, on_disk=False):
auth=None, dir=None, on_disk=False):
if api_addr is None:
s, addr = random_addr()
api_addr = addr
@ -61,7 +72,7 @@ class Node(object):
self.raft_snap_threshold = raft_snap_threshold
self.raft_snap_int = raft_snap_int
self.auth = auth
self.join_as = join_as
self.disco_key = random_string(10)
self.on_disk = on_disk
self.process = None
self.stdout_file = os.path.join(dir, 'rqlited.log')
@ -98,7 +109,8 @@ class Node(object):
if self.raft_adv is None:
self.raft_adv = self.raft_addr
def start(self, join=None, wait=True, timeout=TIMEOUT):
def start(self, join=None, join_as=None, join_attempts=None, join_interval=None,
disco_mode=None, disco_key=None, disco_config=None, wait=True, timeout=TIMEOUT):
if self.process is not None:
return
@ -119,8 +131,19 @@ class Node(object):
command += ['-auth', self.auth]
if join is not None:
command += ['-join', 'http://' + join]
if self.join_as is not None:
command += ['-join-as', self.join_as]
if join_as is not None:
command += ['-join-as', join_as]
if join_attempts is not None:
command += ['-join-attempts', str(join_attempts)]
if join_interval is not None:
command += ['-join-interval', join_interval]
if disco_mode is not None:
dk = disco_key
if dk is None:
dk = self.disco_key
command += ['-disco-mode', disco_mode, '-disco-key', dk]
if disco_config is not None:
command += ['-disco-config', disco_config]
command.append(self.dir)
self.process = subprocess.Popen(command, stdout=self.stdout_fd, stderr=self.stderr_fd)
@ -187,6 +210,12 @@ class Node(object):
except requests.exceptions.ConnectionError:
return False
def disco_mode(self):
try:
return self.status()['disco']['name']
except requests.exceptions.ConnectionError:
return ''
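
The `disco_mode()` helper above simply reads the `disco` section that `rqlited` now includes in its `/status` output. For comparison, here is a rough Go sketch of the same check; the `discoMode` function and the assumption that the section carries a `name` field come from this test harness, not from documented API guarantees:

```go
// Package statusexample sketches a client-side check of a node's disco mode.
package statusexample

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// discoMode returns the discovery mode ("consul", "etcd", ...) a node reports
// via /status, or an empty string if the section is missing.
func discoMode(apiAddr string) (string, error) {
	resp, err := http.Get(fmt.Sprintf("http://%s/status", apiAddr))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var body struct {
		Disco struct {
			Name string `json:"name"`
		} `json:"disco"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		return "", err
	}
	return body.Disco.Name, nil
}
```
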
def wait_for_leader(self, timeout=TIMEOUT):
lr = None
t = 0
@ -679,6 +708,101 @@ class TestEndToEndAdvAddr(TestEndToEnd):
self.cluster = Cluster([n0, n1, n2])
class TestAutoClustering(unittest.TestCase):
def autocluster(self, mode):
disco_key = random_string(10)
n0 = Node(RQLITED_PATH, '0')
n0.start(disco_mode=mode, disco_key=disco_key)
n0.wait_for_leader()
self.assertEqual(n0.disco_mode(), mode)
j = n0.execute('CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)')
self.assertEqual(j, d_("{'results': [{}]}"))
j = n0.execute('INSERT INTO foo(name) VALUES("fiona")')
n0.wait_for_all_fsm()
j = n0.query('SELECT * FROM foo')
self.assertEqual(j, d_("{'results': [{'values': [[1, 'fiona']], 'types': ['integer', 'text'], 'columns': ['id', 'name']}]}"))
# Add second node, make sure it joins the cluster fine.
n1 = Node(RQLITED_PATH, '1')
n1.start(disco_mode=mode, disco_key=disco_key)
n1.wait_for_leader()
self.assertEqual(n1.disco_mode(), mode)
j = n1.query('SELECT * FROM foo', level='none')
self.assertEqual(j, d_("{'results': [{'values': [[1, 'fiona']], 'types': ['integer', 'text'], 'columns': ['id', 'name']}]}"))
# Now a third.
n2 = Node(RQLITED_PATH, '2')
n2.start(disco_mode=mode, disco_key=disco_key)
n2.wait_for_leader()
self.assertEqual(n2.disco_mode(), mode)
j = n2.query('SELECT * FROM foo', level='none')
self.assertEqual(j, d_("{'results': [{'values': [[1, 'fiona']], 'types': ['integer', 'text'], 'columns': ['id', 'name']}]}"))
# Now, kill the leader, which should trigger a different node to report leadership.
deprovision_node(n0)
# Add a fourth node, it should join fine using updated leadership details.
# Use quick retries, as we know the leader information may be changing while
# the node is coming up.
n3 = Node(RQLITED_PATH, '3')
n3.start(disco_mode=mode, disco_key=disco_key, join_interval='1s', join_attempts=1)
n3.wait_for_leader()
self.assertEqual(n3.disco_mode(), mode)
j = n3.query('SELECT * FROM foo', level='none')
self.assertEqual(j, d_("{'results': [{'values': [[1, 'fiona']], 'types': ['integer', 'text'], 'columns': ['id', 'name']}]}"))
deprovision_node(n1)
deprovision_node(n2)
deprovision_node(n3)
def autocluster_config(self, mode, config):
disco_key = random_string(10)
n0 = Node(RQLITED_PATH, '0')
n0.start(disco_mode=mode, disco_key=disco_key, disco_config=config)
n0.wait_for_leader()
self.assertEqual(n0.disco_mode(), mode)
j = n0.execute('CREATE TABLE foo (id INTEGER NOT NULL PRIMARY KEY, name TEXT)')
self.assertEqual(j, d_("{'results': [{}]}"))
j = n0.execute('INSERT INTO foo(name) VALUES("fiona")')
n0.wait_for_all_fsm()
j = n0.query('SELECT * FROM foo')
self.assertEqual(j, d_("{'results': [{'values': [[1, 'fiona']], 'types': ['integer', 'text'], 'columns': ['id', 'name']}]}"))
# Add second node, make sure it joins the cluster fine.
n1 = Node(RQLITED_PATH, '1')
n1.start(disco_mode=mode, disco_key=disco_key, disco_config=config)
n1.wait_for_leader()
self.assertEqual(n1.disco_mode(), mode)
j = n1.query('SELECT * FROM foo', level='none')
self.assertEqual(j, d_("{'results': [{'values': [[1, 'fiona']], 'types': ['integer', 'text'], 'columns': ['id', 'name']}]}"))
deprovision_node(n0)
deprovision_node(n1)
def test_consul(self):
'''Test clustering via Consul and that leadership change is observed'''
self.autocluster('consul')
def test_etcd(self):
'''Test clustering via Etcd and that leadership change is observed'''
self.autocluster('etcd')
def test_consul_config(self):
'''Test clustering via Consul with explicit file-based config'''
filename = write_random_file('{"address": "localhost:8500"}')
self.autocluster_config('consul', filename)
os.remove(filename)
def test_etcd_config(self):
'''Test clustering via Etcd with explicit file-based config'''
filename = write_random_file('{"endpoints": ["localhost:2379"]}')
self.autocluster_config('etcd', filename)
os.remove(filename)
class TestAuthJoin(unittest.TestCase):
'''Test that joining works with authentication'''
@ -695,8 +819,8 @@ class TestAuthJoin(unittest.TestCase):
n1.start(join=n0.APIAddr())
self.assertRaises(Exception, n1.wait_for_leader) # Join should fail due to lack of auth.
n2 = Node(RQLITED_PATH, '2', auth=self.auth_file.name, join_as="foo")
n2.start(join=n0.APIAddr())
n2 = Node(RQLITED_PATH, '2', auth=self.auth_file.name)
n2.start(join=n0.APIAddr(), join_as="foo")
n2.wait_for_leader()
self.cluster = Cluster([n0, n1, n2])
