
Fix typos and Grammar issues

master
chermehdi 3 years ago
parent ba42be7fdf
commit 3bfcbffc37

@@ -47,7 +47,7 @@ jobs:
- checkout
# specify any bash command here prefixed with `run: `
-# BoltDB, no longer maintained, has pointer issues. However it's run
+# BoltDB, no longer maintained, has pointer issues. However, it's run
# for years without actual issue so disabling the pointer tests
- run: go get -t -d ./...
- run:

@@ -12,7 +12,7 @@ You can also access the rqlite API directly, via a HTTP `GET` request to the end
curl -s -XGET localhost:4001/db/backup -o bak.sqlite3
```
-In either case the generated file file can then be used to restore a node (or cluster) using the [restore API](https://github.com/rqlite/rqlite/blob/master/DOC/RESTORE_FROM_SQLITE.md).
+In either case the generated file can then be used to restore a node (or cluster) using the [restore API](https://github.com/rqlite/rqlite/blob/master/DOC/RESTORE_FROM_SQLITE.md).
## Generating a SQL text dump
You can dump the database in SQL text format via the CLI as follows:
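A minimal sketch of such a dump, assuming the rqlite CLI is connected to a local node; the output filename is illustrative:
```bash
# At the rqlite CLI prompt, write a SQL text dump to a file.
# (.dump requires an output file -- see the CLI changes later in this commit.)
.dump bak.sql
```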

@@ -41,7 +41,7 @@ The response is of the form:
A bulk update is contained within a single Raft log entry, so the network round-trips between nodes in the cluster are amortized over the bulk update. This should result in better throughput, if it is possible to use this kind of update.
### Atomicity
-Because a bulk operation is contained within a single Raft log entry, and only one Raft log entry is every processed at one time, a bulk operation will never be interleaved with other requests.
+Because a bulk operation is contained within a single Raft log entry, and only one Raft log entry is ever processed at one time, a bulk operation will never be interleaved with other requests.
### Transaction support
You may still wish to set the `transaction` flag when issuing a bulk update. This ensures that if any error occurs while processing the bulk update, all changes will be rolled back.
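As a hedged sketch of such a request: rqlite's bulk API accepts a JSON array of statements on the `/db/execute` endpoint, and the `transaction` query parameter enables the rollback behaviour described above. The table and values here are illustrative:
```bash
# Both statements travel in a single Raft log entry; with ?transaction set,
# a failure in either statement rolls back the whole batch.
curl -XPOST 'localhost:4001/db/execute?pretty&transaction' \
     -H "Content-Type: application/json" \
     -d '[
         "INSERT INTO foo(name) VALUES(\"fiona\")",
         "INSERT INTO foo(name) VALUES(\"sinead\")"
     ]'
```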

@@ -14,7 +14,7 @@ Firstly, you should understand the basic requirement for systems built on the [R
Clusters of 3, 5, 7, or 9, nodes are most practical. Clusters of those sizes can tolerate failures of 1, 2, 3, and 4 nodes respectively.
-Clusters with a greater number of nodes start to become unweildy, due to the number of nodes that must be contacted before a database change can take place.
+Clusters with a greater number of nodes start to become unwieldy, due to the number of nodes that must be contacted before a database change can take place.
### Read-only nodes
It is possible to run larger clusters if you just need nodes [from which you only need to read from](https://github.com/rqlite/rqlite/blob/master/DOC/READ_ONLY_NODES.md). When it comes to the Raft protocol, these nodes do not count towards `N`, since they do not [vote](https://raft.github.io/).
@@ -47,7 +47,7 @@ Once executed you now have a cluster of two nodes. Of course, for fault-toleranc
```bash
host3:$ rqlited -node-id 3 -http-addr host3:4001 -raft-addr host3:4002 -join http://host1:4001 ~/node
```
-_When simply restarting a node, there is no further need to pass `-join`. However if a node does attempt to join a cluster it is already a member of, and neither its node ID or Raft network address has changed, then the cluster Leader will ignore the join request as there is nothing to do -- the joining node is already a fully-configured member of the cluster. However, if either the node ID or Raft network address of the joining node has changed, the cluster Leader will first automatically remove the joining node from the cluster configuration before processing the join request. For most applications this is an implementation detail which can be safely ignored, and cluster-joins are basically idempotent._
+_When simply restarting a node, there is no further need to pass `-join`. However, if a node does attempt to join a cluster it is already a member of, and neither its node ID or Raft network address has changed, then the cluster Leader will ignore the join request as there is nothing to do -- the joining node is already a fully-configured member of the cluster. However, if either the node ID or Raft network address of the joining node has changed, the cluster Leader will first automatically remove the joining node from the cluster configuration before processing the join request. For most applications this is an implementation detail which can be safely ignored, and cluster-joins are basically idempotent._
You've now got a fault-tolerant, distributed, relational database. It can tolerate the failure of any node, even the leader, and remain operational.

@@ -26,7 +26,7 @@ If a query request is sent to a follower, and _strong_ consistency is specified,
To avoid even the issues associated with _weak_ consistency, rqlite also offers _strong_. In this mode, the Leader sends the query through the Raft consensus system, ensuring that the Leader **remains** the Leader at all times during query processing. When using _strong_ you can be sure that the database reflects every change sent to it prior to the query. However, this will involve the Leader contacting at least a quorum of nodes, and will therefore increase query response times.
# Which should I use?
-_Weak_ is probably sufficient for most applications, and is the default read consistency level. To explicitly select consistency, set the query param `level` to the desired level. However you should use _none_ with read-only nodes, unless you want those nodes to actually forward the query to the Leader.
+_Weak_ is probably sufficient for most applications, and is the default read consistency level. To explicitly select consistency, set the query param `level` to the desired level. However, you should use _none_ with read-only nodes, unless you want those nodes to actually forward the query to the Leader.
## Example queries
Examples of enabling each read consistency level for a simple query is shown below.
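The full examples follow in the complete document; as a quick sketch, the `level` query parameter mentioned above selects the consistency for a single query (the table name is illustrative):
```bash
# Request strong read consistency for one query; level=weak or level=none
# select the other levels discussed above.
curl -G 'localhost:4001/db/query?level=strong&pretty&timings' --data-urlencode 'q=SELECT * FROM foo'
```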

@@ -39,7 +39,7 @@ The response is of the form:
The use of the URL param `pretty` is optional, and results in pretty-printed JSON responses. Time is measured in seconds. If you do not want timings, do not pass `timings` as a URL parameter.
## Querying Data
-Querying data is easy. For a single query simply perform a HTTP GET on the `/db/query` endpoint, setting the query statement as the query parameter `q`:
+Querying data is easy. For a single query simply perform an HTTP GET on the `/db/query` endpoint, setting the query statement as the query parameter `q`:
```bash
curl -G 'localhost:4001/db/query?pretty&timings' --data-urlencode 'q=SELECT * FROM foo'

@@ -7,7 +7,7 @@ You can find details on the design and implementation of rqlite from [these blog
- [Presentation](http://www.slideshare.net/PhilipOToole/rqlite-replicating-sqlite-via-raft-consensu) given at the [GoSF](http://www.meetup.com/golangsf/) [April 2016](http://www.meetup.com/golangsf/events/230127735/) Meetup.
## Node design
-The diagram below shows a high-level view of an rqlite node.
+The diagram below shows a high-level view of a rqlite node.
![node-design](https://user-images.githubusercontent.com/536312/133258366-1f2fbc50-8493-4ba6-8d62-04c57e39eb6f.png)
## File system
@@ -15,7 +15,7 @@ The diagram below shows a high-level view of an rqlite node.
The Raft layer always creates a file -- it creates the _Raft log_. The log stores the set of committed SQLite commands, in the order which they were executed. This log is authoritative record of every change that has happened to the system. It may also contain some read-only queries as entries, depending on read-consistency choices.
### SQLite
-By default the SQLite layer doesn't create a file. Instead it creates the database in memory. rqlite can create the SQLite database on disk, if so configured at start-time, by passing `-on-disk` to `rqlited` at startup. Regardless of whether rqlite creates a database entirely in memory, or on disk, the SQLite database is completely recreated everytime `rqlited` starts, using the information stored in the Raft log.
+By default, the SQLite layer doesn't create a file. Instead, it creates the database in memory. rqlite can create the SQLite database on disk, if so configured at start-time, by passing `-on-disk` to `rqlited` at startup. Regardless of whether rqlite creates a database entirely in memory, or on disk, the SQLite database is completely recreated everytime `rqlited` starts, using the information stored in the Raft log.
## Log Compaction and Truncation
rqlite automatically performs log compaction, so that disk usage due to the log remains bounded. After a configurable number of changes rqlite snapshots the SQLite database, and truncates the Raft log. This is a technical feature of the Raft consensus system, and most users of rqlite need not be concerned with this.

@@ -32,7 +32,7 @@ runtime:
## Nodes API
The _nodes_ API returns basic information for nodes in the cluster, as seen by the node receiving the _nodes_ request. The receiving node will also check whether it can actually connect to all other nodes in the cluster. This is an effective way to determine the cluster leader, and the leader's HTTP API address. It can also be used to check if the cluster is **basically** running -- if the other nodes are reachable, it probably is.
-By default the node only checks if _voting_ nodes are contactable.
+By default, the node only checks if _voting_ nodes are contactable.
```bash
curl localhost:4001/nodes?pretty

@@ -34,7 +34,7 @@ rqlited -disco-id 809d9ba6-f70b-11e6-9a5a-92819c00729a
When any node registers using the ID, it is returned the current list of nodes that have registered using that ID. If the nodes is the first node to access the service using the ID, it will receive a list that contains just itself -- and will subsequently elect itself leader. Subsequent nodes will then receive a list with more than 1 entry. These nodes will use one of the join addresses in the list to join the cluster.
### Controlling the registered join address
-By default each node registers the address passed in via the `-http-addr` option. However if you instead set `-http-adv-addr` when starting a node, the node will instead register that address. This can be useful when telling a node to listen on all interfaces, but that is should be contacted at a specific address. For example:
+By default, each node registers the address passed in via the `-http-addr` option. However if you instead set `-http-adv-addr` when starting a node, the node will instead register that address. This can be useful when telling a node to listen on all interfaces, but that is should be contacted at a specific address. For example:
```shell
rqlited -disco-id 809d9ba6-f70b-11e6-9a5a-92819c00729a -http-addr 0.0.0.0:4001 -http-adv-addr host1:4001
```
@@ -60,7 +60,7 @@ $ rqlited -disco-id b3da7185-725f-461c-b7a4-13f185bd5007 ~/node.1
$ rqlited -http-addr localhost:4003 -raft-addr localhost:4004 -disco-id b3da7185-725f-461c-b7a4-13f185bd5007 ~/node.2
$ rqlited -http-addr localhost:4005 -raft-addr localhost:4006 -disco-id b3da7185-725f-461c-b7a4-13f185bd5007 ~/node.3
```
-_This demonstration shows all 3 nodes running on the same host. In reality you probably wouldn't do this, and then you wouldn't need to select different -http-addr and -raft-addr ports for each rqlite node._
+_This demonstration shows all 3 nodes running on the same host. In reality, you probably wouldn't do this, and then you wouldn't need to select different -http-addr and -raft-addr ports for each rqlite node._
## Removing registered addresses
If you need to remove an address from the list of registered addresses, perhaps because a node has permanently left a cluster, you can do this via the following command (be sure to pass all the options shown to `curl`):

@@ -36,14 +36,14 @@ mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
## Improving read-write concurrency
SQLite can offer better concurrent read and write support when using an on-disk database, compared to in-memory databases. But as explained above, using an on-disk SQLite database can significant impact performance. But since the database-update performance will be so much better with an in-memory database, improving read-write concurrency may not be needed in practise.
-However if you enable an on-disk SQLite database, but then place the SQLite database on a memory-backed file system, you can have the best of both worlds. You can dedicate your disk to the Raft log, but still get better read-write concurrency with SQLite. You can specify the SQLite database file path via the `-on-disk-path` flag.
+However, if you enable an on-disk SQLite database, but then place the SQLite database on a memory-backed file system, you can have the best of both worlds. You can dedicate your disk to the Raft log, but still get better read-write concurrency with SQLite. You can specify the SQLite database file path via the `-on-disk-path` flag.
An alternative approach would be to place the SQLite on-disk database on a disk different than that storing the Raft log, but this is unlikely to be as performant as an in-memory file system for the SQLite database.
# In-memory Database Limits
In-memory databases are currently limited to 2GiB in size. One way to get around this limit is to use an on-disk database, by passing `-on-disk` to `rqlited`. But this would impact performance significantly, since disk is slower than memory.
-However by telling rqlite to place the SQLite database file on a memory-backed filesystem you can use larger databases, and still have good performance. To control where rqlite places the SQLite database file, set `-on-disk-startup -on-disk-path` when launching `rqlited`. **Note that you should still place the `data` directory on an actual disk, to ensure your data is not lost if a node retarts.**
+However, by telling rqlite to place the SQLite database file on a memory-backed filesystem you can use larger databases, and still have good performance. To control where rqlite places the SQLite database file, set `-on-disk-startup -on-disk-path` when launching `rqlited`. **Note that you should still place the `data` directory on an actual disk, to ensure your data is not lost if a node retarts.**
Setting `-on-disk-startup` is also important because it disables an optimization rqlite performs at startup, when using an on-disk SQLite database. rqlite, by default, initially builds any on-disk database in memory first, before moving it to disk. It does this to reduce startup times. But with databases larger than 2GiB, this optimization can cause rqlite to fail to start. To avoid this issue, you can disable this optimization via the flag.
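A sketch of combining the flags named in this section, assuming the tmpfs mount shown above; the node ID and paths are illustrative:
```bash
# The data directory (Raft log) stays on real disk; the SQLite file lives on the ramdisk.
rqlited -node-id 1 -on-disk -on-disk-startup -on-disk-path /mnt/ramdisk/node1.sqlite ~/node
```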

@@ -15,4 +15,4 @@ Pass `-raft-non-voter=true` to `rqlited` to enable read-only mode.
Read-only nodes join a cluster in the [same manner as a voting node. They can also be removed using the same operations](https://github.com/rqlite/rqlite/blob/master/DOC/CLUSTER_MGMT.md).
### Handling failure
-If a read-only node becomes unreachable, the leader will continually attempt to reconnect until the node becomes reachable again, or the node is removed from the cluster. This is exactly the same behaviour as when a voting node fails. However since read-only nodes do not vote, a failed read-only node will not prevent the cluster commiting changes via the Raft consensus mechanism.
+If a read-only node becomes unreachable, the leader will continually attempt to reconnect until the node becomes reachable again, or the node is removed from the cluster. This is exactly the same behaviour as when a voting node fails. However, since read-only nodes do not vote, a failed read-only node will not prevent the cluster commiting changes via the Raft consensus mechanism.
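A sketch of starting such a read-only node and joining it to an existing cluster, using the flags shown elsewhere in these documents; host names and ports are illustrative:
```bash
# A non-voting node: it receives all updates but never counts towards quorum.
rqlited -node-id 4 -raft-non-voter=true -http-addr host4:4001 -raft-addr host4:4002 -join http://host1:4001 ~/node
```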

@@ -51,7 +51,7 @@ While not strictly necessary to run rqlite, running multiple nodes means you'll
rqlited -node-id 2 -http-addr localhost:4003 -raft-addr localhost:4004 -join http://localhost:4001 ~/node.2
rqlited -node-id 3 -http-addr localhost:4005 -raft-addr localhost:4006 -join http://localhost:4001 ~/node.3
```
-_This demonstration shows all 3 nodes running on the same host. In reality you probably wouldn't do this, and then you wouldn't need to select different -http-addr and -raft-addr ports for each rqlite node._
+_This demonstration shows all 3 nodes running on the same host. In reality, you probably wouldn't do this, and then you wouldn't need to select different -http-addr and -raft-addr ports for each rqlite node._
With just these few steps you've now got a fault-tolerant, distributed relational database. For full details on creating and managing real clusters, including running read-only nodes, check out [this documentation](https://github.com/rqlite/rqlite/blob/master/DOC/CLUSTER_MGMT.md).
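One quick way to confirm the three nodes formed a cluster is the `/nodes` endpoint covered in the Nodes API section above:
```bash
# Ask any node for its view of the cluster, including which node is leader.
curl localhost:4001/nodes?pretty
```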

@@ -213,7 +213,7 @@ func (c *Client) Execute(er *command.ExecuteRequest, nodeAddr string, timeout ti
return a.Results, nil
}
-// Query performs an Query on a remote node.
+// Query performs a Query on a remote node.
func (c *Client) Query(qr *command.QueryRequest, nodeAddr string, timeout time.Duration) ([]*command.QueryRows, error) {
conn, err := c.dial(nodeAddr, c.timeout)
if err != nil {

@@ -115,7 +115,7 @@ func join(srcIP, joinAddr, id, addr string, voter bool, tlsConfig *tls.Config, l
}
continue
case http.StatusBadRequest:
-// One possible cause is that the target server is listening for HTTPS, but a HTTP
+// One possible cause is that the target server is listening for HTTPS, but an HTTP
// attempt was made. Switch the protocol to HTTPS, and try again. This can happen
// when using the Disco service, since it doesn't record information about which
// protocol a registered node is actually using.

@@ -11,7 +11,7 @@ import (
)
const numAttempts int = 3
-const attemptInterval time.Duration = 5 * time.Second
+const attemptInterval = 5 * time.Second
func Test_SingleJoinOK(t *testing.T) {
var body map[string]interface{}

@@ -10,7 +10,7 @@ import (
"time"
)
-// HTTPTester represents a HTTP transport tester.
+// HTTPTester represents an HTTP transport tester.
type HTTPTester struct {
client http.Client
url string

@@ -12,10 +12,6 @@ import (
"github.com/mkideal/cli"
)
-type backupResponse struct {
-BackupFile []byte
-}
type restoreResponse struct {
Results []*Result `json:"results"`
}

@@ -62,7 +62,7 @@ func main() {
}
if argv.Version {
ctx.String("Version %s, commmit %s, branch %s, built on %s\n", cmd.Version,
ctx.String("Version %s, commit %s, branch %s, built on %s\n", cmd.Version,
cmd.Commit, cmd.Branch, cmd.Buildtime)
return nil
}
@@ -141,25 +141,25 @@ func main() {
err = removeNode(client, line[index+1:], argv, timer)
case ".BACKUP":
if index == -1 || index == len(line)-1 {
err = fmt.Errorf("Please specify an output file for the backup")
err = fmt.Errorf("please specify an output file for the backup")
break
}
err = backup(ctx, line[index+1:], argv)
case ".RESTORE":
if index == -1 || index == len(line)-1 {
err = fmt.Errorf("Please specify an input file to restore from")
err = fmt.Errorf("please specify an input file to restore from")
break
}
err = restore(ctx, line[index+1:], argv)
case ".SYSDUMP":
if index == -1 || index == len(line)-1 {
err = fmt.Errorf("Please specify an output file for the sysdump")
err = fmt.Errorf("please specify an output file for the sysdump")
break
}
err = sysdump(ctx, line[index+1:], argv)
case ".DUMP":
if index == -1 || index == len(line)-1 {
err = fmt.Errorf("Please specify an output file for the SQL text")
err = fmt.Errorf("please specify an output file for the SQL text")
break
}
err = dump(ctx, line[index+1:], argv)

@@ -308,7 +308,7 @@ func main() {
log.Fatalf("ioutil.ReadFile failed: %s", err.Error())
}
tlsConfig.RootCAs = x509.NewCertPool()
-ok := tlsConfig.RootCAs.AppendCertsFromPEM([]byte(asn1Data))
+ok := tlsConfig.RootCAs.AppendCertsFromPEM(asn1Data)
if !ok {
log.Fatalf("failed to parse root CA certificate(s) in %q", x509CACert)
}
@@ -496,7 +496,7 @@ func clusterService(tn cluster.Transport, db cluster.Database) (*cluster.Service
apiAddr = httpAdv
}
c.SetAPIAddr(apiAddr)
-c.EnableHTTPS(x509Cert != "" && x509Key != "") // Conditions met for a HTTPS API
+c.EnableHTTPS(x509Cert != "" && x509Key != "") // Conditions met for an HTTPS API
if err := c.Open(); err != nil {
return nil, err

@@ -2,7 +2,7 @@ package cmd
// These variables are populated via the Go linker.
var (
-// rqlite version
+// Version of rqlite.
Version = "6"
// Commit this code was built at.
@@ -11,6 +11,6 @@ var (
// Branch the code was built from.
Branch = "unknown"
-// Timestamp of build.
+// Buildtime timestamp.
Buildtime = "unknown"
)
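For context, variables like these are typically stamped at build time with the Go linker's `-X` flag; the import path `github.com/rqlite/rqlite/cmd` and the values are assumptions for illustration:
```bash
# Inject version metadata into the cmd package at build time (assumed package path).
go build -ldflags "-X github.com/rqlite/rqlite/cmd.Version=v6.0.0 -X github.com/rqlite/rqlite/cmd.Branch=master" ./...
```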

@@ -278,7 +278,7 @@ func (db *DB) Stats() (map[string]interface{}, error) {
"compile_options": copts,
"mem_stats": memStats,
"db_size": dbSz,
"rw_dsn": string(db.rwDSN),
"rw_dsn": db.rwDSN,
"ro_dsn": db.roDSN,
"conn_pool_stats": connPoolStats,
}
@@ -881,7 +881,7 @@ func parametersToValues(parameters []*command.Parameter) ([]interface{}, error)
// normalizeRowValues performs some normalization of values in the returned rows.
// Text values come over (from sqlite-go) as []byte instead of strings
-// for some reason, so we have explicitly convert (but only when type
+// for some reason, so we have explicitly converted (but only when type
// is "text" so we don't affect BLOB types)
func normalizeRowValues(row []interface{}, types []string) ([]*command.Parameter, error) {
values := make([]*command.Parameter, len(types))

@@ -114,8 +114,6 @@ github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsT
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/rqlite/go-sqlite3 v1.22.0 h1:twqvKzylJXG62Qe0rcqdy5ClGhc0YRc2vvA3nEXwmes=
github.com/rqlite/go-sqlite3 v1.22.0/go.mod h1:ml55MVv28UP7V8zrxILd2EsrI6Wfsz76YSskpg08Ut4=
-github.com/rqlite/raft-boltdb v0.0.0-20210909125202-124e0a496d7e h1:qiwSp5M0NVQDnve86XE5Y70hWp71jJM7ofOjRpa5Kwg=
-github.com/rqlite/raft-boltdb v0.0.0-20210909125202-124e0a496d7e/go.mod h1:X1tXZi6gr5ZI1OIBVZxYv9zCiXeLi9Znjiolz5BqNE8=
+github.com/rqlite/raft-boltdb v0.0.0-20210909131733-595768e10065 h1:uSGp32RAdVkPOSaPMRg1Y8BMInYj2icipXZ9BuvQyxA=
+github.com/rqlite/raft-boltdb v0.0.0-20210909131733-595768e10065/go.mod h1:X1tXZi6gr5ZI1OIBVZxYv9zCiXeLi9Znjiolz5BqNE8=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=

@@ -27,8 +27,8 @@ func ParseRequest(b []byte) ([]*command.Statement, error) {
return nil, ErrNoStatements
}
-simple := []string{} // Represents a set of unparameterized queries
-parameterized := [][]interface{}{} // Represents a set of parameterized queries
+var simple []string // Represents a set of unparameterized queries
+var parameterized [][]interface{} // Represents a set of parameterized queries
// Try simple form first.
err := json.Unmarshal(b, &simple)

@@ -664,7 +664,7 @@ func (s *Service) handleStatus(w http.ResponseWriter, r *http.Request) {
http.StatusInternalServerError)
return
}
-_, err = w.Write([]byte(b))
+_, err = w.Write(b)
if err != nil {
http.Error(w, fmt.Sprintf("write: %s", err.Error()),
http.StatusInternalServerError)
@@ -761,7 +761,7 @@ func (s *Service) handleNodes(w http.ResponseWriter, r *http.Request) {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
}
-_, err = w.Write([]byte(b))
+_, err = w.Write(b)
if err != nil {
http.Error(w, err.Error(), http.StatusInternalServerError)
return
@@ -1200,7 +1200,7 @@ func createTLSConfig(certFile, keyFile, caCertFile string, tls1011 bool) (*tls.C
return nil, err
}
config.RootCAs = x509.NewCertPool()
-ok := config.RootCAs.AppendCertsFromPEM([]byte(asn1Data))
+ok := config.RootCAs.AppendCertsFromPEM(asn1Data)
if !ok {
return nil, fmt.Errorf("failed to parse root certificate(s) in %q", caCertFile)
}

@@ -887,7 +887,7 @@ func Test_TLSServce(t *testing.T) {
url := fmt.Sprintf("https://%s", s.Addr().String())
-// Test connecting with a HTTP client.
+// Test connecting with an HTTP client.
tn := &http.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
}
@@ -901,7 +901,7 @@ func Test_TLSServce(t *testing.T) {
t.Fatalf("incorrect build version present in HTTP response header, got: %s", v)
}
-// Test connecting with a HTTP/2 client.
+// Test connecting with an HTTP/2 client.
client = &http.Client{
Transport: &http2.Transport{
TLSClientConfig: &tls.Config{InsecureSkipVerify: true},

@@ -164,7 +164,7 @@ type Store struct {
// StartupOnDisk disables in-memory initialization of on-disk databases.
// Restarting a node with an on-disk database can be slow so, by default,
// rqlite creates on-disk databases in memory first, and then moves the
-// database to disk before Raft starts. However this optimization can
+// database to disk before Raft starts. However, this optimization can
// prevent nodes with very large (2GB+) databases from starting. This
// flag allows control of the optimization.
StartupOnDisk bool
@@ -186,7 +186,7 @@ type Store struct {
numSnapshots int
}
-// IsNewNode returns whether a node using raftDir would be a brand new node.
+// IsNewNode returns whether a node using raftDir would be a brand-new node.
// It also means that the window this node joining a different cluster has passed.
func IsNewNode(raftDir string) bool {
// If there is any pre-existing Raft state, then this node
@@ -317,7 +317,7 @@ func (s *Store) Open(enableBootstrap bool) error {
// If an on-disk database has been requested, and there are no snapshots, and
// there are no commands in the log, then this is the only opportunity to
-// create that on-disk database file before Raft initializes. In addition this
+// create that on-disk database file before Raft initializes. In addition, this
// can also happen if the user explicitly disables the startup optimization of
// building the SQLite database in memory, before switching to disk.
if s.StartupOnDisk || (!s.dbConf.Memory && !s.snapsExistOnOpen && s.lastCommandIdxOnOpen == 0) {
@@ -400,7 +400,7 @@ func (s *Store) WaitForInitialLogs(timeout time.Duration) error {
return s.WaitForApplied(timeout)
}
-// WaitForApplied waits for all Raft log entries to to be applied to the
+// WaitForApplied waits for all Raft log entries to be applied to the
// underlying database.
func (s *Store) WaitForApplied(timeout time.Duration) error {
if timeout == 0 {
@@ -755,7 +755,7 @@ func (s *Store) Query(qr *command.QueryRequest) ([]*command.QueryRows, error) {
// Backup writes a snapshot of the underlying database to dst
//
// If leader is true, this operation is performed with a read consistency
// level equivalent to "weak". Otherwise no guarantees are made about the
// level equivalent to "weak". Otherwise, no guarantees are made about the
// read consistency level.
func (s *Store) Backup(leader bool, fmt BackupFormat, dst io.Writer) error {
if leader && s.raft.State() != raft.Leader {
@@ -796,7 +796,7 @@ func (s *Store) Join(id, addr string, voter bool) error {
// If a node already exists with either the joining node's ID or address,
// that node may need to be removed from the config first.
if srv.ID == raft.ServerID(id) || srv.Address == raft.ServerAddress(addr) {
-// However if *both* the ID and the address are the same, then no
+// However, if *both* the ID and the address are the same, then no
// join is actually needed.
if srv.Address == raft.ServerAddress(addr) && srv.ID == raft.ServerID(id) {
stats.Add(numIgnoredJoins, 1)
@@ -845,7 +845,7 @@ func (s *Store) Remove(id string) error {
}
// Noop writes a noop command to the Raft log. A noop command simply
-// consumes a slot in the Raft log, but has no other affect on the
+// consumes a slot in the Raft log, but has no other effect on the
// system.
func (s *Store) Noop(id string) error {
n := &command.Noop{
@@ -888,7 +888,7 @@ func (s *Store) createInMemory(b []byte) (db *sql.DB, err error) {
// createOnDisk opens an on-disk database file at the Store's configured path. If
// b is non-nil, any preexisting file will first be overwritten with those contents.
-// Otherwise any pre-existing file will be removed before the database is opened.
+// Otherwise, any preexisting file will be removed before the database is opened.
func (s *Store) createOnDisk(b []byte) (*sql.DB, error) {
if err := os.Remove(s.dbPath); err != nil && !os.IsNotExist(err) {
return nil, err
@@ -1036,7 +1036,7 @@ func (s *Store) Apply(l *raft.Log) (e interface{}) {
// Database returns a copy of the underlying database. The caller MUST
// ensure that no transaction is taking place during this call, or an error may
// be returned. If leader is true, this operation is performed with a read
// consistency level equivalent to "weak". Otherwise no guarantees are made
// consistency level equivalent to "weak". Otherwise, no guarantees are made
// about the read consistency level.
//
// http://sqlite.org/howtocorrupt.html states it is safe to do this
@@ -1054,7 +1054,7 @@ func (s *Store) Database(leader bool) ([]byte, error) {
// Hashicorp Raft guarantees that this function will not be called concurrently
// with Apply, as it states Apply() and Snapshot() are always called from the same
// thread. This means there is no need to synchronize this function with Execute().
-// However queries that involve a transaction must be blocked.
+// However, queries that involve a transaction must be blocked.
//
// http://sqlite.org/howtocorrupt.html states it is safe to copy or serialize the
// database as long as no transaction is in progress.
@@ -1098,7 +1098,7 @@ func (s *Store) Restore(rc io.ReadCloser) error {
if s.StartupOnDisk || (!s.dbConf.Memory && s.lastCommandIdxOnOpen == 0) {
// A snapshot clearly exists (this function has been called) but there
// are no command entries in the log -- so Apply will not be called.
-// Therefore this is the last opportunity to create the on-disk database
+// Therefore, this is the last opportunity to create the on-disk database
// before Raft starts. This could also happen because the user has explicitly
// disabled the build-on-disk-database-in-memory-first optimization.
db, err = s.createOnDisk(b)
@@ -1193,7 +1193,7 @@ func newFSMSnapshot(db *sql.DB, logger *log.Logger) *fsmSnapshot {
}
// The error code is not meaningful from Serialize(). The code needs to be able
-// handle a nil byte slice being returned.
+// to handle a nil byte slice being returned.
fsm.database, _ = db.Serialize()
return fsm
}

@@ -81,7 +81,7 @@ func Test_DialerHeaderTLSBadConnect(t *testing.T) {
defer os.Remove(key)
go s.Start(t)
-// Connect to a TLS server with a unencrypted client, to make sure
+// Connect to a TLS server with an unencrypted client, to make sure
// code can handle that misconfig.
d := NewDialer(56, false, false)
conn, err := d.Dial(s.Addr(), 5*time.Second)

@@ -297,7 +297,7 @@ func createTLSConfig(certFile, keyFile, caCertFile string) (*tls.Config, error)
return nil, err
}
config.RootCAs = x509.NewCertPool()
-ok := config.RootCAs.AppendCertsFromPEM([]byte(asn1Data))
+ok := config.RootCAs.AppendCertsFromPEM(asn1Data)
if !ok {
return nil, fmt.Errorf("failed to parse root certificate(s) in %q", caCertFile)
}

@@ -44,7 +44,7 @@ func TestMux(t *testing.T) {
mux.Logger = log.New(ioutil.Discard, "", 0)
}
for i := uint8(0); i < n; i++ {
-ln := mux.Listen(byte(i))
+ln := mux.Listen(i)
wg.Add(1)
go func(i uint8, ln net.Listener) {
@@ -58,7 +58,7 @@ func TestMux(t *testing.T) {
// If there is no message or the header byte
// doesn't match then expect close.
-if len(msg) == 0 || msg[0] != byte(i) {
+if len(msg) == 0 || msg[0] != i {
if err == nil || err.Error() != "network connection closed" {
t.Fatalf("unexpected error: %s", err)
}
@@ -97,8 +97,8 @@ func TestMux(t *testing.T) {
_, err = io.ReadFull(conn, resp[:])
// If the message header is less than n then expect a response.
-// Otherwise we should get an EOF because the mux closed.
-if len(msg) > 0 && uint8(msg[0]) < n {
+// Otherwise, we should get an EOF because the mux closed.
+if len(msg) > 0 && msg[0] < n {
if string(resp[:]) != `OK` {
t.Fatalf("unexpected response: %s", resp[:])
}

@@ -70,7 +70,7 @@ func (c *channelPool) Get() (net.Conn, error) {
return nil, ErrClosed
}
-// wrap our connections with out custom net.Conn implementation (wrapConn
+// wrap our connections without custom net.Conn implementation (wrapConn
// method) that puts the connection back to the pool if it's closed.
select {
case conn := <-conns:
@@ -138,7 +138,7 @@ func (c *channelPool) Close() {
atomic.AddInt64(&c.nOpenConns, 0)
}
-// Len() returns the number of idle connections.
+// Len returns the number of idle connections.
func (c *channelPool) Len() int {
conns, _ := c.getConnsAndFactory()
return len(conns)

@@ -5,7 +5,7 @@ import (
"sync"
)
-// Conn is a wrapper around net.Conn to modify the the behavior of
+// Conn is a wrapper around net.Conn to modify the behavior of
// net.Conn's Close() method.
type Conn struct {
net.Conn
@@ -28,7 +28,7 @@ func (p *Conn) Close() error {
return p.c.put(p.Conn)
}
-// MarkUnusable marks the connection not usable any more, to let the pool close it instead of returning it to pool.
+// MarkUnusable marks the connection not usable anymore, to let the pool close it instead of returning it to pool.
func (p *Conn) MarkUnusable() {
p.mu.Lock()
p.unusable = true
