Commit 5197e9a (parent 0445380)

Fix low-hanging fruit caught by the vale linter

Mostly, these are contractions, repetitions, and redundancies.

83 files changed (+133 −133 lines changed)

v2.0/common-errors.md (+1 −1)

@@ -49,7 +49,7 @@ To resolve this issue, use the [`cockroach cert client-create`](create-security-

 ## retry transaction

-Messages with the error code `40001` and the string `retry transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the the client. See [client-side transaction retries](transactions.html#client-side-transaction-retries) for more details.
+Messages with the error code `40001` and the string `retry transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](transactions.html#client-side-transaction-retries) for more details.

 ## node belongs to cluster \<cluster ID> but is attempting to connect to a gossip network for cluster \<another cluster ID>
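The retry guidance above (retry on error code `40001`) can be sketched as a generic client-side loop. This is a minimal illustration, not a real driver API: `RetryableError` and `run_transaction` are hypothetical names standing in for whatever your client library provides.

```python
# Sketch of the client-side retry loop the docs describe: retry when the
# transaction body fails with a 40001-style "retry transaction" error,
# and give up after a bounded number of attempts.
# RetryableError and run_transaction are illustrative, not CockroachDB APIs.

class RetryableError(Exception):
    """Stands in for a driver error carrying SQLSTATE 40001."""
    pgcode = "40001"

def run_transaction(txn_body, max_retries=5):
    """Run txn_body(), retrying while it raises a retryable (40001) error."""
    for _attempt in range(max_retries):
        try:
            return txn_body()
        except RetryableError:
            continue  # conflicted with a concurrent transaction: retry
    raise RuntimeError("transaction did not succeed after retries")

# Usage: a body that conflicts twice, then succeeds on the third attempt.
attempts = {"n": 0}

def body():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RetryableError()
    return "committed"
```

In a real client you would open a new attempt of the same transaction inside the loop and only treat SQLSTATE `40001` as retryable; any other error should propagate.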

v2.0/monitoring-and-alerting.md (+1 −1)

@@ -158,7 +158,7 @@ Active monitoring helps you spot problems early, but it is also essential to cre

 - **Rule:** Send an alert when a node is not executing SQL despite having connections.

-- **How to detect:** The `sql_conns` metric in the node's `_status/vars` output will be greater than `0` while the the `sql_query_count` metric will be `0`. You can also break this down by statement type using `sql_select_count`, `sql_insert_count`, `sql_update_count`, and `sql_delete_count`.
+- **How to detect:** The `sql_conns` metric in the node's `_status/vars` output will be greater than `0` while the `sql_query_count` metric will be `0`. You can also break this down by statement type using `sql_select_count`, `sql_insert_count`, `sql_update_count`, and `sql_delete_count`.

 ### CA certificate expires soon
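The alerting rule above can be sketched as a small check over the node's metrics output. This is a hedged illustration: the sample text and helper names below are made up, and real `_status/vars` output contains many more metrics in Prometheus text format.

```python
# Sketch of the "connected but not executing SQL" alert rule:
# flag a node whose metrics show sql_conns > 0 while sql_query_count == 0.
# parse_vars and the sample text are illustrative, not real node output.

def parse_vars(text):
    """Parse simple 'name value' metric lines into a dict of floats."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comment lines
        name, _, value = line.rpartition(" ")
        metrics[name] = float(value)
    return metrics

def node_idle_but_connected(metrics):
    """True when there are open SQL connections but no executed queries."""
    return metrics.get("sql_conns", 0) > 0 and metrics.get("sql_query_count", 1) == 0

# Usage with a fabricated sample of the relevant metrics:
sample = """\
sql_conns 4
sql_query_count 0
sql_select_count 0
"""
```

A production check would scrape `_status/vars` over HTTP on an interval (or let a monitoring system such as Prometheus evaluate the equivalent rule) rather than parsing a static string.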

v2.0/orchestrate-cockroachdb-with-mesosphere-insecure.md (+2 −2)

@@ -113,7 +113,7 @@ When using AWS CloudFormation, the launch process generally takes 10 to 15 minut
 $ dcos node ssh --master-proxy --leader
 ~~~

-3. Start a temporary container and open the [built-in SQL shell](use-the-built-in-sql-client.html) inside it, using the the `vip` endpoint as the `--host`:
+3. Start a temporary container and open the [built-in SQL shell](use-the-built-in-sql-client.html) inside it, using the `vip` endpoint as the `--host`:

 {% include copy-clipboard.html %}
 ~~~ shell

@@ -196,7 +196,7 @@ The default `cockroachdb` service creates a 3-node CockroachDB cluster. You can

 The Scheduler process will restart with the new configuration and will validate any detected changes. To check that nodes were successfully added to the cluster, go back to the Admin UI, view **Node List**, and check for the new nodes.

-Alternately, you can [SSH to the DC/OS master node](https://docs.mesosphere.com/1.10/administering-clusters/sshcluster/) and then run the [`cockroach node status`](view-node-details.html) command in a temporary container, again using the the `vip` endpoint as the `--host`:
+Alternately, you can [SSH to the DC/OS master node](https://docs.mesosphere.com/1.10/administering-clusters/sshcluster/) and then run the [`cockroach node status`](view-node-details.html) command in a temporary container, again using the `vip` endpoint as the `--host`:

 {% include copy-clipboard.html %}
 ~~~ shell

v2.0/query-order.md (+1 −1)

@@ -152,7 +152,7 @@ It is also possible to sort using an arbitrary scalar expression computed for ea
 ## Sorting Using Multiple Columns

 When more than one ordering specification is given, the later specifications are used
-to order rows that are equal over the the earlier specifications, for example:
+to order rows that are equal over the earlier specifications, for example:

 ~~~ sql
 > CREATE TABLE ab(a INT, b INT);
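The tie-breaking rule described above (`ORDER BY a, b` uses `b` only for rows that are equal on `a`) is the same behavior as sorting by a tuple key. A tiny Python analogy, with made-up sample rows:

```python
# Analogy for ORDER BY a, b: sort by a tuple key, so the second element
# only orders rows that tie on the first. The rows are illustrative.
rows = [(2, 1), (1, 2), (2, 0), (1, 1)]
ordered = sorted(rows, key=lambda r: (r[0], r[1]))
# (1, 1) and (1, 2) tie on the first column and are ordered by the second.
```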

v2.0/training/fault-tolerance-and-automated-repair.md (+2 −2)

@@ -172,7 +172,7 @@ When a node fails, the cluster waits for the node to remain offline for 5 minute
 --execute="SET CLUSTER SETTING server.time_until_store_dead = '1m0s';"
 ~~~

-2. Then use the the [`cockroach quit`](../stop-a-node.html) command to stop node 5:
+2. Then use the [`cockroach quit`](../stop-a-node.html) command to stop node 5:

 {% include copy-clipboard.html %}
 ~~~ shell

@@ -276,7 +276,7 @@ To be able to tolerate 2 of 5 nodes failing simultaneously without any service i

 ## Step 8. Simulate two simultaneous node failures

-1. Use the the [`cockroach quit`](../stop-a-node.html) command to stop nodes 4 and 5:
+1. Use the [`cockroach quit`](../stop-a-node.html) command to stop nodes 4 and 5:

 {% include copy-clipboard.html %}
 ~~~ shell

v2.0/training/locality-and-replication-zones.md (+1 −1)

@@ -76,7 +76,7 @@ Start a cluster like you did previously, but this time use the [`--locality`](..

 By default, CockroachDB tries to balance data evenly across specified "localities". At this point, since all three of the initial nodes have the same locality, the data is distributed across the 3 nodes. This means that for each range, one replica is on each node.

-To check this, open the Admin UI at <a href="http://localhost:8080" data-proofer-ignore>http://localhost:8080</a>, view **Node List**, and check the the replica count is the same on all nodes.
+To check this, open the Admin UI at <a href="http://localhost:8080" data-proofer-ignore>http://localhost:8080</a>, view **Node List**, and check the replica count is the same on all nodes.

 <img src="{{ 'images/v2.0/training-1.png' | relative_url }}" alt="CockroachDB Admin UI" style="border:1px solid #eee;max-width:100%" />
v2.0/transactions.md (+1 −1)

@@ -53,7 +53,7 @@ To handle errors in transactions, you should check for the following types of se

 Type | Description
 -----|------------
-**Retryable Errors** | Errors with the code `40001` or string `retry transaction`, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the the client. See [client-side transaction retries](#client-side-transaction-retries) for more details.
+**Retryable Errors** | Errors with the code `40001` or string `retry transaction`, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](#client-side-transaction-retries) for more details.
 **Ambiguous Errors** | Errors with the code `40003` that are returned in response to `RELEASE SAVEPOINT` (or `COMMIT` when not using `SAVEPOINT`), which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. See [here](common-errors.html#result-is-ambiguous) for more about this kind of error.
 **SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the Unique constraint generates an `23505` error. After encountering these errors, you can either issue a `COMMIT` or `ROLLBACK` to abort the transaction and revert the database to its state before the transaction began.<br><br>If you want to attempt the same set of statements again, you must begin a completely new transaction.

v2.1/add-constraint.md (+1 −1)

@@ -57,7 +57,7 @@ Adding the [Check constraint](check.html) requires that all of a column's values

 Before you can add the [Foreign Key](foreign-key.html) constraint to columns, the columns must already be indexed. If they are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the `ADD CONSTRAINT` statement to add the Foreign Key constraint to the columns.

-For example, let's say you have two simple tables, `orders` and `customers`:
+For example, let's say you have two tables, `orders` and `customers`:

 ~~~ sql
 > SHOW CREATE TABLE customers;

v2.1/admin-ui-custom-chart-debug-page.md (+1 −1)

@@ -5,7 +5,7 @@ toc: false

 <span class="version-tag">New in v2.0:</span> The **Custom Chart** debug page in the Admin UI can be used to create a custom chart showing any combination of over [200 available metrics](#available-metrics).

-The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Just like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc.
+The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc.

 <div id="toc"></div>

v2.1/admin-ui-overview-dashboard.md (+2 −2)

@@ -4,7 +4,7 @@ summary: The Overview dashboard lets you monitor important SQL performance, repl
 toc: false
 ---

-The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default.
+The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default.

 <div id="toc"></div>

@@ -40,7 +40,7 @@ Ranges are subsets of your data, which are replicated to ensure survivability. R

 For details about how to control the number and location of replicas, see [Configure Replication Zones](configure-replication-zones.html).

-{{site.data.alerts.callout_info}}The timeseries data used to power the graphs in the admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this <a href="operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes">FAQ</a>.{{site.data.alerts.end}}
+{{site.data.alerts.callout_info}}The timeseries data used to power the graphs in the Admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this <a href="operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes">FAQ</a>.{{site.data.alerts.end}}

 ## Capacity

v2.1/admin-ui-sql-dashboard.md (+1 −1)

@@ -40,7 +40,7 @@ The **SQL Byte Traffic** graph helps you correlate SQL query count to byte traff

 <img src="{{ 'images/v2.1/admin_ui_transactions.png' | relative_url }}" alt="CockroachDB Admin UI Transactions" style="border:1px solid #eee;max-width:100%" />

-- In the node view, the graph shows separately the current moving average, over the last 10 seconds, of the number of opened, committed, aborted and rolled back transactions per second issued by SQL clients on the node.
+- In the node view, the graph shows separately the current moving average, over the last 10 seconds, of the number of opened, committed, aborted, and rolled back transactions per second issued by SQL clients on the node.

 - In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current transactions load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.

v2.1/architecture/overview.md (+3 −3)

@@ -41,7 +41,7 @@ It's helpful to understand a few terms before reading our architecture documenta
 Term | Definition
 -----|-----------
 **Cluster** | Your CockroachDB deployment, which acts as a single logical application that contains one or more databases.
-**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster.
+**Node** | An individual machine running CockroachDB. Many nodes join to create your cluster.
 **Range** | A set of sorted, contiguous data from your cluster.
 **Replicas** | Copies of your ranges, which are stored on at least 3 nodes to ensure survivability.
 **Range Lease** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.

@@ -56,7 +56,7 @@ Term | Definition
 **Consensus** | When a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.<br/><br/>When a write *doesn't* achieve consensus, forward progress halts to maintain consistency within the cluster.
 **Replication** | Replication involves creating and distributing copies of data, as well as ensuring copies remain consistent. However, there are multiple types of replication: namely, synchronous and asynchronous.<br/><br/>Synchronous replication requires all writes to propagate to a quorum of copies of the data before being considered committed. To ensure consistency with your data, this is the kind of replication CockroachDB uses.<br/><br/>Aysnchronous replication only requires a single node to receive the write to be considered committed; it's propagated to each copy of the data after the fact. This is more or less equivalent to "eventual consistency", which was popularized by NoSQL databases. This method of replication is likely to cause anomalies and loss of data.
 **Transactions** | A set of operations performed on your database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/Database_transaction). This is a crucial component for a consistent system to ensure developers can trust the data in their database.
-**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically can't guarantee that reads are both up-to-date and fast.
+**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast.

 ## Overview

@@ -69,7 +69,7 @@ Once the `cockroach` process is running, developers interact with CockroachDB th

 After receiving SQL RPCs, nodes convert them into operations that work with our distributed key-value store. As these RPCs start filling your cluster with data, CockroachDB algorithmically starts distributing your data among your nodes, breaking the data up into 64MiB chunks that we call ranges. Each range is replicated to at least 3 nodes to ensure survivability. This way, if nodes go down, you still have copies of the data which can be used for reads and writes, as well as replicating the data to other nodes.

-If a node receives a read or write request it can't directly serve, it simply finds the node that can handle the request, and communicates with it. This way you don't need to know where your data lives, CockroachDB tracks it for you, and enables symmetric behavior for each node.
+If a node receives a read or write request it cannot directly serve, it simply finds the node that can handle the request, and communicates with it. This way you do not need to know where your data lives, CockroachDB tracks it for you, and enables symmetric behavior for each node.

 Any changes made to the data in a range rely on a consensus algorithm to ensure a majority of its replicas agree to commit the change, ensuring industry-leading isolation guarantees and providing your application consistent reads, regardless of which node you communicate with.

v2.1/architecture/replication-layer.md (+1 −1)

@@ -71,7 +71,7 @@ To achieve this, each lease renewal or transfer also attempts to collocate them.

 To manage leases for table data, CockroachDB implements a notion of "epochs," which are defined as the period between a node joining a cluster and a node disconnecting from a cluster. When the node disconnects, the epoch is considered changed, and the node immediately loses all of its leases.

-This mechanism lets us avoid tracking leases for every range, which eliminates a substantial amount of traffic we would otherwise incur. Instead, we assume leases don't expire until a node loses connection.
+This mechanism lets us avoid tracking leases for every range, which eliminates a substantial amount of traffic we would otherwise incur. Instead, we assume leases do not expire until a node loses connection.

 #### Expiration-Based Leases (Meta & System Ranges)

v2.1/architecture/sql-layer.md (+1 −1)

@@ -36,7 +36,7 @@ Because of this structure, CockroachDB provides typical relational features like

 CockroachDB implements a large portion of the ANSI SQL standard to manifest its relational structure. You can view [all of the SQL features CockroachDB supports here](../sql-feature-support.html).

-Importantly, through the SQL API, we also let developers use ACID-semantic transactions just like they would through any SQL database (`BEGIN`, `END`, `ISOLATION LEVELS`, etc.)
+Importantly, through the SQL API, we also let developers use ACID-semantic transactions like they would through any SQL database (`BEGIN`, `END`, `ISOLATION LEVELS`, etc.)

 ### PostgreSQL Wire Protocol

v2.1/architecture/storage-layer.md (+3 −3)

@@ -1,6 +1,6 @@
 ---
 title: Storage Layer
-summary:
+summary:
 toc: false
 ---

@@ -36,7 +36,7 @@ CockroachDB uses RocksDB––an embedded key-value store––to read and write

 RocksDB integrates really well with CockroachDB for a number of reasons:

-- Key-value store, which makes mapping to our key-value layer very simple
+- Key-value store, which makes mapping to our key-value layer simple
 - Atomic write batches and snapshots, which give us a subset of transactions

 Efficient storage for the keys is guaranteed by the underlying RocksDB engine by means of prefix compression.

@@ -51,7 +51,7 @@ Despite being implemented in the Storage Layer, MVCC values are widely used to e

 As described in the [SQL:2011 standard](https://en.wikipedia.org/wiki/SQL:2011#Temporal_support), CockroachDB supports time travel queries (enabled by MVCC).

-To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB actually uses the schema information as of that time to formulate the queries.
+To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB uses the schema information as of that time to formulate the queries.

 Using these tools, you can get consistent data from your database as far back as your garbage collection period.

v2.1/architecture/transaction-layer.md (+1 −1)

@@ -155,7 +155,7 @@ CockroachDB proceeds through the following steps until one of the transactions i

 3. `TxnB` enters the `TxnWaitQueue` to wait for `TxnA` to complete.

-Additionally, the following types of conflicts that don't involve running into intents can arise:
+Additionally, the following types of conflicts that do not involve running into intents can arise:

 - **Write after read**, when a write with a lower timestamp encounters a later read. This is handled through the [Timestamp Cache](#timestamp-cache).
 - **Read within uncertainty window**, when a read encounters a value with a higher timestamp but it's ambiguous whether the value should be considered to be in the future or in the past of the transaction because of possible *clock skew*. This is handled by attempting to push the transaction's timestamp beyond the uncertain value (see [read refreshing](#read-refreshing)). Note that, if the transaction has to be retried, reads will never encounter uncertainty issues on any node which was previously visited, and that there's never any uncertainty on values read from the transaction's gateway node.
