v2.0/common-errors.md (+1 -1)

@@ -49,7 +49,7 @@ To resolve this issue, use the [`cockroach cert client-create`](create-security-
 ## retry transaction
 
-Messages with the error code `40001` and the string `retry transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the the client. See [client-side transaction retries](transactions.html#client-side-transaction-retries) for more details.
+Messages with the error code `40001` and the string `retry transaction` indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](transactions.html#client-side-transaction-retries) for more details.
 
 ## node belongs to cluster \<cluster ID> but is attempting to connect to a gossip network for cluster \<another cluster ID>
v2.0/monitoring-and-alerting.md (+1 -1)

@@ -158,7 +158,7 @@ Active monitoring helps you spot problems early, but it is also essential to cre
 - **Rule:** Send an alert when a node is not executing SQL despite having connections.
 
-- **How to detect:** The `sql_conns` metric in the node's `_status/vars` output will be greater than `0` while the the `sql_query_count` metric will be `0`. You can also break this down by statement type using `sql_select_count`, `sql_insert_count`, `sql_update_count`, and `sql_delete_count`.
+- **How to detect:** The `sql_conns` metric in the node's `_status/vars` output will be greater than `0` while the `sql_query_count` metric will be `0`. You can also break this down by statement type using `sql_select_count`, `sql_insert_count`, `sql_update_count`, and `sql_delete_count`.
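The detection rule above can be sketched as a small check against a node's `_status/vars` output, which is Prometheus-style text. This is a minimal sketch under assumptions: the sample payload is illustrative rather than verbatim CockroachDB output, and a real deployment would wire this into a monitoring system rather than a one-off script.

```python
# Minimal sketch: detect "connections but no queries" from a node's
# _status/vars output (Prometheus text format). The sample payload below
# is an illustrative assumption, not literal CockroachDB output.

def parse_vars(text):
    """Parse 'name value' lines, skipping comments, into a dict of floats."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        name, _, value = line.rpartition(" ")
        try:
            metrics[name] = float(value)
        except ValueError:
            continue
    return metrics

def should_alert(metrics):
    """Alert when a node has open SQL connections but executes no queries."""
    return metrics.get("sql_conns", 0) > 0 and metrics.get("sql_query_count", 0) == 0

sample = """\
# HELP sql_conns Number of active SQL connections
sql_conns 12
sql_query_count 0
sql_select_count 0
"""

print(should_alert(parse_vars(sample)))  # True: connections exist, no queries run
```

The same dictionary lookup extends naturally to the per-statement metrics (`sql_select_count`, `sql_insert_count`, and so on) if you want a finer-grained rule.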
v2.0/orchestrate-cockroachdb-with-mesosphere-insecure.md (+2 -2)

@@ -113,7 +113,7 @@ When using AWS CloudFormation, the launch process generally takes 10 to 15 minut
     $ dcos node ssh --master-proxy --leader
     ~~~
 
-3. Start a temporary container and open the [built-in SQL shell](use-the-built-in-sql-client.html) inside it, using the the `vip` endpoint as the `--host`:
+3. Start a temporary container and open the [built-in SQL shell](use-the-built-in-sql-client.html) inside it, using the `vip` endpoint as the `--host`:
 
     {% include copy-clipboard.html %}
     ~~~ shell
@@ -196,7 +196,7 @@ The default `cockroachdb` service creates a 3-node CockroachDB cluster. You can
 
 The Scheduler process will restart with the new configuration and will validate any detected changes. To check that nodes were successfully added to the cluster, go back to the Admin UI, view **Node List**, and check for the new nodes.
 
-Alternately, you can [SSH to the DC/OS master node](https://docs.mesosphere.com/1.10/administering-clusters/sshcluster/) and then run the [`cockroach node status`](view-node-details.html) command in a temporary container, again using the the `vip` endpoint as the `--host`:
+Alternately, you can [SSH to the DC/OS master node](https://docs.mesosphere.com/1.10/administering-clusters/sshcluster/) and then run the [`cockroach node status`](view-node-details.html) command in a temporary container, again using the `vip` endpoint as the `--host`:
v2.0/training/locality-and-replication-zones.md (+1 -1)

@@ -76,7 +76,7 @@ Start a cluster like you did previously, but this time use the [`--locality`](..
 By default, CockroachDB tries to balance data evenly across specified "localities". At this point, since all three of the initial nodes have the same locality, the data is distributed across the 3 nodes. This means that for each range, one replica is on each node.
 
-To check this, open the Admin UI at <a href="http://localhost:8080" data-proofer-ignore>http://localhost:8080</a>, view **Node List**, and check the the replica count is the same on all nodes.
+To check this, open the Admin UI at <a href="http://localhost:8080" data-proofer-ignore>http://localhost:8080</a>, view **Node List**, and check the replica count is the same on all nodes.
v2.0/transactions.md (+1 -1)

@@ -53,7 +53,7 @@ To handle errors in transactions, you should check for the following types of se
 Type | Description
 -----|------------
-**Retryable Errors** | Errors with the code `40001` or string `retry transaction`, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the the client. See [client-side transaction retries](#client-side-transaction-retries) for more details.
+**Retryable Errors** | Errors with the code `40001` or string `retry transaction`, which indicate that a transaction failed because it conflicted with another concurrent or recent transaction accessing the same data. The transaction needs to be retried by the client. See [client-side transaction retries](#client-side-transaction-retries) for more details.
 **Ambiguous Errors** | Errors with the code `40003` that are returned in response to `RELEASE SAVEPOINT` (or `COMMIT` when not using `SAVEPOINT`), which indicate that the state of the transaction is ambiguous, i.e., you cannot assume it either committed or failed. How you handle these errors depends on how you want to resolve the ambiguity. See [here](common-errors.html#result-is-ambiguous) for more about this kind of error.
 **SQL Errors** | All other errors, which indicate that a statement in the transaction failed. For example, violating the Unique constraint generates a `23505` error. After encountering these errors, you can either issue a `COMMIT` or `ROLLBACK` to abort the transaction and revert the database to its state before the transaction began.<br><br>If you want to attempt the same set of statements again, you must begin a completely new transaction.
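The client-side retry that the table prescribes for `40001` errors can be sketched as a simple loop. This is a hedged sketch, not a driver's real API: `SerializationError` and `flaky_txn` are illustrative stand-ins for whatever exception your client library raises with SQLSTATE `40001` and for your transaction body.

```python
# Illustrative retry loop for code-40001 "retry transaction" errors.
# SerializationError is a stand-in for the driver-specific exception; a
# real client would also issue SAVEPOINT / ROLLBACK TO SAVEPOINT commands.

class SerializationError(Exception):
    """Stand-in for a driver error carrying SQLSTATE code 40001."""
    pgcode = "40001"

def run_transaction(txn_body, max_retries=5):
    """Run txn_body, retrying on serialization conflicts, in a new attempt each time."""
    for attempt in range(1, max_retries + 1):
        try:
            return txn_body()
        except SerializationError:
            if attempt == max_retries:
                raise  # give up; surface the error to the caller
            # In a real client: ROLLBACK, optionally back off, then retry.

# Demo: a transaction body that conflicts twice, then commits.
attempts = {"n": 0}
def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError("retry transaction")
    return "committed"

print(run_transaction(flaky_txn))  # committed (on the third attempt)
```

Note that, as the SQL Errors row says, each retry must be a completely new transaction; the loop body stands in for `BEGIN` ... `COMMIT`, not for re-issuing statements inside an aborted one.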
v2.1/add-constraint.md (+1 -1)

@@ -57,7 +57,7 @@ Adding the [Check constraint](check.html) requires that all of a column's values
 Before you can add the [Foreign Key](foreign-key.html) constraint to columns, the columns must already be indexed. If they are not already indexed, use [`CREATE INDEX`](create-index.html) to index them and only then use the `ADD CONSTRAINT` statement to add the Foreign Key constraint to the columns.
 
-For example, let's say you have two simple tables, `orders` and `customers`:
+For example, let's say you have two tables, `orders` and `customers`:
v2.1/admin-ui-custom-chart-debug-page.md (+1 -1)

@@ -5,7 +5,7 @@ toc: false
 <span class="version-tag">New in v2.0:</span> The **Custom Chart** debug page in the Admin UI can be used to create a custom chart showing any combination of over [200 available metrics](#available-metrics).
 
-The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Just like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc.
+The definition of the customized dashboard is encoded in the URL. To share the dashboard with someone, send them the URL. Like any other URL, it can be bookmarked, sit in a pinned tab in your browser, etc.
v2.1/admin-ui-overview-dashboard.md (+2 -2)

@@ -4,7 +4,7 @@ summary: The Overview dashboard lets you monitor important SQL performance, repl
 toc: false
 ---
 
 The **Overview** dashboard lets you monitor important SQL performance, replication, and storage metrics. To view this dashboard, [access the Admin UI](admin-ui-access-and-navigate.html#access-the-admin-ui) and click **Metrics** on the left-hand navigation bar. The **Overview** dashboard is displayed by default.
 
 <div id="toc"></div>
@@ -40,7 +40,7 @@ Ranges are subsets of your data, which are replicated to ensure survivability. R
 For details about how to control the number and location of replicas, see [Configure Replication Zones](configure-replication-zones.html).
 
-{{site.data.alerts.callout_info}}The timeseries data used to power the graphs in the admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this <a href="operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes">FAQ</a>.{{site.data.alerts.end}}
+{{site.data.alerts.callout_info}}The timeseries data used to power the graphs in the Admin UI is stored within the cluster and accumulates for 30 days before it starts getting truncated. As a result, for the first 30 days or so of a cluster's life, you will see a steady increase in disk usage and the number of ranges even if you aren't writing data to the cluster yourself. For more details, see this <a href="operational-faqs.html#why-is-disk-usage-increasing-despite-lack-of-writes">FAQ</a>.{{site.data.alerts.end}}
-- In the node view, the graph shows separately the current moving average, over the last 10 seconds, of the number of opened, committed, aborted and rolled back transactions per second issued by SQL clients on the node.
+- In the node view, the graph shows separately the current moving average, over the last 10 seconds, of the number of opened, committed, aborted, and rolled back transactions per second issued by SQL clients on the node.
 
 - In the cluster view, the graph shows the sum of the per-node averages, that is, an aggregate estimation of the current transactions load over the cluster, assuming the last 10 seconds of activity per node are representative of this load.
v2.1/architecture/overview.md (+3 -3)

@@ -41,7 +41,7 @@ It's helpful to understand a few terms before reading our architecture documenta
 Term | Definition
 -----|-----------
 **Cluster** | Your CockroachDB deployment, which acts as a single logical application that contains one or more databases.
-**Node** | An individual machine running CockroachDB. Many nodes join together to create your cluster.
+**Node** | An individual machine running CockroachDB. Many nodes join to create your cluster.
 **Range** | A set of sorted, contiguous data from your cluster.
 **Replicas** | Copies of your ranges, which are stored on at least 3 nodes to ensure survivability.
 **Range Lease** | For each range, one of the replicas holds the "range lease". This replica, referred to as the "leaseholder", is the one that receives and coordinates all read and write requests for the range.
@@ -56,7 +56,7 @@ Term | Definition
 **Consensus** | When a range receives a write, a quorum of nodes containing replicas of the range acknowledge the write. This means your data is safely stored and a majority of nodes agree on the database's current state, even if some of the nodes are offline.<br/><br/>When a write *doesn't* achieve consensus, forward progress halts to maintain consistency within the cluster.
 **Replication** | Replication involves creating and distributing copies of data, as well as ensuring copies remain consistent. However, there are multiple types of replication: namely, synchronous and asynchronous.<br/><br/>Synchronous replication requires all writes to propagate to a quorum of copies of the data before being considered committed. To ensure consistency with your data, this is the kind of replication CockroachDB uses.<br/><br/>Asynchronous replication only requires a single node to receive the write to be considered committed; it's propagated to each copy of the data after the fact. This is more or less equivalent to "eventual consistency", which was popularized by NoSQL databases. This method of replication is likely to cause anomalies and loss of data.
 **Transactions** | A set of operations performed on your database that satisfy the requirements of [ACID semantics](https://en.wikipedia.org/wiki/Database_transaction). This is a crucial component for a consistent system to ensure developers can trust the data in their database.
-**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically can't guarantee that reads are both up-to-date and fast.
+**Multi-Active Availability** | Our consensus-based notion of high availability that lets each node in the cluster handle reads and writes for a subset of the stored data (on a per-range basis). This is in contrast to active-passive replication, in which the active node receives 100% of request traffic, as well as active-active replication, in which all nodes accept requests but typically cannot guarantee that reads are both up-to-date and fast.
 
 ## Overview
 
@@ -69,7 +69,7 @@ Once the `cockroach` process is running, developers interact with CockroachDB th
 After receiving SQL RPCs, nodes convert them into operations that work with our distributed key-value store. As these RPCs start filling your cluster with data, CockroachDB algorithmically starts distributing your data among your nodes, breaking the data up into 64MiB chunks that we call ranges. Each range is replicated to at least 3 nodes to ensure survivability. This way, if nodes go down, you still have copies of the data which can be used for reads and writes, as well as replicating the data to other nodes.
 
-If a node receives a read or write request it can't directly serve, it simply finds the node that can handle the request, and communicates with it. This way you don't need to know where your data lives, CockroachDB tracks it for you, and enables symmetric behavior for each node.
+If a node receives a read or write request it cannot directly serve, it simply finds the node that can handle the request, and communicates with it. This way you do not need to know where your data lives; CockroachDB tracks it for you, and enables symmetric behavior for each node.
 
 Any changes made to the data in a range rely on a consensus algorithm to ensure a majority of its replicas agree to commit the change, ensuring industry-leading isolation guarantees and providing your application consistent reads, regardless of which node you communicate with.
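The numbers in the passage above (64MiB ranges, at least 3 replicas) support some quick back-of-envelope arithmetic. The sketch below only illustrates that estimate; it deliberately ignores real-world factors such as compression, MVCC version overhead, and load-based range splitting, so treat its outputs as rough upper-level intuition rather than capacity planning.

```python
import math

# Back-of-envelope estimate from the overview: logical data is split into
# 64MiB ranges, and each range is replicated at least 3 times. Ignores
# compression, MVCC overhead, and load-based splitting.

RANGE_SIZE_MIB = 64
REPLICATION_FACTOR = 3

def estimate(data_mib, range_size=RANGE_SIZE_MIB, replicas=REPLICATION_FACTOR):
    """Estimate range count, replica count, and raw replicated storage."""
    ranges = math.ceil(data_mib / range_size)
    return {
        "ranges": ranges,
        "replicas": ranges * replicas,
        "stored_mib": data_mib * replicas,
    }

# 10 GiB of logical data:
print(estimate(10 * 1024))  # {'ranges': 160, 'replicas': 480, 'stored_mib': 30720}
```

The `ceil` matters for small tables: even 1MiB of data occupies one whole range, and therefore at least three replicas.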
v2.1/architecture/replication-layer.md (+1 -1)

@@ -71,7 +71,7 @@ To achieve this, each lease renewal or transfer also attempts to collocate them.
 To manage leases for table data, CockroachDB implements a notion of "epochs," which are defined as the period between a node joining a cluster and a node disconnecting from a cluster. When the node disconnects, the epoch is considered changed, and the node immediately loses all of its leases.
 
-This mechanism lets us avoid tracking leases for every range, which eliminates a substantial amount of traffic we would otherwise incur. Instead, we assume leases don't expire until a node loses connection.
+This mechanism lets us avoid tracking leases for every range, which eliminates a substantial amount of traffic we would otherwise incur. Instead, we assume leases do not expire until a node loses connection.
 
 #### Expiration-Based Leases (Meta & System Ranges)
v2.1/architecture/sql-layer.md (+1 -1)

@@ -36,7 +36,7 @@ Because of this structure, CockroachDB provides typical relational features like
 CockroachDB implements a large portion of the ANSI SQL standard to manifest its relational structure. You can view [all of the SQL features CockroachDB supports here](../sql-feature-support.html).
 
-Importantly, through the SQL API, we also let developers use ACID-semantic transactions just like they would through any SQL database (`BEGIN`, `END`, `ISOLATION LEVELS`, etc.)
+Importantly, through the SQL API, we also let developers use ACID-semantic transactions like they would through any SQL database (`BEGIN`, `END`, `ISOLATION LEVELS`, etc.)
v2.1/architecture/storage-layer.md (+2 -2)

 RocksDB integrates really well with CockroachDB for a number of reasons:
 
-- Key-value store, which makes mapping to our key-value layer very simple
+- Key-value store, which makes mapping to our key-value layer simple
 - Atomic write batches and snapshots, which give us a subset of transactions
 
 Efficient storage for the keys is guaranteed by the underlying RocksDB engine by means of prefix compression.
@@ -51,7 +51,7 @@ Despite being implemented in the Storage Layer, MVCC values are widely used to e
 As described in the [SQL:2011 standard](https://en.wikipedia.org/wiki/SQL:2011#Temporal_support), CockroachDB supports time travel queries (enabled by MVCC).
 
-To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB actually uses the schema information as of that time to formulate the queries.
+To do this, all of the schema information also has an MVCC-like model behind it. This lets you perform `SELECT...AS OF SYSTEM TIME`, and CockroachDB uses the schema information as of that time to formulate the queries.
 
 Using these tools, you can get consistent data from your database as far back as your garbage collection period.
v2.1/architecture/transaction-layer.md (+1 -1)

@@ -155,7 +155,7 @@ CockroachDB proceeds through the following steps until one of the transactions i
 3. `TxnB` enters the `TxnWaitQueue` to wait for `TxnA` to complete.
 
-Additionally, the following types of conflicts that don't involve running into intents can arise:
+Additionally, the following types of conflicts that do not involve running into intents can arise:
 
 - **Write after read**, when a write with a lower timestamp encounters a later read. This is handled through the [Timestamp Cache](#timestamp-cache).
 - **Read within uncertainty window**, when a read encounters a value with a higher timestamp but it's ambiguous whether the value should be considered to be in the future or in the past of the transaction because of possible *clock skew*. This is handled by attempting to push the transaction's timestamp beyond the uncertain value (see [read refreshing](#read-refreshing)). Note that, if the transaction has to be retried, reads will never encounter uncertainty issues on any node which was previously visited, and that there's never any uncertainty on values read from the transaction's gateway node.