v2.1/change-data-capture.md (34 additions & 10 deletions)
@@ -30,6 +30,19 @@ The core feature of CDC is the [changefeed](create-changefeed.html). Changefeeds
- Rows are sharded between Kafka partitions by the row’s [primary key](primary-key.html).
- The `WITH timestamps` option adds an **update timestamp** to each emitted row. It also causes periodic **resolved timestamp** messages to be emitted to each Kafka partition. A resolved timestamp is a guarantee that no (previously unseen) rows with a lower update timestamp will be emitted on that partition.
- Cross-row and cross-table ordering guarantees are not provided directly. However, the resolved timestamp notifications on every Kafka partition can be used to provide strong ordering and global consistency guarantees by buffering records between timestamp closures, as sketched below.
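
For illustration, a minimal sketch of a changefeed created with this option (the table name `office_dogs` and the local Kafka address are borrowed from the example later on this page):

{% include copy-clipboard.html %}
~~~ sql
> CREATE CHANGEFEED FOR TABLE office_dogs
    INTO 'kafka://localhost:9092'
    WITH timestamps;
~~~

A downstream consumer can then buffer the records it receives and release them in update-timestamp order once every partition has delivered a resolved timestamp greater than those records' update timestamps.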
@@ -94,7 +107,7 @@ In this example, you'll set up a changefeed for a single-node cluster that is co
$ ./cockroach start --insecure
~~~
-2. Download and extract the [Confluent platform](https://www.confluent.io/download/) (which includes Kafka).
+2. Download and extract the [Confluent Open Source platform](https://www.confluent.io/download/) (which includes Kafka).
3. Start Confluent:
@@ -105,28 +118,39 @@ In this example, you'll set up a changefeed for a single-node cluster that is co
Only `zookeeper` and `kafka` are needed. To troubleshoot Confluent, see [their docs](https://docs.confluent.io/current/installation/installing_cp.html#zip-and-tar-archives).
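
For reference, starting only those services with the Confluent CLI that ships with the platform typically looks like the following (a sketch; the path assumes you run it from the extracted platform directory, and `confluent start kafka` also brings up `zookeeper`, which Kafka depends on):

{% include copy-clipboard.html %}
~~~ shell
$ ./bin/confluent start kafka
~~~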
-4. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html):
+{{site.data.alerts.callout_info}}
+You are expected to create any Kafka topics with the necessary number of replications and partitions. [Topics can be created manually](https://kafka.apache.org/documentation/#basic_ops_add_topic) or [Kafka brokers can be configured to automatically create topics](https://kafka.apache.org/documentation/#topicconfigs) with a default partition count and replication factor.
+{{site.data.alerts.end}}
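
If you create topics manually, the `kafka-topics` tool bundled with the platform is one way to do it; a sketch, assuming a local ZooKeeper on its default port and a topic named after the table used later in this example:

{% include copy-clipboard.html %}
~~~ shell
$ ./bin/kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic office_dogs
~~~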
+
+5. As the `root` user, open the [built-in SQL client](use-the-built-in-sql-client.html):
{% include copy-clipboard.html %}
~~~ shell
$ cockroach sql --insecure
~~~
-5. Create a database called `test`:
+6. Create a database called `cdc_demo`:
{% include copy-clipboard.html %}
~~~ sql
> CREATE DATABASE cdc_demo;
~~~
-6. Set the database as the default:
+7. Set the database as the default:
{% include copy-clipboard.html %}
~~~ sql
> SET DATABASE = cdc_demo;
~~~
-7. Create a table and add data:
+8. Create a table and add data:
{% include copy-clipboard.html %}
~~~ sql
@@ -147,7 +171,7 @@ In this example, you'll set up a changefeed for a single-node cluster that is co
> UPDATE office_dogs SET name = 'Petee H' WHERE id = 1;
~~~
-8. Start the changefeed:
+9. Start the changefeed:
{% include copy-clipboard.html %}
~~~ sql
@@ -164,7 +188,7 @@ In this example, you'll set up a changefeed for a single-node cluster that is co
This will start up the changefeed in the background and return the `job_id`. The changefeed writes to Kafka.
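
The statement that starts such a changefeed has this general shape (a sketch based on [`CREATE CHANGEFEED`](create-changefeed.html); the sink address assumes the Kafka instance started earlier in this example):

{% include copy-clipboard.html %}
~~~ sql
> CREATE CHANGEFEED FOR TABLE office_dogs INTO 'kafka://localhost:9092';
~~~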
-9. In a new terminal, start watching the Kafka topic:
+10. In a new terminal, start watching the Kafka topic:
{% include copy-clipboard.html %}
~~~ shell
@@ -177,22 +201,22 @@ In this example, you'll set up a changefeed for a single-node cluster that is co
Note that the initial scan displays the state of the table as of when the changefeed started (therefore, the initial value of `"Petee"` is missing).
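
Watching a topic is typically done with Kafka's console consumer (a sketch; the flags assume the consumer tool bundled with the platform, and the topic name assumes the changefeed publishes to a topic named after its target table):

{% include copy-clipboard.html %}
~~~ shell
$ ./bin/kafka-console-consumer --bootstrap-server=localhost:9092 --from-beginning --topic=office_dogs
~~~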
-10. Back in the SQL client, insert more data:
+11. Back in the SQL client, insert more data:
{% include copy-clipboard.html %}
~~~ sql
> INSERT INTO office_dogs VALUES (3, 'Ernie');
~~~
-11. Back in the terminal where you're watching the Kafka topic, the following output has appeared:
+12. Back in the terminal where you're watching the Kafka topic, the following output has appeared:
~~~
{"id": 3, "name": "Ernie"}
~~~
## Known limitations
-The following are limitations in July 2, 2018 alpha release, and will be addressed before the v2.1 release.
+The following are limitations in the July 30, 2018 alpha release, and will be addressed before the v2.1 release.
- Changefeeds created with the alpha may not be compatible with future alphas and the final v2.1 release.