
Commit 228809d: fix anchors
1 parent 9008800

8 files changed: +28, -35 lines

docs/guides/best-practices/sparse-primary-indexes.md
Lines changed: 16 additions & 23 deletions (large diff not rendered)

docs/guides/sre/keeper/index.md
Lines changed: 1 addition & 1 deletion

@@ -482,7 +482,7 @@ If you have ClickHouse installed, you can use the binary directly:
 clickhouse keeper-converter ...
 ```

-Otherwise, you can [download the binary](/getting-started/quick-start#1-download-the-binary) and run the tool as described above without installing ClickHouse.
+Otherwise, you can [download the binary](/getting-started/quick-start#download-the-binary) and run the tool as described above without installing ClickHouse.
 :::
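The new anchor lands on the quick-start step for fetching a standalone binary. As a rough sketch of the flow the changed sentence describes, assuming the standard quick-start install script; the ZooKeeper and output paths below are illustrative placeholders:

```bash
# Fetch a standalone ClickHouse binary without installing a package
# (the quick-start method the updated link points to).
curl https://clickhouse.com/ | sh

# Run the converter directly from the downloaded binary.
# All directory paths below are placeholders for your own layout.
./clickhouse keeper-converter \
    --zookeeper-logs-dir /var/lib/zookeeper/version-2 \
    --zookeeper-snapshots-dir /var/lib/zookeeper/version-2 \
    --output-dir /var/lib/clickhouse/coordination/snapshots
```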

docs/guides/sre/network-ports.md
Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ sidebar_label: Network ports
 # Network ports

 :::note
-Ports described as **default** mean that the port number is configured in `/etc/clickhouse-server/config.xml`. To customize your settings, add a file to `/etc/clickhouse-server/config.d/`. See the [configuration file](../../operations/configuration-files.md#override) documentation.
+Ports described as **default** mean that the port number is configured in `/etc/clickhouse-server/config.xml`. To customize your settings, add a file to `/etc/clickhouse-server/config.d/`. See the [configuration file](/operations/configuration-files) documentation.
 :::

 |Port|Description|
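The note in this hunk describes overriding defaults by dropping a file into `config.d/`. A minimal sketch of such an override, with a hypothetical file name and an illustrative port value:

```bash
# Override the default TCP port without touching config.xml.
# The file name and port number here are illustrative only.
sudo tee /etc/clickhouse-server/config.d/custom-ports.xml <<'EOF'
<clickhouse>
    <tcp_port>9500</tcp_port>
</clickhouse>
EOF

# Restart so the server picks up files from config.d/.
sudo systemctl restart clickhouse-server
```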

docs/guides/troubleshooting.md
Lines changed: 3 additions & 3 deletions

@@ -155,7 +155,7 @@ Check:
 - If you run ClickHouse in Docker in an IPv6 network, make sure that `network=host` is set.

 1. Endpoint settings.
-   - Check [listen_host](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-listen_host) and [tcp_port](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port) settings.
+   - Check [listen_host](/operations/server-configuration-parameters/settings#listen_host) and [tcp_port](/operations/server-configuration-parameters/settings#tcp_port) settings.
    - ClickHouse server accepts localhost connections only by default.

 1. HTTP protocol settings:

@@ -165,8 +165,8 @@ Check:
 1. Secure connection settings.

    - Check:
-     - The [tcp_port_secure](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port_secure) setting.
-     - Settings for [SSL certificates](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-openssl).
+     - The [tcp_port_secure](/operations/server-configuration-parameters/settings#tcp_port_secure) setting.
+     - Settings for [SSL certificates](/operations/server-configuration-parameters/settings#openssl).
    - Use proper parameters while connecting. For example, use the `port_secure` parameter with `clickhouse_client`.

 1. User settings:
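Both hunks point at server settings that can be verified from a shell. A hedged sketch of those checks, assuming a default installation layout; the host name is a placeholder:

```bash
# Endpoint settings: see which listen_host and tcp_port values are active,
# including any overrides dropped into config.d/.
grep -rn 'listen_host\|tcp_port' /etc/clickhouse-server/config.xml \
    /etc/clickhouse-server/config.d/

# Secure connection settings: connect over TLS using the secure flag and
# the TLS port (9440 is the conventional default for tcp_port_secure).
clickhouse-client --host my-clickhouse-host --secure --port 9440
```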

docs/integrations/data-ingestion/clickpipes/index.md
Lines changed: 1 addition & 1 deletion

@@ -78,7 +78,7 @@ If ClickPipes cannot connect to a data source or destination after 15min., Click

 - **Does using ClickPipes incur an additional cost?**

-  ClickPipes is billed on two dimensions: Ingested Data and Compute. The full details of the pricing are available on [this page](/cloud/manage/jan-2025-faq/pricing-dimensions#clickpipes-pricing). Running ClickPipes might also generate an indirect compute and storage cost on the destination ClickHouse Cloud service similar to any ingest workload.
+  ClickPipes is billed on two dimensions: Ingested Data and Compute. The full details of the pricing are available on [this page](/cloud/manage/jan-2025-faq/pricing-dimensions#clickpipes-pricing-faq). Running ClickPipes might also generate an indirect compute and storage cost on the destination ClickHouse Cloud service similar to any ingest workload.

 - **Is there a way to handle errors or failures when using ClickPipes for Kafka?**

docs/integrations/data-ingestion/clickpipes/kafka.md
Lines changed: 1 addition & 1 deletion

@@ -127,7 +127,7 @@ The following ClickHouse data types are currently supported in ClickPipes:
 ### Avro {#avro}
 #### Supported Avro Data Types {#supported-avro-data-types}

-ClickPipes supports all Avro Primitive and Complex types, and all Avro Logical types except `time-millis`, `time-micros`, `local-timestamp-millis`, `local_timestamp-micros`, and `duration`. Avro `record` types are converted to Tuple, `array` types to Array, and `map` to Map (string keys only). In general the conversions listed [here](../../../interfaces/formats.md#data-types-matching) are available. We recommend using exact type matching for Avro numeric types, as ClickPipes does not check for overflow or precision loss on type conversion.
+ClickPipes supports all Avro Primitive and Complex types, and all Avro Logical types except `time-millis`, `time-micros`, `local-timestamp-millis`, `local_timestamp-micros`, and `duration`. Avro `record` types are converted to Tuple, `array` types to Array, and `map` to Map (string keys only). In general the conversions listed [here](/interfaces/formats/Avro#data-types-matching) are available. We recommend using exact type matching for Avro numeric types, as ClickPipes does not check for overflow or precision loss on type conversion.

 #### Nullable Types and Avro Unions {#nullable-types-and-avro-unions}
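Because ClickPipes does not check for overflow or precision loss, the exact-match recommendation above amounts to pairing each Avro numeric type with the ClickHouse type of the same width. A hypothetical target table sketching that pairing (table name and schema invented for illustration):

```bash
# Exact-width matches for Avro numeric fields (illustrative schema).
clickhouse-client --query "
    CREATE TABLE kafka_events (
        id    Int64,    -- Avro long
        count Int32,    -- Avro int
        ratio Float32,  -- Avro float
        score Float64   -- Avro double
    ) ENGINE = MergeTree ORDER BY id"
```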

docs/integrations/data-ingestion/clickpipes/object-storage.md
Lines changed: 4 additions & 4 deletions

@@ -83,9 +83,9 @@ More connectors are will get added to ClickPipes, you can find out more by [cont
 ## Supported Data Formats {#supported-data-formats}

 The supported formats are:
-- [JSON](../../../interfaces/formats.md/#json)
-- [CSV](../../../interfaces/formats.md/#csv)
-- [Parquet](../../../interfaces/formats.md/#parquet)
+- [JSON](/interfaces/formats/JSON)
+- [CSV](/interfaces/formats/CSV)
+- [Parquet](/interfaces/formats/Parquet)

 ## Exactly-Once Semantics {#exactly-once-semantics}

@@ -107,7 +107,7 @@ To increase the throughput on large ingest jobs, we recommend scaling the ClickH
 - There are limitations on the types of views that are supported. Please read the section on [exactly-once semantics](#exactly-once-semantics) and [view support](#view-support) for more information.
 - Role authentication is not available for S3 ClickPipes for ClickHouse Cloud instances deployed into GCP or Azure. It is only supported for AWS ClickHouse Cloud instances.
 - ClickPipes will only attempt to ingest objects at 10GB or smaller in size. If a file is greater than 10GB an error will be appended to the ClickPipes dedicated error table.
-- S3 / GCS ClickPipes **does not** share a listing syntax with the [S3 Table Function](/sql-reference/table-functions/file#globs_in_path).
+- S3 / GCS ClickPipes **does not** share a listing syntax with the [S3 Table Function](/sql-reference/table-functions/s3).
   - `?` — Substitutes any single character
   - `*` — Substitutes any number of any characters except / including empty string
   - `**` — Substitutes any number of any character include / including empty string
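Given the wildcard rules in the last hunk, a small worked example of how a listing path would match; the bucket layout is hypothetical:

```bash
# Hypothetical object layout:
#   data/2024/01/events.parquet
#   data/2024/02/events.parquet
#   data/archive/2023/events.parquet

# data/*/events.parquet       -> no match: '*' never crosses '/'
# data/2024/*/events.parquet  -> matches both 2024 files
# data/**/events.parquet      -> matches all three: '**' crosses '/'
# data/2024/0?/events.parquet -> '?' is exactly one character (01, 02)
```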

docs/integrations/data-ingestion/etl-tools/apache-beam.md
Lines changed: 1 addition & 1 deletion

@@ -140,7 +140,7 @@ You can adjust the `ClickHouseIO.Write` configuration with the following setter

 Please consider the following limitations when using the connector:
 * As of today, only Sink operation is supported. The connector doesn't support Source operation.
-* ClickHouse performs deduplication when inserting into a `ReplicatedMergeTree` or a `Distributed` table built on top of a `ReplicatedMergeTree`. Without replication, inserting into a regular MergeTree can result in duplicates if an insert fails and then successfully retries. However, each block is inserted atomically, and the block size can be configured using `ClickHouseIO.Write.withMaxInsertBlockSize(long)`. Deduplication is achieved by using checksums of the inserted blocks. For more information about deduplication, please visit [Deduplication](/guides/developer/deduplication) and [Deduplicate insertion config](/operations/settings/settings#insert-deduplicate).
+* ClickHouse performs deduplication when inserting into a `ReplicatedMergeTree` or a `Distributed` table built on top of a `ReplicatedMergeTree`. Without replication, inserting into a regular MergeTree can result in duplicates if an insert fails and then successfully retries. However, each block is inserted atomically, and the block size can be configured using `ClickHouseIO.Write.withMaxInsertBlockSize(long)`. Deduplication is achieved by using checksums of the inserted blocks. For more information about deduplication, please visit [Deduplication](/guides/developer/deduplication) and [Deduplicate insertion config](/operations/settings/settings#insert_deduplicate).
 * The connector doesn't perform any DDL statements; therefore, the target table must exist prior insertion.
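The renamed anchor refers to the `insert_deduplicate` setting. A rough shell sketch of the retry behavior the bullet describes, assuming an existing `ReplicatedMergeTree` table named `events` (all names are hypothetical); settings can be passed as `clickhouse-client` flags:

```bash
# The same block inserted twice: with deduplication active on a
# replicated table, the retry's checksum matches and it becomes a no-op.
clickhouse-client --insert_deduplicate=1 \
    --query "INSERT INTO events VALUES (1, 'a'), (2, 'b')"
clickhouse-client --insert_deduplicate=1 \
    --query "INSERT INTO events VALUES (1, 'a'), (2, 'b')"

clickhouse-client --query "SELECT count() FROM events"   # still 2
```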

146146
