
Commit c8d330a

Added README
1 parent ba8e123 commit c8d330a

File tree

1 file changed (+27, -0 lines)


README.md

Kafka Connect Kafka connector
=============================

![Workflow Status](https://github.com/segence/kafka-connect-kafka/actions/workflows/test.yaml/badge.svg)
[![Coverage Status](https://coveralls.io/repos/github/Segence/kafka-connect-kafka/badge.svg?branch=main)](https://coveralls.io/github/Segence/kafka-connect-kafka?branch=main)

## Summary

A connector with a Kafka sink.

Why?

Because there are use cases for routing Kafka-to-Kafka traffic through Kafka Connect while transforming messages in flight.

This connector helps with that use case.

## Configuration

| **Name**                    | **Description** | **Default value** |
|:----------------------------|:----------------|:------------------|
| `sink.bootstrap.servers`    | A list of host/port pairs used to establish the initial connection to the Kafka cluster. Clients use this list to bootstrap and discover the full set of Kafka brokers. While the order of servers in the list does not matter, we recommend including more than one server to ensure resilience if any servers are down. This list does not need to contain the entire set of brokers, as Kafka clients automatically manage and update connections to the cluster efficiently. This list must be in the form `host1:port1,host2:port2,...`. | *(none)* |
| `sink.topic`                | The sink topic name. | *(none)* |
| `sink.key.serializer`       | Serializer class for key that implements the <code>org.apache.kafka.common.serialization.Serializer</code> interface. | `org.apache.kafka.common.serialization.StringSerializer` |
| `sink.value.serializer`     | Serializer class for value that implements the <code>org.apache.kafka.common.serialization.Serializer</code> interface. | `org.apache.kafka.common.serialization.StringSerializer` |
| `sink.exactly.once.support` | Whether to enable exactly-once support for source connectors in the cluster by using transactions to write source records and their source offsets, and by proactively fencing out old task generations before bringing up new ones. | `false` |
| `sink.callback`             | The callback that is registered on the Kafka producer. Must be a class implementing <code>org.apache.kafka.clients.producer.Callback</code> and it must be accessible on the CLASSPATH. | *(none)* |
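
As a minimal sketch, a connector configuration using these properties might look like the following. The `connector.class` value is a placeholder (the actual class name is not shown here), `tasks.max` and `topics` are standard Kafka Connect sink settings, and the broker addresses and topic names are purely illustrative.

```json
{
  "name": "kafka-to-kafka-sink",
  "config": {
    "connector.class": "<fully-qualified connector class>",
    "tasks.max": "1",
    "topics": "source-topic",
    "sink.bootstrap.servers": "broker1:9092,broker2:9092",
    "sink.topic": "destination-topic",
    "sink.key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "sink.value.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "sink.exactly.once.support": "false"
  }
}
```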

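For `sink.callback`, a minimal sketch of an implementation is shown below. The package and class names are illustrative; any class implementing `org.apache.kafka.clients.producer.Callback` that is accessible on the CLASSPATH should work.

```java
package com.example.callbacks; // illustrative package name

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.RecordMetadata;

// Example callback that logs the outcome of every produced record.
public class LoggingCallback implements Callback {

    @Override
    public void onCompletion(RecordMetadata metadata, Exception exception) {
        if (exception != null) {
            // The send failed; report the error (custom retry or alerting logic could go here).
            System.err.println("Failed to produce record: " + exception.getMessage());
        } else {
            // The send succeeded; metadata holds the destination topic, partition and offset.
            System.out.printf("Produced record to %s-%d at offset %d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
}
```

With this example class, `sink.callback` would be set to `com.example.callbacks.LoggingCallback`.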