README.md (7 additions, 7 deletions)
@@ -24,11 +24,11 @@ Splunk Connect for Kafka is a Kafka Connect Sink for Splunk with the following f
## Quick Start
1. [Start](https://kafka.apache.org/quickstart) your Kafka cluster and confirm it is running.
2. If this is a new install, create a test topic (e.g. `perf`). Inject events into the topic. This can be done using [Kafka data-gen-app](https://github.com/dtregonning/kafka-data-gen) or the Kafka-bundled [kafka-console-producer](https://kafka.apache.org/quickstart#quickstart_send).
3. Within your Kafka Connect deployment, adjust the values for `bootstrap.servers` and `plugin.path` inside the `$KAFKA_HOME/config/connect-distributed.properties` file. `bootstrap.servers` should point to your Kafka brokers, and `plugin.path` should point to the install directory of your Kafka Connect sink and source connectors. For more information on installing Kafka Connect plugins, refer to the [Confluent documentation](https://docs.confluent.io/current/connect/userguide.html#id3).
4. Place the jar file created by `mvn package` (`splunk-kafka-connect-[VERSION].jar`) in or under the location specified in `plugin.path`.
5. Run `$KAFKA_HOME/bin/connect-distributed.sh $KAFKA_HOME/config/connect-distributed.properties` to start Kafka Connect.
6. Run the following command to create connector tasks. Adjust `topics` to configure the Kafka topic to be ingested, `splunk.indexes` to set the destination Splunk indexes, `splunk.hec.token` to set your HTTP Event Collector (HEC) token, and `splunk.hec.uri` to the URI for your destination Splunk HEC endpoint. For more information on Splunk HEC configuration, refer to the [Splunk documentation](http://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector).
```
curl localhost:8083/connectors -X POST -H "Content-Type: application/json" -d '{
  "name": "splunk-sink",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "3",
    "topics": "perf",
    "splunk.indexes": "main",
    "splunk.hec.uri": "https://hec1.splunk.com:8088",
    "splunk.hec.token": "<HEC_TOKEN>"
  }
}'
```
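Step 2's test topic and sample events can also be created with Kafka's bundled command-line tools. The following is a sketch, assuming Kafka 2.x+ CLI flags and a broker listening on `localhost:9092` (partition and replication counts are illustrative):

```
# Create the test topic (name matches the example above).
$KAFKA_HOME/bin/kafka-topics.sh --create --topic perf \
  --bootstrap-server localhost:9092 --partitions 3 --replication-factor 1

# Inject events interactively; each line typed becomes one Kafka record.
$KAFKA_HOME/bin/kafka-console-producer.sh --topic perf \
  --bootstrap-server localhost:9092
```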
@@ -77,7 +77,7 @@ See [Splunk Docs](https://docs.splunk.com/Documentation/KafkaConnect/latest/User
## Configuration
After Kafka Connect is brought up on every host, all of the Kafka Connect instances will form a cluster automatically.
A REST call can be executed against one of the cluster instances, and the configuration will automatically propagate to all instances in the cluster.
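For example, the cluster can be inspected through any one instance. This is a sketch assuming Kafka Connect's default REST port 8083; the host name `connect-node1` and connector name `splunk-sink` are illustrative:

```
# List the connectors known to the cluster; any node returns the same answer.
curl http://connect-node1:8083/connectors

# Check the running state of a specific connector and its tasks.
curl http://connect-node1:8083/connectors/splunk-sink/status
```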
### Configuration schema structure
Use the schema below to configure Splunk Connect for Kafka.
@@ -127,10 +127,10 @@ Use the below schema to configure Splunk Connect for Kafka
|`name`| Connector name. A consumer group with this name will be created with tasks to be distributed evenly across the connector cluster nodes.||
|`connector.class`| The Java class used to perform connector jobs. Keep the default unless you modify the connector.|`com.splunk.kafka.connect.SplunkSinkConnector`|
|`tasks.max`| The number of tasks generated to handle data collection jobs in parallel. The tasks will be spread evenly across all Splunk Kafka Connector nodes.||
|`splunk.hec.uri`| Splunk HEC URIs. Either a comma-separated list of FQDNs or IPs of all Splunk indexers, or a load balancer. The connector will round-robin across this list of indexers, e.g. `https://hec1.splunk.com:8088,https://hec2.splunk.com:8088,https://hec3.splunk.com:8088`.||