This repository was archived by the owner on Mar 21, 2020. It is now read-only.

400 bad request error when actual chunk size > batch_size_limit #32

@sandeepbhojwani

Description


There's a bug in the code: if the actual chunk size exceeds batch_size_limit, the plugin sends a JSON array of objects instead of concatenated JSON objects. This results in an error like this:

2018-07-11 23:23:02 +0000 [debug]: #0 Pushing 2,067 events (1,042,549 bytes) to Splunk.
2018-07-11 23:23:02 +0000 [warn]: #0 Fluentd is attempting to push 1,042,549 bytes in a single push to Splunk. The configured limit is 1,041,076 bytes.
2018-07-11 23:23:03 +0000 [debug]: #0 POST XXX/collector/event
2018-07-11 23:23:03 +0000 [debug]: #0 =>(1/1) 400 (Bad Request)
2018-07-11 23:23:03 +0000 [error]: #0 XXX/collector/event: 400 (Bad Request)
{"text":"Invalid data format","code":6,"invalid-event-number":0}

Here's the config I used:

  <match **>
    @type splunk-http-eventcollector
    @log_level debug
    # test_mode true
    server "XXXX"
    protocol https
    verify false
    token XXXX
    host logging
    index test
    check_index false
    sourcetype _json
    all_items true
    batch_size_limit 1041076 # 1MB
    post_retry_max 1
    post_retry_interval 5

    <buffer>
      @type memory
      chunk_limit_size 1MB
      queued_chunks_limit_size 64
      total_limit_size 128MB

      flush_mode interval
      flush_interval 2s
      flush_thread_count 2

      retry_type exponential_backoff
      retry_wait 10
      retry_exponential_backoff_base 2
      retry_max_interval 1280
      retry_timeout 2h
      retry_randomize true
    </buffer>
  </match>
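
For what it's worth, here is a minimal sketch (not the plugin's actual code) of how an oversized chunk could be repacked into sub-batches under batch_size_limit while keeping the concatenated framing, so the split path never falls back to a JSON array. split_into_batches is a hypothetical helper, just to illustrate the expected behaviour:

    require "json"

    # Pack already-serialized events into sub-batches that each stay under
    # batch_size_limit, joining them with newlines rather than wrapping them
    # in a JSON array.
    def split_into_batches(serialized_events, batch_size_limit)
      batches = []
      current = +""
      serialized_events.each do |json|
        if !current.empty? && current.bytesize + 1 + json.bytesize > batch_size_limit
          batches << current
          current = +""
        end
        current << "\n" unless current.empty?
        current << json
      end
      batches << current unless current.empty?
      batches
    end

    serialized = 3.times.map { |i| JSON.generate("event" => { "msg" => "event #{i}" }) }
    split_into_batches(serialized, 1_041_076).each do |body|
      # each `body` is still plain concatenated JSON objects, never a JSON array
      puts body.bytesize
    end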
