
Processing huge files with JSON data takes hours #509

Open
@veeraraghukiranyerva

Description


Hi, we have 3-4 files of about 3 GB each, with over 500 million records in each. These files need to be processed by the Kafka Connect FilePulse connector. One file with 520 million records takes about 3 hours, while another file with 540 million records takes about 40 minutes.
So I am not sure why one file takes so much longer than the other when the file sizes are almost the same and the record counts are also almost the same.

My assumption is that, since we are using the JSON converter and the Drop filter, the JSON parsing may be eating up the time. Do you think this could be causing the issue? Do I really need to parse each input line as JSON just to decide whether to filter the record, or is there something else I can use to make the process run more smoothly and faster?
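For example, would it help to run the Drop filter on the raw line before it is parsed as JSON? As far as I understand, LocalRowFileInputReader puts each raw line into the message field (which is why we exclude it later), so something like the sketch below might avoid paying the JSON parsing cost for records that get dropped anyway. This is just a sketch, and matching on the raw line is only equivalent to matching on CLOG_Objects.File if those values cannot appear in other fields:

    "filters" : "Drop, ParseJson, ExcludeMessageField",
    "filters.Drop.if" : "{{ matches($.message, '(CAPP|CUSTOMER|PREFERENCE.MGMT|IHUB.C.AR|COUNTY.CODE|MKTG.CODE|MKTG.CUSTOMER|PMNT)') }}",
    "filters.Drop.invert" : "true",
    "filters.Drop.type" : "io.streamthoughts.kafka.connect.filepulse.filter.DropFilter",

With this ordering, the JSONFilter would only run on the records we actually keep.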

Can you please take a look at the configuration below and see if there is anything we can do to speed up the reading process?

{
  "name" : "iHub-clog",
  "config" : {
    "batch.size" : "100000",
    "buffer.initial.bytes.size" : "16384",
    "connector.class" : "io.streamthoughts.kafka.connect.filepulse.source.FilePulseSourceConnector",
    "file.filter.minimum.age.ms" : "10000",
    "filters" : "ParseJson, ExcludeMessageField, Drop",
    "filters.Drop.if" : "{{ matches($.CLOG_Objects.File, '(CAPP|CUSTOMER|PREFERENCE.MGMT|IHUB.C.AR|COUNTY.CODE|MKTG.CODE|MKTG.CUSTOMER|PMNT)') }}",
    "filters.Drop.invert" : "true",
    "filters.Drop.type" : "io.streamthoughts.kafka.connect.filepulse.filter.DropFilter",
    "filters.ExcludeMessageField.fields" : "message",
    "filters.ExcludeMessageField.type" : "io.streamthoughts.kafka.connect.filepulse.filter.ExcludeFilter",
    "filters.ParseJson.merge" : "true",
    "filters.ParseJson.type" : "io.streamthoughts.kafka.connect.filepulse.filter.JSONFilter",
    "fs.cleanup.policy.class" : "io.streamthoughts.kafka.connect.filepulse.fs.clean.LogCleanupPolicy",
    "fs.listing.class" : "io.streamthoughts.kafka.connect.filepulse.fs.LocalFSDirectoryListing",
    "fs.listing.directory.path" : "/tlextfs",
    "fs.listing.filters" : "io.streamthoughts.kafka.connect.filepulse.fs.filter.LastModifiedFileListFilter",
    "fs.listing.interval.ms" : "30000",
    "key.converter" : "org.apache.kafka.connect.storage.StringConverter",
    "name" : "iHub-clog",
    "read.max.wait.ms" : "600000",
    "tasks.file.status.storage.bootstrap.servers" : "XXXXXXXX-servers",
    "tasks.reader.class" : "io.streamthoughts.kafka.connect.filepulse.fs.reader.LocalRowFileInputReader",
    "topic" : "connectit",
    "value.converter" : "org.apache.kafka.connect.json.JsonConverter"
  }
}

Can you suggest a similar configuration that would speed up the process for us?
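Since we have 3-4 files, I was also wondering whether running more tasks would let the connector work on several files in parallel (assuming FilePulse can assign different files to different tasks), e.g. adding to the config above:

    "tasks.max" : "4",

with the rest of the configuration unchanged.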

Labels

wontfix: This will not be worked on
