Description
Logstash information:
- Logstash version: 8.17.0
- Logstash installation source: Docker
- How is Logstash being run: Docker
- How was the Logstash plugin installed: default packaged
- JVM (`java -version`): default bundled with Logstash 8.17.0
OS version (`uname -a` if on a Unix-like system): Debian 12 (bookworm)
Description of the problem including expected versus actual behavior:
1- Logstash consumes files using the file input plugin, configured as follows:
```
input {
  file {
    path => ["/opt/workspace/uploads/*.log", "/opt/workspace/uploads/*.log.gz"]
    start_position => "beginning"
    file_chunk_size => "327680"
    sincedb_path => "/opt/workspace/sincedb.txt"
    file_completed_action => "log_and_delete"
    file_completed_log_path => "/opt/workspace/completed.txt"
    mode => "read"
    ecs_compatibility => "disabled"
    file_sort_by => "last_modified"
    file_sort_direction => "asc"
    max_open_files => 5
    codec => plain { charset => "ISO-8859-1" }
    check_archive_validity => true
  }
}
```
2- Files average 3 to 4 MB in size and are in CSV format. Names look like `node02-250605-1744-14613.log` or `node02-250605-1744-14613.log.gz`.
3- Files are pushed into /opt/workspace/uploads/ by another process over FTP as follows (sketched below):
- The file is uploaded into /opt/workspace/uploads/tmp
- When the FTP transfer finishes, the file is moved from /opt/workspace/uploads/tmp to /opt/workspace/uploads/
- The FTP process can upload *.log or *.log.gz files; all of them have the same format
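To make the hand-off explicit, here is a minimal Python sketch of that move step (the real uploader is an FTP server, so the script and the `publish` function name are hypothetical; it assumes tmp/ and the watched directory are on the same filesystem):

```python
import os

TMP_DIR = "/opt/workspace/uploads/tmp"
FINAL_DIR = "/opt/workspace/uploads"

def publish(filename):
    # The upload is first written completely into the tmp/ directory.
    tmp_path = os.path.join(TMP_DIR, filename)
    # Only once the transfer is finished is the file moved into the
    # watched directory; os.rename is atomic on the same filesystem,
    # so Logstash should never see a half-written file.
    os.rename(tmp_path, os.path.join(FINAL_DIR, filename))
```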
4- The following error occurs, after which Logstash stops consuming files:
```json
{
  "level": "ERROR",
  "loggerName": "filewatch.readmode.handlers.readfile",
  "timeMillis": 1749176640587,
  "thread": "[pipeline-001]<file",
  "logEvent": {
    "message": "End of file reached",
    "path": "/opt/workspace/uploads/node02-250605-1744-14613.log",
    "exception": "EOFError",
    "backtrace": [
      "org/jruby/RubyIO.java:3266:in `sysread'",
      "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-file-4.4.6/lib/filewatch/watched_file.rb:229:in `file_read'",
      "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-file-4.4.6/lib/filewatch/watched_file.rb:241:in `read_extract_lines'",
      "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-file-4.4.6/lib/filewatch/read_mode/handlers/read_file.rb:50:in `block in controlled_read'",
      "org/jruby/RubyFixnum.java:306:in `times'",
      "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-file-4.4.6/lib/filewatch/read_mode/handlers/read_file.rb:47:in `controlled_read'",
      "/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/logstash-input-file-4.4.6/lib/filewatch/read_mode/handlers/read_file.rb:21:in `block in handle_specifically'",
      "org/jruby/RubyKernel.java:1725:in `loop'"
    ]
  }
}
```
Steps to reproduce:
- Use the same pipeline configuration and FTP hand-off logic described above
- Send 500 to 1000 files every 5 minutes (a hypothetical generator is sketched below)
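A load generator along these lines should reproduce the ingestion pattern (a hypothetical sketch, not the real producer; sizes, names, and CSV content only approximate the actual traffic):

```python
import gzip
import os
import random
import time

TMP_DIR = "/opt/workspace/uploads/tmp"
FINAL_DIR = "/opt/workspace/uploads"
CSV_LINE = "col1,col2,col3\n"

def make_file(name, compressed):
    tmp_path = os.path.join(TMP_DIR, name)
    # Write roughly 3 MB of CSV lines, matching the real files,
    # in the same ISO-8859-1 charset the pipeline codec expects.
    opener = gzip.open if compressed else open
    with opener(tmp_path, "wt", encoding="ISO-8859-1") as f:
        for _ in range(200_000):
            f.write(CSV_LINE)
    # Move into the watched directory only after the file is complete.
    os.rename(tmp_path, os.path.join(FINAL_DIR, name))

while True:
    batch = random.randint(500, 1000)
    stamp = time.strftime("%y%m%d-%H%M")
    for i in range(batch):
        compressed = random.random() < 0.5
        suffix = ".log.gz" if compressed else ".log"
        make_file(f"node02-{stamp}-{i:05d}{suffix}", compressed)
    time.sleep(300)  # one batch every 5 minutes
```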
Provide logs (if relevant): see the JSON error log in the description above.