Describe the bug:
I have MinIO running in the Kubernetes namespace minio. The fluentd pod created by the operator fails with the error message below:
➜ kubectl logs default-logging-simple-fluentd-0 -c fluentd
2024-10-09 21:52:27 +0000 [error]: #0 unexpected error error_class=Aws::S3::Errors::BadRequest error="Aws::S3::Errors::BadRequest"
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/aws-sdk-s3-1.149.1/lib/aws-sdk-s3/bucket.rb:82:in `rescue in exists?'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/aws-sdk-s3-1.149.1/lib/aws-sdk-s3/bucket.rb:77:in `exists?'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-s3-1.7.2/lib/fluent/plugin/out_s3.rb:417:in `ensure_bucket'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluent-plugin-s3-1.7.2/lib/fluent/plugin/out_s3.rb:268:in `start'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/root_agent.rb:203:in `block in start'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/root_agent.rb:182:in `block (2 levels) in lifecycle'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/agent.rb:121:in `block (2 levels) in lifecycle'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/agent.rb:120:in `each'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/agent.rb:120:in `block in lifecycle'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/agent.rb:113:in `each'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/agent.rb:113:in `lifecycle'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/root_agent.rb:181:in `block in lifecycle'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/root_agent.rb:178:in `each'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/root_agent.rb:178:in `lifecycle'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/root_agent.rb:202:in `start'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/engine.rb:248:in `start'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/engine.rb:147:in `run'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/supervisor.rb:617:in `block in run_worker'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/supervisor.rb:962:in `main_process'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/supervisor.rb:608:in `run_worker'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/lib/fluent/command/fluentd.rb:372:in `<top (required)>'
2024-10-09 21:52:27 +0000 [error]: #0 <internal:/usr/local/lib/ruby/3.2.0/rubygems/core_ext/kernel_require.rb>:86:in `require'
2024-10-09 21:52:27 +0000 [error]: #0 <internal:/usr/local/lib/ruby/3.2.0/rubygems/core_ext/kernel_require.rb>:86:in `require'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/gems/fluentd-1.16.5/bin/fluentd:15:in `<top (required)>'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/bin/fluentd:25:in `load'
2024-10-09 21:52:27 +0000 [error]: #0 /usr/local/bundle/bin/fluentd:25:in `<main>'
2024-10-09 21:52:27 +0000 [error]: Worker 0 exited unexpectedly with status 1
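The stack trace shows the failure happens in fluent-plugin-s3's ensure_bucket, which calls Aws::S3::Bucket#exists? (a HeadBucket request) at startup, before any log data is written. As a minimal sketch of that same check, here is a boto3 stand-in using the credentials from the script further below (head_bucket is my boto3 equivalent of the Ruby SDK call, not what the plugin literally runs):

import boto3
from botocore.client import Config
from botocore.exceptions import ClientError

# Same endpoint/credentials as in the Output resource below.
s3_client = boto3.client(
    's3',
    aws_access_key_id="console",
    aws_secret_access_key="console123",
    endpoint_url="http://minio.minio.svc.cluster.local:9000",
    region_name="tekton",
    config=Config(signature_version='s3v4'),
)

try:
    # HeadBucket is the request behind the Ruby SDK's Bucket#exists?.
    s3_client.head_bucket(Bucket="tekton-logs")
    print("bucket reachable")
except ClientError as e:
    # A 400 response here would mirror the Aws::S3::Errors::BadRequest above.
    print(f"head_bucket failed: {e}")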
Expected behaviour:
Fluentd to write log data into MinIO.
Steps to reproduce the bug:
➜ helm upgrade --install --wait log-generator oci://ghcr.io/kube-logging/helm-charts/log-generator
➜ helm install minio --create-namespace --namespace minio --set resources.requests.memory=100Mi --set replicas=1 --set persistence.enabled=false --set mode=standalone --set rootUser=rootuser,rootPassword=rootpass123 minio/minio
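The steps above don't show creating the tekton-logs bucket. A minimal boto3 sketch for that step, assuming the console/console123 credentials configured in the manifests below:

import boto3
from botocore.client import Config

# Hypothetical bucket-creation step (not shown in the original repro).
client = boto3.client(
    's3',
    aws_access_key_id="console",
    aws_secret_access_key="console123",
    endpoint_url="http://minio.minio.svc.cluster.local:9000",
    region_name="tekton",
    config=Config(signature_version='s3v4'),
)
client.create_bucket(
    Bucket="tekton-logs",
    # MinIO may reject a LocationConstraint that doesn't match its configured
    # region; drop this argument if so.
    CreateBucketConfiguration={'LocationConstraint': 'tekton'},
)

The following resources were then applied: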
apiVersion: logging.banzaicloud.io/v1beta1
kind: Logging
metadata:
  name: default-logging-simple
spec:
  fluentd: {}
  fluentbit: {}
  controlNamespace: logging-operator
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: s3-output
spec:
  s3:
    aws_key_id:
      value: console
    aws_sec_key:
      value: console123
    s3_bucket: tekton-logs
    s3_region: tekton
    s3_endpoint: http://minio.minio.svc.cluster.local:9000
    path: logs/${tag}/%Y/%m/%d/
    buffer:
      timekey: 1m
      timekey_wait: 1m
      timekey_use_utc: true
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: s3-flow
spec:
  filters:
    - tag_normaliser: {}
  match:
    - select:
        labels:
          app.kubernetes.io/name: log-generator
  localOutputRefs:
    - s3-output
This has provisioned the following fluentd config in the secret:
➜ kubectl get secret default-logging-simple-fluentd-configcheck-app-9f08d787 -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
generated.conf: <source>
  @type forward
  @id main_forward
  bind 0.0.0.0
  port 24240
</source>
<match **>
  @type label_router
  @id main
  metrics false
  <route>
    @label @4cf1da639c4ecf36a3b5392a80286a00
    metrics_labels {"id":"flow:logging-operator:s3-flow"}
    <match>
      labels app.kubernetes.io/name:log-generator
      namespaces logging-operator
      negate false
    </match>
  </route>
</match>
<label @4cf1da639c4ecf36a3b5392a80286a00>
  <match kubernetes.**>
    @type tag_normaliser
    @id flow:logging-operator:s3-flow:0
    format ${namespace_name}.${pod_name}.${container_name}
  </match>
  <match **>
    @type s3
    @id flow:logging-operator:s3-flow:output:logging-operator:s3-output
    aws_key_id console
    aws_sec_key console123
    path logs/${tag}/%Y/%m/%d/
    s3_bucket tekton-logs
    s3_endpoint http://minio.minio.svc.cluster.local:9000
    s3_object_key_format %{path}%{time_slice}_%{uuid_hash}_%{index}.%{file_extension}
    s3_region tekton
    <buffer tag,time>
      @type file
      path /buffers/flow:logging-operator:s3-flow:output:logging-operator:s3-output.*.buffer
      retry_forever true
      timekey 1m
      timekey_use_utc true
      timekey_wait 1m
    </buffer>
  </match>
</label>
<label @ERROR>
  <match **>
    @type null
    @id main-fluentd-error
  </match>
</label>
Additional context:
At the same time, the following Python 3 boto3 code worked without any issues:
import boto3
from botocore.client import Config

# S3/MinIO credentials and configuration
aws_key_id = "console"
aws_sec_key = "console123"
s3_bucket = "tekton-logs"
s3_region = "tekton"
s3_endpoint = "http://minio.minio.svc.cluster.local:9000"
file_to_upload = "sample.txt"  # The local file you want to upload
s3_key = "sample.txt"  # The name of the file in the S3 bucket

# Create the S3 client
s3_client = boto3.client(
    's3',
    aws_access_key_id=aws_key_id,
    aws_secret_access_key=aws_sec_key,
    endpoint_url=s3_endpoint,  # Specify MinIO custom endpoint
    region_name=s3_region,
    config=Config(signature_version='s3v4')  # Ensure correct signature version
)

# Upload the file
try:
    s3_client.upload_file(file_to_upload, s3_bucket, s3_key)
    print(f"File '{file_to_upload}' uploaded to '{s3_bucket}/{s3_key}'.")
except Exception as e:
    print(f"Error uploading file: {e}")
Output:
File 'sample.txt' uploaded to 'tekton-logs/sample.txt'.
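One difference worth probing between this working script and the generated fluentd config: addressing style (path vs. virtual-hosted) is a usual suspect when MinIO returns a 400 on an otherwise valid request. A sketch for narrowing that down; the addressing_style toggle is a botocore Config option, and the fluentd-side counterpart would be fluent-plugin-s3's force_path_style parameter, which the Output above does not set:

import boto3
from botocore.client import Config

def make_client(addressing_style):
    # Toggle between path-style (http://host/bucket/key) and
    # virtual-hosted-style (http://bucket.host/key) addressing.
    return boto3.client(
        's3',
        aws_access_key_id="console",
        aws_secret_access_key="console123",
        endpoint_url="http://minio.minio.svc.cluster.local:9000",
        region_name="tekton",
        config=Config(signature_version='s3v4',
                      s3={'addressing_style': addressing_style}),
    )

for style in ('path', 'virtual'):
    try:
        make_client(style).head_bucket(Bucket="tekton-logs")
        print(f"{style}: ok")
    except Exception as e:
        print(f"{style}: {e}")

If the virtual variant reproduces the BadRequest while path works, setting force_path_style: true in the s3 output spec would be the thing to try.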
Environment details:
- Kubernetes version (e.g. v1.15.2): 1.30
- Cloud-provider/provisioner (e.g. AKS, GKE, EKS, PKE etc): AWS
- logging-operator version (e.g. 2.1.1): 4.10.0 (latest)
- Install method (e.g. helm or static manifests): Helm
- Logs from the misbehaving component (and any other relevant logs): see above
- Resource definition (possibly in YAML format) that caused the issue, without sensitive data: see above
/kind bug