Instead of using the global TTL configured in cleanup-operator, I would like to specify the TTL at a more granular level. For example, using labels or annotations, set ttl=1h on successful completion for job A, and ttl=10m on failed completion for job B.
I would like to do this using the labels `kube-cleanup-operator/ttl-success: 1h` and `kube-cleanup-operator/ttl-fail: 10m`.
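For illustration, a minimal sketch of how this might look on a Job manifest. The two `kube-cleanup-operator/*` label keys are the proposal here, not something the operator supports yet, and the job name and container are hypothetical:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job   # hypothetical job name
  labels:
    # Proposed per-job overrides of the operator's global TTL:
    # keep this job 1h after success, but only 10m after failure.
    kube-cleanup-operator/ttl-success: "1h"
    kube-cleanup-operator/ttl-fail: "10m"
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox:1.36
          command: ["sh", "-c", "echo done"]
```

Jobs without these labels would keep the current behavior and fall back to the operator's global TTL.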
Why is it important?
For some releases it is very important to read and analyze logs before removing pods or jobs, but for others it is not so important. This is why in many cases you need to specify a different TTL for each job, or fall back to the default value from the cleanup-operator.
Using annotations for this fine-tuning was the plan from the beginning, but nobody asked for it, so I kept it simple.
I'll look into the implications of adding it when I have cycles to work on it.