Filter CDI Pods #104
base: main
Conversation
All CDI worker pods have the label app: containerized-data-importer. The new calculator will filter out CDI worker pods when the VirtualResources or DedicatedVirtualResources configs are active.

Signed-off-by: Michael Henriksen <[email protected]>
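For context, here is a minimal sketch of the kind of label-based exclusion described above; names such as isCDIWorkerPod are illustrative and not taken from the actual AAQ calculator code:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// All CDI worker pods are expected to carry this label.
const (
	cdiLabelKey   = "app"
	cdiLabelValue = "containerized-data-importer"
)

// isCDIWorkerPod reports whether a pod looks like a CDI worker pod based
// purely on its labels, mirroring the filtering idea in this PR.
func isCDIWorkerPod(pod *corev1.Pod) bool {
	return pod.Labels[cdiLabelKey] == cdiLabelValue
}
```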
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
```go
if !hasMatchingKeyAndValue(filtercalc.Labels, pod.Labels) &&
	!hasMatchingKeyAndValue(filtercalc.Annotations, pod.Annotations) {
	return corev1.ResourceList{}, nil, false
}
```
Thanks for the PR! The approach of adding an additional calculator for CDI pods looks great. I do have some concerns about relying on labels to identify CDI pods. While we could apply the same method for virt-launcher pods, filtering based on labels might allow users to create pods with specific labels to bypass the quota.
Do you think it would be possible to verify that these pods are real CDI pods by checking through the VM/VMI objects, or with an owner reference as with the launcher pods, to prevent this?
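For illustration, an owner-reference check in the style used for virt-launcher pods might look roughly like this; a hedged sketch only, not the actual AAQ or KubeVirt code path:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// isOwnedByVMI reports whether the pod has a VirtualMachineInstance owner
// reference, the way virt-launcher pods do. A similar check is harder for
// CDI worker pods, which are owned by PVCs rather than VMIs.
func isOwnedByVMI(pod *corev1.Pod) bool {
	for _, ref := range pod.OwnerReferences {
		if ref.Kind == "VirtualMachineInstance" {
			return true
		}
	}
	return false
}
```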
Hey @Barakmor1, I understand your concern, but I'm not sure going with owner references is a winning battle against a motivated user. CDI pods are owned by PVCs, not any CDI-specific resource. And in the case of cross-namespace cloning, the source pods don't even have owner references, because the "owner" may be in another namespace. So a user could get around the quota by creating a PVC + Pod that is owned by the PVC.

I think the only way to be 100% secure would be for HCO or some other component to publish a whitelist of allowed images+shas.
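A rough sketch of that allowlist idea, assuming some component (HCO or otherwise) publishes a set of trusted image references pinned by digest; nothing here reflects an existing API:

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
)

// podImagesAllowed reports whether every container image in the pod appears
// in a published allowlist of trusted image references (ideally pinned by
// digest). How that allowlist is published and kept up to date is the hard
// part and is not addressed here.
func podImagesAllowed(pod *corev1.Pod, allowed map[string]bool) bool {
	for _, c := range pod.Spec.Containers {
		if !allowed[c.Image] {
			return false
		}
	}
	return true
}
```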
Thank you for the explanation. I'm not too familiar with how CDI pods work, but what we really need is a way to ensure that we have a VMI that justifies the existence of the pod we're evaluating, and that there's only one relevant pod.
Are you familiar with any approach to achieve this without HCO?
FYI @vladikr
> but what we really need is a way to ensure that we have a VMI

Okay, this is something I did not consider as a requirement. For each CDI pod, there is a "target" PVC. We could try to map that PVC to a VM/VMI that refers to it (a rough sketch follows below).

But what should we do about standalone DVs/CDI populators? For example, when managing "golden images"?
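A minimal sketch of that PVC-to-VMI mapping idea, using the kubevirt.io/api/core/v1 types; how the pod's target PVC name is discovered is assumed to happen elsewhere, and this is not the actual AAQ implementation:

```go
package example

import (
	virtv1 "kubevirt.io/api/core/v1"
)

// vmiReferencesPVC reports whether the VMI mounts the given PVC, either
// directly or through a DataVolume of the same name.
func vmiReferencesPVC(vmi *virtv1.VirtualMachineInstance, pvcName string) bool {
	for _, vol := range vmi.Spec.Volumes {
		if vol.PersistentVolumeClaim != nil && vol.PersistentVolumeClaim.ClaimName == pvcName {
			return true
		}
		if vol.DataVolume != nil && vol.DataVolume.Name == pvcName {
			return true
		}
	}
	return false
}
```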
> Okay, this is something I did not consider as a requirement. For each CDI pod, there is a "target" PVC. We could try to map that PVC to a VM/VMI that refers to it.

This approach sounds good to me.

> But what should we do about standalone DVs/CDI populators? For example, when managing "golden images"?

As for the standalone DVs/CDI populators, I'm not sure we need to address them with the VirtualResources or DedicatedVirtualResources configurations. The goal here, as I see it, is to hide the infrastructure cost of VMIs, and if a VMI doesn't exist, I don't think we should change the resource counting. Users who want to ignore the pods related to these golden images can either use a dedicated namespace for them or apply their own pluggable policy for such cases.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale.
What this PR does / why we need it:
CDI worker pods should be excluded from AAQ when VirtualResources or DedicatedVirtualResources configs are active.
Turns out that all CDI worker pods have the app: containerized-data-importer label, so excluding them is pretty easy.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
https://issues.redhat.com/browse/CNV-40763
Special notes for your reviewer:
Release note: