[DownstreamMerge] 8-8-2022 #1237
Conversation
Signed-off-by: Pardhakeswar Pacha <[email protected]>
Apparently if you don't pass a ResourceVersion the call goes directly through to etcd, while if you pass "0" it gets pulled from the apiserver's cache. Pulling from the cache can greatly reduce load on etcd. See: https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apiserver/pkg/storage/cacher/cacher.go#L637

```go
// GetList implements storage.Interface
func (c *Cacher) GetList(ctx context.Context, key string, opts storage.ListOptions, listObj runtime.Object) error {
	recursive := opts.Recursive
	resourceVersion := opts.ResourceVersion
	pred := opts.Predicate
	if shouldDelegateList(opts) {
		return c.storage.GetList(ctx, key, opts, listObj)
	}
	<...>
}

func shouldDelegateList(opts storage.ListOptions) bool {
	resourceVersion := opts.ResourceVersion
	pred := opts.Predicate
	pagingEnabled := utilfeature.DefaultFeatureGate.Enabled(features.APIListChunking)
	hasContinuation := pagingEnabled && len(pred.Continue) > 0
	hasLimit := pagingEnabled && pred.Limit > 0 && resourceVersion != "0"

	// If resourceVersion is not specified, serve it from underlying
	// storage (for backward compatibility). If a continuation is
	// requested, serve it from the underlying storage as well.
	// Limits are only sent to storage when resourceVersion is non-zero
	// since the watch cache isn't able to perform continuations, and
	// limits are ignored when resource version is zero
	return resourceVersion == "" || hasContinuation || hasLimit ||
		opts.ResourceVersionMatch == metav1.ResourceVersionMatchExact
}
```

Signed-off-by: Dan Williams <[email protected]>
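To make the cache-vs-etcd decision concrete, here is a minimal, self-contained sketch of the same predicate. The `listOpts` struct and the assumption that the APIListChunking feature gate is enabled are mine, purely for illustration; the real logic lives in `k8s.io/apiserver/pkg/storage/cacher`. An empty resourceVersion delegates to etcd, while "0" is served from the watch cache:

```go
package main

import "fmt"

// listOpts is a pared-down stand-in for storage.ListOptions, used only
// to illustrate the delegation decision.
type listOpts struct {
	ResourceVersion string
	Continue        string
	Limit           int64
	MatchExact      bool
}

// shouldDelegateList mirrors the logic quoted above, with the
// APIListChunking feature gate assumed enabled.
func shouldDelegateList(opts listOpts) bool {
	hasContinuation := len(opts.Continue) > 0
	hasLimit := opts.Limit > 0 && opts.ResourceVersion != "0"
	// Empty resourceVersion, continuations, limits, and exact-match
	// requests all bypass the watch cache and hit etcd directly.
	return opts.ResourceVersion == "" || hasContinuation || hasLimit || opts.MatchExact
}

func main() {
	fmt.Println(shouldDelegateList(listOpts{ResourceVersion: ""}))  // true: goes to etcd
	fmt.Println(shouldDelegateList(listOpts{ResourceVersion: "0"})) // false: served from cache
}
```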
Signed-off-by: Nadia Pinaeva <[email protected]>
Signed-off-by: Nadia Pinaeva <[email protected]>
This denies resources that specify an invalid cidr. Signed-off-by: Ori Braunshtein <[email protected]>
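The rejection of invalid CIDRs can be approximated in plain Go. This is an illustrative sketch only: the `DstCIDR` field name follows the commit title, and the kubebuilder marker shown is one common way to express a CIDR format check, not necessarily the exact marker used in the PR. At runtime, the standard library's `net.ParseCIDR` performs the same check:

```go
package main

import (
	"fmt"
	"net"
)

// EgressQoSRule sketches the relevant part of the EgressQoS spec; the
// kubebuilder marker below is one illustrative way to reject malformed
// CIDRs at admission time (the PR's actual marker may differ).
type EgressQoSRule struct {
	// +kubebuilder:validation:Format=cidr
	DstCIDR string `json:"dstCIDR,omitempty"`
}

// validDstCIDR performs the same check at runtime: net.ParseCIDR fails
// on anything that is not a valid IPv4 or IPv6 CIDR.
func validDstCIDR(s string) bool {
	_, _, err := net.ParseCIDR(s)
	return err == nil
}

func main() {
	fmt.Println(validDstCIDR("10.0.0.0/24")) // true: valid IPv4 CIDR
	fmt.Println(validDstCIDR("10.0.0.0/99")) // false: prefix length out of range
	fmt.Println(validDstCIDR("not-a-cidr"))  // false: not a CIDR at all
}
```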
Isolate the syncMap implementation from the retry logic:
- retryObjEntry doesn't need to have a mutex anymore
- retryObjs doesn't work with cache mutexes directly, it only calls methods

syncMap is a map with lockable keys. It allows locking a key regardless of whether an entry for that key exists. While a key is locked, other threads can't read or write the entry for that key.

Split ensureRetryEntryLocked into loadOrStore, which makes sure newEntry is present and locked, and load, which was previously called as ensureRetryEntryLocked with a nil newRetryEntry.

Change iterateRetryResources to lock every entry only once and not use the cache mutex. Create a snapshot of the keys to be retried instead of holding the map lock. Lock the retryObject key until work for that key is completed. That also removes the need for the .ignore field: the retry loop will wait until the key is unlocked.

Signed-off-by: Nadia Pinaeva <[email protected]>
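A minimal sketch of a map with lockable keys along the lines described above. The method names (lockKey, unlockKey, loadOrStore) and the per-key mutex approach are illustrative assumptions, not the PR's actual API; the point is that a key can be locked even before any value is stored for it, so other goroutines block on that key until it is released:

```go
package main

import (
	"fmt"
	"sync"
)

// lockableMap sketches the syncMap idea: each key has its own lock, and
// a key can be locked whether or not a value exists for it yet.
type lockableMap struct {
	mu      sync.Mutex
	entries map[string]*keyEntry
}

type keyEntry struct {
	lock  sync.Mutex
	value interface{}
}

func newLockableMap() *lockableMap {
	return &lockableMap{entries: map[string]*keyEntry{}}
}

// lockKey locks the given key, creating an empty entry if none exists.
// Other goroutines calling lockKey for the same key block until
// unlockKey is called.
func (m *lockableMap) lockKey(key string) {
	m.mu.Lock()
	e, ok := m.entries[key]
	if !ok {
		e = &keyEntry{}
		m.entries[key] = e
	}
	m.mu.Unlock()
	e.lock.Lock() // per-key lock is taken outside the map-wide lock
}

func (m *lockableMap) unlockKey(key string) {
	m.mu.Lock()
	e := m.entries[key]
	m.mu.Unlock()
	e.lock.Unlock()
}

// loadOrStore stores value for key if nothing is stored yet and returns
// the current value; the caller is expected to hold the key's lock, so
// the entry is guaranteed to exist.
func (m *lockableMap) loadOrStore(key string, value interface{}) interface{} {
	m.mu.Lock()
	defer m.mu.Unlock()
	e := m.entries[key]
	if e.value == nil {
		e.value = value
	}
	return e.value
}

func main() {
	m := newLockableMap()
	m.lockKey("pod-a") // lock before any value exists
	fmt.Println(m.loadOrStore("pod-a", "retry-entry"))
	m.unlockKey("pod-a")
}
```

Taking the per-key lock outside the map-wide mutex is the key design point: it avoids exactly the retryMutex-vs-retryEntry.Mutex lock-ordering deadlocks this PR series fixes.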
Fix retry_obj retryMutex vs retryEntry.Mutex deadlocks
kube: pass ResourceVersion:"0" for direct List() calls
Add EgressQoS DstCIDR kubebuilder validation
Signed-off-by: Girish Moodalbail <[email protected]>
... since golangci-lint might install a later version by design: golangci/golangci-lint-action#75

Signed-off-by: Riccardo Ravaioli <[email protected]>
Fixed Go version for unit tests (openshift/release#31182)
Signed-off-by: Riccardo Ravaioli <[email protected]>
/retest
…e informer cache

When processing an object in terminal state there is a chance that it was already removed from the API server. Since delete events for objects in terminal state are skipped, delete it here.

Signed-off-by: Patryk Diak <[email protected]>
/retest
On update, delete objects in terminal state that no longer exist in the informer cache
Followup to EndpointSlices PR for ovn-k node
Adding and removing a pod while it changes nodes back to back can end up in a race where the corresponding logical switch port remains in the wrong logical switch and never gets properly removed. For this to happen, the logical switch port has to have the same name, which is <namespace>_<podName>.

Signed-off-by: Flavio Fernandes <[email protected]>
Co-authored-by: Tim Rozet <[email protected]>
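The collision follows directly from the naming scheme: since the logical switch port name is derived only from namespace and pod name, a pod deleted and recreated on a different node yields the identical port name. A tiny illustrative sketch (composePortName is a hypothetical helper, not ovn-kubernetes's actual function):

```go
package main

import "fmt"

// composePortName mirrors the <namespace>_<podName> naming scheme
// described above; note that the node the pod runs on is not part of
// the name.
func composePortName(namespace, podName string) string {
	return namespace + "_" + podName
}

func main() {
	// The same pod recreated on another node maps to the identical
	// logical switch port name, which is why a stale port can linger
	// in the old node's logical switch.
	oldPort := composePortName("default", "web-0") // pod on node1
	newPort := composePortName("default", "web-0") // recreated on node2
	fmt.Println(oldPort == newPort)
}
```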
BZ2117310: Fix race when adding and removing pod with same name
/retest
/retest-required
@trozet: The following tests failed:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: npinaeva, trozet

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@npinaeva fyi