
fix: force strict mode for patch for safe concurrent writes #3912

Open
mykysha wants to merge 1 commit into base: main
Conversation

@mykysha (Contributor) commented Dec 27, 2024

What type of PR is this?

/kind bug

What this PR does / why we need it:

Enforce strict mode in the Patch operation to remove possible race conditions.

Which issue(s) this PR fixes:

Fixes #3899

Special notes for your reviewer:
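For illustration, a minimal sketch of the two patch modes, assuming strict mode corresponds to controller-runtime's optimistic-lock merge patch (the helper below is hypothetical, not the exact Kueue code):

```go
package sketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// removeFinalizer removes a finalizer from a Pod using either a plain
// merge patch or a strict (optimistic-lock) one.
func removeFinalizer(ctx context.Context, c client.Client, pod *corev1.Pod, finalizer string, strict bool) error {
	base := pod.DeepCopy()
	if !controllerutil.RemoveFinalizer(pod, finalizer) {
		return nil // nothing to do
	}

	var patch client.Patch
	if strict {
		// The patch pins the base object's resourceVersion, so the API
		// server rejects it with a Conflict error if the Pod changed
		// between the read and the write; the caller retries instead
		// of silently overwriting concurrent updates.
		patch = client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{})
	} else {
		// A plain merge patch computed from a possibly stale base can
		// drop finalizers added concurrently by other controllers.
		patch = client.MergeFrom(base)
	}
	return c.Patch(ctx, pod, patch)
}
```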

Does this PR introduce a user-facing change?

NONE

@k8s-ci-robot (Contributor):
Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. release-note-none Denotes a PR that doesn't merit a release note. labels Dec 27, 2024
@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. labels Dec 27, 2024
netlify bot commented Dec 27, 2024

Deploy Preview for kubernetes-sigs-kueue ready!

🔨 Latest commit: 7589f14
🔍 Latest deploy log: https://app.netlify.com/sites/kubernetes-sigs-kueue/deploys/676eaf43f8bdaf00082f83b4
😎 Deploy Preview: https://deploy-preview-3912--kubernetes-sigs-kueue.netlify.app
To edit notification comments on pull requests, go to your Netlify site configuration.

@mykysha (Contributor, Author) commented Dec 27, 2024

/test all

@mykysha (Contributor, Author) commented Dec 27, 2024

/cc @mbobrovskyi

@mbobrovskyi (Contributor) left a comment


/lgtm
Thanks!

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Dec 27, 2024
@k8s-ci-robot (Contributor):

LGTM label has been added.

Git tree hash: 8952d4f81acb656dc70e2e9ee998c643ab0a4228

@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mbobrovskyi, mykysha
Once this PR has been reviewed and has the lgtm label, please assign tenzen-y for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@mbobrovskyi (Contributor):

cc: @troychiu

@mykysha mykysha marked this pull request as ready for review December 27, 2024 14:16
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Dec 27, 2024
@tenzen-y (Member) left a comment


AFAIK the nonstrict patch is the intended behavior.
Could you add test cases demonstrating the situation where the nonstrict patch removes non-owned finalizers (kueue.x-k8s.io/managed) from the Pod?

Additionally, what is the reason for enforcing the strict patch everywhere?

@mimowo (Contributor) commented Jan 7, 2025

> AFAIK the nonstrict patch is the intended behavior.

I'm not sure about that, actually; dropping finalizers successfully added by other controllers is not ideal.
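To make the race concrete: merge patches replace the whole metadata.finalizers list, so a patch computed from a stale read can erase an entry another controller added in between. An illustrative interleaving (patch bodies approximate, finalizer name and resourceVersion made up):

```
1. Kueue reads the Pod; finalizers: [kueue.x-k8s.io/managed].
2. Another controller adds example.com/cleanup; finalizers now has two entries.
3. Kueue patches from its stale read to remove its own finalizer:
     non-strict body:  {"metadata":{"finalizers":null}}
   The list is replaced wholesale, so example.com/cleanup is dropped too.
   With the optimistic lock, the body also pins the stale version:
     strict body:      {"metadata":{"resourceVersion":"41","finalizers":null}}
   and the API server rejects it with a 409 Conflict, forcing a re-read.
```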

> Could you add test cases demonstrating the situation where the nonstrict patch removes non-owned finalizers (kueue.x-k8s.io/managed) from the Pod?

Race conditions like this might be tricky to test. How would you imagine the test: an e2e test which repeats the operation multiple times, or rather a unit test?

> Additionally, what is the reason for enforcing the strict patch everywhere?

RemoveFinalizer was the only use of non-strict mode, so it makes sense to drop the flag if no other place wants to use it.

However, I think there is some risk in using strict patches: a possible performance impact due to the necessary retries. I don't expect it, because we manage scheduling gates anyway and that wasn't a problem. However, for safety I would suggest adding a feature gate, RemoveFinalizersWithStrictPatch, which is Beta now and will graduate to GA in the future; it gives us a safety mechanism for users who run Kueue at scale, for whom the change could have an impact. WDYT @tenzen-y?
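For reference, a sketch of what such a gate could look like, assuming Kueue follows the upstream component-base feature-gate pattern (the gate name is from the proposal above; the stage and default are illustrative, not decided):

```go
package features

import (
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/component-base/featuregate"
)

const (
	// RemoveFinalizersWithStrictPatch enables the optimistic-lock
	// (strict) patch when removing finalizers, so concurrent writes
	// fail with a Conflict instead of being silently overwritten.
	//
	// beta: (illustrative; graduation criteria TBD per this thread)
	RemoveFinalizersWithStrictPatch featuregate.Feature = "RemoveFinalizersWithStrictPatch"
)

var defaultFeatureGate = featuregate.NewFeatureGate()

func init() {
	utilruntime.Must(defaultFeatureGate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
		// Default-on in Beta: users running Kueue at scale can still
		// opt out if the extra retries prove costly.
		RemoveFinalizersWithStrictPatch: {Default: true, PreRelease: featuregate.Beta},
	}))
}
```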

@trasc (Contributor) commented Jan 14, 2025

/uncc

@k8s-ci-robot k8s-ci-robot removed the request for review from trasc January 14, 2025 11:55
@tenzen-y (Member):
>> AFAIK the nonstrict patch is the intended behavior.
>
> I'm not sure about that, actually; dropping finalizers successfully added by other controllers is not ideal.
>
>> Could you add test cases demonstrating the situation where the nonstrict patch removes non-owned finalizers (kueue.x-k8s.io/managed) from the Pod?
>
> Race conditions like this might be tricky to test. How would you imagine the test: an e2e test which repeats the operation multiple times, or rather a unit test?
>
>> Additionally, what is the reason for enforcing the strict patch everywhere?
>
> RemoveFinalizer was the only use of non-strict mode, so it makes sense to drop the flag if no other place wants to use it.
>
> However, I think there is some risk in using strict patches: a possible performance impact due to the necessary retries. I don't expect it, because we manage scheduling gates anyway and that wasn't a problem. However, for safety I would suggest adding a feature gate, RemoveFinalizersWithStrictPatch, which is Beta now and will graduate to GA in the future; it gives us a safety mechanism for users who run Kueue at scale, for whom the change could have an impact. WDYT @tenzen-y?

Yes, performance was my primary concern, due to requeueing objects to reconcilers.
So, guarding the strict patch behind the RemoveFinalizersWithStrictPatch gate would be worth it.
Additionally, I would propose adding Kueue performance-testing metrics to the feature graduation criteria. For example, we could restrict graduation with something like: if the performance metrics do not degrade by more than 5%, we can graduate it to GA.

@mimowo WDYT?

@mimowo WDYT?

@mimowo (Contributor) commented Jan 16, 2025

> For example, we could restrict graduation with something like: if the performance metrics do not degrade by more than 5%, we can graduate it to GA.

Sounds reasonable. I'm just wondering how much insight we get from artificial testing, because the scale requirement isn't clear to me: some users will run it at a large scale, and I'm not sure how representative small-scale testing would be. I suppose we would need at least 10000 pods. WDYT @tenzen-y @mykysha?

Alternatively (or along with that), we could enable it by default and wait three releases before graduation, to gather feedback from users.

@mykysha (Contributor, Author) commented Jan 16, 2025

Both approaches sound good to me. I believe the user-feedback approach would be more informative; however, doing some performance measurement alongside it definitely wouldn't hurt either.

@mimowo (Contributor) commented Jan 16, 2025

I think we will not need a KEP process for that, but let me check if @tenzen-y is ok with the plan:

  • introduce the feature gate in Beta, and add a TODO with the graduation criteria for stable, linking to the issue "Performance testing for the impact of RemoveFinalizersWithStrictPatch"; in the issue, propose to test it manually.
  • also add a comment to graduate in 0.13 or later, to await user feedback while the gate is enabled.

@tenzen-y (Member):
> Some users will run it at a large scale, and I'm not sure how representative small-scale testing would be. I suppose we would need at least 10000 pods. WDYT @tenzen-y @mykysha?

That makes sense. We should verify the performance impact in a large environment so the issue shows up clearly.
So, if it's challenging to simulate the situation with our performance tests, I guess we can simulate it manually with other tools like KWOK.

> I think we will not need a KEP process for that, but let me check if @tenzen-y is ok with the plan:
>
> • introduce the feature gate in Beta, and add a TODO with the graduation criteria for stable, linking to the issue "Performance testing for the impact of RemoveFinalizersWithStrictPatch"; in the issue, propose to test it manually.
> • also add a comment to graduate in 0.13 or later, to await user feedback while the gate is enabled.

I'm OK without a KEP. But at least, could we summarize the graduation criteria and background in a dedicated issue?

@mimowo (Contributor) commented Jan 17, 2025

> So, if it's challenging to simulate the situation with our performance tests, I guess we can simulate it manually with other tools like KWOK.

Right, I think it might be tricky to automate, and it would consume our build time. So I'm leaning towards just a manual experiment, either in a real k8s cluster or in KWOK. I think a scale of 10000 pods would be enough; they don't need to be running at the same time. Please also confirm how many errors we get in that case.
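One way to gather that number during such an experiment, sketched under the assumption that the strict patch surfaces stale writes as 409 Conflicts (the helper and counter below are hypothetical):

```go
package sketch

import (
	"context"
	"sync/atomic"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
)

// removeFinalizerCountingConflicts retries the strict patch on 409s and
// counts how often the optimistic lock rejected a stale write.
func removeFinalizerCountingConflicts(ctx context.Context, c client.Client, key client.ObjectKey, finalizer string, conflicts *atomic.Int64) error {
	return retry.RetryOnConflict(retry.DefaultBackoff, func() error {
		// Re-read the Pod on every attempt so the patch base is fresh.
		var pod corev1.Pod
		if err := c.Get(ctx, key, &pod); err != nil {
			return err
		}
		base := pod.DeepCopy()
		if !controllerutil.RemoveFinalizer(&pod, finalizer) {
			return nil // already removed
		}
		err := c.Patch(ctx, &pod, client.MergeFromWithOptions(base, client.MergeFromWithOptimisticLock{}))
		if apierrors.IsConflict(err) {
			conflicts.Add(1)
		}
		return err
	})
}
```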

@mimowo (Contributor) commented Jan 17, 2025

> But at least, could we summarize the graduation criteria and background in a dedicated issue?

SGTM. We can add a TODO(# issue number) comment. @mykysha please open the issue and update the PR accordingly.

@k8s-ci-robot (Contributor):

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jan 21, 2025
Labels
cncf-cla: yes, kind/bug, lgtm, needs-rebase, release-note-none, size/M

Development
Successfully merging this pull request may close these issues:
Finalizer patch on the pod may overwrite other changes to the finalizer (#3899)

6 participants