Late binding support in workload identity feature #2131

Open
jack4it opened this issue Oct 10, 2024 · 2 comments
Labels: lifecycle/stale

Comments


jack4it commented Oct 10, 2024

Is your feature request related to a problem?/Why is this needed

Today's workload identity support appears to use the driver's service account directly, with a single tenant ID / client ID pair bound to the controller/node daemon. That makes it difficult to support multiple storage accounts that reside in different AAD tenants.

Describe the solution you'd like in detail

Add logic to instead use a service account defined in the storage class and fetch a token for that particular class. We could then define multiple storage classes for storage accounts that live in different tenants and are governed by different sets of identities; a rough sketch of what such classes could look like follows below.
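
To make the proposal concrete, here is a hypothetical sketch of per-StorageClass identity binding. The `workloadIdentity*` parameter names are purely illustrative and are not part of the current azurefile-csi-driver API; `storageAccount` and `resourceGroup` are existing parameters, and all account/tenant values are placeholders.

```yaml
# Hypothetical sketch only: the workloadIdentity* parameters do NOT exist in the
# current azurefile-csi-driver API; they illustrate the proposed per-class binding.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-tenant-a
provisioner: file.csi.azure.com
parameters:
  storageAccount: sttenanta            # storage account living in AAD tenant A
  resourceGroup: rg-tenant-a
  # Proposed: the identity the driver would exchange a token for when talking to
  # this storage account, instead of the one identity bound to the controller.
  workloadIdentityServiceAccount: tenant-a-sa           # hypothetical
  workloadIdentityServiceAccountNamespace: tenant-a     # hypothetical
  workloadIdentityClientID: "<tenant-a-client-id>"      # hypothetical
  workloadIdentityTenantID: "<tenant-a-aad-tenant-id>"  # hypothetical
---
# A second class, bound to a different account in a different AAD tenant.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-tenant-b
provisioner: file.csi.azure.com
parameters:
  storageAccount: sttenantb
  resourceGroup: rg-tenant-b
  workloadIdentityServiceAccount: tenant-b-sa           # hypothetical
  workloadIdentityServiceAccountNamespace: tenant-b     # hypothetical
  workloadIdentityClientID: "<tenant-b-client-id>"      # hypothetical
  workloadIdentityTenantID: "<tenant-b-aad-tenant-id>"  # hypothetical
```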

Describe alternatives you've considered

We fall back to the storage account key for now (roughly as sketched below); otherwise, we'd have to install a driver instance per storage account.
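
For reference, the account-key fallback looks roughly like the following. The secret key names follow the convention documented for the Azure File driver, and the standard CSI secret parameters are used to reference it, but treat the exact names, namespaces, and whether provisioning via an existing key works this way in your setup as assumptions.

```yaml
# Fallback in use today: a per-account secret holding the storage account key,
# referenced from the StorageClass via the standard CSI secret parameters.
apiVersion: v1
kind: Secret
metadata:
  name: azure-storage-account-tenant-a
  namespace: kube-system
type: Opaque
stringData:
  azurestorageaccountname: sttenanta
  azurestorageaccountkey: "<account-key>"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-tenant-a-key
provisioner: file.csi.azure.com
parameters:
  storageAccount: sttenanta
  csi.storage.k8s.io/provisioner-secret-name: azure-storage-account-tenant-a
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: azure-storage-account-tenant-a
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
```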

Additional context

andyzhangx (Member) commented:

@jack4it are you using the open-source CSI driver, following this guide? https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/workload-identity-deploy-csi-driver.md
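
For context, that guide wires workload identity to the driver itself: the driver's own service account is annotated with a single client ID / tenant ID, which is why everything the driver does is currently tied to one identity. A sketch of that setup, with the service account name assumed and all values as placeholders, looks roughly like this:

```yaml
# Illustrative sketch of driver-level workload identity: one client/tenant ID is
# annotated on the driver's own service account, so every storage account the
# driver touches is accessed with that single identity.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: csi-azurefile-controller-sa   # controller SA name assumed; see the guide
  namespace: kube-system
  annotations:
    azure.workload.identity/client-id: "<client-id>"
    azure.workload.identity/tenant-id: "<tenant-id>"
```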

k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 13, 2025.