update containerd to v2.0.1 #3828
base: main
Conversation
/retest
I plan on doing some more local testing, but this looks good to me and CI is happy. /lgtm
Compare: 560c251 to b5b1ff6
@@ -82,7 +83,9 @@ func LoadImageArchive(n nodes.Node, image io.Reader) error {
	if err != nil {
		return err
	}
	cmd := n.Command("ctr", "--namespace=k8s.io", "images", "import", "--all-platforms", "--digests", "--snapshotter="+snapshotter, "-").SetStdin(image)
	// --local=true: Disables the Transfer API mode that is enabled by default since containerd v2.0.
	// Workaround for `ctr: rpc error: code = InvalidArgument desc = unable to initialize unpacker: no unpack platforms defined: invalid argument`
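The full invocation with the workaround flag can be sketched as a pure argument builder. This is only an illustration: `importArgs` is a hypothetical helper, and the exact placement of `--local=true` among the flags is an assumption based on the diff above.

```go
package main

import "fmt"

// importArgs builds the argument list for `ctr images import` as used when
// loading an image archive into a node. Hypothetical helper for illustration;
// the flags mirror the diff above, with --local=true added to disable the
// Transfer API mode that containerd v2.0 enables by default.
func importArgs(snapshotter string) []string {
	return []string{
		"--namespace=k8s.io", "images", "import",
		"--all-platforms", "--digests",
		"--snapshotter=" + snapshotter,
		"--local=true",
		"-", // read the archive from stdin
	}
}

func main() {
	fmt.Println(importArgs("overlayfs"))
}
```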
We probably need a long-term solution for this?
Also: #3805 (Attempts to set --local=false instead of --local=true to resolve a different problem)
}
var snapshotter string
switch configVersion {
case 2: // Introduced in containerd v1.3. Still supported in containerd v2.
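The switch above dispatches on configVersion. One way to derive that value from a node's config.toml can be sketched as follows; this is a minimal regexp-based sketch, not kind's actual parsing, and `containerdConfigVersion` is a hypothetical helper (real code would use a proper TOML parser).

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// containerdConfigVersion extracts the top-level "version" field from a
// containerd config.toml. Hypothetical sketch using a regexp; production
// code would use a TOML parser. containerd treats a missing version as 1.
func containerdConfigVersion(config string) int {
	re := regexp.MustCompile(`(?m)^\s*version\s*=\s*(\d+)`)
	m := re.FindStringSubmatch(config)
	if m == nil {
		return 1
	}
	n, _ := strconv.Atoi(m[1])
	return n
}

func main() {
	fmt.Println(containerdConfigVersion("version = 2\n[plugins]\n")) // prints 2
}
```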
We probably need to allow user-supplied config patches to similarly target 2 vs 3 separately, though that doesn't need to be done in this PR.
do we depend on the version of the config?
kind/pkg/cluster/internal/create/actions/config/config.go, lines 119 to 146 in 90d9439:
if len(ctx.Config.ContainerdConfigPatches) > 0 || len(ctx.Config.ContainerdConfigPatchesJSON6902) > 0 {
	fns := make([]func() error, len(kubeNodes))
	for i, node := range kubeNodes {
		node := node // capture loop variable
		fns[i] = func() error {
			// read and patch the config
			const containerdConfigPath = "/etc/containerd/config.toml"
			var buff bytes.Buffer
			if err := node.Command("cat", containerdConfigPath).SetStdout(&buff).Run(); err != nil {
				return errors.Wrap(err, "failed to read containerd config from node")
			}
			patched, err := patch.TOML(buff.String(), ctx.Config.ContainerdConfigPatches, ctx.Config.ContainerdConfigPatchesJSON6902)
			if err != nil {
				return errors.Wrap(err, "failed to patch containerd config")
			}
			if err := nodeutils.WriteFile(node, containerdConfigPath, patched); err != nil {
				return errors.Wrap(err, "failed to write patched containerd config")
			}
			// restart containerd now that we've re-configured it
			// skip if containerd is not running
			if err := node.Command("bash", "-c", `! pgrep --exact containerd || systemctl restart containerd`).Run(); err != nil {
				return errors.Wrap(err, "failed to restart containerd after patching config")
			}
			return nil
		}
	}
	if err := errors.UntilErrorConcurrent(fns); err != nil {
		return err
do we depend on the version of the config?
Users' patches will not work as intended; they will wind up merging in keys that are not valid for v3, because all the keys have new namespacing again.
(and if they simply switch to v3 keys, it will not work with older kind images)
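For context on the re-namespacing: the same CRI snapshotter setting lives under different keys in the two config formats. The fragment below is illustrative, based on containerd's config-versions documentation; the exact v3 key layout should be double-checked against those docs.

```toml
# containerd config format version 2 (introduced in containerd v1.3,
# still read by containerd 2.x):
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"

# The equivalent under format version 3 (containerd 2.x only), where the
# CRI plugin is split into separate images/runtime plugins:
#   version = 3
#   [plugins."io.containerd.cri.v1.images"]
#     snapshotter = "overlayfs"
```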
Ah, so you refer to the "existing" user patches. I think it's a matter of how you see it, then... for me those are containerd-specific patches, and yes, users need to adapt them for containerd 2.0, unless there is some containerd feature that allows migrating between formats.
for me those are containerd specific patches, and yes, users need to adapt them to containerd 2.0 unless there is some containerd feature that allows to migrate between formats
But then their config won't work for older kind releases/images.
The way we handle this with kubeadm config patches is that you can have multiple patches targeting different kubeadm API versions. So far that's been irrelevant for containerd, because for nearly all of our history we've only had the version 2 containerd config format.
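For reference, a typical user patch today targets the version-2 key layout via `containerdConfigPatches` in the kind Cluster config. The example below is illustrative (the registry mirror endpoint is made up):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |
  # version-2 keys; under config format v3 this table is namespaced differently
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:5000"]
    endpoint = ["http://kind-registry:5000"]
```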
But actually, that is for a future PR, because in this PR we're still using a v2.0 config (images/base/files/etc/containerd/config.toml), so this part of the diff seems to be technically unnecessary for now.
I see, I got it now: so we keep using the containerd config version 2 format and let containerd deal with the translation to version 3 internally: https://github.com/containerd/containerd/blob/main/docs/cri/config.md#config-versions
Yeah, before we switch our config to v3 we should give users a way to deal with it smoothly, but we're not actually switching to v3 in this PR, even though this PR adds partial support for v3.
/lgtm I'm positively surprised the diff is so small ... IIRC we already had issues with
I don't think this addresses all of the issues, and since CRI only has v1 and not v1alpha2 in containerd v2, we break Kubernetes < 1.26 (which is probably OK, but worth noting; we have not actively broken an old k8s version for some time now).
CRI v1 seems to have been introduced in Kubernetes 1.20: kubernetes/kubernetes@9fcede9
v1 was added in 1.20: kubernetes/kubernetes#96387
ACK, definitely no problem with that. We will still need to make sure to note this for the release, and we should consider whether this is sufficient excuse to drop other < 1.20 bits from the project. We've been very lenient about dropping support outright so far. And re: other parts: #3828 (comment)
At some point <1.19 will be widely unusable because of cgroups v2 anyhow.
Fix issue 3768. Signed-off-by: Akihiro Suda <[email protected]>
Compare: b5b1ff6 to ccd6c2a
New changes are detected. LGTM label has been removed.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: AkihiroSuda. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@AkihiroSuda: The following tests failed, say
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Fix #3768