Remove yamlgen. Refactor deployment and yaml generation to use Helm templating (#350)

Conversation
Force pushed to update certificate resources that changed with #340. Ran through successful updates.
Force pushed: 6425170 to 64622a9.
Marking this as ready for review. There are no big things left to do here, and I'd love to start getting some feedback from the community. I'm sure there are little things left since this is a bigger refactor, but we can continue to tackle those in the coming weeks! We'll also want to think about doing a one-off change to the new release workflow to ensure that it works for releases; i.e., we can change it to trigger on each push.
Looks good to me. There are a lot of changes here, but with the noted testing done I feel comfortable with the change.
Looks like there is a merge conflict to be resolved now. Since an update is needed anyway, I've pointed out a couple of small things.
deploy/build.rs (outdated):

```rust
const DEPLOY_DIR: &str = env!("CARGO_MANIFEST_DIR");

fn main() {
    dotenv::dotenv().ok();
```
Since we're still using dotenv as a development tool, should we leave the .env file around with some comments for what it can do?
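For illustration, a development .env kept around for dotenv might look like the following. The variable name and value here are assumptions for the sketch, not the repository's actual contents:

```
# Loaded by dotenv::dotenv() during development builds only; production
# deployments get their configuration from the rendered Helm templates.
# Namespace to template brupop resources into (illustrative value).
NAMESPACE=bottlerocket-aws
```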
Hmmm, that's a good point. I think this means we'll have to, yes. Let me circle back and look at that.
The new workflow should become:

1. Create a values.yaml file (maybe we can create a top-level one for development purposes and git-ignore it).
2. Populate it with the necessary values. For example:

   namespace: "my-custom-ns"

3. Pass the values file during install, either via helm install -f values.yaml ... (to install to a cluster) or helm template -f values.yaml ... > file.yaml (to template into the yaml).
I think there's some better dev UX to tease out here, especially with how the integration tests work into this flow: currently, the integration tests just read from bottlerocket-update-operator.yaml and apply that to the cluster, but there's opportunity here to make this more seamless.
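As a concrete sketch of that workflow, a git-ignored development values.yaml could look like the following. Only the namespace key appears in this thread; treat the file layout as an assumption about the chart's schema rather than its definitive interface:

```yaml
# values.yaml (top-level, git-ignored; development only)
# Namespace that brupop resources are installed into.
namespace: "my-custom-ns"
```

It would then be consumed with either helm install -f values.yaml ... (to install to a cluster) or helm template -f values.yaml ... > file.yaml (to render the yaml), as described in the steps above.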
deploy/charts/bottlerocket-shadow/templates/custom-resource-definition.yaml
This is awesome, nice work @jpmcb! 🚀
Whoops, forgot to hit send on these.
deploy/charts/bottlerocket-shadow/templates/custom-resource-definition.yaml
deploy/charts/bottlerocket-shadow/templates/custom-resource-definition.yaml
Force pushed to correct for the above. Going to do another round of iteration here based on comments and a few other small things.
Force pushed to add the following changes. Integration tests still looking 👍🏼
Force pushed to template a few remaining references to yamlgen.
Looks good! Just a few minor changes that I'd like to see and a few nits that I could take or leave.
deploy/charts/bottlerocket-shadow/templates/custom-resource-definition.yaml
Force pushed to address merge conflicts. Addressing comments next!
Force pushed to address the above comments.
Force pushed to address Sean's comments around inferring the namespace throughout. Now, when the various client configs are created, we pass the namespace down into subsequent constructors that require namespaced resources. It didn't make sense to have to create a new client config with each query to namespaced resources; that would be a frequent, costly operation. Instead, we can now access the namespace data member built from the loaded config.
Super clean! Nice work John! 🚀 🎉
About 1/3 coverage from me. A few questions of ignorance on my part. Looks good.
```diff
@@ -217,11 +217,14 @@ async fn run() -> Result<()> {
     )
     .await
     .context(error::LoadKubeConfigSnafu)?;
+
+    // infer the default namespace from the loaded kubeconfig
+    let namespace = config.default_namespace.to_string();
```
If the user can configure the namespace of their choice, how is it that you get that from the kubeconfig?
Good question: the config.default_namespace property is the namespace defined by the loaded client config. This is typically the namespace where the execution is currently happening.

So, in the case of the controller, it reads the in-cluster config and infers that it is in "bottlerocket-aws" (or whatever namespace was set during deploy).

This case in the integration tests is a bit different in that we aren't inferring an in-cluster client config; we load one explicitly from eksctl. But the principle is the same: it infers the default namespace to deploy to (and in this case, it is actually "default").

It's a bit confusing because the name "default" is overloaded. Most of the time, from the controller's perspective, we are simply attempting to infer what namespace the current context is executing in.
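The resolve-once, pass-down pattern described here can be sketched in plain Rust. The types below (ClientConfig, NamespacedClient) are illustrative stand-ins, not the actual kube-rs or brupop types:

```rust
// Illustrative sketch: resolve the namespace once from the loaded config,
// then hand it to every constructor that needs namespaced resources,
// instead of re-creating a client config on each query.

/// Stand-in for the loaded kubeconfig / in-cluster config (hypothetical type).
struct ClientConfig {
    default_namespace: String,
}

/// A client for namespaced resources; it stores the namespace it was built with.
struct NamespacedClient {
    namespace: String,
}

impl NamespacedClient {
    fn new(namespace: &str) -> Self {
        Self {
            namespace: namespace.to_string(),
        }
    }

    /// Builds an API path scoped to the namespace resolved at construction time.
    fn resource_path(&self, resource: &str) -> String {
        format!("/api/v1/namespaces/{}/{}", self.namespace, resource)
    }
}

fn main() {
    // Infer the default namespace from the loaded config once...
    let config = ClientConfig {
        default_namespace: "bottlerocket-aws".to_string(),
    };
    let namespace = config.default_namespace.clone();

    // ...and pass it down to the constructors that need it.
    let client = NamespacedClient::new(&namespace);
    println!("{}", client.resource_path("pods"));
    // prints "/api/v1/namespaces/bottlerocket-aws/pods"
}
```

The point of the design is that the (comparatively expensive) config load happens once, while each namespaced query only reads an already-stored string.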
Force pushed with a rename. FYI, this will also need a hefty rebase and a bit of additional legwork to get in once the cron PR merges.
I hope this helm chart gets ready soon! 🙏 😄

Yes! This is definitely a needed feature. Can't wait!

Hi there. Just wanted to check if there's any insight on when this might be merged?
Force pushed: 9b98b50 to 7399de4.
Force pushed with a large rebase (which included integrating the work for the scheduler cron) and some docs updates. The biggest thing you'll notice is different is that the templates now include the scheduler cron expression:

```diff
diff --git a/bottlerocket-update-operator.yaml b/bottlerocket-update-operator.yaml
index cf982ec..a96c65a 100644
--- a/bottlerocket-update-operator.yaml
+++ b/bottlerocket-update-operator.yaml
@@ -711,10 +711,8 @@ spec:
           fieldPath: spec.nodeName
       - name: MAX_CONCURRENT_UPDATE
         value: "1"
-      - name: UPDATE_WINDOW_START
-        value: "0:0:0"
-      - name: UPDATE_WINDOW_STOP
-        value: "0:0:0"
+      - name: SCHEDULER_CRON_EXPRESSION
+        value: "* * * * * * *"
       image: public.ecr.aws/bottlerocket/bottlerocket-update-operator:v1.0.0
       name: brupop
       priorityClassName: brupop-controller-high-priority
```
Testing
This should be ready to go in now.
Follow the instructions above; it works perfectly on my side! Nice work!!!
Overall looks good, and I think it would be great to get this in and available to gather more usage and feedback.
Thanks! Looks like the checks might just need a re-run to get by a transient error.
- helm install brupop-crd --create-namespace deploy/charts/bottlerocket-shadow
- helm install brupop deploy/charts/bottlerocket-shadow
- Able to deploy brupop to different namespaces
- Removes yamlgen subprogram
- Snapshot testing of rust generated CRD yaml via cargo insta

Signed-off-by: John McBride <[email protected]>
Force pushed to rebase.
Let's gooooo!! 🥳
Issue number:
Closes #126
Description of changes:
Refactors deployment and yaml templating to use Helm. Still a work in progress; using this draft PR as a staging area for early feedback.
Testing done:
Able to:

- make images
- helm install brupop-crd deploy/charts/bottlerocket-shadow
- helm install brupop deploy/charts/bottlerocket-update-operator
Terms of contribution:
By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.