diff --git a/Days/day60.md b/Days/day60.md index b88ace47c..6c3bb294f 100644 --- a/Days/day60.md +++ b/Days/day60.md @@ -1,19 +1,20 @@ --- -title: '#90DaysOfDevOps - Docker Containers, Provisioners & Modules - Day 60' +title: "#90DaysOfDevOps - Docker Containers, Provisioners & Modules - Day 60" published: false -description: '90DaysOfDevOps - Docker Containers, Provisioners & Modules' -tags: 'devops, 90daysofdevops, learning' +description: "90DaysOfDevOps - Docker Containers, Provisioners & Modules" +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049052 --- -## Docker Containers, Provisioners & Modules -On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE virtualbox environment. In this section we are going to be deploy a Docker container with some configuration to our local Docker environment. +## Docker Containers, Provisioners & Modules + +On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE virtualbox environment. In this section we are going to be deploy a Docker container with some configuration to our local Docker environment. ### Docker Demo -First up we are going to use the code block below, the outcome of the below is that we would like a simple web app to be deployed into docker and to publish this so that it is available to our network. We will be using nginx and we will make this available externally on our laptop over localhost and port 8000. We are using a docker provider from the community and you can see the docker image we are using also stated in our configuration. +First up we are going to use the code block below, the outcome of the below is that we would like a simple web app to be deployed into docker and to publish this so that it is available to our network. We will be using nginx and we will make this available externally on our laptop over localhost and port 8000. We are using a docker provider from the community and you can see the docker image we are using also stated in our configuration. ``` terraform { @@ -42,21 +43,21 @@ resource "docker_container" "nginx" { } ``` -The first task is to use `terraform init` command to download the provider to our local machine. +The first task is to use `terraform init` command to download the provider to our local machine. ![](Images/Day60_IAC1.png) -We then run our `terraform apply` followed by `docker ps` and you can see we have a running container. +We then run our `terraform apply` followed by `docker ps` and you can see we have a running container. ![](Images/Day60_IAC2.png) -If we then open a browser we can navigate to http://localhost:8000/ and you will see we have access to our NGINX container. +If we then open a browser we can navigate to `http://localhost:8000/` and you will see we have access to our NGINX container. ![](Images/Day60_IAC3.png) -You can find out more information on the [Docker Provider](https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/container) +You can find out more information on the [Docker Provider](https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/container) -The above is a very simple demo of what can be done with Terraform plus Docker and how we can now manage this under the Terraform state. We covered docker compose in the containers section and there is a little crossover in a way between this, infrastructure as code as well as then Kubernetes. 
+The above is a very simple demo of what can be done with Terraform plus Docker and how we can now manage this under the Terraform state. We covered docker compose in the containers section and there is a little crossover in a way between this, infrastructure as code as well as then Kubernetes. For the purpose of showing this and how Terraform can handle a little more complexity, we are going to take the docker compose file for wordpress and mysql that we created with docker compose and we will put this to Terraform. You can find the [docker-wordpress.tf](/Days/IaC/Docker-Wordpress/docker-wordpress.tf) @@ -120,26 +121,25 @@ resource "docker_container" "wordpress" { } ``` -We again put this is in a new folder and then run our `terraform init` command to pull down our provisioners required. +We again put this is in a new folder and then run our `terraform init` command to pull down our provisioners required. ![](Images/Day60_IAC4.png) -We then run our `terraform apply` command and then take a look at our docker ps output we should see our newly created containers. +We then run our `terraform apply` command and then take a look at our docker ps output we should see our newly created containers. ![](Images/Day60_IAC5.png) -We can then also navigate to our WordPress front end. Much like when we went through this process with docker-compose in the containers section we can now run through the setup and our wordpress posts would be living in our MySQL database. +We can then also navigate to our WordPress front end. Much like when we went through this process with docker-compose in the containers section we can now run through the setup and our wordpress posts would be living in our MySQL database. ![](Images/Day60_IAC6.png) -Obviously now we have covered containers and Kubernetes in some detail, we probably know that this is ok for testing but if you were really going to be running a website you would not do this with containers alone and you would look at using Kubernetes to achieve this, Next up we are going to take a look using Terraform with Kubernetes. - +Obviously now we have covered containers and Kubernetes in some detail, we probably know that this is ok for testing but if you were really going to be running a website you would not do this with containers alone and you would look at using Kubernetes to achieve this, Next up we are going to take a look using Terraform with Kubernetes. -### Provisioners +### Provisioners -Provisioners are there so that if something cannot be declartive we have a way in which to parse this to our deployment. +Provisioners are there so that if something cannot be declarative we have a way in which to parse this to our deployment. -If you have no other alternative and adding this complexity to your code is the place to go then you can do this by running something similar to the following block of code. +If you have no other alternative and adding this complexity to your code is the place to go then you can do this by running something similar to the following block of code. ``` resource "docker_container" "db" { @@ -152,40 +152,41 @@ resource "docker_container" "db" { ``` -The remote-exec provisioner invokes a script on a remote resource after it is created. This could be used for something OS specific or it could be used to wrap in a configuration management tool. Although notice that we have some of these covered in their own provisioners. +The remote-exec provisioner invokes a script on a remote resource after it is created. 
This could be used for something OS specific or it could be used to wrap in a configuration management tool. Although notice that we have some of these covered in their own provisioners. [More details on provisioners](https://www.terraform.io/language/resources/provisioners/syntax) - file -- local-exec -- remote-exec -- vendor - - ansible - - chef - - puppet +- local-exec +- remote-exec +- vendor + - ansible + - chef + - puppet -### Modules +### Modules -Modules are containers for multiple resources that are used together. A module consists of a collection of .tf files in the same directory. +Modules are containers for multiple resources that are used together. A module consists of a collection of .tf files in the same directory. -Modules are a good way to separate your infrastructure resources as well as being able to pull in third party modules that have already been created so you do not have to re invent the wheel. +Modules are a good way to separate your infrastructure resources as well as being able to pull in third party modules that have already been created so you do not have to re invent the wheel. -For example if we wanted to use the same project to build out some VMs, VPCs, Security Groups and then also a Kubernetes cluster we would likely want to split our resources out into modules to better define our resources and where they are grouped. +For example if we wanted to use the same project to build out some VMs, VPCs, Security Groups and then also a Kubernetes cluster we would likely want to split our resources out into modules to better define our resources and where they are grouped. -Another benefit to modules is that you can take these modules and use them on other projects or share publicly to help the community. +Another benefit to modules is that you can take these modules and use them on other projects or share publicly to help the community. We are breaking down our infrastructure into components, components are known here as modules. -## Resources -I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. +## Resources + +I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day61.md b/Days/day61.md index 4b159328e..6cb9011fd 100644 --- a/Days/day61.md +++ b/Days/day61.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61' +title: "#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61" published: false description: 90DaysOfDevOps - Kubernetes & Multiple Environments tags: "devops, 90daysofdevops, learning" @@ -7,25 +7,26 @@ cover_image: null canonical_url: null id: 1048743 --- -## Kubernetes & Multiple Environments + +## Kubernetes & Multiple Environments So far during this section on Infrastructure as code we have looked at deploying virtual machines albeit to virtualbox but the premise is the same really as we define in code what we want our virtual machine to look like and then we deploy. The same for Docker containers and in this session we are going to take a look at how Terraform can be used to interact with resources supported by Kubernetes. I have been using Terraform to deploy my Kubernetes clusters for demo purposes across the 3 main cloud providers and you can find the repository [tf_k8deploy](https://github.com/MichaelCade/tf_k8deploy) -However you can also use Terraform to interact with objects within the Kubernetes cluster, this could be using the [Kubernetes provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) or it could be using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) to manage your chart deployments. +However you can also use Terraform to interact with objects within the Kubernetes cluster, this could be using the [Kubernetes provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) or it could be using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) to manage your chart deployments. -Now we could use `kubectl` as we have showed in previous sections. But there are some benefits to using Terraform in your Kubernetes environment. +Now we could use `kubectl` as we have showed in previous sections. But there are some benefits to using Terraform in your Kubernetes environment. 
- Unified workflow - if you have used terraform to deploy your clusters, you could use the same workflow and tool to deploy within your Kubernetes clusters -- Lifecycle management - Terraform is not just a provisioning tool, its going to enable change, updates and deletions. +- Lifecycle management - Terraform is not just a provisioning tool, its going to enable change, updates and deletions. ### Simple Kubernetes Demo Much like the demo we created in the last session we can now deploy nginx into our Kubernetes cluster, I will be using minikube here again for demo purposes. We create our Kubernetes.tf file and you can find this in the [folder](/Days/IaC/Kubernetes/kubernetes.tf) -In that file we are going to define our Kubernetes provider, we are going to point to our kubeconfig file, create a namespace called nginx, then we will create a deployment which contains 2 replicas and finally a service. +In that file we are going to define our Kubernetes provider, we are going to point to our kubeconfig file, create a namespace called nginx, then we will create a deployment which contains 2 replicas and finally a service. ``` terraform { @@ -93,73 +94,77 @@ resource "kubernetes_service" "test" { } ``` -The first thing we have to do in our new project folder is run the `terraform init` command. +The first thing we have to do in our new project folder is run the `terraform init` command. ![](Images/Day61_IAC1.png) -And then before we run the `terraform apply` command, let me show you that we have no namespaces. +And then before we run the `terraform apply` command, let me show you that we have no namespaces. ![](Images/Day61_IAC2.png) -When we run our apply command this is going to create those 3 new resources, namespace, deployment and service within our Kubernetes cluster. +When we run our apply command this is going to create those 3 new resources, namespace, deployment and service within our Kubernetes cluster. ![](Images/Day61_IAC3.png) -We can now take a look at the deployed resources within our cluster. +We can now take a look at the deployed resources within our cluster. ![](Images/Day61_IAC4.png) -Now because we are using minikube and you will have seen in the previous section this has its own limitations when we try and play with the docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to http://localhost:30201/ we should see our NGINX page. +Now because we are using minikube and you will have seen in the previous section this has its own limitations when we try and play with the docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to `http://localhost:30201/` we should see our NGINX page. ![](Images/Day61_IAC5.png) -If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through. +If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through. 
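+Before moving on, it can be worth confirming that Terraform really is tracking those Kubernetes objects in its state, and tearing the demo back down once you are finished. A quick sketch of the kind of checks you could run — these are standard `terraform` and `kubectl` commands, and the `nginx` namespace is the one created by the demo above:
+
+```Shell
+# List everything Terraform is now tracking; the namespace, deployment and service should all appear
+terraform state list
+
+# Compare that with what Kubernetes itself reports for the demo namespace
+kubectl get deployments,services -n nginx
+
+# When you are done, remove all three resources again through Terraform (it will ask for confirmation first)
+terraform destroy
+```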
+### Multiple Environments -### Multiple Environments +If we wanted to take any of the demos we have ran through but wanted to now have specific production, staging and development environments looking exactly the same and leveraging this code there are two approaches to achieve this with Terraform -If we wanted to take any of the demos we have ran through but wanted to now have specific production, staging and development environments looking exactly the same and leveraging this code there are two approaches to achieve this with Terraform +- `terraform workspaces` - multiple named sections within a single backend -- `terraform workspaces` - multiple named sections within a single backend +- file structure - Directory layout provides separation, modules provide reuse. -- file structure - Directory layout provides separation, modules provide reuse. +Each of the above do have their pros and cons though. -Each of the above do have their pros and cons though. +### terraform workspaces -### terraform workspaces +Pros -Pros -- Easy to get started -- Convenient terraform.workspace expression -- Minimises code duplication +- Easy to get started +- Convenient terraform.workspace expression +- Minimises code duplication Cons + - Prone to human error (we were trying to eliminate this by using TF) -- State stored within the same backend -- Codebase doesnt unambiguously show deployment configurations. +- State stored within the same backend +- Codebase doesn't unambiguously show deployment configurations. + +### File Structure -### File Structure +Pros -Pros -- Isolation of backends - - improved security - - decreased potential for human error +- Isolation of backends + - improved security + - decreased potential for human error - Codebase fully represents deployed state -Cons -- Multiple terraform apply required to provision environments -- More code duplication, but can be minimised with modules. +Cons + +- Multiple terraform apply required to provision environments +- More code duplication, but can be minimised with modules. + +## Resources -## Resources -I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. +I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day62.md b/Days/day62.md index 61ead6767..0d926058f 100644 --- a/Days/day62.md +++ b/Days/day62.md @@ -1,57 +1,58 @@ --- -title: '#90DaysOfDevOps - Testing, Tools & Alternatives - Day 62' +title: "#90DaysOfDevOps - Testing, Tools & Alternatives - Day 62" published: false -description: '90DaysOfDevOps - Testing, Tools & Alternatives' -tags: 'devops, 90daysofdevops, learning' +description: "90DaysOfDevOps - Testing, Tools & Alternatives" +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049053 --- + ## Testing, Tools & Alternatives -As we close out this section on Infrastructure as Code we must mention about testing our code, the various different tools available and then some of the alternatives to Terraform to achieve this. As I said at the start of the section my focus was on Terraform because it is firstly free and open source, secondly it is cross platform and agnostic to environments. But there are also alternatives out there that should be considered but the overall goal is to make people aware that this is the way to deploy your infrastructure. +As we close out this section on Infrastructure as Code we must mention about testing our code, the various different tools available and then some of the alternatives to Terraform to achieve this. As I said at the start of the section my focus was on Terraform because it is firstly free and open source, secondly it is cross platform and agnostic to environments. But there are also alternatives out there that should be considered but the overall goal is to make people aware that this is the way to deploy your infrastructure. -### Code Rot +### Code Rot -The first area I want to cover in this session is code rot, unlike application code, infrastructure as code might get used and then not for a very long time. Lets take the example that we are going to be using Terraform to deploy our VM environment in AWS, perfect and it works first time and we have our environment, but this environment doesnt change too often so the code gets left the state possibly or hopefully stored in a central location but the code does not change. 
+The first area I want to cover in this session is code rot, unlike application code, infrastructure as code might get used and then not for a very long time. Lets take the example that we are going to be using Terraform to deploy our VM environment in AWS, perfect and it works first time and we have our environment, but this environment doesn't change too often so the code gets left the state possibly or hopefully stored in a central location but the code does not change. -What if something changes in the infrastructure? But it is done out of band, or other things change in our environment. +What if something changes in the infrastructure? But it is done out of band, or other things change in our environment. -- Out of band changes -- Unpinned versions -- Deprecated dependancies -- Unapplied changes +- Out of band changes +- Unpinned versions +- Deprecated dependencies +- Unapplied changes -### Testing +### Testing -Another huge area that follows on from code rot and in general is the ability to test your IaC and make sure all areas are working the way they should. +Another huge area that follows on from code rot and in general is the ability to test your IaC and make sure all areas are working the way they should. -First up there are some built in testing commands we can take a look at: +First up there are some built in testing commands we can take a look at: -| Command | Description | -| --------------------- | ------------------------------------------------------------------------------------------ | -| `terraform fmt` | Rewrite Terraform configuration files to a canonical format and style. | -| `terraform validate` | Validates the configuration files in a directory, referring only to the configuration | -| `terraform plan` | Creates an execution plan, which lets you preview the changes that Terraform plans to make | -| Custom validation | Validation of your input variables to ensure they match what you would expect them to be | +| Command | Description | +| -------------------- | ------------------------------------------------------------------------------------------ | +| `terraform fmt` | Rewrite Terraform configuration files to a canonical format and style. | +| `terraform validate` | Validates the configuration files in a directory, referring only to the configuration | +| `terraform plan` | Creates an execution plan, which lets you preview the changes that Terraform plans to make | +| Custom validation | Validation of your input variables to ensure they match what you would expect them to be | -We also have some testing tools available external to Terraform: +We also have some testing tools available external to Terraform: - [tflint](https://github.com/terraform-linters/tflint) - - Find possible errors - - Warn about deprecated syntax, unused declarations. - - Enforce best practices, naming conventions. + - Find possible errors + - Warn about deprecated syntax, unused declarations. + - Enforce best practices, naming conventions. -Scanning tools +Scanning tools - [checkov](https://www.checkov.io/) - scans cloud infrastructure configurations to find misconfigurations before they're deployed. - [tfsec](https://aquasecurity.github.io/tfsec/v1.4.2/) - static analysis security scanner for your Terraform code. -- [terrascan](https://github.com/accurics/terrascan) - static code analyzer for Infrastructure as Code. +- [terrascan](https://github.com/accurics/terrascan) - static code analyser for Infrastructure as Code. 
- [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code. -- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues +- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues -Managed Cloud offering +Managed Cloud offering - [Terraform Sentinel](https://www.terraform.io/cloud-docs/sentinel) - embedded policy-as-code framework integrated with the HashiCorp Enterprise products. It enables fine-grained, logic-based policy decisions, and can be extended to use information from external sources. @@ -59,48 +60,49 @@ Automated testing - [Terratest](https://terratest.gruntwork.io/) - Terratest is a Go library that provides patterns and helper functions for testing infrastructure -Worth a mention +Worth a mention - [Terraform Cloud](https://cloud.hashicorp.com/products/terraform) - Terraform Cloud is HashiCorp’s managed service offering. It eliminates the need for unnecessary tooling and documentation for practitioners, teams, and organizations to use Terraform in production. -- [Terragrunt](https://terragrunt.gruntwork.io/) - Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state. +- [Terragrunt](https://terragrunt.gruntwork.io/) - Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state. -- [Atlantis](https://www.runatlantis.io/) - Terraform Pull Request Automation +- [Atlantis](https://www.runatlantis.io/) - Terraform Pull Request Automation -### Alternatives +### Alternatives -We mentioned on Day 57 when we started this section that there were some alternatives and I very much plan on exploring this following on from this challenge. +We mentioned on Day 57 when we started this section that there were some alternatives and I very much plan on exploring this following on from this challenge. -| Cloud Specific | Cloud Agnostic | +| Cloud Specific | Cloud Agnostic | | ------------------------------- | -------------- | -| AWS CloudFormation | Terraform | -| Azure Resource Manager | Pulumi | -| Google Cloud Deployment Manager | | +| AWS CloudFormation | Terraform | +| Azure Resource Manager | Pulumi | +| Google Cloud Deployment Manager | | + +I have used AWS CloudFormation probably the most out of the above list and native to AWS but I have not used the others other than Terraform. As you can imagine the cloud specific versions are very good in that particular cloud but if you have multiple cloud environments then you are going to struggle to migrate those configurations or you are going to have multiple management planes for your IaC efforts. + +I think an interesting next step for me is to take some time and learn more about [Pulumi](https://www.pulumi.com/) -I have used AWS CloudFormation probably the most out of the above list and native to AWS but I have not used the others other than Terraform. 
As you can imagine the cloud specific versions are very good in that particular cloud but if you have multiple cloud environments then you are going to struggle to migrate those configurations or you are going to have multiple management planes for your IaC efforts. +From a Pulumi comparison on their site -I think an interesting next step for me is to take some time and learn more about [Pulumi](https://www.pulumi.com/) - -From a Pulumi comparison on their site +> "Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stack’s current state and determines what resources need to be created, updated or deleted." -*"Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stack’s current state and determines what resources need to be created, updated or deleted."* +The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general purpose languages like Python, TypeScript, JavaScript, Go and .NET. -The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general purpose languages like Python, TypeScript, JavaScript, Go and .NET. +A quick overview [Introduction to Pulumi: Modern Infrastructure as Code](https://www.youtube.com/watch?v=QfJTJs24-JM) I like the ease and choices you are prompted with and want to get into this a little more. -A quick overview [Introduction to Pulumi: Modern Infrastructure as Code](https://www.youtube.com/watch?v=QfJTJs24-JM) I like the ease and choices you are prompted with and want to get into this a little more. +This wraps up the Infrastructure as code section and next we move on to that little bit of overlap with configuration management and in particular as we get past the big picture of configuration management we are going to be using Ansible for some of those tasks and demos. -This wraps up the Infrastructure as code section and next we move on to that little bit of overlap with configuration management and in particular as we get past the big picture of configuration management we are going to be using Ansible for some of those tasks and demos. +## Resources -## Resources -I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. +I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day63.md b/Days/day63.md index ab498820e..1f23e1d6e 100644 --- a/Days/day63.md +++ b/Days/day63.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63' +title: "#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63" published: false description: 90DaysOfDevOps - The Big Picture Configuration Management tags: "devops, 90daysofdevops, learning" @@ -7,102 +7,100 @@ cover_image: null canonical_url: null id: 1048711 --- + ## The Big Picture: Configuration Management -Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management. +Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management. -Configuration Management is the process of maintaining applications, systems and servers in a desired state. The overlap with Infrastructure as code is that IaC is going to make sure your infrastructure is at the desired state but after that especially terraform is not going to look after the desired state of your OS settings or Application and that is where Configuration Management tools come in. Making sure that system and applications perform the way it is expected as changes occur over Deane. +Configuration Management is the process of maintaining applications, systems and servers in a desired state. The overlap with Infrastructure as code is that IaC is going to make sure your infrastructure is at the desired state but after that especially terraform is not going to look after the desired state of your OS settings or Application and that is where Configuration Management tools come in. Making sure that system and applications perform the way it is expected as changes occur over Deane. -Configuration management keeps you from making small or large changes that go undocumented. +Configuration management keeps you from making small or large changes that go undocumented. 
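+To make that split of responsibilities a little more concrete, here is a purely illustrative sketch of how the two layers complement each other — Terraform builds the machines, and a configuration management tool (Ansible is used here only as a preview, since that is the tool we settle on later in this section) holds what runs on them at the desired state. The `webservers` group name and the nginx package are example values, not something defined yet at this point:
+
+```Shell
+# Infrastructure as Code: create or update the underlying servers
+terraform apply
+
+# Configuration Management: keep the OS/application layer in its desired state;
+# re-running these changes nothing if that state is already correct
+ansible webservers -m apt -a "name=nginx state=present" --become
+ansible webservers -m service -a "name=nginx state=started" --become
+```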
### Scenario: Why would you want to use Configuration Management The scenario or why you'd want to use Configuration Management, meet Dean he's our system administrator and Dean is a happy camper pretty and -working on all of the systems in his environement. - -What happens if their system fails, if there's a fire, a server goes down well? Dean knows exactly what to do he can fix that fire really easily the problems become really difficult for Dean however if multiple servers start failing particularly when you have large and expanding environments, this is why Dean really needs to have a configuration management tool. Configuration Management tools can help make Dean look like a rockstar, all he has to do is configure the right codes that allows him to push out the instructions on how to set up each of the servers quickly effectively and at scale. +working on all of the systems in his environment. +What happens if their system fails, if there's a fire, a server goes down well? Dean knows exactly what to do he can fix that fire really easily the problems become really difficult for Dean however if multiple servers start failing particularly when you have large and expanding environments, this is why Dean really needs to have a configuration management tool. Configuration Management tools can help make Dean look like a rockstar, all he has to do is configure the right codes that allows him to push out the instructions on how to set up each of the servers quickly effectively and at scale. -### Configuration Management tools +### Configuration Management tools -There are a variety of configuration management tools available, and each has specific features that make it better for some situations than others. +There are a variety of configuration management tools available, and each has specific features that make it better for some situations than others. ![](Images/Day63_config1.png) -At this stage we will take a quick fire look at the options in the above picture before making our choice on which one we will use and why. +At this stage we will take a quick fire look at the options in the above picture before making our choice on which one we will use and why. - **Chef** - - Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation. + + - Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation. - Chef is an open-source tool developed by OpsCode written in Ruby and Erlang. - - Chef is best suited for organisations that have a hetrogenous infrastructure and are looking for mature solutions. - - Recipes and Cookbooks determine the configuration code for your systems. + - Chef is best suited for organisations that have a heterogeneous infrastructure and are looking for mature solutions. + - Recipes and Cookbooks determine the configuration code for your systems. - Pro - A large collection of recipes are available - Pro - Integrates well with Git which provides a strong version control - - Con - Steep learning curve, a considerable amount of time required. - - Con - The main server doesn't have much control. - - Architecture - Server / Clients - - Ease of setup - Moderate + - Con - Steep learning curve, a considerable amount of time required. + - Con - The main server doesn't have much control. 
+ - Architecture - Server / Clients + - Ease of setup - Moderate - Language - Procedural - Specify how to do a task - **Puppet** - - Puppet is a configuration management tool that supports automatic deployment. - - Puppet is built in Ruby and uses DSL for writing manifests. - - Puppet also works well with hetrogenous infrastructure where the focus is on scalability. - - Pro - Large community for support. - - Pro - Well developed reporting mechanism. + - Puppet is a configuration management tool that supports automatic deployment. + - Puppet is built in Ruby and uses DSL for writing manifests. + - Puppet also works well with heterogeneous infrastructure where the focus is on scalability. + - Pro - Large community for support. + - Pro - Well developed reporting mechanism. - Con - Advance tasks require knowledge of Ruby language. - - Con - The main server doesn't have much control. - - Architecture - Server / Clients - - Ease of setup - Moderate - - Language - Declartive - Specify only what to do - + - Con - The main server doesn't have much control. + - Architecture - Server / Clients + - Ease of setup - Moderate + - Language - Declarative - Specify only what to do - **Ansible** - - Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration. + + - Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration. - The core of Ansible playbooks are written in YAML. (Should really do a section on YAML as we have seen this a few times) - - Ansible works well when there are environments that focus on getting things up and running fast. + - Ansible works well when there are environments that focus on getting things up and running fast. - Works on playbooks which provide instructions to your servers. - Pro - No agents needed on remote nodes. - - Pro - YAML is easy to learn. + - Pro - YAML is easy to learn. - Con - Performance speed is often less than other tools (Faster than Dean doing it himself manually) - - Con - YAML not as powerful as Ruby but less of a learning curve. + - Con - YAML not as powerful as Ruby but less of a learning curve. - Architecture - Client Only - - Ease of setup - Very Easy + - Ease of setup - Very Easy - Language - Procedural - Specify how to do a task - **SaltStack** - - SaltStack is a CLI based tool that automates configuration management and remote execution. - - SaltStack is Python based whilst the instructions are written in YAML or its own DSL. - - Perfect for environments with scalability and resilience as the priority. - - Pro - Easy to use when up and running - - Pro - Good reporting mechanism + - SaltStack is a CLI based tool that automates configuration management and remote execution. + - SaltStack is Python based whilst the instructions are written in YAML or its own DSL. + - Perfect for environments with scalability and resilience as the priority. + - Pro - Easy to use when up and running + - Pro - Good reporting mechanism - Con - Setup phase is tough - - Con - New web ui which is much less developed than the others. + - Con - New web ui which is much less developed than the others. - Architecture - Server / Clients - - Ease of setup - Moderate - - Language - Declartive - Specify only what to do + - Ease of setup - Moderate + - Language - Declarative - Specify only what to do ### Ansible vs Terraform The tool that we will be using for this section is going to be Ansible. (Easy to use and easier language basics required.) 
-I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further. - -| |Ansible |Terraform | -| ------------- | ------------------------------------------------------------- | ----------------------------------------------------------------- | -|Type |Ansible is a configuration management tool |Terraform is a an orchestration tool | -|Infrastructure |Ansible provides support for mutable infrastructure |Terraform provides support for immutable infrastructure | -|Language |Ansible follows procedural language |Terraform follows a declartive language | -|Provisioning |Ansible provides partial provisioning (VM, Network, Storage) |Terraform provides extensive provisioning (VM, Network, Storage) | -|Packaging |Ansible provides complete support for packaging & templating |Terraform provides partial support for packaging & templating | -|Lifecycle Mgmt |Ansible does not have lifecycle management |Terraform is heavily dependant on lifecycle and state mgmt | +I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further. +| | Ansible | Terraform | +| -------------- | ------------------------------------------------------------ | ---------------------------------------------------------------- | +| Type | Ansible is a configuration management tool | Terraform is a an orchestration tool | +| Infrastructure | Ansible provides support for mutable infrastructure | Terraform provides support for immutable infrastructure | +| Language | Ansible follows procedural language | Terraform follows a declarative language | +| Provisioning | Ansible provides partial provisioning (VM, Network, Storage) | Terraform provides extensive provisioning (VM, Network, Storage) | +| Packaging | Ansible provides complete support for packaging & templating | Terraform provides partial support for packaging & templating | +| Lifecycle Mgmt | Ansible does not have lifecycle management | Terraform is heavily dependant on lifecycle and state mgmt | - -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - See you on [Day 64](day64.md) diff --git a/Days/day64.md b/Days/day64.md index db3aeff6c..ac3a4029f 100644 --- a/Days/day64.md +++ b/Days/day64.md @@ -1,88 +1,87 @@ --- -title: '#90DaysOfDevOps - Ansible: Getting Started - Day 64' +title: "#90DaysOfDevOps - Ansible: Getting Started - Day 64" published: false -description: '90DaysOfDevOps - Ansible: Getting Started' +description: "90DaysOfDevOps - Ansible: Getting Started" tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048765 --- + ## Ansible: Getting Started -We covered a little what Ansible is in the [big picture session yesterday](day63.md) But we are going to get started with a little more information on top of that here. Firstly Ansible comes from RedHat. Secondly it is agentles, connects via SSH and runs commands. Thirdly it is cross platform (Linux & macOS, WSL2) and open-source (there is also a paid for enterprise option) Ansible pushes configuration vs other models. +We covered a little what Ansible is in the [big picture session yesterday](day63.md) But we are going to get started with a little more information on top of that here. 
Firstly Ansible comes from RedHat. Secondly it is agentles, connects via SSH and runs commands. Thirdly it is cross platform (Linux & macOS, WSL2) and open-source (there is also a paid for enterprise option) Ansible pushes configuration vs other models. -### Ansible Installation -As you might imagine, RedHat and the Ansible team have done a fantastic job around documenting Ansible. This generally starts with the installation steps which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) Remember we said that Ansible is an agentless automation tool, the tool is deployed to a system referred to as a "Control Node" from this control node is manages machines and other devices (possibly network) over SSH. +### Ansible Installation -It does state in the above linked documentation that the Windows OS cannot be used as the control node. +As you might imagine, RedHat and the Ansible team have done a fantastic job around documenting Ansible. This generally starts with the installation steps which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) Remember we said that Ansible is an agentless automation tool, the tool is deployed to a system referred to as a "Control Node" from this control node is manages machines and other devices (possibly network) over SSH. -For my control node and for at least this demo I am going to use the Linux VM we created way back in the [Linux section](day20.md) as my control node. +It does state in the above linked documentation that the Windows OS cannot be used as the control node. -This system was running Ubuntu and the installation steps simply needs the following commands. +For my control node and for at least this demo I am going to use the Linux VM we created way back in the [Linux section](day20.md) as my control node. -``` +This system was running Ubuntu and the installation steps simply needs the following commands. + +```Shell sudo apt update sudo apt install software-properties-common sudo add-apt-repository --yes --update ppa:ansible/ansible sudo apt install ansible ``` -Now we should have ansible installed on our control node, you can check this by running `ansible --version` and you should see something similar to this below. + +Now we should have ansible installed on our control node, you can check this by running `ansible --version` and you should see something similar to this below. ![](Images/Day64_config1.png) -Before we then start to look at controlling other nodes in our environment, we can also check functionality of ansible by running a command against our local machine `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html) and this is a quick way to perform a single task across many different systems. I mean it is not much fun with just the local host but imagine you wanted to get something or make sure all your systems were up and you had 1000+ servers and devices. +Before we then start to look at controlling other nodes in our environment, we can also check functionality of ansible by running a command against our local machine `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html) and this is a quick way to perform a single task across many different systems. 
I mean it is not much fun with just the local host but imagine you wanted to get something or make sure all your systems were up and you had 1000+ servers and devices. ![](Images/Day64_config2.png) -Or an actual real life use for a module might be something like `ansible webservers --m service -a "name=httpd state=started"` this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command. +Or an actual real life use for a module might be something like `ansible webservers --m service -a "name=httpd state=started"` this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command. -### hosts +### hosts -The way I used localhost above to run a simple ping module against the system, I cannot specify another machine on my network, for example in the environment I am using my Windows host where VirtualBox is running has a network adapter with the IP 10.0.0.1 but you can see below that I can reach by pinging but I cannot use ansible to perform that task. +The way I used localhost above to run a simple ping module against the system, I cannot specify another machine on my network, for example in the environment I am using my Windows host where VirtualBox is running has a network adapter with the IP 10.0.0.1 but you can see below that I can reach by pinging but I cannot use ansible to perform that task. ![](Images/Day64_config3.png) -In order for us to specify our hosts or the nodes that we want to automate with these tasks we need to define them. We can define them by navigating to the /etc/ansible directory on your system. +In order for us to specify our hosts or the nodes that we want to automate with these tasks we need to define them. We can define them by navigating to the /etc/ansible directory on your system. ![](Images/Day64_config4.png) -The file we want to edit is the hosts file, using a text editor we can jump in and define our hosts. The hosts file contains lots of great instructions on how to use and modify the file. We want to scroll down to the bottom and we are going to create a new group called [windows] and we are going to add our `10.0.0.1` IP address for that host. Save the file. +The file we want to edit is the hosts file, using a text editor we can jump in and define our hosts. The hosts file contains lots of great instructions on how to use and modify the file. We want to scroll down to the bottom and we are going to create a new group called [windows] and we are going to add our `10.0.0.1` IP address for that host. Save the file. ![](Images/Day64_config5.png) -However remember I said you will need to have SSH available to enable ansible to connect to your system. As you can see below when I run `ansible windows -m ping` we get an unreachable because things failed to connect via SSH. +However remember I said you will need to have SSH available to enable ansible to connect to your system. As you can see below when I run `ansible windows -m ping` we get an unreachable because things failed to connect via SSH. ![](Images/Day64_config6.png) -I have now also started adding some additional hosts to our inventory, another name for this file as this is where you are going to define all of your devices, could be network devices, switches and routers for example also would be added here and grouped. In our hosts file though I have also added in my credentials for accessing the linux group of systems. 
+I have now also started adding some additional hosts to our inventory, another name for this file as this is where you are going to define all of your devices, could be network devices, switches and routers for example also would be added here and grouped. In our hosts file though I have also added in my credentials for accessing the linux group of systems. ![](Images/Day64_config7.png) -Now if we run `ansible linux -m ping` we get a success as per below. +Now if we run `ansible linux -m ping` we get a success as per below. ![](Images/Day64_config8.png) -We then have the node requirements, these are the target systems you wish to automate the configuration on. We are not installing anything for Ansible on these (I mean we might be installing software but there is no client from Ansible we need) Ansible will make a connection over SSH and send anything over SFTP. (If you so desire though and you have SSH configured you could use SCP vs SFTP.) +We then have the node requirements, these are the target systems you wish to automate the configuration on. We are not installing anything for Ansible on these (I mean we might be installing software but there is no client from Ansible we need) Ansible will make a connection over SSH and send anything over SFTP. (If you so desire though and you have SSH configured you could use SCP vs SFTP.) -### Ansible Commands +### Ansible Commands You saw that we were able to run `ansible linux -m ping` against our Linux machine and get a response, basically with Ansible we have the ability to run many adhoc commands. But obviously you can run this against a group of systems and get that information back. [ad hoc commands](https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html) -If you find yourself repeating commands or even worse you are having to log into individual systems to run these commands then Ansible can help there. For example the simple command below would give us the output of all the operating system details for all of the systems we add to our linux group. +If you find yourself repeating commands or even worse you are having to log into individual systems to run these commands then Ansible can help there. For example the simple command below would give us the output of all the operating system details for all of the systems we add to our linux group. `ansible linux -a "cat /etc/os-release"` -Other use cases could be to reboot systems, copy files, manage packers and users. You can also couple ad hoc commands with Ansible modules. +Other use cases could be to reboot systems, copy files, manage packers and users. You can also couple ad hoc commands with Ansible modules. Ad hoc commands use a declarative model, calculating and executing the actions required to reach a specified final state. They achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state. 
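+To give a flavour of coupling ad hoc commands with modules as mentioned above, a few illustrative examples follow — the `linux` and `webservers` group names are the ones used earlier in this post, and the commands assume the connecting user can escalate privileges with `--become`:
+
+```Shell
+# Package management: apt only installs apache2 where it is missing, otherwise it reports no change
+ansible webservers -m apt -a "name=apache2 state=present" --become
+
+# Copy a file out to every host in the linux group; hosts that already match are left alone
+ansible linux -m copy -a "src=/tmp/motd dest=/etc/motd" --become
+
+# Manage users and reboot systems across a whole group
+ansible linux -m user -a "name=deploy state=present" --become
+ansible linux -m reboot --become
+```
+
+Because modules such as apt, copy and user check the current state first, running the same command twice is safe — the second run simply reports `ok` rather than `changed`.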
-## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - See you on [Day 65](day65.md) - - - diff --git a/Days/day65.md b/Days/day65.md index 2478bf429..63f0d20d6 100644 --- a/Days/day65.md +++ b/Days/day65.md @@ -1,13 +1,14 @@ --- -title: '#90DaysOfDevOps - Ansible Playbooks - Day 65' +title: "#90DaysOfDevOps - Ansible Playbooks - Day 65" published: false description: 90DaysOfDevOps - Ansible Playbooks -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049054 --- -### Ansible Playbooks + +### Ansible Playbooks In this section we will take a look at the main reason that I can see at least for Ansible, I mean it is great to take a single command and hit many different servers to perform simple commands such as rebooting a long list of servers and saving the hassle of having to connect to each one individually. @@ -25,7 +26,7 @@ These playbooks are written in YAML (YAML ain’t markup language) you will find Let’s take a look at a simple playbook called playbook.yml. -``` +```Yaml - name: Simple Play hosts: localhost connection: local @@ -37,30 +38,30 @@ Let’s take a look at a simple playbook called playbook.yml. msg: "{{ ansible_os_family }}" ``` -You will find the above file [simple_play](days/../Configmgmt/simple_play.yml). If we then use the `ansible-playbook simple_play.yml` command we will walk through the following steps. +You will find the above file [simple_play](days/../Configmgmt/simple_play.yml). If we then use the `ansible-playbook simple_play.yml` command we will walk through the following steps. ![](Images/Day65_config1.png) You can see the first task of "gathering steps" happened, but we didn't trigger or ask for this? This module is automatically called by playbooks to gather useful variables about remote hosts. [ansible.builtin.setup](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/setup_module.html) -Our second task was to set a ping, this is not an ICMP ping but a python script to report back `pong` on successful connectivity to remote or localhost. [ansible.builtin.ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html) +Our second task was to set a ping, this is not an ICMP ping but a python script to report back `pong` on successful connectivity to remote or localhost. [ansible.builtin.ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html) -Then our third or really our second defined task as the first one will run unless you disable was the printing of a message telling us our OS. In this task we are using conditionals, we could run this playbook against all different types of operating systems and this would return the OS name. We are simply messaging this output for ease but we could add a task to say something like: +Then our third or really our second defined task as the first one will run unless you disable was the printing of a message telling us our OS. In this task we are using conditionals, we could run this playbook against all different types of operating systems and this would return the OS name. 
We are simply messaging this output for ease but we could add a task to say something like: -``` -tasks: +```Yaml +tasks: - name: "shut down Debian flavoured systems" - command: /sbin/shutdown -t now + command: /sbin/shutdown -t now when: ansible_os_family == "Debian" -``` +``` ### Vagrant to setup our environment -We are going to use Vagrant to set up our node environment, I am going to keep this at a reasonable 4 nodes but you can hopefully see that this could easily be 300 or 3000 and this is the power of Ansible and other configuration management tools to be able to configure your servers. +We are going to use Vagrant to set up our node environment, I am going to keep this at a reasonable 4 nodes but you can hopefully see that this could easily be 300 or 3000 and this is the power of Ansible and other configuration management tools to be able to configure your servers. You can find this file located here ([Vagrantfile](/Days/Configmgmt/Vagrantfile)) -``` +```Vagrant Vagrant.configure("2") do |config| servers=[ { @@ -97,7 +98,7 @@ config.vm.base_address = 600 config.vm.define machine[:hostname] do |node| node.vm.box = machine[:box] node.vm.hostname = machine[:hostname] - + node.vm.network :public_network, bridge: "Intel(R) Ethernet Connection (7) I219-V", ip: machine[:ip] node.vm.network "forwarded_port", guest: 22, host: machine[:ssh_port], id: "ssh" @@ -111,49 +112,51 @@ config.vm.base_address = 600 end ``` -Use the `vagrant up` command to spin these machines up in VirtualBox, You might be able to add more memory and you might also want to define a different private_network address for each machine but this works in my environment. Remember our control box is the Ubuntu desktop we deployed during the Linux section. +Use the `vagrant up` command to spin these machines up in VirtualBox, You might be able to add more memory and you might also want to define a different private_network address for each machine but this works in my environment. Remember our control box is the Ubuntu desktop we deployed during the Linux section. -If you are resource contrained then you can also run `vagrant up web01 web02` to only bring up the webservers that we are using here. +If you are resource contrained then you can also run `vagrant up web01 web02` to only bring up the webservers that we are using here. ### Ansible host configuration Now that we have our environment ready, we can check ansible and for this we will use our Ubuntu desktop (You could use this but you can equally use any Linux based machine on your network accessible to the network below) as our control, let’s also add the new nodes to our group in the ansible hosts file, you can think of this file as an inventory, an alternative to this could be another inventory file that is called on as part of your ansible command with `-i filename` this could be useful vs using the host file as you can have different files for different environments, maybe production, test and staging. Because we are using the default hosts file we do not need to specify as this would be the default used. -I have added the following to the default hosts file. +I have added the following to the default hosts file. 
-``` +```Text [control] ansible-control -[proxy] +[proxy] loadbalancer -[webservers] +[webservers] web01 web02 -[database] +[database] db01 ``` + ![](Images/Day65_config2.png) Before moving on we want to make sure we can run a command against our nodes, let’s run `ansible nodes -m command -a hostname` this simple command will test that we have connectivity and report back our host names. Also note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity. We might also need to do SSH configuration for each node from the Ubuntu box. -``` +```Text 192.168.169.140 ansible-control 192.168.169.130 db01 192.168.169.131 web01 192.168.169.132 web02 192.168.169.133 loadbalancer ``` + ![](Images/Day65_config3.png) -At this stage we want to run through setting up SSH keys between your control and your server nodes. This is what we are going to do next, another way here could be to add variables into your hosts file to give username and password. I would advise against this as this is never going to be a best practice. +At this stage we want to run through setting up SSH keys between your control and your server nodes. This is what we are going to do next, another way here could be to add variables into your hosts file to give username and password. I would advise against this as this is never going to be a best practice. -To set up SSH and share amongst your nodes, follow the steps below, you will be prompted for passwords (`vagrant`) and you will likely need to hit `y` a few times to accept. +To set up SSH and share amongst your nodes, follow the steps below, you will be prompted for passwords (`vagrant`) and you will likely need to hit `y` a few times to accept. `ssh-keygen` @@ -165,28 +168,27 @@ To set up SSH and share amongst your nodes, follow the steps below, you will be Now if you have all of your VMs switched on then you can run the `ssh-copy-id web01 && ssh-copy-id web02 && ssh-copy-id loadbalancer && ssh-copy-id db01` this will prompt you for your password in our case our password is `vagrant` -I am not running all my VMs and only running the webservers so I issued `ssh-copy-id web01 && ssh-copy-id web02` +I am not running all my VMs and only running the webservers so I issued `ssh-copy-id web01 && ssh-copy-id web02` ![](Images/Day65_config7.png) -Before running any playbooks I like to make sure that I have simple connectivity with my groups so I have ran `ansible webservers -m ping` to test connectivity. +Before running any playbooks I like to make sure that I have simple connectivity with my groups so I have ran `ansible webservers -m ping` to test connectivity. ![](Images/Day65_config4.png) - ### Our First "real" Ansible Playbook -Our first Ansible playbook is going to configure our webservers, we have grouped these in our hosts file under the grouping [webservers]. -Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below is showing you the folder and file layout I have created within my ansible control to run this playbook, we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository. +Our first Ansible playbook is going to configure our webservers, we have grouped these in our hosts file under the grouping [webservers]. -Then we SSH into web01 to check if we have apache installed? 
+Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below is showing you the folder and file layout I have created within my ansible control to run this playbook, we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository. -![](Images/Day65_config8.png) +Then we SSH into web01 to check if we have apache installed? -You can see from the above that we have not got apache installed on our web01 so we can fix this by running the below playbook. +![](Images/Day65_config8.png) +You can see from the above that we have not got apache installed on our web01 so we can fix this by running the below playbook. -``` +```Yaml - hosts: webservers become: yes vars: @@ -224,30 +226,31 @@ You can see from the above that we have not got apache installed on our web01 so name: apache2 state: restarted ``` -Breaking down the above playbook: + +Breaking down the above playbook: - `- hosts: webservers` this is saying that our group to run this playbook on is a group called webservers -- `become: yes` means that our user running the playbook will become root on our remote systems. You will be prompted for the root password. -- We then have `vars` and this defines some environment variables we want throughout our webservers. +- `become: yes` means that our user running the playbook will become root on our remote systems. You will be prompted for the root password. +- We then have `vars` and this defines some environment variables we want throughout our webservers. -Following this we start our tasks, +Following this we start our tasks, - Task 1 is to ensure that apache is running the latest version -- Task 2 is writing the ports.conf file from our source found in the templates folder. -- Task 3 is creating a basic index.html file -- Task 4 is making sure apache is running +- Task 2 is writing the ports.conf file from our source found in the templates folder. +- Task 3 is creating a basic index.html file +- Task 4 is making sure apache is running Finally we have a handlers section, [Handlers: Running operations on change](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html) "Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name." -At this stage you might be thinking but we have deployed 5 VMs (including our Ubuntu Desktop machine which is acting as our Ansible Control) The other systems will come into play during the rest of the section. +At this stage you might be thinking but we have deployed 5 VMs (including our Ubuntu Desktop machine which is acting as our Ansible Control) The other systems will come into play during the rest of the section. ### Run our Playbook -We are now ready to run our playbook against our nodes. To run our playbook we can use the `ansible-playbook playbook1.yml` We have defined our hosts that our playbook will run against within the playbook and this will walkthrough our tasks that we have defined. +We are now ready to run our playbook against our nodes. 
To run our playbook we can use the `ansible-playbook playbook1.yml` We have defined our hosts that our playbook will run against within the playbook and this will walkthrough our tasks that we have defined. -When the command is complete we get an output showing our plays and tasks, this may take some time you can see from the below image that this took a while to go and install our desired state. +When the command is complete we get an output showing our plays and tasks, this may take some time you can see from the below image that this took a while to go and install our desired state. ![](Images/Day65_config9.png) @@ -255,7 +258,7 @@ We can then double check this by jumping into a node and checking we have the in ![](Images/Day65_config10.png) -Just to round this out as we have deployed two standalone webservers with the above we can now navigate to the respective IPs that we defined and get our new website. +Just to round this out as we have deployed two standalone webservers with the above we can now navigate to the respective IPs that we defined and get our new website. ![](Images/Day65_config11.png) @@ -263,13 +266,13 @@ We are going to build on this playbook as we move through the rest of this secti Another thing to add here is that we are only really working with Ubuntu VMs but Ansible is agnostic to the target systems. The alternatives that we have previously mentioned to manage your systems could be server by server (not scalable when you get over a large amount of servers, plus a pain even with 3 nodes) we can also use shell scripting which again we covered in the Linux section but these nodes are potentially different so yes it can be done but then someone needs to maintain and manage those scripts. Ansible is free and hits the easy button vs having to have a specialised script. -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. See you on [Day 66](day66.md) diff --git a/Days/day66.md b/Days/day66.md index 3a23c3b19..a032177f1 100644 --- a/Days/day66.md +++ b/Days/day66.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66' +title: "#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66" published: false description: 90DaysOfDevOps - Ansible Playbooks Continued... tags: "devops, 90daysofdevops, learning" @@ -7,27 +7,28 @@ cover_image: null canonical_url: null id: 1048712 --- -## Ansible Playbooks Continued... -In our last section we started with creating our small lab using a Vagrantfile to deploy 4 machines and we used our Linux machine we created in that section as our ansible control system. +## Ansible Playbooks (Continued) -We also ran through a few scenarios of playbooks and at the end we had a playbook that made our web01 and web02 individual webservers. 
+In our last section we started with creating our small lab using a Vagrantfile to deploy 4 machines and we used our Linux machine we created in that section as our ansible control system. + +We also ran through a few scenarios of playbooks and at the end we had a playbook that made our web01 and web02 individual webservers. ![](Images/Day66_config1.png) ### Keeping things tidy -Before we get into further automation and deployment we should cover the ability to keep our playbook lean and tidy and how we can separate our taks and handlers into subfolders. +Before we get into further automation and deployment we should cover the ability to keep our playbook lean and tidy and how we can separate our taks and handlers into subfolders. we are basically going to copy our tasks into their own file within a folder. -``` +```Yaml - name: ensure apache is at the latest version apt: name=apache2 state=latest - name: write the apache2 ports.conf config file - template: - src=templates/ports.conf.j2 + template: + src=templates/ports.conf.j2 dest=/etc/apache2/ports.conf notify: restart apache @@ -44,9 +45,9 @@ we are basically going to copy our tasks into their own file within a folder. state: started ``` -and the same for the handlers. +and the same for the handlers. -``` +```Yaml - name: restart apache service: name: apache2 @@ -59,7 +60,7 @@ You can test this on your control machine. If you have copied the files from the ![](Images/Day66_config2.png) -Let's find out what simple change I made. Using `curl web01:8000` +Let's find out what simple change I made. Using `curl web01:8000` ![](Images/Day66_config3.png) @@ -67,25 +68,25 @@ We have just tidied up our playbook and started to separate areas that could mak ### Roles and Ansible Galaxy -At the moment we have deployed 4 VMs and we have configured 2 of these VMs as our webservers but we have some more specific functions namely, a database server and a loadbalancer or proxy. In order for us to do this and tidy up our repository we can use roles within Ansible. +At the moment we have deployed 4 VMs and we have configured 2 of these VMs as our webservers but we have some more specific functions namely, a database server and a loadbalancer or proxy. In order for us to do this and tidy up our repository we can use roles within Ansible. -To do this we will use the `ansible-galaxy` command which is there to manage ansible roles in shared repositories. +To do this we will use the `ansible-galaxy` command which is there to manage ansible roles in shared repositories. ![](Images/Day66_config4.png) -We are going to use `ansible-galaxy` to create a role for apache2 which is where we are going to put our specifics for our webservers. +We are going to use `ansible-galaxy` to create a role for apache2 which is where we are going to put our specifics for our webservers. ![](Images/Day66_config5.png) -The above command `ansible-galaxy init roles/apache2` will create the folder structure that we have shown above. Our next step is we need to move our existing tasks and templates to the relevant folders in the new structure. +The above command `ansible-galaxy init roles/apache2` will create the folder structure that we have shown above. Our next step is we need to move our existing tasks and templates to the relevant folders in the new structure. ![](Images/Day66_config6.png) -Copy and paste is easy to move those files but we also need to make a change to the tasks/main.yml so that we point this to the apache2_install.yml. 
+Copy and paste is easy to move those files but we also need to make a change to the tasks/main.yml so that we point this to the apache2_install.yml. -We also need to change our playbook now to refer to our new role. In the playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways as we changed these between the two versions. We need to change our playbook to use this role as per below: +We also need to change our playbook now to refer to our new role. In the playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways as we changed these between the two versions. We need to change our playbook to use this role as per below: -``` +```Yaml - hosts: webservers become: yes vars: @@ -98,32 +99,32 @@ We also need to change our playbook now to refer to our new role. In the playboo ![](Images/Day66_config7.png) -We can now run our playbook again this time with the new playbook name `ansible-playbook playbook3.yml` you will notice the depreciation, we can fix that next. +We can now run our playbook again this time with the new playbook name `ansible-playbook playbook3.yml` you will notice the depreciation, we can fix that next. ![](Images/Day66_config8.png) -Ok, the depreciation although our playbook ran we should fix our ways now, in order to do that I have changed the include option in the tasks/main.yml to now be import_tasks as per below. +Ok, the depreciation although our playbook ran we should fix our ways now, in order to do that I have changed the include option in the tasks/main.yml to now be import_tasks as per below. ![](Images/Day66_config9.png) You can find these files in the [ansible-scenario3](Days/Configmgmt/ansible-scenario3) -We are also going to create a few more roles whilst using `ansible-galaxy` we are going to create: +We are also going to create a few more roles whilst using `ansible-galaxy` we are going to create: - common = for all of our servers (`ansible-galaxy init roles/common`) - nginx = for our loadbalancer (`ansible-galaxy init roles/nginx`) ![](Images/Day66_config10.png) -I am going to leave this one here and in the next session we will start working on those other nodes we have deployed but have not done anything with yet. +I am going to leave this one here and in the next session we will start working on those other nodes we have deployed but have not done anything with yet. -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. 
See you on [Day 67](day67.md) diff --git a/Days/day67.md b/Days/day67.md index 545a85363..142f101d0 100644 --- a/Days/day67.md +++ b/Days/day67.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67' +title: "#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67" published: false description: 90DaysOfDevOps - Using Roles & Deploying a Loadbalancer tags: "devops, 90daysofdevops, learning" @@ -7,20 +7,22 @@ cover_image: null canonical_url: null id: 1048713 --- + ## Using Roles & Deploying a Loadbalancer -In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders. +In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders. -However we have only used the apache2 role and have a working playbook3.yaml to handle our webservers. +However we have only used the apache2 role and have a working playbook3.yaml to handle our webservers. -At this point if you have only used `vagrant up web01 web02` now is the time to run `vagrant up loadbalancer` this will bring up another Ubuntu system that we will use as our Load Balancer/Proxy. +At this point if you have only used `vagrant up web01 web02` now is the time to run `vagrant up loadbalancer` this will bring up another Ubuntu system that we will use as our Load Balancer/Proxy. -We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready. +We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready. ### Common role -I created at the end of yesterdays session the role of `common`, common will be used across all of our servers where as the other roles are specific to use cases, now the applications I am going to install as common as spurious and I cannot see many reasons for this to be the case but it shows the objective. In our common role folder structure, navigate to tasks folder and you will have a main.yml. In this yaml we need to point this to our install_tools.yml file and we do this by adding a line `- import_tasks: install_tools.yml` this used to be `include` but this is going to be depreciated soon enough so we are using import_tasks. -``` +I created at the end of yesterdays session the role of `common`, common will be used across all of our servers where as the other roles are specific to use cases, now the applications I am going to install as common as spurious and I cannot see many reasons for this to be the case but it shows the objective. In our common role folder structure, navigate to tasks folder and you will have a main.yml. In this yaml we need to point this to our install_tools.yml file and we do this by adding a line `- import_tasks: install_tools.yml` this used to be `include` but this is going to be depreciated soon enough so we are using import_tasks. 
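That means the tasks/main.yml for the common role ends up being nothing more than a pointer; a minimal sketch (using the install_tools.yml file name from the step above) would be:

```Yaml
# roles/common/tasks/main.yml
- import_tasks: install_tools.yml
```

The install_tools.yml it points to then carries the actual work, which is the package installation task shown next.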
+ +```Yaml - name: "Install Common packages" apt: name={{ item }} state=latest with_items: @@ -29,9 +31,9 @@ I created at the end of yesterdays session the role of `common`, common will be - figlet ``` -In our playbook we then add in the common role for each host block. +In our playbook we then add in the common role for each host block. -``` +```Yaml - hosts: webservers become: yes vars: @@ -45,13 +47,13 @@ In our playbook we then add in the common role for each host block. ### nginx -The next phase is for us to install and configure nginx on our loadbalancer vm. Like the common folder structure, we have the nginx based on the last session. +The next phase is for us to install and configure nginx on our loadbalancer vm. Like the common folder structure, we have the nginx based on the last session. -First of all we are going to add a host block to our playbook. This block will include our common role and then our new nginx role. +First of all we are going to add a host block to our playbook. This block will include our common role and then our new nginx role. The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scenario4/playbook4.yml) -``` +```Yaml - hosts: webservers become: yes vars: @@ -62,32 +64,32 @@ The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scena - common - apache2 -- hosts: proxy +- hosts: proxy become: yes - roles: + roles: - common - nginx ``` -In order for this to mean anything, we have to define our tasks that we wish to run, in the same way we will modify the main.yml in tasks to point to two files this time, one for installation and one for configuration. +In order for this to mean anything, we have to define our tasks that we wish to run, in the same way we will modify the main.yml in tasks to point to two files this time, one for installation and one for configuration. -There are some other files that I have modified based on the outcome we desire, take a look in the folder [ansible-scenario4](Days/Configmgmt/ansible-scenario4) for all the files changed. You should check the folders tasks, handlers and templates in the nginx folder and you will find those additional changes and files. +There are some other files that I have modified based on the outcome we desire, take a look in the folder [ansible-scenario4](Days/Configmgmt/ansible-scenario4) for all the files changed. You should check the folders tasks, handlers and templates in the nginx folder and you will find those additional changes and files. -### Run the updated playbook +### Run the updated playbook -Since yesterday we have added the common role which will now install some packages on our system and then we have also added our nginx role which includes installation and configuration. +Since yesterday we have added the common role which will now install some packages on our system and then we have also added our nginx role which includes installation and configuration. Let's run our playbook4.yml using the `ansible-playbook playbook4.yml` ![](Images/Day67_config1.png) -Now that we have our webservers and loadbalancer configured we should now be able to go to http://192.168.169.134/ which is the IP address of our loadbalancer. +Now that we have our webservers and loadbalancer configured we should now be able to go to http://192.168.169.134/ which is the IP address of our loadbalancer. ![](Images/Day67_config2.png) -If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. 
The file can be found in `templates\mysite.j2` and looks similar to the below: You would need to update with your webserver IP addresses. +If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below: You would need to update with your webserver IP addresses. -``` +```J2 upstream webservers { server 192.168.169.131:8000; server 192.168.169.132:8000; @@ -96,24 +98,25 @@ If you are following along and you do not have this state then it could be down server { listen 80; - location / { + location / { proxy_pass http://webservers; } } ``` -I am pretty confident that what we have installed is all good but let's use an adhoc command using ansible to check these common tools installation. + +I am pretty confident that what we have installed is all good but let's use an adhoc command using ansible to check these common tools installation. `ansible loadbalancer -m command -a neofetch` ![](Images/Day67_config3.png) -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. See you on [Day 68](day68.md) diff --git a/Days/day68.md b/Days/day68.md index c25eb5ced..c95694dcd 100644 --- a/Days/day68.md +++ b/Days/day68.md @@ -1,23 +1,24 @@ --- -title: '#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68' +title: "#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68" published: false -description: '90DaysOfDevOps - Tags, Variables, Inventory & Database Server config' +description: "90DaysOfDevOps - Tags, Variables, Inventory & Database Server config" tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048780 --- + ## Tags, Variables, Inventory & Database Server config -### Tags +### Tags -As we left our playbook in the session yesterday we would need to run every tasks and play within that playbook. Which means we would have to run the webservers and loadbalancer plays and tasks to completion. +As we left our playbook in the session yesterday we would need to run every tasks and play within that playbook. Which means we would have to run the webservers and loadbalancer plays and tasks to completion. -However tags can enable us to seperate these out if we want. This could be an effcient move if we have extra large and long playbooks in our environments. +However tags can enable us to separate these out if we want. This could be an efficient move if we have extra large and long playbooks in our environments. 
In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml) -``` +```Yaml - hosts: webservers become: yes vars: @@ -29,39 +30,40 @@ In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/a - apache2 tags: web -- hosts: proxy +- hosts: proxy become: yes - roles: + roles: - common - nginx tags: proxy ``` -We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` and the list tags is going to outline the tags we have defined in our playbook. + +We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` and the list tags is going to outline the tags we have defined in our playbook. ![](Images/Day68_config1.png) -Now if we wanted to target just the proxy we could do this by running `ansible-playbook playbook5.yml --tags proxy` and this will as you can see below only run the playbook against the proxy. +Now if we wanted to target just the proxy we could do this by running `ansible-playbook playbook5.yml --tags proxy` and this will as you can see below only run the playbook against the proxy. ![](Images/Day68_config2.png) -tags can be added at the task level as well so we can get really granular on where and what you want to happen. It could be application focused tags, we could go through tasks for example and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is +tags can be added at the task level as well so we can get really granular on where and what you want to happen. It could be application focused tags, we could go through tasks for example and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is -`tag: always` this will ensure no matter what --tags you are using in your command if something is tagged with the always value then it will always be ran when you run the ansible-playbook command. +`tag: always` this will ensure no matter what --tags you are using in your command if something is tagged with the always value then it will always be ran when you run the ansible-playbook command. -With tags we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously in our instance that would mean the same as running the the playbook but if we had multiple other plays then this would make sense. +With tags we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously in our instance that would mean the same as running the the playbook but if we had multiple other plays then this would make sense. -You can also define more than one tag. +You can also define more than one tag. -### Variables +### Variables -There are two main types of variables within Ansible. +There are two main types of variables within Ansible. -- User created -- Ansible Facts +- User created +- Ansible Facts ### Ansible Facts -Each time we have ran our playbooks, we have had a task that we have not defined called "Gathering facts" we can use these variables or facts to make things happen with our automation tasks. +Each time we have ran our playbooks, we have had a task that we have not defined called "Gathering facts" we can use these variables or facts to make things happen with our automation tasks. 
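Something like the below would work as a quick illustration (just a sketch; the fact names are standard ones that fact gathering collects):

```Yaml
- name: Report the distribution this host is running
  debug:
    msg: "{{ ansible_facts['distribution'] }} {{ ansible_facts['distribution_version'] }}"
  when: ansible_facts['os_family'] == "Debian"
```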
![](Images/Day68_config3.png) @@ -69,9 +71,9 @@ If we were to run the following `ansible proxy -m setup` command we should see a ![](Images/Day68_config4.png) -If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, bios version. A lot of useful information if we want to leverage this and use this in our playbooks. +If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, bios version. A lot of useful information if we want to leverage this and use this in our playbooks. -An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 and this is going to cycle through the group [webservers] this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration. +An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 and this is going to cycle through the group [webservers] this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration. ``` #Dynamic Config for server {{ ansible_facts['nodename'] }} @@ -84,18 +86,19 @@ An idea would be to potentially use one of these variables within our nginx temp server { listen 80; - location / { + location / { proxy_pass http://webservers; } } ``` -The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured. + +The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured. ### User created -User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there. +User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there. -``` +```Yaml - hosts: webservers become: yes vars: @@ -107,25 +110,25 @@ User created variables are what we have created ourselves. If you take a look in - apache2 tags: web -- hosts: proxy +- hosts: proxy become: yes - roles: + roles: - common - nginx tags: proxy ``` -We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well. +We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. 
In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well. -``` +```Yaml http_port: 8000 https_port: 4443 html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!" ``` -Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks. +Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks. -``` +```Yaml - hosts: webservers become: yes roles: @@ -133,20 +136,20 @@ Because we are associating this as a global variable we could also add in our NT - apache2 tags: web -- hosts: proxy +- hosts: proxy become: yes - roles: + roles: - common - nginx tags: proxy ``` -One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below: +One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below: -``` +```J2 #Dynamic Config for server {{ ansible_facts['nodename'] }} upstream webservers { - {% for host in groups['webservers'] %} + {% for host in groups['webservers'] %} server {{ hostvars[host]['ansible_facts']['nodename'] }}:{{ http_port }}; {% endfor %} } @@ -154,44 +157,45 @@ One of those variables was the http_port, we can use this again in our for loop server { listen 80; - location / { + location / { proxy_pass http://webservers; } } ``` -We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on. +We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on. -``` +```J2

{{ html_welcome_msg }}! I'm webserver {{ ansible_facts['nodename'] }}

``` -The results of running the `ansible-playbook playbook6.yml` command with our variable changes means that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group. + +The results of running the `ansible-playbook playbook6.yml` command with our variable changes means that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group. ![](Images/Day68_config5.png) -We could also add a folder called host_vars and create a web01.yml and have a specific message or change what that looks like on a per host basis if we wish. +We could also add a folder called host_vars and create a web01.yml and have a specific message or change what that looks like on a per host basis if we wish. ### Inventory Files -So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example production and staging. I am not going to create more environments. But we are able to create our own host files. +So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example production and staging. I am not going to create more environments. But we are able to create our own host files. -We can create multiple files for our different inventory of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml` you can also define variables within your hosts file and then print that out or leverage that variable somewhere else in your playbooks for example in the example and training course I am following along to below they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message. +We can create multiple files for our different inventory of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml` you can also define variables within your hosts file and then print that out or leverage that variable somewhere else in your playbooks for example in the example and training course I am following along to below they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message. ### Deploying our Database server -We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access. +We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access. We are going to be working from the [ansible-scenario7](Configmgmt/ansible-scenario7) folder -Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql" +Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql" -In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. 
We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish. +In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file. We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish. -``` +```Yaml - hosts: webservers become: yes roles: @@ -205,7 +209,7 @@ In our playbook we are going to add a new play block for the database configurat roles: - common - nginx - tags: + tags: proxy - hosts: database @@ -216,11 +220,11 @@ In our playbook we are going to add a new play block for the database configurat tags: database ``` -Within our roles folder structure you will now have the tree automatically created, we need to populate the following: +Within our roles folder structure you will now have the tree automatically created, we need to populate the following: -Handlers - main.yml +Handlers - main.yml -``` +```Yaml # handlers file for roles/mysql - name: restart mysql service: @@ -230,9 +234,9 @@ Handlers - main.yml Tasks - install_mysql.yml, main.yml & setup_mysql.yml -install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running. +install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running. -``` +```Yaml - name: "Install Common packages" apt: name={{ item }} state=latest with_items: @@ -254,17 +258,17 @@ install_mysql.yml - this task is going to be there to install mysql and ensure t state: started ``` -main.yml is a pointer file that will suggest that we import_tasks from these files. +main.yml is a pointer file that will suggest that we import_tasks from these files. -``` +```Yaml # tasks file for roles/mysql - import_tasks: install_mysql.yml - import_tasks: setup_mysql.yml ``` -setup_mysql.yml - This task will create our database and database user. +setup_mysql.yml - This task will create our database and database user. -``` +```Yaml - name: Create my.cnf configuration file template: src=templates/my.cnf.j2 dest=/etc/mysql/conf.d/mysql.cnf notify: restart mysql @@ -272,8 +276,8 @@ setup_mysql.yml - This task will create our database and database user. - name: Create database user with name 'devops' and password 'DevOps90' with all database privileges community.mysql.mysql_user: login_unix_socket: /var/run/mysqld/mysqld.sock - login_user: "{{ mysql_user_name }}" - login_password: "{{ mysql_user_password }}" + login_user: "{{ mysql_user_name }}" + login_password: "{{ mysql_user_password }}" name: "{{db_user}}" password: "{{db_pass}}" priv: '*.*:ALL' @@ -282,15 +286,15 @@ setup_mysql.yml - This task will create our database and database user. 
- name: Create a new database with name '90daysofdevops' mysql_db: - login_user: "{{ mysql_user_name }}" - login_password: "{{ mysql_user_password }}" + login_user: "{{ mysql_user_name }}" + login_password: "{{ mysql_user_password }}" name: "{{ db_name }}" state: present ``` -You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file. +You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file. -``` +```Yaml http_port: 8000 https_port: 4443 html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!" @@ -301,48 +305,49 @@ db_user: devops db_pass: DevOps90 db_name: 90DaysOfDevOps ``` -We also have the my.cnf.j2 file in the templates folder, which looks like below: -``` -[mysql] +We also have the my.cnf.j2 file in the templates folder, which looks like below: + +```J2 +[mysql] bind-address = 0.0.0.0 -``` +``` -### Running the playbook +### Running the playbook -Now we have our VM up and running and we have our configuration files in place, we are now ready to run our playbook which will include everything we have done before if we run the following `ansible-playbook playbook7.yml` or we could choose to just deploy to our database group with the `ansible-playbook playbook7.yml --tags database` command, which will just run our new configuration files. +Now we have our VM up and running and we have our configuration files in place, we are now ready to run our playbook which will include everything we have done before if we run the following `ansible-playbook playbook7.yml` or we could choose to just deploy to our database group with the `ansible-playbook playbook7.yml --tags database` command, which will just run our new configuration files. -I ran only against the database tag but I stumbled across an error. This error tells me that we do not have pip3 (Python) installed. We can fix this by adding this to our common tasks and install +I ran only against the database tag but I stumbled across an error. This error tells me that we do not have pip3 (Python) installed. We can fix this by adding this to our common tasks and install ![](Images/Day68_config6.png) -We fixed the above and ran the playbook again and we have a successful change. +We fixed the above and ran the playbook again and we have a successful change. ![](Images/Day68_config7.png) -We should probably make sure that everything is how we want it to be on our newly configured db01 server. We can do this from our control node using the `ssh db01` command. +We should probably make sure that everything is how we want it to be on our newly configured db01 server. We can do this from our control node using the `ssh db01` command. -To connect to mySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt. +To connect to mySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt. When we have connected let's first make sure we have our user created called devops. `select user, host from mysql.user;` ![](Images/Day68_config8.png) -Now we can issue the `SHOW DATABASES;` command to see our new database that has also been created. +Now we can issue the `SHOW DATABASES;` command to see our new database that has also been created. 
![](Images/Day68_config9.png) -I actually used root to connect but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p` but the password here is DevOps90. +I actually used root to connect but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p` but the password here is DevOps90. -One thing I have found that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` in order to successfully connect to my db01 mysql instance and now everytime I run this it reports a change when creating the user, any suggestions would be greatly appreciated. +One thing I have found that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` in order to successfully connect to my db01 mysql instance and now everytime I run this it reports a change when creating the user, any suggestions would be greatly appreciated. -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. See you on [Day 69](day69.md) diff --git a/Days/day69.md b/Days/day69.md index 700f4b198..19d1acbf7 100644 --- a/Days/day69.md +++ b/Days/day69.md @@ -1,17 +1,18 @@ --- -title: '#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69' +title: "#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69" published: false -description: '90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault' +description: "90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault" tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048714 --- + ## All other things Ansible - Automation Controller (Tower), AWX, Vault -Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible. +Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible. -There are a lot of products that make up the Ansible Automation platform. +There are a lot of products that make up the Ansible Automation platform. Red Hat Ansible Automation Platform is a foundation for building and operating automation across an organization. The platform includes all the tools needed to implement enterprise-wide automation. @@ -19,40 +20,40 @@ Red Hat Ansible Automation Platform is a foundation for building and operating a I will try and cover some of these in this post. But for more information then the official Red Hat Ansible site is going to have lots more information. 
[Ansible.com](https://www.ansible.com/?hsLang=en-us) -### Ansible Automation Controller | AWX +### Ansible Automation Controller | AWX -I have bundled these two together because the Automation Controller and AWX are very similar in what they offer. +I have bundled these two together because the Automation Controller and AWX are very similar in what they offer. -The AWX project or AWX for short is an open-source community project, sponsored by Red Hat that enables you to better control your Ansible projects within your environments. AWX is the upstream project from which the automation controller component is derived. +The AWX project or AWX for short is an open-source community project, sponsored by Red Hat that enables you to better control your Ansible projects within your environments. AWX is the upstream project from which the automation controller component is derived. -If you are looking for an enterprise solution then you will be looking for the Automation Controller or you might have previously heard this as Ansible Tower. The Ansible Automation Controller is the control plane for the Ansible Automation Platform. +If you are looking for an enterprise solution then you will be looking for the Automation Controller or you might have previously heard this as Ansible Tower. The Ansible Automation Controller is the control plane for the Ansible Automation Platform. -Both AWX and the Automation Controller bring the following features above everything else we have covered in this section thus far. +Both AWX and the Automation Controller bring the following features above everything else we have covered in this section thus far. -- User Interface -- Role Based Access Control -- Workflows -- CI/CD integration +- User Interface +- Role Based Access Control +- Workflows +- CI/CD integration -The Automation Controller is the enterprise offering where you pay for your support. +The Automation Controller is the enterprise offering where you pay for your support. -We are going to take a look at deploying AWX within our minikube Kubernetes environment. +We are going to take a look at deploying AWX within our minikube Kubernetes environment. -### Deploying Ansible AWX +### Deploying Ansible AWX -AWX does not need to be deployed to a Kubernetes cluster, the [github](https://github.com/ansible/awx) for AWX from ansible will give you that detail. However starting in version 18.0, the AWX Operator is the preferred way to install AWX. +AWX does not need to be deployed to a Kubernetes cluster, the [github](https://github.com/ansible/awx) for AWX from ansible will give you that detail. However starting in version 18.0, the AWX Operator is the preferred way to install AWX. -First of all we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command. +First of all we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command. ![](Images/Day69_config2.png) -The official [Ansible AWX Operator](https://github.com/ansible/awx-operator) can be found here. As stated in the install instructions you should clone this repository and then run through the deployment. +The official [Ansible AWX Operator](https://github.com/ansible/awx-operator) can be found here. 
As stated in the install instructions you should clone this repository and then run through the deployment. -I forked the repo above and then ran `git clone https://github.com/MichaelCade/awx-operator.git` my advice is you do the same and do not use my repository as I might change things or it might not be there. +I forked the repo above and then ran `git clone https://github.com/MichaelCade/awx-operator.git` my advice is you do the same and do not use my repository as I might change things or it might not be there. -In the cloned repository you will find a awx-demo.yml file we need to change `NodePort` for `ClusterIP` as per below: +In the cloned repository you will find a awx-demo.yml file we need to change `NodePort` for `ClusterIP` as per below: -``` +```Yaml --- apiVersion: awx.ansible.com/v1beta1 kind: AWX @@ -62,7 +63,7 @@ spec: service_type: ClusterIP ``` -The next step is to define our namespace where we will be deploying the awx operator, using the `export NAMESPACE=awx` command then followed by `make deploy` we will start the deployment. +The next step is to define our namespace where we will be deploying the awx operator, using the `export NAMESPACE=awx` command then followed by `make deploy` we will start the deployment. ![](Images/Day69_config3.png) @@ -74,17 +75,17 @@ Within the cloned repository you will find a file called awx-demo.yml we now wan ![](Images/Day69_config5.png) -You can keep an eye on the progress with `kubectl get pods -n awx -w` which will keep a visual watch on what is happening. +You can keep an eye on the progress with `kubectl get pods -n awx -w` which will keep a visual watch on what is happening. -You should have something that resembles the image you see below when everything is running. +You should have something that resembles the image you see below when everything is running. ![](Images/Day69_config6.png) -Now we should be able to access our awx deployment after running in a new terminal `minikube service awx-demo-service --url -n $NAMESPACE` to expose this through the minikube ingress. +Now we should be able to access our awx deployment after running in a new terminal `minikube service awx-demo-service --url -n $NAMESPACE` to expose this through the minikube ingress. ![](Images/Day69_config7.png) -If we then open a browser to that address [] you can see we are prompted for username and password. +If we then open a browser to that address [] you can see we are prompted for username and password. ![](Images/Day69_config8.png) @@ -92,19 +93,19 @@ The username by default is admin, to get the password we can run the following c ![](Images/Day69_config9.png) -Obviously this then gives you a UI to manage your playbook and configuration management tasks in a centralised location, it also allows you as a team to work together vs what we have been doing so far here where we have been running from one ansible control station. +Obviously this then gives you a UI to manage your playbook and configuration management tasks in a centralised location, it also allows you as a team to work together vs what we have been doing so far here where we have been running from one ansible control station. -This is another one of those areas where you could probably go and spend another length of time walking through the capabilities within this tool. +This is another one of those areas where you could probably go and spend another length of time walking through the capabilities within this tool. 
-I will call out a great resource from Jeff Geerling, which goes into more detail on using Ansible AWX. [Ansible 101 - Episode 10 - Ansible Tower and AWX](https://www.youtube.com/watch?v=iKmY4jEiy_A&t=752s) +I will call out a great resource from Jeff Geerling, which goes into more detail on using Ansible AWX. [Ansible 101 - Episode 10 - Ansible Tower and AWX](https://www.youtube.com/watch?v=iKmY4jEiy_A&t=752s) In this video he also goes into great detail on the differences between Automation Controller (Previously Ansible Tower) and Ansible AWX (Free and Open Source). -### Ansible Vault +### Ansible Vault -`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section we have skipped over and we have put some of our sensitive information in plain text. +`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section we have skipped over and we have put some of our sensitive information in plain text. -Built in to the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information. +Built in to the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information. ![](Images/Day69_config10.png) @@ -124,19 +125,19 @@ Now, we have already used `ansible-galaxy` to create some of our roles and file - [Ansible Lint](https://ansible-lint.readthedocs.io/en/latest/) - CLI tool for linting playbooks, roles and collections -### Other Resource +### Other Resource - [Ansible Documentation](https://docs.ansible.com/ansible/latest/index.html) -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. -This post wraps up our look into configuration management, we next move into CI/CD Pipelines and some of the tools and processes that we might see and use out there to achieve this workflow for our application development and release. +This post wraps up our look into configuration management, we next move into CI/CD Pipelines and some of the tools and processes that we might see and use out there to achieve this workflow for our application development and release. See you on [Day 70](day70.md) diff --git a/Days/day70.md b/Days/day70.md index 43ea50819..5df69ed7a 100644 --- a/Days/day70.md +++ b/Days/day70.md @@ -1,8 +1,8 @@ --- -title: '#90DaysOfDevOps - The Big Picture: CI/CD Pipelines - Day 70' +title: "#90DaysOfDevOps - The Big Picture: CI/CD Pipelines - Day 70" published: false description: 90DaysOfDevOps - The Big Picture CI/CD Pipelines -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048836 @@ -10,13 +10,13 @@ id: 1048836 ## The Big Picture: CI/CD Pipelines -A CI/CD (Continous Integration/Continous Deployment) Pipeline implementation is the backbone of the modern DevOps environment. 
+A CI/CD (Continuous Integration/Continuous Deployment) Pipeline implementation is the backbone of the modern DevOps environment. It bridges the gap between development and operations by automating the build, test and deployment of applications. -We covered a lot of this Continous mantra in the opening section of the challenge. But to reiterate: +We covered a lot of this continuous mantra in the opening section of the challenge. But to reiterate: -Continous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliabily. Automated build and test workflow steps triggered by Contininous Integration ensures that code changes being merged into the repository are reliable. +Continuous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliably. Automated build and test workflow steps triggered by Continuous Integration ensures that code changes being merged into the repository are reliable. That code / Application is then delivered quickly and seamlessly as part of the Continuous Deployment process. @@ -24,7 +24,7 @@ That code / Application is then delivered quickly and seamlessly as part of the - Ship software quickly and efficiently - Facilitates an effective process for getting applications to market as fast as possible -- A continous flow of bug fixes and new features without waiting months or years for version releases. +- A continuous flow of bug fixes and new features without waiting months or years for version releases. The ability for developers to make small impactful changes regular means we get faster fixes and more features quicker. diff --git a/Days/day71.md b/Days/day71.md index 869db6d6a..435082a9a 100644 --- a/Days/day71.md +++ b/Days/day71.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - What is Jenkins? - Day 71' +title: "#90DaysOfDevOps - What is Jenkins? - Day 71" published: false description: 90DaysOfDevOps - What is Jenkins? tags: "devops, 90daysofdevops, learning" @@ -7,87 +7,85 @@ cover_image: null canonical_url: null id: 1048745 --- + ## What is Jenkins? -Jenkins is a continous integration tool that allows continous development, test and deployment of newly created code. +Jenkins is a continuous integration tool that allows continuous development, test and deployment of newly created code. -There are two ways we can achieve this with either nightly builds or continous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software. This could be deemed as the old way to integrate all code. +There are two ways we can achieve this with either nightly builds or continuous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software. This could be deemed as the old way to integrate all code. ![](Images/Day71_CICD1.png) -The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continously. 
+The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continuously.

![](Images/Day71_CICD2.png)

-The above methods means that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.
+The above methods mean that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.

![](Images/Day71_CICD3.png)

-I know we are talking about Jenkins here but I also want to add a few more to maybe look into later on down the line to get an understanding why I am seeing Jenkins as the overall most popular, why is that and what can the others do over Jenkins.
+We are focusing on Jenkins here, but I also want to list a few alternatives to look into later on, to get an understanding of why Jenkins remains the most popular option and what the others can offer over it.

-- TravisCI - A hosted, distributed continous integration service used to build and test software projects hosted on GitHub.
-
-- Bamboo - Can run multiple builds in parallel for faster compilation, built in functionality to connect with repositories and has build tasks for Ant, Maven.
-
-- Buildbot - is an open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
-
-- Apache Gump - Specific to Java projects, designed with the aim to build and test those Java projects every night. ensures that all projects are compatible at both API and functionality level.
+- TravisCI - A hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.
+- Bamboo - Can run multiple builds in parallel for faster compilation, with built-in functionality to connect with repositories and build tasks for Ant and Maven.
+- Buildbot - An open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
+- Apache Gump - Specific to Java projects, designed with the aim of building and testing those Java projects every night, ensuring that all projects are compatible at both the API and functionality level.

-Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continous integration adn faciliates continous delivery.
+Because we are now going to focus on Jenkins - Jenkins is, like all of the above tools, open source and is an automation server written in Java. It is used to automate the software development process via continuous integration and facilitates continuous delivery.

### Features of Jenkins

-As you can probably expect Jenkins has a lot of features spanning a lot of areas.
+As you can probably expect, Jenkins has a lot of features spanning a lot of areas.

-**Easy Installation** - Jenkins is a self contained java based program ready to run with packages for Windows, macOS and Linux operating systems.
+**Easy Installation** - Jenkins is a self contained java based program ready to run with packages for Windows, macOS and Linux operating systems. -**Easy Configuration** - Easy setup and configured via a web interface which includes error checks and built in help. +**Easy Configuration** - Easy setup and configured via a web interface which includes error checks and built in help. -**Plug-ins** - Lots of plugins available in the Update Centre and integrates with many tools in the CI / CD toolchain. +**Plug-ins** - Lots of plugins available in the Update Centre and integrates with many tools in the CI / CD toolchain. -**Extensible** - In addition to the Plug-Ins available, Jenkins can be extended by that plugin architecture which provides nearly infinite options for what it can be used for. +**Extensible** - In addition to the Plug-Ins available, Jenkins can be extended by that plugin architecture which provides nearly infinite options for what it can be used for. -**Distributed** - Jenkins easily distributes work across multiple machines, helping to speed up builds, tests and deployments across multiple platforms. +**Distributed** - Jenkins easily distributes work across multiple machines, helping to speed up builds, tests and deployments across multiple platforms. -### Jenkins Pipeline +### Jenkins Pipeline -You will have seen this pipeline but used in a much broader and we have not spoken about specific tools. +You will have seen this pipeline but used in a much broader and we have not spoken about specific tools. -You are going to be committing code to Jenkins, which then will build out your application, with all automated tests, it will then release and deploy that code when each step is completed. Jenkins is what allows for the automation of this process. +You are going to be committing code to Jenkins, which then will build out your application, with all automated tests, it will then release and deploy that code when each step is completed. Jenkins is what allows for the automation of this process. ![](Images/Day71_CICD4.png) -### Jenkins Architecture +### Jenkins Architecture -First up and not wanting to reinvent the wheel, the [Jenkins Documentation](https://www.jenkins.io/doc/developer/architecture/) is always the place to start but I am going to put down my notes and learnings here as well. +First up and not wanting to reinvent the wheel, the [Jenkins Documentation](https://www.jenkins.io/doc/developer/architecture/) is always the place to start but I am going to put down my notes and learnings here as well. Jenkins can be installed on many different operating systems, Windows, Linux and macOS but then also the ability to deploy as a Docker container and within Kubernetes. [Installing Jenkins](https://www.jenkins.io/doc/book/installing/) -As we get into this we will likely take a look at installing Jenkins within a minikube cluster simulating the deployment to Kubernetes. But this will depend on the scenarios we put together throughout the rest of the section. +As we get into this we will likely take a look at installing Jenkins within a minikube cluster simulating the deployment to Kubernetes. But this will depend on the scenarios we put together throughout the rest of the section. -Let's now break down the image below. +Let's now break down the image below. Step 1 - Developers commit changes to the source code repository. Step 2 - Jenkins checks the repository at regular intervals and pulls any new code. 
-Step 3 - A build server then builds the code into an executable, in this example we are using maven as a well known build server. Another area to cover. +Step 3 - A build server then builds the code into an executable, in this example we are using maven as a well known build server. Another area to cover. -Step 4 - If the build fails then feedback is sent back to the developers. +Step 4 - If the build fails then feedback is sent back to the developers. -Step 5 - Jenkins then deploys the build app to the test server, in this example we are using selenium as a well known test server. Another area to cover. +Step 5 - Jenkins then deploys the build app to the test server, in this example we are using selenium as a well known test server. Another area to cover. Step 6 - If the test fails then feedback is passed to the developers. -Step 7 - If the tests are successful then we can release to production. +Step 7 - If the tests are successful then we can release to production. -This cycle is continous, this is what allows applications to be updated in minutes vs hours, days, months, years! +This cycle is continuous, this is what allows applications to be updated in minutes vs hours, days, months, years! ![](Images/Day71_CICD5.png) -There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to slave jenkins environment. +There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to slave jenkins environment. -For reference with Jenkins being open source, there are going to be lots of enterprises that require support, CloudBees is that enterprise version of Jenkins that brings support and possibly other functionality for the paying enterprise customer. +For reference with Jenkins being open source, there are going to be lots of enterprises that require support, CloudBees is that enterprise version of Jenkins that brings support and possibly other functionality for the paying enterprise customer. An example of this in a customer is Bosch, you can find the Bosch case study [here](https://assets.ctfassets.net/vtn4rfaw6n2j/case-study-boschpdf/40a0b23c61992ed3ee414ae0a55b6777/case-study-bosch.pdf) diff --git a/Days/day72.md b/Days/day72.md index f63838c95..d91916ba3 100644 --- a/Days/day72.md +++ b/Days/day72.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Getting hands on with Jenkins - Day 72' +title: "#90DaysOfDevOps - Getting hands on with Jenkins - Day 72" published: false description: 90DaysOfDevOps - Getting hands on with Jenkins tags: "devops, 90daysofdevops, learning" @@ -7,33 +7,34 @@ cover_image: null canonical_url: null id: 1048829 --- -## Getting hands on with Jenkins -The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use. +## Getting hands on with Jenkins -### What is a pipeline? +The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use. -Before we start we need to know what is a pipeline when it comes to CI, and we already covered this in the session yesterday with the following image. +### What is a pipeline? + +Before we start we need to know what is a pipeline when it comes to CI, and we already covered this in the session yesterday with the following image. 
![](Images/Day71_CICD4.png) -We want to take the processes or steps above and we want to automate them to get an outcome eventually meaning that we have a deployed application that we can then ship to our customers, end users etc. +We want to take the processes or steps above and we want to automate them to get an outcome eventually meaning that we have a deployed application that we can then ship to our customers, end users etc. -This automated process enables us to have a version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good. +This automated process enables us to have a version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good. This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment. -A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itself should be committed to a source control repository. This is also known as Pipeline as code, we could also very much liken this to Infrastructure as code which we covered a few weeks back. +A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itself should be committed to a source control repository. This is also known as Pipeline as code, we could also very much liken this to Infrastructure as code which we covered a few weeks back. -[Jenkins Pipeline Definition](https://www.jenkins.io/doc/book/pipeline/#ji-toolbar) +[Jenkins Pipeline Definition](https://www.jenkins.io/doc/book/pipeline/#ji-toolbar) -### Deploying Jenkins +### Deploying Jenkins -I had some fun deploying Jenkins, You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins. +I had some fun deploying Jenkins, You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins. -Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here. +Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here. -The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command. +The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command. ![](Images/Day72_CICD1.png) @@ -41,15 +42,15 @@ I have added a folder with all the YAML configuration and values that can be fou ![](Images/Day72_CICD2.png) -We will be using Helm to deploy jenkins into our cluster, we covered helm in the Kubernetes section. 
We firstly need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`. +We will be using Helm to deploy jenkins into our cluster, we covered helm in the Kubernetes section. We firstly need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`. ![](Images/Day72_CICD3.png) -The idea behind Jenkins is that it is going to save state for its pipelines, you can run the above helm installation without persistence but if those pods are rebooted, changed or modified then any pipeline or configuration you have made will be lost. We will create a volume for persistence using the jenkins-volume.yml file with the `kubectl apply -f jenkins-volume.yml` command. +The idea behind Jenkins is that it is going to save state for its pipelines, you can run the above helm installation without persistence but if those pods are rebooted, changed or modified then any pipeline or configuration you have made will be lost. We will create a volume for persistence using the jenkins-volume.yml file with the `kubectl apply -f jenkins-volume.yml` command. ![](Images/Day72_CICD4.png) -We also need a service account which we can create using this yaml file and command. `kubectl apply -f jenkins-sa.yml` +We also need a service account which we can create using this yaml file and command. `kubectl apply -f jenkins-sa.yml` ![](Images/Day72_CICD5.png) @@ -57,17 +58,17 @@ At this stage we are good to deploy using the helm chart, we will firstly define ![](Images/Day72_CICD6.png) -At this stage our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running. +At this stage our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running. -This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our jenkins install. +This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our jenkins install. ![](Images/Day72_CICD7.png) -In order to fix the above or resolve, we need to make sure we provide access or the right permission in order for our jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume. +In order to fix the above or resolve, we need to make sure we provide access or the right permission in order for our jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume. ![](Images/Day72_CICD8.png) -The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0. 
+The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0. ![](Images/Day72_CICD9.png) @@ -79,25 +80,25 @@ Now open a new terminal as we are going to use the `port-forward` command to all ![](Images/Day72_CICD11.png) -We should now be able to open a browser and login to http://localhost:8080 and authenticate with the username: admin and password we gathered in a previous step. +We should now be able to open a browser and login to `http://localhost:8080` and authenticate with the username: admin and password we gathered in a previous step. ![](Images/Day72_CICD12.png) -When we have authenticated, our Jenkins welcome page should look something like this: +When we have authenticated, our Jenkins welcome page should look something like this: ![](Images/Day72_CICD13.png) -From here, I would suggest heading to "Manage Jenkins" and you will see "Manage Plugins" which will have some updates available. Select all of those plugins and choose "Download now and install after restart" +From here, I would suggest heading to "Manage Jenkins" and you will see "Manage Plugins" which will have some updates available. Select all of those plugins and choose "Download now and install after restart" ![](Images/Day72_CICD14.png) If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md) +### Jenkinsfile -### Jenkinsfile -Now we have Jenkins deployed in our Kubernetes cluster, we can now go back and think about this Jenkinsfile. +Now we have Jenkins deployed in our Kubernetes cluster, we can now go back and think about this Jenkinsfile. -Every Jenkinsfile will likely start like this, Which is firstly where you would define your steps of your pipeline, in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages. +Every Jenkinsfile will likely start like this, Which is firstly where you would define your steps of your pipeline, in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages. ``` @@ -126,27 +127,28 @@ pipeline { } ``` -In our Jenkins dashboard, select "New Item" give the item a name, I am going to "echo1" I am going to suggest that this is a Pipeline. + +In our Jenkins dashboard, select "New Item" give the item a name, I am going to "echo1" I am going to suggest that this is a Pipeline. ![](Images/Day72_CICD15.png) -Hit Ok and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline) for a simple test we are only interested in Pipeline. Under Pipeline you have the ability to add a script, we can copy and paste the above script into the box. +Hit Ok and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline) for a simple test we are only interested in Pipeline. Under Pipeline you have the ability to add a script, we can copy and paste the above script into the box. As we said above this is not going to do much but it will show us the stages of our Build > Test > Deploy ![](Images/Day72_CICD16.png) -Click Save, We can now run our build using the build now highlighted below. 
+Click Save, We can now run our build using the build now highlighted below. ![](Images/Day72_CICD17.png) -We should also open a terminal and run the `kubectl get pods -n jenkins` to see what happens there. +We should also open a terminal and run the `kubectl get pods -n jenkins` to see what happens there. ![](Images/Day72_CICD18.png) -Ok, very simple stuff but we can now see that our Jenkins deployment and installation is working correctly and we can start to see the building blocks of the CI pipeline here. +Ok, very simple stuff but we can now see that our Jenkins deployment and installation is working correctly and we can start to see the building blocks of the CI pipeline here. -In the next section we will be building a Jenkins Pipeline. +In the next section we will be building a Jenkins Pipeline. ## Resources diff --git a/Days/day73.md b/Days/day73.md index bcc58a643..83d410314 100644 --- a/Days/day73.md +++ b/Days/day73.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73' +title: "#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73" published: false description: 90DaysOfDevOps - Building a Jenkins Pipeline tags: "devops, 90daysofdevops, learning" @@ -7,17 +7,18 @@ cover_image: null canonical_url: null id: 1048766 --- -## Building a Jenkins Pipeline -In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline. +## Building a Jenkins Pipeline -You might have also seen that there are some example scripts available for us to run in the Jenkins Pipeline creation. +In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline. + +You might have also seen that there are some example scripts available for us to run in the Jenkins Pipeline creation. ![](Images/Day73_CICD1.png) -The first demo script is "Declartive (Kubernetes)" and you can see the stages below. +The first demo script is "Declarative (Kubernetes)" and you can see the stages below. -``` +```Yaml // Uses Declarative syntax to run commands inside a container. pipeline { agent { @@ -58,23 +59,24 @@ spec: } } ``` -You can see below the outcome of what happens when this Pipeline is ran. + +You can see below the outcome of what happens when this Pipeline is ran. ![](Images/Day73_CICD2.png) -### Job creation +### Job creation -**Goals** +#### Goals -- Create a simple app and store in GitHub public repository (https://github.com/scriptcamp/kubernetes-kaniko.git) +- Create a simple app and store in GitHub public repository [https://github.com/scriptcamp/kubernetes-kaniko.git](https://github.com/scriptcamp/kubernetes-kaniko.git) -- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository) +- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository) -To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster) It general though if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub. 
+To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster) It general though if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub. -With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials. +With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials. -``` +```Shell kubectl create secret docker-registry dockercred \ --docker-server=https://index.docker.io/v1/ \ --docker-username= \ @@ -82,17 +84,17 @@ kubectl create secret docker-registry dockercred \ --docker-email= ``` -In fact I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here. +In fact I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here. -### Adding credentials to Jenkins +### Adding credentials to Jenkins -However if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub. +However if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub. First of all select "Manage Jenkins" and then "Manage Credentials" ![](Images/Day73_CICD3.png) -You will see in the centre of the page, Stores scoped to Jenkins click on Jenkins here. +You will see in the centre of the page, Stores scoped to Jenkins click on Jenkins here. ![](Images/Day73_CICD4.png) @@ -100,25 +102,25 @@ Now select Global Credentials (Unrestricted) ![](Images/Day73_CICD5.png) -Then in the top left you have Add Credentials +Then in the top left you have Add Credentials ![](Images/Day73_CICD6.png) -Fill in your details for your account and then select OK, remember the ID is what you will refer to when you want to call this credential. My advice here also is that you use specific token access vs passwords. +Fill in your details for your account and then select OK, remember the ID is what you will refer to when you want to call this credential. My advice here also is that you use specific token access vs passwords. ![](Images/Day73_CICD7.png) For GitHub you should use a [Personal Access Token](https://vzilla.co.uk/vzilla-blog/creating-updating-your-github-personal-access-token) -Personally I did not find this process very intuitive to create these accounts, so even though we are not using I wanted to share the process as it is not clear from the UI. +Personally I did not find this process very intuitive to create these accounts, so even though we are not using I wanted to share the process as it is not clear from the UI. 
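Going back to the Docker Hub registry secret we created above, here is a concrete but entirely hypothetical example of filling in that command, using a Docker Hub access token rather than the account password, plus a quick sanity check that the secret exists before the pipeline references it. All values shown are placeholders to substitute with your own.

```Shell
# Example values only - substitute your own Docker Hub account details.
# An access token is used here in place of the account password.
kubectl create secret docker-registry dockercred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=mydockerhubuser \
  --docker-password=dckr_pat_examp1e0nly \
  --docker-email=me@example.com

# Confirm the secret exists and is of the expected type
kubectl get secret dockercred -o jsonpath='{.type}'
# kubernetes.io/dockerconfigjson
```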
### Building the pipeline -We have our DockerHub credentials deployed to as a secret into our Kubernetes cluster which we will call upon for our docker deploy to DockerHub stage in our pipeline. +We have our DockerHub credentials deployed to as a secret into our Kubernetes cluster which we will call upon for our docker deploy to DockerHub stage in our pipeline. -The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline. +The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline. -``` +```Yaml podTemplate(yaml: ''' apiVersion: v1 kind: Pod @@ -174,41 +176,41 @@ podTemplate(yaml: ''' } ``` -To kick things on the Jenkins dashboard we need to select "New Item" +To kick things on the Jenkins dashboard we need to select "New Item" ![](Images/Day73_CICD8.png) -We are then going to give our item a name, select Pipeline and then hit ok. +We are then going to give our item a name, select Pipeline and then hit ok. ![](Images/Day73_CICD9.png) -We are not going to be selecting any of the general or build triggers but have a play with these as there are some interesting schedules and other configurations that might be useful. +We are not going to be selecting any of the general or build triggers but have a play with these as there are some interesting schedules and other configurations that might be useful. ![](Images/Day73_CICD10.png) -We are only interested in the Pipeline tab at the end. +We are only interested in the Pipeline tab at the end. ![](Images/Day73_CICD11.png) -In the Pipeline definition we are going to copy and paste the pipeline script that we have above into the Script section and hit save. +In the Pipeline definition we are going to copy and paste the pipeline script that we have above into the Script section and hit save. ![](Images/Day73_CICD12.png) -Next we will select the "Build Now" option on the left side of the page. +Next we will select the "Build Now" option on the left side of the page. ![](Images/Day73_CICD13.png) -You should now wait a short amount of time, less than a minute really. and you should see under status the stages that we defined above in our script. +You should now wait a short amount of time, less than a minute really. and you should see under status the stages that we defined above in our script. ![](Images/Day73_CICD14.png) -More importantly if we now head on over to our DockerHub and check that we have a new build. +More importantly if we now head on over to our DockerHub and check that we have a new build. ![](Images/Day73_CICD15.png) -This overall did take a while to figure out but I wanted to stick with it for the purpose of getting hands on and working through a scenario that anyone can run through using minikube and access to github and dockerhub. +This overall did take a while to figure out but I wanted to stick with it for the purpose of getting hands on and working through a scenario that anyone can run through using minikube and access to github and dockerhub. -The DockerHub repository I used for this demo was a private one. But in the next section I want to advance some of these stages and actually have them do something vs just printing out `pwd` and actually run some tests and build stages. +The DockerHub repository I used for this demo was a private one. 
But in the next section I want to advance some of these stages and actually have them do something vs just printing out `pwd` and actually run some tests and build stages. ## Resources diff --git a/Days/day74.md b/Days/day74.md index 9eddd2542..8a6eb10b3 100644 --- a/Days/day74.md +++ b/Days/day74.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74' +title: "#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74" published: false description: 90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline tags: "devops, 90daysofdevops, learning" @@ -7,79 +7,80 @@ cover_image: null canonical_url: null id: 1048744 --- + ## Hello World - Jenkinsfile App Pipeline -In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository. +In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository. -In this section we want to take this one step further and we want to achieve the following with our simple application. +In this section we want to take this one step further and we want to achieve the following with our simple application. -### Objective +### Objective - Dockerfile (Hello World) -- Jenkinsfile -- Jenkins Pipeline to trigger when GitHub Repository is updated -- Use GitHub Repository as source. +- Jenkinsfile +- Jenkins Pipeline to trigger when GitHub Repository is updated +- Use GitHub Repository as source. - Run - Clone/Get Repository, Build, Test, Deploy Stages - Deploy to DockerHub with incremental version numbers - Stretch Goal to deploy to our Kubernetes Cluster (This will involve another job and manifest repository using GitHub credentials) -### Step One +### Step One -We have our [GitHub repository](https://github.com/MichaelCade/Jenkins-HelloWorld) This currently contains our Dockerfile and our index.html +We have our [GitHub repository](https://github.com/MichaelCade/Jenkins-HelloWorld) This currently contains our Dockerfile and our index.html ![](Images/Day74_CICD1.png) -With the above this is what we were using as our source in our Pipeline, now we want to add that Jenkins Pipeline script to our GitHub repository as well. +With the above this is what we were using as our source in our Pipeline, now we want to add that Jenkins Pipeline script to our GitHub repository as well. ![](Images/Day74_CICD2.png) -Now back in our Jenkins dashboard, we are going to create a new pipeline but now instead of pasting our script we are going to use "Pipeline script from SCM" We are then going to use the configuration options below. +Now back in our Jenkins dashboard, we are going to create a new pipeline but now instead of pasting our script we are going to use "Pipeline script from SCM" We are then going to use the configuration options below. -For reference we are going to use https://github.com/MichaelCade/Jenkins-HelloWorld.git as the repository URL. +For reference we are going to use `https://github.com/MichaelCade/Jenkins-HelloWorld.git` as the repository URL. ![](Images/Day74_CICD3.png) -We could at this point hit save and apply and we would then be able to manually run our Pipeline building our new Docker image that is uploaded to our DockerHub repository. 
+We could at this point hit save and apply and we would then be able to manually run our Pipeline building our new Docker image that is uploaded to our DockerHub repository.

-However, I also want to make sure that we set a schedule that whenever our repository or our source code is changed, I want to trigger a build. we could use webhooks or we could use a scheduled pull.
+However, I also want to make sure that we set a schedule so that whenever our repository or our source code is changed, we trigger a build. We could use webhooks or we could use a scheduled poll.

This is a big consideration because if you are using costly cloud resources to hold your pipeline and you have lots of changes to your code repository then you will incur a lot of costs. We know that this is a demo environment which is why I am using the "poll scm" option. (Also I believe that using minikube I am lacking the ability to use webhooks)

![](Images/Day74_CICD4.png)

-One thing I have changed since yesterdays session is I want to now upload my image to a public repository which in this case would be michaelcade1\90DaysOfDevOps, my Jenkinsfile has this change already. And from previous sections I have removed any existing demo container images.
+One thing I have changed since yesterday's session is that I now want to upload my image to a public repository, which in this case would be michaelcade1\90DaysOfDevOps; my Jenkinsfile has this change already. And from previous sections I have removed any existing demo container images.

![](Images/Day74_CICD5.png)

-Going backwards here, we created our Pipeline and then as previously shown we added our configuration.
+Going back a step, we created our Pipeline and then, as previously shown, we added our configuration.

![](Images/Day74_CICD6.png)

-At this stage our Pipeline has never ran and your stage view will look something like this.
+At this stage our Pipeline has never run and your stage view will look something like this.

![](Images/Day74_CICD7.png)

-Now lets trigger the "Build Now" button. and our stage view will display our stages.
+Now let's trigger the "Build Now" button, and our stage view will display our stages.

![](Images/Day74_CICD8.png)

-If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest because every build that we create based on the "Upload to DockerHub" is we send a version using the Jenkins Build_ID environment variable and we also issue a latest.
+If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest, because for every build we create the "Upload to DockerHub" stage sends a version using the Jenkins Build_ID environment variable and also issues a latest tag.

![](Images/Day74_CICD9.png)

-Let's go and create an update to our index.html file in our GitHub repository as per below, I will let you go and find out what version 1 of the index.html was saying.
+Let's go and create an update to our index.html file in our GitHub repository as per below; I will let you go and find out what version 1 of the index.html was saying.

![](Images/Day74_CICD10.png)

-If we head back to Jenkins and select "Build Now" again. We will see our #2 build is successful.
+If we head back to Jenkins and select "Build Now" again, we will see our #2 build is successful.

![](Images/Day74_CICD11.png)

-Then a quick look at DockerHub, we can see that we have our tagged version 2 and our latest tag.
+Then a quick look at DockerHub, we can see that we have our tagged version 2 and our latest tag. ![](Images/Day74_CICD12.png) -It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated to my repository and account. +It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated to my repository and account. ## Resources diff --git a/Days/day75.md b/Days/day75.md index fce804420..335090587 100644 --- a/Days/day75.md +++ b/Days/day75.md @@ -1,63 +1,64 @@ --- -title: '#90DaysOfDevOps - GitHub Actions Overview - Day 75' +title: "#90DaysOfDevOps - GitHub Actions Overview - Day 75" published: false description: 90DaysOfDevOps - GitHub Actions Overview -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049070 --- + ## GitHub Actions Overview -In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is where we will focus on in this session. +In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is where we will focus on in this session. -GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository. +GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository. ### Workflows -Overall, in GitHub Actions our task is called a **Workflow**. +Overall, in GitHub Actions our task is called a **Workflow**. -- A **workflow** is the configurable automated process. +- A **workflow** is the configurable automated process. - Defined as YAML files. - Contain and run one or more **jobs** -- Will run when triggered by an **event** in your repository or can be ran manually +- Will run when triggered by an **event** in your repository or can be ran manually - You can multiple workflows per repository - A **workflow** will contain a **job** and then **steps** to achieve that **job** -- Within our **workflow** we will also have a **runner** on which our **workflow** runs. +- Within our **workflow** we will also have a **runner** on which our **workflow** runs. For example, you can have one **workflow** to build and test pull requests, another **workflow** to deploy your application every time a release is created, and still another **workflow** that adds a label every time someone opens a new issue. -### Events +### Events -Events are a specific event in a repository that triggers the workflow to run. +Events are a specific event in a repository that triggers the workflow to run. 
-### Jobs +### Jobs -A job is a set of steps in the workflow that execute on a runner. +A job is a set of steps in the workflow that execute on a runner. ### Steps -Each step within the job can be a shell script that gets executed, or an action. Steps are executed in order and they are dependant on each other. +Each step within the job can be a shell script that gets executed, or an action. Steps are executed in order and they are dependant on each other. -### Actions +### Actions -A repeatable custom application used for frequently repeated tasks. +A repeatable custom application used for frequently repeated tasks. ### Runners -A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on specific OS or hardware. +A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on specific OS or hardware. -Below you can see how this looks, we have our event triggering our workflow > our workflow consists of two jobs > within our jobs we then have steps and then we have actions. +Below you can see how this looks, we have our event triggering our workflow > our workflow consists of two jobs > within our jobs we then have steps and then we have actions. ![](Images/Day75_CICD1.png) -### YAML +### YAML -Before we get going with a real use case lets take a quick look at the above image in the form of an example YAML file. +Before we get going with a real use case lets take a quick look at the above image in the form of an example YAML file. -I have added # to comment in where we can find the components of the YAML workflow. +I have added # to comment in where we can find the components of the YAML workflow. -``` +```Yaml #Workflow name: 90DaysOfDevOps #Event @@ -78,19 +79,19 @@ jobs: - run: bats -v ``` -### Getting Hands-On with GitHub Actions +### Getting Hands-On with GitHub Actions -I think there are a lot of options when it comes to GitHub Actions, yes it will satisfy your CI/CD needs when it comes to Build, Test, Deploying your code and the continued steps thereafter. +I think there are a lot of options when it comes to GitHub Actions, yes it will satisfy your CI/CD needs when it comes to Build, Test, Deploying your code and the continued steps thereafter. -I can see lots of options and other automated tasks that we could use GitHub Actions for. +I can see lots of options and other automated tasks that we could use GitHub Actions for. -### Using GitHub Actions for Linting your code +### Using GitHub Actions for Linting your code -One option is making sure your code is clean and tidy within your repository. This will be our first example demo. +One option is making sure your code is clean and tidy within your repository. This will be our first example demo. -I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code. +I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code. -``` +```Yaml name: Super-Linter on: push @@ -115,37 +116,37 @@ You can see from the above that for one of our steps we have an action called gi "This repository is for the GitHub Action to run a Super-Linter. 
It is a simple combination of various linters, written in bash, to help validate your source code." -Also in the code snippet above it mentions GITHUB_TOKEN so I was interested to find out why and what this does and needed for. +Also in the code snippet above it mentions GITHUB_TOKEN so I was interested to find out why and what this does and needed for. -"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each individual linter run in the Checks section of a pull request. Without this you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**" +"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each individual linter run in the Checks section of a pull request. Without this you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**" -The bold text being important to note at this stage. We are using it but we do not need to set any environment variable within our repository. +The bold text being important to note at this stage. We are using it but we do not need to set any environment variable within our repository. We will use our repository that we used in our Jenkins demo to test against.[Jenkins-HelloWorld](https://github.com/MichaelCade/Jenkins-HelloWorld) -Here is our repository as we left it in the Jenkins sessions. +Here is our repository as we left it in the Jenkins sessions. ![](Images/Day75_CICD2.png) -In order for us to take advantage we have to use the Actions tab above to choose from the marketplace which I will cover shortly or we can create our own files using our super-linter code above, in order to create your own you must create a new file in your repository at this exact location. `.github/workflows/workflow_name` obviously making sure the workflow_name is something useful for you recognise, within here we can have many different workflows performing different jobs and tasks against our repository. +In order for us to take advantage we have to use the Actions tab above to choose from the marketplace which I will cover shortly or we can create our own files using our super-linter code above, in order to create your own you must create a new file in your repository at this exact location. `.github/workflows/workflow_name` obviously making sure the workflow_name is something useful for you recognise, within here we can have many different workflows performing different jobs and tasks against our repository. We are going to create `.github/workflows/super-linter.yml` ![](Images/Day75_CICD3.png) -We can then paste our code and commit the code to our repository, if we then head to the Actions tab we will now see our Super-Linter workflow listed as per below, +We can then paste our code and commit the code to our repository, if we then head to the Actions tab we will now see our Super-Linter workflow listed as per below, ![](Images/Day75_CICD4.png) -We defined in our code that this workflow would run when we pushed anything to our repository, so in pushing the super-linter.yml to our repository we triggered the workflow. 
+We defined in our code that this workflow would run when we pushed anything to our repository, so in pushing the super-linter.yml to our repository we triggered the workflow.

![](Images/Day75_CICD5.png)

-As you can see from the above we have some errors most likely with my hacking ability vs coding ability.
+As you can see from the above, we have some errors, most likely down to my hacking ability vs coding ability.

Although actually it was not my code at least not yet, in running this and getting an error I found this [issue](https://github.com/github/super-linter/issues/2255)

-Take #2 I changed the version of Super-Linter from version 3 to 4 and have ran the task again.
+Take #2: I changed the version of Super-Linter from version 3 to 4 and ran the task again.

![](Images/Day75_CICD6.png)

@@ -155,21 +156,21 @@ I wanted to show the look now on our repository when something within the workfl

![](Images/Day75_CICD7.png)

-Now if we resolve the issue with my code and push the changes our workflow will run again (you can see from the image it took a while to iron out our "bugs") Deleting a file is probably not recommended but it is a very quick way to show the issue being resolved.
+Now if we resolve the issue with my code and push the changes, our workflow will run again (you can see from the image it took a while to iron out our "bugs"). Deleting a file is probably not recommended, but it is a very quick way to show the issue being resolved.

![](Images/Day75_CICD8.png)

-If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel we want to stand on the shoulders of giants and share our code, automations and skills far and wide to make our lives easier.
+If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel; we want to stand on the shoulders of giants and share our code, automations and skills far and wide to make our lives easier.

![](Images/Day75_CICD9.png)

-Oh, I didn't show you the green tick on the repository when our workflow was successful.
+Oh, I didn't show you the green tick on the repository when our workflow was successful.

![](Images/Day75_CICD10.png)

-I think that covers things from a foundational point of view for GitHub Actions but if you are anything like me then you are probably seeing how else GitHub Actions can be used to automate a lot of tasks.
+I think that covers things from a foundational point of view for GitHub Actions, but if you are anything like me then you are probably seeing how else GitHub Actions can be used to automate a lot of tasks.

-Next up we will cover another area of CD, we will be looking into ArgoCD to deploy our applications out into our environments.
+Next up, we will cover another area of CD: we will be looking into ArgoCD to deploy our applications out into our environments.
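One practical tip before we close out GitHub Actions: you can also run Super-Linter against your code locally with Docker before pushing, which saves a few round trips through the workflow. This is a sketch based on the local-run option described in the Super-Linter documentation, assuming Docker is available on your machine.

```Shell
# Lint the current directory locally before pushing to GitHub
docker run --rm \
  -e RUN_LOCAL=true \
  -v "$(pwd)":/tmp/lint \
  github/super-linter:v4
```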
## Resources diff --git a/Days/day76.md b/Days/day76.md index d4faa476b..9f49617f2 100644 --- a/Days/day76.md +++ b/Days/day76.md @@ -1,12 +1,13 @@ --- -title: '#90DaysOfDevOps - ArgoCD Overview - Day 76' +title: "#90DaysOfDevOps - ArgoCD Overview - Day 76" published: false description: 90DaysOfDevOps - ArgoCD Overview -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048809 --- + ## ArgoCD Overview “Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes” @@ -17,11 +18,11 @@ From an Operations background but having played a lot around Infrastructure as C [What is ArgoCD](https://argo-cd.readthedocs.io/en/stable/) -### Deploying ArgoCD +### Deploying ArgoCD -We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment. +We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment. -``` +```Shell kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml ``` @@ -32,41 +33,41 @@ Make sure all the ArgoCD pods are up and running with `kubectl get pods -n argoc ![](Images/Day76_CICD2.png) -Also let's check everything that we deployed in the namespace with `kubectl get all -n argocd` +Also let's check everything that we deployed in the namespace with `kubectl get all -n argocd` ![](Images/Day76_CICD3.png) -When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal. +When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal. -Then open a new web browser and head to https://localhost:8080 +Then open a new web browser and head to `https://localhost:8080` ![](Images/Day76_CICD4.png) -To log in you will need a username of admin and then to grab your created secret as your password use the `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo` +To log in you will need a username of admin and then to grab your created secret as your password use the `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo` ![](Images/Day76_CICD5.png) -Once you have logged in you will have your blank CD canvas. +Once you have logged in you will have your blank CD canvas. ![](Images/Day76_CICD6.png) -### Deploying our application +### Deploying our application -Now we have ArgoCD up and running we can now start using it to deploy our applications from our Git repositories as well as Helm. +Now we have ArgoCD up and running we can now start using it to deploy our applications from our Git repositories as well as Helm. -The application I want to deploy is Pac-Man, yes that's right the famous game and something I use in a lot of demos when it comes to data management, this will not be the last time we see Pac-Man. +The application I want to deploy is Pac-Man, yes that's right the famous game and something I use in a lot of demos when it comes to data management, this will not be the last time we see Pac-Man. You can find the repository for [Pac-Man](https://github.com/MichaelCade/pacman-tanzu.git) here. 
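If you prefer to define the application declaratively rather than clicking through the UI, Argo CD also accepts an `Application` manifest that you can `kubectl apply -n argocd`. A sketch for the Pac-Man repository might look like the following; note that the `path` and the target namespace are assumptions here, so check the repository layout for where the Kubernetes manifests actually live:

```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pacman
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/MichaelCade/pacman-tanzu.git
    targetRevision: HEAD
    path: manifests # assumption - point this at the folder containing the Kubernetes YAML
  destination:
    server: https://kubernetes.default.svc
    namespace: pacman
  syncPolicy:
    automated:
      selfHeal: true # keep the cluster in sync with whatever is committed in Git
    syncOptions:
      - CreateNamespace=true
```

With automated sync enabled, Argo CD watches the repository and reconciles the cluster whenever the manifests change.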
-Instead of going through each step using screen shots I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment. +Instead of going through each step using screen shots I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment. [ArgoCD Demo - 90DaysOfDevOps](https://www.youtube.com/watch?v=w6J413_j0hA) -Note - During the video there is a service that is never satisfied as the app health being healthy this is because the LoadBalancer type set for the pacman service is in a pending state, in Minikube we do not have a loadbalancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game. +Note - During the video there is a service that is never satisfied as the app health being healthy this is because the LoadBalancer type set for the pacman service is in a pending state, in Minikube we do not have a loadbalancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game. -This wraps up the CICD Pipelines section, I feel there is a lot of focus on this area in the industry at the moment and you will also hear terms around GitOps also related to the methodologies used within CICD in general. +This wraps up the CICD Pipelines section, I feel there is a lot of focus on this area in the industry at the moment and you will also hear terms around GitOps also related to the methodologies used within CICD in general. -The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments in a different way. +The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments in a different way. ## Resources diff --git a/Days/day77.md b/Days/day77.md index ef45231bc..a8e88f935 100644 --- a/Days/day77.md +++ b/Days/day77.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Monitoring - Day 77' +title: "#90DaysOfDevOps - The Big Picture: Monitoring - Day 77" published: false description: 90DaysOfDevOps - The Big Picture Monitoring tags: "devops, 90daysofdevops, learning" @@ -7,76 +7,77 @@ cover_image: null canonical_url: null id: 1048715 --- + ## The Big Picture: Monitoring -In this section we are going to talk about monitoring, what is it why do we need it? +In this section we are going to talk about monitoring, what is it why do we need it? -### What is Monitoring? +### What is Monitoring? -Monitoring is the process of keeping a close eye on the entire infrastructure +Monitoring is the process of keeping a close eye on the entire infrastructure -### and why do we need it? +### and why do we need it? -Let's assume we're managing a thousand servers these include a variety of specialised servers like application servers, database servers and web servers. We could also complicate this further with additional services and different platforms including public cloud offerings and Kubernetes. +Let's assume we're managing a thousand servers these include a variety of specialised servers like application servers, database servers and web servers. We could also complicate this further with additional services and different platforms including public cloud offerings and Kubernetes. 
![](Images/Day77_Monitoring1.png) -We are responsible for ensuring that all the services, applications and resources on the servers are running as they should be. +We are responsible for ensuring that all the services, applications and resources on the servers are running as they should be. ![](Images/Day77_Monitoring2.png) -How do we do it? there are three ways: +How do we do it? there are three ways: -- Login manually to all of our servers and check all the data pertaining to services processes and resources. -- Write a script that logs in to the servers for us and checks on the data. +- Login manually to all of our servers and check all the data pertaining to services processes and resources. +- Write a script that logs in to the servers for us and checks on the data. -Both of these options would require considerable amount of work on our part, +Both of these options would require considerable amount of work on our part, -The third option is easier, we could use a monitoring solution that is available in the market. +The third option is easier, we could use a monitoring solution that is available in the market. -Nagios and Zabbix are possible solutions that are readily available which allow us to upscale our monitoring infrastructure to include as many servers as we want. +Nagios and Zabbix are possible solutions that are readily available which allow us to upscale our monitoring infrastructure to include as many servers as we want. ### Nagios Nagios is an infrastructure monitoring tool that is made by a company that goes by the same name. The open-source version of this tool is called Nagios core while the commercial version is called Nagios XI. [Nagios Website](https://www.nagios.org/) -The tool allows us to monitor our servers and see if they are being sufficiently utilised or if there are any tasks of failure that need addressing. +The tool allows us to monitor our servers and see if they are being sufficiently utilised or if there are any tasks of failure that need addressing. ![](Images/Day77_Monitoring3.png) -Essentially monitoring allows us to achieve these two goals, check the status of our servers and services and determine the health of our infrastructure it also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and the web servers are reachable or not. +Essentially monitoring allows us to achieve these two goals, check the status of our servers and services and determine the health of our infrastructure it also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and the web servers are reachable or not. -It will tell us that our disk has been increasing by 10 percent for the last 10 weeks in a particular server, that it will exhaust entirely within the next four or five days and we'll fail to respond soon it will alert us when your disk or server is in a critical state so that we can take appropriate actions to avoid possible outages. +It will tell us that our disk has been increasing by 10 percent for the last 10 weeks in a particular server, that it will exhaust entirely within the next four or five days and we'll fail to respond soon it will alert us when your disk or server is in a critical state so that we can take appropriate actions to avoid possible outages. -In this case we can free up some disk space and ensure that our servers don't fail and that our users are not affected. 
+In this case we can free up some disk space and ensure that our servers don't fail and that our users are not affected. -The difficult question for most monitoring engineers is what do we monitor? and alternately what do we not? +The difficult question for most monitoring engineers is what do we monitor? And, conversely, what do we not? -Every system has a number of resources, which of these should we keep a close eye on and which ones can we turn a blind eye to for instance is it necessary to monitor CPU usage the answer is yes obviously nevertheless it is still a decision that has to be made is it necessary to monitor the number of open ports in the system we may or may not have to depending on the situation if it is a general-purpose server we probably won't have to but then again if it is a webserver we probably would have to. +Every system has a number of resources; which of these should we keep a close eye on, and which can we turn a blind eye to? For instance, is it necessary to monitor CPU usage? Obviously yes, but it is still a decision that has to be made. Is it necessary to monitor the number of open ports on the system? That depends on the situation: for a general-purpose server we probably won't have to, but for a web server we probably would. -### Continous Monitoring +### Continuous Monitoring -Monitoring is not a new item and even continous monitoring has been an ideal that many enterprises have adopted for many years. +Monitoring is not a new concept, and even continuous monitoring is an ideal that many enterprises have adopted for years. -There are three key areas of focus when it comes to monitoring. +There are three key areas of focus when it comes to monitoring. - Infrastructure Monitoring -- Application Monitoring -- Network Monitoring +- Application Monitoring +- Network Monitoring -The important thing to note is that there are many tools available we have mentioned two generic systems and tools in this session but there are lots. The real benefit of a monitoring solution comes when you have really spent the time making sure you are answering that question of what should we be monitoring and what shouldn't we? +The important thing to note is that there are many tools available; we have mentioned two generic systems in this session but there are lots more. The real benefit of a monitoring solution comes when you have spent the time answering the question of what we should be monitoring and what we shouldn't. -We could turn on a monitoring solution in any of our platforms and it will start grabbing information but if that information is simply too much then you are going to struggle to benefit from that solution, you have to spend the time to configure. +We could turn on a monitoring solution in any of our platforms and it will start grabbing information, but if that information is simply too much then you are going to struggle to benefit from the solution; you have to spend the time to configure it. -In the next session we will get hands on with a monitoring tool and see what we can start monitoring. +In the next session we will get hands-on with a monitoring tool and see what we can start monitoring.
-## Resources +## Resources - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) See you on [Day 78](day78.md) diff --git a/Days/day78.md b/Days/day78.md index 3e44f0c06..b6ba30d51 100644 --- a/Days/day78.md +++ b/Days/day78.md @@ -1,77 +1,79 @@ --- -title: '#90DaysOfDevOps - Hands-On Monitoring Tools - Day 78' +title: "#90DaysOfDevOps - Hands-On Monitoring Tools - Day 78" published: false description: 90DaysOfDevOps - Hands-On Monitoring Tools -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049056 --- + ## Hands-On Monitoring Tools -In the last session, I spoke about the big picture of monitoring and I took a look into Nagios, there was two reasons for doing this. The first was this is a peice of software I have heard a lot of over the years so wanted to know a little more about its capabilities. +In the last session, I spoke about the big picture of monitoring and I took a look into Nagios, there was two reasons for doing this. The first was this is a piece of software I have heard a lot of over the years so wanted to know a little more about its capabilities. -Today I am going to be going into Prometheus, I have seen more and more of Prometheus in the Cloud-Native landscape but it can also be used to look after those physical resources as well outside of Kubernetes and the like. +Today I am going to be going into Prometheus, I have seen more and more of Prometheus in the Cloud-Native landscape but it can also be used to look after those physical resources as well outside of Kubernetes and the like. ### Prometheus - Monitors nearly everything -First of all Prometheus is Open-Source that can help you monitor containers and microservice based systems as well as physical, virtual and other services. There is a large community behind Prometheus. +First of all Prometheus is Open-Source that can help you monitor containers and microservice based systems as well as physical, virtual and other services. There is a large community behind Prometheus. -Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/) The key being to exporting existing metrics as prometheus metrics. On top of this it also supports multiple proagramming languages. +Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/) The key being to exporting existing metrics as prometheus metrics. On top of this it also supports multiple proagramming languages. 
-Pull approach - If you are talking to thousands of microservices or systems and services a push method is going to be where you generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high cpu and also a single point of failure. Where Pull gives us a much better experience where Prometheus will pull from the metrics endpoint on every service. +Pull approach - If you are talking to thousands of microservices or systems and services a push method is going to be where you generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high cpu and also a single point of failure. Where Pull gives us a much better experience where Prometheus will pull from the metrics endpoint on every service. -Once again we see YAML for configuration for Prometheus. +Once again we see YAML for configuration for Prometheus. ![](Images/Day78_Monitoring7.png) -Later on you are going to see how this looks when deployed into Kubernetes, in particular we have the **PushGateway** which pulls our metrics from our jobs/exporters. +Later on you are going to see how this looks when deployed into Kubernetes, in particular we have the **PushGateway** which pulls our metrics from our jobs/exporters. -We have the **AlertManager** which pushes alerts and this is where we can integrate into external services such as email, slack and other tooling. +We have the **AlertManager** which pushes alerts and this is where we can integrate into external services such as email, slack and other tooling. -Then we have the Prometheus server which manages the retrieval of those pull metrics from the PushGateway and then sends those push alerts to the AlertManager. The Prometheus server also stores data on a local disk. Although can leverage remote storage solutions. +Then we have the Prometheus server which manages the retrieval of those pull metrics from the PushGateway and then sends those push alerts to the AlertManager. The Prometheus server also stores data on a local disk. Although can leverage remote storage solutions. -We then also have PromQL which is the language used to interact with the metrics, this can be seen later on with the Prometheus Web UI but you will also see later on in this section how this is also used within Data visualisation tools such as Grafana. +We then also have PromQL which is the language used to interact with the metrics, this can be seen later on with the Prometheus Web UI but you will also see later on in this section how this is also used within Data visualisation tools such as Grafana. -### Ways to Deploy Prometheus +### Ways to Deploy Prometheus -Various ways of installing Prometheus, [Download Section](https://prometheus.io/download/) Docker images are also available. +Various ways of installing Prometheus, [Download Section](https://prometheus.io/download/) Docker images are also available. `docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus` -But we are going to focus our efforts on deploying to Kubernetes. Which also has some options. +But we are going to focus our efforts on deploying to Kubernetes. Which also has some options. 
-- Create configuration YAML files +- Create configuration YAML files - Using an Operator (manager of all prometheus components) -- Using helm chart to deploy operator +- Using helm chart to deploy operator -### Deploying to Kubernetes +### Deploying to Kubernetes -We will be using our minikube cluster locally again for this quick and simple installation. As with previous touch points with minikube, we will be using helm to deploy the Prometheus helm chart. +We will be using our minikube cluster locally again for this quick and simple installation. As with previous touch points with minikube, we will be using helm to deploy the Prometheus helm chart. -`helm repo add prometheus-community https://prometheus-community.github.io/helm-charts` +`helm repo add prometheus-community https://prometheus-community.github.io/helm-charts` ![](Images/Day78_Monitoring1.png) -As you can see from the above we have also ran a helm repo update, we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command. +As you can see from the above, we have also run a `helm repo update`; we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command. ![](Images/Day78_Monitoring2.png) -After a couple of minutes you will see a number of new pods appear, for this demo I have deployed into the default namespace, I would normally push this to its own namespace. +After a couple of minutes you will see a number of new pods appear; for this demo I have deployed into the default namespace, although I would normally push this into its own namespace. ![](Images/Day78_Monitoring3.png) -Once all the pods are running we can also take a look at all the deployed aspects of Prometheus. +Once all the pods are running we can also take a look at all the deployed aspects of Prometheus. ![](Images/Day78_Monitoring4.png) -Now for us to access the Prometheus Server UI we can use the following command to port forward. +To access the Prometheus server UI we can use the following command to port forward. -``` +```Shell export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}") kubectl --namespace default port-forward $POD_NAME 9090 ``` + -When we first open our browser to http://localhost:9090 we see the following very blank screen. +When we first open our browser to `http://localhost:9090` we see the following very blank screen. ![](Images/Day78_Monitoring5.png) @@ -79,17 +81,17 @@ Because we have deployed to our Kubernetes cluster we will automatically be pick ![](Images/Day78_Monitoring6.png) -Short on learning PromQL and putting that into practice this is very much like I mentioned previously in that gaining metrics is great, so is monitoring but you have to know what you are monitoring and why and what you are not monitoring and why! +We stop short of learning PromQL and putting it into practice here, but as I mentioned previously, gathering metrics and monitoring are only useful if you know what you are monitoring and why, and what you are not monitoring and why! -I want to come back to Prometheus but for now I think we need to think about Log Management and Data Visualisation to bring us back to Prometheus later on. +I want to come back to Prometheus, but for now I think we need to think about Log Management and Data Visualisation, which will bring us back to Prometheus later on.
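Before we do, if you want a quick taste of PromQL, here are a few expressions to try in the expression browser. These assume the node exporter and kube-state-metrics components that ship with the community helm chart are running, which they are by default:

```
# Is every scrape target up? (1 = up, 0 = down)
up

# Approximate CPU busy percentage per node over the last 5 minutes
100 * (1 - avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])))

# Pods currently stuck in a Pending state
kube_pod_status_phase{phase="Pending"} == 1
```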
-## Resources +## Resources - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) diff --git a/Days/day79.md b/Days/day79.md index d941e1a2a..1db073544 100644 --- a/Days/day79.md +++ b/Days/day79.md @@ -1,41 +1,42 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Log Management - Day 79' +title: "#90DaysOfDevOps - The Big Picture: Log Management - Day 79" published: false description: 90DaysOfDevOps - The Big Picture Log Management -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049057 --- + ## The Big Picture: Log Management -A continuation to the infrastructure monitoring challenges and solutions, log management is another puzzle peice to the overall observability jigsaw. +A continuation to the infrastructure monitoring challenges and solutions, log management is another puzzle piece to the overall observability jigsaw. -### Log Management & Aggregation +### Log Management & Aggregation -Let's talk about two core concepts the first of which is log aggregation and it's a way of collecting and tagging application logs from many different services and to a single dashboard that can easily be searched. +Let's talk about two core concepts the first of which is log aggregation and it's a way of collecting and tagging application logs from many different services and to a single dashboard that can easily be searched. -One of the first systems that have to be built out in an application performance management system is log aggregation. Application performance management is the part of the devops lifecycle where things have been built and deployed and you need to make sure that they're continuously working so they have enough resources allocated to them and errors aren't being shown to users. In most production deployments there are many related events that emit logs across services at google a single search might hit ten different services before being returned to the user if you got unexpected search results that might mean a logic problem in any of the ten services and log aggregation helps companies like google diagnose problems in production, they've built a single dashboard where they can map every request to unique id so if you search something your search will get a unique id and then every time that search is passing through a different service that service will connect that id to what they're currently doing. +One of the first systems that have to be built out in an application performance management system is log aggregation. 
Application performance management is the part of the devops lifecycle where things have been built and deployed and you need to make sure that they're continuously working so they have enough resources allocated to them and errors aren't being shown to users. In most production deployments there are many related events that emit logs across services at google a single search might hit ten different services before being returned to the user if you got unexpected search results that might mean a logic problem in any of the ten services and log aggregation helps companies like google diagnose problems in production, they've built a single dashboard where they can map every request to unique id so if you search something your search will get a unique id and then every time that search is passing through a different service that service will connect that id to what they're currently doing. -This is the essence of a good log aggregation platform efficiently collect logs from everywhere that emits them and make them easily searchable in the case of a fault again. +This is the essence of a good log aggregation platform efficiently collect logs from everywhere that emits them and make them easily searchable in the case of a fault again. -### Example App +### Example App -Our example application is a web app, we have a typical front end and backend storing our critical data to a MongoDB database. +Our example application is a web app, we have a typical front end and backend storing our critical data to a MongoDB database. -If a user told us the page turned all white and printed an error message we would be hard-pressed to diagnose the problem with our current stack the user would need to manually send us the error and we'd need to match it with relevant logs in the other three services. +If a user told us the page turned all white and printed an error message we would be hard-pressed to diagnose the problem with our current stack the user would need to manually send us the error and we'd need to match it with relevant logs in the other three services. -### ELK +### ELK -Let's take a look at ELK, a popular open source log aggregation stack named after its three components elasticsearch, logstash and kibana if we installed it in the same environment as our example app. +Let's take a look at ELK, a popular open source log aggregation stack named after its three components elasticsearch, logstash and kibana if we installed it in the same environment as our example app. -The web application would connect to the frontend which then connects to the backend, the backend would send logs to logstash and then the way that these three components work +The web application would connect to the frontend which then connects to the backend, the backend would send logs to logstash and then the way that these three components work -### The components of elk +### The components of elk -Elasticsearch, logstash and Kibana is that all of services send logs to logstash, logstash takes these logs which are text emitted by the application. For example the web application when you visit a web page, the web page might log this visitor access to this page at this time and that's an example of a log message those logs would be sent to logstash. +Elasticsearch, logstash and Kibana is that all of services send logs to logstash, logstash takes these logs which are text emitted by the application. 
For example the web application when you visit a web page, the web page might log this visitor access to this page at this time and that's an example of a log message those logs would be sent to logstash. -Logstash would then extract things from them so for that log message user did **thing**, at **time**. It would extract the time and extract the message and extract the user and include those all as tags so the message would be an object of tags and message so that you could search them easily you could find all of the requests made by a specific user but logstash doesn't store things itself it stores things in elasticsearch which is a efficient database for querying text and elasticsearch exposes the results as Kibana and Kibana is a web server that connects to elasticsearch and allows administrators as the devops person or other people on your team, the on-call engineer to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, Kibana would query elasticsearch for logs matching whatever you wanted. +Logstash would then extract things from them so for that log message user did **thing**, at **time**. It would extract the time and extract the message and extract the user and include those all as tags so the message would be an object of tags and message so that you could search them easily you could find all of the requests made by a specific user but logstash doesn't store things itself it stores things in elasticsearch which is a efficient database for querying text and elasticsearch exposes the results as Kibana and Kibana is a web server that connects to elasticsearch and allows administrators as the devops person or other people on your team, the on-call engineer to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, Kibana would query elasticsearch for logs matching whatever you wanted. You could say hey Kibana in the search bar I want to find errors and kibana would say elasticsearch find the messages which contain the string error and then elasticsearch would return results that had been populated by logstash. Logstash would have been sent those results from all of the other services. @@ -43,39 +44,38 @@ You could say hey Kibana in the search bar I want to find errors and kibana woul A user says i saw error code one two three four five six seven when i tried to do this with elk setup we'd have to go to kibana enter one two three four five six seven in the search bar press enter and then that would show us the logs that corresponded to that and one of the logs might say internal server error returning one two three four five six seven and we'd see that the service that emitted that log was the backend and we'd see what time that log was emitted at so we could go to the time in that log and we could look at the messages above and below it in the backend and then we could see a better picture of what happened for the user's request and we'd be able to repeat this process going to other services until we found what actually caused the problem for the user. -### Security and Access to Logs +### Security and Access to Logs -An important peice of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access), logs can contain sensitive information like tokens it's important that only authenticated users can access them you wouldn't want to expose Kibana to the internet without some way of authenticating. 
+An important piece of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access), logs can contain sensitive information like tokens it's important that only authenticated users can access them you wouldn't want to expose Kibana to the internet without some way of authenticating. ### Examples of Log Management Tools Examples of log management platforms there's -- Elasticsearch -- Logstash -- Kibana +- Elasticsearch +- Logstash +- Kibana - Fluentd - popular open source choice -- Datadog - hosted offering, commonly used at larger enterprises, -- LogDNA - hosted offering -- Splunk - -Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging. +- Datadog - hosted offering, commonly used at larger enterprises, +- LogDNA - hosted offering +- Splunk +Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging. -Log Management is a key aspect of the overall observability of your applications and instracture environment for diagnosing problems in production it's relatively simple to install a turnkey solution like ELK or CloudWatch and it makes diagnosing and triaging problems in production significantly easier. +Log Management is a key aspect of the overall observability of your applications and instracture environment for diagnosing problems in production it's relatively simple to install a turnkey solution like ELK or CloudWatch and it makes diagnosing and triaging problems in production significantly easier. -## Resources +## Resources - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 80](day80.md) diff --git a/Days/day80.md b/Days/day80.md index 7fd8324c7..855413b9d 100644 --- a/Days/day80.md +++ b/Days/day80.md @@ -1,36 +1,34 @@ --- -title: '#90DaysOfDevOps - ELK Stack - Day 80' +title: "#90DaysOfDevOps - ELK Stack - Day 80" published: false description: 90DaysOfDevOps - ELK Stack -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048746 --- 
-## ELK Stack -In this session, we are going to get a little more hands-on with some of the options we have mentioned. +## ELK Stack -### ELK Stack +In this session, we are going to get a little more hands-on with some of the options we have mentioned. -ELK Stack is the combination of 3 separate tools: +ELK Stack is the combination of 3 separate tools: - [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. -- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash." +- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash." -- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps. +- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps. ELK stack lets us reliably and securely take data from any source, in any format, then search, analyze, and visualize it in real time. On top of the above mentioned components you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack. - -- Logs: Server logs that need to be analyzed are identified +- Logs: Server logs that need to be analysed are identified - Logstash: Collect logs and events data. It even parses and transforms data -- ElasticSearch: The transformed data from Logstash is Store, Search, and indexed. +- ElasticSearch: The transformed data from Logstash is Store, Search, and indexed. - Kibana uses Elasticsearch DB to Explore, Visualize, and Share @@ -40,69 +38,69 @@ On top of the above mentioned components you might also see Beats which are ligh A good resource explaining this [The Complete Guide to the ELK Stack](https://logz.io/learn/complete-guide-elk-stack/) -With the addition of beats the ELK Stack is also now known as Elastic Stack. +With the addition of beats the ELK Stack is also now known as Elastic Stack. -For the hands-on scenario there are many places you can deploy the Elastic Stack but we are going to be using docker compose to deploy locally on our system. +For the hands-on scenario there are many places you can deploy the Elastic Stack but we are going to be using docker compose to deploy locally on our system. [Start the Elastic Stack with Docker Compose](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#get-started-docker-tls) ![](Images/Day80_Monitoring1.png) -You will find the original files and walkthrough that I used here [ deviantony/docker-elk](https://github.com/deviantony/docker-elk) +You will find the original files and walkthrough that I used here [deviantony/docker-elk](https://github.com/deviantony/docker-elk) -Now we can run `docker-compose up -d`, the first time this has been ran will require the pulling of images. 
+Now we can run `docker-compose up -d`, the first time this has been ran will require the pulling of images. ![](Images/Day80_Monitoring2.png) If you follow either this repository or the one that I used you will have either have the password of "changeme" or in my repository the password of "90DaysOfDevOps". The username is "elastic" -After a few minutes we can navigate to http://localhost:5601/ which is our Kibana server / Docker container. +After a few minutes we can navigate to `http://localhost:5601/` which is our Kibana server / Docker container. ![](Images/Day80_Monitoring3.png) -Your initial home screen is going to look something like this. +Your initial home screen is going to look something like this. ![](Images/Day80_Monitoring4.png) -Under the section titled "Get started by adding integrations" there is a "try sample data" click this and we can add one of the shown below. +Under the section titled "Get started by adding integrations" there is a "try sample data" click this and we can add one of the shown below. ![](Images/Day80_Monitoring5.png) -I am going to select "Sample web logs" but this is really to get a look and feel of what data sets you can get into the ELK stack. +I am going to select "Sample web logs" but this is really to get a look and feel of what data sets you can get into the ELK stack. -When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the drop down. +When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the drop down. ![](Images/Day80_Monitoring6.png) -As it states on the dashboard view: +As it states on the dashboard view: **Sample Logs Data** -*This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.* +> This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs. ![](Images/Day80_Monitoring7.png) -This is using Kibana to visualise data that has been added into ElasticSearch via Logstash. This is not the only option but I personally wanted to deploy and look at this. +This is using Kibana to visualise data that has been added into ElasticSearch via Logstash. This is not the only option but I personally wanted to deploy and look at this. -We are going to cover Grafana at some point and you are going to see some data visualisation similarities between the two, you have also seen Prometheus. +We are going to cover Grafana at some point and you are going to see some data visualisation similarities between the two, you have also seen Prometheus. -The key takeaway I have had between the Elastic Stack and Prometheus + Grafana is that Elastic Stack or ELK Stack is focused on Logs and Prometheus is focused on metrics. +The key takeaway I have had between the Elastic Stack and Prometheus + Grafana is that Elastic Stack or ELK Stack is focused on Logs and Prometheus is focused on metrics. -I was reading this article from MetricFire [Prometheus vs. ELK](https://www.metricfire.com/blog/prometheus-vs-elk/) to get a better understanding of the different offerings. +I was reading this article from MetricFire [Prometheus vs. 
ELK](https://www.metricfire.com/blog/prometheus-vs-elk/) to get a better understanding of the different offerings. -## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 81](day81.md) diff --git a/Days/day81.md b/Days/day81.md index f252d3680..95040efa3 100644 --- a/Days/day81.md +++ b/Days/day81.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Fluentd & FluentBit - Day 81' +title: "#90DaysOfDevOps - Fluentd & FluentBit - Day 81" published: false description: 90DaysOfDevOps - Fluentd & FluentBit tags: "devops, 90daysofdevops, learning" @@ -7,9 +7,10 @@ cover_image: null canonical_url: null id: 1048716 --- + ## Fluentd & FluentBit -Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/). An Open-Source unified logging layer. +Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/). An Open-Source unified logging layer. Fluentd has four key features that makes it suitable to build clean, reliable logging pipelines: @@ -23,48 +24,47 @@ Built-in Reliability: Data loss should never happen. Fluentd supports memory- an [Installing Fluentd](https://docs.fluentd.org/quickstart#step-1-installing-fluentd) -### How apps log data? +### How apps log data? - Write to files. `.log` files (difficult to analyse without a tool and at scale) - Log directly to a database (each application must be configured with the correct format) - Third party applications (NodeJS, NGINX, PostgreSQL) -This is why we want a unified logging layer. +This is why we want a unified logging layer. -FluentD allows for the 3 logging data types shown above and gives us the ability to collect, process and send those to a destination, this could be sending them logs to Elastic, MongoDB, Kafka databases for example. 
+FluentD allows for the 3 logging data types shown above and gives us the ability to collect, process and send those to a destination, this could be sending them logs to Elastic, MongoDB, Kafka databases for example. -Any Data, Any Data source can be sent to FluentD and that can be sent to any destination. FluentD is not tied to any particular source or destination. +Any Data, Any Data source can be sent to FluentD and that can be sent to any destination. FluentD is not tied to any particular source or destination. -In my research of Fluentd I kept stumbling across Fluent bit as another option and it looks like if you were looking to deploy a logging tool into your Kubernetes environment then fluent bit would give you that capability, even though fluentd can also be deployed to containers as well as servers. +In my research of Fluentd I kept stumbling across Fluent bit as another option and it looks like if you were looking to deploy a logging tool into your Kubernetes environment then fluent bit would give you that capability, even though fluentd can also be deployed to containers as well as servers. [Fluentd & Fluent Bit](https://docs.fluentbit.io/manual/about/fluentd-and-fluent-bit) -Fluentd and Fluentbit will use the input plugins to transform that data to Fluent Bit format, then we have output plugins to whatever that output target is such as elasticsearch. - -We can also use tags and matches between configurations. +Fluentd and Fluentbit will use the input plugins to transform that data to Fluent Bit format, then we have output plugins to whatever that output target is such as elasticsearch. -I cannot see a good reason for using fluentd and it sems that Fluent Bit is the best way to get started. Although they can be used together in some architectures. +We can also use tags and matches between configurations. -### Fluent Bit in Kubernetes +I cannot see a good reason for using fluentd and it sems that Fluent Bit is the best way to get started. Although they can be used together in some architectures. -Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it will run on each node in the cluster. Each Fluent Bit pod on each node will then read each container on that node and gather all of the logs available. It will also gather the metadata from the Kubernetes API Server. +### Fluent Bit in Kubernetes -Kubernetes annotations can be used within the configuration YAML of our applications. +Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it will run on each node in the cluster. Each Fluent Bit pod on each node will then read each container on that node and gather all of the logs available. It will also gather the metadata from the Kubernetes API Server. +Kubernetes annotations can be used within the configuration YAML of our applications. -First of all we can deploy from the fluent helm repository. `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command. +First of all we can deploy from the fluent helm repository. `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command. ![](Images/Day81_Monitoring1.png) -In my cluster I am also running prometheus in my default namespace (for test purposes) we need to make sure our fluent-bit pod is up and running. we can do this using `kubectl get all | grep fluent` this is going to show us our running pod, service and daemonset that we mentioned earlier. 
+In my cluster I am also running prometheus in my default namespace (for test purposes) we need to make sure our fluent-bit pod is up and running. we can do this using `kubectl get all | grep fluent` this is going to show us our running pod, service and daemonset that we mentioned earlier. ![](Images/Day81_Monitoring2.png) -So that fluentbit knows where to get logs from we have a configuration file, in this Kubernetes deployment of fluentbit we have a configmap which resembles the configuration file. +So that fluentbit knows where to get logs from we have a configuration file, in this Kubernetes deployment of fluentbit we have a configmap which resembles the configuration file. ![](Images/Day81_Monitoring3.png) -That ConfigMap will look something like: +That ConfigMap will look something like: ``` Name: fluent-bit @@ -141,28 +141,26 @@ fluent-bit.conf: Events: ``` -We can now port-forward our pod to our localhost to ensure that we have connectivity. Firstly get the name of your pod with `kubectl get pods | grep fluent` and then use `kubectl port-forward fluent-bit-8kvl4 2020:2020` open a web browser to http://localhost:2020/ +We can now port-forward our pod to our localhost to ensure that we have connectivity. Firstly get the name of your pod with `kubectl get pods | grep fluent` and then use `kubectl port-forward fluent-bit-8kvl4 2020:2020` open a web browser to `http://localhost:2020/` ![](Images/Day81_Monitoring4.png) I also found this really great medium article covering more about [Fluent Bit](https://medium.com/kubernetes-tutorials/exporting-kubernetes-logs-to-elasticsearch-using-fluent-bit-758e8de606af) -## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) -- [ Fluent Bit explained | Fluent Bit vs Fluentd ](https://www.youtube.com/watch?v=B2IS-XS-cc0) - +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluent Bit explained | Fluent Bit vs Fluentd](https://www.youtube.com/watch?v=B2IS-XS-cc0) See you on [Day 82](day82.md) - diff --git a/Days/day82.md b/Days/day82.md index 954c5b38a..23cfb1742 100644 --- a/Days/day82.md +++ 
b/Days/day82.md @@ -1,21 +1,22 @@ --- -title: '#90DaysOfDevOps - EFK Stack - Day 82' +title: "#90DaysOfDevOps - EFK Stack - Day 82" published: false description: 90DaysOfDevOps - EFK Stack -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049059 --- + ### EFK Stack In the previous section, we spoke about ELK Stack, which uses Logstash as the log collector in the stack, in the EFK Stack we are swapping that out for FluentD or FluentBit. -Our mission in this section is to monitor our Kubernetes logs using EFK. +Our mission in this section is to monitor our Kubernetes logs using EFK. ### Overview of EFK -We will be deploying the following into our Kubernetes cluster. +We will be deploying the following into our Kubernetes cluster. ![](Images/Day82_Monitoring1.png) @@ -23,13 +24,13 @@ The EFK stack is a collection of 3 software bundled together, including: - Elasticsearch : NoSQL database is used to store data and provides interface for searching and query log. -- Fluentd : Fluentd is an open source data collector for unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and understanding of data. +- Fluentd : Fluentd is an open source data collector for unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and understanding of data. -- Kibana : Interface for managing and statistics logs. Responsible for reading information from elasticsearch . +- Kibana : Interface for managing and statistics logs. Responsible for reading information from elasticsearch. -### Deploying EFK on Minikube +### Deploying EFK on Minikube -We will be using our trusty minikube cluster to deploy our EFK stack. Let's start a cluster using `minikube start` on our system. I am using a Windows OS with WSL2 enabled. +We will be using our trusty minikube cluster to deploy our EFK stack. Let's start a cluster using `minikube start` on our system. I am using a Windows OS with WSL2 enabled. ![](Images/Day82_Monitoring2.png) @@ -37,20 +38,21 @@ I have created [efk-stack.yaml](Days/Monitoring/../../Monitoring/EFK%20Stack/efk ![](Images/Day82_Monitoring3.png) -Depending on your system and if you have ran this already and have images pulled you should now watch the pods into a ready state before we can move on, you can check the progress with the following command. `kubectl get pods -n kube-logging -w` This can take a few minutes. +Depending on your system and if you have ran this already and have images pulled you should now watch the pods into a ready state before we can move on, you can check the progress with the following command. `kubectl get pods -n kube-logging -w` This can take a few minutes. ![](Images/Day82_Monitoring4.png) -The above command lets us keep an eye on things but I like to clarify that things are all good by just running the following `kubectl get pods -n kube-logging` command to ensure all pods are now up and running. +The above command lets us keep an eye on things but I like to clarify that things are all good by just running the following `kubectl get pods -n kube-logging` command to ensure all pods are now up and running. 
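As one extra check that is not part of the original walkthrough, you can also ask Elasticsearch itself whether the cluster is healthy by port forwarding its service and hitting the cluster health API. The service name `elasticsearch` is an assumption here, so use whatever `kubectl get svc -n kube-logging` reports:

```Shell
# Forward the Elasticsearch service to localhost (runs in the background)
kubectl port-forward svc/elasticsearch 9200:9200 -n kube-logging &

# A status of green or yellow means the cluster is serving requests
curl "http://localhost:9200/_cluster/health?pretty"
```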
![](Images/Day82_Monitoring5.png) -Once we have all our pods up and running and at this stage we should see +Once we have all our pods up and running, we should see: + - 3 pods associated to ElasticSearch - 1 pod associated to Fluentd - 1 pod associated to Kibana -We can also use `kubectl get all -n kube-logging` to show all in our namespace, fluentd as explained previously is deployed as a daemonset, kibana as a deployment and Elasticsearch as a statefulset. +We can also use `kubectl get all -n kube-logging` to show everything in our namespace; Fluentd, as explained previously, is deployed as a daemonset, Kibana as a deployment and Elasticsearch as a statefulset. ![](Images/Day82_Monitoring6.png) @@ -58,55 +60,53 @@ Now all of our pods are up and running we can now issue in a new terminal the po ![](Images/Day82_Monitoring7.png) -We can now open up a browser and navigate to this address, http://localhost:5601 you will be greeted with either the screen you see below or you might indeed see a sample data screen or continue and configure yourself. Either way and by all means look at that test data, it is what we covered when we looked at the ELK stack in a previous session. +We can now open up a browser and navigate to `http://localhost:5601`. You will be greeted with either the screen you see below, or you might see a sample data screen, or you can continue and configure things yourself. Either way, by all means look at that test data; it is what we covered when we looked at the ELK stack in a previous session. ![](Images/Day82_Monitoring8.png) -Next, we need to hit the "discover" tab on the left menu and add "*" to our index pattern. Continue to the next step by hitting "Next step". +Next, we need to hit the "discover" tab on the left menu and add "\*" to our index pattern. Continue to the next step by hitting "Next step". ![](Images/Day82_Monitoring9.png) -On Step 2 of 2, we are going to use the @timestamp option from the dropdown as this will filter our data by time. When you hit create pattern it might take a few seconds to complete. +On Step 2 of 2, we are going to use the @timestamp option from the dropdown, as this will filter our data by time. When you hit create pattern it might take a few seconds to complete. ![](Images/Day82_Monitoring10.png) -If we now head back to our "discover" tab after a few seconds you should start to see data coming in from your Kubernetes cluster. +If we now head back to our "discover" tab, after a few seconds you should start to see data coming in from your Kubernetes cluster. ![](Images/Day82_Monitoring11.png) -Now that we have the EFK stack up and running and we are gathering logs from our Kubernetes cluster via Fluentd we can also take a look at other sources we can choose from, if you navigate to the home screen by hitting the Kibana logo in the top left you will be greeted with the same page we saw when we first logged in. +Now that we have the EFK stack up and running and we are gathering logs from our Kubernetes cluster via Fluentd, we can also take a look at other sources we can choose from. If you navigate to the home screen by hitting the Kibana logo in the top left, you will be greeted with the same page we saw when we first logged in. -We have the ability to add APM, Log data, metric data and security events from other plugins or sources. +We have the ability to add APM, Log data, metric data and security events from other plugins or sources. 
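As a quick sanity check outside of Kibana, you can also confirm from the command line that Fluentd really is writing into Elasticsearch by port forwarding the Elasticsearch service and listing its indices. This is only a sketch; the `elasticsearch` service name is an assumption for this particular manifest, so check the real name with `kubectl get svc -n kube-logging` first.

```
# Expose Elasticsearch locally (verify the service name with: kubectl get svc -n kube-logging)
kubectl port-forward -n kube-logging svc/elasticsearch 9200:9200

# In another terminal, list the indices - you should see the indices Fluentd has been creating
curl -s "http://localhost:9200/_cat/indices?v"
```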
![](Images/Day82_Monitoring12.png) -If we select "Add log data" then we can see below that we have a lot of choices on where we want to get our logs from, you can see that Logstash is mentioned there which is part of the ELK stack. +If we select "Add log data" then we can see below that we have a lot of choices on where we want to get our logs from; you can see that Logstash is mentioned there, which is part of the ELK stack. ![](Images/Day82_Monitoring13.png) -Under the metrics data you will find that you can add sources for Prometheus and lots of other services. +Under the metrics data you will find that you can add sources for Prometheus and lots of other services. ### APM (Application Performance Monitoring) -There is also the option to gather APM (Application Performance Monitoring) which collects in-depth performance metrics and errors from inside your application. It allows you to monitor the performance of thousands of applications in real time. +There is also the option to gather APM (Application Performance Monitoring) data, which collects in-depth performance metrics and errors from inside your application. It allows you to monitor the performance of thousands of applications in real time. I am not going to get into APM here but you can find out more on the [Elastic site](https://www.elastic.co/observability/application-performance-monitoring) - -## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 83](day83.md) - diff --git a/Days/day83.md index 403fbc3a0..4ff84ab0f 100644 --- a/Days/day83.md +++ b/Days/day83.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Data Visualisation - Grafana - Day 83' +title: "#90DaysOfDevOps - Data Visualisation - Grafana - Day 83" published: false description: 90DaysOfDevOps - Data Visualisation - Grafana tags: "devops, 90daysofdevops, learning" @@ -7,57 +7,58 @@ cover_image: null canonical_url: null id: 1048767 --- + ## Data Visualisation - Grafana -We saw a lot of Kibana over this section around Observability. 
But we have to also take some time to cover Grafana. But also they are not the same and they are not completely competing against each other. +We saw a lot of Kibana over this section around Observability, but we also have to take some time to cover Grafana. They are not the same, and they are not completely competing against each other. Kibana’s core feature is data querying and analysis. Using various methods, users can search the data indexed in Elasticsearch for specific events or strings within their data for root cause analysis and diagnostics. Based on these queries, users can use Kibana’s visualisation features which allow users to visualize data in a variety of different ways, using charts, tables, geographical maps and other types of visualizations. -Grafana actually started as a fork of Kibana, Grafana had an aim to supply support for metrics aka monitoring, which at that time Kibana did not provide. +Grafana actually started as a fork of Kibana, with the aim of supplying support for metrics, aka monitoring, which at that time Kibana did not provide. -Grafana is a free and Open-Source data visualisation tool. We commonly see Prometheus and Grafana together out in the field but we might also see Grafana alongside Elasticsearch and Graphite. +Grafana is a free and Open-Source data visualisation tool. We commonly see Prometheus and Grafana together out in the field but we might also see Grafana alongside Elasticsearch and Graphite. -The key difference between the two tools is Logging vs Monitoring, we started the section off covering monitoring with Nagios and then into Prometheus before moving into Logging where we covered the ELK and EFK stacks. +The key difference between the two tools is Logging vs Monitoring; we started the section off covering monitoring with Nagios and then Prometheus, before moving into Logging where we covered the ELK and EFK stacks. -Grafana caters to analysing and visualising metrics such as system CPU, memory, disk and I/O utilisation. The platform does not allow full-text data querying. Kibana runs on top of Elasticsearch and is used primarily for analyzing log messages. +Grafana caters to analysing and visualising metrics such as system CPU, memory, disk and I/O utilisation. The platform does not allow full-text data querying. Kibana runs on top of Elasticsearch and is used primarily for analyzing log messages. -As we have already discovered with Kibana it is quite easy to deploy as well as having the choice of where to deploy, this is the same for Grafana. +As we have already discovered, Kibana is quite easy to deploy and gives you a choice of where to deploy it; the same is true for Grafana. -Both support installation on Linux, Mac, Windows, Docker or building from source. +Both support installation on Linux, Mac, Windows, Docker or building from source. -There are no doubt others but Grafana is a tool that I have seen spanning the virtual, cloud and cloud-native platforms so I wanted to cover this here in this section. +There are no doubt others, but Grafana is a tool that I have seen spanning the virtual, cloud and cloud-native platforms, so I wanted to cover it here in this section. -### Prometheus Operator + Grafana Deployment +### Prometheus Operator + Grafana Deployment -We have covered Prometheus already in this section but as we see these paired so often I wanted to spin up an environment that would allow us to at least see what metrics we could have displayed in a visualisation. 
We know that monitoring our environments is important but going through those metrics alone in Prometheus or any metric tool is going to be cumbersome and it is not going to scale. This is where Grafana comes in and provides us that interactive visualisation of those metrics collected and stored in the Prometheus database. +We have covered Prometheus already in this section, but as we see these paired so often, I wanted to spin up an environment that would allow us to at least see what metrics we could have displayed in a visualisation. We know that monitoring our environments is important, but going through those metrics alone in Prometheus or any metric tool is going to be cumbersome and it is not going to scale. This is where Grafana comes in and provides us with that interactive visualisation of those metrics collected and stored in the Prometheus database. -With that visualisation we can create custom charts, graphs and alerts for our environment. In this walkthrough we will be using our minikube cluster. +With that visualisation we can create custom charts, graphs and alerts for our environment. In this walkthrough we will be using our minikube cluster. We are going to start by cloning this down to our local system. Using `git clone https://github.com/prometheus-operator/kube-prometheus.git` and `cd kube-prometheus` ![](Images/Day83_Monitoring1.png) -First job is to create our namespace within our minikube cluster `kubectl create -f manifests/setup` if you have not been following along in previous sections we can use `minikube start` to bring up a new cluster here. +The first job is to create our namespace within our minikube cluster with `kubectl create -f manifests/setup`. If you have not been following along in previous sections, we can use `minikube start` to bring up a new cluster here. ![](Images/Day83_Monitoring2.png) -Next we are going to deploy everything we need for our demo using the `kubectl create -f manifests/` command, as you can see this is going to deploy a lot of different resources within our cluster. +Next we are going to deploy everything we need for our demo using the `kubectl create -f manifests/` command; as you can see, this is going to deploy a lot of different resources within our cluster. ![](Images/Day83_Monitoring3.png) -We then need to wait for our pods to come up and being in the running state we can use the `kubectl get pods -n monitoring -w` command to keep an eye on the pods. +We then need to wait for our pods to come up and be in the Running state; we can use the `kubectl get pods -n monitoring -w` command to keep an eye on the pods. ![](Images/Day83_Monitoring4.png) -When everything is running we can check all pods are in a running and healthy state using the `kubectl get pods -n monitoring` command. +When everything is running we can check all pods are in a running and healthy state using the `kubectl get pods -n monitoring` command. ![](Images/Day83_Monitoring5.png) -With the deployment, we deployed a number of services that we are going to be using later on in the demo you can check these by using the `kubectl get svc -n monitoring` command. +With the deployment, we deployed a number of services that we are going to be using later on in the demo; you can check these by using the `kubectl get svc -n monitoring` command. ![](Images/Day83_Monitoring6.png) -And finally lets check on all resources deployed in our new monitoring namespace using the `kubectl get all -n monitoring` command. 
+And finally, let's check on all resources deployed in our new monitoring namespace using the `kubectl get all -n monitoring` command. ![](Images/Day83_Monitoring7.png) @@ -65,19 +66,21 @@ Opening a new terminal we are now ready to access our Grafana tool and start gat ![](Images/Day83_Monitoring8.png) -Open a browser and navigate to http://localhost:3000 you will be prompted for a username and password. +Open a browser and navigate to `http://localhost:3000`, where you will be prompted for a username and password. ![](Images/Day83_Monitoring9.png) -The default username and password to access is +The default username and password to access Grafana are: + ``` -Username: admin +Username: admin Password: admin ``` -However you will be asked to provide a new password at first login. The initial screen or home page you will see will give you some areas to explore as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets we will be using them later. + +However, you will be asked to provide a new password at first login. The initial screen or home page you see will give you some areas to explore as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets; we will be using them later. ![](Images/Day83_Monitoring10.png) -You will find that there is already a prometheus data source already added to our Grafana data sources, however because we are using minikube we need to also port forward prometheus so that this is available on our localhost, opening a new terminal we can run the following command. `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090` if on the home page of Grafana we now enter into the widget "Add your first data source" and from here we are going to select Prometheus. +You will find that there is already a Prometheus data source added to our Grafana data sources; however, because we are using minikube we also need to port forward Prometheus so that it is available on our localhost. Opening a new terminal, we can run the following command: `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090`. On the home page of Grafana we can now enter the "Add your first data source" widget and from here we are going to select Prometheus. ![](Images/Day83_Monitoring11.png) @@ -85,7 +88,7 @@ For our new data source we can use the address http://localhost:9090 and we will ![](Images/Day83_Monitoring12.png) -At the bottom of the page, we can now hit save and test. This should give us the outcome you see below if the port forward for prometheus is working. +At the bottom of the page, we can now hit save and test. This should give us the outcome you see below if the port forward for Prometheus is working. 
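To recap, two port forwards need to be running in separate terminals for that save and test to succeed. This is only a sketch; the `grafana` service name is taken from the kube-prometheus defaults, so treat it as an assumption and verify it with `kubectl get svc -n monitoring`.

```
# Terminal 1 - Grafana UI on http://localhost:3000 (default login admin/admin, changed at first login)
kubectl --namespace monitoring port-forward svc/grafana 3000

# Terminal 2 - Prometheus on http://localhost:9090, the address we gave the Grafana data source
kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090
```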
![](Images/Day83_Monitoring13.png) @@ -93,58 +96,58 @@ Head back to the home page and find the option to "Create your first dashboard" ![](Images/Day83_Monitoring14.png) -You will see from below that we are already gathering from our Grafana data source, but we would like to gather metrics from our Prometheus data source, select the data source drop down and select our newly created "Prometheus-1" +You will see from below that we are already gathering from our Grafana data source, but we would like to gather metrics from our Prometheus data source; select the data source drop down and select our newly created "Prometheus-1". ![](Images/Day83_Monitoring15.png) -If you then select the Metrics browser you will have a long list of metrics being gathered from Prometheus related to our minikube cluster. +If you then select the Metrics browser you will have a long list of metrics being gathered from Prometheus related to our minikube cluster. ![](Images/Day83_Monitoring16.png) -For the purpose of the demo I am going to find a metric that gives us some output around our system resources, `cluster:node_cpu:ratio{}` gives us some detail on the nodes in our cluster and proves that this integration is working. +For the purpose of the demo I am going to find a metric that gives us some output around our system resources; `cluster:node_cpu:ratio{}` gives us some detail on the nodes in our cluster and proves that this integration is working. ![](Images/Day83_Monitoring17.png) -Once you are happy with this as your visualisation then you can hit the apply button in the top right and you will then add this graph to your dashboard. Obviously you can go ahead and add additional graphs and other charts to give you the visual that you need. +Once you are happy with this as your visualisation, you can hit the apply button in the top right and this graph will be added to your dashboard. Obviously you can go ahead and add additional graphs and other charts to give you the visual that you need. ![](Images/Day83_Monitoring18.png) -We can however take advantage of thousands of previously created dashboards that we can use so that we do not need to reinvent the wheel. +We can, however, take advantage of thousands of previously created dashboards so that we do not need to reinvent the wheel. ![](Images/Day83_Monitoring19.png) -If we do a search for Kubernetes we will see a long list of pre built dashboards that we can choose from. +If we do a search for Kubernetes we will see a long list of pre-built dashboards that we can choose from. ![](Images/Day83_Monitoring20.png) -We have chosen the Kubernetes API Server dashboard and changed the data source to suit our newly added Prometheus-1 data source and we get to see some of the metrics displayed as per below. +We have chosen the Kubernetes API Server dashboard and changed the data source to our newly added Prometheus-1 data source, and we get to see some of the metrics displayed as per below. ![](Images/Day83_Monitoring21.png) ### Alerting -You could also leverage the alertmanager that we deployed to then send alerts out to slack or other integrations, in order to do this you would need to port foward the alertmanager service using the below details. +You could also leverage the Alertmanager that we deployed to send alerts out to Slack or other integrations; in order to do this you would need to port forward the Alertmanager service using the below details. 
`kubectl --namespace monitoring port-forward svc/alertmanager-main 9093` -http://localhost:9093 +`http://localhost:9093` -That wraps up our section on all things observability, I have personally found that this section has highlighted how broad this topic is but equally how important this is for our roles and that be it metrics, logging or tracing you are going to need to have a good idea of what is happening in our broad environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections. +That wraps up our section on all things observability. I have personally found that this section has highlighted how broad this topic is, but equally how important it is for our roles; be it metrics, logging or tracing, you are going to need to have a good idea of what is happening in our broad environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections. -Next up we are going to be taking a look into data management and how DevOps principles also needs to be considered when it comes to Data Management. +Next up we are going to be taking a look at data management and how DevOps principles also need to be considered when it comes to Data Management. -## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 84](day84.md) diff --git a/Days/day84.md index 6397d33a8..27ff48bd0 100644 --- a/Days/day84.md +++ b/Days/day84.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Data Management - Day 84' +title: "#90DaysOfDevOps - The Big Picture: Data Management - Day 84" published: false description: 90DaysOfDevOps - The Big Picture Data Management tags: "devops, 90daysofdevops, learning" @@ -7,61 +7,61 @@ cover_image: null canonical_url: null id: 1048747 --- + ## The Big Picture: Data Management ![](Images/Day84_Data1.png) -Data Management is by no means a new wall to climb, although we 
do know that data is more important than it maybe was a few years ago. Valuable and ever changing it can also be a massive nightmare when we are talking about automation and continuously integrate, test and deploy frequent software releases. Enter the persistent data and underlying data services often the main culprit when things go wrong. +Data Management is by no means a new wall to climb, although we do know that data is more important than it maybe was a few years ago. Valuable and ever changing, it can also be a massive nightmare when we are talking about automation and continuously integrating, testing and deploying frequent software releases. Enter persistent data and the underlying data services, often the main culprit when things go wrong. -But before I get into the Cloud-Native Data Management, we need to go up a level. We have touched on many different platforms throughout this challenge. Be it Physical, Virtual, Cloud and Cloud-Native obviously including Kubernetes there is none of these platforms that provide the lack of requirement for data management. +But before I get into Cloud-Native Data Management, we need to go up a level. We have touched on many different platforms throughout this challenge. Be it Physical, Virtual, Cloud or Cloud-Native, obviously including Kubernetes, none of these platforms removes the requirement for data management. -Whatever our business it is more than likely you will find a database lurking in the environment somewhere, be it for the most mission critical system in the business or at least some cog in the chain is storing that persistent data on some level of system. +Whatever our business, it is more than likely you will find a database lurking in the environment somewhere, be it for the most mission critical system in the business or at least some cog in the chain storing that persistent data on some level of system. -### DevOps and Data +### DevOps and Data -Much like the very start of this series where we spoke about the DevOps principles, in order for a better process when it comes to data you have to include the right people. This might be the DBAs but equally that is going to include people that care about the backup of those data services as well. +Much like the very start of this series where we spoke about the DevOps principles, in order to have a better process when it comes to data you have to include the right people. This might be the DBAs, but equally that is going to include people that care about the backup of those data services as well. -Secondly we also need to identify the different data types, domains, boundaries that we have associated with our data. This way it is not just dealt with in a silo approach amongst Database administrators, storage engineers or Backup focused engineers. This way the whole team can determine the best route of action when it comes to developing and hosting applications for the wider business and focus on the data architecture vs it being an after thought. +Secondly, we also need to identify the different data types, domains and boundaries that we have associated with our data. This way it is not just dealt with in a silo approach amongst database administrators, storage engineers or backup focused engineers; instead, the whole team can determine the best route of action when it comes to developing and hosting applications for the wider business and focus on the data architecture vs it being an afterthought. 
-Now, this can span many different areas of the data lifecycle, we could be talking about data ingest, where and how will data be ingested into our service or application? How will the service, application or users access this data. But then it also requires us to understand how we will secure the data and then how will we protect that data. +Now, this can span many different areas of the data lifecycle; we could be talking about data ingest: where and how will data be ingested into our service or application? How will the service, application or users access this data? But then it also requires us to understand how we will secure the data and then how we will protect that data. -### Data Management 101 -Data management according to the [Data Management Body of Knowledge](https://www.dama.org/cpages/body-of-knowledge) is “the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.” +### Data Management 101 +Data management according to the [Data Management Body of Knowledge](https://www.dama.org/cpages/body-of-knowledge) is “the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.” -- Data is the most important aspect of your business - Data is only one part of your overall business. I have seen the term "Data is the lifeblood of our business" and most likely absolutely true. Which then got me thinking about blood being pretty important to the body but alone it is nothing we still need the aspects of the body to make the blood something other than a liquid. +- Data is the most important aspect of your business - Data is only one part of your overall business. I have seen the term "Data is the lifeblood of our business", and it is most likely absolutely true. Which then got me thinking: blood is pretty important to the body, but alone it is nothing; we still need the rest of the body to make the blood something other than a liquid. -- Data quality is more important than ever - We are having to treat data as a business asset, meaning that we have to give it the considerations it needs and requires to work with our automation and DevOps principles. +- Data quality is more important than ever - We are having to treat data as a business asset, meaning that we have to give it the considerations it needs and requires to work with our automation and DevOps principles. -- Accessing data in a timely fashion - Nobody has the patience to not have access to the right data at the right time to make effective decisions. Data must be available in a streamlined and timely manher regardless of presentation. +- Accessing data in a timely fashion - Nobody has the patience to not have access to the right data at the right time to make effective decisions. Data must be available in a streamlined and timely manner regardless of presentation. -- Data Management has to be an enabler to DevOps - I mentioned streamline previously, we have to include the data management requirements into our cycle and ensure not just availablity of that data but also include other important policy based protection of those data points along with fully tested recovery models with that as well. 
+- Data Management has to be an enabler to DevOps - I mentioned streamlining previously; we have to include the data management requirements in our cycle and ensure not just availability of that data but also other important policy-based protection of those data points, along with fully tested recovery models as well. -### DataOps -Both DataOps and DevOps apply the best practices of technology development and operations to improve quality, increase speed, reduce security threats, delight customers and provide meaningful and challenging work for skilled professionals. DevOps and DataOps share goals to accelerate product delivery by automating as many process steps as possible. For DataOps, the objective is a resilient data pipeline and trusted insights from data analytics. +### DataOps +Both DataOps and DevOps apply the best practices of technology development and operations to improve quality, increase speed, reduce security threats, delight customers and provide meaningful and challenging work for skilled professionals. DevOps and DataOps share goals to accelerate product delivery by automating as many process steps as possible. For DataOps, the objective is a resilient data pipeline and trusted insights from data analytics. -Some of the most common higher level areas that focus on DataOps are going to be Machine Learning, Big Data and Data Analytics including Artifical Intelligence. +Some of the most common higher level areas that focus on DataOps are going to be Machine Learning, Big Data and Data Analytics, including Artificial Intelligence. ### Data Management is the management of information -My focus throughout this section is not going to be getting into Machine Learning or Articial Intelligence but to focus on the protecting the data from a data protection point of view, the title of this subsection is "Data management is the management of information" and we can relate that information = data. +My focus throughout this section is not going to be getting into Machine Learning or Artificial Intelligence but to focus on protecting the data from a data protection point of view; the title of this subsection is "Data management is the management of information" and we can relate that information = data. -Three key areas that we should consider along this journey with data are: +Three key areas that we should consider along this journey with data are: -- Accuracy - Making sure that production data is accurate, equally we need to ensure that our data in the form of backups are also working and tested against recovery to be sure if a failure or a reason comes up we need to be able to get back up and running as fast as possible. - -- Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data, this also spans into data protection when it comes to protecting these data services especially data services we need to ensure consistency at different levels to make sure we are taking a good clean copy of that data for our backups, replicas etc. +- Accuracy - Making sure that production data is accurate; equally, we need to ensure that our data in the form of backups is also working and tested against recovery, to be sure that if a failure comes up we are able to get back up and running as fast as possible. 
+- Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data. This also spans into data protection; when it comes to protecting these data services we need to ensure consistency at different levels to make sure we are taking a good clean copy of that data for our backups, replicas etc. -- Secure - Access Control but equally just keeping data in general is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount, again this leads into data protection where we must make sure that only the required personnel have access to backups and the ability to restore from those as well clone and provide other versions of the business data. +- Secure - Access control, but equally just keeping data safe in general, is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount; again this leads into data protection, where we must make sure that only the required personnel have access to backups and the ability to restore from them, as well as to clone and provide other versions of the business data. -Better Data = Better Decisions +Better Data = Better Decisions -### Data Management Days +### Data Management Days -During the next 6 sessions we are going to be taking a closer look at Databases, Backup & Recovery, Disaster Recovery, Application Mobility all with an element of demo and hands on throughout. +During the next 6 sessions we are going to be taking a closer look at Databases, Backup & Recovery, Disaster Recovery and Application Mobility, all with an element of demo and hands-on throughout. -## Resources +## Resources - [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) - [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY) @@ -70,7 +70,3 @@ During the next 6 sessions we are going to be taking a closer look at Databases, - [Veeam Portability & Cloud Mobility](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) See you on [Day 85](day85.md) - - - - diff --git a/Days/day85.md index dba8421bb..7ead66140 100644 --- a/Days/day85.md +++ b/Days/day85.md @@ -7,11 +7,12 @@ cover_image: null canonical_url: null id: 1048781 --- + ## Data Services -Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the course of the challenge. +Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the course of the challenge. -From an application development point of view choosing the right data service or database is going to be a huge decision when it comes to the performance and scalability of your application. +From an application development point of view, choosing the right data service or database is going to be a huge decision when it comes to the performance and scalability of your application. 
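Before we walk through the different database types below, a quick and entirely optional way to experiment with a few of them locally is to run their official container images with Docker, which we covered earlier in the challenge. This is only an illustrative sketch; the image names and ports are the public defaults and none of these commands appear in the original walkthrough.

```
# Key-value: Redis on its default port 6379
docker run -d --name redis -p 6379:6379 redis

# Document: MongoDB on its default port 27017
docker run -d --name mongo -p 27017:27017 mongo

# Relational: MySQL on its default port 3306 (this root password is only for local testing)
docker run -d --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=devops90 mysql
```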
https://www.youtube.com/watch?v=W2Z7fbCLSTw @@ -19,79 +20,82 @@ https://www.youtube.com/watch?v=W2Z7fbCLSTw A key-value database is a type of nonrelational database that uses a simple key-value method to store data. A key-value database stores data as a collection of key-value pairs in which a key serves as a unique identifier. Both keys and values can be anything, ranging from simple objects to complex compound objects. Key-value databases are highly partitionable and allow horizontal scaling at scales that other types of databases cannot achieve. -An example of a Key-Value database is Redis. +An example of a Key-Value database is Redis. -*Redis is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices.* +_Redis is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices._ ![](Images/Day85_Data1.png) -As you can see from the description of Redis this means that our database is fast but we are limited on space as a trade off. Also no queries or joins which means data modelling options are very limited. +As you can see from the description of Redis, this means that our database is fast but we are limited on space as a trade-off. There are also no queries or joins, which means data modelling options are very limited. + +Best for: -Best for: -- Caching +- Caching - Pub/Sub -- Leaderboards +- Leaderboards - Shopping carts -Generally used as a cache above another persistent data layer. +Generally used as a cache above another persistent data layer. ### Wide Column A wide-column database is a NoSQL database that organises data storage into flexible columns that can be spread across multiple servers or database nodes, using multi-dimensional mapping to reference data by column, row, and timestamp. -*Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.* +_Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure._ ![](Images/Day85_Data2.png) -No schema which means can handle unstructured data however this can be seen as a benefit to some workloads. +There is no schema, which means it can handle unstructured data; this can be seen as a benefit for some workloads. + +Best for: -Best for: -- Time-Series -- Historical Records -- High-Write, Low-Read +- Time-Series +- Historical Records +- High-Write, Low-Read ### Document -A document database (also known as a document-oriented database or a document store) is a database that stores information in documents. +A document database (also known as a document-oriented database or a document store) is a database that stores information in documents. -*MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. 
MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License.* +_MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License._ ![](Images/Day85_Data3.png) -NoSQL document databases allow businesses to store simple data without using complex SQL codes. Quickly store with no compromise to reliability. +NoSQL document databases allow businesses to store simple data without using complex SQL code. You can store data quickly with no compromise to reliability. -Best for: +Best for: -- Most Applications -- Games -- Internet of Things +- Most Applications +- Games +- Internet of Things ### Relational -If you are new to databases but you know of them my guess is that you have absolutely come across a relational database. +If you are new to databases but know of them, my guess is that you have absolutely come across a relational database. A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A system used to maintain relational databases is a relational database management system. Many relational database systems have an option of using the SQL for querying and maintaining the database. -*MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language.* +_MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language._ -MySQL is one example of a relational database there are lots of other options. +MySQL is one example of a relational database; there are lots of other options. ![](Images/Day85_Data4.png) -Whilst researching relational databases the term or abbreviation **ACID** has been mentioned a lot, (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. +Whilst researching relational databases, the term or abbreviation **ACID** (atomicity, consistency, isolation, durability) has been mentioned a lot; it is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction. + +Best for: -Best for: - Most Applications (It has been around for years, doesn't mean it is the best) -It is not ideal for unstructured data or the ability to scale is where some of the other NoSQL mentions give a better ability to scale for certain workloads. 
+It is not ideal for unstructured data, and the ability to scale is where some of the other NoSQL options mentioned give a better ability to scale for certain workloads. ### Graph A graph database stores nodes and relationships instead of tables, or documents. Data is stored just like you might sketch ideas on a whiteboard. Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it. -*Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing* +_Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing_ -Best for: +Best for: - Graphs - Knowledge Graphs @@ -99,37 +103,37 @@ Best for: ### Search Engine -In the last section we actually used a Search Engine database in the way of Elasticsearch. +In the last section we actually used a Search Engine database in the form of Elasticsearch. A search-engine database is a type of non-relational database that is dedicated to the search of data content. Search-engine databases use indexes to categorise the similar characteristics among data and facilitate search capability. -*Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.* +_Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents._ -Best for: +Best for: -- Search Engines -- Typeahead +- Search Engines +- Typeahead - Log search ### Multi-model -A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated.Document, graph, relational, and key–value models are examples of data models that may be supported by a multi-model database. +A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated. Document, graph, relational, and key–value models are examples of data models that may be supported by a multi-model database. -*Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL.* +_Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL._ -Best for: +Best for: - You are not stuck to having to choose a data model - ACID Compliant -- Fast +- Fast - No provisioning overhead - How do you want to consume your data and let the cloud do the heavy lifting -That is going to wrap up this database overview session, no matter what industry you are in you are going to come across one area of databases. We are then going to take some of these examples and look at the data management and in particular the protection and storing of these data services later on in the section. 
+That is going to wrap up this database overview session; no matter what industry you are in, you are going to come across one area of databases. We are then going to take some of these examples and look at data management, and in particular the protection and storage of these data services, later on in the section. -There are a ton of resources I have linked below, you could honestly spend 90 years probably deep diving into all database types and everything that comes with this. +There are a ton of resources I have linked below; you could honestly spend 90 years deep diving into all the database types and everything that comes with them. -## Resources +## Resources - [Redis Crash Course - the What, Why and How to use Redis as your primary database](https://www.youtube.com/watch?v=OqCK95AS-YE) - [Redis: How to setup a cluster - for beginners](https://www.youtube.com/watch?v=GEg7s3i6Jak) @@ -145,5 +149,4 @@ There are a ton of resources I have linked below, you could honestly spend 90 ye - [FaunaDB Basics - The Database of your Dreams](https://www.youtube.com/watch?v=2CipVwISumA) - [Fauna Crash Course - Covering the Basics](https://www.youtube.com/watch?v=ihaB7CqJju0) - See you on [Day 86](day86.md) diff --git a/Days/day86.md index 2448c6961..ad50823c9 100644 --- a/Days/day86.md +++ b/Days/day86.md @@ -1,137 +1,138 @@ --- -title: '#90DaysOfDevOps - Backup all the platforms - Day 86' +title: "#90DaysOfDevOps - Backup all the platforms - Day 86" published: false description: 90DaysOfDevOps - Backup all the platforms -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049058 --- + ## Backup all the platforms During this whole challenge we have discussed many different platforms and environments. One thing all of those have in common is the fact they all need some level of data protection! -Data Protection has been around for many many years but the wealth of data that we have today and the value that this data brings means we have to make sure we are not only resilient to infrastructure failure by having multiple nodes and high availablity across applications but we must also consider that we need a copy of that data, that important data in a safe and secure location if a failure scenario was to occur. +Data Protection has been around for many, many years, but the wealth of data that we have today and the value that this data brings means we have to make sure we are not only resilient to infrastructure failure by having multiple nodes and high availability across applications, but we must also consider that we need a copy of that data, that important data, in a safe and secure location if a failure scenario was to occur. -We hear a lot these days it seems about cybercrime and ransomware, and don't get me wrong this is a massive threat and I stand by the fact that you will be attacked by ransomware. It is not a matter of if it is a matter of when. So even more reason to make sure you have your data secure for when that time arises. However the most common cause for data loss is not ransomware or cybercrime it is simply accidental deletion! +We hear a lot these days, it seems, about cybercrime and ransomware, and don't get me wrong, this is a massive threat and I stand by the fact that you will be attacked by ransomware. It is not a matter of if, it is a matter of when. So even more reason to make sure you have your data secure for when that time arises. 
However, the most common cause for data loss is not ransomware or cybercrime, it is simply accidental deletion! -We have all done it, deleted something we shouldn't have and had that instant regret. +We have all done it, deleted something we shouldn't have and had that instant regret. -With all of the technology and automation we have discussed during the challenge, the requirement to protect any stateful data or even complex stateless configuration is still there, regardless of platform. +With all of the technology and automation we have discussed during the challenge, the requirement to protect any stateful data or even complex stateless configuration is still there, regardless of platform. ![](Images/Day86_Data1.png) -But we should be able to perform that protection of the data with automation in mind and being able to integrate into our workflows. +But we should be able to perform that protection of the data with automation in mind, and be able to integrate it into our workflows. -If we look at what backup is: +If we look at what backup is: -*In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup".* +_In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup"._ -If we break this down to the simplest form, a backup is a copy and paste of data to a new location. Simply put I could take a backup right now by copying a file from my C: drive to my D: drive and I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files. I could revert back to the copy I have on the D: drive. Now if my computer dies where both the C & D drives live then I am not protected so I have to consider a solution or a copy of data outside of my system maybe onto a NAS drive in my house? But then what happens if something happens to my house, maybe I need to consider storing it on another system in another location, maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate against the risk of failure? +If we break this down to the simplest form, a backup is a copy and paste of data to a new location. Simply put, I could take a backup right now by copying a file from my C: drive to my D: drive, and I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files. I could revert back to the copy I have on the D: drive. Now, if my computer dies where both the C & D drives live then I am not protected, so I have to consider a solution or a copy of data outside of my system, maybe onto a NAS drive in my house? But then what happens if something happens to my house? Maybe I need to consider storing it on another system in another location; maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate against the risk of failure? -### 3-2-1 Backup Methodolgy +### 3-2-1 Backup Methodology -Now seems a good time to talk about the 3-2-1 rule or backup methodology. I actually did a [lightening talk](https://www.youtube.com/watch?v=5wRt1bJfKBw) covering this topic. 
+Now seems a good time to talk about the 3-2-1 rule or backup methodology. I actually did a [lightning talk](https://www.youtube.com/watch?v=5wRt1bJfKBw) covering this topic. -We have already mentioned before some of the extreme ends of why we need to protect our data but a few more are listed below: +We have already mentioned before some of the extreme ends of why we need to protect our data, but a few more are listed below: ![](Images/Day86_Data2.png) -Which then allows me to talk about the 3-2-1 methodology. My first copy or backup of my data should be as close to my production system as possible, the reason for this is based on speed to recovery and again going back to that original point about accidental deletion this is going to be the most common reason for recovery. But I want to be storing that on a suitable second media outside of the original or production system. +Which then allows me to talk about the 3-2-1 methodology. My first copy or backup of my data should be as close to my production system as possible; the reason for this is based on speed of recovery and, again going back to that original point about accidental deletion, this is going to be the most common reason for recovery. But I want to be storing that on a suitable second medium outside of the original or production system. -We then want to make sure we also send a copy of our data external or offsite this is where a second location comes in be it another house, building, data centre or the public cloud. +We then want to make sure we also send a copy of our data external or offsite; this is where a second location comes in, be it another house, building, data centre or the public cloud. ![](Images/Day86_Data3.png) -### Backup Responsibility +### Backup Responsibility -We have most likely heard all of the myths when it comes to not having to backup, things like "Everything is stateless" I mean if everything is stateless then what is the business? no databases? word documents? Obviously there is a level of responsibility on every individual within the business to ensure they are protected but it is going to come down most likely to the operations teams to provide the backup process for the mission critical applications and data. +We have most likely heard all of the myths when it comes to not having to back up, things like "Everything is stateless". I mean, if everything is stateless then what is the business? No databases? No Word documents? Obviously there is a level of responsibility on every individual within the business to ensure they are protected, but it is most likely going to come down to the operations teams to provide the backup process for the mission critical applications and data. -Another good one is that "High availability is my backup, we have built in multiple nodes into our cluster there is no way this is going down!" +Another good one is that "High availability is my backup, we have built multiple nodes into our cluster, there is no way this is going down!" 
apart from when you make a mistake to the database and this is replicated over all the nodes in the cluster, or there is fire, flood or blood scenario that means the cluster is no longer available and with it the important data. It's not about being stubborn it is about being aware of the data and the services, absolutely everyone should factor in high availability and fault tollerance into their architecture but that does not substitute the need for backup! -Replication can also seem to give us the offsite copy of the data and maybe that cluster mentioned above does live across multiple locations, however the first accidental mistake would still be replicated there. But again a Backup requirement should stand alongside application replication or system replication within the environment. +Replication can also seem to give us the offsite copy of the data and maybe that cluster mentioned above does live across multiple locations, however the first accidental mistake would still be replicated there. But again a Backup requirement should stand alongside application replication or system replication within the environment. -Now with all this said you can go to the extreme the other end as well and send copies of data to too many locations which is going to not only cost but also increase risk about being attacked as your surface area is now massively expanded. +Now with all this said you can go to the extreme the other end as well and send copies of data to too many locations which is going to not only cost but also increase risk about being attacked as your surface area is now massively expanded. -Anyway, who looks after backup? It will be different within each business but someone should be taking it upon themselves to understand the backup requirements. But also understand the recovery plan! +Anyway, who looks after backup? It will be different within each business but someone should be taking it upon themselves to understand the backup requirements. But also understand the recovery plan! -### Nobody cares till everybody cares +### Nobody cares till everybody cares -Backup is a prime example, nobody cares about backup until you need to restore something. Alongside the requirement to back our data up we also need to consider how we restore! +Backup is a prime example, nobody cares about backup until you need to restore something. Alongside the requirement to back our data up we also need to consider how we restore! -With our text document example we are talking very small files so the ability to copy back and forth is easy and fast. But if we are talking about 100GB plus files then this is going to take time. Also we have to consider the level in which we need to recover, if we take a virtual machine for example. +With our text document example we are talking very small files so the ability to copy back and forth is easy and fast. But if we are talking about 100GB plus files then this is going to take time. Also we have to consider the level in which we need to recover, if we take a virtual machine for example. -We have the whole Virtual Machine, we have the Operating System, Application installation and then if this is a database server then we will have some database files as well. If we have made a mistake and inserted the wrong line of code into our database I probably don't need to restore the whole virtual machine, I want to be granular on what I recover back. 
+We have the whole Virtual Machine, we have the Operating System, the application installation, and if this is a database server then we will have some database files as well. If we have made a mistake and inserted the wrong line of code into our database, I probably don't need to restore the whole virtual machine; I want to be granular about what I recover.

-### Backup Scenario 
+### Backup Scenario

-I want to now start building on a scenario to protect some data, specifically I want to protect some files on my local machine (in this case Windows but the tool I am going to use is in fact not only free and open-source but also cross platform) I would like to make sure they are protected to a NAS device I have locally in my home but also into an Object Storage bucket in the cloud. 
+I now want to start building out a scenario to protect some data. Specifically, I want to protect some files on my local machine (in this case Windows, but the tool I am going to use is in fact not only free and open-source but also cross platform), and I would like to make sure they are protected to a NAS device I have locally in my home but also into an Object Storage bucket in the cloud.

-I want to backup this important data, it just so happens to be the repository for the 90DaysOfDevOps, which yes this is also being sent to GitHub which is probably where you are reading this now but what if my machine was to die and GitHub was down? How would anyone be able to read the content but also how would I potentially be able to restore that data to another service. 
+I want to back up this important data; it just so happens to be the repository for 90DaysOfDevOps. Yes, this is also being sent to GitHub, which is probably where you are reading this now, but what if my machine was to die and GitHub was down? How would anyone be able to read the content, and how would I potentially be able to restore that data to another service?

![](Images/Day86_Data5.png)

-There are lots of tools that can help us achieve this but I am going to be using a a tool called [Kopia](https://kopia.io/) an Open-Source backup tool which will enable us to encrypt, dedupe and compress our backups whilst being able to send them to many locations. 
+There are lots of tools that can help us achieve this, but I am going to be using a tool called [Kopia](https://kopia.io/), an Open-Source backup tool which will enable us to encrypt, dedupe and compress our backups whilst being able to send them to many locations.

-You will find the releases to download [here](https://github.com/kopia/kopia/releases) at the time of writing I will be using v0.10.6. 
+You will find the releases to download [here](https://github.com/kopia/kopia/releases); at the time of writing I will be using v0.10.6.

-### Installing Kopia 
+### Installing Kopia

-There is a Kopia CLI and GUI, we will be using the GUI but know that you can have a CLI version of this as well for those Linux servers that do not give you a GUI. 
+There is a Kopia CLI and a GUI. We will be using the GUI, but know that you can use the CLI version as well, for example on those Linux servers that do not give you a GUI.

 I will be using `KopiaUI-Setup-0.10.6.exe`

-Really quick next next installation and then when you open the application you are greeted with the choice of selecting your storage type that you wish to use as your backup repository. 
+It is a really quick next-next installation, and when you open the application you are greeted with the choice of selecting the storage type that you wish to use as your backup repository.

![](Images/Day86_Data6.png)

-### Setting up a Repository 
+### Setting up a Repository

-Firstly we would like to setup a repository using our local NAS device and we are going to do this using SMB, but we could also use NFS I believe. 
+Firstly we would like to set up a repository using our local NAS device, and we are going to do this using SMB, but we could also use NFS I believe.

![](Images/Day86_Data7.png)

-On the next screen we are going to define a password, this password is used to encrypt the repository contents. 
+On the next screen we are going to define a password; this password is used to encrypt the repository contents.

![](Images/Day86_Data8.png)

-Now that we have the repository configured we can trigger an adhoc snapshot to start writing data to our it. 
+Now that we have the repository configured, we can trigger an ad hoc snapshot to start writing data to it.

![](Images/Day86_Data9.png)

-First up we need to enter a path to what we want to snapshot and our case we want to take a copy of our `90DaysOfDevOps` folder. We will get back to the scheduling aspect shortly. 
+First up we need to enter a path to what we want to snapshot, and in our case we want to take a copy of our `90DaysOfDevOps` folder. We will get back to the scheduling aspect shortly.

![](Images/Day86_Data10.png)

-We can define our snapshot retention. 
+We can define our snapshot retention.

![](Images/Day86_Data11.png)

-Maybe there are files or file types that we wish to exclude. 
+Maybe there are files or file types that we wish to exclude.

![](Images/Day86_Data12.png)

-If we wanted to define a schedule we could this on this next screen, when you first create this snapshot this is the opening page to define. 
+If we wanted to define a schedule we could do this on the next screen; when you first create the snapshot, this is the page where you define it.

![](Images/Day86_Data13.png)

-And you will see a number of other settings that can be handled here. 
+And you will see a number of other settings that can be handled here.

![](Images/Day86_Data14.png)

-Select snapshot now and the data will be written to your repository. 
+Select snapshot now and the data will be written to your repository.

![](Images/Day86_Data15.png)

-### Offsite backup to S3 
+### Offsite backup to Object Storage

-With Kopia we can through the UI it seems only have one repository configured at a time. But through the UI we can be creative and basically have multiple repository configuration files to choose from to achieve our goal of having a copy local and offsite in Object Storage. 
+Through the UI it seems Kopia can only have one repository configured at a time, but we can be creative and keep multiple repository configuration files to choose from, to achieve our goal of having a copy locally and a copy offsite in Object Storage.

-The Object Storage I am choosing to send my data to is going to Google Cloud Storage. I firstly logged into my Google Cloud Platform account and created myself a storage bucket. I already had the Google Cloud SDK installed on my system but running the `gcloud auth application-default login` authenticated me with my account. 
+The Object Storage I am choosing to send my data to is going to be Google Cloud Storage. I firstly logged into my Google Cloud Platform account and created myself a storage bucket. 
I already had the Google Cloud SDK installed on my system but running the `gcloud auth application-default login` authenticated me with my account. ![](Images/Day86_Data16.png) -I then used the CLI of Kopia to show me the current status of my repository after we added our SMB repository in the previous steps. I did this using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command. +I then used the CLI of Kopia to show me the current status of my repository after we added our SMB repository in the previous steps. I did this using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command. ![](Images/Day86_Data17.png) @@ -141,21 +142,21 @@ The above command is taking into account that the Google Cloud Storage bucket we ![](Images/Day86_Data18.png) -Now that we have created our new repository we can then run the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command again and will now show the GCS repository configuration. +Now that we have created our new repository we can then run the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command again and will now show the GCS repository configuration. ![](Images/Day86_Data19.png) -Next thing we need to do is create a snapshot and send that to our newly created repository. Using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"` command we can kick off this process. You can see in the below browser that our Google Cloud Storage bucket now has kopia files based on our backup in place. +Next thing we need to do is create a snapshot and send that to our newly created repository. Using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"` command we can kick off this process. You can see in the below browser that our Google Cloud Storage bucket now has kopia files based on our backup in place. ![](Images/Day86_Data20.png) -With the above process we are able to settle our requirement of sending our important data to 2 different locations, 1 of which is offsite in Google Cloud Storage and of course we still have our production copy of our data on a different media type. +With the above process we are able to settle our requirement of sending our important data to 2 different locations, 1 of which is offsite in Google Cloud Storage and of course we still have our production copy of our data on a different media type. ### Restore -Restore is another consideration and is very important, Kopia gives us the capability to not only restore to the existing location but also to a new location. +Restore is another consideration and is very important, Kopia gives us the capability to not only restore to the existing location but also to a new location. 
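As an aside, if you put a standalone `kopia` binary on your PATH (rather than calling the bundled KopiaUI executable by its full path as below), restoring a snapshot to a brand new location is only a couple of short commands. This is just a sketch; the snapshot ID and target folder are placeholders:

```Shell
# list the snapshots in the currently connected repository to find an ID
kopia snapshot list

# restore that snapshot into a new directory rather than the original location
kopia snapshot restore k7e4bd12345678 "C:\restore\90DaysOfDevOps"
```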
-If we run the command `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config snapshot list` this will list the snapshots that we have currently in our configured repository (GCS) +If we run the command `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config snapshot list` this will list the snapshots that we have currently in our configured repository (GCS) ![](Images/Day86_Data21.png) @@ -163,13 +164,13 @@ We can then mount those snapshots directly from GCS using the `"C:\Program Files ![](Images/Day86_Data22.png) -We could also restore the snapshot contents using `kopia snapshot restore kdbd9dff738996cfe7bcf99b45314e193` +We could also restore the snapshot contents using `kopia snapshot restore kdbd9dff738996cfe7bcf99b45314e193` -Obviously the commands above are very long and this is because I was using the KopiaUI version of the kopia.exe as explained at the top of the walkthrough you can download the kopia.exe and put into a path so you can just use the `kopia` command. +Obviously the commands above are very long and this is because I was using the KopiaUI version of the kopia.exe as explained at the top of the walkthrough you can download the kopia.exe and put into a path so you can just use the `kopia` command. -In the next session we will be focusing in on protecting workloads within Kubernetes. +In the next session we will be focusing in on protecting workloads within Kubernetes. -## Resources +## Resources - [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) - [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY) diff --git a/Days/day87.md b/Days/day87.md index 90b725541..089c3f226 100644 --- a/Days/day87.md +++ b/Days/day87.md @@ -7,28 +7,29 @@ cover_image: null canonical_url: null id: 1048717 --- + ## Hands-On Backup & Recovery -In the last session we touched on [Kopia](https://kopia.io/) an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud based object storage. +In the last session we touched on [Kopia](https://kopia.io/) an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud based object storage. -In this section, I want to get into the world of Kubernetes backup. It is a platform we covered [The Big Picture: Kubernetes](Days/day49.md) earlier in the challenge. +In this section, I want to get into the world of Kubernetes backup. It is a platform we covered [The Big Picture: Kubernetes](Days/day49.md) earlier in the challenge. -We will again be using our minikube cluster but this time we are going to take advantage of some of those addons that are available. +We will again be using our minikube cluster but this time we are going to take advantage of some of those addons that are available. -### Kubernetes cluster setup +### Kubernetes cluster setup -To set up our minikube cluster we will be issuing the `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p 90daysofdevops --kubernetes-version=1.21.2` you will notice that we are using the `volumesnapshots` and `csi-hostpath-driver` as we will take full use of these for when we are taking our backups. 
+To set up our minikube cluster we will be issuing the `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p 90daysofdevops --kubernetes-version=1.21.2` you will notice that we are using the `volumesnapshots` and `csi-hostpath-driver` as we will take full use of these for when we are taking our backups. -At this point I know we have not deployed Kasten K10 yet but we want to issue the following command when your cluster is up, but we want to annotate the volumesnapshotclass so that Kasten K10 can use this. +At this point I know we have not deployed Kasten K10 yet but we want to issue the following command when your cluster is up, but we want to annotate the volumesnapshotclass so that Kasten K10 can use this. -``` +```Shell kubectl annotate volumesnapshotclass csi-hostpath-snapclass \ k10.kasten.io/is-snapshot-class=true ``` -We are also going to change over the default storageclass from the standard default storageclass to the csi-hostpath storageclass using the following. +We are also going to change over the default storageclass from the standard default storageclass to the csi-hostpath storageclass using the following. -``` +```Shell kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' @@ -36,7 +37,7 @@ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storagecla ![](Images/Day87_Data1.png) -### Deploy Kasten K10 +### Deploy Kasten K10 Add the Kasten Helm repository @@ -44,7 +45,7 @@ Add the Kasten Helm repository We could use `arkade kasten install k10` here as well but for the purpose of the demo we will run through the following steps. [More Details](https://blog.kasten.io/kasten-k10-goes-to-the-arkade) -Create the namespace and deploy K10, note that this will take around 5 mins +Create the namespace and deploy K10, note that this will take around 5 mins `helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true --create-namespace` @@ -64,9 +65,9 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` ![](Images/Day87_Data4.png) -To authenticate with the dashboard we now need the token which we can get with the following commands. +To authenticate with the dashboard we now need the token which we can get with the following commands. -``` +```Shell TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) @@ -76,17 +77,17 @@ echo $TOKEN ![](Images/Day87_Data5.png) -Now we take this token and we input that into our browser, you will then be prompted for an email and company name. +Now we take this token and we input that into our browser, you will then be prompted for an email and company name. ![](Images/Day87_Data6.png) -Then we get access to the Kasten K10 dashboard. +Then we get access to the Kasten K10 dashboard. ![](Images/Day87_Data7.png) -### Deploy our stateful application +### Deploy our stateful application -Use the stateful application that we used in the Kubernetes section. +Use the stateful application that we used in the Kubernetes section. 
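Before deploying it, a quick read-back of the storage configuration we changed earlier can save some head-scratching; this is just a sketch using the names from the setup steps above:

```Shell
# csi-hostpath-sc should now be marked as the default storage class
kubectl get storageclass

# and the snapshot class should carry the K10 annotation we added
kubectl get volumesnapshotclass csi-hostpath-snapclass -o yaml
```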
![](Images/Day55_Kubernetes1.png) @@ -94,51 +95,51 @@ You can find the YAML configuration file for this application here[pacman-statef ![](Images/Day87_Data8.png) -We can use `kubectl get all -n pacman` to check on our pods coming up. +We can use `kubectl get all -n pacman` to check on our pods coming up. ![](Images/Day87_Data9.png) In a new terminal we can then port forward the pacman front end. `kubectl port-forward svc/pacman 9090:80 -n pacman` -Open another tab on your browser to http://localhost:9090/ +Open another tab on your browser to http://localhost:9090/ ![](Images/Day87_Data10.png) -Take the time to clock up some high scores in the backend MongoDB database. +Take the time to clock up some high scores in the backend MongoDB database. ![](Images/Day87_Data11.png) -### Protect our High Scores +### Protect our High Scores -Now we have some mission critical data in our database and we do not want to lose it. We can use Kasten K10 to protect this whole application. +Now we have some mission critical data in our database and we do not want to lose it. We can use Kasten K10 to protect this whole application. -If we head back into the Kasten K10 dashboard tab you will see that our number of application has now increased from 1 to 2 with the addition of our pacman application to our Kubernetes cluster. +If we head back into the Kasten K10 dashboard tab you will see that our number of application has now increased from 1 to 2 with the addition of our pacman application to our Kubernetes cluster. ![](Images/Day87_Data12.png) -If you click on the Applications card you will see the automatically discovered applications in our cluster. +If you click on the Applications card you will see the automatically discovered applications in our cluster. ![](Images/Day87_Data13.png) -With Kasten K10 we have the ability to leverage storage based snapshots as well export our copies out to object storage options. +With Kasten K10 we have the ability to leverage storage based snapshots as well export our copies out to object storage options. -For the purpose of the demo, we will create a manual storage snapshot in our cluster and then we can add some rogue data to our high scores to simulate an accidental mistake being made or is it? +For the purpose of the demo, we will create a manual storage snapshot in our cluster and then we can add some rogue data to our high scores to simulate an accidental mistake being made or is it? -Firstly we can use the manual snapshot option below. +Firstly we can use the manual snapshot option below. ![](Images/Day87_Data14.png) -For the demo I am going to leave everything as the default +For the demo I am going to leave everything as the default ![](Images/Day87_Data15.png) -Back on the dashboard you get a status report on the job as it is running and then when complete it should look as successful as this one. +Back on the dashboard you get a status report on the job as it is running and then when complete it should look as successful as this one. ![](Images/Day87_Data16.png) -### Failure Scenario +### Failure Scenario -We can now make that fatal change to our mission critical data by simply adding in a prescriptive bad change to our application. +We can now make that fatal change to our mission critical data by simply adding in a prescriptive bad change to our application. As you can see below we have two inputs that we probably dont want in our production mission critical database. 
@@ -146,39 +147,39 @@ As you can see below we have two inputs that we probably dont want in our produc ### Restore the data -Obviously this is a simple demo and in a way not realistic although have you seen how easy it is to drop databases? +Obviously this is a simple demo and in a way not realistic although have you seen how easy it is to drop databases? -Now we want to get that high score list looking a little cleaner and how we had it before the mistakes were made. +Now we want to get that high score list looking a little cleaner and how we had it before the mistakes were made. -Back in the Applications card and on the pacman tab we now have 1 restore point we can use to restore from. +Back in the Applications card and on the pacman tab we now have 1 restore point we can use to restore from. ![](Images/Day87_Data18.png) -When you select restore you can see all the associated snapshots and exports to that application. +When you select restore you can see all the associated snapshots and exports to that application. ![](Images/Day87_Data19.png) -Select that restore and a side window will appear, we will keep the default settings and hit restore. +Select that restore and a side window will appear, we will keep the default settings and hit restore. ![](Images/Day87_Data20.png) -Confirm that you really want to make this happen. +Confirm that you really want to make this happen. ![](Images/Day87_Data21.png) -You can then go back to the dashboard and see the progress of the restore. You should see something like this. +You can then go back to the dashboard and see the progress of the restore. You should see something like this. ![](Images/Day87_Data22.png) -But more importantly how is our High-Score list looking in our mission critical application. You will have to start the port forward again to pacman as we previously covered. +But more importantly how is our High-Score list looking in our mission critical application. You will have to start the port forward again to pacman as we previously covered. ![](Images/Day87_Data23.png) -A super simple demo and only really touching the surface of what Kasten K10 can really achieve when it comes to backup. I will be creating some more in depth video content on some of these areas in the future. We will also be using Kasten K10 to highlight some of the other prominent areas around Data Management when it comes to Disaster Recovery and the mobility of your data. +A super simple demo and only really touching the surface of what Kasten K10 can really achieve when it comes to backup. I will be creating some more in depth video content on some of these areas in the future. We will also be using Kasten K10 to highlight some of the other prominent areas around Data Management when it comes to Disaster Recovery and the mobility of your data. -Next we will take a look at Application consistency. +Next we will take a look at Application consistency. 
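Before that, one optional check from the command line: because K10 is using the `csi-hostpath-driver` we enabled at cluster creation, the manual snapshot we took through the dashboard should also be visible at the CSI layer. A quick sketch (object names will differ on your cluster):

```Shell
# VolumeSnapshots live in the same namespace as the PVCs they were taken from
kubectl get volumesnapshots -n pacman

# each one is backed by a cluster-scoped VolumeSnapshotContent object
kubectl get volumesnapshotcontents
```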
-## Resources +## Resources - [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) - [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY) diff --git a/Days/day88.md b/Days/day88.md index 684137697..6e8473e6c 100644 --- a/Days/day88.md +++ b/Days/day88.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Application Focused Backup - Day 88' +title: "#90DaysOfDevOps - Application Focused Backup - Day 88" published: false description: 90DaysOfDevOps - Application Focused Backups tags: "devops, 90daysofdevops, learning" @@ -7,41 +7,42 @@ cover_image: null canonical_url: null id: 1048749 --- + ## Application Focused Backups -We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes application consistency. +We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes application consistency. -In this post we are going to dive into that requirement around protecting the application data in a consistent manner. +In this post we are going to dive into that requirement around protecting the application data in a consistent manner. In order to do this our tool of choice will be [Kanister](https://kanister.io/) ![](Images/Day88_Data1.png) -### Introducing Kanister +### Introducing Kanister -Kanister is an open-source project by Kasten, that enables us to manage (backup and restore) application data on Kubernetes. You can deploy Kanister as a helm application into your Kubernetes cluster. +Kanister is an open-source project by Kasten, that enables us to manage (backup and restore) application data on Kubernetes. You can deploy Kanister as a helm application into your Kubernetes cluster. -Kanister uses Kubernetes custom resources, the main custom resources that are installed when Kanister is deployed are +Kanister uses Kubernetes custom resources, the main custom resources that are installed when Kanister is deployed are -- `Profile` - is a target location to store your backups and recover from. Most commonly this will be object storage. +- `Profile` - is a target location to store your backups and recover from. Most commonly this will be object storage. - `Blueprint` - steps that are to be taken to backup and restore the database should be maintained in the Blueprint -- `ActionSet` - is the motion to move our target backup to our profile as well as restore actions. - -### Execution Walkthrough +- `ActionSet` - is the motion to move our target backup to our profile as well as restore actions. + +### Execution Walkthrough -Before we get hands on we should take a look at the workflow that Kanister takes in protecting application data. Firstly our controller is deployed using helm into our Kubernetes cluster, Kanister lives within its own namespace. We take our Blueprint of which there are many community supported blueprints available, we will cover this in more detail shortly. We then have our database workload. +Before we get hands on we should take a look at the workflow that Kanister takes in protecting application data. Firstly our controller is deployed using helm into our Kubernetes cluster, Kanister lives within its own namespace. 
We take our Blueprint of which there are many community supported blueprints available, we will cover this in more detail shortly. We then have our database workload. ![](Images/Day88_Data2.png) -We then create our ActionSet. +We then create our ActionSet. ![](Images/Day88_Data3.png) -The ActionSet allows us to run the actions defined in the blueprint against the specific data service. +The ActionSet allows us to run the actions defined in the blueprint against the specific data service. ![](Images/Day88_Data4.png) -The ActionSet in turns uses the Kanister functions (KubeExec, KubeTask, Resource Lifecycle) and pushes our backup to our target repository (Profile). +The ActionSet in turns uses the Kanister functions (KubeExec, KubeTask, Resource Lifecycle) and pushes our backup to our target repository (Profile). ![](Images/Day88_Data5.png) @@ -49,50 +50,53 @@ If that action is completed/failed the respective status is updated in the Actio ![](Images/Day88_Data6.png) -### Deploying Kanister +### Deploying Kanister -Once again we will be using the minikube cluster to achieve this application backup. If you have it still running from the previous session then we can continue to use this. +Once again we will be using the minikube cluster to achieve this application backup. If you have it still running from the previous session then we can continue to use this. -At the time of writing we are up to image version `0.75.0` with the following helm command we will install kanister into our Kubernetes cluster. +At the time of writing we are up to image version `0.75.0` with the following helm command we will install kanister into our Kubernetes cluster. `helm install kanister --namespace kanister kanister/kanister-operator --set image.tag=0.75.0 --create-namespace` ![](Images/Day88_Data7.png) -We can use `kubectl get pods -n kanister` to ensure the pod is up and runnnig and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3) +We can use `kubectl get pods -n kanister` to ensure the pod is up and running and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3) ![](Images/Day88_Data8.png) -### Deploy a Database +### Deploy a Database Deploying mysql via helm: -``` +```Shell APP_NAME=my-production-app kubectl create ns ${APP_NAME} helm repo add bitnami https://charts.bitnami.com/bitnami helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} kubectl get pods -n ${APP_NAME} -w ``` + ![](Images/Day88_Data9.png) Populate the mysql database with initial data, run the following: -``` +```Shell MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} ``` -### Create a MySQL CLIENT +### Create a MySQL CLIENT + We will run another container image to act as our client -``` +```Shell APP_NAME=my-production-app kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash 
``` -``` + +```Shell Note: if you already have an existing mysql client pod running, delete with the command kubectl delete pod -n ${APP_NAME} mysql-client @@ -100,7 +104,7 @@ kubectl delete pod -n ${APP_NAME} mysql-client ### Add Data to MySQL -``` +```Shell echo "create database myImportantData;" | mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" echo "drop table Accounts" | ${MYSQL_EXEC} @@ -116,18 +120,18 @@ echo "insert into Accounts values('rastapopoulos', 377);" | ${MYSQL_EXEC} echo "select * from Accounts;" | ${MYSQL_EXEC} exit ``` -You should be able to see some data as per below. -![](Images/Day88_Data10.png) +You should be able to see some data as per below. +![](Images/Day88_Data10.png) ### Create Kanister Profile -Kanister provides a CLI, `kanctl` and another utility `kando` that is used to interact with your object storage provider from blueprint and both of these utilities. +Kanister provides a CLI, `kanctl` and another utility `kando` that is used to interact with your object storage provider from blueprint and both of these utilities. [CLI Download](https://docs.kanister.io/tooling.html#tooling) -I have gone and I have created an AWS S3 Bucket that we will use as our profile target and restore location. I am going to be using environment variables so that I am able to still show you the commands I am running with `kanctl` to create our kanister profile. +I have gone and I have created an AWS S3 Bucket that we will use as our profile target and restore location. I am going to be using environment variables so that I am able to still show you the commands I am running with `kanctl` to create our kanister profile. `kanctl create profile s3compliant --access-key $ACCESS_KEY --secret-key $SECRET_KEY --bucket $BUCKET --region eu-west-2 --namespace my-production-app` @@ -135,12 +139,11 @@ I have gone and I have created an AWS S3 Bucket that we will use as our profile ### Blueprint time -Don't worry you don't need to create your own one from scratch unless your data service is not listed here in the [Kanister Examples](https://github.com/kanisterio/kanister/tree/master/examples) but by all means community contributions are how this project gains awareness. - -The blueprint we will be using will be the below. +Don't worry you don't need to create your own one from scratch unless your data service is not listed here in the [Kanister Examples](https://github.com/kanisterio/kanister/tree/master/examples) but by all means community contributions are how this project gains awareness. +The blueprint we will be using will be the below. -``` +```Shell apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: @@ -220,46 +223,46 @@ actions: kando location delete --profile '{{ toJson .Profile }}' --path ${s3_path} ``` -To add this we will use the `kubectl create -f mysql-blueprint.yml -n kanister` command +To add this we will use the `kubectl create -f mysql-blueprint.yml -n kanister` command ![](Images/Day88_Data12.png) -### Create our ActionSet and Protect our application +### Create our ActionSet and Protect our application We will now take a backup of the MySQL data using an ActionSet defining backup for this application. Create an ActionSet in the same namespace as the controller. 
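Before creating the ActionSet, it is worth a quick check that the Blueprint registered correctly; a small sketch, using the blueprint name from the file we just applied:

```Shell
# Blueprints are namespaced custom resources living alongside the Kanister controller
kubectl get blueprints.cr.kanister.io -n kanister

# and you can inspect the actions it defines
kubectl describe blueprints.cr.kanister.io mysql-blueprint -n kanister
```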
-`kubectl get profiles.cr.kanister.io -n my-production-app` This command will show us the profile we previously created, we can have multiple profiles configured here so we might want to use specific ones for different ActionSets +`kubectl get profiles.cr.kanister.io -n my-production-app` This command will show us the profile we previously created, we can have multiple profiles configured here so we might want to use specific ones for different ActionSets -We are then going to create our ActionSet with the following command using `kanctl` +We are then going to create our ActionSet with the following command using `kanctl` `kanctl create actionset --action backup --namespace kanister --blueprint mysql-blueprint --statefulset my-production-app/mysql-store --profile my-production-app/s3-profile-dc5zm --secrets mysql=my-production-app/mysql-store` -You can see from the command above we are defining the blueprint we added to the namespace, the statefulset in our `my-production-app` namespace and also the secrets to get into the MySQL application. +You can see from the command above we are defining the blueprint we added to the namespace, the statefulset in our `my-production-app` namespace and also the secrets to get into the MySQL application. ![](Images/Day88_Data13.png) Check the status of the ActionSet by taking the ActionSet name and using this command `kubectl --namespace kanister describe actionset backup-qpnqv` -Finally we can go and confirm that we now have data in our AWS S3 bucket. +Finally we can go and confirm that we now have data in our AWS S3 bucket. ![](Images/Day88_Data14.png) -### Restore +### Restore -We need to cause some damage before we can restore anything, we can do this by dropping our table, maybe it was an accident, maybe it wasn't. +We need to cause some damage before we can restore anything, we can do this by dropping our table, maybe it was an accident, maybe it wasn't. -Connect to our MySQL pod. +Connect to our MySQL pod. -``` +```Shell APP_NAME=my-production-app kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash ``` -You can see that our importantdata db is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` +You can see that our importantdata db is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` -Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}` +Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}` -And confirmed that this was gone with a few attempts to show our database. +And confirmed that this was gone with a few attempts to show our database. ![](Images/Day88_Data15.png) @@ -267,19 +270,20 @@ We can now use Kanister to get our important data back in business using the `ku ![](Images/Day88_Data16.png) -We can confirm our data is back by using the below command to connect to our database. +We can confirm our data is back by using the below command to connect to our database. 
-``` +```Shell APP_NAME=my-production-app kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash ``` -Now we are inside the MySQL Client, we can issue the `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` and we can see the database is back. We can also issue the `echo "select * from Accounts;" | ${MYSQL_EXEC}` to check the contents of the database and our important data is restored. + +Now we are inside the MySQL Client, we can issue the `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` and we can see the database is back. We can also issue the `echo "select * from Accounts;" | ${MYSQL_EXEC}` to check the contents of the database and our important data is restored. ![](Images/Day88_Data17.png) -In the next post we take a look at Disaster Recovery within Kubernetes. +In the next post we take a look at Disaster Recovery within Kubernetes. -## Resources +## Resources - [Kanister Overview - An extensible open-source framework for app-lvl data management on Kubernetes](https://www.youtube.com/watch?v=wFD42Zpbfts) - [Application Level Data Operations on Kubernetes](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kanister-application-level-data-operations-on-kubernetes/) diff --git a/Days/day89.md b/Days/day89.md index 5d3e61d12..996449395 100644 --- a/Days/day89.md +++ b/Days/day89.md @@ -1,31 +1,32 @@ --- -title: '#90DaysOfDevOps - Disaster Recovery - Day 89' +title: "#90DaysOfDevOps - Disaster Recovery - Day 89" published: false description: 90DaysOfDevOps - Disaster Recovery -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048718 --- + ## Disaster Recovery -We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO). +We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO). -This can only be achieved at scale when you automate the replication of the complete application stack to a standby environment. +This can only be achieved at scale when you automate the replication of the complete application stack to a standby environment. -This allows for fast failovers across cloud regions, cloud providers or between on-premises and cloud infrastructure. +This allows for fast failovers across cloud regions, cloud providers or between on-premises and cloud infrastructure. -Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using our minikube cluster that we deployed and configured a few sessions ago. +Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using our minikube cluster that we deployed and configured a few sessions ago. 
-We will then create another minikube cluster with Kasten K10 also installed to act as our standby cluster which in theory could be any location. +We will then create another minikube cluster with Kasten K10 also installed to act as our standby cluster which in theory could be any location. Kasten K10 also has built in functionality to ensure if something was to happen to the Kubernetes cluster it is running on that the catalog data is replicated and available in a new one [K10 Disaster Recovery](https://docs.kasten.io/latest/operating/dr.html). -### Add object storage to K10 +### Add object storage to K10 -The first thing we need to do is add an object storage bucket as a target location for our backups to land. Not only does this act as an offsite location but we can also leverage this as our disaster recovery source data to recover from. +The first thing we need to do is add an object storage bucket as a target location for our backups to land. Not only does this act as an offsite location but we can also leverage this as our disaster recovery source data to recover from. -I have cleaned out the S3 bucket that we created for the Kanister demo in the last session. +I have cleaned out the S3 bucket that we created for the Kanister demo in the last session. ![](Images/Day89_Data1.png) @@ -37,9 +38,9 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` ![](Images/Day87_Data4.png) -To authenticate with the dashboard, we now need the token which we can get with the following commands. +To authenticate with the dashboard, we now need the token which we can get with the following commands. -``` +```Shell TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) @@ -49,11 +50,11 @@ echo $TOKEN ![](Images/Day87_Data5.png) -Now we take this token and we input that into our browser, you will then be prompted for an email and company name. +Now we take this token and we input that into our browser, you will then be prompted for an email and company name. ![](Images/Day87_Data6.png) -Then we get access to the Kasten K10 dashboard. +Then we get access to the Kasten K10 dashboard. ![](Images/Day87_Data7.png) @@ -61,27 +62,27 @@ Now that we are back in the Kasten K10 dashboard we can add our location profile ![](Images/Day89_Data2.png) -You can see from the image below that we have choice when it comes to where this location profile is, we are going to select Amazon S3, and we are going to add our sensitive access credentials, region and bucket name. +You can see from the image below that we have choice when it comes to where this location profile is, we are going to select Amazon S3, and we are going to add our sensitive access credentials, region and bucket name. ![](Images/Day89_Data3.png) -If we scroll down on the New Profile creation window you will see, we also have the ability to enable immutable backups which leverages the S3 Object Lock API. For this demo we won't be using that. +If we scroll down on the New Profile creation window you will see, we also have the ability to enable immutable backups which leverages the S3 Object Lock API. For this demo we won't be using that. ![](Images/Day89_Data4.png) -Hit "Save Profile" and you can now see our newly created or added location profile as per below. +Hit "Save Profile" and you can now see our newly created or added location profile as per below. 
![](Images/Day89_Data5.png) ### Create a policy to protect Pac-Man app to object storage -In the previous session we created only an ad-hoc snapshot of our Pac-Man application, therefore we need to create a backup policy that will send our application backups to our newly created object storage location. +In the previous session we created only an ad-hoc snapshot of our Pac-Man application, therefore we need to create a backup policy that will send our application backups to our newly created object storage location. -If you head back to the dashboard and select the Policy card you will see a screen as per below. Select "Create New Policy". +If you head back to the dashboard and select the Policy card you will see a screen as per below. Select "Create New Policy". ![](Images/Day89_Data6.png) -First, we can give our policy a useful name and description. We can also define our backup frequency for demo purposes I am using on-demand. +First, we can give our policy a useful name and description. We can also define our backup frequency for demo purposes I am using on-demand. ![](Images/Day89_Data7.png) @@ -89,36 +90,35 @@ Next, we want to enable backups via Snapshot exports meaning that we want to sen ![](Images/Day89_Data8.png) -Next, we select the application by either name or labels, I am going to choose by name and all resources. +Next, we select the application by either name or labels, I am going to choose by name and all resources. ![](Images/Day89_Data9.png) -Under Advanced settings we are not going to be using any of these but based on our [walkthrough of Kanister yesterday](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Days/day88.md), we can leverage Kanister as part of Kasten K10 as well to take those application consistent copies of our data. +Under Advanced settings we are not going to be using any of these but based on our [walkthrough of Kanister yesterday](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Days/day88.md), we can leverage Kanister as part of Kasten K10 as well to take those application consistent copies of our data. ![](Images/Day89_Data10.png) -Finally select "Create Policy" and you will now see the policy in our Policy window. +Finally select "Create Policy" and you will now see the policy in our Policy window. ![](Images/Day89_Data11.png) -At the bottom of the created policy, you will have "Show import details" we need this string to be able to import into our standby cluster. Copy this somewhere safe for now. +At the bottom of the created policy, you will have "Show import details" we need this string to be able to import into our standby cluster. Copy this somewhere safe for now. ![](Images/Day89_Data12.png) -Before we move on, we just need to select "run once" to get a backup sent our object storage bucket. +Before we move on, we just need to select "run once" to get a backup sent our object storage bucket. ![](Images/Day89_Data13.png) -Below, the screenshot is just to show the successful backup and export of our data. +Below, the screenshot is just to show the successful backup and export of our data. ![](Images/Day89_Data14.png) - ### Create a new MiniKube cluster & deploy K10 -We then need to deploy a second Kubernetes cluster and where this could be any supported version of Kubernetes including OpenShift, for the purpose of education we will use the very free version of MiniKube with a different name. 
+We then need to deploy a second Kubernetes cluster and where this could be any supported version of Kubernetes including OpenShift, for the purpose of education we will use the very free version of MiniKube with a different name. -Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster. +Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster. ![](Images/Day89_Data15.png) @@ -126,11 +126,11 @@ We then can deploy Kasten K10 in this cluster using: `helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true --create-namespace` -This will take a while but in the meantime, we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods getting to the running status. +This will take a while but in the meantime, we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods getting to the running status. -It is worth noting that because we are using MiniKube our application will just run when we run our import policy, our storageclass is the same on this standby cluster. However, something we will cover in the final session is about mobility and transformation. +It is worth noting that because we are using MiniKube our application will just run when we run our import policy, our storageclass is the same on this standby cluster. However, something we will cover in the final session is about mobility and transformation. -When the pods are up and running, we can follow the steps we went through on the previous steps in the other cluster. +When the pods are up and running, we can follow the steps we went through on the previous steps in the other cluster. Port forward to access the K10 dashboard, open a new terminal to run the below command @@ -140,9 +140,9 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` ![](Images/Day87_Data4.png) -To authenticate with the dashboard, we now need the token which we can get with the following commands. +To authenticate with the dashboard, we now need the token which we can get with the following commands. -``` +```Shell TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) @@ -152,55 +152,55 @@ echo $TOKEN ![](Images/Day87_Data5.png) -Now we take this token and we input that into our browser, you will then be prompted for an email and company name. +Now we take this token and we input that into our browser, you will then be prompted for an email and company name. ![](Images/Day87_Data6.png) -Then we get access to the Kasten K10 dashboard. +Then we get access to the Kasten K10 dashboard. ![](Images/Day87_Data7.png) ### Import Pac-Man into new the MiniKube cluster -At this point we are now able to create an import policy in that standby cluster and connect to the object storage backups and determine what and how we want this to look. +At this point we are now able to create an import policy in that standby cluster and connect to the object storage backups and determine what and how we want this to look. 
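With two minikube profiles now on the machine it is easy to point kubectl at the wrong cluster, so a quick context check before doing anything here doesn't hurt (a sketch; minikube names the contexts after the profiles we used above):

```Shell
# list the contexts created by the two minikube profiles
kubectl config get-contexts

# make sure we are working against the standby cluster
kubectl config use-context standby
```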
-First, we add in our Location Profile that we walked through earlier on the other cluster, showing off dark mode here to show the difference between our production system and our DR standby location. +First, we add in our Location Profile that we walked through earlier on the other cluster, showing off dark mode here to show the difference between our production system and our DR standby location. ![](Images/Day89_Data16.png) -Now we go back to the dashboard and into the policies tab to create a new policy. +Now we go back to the dashboard and into the policies tab to create a new policy. ![](Images/Day89_Data17.png) -Create the import policy as per the below image. When complete, we can create policy. There are options here to restore after import and some people might want this option, this will go and restore into our standby cluster on completion. We also have the ability to change the configuration of the application as it is restored and this is what I have documented in [Day 90](day90.md). +Create the import policy as per the below image. When complete, we can create policy. There are options here to restore after import and some people might want this option, this will go and restore into our standby cluster on completion. We also have the ability to change the configuration of the application as it is restored and this is what I have documented in [Day 90](day90.md). ![](Images/Day89_Data18.png) -I selected to import on demand, but you can obviously set a schedule on when you want this import to happen. Because of this I am going to run once. +I selected to import on demand, but you can obviously set a schedule on when you want this import to happen. Because of this I am going to run once. ![](Images/Day89_Data19.png) -You can see below the successful import policy job. +You can see below the successful import policy job. ![](Images/Day89_Data20.png) -If we now head back to the dashboard and into the Applications card, we can then select the drop down where you see below "Removed" you will see our application here. Select Restore +If we now head back to the dashboard and into the Applications card, we can then select the drop down where you see below "Removed" you will see our application here. Select Restore ![](Images/Day89_Data21.png) -Here we can see the restore points we have available to us; this was the backup job that we ran on the primary cluster against our Pac-Man application. +Here we can see the restore points we have available to us; this was the backup job that we ran on the primary cluster against our Pac-Man application. ![](Images/Day89_Data22.png) -I am not going to change any of the defaults as I want to cover this in more detail in the next session. +I am not going to change any of the defaults as I want to cover this in more detail in the next session. ![](Images/Day89_Data23.png) -When you hit "Restore" it will prompt you with a confirmation. +When you hit "Restore" it will prompt you with a confirmation. ![](Images/Day89_Data24.png) -We can see below that we are in the standby cluster and if we check on our pods, we can see that we have our running application. +We can see below that we are in the standby cluster and if we check on our pods, we can see that we have our running application. ![](Images/Day89_Data25.png) @@ -208,9 +208,9 @@ We can then port forward (in real life/production environments, you would not ne ![](Images/Day89_Data26.png) -Next, we will take a look at Application mobility and transformation. 
+Next, we will take a look at Application mobility and transformation.

-## Resources
+## Resources

- [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s)
- [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY)

diff --git a/Days/day90.md b/Days/day90.md
index 2bd9dac4c..36bcde6bb 100644
--- a/Days/day90.md
+++ b/Days/day90.md
@@ -1,37 +1,38 @@
---
-title: '#90DaysOfDevOps - Data & Application Mobility - Day 90'
+title: "#90DaysOfDevOps - Data & Application Mobility - Day 90"
published: false
description: 90DaysOfDevOps - Data & Application Mobility
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048748
---
+
## Data & Application Mobility

-Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover mobility of our data and applications. I am specifically going to focus on Kubernetes but the requirement across platforms and between platforms is something that is an ever-growing requirement and is seen in the field.
+Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover mobility of our data and applications. I am specifically going to focus on Kubernetes, but the need to move workloads across and between platforms is an ever-growing requirement and one that is seen in the field.

-The use case being "I want to move my workload, application and data from one location to another" for many different reasons, could be cost, risk or to provide the business with a better service.
+The use case is "I want to move my workload, application and data from one location to another", and this can be for many different reasons: cost, risk or to provide the business with a better service.

-In this session we are going to take our workload and we are going to look at moving a Kubernetes workload from one cluster to another, but in doing so we are going to change how our application is on the target location.
+In this session we are going to take our workload and look at moving it from one Kubernetes cluster to another, but in doing so we are going to change how our application is configured in the target location.

It in fact uses a lot of the characteristics that we went through with [Disaster Recovery](day89.md)

### **The Requirement**

-Our current Kubernetes cluster cannot handle demand and our costs are rocketing through the roof, it is a business decision that we wish to move our production Kubernetes cluster to our Disaster Recovery location, located on a different public cloud which will provide the ability to expand but also at a cheaper rate. We could also take advantage of some of the native cloud services available in the target cloud.
+Our current Kubernetes cluster cannot handle demand and our costs are rocketing through the roof, so it is a business decision that we wish to move our production Kubernetes cluster to our Disaster Recovery location, located on a different public cloud, which will provide the ability to expand but at a cheaper rate. We could also take advantage of some of the native cloud services available in the target cloud.

-Our current mission critical application (Pac-Man) has a database (MongoDB) and is running on slow storage, we would like to move to a newer faster storage tier.
+Our current mission-critical application (Pac-Man) has a database (MongoDB) and is running on slow storage; we would like to move to a newer, faster storage tier.
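+As a quick check before the move, we can confirm which StorageClass the Pac-Man volumes are currently bound to and what is available on the target cluster. A minimal sketch, assuming the application still lives in the `pacman` namespace used earlier:
+
+```Shell
+# Show the Pac-Man PersistentVolumeClaims and the StorageClass they are bound to
+kubectl get pvc -n pacman
+
+# List the StorageClasses available on the target (standby) cluster
+kubectl get storageclass
+```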
-The current Pac-Man (NodeJS) front-end is not scaling very well, and we would like to increase the number of available pods in the new location.
+The current Pac-Man (NodeJS) front-end is not scaling very well, and we would like to increase the number of available pods in the new location.

### Getting to IT

-We have our brief and in fact we have our imports already hitting the Disaster Recovery Kubernetes cluster.
+We have our brief, and in fact we already have our imports hitting the Disaster Recovery Kubernetes cluster.

-The first job we need to do is remove the restore operation we carried out on Day 89 for the Disaster Recovery testing.
+The first job we need to do is remove the restore operation we carried out on Day 89 for the Disaster Recovery testing.

-We can do this using `kubectl delete ns pacman` on the "standby" minikube cluster.
+We can do this using `kubectl delete ns pacman` on the "standby" minikube cluster.

![](Images/Day90_Data1.png)

@@ -43,23 +44,23 @@ We then get a list of the available restore points. We will select the one that

![](Images/Day90_Data3.png)

-When we worked on the Disaster Recovery process, we left everything as default. However these additional restore options are there if you have a Disaster Recovery process that requires the transformation of your application. In this instance we have the requirement to change our storage and number of replicas.
+When we worked on the Disaster Recovery process, we left everything as default. However, these additional restore options are there if your Disaster Recovery process requires the transformation of your application. In this instance, we have the requirement to change our storage and number of replicas.

![](Images/Day90_Data4.png)

-Select the "Apply transforms to restored resources" option.
+Select the "Apply transforms to restored resources" option.

![](Images/Day90_Data5.png)

-It just so happens that the two built in examples for the transformation that we want to perform are what we need for our requirements.
+It just so happens that the two built-in transform examples are exactly what we need for our requirements.

![](Images/Day90_Data6.png)

-The first requirement is that on our primary cluster we were using a Storage Class called `csi-hostpath-sc` and in our new cluster we would like to use `standard` so we can make that change here.
+The first requirement is that on our primary cluster we were using a StorageClass called `csi-hostpath-sc`, and in our new cluster we would like to use `standard`, so we can make that change here.

![](Images/Day90_Data7.png)

-Looks good, hit the create transform button at the bottom.
+This looks good; hit the create transform button at the bottom.

![](Images/Day90_Data8.png)

@@ -67,7 +68,7 @@ The next requirement is that we would like to scale our Pac-Man frontend deploym

![](Images/Day90_Data9.png)

-If you are following along you should see both of our transforms as per below.
+If you are following along, you should see both of our transforms as shown below.

![](Images/Day90_Data10.png)

@@ -75,25 +76,25 @@ You can now see from the below image that we are going to restore all of the art

![](Images/Day90_Data11.png)

-Again, we will be asked to confirm the actions.
+Again, we will be asked to confirm the actions.
![](Images/Day90_Data12.png)

-The final thing to show is now if we head back into the terminal and we take a look at our cluster, you can see we have 5 pods now for the pacman pods and our storageclass is now set to standard vs the csi-hostpath-sc
+The final thing to show is that if we now head back into the terminal and take a look at our cluster, you can see we have 5 pods for the Pac-Man front-end deployment and our StorageClass is now set to `standard` vs `csi-hostpath-sc`.

![](Images/Day90_Data13.png)

-There are many different options that can be achieved through transformation. This can span not only migration but also Disaster Recovery, test and development type scenarios and more.
+There are many different outcomes that can be achieved through transformation. This can span not only migration but also Disaster Recovery, test and development scenarios and more.

-### API and Automation
+### API and Automation

-I have not spoken about the ability to leverage the API and to automate some of these tasks, but these options are present and throughout the UI there are breadcrumbs that provide the command sets to take advantage of the APIs for automation tasks.
+I have not spoken about the ability to leverage the API to automate some of these tasks, but these options are present, and throughout the UI there are breadcrumbs that provide the command sets needed to take advantage of the APIs for automation.

-The important thing to note about Kasten K10 is that on deployment it is deployed inside the Kubernetes cluster and then can be called through the Kubernetes API.
+The important thing to note about Kasten K10 is that it is deployed inside the Kubernetes cluster and can then be called through the Kubernetes API.

-This then brings us to a close on the section around Storing and Protecting your data.
+This then brings the section around storing and protecting your data to a close.

-## Resources
+## Resources

- [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s)
- [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY)

@@ -103,23 +104,24 @@ This then brings us to a close on the section around Storing and Protecting your

### **Closing**

-As I wrap up this challenge, I want to continue to ask for feedback to make sure that the information is always relevant.
+As I wrap up this challenge, I want to continue to ask for feedback to make sure that the information is always relevant.

-I also appreciate there are a lot of topics that I was not able to cover or not able to dive deeper into around the topics of DevOps.
+I also appreciate that there are a lot of DevOps topics that I was not able to cover or dive deeper into.

-This means that we can always take another attempt that this challenge next year and find another 90 day's worth of content and walkthroughs to work through.
+This means that we can always take another attempt at this challenge next year and find another 90 days' worth of content and walkthroughs to work through.

-### What is next?
+### What is next?

-Firstly, a break from writing for a little while, I started this challenge on the 1st January 2022 and I have finished on the 31st March 2022 19:50 BST! It has been a slog. But as I say and have said for a long time, if this content helps one person, then it is always worth learning in public!
+Firstly, a break from writing for a little while: I started this challenge on the 1st January 2022 and I finished on the 31st March 2022 at 19:50 BST! It has been a slog. But as I have said for a long time, if this content helps one person, then it is always worth learning in public!

-I have some ideas on where to take this next and hopefully it has a life outside of a GitHub repository and we can look at creating an eBook and possibly even a physical book.
+I have some ideas on where to take this next; hopefully it has a life outside of a GitHub repository, and we can look at creating an eBook and possibly even a physical book.

-I also know that we need to revisit each post and make sure everything is grammatically correct before making anything like that happen. If anyone does know about how to take markdown to print or to an eBook it would be greatly appreciated feedback.
+I also know that we need to revisit each post and make sure everything is grammatically correct before making anything like that happen. If anyone does know how to take markdown to print or to an eBook, that feedback would be greatly appreciated.

-As always keep the issues and PRs coming.
+As always, keep the issues and PRs coming.

-Thanks!
+Thanks!

@MichaelCade1
+
- [GitHub](https://github.com/MichaelCade)
- [Twitter](https://twitter.com/MichaelCade1)

diff --git a/README.md b/README.md
index 6837ce378..2e7af5a5d 100644
--- a/README.md
+++ b/README.md
@@ -6,17 +6,17 @@

English Version | [中文版本](zh_cn/README.md) | [繁體中文版本](zh_tw/README.md)| [日本語版](ja/README.md)

-This repository is used to document my journey on getting a better foundational knowledge of "DevOps". I will be starting this journey on the 1st January 2022 but the idea is that we take 90 days which just so happens to be January 1st to March 31st.
+This repository is used to document my journey of getting a better foundational knowledge of "DevOps". I will be starting this journey on the 1st January 2022, but the idea is that we take 90 days, which just so happens to be January 1st to March 31st.

-The reason for documenting these days is so that others can take something from it and also hopefully enhance the resources.
+The reason for documenting these days is so that others can take something from it and also hopefully enhance the resources.

-The goal is to take 90 days, 1 hour each a day, to tackle over 13 areas of "DevOps" to a foundational knowledge.
+The goal is to take 90 days, 1 hour each day, to tackle over 13 areas of "DevOps" to a foundational level of knowledge.

-This will **not cover all things** "DevOps" but it will cover the areas that I feel will benefit my learning and understanding overall.
+This will **not cover all things** "DevOps" but it will cover the areas that I feel will benefit my learning and understanding overall.
The quickest way to get in touch is going to be via Twitter, my handle is [@MichaelCade1](https://twitter.com/MichaelCade1) -## Progress +## Progress - [✔️] ♾️ 1 > [Introduction](Days/day01.md) @@ -78,7 +78,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich - [✔️] 📚 40 > [Social Network for code](Days/day40.md) - [✔️] 📚 41 > [The Open Source Workflow](Days/day41.md) -### Containers +### Containers - [✔️] 🏗️ 42 > [The Big Picture: Containers](Days/day42.md) - [✔️] 🏗️ 43 > [What is Docker & Getting installed](Days/day43.md) @@ -118,7 +118,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich - [✔️] 📜 68 > [Tags, Variables, Inventory & Database Server config](Days/day68.md) - [✔️] 📜 69 > [All other things Ansible - Automation Controller, AWX, Vault](Days/day69.md) -### Create CI/CD Pipelines +### Create CI/CD Pipelines - [✔️] 🔄 70 > [The Big Picture: CI/CD Pipelines](Days/day70.md) - [✔️] 🔄 71 > [What is Jenkins?](Days/day71.md) @@ -161,7 +161,6 @@ This work is licensed under a [![Star History Chart](https://api.star-history.com/svg?repos=MichaelCade/90DaysOfDevOps&type=Timeline)](https://star-history.com/#MichaelCade/90DaysOfDevOps&Timeline) - [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg