diff --git a/Days/Monitoring/EFK Stack/efk-stack.yaml b/Days/Monitoring/EFK Stack/efk-stack.yaml index 6440e6b6e..46b61afa7 100644 --- a/Days/Monitoring/EFK Stack/efk-stack.yaml +++ b/Days/Monitoring/EFK Stack/efk-stack.yaml @@ -17,7 +17,7 @@ metadata: spec: selector: app: elasticsearch - #Renderes The service Headless + #Renders The service Headless clusterIP: None ports: - port: 9200 @@ -253,4 +253,4 @@ spec: - name: varlibdockercontainers hostPath: path: /var/lib/docker/containers ---- \ No newline at end of file +--- diff --git a/Days/day01.md b/Days/day01.md index 4df021918..01d89f437 100644 --- a/Days/day01.md +++ b/Days/day01.md @@ -1,63 +1,64 @@ --- -title: '#90DaysOfDevOps - Introduction - Day 1' +title: "#90DaysOfDevOps - Introduction - Day 1" published: true description: 90DaysOfDevOps - Introduction -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048731 -date: '2022-04-17T10:12:40Z' +date: "2022-04-17T10:12:40Z" --- -## Introduction - Day 1 -Day 1 of our 90 days and adventure to learn a good foundational understanding of DevOps and tools that help with a DevOps mindset. +## Introduction - Day 1 -This learning journey started for me a few years back, but my focus then was around virtualisation platforms and cloud-based technologies, I was looking mostly into Infrastructure as Code and Application configuration management with Terraform and Chef. +Day 1 of our 90 days and adventure to learn a good foundational understanding of DevOps and tools that help with a DevOps mindset. -Fast forward to March 2021, I was given an amazing opportunity to concentrate my efforts around the Cloud Native strategy at Kasten by Veeam. Which was going to be a massive focus on Kubernetes and DevOps and the community surrounding these technologies. 
I started my learning journey and quickly realised there was a very wide world aside from just learning the fundamentals of Kubernetes and Containerisation and it was then when I started speaking to the community and learning more and more about the DevOps culture, tooling and processes so I started documenting some of the areas I wanted to learn in public. +This learning journey started for me a few years back, but my focus then was around virtualisation platforms and cloud-based technologies; I was looking mostly into Infrastructure as Code and Application configuration management with Terraform and Chef. + +Fast forward to March 2021, I was given an amazing opportunity to concentrate my efforts around the Cloud Native strategy at Kasten by Veeam, which was going to be a massive focus on Kubernetes and DevOps and the community surrounding these technologies. I started my learning journey and quickly realised there was a very wide world aside from just learning the fundamentals of Kubernetes and Containerisation, and it was then that I started speaking to the community and learning more and more about the DevOps culture, tooling and processes, so I started documenting some of the areas I wanted to learn in public. [So you want to learn DevOps?](https://blog.kasten.io/devops-learning-curve) ## Let the journey begin -If you read the above blog, you will see this is a high-level contents for my learning journey and I will say at this point I am nowhere near an expert in any of these sections but what I wanted to do was share some resources both FREE and some paid for but an option for both as we all have different circumstances. +If you read the above blog, you will see these are the high-level contents for my learning journey and I will say at this point I am nowhere near an expert in any of these sections, but what I wanted to do was share some resources, both FREE and some paid for, with an option for both as we all have different circumstances. 
-Over the next 90 days, I want to document these resources and cover those foundational areas. I would love for the community to also get involved. Share your journey and resources so we can learn in public and help each other. +Over the next 90 days, I want to document these resources and cover those foundational areas. I would love for the community to also get involved. Share your journey and resources so we can learn in public and help each other. -You will see from the opening readme in the project repository that I have split things into sections and it is 12 weeks plus 6 days. For the first 6 days, we will explore the fundamentals of DevOps in general before diving into some of the specific areas. By no way is this list exhaustive and again, I would love for the community to assist in making this a useful resource. +You will see from the opening readme in the project repository that I have split things into sections and it is 12 weeks plus 6 days. For the first 6 days, we will explore the fundamentals of DevOps in general before diving into some of the specific areas. By no way is this list exhaustive and again, I would love for the community to assist in making this a useful resource. -Another resource I will share at this point and that I think everyone should have a good look at, maybe create your mind map for yourself and your interest and position, is the following: +Another resource I will share at this point and that I think everyone should have a good look at, maybe create your mind map for yourself and your interest and position, is the following: [DevOps Roadmap](https://roadmap.sh/devops) -I found this a great resource when I was creating my initial list and blog post on this topic. You can also see other areas go into a lot more detail outside of the 12 topics I have listed here in this repository. +I found this a great resource when I was creating my initial list and blog post on this topic. 
You can also see other areas go into a lot more detail outside of the 12 topics I have listed here in this repository. -## First Steps - What is DevOps? +## First Steps - What is DevOps? -There are so many blog articles and YouTube videos to list here, but as we start the 90-day challenge and we focus on spending around an hour a day learning something new or about DevOps, I thought it was good to get some of the high level of "what DevOps is" down to begin. +There are so many blog articles and YouTube videos to list here, but as we start the 90-day challenge and we focus on spending around an hour a day learning something new or about DevOps, I thought it was good to get some of the high level of "what DevOps is" down to begin. -Firstly, DevOps is not a tool. You cannot buy it, it is not a software SKU or an open source GitHub repository you can download. It is also not a programming language, it is also not some dark art magic either. +Firstly, DevOps is not a tool. You cannot buy it, it is not a software SKU or an open source GitHub repository you can download. It is also not a programming language, it is also not some dark art magic either. -DevOps is a way to do smarter things in Software Development. - Hold up... But if you are not a software developer should you turn away right now and not dive into this project??? No. Not at all. Stay... Because DevOps brings together a combination of software development and operations. I mentioned earlier that I was more on the VM side and that would generally fall under the Operations side of the house, but within the community, there are people with all different backgrounds where DevOps is 100% going to benefit the individual, Developers, Operations and QA Engineers all can equally learn these best practices by having a better understanding of DevOps. +DevOps is a way to do smarter things in Software Development. - Hold up... 
But if you are not a software developer should you turn away right now and not dive into this project??? No. Not at all. Stay... Because DevOps brings together a combination of software development and operations. I mentioned earlier that I was more on the VM side and that would generally fall under the Operations side of the house, but within the community, there are people with all different backgrounds where DevOps is 100% going to benefit the individual; Developers, Operations and QA Engineers can all equally learn these best practices by having a better understanding of DevOps. -DevOps is a set of practices that help to reach the goal of this movement: reducing the time between the ideation phase of a product and its release in production to the end-user or whomever it could be an internal team or customer. +DevOps is a set of practices that help to reach the goal of this movement: reducing the time between the ideation phase of a product and its release in production to the end-user, whoever that may be, whether an internal team or a customer. -Another area we will dive into in this first week is around **The Agile Methodology**. DevOps and Agile are widely adopted together to achieve continuous delivery of your **Application**. +Another area we will dive into in this first week is around **The Agile Methodology**. DevOps and Agile are widely adopted together to achieve continuous delivery of your **Application**. -The high-level takeaway is that a DevOps mindset or culture is about shrinking the long, drawn out software release process from potentially years to being able to drop smaller releases more frequently. The other key fundamental point to understand here is the responsibility of a DevOps engineer to break down silos between the teams I previously mentioned: Developers, Operations and QA. 
+The high-level takeaway is that a DevOps mindset or culture is about shrinking the long, drawn out software release process from potentially years to being able to drop smaller releases more frequently. The other key fundamental point to understand here is the responsibility of a DevOps engineer to break down silos between the teams I previously mentioned: Developers, Operations and QA. -From a DevOps perspective, **Development, Testing and Deployment** all land with the DevOps team. +From a DevOps perspective, **Development, Testing and Deployment** all land with the DevOps team. The final point I will make is to make this as effective and efficient as possible we must leverage **Automation**. -## Resources +## Resources -I am always open to adding additional resources to these readme files as it is here as a learning tool. +I am always open to adding additional resources to these readme files as it is here as a learning tool. -My advice is to watch all of the below and hopefully you have also picked something up from the text and explanations above. +My advice is to watch all of the below and hopefully you have also picked something up from the text and explanations above. - [DevOps in 5 Minutes](https://www.youtube.com/watch?v=Xrgk023l4lI) - [What is DevOps? Easy Way](https://www.youtube.com/watch?v=_Gpe1Zn-1fE&t=43s) - [DevOps roadmap 2022 | Success Roadmap 2022](https://www.youtube.com/watch?v=7l_n97Mt0ko) -If you made it this far, then you will know if this is where you want to be or not. See you on [Day 2](day02.md). +If you made it this far, then you will know if this is where you want to be or not. See you on [Day 2](day02.md). 
diff --git a/Days/day02.md b/Days/day02.md index cbfae4815..c58073fa4 100644 --- a/Days/day02.md +++ b/Days/day02.md @@ -1,68 +1,70 @@ --- -title: '#90DaysOfDevOps - Responsibilities of a DevOps Engineer - Day 2' +title: "#90DaysOfDevOps - Responsibilities of a DevOps Engineer - Day 2" published: false description: 90DaysOfDevOps - Responsibilities of a DevOps Engineer -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048699 -date: '2022-04-17T21:15:34Z' +date: "2022-04-17T21:15:34Z" --- -## Responsibilities of a DevOps Engineer -Hopefully, you are coming into this off the back of going through the resources and posting on [Day1 of #90DaysOfDevOps](day01.md) +## Responsibilities of a DevOps Engineer -It was briefly touched on in the first post but now we must get deeper into this concept and understand that there are two main parts when creating an application. We have the **Development** part where software developers program the application and test it. Then we have the **Operations** part where the application is deployed and maintained on a server. +Hopefully, you are coming into this off the back of going through the resources and posting on [Day1 of #90DaysOfDevOps](day01.md) + +It was briefly touched on in the first post but now we must get deeper into this concept and understand that there are two main parts when creating an application. We have the **Development** part where software developers program the application and test it. Then we have the **Operations** part where the application is deployed and maintained on a server. ## DevOps is the link between the two -To get to grips with DevOps or the tasks which a DevOps engineer would be carrying out we need to understand the tools or the process and overview of those and how they come together. 
+To get to grips with DevOps or the tasks which a DevOps engineer would be carrying out, we need to understand the tools or the process and overview of those and how they come together. Everything starts with the application! You will see so much throughout that it is all about the application when it comes to DevOps. -Developers will create an application, this can be done with many different technology stacks and let's leave that to the imagination for now as we get into this later. This can also involve many different programming languages, build tools, code repositories etc. +Developers will create an application; this can be done with many different technology stacks and let's leave that to the imagination for now as we get into this later. This can also involve many different programming languages, build tools, code repositories etc. -As a DevOps engineer you won't be programming the application but having a good understanding of the concepts of how a developer works and the systems, tools and processes they are using is key to success. +As a DevOps engineer you won't be programming the application, but having a good understanding of the concepts of how a developer works and the systems, tools and processes they are using is key to success. -At a very high level, you are going to need to know how the application is configured to talk to all of its required services or data services and then also sprinkle a requirement of how this can or should be tested. +At a very high level, you are going to need to know how the application is configured to talk to all of its required services or data services, and then also sprinkle in a requirement of how this can or should be tested. -The application will need to be deployed somewhere, lets's keep it generally simple here and make this a server, doesn't matter where but a server. This is then expected to be accessed by the customer or end user depending on the application that has been created. 
+The application will need to be deployed somewhere, let's keep it generally simple here and make this a server, doesn't matter where but a server. This is then expected to be accessed by the customer or end user depending on the application that has been created. -This server needs to run somewhere, on-premises, in a public cloud, serverless (Ok I have gone too far, we won't be covering serverless but its an option and more and more enterprises are heading this way) Someone needs to create and configure these servers and get them ready for the application to run. Now, this element might land to you as a DevOps engineer to deploy and configure these servers. +This server needs to run somewhere: on-premises, in a public cloud, serverless (Ok I have gone too far, we won't be covering serverless but it's an option and more and more enterprises are heading this way). Someone needs to create and configure these servers and get them ready for the application to run. Now, this element might land with you as a DevOps engineer to deploy and configure these servers. -These servers run an operating system and generally speaking this is going to be Linux but we have a whole section or week where we cover some of the foundational knowledge you should gain here. +These servers run an operating system and generally speaking this is going to be Linux, but we have a whole section or week where we cover some of the foundational knowledge you should gain here. -It is also likely that we need to communicate with other services in our network or environment, so we also need to have that level of knowledge around networking and configuring that, this might to some degree also land at the feet of the DevOps engineer. Again we will cover this in more detail in a dedicated section talking about all things DNS, DHCP, Load Balancing etc. 
+It is also likely that we need to communicate with other services in our network or environment, so we also need to have that level of knowledge around networking and configuring that; this might to some degree also land at the feet of the DevOps engineer. Again we will cover this in more detail in a dedicated section talking about all things DNS, DHCP, Load Balancing etc. -## Jack of all trades, Master of none +## Jack of all trades, Master of none -I will say at this point though, you don't need to be a Network or Infrastructure specialist you need a foundational knowledge of how to get things up and running and talking to each other, much the same as maybe having a foundational knowledge of a programming language but you don't need to be a developer. However, you might be coming into this as a specialist in an area and that is a great footing to adapt to other areas. +I will say at this point though, you don't need to be a Network or Infrastructure specialist; you need a foundational knowledge of how to get things up and running and talking to each other, much the same as maybe having a foundational knowledge of a programming language without needing to be a developer. However, you might be coming into this as a specialist in an area and that is a great footing to adapt to other areas. -You will also most likely not take over the management of these servers or the application daily. +You will also most likely not take over the management of these servers or the application daily. -We have been talking about servers but the likelihood is that your application will be developed to run as containers, Which still runs on a server for the most part but you will also need an understanding of not only virtualisation, Cloud Infrastructure as a Service (IaaS) but also containerisation as well, The focus in these 90 days will be more catered towards containers. 
+We have been talking about servers but the likelihood is that your application will be developed to run as containers, which still run on a server for the most part, but you will also need an understanding of not only virtualisation and Cloud Infrastructure as a Service (IaaS) but also containerisation. The focus in these 90 days will be more catered towards containers. ## High-Level Overview -On one side we have our developers creating new features and functionality (as well as bug fixes) for the application. +On one side we have our developers creating new features and functionality (as well as bug fixes) for the application. + +On the other side, we have some sort of environment, infrastructure or servers which are configured and managed to run this application and communicate with all its required services. -On the other side, we have some sort of environment, infrastructure or servers which are configured and managed to run this application and communicate with all its required services. +The big question is how do we get those features and bug fixes into our products and make them available to those end users? -The big question is how do we get those features and bug fixes into our products and make them available to those end users? +How do we release the new application version? This is one of the main tasks for a DevOps engineer, and the important thing here is not to just figure out how to do this once but we need to do this continuously and in an automated, efficient way which also needs to include testing! -How do we release the new application version? This is one of the main tasks for a DevOps engineer, and the important thing here is not to just figure out how to do this once but we need to do this continuously and in an automated, efficient way which also needs to include testing! +This is where we are going to end this day of learning, hopefully, this was useful. 
Over the next few days, we are going to dive a little deeper into some more areas of DevOps and then we will get into the sections that dive deeper into the tooling and processes and the benefits of these. -This is where we are going to end this day of learning, hopefully, this was useful. Over the next few days, we are going to dive a little deeper into some more areas of DevOps and then we will get into the sections that dive deeper into the tooling and processes and the benefits of these. +## Resources -## Resources +I am always open to adding additional resources to these readme files as it is here as a learning tool. -I am always open to adding additional resources to these readme files as it is here as a learning tool. +My advice is to watch all of the below and hopefully you also picked something up from the text and explanations above. -My advice is to watch all of the below and hopefully you also picked something up from the text and explanations above. - [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM) - [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE) - [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM) -- [What is DevOps? - AWS ](https://aws.amazon.com/devops/what-is-devops/) +- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/) - [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops) -If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md). +If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md). 
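Day 2 closes on the point that new application versions must be released continuously, in an automated and efficient way that includes testing. As a minimal sketch of what that can look like in practice, here is a hypothetical CI pipeline in GitHub Actions syntax — the workflow name, the `make test` target and the image name are illustrative assumptions, not part of this repository:

```yaml
# Hypothetical CI workflow sketch — names and build steps are placeholders.
name: build-test-deploy
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run unit tests # testing is baked into every release
        run: make test
      - name: Build container image
        run: docker build -t my-app:${{ github.sha }} .
  deploy:
    needs: build-and-test # deploy only runs if the tests pass
    runs-on: ubuntu-latest
    steps:
      - name: Roll out new version
        run: echo "deploy my-app:${{ github.sha }} to the environment"
```

The key design point for the text above is the `needs:` dependency: deployment is gated on the test job, so every commit that reaches production has passed the automated testing phase.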
diff --git a/Days/day03.md b/Days/day03.md index b72b255e6..896204c83 100644 --- a/Days/day03.md +++ b/Days/day03.md @@ -1,78 +1,82 @@ --- -title: '#90DaysOfDevOps - Application Focused - Day 3' +title: "#90DaysOfDevOps - Application Focused - Day 3" published: false description: 90DaysOfDevOps - Application Focused -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048825 --- + ## DevOps Lifecycle - Application Focused -As we continue through these next few weeks we are 100% going to come across these titles (Continuous Development, Testing, Deployment, Monitor) over and over again, If you are heading towards the DevOps Engineer role then repeatability will be something you will get used to but constantly enhancing each time is another thing that keeps things interesting. +As we continue through these next few weeks we are 100% going to come across these titles (Continuous Development, Testing, Deployment, Monitor) over and over again. If you are heading towards the DevOps Engineer role then repeatability will be something you will get used to, but constantly enhancing each time is another thing that keeps things interesting. + +In this hour we are going to take a look at the high-level view of the application from start to finish and then back around again like a constant loop. -In this hour we are going to take a look at the high-level view of the application from start to finish and then back around again like a constant loop. +### Development -### Development -Let's take a brand new example of an Application, to start with we have nothing created, maybe as a developer, you have to discuss with your client or end user the requirements and come up with some sort of plan or requirements for your Application. We then need to create from the requirements our brand new application. 
+Let's take a brand new example of an Application. To start with we have nothing created; maybe as a developer, you have to discuss with your client or end user the requirements and come up with some sort of plan or requirements for your Application. We then need to create our brand new application from those requirements. -In regards to tooling at this stage, there is no real requirement here other than choosing your IDE and the programming language you wish to use to write your application. +In regards to tooling at this stage, there is no real requirement here other than choosing your IDE and the programming language you wish to use to write your application. -As a DevOps engineer, remember you are probably not the one creating this plan or coding the application for the end user, this will be a skilled developer. +As a DevOps engineer, remember you are probably not the one creating this plan or coding the application for the end user; this will be a skilled developer. But it also would not hurt for you to be able to read some of the code so that you can make the best infrastructure decisions moving forward for your application. -We previously mentioned that this application can be written in any language. Importantly this should be maintained using a version control system, this is something we will cover also in detail later on and in particular, we will dive into **Git**. +We previously mentioned that this application can be written in any language. Importantly, this should be maintained using a version control system; this is something we will also cover in detail later on and in particular, we will dive into **Git**. 
-It is also likely that it will not be one developer working on this project although this could be the case even so best practices would require a code repository to store and collaborate on the code, this could be private or public and could be hosted or privately deployed generally speaking you would hear the likes of **GitHub or GitLab** being used as a code repository. Again we will cover these as part of our section on **Git** later on. +It is also likely that it will not be one developer working on this project, although this could be the case; even so, best practices would require a code repository to store and collaborate on the code. This could be private or public and could be hosted or privately deployed; generally speaking, you would hear the likes of **GitHub or GitLab** being used as a code repository. Again we will cover these as part of our section on **Git** later on. -### Testing -At this stage, we have our requirements and we have our application being developed. But we need to make sure we are testing our code in all the different environments that we have available to us or specifically maybe to the programming language chosen. +### Testing -This phase enables QA to test for bugs, more frequently we see containers being used for simulating the test environment which overall can improve on cost overheads of physical or cloud infrastructure. +At this stage, we have our requirements and we have our application being developed. But we need to make sure we are testing our code in all the different environments that we have available to us or specifically maybe to the programming language chosen. + +This phase enables QA to test for bugs; more frequently we see containers being used for simulating the test environment which overall can improve on the cost overheads of physical or cloud infrastructure. This phase is also likely going to be automated as part of the next area which is Continuous Integration. 
-The ability to automate this testing vs 10s,100s or even 1000s of QA engineers having to do this manually speaks for itself, these engineers can focus on something else within the stack to ensure you are moving faster and developing more functionality vs testing bugs and software which tends to be the hold up on most traditional software releases that use a waterfall methodology. +The ability to automate this testing vs 10s, 100s or even 1000s of QA engineers having to do this manually speaks for itself; these engineers can focus on something else within the stack to ensure you are moving faster and developing more functionality vs testing bugs and software, which tends to be the hold-up on most traditional software releases that use a waterfall methodology. + +### Integration -### Integration +Quite importantly, Integration sits in the middle of the DevOps lifecycle. It is the practice in which developers are required to commit changes to the source code more frequently. This could be on a daily or weekly basis. -Quite importantly Integration is at the middle of the DevOps lifecycle. It is the practice in which developers require to commit changes to the source code more frequently. This could be on a daily or weekly basis. -With every commit, your application can go through the automated testing phases and this allows for early detection of issues or bugs before the next phase. +With every commit, your application can go through the automated testing phases and this allows for early detection of issues or bugs before the next phase. +Now you might at this stage be saying "but we don't create applications, we buy them off the shelf from a software vendor". Don't worry, many companies do this and will continue to do this and it will be the software vendor that is concentrating on the above 3 phases, but you might want to still adopt the final phase as this will enable faster and more efficient deployments of your off-the-shelf software. 
-Now you might at this stage be saying "but we don't create applications, we buy it off the shelf from a software vendor" Don't worry many companies do this and will continue to do this and it will be the software vendor that is concentrating on the above 3 phases but you might want to still adopt the final phase as this will enable for faster and more efficient deployments of your off the shelf deployments. +I would also suggest just having this above knowledge is very important as you might buy off the shelf software today, but what about tomorrow or down the line... next job maybe? -I would also suggest just having this above knowledge is very important as you might buy off the shelf software today, but what about tomorrow or down the line... next job maybe? +### Deployment -### Deployment -Ok so we have our application built and tested against the requirements of our end user and we now need to go ahead and deploy this application into production for our end users to consume. +Ok so we have our application built and tested against the requirements of our end user and we now need to go ahead and deploy this application into production for our end users to consume. -This is the stage where the code is deployed to the production servers, now this is where things get extremely interesting and it is where the rest of our 86 days dives deeper into these areas. Because different applications require different possibly hardware or configurations. This is where **Application Configuration Management** and **Infrastructure as Code** could play a key part in your DevOps lifecycle. It might be that your application is **Containerised** but also available to run on a virtual machine. This then also leads us onto platforms like **Kubernetes** which would be orchestrating those containers and making sure you have the desired state available to your end users. 
+This is the stage where the code is deployed to the production servers. Now this is where things get extremely interesting and it is where the rest of our 86 days dives deeper into these areas, because different applications possibly require different hardware or configurations. This is where **Application Configuration Management** and **Infrastructure as Code** could play a key part in your DevOps lifecycle. It might be that your application is **Containerised** but also available to run on a virtual machine. This then also leads us onto platforms like **Kubernetes** which would be orchestrating those containers and making sure you have the desired state available to your end users. -Of these bold topics, we will go into more detail over the next few weeks to get a better foundational knowledge of what they are and when to use them. +Of these bold topics, we will go into more detail over the next few weeks to get a better foundational knowledge of what they are and when to use them. -### Monitoring +### Monitoring -Things are moving fast here and we have our Application that we are continuously updating with new features and functionality and we have our testing making sure no gremlins are being found. We have the application running in our environment that can be continually keeping the required configuration and performance. +Things are moving fast here and we have our Application that we are continuously updating with new features and functionality, and we have our testing making sure no gremlins are being found. We have the application running in our environment, continually keeping the required configuration and performance. -But now we need to be sure that our end users are getting the experience they require. 
Here we need to make sure that our Application Performance is continuously being monitored, this phase is going to allow your developers to make better decisions about enhancements to the application in future releases to better serve the end users. +But now we need to be sure that our end users are getting the experience they require. Here we need to make sure that our Application Performance is continuously being monitored, this phase is going to allow your developers to make better decisions about enhancements to the application in future releases to better serve the end users. -This section is also where we are going to capture that feedback wheel about the features that have been implemented and how the end users would like to make these better for them. +This section is also where we are going to capture that feedback wheel about the features that have been implemented and how the end users would like to make these better for them. -Reliability is a key factor here as well, at the end of the day we want our Application to be available all the time it is required. This then leads to other **observability, security and data management** areas that should be continuously monitored and feedback can always be used to better enhance, update and release the application continuously. +Reliability is a key factor here as well, at the end of the day we want our Application to be available all the time it is required. This then leads to other **observability, security and data management** areas that should be continuously monitored and feedback can always be used to better enhance, update and release the application continuously. -Some input from the community here specifically [@_ediri](https://twitter.com/_ediri) mentioned also part of this continuous process we should also have the FinOps teams involved. 
Apps & Data are running and stored somewhere you should be monitoring this continuously to make sure if things change from a resources point of view your costs are not causing some major financial pain on your Cloud Bills. +Some input from the community here specifically [@\_ediri](https://twitter.com/_ediri) mentioned also part of this continuous process we should also have the FinOps teams involved. Apps & Data are running and stored somewhere you should be monitoring this continuously to make sure if things change from a resources point of view your costs are not causing some major financial pain on your Cloud Bills. -I think it is also a good time to bring up the "DevOps Engineer" mentioned above, albeit there are many DevOps Engineer positions in the wild that people hold, this is not the ideal way of positioning the process of DevOps. What I mean is from speaking to others in the community the title of DevOps Engineer should not be the goal for anyone because really any position should be adopting DevOps processes and the culture explained here. DevOps should be used in many different positions such as Cloud-Native engineer/architect, virtualisation admin, cloud architect/engineer, and infrastructure admin. This is to name a few but the reason for using DevOps Engineer above was really to highlight the scope of the process used by any of the above positions and more. +I think it is also a good time to bring up the "DevOps Engineer" mentioned above, albeit there are many DevOps Engineer positions in the wild that people hold, this is not the ideal way of positioning the process of DevOps. What I mean is from speaking to others in the community the title of DevOps Engineer should not be the goal for anyone because really any position should be adopting DevOps processes and the culture explained here. 
DevOps should be used in many different positions such as Cloud-Native engineer/architect, virtualisation admin, cloud architect/engineer, and infrastructure admin. This is to name a few but the reason for using DevOps Engineer above was really to highlight the scope of the process used by any of the above positions and more. -## Resources +## Resources -I am always open to adding additional resources to these readme files as it is here as a learning tool. +I am always open to adding additional resources to these readme files as it is here as a learning tool. -My advice is to watch all of the below and hopefully you also picked something up from the text and explanations above. +My advice is to watch all of the below and hopefully you also picked something up from the text and explanations above. -- [Continuous Development](https://www.youtube.com/watch?v=UnjwVYAN7Ns) I will also add that this is focused on manufacturing but the lean culture can be closely followed with DevOps. +- [Continuous Development](https://www.youtube.com/watch?v=UnjwVYAN7Ns) I will also add that this is focused on manufacturing but the lean culture can be closely followed with DevOps. - [Continuous Testing - IBM YouTube](https://www.youtube.com/watch?v=RYQbmjLgubM) - [Continuous Integration - IBM YouTube](https://www.youtube.com/watch?v=1er2cjUq1UI) - [Continuous Monitoring](https://www.youtube.com/watch?v=Zu53QQuYqJ0) @@ -80,4 +84,4 @@ My advice is to watch all of the below and hopefully you also picked something u - [FinOps Foundation - What is FinOps](https://www.finops.org/introduction/what-is-finops/) - [**NOT FREE** The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win](https://www.amazon.com/Phoenix-Project-DevOps-Helping-Business/dp/1942788290/) -If you made it this far then you will know if this is where you want to be or not. See you on [Day 4](day04.md). +If you made it this far then you will know if this is where you want to be or not. 
See you on [Day 4](day04.md). diff --git a/Days/day04.md b/Days/day04.md index b7be1bc84..58cc9f3b8 100644 --- a/Days/day04.md +++ b/Days/day04.md @@ -1,8 +1,8 @@ --- -title: '#90DaysOfDevOps - DevOps & Agile - Day 4' +title: "#90DaysOfDevOps - DevOps & Agile - Day 4" published: false description: 90DaysOfDevOps - DevOps & Agile -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048700 @@ -71,11 +71,11 @@ DevOps uses tools for team communication, software development, deployment and i The combination of Agile and DevOps brings the following benefits you will get: -- Flexible management and powerful technology. -- Agile practices help DevOps teams to communicate their priorities more efficiently. -- The automation cost that you have to pay for your DevOps practices is justified by your agile requirement of deploying quickly and frequently. -- It leads to strengthening: the team adopting agile practices will improve collaboration, increase the team's motivation and decrease employee turnover rates. -- As a result, you get better product quality. +- Flexible management and powerful technology. +- Agile practices help DevOps teams to communicate their priorities more efficiently. +- The automation cost that you have to pay for your DevOps practices is justified by your agile requirement of deploying quickly and frequently. +- It leads to strengthening: the team adopting agile practices will improve collaboration, increase the team's motivation and decrease employee turnover rates. +- As a result, you get better product quality. Agile allows coming back to previous product development stages to fix errors and prevent the accumulation of technical debt. To adopt agile and DevOps simultaneously just follow 7 steps: @@ -92,8 +92,8 @@ What do you think? Do you have different views? 
I want to hear from Developers, ### Resources -- [DevOps for Developers – Day in the Life: DevOps Engineer in 2021](https://www.youtube.com/watch?v=2JymM0YoqGA) -- [3 Things I wish I knew as a DevOps Engineer](https://www.youtube.com/watch?v=udRNM7YRdY4) -- [How to become a DevOps Engineer feat. Shawn Powers](https://www.youtube.com/watch?v=kDQMjAQNvY4) +- [DevOps for Developers – Day in the Life: DevOps Engineer in 2021](https://www.youtube.com/watch?v=2JymM0YoqGA) +- [3 Things I wish I knew as a DevOps Engineer](https://www.youtube.com/watch?v=udRNM7YRdY4) +- [How to become a DevOps Engineer feat. Shawn Powers](https://www.youtube.com/watch?v=kDQMjAQNvY4) If you made it this far then you will know if this is where you want to be or not. See you on [Day 5](day05.md). diff --git a/Days/day05.md b/Days/day05.md index 15b2d7fcc..c5422835e 100644 --- a/Days/day05.md +++ b/Days/day05.md @@ -1,85 +1,86 @@ --- -title: '#90DaysOfDevOps - Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor > - Day 5' +title: "#90DaysOfDevOps - Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor > - Day 5" published: false description: 90DaysOfDevOps - Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor > -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048830 --- -## Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor > -Today we are going to focus on the individual steps from start to finish and the continuous cycle of an Application in a DevOps world. +## Plan > Code > Build > Testing > Release > Deploy > Operate > Monitor > + +Today we are going to focus on the individual steps from start to finish and the continuous cycle of an Application in a DevOps world. 
![DevOps](Images/Day5_DevOps8.png) -### Plan: +### Plan It all starts with the planning process this is where the development team gets together and figures out what types of features and bug fixes they're going to roll out in their next sprint. This is an opportunity as a DevOps Engineer for you to get involved with that and learn what kinds of things are going to be coming your way that you need to be involved with and also influence their decisions or their path and kind of help them work with the infrastructure that you've built or steer them towards something that's going to work better for them in case they're not on that path and so one key thing to point out here is the developers or software engineering team is your customer as a DevOps engineer so this is your opportunity to work with your customer before they go down a bad path. -### Code: +### Code -Now once that planning session's done they're going to go start writing the code you may or may not be involved a whole lot with this one of the places you may get involved with it, is whenever they're writing code you can help them better understand the infrastructure so if they know what services are available and how to best talk with those services so they're going to do that and then once they're done they'll merge that code into the repository +Now once that planning session's done they're going to go start writing the code you may or may not be involved a whole lot with this one of the places you may get involved with it, is whenever they're writing code you can help them better understand the infrastructure so if they know what services are available and how to best talk with those services so they're going to do that and then once they're done they'll merge that code into the repository -### Build: +### Build -This is where we'll kick off the first of our automation processes because we're going to take their code and we're going to build it depending on what language they're using it may be 
transpiring it or compiling it or it might be creating a docker image from that code either way we're going to go through that process using our ci cd pipeline
+This is where we'll kick off the first of our automation processes because we're going to take their code and we're going to build it depending on what language they're using it may be transpiling it or compiling it or it might be creating a docker image from that code either way we're going to go through that process using our CI/CD pipeline

-## Testing:
+## Testing

Once we've built it we're going to run some tests on it now the development team usually writes the test you may have some input in what tests get written but we need to run those tests and the testing is a way for us to try and minimise introducing problems out into production, it doesn't guarantee that but we want to get as close to a guarantee as we can that were one not introducing new bugs and two not breaking things that used to work

-## Release:
+## Release

-Once those tests pass we're going to do the release process and depending again on what type of application you're working on this may be a non-step. You know the code may just live in the GitHub repo or the git repository or wherever it lives but it may be the process of taking your compiled code or the docker image that you've built and putting it into a registry or a repository where it's accessible by your production servers for the deployment process

+Once those tests pass we're going to do the release process and depending again on what type of application you're working on this may be a non-step.
You know the code may just live in the GitHub repo or the git repository or wherever it lives but it may be the process of taking your compiled code or the docker image that you've built and putting it into a registry or a repository where it's accessible by your production servers for the deployment process

-## Deploy:
+## Deploy

-which is the thing that we do next because deployment is like the end game of this whole thing because deployments are when we put the code into production and it's not until we do that that our business realizes the value from all the time effort and hard work that you and the software engineering team have put into this product up to this point.
+Deployment is the thing that we do next, and it is the end game of this whole thing, because deployments are when we put the code into production and it's not until we do that that our business realizes the value from all the time, effort and hard work that you and the software engineering team have put into this product up to this point.
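+Taken together, the Build > Test > Release > Deploy stages described above are what a CI/CD pipeline automates. A minimal sketch, assuming a hypothetical GitHub Actions workflow where the image name, registry and deploy target are all illustrative placeholders rather than a real project:
+
+```yaml
+# Hypothetical GitHub Actions workflow illustrating the pipeline stages.
+name: app-pipeline
+on:
+  push:
+    branches: [main]
+
+jobs:
+  build-test-release-deploy:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@v4
+
+      # Build: compile/transpile the code, here by building a Docker image
+      - name: Build image
+        run: docker build -t registry.example.com/my-app:${{ github.sha }} .
+
+      # Test: run the test suite written by the development team
+      - name: Run tests
+        run: docker run --rm registry.example.com/my-app:${{ github.sha }} npm test
+
+      # Release: push the built image to a registry reachable by production
+      - name: Push image
+        run: docker push registry.example.com/my-app:${{ github.sha }}
+
+      # Deploy: roll the new image out to production (e.g. a Kubernetes cluster)
+      - name: Deploy
+        run: kubectl set image deployment/my-app my-app=registry.example.com/my-app:${{ github.sha }}
+```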
-## Operate: +## Operate -Once it's deployed we are going to operate it and operate it may involve something like you start getting calls from your customers that they're all annoyed that the site's running slow or their application is running slow right so you need to figure out why that is and then possibly build auto-scaling you know to handle increase the number of servers available during peak periods and decrease the number of servers during off-peak periods either way that's all operational type metrics, another operational thing that you do is include like a feedback loop from production back to your ops team letting you know about key events that happened in production such as a deployment back one step on the deployment thing this may or may not get automated depending on your environment the goal is to always automate it when possible there are some environments where you possibly need to do a few steps before you're ready to do that but ideally you want to deploy automatically as part of your automation process but if you're doing that it might be a good idea to include in your operational steps some type of notification so that your ops team knows that a deployment has happened +Once it's deployed we are going to operate it and operate it may involve something like you start getting calls from your customers that they're all annoyed that the site's running slow or their application is running slow right so you need to figure out why that is and then possibly build auto-scaling you know to handle increase the number of servers available during peak periods and decrease the number of servers during off-peak periods either way that's all operational type metrics, another operational thing that you do is include like a feedback loop from production back to your ops team letting you know about key events that happened in production such as a deployment back one step on the deployment thing this may or may not get automated depending on your environment the 
goal is to always automate it when possible there are some environments where you possibly need to do a few steps before you're ready to do that but ideally you want to deploy automatically as part of your automation process but if you're doing that it might be a good idea to include in your operational steps some type of notification so that your ops team knows that a deployment has happened -## Monitor: +## Monitor All of the above parts lead to the final step because you need to have monitoring, especially around operational issues auto-scaling troubleshooting like you don't know -there's a problem if you don't have monitoring in place to tell you that there's a problem so some of the things you might build monitoring for are memory utilization CPU utilization disk space, API endpoint, response time, how quickly that endpoint is responding and a big part of that as well is logs. Logs give developers the ability to see what is happening without having to access production systems. +there's a problem if you don't have monitoring in place to tell you that there's a problem so some of the things you might build monitoring for are memory utilization CPU utilization disk space, API endpoint, response time, how quickly that endpoint is responding and a big part of that as well is logs. Logs give developers the ability to see what is happening without having to access production systems. -## Rince & Repeat: +## Rinse & Repeat Once that's in place you go right back to the beginning to the planning stage and go through the whole thing again -## Continuous: +## Continuous -Many tools help us achieve the above continuous process, all this code and the ultimate goal of being completely automated, cloud infrastructure or any environment is often described as Continuous Integration/ Continuous Delivery/Continous Deployment or “CI/CD” for short. We will spend a whole week on CI/CD later on in the 90 Days with some examples and walkthroughs to grasp the fundamentals. 
+Many tools help us achieve the above continuous process, all this code and the ultimate goal of being completely automated, cloud infrastructure or any environment is often described as Continuous Integration / Continuous Delivery / Continuous Deployment or “CI/CD” for short. We will spend a whole week on CI/CD later on in the 90 Days with some examples and walkthroughs to grasp the fundamentals.

-### Continuous Delivery:
+### Continuous Delivery

-Continuous Delivery = Plan > Code > Build > Test
+Continuous Delivery = Plan > Code > Build > Test

-### Continuous Integration:
+### Continuous Integration

-This is effectively the outcome of the Continuous Delivery phases above plus the outcome of the Release phase. This is the case for both failure and success but this is fed back into continuous delivery or moved to Continuous Deployment.
+This is effectively the outcome of the Continuous Delivery phases above plus the outcome of the Release phase. This is the case for both failure and success but this is fed back into continuous delivery or moved to Continuous Deployment.

-Continuous Integration = Plan > Code > Build > Test > Release
+Continuous Integration = Plan > Code > Build > Test > Release

-### Continuous Deployment:
+### Continuous Deployment

-If you have a successful release from your continuous integration then move to Continuous Deployment which brings in the following phases
+If you have a successful release from your continuous integration then move to Continuous Deployment which brings in the following phases

-CI Release is Success = Continuous Deployment = Deploy > Operate > Monitor
+CI Release is Success = Continuous Deployment = Deploy > Operate > Monitor

-You can see these three Continuous notions above as the simple collection of phases of the DevOps Lifecycle.
+You can see these three Continuous notions above as the simple collection of phases of the DevOps Lifecycle.
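+The monitoring metrics listed under Monitor above (API endpoint response time in particular) can be sketched as a simple check. This is a minimal illustration only; the 200 ms threshold and the simulated handlers are assumptions, and a real setup would use a monitoring agent or an HTTP client rather than a callable:
+
+```python
+import time
+
+def check_response_time(handler, threshold_ms=200.0):
+    """Time a call to `handler` and flag whether it stayed under the threshold.
+
+    `handler` stands in for an HTTP request to an API endpoint.
+    """
+    start = time.perf_counter()
+    handler()
+    elapsed_ms = (time.perf_counter() - start) * 1000.0
+    return {"elapsed_ms": elapsed_ms, "healthy": elapsed_ms <= threshold_ms}
+
+# Simulate a fast endpoint (~10 ms) and a slow one (~300 ms)
+fast = check_response_time(lambda: time.sleep(0.01))
+slow = check_response_time(lambda: time.sleep(0.3))
+print(fast["healthy"], slow["healthy"])
+```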
-This last bit was a bit of a recap for me on Day 3 but think this makes things clearer for me. +This last bit was a bit of a recap for me on Day 3 but think this makes things clearer for me. -### Resources: +### Resources - [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU) -- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps? ](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) +- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) - [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk) -If you made it this far then you will know if this is where you want to be or not. +If you made it this far then you will know if this is where you want to be or not. -See you on [Day 6](day06.md). +See you on [Day 6](day06.md). diff --git a/Days/day06.md b/Days/day06.md index 91d5dde11..a4d02a6b1 100644 --- a/Days/day06.md +++ b/Days/day06.md @@ -1,53 +1,56 @@ --- -title: '#90DaysOfDevOps - DevOps - The real stories - Day 6' +title: "#90DaysOfDevOps - DevOps - The real stories - Day 6" published: false description: 90DaysOfDevOps - DevOps - The real stories -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048855 --- -## DevOps - The real stories -DevOps to begin with was seen to be out of reach for a lot of us as we didn't have an environment or requirement anything like a Netflix or fortune 500 but think now that is beginning to sway into the normal when adopting a DevOps practice within any type of business. +## DevOps - The real stories -You will see from the second link below in references there are a lot of different industries and verticals using DevOps and having a hugely positive effect on their business objectives. 
+DevOps to begin with was seen to be out of reach for a lot of us as we didn't have an environment or requirement anything like a Netflix or a Fortune 500, but I think now that is beginning to become the norm when adopting a DevOps practice within any type of business.

-The overarching benefit here is DevOps if done correctly should help your Business improve the speed and quality of software development.

+You will see from the second link below in references there are a lot of different industries and verticals using DevOps and having a hugely positive effect on their business objectives.

-I wanted to take this Day to look at successful companies that have adopted a DevOps practice and share some resources around this, This will be another great one for the community to also dive in and help here. Have you adopted a DevOps culture in your business? Has it been successful?
+The overarching benefit here is that DevOps, if done correctly, should help your business improve the speed and quality of software development.

-I mentioned Netflix above and will touch on them again as it is a very good model and advanced to what we generally see today even still but will also mention some other big name brands that are succeeding it seems.
+I wanted to take this Day to look at successful companies that have adopted a DevOps practice and share some resources around this. This will be another great one for the community to also dive in and help here. Have you adopted a DevOps culture in your business? Has it been successful?

-## Amazon
-In 2010 Amazon moved their physical server footprint to Amazon Web Services (AWS) cloud this allowed them to save resources by scaling capacity up and down in very small increments. We also know that this AWS cloud would go on and make a huge amount of revenue itself whilst still running the Amazon retail branch of the company.
+I mentioned Netflix above and will touch on them again as it is a very good model and advanced to what we generally see today even still but will also mention some other big name brands that are succeeding it seems.

-Amazon adopted in 2011 (According to the resource below) a continued deployment process where their developers could deploy code whenever they want and to whatever servers they needed. This enabled Amazon to achieve deploying new software to production servers on average every 11.6 seconds!

+## Amazon

-## Netflix
-Who doesn't use Netflix? a huge quality streaming service with by all accounts at least personally a great user experience.

+In 2010 Amazon moved their physical server footprint to Amazon Web Services (AWS) cloud this allowed them to save resources by scaling capacity up and down in very small increments. We also know that this AWS cloud would go on and make a huge amount of revenue itself whilst still running the Amazon retail branch of the company.

-Why is that user experience so great? Well, the ability to deliver a service with no recollected memory for me at least of glitches requires speed, flexibility, and attention to quality.

+Amazon adopted in 2011 (according to the resource below) a continuous deployment process where their developers could deploy code whenever they want and to whatever servers they needed. This enabled Amazon to achieve deploying new software to production servers on average every 11.6 seconds!
+
+## Netflix
+
+Who doesn't use Netflix? A huge quality streaming service with, by all accounts at least personally, a great user experience.
+
+Why is that user experience so great? Well, the ability to deliver a service with no recollected memory for me at least of glitches requires speed, flexibility, and attention to quality.

NetFlix developers can automatically build pieces of code into deployable web images without relying on IT operations.
As the images are updated, they are integrated into Netflix’s infrastructure using a custom-built, web-based platform. -Continuous Monitoring is in place so that if the deployment of the images fails, the new images are rolled back and traffic rerouted to the previous version. +Continuous Monitoring is in place so that if the deployment of the images fails, the new images are rolled back and traffic rerouted to the previous version. -There is a great talk listed below that goes into more about the DOs and DONTs that Netflix lives and dies by within their teams. +There is a great talk listed below that goes into more about the DOs and DONTs that Netflix lives and dies by within their teams. -## Etsy -As with many of us and many companies, there was a real struggle around slow and painful deployments. In the same vein, we might have also experienced working in companies that have lots of siloes and teams that are not working well together. +## Etsy -From what I can make out at least from reading about Amazon and Netflix, Etsy might have adopted the letting developers deploy their code around the end of 2009 which might have been before the other two were mentioned. (interesting!) +As with many of us and many companies, there was a real struggle around slow and painful deployments. In the same vein, we might have also experienced working in companies that have lots of siloes and teams that are not working well together. -An interesting takeaway I read here was that they realised that when developers feel responsible for deployment they also would take responsibility for application performance, uptime and other goals. +From what I can make out at least from reading about Amazon and Netflix, Etsy might have adopted the letting developers deploy their code around the end of 2009 which might have been before the other two were mentioned. (interesting!) 
+An interesting takeaway I read here was that they realised that when developers feel responsible for deployment they also would take responsibility for application performance, uptime and other goals. A learning culture is a key part of DevOps, even failure can be a success if lessons are learned. (not sure where this quote came from but it kind of makes sense!) -I have added some other stories where DevOps has changed the game within some of these massively successful companies. +I have added some other stories where DevOps has changed the game within some of these massively successful companies. -## Resources +## Resources - [How Netflix Thinks of DevOps](https://www.youtube.com/watch?v=UTKIT6STSVM) - [16 Popular DevOps Use Cases & Real Life Applications [2021]](https://www.upgrad.com/blog/devops-use-cases-applications/) @@ -59,14 +62,14 @@ I have added some other stories where DevOps has changed the game within some of ### Recap of our first few days looking at DevOps -- DevOps is a combo of Development and Operations that allows a single team to manage the whole application development lifecycle that consists of **Development**, **Testing**, **Deployment**, **Operations**. +- DevOps is a combo of Development and Operations that allows a single team to manage the whole application development lifecycle that consists of **Development**, **Testing**, **Deployment**, **Operations**. -- The main focus and aim of DevOps are to shorten the development lifecycle while delivering features, fixes and functionality frequently in close alignment with business objectives. +- The main focus and aim of DevOps are to shorten the development lifecycle while delivering features, fixes and functionality frequently in close alignment with business objectives. - DevOps is a software development approach through which software can be delivered and developed reliably and quickly. 
You may also see this referenced as **Continuous Development, Testing, Deployment, Monitoring** -If you made it this far then you will know if this is where you want to be or not. See you on [Day 7](day07.md). +If you made it this far then you will know if this is where you want to be or not. See you on [Day 7](day07.md). -Day 7 will be us diving into a programming language, I am not aiming to be a developer but I want to be able to understand what the developers are doing. +Day 7 will be us diving into a programming language, I am not aiming to be a developer but I want to be able to understand what the developers are doing. -Can we achieve that in a week? Probably not but if we spend 7 days or 7 hours learning something we are going to know more than when we started. +Can we achieve that in a week? Probably not but if we spend 7 days or 7 hours learning something we are going to know more than when we started. diff --git a/Days/day07.md b/Days/day07.md index 434e0ba1e..62d5078d8 100644 --- a/Days/day07.md +++ b/Days/day07.md @@ -1,69 +1,71 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Learning a Programming Language - Day 7' +title: "#90DaysOfDevOps - The Big Picture: Learning a Programming Language - Day 7" published: false description: 90DaysOfDevOps - The Big Picture DevOps & Learning a Programming Language -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048856 --- + ## The Big Picture: DevOps & Learning a Programming Language -I think it is fair to say to be successful in the long term as a DevOps engineer you've got to know at least one programming language at a foundational level. I want to take this first session of this section to explore why this is such a critical skill to have, and hopefully, by the end of this week or section, you are going to have a better understanding of the why, how and what to do to progress with your learning journey. 
+I think it is fair to say to be successful in the long term as a DevOps engineer you've got to know at least one programming language at a foundational level. I want to take this first session of this section to explore why this is such a critical skill to have, and hopefully, by the end of this week or section, you are going to have a better understanding of the why, how and what to do to progress with your learning journey.

-I think if I was to ask out on social do you need to have programming skills for DevOps related roles, the answer will be most likely a hard yes? Let me know if you think otherwise? Ok but then a bigger question and this is where you won't get such a clear answer which programming language? The most common answer I have seen here has been Python or increasingly more often, we're seeing Golang or Go should be the language that you learn.
+I think if I was to ask out on social "do you need to have programming skills for DevOps-related roles?", the answer would most likely be a hard yes. Let me know if you think otherwise! Ok, but then a bigger question, and this is where you won't get such a clear answer: which programming language? The most common answer I have seen here has been Python or, increasingly more often, we're seeing that Golang or Go should be the language that you learn.

-To be successful in DevOps you have to have a good knowledge of programming skills is my takeaway from that at least. But we have to understand why we need it to choose the right path.
+My takeaway from that, at least, is that to be successful in DevOps you have to have a good knowledge of programming. But we have to understand why we need it to choose the right path.

-## Understand why you need to learn a programming language.
+## Understand why you need to learn a programming language.
-The reason that Python and Go are recommended so often for DevOps engineers is that a lot of the DevOps tooling is written in either Python or Go, which makes sense if you are going to be building DevOps tools. Now this is important as this will determine really what you should learn and that would likely be the most beneficial. If you are going to be building DevOps tools or you are joining a team that does then it would make sense to learn that same language, if you are going to be heavily involved in Kubernetes or Containers then it's more than likely that you would want to choose Go as your programming language. For me, the company I work for (Kasten by Veeam) is in the Cloud-Native ecosystem focused on data management for Kubernetes and everything is written in Go.
+The reason that Python and Go are recommended so often for DevOps engineers is that a lot of the DevOps tooling is written in either Python or Go, which makes sense if you are going to be building DevOps tools. This is important, as it will largely determine what you should learn and what would likely be the most beneficial. If you are going to be building DevOps tools, or you are joining a team that does, then it would make sense to learn that same language; if you are going to be heavily involved in Kubernetes or containers, then it's more than likely that you would want to choose Go as your programming language. For me, the company I work for (Kasten by Veeam) is in the Cloud-Native ecosystem focused on data management for Kubernetes, and everything is written in Go.

-But then you might not have clear cut reasoning like that to choose you might be a student or transitioning careers with no real decision made for you. I think in this situation then you should choose the one that seems to resonate and fit with the applications you are looking to work with.
+But then you might not have clear-cut reasoning like that to choose from; you might be a student or transitioning careers with no real decision made for you. I think in this situation you should choose the language that seems to resonate and fit with the applications you are looking to work with.

-Remember I am not looking to become a software developer here I just want to understand a little more about the programming language so that I can read and understand what those tools are doing and then that leads to possibly how we can help improve things.
+Remember, I am not looking to become a software developer here; I just want to understand a little more about the programming language so that I can read and understand what those tools are doing, which then possibly leads to how we can help improve things.

-I would also it is also important to know how you interact with those DevOps tools which could be Kasten K10 or it could be Terraform and HCL. These are what we will call config files and this is how you interact with those DevOps tools to make things happen, commonly these are going to be YAML. (We may use the last day of this section to dive a little into YAML)
+I would also say it is important to know how you interact with those DevOps tools, which could be Kasten K10 or it could be Terraform and HCL. These are what we will call config files, and this is how you interact with those DevOps tools to make things happen; commonly these are going to be YAML. (We may use the last day of this section to dive a little into YAML)

## Did I just talk myself out of learning a programming language?

-Most of the time or depending on the role, you will be helping engineering teams implement DevOps into their workflow, a lot of testing around the application and making sure that the workflow that is built aligns to those DevOps principles we mentioned over the first few days.
But in reality, this is going to be a lot of the time troubleshooting an application performance issue or something along those lines. This comes back to my original point and reasoning, the programming language I need to know is the one that the code is written in? If their application is written in NodeJS it won’t help much if you have a Go or Python badge.
+Most of the time, or depending on the role, you will be helping engineering teams implement DevOps into their workflow, doing a lot of testing around the application and making sure that the workflow that is built aligns to those DevOps principles we mentioned over the first few days. But in reality, a lot of the time this is going to be troubleshooting an application performance issue or something along those lines. This comes back to my original point and reasoning: the programming language I need to know is the one that the code is written in. If their application is written in NodeJS it won’t help much if you have a Go or Python badge.

-## Why Go
+## Why Go

Why Golang as the next programming language for DevOps? Go has become a very popular programming language in recent years. According to the StackOverflow Survey for 2021, Go came in fourth for the most wanted programming, scripting and markup languages, with Python being top, but hear me out.

[StackOverflow 2021 Developer Survey – Most Wanted Link](https://insights.stackoverflow.com/survey/2021#section-most-loved-dreaded-and-wanted-programming-scripting-and-markup-languages)

-As I have also mentioned some of the most known DevOps tools and platforms are written in Go such as Kubernetes, Docker, Grafana and Prometheus.
+As I have also mentioned, some of the best-known DevOps tools and platforms are written in Go, such as Kubernetes, Docker, Grafana and Prometheus.

What are some of the characteristics of Go that make it great for DevOps?
-## Build and Deployment of Go Programs
-An advantage of using a language like Python that is interpreted in a DevOps role is that you don’t need to compile a python program before running it. Especially for smaller automation tasks, you don’t want to be slowed down by a build process that requires compilation even though, Go is a compiled programming language, **Go compiles directly into machine code**. Go is known also for fast compilation times.
+## Build and Deployment of Go Programs
+
+An advantage of using an interpreted language like Python in a DevOps role is that you don’t need to compile a Python program before running it; especially for smaller automation tasks, you don’t want to be slowed down by a build process that requires compilation. Go is a compiled programming language, but **Go compiles directly into machine code** and is also known for fast compilation times.

-## Go vs Python for DevOps
+## Go vs Python for DevOps

-Go Programs are statically linked, this means that when you compile a go program everything is included in a single binary executable, and no external dependencies will be required that would need to be installed on the remote machine, this makes the deployment of go programs easy, compared to python program that uses external libraries you have to make sure that all those libraries are installed on the remote machine that you wish to run on.
+Go programs are statically linked, which means that when you compile a Go program everything is included in a single binary executable and no external dependencies need to be installed on the remote machine. This makes the deployment of Go programs easy, compared to a Python program that uses external libraries, where you have to make sure that all those libraries are installed on the remote machine that you wish to run on.
-Go is a platform-independent language, which means you can produce binary executables for *all the operating systems, Linux, Windows, macOS etc and very easy to do so. With Python, it is not as easy to create these binary executables for particular operating systems.
+Go is a platform-independent language, which means you can produce binary executables for all the major operating systems, Linux, Windows, macOS etc, and it is very easy to do so. With Python, it is not as easy to create these binary executables for particular operating systems.

-Go is a very performant language, it has fast compilation and fast run time with lower resource usage like CPU and memory especially compared to python, numerous optimisations have been implemented in the Go language that makes it so performant. (Resources below)
+Go is a very performant language; it has fast compilation and a fast run time with lower resource usage (CPU and memory), especially compared to Python. Numerous optimisations have been implemented in the Go language that make it so performant. (Resources below)

-Unlike Python which often requires the use of third party libraries to implement a particular python program, go includes a standard library that has the majority of functionality that you would need for DevOps built directly into it. This includes functionality file processing, HTTP web services, JSON processing, native support for concurrency and parallelism as well as built-in testing.
+Unlike Python, which often requires the use of third-party libraries to implement a particular program, Go includes a standard library that has the majority of the functionality you would need for DevOps built directly into it. This includes file processing, HTTP web services, JSON processing, native support for concurrency and parallelism, as well as built-in testing.
-This is by no way throwing Python under the bus I am just giving my reasons for choosing Go but they are not the above Go vs Python it's generally because it makes sense as the company I work for develops software in Go so that is why.
+This is in no way throwing Python under the bus; I am just giving my reasons for choosing Go. In the end, it is not really the Go vs Python points above; it's mostly because the company I work for develops software in Go, so learning it makes sense for me.

-I will say that once you have or at least I am told as I am not many pages into this chapter right now, is that once you learn your first programming language it becomes easier to take on other languages. You're probably never going to have a single job in any company anywhere where you don't have to deal with managing, architect, orchestrating, debug JavaScript and Node JS applications.
+I will say, or at least I am told, as I am not many pages into this chapter right now, that once you learn your first programming language it becomes easier to take on other languages. You're probably never going to have a single job in any company anywhere where you don't have to deal with managing, architecting, orchestrating, or debugging JavaScript and NodeJS applications.
## Resources - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) -Now for the next 6 days of this topic, I intend to work through some of the resources listed above and document my notes for each day. You will notice that they are generally around 3 hours as a full course, I wanted to share my complete list so that if you have time you should move ahead and work through each one if time permits, I will be sticking to my learning hour each day. +Now for the next 6 days of this topic, I intend to work through some of the resources listed above and document my notes for each day. 
You will notice that they are generally around 3 hours as a full course, I wanted to share my complete list so that if you have time you should move ahead and work through each one if time permits, I will be sticking to my learning hour each day. -See you on [Day 8](day08.md). +See you on [Day 8](day08.md). diff --git a/Days/day08.md b/Days/day08.md index 411bb3cc3..5f18adb12 100644 --- a/Days/day08.md +++ b/Days/day08.md @@ -1,51 +1,52 @@ --- -title: '#90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World - Day 8' +title: "#90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World - Day 8" published: false description: 90DaysOfDevOps - Setting up your DevOps environment for Go & Hello World -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048857 --- + ## Setting up your DevOps environment for Go & Hello World -Before we get into some of the fundamentals of Go we should get Go installed on our workstation and do what every "learning programming 101" module teaches us which is to create the Hello World app. As this one is going to be walking through the steps to get Go installed on your workstation we are going to attempt to document the process in pictures so people can easily follow along. +Before we get into some of the fundamentals of Go we should get Go installed on our workstation and do what every "learning programming 101" module teaches us which is to create the Hello World app. As this one is going to be walking through the steps to get Go installed on your workstation we are going to attempt to document the process in pictures so people can easily follow along. -First of all, let's head on over to [go.dev/dl](https://go.dev/dl/) and you will be greeted with some available options for downloads. +First of all, let's head on over to [go.dev/dl](https://go.dev/dl/) and you will be greeted with some available options for downloads. 
![](Images/Day8_Go1.png)

-If we made it this far you probably know which workstation operating system you are running so select the appropriate download and then we can get installing. I am using Windows for this walkthrough, basically, from this next screen, we can leave all the defaults in place for now. ***(I will note that at the time of writing this was the latest version so screenshots might be out of date)***
+If we made it this far you probably know which workstation operating system you are running, so select the appropriate download and then we can get installing. I am using Windows for this walkthrough; from this next screen, we can leave all the defaults in place for now. **_(I will note that at the time of writing this was the latest version so screenshots might be out of date)_**

![](Images/Day8_Go2.png)

-Also note if you do have an older version of Go installed you will have to remove this before installing, Windows has this built into the installer and will remove and install as one.
+Also note that if you do have an older version of Go installed you will have to remove it before installing; Windows has this built into the installer and will remove and install as one step.

-Once finished you should now open a command prompt/terminal and we want to check that we have to Go installed. If you do not get the output that we see below then Go is not installed and you will need to retrace your steps.
+Once finished you should now open a command prompt/terminal and check that we have Go installed. If you do not get the output that we see below then Go is not installed and you will need to retrace your steps.

`go version`

![](Images/Day8_Go3.png)

-Next up we want to check our environment for Go. This is always good to check to make sure your working directories are configured correctly, as you can see below we need to make sure you have the following directory on your system.
+Next up we want to check our environment for Go.
This is always good to check to make sure your working directories are configured correctly; as you can see below, we need to make sure you have the following directory on your system.

![](Images/Day8_Go4.png)

-Did you check? Are you following along? You will probably get something like the below if you try and navigate there.
+Did you check? Are you following along? You will probably get something like the below if you try and navigate there.

![](Images/Day8_Go5.png)

-Ok, let's create that directory for ease I am going to use the mkdir command in my PowerShell terminal. We also need to create 3 folders within the Go folder as you will see also below.
+Ok, let's create that directory; for ease I am going to use the mkdir command in my PowerShell terminal. We also need to create 3 folders within the Go folder, as you will also see below.

![](Images/Day8_Go6.png)

-Now we have to Go installed and we have our Go working directory ready for action. We now need an integrated development environment (IDE) Now there are many out there available that you can use but the most common and the one I use is Visual Studio Code or Code. You can learn more about IDEs [here](https://www.youtube.com/watch?v=vUn5akOlFXQ).
+Now we have Go installed and we have our Go working directory ready for action. We now need an integrated development environment (IDE). There are many available that you can use, but the most common and the one I use is Visual Studio Code. You can learn more about IDEs [here](https://www.youtube.com/watch?v=vUn5akOlFXQ).

-If you have not downloaded and installed VSCode already on your workstation then you can do so by heading [here](https://code.visualstudio.com/download). As you can see below you have your different OS options.
+If you have not already downloaded and installed VSCode on your workstation then you can do so by heading [here](https://code.visualstudio.com/download). As you can see below you have your different OS options.
![](Images/Day8_Go7.png)

-Much the same as with the Go installation we are going to download and install and keep the defaults. Once complete you can open VSCode you can select Open File and navigate to our Go directory that we created above.
+Much the same as with the Go installation, we are going to download and install and keep the defaults. Once complete you can open VSCode, select Open File, and navigate to the Go directory that we created above.

![](Images/Day8_Go8.png)

@@ -55,13 +56,13 @@ Now you should see the three folders we also created earlier as well and what we

![](Images/Day8_Go9.png)

-Pretty easy stuff I would say up till this point? Now we are going to create our first Go Program with no understanding of anything we put in this next phase.
+Pretty easy stuff up to this point, I would say. Now we are going to create our first Go program, with no understanding yet of anything we put in this next phase.

-Next, create a file called `main.go` in your `Hello` folder. As soon as you hit enter on the main.go you will be asked if you want to install the Go extension and also packages you can also check that empty pkg file that we made a few steps back and notice that we should have some new packages in there now?
+Next, create a file called `main.go` in your `Hello` folder. As soon as you hit enter on main.go you will be asked if you want to install the Go extension and packages; you can also check the empty pkg folder that we made a few steps back and notice that we should have some new packages in there now.

![](Images/Day8_Go10.png)

-Now let's get this Hello World app going, copy the following code into your new main.go file and save that.
+Now let's get this Hello World app going: copy the following code into your new main.go file and save it.
``` package main @@ -72,18 +73,21 @@ func main() { fmt.Println("Hello #90DaysOfDevOps") } ``` -Now I appreciate that the above might make no sense at all, but we will cover more about functions, packages and more in later days. For now, let's run our app. Back in the terminal and in our Hello folder we can now check that all is working. Using the command below we can check to see if our generic learning program is working. + +Now I appreciate that the above might make no sense at all, but we will cover more about functions, packages and more in later days. For now, let's run our app. Back in the terminal and in our Hello folder we can now check that all is working. Using the command below we can check to see if our generic learning program is working. ``` go run main.go ``` + ![](Images/Day8_Go11.png) -It doesn't end there though, what if we now want to take our program and run it on other Windows machines? We can do that by building our binary using the following command +It doesn't end there though, what if we now want to take our program and run it on other Windows machines? 
We can do that by building our binary using the following command ``` go build main.go -``` +``` + ![](Images/Day8_Go12.png) If we run that, we would see the same output: @@ -97,11 +101,11 @@ Hello #90DaysOfDevOps - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) See you on [Day 9](day09.md). 
diff --git a/Days/day09.md b/Days/day09.md index ad75c072d..9ea3affb6 100644 --- a/Days/day09.md +++ b/Days/day09.md @@ -1,52 +1,56 @@ --- -title: '#90DaysOfDevOps - Let''s explain the Hello World code - Day 9' +title: "#90DaysOfDevOps - Let's explain the Hello World code - Day 9" published: false description: 90DaysOfDevOps - Let's explain the Hello World code -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1099682 --- + ## Let's explain the Hello World code -### How Go works +### How Go works + +On [Day 8](day08.md) we walked through getting Go installed on your workstation and we then created our first Go application. -On [Day 8](day08.md) we walked through getting Go installed on your workstation and we then created our first Go application. - -In this section, we are going to take a deeper look into the code and understand a few more things about the Go language. +In this section, we are going to take a deeper look into the code and understand a few more things about the Go language. ### What is Compiling? + Before we get into the [6 lines of the Hello World code](Go/hello.go) we need to have a bit of an understanding of compiling. -Programming languages that we commonly use such as Python, Java, Go and C++ are high-level languages. Meaning they are human-readable but when a machine is trying to execute a program it needs to be in a form that a machine can understand. We have to translate our human-readable code to machine code which is called compiling. +Programming languages that we commonly use such as Python, Java, Go and C++ are high-level languages. Meaning they are human-readable but when a machine is trying to execute a program it needs to be in a form that a machine can understand. We have to translate our human-readable code to machine code which is called compiling. 
![](Images/Day9_Go1.png)

-From the above you can see what we did on [Day 8](day08.md) here, we created a simple Hello World main.go and we then used the command `go build main.go` to compile our executable.
+From the above you can see what we did on [Day 8](day08.md): we created a simple Hello World main.go and we then used the command `go build main.go` to compile our executable.

### What are packages?

-A package is a collection of source files in the same directory that are compiled together. We can simplify this further, a package is a bunch of .go files in the same directory. Remember our Hello folder from Day 8? If and when you get into more complex Go programs you might find that you have folder1 folder2 and folder3 containing different.go files that make up your program with multiple packages.
-We use packages so we can reuse other people's code, we don't have to write everything from scratch. Maybe we are wanting a calculator as part of our program, you could probably find an existing Go Package that contains the mathematical functions that you could import into your code saving you a lot of time and effort in the long run.
+A package is a collection of source files in the same directory that are compiled together. We can simplify this further: a package is a bunch of .go files in the same directory. Remember our Hello folder from Day 8? If and when you get into more complex Go programs you might find that you have folder1, folder2 and folder3 containing different .go files that make up your program with multiple packages.
+
+We use packages so we can reuse other people's code; we don't have to write everything from scratch. Maybe we want a calculator as part of our program; you could probably find an existing Go package that contains the mathematical functions that you could import into your code, saving you a lot of time and effort in the long run.
+ +Go encourages you to organise your code in packages so that it is easy to reuse and maintain source code. -Go encourages you to organise your code in packages so that it is easy to reuse and maintain source code. +### Hello #90DaysOfDevOps Line by Line -### Hello #90DaysOfDevOps Line by Line -Now let's take a look at our Hello #90DaysOfDevOps main.go file and walk through the lines. +Now let's take a look at our Hello #90DaysOfDevOps main.go file and walk through the lines. ![](Images/Day9_Go2.png) -In the first line, you have `package main` which means that this file belongs to a package called main. All .go files need to belong to a package, they should also have `package something` in the opening line. +In the first line, you have `package main` which means that this file belongs to a package called main. All .go files need to belong to a package, they should also have `package something` in the opening line. -A package can be named whatever you wish. We have to call this `main` as this is the starting point of the program that is going to be in this package, this is a rule. (I need to understand more about this rule?) +A package can be named whatever you wish. We have to call this `main` as this is the starting point of the program that is going to be in this package, this is a rule. (I need to understand more about this rule?) ![](Images/Day9_Go3.png) -Whenever we want to compile and execute our code we have to tell the machine where the execution needs to start. We do this by writing a function called main. The machine will look for a function called main to find the entry point of the program. +Whenever we want to compile and execute our code we have to tell the machine where the execution needs to start. We do this by writing a function called main. The machine will look for a function called main to find the entry point of the program. -A function is a block of code that can do some specific task and can be used across the program. 
+A function is a block of code that can do some specific task and can be used across the program.

-You can declare a function with any name using `func` but in this case, we need to name it `main` as this is where the code starts.
+You can declare a function with any name using `func`, but in this case, we need to name it `main` as this is where the code starts.

![](Images/Day9_Go4.png)

@@ -54,25 +58,25 @@ Next, we are going to look at line 3 of our code, the import, this means you wan

![](Images/Day9_Go5.png)

-the `Println()` that we have here is a way in which to write standard output to the terminal where ever the executable has been executed successfully. Feel free to change the message in between the ().
+The `Println()` that we have here is a way to write standard output to the terminal wherever the executable has been executed successfully. Feel free to change the message in between the ().

![](Images/Day9_Go6.png)

### TLDR

-- **Line 1** = This file will be in the package called `main` and this needs to be called `main` because includes the entry point of the program.
-- **Line 3** = For us to use the `Println()` we have to import the fmt package to use this on line 6.
-- **Line 5** = The actual starting point, its the `main` function.
-- **Line 6** = This will let us print "Hello #90DaysOfDevOps" on our system.
+- **Line 1** = This file will be in the package called `main`, and it needs to be called `main` because it includes the entry point of the program.
+- **Line 3** = For us to use `Println()` we have to import the fmt package, which is used on line 6.
+- **Line 5** = The actual starting point, it's the `main` function.
+- **Line 6** = This will let us print "Hello #90DaysOfDevOps" on our system.
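To reinforce the point that only the entry point must be called `main`, here is a minimal sketch with a second, freely named function (`greet` is an invented name for illustration):

```go
package main

import "fmt"

// greet can be named anything we like; only the program's
// entry point has to be the function called main.
func greet(audience string) string {
	return "Hello " + audience
}

func main() {
	fmt.Println(greet("#90DaysOfDevOps")) // prints "Hello #90DaysOfDevOps"
}
```

Execution still starts at `main`; `greet` only runs because `main` calls it.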
## Resources - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) See you on [Day 10](day10.md). 
diff --git a/Days/day10.md b/Days/day10.md index d2c6b258e..900c45484 100644 --- a/Days/day10.md +++ b/Days/day10.md @@ -1,28 +1,32 @@ --- -title: '#90DaysOfDevOps - The Go Workspace - Day 10' +title: "#90DaysOfDevOps - The Go Workspace - Day 10" published: false description: 90DaysOfDevOps - The Go Workspace -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048701 --- -### The Go Workspace -On [Day 8](day08.md) we briefly covered the Go workspace to get Go up and running to get to the demo of `Hello #90DaysOfDevOps` But we should explain a little more about the Go workspace. -Remember we chose the defaults and we then went through and created our Go folder in the GOPATH that was already defined but in reality, this GOPATH can be changed to be wherever you want it to be. +### The Go Workspace -If you run +On [Day 8](day08.md) we briefly covered the Go workspace to get Go up and running to get to the demo of `Hello #90DaysOfDevOps` But we should explain a little more about the Go workspace. + +Remember we chose the defaults and we then went through and created our Go folder in the GOPATH that was already defined but in reality, this GOPATH can be changed to be wherever you want it to be. + +If you run ``` echo $GOPATH -``` -The output should be similar to mine (with a different username may be) which is: +``` + +The output should be similar to mine (with a different username may be) which is: ``` /home/michael/projects/go ``` -Then here, we created 3 directories. **src**, **pkg** and **bin** + +Then here, we created 3 directories. **src**, **pkg** and **bin** ![](Images/Day10_Go1.png) @@ -30,11 +34,11 @@ Then here, we created 3 directories. **src**, **pkg** and **bin** ![](Images/Day10_Go2.png) -**pkg** is where your archived files of packages that are or were installed in programs. This helps to speed up the compiling process based on if the packages being used have been modified. 
+**pkg** is where the archived files of packages that are or were used in your programs are stored. This helps to speed up the compiling process based on whether the packages being used have been modified. ![](Images/Day10_Go3.png) -**bin** is where all of your compiled binaries are stored. +**bin** is where all of your compiled binaries are stored. ![](Images/Day10_Go4.png) @@ -44,51 +48,52 @@ Our Hello #90DaysOfDevOps is not a complex program so here is an example of a mo This page also goes into some great detail about why and how the layout is like this it also goes a little deeper on other folders we have not mentioned [GoChronicles](https://gochronicles.com/project-structure/) -### Compiling & running code -On [Day 9](day09.md) we also covered a brief introduction to compiling code, but we can go a little deeper here. +### Compiling & running code + +On [Day 9](day09.md) we also covered a brief introduction to compiling code, but we can go a little deeper here. -To run our code we first must **compile** it. There are three ways to do this within Go. +To run our code we first must **compile** it. There are three ways to do this within Go. - go build - go install -- go run +- go run -Before we get to the above compile stage we need to take a look at what we get with the Go Installation. +Before we get to the above compile stage we need to take a look at what we get with the Go Installation. When we installed Go on Day 8 we installed something known as Go tools which consist of several programs that let us build and process our Go source files. One of the tools is `Go` -It is worth noting that you can install additional tools that are not in the standard Go installation. +It is worth noting that you can install additional tools that are not in the standard Go installation. -If you open your command prompt and type `go` you should see something like the image below and then you will see "Additional Help Topics" below that for now we don't need to worry about those.
+If you open your command prompt and type `go` you should see something like the image below; then you will see "Additional Help Topics" below that, but for now we don't need to worry about those. ![](Images/Day10_Go6.png) -You might also remember that we have already used at least two of these tools so far on Day 8. +You might also remember that we have already used at least two of these tools so far on Day 8. ![](Images/Day10_Go7.png) -The ones we want to learn more about are the build, install and run. +The ones we want to learn more about are the build, install and run. ![](Images/Day10_Go8.png) - `go run` - This command compiles and runs the main package comprised of the .go files specified on the command line. The command is compiled to a temporary folder. -- `go build` - To compile packages and dependencies, compile the package in the current directory. If the `main` package, will place the executable in the current directory if not then it will place the executable in the `pkg` folder. `go build` also enables you to build an executable file for any Go Supported OS platform. -- `go install` - The same as go build but will place the executable in the `bin` folder +- `go build` - Compiles packages and their dependencies, starting from the package in the current directory. If it is the `main` package, the executable is placed in the current directory; if not, the compiled package archive is placed in the `pkg` folder. `go build` also enables you to build an executable file for any Go Supported OS platform. +- `go install` - The same as go build but will place the executable in the `bin` folder -We have run through go build and go run but feel free to run through them again here if you wish, `go install` as stated above puts the executable in our bin folder. +We have run through `go build` and `go run` but feel free to run through them again here if you wish. `go install`, as stated above, puts the executable in our bin folder.
![](Images/Day10_Go9.png) -Hopefully, if you are following along you are watching one of the playlists or videos below, I am taking bits of all of these and translating these into my notes so that I can understand the foundational knowledge of the Golang language. The resources below are likely going to give you a much better understanding of a lot of the areas you need overall but I am trying to document the 7 days or 7 hours worth of the journey with interesting things that I have found. +Hopefully, if you are following along you are watching one of the playlists or videos below, I am taking bits of all of these and translating these into my notes so that I can understand the foundational knowledge of the Golang language. The resources below are likely going to give you a much better understanding of a lot of the areas you need overall but I am trying to document the 7 days or 7 hours worth of the journey with interesting things that I have found. ## Resources - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 
3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) See you on [Day 11](day11.md). diff --git a/Days/day11.md b/Days/day11.md index 1b850cf02..3d3f7af9a 100644 --- a/Days/day11.md +++ b/Days/day11.md @@ -1,36 +1,38 @@ --- -title: '#90DaysOfDevOps - Variables & Constants in Go - Day 11' +title: "#90DaysOfDevOps - Variables & Constants in Go - Day 11" published: false description: 90DaysOfDevOps - Variables & Constants in Go -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048862 --- -Before we get into the topics for today I want to give a massive shout out to [Techworld with Nana](https://www.youtube.com/watch?v=yyUHQIec83I) and this fantastic concise journey through the fundamentals of Go. +Before we get into the topics for today I want to give a massive shout out to [Techworld with Nana](https://www.youtube.com/watch?v=yyUHQIec83I) and this fantastic concise journey through the fundamentals of Go. On [Day8](day08.md) we set our environment up, on [Day9](day09.md) we walked through the Hello #90DaysOfDevOps code and on [Day10](day10.md)) we looked at our Go workspace and went a little deeper into compiling and running the code. -Today we are going to take a look into Variables, Constants and Data Types whilst writing a new program. +Today we are going to take a look into Variables, Constants and Data Types whilst writing a new program. ## Variables & Constants in Go -Let's start by planning our application, I think it would be a good idea to work on a program that tells us how many days we have remained in our #90DaysOfDevOps challenge. 
-The first thing to consider here is that as we are building our app and we are welcoming our attendees and we are giving the user feedback on the number of days they have completed we might use the term #90DaysOfDevOps many times throughout the program. This is a great use case to make #90DaysOfDevOps a variable within our program. +Let's start by planning our application, I think it would be a good idea to work on a program that tells us how many days we have remaining in our #90DaysOfDevOps challenge. -- Variables are used to store values. -- Like a little box with our saved information or values. -- We can then use this variable across the program which also benefits that if this challenge or variable changes then we only have to change this in one place. This means we could translate this to other challenges we have in the community by just changing that one variable value. +The first thing to consider here is that as we are building our app and we are welcoming our attendees and we are giving the user feedback on the number of days they have completed we might use the term #90DaysOfDevOps many times throughout the program. This is a great use case to make #90DaysOfDevOps a variable within our program. -To declare this in our Go Program we define a value by using a **keyword** for variables. This will live within our `func main` block of code that you will see later. You can find more about [Keywords](https://go.dev/ref/spec#Keywords)here. +- Variables are used to store values. +- Like a little box with our saved information or values. +- We can then use this variable across the program which also benefits that if this challenge or variable changes then we only have to change this in one place. This means we could translate this to other challenges we have in the community by just changing that one variable value. -Remember to make sure that your variable names are descriptive.
If you declare a variable you must use it or you will get an error, this is to avoid possible dead code, code that is never used. This is the same for packages not used. +To declare this in our Go Program we define a value by using a **keyword** for variables. This will live within our `func main` block of code that you will see later. You can find more about [Keywords](https://go.dev/ref/spec#Keywords) here. + +Remember to make sure that your variable names are descriptive. If you declare a variable you must use it or you will get an error, this is to avoid possible dead code, code that is never used. This is the same for packages not used. ``` var challenge = "#90DaysOfDevOps" ``` -With the above set and used as we will see in the next code snippet you can see from the output below that we have used a variable. + +With the above set and used as we will see in the next code snippet you can see from the output below that we have used a variable. ``` package main @@ -42,15 +44,16 @@ func main() { fmt.Println("Welcome to", challenge, "") } ``` + You can find the above code snippet in [day11_example1.go](Go/day11_example1.go) -You will then see from the below that we built our code with the above example and we got the output shown below. +You will then see from the below that we built our code with the above example and we got the output shown below. ![](Images/Day11_Go1.png) We also know that our challenge is 90 days at least for this challenge, but next, maybe it's 100 so we want to define a variable to help us here as well. However, for our program, we want to define this as a constant. Constants are like variables, except that their value cannot be changed within code (we can still create a new app later on down the line with this code and change this constant but this 90 will not change whilst we are running our application) -Adding the `const` to our code and adding another line of code to print this.
+Adding the `const` to our code and adding another line of code to print this. ``` package main @@ -65,15 +68,16 @@ func main() { fmt.Println("This is a", daystotal, "challenge") } ``` + You can find the above code snippet in [day11_example2.go](Go/day11_example2.go) If we then go through that `go build` process again and run you will see below the outcome. ![](Images/Day11_Go2.png) -Finally, and this won't be the end of our program we will come back to this in [Day12](day12.md) to add more functionality. We now want to add another variable for the number of days we have completed the challenge. +Finally, and this won't be the end of our program we will come back to this in [Day12](day12.md) to add more functionality. We now want to add another variable for the number of days we have completed the challenge. -Below I added the `dayscomplete` variable with the number of days completed. +Below I added the `dayscomplete` variable with the number of days completed. ``` package main @@ -90,17 +94,18 @@ func main() { fmt.Println("Great work") } ``` + You can find the above code snippet in [day11_example3.go](Go/day11_example3.go) Let's run through that `go build` process again or you could just use `go run` ![](Images/Day11_Go3.png) -Here are some other examples that I have used to make the code easier to read and edit. We have up till now been using `Println` but we can simplify this by using `Printf` by using `%v` which means we define our variables in order at the end of the line of code. we also use `\n` for a line break. +Here are some other examples that I have used to make the code easier to read and edit. We have up till now been using `Println` but we can simplify this by using `Printf` by using `%v` which means we define our variables in order at the end of the line of code. we also use `\n` for a line break. 
I am using `%v` as this uses a default value but there are other options that can be found here in the [fmt package documentation](https://pkg.go.dev/fmt) you can find the code example [day11_example4.go](Go/day11_example4.go) -Variables may also be defined in a simpler format in your code. Instead of defining that it is a `var` and the `type` you can code this as follows to get the same functionality but a nice cleaner and simpler look for your code. This will only work for variables though and not constants. +Variables may also be defined in a simpler format in your code. Instead of defining that it is a `var` and the `type` you can code this as follows to get the same functionality but a nice cleaner and simpler look for your code. This will only work for variables though and not constants. ``` func main() { @@ -108,14 +113,15 @@ func main() { const daystotal = 90 ``` -## Data Types -In the above examples, we have not defined the type of variables, this is because we can give it a value here and Go is smart enough to know what that type is or at least can infer what it is based on the value you have stored. However, if we want a user to input this will require a specific type. +## Data Types + +In the above examples, we have not defined the type of variables, this is because we can give it a value here and Go is smart enough to know what that type is or at least can infer what it is based on the value you have stored. However, if we want a user to input this will require a specific type. -We have used Strings and Integers in our code so far. Integers for the number of days and strings are for the name of the challenge. +We have used Strings and Integers in our code so far. Integers for the number of days and strings are for the name of the challenge. -It is also important to note that each data type can do different things and behaves differently. For example, integers can multiply where strings do not. 
+It is also important to note that each data type can do different things and behaves differently. For example, integers can multiply where strings do not. -There are four categories +There are four categories - **Basic type**: Numbers, strings, and booleans come under this category. - **Aggregate type**: Array and structs come under this category. @@ -134,21 +140,22 @@ Go has three basic data types: I found this resource super detailed on data types [Golang by example](https://golangbyexample.com/all-data-types-in-golang-with-examples/) -I would also suggest [Techworld with Nana](https://www.youtube.com/watch?v=yyUHQIec83I&t=2023s) at this point covers in detail a lot about the data types in Go. +I would also suggest [Techworld with Nana](https://www.youtube.com/watch?v=yyUHQIec83I&t=2023s) at this point covers in detail a lot about the data types in Go. -If we need to define a type in our variable we can do this like so: +If we need to define a type in our variable we can do this like so: ``` -var TwitterHandle string +var TwitterHandle string var DaysCompleted uint ``` -Because Go implies variables where a value is given we can print out those values with the following: +Because Go implies variables where a value is given we can print out those values with the following: ``` fmt.Printf("challenge is %T, daystotal is %T, dayscomplete is %T\n", conference, daystotal, dayscomplete) ``` -There are many different types of integer and float types the links above will cover these in detail. + +There are many different types of integer and float types the links above will cover these in detail. 
- **int** = whole numbers - **unint** = positive whole numbers @@ -158,12 +165,12 @@ There are many different types of integer and float types the links above will c - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) -Next up we are going to start adding some user input functionality to our program so that we are asked how many days have been completed. +Next up we are going to start adding some user input functionality to our program so that we are asked how many days have been completed. See you on [Day 12](day12.md). 
diff --git a/Days/day12.md b/Days/day12.md index c146843db..d22d860c5 100644 --- a/Days/day12.md +++ b/Days/day12.md @@ -1,57 +1,60 @@ --- -title: '#90DaysOfDevOps - Getting user input with Pointers and a finished program - Day 12' +title: "#90DaysOfDevOps - Getting user input with Pointers and a finished program - Day 12" published: false description: 90DaysOfDevOps - Getting user input with Pointers and a finished program -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048864 --- + ## Getting user input with Pointers and a finished program -Yesterday ([Day 11](day11.md)), we created our first Go program that was self-contained and the parts we wanted to get user input for were created as variables within our code and given values, we now want to ask the user for their input to give the variable the value for the end message. +Yesterday ([Day 11](day11.md)), we created our first Go program that was self-contained and the parts we wanted to get user input for were created as variables within our code and given values, we now want to ask the user for their input to give the variable the value for the end message. ## Getting user input -Before we do that let's take a look at our application again and walk through the variables we want as a test before getting that user input. +Before we do that let's take a look at our application again and walk through the variables we want as a test before getting that user input. -Yesterday we finished up with our code looking like this [day11_example4.go](Go/day11_example4.go) we have manually in code defined our `challenge, daystotal, dayscomplete` variables and constants. +Yesterday we finished up with our code looking like this [day11_example4.go](Go/day11_example4.go) we have manually in code defined our `challenge, daystotal, dayscomplete` variables and constants. 
-Let's now add a new variable called `TwitterName` you can find this new code at [day12_example1.go](Go/day12_example1.go) and if we run this code this is our output. +Let's now add a new variable called `TwitterName` you can find this new code at [day12_example1.go](Go/day12_example1.go) and if we run this code this is our output. ![](Images/Day12_Go1.png) -We are on day 12 and we would need to change that `dayscomplete` every day and compile our code each day if this was hardcoded which doesn't sound so great. +We are on day 12 and we would need to change that `dayscomplete` every day and compile our code each day if this was hardcoded which doesn't sound so great. -Getting user input, we want to get the value of maybe a name and the number of days completed. For us to do this we can use another function from within the `fmt` package. +Getting user input, we want to get the value of maybe a name and the number of days completed. For us to do this we can use another function from within the `fmt` package. Recap on the `fmt` package, different functions for formatted input and output (I/O) -- Print Messages -- Collect User Input -- Write into a file +- Print Messages +- Collect User Input +- Write into a file -This is instead of assigning the value of a variable we want to ask the user for their input. +This is instead of assigning the value of a variable we want to ask the user for their input. ``` fmt.Scan(&TwitterName) ``` -Notice that we also use `&` before the variable. This is known as a pointer which we will cover in the next section. + +Notice that we also use `&` before the variable. This is known as a pointer which we will cover in the next section. In our code [day12_example2.go](Go/day12_example2.go) you can see that we are asking the user to input two variables, `TwitterName` and `DaysCompleted` -Let's now run our program and you see we have input for both of the above. +Let's now run our program and you see we have input for both of the above. 
![](Images/Day12_Go2.png) Ok, that's great we got some user input and we printed a message but what about getting our program to tell us how many days we have left in our challenge. -For us to do that we have created a variable called `remainingDays` and we have hard valued this in our code as `90` we then need to change the value of this value to print out the remaining days when we get our user input of `DaysCompleted` we can do this with this simple variable change. +For us to do that we have created a variable called `remainingDays` and we have hard valued this in our code as `90` we then need to change the value of this value to print out the remaining days when we get our user input of `DaysCompleted` we can do this with this simple variable change. ``` remainingDays = remainingDays - DaysCompleted ``` -You can see how our finished program looks here [day12_example2.go](Go/day12_example3.go). + +You can see how our finished program looks here [day12_example2.go](Go/day12_example3.go). If we now run this program you can see that simple calculation is made based on the user input and the value of the `remainingDays` @@ -59,13 +62,13 @@ If we now run this program you can see that simple calculation is made based on ## What is a pointer? (Special Variables) -A pointer is a (special) variable that points to the memory address of another variable. +A pointer is a (special) variable that points to the memory address of another variable. A great explanation of this can be found here at [geeksforgeeks](https://www.geeksforgeeks.org/pointers-in-golang/) -Let's simplify our code now and show with and without the `&` in front of one of our print commands, this gives us the memory address of the pointer. I have added this code example here. [day12_example4.go](Go/day12_example4.go) +Let's simplify our code now and show with and without the `&` in front of one of our print commands, this gives us the memory address of the pointer. I have added this code example here. 
[day12_example4.go](Go/day12_example4.go) -Below is running this code. +Below is running this code. ![](Images/Day12_Go4.png) @@ -73,10 +76,10 @@ Below is running this code. - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) See you on [Day 13](day13.md). 
diff --git a/Days/day13.md b/Days/day13.md index 0e1aea8c8..2e286a046 100644 --- a/Days/day13.md +++ b/Days/day13.md @@ -1,60 +1,63 @@ --- -title: '#90DaysOfDevOps - Tweet your progress with our new App - Day 13' +title: "#90DaysOfDevOps - Tweet your progress with our new App - Day 13" published: false description: 90DaysOfDevOps - Tweet your progress with our new App -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048865 --- + ## Tweet your progress with our new App -On the final day of looking into this programming language, we have only just touched the surface here of the language but it is at that start that I think we need to get interested and excited and want to dive more into it. +On the final day of looking into this programming language, we have only just touched the surface here of the language but it is at that start that I think we need to get interested and excited and want to dive more into it. + +Over the last few days, we have taken a small idea for an application and we have added functionality to it, in this session I want to take advantage of those packages we mentioned and create the functionality for our app to not only give you the update of your progress on screen but also send a tweet with the details of the challenge and your status. -Over the last few days, we have taken a small idea for an application and we have added functionality to it, in this session I want to take advantage of those packages we mentioned and create the functionality for our app to not only give you the update of your progress on screen but also send a tweet with the details of the challenge and your status. +## Adding the ability to tweet your progress -## Adding the ability to tweet your progress -The first thing we need to do is set up our developer API access with Twitter for this to work. 
+The first thing we need to do is set up our developer API access with Twitter for this to work. -Head to the [Twitter Developer Platform](https://developer.twitter.com) and sign in with your Twitter handle and details. Once in you should see something like the below without the app that I already have created. +Head to the [Twitter Developer Platform](https://developer.twitter.com) and sign in with your Twitter handle and details. Once in you should see something like the below without the app that I already have created. ![](Images/Day13_Go1.png) -From here you may also want to request elevated access, this might take some time but it was very fast for me. +From here you may also want to request elevated access, this might take some time but it was very fast for me. -Next, we should select Projects & Apps and create our App. Limits are depending on the account access you have, with essential you only have one app and one project and with elevated you can have 3 apps. +Next, we should select Projects & Apps and create our App. Limits are depending on the account access you have, with essential you only have one app and one project and with elevated you can have 3 apps. ![](Images/Day13_Go2.png) -Give your application a name +Give your application a name ![](Images/Day13_Go3.png) -You will be then given these API tokens, you must save these somewhere secure. (I have since deleted this app) We will need these later with our Go Application. +You will be then given these API tokens, you must save these somewhere secure. (I have since deleted this app) We will need these later with our Go Application. 
![](Images/Day13_Go4.png) -Now we have our app created,(I did have to change my app name as the one in the screenshot above was already taken, these names need to be unique) +Now we have our app created,(I did have to change my app name as the one in the screenshot above was already taken, these names need to be unique) ![](Images/Day13_Go5.png) -The keys that we gathered before are known as our consumer keys and we will also need our access token and secrets. We can gather this information using the "Keys & Tokens" tab. +The keys that we gathered before are known as our consumer keys and we will also need our access token and secrets. We can gather this information using the "Keys & Tokens" tab. ![](Images/Day13_Go6.png) -Ok, we are done in the Twitter developer portal for now. Make sure you keep your keys safe because we will need them later. +Ok, we are done in the Twitter developer portal for now. Make sure you keep your keys safe because we will need them later. -## Go Twitter Bot +## Go Twitter Bot -Remember the code we are starting within our application as well [day13_example1](Go/day13_example1.go) but first, we need to check we have the correct code to make something tweet +Remember the code we are starting within our application as well [day13_example1](Go/day13_example1.go) but first, we need to check we have the correct code to make something tweet -We now need to think about the code to get our output or message to Twitter in the form of a tweet. We are going to be using [go-twitter](https://github.com/dghubble/go-twitter) This is a Go client library for the Twitter API. +We now need to think about the code to get our output or message to Twitter in the form of a tweet. We are going to be using [go-twitter](https://github.com/dghubble/go-twitter) This is a Go client library for the Twitter API. 
-To test this before putting this into our main application, I created a new directory in our `src` folder called go-twitter-bot, issued the `go mod init github.com/michaelcade/go-Twitter-bot` on the folder which then created a `go.mod` file and then we can start writing our new main.go and test this out. +To test this before putting this into our main application, I created a new directory in our `src` folder called go-twitter-bot, issued the `go mod init github.com/michaelcade/go-Twitter-bot` on the folder which then created a `go.mod` file and then we can start writing our new main.go and test this out. -We now need those keys, tokens and secrets we gathered from the Twitter developer portal. We are going to set these in our environment variables. This will depend on the OS you are running: +We now need those keys, tokens and secrets we gathered from the Twitter developer portal. We are going to set these in our environment variables. This will depend on the OS you are running: Windows + ``` set CONSUMER_KEY set CONSUMER_SECRET @@ -63,17 +66,19 @@ set ACCESS_TOKEN_SECRET ``` Linux / macOS + ``` export CONSUMER_KEY export CONSUMER_SECRET export ACCESS_TOKEN export ACCESS_TOKEN_SECRET ``` -At this stage, you can take a look at [day13_example2](Go/day13_example2.go) at the code but you will see here that we are using a struct to define our keys, secrets and tokens. -We then have a `func` to parse those credentials and make that connection to the Twitter API +At this stage, you can take a look at [day13_example2](Go/day13_example2.go) at the code but you will see here that we are using a struct to define our keys, secrets and tokens. -Then based on the success we will then send a tweet. +We then have a `func` to parse those credentials and make that connection to the Twitter API + +Then based on the success we will then send a tweet. 
```
package main
@@ -152,13 +157,14 @@ func main() {
}
```

-The above will either give you an error based on what is happening or it will succeed and you will have a tweet sent with the message outlined in the code.
-## Pairing the two together - Go-Twitter-Bot + Our App
+The above will either give you an error based on what is happening, or it will succeed and you will have a tweet sent with the message outlined in the code.
+
+## Pairing the two together - Go-Twitter-Bot + Our App

-Now we need to merge these two in our `main.go` I am sure someone out there is screaming that there is a better way of doing this and please comment on this as you can have more than one `.go` file in a project it might make sense but this works.
+Now we need to merge these two in our `main.go`. I am sure someone out there is screaming that there is a better way of doing this, so please comment; you can have more than one `.go` file in a project and that might make sense, but this works.

-You can see the merged codebase [day13_example3](Go/day13_example3.go) but I will also show it below.
+You can see the merged codebase in [day13_example3](Go/day13_example3.go), but I will also show it below.

```
package main
@@ -261,26 +267,28 @@ func main() {
}
```

-The outcome of this should be a tweet but if you did not supply your environment variables then you should get an error like the one below.
+
+The outcome of this should be a tweet, but if you did not supply your environment variables then you should get an error like the one below.

![](Images/Day13_Go7.png)

-Once you have fixed that or if you choose not to authenticate with Twitter then you can use the code we finished with yesterday. The terminal output on success will look similar to this:
+Once you have fixed that, or if you choose not to authenticate with Twitter, then you can use the code we finished with yesterday.
The terminal output on success will look similar to this: ![](Images/Day13_Go8.png) -The resulting tweet should look something like this: +The resulting tweet should look something like this: ![](Images/Day13_Go9.png) ## How to compile for multiple OSs -I next want to cover the question, "How do you compile for multiple Operating Systems?" The great thing about Go is that it can easily compile for many different Operating Systems. You can get a full list by running the following command: +I next want to cover the question, "How do you compile for multiple Operating Systems?" The great thing about Go is that it can easily compile for many different Operating Systems. You can get a full list by running the following command: ``` go tool dist list ``` -Using our `go build` commands so far is great and it will use the `GOOS` and `GOARCH` environment variables to determine the host machine and what the build should be built for. But we can also create other binaries by using the code below as an example. + +Using our `go build` commands so far is great and it will use the `GOOS` and `GOARCH` environment variables to determine the host machine and what the build should be built for. But we can also create other binaries by using the code below as an example. 
``` GOARCH=amd64 GOOS=darwin go build -o ${BINARY_NAME}_0.1_darwin main.go @@ -298,18 +306,18 @@ This is what I have used to create the releases you can now see on the [reposito - [StackOverflow 2021 Developer Survey](https://insights.stackoverflow.com/survey/2021) - [Why we are choosing Golang to learn](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) -- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) -- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) -- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) -- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) -- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) +- [Jake Wright - Learn Go in 12 minutes](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) +- [Techworld with Nana - Golang full course - 3 hours 24 mins](https://www.youtube.com/watch?v=yyUHQIec83I) +- [**NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins](https://www.pluralsight.com/courses/go-fundamentals) +- [FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) +- [Hitesh Choudhary - Complete playlist](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) - [A great repo full of all things DevOps & exercises](https://github.com/bregman-arie/devops-exercises) - [GoByExample - Example based learning](https://gobyexample.com/) - [go.dev/tour/list](https://go.dev/tour/list) - [go.dev/learn](https://go.dev/learn/) -This wraps up the Programming language for 7 days! 
So much more that can be covered and I hope you have been able to continue through the content above and be able to understand some of the other aspects of the Go programming language. +This wraps up the Programming language for 7 days! So much more that can be covered and I hope you have been able to continue through the content above and be able to understand some of the other aspects of the Go programming language. -Next, we take our focus into Linux and some of the fundamentals that we should all know there. +Next, we take our focus into Linux and some of the fundamentals that we should all know there. See you on [Day 14](day14.md). diff --git a/Days/day14.md b/Days/day14.md index 224099fa9..e25f71f7c 100644 --- a/Days/day14.md +++ b/Days/day14.md @@ -1,66 +1,56 @@ --- -title: '#90DaysOfDevOps - The Big Picture: DevOps and Linux - Day 14' +title: "#90DaysOfDevOps - The Big Picture: DevOps and Linux - Day 14" published: false description: 90DaysOfDevOps - The Big Picture DevOps and Linux -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049033 --- + ## The Big Picture: DevOps and Linux + Linux and DevOps share very similar cultures and perspectives; both are focused on customization and scalability. Both of these aspects of Linux are of particular importance for DevOps. A lot of technologies start on Linux, especially if they are related to software development or managing infrastructure. As well lots of open source projects, especially DevOps tools, were designed to run on Linux from the start. -From a DevOps perspective or any operations role perspective, you are going to come across Linux I would say mostly. There is a place for WinOps but the majority of the time you are going to be administering and deploying Linux servers. - -I have been using Linux daily for several years but my go to desktop machine has always been either macOS or Windows. 
However, when I moved into the Cloud Native role I am in now I took the plunge to make sure that my laptop was fully Linux based and my daily driver, whilst I still needed Windows for work-based applications and a lot of my audio and video gear does not run on Linux I was forcing myself to run a Linux desktop full time to get a better grasp of a lot of the things we are going to touch on over the next 7 days.
-
-## Getting Started
-I am not suggesting you do the same as me by any stretch as there are easier options which are less destructive but I will say that taking that full-time step forces you to learn faster how to make things work on Linux.
-
-For the majority of these 7 days, I am going to deploy a Virtual Machine in Virtual Box on my Windows machine. I am also going to deploy a desktop version of a Linux distribution, whereas a lot of the Linux servers you will be administering will likely be servers that come with no GUI and everything is shell-based. However, as I said at the start a lot of the tools that we covered throughout this whole 90 days started on Linux I would also strongly encourage you to dive into running that Linux Desktop for that learning experience as well.
-
-
-For the rest of this post, we are going to concentrate on getting a Ubuntu Desktop virtual machine up and running in our Virtual Box environment. Now we could just download [Virtual Box](https://www.virtualbox.org/) and grab the latest [Ubuntu ISO](https://ubuntu.com/download) from the sites linked and go ahead and build out our desktop environment but that wouldn't be very DevOps of us, would it?
-
+From a DevOps perspective, or any operations role perspective, you are mostly going to come across Linux, I would say. There is a place for WinOps, but the majority of the time you are going to be administering and deploying Linux servers.

-Another good reason to use most Linux distributions is that they are free and open-source.
We are also choosing Ubuntu as it is probably the most widely used distribution deployed not thinking about mobile devices and enterprise RedHat Enterprise servers. I might be wrong there but with CentOS and the history there I bet Ubuntu is high on the list and it's super simple.

+I have been using Linux daily for several years but my go-to desktop machine has always been either macOS or Windows. However, when I moved into the Cloud Native role I am in now, I took the plunge to make sure that my laptop was fully Linux based and my daily driver. Whilst I still needed Windows for work-based applications, and a lot of my audio and video gear does not run on Linux, I was forcing myself to run a Linux desktop full time to get a better grasp of a lot of the things we are going to touch on over the next 7 days.

## Getting Started

-## Introducing HashiCorp Vagrant
+I am not suggesting you do the same as me by any stretch, as there are easier options which are less destructive, but I will say that taking that full-time step forces you to learn faster how to make things work on Linux.

+For the majority of these 7 days, I am going to deploy a Virtual Machine in Virtual Box on my Windows machine. I am also going to deploy a desktop version of a Linux distribution, whereas a lot of the Linux servers you will be administering will likely be servers that come with no GUI, where everything is shell-based. However, as I said at the start, a lot of the tools that we cover throughout these whole 90 days started on Linux, so I would also strongly encourage you to dive into running that Linux Desktop for the learning experience as well.

-Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use vagrant to spin up and down virtual machines across many different platforms including vSphere, Hyper-v, Virtual Box and also Docker. It does have other providers but we will stick with Virtual Box here so we are good to go.
+For the rest of this post, we are going to concentrate on getting an Ubuntu Desktop virtual machine up and running in our Virtual Box environment. Now, we could just download [Virtual Box](https://www.virtualbox.org/) and grab the latest [Ubuntu ISO](https://ubuntu.com/download) from the sites linked and go ahead and build out our desktop environment, but that wouldn't be very DevOps of us, would it?

+Another good reason to use most Linux distributions is that they are free and open-source. We are also choosing Ubuntu as it is probably the most widely used distribution deployed, not counting mobile devices and enterprise Red Hat Enterprise Linux servers. I might be wrong there, but with CentOS and the history there, I bet Ubuntu is high on the list, and it's super simple.

## Introducing HashiCorp Vagrant

+Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use Vagrant to spin up and tear down virtual machines across many different platforms including vSphere, Hyper-V, Virtual Box and also Docker. It does have other providers, but we will stick with Virtual Box here, so we are good to go.

-The first thing we need to do is get Vagrant installed on our machine, when you go to the downloads page you will see all the operating systems listed for your choice. [HashiCorp Vagrant](https://www.vagrantup.com/downloads) I am using Windows so I grabbed the binary for my system and went ahead and installed this on my system.
+The first thing we need to do is get Vagrant installed on our machine. When you go to the downloads page, you will see all the operating systems listed for your choice.
[HashiCorp Vagrant](https://www.vagrantup.com/downloads) I am using Windows, so I grabbed the binary for my system and went ahead and installed it.

+Next up, we also need to get [Virtual Box](https://www.virtualbox.org/wiki/Downloads) installed. Again, this can also be installed on many different operating systems, and a good reason to choose this and Vagrant is that whether you are running Windows, macOS, or Linux, we have you covered here.

Both installations are pretty straightforward and both have great communities around them so feel free to reach out if you have issues and I can try and assist too.

-
## Our first VAGRANTFILE

+The VAGRANTFILE describes the type of machine we want to deploy. It also defines the configuration and provisioning for this machine.

-The VAGRANTFILE describes the type of machine we want to deploy. It also defines the configuration and provisioning for this machine.
-
-When it comes to saving these and organizing your VAGRANTFILEs I tend to put them in their folders in my workspace. You can see below how this looks on my system. Hopefully following this you will play around with Vagrant and see the ease of spinning up different systems, it is also great for that rabbit hole known as distro hopping for Linux Desktops.
-
+When it comes to saving these and organizing your VAGRANTFILEs, I tend to put them in their own folders in my workspace. You can see below how this looks on my system. Hopefully, following this, you will play around with Vagrant and see the ease of spinning up different systems; it is also great for that rabbit hole known as distro hopping for Linux Desktops.

![](Images/Day14_Linux1.png)

+Let's take a look at that VAGRANTFILE and see what we are building.

-Let's take a look at that VAGRANTFILE and see what we are building.
-
-
```
Vagrant.configure("2") do |config|
@@ -80,11 +70,9 @@ end
```

-This is a very simple VAGRANTFILE overall.
We are saying that we want a specific "box", a box being possibly either a public image or private build of the system you are looking for. You can find a long list of "boxes" publicly available here in the [public catalogue of Vagrant boxes](https://app.vagrantup.com/boxes/search)
-
-
-Next line we're saying that we want to use a specific provider and in this case it's `VirtualBox`. We also define our machine's memory to `8GB` and the number of CPUs to `4`. My experience tells me that you may want to also add the following line if you experience display issues. This will set the video memory to what you want, I would ramp this right up to `128MB` but it depends on your system.

+This is a very simple VAGRANTFILE overall. We are saying that we want a specific "box", a box being either a public image or a private build of the system you are looking for. You can find a long list of "boxes" publicly available in the [public catalogue of Vagrant boxes](https://app.vagrantup.com/boxes/search).

+On the next line, we're saying that we want to use a specific provider, in this case `VirtualBox`. We also set our machine's memory to `8GB` and the number of CPUs to `4`. My experience tells me that you may also want to add the following line if you experience display issues. This will set the video memory to what you want; I would ramp this right up to `128MB`, but it depends on your system.

```
@@ -92,53 +80,41 @@ v.customize ["modifyvm", :id, "--vram", ""]
```

-I have also placed a copy of this specific vagrant file in the [Linux Folder](Linux/VAGRANTFILE)
-
+I have also placed a copy of this specific vagrant file in the [Linux Folder](Linux/VAGRANTFILE)

## Provisioning our Linux Desktop
-
-We are now ready to get our first machine up and running, in our workstation's terminal. In my case I am using PowerShell on my Windows machine. Navigate to your projects folder and where you will find your VAGRANTFILE.
Once there you can type the command `vagrant up` and if everything's allright you will see something like this.
-
+We are now ready to get our first machine up and running from our workstation's terminal. In my case, I am using PowerShell on my Windows machine. Navigate to your projects folder, where you will find your VAGRANTFILE. Once there, you can type the command `vagrant up` and, if everything's alright, you will see something like this.

![](Images/Day14_Linux2.png)
-
Another thing to add here is that the network will be set to `NAT` on your virtual machine. At this stage we don't need to know about NAT and I plan to have a whole session talking about it in the Networking session. Know that it is the easy button when it comes to getting a machine on your home network, it is also the default networking mode on Virtual Box. You can find out more in the [Virtual Box documentation](https://www.virtualbox.org/manual/ch06.html#network_nat)
-
-Once `vagrant up` is complete we can now use `vagrant ssh` to jump straight into the terminal of our new VM.
-
+Once `vagrant up` is complete, we can now use `vagrant ssh` to jump straight into the terminal of our new VM.

![](Images/Day14_Linux3.png)

+This is where we will do most of our exploring over the next few days, but I also want to dive into some customizations for your developer workstation that I have done; they make your life much simpler when running this as your daily driver. And of course, are you really in DevOps unless you have a cool nonstandard terminal?

-This is where we will do most of our exploring over the next few days but I also want to dive into some customizations for your developer workstation that I have done and it makes your life much simpler when running this as your daily driver, and of course, are you really in DevOps unless you have a cool nonstandard terminal?
-
-
-But just to confirm in Virtual Box you should see the login prompt when you select your VM.
- +But just to confirm in Virtual Box you should see the login prompt when you select your VM. ![](Images/Day14_Linux4.png) +Oh and if you made it this far and you have been asking "WHAT IS THE USERNAME & PASSWORD?" -Oh and if you made it this far and you have been asking "WHAT IS THE USERNAME & PASSWORD?" - - -- Username = vagrant - -- Password = vagrant +- Username = vagrant +- Password = vagrant -Tomorrow we are going to get into some of the commands and what they do, The terminal is going to be the place to make everything happen. +Tomorrow we are going to get into some of the commands and what they do, The terminal is going to be the place to make everything happen. -## Resources +## Resources - [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70) - [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE) -There are going to be lots of resources I find as we go through and much like the Go resources I am generally going to be keeping them to FREE content so we can all partake and learn here. +There are going to be lots of resources I find as we go through and much like the Go resources I am generally going to be keeping them to FREE content so we can all partake and learn here. -As I mentioned next up we will take a look at the commands we might be using on a daily whilst in our Linux environments. +As I mentioned next up we will take a look at the commands we might be using on a daily whilst in our Linux environments. 
See you on [Day15](day15.md) diff --git a/Days/day15.md b/Days/day15.md index 6f22b6126..3f0791b74 100644 --- a/Days/day15.md +++ b/Days/day15.md @@ -1,40 +1,41 @@ --- -title: '#90DaysOfDevOps - Linux Commands for DevOps (Actually everyone) - Day 15' +title: "#90DaysOfDevOps - Linux Commands for DevOps (Actually everyone) - Day 15" published: false description: 90DaysOfDevOps - Linux Commands for DevOps (Actually everyone) -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048834 --- + ## Linux Commands for DevOps (Actually everyone) -I mentioned [yesterday](day14.md) that we are going to be spending a lot of time in the terminal with some commands to get stuff done. +I mentioned [yesterday](day14.md) that we are going to be spending a lot of time in the terminal with some commands to get stuff done. I also mentioned that with our vagrant provisioned VM we can use `vagrant ssh` and gain access to our box. You will need to be in the same directory as we provisioned it from. -For SSH you won't need the username and password, you will only need that if you decide to log in to the Virtual Box console. +For SSH you won't need the username and password, you will only need that if you decide to log in to the Virtual Box console. -This is where we want to be as per below: +This is where we want to be as per below: ![](Images/Day15_Linux1.png) -## Commands +## Commands -I cannot cover all the commands here, there are pages and pages of documentation that cover these but also if you are ever in your terminal and you just need to understand options to a specific command we have the `man` pages short for manual. We can use this to go through each of the commands we touch on during this post to find out more options for each one. We can run `man man` which will give you the help for manual pages. To escape the man pages you should press `q` for quit. 
+I cannot cover all the commands here; there are pages and pages of documentation that cover these. Also, if you are ever in your terminal and you just need to understand the options for a specific command, we have the `man` pages, short for manual. We can use this to go through each of the commands we touch on during this post to find out more options for each one. We can run `man man`, which will give you the help for manual pages. To escape the man pages you should press `q` for quit.

![](Images/Day15_Linux2.png)

![](Images/Day15_Linux3.png)

-`sudo` If you are familiar with Windows and the right click `run as administrator` we can think of `sudo` as very much this. When you run a command with this command you will be running it as `root` it will prompt you for the password before running the command.
+`sudo` If you are familiar with Windows and the right click `run as administrator`, we can think of `sudo` as very much this. When you run a command with `sudo` you will be running it as `root`; it will prompt you for the password before running the command.

![](Images/Day15_Linux4.png)

-For one off jobs like installing applications or services, you might need that `sudo command` but what if you have several tasks to deal with and you want to live as `sudo` for a while? This is where you can use `sudo su` again the same as `sudo` once entered you will be prompted for your `root` password. In a test VM like ours, this is fine but I would find it very hard for us to be rolling around as `root` for prolonged periods, bad things can happen. To get out of this elevated position you simply type in `exit`
+For one-off jobs like installing applications or services, you might need that `sudo` command, but what if you have several tasks to deal with and you want to live as `sudo` for a while? This is where you can use `sudo su`; again the same as `sudo`, once entered you will be prompted for your `root` password.
In a test VM like ours, this is fine but I would find it very hard for us to be rolling around as `root` for prolonged periods, bad things can happen. To get out of this elevated position you simply type in `exit`

![](Images/Day15_Linux5.png)

-I find myself using `clear` all the time, the `clear` command does exactly what it says it is going to clear the screen of all previous commands, putting your prompt to the top and giving you a nice clean workspace. Windows I think is `cls` in the .mdprompt.
+I find myself using `clear` all the time; the `clear` command does exactly what it says: it is going to clear the screen of all previous commands, putting your prompt at the top and giving you a nice clean workspace. On Windows, I think the equivalent is `cls` in the command prompt.

![](Images/Day15_Linux6.png)

@@ -50,7 +51,7 @@ With `cd` this allows us to change the directory, so for us to move into our new

![](Images/Day15_Linux9.png)

-I am sure we have all done it where we have navigated to the depths of our file system to a directory and not known where we are. `pwd` gives us the printout of the working directory, pwd as much as it looks like password it stands for print working directory.
+I am sure we have all done it where we have navigated to the depths of our file system to a directory and not known where we are. `pwd` prints the working directory; as much as it looks like password, pwd stands for print working directory.

![](Images/Day15_Linux10.png)

@@ -58,35 +59,35 @@ We know how to create folders and directories but how do we create files? We can

![](Images/Day15_Linux11.png)

-`ls` I can put my house on this, you will use this command so many times, this is going to list all the files and folders in the current directory. Let's see if we can see that file we just created.
+`ls` I can put my house on this: you will use this command so many times. It is going to list all the files and folders in the current directory.
Let's see if we can see that file we just created. ![](Images/Day15_Linux12.png) -How can we find files on our Linux system? `locate` is going to allow us to search our file system. If we use `locate Day15` it will report back the location of the file. The bonus round is that if you know that the file does exist but you get a blank result then run `sudo updatedb` which will index all the files in the file system then run your `locate` again. If you do not have `locate` available to you, you can install it using this command `sudo apt install mlocate` +How can we find files on our Linux system? `locate` is going to allow us to search our file system. If we use `locate Day15` it will report back the location of the file. The bonus round is that if you know that the file does exist but you get a blank result then run `sudo updatedb` which will index all the files in the file system then run your `locate` again. If you do not have `locate` available to you, you can install it using this command `sudo apt install mlocate` ![](Images/Day15_Linux13.png) -What about moving files from one location to another? `mv` is going to allow you to move your files. Example `mv Day15 90DaysOfDevOps` will move your file to the 90DaysOfDevOps folder. +What about moving files from one location to another? `mv` is going to allow you to move your files. Example `mv Day15 90DaysOfDevOps` will move your file to the 90DaysOfDevOps folder. ![](Images/Day15_Linux14.png) -We have moved our file but what if we want to rename it now to something else? We can do that using the `mv` command again... WOT!!!? yep we can simply use `mv Day15 day15` to change to upper case or we could use `mv day15 AnotherDay` to change it altogether, now use `ls` to check the file. +We have moved our file but what if we want to rename it now to something else? We can do that using the `mv` command again... WOT!!!? 
Yep, we can simply use `mv Day15 day15` to change the case, or we could use `mv day15 AnotherDay` to change the name altogether; now use `ls` to check the file.

![](Images/Day15_Linux15.png)

-Enough is enough, let's now get rid (delete)of our file and maybe even our directory if we have one created. `rm` simply `rm AnotherDay` will remove our file. We will also use quite a bit `rm -R` which will recursively work through a folder or location. We might also use `rm -R -f` to force the removal of all of those files. Spoiler if you run `rm -R -f /` add sudo to it and you can say goodbye to your system....!
+Enough is enough, let's now get rid of (delete) our file and maybe even our directory if we have one created. With `rm`, simply `rm AnotherDay` will remove our file. We will also use `rm -R` quite a bit, which will recursively work through a folder or location. We might also use `rm -R -f` to force the removal of all of those files. Spoiler: if you run `rm -R -f /` with sudo added to it, you can say goodbye to your system....!

![](Images/Day15_Linux16.png)

-We have looked at moving files around but what if I just want to copy files from one folder to another, simply put its very similar to the `mv` command but we use `cp` so we can now say `cp Day15 Desktop`
+We have looked at moving files around, but what if I just want to copy files from one folder to another? Simply put, it's very similar to the `mv` command, but we use `cp`, so we can now say `cp Day15 Desktop`

![](Images/Day15_Linux17.png)

-We have created folders and files but we haven't put any contents into our folder, we can add contents a few ways but an easy way is `echo` we can also use `echo` to print out a lot of things in our terminal, I use echo a lot to print out system variables to know if they are set or not at least. we can use `echo "Hello #90DaysOfDevOps" > Day15` and this will add this to our file. We can also append to our file using `echo "Commands are fun!"
>> Day15`

+We have created folders and files, but we haven't put any contents into our folder. We can add contents in a few ways, but an easy way is `echo`; we can also use `echo` to print out a lot of things in our terminal. I use echo a lot to print out system variables, at least to know whether they are set or not. We can use `echo "Hello #90DaysOfDevOps" > Day15` and this will add this to our file. We can also append to our file using `echo "Commands are fun!" >> Day15`

![](Images/Day15_Linux18.png)

-Another one of those commands you will use a lot! `cat` short for concatenate. We can use `cat Day15` to see the contents inside the file. Great for quickly reading those configuration files.
+Another one of those commands you will use a lot! `cat`, short for concatenate. We can use `cat Day15` to see the contents inside the file. Great for quickly reading those configuration files.

![](Images/Day15_Linux19.png)

@@ -94,22 +95,26 @@ If you have a long complex configuration file and you want or need to find somet

![](Images/Day15_Linux20.png)

-If you are like me and you use that `clear` command a lot then you might miss some of the commands previously ran, we can use `history` to find out all those commands we have run prior. `history -c` will remove the history.
+If you are like me and you use that `clear` command a lot, then you might miss some of the commands previously run; we can use `history` to find out all those commands we have run prior. `history -c` will remove the history.

-When you run `history` and you would like to pick a specific command you can use `!3` to choose the 3rd command in the list.
+When you run `history` and you would like to pick a specific command, you can use `!3` to choose the 3rd command in the list.

-You are also able to use `history | grep "Command` to search for something specific.
+You are also able to use `history | grep "Command"` to search for something specific.
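To see `echo`, `cat` and `grep` working together without touching anything important, here is a throwaway sketch; the file name and contents are just examples:

```shell
# Create a scratch file, read it back, then filter it with grep.
cd /tmp
echo "Hello #90DaysOfDevOps" > Day15demo    # > creates/overwrites the file
echo "Commands are fun!" >> Day15demo       # >> appends a second line
cat Day15demo                               # prints both lines
grep "fun" Day15demo                        # prints only the matching line
```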
On servers, to trace back when a command was executed, it can be useful to append the date and time to each command in the history file. The following system variable controls this behaviour:
+
```
HISTTIMEFORMAT="%d-%m-%Y %T "
```
+
You can easily add this to your bash_profile:
+
```
echo 'export HISTTIMEFORMAT="%d-%m-%Y %T "' >> ~/.bash_profile
```
+
It is also useful to allow the history file to grow bigger:

```
@@ -119,7 +124,7 @@ echo 'export HISTFILESIZE=10000000' >> ~/.bash_profile
```

![](Images/Day15_Linux21.png)

-Need to change your password? `passwd` is going to allow us to change our password. Note that when you add your password like this when it is hidden it will not be shown in `history` however if your command has `-p PASSWORD` then this will be visible in your `history`.
+Need to change your password? `passwd` is going to allow us to change our password. Note that when you enter your password like this it is hidden and will not be shown in `history`; however, if your command has `-p PASSWORD` in it, then this will be visible in your `history`.

![](Images/Day15_Linux22.png)

@@ -127,22 +132,22 @@ We might also want to add new users to our system, we can do this with `useradd`

![](Images/Day15_Linux23.png)

-Creating a group again requires `sudo` and we can use `sudo groupadd DevOps` then if we want to add our new user to that group we can do this by running `sudo usermod -a -G DevOps` `-a` is add and `-G` is group name.
+Creating a group again requires `sudo`, and we can use `sudo groupadd DevOps`. Then, if we want to add our new user to that group, we can do this by running `sudo usermod -a -G DevOps`; `-a` is add and `-G` is the group name.
![](Images/Day15_Linux24.png)

-How do we add users to the `sudo` group, this would be a very rare occasion for this to happen but to do this it would be `usermod -a -G sudo NewUser` 
+How do we add users to the `sudo` group? This would be a very rare occasion, but to do this it would be `usermod -a -G sudo NewUser`

-### Permissions 
+### Permissions

-read, write and execute are the permissions we have on all of our files and folders on our Linux system. 
+Read, write and execute are the permissions we have on all of our files and folders on our Linux system.

-A full list: 
+A full list:

- 0 = None `---`
- 1 = Execute only `--X`
- 2 = Write only `-W-`
-- 3 = Write & Exectute `-WX`
+- 3 = Write & Execute `-WX`
- 4 = Read Only `R--`
- 5 = Read & Execute `R-X`
- 6 = Read & Write `RW-`
@@ -150,41 +155,41 @@
You will also see `777` or `775` and these represent the same numbers as the list above but each one represents **User - Group - Everyone**

-Let's take a look at our file. `ls -al Day15` you can see the 3 groups mentioned above, user and group have read & write but everyone only has read. 
+Let's take a look at our file with `ls -al Day15`; you can see the 3 groups mentioned above, user and group have read & write but everyone only has read.

![](Images/Day15_Linux25.png)

-We can change this using `chmod` you might find yourself doing this if you are creating binaries a lot on your systems as well and you need to give the ability to execute those binaries. `chmod 750 Day15` now run `ls -al Day15` if you want to run this for a whole folder then you can use `-R` to recursively do that. 
+We can change this using `chmod`. You might find yourself doing this if you are creating binaries a lot on your systems and you need to give the ability to execute those binaries. Run `chmod 750 Day15` and then `ls -al Day15`; if you want to run this for a whole folder then you can use `-R` to do that recursively.
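The numeric notation is easy to verify on a throwaway file (`perms-demo` is a hypothetical name, not one from the walkthrough):

```shell
# 750 = user rwx (7), group r-x (5), everyone none (0)
touch perms-demo
chmod 750 perms-demo
ls -l perms-demo         # first column reads -rwxr-x---
stat -c '%a' perms-demo  # prints the octal mode back: 750
```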
![](Images/Day15_Linux26.png)

-What about changing the owner of the file? We can use `chown` for this operation, if we wanted to change the ownership of our `Day15` from user `vagrant` to `NewUser` we can run `sudo chown NewUser Day15` again `-R` can be used. 
+What about changing the owner of the file? We can use `chown` for this operation; if we wanted to change the ownership of our `Day15` file from user `vagrant` to `NewUser` we can run `sudo chown NewUser Day15`, and again `-R` can be used.

![](Images/Day15_Linux27.png)

-A command that you will come across is `awk` which comes in real use when you have an output that you only need specific data from. like running `who` we get lines with information, but maybe we only need the names. We can run `who | awk '{print $1}'` to get just a list of that first column. 
+A command that you will come across is `awk`, which is really useful when you have output that you only need specific data from. For example, running `who` gives us lines with information, but maybe we only need the names. We can run `who | awk '{print $1}'` to get just that first column.

![](Images/Day15_Linux28.png)

-If you are looking to read streams of data from standard input, then generate and execute command lines; meaning it can take the output of a command and passes it as an argument of another command. `xargs` is a useful tool for this use case. If for example, I want a list of all the Linux user accounts on the system I can run. `cut -d: -f1 < /etc/passwd` and get the long list we see below. 
+If you are looking to read streams of data from standard input and then generate and execute command lines - that is, take the output of one command and pass it as arguments to another - then `xargs` is a useful tool. If, for example, I want a list of all the Linux user accounts on the system, I can run `cut -d: -f1 < /etc/passwd` and get the long list we see below.
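`who | awk '{print $1}'` only prints something when users are logged in, so the same column-picking idea can be sketched against `/etc/passwd`, which is always populated:

```shell
# Field 1 of /etc/passwd is the account name; ':' is the delimiter
cut -d: -f1 < /etc/passwd | head -5
# awk picks the same column if told ':' is the field separator
awk -F: '{print $1}' /etc/passwd | head -5
```

Both commands print the same first five account names, just via different tools.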
![](Images/Day15_Linux29.png)

-If I want to compact that list I can do so by using `xargs` in a command like this `cut -d: -f1 < /etc/passwd | sort | xargs` 
+If I want to compact that list I can do so by using `xargs` in a command like this: `cut -d: -f1 < /etc/passwd | sort | xargs`

![](Images/Day15_Linux30.png)

-I didn't mention the `cut` command either, this allows us to remove sections from each line of a file. It can be used to cut parts of a line by byte position, character and field. The `cut -d " " -f 2 list.txt` command allows us to remove that first letter we have and just display our numbers. There are so many combinations that can be used here with this command, I am sure I have spent too much time trying to use this command when I could have extracted data quicker manually. 
+I didn't mention the `cut` command either; this allows us to remove sections from each line of a file. It can be used to cut parts of a line by byte position, character and field. The `cut -d " " -f 2 list.txt` command removes that first letter and just displays our numbers. There are so many combinations that can be used with this command; I am sure I have spent too much time trying to use it when I could have extracted the data quicker manually.

![](Images/Day15_Linux31.png)

-Also to note if you type a command and you are no longer happy with it and you want to start again just hit control + c and this will cancel that line and start you fresh. 
+Also note that if you type a command and you are no longer happy with it and want to start again, just hit `Ctrl + C`; this will cancel that line and start you fresh.
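Both one-liners above are easy to try with made-up data. `xargs` with no command defaults to `echo`, which is what does the compacting, and `list.txt` below is a stand-in shaped like the screenshot (letter, space, number):

```shell
# xargs joins newline-separated input onto one line
printf 'charlie\nalpha\nbravo\n' | sort | xargs

# cut keeps only field 2, splitting on spaces
printf 'a 1\nb 2\nc 3\n' > list.txt
cut -d " " -f 2 list.txt
```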
-## Resources 
+## Resources

- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)

See you on [Day16](day16.md)

-This is a pretty heavy list already but I can safely say that I have used all of these commands in my day to day, be it from an administering Linux servers or on my Linux Desktop, it is very easy when you are in Windows or macOS to navigate the UI but in Linux Servers, they are not there, everything is done through the terminal. 
+This is a pretty heavy list already but I can safely say that I have used all of these commands in my day to day, be it administering Linux servers or on my Linux desktop. It is very easy in Windows or macOS to navigate the UI, but on Linux servers there is no UI; everything is done through the terminal.

diff --git a/Days/day16.md b/Days/day16.md
index c835f716a..83c445012 100644
--- a/Days/day16.md
+++ b/Days/day16.md
@@ -1,107 +1,108 @@
---
-title: '#90DaysOfDevOps - Managing your Linux System, Filesystem & Storage - Day 16'
+title: "#90DaysOfDevOps - Managing your Linux System, Filesystem & Storage - Day 16"
published: false
-description: '90DaysOfDevOps - Managing your Linux System, Filesystem & Storage'
-tags: 'devops, 90daysofdevops, learning'
+description: "90DaysOfDevOps - Managing your Linux System, Filesystem & Storage"
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048702
---
+
## Managing your Linux System, Filesystem & Storage

-So far we have had a brief overview of Linux and DevOps and then we got our lab environment set up using vagant [(Day 14)](day14.md), we then touched on a small portion of commands that will be in your daily toolkit when in the terminal and getting things done [(Day 15)](day15.md). 
+So far we have had a brief overview of Linux and DevOps and then we got our lab environment set up using Vagrant [(Day 14)](day14.md), we then touched on a small portion of commands that will be in your daily toolkit when in the terminal and getting things done [(Day 15)](day15.md). -Here we are going to look into three key areas of looking after your Linux systems with updates, installing software, understanding what system folders are used for and we will also take a look at storage. +Here we are going to look into three key areas of looking after your Linux systems with updates, installing software, understanding what system folders are used for and we will also take a look at storage. ## Managing Ubuntu & Software -The first thing we are going to look at is how we update our operating system. Most of you will be familiar with this process in a Windows OS and macOS, this looks slightly different on a Linux desktop and server. +The first thing we are going to look at is how we update our operating system. Most of you will be familiar with this process in a Windows OS and macOS, this looks slightly different on a Linux desktop and server. -We are going to be looking at the apt package manager, this is what we are going to use on our Ubuntu VM for updates and software installation. +We are going to be looking at the apt package manager, this is what we are going to use on our Ubuntu VM for updates and software installation. -Generally, at least on dev workstations, I run this command to make sure that I have the latest available updates from the central repositories, before any software installation. +Generally, at least on dev workstations, I run this command to make sure that I have the latest available updates from the central repositories, before any software installation. `sudo apt-get update` ![](Images/Day16_Linux1.png) -Now we have an updated Ubuntu VM with the latest OS updates installed. We now want to get some software installed here. 
+Now we have an updated Ubuntu VM with the latest OS updates installed. We now want to get some software installed here. Let's choose `figlet` which is a program that generates text banners. -If we type `figlet` in our terminal you are going to see that we do not have it installed on our system. +If we type `figlet` in our terminal you are going to see that we do not have it installed on our system. ![](Images/Day16_Linux2.png) -You will see from the above though that it does give us some `apt` install options that we could try. This is because in the default repositories there is a program called figlet. Let's try `sudo apt install figlet` +You will see from the above though that it does give us some `apt` install options that we could try. This is because in the default repositories there is a program called figlet. Let's try `sudo apt install figlet` ![](Images/Day16_Linux3.png) -We can now use our `figlet` app as you can see below. +We can now use our `figlet` app as you can see below. ![](Images/Day16_Linux4.png) -If we want to remove that or any of our software installations we can also do that via the `apt` package manager. +If we want to remove that or any of our software installations we can also do that via the `apt` package manager. `sudo apt remove figlet` ![](Images/Day16_Linux5.png) -There are third party repositories that we can also add to our system, the ones we have access to out of the box are the Ubuntu default repositories. +There are third party repositories that we can also add to our system, the ones we have access to out of the box are the Ubuntu default repositories. -If for example, we wanted to install vagrant on our Ubuntu VM we would not be able to right now and you can see this below on the first command issued. We then add the key to trust the HashiCorp repository, then add the repository to our system. 
+If for example, we wanted to install vagrant on our Ubuntu VM we would not be able to right now, and you can see this below on the first command issued. We then add the key to trust the HashiCorp repository, and then add the repository to our system.

![](Images/Day16_Linux6.png)

-Once we have the HashiCorp repository added we can go ahead and run `sudo apt install vagrant` and get vagrant installed on our system. 
+Once we have the HashiCorp repository added we can go ahead and run `sudo apt install vagrant` and get vagrant installed on our system.

![](Images/Day16_Linux7.png)

-There are so many options when it comes to software installation, different options for package managers, built into Ubuntu we could also use snaps for our software installations. 
+There are so many options when it comes to software installation and different package managers; built into Ubuntu, we could also use snaps for our software installations.

-Hopefully, this gives you a feel about how to manage your OS and software installations on Linux. 
+Hopefully, this gives you a feel for how to manage your OS and software installations on Linux.

-## File System Explained 
+## File System Explained

-Linux is made up of configuration files, if you want to change anything then you change these configuration files. 
+Linux is made up of configuration files; if you want to change anything then you change these configuration files.

-On Windows, you have C: drive and that is what we consider the root. On Linux we have `/` this is where we are going to find the important folders on our Linux system. 
+On Windows, you have the C: drive and that is what we consider the root. On Linux we have `/`, and this is where we are going to find the important folders on our Linux system.

![](Images/Day16_Linux8.png)

-- `/bin` - Short for binary, the bin folder is where our binaries that your system needs, executables and tools will mostly be found here. 
+- `/bin` - Short for binary; the bin folder is where the binaries your system needs - executables and tools - will mostly be found.

![](Images/Day16_Linux9.png)

-- `/boot` - All the files your system needs to boot up. How to boot up, and what drive to boot from. 
+- `/boot` - All the files your system needs to boot up. How to boot up, and what drive to boot from.

![](Images/Day16_Linux10.png)

-- `/dev` - You can find device information here, this is where you will find pointers to your disk drives `sda` will be your main OS disk. 
+- `/dev` - You can find device information here; this is where you will find pointers to your disk drives. `sda` will be your main OS disk.

![](Images/Day16_Linux11.png)

-- `/etc` Likely the most important folder on your Linux system, this is where the majority of your configuration files are. 
+- `/etc` - Likely the most important folder on your Linux system; this is where the majority of your configuration files are.

![](Images/Day16_Linux12.png)

-- `/home` - this is where you will find your user folders and files. We have our vagrant user folder. This is where you will find your `Documents` and `Desktop` folders that we worked in for the commands section. 
+- `/home` - This is where you will find your user folders and files. We have our vagrant user folder. This is where you will find the `Documents` and `Desktop` folders that we worked in for the commands section.

![](Images/Day16_Linux13.png)

-- `/lib` - We mentioned that `/bin` is where our binaries and executables live, and `/lib` is where you will find the shared libraries for those. 
+- `/lib` - We mentioned that `/bin` is where our binaries and executables live, and `/lib` is where you will find the shared libraries for those.

![](Images/Day16_Linux14.png)

-- `/media` - This is where we will find removable devices. 
+- `/media` - This is where we will find removable devices.

![](Images/Day16_Linux15.png)

-- `/mnt` - This is a temporary mount point. 
We will cover more here in the next storage section.
+- `/mnt` - This is a temporary mount point. We will cover more here in the next storage section.

![](Images/Day16_Linux16.png)

-- `/opt` - Optional software packages. You will notice here that we have some vagrant and virtual box software stored here. 
+- `/opt` - Optional software packages. You will notice that we have some Vagrant and VirtualBox software stored here.

![](Images/Day16_Linux17.png)

@@ -109,7 +110,7 @@ On Windows, you have C: drive and that is what we consider the root. On Linux we

![](Images/Day16_Linux18.png)

-- `/root` - To gain access you will need to sudo into this folder. The home folder for root. 
+- `/root` - To gain access you will need to sudo into this folder. The home folder for root.

![](Images/Day16_Linux19.png)

@@ -121,43 +122,43 @@ On Windows, you have C: drive and that is what we consider the root. On Linux we

![](Images/Day16_Linux21.png)

-- `/tmp` - temporary files. 
+- `/tmp` - Temporary files.

![](Images/Day16_Linux22.png)

-- `/usr` - If we as a standard user have installed software packages it would generally be installed in the `/usr/bin` location. 
+- `/usr` - If we as a standard user install software packages, they would generally be installed in the `/usr/bin` location.

![](Images/Day16_Linux23.png)

-- `/var` - Our applications get installed in a `bin` folder. We need somewhere to store all of the log files this is `/var` 
+- `/var` - Our applications get installed in a `bin` folder, but we need somewhere to store all of the log files; this is `/var`

![](Images/Day16_Linux24.png)

-## Storage 
+## Storage

-When we come to a Linux system or any system we might want to know the available disks and how much free space we have on those disks. The next few commands will help us identify and use and manage storage. 
+When we come to a Linux system or any system we might want to know the available disks and how much free space we have on those disks. 
The next few commands will help us identify, use and manage storage.

-- `lsblk` List Block devices. `sda` is our physical disk and then `sda1, sda2, sda3` are our partitions on that disk. 
+- `lsblk` lists block devices. `sda` is our physical disk and then `sda1, sda2, sda3` are our partitions on that disk.

![](Images/Day16_Linux25.png)

-- `df` gives us a little more detail about those partitions, total, used and available. You can parse other flags here I generally use `df -h` to give us a human output of the data. 
+- `df` gives us a little more detail about those partitions: total, used and available. You can pass other flags here; I generally use `df -h` to give a human-readable output of the data.

![](Images/Day16_Linux26.png)

-If you were adding a new disk to your system and this is the same in Windows you would need to format the disk in disk management, in the Linux terminal you can do this by using the `sudo mkfs -t ext4 /dev/sdb` with sdb relating to our newly added disk. 
+If you were adding a new disk to your system you would need to format it, just as you would in Windows Disk Management; in the Linux terminal you can do this by using `sudo mkfs -t ext4 /dev/sdb`, with sdb relating to our newly added disk.

-We would then need to mount our newly formatted disk so that it was useable. We would do this in our `/mnt` folder previously mentioned and we would create a directory there with `sudo mkdir NewDisk` we would then use `sudo mount /dev/sdb newdisk` to mount the disk to that location. 
+We would then need to mount our newly formatted disk so that it is usable. We would do this in the `/mnt` folder previously mentioned: we would create a directory there with `sudo mkdir NewDisk` and then use `sudo mount /dev/sdb NewDisk` to mount the disk to that location.

-It is also possible that you will need to unmount storage from your system safely vs just pulling it from the configuration. 
We can do this with `sudo umount /dev/sdb`
+It is also possible that you will need to unmount storage from your system safely vs just pulling it from the configuration. We can do this with `sudo umount /dev/sdb`

-If you did not want to unmount that disk and you were going to be using this disk for a database or some other persistent use case then you want it to be there when you reboot your system. For this to happen we need to add this disk to our `/etc/fstab` configuration file for it to persist, if you don't it won't be useable when the machine reboots and you would manually have to go through the above process. The data will still be there on the disk but it won't automount unless you add the configuration to this file. 
+If you did not want to unmount that disk and you were going to be using it for a database or some other persistent use case, then you want it to be there when you reboot your system. For this to happen we need to add this disk to our `/etc/fstab` configuration file for it to persist; if you don't, it won't be usable when the machine reboots and you would have to go through the above process manually. The data will still be there on the disk but it won't automount unless you add the configuration to this file.

-Once you have edited the `fstab` configuration file you can check your workings with `sudo mount -a` if no errors then your changes will now be persistent across restarts. 
+Once you have edited the `fstab` configuration file you can check your work with `sudo mount -a`; if there are no errors then your changes will be persistent across restarts.

-We will cover how you would edit a file using a text editor in a future session. 
+We will cover how you would edit a file using a text editor in a future session.
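The read-only half of the storage commands is safe to run anywhere, and the `/etc/fstab` line the text describes can be sketched alongside (the device and mount point are the example's assumptions):

```shell
# Free space on the root filesystem, human-readable
df -h /
# Just the use percentage (field 5 of the data row), via awk
df -h / | awk 'NR==2 {print $5}'
# The persistent-mount entry for the example disk would be a line in
# /etc/fstab shaped like (assumed device and mount point):
#   /dev/sdb  /mnt/NewDisk  ext4  defaults  0  0
# `sudo mount -a` then re-reads the file, surfacing mistakes before a reboot does
```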
-## Resources +## Resources - [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70) - [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE) diff --git a/Days/day17.md b/Days/day17.md index 08c85c5d3..47685aacf 100644 --- a/Days/day17.md +++ b/Days/day17.md @@ -1,57 +1,58 @@ --- -title: '#90DaysOfDevOps - Text Editors - nano vs vim - Day 17' +title: "#90DaysOfDevOps - Text Editors - nano vs vim - Day 17" published: false description: 90DaysOfDevOps - Text Editors - nano vs vim -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048703 --- + ## Text Editors - nano vs vim -The majority of your Linux systems are going to be servers and these are not going to have a GUI. I also mentioned in the last session that Linux is mostly made up of configuration files, to make changes you are going to need to be able to edit those configuration files to change anything on the system. +The majority of your Linux systems are going to be servers and these are not going to have a GUI. I also mentioned in the last session that Linux is mostly made up of configuration files, to make changes you are going to need to be able to edit those configuration files to change anything on the system. -There are lots of options out there but I think we should cover probably the two most common terminal text editors. I have used both of these editors and for me, I find `nano` the easy button when it comes to quick changes but `vim` has such a broad set of capabilities. +There are lots of options out there but I think we should cover probably the two most common terminal text editors. I have used both of these editors and for me, I find `nano` the easy button when it comes to quick changes but `vim` has such a broad set of capabilities. -### nano +### nano -- Not available on every system. +- Not available on every system. 
- Great for getting started.

-If you run `nano 90DaysOfDevOps.txt` we will create a new file with nothing in, from here we can add our text and we have our instructions below for what we want to do with that file. 
+If you run `nano 90DaysOfDevOps.txt` we will create a new empty file; from here we can add our text, and we have our instructions below for what we can do with that file.

![](Images/Day17_Linux1.png)

-We can now use `control x + enter` and then run `ls` you can now see our new text file. 
+We can now save and exit with `Ctrl X` (confirming with `Y` and then `Enter`) and then run `ls`; you can now see our new text file.

![](Images/Day17_Linux2.png)

-We can now run `cat` against that file to read our file. We can then use that same `nano 90DaysOfDevOps.txt` to add additional text or modify your file. 
+We can now run `cat` against that file to read it. We can then use that same `nano 90DaysOfDevOps.txt` to add additional text or modify the file.

-For me, nano is super easy when it comes to getting small changes done on configuration files. 
+For me, nano is super easy when it comes to getting small changes done on configuration files.

-### vim 
+### vim

-Possibly the most common text editor around? A sibling of the UNIX text editor vi from 1976 we get a lot of functionality with vim. 
+Possibly the most common text editor around? A descendant of the UNIX text editor vi from 1976, vim gives us a lot of functionality.

-- Pretty much supported on every single Linux distribution. 
-- Incredibly powerful! You can likely find a full 7-hour course just covering vim. 
+- Pretty much supported on every single Linux distribution.
+- Incredibly powerful! You can likely find a full 7-hour course just covering vim.

-We can jump into vim with the `vim` command or if we want to edit our new txt file we could run `vim 90DaysOfDevOps.txt` but you are going to first see the lack of help menus at the bottom. 
+We can jump into vim with the `vim` command or if we want to edit our new txt file we could run `vim 90DaysOfDevOps.txt`, but you are going to first notice the lack of help menus at the bottom.

-The first question might be "How do I exit vim?" that is going to be `escape` and if we have not made any changes then it will be `:q` 
+The first question might be "How do I exit vim?" That is going to be `escape`, and if we have not made any changes then it will be `:q`

![](Images/Day17_Linux3.png)

-You start in `normal` mode, there are other modes `command, normal, visual, insert`, if we want to add the text we will need to switch from `normal` to `insert` we need to press `i` if you have added some text and would like to save these changes then you would hit escape and then `:wq` 
+You start in `normal` mode; the other modes are `command, normal, visual, insert`. If we want to add text we need to switch from `normal` to `insert` by pressing `i`; if you have added some text and would like to save these changes then you would hit escape and then `:wq`

![](Images/Day17_Linux4.png)
![](Images/Day17_Linux5.png)

-You can confirm this with the `cat` command to check you have saved those changes. 
+You can confirm this with the `cat` command to check you have saved those changes.

-There is some cool fast functionality with vim that allows you to do menial tasks very quickly if you know the shortcuts which is a lecture in itself. 
Let's say we have added a list of repeated words and we now need to change that, maybe it's a configuration file and we repeat a network name and now this has changed and we quickly want to change this. I am using the word day for this example.

![](Images/Day17_Linux6.png)

@@ -59,23 +60,23 @@ Now we want to replace that word with 90DaysOfDevOps, we can do this by hitting 

![](Images/Day17_Linux7.png)

-The outcome when you hit enter is that the word day is then replaced with 90DaysOfDevOps. 
+The outcome when you hit enter is that the word day is then replaced with 90DaysOfDevOps.

![](Images/Day17_Linux8.png)

-Copy and Paste was a big eye-opener for me. Copy is not copied it is yanked. we can copy using `yy` on our keyboard in normal mode. `p` paste on the same line, `P` paste on a new line. 
+Copy and Paste was a big eye-opener for me. In vim, copying is not called copy, it is called yank. We can copy using `yy` on our keyboard in normal mode; `p` pastes on the same line, `P` pastes on a new line.

-You can also delete these lines by choosing the number of lines you wish to delete followed by `dd` 
+You can also delete lines by choosing the number of lines you wish to delete followed by `dd`

-There is also likely a time you will need to search a file, now we can use `grep` as mentioned in a previous session but we can also use vim. we can use `/word` and this will find the first match, to navigate through to the next you will use the `n` key and so on. 
+There is also likely a time you will need to search a file. We can use `grep` as mentioned in a previous session, but we can also use vim: `/word` will find the first match, and to navigate to the next match you use the `n` key, and so on.

-For vim this is not even touching the surface, the biggest advice I can give is to get hands-on and use vim wherever possible. 
+This is not even scratching the surface of vim; the biggest advice I can give is to get hands-on and use vim wherever possible.
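vim's `:%s/day/90DaysOfDevOps/g` uses the same `s/old/new/g` substitution syntax as `sed`, so the effect can be reproduced non-interactively - this is a `sed` sketch of the same edit, not vim itself:

```shell
# A file of repeated words, as in the screenshots
printf 'day\nday\nday\n' > demo.txt
# The global substitution vim performs with :%s/day/90DaysOfDevOps/g
sed -i 's/day/90DaysOfDevOps/g' demo.txt
cat demo.txt
```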
-A common interview question is what is your favourite text editor in Linux and I would make sure you have at least this knowledge of both so you can answer, it is fine to say nano because it's simple. At least you show competence in understanding what a text editor is. But get hands-on with them to be more proficient. 
+A common interview question is "what is your favourite text editor in Linux?" and I would make sure you have at least this knowledge of both so you can answer; it is fine to say nano because it's simple. At least you show competence in understanding what a text editor is, but get hands-on with both to be more proficient.

-Another pointer to navigate around in vim we can use `H,J,K,L` as well as our arrow keys. 
+Another pointer: to navigate around in vim we can use `h, j, k, l` as well as our arrow keys.

-## Resources 
+## Resources

- [Vim in 100 Seconds](https://www.youtube.com/watch?v=-txKSRn0qeA)
- [Vim tutorial](https://www.youtube.com/watch?v=IiwGbcd8S7I)

diff --git a/Days/day18.md b/Days/day18.md
index 74252ab57..9e7acab3c 100644
--- a/Days/day18.md
+++ b/Days/day18.md
@@ -1,89 +1,90 @@
---
-title: '#90DaysOfDevOps - SSH & Web Server - Day 18'
+title: "#90DaysOfDevOps - SSH & Web Server - Day 18"
published: false
description: 90DaysOfDevOps - SSH & Web Server
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048733
---
+
## SSH & Web Server

-As we have mentioned throughout you are going to most likely be managing lots of remote Linux servers, because of this, you will need to make sure that your connectivity to these remote servers is secure. In this section, we want to cover some of the basics of SSH that everyone should know that will help you with that secure tunnel to your remote systems. 
+As we have mentioned throughout you are going to most likely be managing lots of remote Linux servers, because of this, you will need to make sure that your connectivity to these remote servers is secure. In this section, we want to cover some of the basics of SSH that everyone should know that will help you with that secure tunnel to your remote systems. -- Setting up a connection with SSH -- Transferring files +- Setting up a connection with SSH +- Transferring files - Create your private key -### SSH introduction +### SSH introduction -- Secure shell -- Networking Protocol -- Allows secure communications -- Can secure any network service -- Typically used for remote command-line access +- Secure shell +- Networking Protocol +- Allows secure communications +- Can secure any network service +- Typically used for remote command-line access -In our environment, if you have been following along we have been using SSH already but this was all configured and automated through our vagrant configuration so we only had to run `vagrant ssh` and we gained access to our remote virtual machine. +In our environment, if you have been following along we have been using SSH already but this was all configured and automated through our vagrant configuration so we only had to run `vagrant ssh` and we gained access to our remote virtual machine. If our remote machine was not on the same system as our workstation and was in a remote location, maybe a cloud-based system or running in a data centre that we could only access over the internet we would need a secure way of being able to access the system to manage it. -SSH provides a secure tunnel between client and server so that nothing can be intercepted by bad actors. +SSH provides a secure tunnel between client and server so that nothing can be intercepted by bad actors. ![](Images/Day18_Linux1.png) -The server has a server-side SSH service always running and listening on a specific TCP port (22). 
+The server has a server-side SSH service always running and listening on a specific TCP port (22).

-If we use our client to connect with the correct credentials or SSH key then we gain access to that server. 

+If we use our client to connect with the correct credentials or SSH key then we gain access to that server.

### Adding a bridged network adapter to our system

-For us to use this with our current virtual box VM, we need to add a bridged network adapter to our machine. 

+For us to use this with our current VirtualBox VM, we need to add a bridged network adapter to our machine.

-Power down your virtual machine, right-click on your machine within Virtual Box and select settings. In the new window then select networking. 

+Power down your virtual machine, right-click on your machine within VirtualBox and select settings. In the new window then select networking.

![](Images/Day18_Linux2.png)

-Now power your machine back on and you will now have an IP address on your local machine. You can confirm this with the `IP addr` command. 

+Now power your machine back on and you will have an IP address on your local machine. You can confirm this with the `ip addr` command.

### Confirming SSH server is running

-We know SSH is already configured on our machine as we have been using it with vagrant but we can confirm by running 

+We know SSH is already configured on our machine as we have been using it with vagrant but we can confirm by running

`sudo systemctl status ssh`

![](Images/Day18_Linux3.png)

-If your system does not have the SSH server then you can install it by issuing this command `sudo apt install OpenSSH-server` 

+If your system does not have the SSH server then you can install it by issuing this command `sudo apt install openssh-server`

-You then want to make sure that our SSH is allowed if the firewall is running. We can do this with `sudo ufw allow ssh` this is not required on our configuration as we automated this with our vagrant provisioning. 
+You then want to make sure that our SSH is allowed if the firewall is running. We can do this with `sudo ufw allow ssh` this is not required on our configuration as we automated this with our vagrant provisioning. -### Remote Access - SSH Password +### Remote Access - SSH Password -Now that we have our SSH Server listening out on port 22 for any incoming connection requests and we have added the bridged networking we could use putty or an SSH client on our local machine to connect to our system using SSH. +Now that we have our SSH Server listening out on port 22 for any incoming connection requests and we have added the bridged networking we could use putty or an SSH client on our local machine to connect to our system using SSH. ![](Images/Day18_Linux4.png) -Then hit open, if this is the first time you have connected to this system via this IP address you will get this warning. We know that this is our system so you can choose yes. +Then hit open, if this is the first time you have connected to this system via this IP address you will get this warning. We know that this is our system so you can choose yes. ![](Images/Day18_Linux5.png) -We are then prompted for our username (vagrant) and password (default password - vagrant) Below you will see we are now using our SSH client (Putty) to connect to our machine using username and password. +We are then prompted for our username (vagrant) and password (default password - vagrant) Below you will see we are now using our SSH client (Putty) to connect to our machine using username and password. ![](Images/Day18_Linux6.png) -At this stage, we are connected to our VM from our remote client and we can issue our commands on our system. +At this stage, we are connected to our VM from our remote client and we can issue our commands on our system. 
### Remote Access - SSH Key -The above is an easy way to gain access to your systems however it still relies on username and password, if some malicious actor was to gain access to this information plus the public address or IP of your system then it could be easily compromised. This is where SSH keys are preferred. +The above is an easy way to gain access to your systems however it still relies on username and password, if some malicious actor was to gain access to this information plus the public address or IP of your system then it could be easily compromised. This is where SSH keys are preferred. -SSH Keys means that we provide a key pair so that both the client and server know that this is a trusted device. +SSH Keys means that we provide a key pair so that both the client and server know that this is a trusted device. -Creating a key is easy. On our local machine (Windows) We can issue the following command in fact if you have an ssh-client installed on any system I believe this same command will work? +Creating a key is easy. On our local machine (Windows) We can issue the following command in fact if you have an ssh-client installed on any system I believe this same command will work? `ssh-keygen -t ed25519` -I am not going to get into what `ed25519` is and means here but you can have a search if you want to learn more about [cryptography](https://en.wikipedia.org/wiki/EdDSA#Ed25519) +I am not going to get into what `ed25519` is and means here but you can have a search if you want to learn more about [cryptography](https://en.wikipedia.org/wiki/EdDSA#Ed25519) ![](Images/Day18_Linux7.png) @@ -91,36 +92,37 @@ At this point, we have our created SSH key stored in `C:\Users\micha/.ssh/` But to link this with our Linux VM we need to copy the key. We can do this by using the `ssh-copy-id vagrant@192.168.169.135` -I used Powershell to create my keys on my Windows client but there is no `ssh-copy-id` available here. 
There are ways in which you can do this on Windows and a small search online will find you an alternative, but I will just use git bash on my Windows machine to make the copy. +I used Powershell to create my keys on my Windows client but there is no `ssh-copy-id` available here. There are ways in which you can do this on Windows and a small search online will find you an alternative, but I will just use git bash on my Windows machine to make the copy. ![](Images/Day18_Linux8.png) -We can now go back to Powershell to test that our connection now works with our SSH Keys and no password is required. +We can now go back to Powershell to test that our connection now works with our SSH Keys and no password is required. `ssh vagrant@192.168.169.135` ![](Images/Day18_Linux9.png) -We could secure this further if needed by using a passphrase. We could also go one step further saying that no passwords at all meaning only key pairs over SSH would be allowed. You can make this happen in the following configuration file. +We could secure this further if needed by using a passphrase. We could also go one step further saying that no passwords at all meaning only key pairs over SSH would be allowed. You can make this happen in the following configuration file. -`sudo nano /etc/ssh/sshd_config` +`sudo nano /etc/ssh/sshd_config` -there is a line in here with `PasswordAuthentication yes` this will be `#` commented out, you should uncomment and change the yes to no. You will then need to reload the SSH service with `sudo systemctl reload sshd` +there is a line in here with `PasswordAuthentication yes` this will be `#` commented out, you should uncomment and change the yes to no. 
You will then need to reload the SSH service with `sudo systemctl reload sshd` -## Setting up a Web Server +## Setting up a Web Server -Not specifically related to what we have just done with SSH above but I wanted to include this as this is again another task that you might find a little daunting but it really should not be. +Not specifically related to what we have just done with SSH above but I wanted to include this as this is again another task that you might find a little daunting but it really should not be. -We have our Linux playground VM and at this stage, we want to add an apache webserver to our VM so that we can host a simple website from it that serves my home network. Note that this web page will not be accessible from the internet, this can be done but it will not be covered here. +We have our Linux playground VM and at this stage, we want to add an apache webserver to our VM so that we can host a simple website from it that serves my home network. Note that this web page will not be accessible from the internet, this can be done but it will not be covered here. -You might also see this referred to as a LAMP stack. +You might also see this referred to as a LAMP stack. -- **L**inux Operating System -- **A**pache Web Server -- **m**ySQL database +- **L**inux Operating System +- **A**pache Web Server +- **m**ySQL database - **P**HP -### Apache2 +### Apache2 + Apache2 is an open-source HTTP server. We can install apache2 with the following command. `sudo apt-get install apache2` @@ -132,32 +134,34 @@ Then using the bridged network address from the SSH walkthrough open a browser a ![](Images/Day18_Linux10.png) ### mySQL + MySQL is a database in which we will be storing our data for our simple website. To get MySQL installed we should use the following command `sudo apt-get install mysql-server` ### PHP -PHP is a server-side scripting language, we will use this to interact with a MySQL database. 
The final installation is to get PHP and dependencies installed using `sudo apt-get install php libapache2-mod-php php-mysql`

-The first configuration change we want to make out of the box apache is using index.html and we want it to use index.php instead. 

+PHP is a server-side scripting language, we will use this to interact with a MySQL database. The final installation is to get PHP and dependencies installed using `sudo apt-get install php libapache2-mod-php php-mysql`

-We are going to use `sudo nano /etc/apache2/mods-enabled/dir.conf` and we are going to move index.php to the first item in the list. 

+The first configuration change we want to make is that out of the box apache is using index.html and we want it to use index.php instead.
+
+We are going to use `sudo nano /etc/apache2/mods-enabled/dir.conf` and we are going to move index.php to the first item in the list.

![](Images/Day18_Linux11.png)

Restart the apache2 service `sudo systemctl restart apache2`

-Now let's confirm that our system is configured correctly for PHP. Create the following file using this command, this will open a blank file in nano. 

+Now let's confirm that our system is configured correctly for PHP. Create the following file using this command, this will open a blank file in nano.

`sudo nano /var/www/html/90Days.php`

-then copy the following and use control + x to exit and save your file. 

+then copy the following and use control + x to exit and save your file.

-``` 
+```
<?php
phpinfo();
?>
```

-Now navigate to your Linux VM IP again with the additional 90Days.php on the end of the URL. `http://192.168.169.135/90Days.php` you should see something similar to the below if PHP is configured correctly. 

+Now navigate to your Linux VM IP again with the additional 90Days.php on the end of the URL. `http://192.168.169.135/90Days.php` you should see something similar to the below if PHP is configured correctly.
![](Images/Day18_Linux12.png) @@ -189,13 +193,13 @@ I then walked through this tutorial to get WordPress up on our LAMP stack, some `sudo rm latest.tar.gz` -At this point you are in Step 4 in the linked article, you will need to follow the steps to make sure all correct permissions are in place for the WordPress directory. +At this point you are in Step 4 in the linked article, you will need to follow the steps to make sure all correct permissions are in place for the WordPress directory. -Because this is internal only you do not need to "generate security keys" in this step. Move to Step 5 which is changing the Apache configuration to WordPress. +Because this is internal only you do not need to "generate security keys" in this step. Move to Step 5 which is changing the Apache configuration to WordPress. -Then providing everything is configured correctly you will be able to access via your internal network address and run through the WordPress installation. +Then providing everything is configured correctly you will be able to access via your internal network address and run through the WordPress installation. -## Resources +## Resources - [Client SSH GUI - Remmina](https://remmina.org/) - [The Beginner's guide to SSH](https://www.youtube.com/watch?v=2QXkrLVsRmk) diff --git a/Days/day19.md b/Days/day19.md index 839427b8f..5f72ba0f9 100644 --- a/Days/day19.md +++ b/Days/day19.md @@ -1,85 +1,88 @@ --- -title: '#90DaysOfDevOps - Automate tasks with bash scripts - Day 19' +title: "#90DaysOfDevOps - Automate tasks with bash scripts - Day 19" published: false description: 90DaysOfDevOps - Automate tasks with bash scripts -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048774 --- + ## Automate tasks with bash scripts -The shell that we are going to use today is the bash but we will cover another shell tomorrow when we dive into ZSH. 
+The shell that we are going to use today is the bash but we will cover another shell tomorrow when we dive into ZSH. BASH - **B**ourne **A**gain **Sh**ell -We could almost dedicate a whole section of 7 days to shell scripting much like the programming languages, bash gives us the capability of working alongside other automation tools to get things done. +We could almost dedicate a whole section of 7 days to shell scripting much like the programming languages, bash gives us the capability of working alongside other automation tools to get things done. -I still speak to a lot of people who have set up some complex shell scripts to make something happen and they rely on this script for some of the most important things in the business, I am not saying we need to understand shell/bash scripting for this purpose, this is not the way. But we should learn shell/bash scripting to work alongside our automation tools and for ad-hoc tasks. +I still speak to a lot of people who have set up some complex shell scripts to make something happen and they rely on this script for some of the most important things in the business, I am not saying we need to understand shell/bash scripting for this purpose, this is not the way. But we should learn shell/bash scripting to work alongside our automation tools and for ad-hoc tasks. -An example of this that we have used in this section could be the VAGRANTFILE we used to create our VM, we could wrap this into a simple bash script that deleted and renewed this every Monday morning so that we have a fresh copy of our Linux VM every week, we could also add all the software stack that we need on said Linux machine and so on all through this one bash script. 
+An example of this that we have used in this section could be the VAGRANTFILE we used to create our VM, we could wrap this into a simple bash script that deleted and renewed this every Monday morning so that we have a fresh copy of our Linux VM every week, we could also add all the software stack that we need on said Linux machine and so on all through this one bash script. -I think another thing I am at least hearing is that hands-on scripting questions are becoming more and more apparent in all lines of interviews. +I think another thing I am at least hearing is that hands-on scripting questions are becoming more and more apparent in all lines of interviews. -### Getting started +### Getting started -As with a lot of things we are covering in this whole 90 days, the only real way to learn is through doing. Hands-on experience is going to help soak all of this into your muscle memory. +As with a lot of things we are covering in this whole 90 days, the only real way to learn is through doing. Hands-on experience is going to help soak all of this into your muscle memory. -First of all, we are going to need a text editor. On [Day 17](day17.md) we covered probably the two most common text editors and a little on how to use them. +First of all, we are going to need a text editor. On [Day 17](day17.md) we covered probably the two most common text editors and a little on how to use them. -Let's get straight into it and create our first shell script. +Let's get straight into it and create our first shell script. `touch 90DaysOfDevOps.sh` -Followed by `nano 90DaysOfDevOps.sh` this will open our new blank shell script in nano. Again you can choose your text editor of choice here. +Followed by `nano 90DaysOfDevOps.sh` this will open our new blank shell script in nano. Again you can choose your text editor of choice here. -The first line of all bash scripts will need to look something like this `#!/usr/bin/bash` this is the path to your bash binary. 
+The first line of all bash scripts will need to look something like this `#!/usr/bin/bash` this is the path to your bash binary.

-You should however check this in the terminal by running `which bash` if you are not using Ubuntu then you might also try `whereis bash` from the terminal. 

+You should however check this in the terminal by running `which bash`; if you are not using Ubuntu then you might also try `whereis bash` from the terminal.

-However, you may see other paths listed in already created shell scripts which could include: 

+However, you may see other paths listed in already created shell scripts which could include:

- `#!/bin/bash`
- `#!/usr/bin/env bash`

-In the next line in our script, I like to add a comment and add the purpose of the script or at least some information about me. You can do this by using the `#` This allows us to comment on particular lines in our code and provide descriptions of what the upcoming commands will be doing. I find the more notes the better for the user experience especially if you are sharing this. 

+In the next line in our script, I like to add a comment and add the purpose of the script or at least some information about me. You can do this by using the `#` This allows us to comment on particular lines in our code and provide descriptions of what the upcoming commands will be doing. I find the more notes the better for the user experience especially if you are sharing this.

-I sometimes use figlet, a program we installed earlier in the Linux section to create some asci art to kick things off in our scripts. 

+I sometimes use figlet, a program we installed earlier in the Linux section, to create some ASCII art to kick things off in our scripts.

![](Images/Day19_Linux1.png)

-All of the commands we have been through earlier in this Linux section ([Day15](day15.md)) could be used here as a simple command to test our script. 
+All of the commands we have been through earlier in this Linux section ([Day15](day15.md)) could be used here as a simple command to test our script. -Let's add a simple block of code to our script. +Let's add a simple block of code to our script. -``` +``` mkdir 90DaysOfDevOps cd 90DaysOfDevOps touch Day19 -ls +ls ``` -You can then save this and exit your text editor, if we run our script with `./90DaysOfDevOps.sh` you should get a permission denied message. You can check the permissions of this file using the `ls -al` command and you can see highlighted we do not have executable rights on this file. + +You can then save this and exit your text editor, if we run our script with `./90DaysOfDevOps.sh` you should get a permission denied message. You can check the permissions of this file using the `ls -al` command and you can see highlighted we do not have executable rights on this file. ![](Images/Day19_Linux2.png) -We can change this using `chmod +x 90DaysOfDevOps.sh` and then you will see the `x` meaning we can now execute our script. +We can change this using `chmod +x 90DaysOfDevOps.sh` and then you will see the `x` meaning we can now execute our script. ![](Images/Day19_Linux3.png) -Now we can run our script again using `./90DaysOfDevOps.sh` after running the script has now created a new directory, changed into that directory and then created a new file. +Now we can run our script again using `./90DaysOfDevOps.sh` after running the script has now created a new directory, changed into that directory and then created a new file. ![](Images/Day19_Linux4.png) -Pretty basic stuff but you can start to see hopefully how this could be used to call on other tools as part of ways to make your life easier and automate things. +Pretty basic stuff but you can start to see hopefully how this could be used to call on other tools as part of ways to make your life easier and automate things. 
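Put together, the whole of that first script is only a handful of lines. The `-p` flag and the `|| exit` guard are small additions of mine so the sketch can be re-run safely; they are not in the original screenshots:

```shell
#!/usr/bin/env bash
# 90DaysOfDevOps.sh - create a working directory, move into it,
# create a file, then list the directory contents.
mkdir -p 90DaysOfDevOps        # -p: no error if the directory already exists
cd 90DaysOfDevOps || exit 1    # stop if we could not change directory
touch Day19
ls
```

Remember it still needs `chmod +x 90DaysOfDevOps.sh` before `./90DaysOfDevOps.sh` will run.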
### Variables, Conditionals -A lot of this section is a repeat of what we covered when we were learning Golang but I think it's worth us diving in here again. -- ### Variables +A lot of this section is a repeat of what we covered when we were learning Golang but I think it's worth us diving in here again. -Variables enable us to define once a particular repeated term that is used throughout a potentially complex script. +- ### Variables -To add a variable you simply add it like this to a clean line in your script. +Variables enable us to define once a particular repeated term that is used throughout a potentially complex script. + +To add a variable you simply add it like this to a clean line in your script. `challenge="90DaysOfDevOps"` @@ -87,22 +90,22 @@ This way when and where we use `$challenge` in our code, if we change the variab ![](Images/Day19_Linux5.png) -If we now run our `sh` script you will see the printout that was added to our script. +If we now run our `sh` script you will see the printout that was added to our script. ![](Images/Day19_Linux5.png) -We can also ask for user input that can set our variables using the following: +We can also ask for user input that can set our variables using the following: -``` +``` echo "Enter your name" read name ``` -This would then define the input as the variable `$name` We could then use this later on. +This would then define the input as the variable `$name` We could then use this later on. -- ### Conditionals +- ### Conditionals -Maybe we want to find out who we have on our challenge and how many days they have completed, we can define this using `if` `if-else` `else-if` conditionals, this is what we have defined below in our script. +Maybe we want to find out who we have on our challenge and how many days they have completed, we can define this using `if` `if-else` `else-if` conditionals, this is what we have defined below in our script. 
``` #!/bin/bash @@ -138,7 +141,8 @@ else echo "You have entered the wrong amount of days" fi ``` -You can also see from the above that we are running some comparisons or checking values against each other to move on to the next stage. We have different options here worth noting. + +You can also see from the above that we are running some comparisons or checking values against each other to move on to the next stage. We have different options here worth noting. - `eq` - if the two values are equal will return TRUE - `ne` - if the two values are not equal will return TRUE @@ -147,11 +151,11 @@ You can also see from the above that we are running some comparisons or checking - `lt` - if the first value is less than the second value will return TRUE - `le` - if the first value is less than or equal to the second value will return TRUE -We might also use bash scripting to determine information about files and folders, this is known as file conditions. +We might also use bash scripting to determine information about files and folders, this is known as file conditions. - `-d file` True if the file is a directory - `-e file` True if the file exists -- `-f file` True if the provided string is a file +- `-f file` True if the provided string is a file - `-g file` True if the group id is set on a file - `-r file` True if the file is readable - `-s file` True if the file has a non-zero size @@ -159,41 +163,42 @@ We might also use bash scripting to determine information about files and folder ``` FILE="90DaysOfDevOps.txt" if [ -f "$FILE" ] -then +then echo "$FILE is a file" -else +else echo "$FILE is not a file" fi ``` ![](Images/Day19_Linux7.png) -Providing we have that file still in our directory we should get the first echo command back. But if we remove that file then we should get the second echo command. +Providing we have that file still in our directory we should get the first echo command back. But if we remove that file then we should get the second echo command. 
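The numeric operators listed a little earlier (`-eq`, `-lt` and friends) work the same way as the file conditions. A small standalone sketch:

```shell
#!/usr/bin/env bash
# Exercising the numeric comparison operators from the list above.
days=90

if [ "$days" -eq 90 ]; then
    result="Challenge complete"
elif [ "$days" -lt 90 ]; then
    result="Keep going"
else
    result="You have entered the wrong amount of days"
fi

echo "$result"
```

With `days=90` this prints `Challenge complete`; change the value to see the other branches.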
![](Images/Day19_Linux8.png) -You can hopefully see how this can be used to save you time when searching through a system for specific items. +You can hopefully see how this can be used to save you time when searching through a system for specific items. I found this amazing repository on GitHub that has what seems to be an endless amount of scripts [DevOps Bash Tools](https://github.com/HariSekhon/DevOps-Bash-tools/blob/master/README.md) -### Example +### Example -**Scenario**: We have our company called "90DaysOfDevOps" and we have been running a while and now it is time to expand the team from 1 person to lots more over the coming weeks, I am the only one so far that knows the onboarding process so we want to reduce that bottleneck by automating some of these tasks. +**Scenario**: We have our company called "90DaysOfDevOps" and we have been running a while and now it is time to expand the team from 1 person to lots more over the coming weeks, I am the only one so far that knows the onboarding process so we want to reduce that bottleneck by automating some of these tasks. -**Requirements**: -- A user can be passed in as a command line argument. -- A user is created with the name of the command line argument. -- A password can be parsed as a command line argument. -- The password is set for the user -- A message of successful account creation is displayed. +**Requirements**: + +- A user can be passed in as a command line argument. +- A user is created with the name of the command line argument. +- A password can be parsed as a command line argument. +- The password is set for the user +- A message of successful account creation is displayed. Let's start with creating our shell script with `touch create_user.sh` Before we move on let's also make this executable using `chmod +x create_user.sh` -then we can use `nano create_user.sh` to start editing our script for the scenario we have been set. 
+then we can use `nano create_user.sh` to start editing our script for the scenario we have been set. -We can take a look at the first requirement "A user can be passed in as a command line argument" we can use the following +We can take a look at the first requirement "A user can be passed in as a command line argument" we can use the following ``` #! /usr/bin/bash @@ -204,7 +209,7 @@ echo "$1" ![](Images/Day19_Linux9.png) -Go ahead and run this using `./create_user.sh Michael` replace Michael with your name when you run the script. +Go ahead and run this using `./create_user.sh Michael` replace Michael with your name when you run the script. ![](Images/Day19_Linux10.png) @@ -223,11 +228,11 @@ sudo useradd -m "$1" Warning: If you do not provide a user account name then it will error as we have not filled the variable `$1` -We can then check this account has been created with the `awk -F: '{ print $1}' /etc/passwd` command. +We can then check this account has been created with the `awk -F: '{ print $1}' /etc/passwd` command. ![](Images/Day19_Linux11.png) -Our next requirement is "A password can be parsed as a command line argument." First of all, we are not going to ever do this in production it is more for us to work through a list of requirements in the lab to understand. +Our next requirement is "A password can be parsed as a command line argument." First of all, we are not going to ever do this in production it is more for us to work through a list of requirements in the lab to understand. ``` #! /usr/bin/bash @@ -244,15 +249,15 @@ sudo chpasswd <<< "$1":"$2" If we then run this script with the two parameters `./create_user.sh 90DaysOfDevOps password` -You can see from the below image that we executed our script it created our user and password and then we manually jumped into that user and confirmed with the `whoami` command. 
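One defensive tweak, an assumption of mine rather than part of the walkthrough, is to guard against the missing-argument error mentioned in the warning above by checking the parameter count first:

```shell
#!/usr/bin/env bash
# Hypothetical guard for create_user.sh: fail fast when the username or
# password argument is missing instead of letting useradd error out.
create_user() {
    if [ "$#" -ne 2 ]; then
        echo "Usage: create_user <username> <password>" >&2
        return 1
    fi
    echo "$1 user account being created"
    # The real commands (they require root) would follow here:
    # sudo useradd -m "$1"
    # sudo chpasswd <<< "$1":"$2"
}
```

`create_user 90DaysOfDevOps password` prints the creation message, while calling it with no arguments returns an error instead of starting on a broken account.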
+You can see from the below image that we executed our script it created our user and password and then we manually jumped into that user and confirmed with the `whoami` command. ![](Images/Day19_Linux12.png) -The final requirement is "A message of successful account creation is displayed." We already have this in the top line of our code and we can see on the above screenshot that we have a `90DaysOfDevOps user account being created` shown. This was left from our testing with the `$1` parameter. +The final requirement is "A message of successful account creation is displayed." We already have this in the top line of our code and we can see on the above screenshot that we have a `90DaysOfDevOps user account being created` shown. This was left from our testing with the `$1` parameter. -Now, this script can be used to quickly onboard and set up new users on to our Linux systems. But maybe instead of a few of the historic people having to work through this and then having to get other people their new usernames or passwords we could add some user input that we have previously covered earlier on to capture our variables. +Now, this script can be used to quickly onboard and set up new users on to our Linux systems. But maybe instead of a few of the historic people having to work through this and then having to get other people their new usernames or passwords we could add some user input that we have previously covered earlier on to capture our variables. -``` +``` #! /usr/bin/bash echo "What is your intended username?" 
@@ -270,7 +275,7 @@ sudo useradd -m $username sudo chpasswd <<< $username:$password ``` -With the steps being more interactive, +With the steps being more interactive, ![](Images/Day19_Linux14.png) @@ -286,13 +291,13 @@ If you do want to delete the user you have created for lab purposes then you can [Example Script](Linux/create-user.sh) -Once again I am not saying this is going to be something that you do create in your day to day but it was something I thought of that would highlight the flexibility of what you could use shell scripting for. +Once again I am not saying this is going to be something that you do create in your day to day but it was something I thought of that would highlight the flexibility of what you could use shell scripting for. -Think about any repeatable tasks that you do every day or week or month and how could you better automate that, first option is likely going to be using a bash script before moving into more complex territory. +Think about any repeatable tasks that you do every day or week or month and how could you better automate that, first option is likely going to be using a bash script before moving into more complex territory. -I have created a very simple bash file that helps me spin up a Kubernetes cluster using minikube on my local machine along with data services and Kasten K10 to help demonstrate the requirements and needs around data management. [Project Pace](https://github.com/MichaelCade/project_pace/blob/main/singlecluster_demo.sh) But I did not feel this appropriate to raise here as we have not covered Kubernetes yet. +I have created a very simple bash file that helps me spin up a Kubernetes cluster using minikube on my local machine along with data services and Kasten K10 to help demonstrate the requirements and needs around data management. 
[Project Pace](https://github.com/MichaelCade/project_pace/blob/main/singlecluster_demo.sh) But I did not feel this appropriate to raise here as we have not covered Kubernetes yet. -## Resources +## Resources - [Bash in 100 seconds](https://www.youtube.com/watch?v=I4EWvMFj37g) - [Bash script with practical examples - Full Course](https://www.youtube.com/watch?v=TPRSJbtfK4M) diff --git a/Days/day20.md b/Days/day20.md index f0856c887..bf14323ff 100644 --- a/Days/day20.md +++ b/Days/day20.md @@ -1,81 +1,84 @@ --- -title: '#90DaysOfDevOps - Dev workstation setup - All the pretty things - Day 20' +title: "#90DaysOfDevOps - Dev workstation setup - All the pretty things - Day 20" published: false description: 90DaysOfDevOps - Dev workstation setup - All the pretty things -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048734 --- + ## Dev workstation setup - All the pretty things -Not to be confused with us setting Linux servers up this way but I wanted to also show off the choice and flexibility that we have within the Linux desktop. +Not to be confused with us setting Linux servers up this way but I wanted to also show off the choice and flexibility that we have within the Linux desktop. -I have been using a Linux Desktop for almost a year now and I have it configured just the way I want from a look and feel perspective. Using our Ubuntu VM on Virtual Box we can run through some of the customisations I have made to my daily driver. +I have been using a Linux Desktop for almost a year now and I have it configured just the way I want from a look and feel perspective. Using our Ubuntu VM on Virtual Box we can run through some of the customisations I have made to my daily driver. 
-I have put together a YouTube video walking through the rest as some people might be able to better follow along:
+I have put together a YouTube video walking through the rest, as some people might be able to better follow along:

[![Click to access YouTube Video](Images/Day20_YouTube.png)](https://youtu.be/jeEslAtHfKc)

-Out of the box, our system will look something like the below:
+Out of the box, our system will look something like the below:

![](Images/Day20_Linux1.png)

-We can also see our default bash shell below,
+We can also see our default bash shell below:

![](Images/Day20_Linux2.png)

-A lot of this comes down to dotfiles something we will cover in this final Linux session of the series.
+A lot of this comes down to dotfiles, something we will cover in this final Linux session of the series.
+
+### dotfiles

-### dotfiles
-First up I want to dig into dotfiles, I have said on a previous day that Linux is made up of configuration files. These dotfiles are configuration files for your Linux system and applications.
+First up, I want to dig into dotfiles. I have said on a previous day that Linux is made up of configuration files; these dotfiles are configuration files for your Linux system and applications.

-I will also add that dotfiles are not just used to customise and make your desktop look pretty, there are also dotfile changes and configurations that will help you with productivity.
+I will also add that dotfiles are not just used to customise and make your desktop look pretty; there are also dotfile changes and configurations that will help you with productivity.

-As I mentioned many software programs store their configurations in these dotfiles. These dotfiles assist in managing functionality.
+As I mentioned, many software programs store their configurations in these dotfiles. These dotfiles assist in managing functionality.

-Each dotfile starts with a `.` You can probably guess where the naming came from?
+Each dotfile starts with a `.`; you can probably guess where the naming came from.

-So far we have been using bash as our shell which means you will have a .bashrc and .bash_profile in our home folder. You can see below a few dotfiles we have on our system.
+So far we have been using bash as our shell, which means you will have a .bashrc and a .bash_profile in your home folder. You can see below a few dotfiles we have on our system.

![](Images/Day20_Linux3.png)

-We are going to be changing our shell, so we will later be seeing a new `.zshrc` configuration dotfile.
+We are going to be changing our shell, so we will later be seeing a new `.zshrc` configuration dotfile.
+
+So now, when we refer to dotfiles, you know they are configuration files. We can use them to add aliases to our command prompt as well as paths to different locations. Some people publish their dotfiles so they are publicly available. You will find mine on my GitHub at [MichaelCade/dotfiles](https://github.com/MichaelCade/dotfiles), where you will find my custom `.zshrc` file; my terminal of choice is Terminator, which also has some configuration files in the folder, along with some background options.

-But now you know if we refer to dotfiles you know they are configuration files. We can use them to add aliases to our command prompt as well as paths to different locations. Some people publish their dotfiles so they are publicly available. You will find mine here on my GitHub [MichaelCade/dotfiles](https://github.com/MichaelCade/dotfiles) here you will find my custom `.zshrc` file, my terminal of choice is terminator which also has some configuration files in the folder and then also some background options.
+### ZSH

-### ZSH
-As I mentioned throughout our interactions so far we have been using a bash shell the default shell with Ubuntu. ZSH is very similar but it does have some benefits over bash.
+As I mentioned, throughout our interactions so far we have been using bash, the default shell with Ubuntu. ZSH is very similar, but it does have some benefits over bash.

Zsh has features like interactive Tab completion, automated file searching, regex integration, advanced shorthand for defining command scope, and a rich theme engine.

-We can use our `apt` package manager to get zsh installed on our system. Let's go ahead and run `sudo apt install zsh` from our bash terminal. I am going to do this from within the VM console vs being connected over SSH.
+We can use our `apt` package manager to get zsh installed on our system. Let's go ahead and run `sudo apt install zsh` from our bash terminal. I am going to do this from within the VM console vs being connected over SSH.

-When the installation command is complete you can run `zsh` inside your terminal, this will then start a shell configuration script.
+When the installation command is complete you can run `zsh` inside your terminal; this will then start a shell configuration script.

![](Images/Day20_Linux4.png)

-I selected `1` to the above question and now we have some more options.
+I selected `1` in answer to the above question, and now we have some more options.

![](Images/Day20_Linux5.png)

-You can see from this menu that we can make some out of the box edits to make ZSH configured to our needs.
+You can see from this menu that we can make some out-of-the-box edits to configure ZSH to our needs.

-If you exit the wizard with a `0` and then use the `ls -al | grep .zshrc` you should see we have a new configuration file.
+If you exit the wizard with a `0` and then use `ls -al | grep .zshrc`, you should see we have a new configuration file.

-Now we want to make zsh our default shell every time we open our terminal, we can do this by running the following command to change our shell `chsh -s $(which zsh)` we then need to log out and back in again for the changes to take place.
+Now we want to make zsh our default shell every time we open our terminal. We can do this by running the following command to change our shell: `chsh -s $(which zsh)`. We then need to log out and back in again for the changes to take effect.

When you log back and open a terminal it should look something like this. We can also confirm our shell has now been changed over by running `which $SHELL`

![](Images/Day20_Linux6.png)

-I generally perform this step on each Ubuntu desktop I spin up and find in general without going any further that the zsh shell is a little faster than bash.
+I generally perform this step on each Ubuntu desktop I spin up and find, even without going any further, that the zsh shell is a little faster than bash.

-### OhMyZSH
+### OhMyZSH

-Next up we want to make things look a little better and also add some functionality to help us move around within the terminal.
+Next up, we want to make things look a little better and also add some functionality to help us move around within the terminal.

-OhMyZSH is a free and open source framework for managing your zsh configuration. There are lots of plugins, themes and other things that just make interacting with the zsh shell a lot nicer.
+OhMyZSH is a free and open-source framework for managing your zsh configuration. There are lots of plugins, themes and other things that just make interacting with the zsh shell a lot nicer.

You can find out more about [ohmyzsh](https://ohmyz.sh/)

@@ -87,67 +90,67 @@ When you have run the above command you should see some output like the below.

![](Images/Day20_Linux7.png)

- Now we can move on to start putting a theme in for our experience, there are well over 100 bundled with Oh My ZSH but my go-to for all of my applications and everything is the Dracula theme.
+Now we can move on to putting a theme in place for our experience; there are well over 100 bundled with Oh My ZSH, but my go-to for all of my applications and everything is the Dracula theme.
- I also want to add that these two plugins are a must when using Oh My ZSH. +I also want to add that these two plugins are a must when using Oh My ZSH. - `git clone https://github.com/zsh-users/zsh-autosuggestions.git $ZSH_CUSTOM/plugins/zsh-autosuggestions` +`git clone https://github.com/zsh-users/zsh-autosuggestions.git $ZSH_CUSTOM/plugins/zsh-autosuggestions` - `git clone https://github.com/zsh-users/zsh-syntax-highlighting.git $ZSH_CUSTOM/plugins/zsh-syntax-highlighting` +`git clone https://github.com/zsh-users/zsh-syntax-highlighting.git $ZSH_CUSTOM/plugins/zsh-syntax-highlighting` - `nano ~/.zshrc` +`nano ~/.zshrc` - edit the plugins to now include `plugins=(git zsh-autosuggestions zsh-syntax-highlighting)` +edit the plugins to now include `plugins=(git zsh-autosuggestions zsh-syntax-highlighting)` ## Gnome Extensions -I also use Gnome extensions, and in particular the list below +I also use Gnome extensions, and in particular the list below [Gnome extensions](https://extensions.gnome.org) - - Caffeine + - Caffeine - CPU Power Manager - - Dash to Dock - - Desktop Icons - - User Themes + - Dash to Dock + - Desktop Icons + - User Themes ## Software Installation -A short list of the programs I install on the machine using `apt` +A short list of the programs I install on the machine using `apt` - - VSCode - - azure-cli + - VSCode + - azure-cli - containerd.io - docker - - docker-ce - - google-cloud-sdk - - insomnia + - docker-ce + - google-cloud-sdk + - insomnia - packer - terminator - - terraform + - terraform - vagrant ### Dracula theme -This site is the only theme I am using at the moment. Looks clear, and clean and everything looks great. [Dracula Theme](https://draculatheme.com/) It also has you covered when you have lots of other programs you use on your machine. +This site is the only theme I am using at the moment. Looks clear, and clean and everything looks great. 
[Dracula Theme](https://draculatheme.com/) It also has you covered when you have lots of other programs you use on your machine. -From the link above we can search for zsh on the site and you will find at least two options. +From the link above we can search for zsh on the site and you will find at least two options. -Follow the instructions listed to install either manually or using git. Then you will need to finally edit your `.zshrc` configuration file as per below. +Follow the instructions listed to install either manually or using git. Then you will need to finally edit your `.zshrc` configuration file as per below. ![](Images/Day20_Linux8.png) -You are next going to want the [Gnome Terminal Dracula theme](https://draculatheme.com/gnome-terminal) with all instructions available here as well. +You are next going to want the [Gnome Terminal Dracula theme](https://draculatheme.com/gnome-terminal) with all instructions available here as well. It would take a long time for me to document every step so I created a video walkthrough of the process. (**Click on the image below**) [![](Images/Day20_YouTube.png)](https://youtu.be/jeEslAtHfKc) -If you made it this far, then we have now finished our Linux section of the #90DaysOfDevOps. Once again I am open to feedback and additions to resources here. +If you made it this far, then we have now finished our Linux section of the #90DaysOfDevOps. Once again I am open to feedback and additions to resources here. -I also thought on this it was easier to show you a lot of the steps through video vs writing them down here, what do you think about this? I do have a goal to work back through these days and where possible create video walkthroughs to add in and better maybe explain and show some of the things we have covered. What do you think? +I also thought on this it was easier to show you a lot of the steps through video vs writing them down here, what do you think about this? 
I do have a goal to work back through these days and, where possible, create video walkthroughs to add in and perhaps better explain and show some of the things we have covered. What do you think?

-## Resources
+## Resources

- [Bash in 100 seconds](https://www.youtube.com/watch?v=I4EWvMFj37g)
- [Bash script with practical examples - Full Course](https://www.youtube.com/watch?v=TPRSJbtfK4M)

@@ -158,6 +161,6 @@ I also thought on this it was easier to show you a lot of the steps through vide

- [Learn the Linux Fundamentals - Part 1](https://www.youtube.com/watch?v=kPylihJRG70)
- [Linux for hackers (don't worry you don't need to be a hacker!)](https://www.youtube.com/watch?v=VbEx7B_PTOE)

-Tomorrow we start our 7 days of diving into Networking, we will be looking to give ourselves the foundational knowledge and understanding of Networking around DevOps.
+Tomorrow we start our 7 days of diving into Networking, where we will look to give ourselves the foundational knowledge and understanding of Networking around DevOps.

See you on [Day21](day21.md)
diff --git a/Days/day21.md b/Days/day21.md
index 3ec1a7f61..2600b4d1a 100644
--- a/Days/day21.md
+++ b/Days/day21.md
@@ -1,105 +1,106 @@
---
-title: '#90DaysOfDevOps - The Big Picture: DevOps and Networking - Day 21'
+title: "#90DaysOfDevOps - The Big Picture: DevOps and Networking - Day 21"
published: false
description: 90DaysOfDevOps - The Big Picture DevOps and Networking
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048761
---
+
## The Big Picture: DevOps and Networking

-Welcome to Day 21! We are going to be getting into Networking over the next 7 days, Networking and DevOps are the overarching themes but we will need to get into some of the networking fundamentals as well.
+Welcome to Day 21!
We are going to be getting into Networking over the next 7 days. Networking and DevOps are the overarching themes, but we will need to get into some of the networking fundamentals as well.

-Ultimately as we have said previously DevOps is about a culture and process change within your organisation this as we have discussed can be Virtual Machines, Containers, or Kubernetes but it can also be the network, If we are using those DevOps principles for our infrastructure that has to include the network more to the point from a DevOps point of view you also need to know about the network as in the different topologies and networking tools and stacks that we have available.
+Ultimately, as we have said previously, DevOps is about a culture and process change within your organisation. As we have discussed, this can involve Virtual Machines, Containers, or Kubernetes, but it can also include the network. If we are using those DevOps principles for our infrastructure, that has to include the network; more to the point, from a DevOps point of view you also need to know about the network, as in the different topologies, networking tools and stacks that we have available.

-I would argue that we should have our networking devices configured using infrastructure as code and have everything automated like we would our virtual machines, but to do that we have to have a good understanding of what we are automating.
+I would argue that we should have our networking devices configured using infrastructure as code and have everything automated like we would our virtual machines, but to do that we have to have a good understanding of what we are automating.

### What is NetDevOps | Network DevOps?

-You may also hear the terms Network DevOps or NetDevOps. Maybe you are already a Network engineer and have a great grasp on the network components within the infrastructure you understand the elements used around networking such as DHCP, DNS, NAT etc.
You will also have a good understanding of the hardware or software-defined networking options, switches, routers etc.
+You may also hear the terms Network DevOps or NetDevOps. Maybe you are already a Network engineer with a great grasp of the network components within the infrastructure: you understand the elements used around networking, such as DHCP, DNS, NAT etc. You will also have a good understanding of the hardware or software-defined networking options, switches, routers etc.

-But if you are not a network engineer then we probably need to get foundational knowledge across the board in some of those areas so that we can understand the end goal of Network DevOps.
+But if you are not a network engineer, then we probably need to build foundational knowledge across the board in some of those areas so that we can understand the end goal of Network DevOps.

-But in regards to those terms, we can think of NetDevOps or Network DevOps as applying the DevOps Principles and Practices to the network, applying version control and automation tools to the network creation, testing, monitoring, and deployments.
+Regarding those terms, we can think of NetDevOps or Network DevOps as applying the DevOps principles and practices to the network: applying version control and automation tools to network creation, testing, monitoring, and deployments.

-If we think of Network DevOps as having to require automation, we mentioned before about DevOps breaking down the siloes between teams. If the networking teams do not change to a similar model and process then they become the bottleneck or even the failure overall.
+Network DevOps requires automation, and we mentioned before that DevOps breaks down the silos between teams. If the networking teams do not change to a similar model and process, then they become the bottleneck, or even the point of failure, overall.
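To make the configuration-as-code idea concrete, here is a minimal Python sketch; the device names and settings are invented for illustration only. It compares a desired configuration held in version control with the running state and reports any drift that an automation tool would then correct.

```python
# Hypothetical sketch: detect drift between config-as-code and running state.
# Device names and settings are invented for illustration only.
desired = {
    "switch01": {"vlan": 10, "mtu": 9000},
    "router01": {"ospf_area": 0, "mtu": 1500},
}
running = {
    "switch01": {"vlan": 10, "mtu": 1500},   # mtu has drifted
    "router01": {"ospf_area": 0, "mtu": 1500},
}

def find_drift(desired, running):
    """Return {device: {setting: (want, have)}} for every mismatch."""
    drift = {}
    for device, settings in desired.items():
        for key, want in settings.items():
            have = running.get(device, {}).get(key)
            if have != want:
                drift.setdefault(device, {})[key] = (want, have)
    return drift

print(find_drift(desired, running))
# {'switch01': {'mtu': (9000, 1500)}}
```

In a real pipeline, the desired state would come from a Git repository and the running state from the devices themselves, with the drift report driving a push of the corrected configuration.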
-Using the automation principles around provisioning, configuration, testing, version control and deployment is a great start. Automation is overall going to enable speed of deployment, stability of the networking infrastructure and consistent improvement as well as the process being shared across multiple environments once they have been tested. Such as a fully tested Network Policy that has been fully tested on one environment can be used quickly in another location because of the nature of this being in code vs a manually authored process which it might have been before.
+Using the automation principles around provisioning, configuration, testing, version control and deployment is a great start. Overall, automation enables speed of deployment, stability of the networking infrastructure and consistent improvement, as well as letting a process be shared across multiple environments once it has been tested. For example, a Network Policy that has been fully tested in one environment can be used quickly in another location, because it exists as code rather than as the manually authored process it might have been before.

A really good viewpoint and outline of this thinking can be found here. [Network DevOps](https://www.thousandeyes.com/learning/techtorials/network-devops)

-## Networking The Basics
+## Networking The Basics

-Let's forget the DevOps side of things to begin with here and we now need to look very briefly into some of the Networking fundamentals.
+Let's forget the DevOps side of things to begin with; we now need to look very briefly into some of the Networking fundamentals.

-### Network Devices
+### Network Devices

-**Host** are any devices which send or receive traffic.
+**Hosts** are any devices which send or receive traffic.

![](Images/Day21_Networking1.png)

-**IP Address** the identity of each host.
+**IP Address** is the identity of each host.
![](Images/Day21_Networking2.png)

-**Network** is what transports traffic between hosts. If we did not have networks there would be a lot of manual movement of data!
+**Network** is what transports traffic between hosts. If we did not have networks, there would be a lot of manual movement of data!

-A logical group of hosts which require similar connectivity.
+A network is a logical group of hosts which require similar connectivity.

![](Images/Day21_Networking3.png)

-**Switches** facilitate communication ***within*** a network. A switch forwards data packets between hosts. A switch sends packets directly to hosts.
+**Switches** facilitate communication **_within_** a network. A switch forwards data packets between hosts, sending packets directly to them.

-- Network: A Grouping of hosts which require similar connectivity.
-- Hosts on a Network share the same IP address space.
+- Network: A grouping of hosts which require similar connectivity.
+- Hosts on a Network share the same IP address space.

![](Images/Day21_Networking4.png)

-**Router** facilitates communication between networks. As we said before that a switch looks after communication within a network a router allows us to join these networks together or at least give them access to each other if permitted.
+A **Router** facilitates communication between networks. As we said before, a switch looks after communication within a network; a router allows us to join these networks together, or at least give them access to each other if permitted.

-A router can provide a traffic control point (security, filtering, redirecting) More and more switches also provide some of these functions now.
+A router can provide a traffic control point (security, filtering, redirecting). More and more switches also provide some of these functions now.

-Routers learn which networks they are attached to. These are known as routes, a routing table is all the networks a router knows about.
+Routers learn which networks they are attached to.
These are known as routes; a routing table is all the networks a router knows about.

-A router has an IP address in the networks they are attached to. This IP is also going to be each host's way out of their local network also known as a gateway.
+A router has an IP address in each network it is attached to. This IP is also going to be each host's way out of their local network, also known as a gateway.

-Routers also create the hierarchy in networks I mentioned earlier.
+Routers also create the hierarchy in networks I mentioned earlier.

![](Images/Day21_Networking5.png)

-## Switches vs Routers
+## Switches vs Routers
+
+**Routing** is the process of moving data between networks.

-**Routing** is the process of moving data between networks.
-
- A router is a device whose primary purpose is Routing.

-**Switching** is the process of moving data within networks.
+**Switching** is the process of moving data within networks.

-- A Switch is a device whose primary purpose is switching.
+- A Switch is a device whose primary purpose is switching.

-This is very much a foundational overview of devices as we know there are many different Network Devices such as:
+This is very much a foundational overview of devices, as we know there are many different Network Devices, such as:

-- Access Points
-- Firewalls
-- Load Balancers
+- Access Points
+- Firewalls
+- Load Balancers
- Layer 3 Switches
-- IDS / IPS
-- Proxies
-- Virtual Switches
-- Virtual Routers
+- IDS / IPS
+- Proxies
+- Virtual Switches
+- Virtual Routers

-Although all of these devices are going to perform Routing and/or Switching.
+All of these devices are going to perform Routing and/or Switching in some form.

-Over the next few days, we are going to get to know a little more about this list.
+Over the next few days, we are going to get to know a little more about this list.
-- OSI Model -- Network Protocols +- OSI Model +- Network Protocols - DNS (Domain Name System) -- NAT +- NAT - DHCP -- Subnets +- Subnets -## Resources +## Resources [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8) diff --git a/Days/day22.md b/Days/day22.md index f92a59515..7fbca009d 100644 --- a/Days/day22.md +++ b/Days/day22.md @@ -1,12 +1,13 @@ --- -title: '#90DaysOfDevOps - The OSI Model - The 7 Layers - Day 22' +title: "#90DaysOfDevOps - The OSI Model - The 7 Layers - Day 22" published: false description: 90DaysOfDevOps - The OSI Model - The 7 Layers -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049037 --- + ## The OSI Model - The 7 Layers The overall purpose of networking as an industry is to allow two hosts to share data. Before networking if I want to get data from this host to this host I'd have to plug something into this host walk it over to the other host and plug it into the other host. @@ -15,82 +16,87 @@ Networking allows us to automate this by allowing the host to share data automat This is no different than any language. English has a set of rules that two English speakers must follow. Spanish has its own set of rules. French has its own set of rules, while networking also has its own set of rules -The rules for networking are divided into seven different layers and those layers are known as the OSI model. +The rules for networking are divided into seven different layers and those layers are known as the OSI model. -### Introduction to the OSI Model +### Introduction to the OSI Model The OSI Model (Open Systems Interconnection Model) is a framework used to describe the functions of a networking system. The OSI model characterises computing functions into a universal set of rules and requirements to support interoperability between different products and software. 
In the OSI reference model, the communications between a computing system are split into seven different abstraction layers: **Physical, Data Link, Network, Transport, Session, Presentation, and Application**.

![](Images/Day22_Networking1.png)

### Physical
-Layer 1 in the OSI model and this is known as physical, the premise of being able to get data from one host to another through a means be it physical cable or we could also consider Wi-Fi in this layer as well. We might also see some more legacy hardware seen here around hubs and repeaters to transport the data from one host to another.
+
+Layer 1 in the OSI model is known as the physical layer: the premise of being able to get data from one host to another through some medium, be it a physical cable or Wi-Fi, which we can also consider in this layer. We might also see some more legacy hardware here, such as hubs and repeaters, used to transport the data from one host to another.

![](Images/Day22_Networking2.png)

-### Data Link
-Layer 2, the data link enables a node to node transfer where data is packaged into frames. There is also a level of error correcting that might have occurred at the physical layer. This is also where we introduce or first see MAC addresses.
+### Data Link
+
+Layer 2, the data link layer, enables node-to-node transfer, where data is packaged into frames. There is also a level of error correction for errors that might have occurred at the physical layer. This is also where we introduce or first see MAC addresses.

This is where we see the first mention of switches that we covered on our first day of networking on [Day 21](day21.md)

![](Images/Day22_Networking3.png)

-### Network
-You have likely heard the term layer 3 switches or layer 2 switches. In our OSI model Layer 3, the Network has a goal of an end to end delivery, this is where we see our IP addresses also mentioned in the first-day overview.
+### Network

-Routers and hosts exist at layer 3, remember the router is the ability to route between multiple networks. Anything with an IP could be considered Layer 3.
+You have likely heard the terms layer 3 switch or layer 2 switch. In the OSI model, Layer 3, the network layer, has the goal of end-to-end delivery; this is where we see the IP addresses also mentioned in the first-day overview.
+
+Routers and hosts exist at layer 3; remember, a router gives us the ability to route between multiple networks. Anything with an IP could be considered Layer 3.

![](Images/Day22_Networking4.png)

-So why do we need addressing schemes on both Layers 2 and 3? (MAC Addresses vs IP Addresses)
+So why do we need addressing schemes on both Layers 2 and 3? (MAC Addresses vs IP Addresses)

-If we think about getting data from one host to another, each host has an IP address but there are several switches and routers in between. Each of the devices has that layer 2 MAC address.
+If we think about getting data from one host to another, each host has an IP address, but there are several switches and routers in between. Each of those devices has a layer 2 MAC address.

The layer 2 MAC address will go from host to switch/router only, it is focused on hops whereas the layer 3 IP addresses will stay with that packet of data until it reaches its end host. (End to End)

-IP Addresses - Layer 3 = End to End Delivery
+IP Addresses - Layer 3 = End to End Delivery
+
+MAC Addresses - Layer 2 = Hop to Hop Delivery

-MAC Addresses - Layer 2 = Hop to Hop Delivery
+Now there is a network protocol that we will get into, but not today, called ARP (Address Resolution Protocol), which links our Layer 3 and Layer 2 addresses.

-Now there is a network protocol that we will get into but not today called ARP(Address Resolution Protocol) which links our Layer3 and Layer2 addresses.
+### Transport

-### Transport
-Service to Service delivery, Layer 4 is there to distinguish data streams.
In the same way that Layer 3 and Layer 2 both had their addressing schemes, in Layer 4 we have ports.

+Layer 4 provides service-to-service delivery and is there to distinguish data streams. In the same way that Layer 3 and Layer 2 both had their addressing schemes, in Layer 4 we have ports.

![](Images/Day22_Networking5.png)

-### Session, Presentation, Application
-The distinction between Layers 5,6,7 is or had become somewhat vague.
+### Session, Presentation, Application
+
+The distinction between Layers 5, 6 and 7 is, or has become, somewhat vague.

-It is worth looking at the [TCP IP Model](https://www.geeksforgeeks.org/tcp-ip-model/) to get a more recent understanding.
+It is worth looking at the [TCP IP Model](https://www.geeksforgeeks.org/tcp-ip-model/) to get a more recent understanding.

Let's now try and explain what's happening when hosts are communicating with each other using this networking stack. This host has an application that's going to generate data that is meant to be sent to another host.

The source host is going to go through is what's known as the encapsulation process. That data will be first sent to layer 4.

-Layer 4 is going to add a header to that data which can facilitate the goal of layer 4 which is service to service delivery. This is going to be a port using either TCP or UDP. It is also going to include the source port and destination port.
+Layer 4 is going to add a header to that data which facilitates layer 4's goal of service-to-service delivery. This is going to be a port, using either TCP or UDP, and it will include the source port and destination port.

This may also be known as a segment (Data and Port)

This segment is going to be passed down the OSI stack to layer 3, the network layer, and the network layer is going to add another header to this data.
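The wrapping described in this walkthrough, where each layer adds its own header to the data from the layer above, can be sketched with plain data structures; the addresses and ports below are made up for illustration.

```python
# A toy sketch of OSI encapsulation: each layer wraps the data from the
# layer above with its own header. Addresses and ports are made up.
data = "GET /index.html"

segment = {"src_port": 51000, "dst_port": 80, "payload": data}            # Layer 4
packet = {"src_ip": "192.168.0.10", "dst_ip": "10.0.0.5",
          "payload": segment}                                             # Layer 3
frame = {"src_mac": "aa:bb:cc:00:00:01", "dst_mac": "aa:bb:cc:00:00:02",
         "payload": packet}                                               # Layer 2

# De-encapsulation on the receiving host is simply the reverse: unwrap
# each header until the original application data is left.
assert frame["payload"]["payload"]["payload"] == data
```

Real headers are, of course, packed binary fields rather than dictionaries, but the nesting is the same idea.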
-This header is going to facilitate the goal of layer 3 which is the end to end delivery meaning in this header you will have a source IP address and a destination IP, the header plus data may also be referred to as a packet.
+This header is going to facilitate the goal of layer 3, which is end-to-end delivery, meaning in this header you will have a source IP address and a destination IP address; the header plus data may also be referred to as a packet.

-Layer 3 will then take that packet and hand it off to layer 2, layer 2 will once again add another header to that data to accomplish layer 2's goal of hop to hop delivery meaning this header will include a source and destination mac address.
+Layer 3 will then take that packet and hand it off to layer 2, which will once again add another header to that data to accomplish layer 2's goal of hop-to-hop delivery, meaning this header will include a source and destination MAC address.

This is known as a frame when you have the layer 2 header and data.

-That frame then gets converted into ones and zeros and sent over the Layer 1 Physical cable or wifi.
+That frame then gets converted into ones and zeros and sent over the Layer 1 physical cable or Wi-Fi.

![](Images/Day22_Networking6.png)

-I did mention above the naming for each layer of header plus data but decided to draw this out as well.
+I did mention above the naming for each layer of header plus data, but decided to draw this out as well.

![](Images/Day22_Networking7.png)

-The Application sending the data is being sent somewhere so the receiving is somewhat in reverse to get that back up the stack and into the receiving host.
+The application's data is being sent somewhere, so the receiving side works somewhat in reverse, taking the data back up the stack and into the receiving host.
![](Images/Day22_Networking8.png) -## Resources +## Resources - [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8) - [Practical Networking](http://www.practicalnetworking.net/) diff --git a/Days/day23.md b/Days/day23.md index 0b4ab6361..578054420 100644 --- a/Days/day23.md +++ b/Days/day23.md @@ -1,114 +1,115 @@ --- -title: '#90DaysOfDevOps - Network Protocols - Day 23' +title: "#90DaysOfDevOps - Network Protocols - Day 23" published: false description: 90DaysOfDevOps - Network Protocols -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048704 --- -## Network Protocols -A set of rules and messages that form a standard. An Internet Standard. +## Network Protocols -- ARP - Address Resolution Protocol +A set of rules and messages that form a standard. An Internet Standard. -If you want to get really into the weeds on ARP you can read the Internet Standard here. [RFC 826](https://datatracker.ietf.org/doc/html/rfc826) +- ARP - Address Resolution Protocol -Connects IP addresses to fixed physical machine addresses, also known as MAC addresses across a layer 2 network. +If you want to get really into the weeds on ARP you can read the Internet Standard here. [RFC 826](https://datatracker.ietf.org/doc/html/rfc826) + +Connects IP addresses to fixed physical machine addresses, also known as MAC addresses across a layer 2 network. ![](Images/Day23_Networking1.png) -- FTP - File Transfer Protocol +- FTP - File Transfer Protocol -Allows for the transfer of files from source to destination. Generally, this process is authenticated but there is the ability if configured to use anonymous access. You will more frequently now see FTPS which provides SSL/TLS connectivity to FTP servers from the client for better security. This protocol would be found in the Application layer of the OSI Model. +Allows for the transfer of files from source to destination. 
Generally, this process is authenticated, but anonymous access can also be configured. You will more frequently now see FTPS, which provides SSL/TLS connectivity to FTP servers from the client for better security. This protocol would be found in the Application layer of the OSI Model.

![](Images/Day23_Networking2.png)

-- SMTP - Simple Mail Transfer Protocol
+- SMTP - Simple Mail Transfer Protocol

-Used for email transmission, mail servers use SMTP to send and receive mail messages. You will still find even with Microsoft 365 that the SMTP protocol is used for the same purpose.
+Used for email transmission; mail servers use SMTP to send and receive mail messages. You will still find even with Microsoft 365 that the SMTP protocol is used for the same purpose.

![](Images/Day23_Networking3.png)

-- HTTP - Hyper Text Transfer Protocol
+- HTTP - Hyper Text Transfer Protocol

-HTTP is the foundation of the internet and browsing content. Giving us the ability to easily access our favourite websites. HTTP is still heavily used but HTTPS is more so used or should be used on most of your favourite sites.
+HTTP is the foundation of the internet and browsing content, giving us the ability to easily access our favourite websites. HTTP is still heavily used, but HTTPS is (or should be) used on most of your favourite sites.

![](Images/Day23_Networking4.png)

-- SSL - Secure Sockets Layer | TLS - Transport Layer Security
+- SSL - Secure Sockets Layer | TLS - Transport Layer Security

-TLS has taken over from SSL, TLS is a [Cryptographic Protocol]() that provides secure communications over a network. It can and will be found in the mail, Instant Messaging and other applications but most commonly it is used to secure HTTPS.
+TLS has taken over from SSL; TLS is a **Cryptographic Protocol** that provides secure communications over a network.
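Python's standard `ssl` module is a handy way to see TLS defaults in practice; `create_default_context()` returns a client-side context with certificate verification and hostname checking already enabled:

```python
import ssl

# Client-side TLS context with sensible defaults: the server's
# certificate is verified and its hostname is checked.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

This is the context you would pass to `http.client` or `urllib` when making an HTTPS request.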
It can and will be found in email, Instant Messaging and other applications, but most commonly it is used to secure HTTPS.

![](Images/Day23_Networking5.png)

-- HTTPS - HTTP secured with SSL/TLS
+- HTTPS - HTTP secured with SSL/TLS

-An extension of HTTP, used for secure communications over a network, HTTPS is encrypted with TLS as mentioned above. The focus here was to bring authentication, privacy and integrity whilst data is exchanged between hosts.
+An extension of HTTP, used for secure communications over a network, HTTPS is encrypted with TLS as mentioned above. The focus here was to bring authentication, privacy and integrity whilst data is exchanged between hosts.

![](Images/Day23_Networking6.png)

-- DNS - Domain Name System
+- DNS - Domain Name System

-The DNS is used to map human-friendly domain names for example we all know [google.com](https://google.com) but if you were to open a browser and put in [8.8.8.8](https://8.8.8.8) you would get Google as we pretty much know it. However good luck trying to remember all of the IP addresses for all of your websites where some of them we even use google to find information.
+DNS is used to map human-friendly domain names to IP addresses. For example, we all know [google.com](https://google.com), but if you were to open a browser and put in [8.8.8.8](https://8.8.8.8) you would get Google as we pretty much know it. However, good luck trying to remember the IP addresses for all of your websites, some of which we even use Google to find in the first place.

-This is where DNS comes in, it ensures that hosts, services and other resources are reachable.
+This is where DNS comes in; it ensures that hosts, services and other resources are reachable.

-On all hosts, if they require internet connectivity then they must have DNS to be able to resolve those domain names. DNS is an area you could spend Days and Years on learning.
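You can watch DNS perform this name-to-address mapping from Python's standard library. Resolving `localhost` works without any internet access; swapping in a public domain (assuming connectivity) would return its public IP:

```python
import socket

# Resolve a hostname to an IPv4 address, just as a browser does
# behind the scenes before sending any HTTP traffic.
addr = socket.gethostbyname("localhost")
print(addr)  # usually 127.0.0.1

# With internet access you could resolve a public name too, e.g.:
# socket.gethostbyname("google.com")
```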
I would also say from experience that DNS is mostly the common cause of all errors when it comes to Networking. Not sure if a Network engineer would agree there though.
+On all hosts, if they require internet connectivity then they must have DNS to be able to resolve those domain names. DNS is an area you could spend Days and Years on learning. I would also say from experience that DNS is the most common cause of errors when it comes to Networking. Not sure if a Network engineer would agree there though.

![](Images/Day23_Networking7.png)

-- DHCP - Dynamic Host Configuration Protocol
+- DHCP - Dynamic Host Configuration Protocol

-We have discussed a lot about protocols that are required to make our hosts work, be it accessing the internet or transferring files between each other.
+We have discussed a lot about protocols that are required to make our hosts work, be it accessing the internet or transferring files between each other.

-There are 4 things that we need on every host for it to be able to achieve both of those tasks.
+There are 4 things that we need on every host for it to be able to achieve both of those tasks.

-- IP Address
-- Subnet Mask
-- Default Gateway
-- DNS
+- IP Address
+- Subnet Mask
+- Default Gateway
+- DNS

-We have covered IP address being a unique address for your host on the network it resides, we can think of this as our house number.
+We have covered the IP address being a unique address for your host on the network where it resides; we can think of this as our house number.

-Subnet mask we will cover shortly, but you can think of this as postcode or zip code.
+Subnet mask we will cover shortly, but you can think of this as a postcode or zip code.

-A default gateway is the IP of our router generally on our network providing us with that Layer 3 connectivity.
+A default gateway is generally the IP of the router on our network, providing us with that Layer 3 connectivity.
You could think of this as the single road that allows us out of our street.

-Then we have DNS as we just covered to help us convert complicated public IP addresses to more suitable and rememberable domain names. Maybe we can think of this as the giant sorting office to make sure we get the right post.
+Then we have DNS, as we just covered, to help us convert complicated public IP addresses to more suitable and memorable domain names. Maybe we can think of this as the giant sorting office to make sure we get the right post.

-As I said each host requires these 4 things, if you have 1000 or 10,000 hosts then that is going to take you a very long time to determine each one of these individually. This is where DHCP comes in and allows you to determine a scope for your network and then this protocol will distribute to all available hosts in your network.
+As I said, each host requires these 4 things; if you have 1000 or 10,000 hosts then that is going to take you a very long time to set each one of these individually. This is where DHCP comes in: it allows you to define a scope for your network and then this protocol will distribute addresses to all available hosts in your network.

-Another example is you head into a coffee shop, grab a coffee and sit down with your laptop or your phone let's call that your host. You connect your host to the coffee shop wifi and you gain access to the internet, messages and mail start pinging through and you can navigate web pages and social media. When you connected to the coffee shop wifi your machine would have picked up a DHCP address either from a dedicated DHCP server or most likely from the router also handling DHCP.
+Another example is you head into a coffee shop, grab a coffee and sit down with your laptop or your phone; let's call that your host. You connect your host to the coffee shop WiFi and you gain access to the internet, messages and mail start pinging through and you can navigate web pages and social media.
When you connected to the coffee shop WiFi your machine would have picked up a DHCP address either from a dedicated DHCP server or most likely from the router also handling DHCP.

![](Images/Day23_Networking8.png)

-### Subnetting
+### Subnetting

A subnet is a logical subdivision of an IP network.

-Subnets break large networks into smaller, more manageable networks that run more efficiently.
+Subnets break large networks into smaller, more manageable networks that run more efficiently.

-Each subnet is a logical subdivision of the bigger network. Connected devices with enough subnet share common IP address identifiers, enabling them to communicate with each other.
+Each subnet is a logical subdivision of the bigger network. Connected devices within a subnet share common IP address identifiers, enabling them to communicate with each other.

-Routers manage communication between subnets.
+Routers manage communication between subnets.

-The size of a subnet depends on the connectivity requirements and the network technology used.
+The size of a subnet depends on the connectivity requirements and the network technology used.

An organisation is responsible for determining the number and size of the subnets within the limits of address space
-available, and the details remain local to that organisation. Subnets can also be segmented into even smaller subnets for things like Point to Point links, or subnetworks supporting a few devices.
+available, and the details remain local to that organisation. Subnets can also be segmented into even smaller subnets for things like Point to Point links, or subnetworks supporting a few devices.

Among other advantages, segmenting large networks into subnets enables IP address
-reallocation and relieves network congestion, streamlining, network communication and efficiency.
+reallocation and relieves network congestion, streamlining network communication and efficiency.

Subnets can also improve network security.
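Python's built-in `ipaddress` module makes these subdivisions concrete; here a /24 network is broken into four smaller /26 subnets:

```python
import ipaddress

# A /24 network: 256 addresses in total.
network = ipaddress.ip_network("192.168.1.0/24")
print(network.num_addresses)  # 256

# Break it into smaller, more manageable /26 subnets.
subnets = list(network.subnets(new_prefix=26))
for subnet in subnets:
    print(subnet)
# 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, 192.168.1.192/26

# Devices within the same subnet share common address identifiers.
host = ipaddress.ip_address("192.168.1.70")
print(host in subnets[1])  # True: .70 sits inside 192.168.1.64/26
```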
-If a section of a network is compromised, it can be quarantined, making it difficult for bad actors to move around the larger network. +If a section of a network is compromised, it can be quarantined, making it difficult for bad actors to move around the larger network. ![](Images/Day23_Networking9.png) -## Resources +## Resources - [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8) - [Practical Networking](http://www.practicalnetworking.net/) diff --git a/Days/day24.md b/Days/day24.md index 4744a673c..f50a1c31b 100644 --- a/Days/day24.md +++ b/Days/day24.md @@ -1,22 +1,24 @@ --- -title: '#90DaysOfDevOps - Network Automation - Day 24' +title: "#90DaysOfDevOps - Network Automation - Day 24" published: false description: 90DaysOfDevOps - Network Automation -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048805 --- -## Network Automation + +## Network Automation ### Basics of network automation -Primary drivers for Network Automation -- Achieve Agility -- Reduce Cost -- Eliminate Errors -- Ensure Compliance -- Centralised Management +Primary drivers for Network Automation + +- Achieve Agility +- Reduce Cost +- Eliminate Errors +- Ensure Compliance +- Centralised Management The automation adoption process is specific to each business. There's no one size fits all when it comes to deploying automation, the ability to identify and embrace the approach that works best for your organisation is critical in advancing towards maintaining or creating a more agile environment, the focus should always be on business value and end-user experience. 
(We said something similar right at the start in regards to the whole of DevOps and the culture change and the automated process that this brings)

@@ -26,98 +28,98 @@ To break this down you would need to identify how the task or process that you'r

Have a framework or design structure that you're trying to achieve, know what your end goal is and then work step by step towards achieving that goal, measuring the automation success at various stages based on the business outcomes.

-Build concepts modelled around existing applications there's no need to design the concepts around automation in a bubble because they need to be applied to your application, your service, and your infrastructure, so begin to build the concepts and model them around your existing infrastructure, you're existing applications.
+Build concepts modelled around existing applications: there's no need to design the concepts around automation in a bubble because they need to be applied to your application, your service, and your infrastructure, so begin to build the concepts and model them around your existing infrastructure, your existing applications.

-### Approach to Networking Automation
+### Approach to Networking Automation

-We should identify the tasks and perform a discovery on network change requests so that you have the most common issues and problems to automate a solution to.
+We should identify the tasks and perform a discovery on network change requests so that you have the most common issues and problems to automate a solution to.

-- Make a list of all the change requests and workflows that are currently being addressed manually.
+- Determine the most common, time-consuming and error-prone activities. +- Prioritise the requests by taking a business-driven approach. +- This is the framework for building an automation process, what must be automated and what must not. -We should then divide tasks and analyse how different network functions work and interact with each other. +We should then divide tasks and analyse how different network functions work and interact with each other. -- The infrastructure/Network team receives change tickets at multiple layers to deploy applications. -- Based on Network services, divide them into different areas and understand how they interact with each other. - - Application Optimisation - - ADC (Application Delivery Controller) - - Firewall - - DDI (DNS, DHCP, IPAM etc) - - Routing - - Others -- Identify various dependencies to address business and cultural differences and bring in cross-team collaboration. +- The infrastructure/Network team receives change tickets at multiple layers to deploy applications. +- Based on Network services, divide them into different areas and understand how they interact with each other. + - Application Optimisation + - ADC (Application Delivery Controller) + - Firewall + - DDI (DNS, DHCP, IPAM etc) + - Routing + - Others +- Identify various dependencies to address business and cultural differences and bring in cross-team collaboration. -Reusable policies, define and simplify reusable service tasks, processes and input/outputs. +Reusable policies, define and simplify reusable service tasks, processes and input/outputs. -- Define offerings for various services, processes and input/outputs. -- Simplifying the deployment process will reduce the time to market for both new and existing workloads. -- Once you have a standard process, it can be sequenced and aligned to individual requests for a multi-threaded approach and delivery. +- Define offerings for various services, processes and input/outputs. 
+- Simplifying the deployment process will reduce the time to market for both new and existing workloads. +- Once you have a standard process, it can be sequenced and aligned to individual requests for a multi-threaded approach and delivery. -Combine the policies with business-specific activities. How does implementing this policy help the business? Saves time? Saves Money? Provides a better business outcome? +Combine the policies with business-specific activities. How does implementing this policy help the business? Saves time? Saves Money? Provides a better business outcome? -- Ensure that service tasks are interoperable. -- Associate the incremental service tasks so that they align to create business services. -- Allow for the flexibility to associate and re-associate service tasks on demand. -- Deploy Self-Service capabilities and pave the way for improved operational efficiency. -- Allow for the multiple technology skillsets to continue to contribute with oversight and compliance. +- Ensure that service tasks are interoperable. +- Associate the incremental service tasks so that they align to create business services. +- Allow for the flexibility to associate and re-associate service tasks on demand. +- Deploy Self-Service capabilities and pave the way for improved operational efficiency. +- Allow for the multiple technology skillsets to continue to contribute with oversight and compliance. -**Iterate** on the policies and process, adding and improving while maintaining availability and service. +**Iterate** on the policies and process, adding and improving while maintaining availability and service. -- Start small by automating existing tasks. -- Get familiar with the automation process, so that you can identify other areas that can benefit from automation. -- iterate your automation initiatives, adding agility incrementally while maintaining the required availability. -- Taking an incremental approach paves the way for success! 
+- Start small by automating existing tasks.
+- Get familiar with the automation process, so that you can identify other areas that can benefit from automation.
+- Iterate your automation initiatives, adding agility incrementally while maintaining the required availability.
+- Taking an incremental approach paves the way for success!

Orchestrate the network service!

-- Automation of the deployment process is required to deliver applications rapidly.
-- Creating an agile service environment requires different elements to be managed across technology skillsets.
-- Prepare for an end to end orchestration that provides for control over automation and the order of deployments.
+- Automation of the deployment process is required to deliver applications rapidly.
+- Creating an agile service environment requires different elements to be managed across technology skillsets.
+- Prepare for an end to end orchestration that provides for control over automation and the order of deployments.
+
+## Network Automation Tools

-## Network Automation Tools
+The good news here is that for the most part, the tools we use here for Network automation are generally the same that we will use for other areas of automation or what we have already covered so far or what we will cover in future sessions.

-The good news here is that for the most part, the tools we use here for Network automation are generally the same that we will use for other areas of automation or what we have already covered so far or what we will cover in future sessions.
+Operating System - As I have throughout this challenge, I am focusing on doing most of my learning with a Linux OS; those reasons were given in the Linux section, but almost all of the tooling that we will touch, albeit cross-OS platform today, started out as Linux-based applications or tools.
-Operating System - As I have throughout this challenge, I am focusing on doing most of my learning with a Linux OS, those reasons were given in the Linux section but almost all of the tooling that we will touch albeit cross-OS platforms maybe today they all started as Linux based applications or tools, to begin with. +Integrated Development Environment (IDE) - Again not much to say here other than throughout I would suggest Visual Studio Code as your IDE, based on the extensive plugins that are available for so many different languages. -Integrated Development Environment (IDE) - Again not much to say here other than throughout I would suggest Visual Studio Code as your IDE, based on the extensive plugins that are available for so many different languages. +Configuration Management - We have not got to the Configuration management section yet, but it is very clear that Ansible is a favourite in this area for managing and automating configurations. Ansible is written in Python but you do not need to know Python. -Configuration Management - We have not got to the Configuration management section yet, but it is very clear that Ansible is a favourite in this area for managing and automating configurations. Ansible is written in Python but you do not need to know Python. - -- Agentless +- Agentless - Only requires SSH -- Large Support Community +- Large Support Community - Lots of Network Modules -- Push only model -- Configured with YAML -- Open Source! +- Push only model +- Configured with YAML +- Open Source! [Link to Ansible Network Modules](https://docs.ansible.com/ansible/2.9/modules/list_of_network_modules.html) -We will also touch on **Ansible Tower** in the configuration management section, see this as the GUI front end for Ansible. +We will also touch on **Ansible Tower** in the configuration management section, see this as the GUI front end for Ansible. 
-CI/CD - Again we will cover more about the concepts and tooling around this but it's important to at least mention here as this spans not only networking but all provisioning of service and platform.
+CI/CD - Again we will cover more about the concepts and tooling around this but it's important to at least mention here as this spans not only networking but all provisioning of service and platform.

In particular, Jenkins seems to be a popular tool for Network Automation.

-- Monitors git repository for changes and then initiates them.
+- Monitors a git repository for changes and then initiates jobs from them.

-Version Control - Again something we will dive deeper into later on.
+Version Control - Again something we will dive deeper into later on.

- Git provides version control of your code on your local device
- Cross-Platform
-- GitHub, GitLab, BitBucket etc are online websites where you define your repositories and upload your code.
+- GitHub, GitLab, BitBucket etc are online websites where you define your repositories and upload your code.

-Language | Scripting - Something we have not covered here is Python as a language, I decided to dive into Go instead as the programming language based on my circumstances, I would say that it was a close call between Golang and Python and Python it seems to be the winner for Network Automation.
+Language | Scripting - Something we have not covered here is Python as a language. I decided to dive into Go instead as the programming language based on my circumstances, but I would say that it was a close call between Golang and Python, and Python seems to be the winner for Network Automation.

-- Nornir is something to mention here, an automation framework written in Python. This seems to take the role of Ansible but specifically around Network Automation. [Nornir documentation](https://nornir.readthedocs.io/en/latest/)
+- Nornir is something to mention here, an automation framework written in Python.
This seems to take the role of Ansible but specifically around Network Automation. [Nornir documentation](https://nornir.readthedocs.io/en/latest/)

Analyse APIs - Postman is a great tool for analysing RESTful APIs. Helps to build, test and modify APIs.

- POST >>> To create a resource object.
-- GET >>> To retrieve a resources.
-- PUT >>> To create or replace the resources.
+- GET >>> To retrieve a resource.
+- PUT >>> To create or replace the resource.
- PATCH >>> To create or update the resource object.
- DELETE >>> To delete a resource.

@@ -131,11 +133,11 @@ Analyse APIs - Postman is a great tool for analysing RESTful APIs. Helps to buil

[Network Test Automation](https://pubhub.devnetcloud.com/media/genie-feature-browser/docs/#/)

-Over the next 3 days, I am planning to get more hands-on with some of the things we have covered and put some work in around Python and Network automation.
+Over the next 3 days, I am planning to get more hands-on with some of the things we have covered and put some work in around Python and Network automation.

-We have nowhere near covered all of the networking topics so far but wanted to make this broad enough to follow along and still keep learning from the resources I am adding below.
+We have nowhere near covered all of the networking topics so far but I wanted to make this broad enough to follow along and still keep learning from the resources I am adding below.
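The HTTP methods listed above can also be exercised straight from Python's standard library rather than Postman. A hedged sketch using a hypothetical placeholder endpoint (the URL is made up, and no request is actually sent):

```python
import urllib.request

# Build (but do not send) requests for each HTTP method against a
# hypothetical API endpoint; the URL is a placeholder.
base = "https://api.example.com/devices/1"

get_req = urllib.request.Request(base)  # GET is the default method
post_req = urllib.request.Request(base, data=b'{"name": "sw01"}',
                                  method="POST")
patch_req = urllib.request.Request(base, data=b'{"name": "sw02"}',
                                   method="PATCH")
delete_req = urllib.request.Request(base, method="DELETE")

for req in (get_req, post_req, patch_req, delete_req):
    print(req.get_method())
# GET, POST, PATCH, DELETE
```

Sending one of these for real would be a call to `urllib.request.urlopen(req)` against a live endpoint.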
-## Resources
+## Resources

- [3 Necessary Skills for Network Automation](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s)
- [Computer Networking full course](https://www.youtube.com/watch?v=IPvYjXCsTg8)

diff --git a/Days/day25.md b/Days/day25.md
index 975f86f48..0e7c2abd6 100644
--- a/Days/day25.md
+++ b/Days/day25.md
@@ -1,23 +1,24 @@
---
-title: '#90DaysOfDevOps - Python for Network Automation - Day 25'
+title: "#90DaysOfDevOps - Python for Network Automation - Day 25"
published: false
description: 90DaysOfDevOps - Python for Network Automation
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049038
---

-## Python for Network Automation
-Python is the standard language used for automated network operations.
+## Python for Network Automation

-Whilst it is not only for network automation it seems to be everywhere when you are looking for resources and as previously mentioned if it's not Python then it's generally Ansible which is written also in Python.
+Python is the standard language used for automated network operations.

-I think I have mentioned this already but during the "Learn a programming language" section I chose Golang over Python for reasons around my company is developing in Go so that was a good reason for me to learn but if that was not the case then Python would have taken that time.
+Whilst it is not only for network automation, it seems to be everywhere when you are looking for resources, and as previously mentioned, if it's not Python then it's generally Ansible, which is also written in Python.

-- Readability and ease of use - It seems that Python seems just makes sense. There don't seem to be the requirements around `{}` in the code to start and end blocks. Couple this with a strong IDE like VS Code you have a pretty easy start when wanting to run some python code.
+I think I have mentioned this already, but during the "Learn a programming language" section I chose Golang over Python because my company is developing in Go, so that was a good reason for me to learn it, but if that was not the case then Python would have taken that time.

-Pycharm might be another IDE worth mentioning here.
+- Readability and ease of use - It seems that Python just makes sense. There don't seem to be the requirements around `{}` in the code to start and end blocks. Couple this with a strong IDE like VS Code and you have a pretty easy start when wanting to run some Python code.
+
+PyCharm might be another IDE worth mentioning here.

- Libraries - The extensibility of Python is the real gold mine here, I mentioned before that this is not just for Network Automation but in fact, there are plenty of libraries for all sorts of devices and configurations. You can see the vast amount here [PyPi](https://pypi.python.org/pypi)

@@ -25,144 +26,144 @@ When you want to download the library to your workstation, then you use a tool c

- Powerful & Efficient - Remember during the Go days I went through the "Hello World" scenario and we went through I think 6 lines of code? In Python it is

-```
+```
print('hello world')
```

-Put all of the above points together and it should be easy to see why Python is generally mentioned as the de-facto tool when working on automating.
+Put all of the above points together and it should be easy to see why Python is generally mentioned as the de-facto tool when working on automation.

-I think it's important to note that it's possible that several years back there were scripts that might have interacted with your network devices to maybe automate the backup of configuration or to gather logs and other insights into your devices. The automation we are talking about here is a little different and that's because the overall networking landscape has also changed to suit this way of thinking better and enabled more automation.
+I think it's important to note that it's possible that several years back there were scripts that might have interacted with your network devices to maybe automate the backup of configuration or to gather logs and other insights into your devices. The automation we are talking about here is a little different and that's because the overall networking landscape has also changed to suit this way of thinking better and enabled more automation. -- Software-Defined Network - SDN Controllers take the responsibility of delivering the control plane configuration to all devices on the network, meaning just a single point of contact for any network changes, no longer having to telnet or SSH into every device and also relying on humans to do this which has a repeatable chance of failure or misconfiguration. +- Software-Defined Network - SDN Controllers take the responsibility of delivering the control plane configuration to all devices on the network, meaning just a single point of contact for any network changes, no longer having to telnet or SSH into every device and also relying on humans to do this which has a repeatable chance of failure or misconfiguration. -- High-Level Orchestration - Go up a level from those SDN controllers and this allows for orchestration of service levels then there is the integration of this orchestration layer into your platforms of choice, VMware, Kubernetes, Public Clouds etc. +- High-Level Orchestration - Go up a level from those SDN controllers and this allows for orchestration of service levels then there is the integration of this orchestration layer into your platforms of choice, VMware, Kubernetes, Public Clouds etc. -- Policy-based management - What do you want to have? What is the desired state? You describe this and the system has all the details on how to figure it out to become the desired state. +- Policy-based management - What do you want to have? What is the desired state? 
You describe this and the system has all the details on how to figure it out to become the desired state.

## Setting up the lab environment

-Not everyone has access to physical routers, switches and other networking devices.
+Not everyone has access to physical routers, switches and other networking devices.

-I wanted to make it possible for us to look at some of the tooling pre-mentioned but also get hands-on and learn how to automate the configuration of our networks.
+I wanted to make it possible for us to look at some of the tooling mentioned previously but also get hands-on and learn how to automate the configuration of our networks.

-When it comes to options there are a few that we can choose from.
+When it comes to options there are a few that we can choose from.

- [GNS3 VM](https://www.gns3.com/software/download-vm)
- [Eve-ng](https://www.eve-ng.net/)
-- [Unimus](https://unimus.net/) Not a lab environment but an interesting concept.
+- [Unimus](https://unimus.net/) Not a lab environment but an interesting concept.

-We will build our lab out using [Eve-ng](https://www.eve-ng.net/) as mentioned before you can use a physical device but to be honest a virtual environment means that we can have a sandbox environment to test many different scenarios. Plus being able to play with different devices and topologies might be of interest.
+We will build our lab out using [Eve-ng](https://www.eve-ng.net/). As mentioned before, you can use a physical device, but a virtual environment means that we can have a sandbox to test many different scenarios. Plus, being able to play with different devices and topologies might be of interest.

-We are going to do everything on EVE-NG with the community edition.
+We are going to do everything on EVE-NG with the community edition. 
-### Getting started
+### Getting started

The community edition comes in ISO and OVF formats for [download](https://www.eve-ng.net/index.php/download/)

-We will be using the OVF download but with the ISO there is the option to build out on a bare metal server without the need for a hypervisor.
+We will be using the OVF download, but with the ISO there is the option to build out on a bare metal server without the need for a hypervisor.

![](Images/Day25_Networking1.png)

-For our walkthrough, we will be using VMware Workstation as I have a license via my vExpert but you can equally use VMware Player or any of the other options mentioned in the [documentation](https://www.eve-ng.net/index.php/documentation/installation/system-requirement/)Unfortunately we cannot use our previously used Virtual box!
+For our walkthrough, we will be using VMware Workstation as I have a license via my vExpert membership, but you can equally use VMware Player or any of the other options mentioned in the [documentation](https://www.eve-ng.net/index.php/documentation/installation/system-requirement/). Unfortunately, we cannot use our previously used VirtualBox!

-This is also where I had an issue with GNS3 with Virtual Box even though supported.
+This is also where I had an issue with GNS3 and VirtualBox, even though it is supported.

-[Download VMware Workstation Player - FREE](https://www.vmware.com/uk/products/workstation-player.html)
+[Download VMware Workstation Player - FREE](https://www.vmware.com/uk/products/workstation-player.html)

-[VMware Workstation PRO](https://www.vmware.com/uk/products/workstation-pro.html) Also noted that there is an evaluation period for free!
+[VMware Workstation PRO](https://www.vmware.com/uk/products/workstation-pro.html) Also note that there is a free evaluation period!

-### Installation on VMware Workstation PRO
+### Installation on VMware Workstation PRO

-Now we have our hypervisor software downloaded and installed, and we have the EVE-NG OVF downloaded. 
If you are using VMware Player please let me know if this process is the same.
+Now we have our hypervisor software downloaded and installed, and we have the EVE-NG OVF downloaded. If you are using VMware Player, please let me know if this process is the same.

-We are now ready to get things configured.
+We are now ready to get things configured.

-Open VMware Workstation and then select `file` and `open`
+Open VMware Workstation and then select `file` and `open`.

![](Images/Day25_Networking2.png)

-When you download the EVE-NG OVF Image it is going to be within a compressed file. Extract the contents out into its folder so it looks like this.
+When you download the EVE-NG OVF image it is going to be within a compressed file. Extract the contents into their own folder so it looks like this.

![](Images/Day25_Networking3.png)

-Navigate to the location where you downloaded the EVE-NG OVF image and begin the import.
+Navigate to the location where you downloaded the EVE-NG OVF image and begin the import.

-Give it a recognisable name and store the virtual machine somewhere on your system.
+Give it a recognisable name and store the virtual machine somewhere on your system.

![](Images/Day25_Networking4.png)

When the import is complete increase the number of processors to 4 and the memory allocated to 8 GB. (This should be the case after import with the latest version if not then edit VM settings)

-Also, make sure the Virtualise Intel VT-x/EPT or AMD-V/RVI checkbox is enabled. This option instructs the VMware workstation to pass the virtualisation flags to the guest OS (nested virtualisation) This was the issue I was having with GNS3 with Virtual Box even though my CPU allows this.
+Also, make sure the Virtualise Intel VT-x/EPT or AMD-V/RVI checkbox is enabled. This option instructs VMware Workstation to pass the virtualisation flags to the guest OS (nested virtualisation). This was the issue I was having with GNS3 and VirtualBox, even though my CPU allows this. 
![](Images/Day25_Networking5.png)

-### Power on & Access
+### Power on & Access

-Sidenote & Rabbit hole: Remember I mentioned that this would not work with VirtualBox! Well yeah had the same issue with VMware Workstation and EVE-NG but it was not the fault of the virtualisation platform!
+Sidenote & Rabbit hole: Remember I mentioned that this would not work with VirtualBox! Well, I had the same issue with VMware Workstation and EVE-NG, but it was not the fault of the virtualisation platform!

-I have WSL2 running on my Windows Machine and this seems to remove the capability of being able to run anything nested inside of your environment. I am confused as to why the Ubuntu VM does run as it seems to take out the Intel VT-d virtualisation aspect of the CPU when using WSL2.
+I have WSL2 running on my Windows machine and this seems to remove the capability of running anything nested inside your environment. I am confused as to why the Ubuntu VM does run, as WSL2 seems to take over the Intel VT-d virtualisation aspect of the CPU.

-To resolve this we can run the following command on our Windows machine and reboot the system, note that whilst this is off then you will not be able to use WSL2.
+To resolve this we can run the following command on our Windows machine and reboot the system; note that whilst this is off you will not be able to use WSL2.

`bcdedit /set hypervisorlaunchtype off`

-When you want to go back and use WSL2 then you will need to run this command and reboot.
+When you want to go back and use WSL2 then you will need to run this command and reboot.

`bcdedit /set hypervisorlaunchtype auto`

-Both of these commands should be run as administrator!
+Both of these commands should be run as administrator!

-Ok back to the show, You should now have a powered-on machine in VMware Workstation and you should have a prompt looking similar to this. 
+Ok, back to the show. You should now have a powered-on machine in VMware Workstation, and you should have a prompt looking similar to this.

![](Images/Day25_Networking6.png)

-On the prompt above you can use:
+On the prompt above you can use:

username = root
password = eve

-You will then be asked to provide the root password again, this will be used to SSH into the host later on.
+You will then be asked to provide the root password again; this will be used to SSH into the host later on.

-We then can change the hostname.
+We can then change the hostname.

![](Images/Day25_Networking7.png)

-Next, we define a DNS Domain Name, I have used the one below but I am not sure if this will need to be changed later on.
+Next, we define a DNS domain name. I have used the one below, but I am not sure if this will need to be changed later on.

![](Images/Day25_Networking8.png)

-We then configure networking, I am selecting static so that the IP address given will be persistent after reboots.
+We then configure networking. I am selecting static so that the IP address given will be persistent after reboots.

![](Images/Day25_Networking9.png)

-The final step, provide a static IP address from a network that is reachable from your workstation.
+The final step is to provide a static IP address from a network that is reachable from your workstation.

![](Images/Day25_Networking10.png)

-There are some additional steps here where you will have to provide a subnet mask for your network, default gateway and DNS.
+There are some additional steps here where you will have to provide a subnet mask for your network, default gateway and DNS.

-Once finished it will reboot, when it is back up you can take your static IP address and put this into your browser.
+Once finished it will reboot; when it is back up you can take your static IP address and put this into your browser. 
![](Images/Day25_Networking11.png)

-The default username for the GUI is `admin` and the password is `eve` while the default username for SSH is `root` and the password is `eve` but this would have been changed if you changed during the setup.
+The default username for the GUI is `admin` and the password is `eve`, while the default username for SSH is `root` and the password is `eve`, but these will have been changed if you changed them during the setup.

![](Images/Day25_Networking12.png)

-I chose HTML5 for the console vs native as this will open a new tab in your browser when you are navigating through different consoles.
+I chose HTML5 for the console vs native as this will open a new tab in your browser when you are navigating through different consoles.

-Next up we are going to:
+Next up we are going to:

-- Install the EVE-NG client pack
+- Install the EVE-NG client pack
- Load some network images into EVE-NG
-- Build a Network Topology
-- Adding Nodes
-- Connecting Nodes
-- Start building Python Scripts
+- Build a Network Topology
+- Add Nodes
+- Connect Nodes
+- Start building Python Scripts
- Look at telnetlib, Netmiko, Paramiko and Pexpect

-## Resources
+## Resources

- [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg)
- [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8)

diff --git a/Days/day26.md b/Days/day26.md
index 1a438b1bb..6f6b4b9d0 100644
--- a/Days/day26.md
+++ b/Days/day26.md
@@ -1,19 +1,20 @@
---
-title: '#90DaysOfDevOps - Building our Lab - Day 26'
+title: "#90DaysOfDevOps - Building our Lab - Day 26"
published: false
description: 90DaysOfDevOps - Building our Lab
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048762
---
+
## Building our Lab

-We are going to continue our setup of our emulated network using EVE-NG and then hopefully get some devices deployed and start thinking about how we can automate the 
configuration of these devices. On [Day 25](day25.md) we covered the installation of EVE-NG onto our machine using VMware Workstation.
+We are going to continue the setup of our emulated network using EVE-NG and then hopefully get some devices deployed and start thinking about how we can automate the configuration of these devices. On [Day 25](day25.md) we covered the installation of EVE-NG onto our machine using VMware Workstation.

### Installing EVE-NG Client

-There is also a client pack that allows us to choose which application is used when we SSH to the devices. It will also set up Wireshark for packet captures between links. You can grab the client pack for your OS (Windows, macOS, Linux).
+There is also a client pack that allows us to choose which application is used when we SSH to the devices. It will also set up Wireshark for packet captures between links. You can grab the client pack for your OS (Windows, macOS, Linux).

[EVE-NG Client Download](https://www.eve-ng.net/index.php/download/)

@@ -21,90 +22,90 @@ There is also a client pack that allows us to choose which application is used w

Quick Tip: If you are using Linux as your client then there is this [client pack](https://github.com/SmartFinn/eve-ng-integration).

-The install is straightforward next, next and I would suggest leaving the defaults.
+The install is a straightforward next, next, and I would suggest leaving the defaults.

### Obtaining network images

-This step has been a challenge, I have followed some videos that I will link at the end that links to some resources and downloads for our router and switch images whilst telling us how and where to upload them.
+This step has been a challenge. I have followed some videos, which I will link at the end, that point to resources and downloads for our router and switch images whilst telling us how and where to upload them.

-It is important to note that I using everything for education purposes. 
I would suggest downloading official images from network vendors.
+It is important to note that I am using everything for education purposes. I would suggest downloading official images from network vendors.

-[Blog & Links to YouTube videos](https://loopedback.com/2019/11/15/setting-up-eve-ng-for-ccna-ccnp-ccie-level-studies-includes-multiple-vendor-node-support-an-absolutely-amazing-study-tool-to-check-out-asap/)
+[Blog & Links to YouTube videos](https://loopedback.com/2019/11/15/setting-up-eve-ng-for-ccna-ccnp-ccie-level-studies-includes-multiple-vendor-node-support-an-absolutely-amazing-study-tool-to-check-out-asap/)

[How To Add Cisco VIRL vIOS image to Eve-ng](https://networkhunt.com/how-to-add-cisco-virl-vios-image-to-eve-ng/)

-Overall the steps here are a little complicated and could be much easier but the above blogs and videos walk through the process of adding the images to your EVE-NG box.
+Overall the steps here are a little complicated and could be much easier, but the above blogs and videos walk through the process of adding the images to your EVE-NG box.

-I used FileZilla to transfer the qcow2 to the VM over SFTP.
+I used FileZilla to transfer the qcow2 images to the VM over SFTP.

-For our lab, we need Cisco vIOS L2 (switches) and Cisco vIOS (router)
+For our lab, we need Cisco vIOS L2 (switches) and Cisco vIOS (router).

### Create a Lab

-Inside the EVE-NG web interface, we are going to create our new network topology. We will have four switches and one router that will act as our gateway to outside networks.
+Inside the EVE-NG web interface, we are going to create our new network topology. We will have four switches and one router that will act as our gateway to outside networks. 
-| Node | IP Address |
-| ----------- | ----------- |
-| Router | 10.10.88.110|
-| Switch1 | 10.10.88.111|
-| Switch2 | 10.10.88.112|
-| Switch3 | 10.10.88.113|
-| Switch4 | 10.10.88.114|
+| Node    | IP Address   |
+| ------- | ------------ |
+| Router  | 10.10.88.110 |
+| Switch1 | 10.10.88.111 |
+| Switch2 | 10.10.88.112 |
+| Switch3 | 10.10.88.113 |
+| Switch4 | 10.10.88.114 |

#### Adding our Nodes to EVE-NG

-When you first log in to EVE-NG you will see a screen like the below, we want to start by creating our first lab.
+When you first log in to EVE-NG you will see a screen like the one below; we want to start by creating our first lab.

![](Images/Day26_Networking2.png)

-Give your lab a name and the other fields are optional.
+Give your lab a name; the other fields are optional.

![](Images/Day26_Networking3.png)

-You will be then greeted with a blank canvas to start creating your network. Right-click on your canvas and choose add node.
+You will then be greeted with a blank canvas to start creating your network. Right-click on your canvas and choose add node.

-From here you will have a long list of node options, If you have followed along above you will have the two in blue shown below and the others are going to be grey and unselectable.
+From here you will have a long list of node options. If you have followed along above you will have the two in blue shown below, and the others are going to be grey and unselectable.

![](Images/Day26_Networking4.png)

-We want to add the following to our lab:
+We want to add the following to our lab:

-- 1 x Cisco vIOS Router
+- 1 x Cisco vIOS Router
- 4 x Cisco vIOS Switch

-Run through the simple wizard to add them to your lab and it should look something like this.
+Run through the simple wizard to add them to your lab and it should look something like this.

![](Images/Day26_Networking5.png)

-#### Connecting our nodes
+#### Connecting our nodes

-We now need to add our connectivity between our routers and switches. 
We can do this quite easily by hovering over the device and seeing the connection icon as per below and then connecting that to the device we wish to connect to. +We now need to add our connectivity between our routers and switches. We can do this quite easily by hovering over the device and seeing the connection icon as per below and then connecting that to the device we wish to connect to. ![](Images/Day26_Networking6.png) -When you have finished connecting your environment you may also want to add some way to define physical boundaries or locations using boxes or circles which can also be found in the right-click menu. You can also add text which is useful when we want to define our naming or IP addresses in our labs. +When you have finished connecting your environment you may also want to add some way to define physical boundaries or locations using boxes or circles which can also be found in the right-click menu. You can also add text which is useful when we want to define our naming or IP addresses in our labs. -I went ahead and made my lab look like the below. +I went ahead and made my lab look like the below. ![](Images/Day26_Networking7.png) -You will also notice that the lab above is all powered off, we can start our lab by selecting everything and right-clicking and selecting start selected. +You will also notice that the lab above is all powered off, we can start our lab by selecting everything and right-clicking and selecting start selected. ![](Images/Day26_Networking8.png) -Once we have our lab up and running you will be able to console into each device and you will notice at this stage they are pretty dumb with no configuration. We can add some configuration to each node by copying or creating your own in each terminal. +Once we have our lab up and running you will be able to console into each device and you will notice at this stage they are pretty dumb with no configuration. 
We can add some configuration to each node by copying or creating your own in each terminal. -I will leave my configuration in the Networking folder of the repository for reference. +I will leave my configuration in the Networking folder of the repository for reference. -| Node | Configuration | -| ----------- | ----------- | -| Router | [R1](Networking/R1) | -| Switch1 | [SW1](Networking/SW1) | -| Switch2 | [SW2](Networking/SW2) | -| Switch3 | [SW3](Networking/SW3) | -| Switch4 | [SW4](Networking/SW4) | +| Node | Configuration | +| ------- | --------------------- | +| Router | [R1](Networking/R1) | +| Switch1 | [SW1](Networking/SW1) | +| Switch2 | [SW2](Networking/SW2) | +| Switch3 | [SW3](Networking/SW3) | +| Switch4 | [SW4](Networking/SW4) | -## Resources +## Resources - [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg) - [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8) @@ -113,7 +114,7 @@ I will leave my configuration in the Networking folder of the repository for ref - [Practical Networking](http://www.practicalnetworking.net/) - [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) -Most of the examples I am using here as I am not a Network Engineer have come from this extensive book which is not free but I am using some of the scenarios to help understand Network Automation. +Most of the examples I am using here as I am not a Network Engineer have come from this extensive book which is not free but I am using some of the scenarios to help understand Network Automation. 
- [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512) diff --git a/Days/day27.md b/Days/day27.md index 18da540bb..721a1043a 100644 --- a/Days/day27.md +++ b/Days/day27.md @@ -1,76 +1,77 @@ --- -title: '#90DaysOfDevOps - Getting Hands-On with Python & Network - Day 27' +title: "#90DaysOfDevOps - Getting Hands-On with Python & Network - Day 27" published: false description: 90DaysOfDevOps - Getting Hands-On with Python & Network -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048735 --- + ## Getting Hands-On with Python & Network -In this final section of Networking fundamentals, we are going to cover some automation tasks and tools with our lab environment created on [Day 26](day26.md) +In this final section of Networking fundamentals, we are going to cover some automation tasks and tools with our lab environment created on [Day 26](day26.md) We will be using an SSH tunnel to connect to our devices from our client vs telnet. The SSH tunnel created between client and device is encrypted. We also covered SSH in the Linux section on [Day 18](day18.md) ## Access our virtual emulated environment -For us to interact with our switches we either need a workstation inside the EVE-NG network or you can deploy a Linux box there with Python installed to perform your automation ([Resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)) or you can do something like me and define a cloud for access from your workstation. +For us to interact with our switches we either need a workstation inside the EVE-NG network or you can deploy a Linux box there with Python installed to perform your automation ([Resource for setting up Linux inside EVE-NG](https://www.youtube.com/watch?v=3Qstk3zngrY)) or you can do something like me and define a cloud for access from your workstation. 
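Before pointing any automation at the lab, it can save a lot of head-scratching to confirm each device is actually reachable on its SSH port from your workstation. Here is a minimal sketch using only the Python standard library; the IP addresses are the ones from my lab and are purely illustrative, yours will differ depending on your DHCP range:

```python
import socket


def is_ssh_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts and unreachable hosts.
        return False


if __name__ == "__main__":
    # Example addresses from my lab; replace with your own device IPs.
    devices = ["192.168.169.115", "192.168.169.178"]
    for ip in devices:
        print(ip, "reachable" if is_ssh_reachable(ip, timeout=1.0) else "unreachable")
```

Running this before a larger script gives you a quick pass/fail list instead of cryptic connection errors halfway through a job.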
![](Images/Day27_Networking3.png)

-To do this, we have right-clicked on our canvas and we have selected network and then selected "Management(Cloud0)" this will bridge out to our home network.
+To do this, we right-click on our canvas, select network and then select "Management(Cloud0)"; this will bridge out to our home network.

![](Images/Day27_Networking4.png)

However, we do not have anything inside this network so we need to add connections from the new network to each of our devices. (My networking knowledge needs more attention and I feel that you could just do this next step to the top router and then have connectivity to the rest of the network through this one cable?)

-I have then logged on to each of our devices and I have run through the following commands for the interfaces applicable to where the cloud comes in.
+I have then logged on to each of our devices and run through the following commands for the interfaces applicable to where the cloud comes in.

```
enable
config t
int gi0/0
-IP add DHCP
-no sh
-exit
+ip add dhcp
+no sh
+exit
exit
sh ip int br
```

-The final step gives us the DHCP address from our home network. 
My device network list is as follows: -| Node | IP Address | Home Network IP | -| ----------- | ----------- | ----------- | -| Router | 10.10.88.110| 192.168.169.115 | -| Switch1 | 10.10.88.111| 192.168.169.178 | -| Switch2 | 10.10.88.112| 192.168.169.193 | -| Switch3 | 10.10.88.113| 192.168.169.125 | -| Switch4 | 10.10.88.114| 192.168.169.197 | +| Node | IP Address | Home Network IP | +| ------- | ------------ | --------------- | +| Router | 10.10.88.110 | 192.168.169.115 | +| Switch1 | 10.10.88.111 | 192.168.169.178 | +| Switch2 | 10.10.88.112 | 192.168.169.193 | +| Switch3 | 10.10.88.113 | 192.168.169.125 | +| Switch4 | 10.10.88.114 | 192.168.169.197 | -### SSH to a network device +### SSH to a network device -With the above in place, we can now connect to our devices on our home network using our workstation. I am using Putty but also have access to other terminals such as git bash that give me the ability to SSH to our devices. +With the above in place, we can now connect to our devices on our home network using our workstation. I am using Putty but also have access to other terminals such as git bash that give me the ability to SSH to our devices. Below you can see we have an SSH connection to our router device. (R1) ![](Images/Day27_Networking5.png) -### Using Python to gather information from our devices +### Using Python to gather information from our devices The first example of how we can leverage Python is to gather information from all of our devices and in particular, I want to be able to connect to each one and run a simple command to provide me with interface configuration and settings. I have stored this script here [netmiko_con_multi.py](Networking/netmiko_con_multi.py) -Now when I run this I can see each port configuration over all of my devices. +Now when I run this I can see each port configuration over all of my devices. 
![](Images/Day27_Networking6.png) -This could be handy if you have a lot of different devices, create this one script so that you can centrally control and understand quickly all of the configurations in one place. +This could be handy if you have a lot of different devices, create this one script so that you can centrally control and understand quickly all of the configurations in one place. -### Using Python to configure our devices +### Using Python to configure our devices -The above is useful but what about using Python to configure our devices, in our scenario we have a trunked port between `SW1` and `SW2` again imagine if this was to be done across many of the same switches we want to automate that and not have to manually connect to each switch to make the configuration change. +The above is useful but what about using Python to configure our devices, in our scenario we have a trunked port between `SW1` and `SW2` again imagine if this was to be done across many of the same switches we want to automate that and not have to manually connect to each switch to make the configuration change. -We can use [netmiko_sendchange.py](Networking/netmiko_sendchange.py) to achieve this. This will connect over SSH and perform that change on our `SW1` which will also change to `SW2`. +We can use [netmiko_sendchange.py](Networking/netmiko_sendchange.py) to achieve this. This will connect over SSH and perform that change on our `SW1` which will also change to `SW2`. ![](Images/Day27_Networking7.png) @@ -78,51 +79,51 @@ Now for those that look at the code, you will see the message appears and tells ![](Images/Day27_Networking8.png) -### backing up your device configurations +### backing up your device configurations -Another use case would be to capture our network configurations and make sure we have those backed up, but again we don't want to be connecting to every device we have on our network so we can also automate this using [backup.py](Networking/backup.py). 
You will also need to populate the [backup.txt](Networking/backup.txt) with the IP addresses you want to backup.
+Another use case would be to capture our network configurations and make sure we have those backed up, but again we don't want to be connecting to every device we have on our network, so we can also automate this using [backup.py](Networking/backup.py). You will also need to populate the [backup.txt](Networking/backup.txt) file with the IP addresses you want to back up.

-Run your script and you should see something like the below.
+Run your script and you should see something like the below.

![](Images/Day27_Networking9.png)

-That could be me just writing a simple print script in python so I should show you the backup files as well.
+That could be me just writing a simple print script in Python, so I should show you the backup files as well.

![](Images/Day27_Networking10.png)

-### Paramiko
+### Paramiko

A widely used Python module for SSH. You can find out more at the official GitHub link [here](https://github.com/paramiko/paramiko)

-We can install this module using the `pip install paramiko` command.
+We can install this module using the `pip install paramiko` command.

![](Images/Day27_Networking1.png)

-We can verify the installation by entering the Python shell and importing the paramiko module.
+We can verify the installation by entering the Python shell and importing the paramiko module.

![](Images/Day27_Networking2.png)

-### Netmiko
+### Netmiko

-The netmiko module targets network devices specifically whereas paramiko is a broader tool for handling SSH connections overall.
+The netmiko module targets network devices specifically, whereas paramiko is a broader tool for handling SSH connections overall. 
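To give a flavour of what netmiko looks like in practice, here is a minimal sketch of connecting to one of the lab switches and running a show command. The host address and credentials are assumptions for illustration (taken from my lab, yours will differ), and netmiko must be installed first with `pip install netmiko`:

```python
def make_device(host: str, username: str = "admin", password: str = "cisco") -> dict:
    """Build the connection dictionary that netmiko's ConnectHandler expects.

    The username/password defaults here are placeholders for illustration.
    """
    return {
        "device_type": "cisco_ios",  # netmiko driver name for Cisco IOS devices
        "host": host,
        "username": username,
        "password": password,
    }


def show_interfaces(host: str) -> str:
    """Connect over SSH and return the output of 'show ip int brief'."""
    # Imported lazily so make_device() can be used even without netmiko installed.
    from netmiko import ConnectHandler

    with ConnectHandler(**make_device(host)) as conn:
        return conn.send_command("show ip int brief")


if __name__ == "__main__":
    # Hypothetical address for SW1 on my home network; yours will differ.
    # With netmiko installed and a reachable device you could run:
    #   print(show_interfaces("192.168.169.178"))
    print(make_device("192.168.169.178"))
```

The same device dictionary pattern is what scripts like netmiko_con_multi.py build in a loop, one entry per device, which is why a helper like this keeps multi-device scripts tidy.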
-Netmiko which we have used above alongside paramiko can be installed using `pip install netmiko`
+Netmiko, which we have used above alongside paramiko, can be installed using `pip install netmiko`.

-Netmiko supports many network vendors and devices, you can find a list of supported devices on the [GitHub Page](https://github.com/ktbyers/netmiko#supports)
+Netmiko supports many network vendors and devices; you can find a list of supported devices on the [GitHub Page](https://github.com/ktbyers/netmiko#supports)

-### Other modules
+### Other modules

-It is also worth mentioning a few other modules that we have not had the chance to look at but they give a lot more functionality when it comes to network automation.
+It is also worth mentioning a few other modules that we have not had the chance to look at, but they give a lot more functionality when it comes to network automation.

-`netaddr` is used for working with and manipulating IP addresses, again the installation is simple with `pip install netaddr`
+`netaddr` is used for working with and manipulating IP addresses; again, the installation is simple with `pip install netaddr`.

-you might find yourself wanting to store a lot of your switch configuration in an excel spreadsheet, the `xlrd` will allow your scripts to read the excel workbook and convert rows and columns into a matrix. `pip install xlrd` to get the module installed. 
Some more use cases where network automation can be used that I have not had the chance to look into can be found [here](https://github.com/ktbyers/pynet/tree/master/presentations/dfwcug/examples) -I think this wraps up our Networking section of the #90DaysOfDevOps, Networking is one area that I have not touched for a while really and there is so much more to cover but I am hoping between my notes and the resources shared throughout it is helpful for some. +I think this wraps up our Networking section of the #90DaysOfDevOps, Networking is one area that I have not touched for a while really and there is so much more to cover but I am hoping between my notes and the resources shared throughout it is helpful for some. -## Resources +## Resources - [Free Course: Introduction to EVE-NG](https://www.youtube.com/watch?v=g6B0f_E0NMg) - [EVE-NG - Creating your first lab](https://www.youtube.com/watch?v=9dPWARirtK8) @@ -131,8 +132,8 @@ I think this wraps up our Networking section of the #90DaysOfDevOps, Networking - [Practical Networking](http://www.practicalnetworking.net/) - [Python Network Automation](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) -Most of the examples I am using here as I am not a Network Engineer have come from this extensive book which is not free but I am using some of the scenarios to help understand Network Automation. +Most of the examples I am using here as I am not a Network Engineer have come from this extensive book which is not free but I am using some of the scenarios to help understand Network Automation. - [Hands-On Enterprise Automation with Python (Book)](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512) -See you on [Day 28](day28.md) where will start looking into cloud computing and get a good grasp and foundational knowledge of the topic and what is available. 
+See you on [Day 28](day28.md) where we will start looking into cloud computing and get a good grasp and foundational knowledge of the topic and what is available.
diff --git a/Days/day28.md b/Days/day28.md
index 745fe7181..dad2cf082 100644
--- a/Days/day28.md
+++ b/Days/day28.md
@@ -1,72 +1,73 @@
---
-title: '#90DaysOfDevOps - The Big Picture: DevOps & The Cloud - Day 28'
+title: "#90DaysOfDevOps - The Big Picture: DevOps & The Cloud - Day 28"
published: false
description: 90DaysOfDevOps - The Big Picture DevOps & The Cloud
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048737
---
+
## The Big Picture: DevOps & The Cloud

-When it comes to cloud computing and what is offered, it goes very nicely with the DevOps ethos and processes. We can think of Cloud Computing as bringing the technology and services whilst DevOps as we have mentioned many times before is about the process and process improvement.
+When it comes to cloud computing and what is offered, it goes very nicely with the DevOps ethos and processes. We can think of Cloud Computing as bringing the technology and services, whilst DevOps, as we have mentioned many times before, is about the process and process improvement.

-But to start with that cloud learning journey is a steep one and making sure you know and understand all elements or the best service to choose for the right price point is confusing.
+But to start with, that cloud learning journey is a steep one, and making sure you know and understand all the elements, or the best service to choose for the right price point, is confusing.

![](Images/Day28_Cloud1.png)

-Does the public cloud require a DevOps mindset? My answer here is not, but to really take advantage of cloud computing and possibly avoid those large cloud bills that so many people have been hit with then it is important to think of Cloud Computing and DevOps together.
+Does the public cloud require a DevOps mindset?
My answer here is no, but to really take advantage of cloud computing and possibly avoid those large cloud bills that so many people have been hit with, it is important to think of Cloud Computing and DevOps together.

-If we look at what we mean by the Public Cloud at a 40,000ft view, it is about removing some responsibility to a managed service to enable you and your team to focus on more important aspects which name should be the application and the end-users. After all the Public Cloud is just someone else's computer.
+If we look at what we mean by the Public Cloud at a 40,000ft view, it is about removing some responsibility to a managed service to enable you and your team to focus on more important aspects, which should be the application and the end-users. After all, the Public Cloud is just someone else's computer.

![](Images/Day28_Cloud2.png)

-In this first section, I want to get into and describe a little more of what a Public Cloud is and some of the building blocks that get referred to as the Public Cloud overall.
+In this first section, I want to get into and describe a little more of what a Public Cloud is and some of the building blocks that get referred to as the Public Cloud overall.

-### SaaS
+### SaaS

-The first area to cover is Software as a service, this service is removing almost all of the management overhead of a service that you may have once run on-premises. Let's think about Microsoft Exchange for our email, this used to be a physical box that lived in your data centre or maybe in the cupboard under the stairs. You would need to feed and water that server. By that I mean you would need to keep it updated and you would be responsible for buying the server hardware, most likely installing the operating system, installing the applications required and then keeping that patched, if anything went wrong you would have to troubleshoot and get things back up and running.
+The first area to cover is Software as a Service; this removes almost all of the management overhead of a service that you may have once run on-premises. Let's think about Microsoft Exchange for our email; this used to be a physical box that lived in your data centre or maybe in the cupboard under the stairs. You would need to feed and water that server. By that I mean you would need to keep it updated, and you would be responsible for buying the server hardware, most likely installing the operating system, installing the applications required and then keeping everything patched; if anything went wrong you would have to troubleshoot and get things back up and running.
-I am sure there is a story with DevOps and SaaS-based applications but I am struggling to find out what they may be. I know Azure DevOps has some great integrations with Microsoft 365 that I might have a look into and report back to. +I am sure there is a story with DevOps and SaaS-based applications but I am struggling to find out what they may be. I know Azure DevOps has some great integrations with Microsoft 365 that I might have a look into and report back to. ![](Images/Day28_Cloud3.png) ### Public Cloud -Next up we have the public cloud, most people would think of this in a few different ways, some would see this as the hyper scalers only such as Microsoft Azure, Google Cloud Platform and AWS. +Next up we have the public cloud, most people would think of this in a few different ways, some would see this as the hyper scalers only such as Microsoft Azure, Google Cloud Platform and AWS. ![](Images/Day28_Cloud4.png) -Some will also see the public cloud as a much wider offering that includes those hyper scalers but also the thousands of MSPs all over the world as well. For this post, we are going to consider Public Cloud including hyper scalers and MSPs, although later on, we will specifically dive into one or more of the hyper scalers to get that foundational knowledge. +Some will also see the public cloud as a much wider offering that includes those hyper scalers but also the thousands of MSPs all over the world as well. For this post, we are going to consider Public Cloud including hyper scalers and MSPs, although later on, we will specifically dive into one or more of the hyper scalers to get that foundational knowledge. 
![](Images/Day28_Cloud5.png)

-*thousands more companies could land on this, I am merely picking from local, regional, telco and global brands I have worked with and am aware of.*
+_thousands more companies could land on this; I am merely picking from local, regional, telco and global brands I have worked with and am aware of._

-We mentioned in the SaaS section that Cloud removed the responsibility or the burden of having to administer parts of a system. If SaaS we see a lot of the abstraction layers removed i.e the physical systems, network, storage, operating system, and even application to some degree. When it comes to the cloud there are various levels of abstraction we can remove or keep depending on your requirements.
+We mentioned in the SaaS section that Cloud removed the responsibility or the burden of having to administer parts of a system. With SaaS we see a lot of the abstraction layers removed, i.e. the physical systems, network, storage, operating system, and even the application to some degree. When it comes to the cloud there are various levels of abstraction we can remove or keep depending on your requirements.

-We have already mentioned SaaS but there are at least two more to mention regarding the public cloud.
+We have already mentioned SaaS, but there are at least two more to mention regarding the public cloud.

-Infrastructure as a service - You can think of this layer as a virtual machine but whereas on-premises you will be having to look after the physical layer in the cloud this is not the case, the physical is the cloud provider's responsibility and you will manage and administer the Operating System, the data and the applications you wish to run.
+Infrastructure as a service - You can think of this layer as a virtual machine, but whereas on-premises you would have to look after the physical layer, in the cloud this is not the case; the physical is the cloud provider's responsibility, and you will manage and administer the Operating System, the data and the applications you wish to run.

-Platform as a service - This continues to remove the responsibility of layers and this is really about you taking control of the data and the application but not having to worry about the underpinning hardware or operating system.
+Platform as a service - This continues to remove the responsibility of layers; this is really about you taking control of the data and the application but not having to worry about the underpinning hardware or operating system.

There are many other aaS offerings out there but these are the two fundamentals. You might see offerings around StaaS (Storage as a service) which provide you with your storage layer but without having to worry about the hardware underneath. Or you might have heard CaaS for Containers as a service which we will get onto, later on, another aaS we will look to cover over the next 7 days is FaaS (Functions as a Service) where maybe you do not need a running system up all the time and you just want a function to be executed as and when.

-There are many ways in which the public cloud can provide abstraction layers of control that you wish to pass up and pay for.
+There are many ways in which the public cloud can provide abstraction layers of control that you wish to pass up and pay for.

![](Images/Day28_Cloud6.png)

### Private Cloud

-Having your own data centre is not a thing of the past I would think that this has become a resurgence among a lot of companies that have found the OPEX model difficult to manage as well as skill sets in just using the public cloud.
+Having your own data centre is not a thing of the past; I would say there has been a resurgence among a lot of companies that have found the OPEX model difficult to manage, as well as the skill sets needed in just using the public cloud.

-The important thing to note here is the public cloud is likely now going to be your responsibility and it is going to be on your premises.
+The important thing to note here is the private cloud is likely now going to be your responsibility and it is going to be on your premises.

We have some interesting things happening in this space not only with VMware that dominated the virtualisation era and on-premises infrastructure environments. We also have the hyper scalers offering an on-premises version of their public clouds.
@@ -74,27 +75,27 @@ We have some interesting things happening in this space not only with VMware tha

### Hybrid Cloud

-To follow on from the Public and Private cloud mentions we also can span across both of these environments to provide flexibility between the two, maybe take advantage of services available in the public cloud but then also take advantage of features and functionality of being on-premises or it might be a regulation that dictates you having to store data locally.
+To follow on from the Public and Private cloud mentions, we can also span across both of these environments to provide flexibility between the two: maybe take advantage of services available in the public cloud but then also take advantage of features and functionality of being on-premises, or it might be a regulation that dictates you having to store data locally.

![](Images/Day28_Cloud8.png)

-Putting this all together we have a lot of choices for where we store and run our workloads.
+Putting this all together, we have a lot of choices for where we store and run our workloads.

![](Images/Day28_Cloud9.png)

-Before we get into a specific hyper-scale, I have asked the power of Twitter where we should go?
+Before we get into a specific hyperscaler, I asked the power of Twitter where we should go.

![](Images/Day28_Cloud10.png)

[Link to Twitter Poll](https://twitter.com/MichaelCade1/status/1486814904510259208?s=20&t=x2n6QhyOXSUs7Pq0itdIIQ)

-Whichever one gets the highest percentage we will take a deeper dive into the offerings, I think the important to mention though is that services from all of these are quite similar which is why I say to start with one because I have found that in knowing the foundation of one and how to create virtual machines, set up networking etc. I have been able to go to the others and quickly ramp up in those areas.
+Whichever one gets the highest percentage, we will take a deeper dive into its offerings. The important thing to mention though is that services from all of these are quite similar, which is why I say to start with one: I have found that in knowing the foundation of one and how to create virtual machines, set up networking etc., I have been able to go to the others and quickly ramp up in those areas.

-Either way, I am going to share some great **FREE** resources that cover all three of the hyper scalers.
+Either way, I am going to share some great **FREE** resources that cover all three of the hyper scalers.

-I am also going to build out a scenario as I have done in the other sections where we can build something as we move through the days.
+I am also going to build out a scenario, as I have done in the other sections, where we can build something as we move through the days.
-## Resources
+## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
diff --git a/Days/day29.md b/Days/day29.md
index d7b1f35db..6f8946ba5 100644
--- a/Days/day29.md
+++ b/Days/day29.md
@@ -1,134 +1,137 @@
---
-title: '#90DaysOfDevOps - Microsoft Azure Fundamentals - Day 29'
+title: "#90DaysOfDevOps - Microsoft Azure Fundamentals - Day 29"
published: false
description: 90DaysOfDevOps - Microsoft Azure Fundamentals
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048705
---

-## Microsoft Azure Fundamentals
-Before we get going, the winner of the Twitter poll was Microsoft Azure, hence the title of the page. It was close and also quite interesting to see the results come in over the 24 hours.
+## Microsoft Azure Fundamentals
+
+Before we get going, the winner of the Twitter poll was Microsoft Azure, hence the title of the page. It was close and also quite interesting to see the results come in over the 24 hours.

![](Images/Day29_Cloud1.png)

-I would say in terms of covering this topic is going to give me a better understanding and update around the services available on Microsoft Azure, I lean towards Amazon AWS when it comes to my day today. I have however left resources I had lined up for all three of the major cloud providers.
+I would say that covering this topic is going to give me a better understanding of, and an update on, the services available on Microsoft Azure; I lean towards Amazon AWS when it comes to my day-to-day. I have however left resources I had lined up for all three of the major cloud providers.

-I do appreciate that there are more and the poll only included these 3 and in particular, there were some comments about Oracle Cloud. I would love to hear more about other cloud providers being used out in the wild.
+I do appreciate that there are more and the poll only included these 3; in particular, there were some comments about Oracle Cloud. I would love to hear more about other cloud providers being used out in the wild.

-### The Basics
+### The Basics

-- Provides public cloud services
+- Provides public cloud services
- Geographically distributed (60+ Regions worldwide)
-- Accessed via the internet and/or private connections
-- Multi-tenant model
-- Consumption-based billing - (Pay as you go | Pay as you grow)
-- A large number of service types and offerings for different requirements.
+- Accessed via the internet and/or private connections
+- Multi-tenant model
+- Consumption-based billing - (Pay as you go | Pay as you grow)
+- A large number of service types and offerings for different requirements.

- [Microsoft Azure Global Infrastructure](https://infrastructuremap.microsoft.com/explore)

-As much as we spoke about SaaS and Hybrid Cloud we are not planning on covering those topics here.
+As much as we spoke about SaaS and Hybrid Cloud, we are not planning on covering those topics here.

The best way to get started and follow along is by clicking the link, which will enable you to spin up a [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/)

-### Regions
+### Regions

-I linked the interactive map above, but we can see the image below the breadth of regions being offered in the Microsoft Azure platform worldwide.
+I linked the interactive map above, but we can see in the image below the breadth of regions being offered in the Microsoft Azure platform worldwide.
![](Images/Day29_Cloud2.png)

-*image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)*
+_image taken from [Microsoft Docs - 01/05/2021](https://docs.microsoft.com/en-us/azure/networking/microsoft-global-network)_

-You will also see several "sovereign" clouds meaning they are not linked or able to speak to the other regions, for example, these would be associated with governments such as the `AzureUSGovernment` also `AzureChinaCloud` and others.
+You will also see several "sovereign" clouds, meaning they are not linked or able to speak to the other regions; for example, these would be associated with governments, such as `AzureUSGovernment`, `AzureChinaCloud` and others.

-When we are deploying our services within Microsoft Azure we will choose a region for almost everything. However, it is important to note that not every service is available in every region. You can see [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all) at the time of my writing this that in West Central US we cannot use Azure Databricks.
+When we are deploying our services within Microsoft Azure we will choose a region for almost everything. However, it is important to note that not every service is available in every region. You can see from [Products available by region](https://azure.microsoft.com/en-us/global-infrastructure/services/?products=all) that, at the time of my writing this, we cannot use Azure Databricks in West Central US.

-I also mentioned "almost everything" above, there are certain services that are linked to the region such as Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps, and some more.
+I also mentioned "almost everything" above; there are certain services that are not tied to a region, such as Azure Bot Services, Bing Speech, Azure Virtual Desktop, Static Web Apps, and some more.
-Behind the scenes, a region may be made up of more than one data centre. These will be referred to as Availability Zones.
+Behind the scenes, a region may be made up of more than one data centre. These are referred to as Availability Zones.

-In the below image you will see and again this is taken from the Microsoft official documentation it describes what a region is and how it is made up of Availability Zones. However not all regions have multiple Availability Zones.
+In the below image, again taken from the official Microsoft documentation, you will see what a region is and how it is made up of Availability Zones. However, not all regions have multiple Availability Zones.

![](Images/Day29_Cloud3.png)

-The Microsoft Documentation is very good, and you can read up more on [Regions and Availability Zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview) here.
+The Microsoft documentation is very good, and you can read more on [Regions and Availability Zones](https://docs.microsoft.com/en-us/azure/availability-zones/az-overview) here.

-### Subscriptions
+### Subscriptions

-Remember we mentioned that Microsoft Azure is a consumption model cloud you will find that all major cloud providers follow this model.
+Remember we mentioned that Microsoft Azure is a consumption-model cloud; you will find that all major cloud providers follow this model.

-If you are an Enterprise then you might want or have an Enterprise Agreement set up with Microsoft to enable your company to consume these Azure Services.
+If you are an Enterprise then you might want or have an Enterprise Agreement set up with Microsoft to enable your company to consume these Azure Services.

-If you are like me and you are using Microsoft Azure for education then we have a few other options.
+If you are like me and you are using Microsoft Azure for education then we have a few other options.
-We have the [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/) which generally gives you several free cloud credits to spend in Azure over some time.
+We have the [Microsoft Azure Free Account](https://azure.microsoft.com/en-gb/free/), which generally gives you a number of free cloud credits to spend in Azure over a period of time.

There is also the ability to use a Visual Studio subscription which gives you maybe some free credits each month alongside your annual subscription to Visual Studio, this was commonly known as the MSDN years ago. [Visual Studio](https://azure.microsoft.com/en-us/pricing/member-offers/credit-for-visual-studio-subscribers/)

Then finally there is the hand over a credit card and have a pay as you go, model. [Pay-as-you-go](https://azure.microsoft.com/en-us/pricing/purchase-options/pay-as-you-go/)

-A subscription can be seen as a boundary between different subscriptions potentially cost centres but completely different environments. A subscription is where the resources are created.
+A subscription can be seen as a boundary between potentially different cost centres or completely different environments. A subscription is where the resources are created.

### Management Groups

Management groups give us the ability to segregate control across our Azure Active Directory (AD) or our tenant environment. Management groups allow us to control policies, Role Based Access Control (RBAC), and budgets.

-Subscriptions belong to these management groups so you could have many subscriptions in your Azure AD Tenant, these subscriptions then can also control policies, RBAC, and budgets.
+Subscriptions belong to these management groups, so you could have many subscriptions in your Azure AD Tenant; these subscriptions can then also control policies, RBAC, and budgets.
+
+### Resource Manager and Resource Groups
+
+#### Azure Resource Manager

-### Resource Manager and Resource Groups
+- JSON based API that is built on resource providers.
+- Resources belong to a resource group and share a common life cycle.
+- Parallelism
+- JSON-based deployments are declarative, idempotent and understand dependencies between resources to govern creation and order.

-**Azure Resource Manager**
-- JSON based API that is built on resource providers.
-- Resources belong to a resource group and share a common life cycle.
-- Parallelism
-- JSON-Based deployments are declarative, idempotent and understand dependencies between resources to govern creation and order.
+#### Resource Groups

-**Resource Groups**
-- Every Azure Resource Manager resource exists in one and only one resource group!
-- Resource groups are created in a region that can contain resources from outside the region.
-- Resources can be moved between resource groups
-- Resource groups are not walled off from other resource groups, there can be communication between resource groups.
-- Resource Groups can also control policies, RBAC, and budgets.
+- Every Azure Resource Manager resource exists in one and only one resource group!
+- Resource groups are created in a region but can contain resources from outside that region.
+- Resources can be moved between resource groups.
+- Resource groups are not walled off from other resource groups; there can be communication between resource groups.
+- Resource Groups can also control policies, RBAC, and budgets.

-### Hands-On
+### Hands-On

-Let's go and get connected and make sure we have a **Subscription** available to us. We can check our simple out of the box **Management Group**, We can then go and create a new dedicated **Resource Group** in our preferred **Region**.
-When we first login to our [Azure portal](https://portal.azure.com/#home) you will see at the top the ability to search for resources, services and docs. +When we first login to our [Azure portal](https://portal.azure.com/#home) you will see at the top the ability to search for resources, services and docs. ![](Images/Day29_Cloud4.png) -We are going to first look at our subscription, you will see here that I am using a Visual Studio Professional subscription which gives me some free credit each month. +We are going to first look at our subscription, you will see here that I am using a Visual Studio Professional subscription which gives me some free credit each month. ![](Images/Day29_Cloud5.png) -If we go into that you will get a wider view and a look into what is happening or what can be done with the subscription, we can see billing information with control functions on the left where you can define IAM Access Control and further down there are more resources available. +If we go into that you will get a wider view and a look into what is happening or what can be done with the subscription, we can see billing information with control functions on the left where you can define IAM Access Control and further down there are more resources available. ![](Images/Day29_Cloud6.png) -There might be a scenario where you have multiple subscriptions and you want to manage them all under one, this is where management groups can be used to segregate responsibility groups. In mine below, you can see there is just my tenant root group with my subscription. +There might be a scenario where you have multiple subscriptions and you want to manage them all under one, this is where management groups can be used to segregate responsibility groups. In mine below, you can see there is just my tenant root group with my subscription. -You will also see in the previous image that the parent management group is the same id used on the tenant root group. 
+You will also see in the previous image that the parent management group is the same ID used on the tenant root group.

![](Images/Day29_Cloud7.png)

-Next up we have Resource groups, this is where we combine our resources and we can easily manage them in one place. I have a few created for various other projects.
+Next up we have Resource Groups; this is where we combine our resources and can easily manage them in one place. I have a few created for various other projects.

![](Images/Day29_Cloud8.png)

-With what we are going to be doing over the next few days, we want to create our resource group. This is easily done in this console by hitting the create option on the previous image.
+With what we are going to be doing over the next few days, we want to create our resource group. This is easily done in this console by hitting the create option in the previous image.

![](Images/Day29_Cloud9.png)

-A validation step takes place and then you have the chance to review your creation and then create. You will also see down the bottom "Download a template for automation" this allows us to grab the JSON format so that we can perform this simple in an automated fashion later on if we wanted, we will cover this later on as well.
+A validation step takes place and then you have the chance to review your creation and then create. You will also see down the bottom "Download a template for automation"; this allows us to grab the JSON format so that we can perform this simple task in an automated fashion later on if we want. We will cover this later on as well.

![](Images/Day29_Cloud10.png)

-Hit create, then in our list of resource groups, we now have our "90DaysOfDevOps" group ready for what we do in the next session.
+Hit create, then in our list of resource groups, we now have our "90DaysOfDevOps" group ready for what we do in the next session.
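For reference, the "Download a template for automation" option produces an Azure Resource Manager (ARM) JSON template describing what the portal is about to deploy. As a rough illustration of the shape of such a template (the storage account resource and its name below are hypothetical examples, not what this walkthrough deploys), a minimal ARM template looks something like this:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2021-04-01",
      "name": "storage90daysexample",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

Deploying a template like this into a resource group is the declarative, idempotent ARM behaviour described earlier: running the same template twice leaves the resources in the same state.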
![](Images/Day29_Cloud11.png)

-## Resources
+## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
diff --git a/Days/day30.md b/Days/day30.md
index 8d2a1a8ed..d7bcc13da 100644
--- a/Days/day30.md
+++ b/Days/day30.md
@@ -1,104 +1,103 @@
---
-title: '#90DaysOfDevOps - Microsoft Azure Security Models - Day 30'
+title: "#90DaysOfDevOps - Microsoft Azure Security Models - Day 30"
published: false
description: 90DaysOfDevOps - Microsoft Azure Security Models
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049039
---

-## Microsoft Azure Security Models
-
-Following on from the Microsoft Azure Overview, we are going to start with Azure Security and see where this can help in our day to day. For the most part, I have found the built-in roles have been sufficient but knowing that we can create and work with many different areas of authentication and configurations. I have found Microsoft Azure to be quite advanced with its Active Directory background compared to other public clouds.
## Microsoft Azure Security Models
-This is one area in which Microsoft Azure seemingly works differently from other public cloud providers, in Azure there is ALWAYS Azure AD.
+Following on from the Microsoft Azure Overview, we are going to start with Azure Security and see where this can help in our day-to-day. For the most part, I have found the built-in roles have been sufficient, but it is worth knowing that we can create and work with many different areas of authentication and configuration. I have found Microsoft Azure to be quite advanced with its Active Directory background compared to other public clouds.
+
+This is one area in which Microsoft Azure seemingly works differently from other public cloud providers; in Azure there is ALWAYS Azure AD.
-### Directory Services
+### Directory Services

-- Azure Active Directory hosts the security principles used by Microsoft Azure and other Microsoft cloud services.
-- Authentication is accomplished through protocols such as SAML, WS-Federation, OpenID Connect and OAuth2.
-- Queries are accomplished through REST API called Microsoft Graph API.
-- Tenants have a tenant.onmicrosoft.com default name but can also have custom domain names.
-- Subscriptions are associated with an Azure Active Directory tenant.
+- Azure Active Directory hosts the security principals used by Microsoft Azure and other Microsoft cloud services.
+- Authentication is accomplished through protocols such as SAML, WS-Federation, OpenID Connect and OAuth2.
+- Queries are accomplished through a REST API called the Microsoft Graph API.
+- Tenants have a tenant.onmicrosoft.com default name but can also have custom domain names.
+- Subscriptions are associated with an Azure Active Directory tenant.

-If we think about AWS to compare the equivalent offering would be AWS IAM (Identity & Access Management) Although still very different
+If we think about AWS for comparison, the equivalent offering would be AWS IAM (Identity & Access Management), although it is still very different.

-Azure AD Connect provides the ability to replicate accounts from AD to Azure AD. This can also include groups and sometimes objects. This can be granular and filtered. Supports multiple forests and domains.
+Azure AD Connect provides the ability to replicate accounts from AD to Azure AD. This can also include groups and sometimes objects. This can be granular and filtered. It supports multiple forests and domains.

-It is possible to create cloud accounts in Microsoft Azure Active Directory (AD) but most organisations already have accounted for their users in their own Active Directory being on-premises.
+It is possible to create cloud accounts in Microsoft Azure Active Directory (AD), but most organisations already have accounts for their users in their own on-premises Active Directory.

-Azure AD Connect also allows you to not only see Windows AD servers but also other Azure AD, Google and others. This also provides the ability to collaborate with external people and organisations this is called Azure B2B.
+Azure AD Connect is not limited to Windows AD servers; it can also connect to other Azure AD tenants, Google and others. This also provides the ability to collaborate with external people and organisations; this is called Azure B2B.

Authentication options between Active Directory Domain Services and Microsoft Azure Active Directory are possible with both identity sync with a password hash.

![](Images/Day30_Cloud1.png)

-The passing of the password hash is optional, if this is not used then pass-through authentication is required.
+The passing of the password hash is optional; if it is not used then pass-through authentication is required.

-There is a video linked below that goes into detail about Passthrough authentication.
+There is a video linked below that goes into detail about pass-through authentication.

[User sign-in with Azure Active Directory Pass-through Authentication](https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-pta)

![](Images/Day30_Cloud2.png)

-### Federation
+### Federation

-It's fair to say that if you are using Microsoft 365, Microsoft Dynamics and on-premises Active Directory it is quite easy to understand and integrate into Azure AD for federation. However, you might be using other services outside of the Microsoft ecosystem.
+It's fair to say that if you are using Microsoft 365, Microsoft Dynamics and on-premises Active Directory, it is quite easy to understand and integrate into Azure AD for federation. However, you might be using other services outside of the Microsoft ecosystem.
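The directory-services list above notes that queries are accomplished through the Microsoft Graph REST API. As a hedged sketch (the domain is the walkthrough's example; a real request also needs an OAuth2 bearer token, and `endsWith` filters additionally need the `ConsistencyLevel: eventual` header), a users query might be built like this:

```python
from urllib.parse import urlencode

# Microsoft Graph v1.0 base endpoint; authentication is omitted here.
GRAPH = "https://graph.microsoft.com/v1.0"

def build_users_query(domain: str) -> str:
    """Build a Graph users query filtered by UPN suffix (illustrative only)."""
    params = {
        "$filter": f"endsWith(userPrincipalName,'@{domain}')",
        "$count": "true",  # advanced queries like endsWith require $count
    }
    return f"{GRAPH}/users?{urlencode(params)}"

url = build_users_query("90DaysOfDevOps.com")
print(url)
```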
-Azure AD can act as a federation broker to these other Non-Microsoft apps and other directory services.
+Azure AD can act as a federation broker to these non-Microsoft apps and other directory services.

-This will be seen in the Azure Portal as Enterprise Applications of which there are a large number of options.
+This can be seen in the Azure Portal under Enterprise Applications, of which there are a large number of options.

![](Images/Day30_Cloud3.png)

-If you scroll down on the enterprise application page you are going to see a long list of featured applications.
+If you scroll down on the Enterprise Applications page you are going to see a long list of featured applications.

![](Images/Day30_Cloud4.png)

-This option also allows for "bring your own" integration, an application you are developing or a non-gallery application.
+This option also allows for "bring your own" integration: an application you are developing, or a non-gallery application.

-I have not looked into this before but I can see that this is quite the feature set when compared to the other cloud providers and capabilities.
+I have not looked into this before, but I can see that this is quite the feature set when compared to the other cloud providers and their capabilities.

-### Role-Based Access Control
+### Role-Based Access Control

-We have already covered on [Day 29](day29.md) the scopes we are going to cover here, we can set our role-based access control according to one of these areas.
+We already covered the scopes on [Day 29](day29.md); we can set our role-based access control at any of these levels.

- Subscriptions
- Management Group
-- Resource Group
-- Resources
+- Resource Group
+- Resources

-Roles can be split into three, there are many built-in roles in Microsoft Azure. Those three are:
+There are many built-in roles in Microsoft Azure, but they can be split into three core roles.
Those three are:

-- Owner
-- Contributor
-- Reader
+- Owner
+- Contributor
+- Reader

-Owner and Contributor are very similar in their boundaries of scope however the owner can change permissions.
+Owner and Contributor are very similar in their boundaries of scope; however, the Owner can also change permissions.

-Other roles are specific to certain types of Azure Resources as well as custom roles.
+Other roles are specific to certain types of Azure Resources, and custom roles can also be created.

-We should focus on assigning permissions to groups vs users.
+We should focus on assigning permissions to groups rather than individual users.

-Permissions are inherited.
+Permissions are inherited.

If we go back and look at the "90DaysOfDevOps" Resource group we created and check the Access Control (IAM) within you can see we have a list of contributors and a customer User Access Administrator, and we do have a list of owners (But I cannot show this)

![](Images/Day30_Cloud5.png)

-We can also check the roles we have assigned here if they are BuiltInRoles and which category they fall under.
+We can also check whether the roles we have assigned here are BuiltInRoles and which category they fall under.

![](Images/Day30_Cloud6.png)

-We can also use the check access tab if we want to check an account against this resource group and make sure that the account we wish to have that access to has the correct permissions or maybe we want to check if a user has too much access.
+We can also use the Check access tab to check an account against this resource group, making sure the account we want to grant access has the correct permissions, or perhaps to check whether a user has too much access.

![](Images/Day30_Cloud7.png)

-### Microsoft Defender for Cloud
+### Microsoft Defender for Cloud

-- Microsoft Defender for Cloud (formerly known as Azure Security Center) provides insight into the security of the entire Azure environment.
+- Microsoft Defender for Cloud (formerly known as Azure Security Center) provides insight into the security of the entire Azure environment.

- A single dashboard for visibility into the overall security health of all Azure and non-Azure resources (via Azure Arc) and security hardening guidance.

@@ -106,7 +105,7 @@ We can also use the check access tab if we want to check an account against this

- Paid plans for protected resource types (e.g. Servers, AppService, SQL, Storage, Containers, KeyVault).

-I have switched to another subscription to view the Azure Security Center and you can see here based on very few resources that I have some recommendations in one place.
+I have switched to another subscription to view the Azure Security Center, and you can see here, based on very few resources, that I have some recommendations in one place.

![](Images/Day30_Cloud8.png)

@@ -128,45 +127,45 @@ I have gone out and I have purchased www.90DaysOfDevOps.com and I would like to

![](Images/Day30_Cloud9.png)

-With that now, we can create a new user on our new Active Directory Domain.
+With that, we can now create a new user on our new Active Directory domain.

![](Images/Day30_Cloud10.png)

-Now we want to create a group for all of our new 90DaysOfDevOps users in one group. We can create a group as per the below, notice that I am using "Dynamic User" which means Azure AD will query user accounts and add them dynamically vs assigned which is where you manually add the user to your group.
+Now we want to create a group for all of our new 90DaysOfDevOps users. We can create a group as per the below; notice that I am using "Dynamic User", which means Azure AD will query user accounts and add them dynamically, vs Assigned, which is where you manually add the user to your group.

![](Images/Day30_Cloud11.png)

-There are lots of options when it comes to creating your query, I plan to simply find the principal name and make sure that the name contains @90DaysOfDevOps.com.
+There are lots of options when it comes to creating your query; I plan to simply find the principal name and make sure that it contains @90DaysOfDevOps.com.

![](Images/Day30_Cloud12.png)

-Now because we have created our user account already for michael.cade@90DaysOfDevOps.com we can validate the rules are working. For comparison I have also added another account I have associated to another domain here and you can see that because of this rule our user will not land in this group.
+Now, because we have already created our user account for michael.cade@90DaysOfDevOps.com, we can validate that the rules are working. For comparison, I have also added another account associated with another domain, and you can see that because of this rule that user will not land in this group.

![](Images/Day30_Cloud13.png)

-I have since added a new user1@90DaysOfDevOps.com and if we go and check the group we can see our members.
+I have since added a new user1@90DaysOfDevOps.com, and if we go and check the group we can see our members.

![](Images/Day30_Cloud14.png)

-If we have this requirement x100 then we are not going to want to do this all in the console we are going to want to take advantage of either bulk options to create, invite, and delete users or you are going to want to look into PowerShell to achieve this automated approach to scale.
+If we had this requirement x100 then we would not want to do this all in the console; we would want to take advantage of the bulk options to create, invite and delete users, or look into PowerShell to achieve this automated approach at scale.

-Now we can go to our Resource Group and specify that on the 90DaysOfDevOps resource group we want the owner to be the group we just created.
+Now we can go to our Resource Group and specify that on the 90DaysOfDevOps resource group we want the owner to be the group we just created.
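The "Dynamic User" membership rule above behaves like a simple filter over user principal names. A minimal sketch of that logic, using the accounts from the walkthrough plus one hypothetical account on another domain:

```python
def matches_rule(user_principal_name: str) -> bool:
    # Mirrors the dynamic rule: userPrincipalName contains "@90DaysOfDevOps.com"
    # (Azure AD string matching on UPNs is effectively case-insensitive)
    return "@90daysofdevops.com" in user_principal_name.lower()

users = [
    "michael.cade@90DaysOfDevOps.com",  # account from the walkthrough
    "someone@another-domain.com",       # hypothetical account on another domain
    "user1@90DaysOfDevOps.com",         # account added later in the walkthrough
]
members = [u for u in users if matches_rule(u)]
print(members)  # → ['michael.cade@90DaysOfDevOps.com', 'user1@90DaysOfDevOps.com']
```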
![](Images/Day30_Cloud15.png)

-We can equally go in here and deny assignments access to our resource group as well.
+We can equally go in here and create deny assignments to block access to our resource group as well.

-Now if we log in to the Azure Portal with our new user account, you can see that we only have access to our 90DaysOfDevOps resource group and not the others seen in previous pictures because we do not have the access.
+Now if we log in to the Azure Portal with our new user account, you can see that we only have access to our 90DaysOfDevOps resource group and not the others seen in previous pictures, because we do not have the access.

![](Images/Day30_Cloud16.png)

-The above is great if this is a user that has access to resources inside of your Azure portal, not every user needs to be aware of the portal, but to check access we can use the [Apps Portal](https://myapps.microsoft.com/) This is a single sign-on portal for us to test.
+The above is great if this is a user that has access to resources inside the Azure Portal, but not every user needs to be aware of the portal. To check access we can use the [Apps Portal](https://myapps.microsoft.com/), a single sign-on portal for us to test.

![](Images/Day30_Cloud17.png)

-You can customise this portal with your branding and this might be something we come back to later on.
+You can customise this portal with your branding, and this might be something we come back to later on.
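Earlier we noted that permissions are inherited down the scope hierarchy (management group → subscription → resource group → resource), which is why an Owner assignment on the 90DaysOfDevOps resource group reaches everything inside it. A minimal sketch of that inheritance check, with hypothetical scope paths written in Azure's resource-ID style:

```python
def assignment_applies(assignment_scope: str, resource_scope: str) -> bool:
    """A role assignment applies at its own scope and everything beneath it."""
    return resource_scope == assignment_scope or resource_scope.startswith(assignment_scope + "/")

# Hypothetical scope paths (IDs made up for illustration)
sub = "/subscriptions/1111"
rg = sub + "/resourceGroups/90DaysOfDevOps"
vm = rg + "/providers/Microsoft.Compute/virtualMachines/vm1"

print(assignment_applies(rg, vm))   # Owner on the resource group reaches the VM
print(assignment_applies(rg, sub))  # ...but does not flow upwards to the subscription
```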
-## Resources
+## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)

diff --git a/Days/day31.md b/Days/day31.md
index 6b6b97f62..7db125734 100644
--- a/Days/day31.md
+++ b/Days/day31.md
@@ -1,113 +1,114 @@
---
-title: '#90DaysOfDevOps - Microsoft Azure Compute Models - Day 31'
+title: "#90DaysOfDevOps - Microsoft Azure Compute Models - Day 31"
published: false
description: 90DaysOfDevOps - Microsoft Azure Compute Models
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049040
---
+
## Microsoft Azure Compute Models

-Following on from covering the basics around security models within Microsoft Azure yesterday today we are going to look into the various compute services available to us in Azure.
+Following on from covering the basics around security models within Microsoft Azure yesterday, today we are going to look into the various compute services available to us in Azure.

-### Service Availability Options
+### Service Availability Options

-This section is close to my heart given my role in Data Management. As with on-premises, it is critical to ensure the availability of your services.
+This section is close to my heart given my role in Data Management. As with on-premises, it is critical to ensure the availability of your services.

- High Availability (Protection within a region)
- Disaster Recovery (Protection between regions)
- Backup (Recovery from a point in time)

-Microsoft deploys multiple regions within a geopolitical boundary.
+Microsoft deploys multiple regions within a geopolitical boundary.

-Two concepts with Azure for Service Availability. Both sets and zones.
+There are two concepts in Azure for service availability: sets and zones.
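Availability sets spread VMs across separate fault domains (racks) within a datacentre so that a single hardware failure cannot take every instance down. A hedged sketch of that placement idea, with hypothetical VM names and a made-up fault-domain count:

```python
def place_vms(vm_names, fault_domains=3):
    """Round-robin VMs across fault domains, as an availability set does conceptually.
    The fault-domain count here is illustrative, not an Azure default."""
    return {name: i % fault_domains for i, name in enumerate(vm_names)}

vms = [f"web-{n}" for n in range(6)]  # hypothetical VM names
placement = place_vms(vms)
print(placement)  # → {'web-0': 0, 'web-1': 1, 'web-2': 2, 'web-3': 0, 'web-4': 1, 'web-5': 2}
```

Availability zones apply the same idea one level up: the "domains" become physically separate datacentres within a region.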
-Availability Sets - Provide resiliency within a datacenter
+Availability Sets - Provide resiliency within a datacentre.

-Availability Zones - Provide resiliency between data centres within a region.
+Availability Zones - Provide resiliency between data centres within a region.

-### Virtual Machines
+### Virtual Machines

-Most likely the starting point for anyone in the public cloud.
+Most likely the starting point for anyone in the public cloud.

- Provides a VM from a variety of series and sizes with different capabilities (Sometimes an overwhelming) [Sizes for Virtual machines in Azure](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes)
-- There are many different options and focuses for VMs from high performance, and low latency to high memory options VMs.
-- We also have a burstable VM type which can be found under the B-Series. This is great for workloads where you can have a low CPU requirement for the most part but require that maybe once a month performance spike requirement.
-- Virtual Machines are placed on a virtual network that can provide connectivity to any network.
-- Windows and Linux guest OS support.
-- There are also Azure-tuned kernels when it comes to specific Linux distributions. [Azure Tuned Kernals](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels)
+- There are many different options and focuses for VMs, from high-performance and low-latency to high-memory options.
+- We also have a burstable VM type, found under the B-Series. This is great for workloads with a low CPU requirement for the most part but that occasionally require a performance spike, perhaps once a month.
+- Virtual Machines are placed on a virtual network that can provide connectivity to any network.
+- Windows and Linux guest OS support.
+- There are also Azure-tuned kernels when it comes to specific Linux distributions.
[Azure Tuned Kernels](https://docs.microsoft.com/en-us/azure/virtual-machines/linux/endorsed-distros#azure-tuned-kernels)

-### Templating
+### Templating

-I have mentioned before that everything behind or underneath Microsoft Azure is JSON.
+I have mentioned before that everything behind or underneath Microsoft Azure is JSON.

-There are several different management portals and consoles we can use to create our resources the preferred route is going to be via JSON templates.
+There are several different management portals and consoles we can use to create our resources, but the preferred route is going to be via JSON templates.

-Idempotent deployments in incremental or complete mode - i.e repeatable desired state.
+Idempotent deployments in incremental or complete mode - i.e. a repeatable desired state.

-There is a large selection of templates that can export deployed resource definitions. I like to think about this templating feature to something like AWS CloudFormation or could be Terraform for a multi-cloud option. We will cover Terraform more in the Infrastructure as code section.
+There is a large selection of templates, and deployed resource definitions can also be exported as templates. I liken this templating feature to AWS CloudFormation, or Terraform for a multi-cloud option. We will cover Terraform more in the Infrastructure as Code section.

### Scaling

-Automatic scaling is a large feature of the Public Cloud, being able to spin down resources you are not using or spin up when you need them.
+Automatic scaling is a large feature of the Public Cloud: being able to spin down resources you are not using, or spin them up when you need them.

-In Azure, we have something called Virtual Machine Scale Sets (VMSS) for IaaS. This enables the automatic creation and scale from a gold standard image based on schedules and metrics.
+In Azure, we have something called Virtual Machine Scale Sets (VMSS) for IaaS.
This enables the automatic creation and scaling of VMs from a gold-standard image, based on schedules and metrics.

-This is ideal for updating windows so that you can update your images and roll those out with the least impact.
+This is ideal for rolling out updates, so that you can update your images and roll them out with the least impact.

-Other services such as Azure App Services have auto-scaling built in.
+Other services such as Azure App Services have auto-scaling built in.

-### Containers
+### Containers

-We have not covered containers as a use case and what and how they can and should be needed in our DevOps learning journey but we need to mention that Azure has some specific container-focused services to mention.
+We have not yet covered containers as a use case in our DevOps learning journey, what they are and when they should be used, but we need to mention that Azure has some specific container-focused services.

-Azure Kubernetes Service (AKS) - Provides a managed Kubernetes solution, no need to worry about the control plane or management of the underpinning cluster management. More on Kubernetes also later on.
+Azure Kubernetes Service (AKS) - Provides a managed Kubernetes solution; no need to worry about the control plane or the management of the underpinning cluster. More on Kubernetes later on.

-Azure Container Instances - Containers as a service with Per-Second Billing. Run an image and integrate it with your virtual network, no need for Container Orchestration.
+Azure Container Instances - Containers as a service with per-second billing. Run an image and integrate it with your virtual network, with no need for container orchestration.

-Service Fabric - Has many capabilities but includes orchestration for container instances.
+Service Fabric - Has many capabilities but includes orchestration for container instances.

-Azure also has the Container Registry which provides a private registry for Docker Images, Helm charts, OCI Artifacts and images.
More on this again when we reach the containers section.

-We should also mention that a lot of the container services may indeed also leverage containers under the hood but this is abstracted away from your requirement to manage.
+We should also mention that a lot of other Azure services may indeed leverage containers under the hood, but this is abstracted away from your requirement to manage.

-These mentioned container-focused services we also find similar services in all other public clouds.
+Similar services to these container-focused offerings can also be found in all the other public clouds.

-### Application Services
+### Application Services

-- Azure Application Services provides an application hosting solution that provides an easy method to establish services.
-- Automatic Deployment and Scaling.
-- Supports Windows & Linux-based solutions.
-- Services run in an App Service Plan which has a type and size.
-- Number of different services including web apps, API apps and mobile apps.
-- Support for Deployment slots for reliable testing and promotion.
+- Azure Application Services provides an application hosting solution that offers an easy method to establish services.
+- Automatic Deployment and Scaling.
+- Supports Windows & Linux-based solutions.
+- Services run in an App Service Plan, which has a type and size.
+- Number of different services including web apps, API apps and mobile apps.
+- Support for Deployment slots for reliable testing and promotion.

-### Serverless Computing
+### Serverless Computing

-Serverless for me is an exciting next step that I am extremely interested in learning more about.
+Serverless for me is an exciting next step that I am extremely interested in learning more about.
-The goal with serverless is that we only pay for the runtime of the function and do not have to have running virtual machines or PaaS applications running all the time. We simply run our function when we need it and then it goes away.
+The goal with serverless is that we only pay for the runtime of the function and do not have to keep virtual machines or PaaS applications running all the time. We simply run our function when we need it and then it goes away.

-Azure Functions - Provides serverless code. If we remember back to our first look into the public cloud we will remember the abstraction layer of management, with serverless functions you are only going to be managing the code.
+Azure Functions - Provides serverless code. If we remember back to our first look into the public cloud, we will remember the abstraction layers of management; with serverless functions you are only going to be managing the code.

-Event-Driven with massive scale, I have a plan to build something when I get some hands-on here hopefully later on.
+Event-driven with massive scale; I plan to build something when I get some hands-on time here, hopefully later on.

-Provides input and output binding to many Azure and 3rd Party Services.
+Provides input and output bindings to many Azure and 3rd-party services.

Supports many different programming languages. (C#, NodeJS, Python, PHP, batch, bash, Golang and Rust. Or any Executable)

-Azure Event Grid enables logic to be triggered from services and events.
+Azure Event Grid enables logic to be triggered from services and events.

-Azure Logic App provides a graphical-based workflow and integration.
+Azure Logic Apps provides graphical workflow and integration.

-We can also look at Azure Batch which can run large-scale jobs on both Windows and Linux nodes with consistent management & scheduling.
+We can also look at Azure Batch, which can run large-scale jobs on both Windows and Linux nodes with consistent management & scheduling.
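With Azure Functions you only manage the code, so a function body boils down to a small request-in, response-out handler. As a hedged pure-Python sketch (the request/response dictionary shape here is hypothetical, not the actual azure-functions SDK API):

```python
import json

def handler(request: dict) -> dict:
    """Illustrative HTTP-triggered function body: echo a greeting.
    The request/response shape is made up for illustration."""
    name = request.get("params", {}).get("name", "world")
    return {"status": 200, "body": json.dumps({"message": f"Hello, {name}"})}

# Simulate one invocation; the platform would normally route a request here.
resp = handler({"params": {"name": "90DaysOfDevOps"}})
print(resp["body"])
```

The point is the billing model: the handler only consumes resources while it runs, and the platform owns everything around it.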
-## Resources
+## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

-See you on [Day 32](day32.md)
+See you on [Day 32](day32.md)

diff --git a/Days/day32.md b/Days/day32.md
index 9141d230f..facbf59e1 100644
--- a/Days/day32.md
+++ b/Days/day32.md
@@ -1,190 +1,191 @@
---
-title: '#90DaysOfDevOps - Microsoft Azure Storage Models - Day 32'
+title: "#90DaysOfDevOps - Microsoft Azure Storage Models - Day 32"
published: false
description: 90DaysOfDevOps - Microsoft Azure Storage Models
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048775
---
+
## Microsoft Azure Storage Models

### Storage Services

-- Azure storage services are provided by storage accounts.
-- Storage accounts are primarily accessed via REST API.
+- Azure storage services are provided by storage accounts.
+- Storage accounts are primarily accessed via REST API.
- A storage account must have a unique name that is part of a DNS name `.core.windows.net`
- Various replication and encryption options.
- Sits within a resource group

-We can create our storage group by simply searching for Storage Group in the search bar at the top of the Azure Portal.
+We can create our storage account by simply searching for Storage Accounts in the search bar at the top of the Azure Portal.

![](Images/Day32_Cloud1.png)

-We can then run through the steps to create our storage account remembering that this name needs to be unique and it also needs to be all lower case, with no spaces but can include numbers.
+We can then run through the steps to create our storage account, remembering that the name needs to be unique, all lower case, with no spaces, but can include numbers.

![](Images/Day32_Cloud2.png)

-We can also choose the level of redundancy we would like against our storage account and anything we store here. The further down the list the more expensive option but also the spread of your data.
+We can also choose the level of redundancy we would like for our storage account and anything we store here. The further down the list you go, the more expensive the option, but also the wider the spread of your data.

-Even the default redundancy option gives us 3 copies of our data.
+Even the default redundancy option gives us 3 copies of our data.

[Azure Storage Redundancy](https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy)

-Summary of the above link down below:
+A summary of the above link is below:

- **Locally-redundant storage** - replicates your data three times within a single data centre in the primary region.

- **Geo-redundant storage** - copies your data synchronously three times within a single physical location in the primary region using LRS.

- **Zone-redundant storage** - replicates your Azure Storage data synchronously across three Azure availability zones in the primary region.

- **Geo-zone-redundant storage** - combines the high availability provided by redundancy across availability zones with protection from regional outages provided by geo-replication. Data in a GZRS storage account is copied across three Azure availability zones in the primary region and is also replicated to a second geographic region for protection from regional disasters.

![](Images/Day32_Cloud3.png)

-Just moving back up to performance options. We have Standard and Premium to choose from. We have chosen Standard in our walkthrough but premium gives you some specific options.
+Just moving back up to performance options.
We have Standard and Premium to choose from. We have chosen Standard in our walkthrough but premium gives you some specific options. ![](Images/Day32_Cloud4.png) -Then in the drop-down, you can see we have these three options to choose from. +Then in the drop-down, you can see we have these three options to choose from. ![](Images/Day32_Cloud5.png) -There are lots more advanced options available for your storage account but for now, we do not need to get into these areas. These options are around encryption and data protection. +There are lots more advanced options available for your storage account but for now, we do not need to get into these areas. These options are around encryption and data protection. + +### Managed Disks -### Managed Disks +Storage access can be achieved in a few different ways. -Storage access can be achieved in a few different ways. +Authenticated access via: -Authenticated access via: -- A shared key for full control. +- A shared key for full control. - Shared Access Signature for delegated, granular access. - Azure Active Directory (Where Available) -Public Access: -- Public access can also be granted to enable anonymous access including via HTTP. -- An example of this could be to host basic content and files in a block blob so a browser can view and download this data. +Public Access: + +- Public access can also be granted to enable anonymous access including via HTTP. +- An example of this could be to host basic content and files in a block blob so a browser can view and download this data. + +If you are accessing your storage from another Azure service, traffic stays within Azure. -If you are accessing your storage from another Azure service, traffic stays within Azure. +When it comes to storage performance we have two different types: -When it comes to storage performance we have two different types: - **Standard** - Maximum number of IOPS - **Premium** - Guaranteed number of IOPS IOPS => Input/Output operations per sec. 
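The Shared Access Signature mentioned above is, under the hood, an HMAC-SHA256 signature over a canonical "string to sign" computed with the account key. The exact string-to-sign format is defined in the Azure Storage REST documentation, so this is only an illustrative sketch of the signing step (the key and fields below are made up):

```python
import base64
import hashlib
import hmac

def sign(string_to_sign: str, account_key_b64: str) -> str:
    """The HMAC-SHA256 signing step behind a SAS token (illustrative only).
    Real SAS tokens use a documented canonical string-to-sign, not this one."""
    key = base64.b64decode(account_key_b64)  # account keys are base64-encoded
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

# Hypothetical inputs: not a real account key or a real string-to-sign
fake_key = base64.b64encode(b"not-a-real-account-key").decode("utf-8")
signature = sign("r\n2022-01-01\n2022-01-02", fake_key)
print(signature)
```

The resulting signature travels in the SAS query string, so the account key itself is never handed out; this is what makes SAS "delegated, granular access" compared with sharing the full key.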
-There is also a difference between unmanaged and managed disks to consider when choosing the right storage for the task you have. +There is also a difference between unmanaged and managed disks to consider when choosing the right storage for the task you have. -### Virtual Machine Storage +### Virtual Machine Storage -- Virtual Machine OS disks are typically stored on persistent storage. -- Some stateless workloads do not require persistent storage and reduced latency is a larger benefit. -- There are VMs that support ephemeral OS-managed disks that are created on the node-local storage. +- Virtual Machine OS disks are typically stored on persistent storage. +- Some stateless workloads do not require persistent storage and reduced latency is a larger benefit. +- There are VMs that support ephemeral OS-managed disks that are created on the node-local storage. - These can also be used with VM Scale Sets. -Managed Disks are durable block storage that can be used with Azure Virtual Machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD, or Standard HDD. They also carry some characteristics. +Managed Disks are durable block storage that can be used with Azure Virtual Machines. You can have Ultra Disk Storage, Premium SSD, Standard SSD, or Standard HDD. They also carry some characteristics. -- Snapshot and Image support -- Simple movement between SKUs -- Better availability when combined with availability sets -- Billed based on disk size not on consumed storage. +- Snapshot and Image support +- Simple movement between SKUs +- Better availability when combined with availability sets +- Billed based on disk size not on consumed storage. -## Archive Storage +## Archive Storage -- **Cool Tier** - A cool tier of storage is available to block and append blobs. +- **Cool Tier** - A cool tier of storage is available to block and append blobs. - Lower Storage cost - - Higher transaction cost. -- **Archive Tier** - Archive storage is available for block BLOBs. 
- This is configured on a per-BLOB basis.
- - Cheaper cost, Longer Data retrieval latency.
- - Same Data Durability as regular Azure Storage.
- - Custom Data tiering can be enabled as required.
+ - Higher transaction cost.
+- **Archive Tier** - Archive storage is available for block BLOBs.
+ - This is configured on a per-BLOB basis.
+ - Cheaper cost, longer data-retrieval latency.
+ - Same data durability as regular Azure Storage.
+ - Custom data tiering can be enabled as required.

-### File Sharing
+### File Sharing

From the above creation of our storage account, we can now create file shares.

![](Images/Day32_Cloud6.png)

-This will provide SMB2.1 and 3.0 file shares in Azure.
+This will provide SMB2.1 and 3.0 file shares in Azure.

-Useable within the Azure and externally via SMB3 and port 445 open to the internet.
+Usable within Azure, and externally via SMB3 with port 445 open to the internet.

-Provides shared file storage in Azure.
+Provides shared file storage in Azure.

-Can be mapped using standard SMB clients in addition to REST API.
+Can be mapped using standard SMB clients in addition to the REST API.

-You might also notice [Azure NetApp Files](https://vzilla.co.uk/vzilla-blog/azure-netapp-files-how) (SMB and NFS)
+You might also notice [Azure NetApp Files](https://vzilla.co.uk/vzilla-blog/azure-netapp-files-how) (SMB and NFS).

-### Caching & Media Services
+### Caching & Media Services

-The Azure Content Delivery Network provides a cache of static web content with locations throughout the world.
+The Azure Content Delivery Network provides a cache of static web content, with locations throughout the world.

-Azure Media Services, provides media transcoding technologies in addition to playback services.
+Azure Media Services provides media transcoding technologies in addition to playback services.

## Microsoft Azure Database Models

-Back on [Day 28](day28.md), we covered various service options.
One of these was PaaS (Platform as a Service) where you abstract a large amount of the infrastructure and operating system away and you are left with the control of the application or in this case the database models.
+Back on [Day 28](day28.md), we covered various service options. One of these was PaaS (Platform as a Service), where you abstract a large amount of the infrastructure and operating system away and are left with control of the application, or in this case the database models.

### Relational Databases

-Azure SQL Database provides a relational database as a service based on Microsoft SQL Server.
+Azure SQL Database provides a relational database as a service based on Microsoft SQL Server.

-This is SQL running the latest SQL branch with database compatibility level available where a specific functionality version is required.
+This runs the latest SQL Server branch, with database compatibility levels available where a specific functionality version is required.

-There are a few options on how this can be configured, we can provide a single database that provides one database in the instance, while an elastic pool enables multiple databases that share a pool of capacity and collectively scale.
+There are a few options for how this can be configured: a single database provides one database per instance, while an elastic pool enables multiple databases that share a pool of capacity and scale collectively.

-These database instances can be accessed like regular SQL instances.
+These database instances can be accessed like regular SQL instances.

-Additional managed offerings for MySQL, PostgreSQL and MariaDB.
+There are also managed offerings for MySQL, PostgreSQL and MariaDB.

![](Images/Day32_Cloud7.png)

-### NoSQL Solutions
+### NoSQL Solutions

-Azure Cosmos DB is a scheme agnostic NoSQL implementation.
+Azure Cosmos DB is a schema-agnostic NoSQL implementation.
-99.99% SLA
+99.99% SLA

-Globally distributed database with single-digit latencies at the 99th percentile anywhere in the world with automatic homing.
+Globally distributed database with single-digit millisecond latencies at the 99th percentile anywhere in the world, with automatic homing.

-Partition key leveraged for the partitioning/sharding/distribution of data.
+The partition key is leveraged for the partitioning/sharding/distribution of data.

Supports various data models (documents, key-value, graph, column-family)

-Supports various APIs (DocumentDB SQL, MongoDB, Azure Table Storage and Gremlin)
+Supports various APIs (DocumentDB SQL, MongoDB, Azure Table Storage and Gremlin)

![](Images/Day32_Cloud9.png)

-Various consistency models are available based around [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem).
+Various consistency models are available, based on the [CAP theorem](https://en.wikipedia.org/wiki/CAP_theorem).

![](Images/Day32_Cloud8.png)

-### Caching
+### Caching

-Without getting into the weeds about caching systems such as Redis I wanted to include that Microsoft Azure has a service called Azure Cache for Redis.
+Without getting into the weeds about caching systems such as Redis, I wanted to note that Microsoft Azure has a service called Azure Cache for Redis.

-Azure Cache for Redis provides an in-memory data store based on the Redis software.
+Azure Cache for Redis provides an in-memory data store based on the Redis software.

-- It is an implementation of the open-source Redis Cache.
- - A hosted, secure Redis cache instance.
- - Different tiers are available
- - Application must be updated to leverage the cache.
- - Aimed for an application that has high read requirements compared to writes.
- - Key-Value store based.
+- It is an implementation of the open-source Redis Cache.
+ - A hosted, secure Redis cache instance.
+ - Different tiers are available.
+ - The application must be updated to leverage the cache.
+ - Aimed at applications with high read requirements compared to writes.
+ - Key-value store based.

![](Images/Day32_Cloud10.png)

-I appreciate the last few days have been a lot of note-taking and theory on Microsoft Azure but I wanted to cover the building blocks before we get into the hands-on aspects of how these components come together and work.
+I appreciate the last few days have been a lot of note-taking and theory on Microsoft Azure, but I wanted to cover the building blocks before we get into the hands-on aspects of how these components come together and work.

-We have one more bit of theory remaining around networking before we can get some scenario-based deployments of services up and running. We also want to take a look at some of the different ways we can interact with Microsoft Azure vs just using the portal that we have been using so far.
+We have one more bit of theory remaining around networking before we can get some scenario-based deployments of services up and running. We also want to take a look at some of the different ways we can interact with Microsoft Azure versus just using the portal that we have been using so far.
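One bullet in the caching list above notes that the application must be updated to leverage the cache; this is the cache-aside pattern, where reads check the cache first and only fall back to the data store on a miss. A minimal sketch in Python (a plain `dict` stands in for the Redis instance, and `slow_database_read` is a made-up stand-in for a real data store):

```python
# Cache-aside sketch: check the cache first, fall back to the database on a miss.
# A plain dict stands in for Azure Cache for Redis; slow_database_read is a
# hypothetical stand-in for an expensive query against Azure SQL / Cosmos DB.
cache = {}

def slow_database_read(key):
    # Pretend this is an expensive database round trip.
    return f"value-for-{key}"

def get(key):
    if key in cache:          # cache hit: no database round trip
        return cache[key]
    value = slow_database_read(key)
    cache[key] = value        # populate the cache for subsequent reads
    return value

print(get("user:42"))  # first call misses the cache and reads the "database"
print(get("user:42"))  # second call is served from the cache
```

In a real application the `dict` would be replaced by a Redis client pointed at the Azure Cache for Redis endpoint, and cached entries would usually carry a TTL so stale data expires.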
-## Resources
+## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

-See you on [Day 33](day33.md)
+See you on [Day 33](day33.md)
diff --git a/Days/day33.md b/Days/day33.md
index 5fe65db3a..10e76e9f0 100644
--- a/Days/day33.md
+++ b/Days/day33.md
@@ -1,165 +1,166 @@
---
-title: '#90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management - Day 33'
+title: "#90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management - Day 33"
published: false
description: 90DaysOfDevOps - Microsoft Azure Networking Models + Azure Management
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048706
---
+
## Microsoft Azure Networking Models + Azure Management

-As if today marks the anniversary of Microsoft Azure and its 12th Birthday! (1st February 2022) Anyway, we are going to cover the networking models within Microsoft Azure and some of the management options for Azure. So far we have only used the Azure portal but we have mentioned other areas that can be used to drive and create our resources within the platform.
+As it happens, today marks the 12th birthday of Microsoft Azure! (1st February 2022) Anyway, we are going to cover the networking models within Microsoft Azure and some of the management options for Azure. So far we have only used the Azure portal, but we have mentioned other tools that can be used to drive and create our resources within the platform.

-## Azure Network Models
+## Azure Network Models

-### Virtual Networks
+### Virtual Networks

-- A virtual network is a construct created in Azure.
+- A virtual network is a construct created in Azure.
- A virtual network has one or more IP ranges assigned to it.
- Virtual networks live within a subscription within a region.
-- Virtual subnets are created in the virtual network to break up the network range.
-- Virtual machines are placed in virtual subnets.
+- Virtual subnets are created in the virtual network to break up the network range.
+- Virtual machines are placed in virtual subnets.
- All virtual machines within a virtual network can communicate.
- 65,536 Private IPs per Virtual Network.
- Only pay for egress traffic from a region. (Data leaving the region)
-- IPv4 & IPv6 Supported.
+- IPv4 & IPv6 Supported.
- IPv6 for public-facing and within virtual networks.

-We can liken Azure Virtual Networks to AWS VPCs. However, there are some differences to note:
+We can liken Azure Virtual Networks to AWS VPCs. However, there are some differences to note:

-- In AWS a default VNet is created that is not the case in Microsoft Azure, you have to create your first virtual network to your requirements.
-- All Virtual Machines by default in Azure have NAT access to the internet. No NAT Gateways as per AWS.
-- In Microsoft Azure, there is no concept of Private or Public subnets.
-- Public IPs are a resource that can be assigned to vNICs or Load Balancers.
-- The Virtual Network and Subnets have their own ACLs enabling subnet level delegation.
-- Subnets across Availability Zones whereas in AWS you have subnets per Availability Zones.
+- In AWS, a default VPC is created; that is not the case in Microsoft Azure, where you have to create your first virtual network to your requirements.
+- All Virtual Machines in Azure have NAT access to the internet by default; no NAT Gateways are needed as in AWS.
+- In Microsoft Azure, there is no concept of Private or Public subnets.
+- Public IPs are a resource that can be assigned to vNICs or Load Balancers.
+- The Virtual Network and Subnets have their own ACLs, enabling subnet-level delegation.
+- Subnets span Availability Zones, whereas in AWS you have subnets per Availability Zone.

-We also have Virtual Network Peering. This enables virtual networks across tenants and regions to be connected using the Azure backbone. Not transitive but can be enabled via Azure Firewall in the hub virtual network. Using a gateway transit allows peered virtual networks to the connectivity of the connected network and an example of this could ExpressRoute to On-Premises.
+We also have Virtual Network Peering. This enables virtual networks across tenants and regions to be connected using the Azure backbone. Peering is not transitive, but transitive connectivity can be enabled via Azure Firewall in the hub virtual network. Using gateway transit allows peered virtual networks to use the connectivity of the connected network; an example of this could be ExpressRoute to on-premises.

-### Access Control
+### Access Control

-- Azure utilises Network Security Groups, these are stateful.
-- Enable rules to be created and then assigned to a network security group
-- Network security groups applied to subnets or VMs.
-- When applied to a subnet it is still enforced at the Virtual Machine NIC that it is not an "Edge" device.
+- Azure utilises Network Security Groups, which are stateful.
+- They enable rules to be created and then assigned to a network security group.
+- Network security groups are applied to subnets or VMs.
+- When applied to a subnet, it is still enforced at the Virtual Machine NIC; it is not an "Edge" device.

![](Images/Day33_Cloud1.png)

-- Rules are combined in a Network Security Group.
-- Based on the priority, flexible configurations are possible.
-- Lower priority number means high priority.
-- Most logic is built by IP Addresses but some tags and labels can also be used.
+- Rules are combined in a Network Security Group.
+- Based on the priority, flexible configurations are possible.
+- A lower priority number means higher priority.
+- Most logic is built around IP addresses, but some tags and labels can also be used.

-| Description | Priority | Source Address | Source Port | Destination Address | Destination Port | Action |
-| ----------- | ---------| -------------- | ----------- | ------------------- | ---------------- | ------ |
-| Inbound 443 | 1005 | * | * | * | 443 | Allow |
-| ILB | 1010 | Azure LoadBalancer | * | * | 10000 | Allow |
-| Deny All Inbound | 4000 | * | * | * | * | DENY |
+| Description | Priority | Source Address | Source Port | Destination Address | Destination Port | Action |
+| ---------------- | -------- | ------------------ | ----------- | ------------------- | ---------------- | ------ |
+| Inbound 443 | 1005 | \* | \* | \* | 443 | Allow |
+| ILB | 1010 | Azure LoadBalancer | \* | \* | 10000 | Allow |
+| Deny All Inbound | 4000 | \* | \* | \* | \* | DENY |

-We also have Application Security Groups (ASGs)
+We also have Application Security Groups (ASGs).

-- Where NSGs are focused on the IP address ranges which may be difficult to maintain for growing environments.
+- Where NSGs are focused on IP address ranges, which may be difficult to maintain for growing environments.
- ASGs enable real names (Monikers) for different application roles to be defined (Webservers, DB servers, WebApp1 etc.)
-- The Virtual Machine NIC is made a member of one or more ASGs.
+- The Virtual Machine NIC is made a member of one or more ASGs.

-The ASGs can then be used in rules that are part of Network Security Groups to control the flow of communication and can still use NSG features like service tags.
+The ASGs can then be used in rules that are part of Network Security Groups to control the flow of communication and can still use NSG features like service tags.
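The priority behaviour shown in the NSG table above can be pictured as first-match evaluation over rules sorted by ascending priority number (lower number wins). A small illustrative sketch, not the actual NSG engine; the rule set and field matching are simplified:

```python
# Sketch of NSG-style evaluation: rules are checked in ascending priority
# order and the first matching rule decides. "*" acts as a wildcard.
# Rules mirror the illustrative table above.
RULES = [
    {"priority": 1005, "source": "*",                 "port": 443,   "action": "Allow"},
    {"priority": 1010, "source": "AzureLoadBalancer", "port": 10000, "action": "Allow"},
    {"priority": 4000, "source": "*",                 "port": "*",   "action": "Deny"},
]

def evaluate(source, port):
    for rule in sorted(RULES, key=lambda r: r["priority"]):
        if rule["source"] in ("*", source) and rule["port"] in ("*", port):
            return rule["action"]
    return "Deny"  # implicit deny if nothing matched

print(evaluate("Internet", 443))             # Allow (rule 1005)
print(evaluate("AzureLoadBalancer", 10000))  # Allow (rule 1010)
print(evaluate("Internet", 22))              # Deny  (rule 4000)
```

This is why the "Deny All Inbound" rule is given a deliberately high number like 4000: it only applies when no lower-numbered Allow rule has matched first.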
-| Action| Name | Source | Destination | Port |
-| ------| ------------------ | ---------- | ----------- | ------------ |
-| Allow | AllowInternettoWeb | Internet | WebServers | 443(HTTPS) |
-| Allow | AllowWebToApp | WebServers | AppServers | 443(HTTPS) |
-| Allow | AllowAppToDB | AppServers | DbServers | 1443 (MSSQL) |
-| Deny | DenyAllinbound | Any | Any | Any |
+| Action | Name | Source | Destination | Port |
+| ------ | ------------------ | ---------- | ----------- | ------------ |
+| Allow | AllowInternettoWeb | Internet | WebServers | 443 (HTTPS) |
+| Allow | AllowWebToApp | WebServers | AppServers | 443 (HTTPS) |
+| Allow | AllowAppToDB | AppServers | DbServers | 1433 (MSSQL) |
+| Deny | DenyAllinbound | Any | Any | Any |

-### Load Balancing
+### Load Balancing

-Microsoft Azure has two separate load balancing solutions. (the first party, there are third parties available in the Azure marketplace.) Both can operate with externally facing or internally facing endpoints.
+Microsoft Azure has two first-party load balancing solutions (third-party options are also available in the Azure Marketplace). Both can operate with externally facing or internally facing endpoints.

-- Load Balancer (Layer 4) supporting hash-based distribution and port-forwarding.
-- App Gateway (Layer 7) supports features such as SSL offload, cookie-based session affinity and URL-based content routing.
+- Load Balancer (Layer 4) supports hash-based distribution and port forwarding.
+- App Gateway (Layer 7) supports features such as SSL offload, cookie-based session affinity and URL-based content routing.

-Also with the App Gateway, you can optionally use the Web Application firewall component.
+Also, with the App Gateway you can optionally use the Web Application Firewall component.
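The Layer 4 hash-based distribution mentioned above can be pictured as hashing a connection's 5-tuple to pick a backend, so packets from the same flow always reach the same VM. A simplified sketch; the backend names and the hash choice are illustrative, not the Azure Load Balancer internals:

```python
import hashlib

# Sketch of Layer-4 hash-based distribution: the 5-tuple of a flow is
# hashed, and the hash deterministically selects a backend, so every
# packet of a flow lands on the same VM. Backend names are made up.
BACKENDS = ["vm-0", "vm-1", "vm-2"]

def pick_backend(src_ip, src_port, dst_ip, dst_port, protocol):
    five_tuple = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}:{protocol}"
    digest = hashlib.sha256(five_tuple.encode()).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# The same flow always maps to the same backend:
a = pick_backend("203.0.113.7", 50000, "10.0.0.4", 443, "tcp")
b = pick_backend("203.0.113.7", 50000, "10.0.0.4", 443, "tcp")
print(a == b)  # True
```

A different source port (i.e. a new connection) may hash to a different backend, which is how traffic spreads across the pool while individual flows stay sticky.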
-## Azure Management Tools
+## Azure Management Tools

-We have spent most of our theory time walking through the Azure Portal, I would suggest that when it comes to following a DevOps culture and process a lot of these tasks, especially around provisioning will be done via an API or a command-line tool. I wanted to touch on some of those other management tools that we have available to us as we need to know this for when we are automating the provisioning of our Azure environments.
+We have spent most of our theory time walking through the Azure Portal, but I would suggest that when it comes to following a DevOps culture and process, a lot of these tasks, especially around provisioning, will be done via an API or a command-line tool. I wanted to touch on some of the other management tools available to us, as we need to know these for when we are automating the provisioning of our Azure environments.

-### Azure Portal
+### Azure Portal

-The Microsoft Azure Portal is a web-based console, that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal. Build, Manage, and Monitor everything from a simple web app to complex cloud deployments. Another thing you will find within the portal are these breadcrumbs, JSON as mentioned before is the underpinning of all Azure Resources, It might be that you start in the Portal to understand the features, services and functionality but then later understand the JSON underneath to incorporate into your automated workflows.
+The Microsoft Azure Portal is a web-based console that provides an alternative to command-line tools. You can manage your subscriptions within the Azure Portal. Build, manage, and monitor everything from a simple web app to complex cloud deployments.
Another thing you will find within the portal is breadcrumbs. JSON, as mentioned before, is the underpinning of all Azure resources; it might be that you start in the Portal to understand the features, services and functionality, but later understand the JSON underneath to incorporate into your automated workflows.

![](Images/Day33_Cloud2.png)

-There is also the Azure Preview portal, this can be used to view and test new and upcoming services and enhancements.
+There is also the Azure Preview portal, which can be used to view and test new and upcoming services and enhancements.

![](Images/Day33_Cloud3.png)

-### PowerShell
+### PowerShell

-Before we get into Azure PowerShell it is worth introducing PowerShell first. PowerShell is a task automation and configuration management framework, a command-line shell and a scripting language. We might and dare I say this liken this to what we have covered in the Linux section around shell scripting. PowerShell was very much first found on Windows OS but it is now cross-platform.
+Before we get into Azure PowerShell, it is worth introducing PowerShell first. PowerShell is a task automation and configuration management framework, a command-line shell and a scripting language. We might, dare I say, liken this to what we have covered in the Linux section around shell scripting. PowerShell was first found on the Windows OS, but it is now cross-platform.

-Azure PowerShell is a set of cmdlets for managing Azure resources directly from the PowerShell command line.
+Azure PowerShell is a set of cmdlets for managing Azure resources directly from the PowerShell command line.
-We can see below that you can connect to your subscription using the PowerShell command `Connect-AzAccount`
+We can see below that you can connect to your subscription using the PowerShell command `Connect-AzAccount`.

![](Images/Day33_Cloud4.png)

-Then if we wanted to find some specific commands associated with Azure VMs we can run the following command. You could spend hours learning and understanding more about this PowerShell programming language.
+Then, if we wanted to find some specific commands associated with Azure VMs, we can run the following command. You could spend hours learning and understanding more about the PowerShell language.

![](Images/Day33_Cloud5.png)

There are some great quickstarts from Microsoft on getting started and provisioning services from PowerShell [here](https://docs.microsoft.com/en-us/powershell/azure/get-started-azureps?view=azps-7.1.0)

-### Visual Studio Code
+### Visual Studio Code

-Like many, and as you have all seen my go-to IDE is Visual Studio Code.
+Like many of you, and as you have all seen, my go-to IDE is Visual Studio Code.

-Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux and macOS.
+Visual Studio Code is a free source-code editor made by Microsoft for Windows, Linux and macOS.

-You will see below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within.
+You will see below that there are lots of integrations and tools built into Visual Studio Code that you can use to interact with Microsoft Azure and the services within it.

![](Images/Day33_Cloud6.png)

-### Cloud Shell
+### Cloud Shell

-Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources.
+Azure Cloud Shell is an interactive, authenticated, browser-accessible shell for managing Azure resources.
It provides the flexibility of choosing the shell experience that best suits the way you work.

![](Images/Day33_Cloud7.png)

-You can see from the below when we first launch Cloud Shell within the portal we can choose between Bash and PowerShell.
+You can see below that when we first launch Cloud Shell within the portal, we can choose between Bash and PowerShell.

![](Images/Day33_Cloud8.png)

-To use the cloud shell you will have to provide a bit of storage in your subscription.
+To use the Cloud Shell, you will have to provide a bit of storage in your subscription.

-When you select to use the cloud shell it is spinning up a machine, these machines are temporary but your files are persisted in two ways; through a disk image and a mounted file share.
+When you select to use the Cloud Shell, it spins up a machine. These machines are temporary, but your files are persisted in two ways: through a disk image and a mounted file share.

![](Images/Day33_Cloud9.png)

- - Cloud Shell runs on a temporary host provided on a per-session, per-user basis
- - Cloud Shell times out after 20 minutes without interactive activity
- - Cloud Shell requires an Azure file share to be mounted
- - Cloud Shell uses the same Azure file share for both Bash and PowerShell
- - Cloud Shell is assigned one machine per user account
- - Cloud Shell persists $HOME using a 5-GB image held in your file share
- - Permissions are set as a regular Linux user in Bash
+- Cloud Shell runs on a temporary host provided on a per-session, per-user basis
+- Cloud Shell times out after 20 minutes without interactive activity
+- Cloud Shell requires an Azure file share to be mounted
+- Cloud Shell uses the same Azure file share for both Bash and PowerShell
+- Cloud Shell is assigned one machine per user account
+- Cloud Shell persists $HOME using a 5-GB image held in your file share
+- Permissions are set as a regular Linux user in Bash

The above was copied from [Cloud Shell
Overview](https://docs.microsoft.com/en-us/azure/cloud-shell/overview)

-### Azure CLI
+### Azure CLI

-Finally, I want to cover the Azure CLI, The Azure CLI can be installed on Windows, Linux and macOS. Once installed you can type `az` followed by other commands to create, update, delete and view Azure resources.
+Finally, I want to cover the Azure CLI. The Azure CLI can be installed on Windows, Linux and macOS. Once installed, you can type `az` followed by other commands to create, update, delete and view Azure resources.

-When I initially came into my Azure learning I was a little confused by there being Azure PowerShell and the Azure CLI.
+When I initially came into my Azure learning, I was a little confused by there being both Azure PowerShell and the Azure CLI.

-I would love some feedback from the community on this as well. But the way I see it is that Azure PowerShell is a module added to Windows PowerShell or PowerShell Core (Also available on other OS but not all) Whereas Azure CLI is a cross-platform command-line program that connects to Azure and executes those commands.
+I would love some feedback from the community on this as well. But the way I see it is that Azure PowerShell is a module added to Windows PowerShell or PowerShell Core (also available on other operating systems, but not all), whereas the Azure CLI is a cross-platform command-line program that connects to Azure and executes those commands.

-Both of these options have a different syntax, although they can from what I can see and what I have done do very similar tasks.
+Both of these options have different syntax, although from what I can see and what I have done, they can perform very similar tasks.

-For example, creating a virtual machine from PowerShell would use the `New-AzVM` cmdlet whereas Azure CLI would use `az VM create`.
+For example, creating a virtual machine from PowerShell would use the `New-AzVM` cmdlet, whereas the Azure CLI would use `az vm create`.
-You saw previously that I have the Azure PowerShell module installed on my system but then I also have the Azure CLI installed that can be called through PowerShell on my Windows machine.
+You saw previously that I have the Azure PowerShell module installed on my system, but I also have the Azure CLI installed, which can be called through PowerShell on my Windows machine.

![](Images/Day33_Cloud10.png)

@@ -175,15 +176,15 @@ Azure PowerShell

- Cross-platform PowerShell module, runs on Windows, macOS, Linux
- Requires Windows PowerShell or PowerShell

-If there is a reason you cannot use PowerShell in your environment but you can use .mdor bash then the Azure CLI is going to be your choice.
+If there is a reason you cannot use PowerShell in your environment but you can use cmd or bash, then the Azure CLI is going to be your choice.

-Next up we take all the theories we have been through and create some scenarios and get hands-on in Azure.
+Next up, we take all the theory we have been through, create some scenarios, and get hands-on in Azure.
-## Resources
+## Resources

- [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw)
- [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s)
- [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s)
- [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s)

-See you on [Day 34](day34.md)
+See you on [Day 34](day34.md)
diff --git a/Days/day34.md b/Days/day34.md
index 416e9916e..fa0e3c1e8 100644
--- a/Days/day34.md
+++ b/Days/day34.md
@@ -1,53 +1,55 @@
---
-title: '#90DaysOfDevOps - Microsoft Azure Hands-On Scenarios - Day 34'
+title: "#90DaysOfDevOps - Microsoft Azure Hands-On Scenarios - Day 34"
published: false
description: 90DaysOfDevOps - Microsoft Azure Hands-On Scenarios
-tags: 'DevOps, 90daysofdevops, learning'
+tags: "DevOps, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048763
---
+
## Microsoft Azure Hands-On Scenarios

-The last 6 days have been focused on Microsoft Azure and the public cloud in general, a lot of this foundation had to contain a lot of theory to understand the building blocks of Azure but also this will nicely translate to the other major cloud providers as well.
+The last 6 days have been focused on Microsoft Azure and the public cloud in general. A lot of this foundation had to contain a lot of theory to understand the building blocks of Azure, but this will also translate nicely to the other major cloud providers.
+
+I mentioned at the very beginning the importance of getting foundational knowledge of the public cloud and choosing one provider to begin with. If you are dancing between different clouds, I believe you can get lost quite easily, whereas by choosing one you get to understand the fundamentals, and once you have those it is quite easy to jump into the other clouds and accelerate your learning.
-I mentioned at the very beginning about getting a foundational knowledge of the public cloud and choosing one provider to at least begin with, if you are dancing between different clouds then I believe you can get lost quite easily whereas choosing one you get to understand the fundamentals and when you have those it is quite easy to jump into the other clouds and accelerate your learning. +In this final session, I am going to be picking and choosing my hands-on scenarios from this page here which is a reference created by Microsoft and is used for preparations for the [AZ-104 Microsoft Azure Administrator](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/) -In this final session, I am going to be picking and choosing my hands-on scenarios from this page here which is a reference created by Microsoft and is used for preparations for the [AZ-104 Microsoft Azure Administrator](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/) +There are some here such as Containers and Kubernetes that we have not covered in any detail as of yet so I don't want to jump in there just yet. -There are some here such as Containers and Kubernetes that we have not covered in any detail as of yet so I don't want to jump in there just yet. +In previous posts, we have created most of Modules 1,2 and 3. -In previous posts, we have created most of Modules 1,2 and 3. +### Virtual Networking -### Virtual Networking Following [Module 04](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_04-Implement_Virtual_Networking.html): -I went through the above and changed a few namings for #90DaysOfDevOps. I also instead of using the Cloud Shell went ahead and logged in with my new user created on previous days with the Azure CLI on my Windows machine. +I went through the above and changed a few namings for #90DaysOfDevOps. 
Instead of using the Cloud Shell, I went ahead and logged in with my new user created on previous days, using the Azure CLI on my Windows machine.

-You can do this using the `az login` which will open a browser and let you authenticate to your account.
+You can do this using `az login`, which will open a browser and let you authenticate to your account.

I have then created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.

- (Cloud\01VirtualNetworking)
+(Cloud\01VirtualNetworking)

- Please make sure you change the file location in the script to suit your environment.
+Please make sure you change the file location in the script to suit your environment.

-At this first stage, we have no virtual network or virtual machines created in our environment, I only have a cloud shell storage location configured in my resource group.
+At this first stage, we have no virtual network or virtual machines created in our environment; I only have a Cloud Shell storage location configured in my resource group.
I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90DaysOfDevOps.ps1)

- ![](Images/Day34_Cloud1.png)
-
+![](Images/Day34_Cloud1.png)
+
- Task 1: Create and configure a virtual network

![](Images/Day34_Cloud2.png)

- Task 2: Deploy virtual machines into the virtual network

![](Images/Day34_Cloud3.png)

- Task 3: Configure private and public IP addresses of Azure VMs
-
- ![](Images/Day34_Cloud4.png)
+
+![](Images/Day34_Cloud4.png)

- Task 4: Configure network security groups

@@ -59,13 +61,14 @@ I first of all run my [PowerShell script](Cloud/01VirtualNetworking/Module4_90Da

![](Images/Day34_Cloud7.png)
![](Images/Day34_Cloud8.png)

-### Network Traffic Management
+### Network Traffic Management
+
Following [Module 06](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_06-Implement_Network_Traffic_Management.html):

-Next walkthrough, from the last one we have gone into our resource group and deleted our resources, if you had not set up the user account like me to only have access to that one resource group you could follow the module changing the name to `90Days*` this will delete all resources and resource group. This will be my process for each of the following labs.
+In the next walkthrough, following on from the last one, we have gone into our resource group and deleted our resources. If you had not set up the user account like me, with access only to that one resource group, you could follow the module, changing the name to `90Days*`; this will delete all resources and the resource group. This will be my process for each of the following labs.

For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder.
- (Cloud\02TrafficManagement)
+(Cloud\02TrafficManagement)

- Task 1: Provision of the lab environment

@@ -79,22 +82,22 @@ I first of all run my [PowerShell script](Cloud/02TrafficManagement/Mod06_90Days

- Task 3: Test transitivity of virtual network peering

-For this my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions, I expect this is because Network Watchers are one of those resources that are not tied to a resource group which is where our RBAC was covered for this user. I added the East US Network Watcher contributor role to the 90DaysOfDevOps group.
+For this, my 90DaysOfDevOps group did not have access to the Network Watcher because of permissions. I expect this is because Network Watchers are one of those resources that are not tied to a resource group, which is where our RBAC was scoped for this user. I added the East US Network Watcher Contributor role to the 90DaysOfDevOps group.

![](Images/Day34_Cloud11.png)
![](Images/Day34_Cloud12.png)
![](Images/Day34_Cloud13.png)

-^ This is expected since the two spoke virtual networks do not peer with each other (virtual network peering is not transitive).
+^ This is expected since the two spoke virtual networks do not peer with each other (virtual network peering is not transitive).

- Task 4: Configure routing in the hub and spoke topology

-I had another issue here with my account not being able to run the script as my user within the group 90DaysOfDevOps which I am unsure of so I did jump back into my main admin account. The 90DaysOfDevOps group is an owner of everything in the 90DaysOfDevOps Resource Group so would love to understand why I cannot run a command inside the VM?
+I had another issue here: my account was not able to run the script as my user within the group 90DaysOfDevOps, which I am unsure about, so I jumped back into my main admin account.
The 90DaysOfDevOps group is an owner of everything in the 90DaysOfDevOps Resource Group so would love to understand why I cannot run a command inside the VM? ![](Images/Day34_Cloud14.png) ![](Images/Day34_Cloud15.png) -I then was able to go back into my michael.cade@90DaysOfDevOps.com account and continue this section. Here we are running the same test again but now with the result being reachable. +I then was able to go back into my michael.cade@90DaysOfDevOps.com account and continue this section. Here we are running the same test again but now with the result being reachable. ![](Images/Day34_Cloud16.png) @@ -108,11 +111,12 @@ I then was able to go back into my michael.cade@90DaysOfDevOps.com account and c ![](Images/Day34_Cloud19.png) ![](Images/Day34_Cloud20.png) -### Azure Storage +### Azure Storage + Following [Module 07](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_07-Manage_Azure_Storage.html): For this lab, I have also created a PowerShell script and some references from the module to use to build out some of the tasks below. You can find the associated files in this folder. - (Cloud\03Storage) +(Cloud\03Storage) - Task 1: Provision of the lab environment @@ -133,13 +137,13 @@ I first of all run my [PowerShell script](Cloud/03Storage/Mod07_90DaysOfDeveOps. ![](Images/Day34_Cloud24.png) ![](Images/Day34_Cloud25.png) -I was a little impatient waiting for this to be allowed but it did work eventually. +I was a little impatient waiting for this to be allowed but it did work eventually. ![](Images/Day34_Cloud26.png) - Task 5: Create and configure an Azure Files shares -On the run command, this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account. +On the run command, this would not work with michael.cade@90DaysOfDevOps.com so I used my elevated account. 
![](Images/Day34_Cloud27.png) ![](Images/Day34_Cloud28.png) @@ -150,6 +154,7 @@ On the run command, this would not work with michael.cade@90DaysOfDevOps.com so ![](Images/Day34_Cloud30.png) ### Serverless (Implement Web Apps) + Following [Module 09a](https://microsoftlearning.github.io/AZ-104-MicrosoftAzureAdministrator/Instructions/Labs/LAB_09a-Implement_Web_Apps.html): - Task 1: Create an Azure web app @@ -178,15 +183,15 @@ This script I am using can be found in (Cloud/05Serverless) ![](Images/Day34_Cloud36.png) -This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios. +This wraps up the section on Microsoft Azure and the public cloud in general. I will say that I had lots of fun attacking and working through these scenarios. -## Resources +## Resources - [Hybrid Cloud and MultiCloud](https://www.youtube.com/watch?v=qkj5W98Xdvw) - [Microsoft Azure Fundamentals](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) - [Google Cloud Digital Leader Certification Course](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) - [AWS Basics for Beginners - Full Course](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) -Next, we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option. +Next, we will be diving into version control systems, specifically around git and then also code repository overviews and we will be choosing GitHub as this is my preferred option. 
See you on [Day 35](day35.md) diff --git a/Days/day35.md b/Days/day35.md index 3c5672d11..06576645c 100644 --- a/Days/day35.md +++ b/Days/day35.md @@ -1,77 +1,78 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Git - Version Control - Day 35' +title: "#90DaysOfDevOps - The Big Picture: Git - Version Control - Day 35" published: false description: 90DaysOfDevOps - The Big Picture Git - Version Control -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049041 --- + ## The Big Picture: Git - Version Control -Before we get into git, we need to understand what version control is and why? In this opener for Git, we will take a look at what version control is, and the basics of git. +Before we get into git, we need to understand what version control is and why? In this opener for Git, we will take a look at what version control is, and the basics of git. -### What is Version Control? +### What is Version Control? -Git is not the only version control system so here we want to cover what options and what methodologies are available around version control. +Git is not the only version control system so here we want to cover what options and what methodologies are available around version control. -The most obvious and a big benefit of Version Control is the ability to track a project's history. We can look back over this repository using `git log` and see that we have many commits and many comments and what has happened so far in the project. Don't worry we will get into the commands later. Now think if this was an actual software project full of source code and multiple people are committing to our software at different times, different authors and then reviewers all are logged here so that we know what has happened, when, by whom and who reviewed. +The most obvious and a big benefit of Version Control is the ability to track a project's history. 
We can look back over this repository using `git log` and see that we have many commits and many comments and what has happened so far in the project. Don't worry we will get into the commands later. Now think if this was an actual software project full of source code and multiple people are committing to our software at different times, different authors and then reviewers all are logged here so that we know what has happened, when, by whom and who reviewed. ![](Images/Day35_Git1.png) -Version Control before it was cool, would have been something like manually creating a copy of your version before you made changes. It might be that you also comment out old useless code with the just-in-case mentality. +Version Control before it was cool, would have been something like manually creating a copy of your version before you made changes. It might be that you also comment out old useless code with the just-in-case mentality. ![](Images/Day35_Git2.png) -I have started using version control over not just source code but pretty much anything that talks about projects like this (90DaysOfDevOps) because why would you not want that rollback and log of everything that has gone on. +I have started using version control over not just source code but pretty much anything that talks about projects like this (90DaysOfDevOps) because why would you not want that rollback and log of everything that has gone on. However, a big disclaimer **Version Control is not a Backup!** -Another benefit of Version Control is the ability to manage multiple versions of a project, Let's create an example, we have a free app that is available on all operating systems and then we have a paid-for app also available on all operating systems. The majority of the code is shared between both applications. We could copy and paste our code each commit to each app but that is going to be very messy especially as you scale your development to more than just one person, also mistakes will be made. 
+Another benefit of Version Control is the ability to manage multiple versions of a project, Let's create an example, we have a free app that is available on all operating systems and then we have a paid-for app also available on all operating systems. The majority of the code is shared between both applications. We could copy and paste our code each commit to each app but that is going to be very messy especially as you scale your development to more than just one person, also mistakes will be made. -The premium app is where we are going to have additional features, let's call them premium commits, the free edition will just contain the normal commits. +The premium app is where we are going to have additional features, let's call them premium commits, the free edition will just contain the normal commits. -The way this is achieved in Version Control is through branching. +The way this is achieved in Version Control is through branching. ![](Images/Day35_Git3.png) -Branching allows for two code streams for the same app as we stated above. But we will still want new features that land in our source code-free version to be in our premium and to achieve this we have something called merging. +Branching allows for two code streams for the same app as we stated above. But we will still want new features that land in our source code-free version to be in our premium and to achieve this we have something called merging. ![](Images/Day35_Git4.png) -Now, this same easy but merging can be complicated because you could have a team working on the free edition and you could have another team working on the premium paid-for version and what if both change code that affects aspects of the overall code. Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version Control cannot fix the conflicts that are down to you. But version control allows this to be easily managed. 
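The free/premium example above maps directly onto a handful of commands. A minimal sketch in a throwaway repository — the branch name `premium`, the identity values, and the file contents are all made up for illustration, and it assumes a reasonably recent git (2.23+ for `git switch`):

```shell
# Sketch of the free/premium branching model in a scratch repository.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.name "Example" && git config user.email "example@example.com"
base=$(git symbolic-ref --short HEAD)      # default branch: the "free" edition

echo "shared feature" > app.txt
git add app.txt && git commit -q -m "Normal commit: shared feature"

git switch -q -c premium                   # second code stream for the paid-for app
echo "premium feature" > premium.txt
git add premium.txt && git commit -q -m "Premium commit: paid-only feature"

git switch -q "$base"                      # back on the free edition
echo "new shared feature" > shared.txt
git add shared.txt && git commit -q -m "Normal commit: lands in free first"

git switch -q premium                      # pull the new free feature into premium
git merge -q --no-edit "$base"
git log --oneline --graph --all            # both streams and the merge are visible
```

Because the two branches touched different files there is no conflict here; if both had edited the same lines of `app.txt`, the merge would stop and ask you to resolve it — exactly the situation described above.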
+Now, this seems easy, but merging can be complicated because you could have a team working on the free edition and another team working on the premium paid-for version, and what if both change code that affects aspects of the overall code? Maybe a variable gets updated and breaks something. Then you have a conflict that breaks one of the features. Version Control cannot fix the conflicts; that is down to you. But version control allows this to be easily managed.

-The primary reason if you have not picked up so far for version control, in general, is the ability to collaborate. The ability to share code amongst developers and when I say code as I said before more and more we are seeing much more use cases for other reasons to use source control, maybe its a joint presentation you are working on with a colleague or a 90DaysOfDevOps challenge where you have the community offering their corrections and updates throughout the project.
+The primary reason for version control, in general, if you have not picked it up so far, is the ability to collaborate: the ability to share code amongst developers. And when I say code, as I said before, we are seeing more and more use cases for source control beyond code; maybe it's a joint presentation you are working on with a colleague, or a 90DaysOfDevOps challenge where you have the community offering their corrections and updates throughout the project.

-Without version control how did teams of software developers even handle this? I find it hard enough when I am working on my projects to keep track of things. I expect they would split out the code into each functional module. Maybe a little part of the puzzle then was bringing the pieces together and then problems and issues before anything would get released.
+Without version control, how did teams of software developers even handle this? I find it hard enough when I am working on my projects to keep track of things.
I expect they would split out the code into each functional module. Maybe a little part of the puzzle then was bringing the pieces together and then problems and issues before anything would get released. -With version control, we have a single source of truth. We might all still work on different modules but it enables us to collaborate better. +With version control, we have a single source of truth. We might all still work on different modules but it enables us to collaborate better. ![](Images/Day35_Git5.png) -Another thing to mention here is that it's not just developers that can benefit from Version Control, it's all members of the team to have visibility but also tools all having awareness or leverage, Project Management tools can be linked here, tracking the work. We might also have a build machine for example Jenkins which we will talk about in another module. A tool that Builds and Packages the system, automating the deployment tests and metrics. +Another thing to mention here is that it's not just developers that can benefit from Version Control, it's all members of the team to have visibility but also tools all having awareness or leverage, Project Management tools can be linked here, tracking the work. We might also have a build machine for example Jenkins which we will talk about in another module. A tool that Builds and Packages the system, automating the deployment tests and metrics. -### What is Git? +### What is Git? -Git is a tool that tracks changes to source code or any file, or we could also say Git is an open-source distributed version control system. +Git is a tool that tracks changes to source code or any file, or we could also say Git is an open-source distributed version control system. 
-There are many ways in which git can be used on our systems, most commonly or at least for me I have seen it at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git-aware operations we can take advantage of.
+There are many ways in which git can be used on our systems, most commonly or at least for me I have seen it at the command line, but we also have graphical user interfaces and tools like Visual Studio Code that have git-aware operations we can take advantage of.

-Now we are going to run through a high-level overview before we even get Git installed on our local machine.
+Now we are going to run through a high-level overview before we even get Git installed on our local machine.

-Let's take the folder we created earlier.
+Let's take the folder we created earlier.

![](Images/Day35_Git2.png)

-To use this folder with version control we first need to initiate this directory using the `git init command. For now, just think that this command puts our directory as a repository in a database somewhere on our computer.
+To use this folder with version control we first need to initialise this directory using the `git init` command. For now, just think that this command puts our directory as a repository in a database somewhere on our computer.

![](Images/Day35_Git6.png)

-Now we can create some files and folders and our source code can begin or maybe it already has and we have something in here already. We can use the `git add .` command which puts all files and folders in our directory into a snapshot but we have not yet committed anything to that database. We are just saying all files with the `.` are ready to be added.
+Now we can create some files and folders and our source code can begin, or maybe it already has and we have something in here already. We can use the `git add .` command which puts all files and folders in our directory into a snapshot, but we have not yet committed anything to that database.
We are just saying all files with the `.` are ready to be added.

![](Images/Day35_Git7.png)

-Then we want to go ahead and commit our files, we do this with the `git commit -m "My First Commit"` command. We can give a reason for our commit and this is suggested so we know what has happened for each commit.
+Then we want to go ahead and commit our files, we do this with the `git commit -m "My First Commit"` command. We can give a reason for our commit and this is suggested so we know what has happened for each commit.

![](Images/Day35_Git8.png)

@@ -79,11 +80,11 @@ We can now see what has happened within the history of the project. Using the `g

![](Images/Day35_Git9.png)

-We can also check the status of our repository by using `git status` this shows we have nothing to commit and we can add a new file called sample code.ps1. If we then run the same `git status you will see that we file to be committed.
+We can also check the status of our repository by using `git status`, which shows we have nothing to commit, and we can add a new file called sample code.ps1. If we then run the same `git status` you will see that we have a file to be committed.

![](Images/Day35_Git10.png)

-Add our new file using the `git add sample code.ps1` command and then we can run `git status` again and see our file is ready to be committed.
+Add our new file using the `git add sample code.ps1` command and then we can run `git status` again and see our file is ready to be committed.

![](Images/Day35_Git11.png)

@@ -95,7 +96,7 @@ Another `git status` now shows everything is clean again.

![](Images/Day35_Git13.png)

-We can then use the `git log` command which shows the latest changes and first commit.
+We can then use the `git log` command which shows the latest changes and first commit.
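The whole `init → add → commit → status → log` loop from this walkthrough fits in a few lines — a compressed sketch you can run in a scratch directory (the file names and identity values are just examples):

```shell
# The day's workflow end to end, in a scratch directory.
set -e
work=$(mktemp -d) && cd "$work"
git init -q                                   # the directory becomes a repository
git config user.name "Example" && git config user.email "example@example.com"

echo 'Write-Host "Hello"' > myscript.ps1
git add .                                     # stage everything, ready to commit
git commit -q -m "My First Commit"
git status --short                            # clean tree: prints nothing

echo 'Write-Host "Sample"' > "sample code.ps1"
git status --short                            # the new untracked file shows up
git add "sample code.ps1"
git commit -q -m "My Second Commit"
git log --oneline                             # both commits, newest first
```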
![](Images/Day35_Git14.png) @@ -103,38 +104,37 @@ If we wanted to see the changes between our commits i.e what files have been add ![](Images/Day35_Git15.png) -Which then displays what has changed in our case we added a new file. +Which then displays what has changed in our case we added a new file. ![](Images/Day35_Git16.png) -We can also and we will go deeper into this later on but we can jump around our commits i.e we can go time travelling! By using our commit number we can use the `git checkout 709a` command to jump back in time without losing our new file. +We can also and we will go deeper into this later on but we can jump around our commits i.e we can go time travelling! By using our commit number we can use the `git checkout 709a` command to jump back in time without losing our new file. ![](Images/Day35_Git17.png) -But then equally we will want to move forward as well and we can do this the same way with the commit number or you can see here we are using the `git switch -` command to undo our operation. +But then equally we will want to move forward as well and we can do this the same way with the commit number or you can see here we are using the `git switch -` command to undo our operation. ![](Images/Day35_Git18.png) -The TLDR; +The TLDR; - Tracking a project's history - Managing multiple versions of a project - Sharing code amongst developers and a wider scope of teams and tools - Coordinating teamwork -- Oh and there is some time travel! - -This might have seemed a jump around but hopefully, you can see without really knowing the commands used the powers and the big picture behind Version Control. +- Oh and there is some time travel! -Next up we will be getting git installed and set up on your local machine and diving a little deeper into some other use cases and commands that we can achieve in Git. 
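The time-travel trick above (`git checkout <commit>` to go back, `git switch -` to return) can be tried safely in a scratch repository — a sketch, assuming git 2.23+ for `git switch`:

```shell
# Jumping between commits without losing anything that was committed.
set -e
work=$(mktemp -d) && cd "$work"
git init -q
git config user.name "Example" && git config user.email "example@example.com"
echo one > first.txt  && git add . && git commit -q -m "first commit"
echo two > second.txt && git add . && git commit -q -m "second commit"

first=$(git rev-list --max-parents=0 HEAD)   # hash of the very first commit
git checkout -q "$first"                     # back in time: second.txt leaves the working tree
ls
git switch -q -                              # forward again: back on the branch tip
ls                                           # second.txt has returned
```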
+This might have seemed a jump around but hopefully, you can see without really knowing the commands used the powers and the big picture behind Version Control. +Next up we will be getting git installed and set up on your local machine and diving a little deeper into some other use cases and commands that we can achieve in Git. -## Resources +## Resources - [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4) - [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ) -- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) -- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) -- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) +- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) +- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) +- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) - [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics) -See you on [Day 36](day36.md) +See you on [Day 36](day36.md) diff --git a/Days/day36.md b/Days/day36.md index 7c4bdef97..83297eb24 100644 --- a/Days/day36.md +++ b/Days/day36.md @@ -1,152 +1,154 @@ --- -title: '#90DaysOfDevOps - Installing & Configuring Git - Day 36' +title: "#90DaysOfDevOps - Installing & Configuring Git - Day 36" published: false description: 90DaysOfDevOps - Installing & Configuring Git -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048738 --- + ## Installing & Configuring Git -Git is an open source, cross-platform tool for version control. If you are like me, using Ubuntu or most Linux environments you might find that you already have git installed but we are going to run through the install and configuration. 
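As the intro above notes, git is often already present on Linux — two quick checks before installing anything:

```shell
git --version    # prints something like "git version 2.x.y" if git is installed
command -v git   # path to the git binary; empty output means it is not installed
```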
+Git is an open source, cross-platform tool for version control. If you are like me, using Ubuntu or most Linux environments you might find that you already have git installed but we are going to run through the install and configuration. -Even if you already have git installed on your system it is also a good idea to make sure we are up to date. +Even if you already have git installed on your system it is also a good idea to make sure we are up to date. ### Installing Git As already mentioned Git is cross-platform, we will be running through Windows and Linux but you can find macOS also listed [here](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) -For [Windows](https://git-scm.com/download/win) we can grab our installers from the official site. +For [Windows](https://git-scm.com/download/win) we can grab our installers from the official site. -You could also use `winget` on your Windows machine, think of this as your Windows Application Package Manager. +You could also use `winget` on your Windows machine, think of this as your Windows Application Package Manager. -Before we install anything let's see what version we have on our Windows Machine. Open a PowerShell window and run `git --version` +Before we install anything let's see what version we have on our Windows Machine. Open a PowerShell window and run `git --version` ![](Images/Day36_Git1.png) -We can also check our WSL Ubuntu version of Git as well. +We can also check our WSL Ubuntu version of Git as well. ![](Images/Day36_Git2.png) -At the time of writing the latest Windows release is `2.35.1` so we have some updating to do there which I will run through. I expect the same for Linux. +At the time of writing the latest Windows release is `2.35.1` so we have some updating to do there which I will run through. I expect the same for Linux. -I went ahead and downloaded the latest installer and ran through the wizard and will document that here. 
The important thing to note is that git will uninstall previous versions before installing the latest.
+I went ahead and downloaded the latest installer and ran through the wizard and will document that here. The important thing to note is that git will uninstall previous versions before installing the latest.

-Meaning that the process shown below is also the same process for the most part as if you were installing from no git.
+Meaning that the process shown below is, for the most part, the same process as if you were installing git from scratch.

-It is a very simple installation. Once downloaded double click and get started. Read through the GNU license agreement. But remember this is free and open-source software.
+It is a very simple installation. Once downloaded, double click and get started. Read through the GNU license agreement. But remember this is free and open-source software.

![](Images/Day36_Git3.png)

-Now we can choose additional components that we would like to also install but also associate with git. On Windows, I always make sure I install Git Bash as this allows us to run bash scripts on Windows.
+Now we can choose additional components that we would like to install and associate with git. On Windows, I always make sure I install Git Bash as this allows us to run bash scripts on Windows.

![](Images/Day36_Git4.png)

-We can then choose which SSH Executable we wish to use. IN leave this as the bundled OpenSSH that you might have seen in the Linux section.
+We can then choose which SSH executable we wish to use. I'll leave this as the bundled OpenSSH that you might have seen in the Linux section.

![](Images/Day36_Git5.png)

-We then have experimental features that we may wish to enable, for me I don't need them so I don't enable them, you can always come back in through the installation and enable these later on.
+We then have experimental features that we may wish to enable, for me I don't need them so I don't enable them, you can always come back in through the installation and enable these later on. ![](Images/Day36_Git6.png) -Installation complete, we can now choose to open Git Bash and or the latest release notes. +Installation complete, we can now choose to open Git Bash and or the latest release notes. ![](Images/Day36_Git7.png) -The final check is to take a look in our PowerShell window at what version of git we have now. +The final check is to take a look in our PowerShell window at what version of git we have now. ![](Images/Day36_Git8.png) -Super simple stuff and now we are on the latest version. On our Linux machine, we seemed to be a little behind so we can also walk through that update process. +Super simple stuff and now we are on the latest version. On our Linux machine, we seemed to be a little behind so we can also walk through that update process. -I simply run the `sudo apt-get install git` command. +I simply run the `sudo apt-get install git` command. ![](Images/Day36_Git9.png) -You could also run the following which will add the git repository for software installations. +You could also run the following which will add the git repository for software installations. 
```
sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version
-```
+```
+
### Configuring Git

-When we first use git we have to define some settings,
+When we first use git we have to define some settings:

- Name
-- Email
+- Email
- Default Editor
- Line Ending

-This can be done at three levels
+This can be done at three levels:

-- System = All users
-- Global = All repositories of the current user
+- System = All users
+- Global = All repositories of the current user
- Local = The current repository

-Example:
-`git config --global user.name "Michael Cade"`
+Example:
+`git config --global user.name "Michael Cade"`
`git config --global user.email "Michael.Cade@90DaysOfDevOps.com"`

-Depending on your Operating System will determine the default text editor. In my Ubuntu machine without setting the next command is using nano. The below command will change this to visual studio code.
+Your Operating System will determine the default text editor. On my Ubuntu machine, without setting this, the default editor is nano. The command below will change this to Visual Studio Code.

`git config --global core.editor "code --wait"`

-now if we want to be able to see all git configurations then we can use the following command.
+Now if we want to be able to see all git configurations then we can use the following command.

-`git config --global -e`
+`git config --global -e`

![](Images/Day36_Git10.png)

-On any machine this file will be named `.gitconfig` on my Windows machine you will find this in your user account directory.
+On any machine this file will be named `.gitconfig`; on my Windows machine you will find this in your user account directory.

![](Images/Day36_Git11.png)

### Git Theory

-I mentioned in the post yesterday that there were other version control types and we can split these down into two different types. One is Client Server and the other is Distributed.
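The configuration levels above can be seen interacting in a short sketch. It redirects `HOME` to a temporary directory so the `--global` writes don't touch your real `~/.gitconfig`; the name values are placeholders:

```shell
# System / global / local configuration, and which value wins.
set -e
export HOME=$(mktemp -d)           # sandbox: --global now writes to a temp ~/.gitconfig
unset XDG_CONFIG_HOME              # make sure git doesn't pick up another config path
work=$(mktemp -d) && cd "$work" && git init -q

git config --global user.name "Michael Cade"   # all repositories of this user
git config --local  user.name "Work Account"   # this repository only

git config user.name               # local overrides global: Work Account
git config --global user.name      # the global value is untouched: Michael Cade
git config --list --show-origin | grep user.name   # shows which file each value came from
```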
+I mentioned in the post yesterday that there were other version control types and we can split these down into two different types. One is Client Server and the other is Distributed. -### Client-Server Version Control +### Client-Server Version Control -Before git was around Client-Server was the defacto method for version control. An example of this would be [Apache Subversion](https://subversion.apache.org/) which is an open source version control system founded in 2000. +Before git was around Client-Server was the defacto method for version control. An example of this would be [Apache Subversion](https://subversion.apache.org/) which is an open source version control system founded in 2000. -In this model of Client-Server version control, the first step the developer downloads the source code and the actual files from the server. This doesn't remove the conflicts but it does remove the complexity of the conflicts and how to resolve them. +In this model of Client-Server version control, the first step the developer downloads the source code and the actual files from the server. This doesn't remove the conflicts but it does remove the complexity of the conflicts and how to resolve them. ![](Images/Day36_Git12.png) -Now for example let's say we have two developers working on the same files and one wins the race and commits or uploads their file back to the server first with their new changes. When the second developer goes to update they have a conflict. +Now for example let's say we have two developers working on the same files and one wins the race and commits or uploads their file back to the server first with their new changes. When the second developer goes to update they have a conflict. ![](Images/Day36_Git13.png) -So now the Dev needs to pull down the first devs code change next to their check and then commit once those conflicts have been settled. 
+So now the Dev needs to pull down the first devs code change next to their check and then commit once those conflicts have been settled. ![](Images/Day36_Git15.png) -### Distributed Version Control +### Distributed Version Control -Git is not the only distributed version control system. But it is very much the defacto. +Git is not the only distributed version control system. But it is very much the defacto. -Some of the major benefits of Git are: +Some of the major benefits of Git are: -- Fast -- Smart -- Flexible +- Fast +- Smart +- Flexible - Safe & Secure -Unlike the Client-Server version control model, each developer downloads the source repository meaning everything. History of commits, all the branches etc. +Unlike the Client-Server version control model, each developer downloads the source repository meaning everything. History of commits, all the branches etc. ![](Images/Day36_Git16.png) -## Resources +## Resources - [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4) - [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ) -- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) -- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) -- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) +- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) +- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) +- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) - [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics) -See you on [Day 37](day37.md) +See you on [Day 37](day37.md) diff --git a/Days/day37.md b/Days/day37.md index 180cfa531..8490a6ba1 100644 --- a/Days/day37.md +++ b/Days/day37.md @@ -1,172 +1,171 @@ --- -title: '#90DaysOfDevOps - Gitting to know Git - Day 37' +title: "#90DaysOfDevOps - Gitting to 
know Git - Day 37" published: false description: 90DaysOfDevOps - Gitting to know Git -tags: 'DevOps, 90daysofdevops, learning' +tags: "DevOps, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048707 --- + ## Gitting to know Git -Apologies for the terrible puns in the title and throughout. I am surely not the first person to turn Git into a dad joke! +Apologies for the terrible puns in the title and throughout. I am surely not the first person to turn Git into a dad joke! In the last two posts we learnt about version control systems, and some of the fundamental workflows of git as a version control system [Day 35](day35.md) Then we got git installed on our system, updated and configured. We also went a little deeper into the theory between the Client-Server version control system and Git which is a distributed version control system [Day 36](day36.md). Now we are going to run through some of the commands and use cases that we will all commonly see with git. -### Where to git help with git? +### Where to git help with git? -There are going to be times when you just cannot remember or just don't know the command you need to get things done with git. You are going to need help. +There are going to be times when you just cannot remember or just don't know the command you need to get things done with git. You are going to need help. - Google or any search engine is likely to be your first port of call when searching for help. +Google or any search engine is likely to be your first port of call when searching for help. -Secondly, the next place is going to be the official git site and the documentation. [git-scm.com/docs](http://git-scm.com/docs) Here you will find not only a solid reference to all the commands available but also lots of different resources. +Secondly, the next place is going to be the official git site and the documentation. 
[git-scm.com/docs](http://git-scm.com/docs) Here you will find not only a solid reference to all the commands available but also lots of different resources. ![](Images/Day37_Git1.png) -We can also access this same documentation which is super useful if you are without connectivity from the terminal. If we chose the `git add` command for example we can run `git add --help` and we see below the manual. +We can also access this same documentation from the terminal, which is super useful if you are without connectivity. If we choose the `git add` command, for example, we can run `git add --help` and see the manual below. ![](Images/Day37_Git2.png) -We can also in the shell use `git add -h` which is going to give us a summary of the options we have available. +We can also use `git add -h` in the shell, which is going to give us a summary of the options we have available. ![](Images/Day37_Git3.png) ### Myths surrounding Git -"Git has no access control" - You can empower a leader to maintain source code. +"Git has no access control" - You can empower a leader to maintain source code. -"Git is too heavy" - Git can provide shallow repositories which means a reduced amount of history if you have large projects. +"Git is too heavy" - Git can provide shallow repositories, which means a reduced amount of history if you have large projects. ### Real shortcomings -Not ideal for Binary files. Great for source code but not great for executable files or videos for example. +Not ideal for binary files. Great for source code but not great for executable files or videos, for example. -Git is not user-friendly, the fact that we have to spend time talking about commands and functions of the tool is probably a key sign of that. +Git is not user-friendly; the fact that we have to spend time talking about commands and functions of the tool is probably a key sign of that. -Overall though, git is hard to learn but easy to use. +Overall though, git is hard to learn but easy to use.
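The "Git is too heavy" myth above mentions shallow repositories. As a quick local demonstration of that point (assuming `git` is installed; the repository name and file contents here are made up purely for illustration), we can build a small repository and shallow-clone it with `--depth 1`:

```shell
# Build a throwaway repository with two commits (no network needed).
tmp=$(mktemp -d) && cd "$tmp"
git init -q source && cd source
git config user.name "demo" && git config user.email "demo@example.com"
echo "v1" > file.txt && git add file.txt && git commit -qm "first commit"
echo "v2" > file.txt && git commit -qam "second commit"
cd ..

# A shallow clone pulls only the most recent commit of history.
git clone -q --depth 1 "file://$tmp/source" shallow
git -C shallow rev-list --count HEAD # → 1
git -C source rev-list --count HEAD  # → 2
```

For a genuinely large project you would point `git clone --depth 1` at the remote URL instead, and `git fetch --unshallow` can retrieve the full history later if you ever need it.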
-### The git ecosystem +### The git ecosystem -I want to briefly cover the ecosystem around git but not deep dive into some of these areas but I think it's important to note these here at a high level. +I want to briefly cover the ecosystem around git but not deep dive into some of these areas but I think it's important to note these here at a high level. -Almost all modern development tools support Git. +Almost all modern development tools support Git. -- Developer tools - We have already mentioned visual studio code but you will find git plugins and integrations into sublime text and other text editors and IDEs. - -- Team tools - Also mentioned around tools like Jenkins from a CI/CD point of view, Slack from a messaging framework and Jira for project management and issue tracking. +- Developer tools - We have already mentioned visual studio code but you will find git plugins and integrations into sublime text and other text editors and IDEs. +- Team tools - Also mentioned around tools like Jenkins from a CI/CD point of view, Slack from a messaging framework and Jira for project management and issue tracking. -- Cloud Providers - All the large cloud providers support git, Microsoft Azure, Amazon AWS, and Google Cloud Platform. - -- Git-Based services - Then we have GitHub, GitLab and BitBucket which we will cover in more detail later on. I have heard of these services as the social network for code! +- Cloud Providers - All the large cloud providers support git, Microsoft Azure, Amazon AWS, and Google Cloud Platform. +- Git-Based services - Then we have GitHub, GitLab and BitBucket which we will cover in more detail later on. I have heard of these services as the social network for code! -### The Git Cheatsheet +### The Git Cheatsheet -We have not covered most of these commands but having looked at some cheat sheets available online I wanted to document some of the git commands and what their purpose is. 
We don't need to remember these all, and with more hands-on practice and use you will pick at least the git basics. +We have not covered most of these commands but having looked at some cheat sheets available online I wanted to document some of the git commands and what their purpose is. We don't need to remember them all, and with more hands-on practice and use you will pick up at least the git basics. -I have taken these from [atlassian](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) but writing them down and reading the description is a good way to get to know what the commands are as well as getting hands-on in everyday tasks. +I have taken these from [Atlassian](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) but writing them down and reading the descriptions is a good way to get to know what the commands are, as well as getting hands-on in everyday tasks. -### Git Basics +### Git Basics -| Command | Example | Description | -| --------------- | ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------- | -| git init | `git init ` | Create an empty git repository in the specified directory. | -| git clone | `git clone ` | Clone repository located at onto local machine. | -| git config | `git config user.name` | Define author name to be used for all commits in current repository `system`, `global`, `local` flag to set config options. | -| git add | `git add ` | Stage all changes in for the next commit. We can also add and <.> for everything. | -| git commit -m | `git commit -m ""` | Commit the staged snapshot, use to detail what is being committed. | -| git status | `git status` | List files that are staged, unstaged and untracked. | -| git log | `git log` | Display all commit history using the default format. There are additional options with this command.
| -| git diff | `git diff` | Show unstaged changes between your index and working directory. | +| Command | Example | Description | +| ------------- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------- | +| git init | `git init ` | Create an empty git repository in the specified directory. | +| git clone | `git clone ` | Clone repository located at onto local machine. | +| git config | `git config user.name` | Define author name to be used for all commits in current repository `system`, `global`, `local` flag to set config options. | +| git add | `git add ` | Stage all changes in for the next commit. We can also add and <.> for everything. | +| git commit -m | `git commit -m ""` | Commit the staged snapshot, use to detail what is being committed. | +| git status | `git status` | List files that are staged, unstaged and untracked. | +| git log | `git log` | Display all commit history using the default format. There are additional options with this command. | +| git diff | `git diff` | Show unstaged changes between your index and working directory. | -### Git Undoing Changes +### Git Undoing Changes -| Command | Example | Description | -| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| git revert | `git revert ` | Create a new commit that undoes all of the changes made in then apply it to the current branch. | -| git reset | `git reset ` | Remove from the staging area, but leave the working directory unchanged. This unstaged a file without overwriting any changes. | -| git clean | `git clean -n` | Shows which files would be removed from the working directory. Use `-f` in place of `-n` to execute the clean. 
| +| Command | Example | Description | +| ---------- | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------- | +| git revert | `git revert ` | Create a new commit that undoes all of the changes made in then apply it to the current branch. | +| git reset | `git reset ` | Remove from the staging area, but leave the working directory unchanged. This unstages a file without overwriting any changes. | +| git clean | `git clean -n` | Shows which files would be removed from the working directory. Use `-f` in place of `-n` to execute the clean. | ### Git Rewriting History -| Command | Example | Description | -| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| git commit | `git commit --amend` | Replace the last commit with the staged changes and the last commit combined. Use with nothing staged to edit the last commit’s message. | -| git rebase | `git rebase ` | Rebase the current branch onto . can be a commit ID, branch name, a tag, or a relative reference to HEAD. | -| git reflog | `git reflog` | Show a log of changes to the local repository’s HEAD. Add --relative-date flag to show date info or --all to show all refs. | +| Command | Example | Description | +| ---------- | -------------------- | ---------------------------------------------------------------------------------------------------------------------------------------- | +| git commit | `git commit --amend` | Replace the last commit with the staged changes and the last commit combined. Use with nothing staged to edit the last commit’s message. | +| git rebase | `git rebase ` | Rebase the current branch onto . can be a commit ID, branch name, a tag, or a relative reference to HEAD. | +| git reflog | `git reflog` | Show a log of changes to the local repository’s HEAD.
Add --relative-date flag to show date info or --all to show all refs. | ### Git Branches -| Command | Example | Description | -| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| git branch | `git branch` | List all of the branches in your repo. Add a argument to create a new branch with the name . | -| git checkout | `git checkout -b ` | Create and check out a new branch named . Drop the -b flag to checkout an existing branch. | -| git merge | `git merge ` | Merge into the current branch. | +| Command | Example | Description | +| ------------ | -------------------------- | ------------------------------------------------------------------------------------------------------------- | +| git branch | `git branch` | List all of the branches in your repo. Add a argument to create a new branch with the name . | +| git checkout | `git checkout -b ` | Create and check out a new branch named . Drop the -b flag to checkout an existing branch. | +| git merge | `git merge ` | Merge into the current branch. | ### Git Remote Repositories -| Command | Example | Description | -| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| git remote add | `git remote add ` | Create a new connection to a remote repo. After adding a remote, you can use as a shortcut for in other commands. | -| git fetch | `git fetch ` | Fetches a specific , from the repo. Leave off to fetch all remote refs. | -| git pull | `git pull ` | Fetch the specified remote’s copy of current branch and immediately merge it into the local copy. | -| git push | `git push ` | Push the branch to , along with necessary commits and objects. Creates named branch in the remote repo if it doesn’t exist. 
| +| Command | Example | Description | +| -------------- | ----------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | +| git remote add | `git remote add ` | Create a new connection to a remote repo. After adding a remote, you can use as a shortcut for in other commands. | +| git fetch | `git fetch ` | Fetches a specific , from the repo. Leave off to fetch all remote refs. | +| git pull | `git pull ` | Fetch the specified remote’s copy of current branch and immediately merge it into the local copy. | +| git push | `git push ` | Push the branch to , along with necessary commits and objects. Creates named branch in the remote repo if it doesn’t exist. | ### Git Diff -| Command | Example | Description | -| --------------- | ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- | -| git diff HEAD | `git diff HEAD` | Show the difference between the working directory and the last commit. | -| git diff --cached | `git diff --cached` | Show difference between staged changes and last commit | +| Command | Example | Description | +| ----------------- | ------------------- | ---------------------------------------------------------------------- | +| git diff HEAD | `git diff HEAD` | Show the difference between the working directory and the last commit. 
| +| git diff --cached | `git diff --cached` | Show difference between staged changes and last commit | ### Git Config -| Command | Example | Description | -| ----------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | -| git config --global user.name | `git config --global user.name ` | Define the author name to be used for all commits by the current user. | -| git config --global user.email | `git config --global user.email ` | Define author email to be used for all commits by the current user. | -| git config --global alias | `git config --global alias ` | Create shortcut for a git command . | -| git config --system core.editor | `git config --system core.editor ` | Set the text editor to be used by commands for all users on the machine. arg should be the comamnd that launches the desired editor. | -| git config --global --edit | `git config --global --edit ` | Open the global configuration file in a text editor for manual editing. | +| Command | Example | Description | +| ---------------------------------------------------- | ------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------- | +| git config --global user.name | `git config --global user.name ` | Define the author name to be used for all commits by the current user. | +| git config --global user.email | `git config --global user.email ` | Define author email to be used for all commits by the current user. | +| git config --global alias | `git config --global alias ` | Create shortcut for a git command . | +| git config --system core.editor | `git config --system core.editor ` | Set the text editor to be used by commands for all users on the machine. 
arg should be the command that launches the desired editor. | +| git config --global --edit | `git config --global --edit ` | Open the global configuration file in a text editor for manual editing. | ### Git Rebase -| Command | Example | Description | -| ------------------------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- | -| git rebase -i | `git rebase -i ` | Interactively rebase current branch onto . Launches editor to enter commands for how each commit will be transferred to the new base. | +| Command | Example | Description | +| -------------------- | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | +| git rebase -i | `git rebase -i ` | Interactively rebase current branch onto . Launches editor to enter commands for how each commit will be transferred to the new base. | ### Git Pull -| Command | Example | Description | -| ------------------------------------- | ------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------- | -| git pull --rebase | `git pull --rebase ` | Fetch the remote’s copy of current branch and rebases it into the local copy. Uses git rebase instead of the merge to integrate the branches. | +| Command | Example | Description | +| -------------------------- | ---------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | +| git pull --rebase | `git pull --rebase ` | Fetch the remote’s copy of the current branch and rebase it into the local copy. Uses git rebase instead of a merge to integrate the branches.
| ### Git Reset -| Command | Example | Description | -| ------------------------- | --------------------------| --------------------------------------------------------------------------------------------------------------------------------------------- | -| git reset | `git reset ` | Reset the staging area to match the most recent commit but leave the working directory unchanged. | -| git reset --hard | `git reset --hard` | Reset staging area and working directory to match most recent commit and overwrites all changes in the working directory | -| git reset | `git reset ` | Move the current branch tip backwards to , reset the staging area to match, but leave the working directory alone | -| git reset --hard | `git reset --hard ` | Same as previous, but resets both the staging area & working directory to match. Deletes uncommitted changes, and all commits after . | +| Command | Example | Description | +| ------------------------- | --------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------- | +| git reset | `git reset ` | Reset the staging area to match the most recent commit but leave the working directory unchanged. | +| git reset --hard | `git reset --hard` | Reset staging area and working directory to match most recent commit and overwrites all changes in the working directory | +| git reset | `git reset ` | Move the current branch tip backwards to , reset the staging area to match, but leave the working directory alone | +| git reset --hard | `git reset --hard ` | Same as previous, but resets both the staging area & working directory to match. Deletes uncommitted changes, and all commits after . 
| ### Git Push -| Command | Example | Description | -| ------------------------- | --------------------------| --------------------------------------------------------------------------------------------------------------------------------------------- | -| git push --force | `git push --force` | Forces the git push even if it results in a non-fast-forward merge. Do not use the --force flag unless you’re sure you know what you’re doing. | -| git push --all | `git push --all` | Push all of your local branches to the specified remote. | -| git push --tags | `git push --tags` | Tags aren’t automatically pushed when you push a branch or use the --all flag. The --tags flag sends all of your local tags to the remote repo. | +| Command | Example | Description | +| ------------------------- | --------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | +| git push --force | `git push --force` | Forces the git push even if it results in a non-fast-forward merge. Do not use the --force flag unless you’re sure you know what you’re doing. | +| git push --all | `git push --all` | Push all of your local branches to the specified remote. | +| git push --tags | `git push --tags` | Tags aren’t automatically pushed when you push a branch or use the --all flag. The --tags flag sends all of your local tags to the remote repo. 
| -## Resources +## Resources - [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4) - [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ) -- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) -- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) -- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) +- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) +- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) +- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) - [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics) - [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) -See you on [Day 38](day38.md) +See you on [Day 38](day38.md) diff --git a/Days/day38.md b/Days/day38.md index 96da68c87..5c5a4042b 100644 --- a/Days/day38.md +++ b/Days/day38.md @@ -1,29 +1,30 @@ --- -title: '#90DaysOfDevOps - Staging & Changing - Day 38' +title: "#90DaysOfDevOps - Staging & Changing - Day 38" published: false description: 90DaysOfDevOps - Staging & Changing -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049042 --- + ## Staging & Changing -We have already covered some of the basics but putting things into a walkthrough makes it better for me to learn and understand how and why we are doing it this way. Before we get into any git-based services such as GitHub, git has its powers that we can take advantage of on our local workstation. +We have already covered some of the basics but putting things into a walkthrough makes it better for me to learn and understand how and why we are doing it this way. 
Before we get into any git-based services such as GitHub, git has its powers that we can take advantage of on our local workstation. +We are going to take the project folder we created at the start of the git session and we are going to walk through some of the simple steps we can do with git. We created a folder on our local machine and we initialised it with the `git init` command ![](Images/Day38_Git1.png) -We can also see now that we have initialised the folder we have a hidden folder in our directory. +We can also see that, now we have initialised the folder, we have a hidden folder in our directory. ![](Images/Day38_Git2.png) -This is where the details of the git repository are stored as well as the information regarding our branches and commits. +This is where the details of the git repository are stored as well as the information regarding our branches and commits. ### Staging Files -We then start working on our empty folder and maybe we add some source code on the first days of work. We create our readme.mdfile and we can see that file in the directory, next we check our `git status` and it knows about the new readme.mdfile but we have not committed the file yet. +We then start working in our empty folder and maybe we add some source code on the first days of work. We create our readme.md file and we can see that file in the directory; next we check `git status` and it knows about the new readme.md file, but we have not committed the file yet. ![](Images/Day38_Git3.png) @@ -31,96 +32,96 @@ We can stage our readme.md file with the `git add README.md` command then we can ![](Images/Day38_Git4.png) -Next up we want to commit this, our first commit or our first snapshot of our project.
We can do this by using the `git commit -m "Meaningful message"` command so that we can easily see what has changed for each commit. Also, notice the yellow cross changes now to a green tick. This is something I have within my terminal with the theme I use, something we covered in the Linux section. +Next up we want to commit this, our first commit or our first snapshot of our project. We can do this by using the `git commit -m "Meaningful message"` command so that we can easily see what has changed for each commit. Also, notice the yellow cross changes now to a green tick. This is something I have within my terminal with the theme I use, something we covered in the Linux section. ![](Images/Day38_Git5.png) ### Committing Changes -We are going to most likely want to add more files or even change the files we have in our directory. We have already done our first commit above. But now we are going to add more details and more files. +We are going to most likely want to add more files or even change the files we have in our directory. We have already done our first commit above. But now we are going to add more details and more files. -We could repeat our process from before, create or edit our file > `git add .` to add all files to the staging area then `git commit -m "meaningful message"` and this would work just fine. But to be able to offer a meaningful message on commit of what has changed you might not want to write something out like `git commit -m "Well, I changed some code because it did not work and when I fixed that I also added something new to the readme.mdto ensure everyone knew about the user experience and then I made a tea."` I mean this would work as well although probably make it descriptive but the preferred way here is to add this with a text editor. +We could repeat our process from before, create or edit our file > `git add .` to add all files to the staging area then `git commit -m "meaningful message"` and this would work just fine. 
But to be able to offer a meaningful message on commit of what has changed you might not want to write something out like `git commit -m "Well, I changed some code because it did not work and when I fixed that I also added something new to the readme.md to ensure everyone knew about the user experience and then I made a tea."` I mean this would work as well, and it is certainly descriptive, but the preferred way here is to add this with a text editor. If we run `git commit` after running `git add` it will open our default text editor which in my case here is nano. Here are the steps I took to add some changes to the file, ran `git status` to show what is and what is not staged. Then I used `git add` to add the file to the staging area, then ran `git commit` which opened nano. ![](Images/Day38_Git6.png) -When nano opens you can then add your short and long description and then save the file. +When nano opens you can then add your short and long description and then save the file. ![](Images/Day38_Git7.png) ### Committing Best Practices -There is a balance here between when to commit and commit often. We do not want to be waiting to be finished the project before committing, each commit should be meaningful and they also should not be coupled with non-relevant tasks with each other. If you have a bug fix and a typo make sure they are two separate commits as a best practice. +There is a balance here between when to commit and committing often. We do not want to wait until the project is finished before committing; each commit should be meaningful and commits should not be coupled with non-relevant tasks. If you have a bug fix and a typo, make sure they are two separate commits as a best practice. -Make the commit message mean something. +Make the commit message mean something. -In terms of wording, the team or yourself should be sticking to the same wording for each commit.
+In terms of wording, the team or yourself should stick to the same wording for each commit. ### Skipping the Staging Area -Do we always have to stage our changes before committing them? +Do we always have to stage our changes before committing them? -The answer is yes but don't see this as a shortcut, you have to be sure 100% that you are not needing that snapshot to roll back to, it is a risky thing to do. +The answer is no, but don't see this as a shortcut; you have to be 100% sure that you will not need that snapshot to roll back to, as it is a risky thing to do. ![](Images/Day38_Git8.png) ### Removing Files -What about removing files from our project, maybe we have another file in our directory that we have committed but now the project no longer needs or using it, as a best practice we should remove it. +What about removing files from our project? Maybe we have another file in our directory that we have committed but the project no longer needs or uses it; as a best practice we should remove it. -Just because we remove the file from the directory, git is still aware of this file and we also need to remove it from the repository. You can see the workflow for this below. +Just because we remove the file from the directory, git is still aware of this file and we also need to remove it from the repository. You can see the workflow for this below. ![](Images/Day38_Git9.png) -That could be a bit of a pain to either remember or have to deal with if you have a large project which has many moving files and folders. We can do this with one command with `git rm oldcode.ps1` +That could be a bit of a pain to either remember or have to deal with if you have a large project which has many moving files and folders. We can do this with one command: `git rm oldcode.ps1` ![](Images/Day38_Git10.png) ### Renaming or Moving Files -Within our operating system, we can rename and move our files. We will no doubt need to do this from time to time with our projects.
Similar to removing though there is a two-step process, we change our files on our OS and then we have to modify and make sure that the staging area or that the files are added correctly. Steps as follows: +Within our operating system, we can rename and move our files. We will no doubt need to do this from time to time with our projects. As with removing, though, there is a two-step process: we change our files in the OS and then we have to make sure that the staging area and the files are updated correctly. Steps as follows: ![](Images/Day38_Git11.png) -However, like removing files from the operating system and then the git repository we can perform this rename using a git command too. +However, like removing files from the operating system and then the git repository, we can perform this rename using a git command too. ![](Images/Day38_Git12.png) ### Ignoring Files -We may have the requirement to ignore files or folders within our project that we might be using locally or that will be just wasted space if we were to share with the overall project, a good example of this could be logs. I also think using this for secrets that you do not want to be shared out in public or across teams. +We may have the requirement to ignore files or folders within our project that we might be using locally or that would just be wasted space if we were to share them with the overall project; a good example of this could be logs. This is also useful for secrets that you do not want to be shared in public or across teams. -We can ignore files by adding folders or files to the `.gitignore` file in our project directory. +We can ignore files by adding folders or files to the `.gitignore` file in our project directory. ![](Images/Day38_Git13.png) -You can then open the `.gitignore` file and see that we have the logs/ directory present. But we could also add additional files and folders here to ignore.
+You can then open the `.gitignore` file and see that we have the logs/ directory present. But we could also add additional files and folders here to ignore. ![](Images/Day38_Git14.png) -We can then see `git status` and then see what has happened. +We can then run `git status` and see what has happened. ![](Images/Day38_Git15.png) -There are also ways in which you might need to go back and ignore files and folders, maybe you did want to share the logs folder but then later realised that you didn't want to. You will have to use `git rm --cached ` to remove files and folders from the staging area if you have a previously tracked folder that you now want to ignore. +There are also times when you might need to go back and ignore files and folders; maybe you did want to share the logs folder but later realised that you didn't want to. You will have to use `git rm --cached ` to remove files and folders from the staging area if you have a previously tracked folder that you now want to ignore. ### Short Status -We have been using `git status` a lot to understand what we have in our staging area and what we do not, it's a very comprehensive command with lots of detail. Most of the time you will just want to know what has been modified or what is new? We can use `git status -s` for a short status of this detail. I would usually set an alias on my system to just use `git status -s` vs the more detailed command. +We have been using `git status` a lot to understand what we have in our staging area and what we do not; it's a very comprehensive command with lots of detail. Most of the time you will just want to know what has been modified or what is new. We can use `git status -s` for a short status of this detail. I would usually set an alias on my system to use `git status -s` vs the more detailed command. ![](Images/Day38_Git16.png) -In the post tomorrow we will continue to look through these short examples of these common git commands.
+In the post tomorrow we will continue to look through these short examples of these common git commands. -## Resources +## Resources - [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4) - [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ) -- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) -- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) -- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) +- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) +- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) +- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) - [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics) - [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) -See you on [Day 39](day39.md) +See you on [Day 39](day39.md) diff --git a/Days/day39.md b/Days/day39.md index e336a8548..093b673a8 100644 --- a/Days/day39.md +++ b/Days/day39.md @@ -1,210 +1,212 @@ --- -title: '#90DaysOfDevOps - Viewing, unstaging, discarding & restoring - Day 39' +title: "#90DaysOfDevOps - Viewing, unstaging, discarding & restoring - Day 39" published: false -description: '90DaysOfDevOps - Viewing, unstaging, discarding & restoring' -tags: 'devops, 90daysofdevops, learning' +description: "90DaysOfDevOps - Viewing, unstaging, discarding & restoring" +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048827 --- + ## Viewing, unstaging, discarding & restoring -Continuing from where we finished yesterday around some of the commands that we have with git and how to leverage git with your projects. 
Remember we have not touched GitHub or any other git-based services yet this is all to help you keep control of your projects locally at the moment, but they will all become useful when we start to integrate into those tools.
+Continuing from where we finished yesterday with some of the commands that we have with git and how to leverage git with your projects. Remember, we have not touched GitHub or any other git-based services yet; this is all to help you keep control of your projects locally at the moment, but it will all become useful when we start to integrate with those tools.

-### Viewing the Staged and Unstaged Changes
+### Viewing the Staged and Unstaged Changes

-It is good practice to make sure you view the staged and unstaged code before committing. We can do this by running the `git diff --staged` command
+It is good practice to make sure you view the staged and unstaged code before committing. We can do this by running the `git diff --staged` command.

![](Images/Day39_Git1.png)

-This then shows us all the changes we have made and all new files we have added or deleted.
+This then shows us all the changes we have made and all new files we have added or deleted.

-changes in the modified files are indicated with `---` or `+++` you can see below that we just added +add some text below which means they are new lines.
+Changed files are marked with `---` or `+++` headers; you can see below that we just added `+add some text below`, which means these are new lines.

![](Images/Day39_Git2.png)

-We can also run `git diff` to compare our staging area with our working directory. If we make some changes to our newly added file code.txt and add some lines of text.
+We can also run `git diff` to compare our staging area with our working directory. Let's make some changes to our newly added file code.txt and add some lines of text.

![](Images/Day39_Git3.png)

-If we then run `git diff` we compare and see the output below.
+If we then run `git diff`, we compare the two and see the output below.

![](Images/Day39_Git4.png)

### Visual Diff Tools

-For me, the above is more confusing so I would much rather use a visual tool,
+For me, the above is more confusing, so I would much rather use a visual tool.

-To name a few visual diff tools:
+To name a few visual diff tools:

- KDiff3
-- P4Merge
+- P4Merge
- WinMerge (Windows Only)
- VSCode

To set this in git we run the following command `git config --global diff.tool vscode`

-We are going to run the above and we are going to set some parameters when we launch VScode.
+We are going to run the above, and we are also going to set some parameters for when we launch VScode.

![](Images/Day39_Git5.png)

-We can also check our configuration with `git config --global -e`
+We can also check our configuration with `git config --global -e`.

![](Images/Day39_Git6.png)

-We can then use `git difftool` to now open our diff visual tool.
+We can then use `git difftool` to open our visual diff tool.

![](Images/Day39_Git7.png)

-Which then opens our VScode editor on the diff page and compares the two, we have only modified one file from nothing to now adding a line of code on the right side.
+This opens our VScode editor on the diff page and compares the two; we have only modified one file, from nothing to now having a line of code on the right side.

![](Images/Day39_Git8.png)

-I find this method much easier to track changes and this is something similar to what we will see when we look into git-based services such as GitHub.
+I find this method much easier for tracking changes, and it is similar to what we will see when we look into git-based services such as GitHub.

-We can also use `git difftool --staged` to compare stage with committed files.
+We can also use `git difftool --staged` to compare staged files with committed files.

![](Images/Day39_Git9.png)

-Then we can cycle through our changed files before we commit.
+Then we can cycle through our changed files before we commit.

![](Images/Day39_Git10.png)

-I am using VScode as my IDE and like most IDEs they have this functionality built in it is very rare you would need to run these commands from the terminal, although helpful if you don't have an IDE installed for some reason.
+I am using VScode as my IDE and, like most IDEs, it has this functionality built in; it is very rare that you would need to run these commands from the terminal, although they are helpful if you don't have an IDE installed for some reason.

### Viewing the History

-We previously touched on `git log` which will provide us with a comprehensive view of all commits we have made in our repository.
+We previously touched on `git log`, which will provide us with a comprehensive view of all commits we have made in our repository.

![](Images/Day39_Git11.png)

-Each commit has its hexadecimal string, unique to the repository. Here you can see which branch we are working on and then also the author, date and commit message.
+Each commit has its own hexadecimal string, unique to the repository. Here you can see which branch we are working on, and also the author, date and commit message.

-We also have `git log --oneline` and this gives us a much smaller version of the hexadecimal string which we can use in other `diff` commands. We also only have the one-line description or commit message.
+We also have `git log --oneline`, and this gives us a much smaller version of the hexadecimal string which we can use in other `diff` commands. We also only see the one-line description or commit message.

![](Images/Day39_Git12.png)

-We can reverse this into a start with the first commit by running `git log --oneline --reverse` and now we see our first commit at the top of our page.
+We can reverse the order so it starts with the first commit by running `git log --oneline --reverse`, and now we see our first commit at the top of our page.
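As a quick sketch of the log commands above (assuming git is installed; the repository and file names here are illustrative), you can compare the two orderings in a throwaway repository:

```shell
# Build a throwaway repo with two commits, then compare log orderings.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Example"

echo "first" > code.txt
git add code.txt
git commit -q -m "first commit"

echo "second" >> code.txt
git commit -q -am "second commit"

git log --oneline            # newest first: "second commit" on top
git log --oneline --reverse  # oldest first: "first commit" on top
```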
![](Images/Day39_Git13.png)

### Viewing a Commit

-Being able to look at the commit message is great if you have been conscious about following best practices and you have added a meaningful commit message, however, there is also `git show` command which allows us to inspect and view a commit.
+Being able to look at the commit message is great if you have been conscientious about following best practices and have added a meaningful commit message; however, there is also the `git show` command, which allows us to inspect and view a commit.

We can use `git log --oneline --reverse` to get a list of our commits, and then we can take those and run `git show `

![](Images/Day39_Git14.png)

-The output of that command will look like below with the detail of the commit, author and what changed.
+The output of that command will look like the below, with the detail of the commit, the author and what changed.

![](Images/Day39_Git15.png)

-We can also use `git show HEAD~1` where 1 is how many steps back from the current version we want to get back to.
+We can also use `git show HEAD~1`, where 1 is how many steps back from the current version we want to go.

-This is great if you want some detail on your files, but if we want to list all the files in a tree for the whole snapshot directory. We can achieve this by using the `git ls-tree HEAD~1` command, again going back one snapshot from the last commit. We can see below we have two blobs, these indicate files whereas the tree would indicate a directory. You can also see commits and tags in this information.
+This is great if you want some detail on your files, but what if we want to list all the files in a tree for the whole snapshot? We can achieve this by using the `git ls-tree HEAD~1` command, again going back one snapshot from the last commit. We can see below we have two blobs; these indicate files, whereas a tree would indicate a directory. You can also see commits and tags in this information.
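A minimal sketch of the inspection commands above (assuming git is installed; the file name is illustrative):

```shell
# Two snapshots of one file, then inspect the earlier snapshot.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Example"

echo "version 1" > readme.md
git add readme.md
git commit -q -m "add readme"

echo "version 2" > readme.md
git commit -q -am "update readme"

git show --stat HEAD~1     # detail of the commit one step back from HEAD
git ls-tree HEAD~1         # blobs are files; trees are directories
git show HEAD~1:readme.md  # contents of the file in that snapshot: "version 1"
```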
![](Images/Day39_Git16.png)

-We can then use the above to drill in and see the contents of our file (blobs) using the `git show` command.
+We can then use the above to drill in and see the contents of our files (blobs) using the `git show` command.

![](Images/Day39_Git17.png)

-Then the contents of that specific version of the file will be shown.
+Then the contents of that specific version of the file will be shown.

![](Images/Day39_Git18.png)

### Unstaging Files

-There will be a time when you have maybe used `git add .` but there are files you do not wish to commit to that snapshot just yet. In this example below I have added newfile.txt to my staging area but I am not ready to commit this file so I am going to use the `git restore --staged newfile.txt` to undo the `git add` step.
+There will be a time when you have maybe used `git add .` but there are files you do not wish to commit to that snapshot just yet. In this example below, I have added newfile.txt to my staging area, but I am not ready to commit this file, so I am going to use `git restore --staged newfile.txt` to undo the `git add` step.

![](Images/Day39_Git19.png)

-We can also do the same to modified files such as main.js and unstage the commit, see above we have a greem M for modified and then below we are unstaging those changes.
+We can also do the same to modified files such as main.js and unstage the change; see above, we have a green M for modified, and then below we are unstaging those changes.

![](Images/Day39_Git20.png)

-I have found this command quite useful during the 90DaysOfDevOps as I sometimes work ahead of the days where I feel I want to make notes for the following day but I don't want to commit and push to the public GitHub repository.
+I have found this command quite useful during 90DaysOfDevOps, as I sometimes work ahead and want to make notes for the following day, but I don't want to commit and push them to the public GitHub repository yet.
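A hedged sketch of the unstaging step above (assumes git 2.23 or newer for `git restore`; the file name is illustrative):

```shell
# Stage a new file, then pull it back out of the staging area.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "Example"
git commit -q --allow-empty -m "initial commit"  # so HEAD exists

echo "hello" > newfile.txt
git add newfile.txt
git status -s                     # "A  newfile.txt": staged for commit
git restore --staged newfile.txt
git status -s                     # "?? newfile.txt": untracked again
```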
### Discarding Local Changes

-Sometimes we might make changes but we are not happy with those changes and we want to throw them away. We are going to use the `git restore` command again and we are going to be able to restore files from our snapshots or previous versions. We can run `git restore .` against our directory and we will restore everything from our snapshot but notice that our untracked file is still present. There is no previous file being tracked called newfile.txt.
+Sometimes we might make changes that we are not happy with, and we want to throw them away. We are going to use the `git restore` command again, as it lets us restore files from our snapshots or previous versions. We can run `git restore .` against our directory and we will restore everything from our snapshot, but notice that our untracked file is still present; there is no previously tracked file called newfile.txt.

![](Images/Day39_Git21.png)

-Now to remove newfile.txt or any untracked files. We can use `git clean` we will get a warning alone.
+Now, to remove newfile.txt or any untracked files, we can use `git clean`; run on its own, it will only give us a warning.

![](Images/Day39_Git22.png)

-Or if we know the consequences then we might want to run `git clean -fd` to force and remove all directories.
+Or, if we know the consequences, we might want to run `git clean -fd` to force the removal and include directories.

![](Images/Day39_Git23.png)

-### Restoring a File to an Earlier Version
+### Restoring a File to an Earlier Version

-As we have alluded to throughout a big portion of what Git can help with is being able to restore copies of your files from your snapshots (this is not a backup but it is a very fast restore point) My advice is that you also save copies of your code in other locations using a backup solution for this.
+As we have alluded to throughout, a big portion of what Git can help with is being able to restore copies of your files from your snapshots (this is not a backup, but it is a very fast restore point). My advice is that you also save copies of your code in other locations using a backup solution for this.

-As an example let's go and delete our most important file in our directory, notice we are using Unix-based commands to remove this from the directory, not git commands.
+As an example, let's go and delete the most important file in our directory; notice we are using Unix-based commands to remove this from the directory, not git commands.

![](Images/Day39_Git24.png)

-Now we have no readme.mdin our working directory. We could have used `git rm readme.md` and this would then be reflected in our git database. Let's also delete it from here to simulate it being removed completely.
+Now we have no readme.md in our working directory. We could have used `git rm readme.md` and this would then be reflected in our git database. Let's also delete it from here to simulate it being removed completely.

![](Images/Day39_Git25.png)

-Let's now commit this with a message and prove that we no longer have anything in our working directory or staging area.
+Let's now commit this with a message and prove that we no longer have anything in our working directory or staging area.

![](Images/Day39_Git26.png)

-Mistakes were made and we now need this file back!
+Mistakes were made and we now need this file back!

-We could use the `git undo` command which will undo the last commit, but what if it was a while back? We can use our `git log` command to find our commits and then we find that our file is in the last commit but we don't all of those commits to be undone so we can then use this command `git restore --source=HEAD~1 README.md` to specifically find the file and restore it from our snapshot.
+We could undo the last commit with `git reset HEAD~1`, but what if it was a while back? We can use our `git log` command to find our commits, and we find that our file is in the last commit; but we don't want all of those commits to be undone, so we can use the command `git restore --source=HEAD~1 README.md` to specifically find the file and restore it from our snapshot.

-You can see using this process we now have the file back in our working directory.
+You can see that using this process we now have the file back in our working directory.

![](Images/Day39_Git27.png)

-We now have a new untracked file and we can use our commands previously mentioned to track, stage and commit our files and changes.
+We now have a new untracked file, and we can use the commands previously mentioned to track, stage and commit our files and changes.

-### Rebase vs Merge
+### Rebase vs Merge

-This seems to be the biggest headache when it comes to Git and when to use rebase vs using merge on your git repositories.
+This seems to be the biggest headache when it comes to Git: when to use rebase versus merge on your git repositories.

-The first thing to know is that both `git rebase` and `git merge` solve the same problem. Both are to integrate changes from one branch into another branch. However, they do this in different ways.
+The first thing to know is that both `git rebase` and `git merge` solve the same problem: both integrate changes from one branch into another branch. However, they do this in different ways.

-Let's start with a new feature in a new dedicated branch. The Main branch continues with new commits.
+Let's start with a new feature in a new dedicated branch, while the main branch continues with new commits.

![](Images/Day39_Git28.png)

-The easy option here is to use `git merge feature main` which will merge the main branch into the feature branch.
+The easy option here is to check out the feature branch and run `git merge main`, which will merge the main branch into the feature branch.

![](Images/Day39_Git29.png)

-Merging is easy because it is non-destructive. The existing branches are not changed in any way. However, this also means that the feature branch will have an irrelevant merge commit every time you need to incorporate upstream changes. If the main is very busy or active this will or can pollute the feature branch history.
+Merging is easy because it is non-destructive. The existing branches are not changed in any way. However, this also means that the feature branch will have an irrelevant merge commit every time you need to incorporate upstream changes. If main is very busy or active, this can pollute the feature branch history.

-As an alternate option, we can rebase the feature branch onto the main branch using
+As an alternative, we can rebase the feature branch onto the main branch using

```
git checkout feature
git rebase main
-```
+```
+
This moves the entire feature branch, effectively incorporating all of the new commits in main. But, instead of using a merge commit, rebasing re-writes the project history by creating brand new commits for each commit in the original branch.

![](Images/Day39_Git30.png)

-The biggest benefit of rebasing is a much cleaner project history. It also eliminates unnecessary merge commits. and as you compare the last two images, you can follow arguably a much cleaner linear project history.
+The biggest benefit of rebasing is a much cleaner project history. It also eliminates unnecessary merge commits, and as you compare the last two images, you can follow an arguably much cleaner, linear project history.
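To see the difference concretely, here is a hedged sketch (branch and file names are illustrative) that rebases a feature branch and confirms the resulting history is linear, with no merge commit:

```shell
# Rebase a feature branch onto main and verify a linear history.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/main  # make the first branch "main"
git config user.email "you@example.com"
git config user.name "Example"

echo "base" > app.txt
git add app.txt
git commit -q -m "base"

git checkout -q -b feature
echo "feature work" > feature.txt
git add feature.txt
git commit -q -m "feature work"

git checkout -q main
echo "main moves on" > main.txt
git add main.txt
git commit -q -m "main moves on"

git checkout -q feature
git rebase -q main   # replay the feature commits on top of main
git log --oneline    # linear: feature work, main moves on, base
```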
-Although it's still not a foregone conclusion, choosing the cleaner history also comes with tradeoffs, If you do not follow the [The Golden rule of rebasing](https://www.atlassian.com/git/tutorials/merging-vs-rebasing#the-golden-rule-of-rebasing) re-writing project history can be potentially catastrophic for your collaboration workflow. And, less importantly, rebasing loses the context provided by a merge commit—you can’t see when upstream changes were incorporated into the feature.
+Although it's still not a foregone conclusion, choosing the cleaner history also comes with tradeoffs. If you do not follow [the golden rule of rebasing](https://www.atlassian.com/git/tutorials/merging-vs-rebasing#the-golden-rule-of-rebasing), re-writing project history can be potentially catastrophic for your collaboration workflow. And, less importantly, rebasing loses the context provided by a merge commit: you can’t see when upstream changes were incorporated into the feature.

-## Resources
+## Resources

- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
-- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
-- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
-- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
+- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
+- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
+- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet)
- [Exploring the Git command line – A getting started guide](https://veducate.co.uk/exploring-the-git-command-line/)

-See you on
[Day40](day40.md)
+See you on [Day 40](day40.md)

diff --git a/Days/day40.md b/Days/day40.md
index 6c45c1adb..fc494d5f7 100644
--- a/Days/day40.md
+++ b/Days/day40.md
@@ -1,208 +1,210 @@
---
-title: '#90DaysOfDevOps - Social Network for code - Day 40'
+title: "#90DaysOfDevOps - Social Network for code - Day 40"
published: false
description: 90DaysOfDevOps - Social Network for code
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049044
---
+
## Social Network for code

-Exploring GitHub | GitLab | BitBucket
-Today I want to cover some of the git-based services that we have likely all heard of and expect we also use daily.
+Exploring GitHub | GitLab | BitBucket
+
+Today I want to cover some of the git-based services that we have likely all heard of and that I expect we all use daily.

-We will then use some of our prior session knowledge to move copies of our data to each of the main services.
+We will then use some of our knowledge from prior sessions to move copies of our data to each of the main services.

-I called this section "Social Network for Code" let me explain why?
+I called this section "Social Network for Code"; let me explain why.

-### GitHub
+### GitHub

-Most common at least for me is GitHub, GitHub is a web-based hosting service for git. It is most commonly used by software developers to store their code. Source Code Management with the git version control features as well as a lot of additional features.
+The most common, at least for me, is GitHub. GitHub is a web-based hosting service for git, most commonly used by software developers to store their code. It provides source code management with git version control, as well as a lot of additional features.
It allows for teams or open contributors to easily communicate and provides a social aspect to coding (hence the social networking title). Since 2018, GitHub has been part of Microsoft.

-GitHub has been around for quite some time and was founded in 2007/2008. With Over 40 million users on the platform today.
+GitHub has been around for quite some time, having been founded in 2007/2008, with over 40 million users on the platform today.

-GitHub Main Features
+GitHub Main Features

-- Code Repository
-- Pull Requests
-- Project Management toolset - Issues
-- CI / CD Pipeline - GitHub Actions
+- Code Repository
+- Pull Requests
+- Project Management toolset - Issues
+- CI / CD Pipeline - GitHub Actions

-In terms of pricing, GitHub has different levels of pricing for its users. More can be found on [Pricing](https://github.com/pricing)
+In terms of pricing, GitHub has different tiers for its users. More can be found on the [Pricing](https://github.com/pricing) page.

-For this, we will cover the free tier.
+For this, we will cover the free tier.

-I am going to be using my already created GitHub account during this walkthrough, if you do not have an account then on the opening GitHub page there is a sign-up option and some easy steps to get set up.
+I am going to be using my already created GitHub account during this walkthrough; if you do not have an account, the opening GitHub page has a sign-up option and some easy steps to get set up.

### GitHub opening page

-When you first log in to your GitHub account you get a page containing a lot of widgets giving you options of where and what you would like to see or do.
First up we have the "All Activity" this is going to give you a look into what is happening with your repositories or activity in general associated with your organisation or account.
+When you first log in to your GitHub account, you get a page containing a lot of widgets giving you options of where and what you would like to see or do. First up, we have "All Activity"; this is going to give you a look into what is happening with your repositories, or activity in general associated with your organisation or account.

![](Images/Day40_Git1.png)

-Next, we have our Code Repositories, either our own or repositories that we have interacted with recently. We can also quickly create new repositories or search repositories.
+Next, we have our Code Repositories, either our own or repositories that we have interacted with recently. We can also quickly create new repositories or search repositories.

![](Images/Day40_Git2.png)

-We then have our recent activity, these for me are issues and pull requests that I have created or contributed to recently.
+We then have our recent activity; these, for me, are issues and pull requests that I have created or contributed to recently.

![](Images/Day40_Git3.png)

-Over on the right side of the page, we have some referrals for repositories that we might be interested in, most likely based on your recent activity or own projects.
+Over on the right side of the page, we have some recommendations for repositories that we might be interested in, most likely based on your recent activity or your own projects.

![](Images/Day40_Git4.png)

-To be honest I am very rarely on my home page that we just saw and described, although I now see that the feed could be really useful to help interact with the community a little better on certain projects.
+To be honest, I am very rarely on the home page that we just saw and described, although I now see that the feed could be really useful to help interact with the community a little better on certain projects.

-Next up if we want to head into our GitHub Profile we can navigate to the top right corner and on your image, there is a drop-down which allows you to navigate through your account.
From here to access your Profile select "Your Profile"
+Next up, if we want to head into our GitHub Profile, we can navigate to the top right corner; on your image, there is a drop-down which allows you to navigate through your account. From here, to access your Profile, select "Your Profile".

![](Images/Day40_Git5.png)

-Next, your profile page will appear, by default, unless you change your configuration you are not going to see what I have, I have added some functionality that shows my recent blog posts over on [vZilla](https://vzilla.co.uk) and then also my latest videos on my [YouTube](https://m.youtube.com/c/MichaelCade1) Channel.
+Next, your profile page will appear. By default, unless you change your configuration, you are not going to see what I have; I have added some functionality that shows my recent blog posts over on [vZilla](https://vzilla.co.uk) and also my latest videos on my [YouTube](https://m.youtube.com/c/MichaelCade1) channel.

-You are not going to be spending much time looking at your profile, but this is a good profile page to share around your network so they can see the cool projects you are working on.
+You are not going to be spending much time looking at your own profile, but this is a good page to share around your network so they can see the cool projects you are working on.

![](Images/Day40_Git6.png)

-We can then drill down into the building block of GitHub, the repositories. Here you are going to see your repositories and if you have private repositories they are also going to be shown in this long list.
+We can then drill down into the building block of GitHub, the repositories. Here you are going to see your repositories, and if you have private repositories, they will also be shown in this long list.
![](Images/Day40_Git7.png)

-As the repository is so important to GitHub let me choose a pretty busy one of late and run through some of the core functionality that we can use here on top of everything I am already using when it comes to editing our "code" in git on my local system.
+As the repository is so important to GitHub, let me choose a pretty busy one of late and run through some of the core functionality that we can use here, on top of everything I am already using when it comes to editing our "code" in git on my local system.

-First of all, from the previous window, I have selected the 90DaysOfDevOps repository and we get to see this view. You can see from this view we have a lot of information, we have our main code structure in the middle showing our files and folders that are stored in our repository. We have our readme. mdbeing displayed down at the bottom. Over to the right of the page, we have an about section where the repository has a description and purpose. Then we have a lot of information underneath this showing how many people have starred in the project, forked, and watched.
+First of all, from the previous window, I have selected the 90DaysOfDevOps repository and we get to see this view. You can see from this view we have a lot of information: we have our main code structure in the middle showing the files and folders that are stored in our repository. We have our readme.md being displayed down at the bottom. Over to the right of the page, we have an about section where the repository has a description and purpose. Then we have a lot of information underneath this showing how many people have starred the project, forked it, and watched it.

![](Images/Day40_Git8.png)

-If we scroll down a little further you will also see that we have Released, these are from the golang part of the challenge. We do not have any packages in our project, we have our contributors listed here.
(Thank you community for assisting in my spelling and fact checking) We then have languages used again these are from different sections in the challenge.
+If we scroll down a little further, you will also see that we have Releases; these are from the Golang part of the challenge. We do not have any packages in our project, but we have our contributors listed here. (Thank you, community, for assisting in my spelling and fact-checking.) We then have the languages used; again, these are from different sections in the challenge.

![](Images/Day40_Git9.png)

-A the top of the page you are going to see a list of tabs. These may vary and these can be modified to only show the ones you require. You will see here that I am not using all of these and I should remove them to make sure my whole repository is tidy.
+At the top of the page, you are going to see a list of tabs. These may vary, and they can be modified to only show the ones you require. You will see here that I am not using all of these, and I should remove them to make sure my whole repository is tidy.

-First up we had the code tab which we just discussed but these tabs are always available when navigating through a repository which is super useful so we can jump between sections quickly and easily. Next, we have the issues tab.
+First up, we had the code tab, which we just discussed; these tabs are always available when navigating through a repository, which is super useful so we can jump between sections quickly and easily. Next, we have the issues tab.

-Issues let you track your work on GitHub, where development happens.
In this specific repository, you can see I have some issues focused on adding diagrams or fixing typos, but we also have an issue stating the need for a Chinese version of the repository.
-If this was a code repository then this is a great place to raise concerns or issues with the maintainers, but remember to be mindful and detailed about what you are reporting, and give as much detail as possible.
+If this were a code repository, then this is a great place to raise concerns or issues with the maintainers, but remember to be mindful about what you are reporting and give as much detail as possible.
![](Images/Day40_Git10.png)
-The next tab is Pull Requests, Pull requests let you tell others about changes you've pushed to a branch in a repository. This is where someone may have forked your repository, made changes such as bug fixes or feature enhancements or just typos in a lot of the cases in this repository.
+The next tab is Pull Requests. Pull requests let you tell others about changes you've pushed to a branch in a repository. This is where someone may have forked your repository and made changes such as bug fixes, feature enhancements or, in a lot of the cases in this repository, just typo fixes.
-We will cover forking later on.
+We will cover forking later on.
![](Images/Day40_Git11.png)
-I believe the next tab is quite new? But I thought for a project like #90DaysOfDevOps this could help guide the content journey but also help the community as they walk through their learning journey. I have created some discussion groups for each section of the challenge so people can jump in and discuss.
+I believe the next tab, Discussions, is quite new, but I thought for a project like #90DaysOfDevOps it could help guide the content journey and also help the community as they walk through their learning journey. I have created some discussion groups for each section of the challenge so people can jump in and discuss.
![](Images/Day40_Git12.png)
-The Actions tab is going to enable you to build, test and deploy code and a lot more right from within GitHub. GitHub Actions will be something we cover in the CI/CD section of the challenge but this is where we can set some configuration here to automate steps for us.
+The Actions tab is going to enable you to build, test and deploy code and a lot more, right from within GitHub. GitHub Actions will be something we cover in the CI/CD section of the challenge, but this is where we can set some configuration to automate steps for us.
-On my main GitHub Profile, I am using GitHub Actions to fetch the latest blog posts and YouTube videos to keep things up to date on that home screen.
+On my main GitHub Profile, I am using GitHub Actions to fetch the latest blog posts and YouTube videos to keep things up to date on that home screen.
![](Images/Day40_Git13.png)
-I mentioned above how GitHub is not just a source code repository but is also a project management tool, The Project tab enables us to build out project tables kanban type boards so that we can link issues and PRs to better collaborate on the project and have visibility of those tasks.
+I mentioned above how GitHub is not just a source code repository but also a project management tool. The Projects tab enables us to build out project tables and kanban-style boards so that we can link issues and PRs, collaborate better on the project and have visibility of those tasks.
![](Images/Day40_Git14.png)
-I know that issues to me seem like a good place to log feature requests and they are but the wiki page allows for a comprehensive roadmap for the project to be outlined with the current status and in general better document your project is it troubleshooting or how-to type content.
+Issues seem to me like a good place to log feature requests, and they are, but the Wiki page allows a comprehensive roadmap for the project to be outlined with the current status, and in general lets you better document your project, be it troubleshooting or how-to type content.
![](Images/Day40_Git15.png)
-Not so applicable to this project but the Security tab is there to make sure that contributors know how to deal with certain tasks, we can define a policy here but also code scanning add-ons to make sure your code for example does not contain secret environment variables.
+Not so applicable to this project, but the Security tab is there to make sure that contributors know how to deal with certain tasks. We can define a security policy here, and also add code-scanning add-ons to make sure, for example, that your code does not contain secrets such as environment variables.
![](Images/Day40_Git16.png)
-For me the insights tab is great, it provides so much information about the repository from how much activity has been going on down to commits and issues, but it also reports on traffic to the repository. You can see a list on the left side that allows you to go into great detail about metrics on the repository.
+For me, the Insights tab is great: it provides so much information about the repository, from how much activity has been going on down to commits and issues, and it also reports on traffic to the repository. A list on the left side allows you to go into great detail about metrics on the repository.
![](Images/Day40_Git17.png)
-Finally, we have the Settings tab, this is where we can get into the details of how we run our repository, I am currently the only maintainer of the repository but we could share this responsibility here. We can define integrations and other such tasks here.
+Finally, we have the Settings tab. This is where we can get into the details of how we run our repository. I am currently the only maintainer of the repository, but we could share that responsibility here. We can also define integrations and other such tasks here.
![](Images/Day40_Git18.png)
-This was a super quick overview of GitHub, I think there are some other areas that I might have mentioned that need explaining in a little more detail. As mentioned GitHub houses millions of repositories mostly these are holding source code and these can be public or privately accessible.
+This was a super quick overview of GitHub; some of the areas mentioned may need explaining in a little more detail. As mentioned, GitHub houses millions of repositories, mostly holding source code, and these can be publicly or privately accessible.
-### Forking
+### Forking
-I am going to get more into Open-Source in the session tomorrow but a big part of any code repository is the ability to collaborate with the community. Let's think of the scenario I want a copy of a repository because I want to make some changes to it, maybe I want to fix a bug or maybe I want to change something to use it for a use case that I have that was maybe not the intended use case for the original maintainer of the code. This is what we would call forking a repository. A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.
+I am going to get more into Open-Source in the session tomorrow, but a big part of any code repository is the ability to collaborate with the community. Let's think of a scenario: I want a copy of a repository because I want to make some changes to it; maybe I want to fix a bug, or maybe I want to adapt it for a use case that was not the one intended by the original maintainer of the code.
This is what we would call forking a repository. A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.
-Let me head back to the opening page after login and see one of those suggested repositories.
+Let me head back to the opening page after login and look at one of those suggested repositories.
![](Images/Day40_Git19.png)
-If we click on that repository we are going to get the same look as we have just walked through on the 90DaysOfDevOps repository.
+If we click on that repository, we are going to get the same look as we have just walked through on the 90DaysOfDevOps repository.
![](Images/Day40_Git20.png)
-If we notice below we have 3 options, we have watch, fork and star.
+Notice below that we have 3 options: Watch, Fork and Star.
-- Watch - Updates when things happen to the repository.
+- Watch - Get updates when things happen to the repository.
- Fork - a copy of a repository.
- Star - "I think your project is cool"
![](Images/Day40_Git21.png)
-Given our scenario of wanting a copy of this repository to work on we are going to hit the fork option. If you are a member of multiple organisations then you will have to choose where the fork will take place, I am going to choose my profile.
+Given our scenario of wanting a copy of this repository to work on, we are going to hit the Fork option. If you are a member of multiple organisations, you will have to choose where the fork will take place; I am going to choose my profile.
![](Images/Day40_Git22.png)
-Now we have our copy of the repository that we can freely work on and change as we see fit. This would be the start of the pull request process that we mentioned briefly before but we will cover it in more detail tomorrow.
+Now we have our copy of the repository that we can freely work on and change as we see fit.
This would be the start of the pull request process that we mentioned briefly before, but we will cover it in more detail tomorrow.
![](Images/Day40_Git23.png)
-Ok, I hear you say, but how do I make changes to this repository and code if it's on a website, well you can go through and edit on the website but it's not going to be the same as using your favourite IDE on your local system with your favourite colour theme. For us to get a copy of this repository on our local machine we will perform a clone of the repository. This will allow us to work on things locally and then push our changes back into our forked copy of the repository.
+Ok, I hear you say, but how do I make changes to this repository and code if it's on a website? You can go through and edit on the website, but it's not going to be the same as using your favourite IDE on your local system with your favourite colour theme. To get a copy of this repository on our local machine, we will perform a clone of the repository. This will allow us to work on things locally and then push our changes back into our forked copy of the repository.
-We have several options when it comes to getting a copy of this code as you can see below.
+We have several options when it comes to getting a copy of this code, as you can see below.
-There is a local version available of GitHub Desktop which gives you a visual desktop application to track changes and push and pull changes between local and GitHub.
+There is GitHub Desktop, a visual desktop application that tracks changes and pushes and pulls changes between your local machine and GitHub.
-For this little demo, I am going to use the HTTPS URL we see on there.
+For this little demo, I am going to use the HTTPS URL we see on there.
![](Images/Day40_Git24.png)
-Now on our local machine, I am going to navigate to a directory I am happy to download this repository to and then run `git clone url`
+Now on our local machine, I am going to navigate to a directory I am happy to download this repository to and then run `git clone url`.
![](Images/Day40_Git25.png)
-Now we could take it to VScode to make some changes to this.
+Now we can open it in VSCode to make some changes.
![](Images/Day40_Git26.png)
-Let's now make some changes, I want to make a change to all those links and replace that with something else.
+Let's now make some changes. I want to change all those links and replace them with something else.
![](Images/Day40_Git27.png)
-Now if we check back on GitHub and we find our readme.mdin that repository, you should be able to see a few changes that I made to the file.
+Now if we check back on GitHub and find our readme.md in that repository, you should be able to see a few changes that I made to the file.
![](Images/Day40_Git28.png)
-At this stage, this might be complete and we might be happy with our change as we are the only people going to use our new change but maybe it was a bug change and if that is the case then we will want to contribute via a Pull Request to notify the original repository maintainers of our change and see if they accept our changes.
+At this stage, this might be complete, and we might be happy with our change if we are the only people going to use it. But maybe it was a bug fix; in that case, we will want to contribute via a Pull Request to notify the original repository maintainers of our change and see if they accept it.
-We can do this by using the contribute button highlighted below. I will cover more on this tomorrow when we look into Open-Source workflows.
+We can do this by using the Contribute button highlighted below. I will cover more on this tomorrow when we look into Open-Source workflows.
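The loop described above (clone, edit locally, then push back to the fork) can be sketched end to end. To keep the sketch self-contained and runnable anywhere, a local bare repository stands in for the forked copy on GitHub; with a real fork, you would paste its HTTPS URL in place of the filesystem path:

```shell
# A local bare repository plays the role of our forked copy on GitHub.
# With a real fork you would use its HTTPS URL instead of this path.
set -e
work=$(mktemp -d)
git init -q --bare "$work/fork.git"

# Clone the "fork" into a directory we are happy to download it to.
git clone -q "$work/fork.git" "$work/90DaysOfDevOps"
cd "$work/90DaysOfDevOps"

# Make a change locally, for example editing the readme.
printf '# 90DaysOfDevOps\nChanged a link here.\n' > readme.md

# Stage, commit and push the change back to our forked copy.
git add readme.md
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Update links in readme.md"
git push -q origin HEAD
```

After the final push, the fork holds the new commit, which is exactly the state you would then see when checking readme.md back on GitHub.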
![](Images/Day40_Git29.png)
-I have spent a long time looking through GitHub and I hear some of you cry but what about other options!
+I have spent a long time looking through GitHub, and I hear some of you cry, "but what about the other options!"
-Well, there are and I am going to find some resources that cover the basics for some of those as well. You are going to come across GitLab and BitBucket amongst others in your travels and whilst they are git-based services they have their differences.
+Well, there are, and I am going to find some resources that cover the basics of some of those as well. You are going to come across GitLab and BitBucket, amongst others, in your travels, and whilst they are git-based services, they have their differences.
You will also come across hosted options. Most commonly here I have seen GitLab as a hosted version vs GitHub Enterprise (Don't believe there is a free hosted GitHub?)
-## Resources
+## Resources
- [Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners](https://www.youtube.com/watch?v=8aV5AxJrHDg)
- [BitBucket Tutorials Playlist](https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5)
- [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4)
- [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ)
-- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
-- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
-- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
+- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s)
+- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg)
+- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s)
- [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics)
- [Git
cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) -See you on [Day 41](day41.md) +See you on [Day 41](day41.md) diff --git a/Days/day41.md b/Days/day41.md index 951b07f32..ca55f0fbe 100644 --- a/Days/day41.md +++ b/Days/day41.md @@ -1,55 +1,56 @@ --- -title: '#90DaysOfDevOps - The Open Source Workflow - Day 41' +title: "#90DaysOfDevOps - The Open Source Workflow - Day 41" published: false description: 90DaysOfDevOps - The Open Source Workflow -tags: 'DevOps, 90daysofdevops, learning' +tags: "DevOps, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048806 --- -## The Open Source Workflow - -Hopefully, through the last 7 sections of Git, we have a better understanding of what git is and then how a git-based service such as GitHub integrates with git to provide a source code repository but also a way in which the wider community can collaborate on code and projects together. -When we went through the GitHub fundamentals we went through the process of forking a random project and making a change to our local repository. Here we want to go one step further and contribute to an open-source project. Remember that contributing doesn't need to be bug fixes or coding features but it could also be documentation. Every little helps and it also allows you to get hands-on with some of the git functionality we have covered. +## The Open Source Workflow -## Fork a Project +Hopefully, through the last 7 sections of Git, we have a better understanding of what git is and then how a git-based service such as GitHub integrates with git to provide a source code repository but also a way in which the wider community can collaborate on code and projects together. -The first thing we have to do is find a project we can contribute to. I have recently been presenting on the [Kanister Project](https://github.com/kanisterio/kanister) and I would like to share my presentations that are now on YouTube to the main readme.mdfile in the project. 
+When we went through the GitHub fundamentals, we went through the process of forking a random project and making a change in our local repository. Here we want to go one step further and contribute to an open-source project. Remember that contributing doesn't need to be bug fixes or coding features; it could also be documentation. Every little helps, and it also allows you to get hands-on with some of the git functionality we have covered.
+## Fork a Project
+
+The first thing we have to do is find a project we can contribute to. I have recently been presenting on the [Kanister Project](https://github.com/kanisterio/kanister) and I would like to add my presentations, which are now on YouTube, to the main readme.md file in the project.
+
+First of all, we need to fork the project. Let's run through that process. I am going to navigate to the link shared above and fork the repository.
![](Images/Day41_Git1.png)
-We now have our copy of the whole repository.
+We now have our copy of the whole repository.
![](Images/Day41_Git2.png)
-For reference on the Readme.mdfile the original Presentations listed are just these two so we need to fix this with our process.
+For reference, in the readme.md file, the original presentations listed are just these two, so we need to fix this with our process.
![](Images/Day41_Git3.png)
-## Clones to a local machine
+## Clone to a local machine
-Now we have our fork we can bring that down to our local and we can then start making our edits to the files. Using the code button on our repo we can grab the URL and then use `git clone url` in a directory we wish to place the repository.
+Now that we have our fork, we can bring it down to our local machine and start making our edits to the files.
Using the Code button on our repo, we can grab the URL and then use `git clone url` in a directory where we wish to place the repository.
![](Images/Day41_Git4.png)
-## Make our changes
+## Make our changes
-We have our project local so we can open VSCode or an IDE or text editor of your choice to add your modifications.
+We have our project locally, so we can open VSCode, or an IDE or text editor of your choice, to add our modifications.
![](Images/Day41_Git5.png)
-The readme.mdfile is written in markdown language and because I am modifying someone else's project I am going to follow the existing project formatting to add our content.
+The readme.md file is written in markdown and, because I am modifying someone else's project, I am going to follow the existing project formatting to add our content.
![](Images/Day41_Git6.png)
## Test your changes
-We must as a best practice test our changes, this makes total sense if this was a code change to an application you would want to ensure that the application still functions after a code change, well we also must make sure that documentation is formatted and looks correct.
+As a best practice, we must test our changes. This makes total sense for a code change to an application: you would want to ensure that the application still functions after the change. Likewise, we must make sure that documentation is formatted and looks correct.
-In vscode we can add a lot of plugins one of these is the ability to preview markdown pages.
+In VSCode we can add a lot of plugins; one of these is the ability to preview markdown pages.
![](Images/Day41_Git7.png)
@@ -59,13 +60,13 @@ We do not have the authentication to push our changes directly back to the Kanis
![](Images/Day41_Git8.png)
-Now we go back to GitHub to check the changes once more and then contribute back to the master project.
+Now we go back to GitHub to check the changes once more and then contribute back to the master project.
-Looks good.
+Looks good.
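Before contributing back to the master project, you can also verify locally how your fork compares with the original by adding the original repository as a second remote. A sketch below, again using local repositories as stand-ins for the GitHub copies; `upstream` is just a conventional remote name, not a requirement:

```shell
# Local stand-ins: "upstream" plays the original project
# (e.g. kanisterio/kanister) and "fork" plays our copy of it.
set -e
demo=$(mktemp -d)
git init -q "$demo/upstream"
git -C "$demo/upstream" -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "upstream history"
git clone -q "$demo/upstream" "$demo/fork"
cd "$demo/fork"

# Our documentation change, committed on top of the upstream history.
printf 'My presentation links\n' >> README.md
git add README.md
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "added presentation links"

# Add the original project as a second remote and fetch it.
git remote add upstream "$demo/upstream"
git fetch -q upstream

# List the commits our branch has that upstream does not: the same
# "1 commit ahead" comparison that GitHub shows on the fork page.
branch=$(git symbolic-ref --short HEAD)
git log --oneline "upstream/$branch..HEAD"
```

If the original project moves on after you fork, the same remote also lets you `git fetch upstream` again and merge or rebase to bring your fork up to date.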
![](Images/Day41_Git9.png)
-Now we can go back to the top of our forked repository for Kanister and we can see that we are 1 commit ahead of the kanisterio:master branch.
+Now we can go back to the top of our forked repository for Kanister, and we can see that we are 1 commit ahead of the kanisterio:master branch.
![](Images/Day41_Git10.png)
@@ -73,54 +74,54 @@ Next, we hit that contribute button highlighted above. We see the option to "Ope
![](Images/Day41_Git11.png)
-## Open a pull request
+## Open a pull request
-There is quite a bit going on in this next image, top left you can now see we are in the original or the master repository. then you can see what we are comparing and that is the original master and our forked repository. We then have a create pull request button which we will come back to shortly. We have our single commit but if this was more changes you might have multiple commits here. then we have the changes we have made in the readme.mdfile.
+There is quite a bit going on in this next image. Top left, you can now see we are in the original, or master, repository. Then you can see what we are comparing: the original master and our forked repository. We then have a Create pull request button, which we will come back to shortly. We have our single commit, but if there were more changes you might have multiple commits here. Then we have the changes we have made in the readme.md file.
![](Images/Day41_Git12.png)
-We have reviewed the above changes and we are ready to create a pull request by hitting the green button.
+We have reviewed the above changes, and we are ready to create a pull request by hitting the green button.
-Then depending on how the maintainer of a project has set out their Pull Request functionality on their repository you may or may not have a template that gives you pointers on what the maintainer wants to see.
+Then, depending on how the maintainer of a project has set up the Pull Request functionality on their repository, you may or may not have a template that gives you pointers on what the maintainer wants to see.
-This is again where you want to make a meaningful description of what you have done, clear and concise but with enough detail. You can see I have made a simple change overview and I have ticked documentation.
+This is again where you want to write a meaningful description of what you have done: clear and concise, but with enough detail. You can see I have made a simple change overview and have ticked documentation.
![](Images/Day41_Git13.png)
## Create a pull request
-We are now ready to create our pull request. After hitting the "Create Pull Request" at the top of the page you will get a summary of your pull request.
+We are now ready to create our pull request. After hitting "Create Pull Request" at the top of the page, you will get a summary of your pull request.
![](Images/Day41_Git14.png)
-Scrolling down you are likely to see some automation taking place, in this instance, we require a review and some checks are taking place. We can see that Travis CI is in progress and a build has started and this will check our update, making sure that before anything is merged we are not breaking things with our additions.
+Scrolling down, you are likely to see some automation taking place; in this instance, we require a review and some checks are running. We can see that Travis CI is in progress and a build has started; this will check our update, making sure that before anything is merged we are not breaking things with our additions.
![](Images/Day41_Git15.png)
-Another thing to note here is that the red in the screenshot above, can look a little daunting and look as if you have made mistakes! Don't worry you have not broken anything, my biggest tip here is this process is there to help you and the maintainers of the project.
If you have made a mistake at least from my experience the maintainer will contact and advise on what to do next.
+Another thing to note here: the red in the screenshot above can look a little daunting, as if you have made mistakes! Don't worry, you have not broken anything; my biggest tip is that this process is there to help you and the maintainers of the project. If you have made a mistake, at least in my experience, the maintainer will contact you and advise on what to do next.
This pull request is now public for everyone to see [added Kanister presentation/resource #1237](https://github.com/kanisterio/kanister/pull/1237)
-I am going to publish this before the merge and pull requests are accepted so maybe we can get a little prize for anyone that is still following along and can add a picture of the successful PR?
+I am going to publish this before the merge and pull request are accepted, so maybe we can get a little prize for anyone who is still following along and can add a picture of the successful PR?
-1. Fork this repository to your own GitHub account
-2. Add your picture and possibly text
-3. Push the changes to your forked repository
-4. Create a PR that I will see and approve.
-5. I will think of some sort of prize
+1. Fork this repository to your own GitHub account
+2. Add your picture and possibly some text
+3. Push the changes to your forked repository
+4. Create a PR that I will see and approve
+5. I will think of some sort of prize
-This then wraps up our look into Git and GitHub, next we are diving into containers which starts with a big picture look into how, and why containers and also a look into virtualisation and how we got here.
+This wraps up our look into Git and GitHub. Next, we are diving into containers, starting with a big-picture look into how and why containers, and also a look into virtualisation and how we got here.
-## Resources +## Resources - [Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners](https://www.youtube.com/watch?v=8aV5AxJrHDg) - [BitBucket Tutorials Playlist](https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5) - [What is Version Control?](https://www.youtube.com/watch?v=Yc8sCSeMhi4) - [Types of Version Control System](https://www.youtube.com/watch?v=kr62e_n6QuQ) -- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) -- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) -- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) +- [Git Tutorial for Beginners](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) +- [Git for Professionals Tutorial](https://www.youtube.com/watch?v=Uszj_k0DGsg) +- [Git and GitHub for Beginners - Crash Course](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) - [Complete Git and GitHub Tutorial](https://www.youtube.com/watch?v=apGV9Kg7ics) - [Git cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) -See you on [Day 42](day42.md) +See you on [Day 42](day42.md) diff --git a/Days/day42.md b/Days/day42.md index 62e96b3e2..05975773e 100644 --- a/Days/day42.md +++ b/Days/day42.md @@ -1,137 +1,138 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Containers - Day 42' +title: "#90DaysOfDevOps - The Big Picture: Containers - Day 42" published: false description: 90DaysOfDevOps - The Big Picture Containers -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048826 --- + ## The Big Picture: Containers -We are now starting the next section and this section is going to be focused on containers in particular we are going to be looking into Docker getting into some of the key areas to understand more about Containers. 
+We are now starting the next section, which is going to be focused on containers; in particular, we are going to be looking into Docker, getting into some of the key areas needed to understand more about containers.
-I will also be trying to get some hands-on here to create the container that we can use during this section but also in future sections later on in the challenge.
+I will also be trying to get some hands-on here to create a container that we can use during this section and also in future sections later on in the challenge.
-As always this first post is going to be focused on the big picture of how we got here and what it all means.
+As always, this first post is going to be focused on the big picture of how we got here and what it all means.
#History of platforms and application development
-#do we want to talk about Virtualisation & Containerisation
+#do we want to talk about Virtualisation & Containerisation
-### Why another way to run applications?
+### Why another way to run applications?
-The first thing we have to take a look at is why do we need another way to run our software or applications? Well it is just that choice is great, we can run our applications in many different forms, we might see applications deployed on physical hardware with an operating system and a single application deployed there, and we might see the virtual machine or cloud-based IaaS instances running our application which then integrate into a database again in a VM or as PaaS offering in the public cloud. Or we might see our applications running in containers.
+The first thing we have to take a look at is: why do we need another way to run our software or applications?
Well, choice is great: we can run our applications in many different forms. We might see applications deployed on physical hardware with an operating system and a single application deployed there; we might see virtual machines or cloud-based IaaS instances running our application, which then integrates with a database, again in a VM or as a PaaS offering in the public cloud. Or we might see our applications running in containers.
-None of the above options is wrong or right, but they each have their reasons to exist and I also strongly believe that none of these is going away. I have seen a lot of content that walks into Containers vs Virtual Machines and there really should not be an argument as that is more like apples vs pears argument where they are both fruit (ways to run our applications) but they are not the same.
+None of the above options is wrong or right, but they each have their reasons to exist, and I also strongly believe that none of these is going away. I have seen a lot of content that wades into Containers vs Virtual Machines, but there really should not be an argument; that is more like an apples vs pears comparison, where both are fruit (ways to run our applications) but they are not the same.
-I would also say that if you were starting and you were developing an application you should lean towards containers simply because we will get into some of these areas later, but it's about efficiency, speed and size. But that also comes with a price, if you have no idea about containers then it's going to be a learning curve to force yourself to understand the why and get into that mindset. If you have developed your applications a particular way or you are not in a greenfield environment then you might have more pain points to deal with before even considering containers.
+I would also say that if you were starting out and developing an application, you should lean towards containers, simply because of their efficiency, speed and size (we will get into these areas later). But that also comes with a price: if you have no idea about containers, then there is a learning curve in forcing yourself to understand the why and getting into that mindset. If you have developed your applications a particular way, or you are not in a greenfield environment, then you might have more pain points to deal with before even considering containers.

-We have many different choices then when it comes to downloading a given piece of software, there are a variety of different operating systems that we might be using. And specific instructions for what we need to do to install our applications.
+We then have many different choices when it comes to downloading a given piece of software: there are a variety of operating systems we might be using, each with specific instructions for installing our applications.

![](Images/Day42_Containers1.png)

-More and more recently I am finding that the applications we might have once needed a full server OS, A VM, Physical or cloud instance are now releasing container-based versions of their software. I find this interesting as this opens the world of containers and then Kubernetes to everyone and not just a focus on application developers.
+More and more recently, I am finding that applications which might once have needed a full server OS, a VM, or a physical or cloud instance are now released as container-based versions of their software. I find this interesting, as it opens the world of containers, and then Kubernetes, to everyone, not just application developers.

![](Images/Day42_Containers2.png)

-As you can probably tell as I have said before, I am not going to advocate that the answer is containers, what's the question! But I would like to discuss how this is another option for us to be aware of when we deploy our applications.
+As you can probably tell, and as I have said before, I am not going to advocate that containers are the answer no matter the question. But I would like to discuss how this is another option for us to be aware of when we deploy our applications.

![](Images/Day42_Containers4.png)

-We have had container technology for a long time, so why now over the last say 10 years has this become popular, I would say even more popular in the last 5. We have had containers for decades. It comes down to the challenge of containers or should I say images as well, to how we distribute our software, because if we just have container technology, then we still will have many of the same problems we've had with software management.
-
-If we think about Docker as a tool, the reason that it took off, is because of the ecosystem of images that are easy to find and use. Simple to get on your systems and get up and running. A major part of this is consistency across the entire space, of all these different challenges that we face with software. It doesn't matter if it's MongoDB or nodeJS, the process to get either of those up and running will be the same. The process to stop either of those is the same. All of these issues will still exist, but the nice thing is, when we bring good container and image technology together, we now have a single set of tools to help us tackle all of these different problems. Some of those issues are listed below:
-
-- We first have to find software on the internet.
-- We then have to download this software.
-- Do we trust the source?
-- Do we then need a license? Which License?
-- Is it compatible with different platforms?
-- What is the package? binary? Executable? Package manager?
-- How do we configure the software?
-- Dependencies? Did the overall download have us covered or do we need them as well?
-- Dependencies of Dependencies?
-- How do we start the application?
-- How do we stop the application?
-- Will it auto-restart?
-- Start on boot?
-- Resource conflicts?
-- Conflicting libraries?
+We have had container technology for a long time, so why has it become so popular over the last 10 years or so, and even more so in the last 5? After all, we have had containers for decades. It comes down to the challenge of how we distribute our software with containers, and I should say images as well, because if we only have container technology, then we will still have many of the same problems we've had with software management.
+
+If we think about Docker as a tool, the reason it took off is the ecosystem of images that are easy to find and use: simple to get onto your systems and get up and running. A major part of this is consistency across the entire space for all the different challenges that we face with software. It doesn't matter if it's MongoDB or Node.js: the process to get either of those up and running is the same, and the process to stop either of those is the same. All of these issues will still exist, but the nice thing is that when we bring good container and image technology together, we have a single set of tools to help us tackle all of these different problems. Some of those issues are listed below:
+
+- We first have to find software on the internet.
+- We then have to download this software.
+- Do we trust the source?
+- Do we then need a license? Which license?
+- Is it compatible with different platforms?
+- What is the package? Binary? Executable? Package manager?
+- How do we configure the software?
+- Dependencies? Did the overall download have us covered or do we need them as well?
+- Dependencies of dependencies?
+- How do we start the application?
+- How do we stop the application?
+- Will it auto-restart?
+- Start on boot?
+- Resource conflicts?
+- Conflicting libraries?
- Port Conflicts
-- Security for the software?
-- Software updates?
-- How can I remove the software?
+- Security for the software?
+- Software updates?
+- How can I remove the software?

-We can split the above into 3 areas of the complexity of the software that containers and images do help with these.
+We can split the above into 3 areas of software complexity, and containers and images help with each of them.

-| Distribution | Installation | Operation |
-| ------------ | ------------ | ----------------- |
-| Find | Install | Start |
-| Download | Configuration| Security |
-| License | Uninstall | Ports |
-| Package | Dependencies | Resource Conflicts |
-| Trust | Platform | Auto-Restart |
-| Find | Libraries | Updates |
+| Distribution | Installation  | Operation          |
+| ------------ | ------------- | ------------------ |
+| Find         | Install       | Start              |
+| Download     | Configuration | Security           |
+| License      | Uninstall     | Ports              |
+| Package      | Dependencies  | Resource Conflicts |
+| Trust        | Platform      | Auto-Restart       |
+| Find         | Libraries     | Updates            |

-Containers and images are going to help us remove some of these challenges that we have with possibly other software and applications.
+Containers and images are going to help us remove some of these challenges that we have with software and applications.

-At a high level we could move installation and operation into the same list, Images are going to help us from a distribution point of view and containers help with the installation and operations.
+At a high level, we could move installation and operation into the same list: images help us from a distribution point of view, while containers help with installation and operation.

-Ok, probably sounds great and exciting but we still need to understand what is a container and now I have mentioned images so let's cover those areas next.
+Ok, this probably sounds great and exciting, but we still need to understand what a container is, and now that I have mentioned images, let's cover both of those areas next.
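To make the table concrete, here is a rough sketch of how the three columns map onto everyday docker commands. This assumes Docker is installed; `nginx:1.25` and the container name `web` are arbitrary examples, not part of the original walkthrough:

```shell
# Distribution: find, download, license, trust — collapsed into one addressable step
docker pull nginx:1.25

# Installation: configuration and dependencies travel inside the image
docker run -d --name web -p 8080:80 nginx:1.25

# Operation: the same verbs work for any piece of software
docker stop web      # stop
docker start web     # start (a restart policy such as --restart=always covers auto-restart)
docker rm -f web     # uninstall/remove
```

Whether the image is a web server or a database, these commands are identical, which is exactly the consistency point made above.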
-Another thing you might have seen a lot when we talk about Containers for software development is the analogy used alongside shipping containers, shipping containers are used to ship various goods across the seas using large vessels.
+Another thing you might have seen a lot when we talk about containers for software development is the analogy with shipping containers, which are used to ship various goods across the seas on large vessels.

![](Images/Day42_Containers5.png)

What does this have to do with our topic of containers? Think about the code that software developers write: how can we ship that particular code from one machine to another machine?

-If we think about what we touched on before about software distribution, installation and operations but now we start to build this out into an environment visual. We have hardware and an operating system where you will run multiple applications. For example, nodejs has certain dependencies and needs certain libraries. If you then want to install MySQL then it needs its required libraries and dependencies. Each software application will have its library and dependency. We might be massively lucky and not have any conflicts between any of our applications where specific libraries and dependencies are clashing causing issues but the more applications the more chance or risk of conflicts. However, this is not about that one deployment when everything fixes your software applications are going to be updated and then we can also introduce these conflicts.
+Think about what we touched on before around software distribution, installation and operations, but now let's build this out into an environment picture. We have hardware and an operating system where you will run multiple applications. For example, Node.js has certain dependencies and needs certain libraries, and if you then want to install MySQL, it needs its own required libraries and dependencies. Every software application has its own libraries and dependencies. We might be massively lucky and have no conflicts between any of our applications, but the more applications there are, the greater the risk of clashing libraries and dependencies. And it is not just about that one deployment where everything fits: your software applications are going to be updated, and updates can also introduce these conflicts.

![](Images/Day42_Containers6.png)

Containers can help solve this problem. Containers help you **build** your application, then **ship**, **deploy** and **scale** these applications with ease, independently. Let's look at the architecture: you will have hardware and an operating system, and on top of that a container engine, like Docker, which we will cover later. The container engine software creates containers that package the libraries and dependencies along with the application, so that you can move a container seamlessly from one machine to another without worrying about those libraries and dependencies, since they come as part of the package, which is nothing but the container. You can therefore move containers across systems without worrying about the underlying dependencies the application needs, because everything the application needs to run is packaged as
-a container that you can move.
+a container that you can move.

![](Images/Day42_Containers7.png)

-### The advantages of these containers
+### The advantages of these containers

- Containers help package all the dependencies within the container and
-isolate it.
+ isolate it.

-- It is easy to manage the containers
+- It is easy to manage containers.

-- The ability to move from one system to another.
+- The ability to move from one system to another.
-- Containers help package the software and you can easily ship it without any duplicate efforts
+- Containers help package the software, and you can easily ship it without any duplicated effort.

- Containers are easily scalable. Using containers you can scale independent containers and use a load balancer
-or a service which helps split the traffic and you can scale the applications horizontally. Containers offer a lot of flexibility and ease in how you manage your applications
+or a service which helps split the traffic, so you can scale the applications horizontally. Containers offer a lot of flexibility and ease in how you manage your applications.

-### What is a container?
+### What is a container?

-When we run applications on our computer, this could be the web browser or VScode that you are using to read this post. That application is running as a process or what is known as a process. On our laptops or systems, we tend to run multiple applications or as we said processes. When we open a new application or click on the application icon this is an application we would like to run, sometimes this application might be a service that we just want to run in the background, our operating system is full of services that are running in the background providing you with the user experience you get with your system.
+When we run applications on our computer, such as the web browser or VS Code that you are using to read this post, that application runs as a process. On our laptops or systems, we tend to run multiple applications, and therefore multiple processes. When we open a new application or click on its icon, that is an application we would like to run; sometimes an application is a service that we just want to run in the background. Our operating system is full of services running in the background, providing the user experience you get from your system.
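You can see the "containers are processes" idea for yourself. A minimal sketch, assuming Docker is installed; the `alpine` image and the container name `proc-demo` are arbitrary examples:

```shell
# Start a long-running container in the background
docker run -d --name proc-demo alpine sleep 300

# Ask Docker for the container's process list — sleep runs as PID 1 inside it
docker top proc-demo

# On a Linux host, that same sleep also appears in the host's process table,
# isolated only by namespaces and cgroups
ps -ef | grep "[s]leep 300"

# Clean up
docker rm -f proc-demo
```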
-That application icon represents a link to an executable somewhere on your file system, the operating system then loads that executable into memory. Interestingly, that executable is sometimes referred to as an image when we're talking about a process.
+That application icon represents a link to an executable somewhere on your file system; the operating system then loads that executable into memory. Interestingly, that executable is sometimes referred to as an image when we're talking about a process.

-Containers are processes, A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
+Containers are processes. A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

Containerised software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences, for instance, between development and staging.

-I mentioned images in the last section when it comes to how and why containers and images combined made containers popular in our ecosystem.
+I mentioned images in the last section when covering how and why containers and images combined to make containers popular in our ecosystem.

-### What is an Image?
+### What is an Image?

-A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime.
+A container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime.
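A short illustration of the image/container distinction, assuming Docker is installed; `hello-world` is just a convenient tiny image:

```shell
# An image is an inert package: layers plus metadata
docker pull hello-world
docker image inspect hello-world --format '{{.Os}}/{{.Architecture}}'

# A container is that image plus a running process
docker run hello-world

# The container has exited, but the image is unchanged and reusable
docker ps -a
docker image ls hello-world
```

You can run the same image many times; each `docker run` creates a fresh container from the identical starting point.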
-## Resources +## Resources - [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE) - [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI) - [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) - [Introduction to Container By Red Hat](https://www.redhat.com/en/topics/containers) -See you on [Day 43](day43.md) +See you on [Day 43](day43.md) diff --git a/Days/day43.md b/Days/day43.md index ef8e36bf4..78e6e88cb 100644 --- a/Days/day43.md +++ b/Days/day43.md @@ -1,25 +1,26 @@ --- -title: '#90DaysOfDevOps - What is Docker & Getting installed - Day 43' +title: "#90DaysOfDevOps - What is Docker & Getting installed - Day 43" published: false description: 90DaysOfDevOps - What is Docker & Getting installed -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048739 --- + ## What is Docker & Getting installed -In the previous post, I mentioned Docker at least once and that is because Docker is innovative in making containers popular even though they have been around for such a long time. +In the previous post, I mentioned Docker at least once and that is because Docker is innovative in making containers popular even though they have been around for such a long time. We are going to be using and explaining docker here but we should also mention the [Open Container Initiative (OCI)](https://www.opencontainers.org/) which is an industry standards organization that encourages innovation while avoiding the danger of vendor lock-in. Thanks to the OCI, we have a choice when choosing a container toolchain, including Docker, [CRI-O](https://cri-o.io/), [Podman](http://podman.io/), [LXC](https://linuxcontainers.org/), and others. Docker is a software framework for building, running, and managing containers. 
The term "docker" may refer to either the tools (the commands and a daemon) or the Dockerfile file format. -We are going to be using Docker Personal here which is free (for education and learning). This includes all the essentials that we need to cover to get a good foundation of knowledge of containers and tooling. +We are going to be using Docker Personal here which is free (for education and learning). This includes all the essentials that we need to cover to get a good foundation of knowledge of containers and tooling. -It is probably worth breaking down some of the "docker" tools that we will be using and what they are used for. The term docker can be referring to the docker project overall, which is a platform for devs and admins to develop, ship and run applications. It might also be a reference to the docker daemon process running on the host which manages images and containers also called Docker Engine. +It is probably worth breaking down some of the "docker" tools that we will be using and what they are used for. The term docker can be referring to the docker project overall, which is a platform for devs and admins to develop, ship and run applications. It might also be a reference to the docker daemon process running on the host which manages images and containers also called Docker Engine. -### Docker Engine +### Docker Engine Docker Engine is an open-source containerization technology for building and containerizing your applications. Docker Engine acts as a client-server application with: @@ -29,37 +30,39 @@ Docker Engine is an open-source containerization technology for building and con The above was taken from the official Docker documentation and the specific [Docker Engine Overview](https://docs.docker.com/engine/) -### Docker Desktop -We have a docker desktop for both Windows and macOS systems. An easy-to-install, lightweight docker development environment. 
A native OS application that leverages virtualisation capabilities on the host operating system. +### Docker Desktop + +We have a docker desktop for both Windows and macOS systems. An easy-to-install, lightweight docker development environment. A native OS application that leverages virtualisation capabilities on the host operating system. + +It’s the best solution if you want to build, debug, test, package, and ship Dockerized applications on Windows or macOS. -It’s the best solution if you want to build, debug, test, package, and ship Dockerized applications on Windows or macOS. +On Windows, we can also take advantage of WSL2 and Microsoft Hyper-V. We will cover some of the WSL2 benefits as we go through. -On Windows, we can also take advantage of WSL2 and Microsoft Hyper-V. We will cover some of the WSL2 benefits as we go through. +Because of the integration with hypervisor capabilities on the host operating system docker provides the ability to run your containers with Linux Operating systems. -Because of the integration with hypervisor capabilities on the host operating system docker provides the ability to run your containers with Linux Operating systems. +### Docker Compose -### Docker Compose -Docker compose is a tool that allows you to run more complex apps over multiple containers. With the benefit of being able to use a single file and command to spin up your application. +Docker compose is a tool that allows you to run more complex apps over multiple containers. With the benefit of being able to use a single file and command to spin up your application. -### Docker Hub -A centralised resource for working with Docker and its components. Most commonly known as a registry to host docker images. But there are a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning. +### Docker Hub -### Dockerfile +A centralised resource for working with Docker and its components. 
Most commonly known as a registry to host docker images. But there are a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning. -A dockerfile is a text file that contains commands you would normally execute manually to build a docker image. Docker can build images automatically by reading the instructions we have in our dockerfile. +### Dockerfile -## Installing Docker Desktop +A dockerfile is a text file that contains commands you would normally execute manually to build a docker image. Docker can build images automatically by reading the instructions we have in our dockerfile. -The [docker documenation](https://docs.docker.com/engine/install/) is amazing and if you are only just diving in then you should take a look and have a read-through. We will be using Docker Desktop on Windows with WSL2. I had already run through the installation on the machine we are using here. +## Installing Docker Desktop + +The [docker documenation](https://docs.docker.com/engine/install/) is amazing and if you are only just diving in then you should take a look and have a read-through. We will be using Docker Desktop on Windows with WSL2. I had already run through the installation on the machine we are using here. ![](Images/Day43_Containers1.png) Take note before you go ahead and install at the system requirements, [Install Docker Desktop on Windows](https://docs.docker.com/desktop/windows/install/) if you are using macOS including the M1-based CPU architecture you can also take a look at [Install Docker Desktop on macOS](https://docs.docker.com/desktop/mac/install/) -I will run through the Docker Desktop installation for Windows on another Windows Machine and log the process down below. - +I will run through the Docker Desktop installation for Windows on another Windows Machine and log the process down below. 
-## Resources
+## Resources

- [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE)
- [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI)
diff --git a/Days/day44.md b/Days/day44.md
index 66b3de418..aaf370b63 100644
--- a/Days/day44.md
+++ b/Days/day44.md
@@ -1,51 +1,52 @@
---
-title: '#90DaysOfDevOps - Docker Images & Hands-On with Docker Desktop - Day 44'
+title: "#90DaysOfDevOps - Docker Images & Hands-On with Docker Desktop - Day 44"
published: false
description: 90DaysOfDevOps - Docker Images & Hands-On with Docker Desktop
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048708
---
+
## Docker Images & Hands-On with Docker Desktop

We now have Docker Desktop installed on our system. (If you are running Linux, you still have options, just without the GUI; Docker does work on Linux.) [Install Docker Engine on Ubuntu](https://docs.docker.com/engine/install/ubuntu/) (Other distributions are also available.)

In this post, we are going to get started with deploying some images into our environment. A recap on what a Docker image is: a Docker image is a file used to execute code in a Docker container. Docker images act as a set of instructions to build a Docker container, like a template. Docker images also act as the starting point when using Docker.

-Now is a good time to go and create your account on [DockerHub](https://hub.docker.com/)
+Now is a good time to go and create your account on [DockerHub](https://hub.docker.com/).

![](Images/Day44_Containers1.png)

DockerHub is a centralised resource for working with Docker and its components. Most commonly known as a registry to host docker images. But there are a lot of additional services here which can be used in part with automation or integrated into GitHub as well as security scanning.
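You can also reach Docker Hub straight from the command line; a small sketch, with `mysql` used purely as an example search term:

```shell
# Search Docker Hub; the OFFICIAL column marks Docker Official Images
docker search mysql --limit 5

# Pull an image; with no registry prefix, Docker Hub is the default registry
docker pull mysql:8.0
```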
-If you scroll down once logged in you are going to see a list of container images, You might see database images for MySQL, hello-world etc. Think of these as great baseline images or you might just need a database image and you are best to use the official one which means you don't need to create your own. +If you scroll down once logged in you are going to see a list of container images, You might see database images for MySQL, hello-world etc. Think of these as great baseline images or you might just need a database image and you are best to use the official one which means you don't need to create your own. ![](Images/Day44_Containers2.png) -We can drill deeper into the view of available images and search across categories, operating systems and architectures. The one thing I highlight below is the Official Image, this should give you peace of mind about the origin of this container image. +We can drill deeper into the view of available images and search across categories, operating systems and architectures. The one thing I highlight below is the Official Image, this should give you peace of mind about the origin of this container image. ![](Images/Day44_Containers3.png) -We can also search for a specific image, for example, WordPress might be a good base image that we want we can do that at the top and find all container images related to WordPress. Below are notices that we also have verified publisher. +We can also search for a specific image, for example, WordPress might be a good base image that we want we can do that at the top and find all container images related to WordPress. Below are notices that we also have verified publisher. -- Official Image - Docker Official images are a curated set of Docker open source and "drop-in" solution repositories. +- Official Image - Docker Official images are a curated set of Docker open source and "drop-in" solution repositories. -- Verified Publisher - High-quality Docker content from verified publishers. 
These products are published and maintained directly by a commercial entity. +- Verified Publisher - High-quality Docker content from verified publishers. These products are published and maintained directly by a commercial entity. ![](Images/Day44_Containers4.png) -### Exploring Docker Desktop +### Exploring Docker Desktop -We have Docker Desktop installed on our system and if you open this I expect unless you had this already installed you will see something similar to the image below. As you can see we have no containers running and our docker engine is running. +We have Docker Desktop installed on our system and if you open this I expect unless you had this already installed you will see something similar to the image below. As you can see we have no containers running and our docker engine is running. ![](Images/Day44_Containers5.png) -Because this was not a fresh install for me, I do have some images already downloaded and available on my system. You will likely see nothing in here. +Because this was not a fresh install for me, I do have some images already downloaded and available on my system. You will likely see nothing in here. ![](Images/Day44_Containers6.png) -Under remote repositories, this is where you will find any container images you have stored in your docker hub. You can see from the below I do not have any images. +Under remote repositories, this is where you will find any container images you have stored in your docker hub. You can see from the below I do not have any images. ![](Images/Day44_Containers7.png) @@ -53,59 +54,59 @@ We can also clarify this on our dockerhub site and confirm that we have no repos ![](Images/Day44_Containers8.png) -Next, we have the Volumes tab, If you have containers that require persistence then this is where we can add these volumes to your local file system or a shared file system. 
+Next, we have the Volumes tab. If you have containers that require persistence, this is where we can add volumes on your local file system or a shared file system.

![](Images/Day44_Containers9.png)

-At the time of writing, there is also a Dev Environments tab, this is going to help you collaborate with your team instead of moving between different git branches. We won't be covering this.
+At the time of writing, there is also a Dev Environments tab; this is going to help you collaborate with your team instead of moving between different git branches. We won't be covering this.

![](Images/Day44_Containers10.png)

-Going back to the first tab you can see that there is a command we can run which is a getting started container. Let's run `docker run -d -p 80:80 docker/getting-started` in our terminal.
+Going back to the first tab, you can see that there is a command we can run which launches a getting-started container. Let's run `docker run -d -p 80:80 docker/getting-started` in our terminal.

![](Images/Day44_Containers11.png)

-If we go and check our docker desktop window again, we are going to see that we have a running container.
+If we go and check our Docker Desktop window again, we are going to see that we have a running container.

![](Images/Day44_Containers12.png)

-You might have noticed that I am using WSL2 and for you to be able to use that you will need to make sure this is enabled in the settings.
+You might have noticed that I am using WSL2; for you to be able to use it, you will need to make sure it is enabled in the settings.

![](Images/Day44_Containers13.png)

-If we now go and check our Images tab again, you should now see an in-use image called docker/getting-started.
+If we now go and check our Images tab again, you should see an in-use image called docker/getting-started.

![](Images/Day44_Containers14.png)

-Back to the Containers/Apps tab, click on your running container. You are going to see the logs by default and along the top, you have some options to choose from, in our case I am pretty confident that this is going to be a web page running in this container so we are going to choose the open in the browser.
+Back on the Containers/Apps tab, click on your running container. You are going to see the logs by default, and along the top you have some options to choose from. In our case I am pretty confident that this is a web page running in this container, so we are going to choose to open it in the browser.

![](Images/Day44_Containers15.png)

-When we hit that button above sure enough a web page should open hitting your localhost and display something similar to below.
+When we hit that button, sure enough, a web page should open against your localhost and display something similar to the below.

-This container also has some more detail on our containers and images.
+This container also serves some more detail on our containers and images.

![](Images/Day44_Containers16.png)

-We have now run our first container. Nothing too scary just yet. What about if we wanted to pull one of the container images down from DockerHub? Maybe there is a `hello world` docker container we could use.
+We have now run our first container. Nothing too scary just yet. What if we wanted to pull one of the container images down from DockerHub? Maybe there is a `hello world` docker container we could use.

-I went ahead and stopped the getting started container not that it's taking up any mass amount of resources but for tidiness, as we walk through some more steps.
+I went ahead and stopped the getting-started container, not that it was taking up any great amount of resources, but for tidiness as we walk through some more steps.

-Back in our terminal let's go ahead and run `docker run hello-world` and see what happens.
+Back in our terminal, let's go ahead and run `docker run hello-world` and see what happens.
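For reference, on a first run with no local copy of the image, the output looks roughly like this (abridged):

```shell
docker run hello-world
# Output (abridged):
#   Unable to find image 'hello-world:latest' locally
#   latest: Pulling from library/hello-world
#   ...
#   Hello from Docker!
#   This message shows that your installation appears to be working correctly.
```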
-You can see we did not have the image locally so we pulled that down and then we got a message that is written into the container image with some information on what it did to get up and running and some links to reference points. +You can see we did not have the image locally so we pulled it down, and then we got a message that is written into the container image with some information on what it did to get up and running and some links to reference points. ![](Images/Day44_Containers17.png) -However, if we go and look in Docker Desktop now we have no running containers but we do have an exited container that used the hello-world message, meaning it came up, delivered the message and then is terminated. +However, if we go and look in Docker Desktop now we have no running containers, but we do have an exited container that ran the hello-world image, meaning it came up, delivered the message and then terminated. ![](Images/Day44_Containers18.png) -And for the last time, let's just go and check the images tab and see that we have a new hello-world image locally on our system, meaning that if we run the `docker run hello-world` command again in our terminal we would not have to pull anything unless a version changes. +And for the last time, let's just go and check the Images tab and see that we have a new hello-world image locally on our system, meaning that if we run the `docker run hello-world` command again in our terminal we would not have to pull anything unless a version changes. ![](Images/Day44_Containers19.png) -The message from the hello-world container set down the challenge of running something a little more ambitious. +The message from the hello-world container set down the challenge of running something a little more ambitious. Challenge Accepted! @@ -113,29 +114,29 @@ Challenge Accepted! In running `docker run -it ubuntu bash` in our terminal we are going to run a containerised version of Ubuntu well not a full copy of the Operating system. 
You can find out more about this particular image on [DockerHub](https://hub.docker.com/_/ubuntu) -You can see below when we run the command we now have an interactive prompt (`-it`) and we have a bash shell into our container. +You can see below when we run the command we now have an interactive prompt (`-it`) and we have a bash shell into our container. ![](Images/Day44_Containers21.png) -We have a bash shell but we don't have much more which is why this container image is less than 30MB. +We have a bash shell but we don't have much more which is why this container image is less than 30MB. ![](Images/Day44_Containers22.png) -But we can still use this image and we can still install software using our apt package manager, we can update our container image and upgrade also. +But we can still use this image and we can still install software using our apt package manager, we can update our container image and upgrade also. ![](Images/Day44_Containers23.png) -Or maybe we want to install some software into our container, I have chosen a really bad example here as pinta is an image editor and it's over 200MB but hopefully you get where I am going with this. This would increase the size of our container considerably but still, we are going to be in the MB and not in the GB. +Or maybe we want to install some software into our container, I have chosen a really bad example here as pinta is an image editor and it's over 200MB but hopefully you get where I am going with this. This would increase the size of our container considerably but still, we are going to be in the MB and not in the GB. ![](Images/Day44_Containers24.png) -I wanted that to hopefully give you an overview of Docker Desktop and the not-so-scary world of containers when you break it down with simple use cases, we do need to cover some networking, security and other options we have vs just downloading container images and using them like this. 
By the end of the section, we want to have made something and uploaded it to our DockerHub repository and be able to deploy it. +I wanted that to hopefully give you an overview of Docker Desktop and the not-so-scary world of containers when you break it down with simple use cases, we do need to cover some networking, security and other options we have vs just downloading container images and using them like this. By the end of the section, we want to have made something and uploaded it to our DockerHub repository and be able to deploy it. -## Resources +## Resources - [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE) - [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI) - [Docker Tutorial for Beginners - What is Docker? Introduction to Containers](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) - [WSL 2 with Docker getting started](https://www.youtube.com/watch?v=5RQbdMn04Oc) -See you on [Day 45](day45.md) +See you on [Day 45](day45.md) diff --git a/Days/day46.md b/Days/day46.md index 71dced497..feb24fd25 100644 --- a/Days/day46.md +++ b/Days/day46.md @@ -1,54 +1,56 @@ --- -title: '#90DaysOfDevOps - Docker Compose - Day 46' +title: "#90DaysOfDevOps - Docker Compose - Day 46" published: false description: 90DaysOfDevOps - Docker Compose -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048740 --- -## Docker Compose -The ability to run one container could be great if you have a self-contained image that has everything you need for your single use case, where things get interesting is when you are looking to build multiple applications between different container images. For example, if I had a website front end but required a backend database I could put everything in one container but better and more efficient would be to have its container for the database. 
+## Docker Compose + +The ability to run one container could be great if you have a self-contained image that has everything you need for your single use case; where things get interesting is when you are looking to build multiple applications between different container images. For example, if I had a website front end but required a backend database, I could put everything in one container, but better and more efficient would be to have its own container for the database. This is where Docker Compose comes in, a tool that allows you to run more complex apps over multiple containers, with the benefit of being able to use a single file and command to spin up your application. The example I am going to walk through in this post is from the [Docker QuickStart sample apps (Quickstart: Compose and WordPress)](https://docs.docker.com/samples/wordpress/). -In this first example we are going to: +In this first example we are going to: -- Use Docker compose to bring up WordPress and a separate MySQL instance. +- Use Docker Compose to bring up WordPress and a separate MySQL instance. - Use a YAML file which will be called `docker-compose.yml` -- Build the project +- Build the project - Configure WordPress via a Browser - Shutdown and Clean up -### Install Docker Compose -As mentioned Docker Compose is a tool, If you are on macOS or Windows then compose is included in your Docker Desktop installation. However, you might be wanting to run your containers on a Windows server host or Linux server and in which case you can install using these instructions [Install Docker Compose](https://docs.docker.com/compose/install/) +### Install Docker Compose + +As mentioned, Docker Compose is a tool. If you are on macOS or Windows then Compose is included in your Docker Desktop installation. 
However, you might want to run your containers on a Windows Server host or Linux server, in which case you can install using these instructions [Install Docker Compose](https://docs.docker.com/compose/install/) -To confirm we have `docker-compose` installed on our system we can open a terminal and simply type the above command. +To confirm we have `docker-compose` installed on our system, we can open a terminal and simply type that command. ![](Images/Day46_Containers1.png) ### Docker-Compose.yml (YAML) -The next thing to talk about is the docker-compose.yml which you can find in the container folder of the repository. But more importantly, we need to discuss YAML, in general, a little. +The next thing to talk about is the docker-compose.yml file, which you can find in the container folder of the repository. But more importantly, we need to discuss YAML, in general, a little. -YAML could almost have its session as you are going to find it in so many different places. But for the most part +YAML could almost have its own session as you are going to find it in so many different places. But for the most part: "YAML is a human-friendly data serialization language for all programming languages." -It is commonly used for configuration files and in some applications where data is being stored or transmitted. You have no doubt come across XML files that tend to offer that same configuration file. YAML provides a minimal syntax but is aimed at those same use cases. +It is commonly used for configuration files and in some applications where data is being stored or transmitted. You have no doubt come across XML files that serve that same configuration purpose. YAML provides a minimal syntax but is aimed at those same use cases. YAML Ain't Markup Language (YAML) is a serialisation language that has steadily increased in popularity over the last few years. The object serialisation abilities make it a viable replacement for languages like JSON. 
The YAML acronym was shorthand for Yet Another Markup Language. But the maintainers renamed it to YAML Ain't Markup Language to place more emphasis on its data-oriented features. -Anyway, back to the docker-compose.yml file. This is a configuration file of what we want to do when it comes to multiple containers being deployed on our single system. +Anyway, back to the docker-compose.yml file. This is a configuration file of what we want to do when it comes to multiple containers being deployed on our single system. -Straight from the tutorial linked above you can see the contents of the file looks like this: +Straight from the tutorial linked above you can see the contents of the file looks like this: ``` version: "3.9" - + services: DB: image: mysql:5.7 @@ -60,7 +62,7 @@ services: MYSQL_DATABASE: wordpress MYSQL_USER: wordpress MYSQL_PASSWORD: wordpress - + wordpress: depends_on: - db @@ -80,95 +82,95 @@ volumes: wordpress_data: {} ``` -We declare a version and then a large part of this docker-compose.yml file is made up of our services, we have a DB service and a WordPress service. You can see each of those has an image defined with a version tag associated. We are now also introducing state into our configuration unlike our first walkthroughs, but now we are going to create volumes so we can store our databases there. +We declare a version and then a large part of this docker-compose.yml file is made up of our services, we have a DB service and a WordPress service. You can see each of those has an image defined with a version tag associated. We are now also introducing state into our configuration unlike our first walkthroughs, but now we are going to create volumes so we can store our databases there. -We then have some environmental variables such as passwords and usernames. These files can get very complicated but the YAML configuration file simplifies what these look like overall. +We then have some environmental variables such as passwords and usernames. 
These files can get very complicated but the YAML configuration file simplifies what these look like overall. -### Build the project +### Build the project -Next up we can head back into our terminal and we can use some commands with our docker-compose tool. Navigate to your directory, where your docker-compose.yml file is located. +Next up we can head back into our terminal and we can use some commands with our docker-compose tool. Navigate to your directory, where your docker-compose.yml file is located. -From the terminal, we can simply run `docker-compose up -d` this will start the process of pulling those images and standing up your multi-container application. +From the terminal, we can simply run `docker-compose up -d` this will start the process of pulling those images and standing up your multi-container application. The `-d` in this command means detached mode, which means that the Run command is or will be in the background. ![](Images/Day46_Containers2.png) -If we now run the `docker ps` command, you can see we have 2 containers running, one being WordPress and the other being MySQL. +If we now run the `docker ps` command, you can see we have 2 containers running, one being WordPress and the other being MySQL. ![](Images/Day46_Containers3.png) -Next, we can validate that we have WordPress up and running by opening a browser and going to `http://localhost:8000` and you should see the WordPress set-up page. +Next, we can validate that we have WordPress up and running by opening a browser and going to `http://localhost:8000` and you should see the WordPress set-up page. ![](Images/Day46_Containers4.png) -We can run through the setup of WordPress, and then we can start building our website as we see fit in the console below. +We can run through the setup of WordPress, and then we can start building our website as we see fit in the console below. 
![](Images/Day46_Containers5.png) -If we then open a new tab and navigate to that same address we did before `http://localhost:8000` we will now see a simple default theme with our site title "90DaysOfDevOps" and then a sample post. +If we then open a new tab and navigate to that same address as before, `http://localhost:8000`, we will now see a simple default theme with our site title "90DaysOfDevOps" and then a sample post. ![](Images/Day46_Containers6.png) -Before we make any changes, open Docker Desktop and navigate to the volumes tab and here you will see two volumes associated with our containers, one for WordPress and one for DB. +Before we make any changes, open Docker Desktop and navigate to the Volumes tab, and here you will see two volumes associated with our containers, one for WordPress and one for DB. ![](Images/Day46_Containers7.png) -My Current wordpress theme is "Twenty Twenty-Two" and I want to change this to "Twenty Twenty" Back in the dashboard we can make those changes. +My current WordPress theme is "Twenty Twenty-Two" and I want to change this to "Twenty Twenty". Back in the dashboard, we can make those changes. ![](Images/Day46_Containers8.png) -I am also going to add a new post to my site, and here below you see the latest version of our new site. +I am also going to add a new post to my site, and here below you see the latest version of our new site. ![](Images/Day46_Containers9.png) ### Clean Up or not -If we were now to use the command `docker-compose down` this would bring down our containers. But will leave our volumes in place. +If we were now to use the command `docker-compose down`, this would bring down our containers but will leave our volumes in place. ![](Images/Day46_Containers10.png) -We can just confirm in Docker Desktop that our volumes are still there though. +We can just confirm in Docker Desktop that our volumes are still there though. 
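The persistence we are seeing here comes from the named volumes declared in the compose file; because they are top-level resources, `docker-compose down` leaves them behind until you explicitly remove them. A trimmed sketch of the relevant stanzas (the mount path follows the Docker WordPress sample, so worth double-checking against your own file):

```yaml
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql # database files live in the named volume, not the container's writable layer

volumes:
  db_data: {} # a top-level named volume, so it outlives the containers that mount it
```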
![](Images/Day46_Containers11.png) -If we then want to bring things back up then we can issue the `docker up -d` command from within the same directory and we have our application back up and running. +If we then want to bring things back up, we can issue the `docker-compose up -d` command from within the same directory and we have our application back up and running. ![](Images/Day46_Containers12.png) -We then navigate in our browser to that same address of `http://localhost:8000` and notice that our new post and our theme change are all still in place. +We then navigate in our browser to that same address of `http://localhost:8000` and notice that our new post and our theme change are all still in place. ![](Images/Day46_Containers13.png) -If we want to get rid of the containers and those volumes then issuing the `docker-compose down --volumes` will also destroy the volumes. +If we want to get rid of the containers and those volumes, then issuing `docker-compose down --volumes` will also destroy the volumes. ![](Images/Day46_Containers14.png) -Now when we use `docker-compose up -d` again we will be starting, however, the images will still be local on our system so you won't need to re-pull them from the DockerHub repository. +Now when we use `docker-compose up -d` again we will be starting fresh; however, the images will still be local on our system so you won't need to re-pull them from the DockerHub repository. -I know that when I started diving into docker-compose and its capabilities I was then confused as to where this sits alongside or with Container Orchestration tools such as Kubernetes, well everything we have done here in this short demo is focused on one host we have WordPress and DB running on the local desktop machine. We don't have multiple virtual machines or multiple physical machines, we also can't easily scale up and down the requirements of our application. 
+I know that when I started diving into docker-compose and its capabilities I was confused as to where this sits alongside or with Container Orchestration tools such as Kubernetes. Well, everything we have done here in this short demo is focused on one host: we have WordPress and DB running on the local desktop machine. We don't have multiple virtual machines or multiple physical machines, and we also can't easily scale the requirements of our application up and down. -Our next section is going to cover Kubernetes but we have a few more days of Containers in general first. +Our next section is going to cover Kubernetes but we have a few more days of Containers in general first. This is also a great resource for samples of docker-compose applications with multiple integrations. [Awesome-Compose](https://github.com/docker/awesome-compose) -In the above repository, there is a great example which will deploy an Elasticsearch, Logstash, and Kibana (ELK) in single-node. +In the above repository, there is a great example which will deploy an Elasticsearch, Logstash, and Kibana (ELK) stack on a single node. -I have uploaded the files to the [Containers folder](/Days/Containers/elasticsearch-logstash-kibana/) When you have this folder locally, navigate there and you can simply use `docker-compose up -d` +I have uploaded the files to the [Containers folder](/Days/Containers/elasticsearch-logstash-kibana/). When you have this folder locally, navigate there and you can simply use `docker-compose up -d` ![](Images/Day46_Containers15.png) -We can then check we have those running containers with `docker ps` +We can then check we have those running containers with `docker ps` ![](Images/Day46_Containers16.png) -Now we can open a browser for each of the containers: +Now we can open a browser for each of the containers: ![](Images/Day46_Containers17.png) -To remove everything we can use the `docker-compose down` command. +To remove everything we can use the `docker-compose down` command. 
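For a flavour of what a multi-service compose file like that looks like, here is a heavily trimmed, hypothetical single-node sketch in the same spirit (the image tags and settings are my assumptions, not a copy of the Awesome-Compose file):

```yaml
services:
  elasticsearch:
    image: elasticsearch:7.16.1
    environment:
      - discovery.type=single-node # run a single node without cluster bootstrap checks
    ports:
      - "9200:9200" # Elasticsearch REST API
  kibana:
    image: kibana:7.16.1
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601" # Kibana web UI
```

The same `docker-compose up -d` and `docker-compose down` cycle we used above applies to a file like this unchanged.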
-## Resources +## Resources - [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE) - [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI) diff --git a/Days/day47.md b/Days/day47.md index 63dec43dc..0c8a9e207 100644 --- a/Days/day47.md +++ b/Days/day47.md @@ -1,39 +1,40 @@ --- -title: '#90DaysOfDevOps - Docker Networking & Security - Day 47' +title: "#90DaysOfDevOps - Docker Networking & Security - Day 47" published: false description: 90DaysOfDevOps - Docker Networking & Security -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049078 --- + ## Docker Networking & Security -During this container session so far we have made things happen but we have not looked at how things have worked behind the scenes either from a networking point of view also we have not touched on security, that is the plan for this session. +During this container session so far we have made things happen, but we have not looked at how things work behind the scenes from a networking point of view, and we have not touched on security either. That is the plan for this session. -### Docker Networking Basics +### Docker Networking Basics -Open a terminal, and type the command `docker network` this is the main command for configuring and managing container networks. +Open a terminal and type the command `docker network`; this is the main command for configuring and managing container networks. -From the below, you can see this is how we can use the command, and all of the sub-commands available. We can create new networks, list existing ones, and inspect and remove networks. +From the below, you can see how we can use the command, and all of the sub-commands available. We can create new networks, list existing ones, and inspect and remove networks. 
![](Images/Day47_Containers1.png) -Let's take a look at the existing networks we have since our installation, so the out-of-box Docker networking looks like using the `docker network list` command. +Let's take a look at the existing networks we have from our installation, to see what out-of-the-box Docker networking looks like, using the `docker network list` command. Each network gets a unique ID and NAME. Each network is also associated with a single driver. Notice that the "bridge" network and the "host" network have the same name as their respective drivers. ![](Images/Day47_Containers2.png) -Next, we can take a deeper look into our networks with the `docker network inspect` command. +Next, we can take a deeper look into our networks with the `docker network inspect` command. -With me running `docker network inspect bridge` I can get all the configuration details of that specific network name. This includes name, ID, drivers, connected containers and as you can see quite a lot more. +By running `docker network inspect bridge`, I can get all the configuration details of that specific network. This includes the name, ID, driver, connected containers and, as you can see, quite a lot more. ![](Images/Day47_Containers3.png) -### Docker: Bridge Networking +### Docker: Bridge Networking -As you have seen above a standard installation of Docker Desktop gives us a pre-built network called `bridge` If you look back up to the `docker network list` command, you will see that the network called bridge is associated with the `bridge` driver. Just because they have the same name doesn't they are the same thing. Connected but not the same thing. +As you have seen above, a standard installation of Docker Desktop gives us a pre-built network called `bridge`. If you look back up to the `docker network list` command, you will see that the network called bridge is associated with the `bridge` driver. Just because they have the same name doesn't mean they are the same thing. 
Connected but not the same thing. The output above also shows that the bridge network is scoped locally. This means that the network only exists on this Docker host. This is true of all networks using the bridge driver - the bridge driver provides single-host networking. @@ -41,27 +42,27 @@ All networks created with the bridge driver are based on a Linux bridge (a.k.a. ### Connect a Container -By default the bridge network is assigned to new containers, meaning unless you specify a network all containers will be connected to the bridge network. +By default the bridge network is assigned to new containers, meaning unless you specify a network all containers will be connected to the bridge network. Let's create a new container with the command `docker run -dt ubuntu sleep infinity` -The sleep command above is just going to keep the container running in the background so we can mess around with it. +The sleep command above is just going to keep the container running in the background so we can mess around with it. ![](Images/Day47_Containers4.png) -If we then check our bridge network with `docker network inspect bridge` you will see that we have a container matching what we have just deployed because we did not specify a network. +If we then check our bridge network with `docker network inspect bridge` you will see that we have a container matching what we have just deployed because we did not specify a network. ![](Images/Day47_Containers5.png) -We can also dive into the container using `docker exec -it 3a99af449ca2 bash` you will have to use `docker ps` to get your container ID. +We can also dive into the container using `docker exec -it 3a99af449ca2 bash` you will have to use `docker ps` to get your container ID. From here our image doesn't have anything to ping so we need to run the following command.`apt-get update && apt-get install -y iputils-ping` then ping an external interfacing address. 
`ping -c5 www.90daysofdevops.com` ![](Images/Day47_Containers6.png) -To clear this up we can run `docker stop 3a99af449ca2` again and use `docker ps` to find your container ID but this will remove our container. +To clean this up we can run `docker stop 3a99af449ca2`, again using `docker ps` to find your container ID; this will stop our container. -### Configure NAT for external connectivity +### Configure NAT for external connectivity In this step, we'll start a new NGINX container and map port 8080 on the Docker host to port 80 inside of the container. This means that traffic that hits the Docker host on port 8080 will be passed on to port 80 inside the container. @@ -75,27 +76,27 @@ Review the container status and port mappings by running `docker ps` The top line shows the new web1 container running NGINX. Take note of the command the container is running as well as the port mapping - `0.0.0.0:8080->80/tcp` maps port 8080 on all host interfaces to port 80 inside the web1 container. This port mapping is what effectively makes the container's web service accessible from external sources (via the Docker hosts IP address on port 8080). -Now we need our IP address for our actual host, we can do this by going into our WSL terminal and using the `IP addr` command. +Now we need the IP address of our actual host; we can get this by going into our WSL terminal and using the `ip addr` command. ![](Images/Day47_Containers9.png) -Then we can take this IP and open a browser and head to `http://172.25.218.154:8080/` Your IP might be different. This confirms that NGINX is accessible. +Then we can take this IP, open a browser and head to `http://172.25.218.154:8080/` (your IP might be different). This confirms that NGINX is accessible. ![](Images/Day47_Containers10.png) I have taken these instructions from this site from way back in 2017 DockerCon but they are still relevant today. 
However, the rest of the walkthrough goes into Docker Swarm and I am not going to be looking into that here. [Docker Networking - DockerCon 2017](https://github.com/docker/labs/tree/master/dockercon-us-2017/docker-networking) -### Securing your containers +### Securing your containers -Containers provide a secure environment for your workloads vs a full server configuration. They offer the ability to break up your applications into much smaller, loosely coupled components each isolated from one another which helps reduce the attack surface overall. +Containers provide a secure environment for your workloads vs a full server configuration. They offer the ability to break up your applications into much smaller, loosely coupled components each isolated from one another which helps reduce the attack surface overall. -But they are not immune from hackers that are looking to exploit systems. We still need to understand the security pitfalls of the technology and maintain best practices. +But they are not immune from hackers that are looking to exploit systems. We still need to understand the security pitfalls of the technology and maintain best practices. -### Move away from root permission +### Move away from root permission -All of the containers we have deployed have been using the root permission to the process within your containers. This means they have full administrative access to your container and host environments. Now to walk through we knew these systems were not going to be up and running for long. But you saw how easy it was to get up and running. +All of the containers we have deployed have been using the root permission to the process within your containers. This means they have full administrative access to your container and host environments. Now to walk through we knew these systems were not going to be up and running for long. But you saw how easy it was to get up and running. 
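One quick way to see the difference is to ask for a non-root user at run time rather than build time; a hypothetical compose fragment (the UID/GID values are placeholders, not from the walkthrough):

```yaml
services:
  app:
    image: ubuntu
    user: "1000:1000" # run the container process as this non-root UID:GID instead of root
    command: sleep infinity # keep the container alive so we can inspect it
```

Running `docker-compose exec app id -u` against this should print `1000` rather than `0`.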
-We can add a few steps to our process to enable non-root users to be our preferred best practice. When creating our dockerfile we can create user accounts. You can find this example also in the containers folder in the repository. +We can add a few steps to our process to enable non-root users to be our preferred best practice. When creating our dockerfile we can create user accounts. You can find this example also in the containers folder in the repository. ``` # Use the official Ubuntu 18.04 as base @@ -111,21 +112,21 @@ However, this method doesn’t address the underlying security flaw of the image ### Private Registry -Another area we have used heavily in public registries in DockerHub, with a private registry of container images set up by your organisation means that you can host where you wish or there are managed services for this as well, but all in all, this gives you complete control of the images available for you and your team. +Another area we have used heavily in public registries in DockerHub, with a private registry of container images set up by your organisation means that you can host where you wish or there are managed services for this as well, but all in all, this gives you complete control of the images available for you and your team. -DockerHub is great to give you a baseline, but it's only going to be providing you with a basic service where you have to put a lot of trust into the image publisher. +DockerHub is great to give you a baseline, but it's only going to be providing you with a basic service where you have to put a lot of trust into the image publisher. -### Lean & Clean +### Lean & Clean -Have mentioned this throughout, although not related to security. But the size of your container can also affect security in terms of attack surface if you have resources you do not use in your application then you do not need them in your container. +Have mentioned this throughout, although not related to security. 
But the size of your container can also affect security in terms of attack surface: if you have resources you do not use in your application, then you do not need them in your container. -This is also my major concern with pulling the `latest` images because that can bring a lot of bloat to your images as well. DockerHub does show the compressed size for each of the images in a repository. +This is also my major concern with pulling the `latest` images because that can bring a lot of bloat to your images as well. DockerHub does show the compressed size for each of the images in a repository. -Checking `docker image` is a great command to see the size of your images. +Running `docker images` is a great way to see the size of your images. ![](Images/Day47_Containers11.png) -## Resources +## Resources - [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE) - [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI) diff --git a/Days/day48.md b/Days/day48.md index 692decb83..1df14b12b 100644 --- a/Days/day48.md +++ b/Days/day48.md @@ -1,65 +1,66 @@ --- -title: '#90DaysOfDevOps - Alternatives to Docker - Day 48' +title: "#90DaysOfDevOps - Alternatives to Docker - Day 48" published: false description: 90DaysOfDevOps - Alternatives to Docker -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048807 --- + ## Alternatives to Docker -I did say at the very beginning of this section that we were going to be using Docker, simply because resource wise there is so much and the community is very big, but also this was really where the indents to making containers popular came from. I would encourage you to go and watch some of the history around Docker and how it came to be, I found it very useful. 
+I did say at the very beginning of this section that we were going to be using Docker, simply because resource-wise there is so much available and the community is very big, but also this was really where the impetus for making containers popular came from. I would encourage you to go and watch some of the history around Docker and how it came to be, I found it very useful.
But as I have alluded to there are other alternatives to Docker. If we think about what Docker is and what we have covered, it is a platform for developing, testing, deploying, and managing applications.
-I want to highlight a few alternatives to Docker that you might or will in the future see out in the wild.
+I want to highlight a few alternatives to Docker that you might, or will in the future, see out in the wild.
### Podman
-What is Podman? Podman is a daemon-less container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode.
+What is Podman? Podman is a daemon-less container engine for developing, managing, and running OCI Containers on your Linux system. Containers can either be run as root or in rootless mode.
-I am going to be looking at this from a Windows point of view but know that like Docker there is no requirement for virtualisation there as it will use the underlying OS which is cannot do in the Windows world.
+I am going to be looking at this from a Windows point of view, but know that on Linux there is no requirement for virtualisation as Podman will use the underlying OS, which it cannot do in the Windows world.
-Podman can be run under WSL2 although not as sleek as the experience with Docker Desktop. There is also a Windows remote client where you can connect to a Linux VM where your containers will run.
+Podman can be run under WSL2, although not as sleek as the experience with Docker Desktop. There is also a Windows remote client where you can connect to a Linux VM where your containers will run.
-My Ubuntu on WSL2 is the 20.04 release. Following the next steps will enable you to install Podman on your WSL instance. +My Ubuntu on WSL2 is the 20.04 release. Following the next steps will enable you to install Podman on your WSL instance. -``` +```Shell echo "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ /" | sudo tee /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list ``` -Add the GPG Key +Add the GPG Key -``` +```Shell curl -L "https://download.opensuse.org/repositories/devel:/kubic:\ /libcontainers:/stable/xUbuntu_20.04/Release.key" | sudo apt-key add - ``` -Run a system update and upgrade with the `sudo apt-get update && sudo apt-get upgrade` command. Finally, we can install podman using `sudo apt install podman` +Run a system update and upgrade with the `sudo apt-get update && sudo apt-get upgrade` command. Finally, we can install podman using `sudo apt install podman` -We can now use a lot of the same commands we have been using for docker, note that we do not have that nice docker desktop UI. You can see below I used `podman images` and I have nothing after installation then I used `podman pull ubuntu` to pull down the ubuntu container image. +We can now use a lot of the same commands we have been using for docker, note that we do not have that nice docker desktop UI. You can see below I used `podman images` and I have nothing after installation then I used `podman pull ubuntu` to pull down the ubuntu container image. ![](Images/Day48_Containers1.png) -We can then run our Ubuntu image using `podman run -dit ubuntu` and `podman ps` to see our running image. +We can then run our Ubuntu image using `podman run -dit ubuntu` and `podman ps` to see our running image. ![](Images/Day48_Containers2.png) -To then get into that container we can run `podman attach dazzling_darwin` your container name will most likely be different. 
+To then get into that container we can run `podman attach dazzling_darwin` your container name will most likely be different. ![](Images/Day48_Containers3.png) -If you are moving from docker to podman it is also common to change your config file to have `alias docker=podman` that way any command you run with docker will use podman. +If you are moving from docker to podman it is also common to change your config file to have `alias docker=podman` that way any command you run with docker will use podman. -### LXC +### LXC -LXC is a containerisation engine that enables users again to create multiple isolated Linux container environments. Unlike Docker, LXC acts as a hypervisor for creating multiple Linux machines with separate system files, and networking features. Was around before Docker and then made a short comeback due to Docker's shortcomings. +LXC is a containerisation engine that enables users again to create multiple isolated Linux container environments. Unlike Docker, LXC acts as a hypervisor for creating multiple Linux machines with separate system files, and networking features. Was around before Docker and then made a short comeback due to Docker's shortcomings. -LXC is as lightweight though as docker and easily deployed. +LXC is as lightweight though as docker and easily deployed. -### Containerd +### Containerd A standalone container runtime. Containerd brings simplicity and robustness as well as of course portability. Containerd was formerly a tool that runs as part of Docker container services until Docker decided to graduate its components into standalone components. @@ -67,21 +68,21 @@ A project in the Cloud Native Computing Foundation, placing it in the same class ### Other Docker tooling -We could also mention toolings and options around Rancher, and VirtualBox but we can cover them in more detail another time. +We could also mention toolings and options around Rancher, and VirtualBox but we can cover them in more detail another time. 
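The `alias docker=podman` tip above can be sketched as a shell-config snippet. This is an illustrative assumption — the file (`~/.bashrc`) and shell are examples; use whichever rc file your shell actually reads:

```Shell
# Hypothetical addition to ~/.bashrc (or ~/.zshrc): route docker invocations to podman
alias docker=podman

# After reloading the shell, familiar commands are served by podman, for example:
#   docker images
#   docker run -dit ubuntu
```

Because podman's CLI is deliberately compatible with docker's, most day-to-day commands work unchanged through the alias.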
-[**Gradle**](https://gradle.org/) +[**Gradle**](https://gradle.org/) - Build scans allow teams to collaboratively debug their scripts and track the history of all builds. - Execution options give teams the ability to continuously build so that whenever changes are inputted, the task is automatically executed. - The custom repository layout gives teams the ability to treat any file directory structure as an artefact repository. -[**Packer**](https://packer.io/) +[**Packer**](https://packer.io/) - Ability to create multiple machine images in parallel to save developer time and increase efficiency. - Teams can easily debug builds using Packer’s debugger, which inspects failures and allows teams to try out solutions before restarting builds. - Support with many platforms via plugins so teams can customize their builds. -[**Logspout**](https://github.com/gliderlabs/logspout) +[**Logspout**](https://github.com/gliderlabs/logspout) - Logging tool - The tool’s customizability allows teams to ship the same logs to multiple destinations. - Teams can easily manage their files because the tool only requires access to the Docker socket. @@ -99,9 +100,7 @@ We could also mention toolings and options around Rancher, and VirtualBox but we - Create teams and assign roles and permissions to team members. - Know what is running in each environment using the tool’s dashboard. 
- - -## Resources +## Resources - [TechWorld with Nana - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=3c-iBn73dDE) - [Programming with Mosh - Docker Tutorial for Beginners](https://www.youtube.com/watch?v=pTFZFxd4hOI) @@ -113,4 +112,4 @@ We could also mention toolings and options around Rancher, and VirtualBox but we - [Podman | Daemonless Docker | Getting Started with Podman](https://www.youtube.com/watch?v=Za2BqzeZjBk) - [LXC - Guide to building an LXC Lab](https://www.youtube.com/watch?v=cqOtksmsxfg) -See you on [Day 49](day49.md) +See you on [Day 49](day49.md) diff --git a/Days/day49.md b/Days/day49.md index 3783879fc..1000fc7ad 100644 --- a/Days/day49.md +++ b/Days/day49.md @@ -1,33 +1,34 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Kubernetes - Day 49' +title: "#90DaysOfDevOps - The Big Picture: Kubernetes - Day 49" published: false description: 90DaysOfDevOps - The Big Picture Kubernetes -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049049 --- + ## The Big Picture: Kubernetes -In the last section we covered Containers, Containers fall short when it comes to scale and orchestration alone. The best we can do is use docker-compose to bring up multiple containers together. When it comes to Kubernetes which is a Container Orchestrator, this gives us the ability to scale up and down in an automated way or based on a load of your applications and services. +In the last section we covered Containers, Containers fall short when it comes to scale and orchestration alone. The best we can do is use docker-compose to bring up multiple containers together. When it comes to Kubernetes which is a Container Orchestrator, this gives us the ability to scale up and down in an automated way or based on a load of your applications and services. -As a platform Kubernetes offers the ability to orchestrate containers according to your requirements and desired state. 
We are going to cover Kubernetes in this section as it is growing rapidly as the next wave of infrastructure. I would also suggest that from a DevOps perspective Kubernetes is just one platform that you will need to have a basic understanding of, you will also need to understand bare metal, virtualisation and most likely cloud-based services as well. Kubernetes is just another option to run our applications. +As a platform Kubernetes offers the ability to orchestrate containers according to your requirements and desired state. We are going to cover Kubernetes in this section as it is growing rapidly as the next wave of infrastructure. I would also suggest that from a DevOps perspective Kubernetes is just one platform that you will need to have a basic understanding of, you will also need to understand bare metal, virtualisation and most likely cloud-based services as well. Kubernetes is just another option to run our applications. ### What is Container Orchestration? -I have mentioned Kubernetes and I have mentioned Container Orchestration, Kubernetes is the technology whereas container orchestration is the concept or the process behind the technology. Kubernetes is not the only Container Orchestration platform we also have Docker Swarm, HashiCorp Nomad and others. But Kubernetes is going from strength to strength so I want to cover Kubernetes but wanted to say that it is not the only one out there. +I have mentioned Kubernetes and I have mentioned Container Orchestration, Kubernetes is the technology whereas container orchestration is the concept or the process behind the technology. Kubernetes is not the only Container Orchestration platform we also have Docker Swarm, HashiCorp Nomad and others. But Kubernetes is going from strength to strength so I want to cover Kubernetes but wanted to say that it is not the only one out there. ### What is Kubernetes? 
-The first thing you should read if you are new to Kubernetes is the official documentation, My experience of really deep diving into Kubernetes a little over a year ago was that this is going to be a steep learning curve. Coming from a virtualisation and storage background I was thinking about how daunting this felt. +The first thing you should read if you are new to Kubernetes is the official documentation, My experience of really deep diving into Kubernetes a little over a year ago was that this is going to be a steep learning curve. Coming from a virtualisation and storage background I was thinking about how daunting this felt. -But the community, free learning resources and documentation are amazing. [Kubernetes.io](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) +But the community, free learning resources and documentation are amazing. [Kubernetes.io](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/) -*Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.* +_Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available._ -Important things to note from the above quote, Kubernetes is Open-Source with a rich history that goes back to Google who donated the project to the Cloud Native Computing Foundation (CNCF) and it has now been progressed by the open-source community as well as large enterprise vendors contributing to making Kubernetes what it is today. 
+Important things to note from the above quote, Kubernetes is Open-Source with a rich history that goes back to Google who donated the project to the Cloud Native Computing Foundation (CNCF) and it has now been progressed by the open-source community as well as large enterprise vendors contributing to making Kubernetes what it is today. -I mentioned above that containers are great and in the previous section, we spoke about how containers and container images have changed and accelerated the adoption of cloud-native systems. But containers alone are not going to give you the production-ready experience you need from your application. Kubernetes gives us the following: +I mentioned above that containers are great and in the previous section, we spoke about how containers and container images have changed and accelerated the adoption of cloud-native systems. But containers alone are not going to give you the production-ready experience you need from your application. Kubernetes gives us the following: - **Service discovery and load balancing** Kubernetes can expose a container using the DNS name or using their IP address. If traffic to a container is high, Kubernetes can load balance and distribute the network traffic so that the deployment is stable. @@ -39,25 +40,24 @@ I mentioned above that containers are great and in the previous section, we spok - **Self-healing** Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve. -- **Secret and configuration management** Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. 
+- **Secret and configuration management** Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration. Kubernetes provides you with a framework to run distributed systems resiliently. Container Orchestration manages the deployment, placement, and lifecycle of containers. -It also has many other responsibilities: +It also has many other responsibilities: - Cluster management federates hosts into one target. - Schedule management distributes containers across nodes through the scheduler. - - Service discovery knows where containers are located and distributes client requests across them. - Replication ensures that the right number of nodes and containers are available for the requested workload. - Health management detects and replaces unhealthy containers and nodes. -### Main Kubernetes Components +### Main Kubernetes Components Kubernetes is a container orchestrator to provision, manage, and scale apps. You can use it to manage the lifecycle of containerized apps in a cluster of nodes, which is a collection of worker machines such as VMs or physical machines. @@ -67,20 +67,21 @@ The key paradigm of Kubernetes is its declarative model. You provide the state t ### Node -**Control Plane** +#### Control Plane -Every Kubernetes cluster requires a Control Plane node, the control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events. +Every Kubernetes cluster requires a Control Plane node, the control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events. ![](Images/Day49_Kubernetes1.png) -**Worker Node** - A worker machine that runs Kubernetes workloads. 
It can be a physical (bare metal) machine or a virtual machine (VM). Each node can host one or more pods. Kubernetes nodes are managed by a control plane +#### Worker Node + +A worker machine that runs Kubernetes workloads. It can be a physical (bare metal) machine or a virtual machine (VM). Each node can host one or more pods. Kubernetes nodes are managed by a control plane ![](Images/Day49_Kubernetes2.png) -There are other node types but I won't be covering them here. +There are other node types but I won't be covering them here. -**kubelet** +#### kubelet An agent that runs on each node in the cluster. It makes sure that containers are running in a Pod. @@ -88,7 +89,7 @@ The kubelet takes a set of PodSpecs that are provided through various mechanisms ![](Images/Day49_Kubernetes3.png) -**kube-proxy** +#### kube-proxy kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept. @@ -98,7 +99,7 @@ kube-proxy uses the operating system packet filtering layer if there is one and ![](Images/Day49_Kubernetes4.png) -**Container runtime** +#### Container runtime The container runtime is the software that is responsible for running containers. @@ -110,29 +111,29 @@ Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and a A cluster is a group of nodes, where a node can be a physical machine or a virtual machine. Each of the nodes will have the container runtime (Docker) and will also be running a kubelet service, which is an agent that takes in the commands from the Master controller (more on that later) and a Proxy, that is used to proxy connections to the Pods from another component (Services, that we will see later). 
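The declarative model mentioned in this hunk can be made concrete with a small manifest: you state the desired number of Pods and Kubernetes works to keep the cluster in that state. This is an illustrative sketch only — the names and image are placeholders, not from the course material:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example # illustrative name
spec:
  replicas: 2 # desired state: Kubernetes keeps two Pods running
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
        - name: nginx
          image: nginx:1.21 # public image, used purely as an example
```

If a Pod from this Deployment dies, the control loop described above notices the divergence from the declared state and schedules a replacement.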
-Our control plane which can be made highly available will contain some unique roles compared to the worker nodes, the most important will be the kube API server, this is where any communication will take place to get information or push information to our Kubernetes cluster. +Our control plane which can be made highly available will contain some unique roles compared to the worker nodes, the most important will be the kube API server, this is where any communication will take place to get information or push information to our Kubernetes cluster. -**Kube API-Server** +#### Kube API-Server The Kubernetes API server validates and configures data for the API objects which include pods, services, replication controllers, and others. The API Server services REST operations and provide the frontend to the cluster's shared state through which all other components interact. -**Scheduler** +#### Scheduler The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. -**Controller Manager** +#### Controller Manager The Kubernetes controller manager is a daemon that embeds the core control loops shipped with Kubernetes. In applications of robotics and automation, a control loop is a non-terminating loop that regulates the state of the system. In Kubernetes, a controller is a control loop that watches the shared state of the cluster through the apiserver and makes changes attempting to move the current state towards the desired state. -**etcd** +#### etcd Consistent and highly-available key value store used as Kubernetes' backing store for all cluster data. ![](Images/Day49_Kubernetes6.png) -**kubectl** +#### kubectl -To manage this from a CLI point of view we have kubectl, kubectl interacts with the API server. 
+To manage this from a CLI point of view we have kubectl, kubectl interacts with the API server. The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. @@ -154,11 +155,11 @@ A Pod is a group of containers that form a logical application. E.g. If you have ### Deployments -- You can just decide to run Pods but when they die they die. +- You can just decide to run Pods but when they die they die. -- A Deployment will enable your pod to run continuously. +- A Deployment will enable your pod to run continuously. -- Deployments allow you to update a running app without downtime. +- Deployments allow you to update a running app without downtime. - Deployments also specify a strategy to restart Pods when they die @@ -166,17 +167,17 @@ A Pod is a group of containers that form a logical application. E.g. If you have ### ReplicaSets -- The Deployment can also create the ReplicaSet +- The Deployment can also create the ReplicaSet - A ReplicaSet ensures your app has the desired number of Pods -- ReplicaSets will create and scale Pods based on the Deployment +- ReplicaSets will create and scale Pods based on the Deployment - Deployments, ReplicaSets, and Pods are not exclusive but can be ### StatefulSets -- Does your App require you to keep information about its state? +- Does your App require you to keep information about its state? - A database needs state @@ -188,23 +189,23 @@ A Pod is a group of containers that form a logical application. E.g. If you have ### DaemonSets -- DaemonSets are for continuous process +- DaemonSets are for continuous process -- They run one Pod per Node. +- They run one Pod per Node. 
- Each new node added to the cluster gets a pod started -- Useful for background tasks such as monitoring and log collection +- Useful for background tasks such as monitoring and log collection - Each pod has a unique, persistent identifier that the controller maintains over any rescheduling. ![](Images/Day49_Kubernetes11.png) -### Services +### Services -- A single endpoint to access Pods +- A single endpoint to access Pods -- a unified way to route traffic to a cluster and eventually to a list of Pods. +- a unified way to route traffic to a cluster and eventually to a list of Pods. - By using a Service, Pods can be brought up and down without affecting anything. @@ -212,18 +213,18 @@ This is just a quick overview and notes around the fundamental building blocks o ![](Images/Day49_Kubernetes12.png) -### What we will cover in the series on Kubernetes +### What we will cover in the series on Kubernetes -- Kubernetes Architecture -- Kubectl Commands -- Kubernetes YAML -- Kubernetes Ingress +- Kubernetes Architecture +- Kubectl Commands +- Kubernetes YAML +- Kubernetes Ingress - Kubernetes Services -- Helm Package Manager -- Persistent Storage -- Stateful Apps +- Helm Package Manager +- Persistent Storage +- Stateful Apps -## Resources +## Resources - [Kubernetes Documentation](https://kubernetes.io/docs/home/) - [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do) diff --git a/Days/day50.md b/Days/day50.md index 7b77527a2..5f5934d38 100644 --- a/Days/day50.md +++ b/Days/day50.md @@ -1,50 +1,52 @@ --- -title: '#90DaysOfDevOps - Choosing your Kubernetes platform - Day 50' +title: "#90DaysOfDevOps - Choosing your Kubernetes platform - Day 50" published: false description: 90DaysOfDevOps - Choosing your Kubernetes platform -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049046 --- -## Choosing your Kubernetes platform 
-I wanted to use this session to break down some of the platforms or maybe distributions is a better term to use here, one thing that has been a challenge in the Kubernetes world is removing complexity. +## Choosing your Kubernetes platform -Kubernetes the hard way walks through how to build out from nothing to a full-blown functional Kubernetes cluster this is to the extreme but more and more at least the people I am speaking to are wanting to remove that complexity and run a managed Kubernetes cluster. The issue there is that it costs more money but the benefits could be if you use a managed service do you need to know the underpinning node architecture and what is happening from a Control Plane node point of view when generally you do not have access to this. +I wanted to use this session to break down some of the platforms or maybe distributions is a better term to use here, one thing that has been a challenge in the Kubernetes world is removing complexity. -Then we have the local development distributions that enable us to use our systems and run a local version of Kubernetes so developers can have the full working environment to run their apps in the platform they are intended for. +Kubernetes the hard way walks through how to build out from nothing to a full-blown functional Kubernetes cluster this is to the extreme but more and more at least the people I am speaking to are wanting to remove that complexity and run a managed Kubernetes cluster. The issue there is that it costs more money but the benefits could be if you use a managed service do you need to know the underpinning node architecture and what is happening from a Control Plane node point of view when generally you do not have access to this. -The general basis of all of these concepts is that they are all a flavour of Kubernetes which means we should be able to freely migrate and move our workloads where we need them to suit our requirements. 
+Then we have the local development distributions that enable us to use our systems and run a local version of Kubernetes so developers can have the full working environment to run their apps in the platform they are intended for. -A lot of our choice will also depend on what investments have been made. I mentioned the developer experience as well but some of those local Kubernetes environments that run our laptops are great for getting to grips with the technology without spending any money. +The general basis of all of these concepts is that they are all a flavour of Kubernetes which means we should be able to freely migrate and move our workloads where we need them to suit our requirements. -### Bare-Metal Clusters +A lot of our choice will also depend on what investments have been made. I mentioned the developer experience as well but some of those local Kubernetes environments that run our laptops are great for getting to grips with the technology without spending any money. -An option for many could be running your Linux OS straight onto several physical servers to create our cluster, it could also be Windows but I have not heard much about the adoption rate around Windows, Containers and Kubernetes. If you are a business and you have made a CAPEX decision to buy your physical servers then this might be how you go when building out your Kubernetes cluster, the management and admin side here means you are going to have to build yourself and manage everything from the ground up. +### Bare-Metal Clusters -### Virtualisation +An option for many could be running your Linux OS straight onto several physical servers to create our cluster, it could also be Windows but I have not heard much about the adoption rate around Windows, Containers and Kubernetes. 
If you are a business and you have made a CAPEX decision to buy your physical servers then this might be how you go when building out your Kubernetes cluster, the management and admin side here means you are going to have to build yourself and manage everything from the ground up. -Regardless of test and learning environments or enterprise-ready Kubernetes clusters virtualisation is a great way to go, typically the ability to spin up virtual machines to act as your nodes and then cluster those together. You have the underpinning architecture, efficiency and speed of virtualisation as well as leveraging that existing spend. VMware for example offers a great solution for both Virtual Machines and Kubernetes in various flavours. +### Virtualisation -My first ever Kubernetes cluster was built based on Virtualisation using Microsoft Hyper-V on an old server that I had which was capable of running a few VMs as my nodes. +Regardless of test and learning environments or enterprise-ready Kubernetes clusters virtualisation is a great way to go, typically the ability to spin up virtual machines to act as your nodes and then cluster those together. You have the underpinning architecture, efficiency and speed of virtualisation as well as leveraging that existing spend. VMware for example offers a great solution for both Virtual Machines and Kubernetes in various flavours. -### Local Desktop options +My first ever Kubernetes cluster was built based on Virtualisation using Microsoft Hyper-V on an old server that I had which was capable of running a few VMs as my nodes. -There are several options when it comes to running a local Kubernetes cluster on your desktop or laptop. This as previously said gives developers the ability to see what their app will look like without having to have multiple costly or complex clusters. Personally, this has been one that I have used a lot and in particular, I have been using minikube. 
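The minikube workflow referred to here can be sketched end to end. This is a hedged example — it assumes minikube is already installed (the guard makes it a safe no-op otherwise), and the dashboard add-on is just one illustration of the add-ons mentioned:

```Shell
# Safe no-op on machines without minikube on the PATH
if command -v minikube >/dev/null 2>&1; then
  minikube start                   # create a local single-node cluster
  minikube addons list             # see the available add-ons
  minikube addons enable dashboard # enable one add-on as an example
  minikube delete                  # blow the cluster away when finished
fi
```

This quick build-up/tear-down cycle is exactly the "blow it away when we are finished" property the text highlights.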
It has some great functionality and adds-ons which changes the way you get something up and running. +### Local Desktop options + +There are several options when it comes to running a local Kubernetes cluster on your desktop or laptop. This as previously said gives developers the ability to see what their app will look like without having to have multiple costly or complex clusters. Personally, this has been one that I have used a lot and in particular, I have been using minikube. It has some great functionality and adds-ons which changes the way you get something up and running. + +### Kubernetes Managed Services -### Kubernetes Managed Services I have mentioned virtualisation, and this can be achieved with hypervisors locally but we know from previous sections we could also leverage VMs in the public cloud to act as our nodes. What I am talking about here with Kubernetes managed services are the offerings we see from the large hyperscalers but also from MSPs removing layers of management and control away from the end user, this could be removing the control plane from the end user this is what happens with Amazon EKS, Microsoft AKS and Google Kubernetes Engine. (GKE) -### Overwhelming choice +### Overwhelming choice -I mean the choice is great but there is a point where things become overwhelming and this is not a depth look into all options within each category listed above. On top of the above, we also have OpenShift which is from Red Hat and this option can be run across the options above in all the major cloud providers and probably today gives the best overall useability to the admins regardless of where clusters are deployed. +I mean the choice is great but there is a point where things become overwhelming and this is not a depth look into all options within each category listed above. 
On top of the above, we also have OpenShift which is from Red Hat and this option can be run across the options above in all the major cloud providers and probably today gives the best overall useability to the admins regardless of where clusters are deployed. -So where do you start from your learning perspective, as I said I started with the virtualisation route but that was because I had access to a physical server which I could use for the purpose, I appreciate and in fact, since then I no longer have this option. +So where do you start from your learning perspective, as I said I started with the virtualisation route but that was because I had access to a physical server which I could use for the purpose, I appreciate and in fact, since then I no longer have this option. -My actual advice now would be to use Minikube as a first option or Kind (Kubernetes in Docker) but Minikube gives us some additional benefits which almost abstracts the complexity out as we can just use add-ons and get things built out quickly and we can then blow it away when we are finished, we can run multiple clusters, we can run it almost anywhere, cross-platform and hardware agnostic. +My actual advice now would be to use Minikube as a first option or Kind (Kubernetes in Docker) but Minikube gives us some additional benefits which almost abstracts the complexity out as we can just use add-ons and get things built out quickly and we can then blow it away when we are finished, we can run multiple clusters, we can run it almost anywhere, cross-platform and hardware agnostic. -I have been through a bit of a journey with my learning around Kubernetes so I am going to leave the platform choice and specifics here to list out the options that I have tried to give me a better understanding of Kubernetes the platform and where it can run. What I might do with the below blog posts is take another look at these update them and bring them more into here vs them being linked to blog posts. 
+I have been through a bit of a journey with my learning around Kubernetes so I am going to leave the platform choice and specifics here to list out the options that I have tried to give me a better understanding of Kubernetes the platform and where it can run. What I might do with the below blog posts is take another look at these, update them, and bring them into here rather than linking out to blog posts.
- [Kubernetes playground – How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground – Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
@@ -56,22 +58,22 @@ I have been through a bit of a journey with my learning around Kubernetes so I a
- [Getting started with CIVO Cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud)
- [Minikube - Kubernetes Demo Environment For Everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone)
-### What we will cover in the series on Kubernetes
+### What we will cover in the series on Kubernetes
-- Kubernetes Architecture
-- Kubectl Commands
-- Kubernetes YAML
-- Kubernetes Ingress
+- Kubernetes Architecture
+- Kubectl Commands
+- Kubernetes YAML
+- Kubernetes Ingress
- Kubernetes Services
-- Helm Package Manager
-- Persistent Storage
-- Stateful Apps
+- Helm Package Manager
+- Persistent Storage
+- Stateful Apps
-## Resources
+## Resources
- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes?
Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)
-See you on [Day 51](day51.md)
+See you on [Day 51](day51.md)
diff --git a/Days/day51.md b/Days/day51.md
index 4deef6986..16f4d39fd 100644
--- a/Days/day51.md
+++ b/Days/day51.md
@@ -1,23 +1,24 @@
---
-title: '#90DaysOfDevOps - Deploying your first Kubernetes Cluster - Day 51'
+title: "#90DaysOfDevOps - Deploying your first Kubernetes Cluster - Day 51"
published: false
description: 90DaysOfDevOps - Deploying your first Kubernetes Cluster
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048778
---
-## Deploying your first Kubernetes Cluster
-In this post we are going get a Kubernetes cluster up and running on our local machine using minikube, this will give us a baseline Kubernetes cluster for the rest of the Kubernetes section, although we will look at deploying a Kubernetes cluster also in VirtualBox later on. The reason for choosing this method vs spinning a managed Kubernetes cluster up in the public cloud is that this is going to cost money even with the free tier, I shared some blogs though if you would like to spin up that environment in the previous section [Day 50](day50.md).
+## Deploying your first Kubernetes Cluster
-### What is Minikube?
+In this post we are going to get a Kubernetes cluster up and running on our local machine using minikube; this will give us a baseline Kubernetes cluster for the rest of the Kubernetes section, although we will look at deploying a Kubernetes cluster in VirtualBox later on. The reason for choosing this method vs spinning up a managed Kubernetes cluster in the public cloud is that the cloud route is going to cost money even with the free tier; I shared some blogs in the previous section [Day 50](day50.md) though if you would like to spin up that environment.
-*“minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows.
We proudly focus on helping application developers and new Kubernetes users.”*
+### What is Minikube?
-You might not fit into the above but I have found minikube is a great little tool if you just want to test something out in a Kubernetes fashion, you can easily deploy and app and they have some amazing add ons which I will also cover.
+> “minikube quickly sets up a local Kubernetes cluster on macOS, Linux, and Windows. We proudly focus on helping application developers and new Kubernetes users.”
-To begin with regardless of your workstation OS, you can run minikube. First, head over to the [project page here](https://minikube.sigs.k8s.io/docs/start/). The first option you have is choosing your installation method. I did not use this method, but you might choose to vs my way (my way is coming up).
+You might not fit into the above but I have found minikube is a great little tool if you just want to test something out in a Kubernetes fashion; you can easily deploy an app and they have some amazing add-ons which I will also cover.
+
+To begin with, regardless of your workstation OS, you can run minikube. First, head over to the [project page here](https://minikube.sigs.k8s.io/docs/start/). The first option you have is choosing your installation method. I did not use this method, but you might choose it over my way (my way is coming up).
As mentioned below, it states that you need to have a “Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMware”; this is where MiniKube will run, and the easy option, unless stated in the repository, is Docker, which is what I am using. You can install Docker on your system using the following [link](https://docs.docker.com/get-docker/).
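Before installing minikube, it can be worth checking which of those container or virtual machine managers is already present on your workstation. A minimal sketch, assuming only that the tools appear on `PATH` under their usual binary names (the candidate list is my assumption drawn from the quoted driver list, not from the post):

```shell
# Illustrative driver check: pick the first container/VM manager found on PATH.
driver=""
for candidate in docker podman virtualbox; do
  if command -v "$candidate" >/dev/null 2>&1; then
    driver="$candidate"
    break
  fi
done
echo "minikube driver candidate: ${driver:-none found}"
```

If nothing is found, installing Docker via the link above is the simplest route.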
@@ -25,7 +26,7 @@ mentioned below it states that you need to have a “Container or virtual machin
### My way of installing minikube and other prereqs…
-I have been using arkade for some time now to get all those Kubernetes tools and CLIs, you can see the installation steps on this [github repository](https://github.com/alexellis/arkade) for getting started with Arkade. I have also mentioned this in other blog posts where I needed something installing. The simplicity of just hitting arkade get and then seeing if your tool or cli is available is handy. In the Linux section we spoke about package manager and the process for getting our software, you can think about Arkade as that marketplace for all your apps and clis for Kubernetes. A very handy little tool to have on your systems, written in Golang and cross platform.
+I have been using arkade for some time now to get all those Kubernetes tools and CLIs; you can see the installation steps on this [github repository](https://github.com/alexellis/arkade) for getting started with Arkade. I have also mentioned this in other blog posts where I needed something installed. The simplicity of just hitting `arkade get` and then seeing if your tool or CLI is available is handy. In the Linux section we spoke about package managers and the process for getting our software; you can think about Arkade as that marketplace for all your apps and CLIs for Kubernetes. A very handy little tool to have on your systems, written in Golang and cross platform.
![](Images/Day51_Kubernetes2.png)
@@ -33,33 +34,33 @@ As part of the long list of available apps within arkade minikube is one of them
![](Images/Day51_Kubernetes3.png)
-We will also need kubectl as part of our tooling so you can also get this via arkade or I believe that the minikube documentation brings this down as part of the curl commands mentioned above. We will cover more on kubectl later on in the post.
+We will also need kubectl as part of our tooling, so you can also get this via arkade, or I believe that the minikube documentation brings this down as part of the curl commands mentioned above. We will cover more on kubectl later on in the post.
### Getting a Kubernetes cluster up and running
For this particular section I want to cover the options available to us when it comes to getting a Kubernetes cluster up and running on your local machine. We could simply run the following command and it would spin up a cluster for you to use.
-minikube is used on the command line, and simply put once you have everything installed you can run `minikube start` to deploy your first Kubernetes cluster. You will see below that the Docker Driver is the default as to where we will be running our nested virtualisation node. I mentioned at the start of the post the other options available, the other options help when you want to expand what this local Kubernetes cluster needs to look like.
+minikube is used on the command line, and simply put, once you have everything installed you can run `minikube start` to deploy your first Kubernetes cluster. You will see below that the Docker driver is the default for where we will be running our nested virtualisation node. I mentioned at the start of the post the other options available; the other options help when you want to expand what this local Kubernetes cluster needs to look like.
-A single Minikube cluster is going to consist of a single docker container in this instance which will have the control plane node and worker node in one instance. Where as typically you would separate those nodes out. Something we will cover in the next section where we look at still home lab type Kubernetes environments but a little closer to production architecture.
+A single Minikube cluster is going to consist of a single docker container in this instance which will have the control plane node and worker node in one instance.
Whereas typically you would separate those nodes out, something we will cover in the next section where we look at Kubernetes environments that are still home-lab grade but a little closer to production architecture.
![](Images/Day51_Kubernetes4.png)
I have mentioned this a few times now, I really like minikube because of the addons available; the ability to deploy a cluster with a simple command, including all the required addons from the start, really helps me deploy the same required setup every time.
-Below you can see a list of those addons, I generally use the `csi-hostpath-driver` and the `volumesnapshots` addons but you can see the long list below. Sure these addons can generally be deployed using Helm again something we will cover later on in the Kubernetes section but this makes things much simpler.
+Below you can see a list of those addons; I generally use the `csi-hostpath-driver` and the `volumesnapshots` addons but you can see the long list below. Sure, these addons can generally be deployed using Helm, again something we will cover later on in the Kubernetes section, but this makes things much simpler.
![](Images/Day51_Kubernetes5.png)
-I am also defining in our project some additional configuration, apiserver is set to 6433 instead of a random API port, I define the container runtime also to containerd however docker is default and CRI-O is also available. I am also setting a specific Kubernetes version.
+I am also defining some additional configuration in our project: the apiserver is set to 6443 instead of a random API port, and I define the container runtime as containerd, although docker is the default and CRI-O is also available. I am also setting a specific Kubernetes version.
![](Images/Day51_Kubernetes6.png)
-Now we are ready to deploy our first Kubernetes cluster using minikube. I mentioned before though that you will also need `kubectl` to interact with your cluster.
You can get kubectl installed using arkade with the command `arkade get kubectl`
+Now we are ready to deploy our first Kubernetes cluster using minikube. I mentioned before though that you will also need `kubectl` to interact with your cluster. You can get kubectl installed using arkade with the command `arkade get kubectl`
![](Images/Day51_Kubernetes7.png)
-or you can download cross platform from the following
+or you can download it cross-platform from the following
- [Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux)
- [macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos)
@@ -71,74 +72,74 @@ Once you have kubectl installed we can then interact with our cluster with a sim
### What is kubectl?
-We now have our minikube | Kubernetes cluster up and running and I have asked you to install both Minikube where I have explained at least what it does but I have not really explained what kubectl is and what it does.
+We now have our minikube Kubernetes cluster up and running; I have asked you to install Minikube and explained at least what it does, but I have not really explained what kubectl is and what it does.
-kubectl is a cli that is used or allows you to interact with Kubernetes clusters, we are using it here for interacting with our minikube cluster but we would also use kubectl for interacting with our enterprise clusters across the public cloud.
+kubectl is a CLI that allows you to interact with Kubernetes clusters; we are using it here for interacting with our minikube cluster, but we would also use kubectl for interacting with our enterprise clusters across the public cloud.
-We use kubectl to deploy applications, inspect and manage cluster resources. A much better [Overview of kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) can be found here on the Kubernetes official documentation.
+We use kubectl to deploy applications, inspect and manage cluster resources.
A much better [Overview of kubectl](https://kubernetes.io/docs/reference/kubectl/overview/) can be found here on the Kubernetes official documentation.
-kubectl interacts with the API server found on the Control Plane node which we breifly covered in an earlier post.
+kubectl interacts with the API server found on the Control Plane node which we briefly covered in an earlier post.
### kubectl cheat sheet
Along with the official documentation, I have also found myself with this page open all the time when looking for kubectl commands. [Unofficial Kubernetes](https://unofficial-kubernetes.readthedocs.io/en/latest/)
-|Listing Resources | |
-| ------------------------------ | ----------------------------------------- |
-|kubectl get nodes |List all nodes in cluster |
-|kubectl get namespaces |List all namespaces in cluster |
-|kubectl get pods |List all pods in default namespace cluster |
-|kubectl get pods -n name |List all pods in "name" namespace |
+| Listing Resources | |
+| ------------------------ | ------------------------------------------ |
+| kubectl get nodes | List all nodes in cluster |
+| kubectl get namespaces | List all namespaces in cluster |
+| kubectl get pods | List all pods in the default namespace |
+| kubectl get pods -n name | List all pods in "name" namespace |
-|Creating Resources | |
-| ------------------------------ | ----------------------------------------- |
-|kubectl create namespace name |Create a namespace called "name" |
-|kubectl create -f [filename] |Create a resource from a JSON or YAML file:|
+| Creating Resources | |
+| ----------------------------- | ------------------------------------------- |
+| kubectl create namespace name | Create a namespace called "name" |
+| kubectl create -f [filename] | Create a resource from a JSON or YAML file |
-|Editing Resources | |
-| ------------------------------ | ----------------------------------------- |
-|kubectl edit svc/servicename |To edit a service |
+| Editing Resources | |
+|
---------------------------- | ----------------- |
+| kubectl edit svc/servicename | To edit a service |
-|More detail on Resources | |
-| ------------------------------ | ------------------------------------------------------ |
-|kubectl describe nodes | display the state of any number of resources in detail,|
+| More detail on Resources | |
+| ------------------------ | ------------------------------------------------------- |
+| kubectl describe nodes | Display the state of any number of resources in detail |
-|Delete Resources | |
-| ------------------------------ | ------------------------------------------------------ |
-|kubectl delete pod | Remove resources, this can be from stdin or file |
+| Delete Resources | |
+| ------------------ | ------------------------------------------------ |
+| kubectl delete pod | Remove resources, this can be from stdin or file |
You will find yourself wanting to know the short names for some of the kubectl commands, for example `-n` is the short name for `--namespace`, which makes it easier to type a command but also, if you are scripting anything, you can have much tidier code.
-| Short name | Full name | -| -------------------- | ---------------------------- | -| csr | certificatesigningrequests | -| cs | componentstatuses | -| cm | configmaps | -| ds | daemonsets | -| deploy | deployments | -| ep | endpoints | -| ev | events | -| hpa | horizontalpodautoscalers | -| ing | ingresses | -| limits | limitranges | -| ns | namespaces | -| no | nodes | -| pvc | persistentvolumeclaims | -| pv | persistentvolumes | -| po | pods | -| pdb | poddisruptionbudgets | -| psp | podsecuritypolicies | -| rs | replicasets | -| rc | replicationcontrollers | -| quota | resourcequotas | -| sa | serviceaccounts | -| svc | services | - -The final thing to add here is that I created another project around minikube to help me quickly spin up demo environments to display data services and protecting those workloads with Kasten K10, [Project Pace](https://github.com/MichaelCade/project_pace) can be found there and would love your feedback or interaction, it also displays or includes some automated ways of deploying your minikube clusters and creating different data services applications. - -Next up, we will get in to deploying multiple nodes into virtual machines using VirtualBox but we are going to hit the easy button there like we did in the Linux section where we used vagrant to quickly spin up the machines and deploy our software how we want them. - -I added this list to the post yesterday which are walkthrough blogs I have done around different Kubernetes clusters being deployed. 
+| Short name | Full name |
+| ---------- | -------------------------- |
+| csr | certificatesigningrequests |
+| cs | componentstatuses |
+| cm | configmaps |
+| ds | daemonsets |
+| deploy | deployments |
+| ep | endpoints |
+| ev | events |
+| hpa | horizontalpodautoscalers |
+| ing | ingresses |
+| limits | limitranges |
+| ns | namespaces |
+| no | nodes |
+| pvc | persistentvolumeclaims |
+| pv | persistentvolumes |
+| po | pods |
+| pdb | poddisruptionbudgets |
+| psp | podsecuritypolicies |
+| rs | replicasets |
+| rc | replicationcontrollers |
+| quota | resourcequotas |
+| sa | serviceaccounts |
+| svc | services |
+
+The final thing to add here is that I created another project around minikube to help me quickly spin up demo environments to display data services and protect those workloads with Kasten K10. [Project Pace](https://github.com/MichaelCade/project_pace) can be found there and I would love your feedback or interaction; it also includes some automated ways of deploying your minikube clusters and creating different data services applications.
+
+Next up, we will get into deploying multiple nodes into virtual machines using VirtualBox, but we are going to hit the easy button there like we did in the Linux section where we used vagrant to quickly spin up the machines and deploy our software how we want them.
+
+I added this list to the post yesterday; these are walkthrough blogs I have done around different Kubernetes clusters being deployed.
- [Kubernetes playground – How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground – Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
@@ -150,26 +151,26 @@ I added this list to the post yesterday which are walkthrough blogs I have done
- [Getting started with CIVO Cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud)
- [Minikube - Kubernetes Demo Environment For Everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone)
-### What we will cover in the series on Kubernetes
+### What we will cover in the series on Kubernetes
-We have started covering some of these mentioned below but we are going to get more hands on tomorrow with our second cluster deployment then we can start deploying applications into our clusters.
+We have started covering some of these mentioned below, but we are going to get more hands-on tomorrow with our second cluster deployment; then we can start deploying applications into our clusters.
-- Kubernetes Architecture
-- Kubectl Commands
-- Kubernetes YAML
-- Kubernetes Ingress
+- Kubernetes Architecture
+- Kubectl Commands
+- Kubernetes YAML
+- Kubernetes Ingress
- Kubernetes Services
-- Helm Package Manager
-- Persistant Storage
-- Stateful Apps
+- Helm Package Manager
+- Persistent Storage
+- Stateful Apps
-## Resources
+## Resources
-If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
+If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/) - [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do) - [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4) - [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8) -See you on [Day 52](day52.md) +See you on [Day 52](day52.md) diff --git a/Days/day52.md b/Days/day52.md index 26dc78ae3..a048ad404 100644 --- a/Days/day52.md +++ b/Days/day52.md @@ -1,63 +1,64 @@ --- -title: '#90DaysOfDevOps - Setting up a multinode Kubernetes Cluster - Day 52' +title: "#90DaysOfDevOps - Setting up a multinode Kubernetes Cluster - Day 52" published: false description: 90DaysOfDevOps - Setting up a multinode Kubernetes Cluster -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049050 --- -## Setting up a multinode Kubernetes Cluster -I wanted this title to be "Setting up a multinode Kubernetes cluster with Vagrant" but thought it might be a little too long! +## Setting up a multinode Kubernetes Cluster -In the session yesterday we used a cool project to deploy our first Kubernetes cluster and get a little hands on with the most important CLI tool you will come across when using Kubernetes (kubectl). +I wanted this title to be "Setting up a multinode Kubernetes cluster with Vagrant" but thought it might be a little too long! -Here we are going to use VirtualBox as our base but as mentioned the last time we spoke about Vagrant back in the Linux section we can really use any hypervisor or virtualisation tool supported. It was [Day 14](day14.md) when we went through and deployed an Ubuntu machine for the Linux section. 
+In the session yesterday we used a cool project to deploy our first Kubernetes cluster and get a little hands-on with the most important CLI tool you will come across when using Kubernetes (kubectl).
-### A quick recap on Vagrant
+Here we are going to use VirtualBox as our base but, as mentioned the last time we spoke about Vagrant back in the Linux section, we can really use any supported hypervisor or virtualisation tool. It was [Day 14](day14.md) when we went through and deployed an Ubuntu machine for the Linux section.
-Vagrant is a CLI utility that manages the lifecyle of your virtual machines. We can use vagrant to spin up and down virtual machines across many different platforms including vSphere, Hyper-v, Virtual Box and also Docker. It does have other providers but we will stick with that we are using Virtual Box here so we are good to go.
+### A quick recap on Vagrant
-I am going to be using a baseline this [blog and repository](https://devopscube.com/kubernetes-cluster-vagrant/) to walk through the configuration. I would however advise that if this is your first time deploying a Kubernetes cluster then maybe also look into how you would do this manually and then at least you know what this looks like. Although I will say that this Day 0 operations and effort is being made more efficient with every release of Kubernetes. I liken this very much to the days of VMware and ESX and how you would need at least a day to deploy 3 ESX servers now we can have that up and running in an hour. We are heading in that direction when it comes to Kubernetes.
+Vagrant is a CLI utility that manages the lifecycle of your virtual machines. We can use vagrant to spin up and down virtual machines across many different platforms including vSphere, Hyper-V, VirtualBox and also Docker. It does have other providers, but we will stick with VirtualBox here as that is what we are using, so we are good to go.
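Since Vagrant drives the rest of this lab, here is a minimal sketch of the multi-machine shape a Vagrantfile takes. The box name and IP below are placeholders of my own, not the values used in the lab we build later:

```
Vagrant.configure("2") do |config|
  # Base image (a "box") shared by every machine
  config.vm.box = "ubuntu/bionic64"
  # Each named define block becomes a separate VM
  config.vm.define "master" do |m|
    m.vm.hostname = "master"
    m.vm.network "private_network", ip: "10.0.0.10"
  end
end
```

Each `define` block becomes its own virtual machine, which is how a control plane node and worker nodes can be declared side by side in one file.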
-### Kubernetes Lab environment +I am going to be using a baseline this [blog and repository](https://devopscube.com/kubernetes-cluster-vagrant/) to walk through the configuration. I would however advise that if this is your first time deploying a Kubernetes cluster then maybe also look into how you would do this manually and then at least you know what this looks like. Although I will say that this Day 0 operations and effort is being made more efficient with every release of Kubernetes. I liken this very much to the days of VMware and ESX and how you would need at least a day to deploy 3 ESX servers now we can have that up and running in an hour. We are heading in that direction when it comes to Kubernetes. -I have uploaded in [Kubernetes folder](Kubernetes) the vagrantfile that we will be using to build out our environment. Grab this and navigate to this directory in your terminal. I am again using Windows so I will be using PowerShell to perform my workstation commands with vagrant. If you do not have vagrant then you can use arkade, we covered this yesterday when installing minikube and other tools. A simple command `arkade get vagrant` should see you download and install the latest version of vagrant. +### Kubernetes Lab environment -When you are in your directory then you can simply run `vagrant up` and if all is configured correctly then you should see the following kick off in your terminal. +I have uploaded in [Kubernetes folder](Kubernetes) the vagrantfile that we will be using to build out our environment. Grab this and navigate to this directory in your terminal. I am again using Windows so I will be using PowerShell to perform my workstation commands with vagrant. If you do not have vagrant then you can use arkade, we covered this yesterday when installing minikube and other tools. A simple command `arkade get vagrant` should see you download and install the latest version of vagrant. 
+
+When you are in your directory then you can simply run `vagrant up` and if all is configured correctly then you should see the following kick off in your terminal.
![](Images/Day52_Kubernetes1.png)
- In the terminal you are going to see a number of steps taking place, but in the meantime let's take a look at what we are actually building here.
+In the terminal you are going to see a number of steps taking place, but in the meantime let's take a look at what we are actually building here.
![](Images/Day52_Kubernetes2.png)
-From the above you can see that we are going to build out 3 virtual machines, we will have a control plane node and then two worker nodes. If you head back to [Day 49](day49.md) You will see some more description on these areas we see in the image.
+From the above you can see that we are going to build out 3 virtual machines; we will have a control plane node and then two worker nodes. If you head back to [Day 49](day49.md) you will see some more description of these areas we see in the image.
-Also in the image we indicate that our kubectl access will come from outside of the cluster and hit that kube apiserver when in fact as part of the vagrant provisioning we are deploying kubectl on each of these nodes so that we can access the cluster from within each of our nodes.
+Also in the image we indicate that our kubectl access will come from outside of the cluster and hit that kube apiserver, when in fact, as part of the vagrant provisioning, we are deploying kubectl on each of these nodes so that we can access the cluster from within each of our nodes.
-The process of building out this lab could take anything from 5 minutes to 30 minutes depending on your setup.
+The process of building out this lab could take anything from 5 minutes to 30 minutes depending on your setup.
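The Vagrantfile we use (shown later in the walkthrough) defines variables such as `NUM_WORKER_NODES`, `IP_NW` and `IP_START` to lay out the node addresses. As a rough sketch of that arithmetic — the exact mapping is my assumption, not lifted from the scripts:

```shell
# Hypothetical address layout derived from the Vagrantfile variables.
NUM_WORKER_NODES=2
IP_NW="10.0.0."
IP_START=10
# master gets the starting address; workers are offset from it
echo "master: ${IP_NW}${IP_START}"
for i in $(seq 1 "$NUM_WORKER_NODES"); do
  printf 'node%02d: %s%d\n' "$i" "$IP_NW" "$((IP_START + i))"
done
# prints master: 10.0.0.10, node01: 10.0.0.11, node02: 10.0.0.12
```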
-I am going to cover the scripts shortly as well but you will notice if you look into the vagrant file that we are calling on 3 scripts as part of the deployment and this is really where the cluster is created. We have seen how easy it is to use vagrant to deploy our virtual machines and OS installations using vagrant boxes but having the ability to run a shell script as part of the deployment process is where it gets quite interesting around automating these lab build outs.
+I am going to cover the scripts shortly as well, but you will notice if you look into the vagrant file that we are calling on 3 scripts as part of the deployment, and this is really where the cluster is created. We have seen how easy it is to use vagrant to deploy our virtual machines and OS installations using vagrant boxes, but having the ability to run a shell script as part of the deployment process is where it gets quite interesting around automating these lab build outs.
-Once complete we can then ssh to one of our nodes `vagrant ssh master` from the terminal should get you access, default username and password is `vagrant/vagrant`
+Once complete we can then SSH to one of our nodes; `vagrant ssh master` from the terminal should get you access. The default username and password is `vagrant/vagrant`.
-You can also use `vagrant ssh node01` and `vagrant ssh node02` to gain access to the worker nodes should you wish.
+You can also use `vagrant ssh node01` and `vagrant ssh node02` to gain access to the worker nodes should you wish.
![](Images/Day52_Kubernetes3.png)
-Now we are in one of the above nodes in our new cluster we can issue `kubectl get nodes` to show our 3 node cluster and the status of this.
+Now that we are in one of the above nodes in our new cluster, we can issue `kubectl get nodes` to show our 3 node cluster and its status.
![](Images/Day52_Kubernetes4.png)
-At this point we have a running 3 node cluster, with 1 control plane node and 2 worker nodes.
+At this point we have a running 3 node cluster, with 1 control plane node and 2 worker nodes.
-### Vagrantfile and Shell Script walkthrough
+### Vagrantfile and Shell Script walkthrough
-If we take a look at our vagrantfile, you will see that we are defining a number of worker nodes, networking IP addresses for the bridged network within VirtualBox and then some naming. Another you will notice is that we are also calling upon some scripts that we want to run on specific hosts.
+If we take a look at our vagrantfile, you will see that we are defining a number of worker nodes, networking IP addresses for the bridged network within VirtualBox and then some naming. Another thing you will notice is that we are also calling upon some scripts that we want to run on specific hosts.
-```
+```
NUM_WORKER_NODES=2
IP_NW="10.0.0."
IP_START=10
@@ -98,28 +99,29 @@ Vagrant.configure("2") do |config|
end
end
end
- ```
-Lets break down those scripts that are being ran. We have three scripts listed in the above VAGRANTFILE to run on specific nodes.
+```
+
+Let's break down those scripts that are being run. We have three scripts listed in the above VAGRANTFILE to run on specific nodes.
`master.vm.provision "shell", path: "scripts/common.sh"`
-This script above is going to focus on getting the nodes ready, it is going to be ran on all 3 of our nodes and it will remove any existing Docker components and reinstall Docker and ContainerD as well as kubeadm, kubelet and kubectl. This script will also update existing software packages on the system.
`master.vm.provision "shell", path: "scripts/master.sh"`

-The master.sh script will only run on the control plane node, this script is going to create the Kubernetes cluster using kubeadm commands. It will also prepare the config context for access to this cluster which we will cover next.
+The master.sh script will only run on the control plane node; this script is going to create the Kubernetes cluster using kubeadm commands. It will also prepare the config context for access to this cluster, which we will cover next.

`node.vm.provision "shell", path: "scripts/node.sh"`

-This is simply going to take the config created by the master and join our nodes to the Kubernetes cluster, this join process again uses kubeadm and another script which can be found in the config folder.
+This is simply going to take the config created by the master and join our nodes to the Kubernetes cluster; this join process again uses kubeadm and another script which can be found in the config folder.

-### Access to the Kubernetes cluster
+### Access to the Kubernetes cluster

- Now we have two clusters deployed we have our minikube cluster that we deployed in the previous section and we have the new 3 node cluster we just deployed to VirtualBox.
+Now we have two clusters deployed: the minikube cluster that we deployed in the previous section, and the new 3 node cluster we just deployed to VirtualBox.

- Also in that config file that you will also have access to on the machine you ran vagrant from consists of how we can gain access to our cluster from our workstation.
+That config file, which you will also have access to on the machine you ran vagrant from, describes how we can gain access to our cluster from our workstation.

- Before we show that let me touch on the context.
+Before we show that, let me touch on the context.
![](Images/Day52_Kubernetes5.png)

@@ -127,23 +129,23 @@ Context is important, the ability to access your Kubernetes cluster from your de

By default, the Kubernetes CLI client (kubectl) uses the C:\Users\username\.kube\config to store the Kubernetes cluster details such as endpoint and credentials. If you have deployed a cluster you will be able to see this file in that location. But if you have been using maybe the master node to run all of your kubectl commands so far via SSH or other methods then this post will hopefully help you get to grips with being able to connect with your workstation.

-We then need to grab the kubeconfig file from the cluster or we can also get this from our config file once deployed, grab the contents of this file either via SCP or just open a console session to your master node and copy to the local windows machine.
+We then need to grab the kubeconfig file from the cluster (we can also get this from our config file once deployed); grab the contents of this file either via SCP, or just open a console session to your master node and copy it to the local Windows machine.

![](Images/Day52_Kubernetes6.png)

-We then want to take a copy of that config file and move to our `$HOME/.kube/config` location.
+We then want to take a copy of that config file and move it to our `$HOME/.kube/config` location.

![](Images/Day52_Kubernetes7.png)

-Now from your local workstation you will be able to run `kubectl cluster-info` and `kubectl get nodes` to validate that you have access to your cluster.
+Now from your local workstation you will be able to run `kubectl cluster-info` and `kubectl get nodes` to validate that you have access to your cluster.
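The copy step above can be sketched in a couple of shell commands. This is a hedged sketch: `config-from-master` is a hypothetical name for the file you grabbed via SCP, and a scratch directory stands in for your real home directory so the sketch is safe to run anywhere:

```shell
# Sketch: placing a copied kubeconfig where kubectl looks by default ($HOME/.kube/config).
# "config-from-master" is a hypothetical filename; a scratch dir stands in for $HOME.
HOME_DIR="$(mktemp -d)"
printf 'apiVersion: v1\n' > "$HOME_DIR/config-from-master"  # placeholder for the real kubeconfig

mkdir -p "$HOME_DIR/.kube"
cp "$HOME_DIR/config-from-master" "$HOME_DIR/.kube/config"
chmod 600 "$HOME_DIR/.kube/config"  # the kubeconfig holds credentials, so keep it private
```

With the real file in place under your actual `$HOME`, kubectl picks it up automatically, which is what makes the `kubectl cluster-info` check above work.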
![](Images/Day52_Kubernetes8.png)

This not only allows for connectivity and control from your windows machine but this then also allows us to do some port forwarding to access certain services from our windows machine

-If you are interested in how you would manage multiple clusters on your workstation then I have a more detailed walkthrough [here](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-6).
+If you are interested in how you would manage multiple clusters on your workstation then I have a more detailed walkthrough [here](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-6).

-I have added this list which are walkthrough blogs I have done around different Kubernetes clusters being deployed.
+I have added this list of walkthrough blogs I have done around deploying different Kubernetes clusters.

- [Kubernetes playground – How to choose your platform](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1)
- [Kubernetes playground – Setting up your cluster](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2)
@@ -155,26 +157,26 @@ I have added this list which are walkthrough blogs I have done around different
- [Getting started with CIVO Cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud)
- [Minikube - Kubernetes Demo Environment For Everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone)

-### What we will cover in the series on Kubernetes
+### What we will cover in the series on Kubernetes

-We have started covering some of these mentioned below but we are going to get more hands on tomorrow with our second cluster deployment then we can start deploying applications into our clusters.
+We have started covering some of the topics mentioned below, but we are going to get more hands-on tomorrow with our second cluster deployment, and then we can start deploying applications into our clusters.

-- Kubernetes Architecture
-- Kubectl Commands
-- Kubernetes YAML
-- Kubernetes Ingress
+- Kubernetes Architecture
+- Kubectl Commands
+- Kubernetes YAML
+- Kubernetes Ingress
- Kubernetes Services
-- Helm Package Manager
-- Persistant Storage
-- Stateful Apps
+- Helm Package Manager
+- Persistent Storage
+- Stateful Apps

-## Resources
+## Resources

-If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
+If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.

- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes?
Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)

-See you on [Day 53](day53.md)
+See you on [Day 53](day53.md)

diff --git a/Days/day53.md b/Days/day53.md
index 6131f9f72..dcfb9cd12 100644
--- a/Days/day53.md
+++ b/Days/day53.md
@@ -1,63 +1,64 @@
---
-title: '#90DaysOfDevOps - Rancher Overview - Hands On - Day 53'
+title: "#90DaysOfDevOps - Rancher Overview - Hands On - Day 53"
published: false
description: 90DaysOfDevOps - Rancher Overview - Hands On
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048742
---
+
## Rancher Overview - Hands On

-In this section we are going to take a look at Rancher, so far everything we have done has been in the cli and using kubectl but we have a few really good UIs and multi cluster management tools to give our operations teams good visibility into our cluster management.
+In this section we are going to take a look at Rancher. So far everything we have done has been in the CLI using kubectl, but we have a few really good UIs and multi-cluster management tools to give our operations teams good visibility into our cluster management.

Rancher is according to their [site](https://rancher.com/)

-*Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.*
+> Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.

-Rancher enables us to deploy production grade Kubernetes clusters from pretty much any location and then provides centralised authentication, access control and observability.
I mentioned in a previous section that there is almost an overwhelming choice when it comes to Kubernetes and where you should or could run them, looking at Rancher it really doesn't matter where they are.
+Rancher enables us to deploy production grade Kubernetes clusters from pretty much any location and then provides centralised authentication, access control and observability. I mentioned in a previous section that there is almost an overwhelming choice when it comes to Kubernetes and where you should or could run them; looking at Rancher, it really doesn't matter where they are.

### Deploy Rancher

-The first thing we need to do is deploy Rancher on our local workstation, there are few ways and locations you can choose to proceed with this step, for me I want to use my local workstation and run rancher as a docker container. By running the command below we will pull down a container image and then have access to the rancher UI.
+The first thing we need to do is deploy Rancher on our local workstation. There are a few ways and locations you can choose to proceed with this step; for me, I want to use my local workstation and run rancher as a docker container. By running the command below we will pull down a container image and then have access to the rancher UI.

Other rancher deployment methods are available [Rancher Quick-Start-Guide](https://rancher.com/docs/rancher/v2.6/en/quick-start-guide/deployment/)

`sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher`

-As you can see in our Docker Desktop we have a running rancher container.
+As you can see in our Docker Desktop we have a running rancher container.

![](Images/Day53_Kubernetes1.png)

### Accessing Rancher UI

-With the above container running we should be able to navigate to it via a web page. `https://localhost` will bring up a login page as per below.
+With the above container running we should be able to navigate to it via a web page.
`https://localhost` will bring up a login page as per below.

![](Images/Day53_Kubernetes2.png)

-Follow the instructions below to get the password required. Because I am using Windows I chose to use bash for Windows because of the grep command required.
+Follow the instructions below to get the password required. Because I am using Windows, I chose to use bash for Windows because of the grep command required.

![](Images/Day53_Kubernetes3.png)

-We can then take the above password and login, the next page is where we can define a new password.
+We can then take the above password and log in; the next page is where we can define a new password.

![](Images/Day53_Kubernetes4.png)

-Once we have done the above we will then be logged in and we can see our opening screen. As part of the Rancher deployment we will also see a local K3s cluster provisioned.
+Once we have done the above we will then be logged in and we can see our opening screen. As part of the Rancher deployment we will also see a local K3s cluster provisioned.

![](Images/Day53_Kubernetes5.png)

### A quick tour of rancher

-The first thing for us to look at is our locally deployed K3S cluster You can see below that we get a good visual on what is happening inside our cluster. This is the default deployment and we have not yet deployed anything to this cluster. You can see it is made up of 1 node and has 5 deployments. Then you can also see that there are some stats on pods, cores and memory.
+The first thing for us to look at is our locally deployed K3S cluster. You can see below that we get a good visual on what is happening inside our cluster. This is the default deployment and we have not yet deployed anything to this cluster. You can see it is made up of 1 node and has 5 deployments. Then you can also see that there are some stats on pods, cores and memory.
![](Images/Day53_Kubernetes6.png)

-On the left hand menu we also have an Apps & Marketplace tab, this allows us to choose applications we would like to run on our clusters, as mentioned previously Rancher gives us the capability of running or managing a number of different clusters. With the marketplace we can deploy our applications very easily.
+On the left hand menu we also have an Apps & Marketplace tab. This allows us to choose applications we would like to run on our clusters; as mentioned previously, Rancher gives us the capability of running or managing a number of different clusters. With the marketplace we can deploy our applications very easily.

![](Images/Day53_Kubernetes7.png)

-Another thing to mention is that if you did need to get access to any cluster being managed by Rancher in the top right you have the ability to open a kubectl shell to the selected cluster.
+Another thing to mention is that if you did need to get access to any cluster being managed by Rancher, in the top right you have the ability to open a kubectl shell to the selected cluster.

![](Images/Day53_Kubernetes8.png)

@@ -65,21 +66,21 @@ Another thing to mention is that if you did need to get access to any cluster be

Over the past two sessions we have created a minikube cluster locally and we have used Vagrant with VirtualBox to create a 3 node Kubernetes cluster, with Rancher we can also create clusters. In the [Rancher Folder](Kubernetes/Rancher) you will find additional vagrant files that will build out the same 3 nodes but without the steps for creating our Kubernetes cluster (we want Rancher to do this for us)

-We do however want docker installed and for the OS to be updated so you will still see the `common.sh` script being ran on each of our nodes. This will also install Kubeadm, Kubectl etc. But it will not run the Kubeadm commands to create and join our nodes into a cluster.
+We do however want docker installed and for the OS to be updated, so you will still see the `common.sh` script being run on each of our nodes. This will also install Kubeadm, Kubectl etc. But it will not run the Kubeadm commands to create and join our nodes into a cluster.

-We can navigate to our vagrant folder location and we can simply run `vagrant up` and this will begin that process of creating our 3 VMs in virtualbox.
+We can navigate to our vagrant folder location and simply run `vagrant up`, and this will begin the process of creating our 3 VMs in virtualbox.

![](Images/Day53_Kubernetes9.png)

-Now that we have our nodes or VMs in place and ready, we can then use Rancher to create our new Kubernetes cluster. The first screen to create your cluster gives you some options as to where your cluster is, i.e are you using the public cloud managed Kubernetes services, vSphere or something else.
+Now that we have our nodes or VMs in place and ready, we can then use Rancher to create our new Kubernetes cluster. The first screen to create your cluster gives you some options as to where your cluster is, i.e. are you using the public cloud managed Kubernetes services, vSphere or something else.

![](Images/Day53_Kubernetes10.png)

-We will be choosing "custom" as we are not using one of the integrated platforms. The opening page is where you define your cluster name (it says local below but you cannot use local, our cluster is called vagrant.) you can define Kubernetes versions here, network providers and some other configuration options to get your Kubernetes cluster up and running.
+We will be choosing "custom" as we are not using one of the integrated platforms. The opening page is where you define your cluster name (it says local below but you cannot use local; our cluster is called vagrant). You can define Kubernetes versions here, network providers and some other configuration options to get your Kubernetes cluster up and running.
![](Images/Day53_Kubernetes11.png)

-The next page is going to give you the registration code that needs to be ran on each of your nodes with the appropriate services to be enabled. etcd, controlplane and worker. For our master node we want etcd and controlplane so the command can be seen below.
+The next page is going to give you the registration code that needs to be run on each of your nodes with the appropriate services to be enabled: etcd, controlplane and worker. For our master node we want etcd and controlplane, so the command can be seen below.

![](Images/Day53_Kubernetes12.png)

@@ -87,11 +88,11 @@ The next page is going to give you the registration code that needs to be ran on

sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.6.3 --server https://10.0.0.1 --token mpq8cbjjwrj88z4xmf7blqxcfmwdsmq92bmwjpphdkklfckk5hfwc2 --ca-checksum a81944423cbfeeb92be0784edebba1af799735ebc30ba8cbe5cc5f996094f30b --etcd --controlplane
```

-If networking is configured correctly then you should pretty quickly see the following in your rancher dashboard, indicating that the first master node is now being registered and the cluster is being created.
+If networking is configured correctly then you should pretty quickly see the following in your rancher dashboard, indicating that the first master node is now being registered and the cluster is being created.

![](Images/Day53_Kubernetes13.png)

-We can then repeat the registration process for each of the worker nodes with the following command and after some time you will have your cluster up and running with the ability to leverage the marketplace to deploy your applications.
+We can then repeat the registration process for each of the worker nodes with the following command, and after some time you will have your cluster up and running with the ability to leverage the marketplace to deploy your applications.
```
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.6.3 --server https://10.0.0.1 --token mpq8cbjjwrj88z4xmf7blqxcfmwdsmq92bmwjpphdkklfckk5hfwc2 --ca-checksum a81944423cbfeeb92be0784edebba1af799735ebc30ba8cbe5cc5f996094f30b --worker
@@ -99,30 +100,30 @@ sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kube

![](Images/Day53_Kubernetes14.png)

-Over the last 3 sessions we have used a few different ways to get up and running with a Kubernetes cluster, over the remaining days we are going to look at the application side of the platform arguably the most important. We will look into services and being able to provision and use our service in Kubernetes.
+Over the last 3 sessions we have used a few different ways to get up and running with a Kubernetes cluster. Over the remaining days we are going to look at the application side of the platform, arguably the most important part. We will look into services and being able to provision and use our service in Kubernetes.

-I have been told since that the requirements around bootstrapping rancher nodes requires those VMs to have 4GB ram or they will crash-loop, I have since updated as our worker nodes had 2GB.
+I have since been told that bootstrapping rancher nodes requires those VMs to have 4GB of RAM or they will crash-loop; I have now updated this, as our worker nodes had 2GB.

-### What we will cover in the series on Kubernetes
+### What we will cover in the series on Kubernetes

-We have started covering some of these mentioned below but we are going to get more hands on tomorrow with our second cluster deployment then we can start deploying applications into our clusters.
+We have started covering some of the topics mentioned below, but we are going to get more hands-on tomorrow with our second cluster deployment, and then we can start deploying applications into our clusters.
-- Kubernetes Architecture
-- Kubectl Commands
-- Kubernetes YAML
-- Kubernetes Ingress
+- Kubernetes Architecture
+- Kubectl Commands
+- Kubernetes YAML
+- Kubernetes Ingress
- Kubernetes Services
-- Helm Package Manager
-- Persistant Storage
-- Stateful Apps
+- Helm Package Manager
+- Persistent Storage
+- Stateful Apps

-## Resources
+## Resources

-If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them.
+If you have FREE resources that you have used then please feel free to add them here via a PR to the repository and I will be happy to include them.

- [Kubernetes Documentation](https://kubernetes.io/docs/home/)
- [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do)
- [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4)
- [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8)

-See you on [Day 54](day54.md)
+See you on [Day 54](day54.md)

diff --git a/Days/day54.md b/Days/day54.md
index d57e82c9f..96730a188 100644
--- a/Days/day54.md
+++ b/Days/day54.md
@@ -1,5 +1,5 @@
---
-title: '#90DaysOfDevOps - Kubernetes Application Deployment - Day 54'
+title: "#90DaysOfDevOps - Kubernetes Application Deployment - Day 54"
published: false
description: 90DaysOfDevOps - Kubernetes Application Deployment
tags: "devops, 90daysofdevops, learning"
@@ -7,31 +7,32 @@ cover_image: null
canonical_url: null
id: 1048764
---
-## Kubernetes Application Deployment
-Now we finally get to actually deploying some applications into our clusters, some would say this is the reason Kubernetes exists, for Application delivery.
+## Kubernetes Application Deployment
+
+Now we finally get to actually deploying some applications into our clusters; some would say this is the reason Kubernetes exists, for Application delivery.

The idea here is that we can take our container images and now deploy these as pods into our Kubernetes cluster to take advantage of Kubernetes as a container orchestrator.

### Deploying Apps into Kubernetes

-There are several ways in which we can deploy our applications into our Kubernetes cluster, we will cover two of the most common approaches which will be YAML files and Helm charts.
+There are several ways in which we can deploy our applications into our Kubernetes cluster; we will cover two of the most common approaches, which are YAML files and Helm charts.

-We will be using our minikube cluster for these application deployments. We will be walking through some of the previously mentioned components or building blocks of Kubernetes.
+We will be using our minikube cluster for these application deployments. We will be walking through some of the previously mentioned components or building blocks of Kubernetes.

-All through this section and the Container section we have discussed about images and the benefits of Kubernetes and how we can handle scale quite easily on this platform.
+All through this section and the Container section we have discussed images, the benefits of Kubernetes, and how we can handle scale quite easily on this platform.

-In this first step we are simply going to create a stateless application within our minikube cluster. We will be using the defacto standard stateless application in our first demonostration `nginx` we will configure a Deployment, which will provide us with our pods and then we will also create a service which will allow us to navigate to the simple web server hosted by the nginx pod. All of this will be contained in a namespace.
+In this first step we are simply going to create a stateless application within our minikube cluster. We will be using the de facto standard stateless application for our first demonstration, `nginx`. We will configure a Deployment, which will provide us with our pods, and then we will also create a service which will allow us to navigate to the simple web server hosted by the nginx pod. All of this will be contained in a namespace.

![](Images/Day54_Kubernetes1.png)

### Creating the YAML

-In the first demo we want to define everything we do with YAML, we could have a whole section on YAML but I am going to skim over this and leave some resources at the end that will cover YAML in more detail.
+In the first demo we want to define everything we do with YAML; we could have a whole section on YAML, but I am going to skim over this and leave some resources at the end that will cover YAML in more detail.

We could create the following as one YAML file or we could break this down for each aspect of our application, i.e this could be separate files for namespace, deployment and service creation but in this file below we separate these by using `---` in one file. You can find this file located [here](Kubernetes) (File name:- nginx-stateless-demo.yaml)

-```
+```yaml
apiVersion: v1
kind: Namespace
metadata:
@@ -74,7 +75,8 @@ spec:
port: 80
targetPort: 80
```
-### Checking our cluster
+
+### Checking our cluster

Before we deploy anything we should just make sure that we have no existing namespaces called `nginx` we can do this by running the `kubectl get namespace` command and as you can see below we do not have a namespace called `nginx`

@@ -82,13 +84,13 @@ Before we deploy anything we should just make sure that we have no existing name

### Time to deploy our App

-Now we are ready to deploy our application to our minikube cluster, this same process will work on any other Kubernetes cluster.
+Now we are ready to deploy our application to our minikube cluster; this same process will work on any other Kubernetes cluster.

-We need to navigate to our yaml file location and then we can run `kubectl create -f nginx-stateless-demo.yaml` which you then see that 3 objects have been created, we have a namespace, deployment and service.
+We need to navigate to our yaml file location and then we can run `kubectl create -f nginx-stateless-demo.yaml`, after which you will see that 3 objects have been created: a namespace, a deployment and a service.

![](Images/Day54_Kubernetes3.png)

-Let's run the command again to see our available namespaces in our cluster `kubectl get namespace` and you can now see that we have our new namespace.
+Let's run the command again to see our available namespaces in our cluster, `kubectl get namespace`, and you can now see that we have our new namespace.

![](Images/Day54_Kubernetes5.png)

@@ -96,70 +98,69 @@ If we then check our namespace for pods using `kubectl get pods -n nginx` you wi

![](Images/Day54_Kubernetes4.png)

-We can also check our service is created by running `kubectl get service -n nginx`
+We can also check our service is created by running `kubectl get service -n nginx`

![](Images/Day54_Kubernetes6.png)

-Finally we can then go and check our deployment, the deployment is where and how we keep our desired configuration.
+Finally we can then go and check our deployment; the deployment is where and how we keep our desired configuration.

![](Images/Day54_Kubernetes7.png)

-The above takes a few commands that are worth knowing but you can also use `kubectl get all -n nginx` to see everything we deployed with that one YAML file.
+The above takes a few commands that are worth knowing, but you can also use `kubectl get all -n nginx` to see everything we deployed with that one YAML file.
![](Images/Day54_Kubernetes8.png)

-You will notice in the above that we also have a replicaset, in our deployment we define how many replicas of our image we would like to deploy. This was set to 1 initially, but if we wanted to quickly scale our application then we can do this several ways.
+You will notice in the above that we also have a replicaset; in our deployment we define how many replicas of our image we would like to deploy. This was set to 1 initially, but if we wanted to quickly scale our application then we can do this in several ways.

-We can edit our file using `kubectl edit deployment nginx-deployment -n nginx` which will open a text editor within your terminal and enable you to modify you deployment.
+We can edit our file using `kubectl edit deployment nginx-deployment -n nginx`, which will open a text editor within your terminal and enable you to modify your deployment.

![](Images/Day54_Kubernetes9.png)

-Upon saving the above in your text editor within the terminal if there was no issues and the correct formatting was used then you should see additional deployed in your namespace.
+Upon saving the above in your text editor within the terminal, if there were no issues and the correct formatting was used, then you should see additional pods deployed in your namespace.

![](Images/Day54_Kubernetes10.png)

-We can also make a change to the number of replicas using kubectl and the `kubectl scale deployment nginx-deployment --replicas=10 -n nginx`
+We can also make a change to the number of replicas using kubectl with `kubectl scale deployment nginx-deployment --replicas=10 -n nginx`

![](Images/Day54_Kubernetes11.png)

-We can equally use this method to scale our application down back to 1 again if we wish using either method. I used the edit option but you can also use the scale command above.
+We can equally use this method to scale our application back down to 1 again if we wish using either method.
I used the edit option but you can also use the scale command above.

![](Images/Day54_Kubernetes12.png)

-Hopefully here you can see the use case not only are things super fast to spin up and down but we have the ability to quickly scale up and down our applications. If this was a web server we could scale up during busy times and down when load is quiet.
+Hopefully here you can see the use case: not only are things super fast to spin up and down, but we also have the ability to quickly scale our applications up and down. If this was a web server we could scale up during busy times and down when load is quiet.

+### Exposing our app

-### Exposing our app
+But how do we access our web server?

-But how do we access our web server?
+If you look above at our service you will see there is no External IP available, so we cannot just open a web browser and expect this to be there magically. For access we have a few options.

-If you look above at our service you will see there is no External IP available so we cannot just open a web browser and expect this to be there magically. For access we have a few options.
+**ClusterIP** - The IP you do see is a ClusterIP; this is on an internal network on the cluster. Only things within the cluster can reach this IP.

-**ClusterIP** - The IP you do see is a ClusterIP this is on an internal network on the cluster. Only things within the cluster can reach this IP.
+**NodePort** - Exposes the service on the same port of each of the selected nodes in the cluster using NAT.

-**NodePort** - Exposes the service on the same port of each of the selected nodes in the cluster using NAT.
+**LoadBalancer** - Creates an external load balancer in the current cloud. We are using minikube, but if you have built your own Kubernetes cluster, i.e. what we did in VirtualBox, you would need to deploy a LoadBalancer such as metallb into your cluster to provide this functionality.
-**LoadBalancer** - Creates an external load balancer in the current cloud, we are using minikube but also if you have built your own Kubernetes cluster i.e what we did in VirtualBox you would need to deploy a LoadBalancer such as metallb into your cluster to provide this functionality.
+**Port-Forward** - We also have the ability to Port Forward, which allows you to access and interact with internal Kubernetes cluster processes from your localhost. Really this option is only for testing and fault finding.

-**Port-Forward** - We also have the ability to Port Forward, which allows you to access and interact with internal Kubernetes cluster processes from your localhost. Really this option is only for testing and fault finding.
+We now have a few options to choose from. Minikube has some limitations, or differences I should say, compared to a full blown Kubernetes cluster.

-We now have a few options to choose from, Minikube has some limitations or differences I should say to a full blown Kubernetes cluster.
-
-We could simply run the following command to port forward our access using our local workstation.
+We could simply run the following command to port forward our access using our local workstation.

`kubectl port-forward deployment/nginx-deployment -n nginx 8090:80`

![](Images/Day54_Kubernetes13.png)

-note that when you run the above command this terminal is now unusable as this is acting as your port forward to your local machine and port.
+Note that when you run the above command this terminal is now unusable, as it is acting as your port forward to your local machine and port.

![](Images/Day54_Kubernetes14.png)

-We are now going to run through specifically with Minikube how we can expose our application. We can also use minikube to create a URL to connect to a service [More details](https://minikube.sigs.k8s.io/docs/commands/service/)
+We are now going to run through specifically with Minikube how we can expose our application.
We can also use minikube to create a URL to connect to a service [More details](https://minikube.sigs.k8s.io/docs/commands/service/) First of all we will delete our service using `kubectl delete service nginx-service -n nginx` -Next we are going to create a new service using `kubectl expose deployment nginx-deployment --name nginx-service --namespace nginx --port=80 --type=NodePort` notice here we are going to use the expose and change the type to NodePort. +Next we are going to create a new service using `kubectl expose deployment nginx-deployment --name nginx-service --namespace nginx --port=80 --type=NodePort` notice here we are going to use the expose and change the type to NodePort. ![](Images/Day54_Kubernetes15.png) @@ -171,7 +172,7 @@ Open a browser or control and click on the link in your terminal. ![](Images/Day54_Kubernetes17.png) -### Helm +### Helm Helm is another way in which we can deploy our applications. Known as "The package manager for Kubernetes" You can find out more [here](https://helm.sh/) @@ -183,7 +184,7 @@ It is super simple to get Helm up and running or installed. Simply. You can find Or you can use an installer script, the benefit here is that the latest version of the helm will be downloaded and installed. -``` +```Shell curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 chmod 700 get_helm.sh @@ -193,30 +194,30 @@ chmod 700 get_helm.sh Finally, there is also the option to use a package manager for the application manager, homebrew for mac, chocolatey for windows, apt with Ubuntu/Debian, snap and pkg also. -Helm so far seems to be the go-to way to get different test applications downloaded and installed in your cluster. +Helm so far seems to be the go-to way to get different test applications downloaded and installed in your cluster. -A good resource to link here would be [ArtifactHUB](https://artifacthub.io/) which is a resource to find, install and publish Kubernetes packages. 
I will also give a shout out to [KubeApps](https://kubeapps.com/) which is a UI to display helm charts. +A good resource to link here would be [ArtifactHUB](https://artifacthub.io/) which is a resource to find, install and publish Kubernetes packages. I will also give a shout out to [KubeApps](https://kubeapps.com/) which is a UI to display helm charts. -### What we will cover in the series on Kubernetes +### What we will cover in the series on Kubernetes -We have started covering some of these mentioned below but we are going to get more hands on tomorrow with our second cluster deployment then we can start deploying applications into our clusters. +We have started covering some of these mentioned below but we are going to get more hands-on tomorrow with our second cluster deployment, then we can start deploying applications into our clusters. -- Kubernetes Architecture -- Kubectl Commands -- Kubernetes YAML -- Kubernetes Ingress +- Kubernetes Architecture +- Kubectl Commands +- Kubernetes YAML +- Kubernetes Ingress - Kubernetes Services -- Helm Package Manager -- Persistant Storage -- Stateful Apps +- Helm Package Manager +- Persistent Storage +- Stateful Apps -## Resources +## Resources -If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them. +If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them. - [Kubernetes Documentation](https://kubernetes.io/docs/home/) - [TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours]](https://www.youtube.com/watch?v=X48VuDVv0do) - [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4) - [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? 
Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8) -See you on [Day 55](day55.md) +See you on [Day 55](day55.md) diff --git a/Days/day55.md b/Days/day55.md index cfdf7d0eb..1eb5cf156 100644 --- a/Days/day55.md +++ b/Days/day55.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - State and Ingress in Kubernetes - Day 55' +title: "#90DaysOfDevOps - State and Ingress in Kubernetes - Day 55" published: false description: 90DaysOfDevOps - State and Ingress in Kubernetes tags: "devops, 90daysofdevops, learning" @@ -7,67 +7,69 @@ cover_image: null canonical_url: null id: 1048779 --- + ## State and Ingress in Kubernetes -In this closing section of Kubernetes, we are going to take a look at State and ingress. -Everything we have said so far is about stateless, stateless is really where our applications do not care which network it is using and does not need any permanent storage. Whereas stateful apps, databases for example for such an application to function correctly, you’ll need to ensure that pods can reach each other through a unique identity that does not change (hostnames, IPs...etc.). Examples of stateful applications include MySQL clusters, Redis, Kafka, MongoDB and others. Basically though any application that stores data. +In this closing section of Kubernetes, we are going to take a look at State and ingress. + +Everything we have said so far is about stateless, stateless is really where our applications do not care which network it is using and does not need any permanent storage. Whereas stateful apps, databases for example for such an application to function correctly, you’ll need to ensure that pods can reach each other through a unique identity that does not change (hostnames, IPs...etc.). Examples of stateful applications include MySQL clusters, Redis, Kafka, MongoDB and others. Basically though any application that stores data. 
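The stable, unchanging identity described above is usually provided by a headless Service — one with `clusterIP: None`, exactly like the Elasticsearch service in the EFK manifest earlier — which gives each stateful pod a fixed DNS name instead of a single load-balanced IP. A minimal sketch for a hypothetical mongo database (name, label and port are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  # Renders the service headless: no single cluster IP,
  # each pod instead gets a stable DNS entry such as mongo-0.mongo
  clusterIP: None
  ports:
    - port: 27017
```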
-### Stateful Application +### Stateful Application StatefulSets represent a set of Pods with unique, persistent identities and stable hostnames that Kubernetes maintains regardless of where they are scheduled. The state information and other resilient data for any given StatefulSet Pod is maintained in persistent disk storage associated with the StatefulSet. -### Deployment vs StatefulSet +### Deployment vs StatefulSet -- Replicating stateful applications is more difficult. -- Replicating our pods in a deployment (Stateless Application) is identical and interchangable. -- Create pods in random order with random hashes -- One Service that load balances to any Pod. +- Replicating stateful applications is more difficult. +- Replicating our pods in a deployment (Stateless Application) is identical and interchangeable. +- Create pods in random order with random hashes +- One Service that load balances to any Pod. -When it comes to StatefulSets or Stateful Applications the above is more difficult. +When it comes to StatefulSets or Stateful Applications the above is more difficult. -- Cannot be created or deleted at the same time. -- Can't be randomly addressed. +- Cannot be created or deleted at the same time. +- Can't be randomly addressed. - replica Pods are not identical -Something you will see in our demonstration shortly is that each pod has its own identity. With a stateless Application you will see random names. For example `app-7469bbb6d7-9mhxd` where as a Stateful Application would be more aligned to `mongo-0` and then when scaled it will create a new pod called `mongo-1`. +Something you will see in our demonstration shortly is that each pod has its own identity. With a stateless Application you will see random names, for example `app-7469bbb6d7-9mhxd`, whereas a Stateful Application would be more aligned to `mongo-0` and then when scaled it will create a new pod called `mongo-1`. 
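The predictable `mongo-0` / `mongo-1` naming comes from the StatefulSet controller creating pods in order from an ordinal index. A hedged sketch of what such a manifest might look like (image, labels and service name are assumptions, not the demo's actual file):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo   # headless Service that provides the stable per-pod DNS names
  replicas: 2          # pods are created in order: mongo-0 first, then mongo-1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          ports:
            - containerPort: 27017
```

Scaling this to 3 replicas would add `mongo-2`; scaling down removes the highest ordinal first.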
-These pods are created from the same specification, but they are not interchangable. Each StatefulSet pod has a persistent identifier across any re-scheduling. This is necessary because when we require stateful workloads such as a database where we require writing and reading to a database, we cannot have two pods writing at the same time with no awareness as this will give us data inconsistency. We need to ensure that only one of our pods is writing to the database at any given time however we can have multiple pods reading that data. +These pods are created from the same specification, but they are not interchangeable. Each StatefulSet pod has a persistent identifier across any re-scheduling. This is necessary because with stateful workloads such as a database, where we need to write to and read from the database, we cannot have two pods writing at the same time with no awareness, as this will give us data inconsistency. We need to ensure that only one of our pods is writing to the database at any given time; however, we can have multiple pods reading that data. -Each pod in a StatefulSet would have access to its own persistent volume and replica copy of the database to read from, this is continuously updated from the master. Its also interesting to note that each pod will also store its pod state in this persistent volume, if then `mongo-0` dies then when a new one is provisioned it will take over the pod state stored in storage. +Each pod in a StatefulSet would have access to its own persistent volume and replica copy of the database to read from, this is continuously updated from the master. It's also interesting to note that each pod will also store its pod state in this persistent volume; if `mongo-0` dies, then when a new one is provisioned it will take over the pod state stored in storage. 
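The per-pod persistent volume described here is typically declared with `volumeClaimTemplates` on the StatefulSet: Kubernetes stamps out one PVC per replica (e.g. `data-mongo-0`) and re-attaches it if the pod is rescheduled. A sketch of the relevant fragment (name and size are illustrative):

```yaml
  # Fragment of a StatefulSet spec: one PVC is created per pod,
  # e.g. data-mongo-0 and data-mongo-1, and survives pod rescheduling
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```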
-TLDR; StatefulSets vs Deployments +TLDR; StatefulSets vs Deployments -- Predicatable pod name = `mongo-0` -- Fixed individual DNS name +- Predictable pod name = `mongo-0` +- Fixed individual DNS name - Pod Identity - Retain State, Retain Role -- Replicating stateful apps is complex - - There are lots of things you must do: - - Configure cloning and data synchronisation. - - Make remote shared storage available. - - Management & backup +- Replicating stateful apps is complex + - There are lots of things you must do: + - Configure cloning and data synchronisation. + - Make remote shared storage available. + - Management & backup ### Persistant Volumes | Claims | StorageClass -How to persist data in Kubernetes? +How to persist data in Kubernetes? -We mentioned above when we have a stateful application, we have to store the state somewhere and this is where the need for a volume comes in, out of the box Kubernetes does not provide persistance out of the box. +We mentioned above when we have a stateful application, we have to store the state somewhere and this is where the need for a volume comes in, out of the box Kubernetes does not provide persistence out of the box. -We require a storage layer that does not depend on the pod lifecycle. This storage should be available and accessible from all of our Kubernetes nodes. The storage should also be outside of the Kubernetes cluster to be able to survive even if the Kubernetes cluster crashes. +We require a storage layer that does not depend on the pod lifecycle. This storage should be available and accessible from all of our Kubernetes nodes. The storage should also be outside of the Kubernetes cluster to be able to survive even if the Kubernetes cluster crashes. -### Persistent Volume +### Persistent Volume - A cluster resource (like CPU and RAM) to store data. 
-- Created via a YAML file +- Created via a YAML file - Needs actual physical storage (NAS) - External integration to your Kubernetes cluster -- You can have different types of storage available in your storage. +- You can have different types of storage available in your storage. - PVs are not namespaced - Local storage is available but it would be specific to one node in the cluster - Database persistence should use remote storage (NAS) ### Persistent Volume Claim -A persistent volume alone above can be there and available but unless it is claimed by an application it is not being used. +A persistent volume alone above can be there and available but unless it is claimed by an application it is not being used. - Created via a YAML file - Persistent Volume Claim is used in pod configuration (volumes attribute) @@ -75,36 +77,36 @@ A persistent volume alone above can be there and available but unless it is clai - Volume is mounted into the pod - Pods can have multiple different volume types (ConfigMap, Secret, PVC) -Another way to think of PVs and PVCs is that +Another way to think of PVs and PVCs is that -PVs are created by the Kubernetes Admin +PVs are created by the Kubernetes Admin PVCs are created by the user or application developer -We also have two other types of volumes that we will not get into detail on but worth mentioning: +We also have two other types of volumes that we will not get into detail on but worth mentioning: -### ConfigMaps | Secrets -- Configuration file for your pod. -- Certificate file for your pod. +### ConfigMaps | Secrets -### StorageClass +- Configuration file for your pod. +- Certificate file for your pod. 
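The split above — PVs created by the Kubernetes admin, PVCs created by the user or application developer — can be sketched as a pair of manifests (names, path and size are illustrative only):

```yaml
# Created by the cluster admin; note there is no namespace - PVs are not namespaced
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data   # local storage: specific to one node, demo use only
---
# Created by the user/developer; referenced from the pod's volumes attribute
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```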
+ +### StorageClass - Created via a YAML file -- Provisions Persistent Volumes Dynamically when a PVC claims it -- Each storage backend has its own provisioner +- Provisions Persistent Volumes Dynamically when a PVC claims it +- Each storage backend has its own provisioner - Storage backend is defined in YAML (via provisioner attribute) -- Abstracts underlying storage provider +- Abstracts underlying storage provider - Define parameters for that storage - ### Walkthrough time -In the session yesterday we walked through creating a stateless application, here we want to do the same but we want to use our minikube cluster to deploy a stateful workload. +In the session yesterday we walked through creating a stateless application, here we want to do the same but we want to use our minikube cluster to deploy a stateful workload. -A recap on the minikube command we are using to have the capability and addons to use persistence is `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2` +A recap on the minikube command we are using to have the capability and addons to use persistence is `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p mc-demo --kubernetes-version=1.21.2` -This command uses the csi-hostpath-driver which is what gives us our storageclass, something I will show later. +This command uses the csi-hostpath-driver which is what gives us our storageclass, something I will show later. 
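For reference, the StorageClass that the csi-hostpath addon provides looks roughly like the sketch below — treat the details as an approximation rather than the addon's exact output. The key parts are the `provisioner` attribute naming the storage backend and the default-class annotation that the `kubectl patch` commands in the next step toggle:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-hostpath-sc
  annotations:
    # This is the annotation the kubectl patch commands flip to true/false
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: hostpath.csi.k8s.io   # each storage backend has its own provisioner
reclaimPolicy: Delete
volumeBindingMode: Immediate
```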
-The build out of the application looks like the below: +The build out of the application looks like the below: ![](Images/Day55_Kubernetes1.png) @@ -112,13 +114,13 @@ You can find the YAML configuration file for this application here [pacman-state ### StorageClass Configuration -There is one more step though that we should run before we start deploying our application and that is make sure that our storageclass (csi-hostpath-sc) is our default one. We can firstly check this by running the `kubectl get storageclass` command but out of the box the minikube cluster will be showing the standard storageclass as default so we have to change that with the following commands. +There is one more step though that we should run before we start deploying our application and that is make sure that our storageclass (csi-hostpath-sc) is our default one. We can firstly check this by running the `kubectl get storageclass` command but out of the box the minikube cluster will be showing the standard storageclass as default so we have to change that with the following commands. This first command will make our csi-hostpath-sc storageclass our default. `kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'` -This command will remove the default annotation from the standard StorageClass. +This command will remove the default annotation from the standard StorageClass. `kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'` @@ -128,39 +130,39 @@ We start with no pacman namespace in our cluster. `kubectl get namespace` ![](Images/Day55_Kubernetes3.png) -We will then deploy our YAML file. `kubectl create -f pacman-stateful-demo.yaml` you can see from this command we are creating a number of objects within our Kubernetes cluster. +We will then deploy our YAML file. 
`kubectl create -f pacman-stateful-demo.yaml` you can see from this command we are creating a number of objects within our Kubernetes cluster. ![](Images/Day55_Kubernetes4.png) -We now have our newly created namespace. +We now have our newly created namespace. ![](Images/Day55_Kubernetes5.png) -You can then see from the next image and command `kubectl get all -n pacman` that we have a number of things happening inside of our namespace. We have our pods running our NodeJS web front end, we have mongo running our backend database. There are services for both pacman and mongo to access those pods. We have a deployment for pacman and a statefulset for mongo. +You can then see from the next image and command `kubectl get all -n pacman` that we have a number of things happening inside of our namespace. We have our pods running our NodeJS web front end, we have mongo running our backend database. There are services for both pacman and mongo to access those pods. We have a deployment for pacman and a statefulset for mongo. ![](Images/Day55_Kubernetes6.png) -We also have our persistent volume and persistent volume claim by running `kubectl get pv` will give us our non namespaced persistent volumes and running `kubectl get pvc -n pacman` will give us our namespaced persistent volume claims. +We also have our persistent volume and persistent volume claim by running `kubectl get pv` will give us our non namespaced persistent volumes and running `kubectl get pvc -n pacman` will give us our namespaced persistent volume claims. ![](Images/Day55_Kubernetes7.png) ### Playing the game | I mean accessing our mission critical application -Because we are using Minikube as mentioned in the stateless application we have a few hurdles to get over when it comes to accessing our application, If however we had access to ingress or a load balancer within our cluster the service is set up to automatically get an IP from that to gain access externally. 
(you can see this above in the image of all components in the pacman namespace). +Because we are using Minikube as mentioned in the stateless application we have a few hurdles to get over when it comes to accessing our application, If however we had access to ingress or a load balancer within our cluster the service is set up to automatically get an IP from that to gain access externally. (you can see this above in the image of all components in the pacman namespace). -For this demo we are going to use the port forward method to access our application. By opening a new terminal and running the following `kubectl port-forward svc/pacman 9090:80 -n pacman` command, opening a browser we will now have access to our application. If you are running this in AWS or specific locations then this will also report on the cloud and zone as well as the host which equals your pod within Kubernetes, again you can look back and see this pod name in our screenshots above. +For this demo we are going to use the port forward method to access our application. By opening a new terminal and running the following `kubectl port-forward svc/pacman 9090:80 -n pacman` command, opening a browser we will now have access to our application. If you are running this in AWS or specific locations then this will also report on the cloud and zone as well as the host which equals your pod within Kubernetes, again you can look back and see this pod name in our screenshots above. ![](Images/Day55_Kubernetes8.png) -Now we can go and create a high score which will then be stored in our database. +Now we can go and create a high score which will then be stored in our database. ![](Images/Day55_Kubernetes9.png) -Ok, great we have a high score but what happens if we go and delete our `mongo-0` pod? by running `kubectl delete pod mongo-0 -n pacman` I can delete that and if you are still in the app you will see that high score not available at least for a few seconds. 
+Ok, great we have a high score but what happens if we go and delete our `mongo-0` pod? by running `kubectl delete pod mongo-0 -n pacman` I can delete that and if you are still in the app you will see that high score not available at least for a few seconds. ![](Images/Day55_Kubernetes10.png) -Now if I go back to my game I can create a new game and see my high scores. The only way you can truly believe me on this though is if you give it a try and share on social media your high scores! +Now if I go back to my game I can create a new game and see my high scores. The only way you can truly believe me on this though is if you give it a try and share on social media your high scores! ![](Images/Day55_Kubernetes11.png) @@ -168,28 +170,29 @@ With the deployment we can scale this up using the commands that we covered in t ![](Images/Day55_Kubernetes12.png) +### Ingress explained + +Before we wrap things up with Kubernetes I also wanted to touch on a huge aspect of Kubernetes and that is ingress. -### Ingress explained -Before we wrap things up with Kubernetes I also wanted to touch on a huge aspect of Kubernetes and that is ingress. +### What is ingress? -### What is ingress? +So far with our examples we have used port-forward or we have used specific commands within minikube to gain access to our applications but this in production is not going to work. We are going to want a better way of accessing our applications at scale with multiple users. -So far with our examples we have used port-forward or we have used specific commands within minikube to gain access to our applications but this in production is not going to work. We are going to want a better way of accessing our applications at scale with multiple users. +We also spoke about NodePort being an option but again this should be only for test purposes. -We also spoke about NodePort being an option but again this should be only for test purposes. 
+Ingress gives us a better way of exposing our applications, this allows us to define routing rules within our Kubernetes cluster. -Ingress gives us a better way of exposing our applications, this allows us to define routing rules within our Kubernetes cluster. +For ingress we would create a forward request to the internal service of our application. -For ingress we would create a forward request to the internal service of our application. +### When do you need ingress? -### When do you need ingress? -If you are using a cloud provider, a managed Kubernetes offering they most likely will have their own ingress option for your cluster or they provide you with their own load balancer option. You don't have to implement this yourself, one of the benefits of managed Kubernetes. +If you are using a cloud provider, a managed Kubernetes offering they most likely will have their own ingress option for your cluster or they provide you with their own load balancer option. You don't have to implement this yourself, one of the benefits of managed Kubernetes. -If you are running your own cluster then you will need to configure an entrypoint. +If you are running your own cluster then you will need to configure an entrypoint. -### Configure Ingress on Minikube +### Configure Ingress on Minikube -On my particular running cluster called mc-demo I can run the following command to get ingress enabled on my cluster. +On my particular running cluster called mc-demo I can run the following command to get ingress enabled on my cluster. 
`minikube --profile='mc-demo' addons enable ingress` @@ -201,25 +204,25 @@ If we check our namespaces now you will see that we have a new ingress-nginx nam Now we must create our ingress YAML configuration to hit our Pacman service I have added this file to the repository [pacman-ingress.yaml](Kubernetes) -We can then create this in our ingress namespace with `kubectl create -f pacman-ingress.yaml` +We can then create this in our ingress namespace with `kubectl create -f pacman-ingress.yaml` ![](Images/Day55_Kubernetes15.png) -Then if we run `kubectl get ingress -n pacman` +Then if we run `kubectl get ingress -n pacman` ![](Images/Day55_Kubernetes16.png) -I am then told because we are using minikube running on WSL2 in Windows we have to create the minikube tunnel using `minikube tunnel --profile=mc-demo` +I am then told because we are using minikube running on WSL2 in Windows we have to create the minikube tunnel using `minikube tunnel --profile=mc-demo` -But I am still not able to gain access to 192.168.49.2 and play my pacman game. +But I am still not able to gain access to 192.168.49.2 and play my pacman game. -If anyone has or can get this working on Windows and WSL I would appreciate the feedback. I will raise an issue on the repository for this and come back to it once I have time and a fix. +If anyone has or can get this working on Windows and WSL I would appreciate the feedback. I will raise an issue on the repository for this and come back to it once I have time and a fix. UPDATE: I feel like this blog helps identify maybe the cause of this not working with WSL [Configuring Ingress to run Minikube on WSL2 using Docker runtime](https://hellokube.dev/posts/configure-minikube-ingress-on-wsl2/) -## Resources +## Resources -If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them. 
+If you have FREE resources that you have used then please feel free to add them in here via a PR to the repository and I will be happy to include them. - [Kubernetes StatefulSet simply explained](https://www.youtube.com/watch?v=pPQKAR1pA9U) - [Kubernetes Volumes explained](https://www.youtube.com/watch?v=0swOh5C3OVM) @@ -229,8 +232,8 @@ If you have FREE resources that you have used then please feel free to add them - [TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners](https://www.youtube.com/watch?v=s_o8dwzRlu4) - [Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified!](https://www.youtube.com/watch?v=KVBON1lA9N8) -This wraps up our Kubernetes section, there is so much additional content we could cover on Kubernetes and 7 days gives us a foundational knowledge but there are people running through [100DaysOfKubernetes](https://100daysofkubernetes.io/overview.html) where you can get really into the weeds. +This wraps up our Kubernetes section, there is so much additional content we could cover on Kubernetes and 7 days gives us a foundational knowledge but there are people running through [100DaysOfKubernetes](https://100daysofkubernetes.io/overview.html) where you can get really into the weeds. -Next up we are going to be taking a look at Infrastructure as Code and the important role it plays from a DevOps perspective. +Next up we are going to be taking a look at Infrastructure as Code and the important role it plays from a DevOps perspective. 
-See you on [Day 56](day56.md) +See you on [Day 56](day56.md) diff --git a/Days/day56.md b/Days/day56.md index c4bad6cd9..edcae0749 100644 --- a/Days/day56.md +++ b/Days/day56.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: IaC - Day 56' +title: "#90DaysOfDevOps - The Big Picture: IaC - Day 56" published: false description: 90DaysOfDevOps - The Big Picture IaC tags: "devops, 90daysofdevops, learning" @@ -7,118 +7,125 @@ cover_image: null canonical_url: null id: 1048709 --- + ## The Big Picture: IaC -Humans make mistakes! Automation is the way to go! +Humans make mistakes! Automation is the way to go! -How do you build your systems today? +How do you build your systems today? -What would be your plan if you were to lose everything today, Physical machines, Virtual Machines, Cloud VMs, Cloud PaaS etc etc.? +What would be your plan if you were to lose everything today, Physical machines, Virtual Machines, Cloud VMs, Cloud PaaS etc etc.? -How long would it take you to replace everything? +How long would it take you to replace everything? -Infrastructure as code provides a solution to be able to do this whilst also being able to test this, we should not confuse this with backup and recovery but in terms of your infrastructure and environments, your platforms we should be able to spin them up and treat them as cattle vs pets. +Infrastructure as code provides a solution to be able to do this whilst also being able to test this, we should not confuse this with backup and recovery but in terms of your infrastructure and environments, your platforms we should be able to spin them up and treat them as cattle vs pets. -The TLDR; is that we can use code to rebuild our whole entire environment. +The TLDR; is that we can use code to rebuild our whole entire environment. -If we also remember from the start we said about DevOps in general is a way in which to break down barriers to deliver systems into production safely and rapidly. 
+If we also remember from the start we said about DevOps in general is a way in which to break down barriers to deliver systems into production safely and rapidly. -Infrastructure as code helps us deliver the systems, we have spoken a lot of processes and tools. IaC brings us more tools to be familiar with to enable this part of the process. +Infrastructure as code helps us deliver the systems, we have spoken a lot of processes and tools. IaC brings us more tools to be familiar with to enable this part of the process. -We are going to concentrate on Infrastructure as code in this section. You might also hear this mentioned as Infrastructure from code or configuration as code. I think the most well known term is likely Infrastructure as code. +We are going to concentrate on Infrastructure as code in this section. You might also hear this mentioned as Infrastructure from code or configuration as code. I think the most well known term is likely Infrastructure as code. -### Pets vs Cattle +### Pets vs Cattle -If we take a look at pre DevOps, if we had the requirement to build a new Application, we would need to prepare our servers manually for the most part. +If we take a look at pre DevOps, if we had the requirement to build a new Application, we would need to prepare our servers manually for the most part. - Deploy VMs | Physical Servers and install operating system -- Configure networking -- Create routing tables -- Install software and updates -- Configure software -- Install database +- Configure networking +- Create routing tables +- Install software and updates +- Configure software +- Install database This would be a manual process performed by Systems Administrators. The bigger the application the more resource and servers required the more manual effort it would take to bring up those systems. This would take a huge amount of human effort and time but also as a business you would have to pay for that resource to build out this environment. 
As I opened the section with "Humans make mistakes! Automation is the way to go!" -Ongoing from the above initial setup phase you then have maintenance of these servers. +Ongoing from the above initial setup phase you then have maintenance of these servers. -- Update versions -- Deploy new releases -- Data Management -- Recovery of Applications -- Add, Remove and Scale Servers +- Update versions +- Deploy new releases +- Data Management +- Recovery of Applications +- Add, Remove and Scale Servers - Network Configuration -Add the complexity of multiple test and dev environments. +Add the complexity of multiple test and dev environments. + +This is where Infrastructure as Code comes in, the above was very much a time where we would look after those servers as if they were pets, people even called them servers pet names or at least named them something because they were going to be around for a while, they were going to hopefully be part of the "family" for a while. -This is where Infrastructure as Code comes in, the above was very much a time where we would look after those servers as if they were pets, people even called them servers pet names or at least named them something because they were going to be around for a while, they were going to hopefully be part of the "family" for a while. +With Infrastructure as Code we have the ability to automate all these tasks end to end. Infrastructure as code is a concept and there are tools that carry out this automated provisioning of infrastructure, at this point if something bad happens to a server you throw it away and you spin up a new one. This process is automated and the server is exactly as defined in code. At this point we don't care what they are called they are there in the field serving their purpose until they are no longer in the field and we have another to replace it either because of a failure or because we updated part or all of our application. 
-With Infrastructure as Code we have the ability to automate all these tasks end to end. Infrastructure as code is a concept and there are tools that carry out this automated provisioning of infrastructure, at this point if something bad happens to a server you throw it away and you spin up a new one. This process is automated and the server is exactly as defined in code. At this point we don't care what they are called they are there in the field serving their purpose until they are no longer in the field and we have another to replace it either because of a failure or because we updated part or all of our application. +This can be used in almost all platforms, virtualisation, cloud based workloads and also cloud-native infrastructure such as Kubernetes and containers. -This can be used in almost all platforms, virtualisation, cloud based workloads and also cloud-native infrastructure such as Kubernetes and containers. +### Infrastructure Provisioning -### Infrastructure Provisioning -Not all IaC cover all of the below, You will find that the tool we are going to be using during this section only really covers the the first 2 areas of below; Terraform is that tool we will be covering and this allows us to start from nothing and define in code what our infrastructure should look like and then deploy that, it will also enable us to manage that infrastructure and also initially deploy an application but at that point it is going to lose track of the application which is where the next section comes in and something like Ansible as a configuration management tool might work better on that front. 
+Not all IaC tools cover everything below. You will find that the tool we are going to be using during this section only really covers the first two areas below; Terraform is the tool we will be covering, and it allows us to start from nothing and define in code what our infrastructure should look like and then deploy it. It will also enable us to manage that infrastructure and initially deploy an application, but at that point it is going to lose track of the application, which is where the next section comes in and where something like Ansible as a configuration management tool might work better.

-Without jumping ahead tools like chef, puppet and ansible are best suited to deal with the initial application setup and then to manage those applications and their configuration.
+Without jumping ahead, tools like chef, puppet and ansible are best suited to deal with the initial application setup and then to manage those applications and their configuration.

-Initial installation & configuration of software
+Initial installation & configuration of software

-- Spinning up new servers
-- Network configuration
-- Creating load balancers
+- Spinning up new servers
+- Network configuration
+- Creating load balancers
- Configuration on infrastructure level

-### Configuration of provisioned infrastructure
+### Configuration of provisioned infrastructure

-- Installing application on servers
-- Prepare the servers to deploy your application.
+- Installing application on servers
+- Prepare the servers to deploy your application.
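+
+As a brief, hedged sketch of where provisioning ends and configuration begins (the AMI id and names here are illustrative, not from the guide):
+
+```
+resource "aws_instance" "app_server" {
+  ami           = "ami-0abcdef1234567890" # hypothetical AMI id
+  instance_type = "t2.micro"
+
+  # Everything above is provisioning; the light-touch configuration below is
+  # the kind of work a configuration management tool handles better at scale.
+  user_data = <<-EOF
+              #!/bin/bash
+              yum update -y
+              yum install -y httpd
+              EOF
+}
+```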
-### Deployment of Application
+### Deployment of Application

-- Deploy and Manage Application
+- Deploy and Manage Application
- Maintain phase
-- Software updates
-- Reconfiguration
+- Software updates
+- Reconfiguration
+
+### Difference of IaC tools
+
+Declarative vs procedural

-### Difference of IaC tools
+Procedural

-Declarative vs procedural
+
+- Step by step instruction
+- Create a server > Add a server > Make this change

-Procedural
-- Step by step instruction
-- Create a server > Add a server > Make this change
+Declarative

-Declartive
-- declare end result
-- 2 Servers
+
+- Declare the end result
+- 2 Servers

Mutable (pets) vs Immutable (cattle)

-Mutable
+Mutable
+
- Change instead of replace
-- Generally long lived
+- Generally long lived

Immutable
+
- Replace instead of change
-- Possibly short lived
+- Possibly short lived
+
+This is really why we have lots of different options for Infrastructure as Code: there is no one tool to rule them all.

-This is really why we have lots of different options for Infrastructure as Code because there is no one tool to rule them all.
+We are going to be mostly using Terraform and getting hands-on, as this is the best way to start seeing the benefits of Infrastructure as Code in action. Getting hands-on is also the best way to pick up the skills, as you are going to be writing code.

-We are going to be mostly using terraform and getting hands on as this is the best way to start seeing the benefits of Infrastructure as Code when it is in action. Getting hands on is also the best way to pick up the skills as you are going to be writing code.
+Next up, we will start looking into Terraform with a 101 before we get hands-on using it.

-Next up we will start looking into Terraform with a 101 before we get some hands on get using.
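+
+As a hedged sketch of the declarative style (the AMI id and names are illustrative), the "2 Servers" end state might be written in Terraform as:
+
+```
+resource "aws_instance" "web" {
+  count         = 2                       # declare the end result: two servers
+  ami           = "ami-0abcdef1234567890" # hypothetical AMI id
+  instance_type = "t2.micro"
+}
+```
+
+Changing `count` to 3 later declares a new end state; the tool works out the steps, rather than us scripting an extra create step.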
+## Resources

-## Resources
-I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
+I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with your resources and I will be happy to review and add them to the list.

-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
-- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
+- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)
diff --git 
a/Days/day57.md b/Days/day57.md
index 1b82dbf30..fdd542098 100644
--- a/Days/day57.md
+++ b/Days/day57.md
@@ -1,50 +1,50 @@
---
-title: '#90DaysOfDevOps - An intro to Terraform - Day 57'
+title: "#90DaysOfDevOps - An intro to Terraform - Day 57"
published: false
description: 90DaysOfDevOps - An intro to Terraform
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048710
---

-## An intro to Terraform
-"Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently"
+## An intro to Terraform

-The above quote is from HashiCorp, HashiCorp is the company behind Terraform.
+"Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently"
+
+The above quote is from HashiCorp, the company behind Terraform.

"Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services. Terraform codifies cloud APIs into declarative configuration files."

-HashiCorp have a great resource in [HashiCorp Learn](https://learn.hashicorp.com/terraform?utm_source=terraform_io&utm_content=terraform_io_hero) which covers all of their products and gives some great walkthrough demos when you are trying to achieve something with Infrastructure as Code.
+HashiCorp have a great resource in [HashiCorp Learn](https://learn.hashicorp.com/terraform?utm_source=terraform_io&utm_content=terraform_io_hero) which covers all of their products and gives some great walkthrough demos when you are trying to achieve something with Infrastructure as Code.

-All cloud providers and on prem platforms generally give us access to management consoles which enables us to create our resources via a UI, generally these platforms also provide a CLI or API access to also create the same resources but with an API we have the ability to provision fast.
+All cloud providers and on-prem platforms generally give us access to management consoles which enable us to create our resources via a UI. Generally these platforms also provide CLI or API access to create the same resources, and with an API we have the ability to provision fast.

-Infrastructure as Code allows us to hook into those APIs to deploy our resources in a desired state.
+Infrastructure as Code allows us to hook into those APIs to deploy our resources in a desired state.

-Other tools but not exclusive or exhaustive below. If you have other tools then please share via a PR.
+Some other tools are listed below; the list is not exclusive or exhaustive. If you have other tools then please share via a PR.

-| Cloud Specific | Cloud Agnostic |
+| Cloud Specific                  | Cloud Agnostic |
| ------------------------------- | -------------- |
-| AWS CloudFormation | Terraform |
-| Azure Resource Manager | Pulumi |
-| Google Cloud Deployment Manager | |
+| AWS CloudFormation              | Terraform      |
+| Azure Resource Manager          | Pulumi         |
+| Google Cloud Deployment Manager |                |

-This is another reason why we are using Terraform, we want to be agnostic to the clouds and platforms that we wish to use for our demos but also in general.
+This is another reason why we are using Terraform: we want to be agnostic to the clouds and platforms that we wish to use for our demos, but also in general.

-## Terraform Overview
+## Terraform Overview

-Terraform is a provisioning focused tool, Terraform is a CLI that gives the capabilities of being able to provision complex infrastructure environments. With Terraform we can define complex infrastructure requirements that exist locally or remote (cloud) Terraform not only enables us to build things initially but also to maintain and update those resources for their lifetime.
+Terraform is a provisioning-focused tool; it is a CLI that gives us the capability to provision complex infrastructure environments. 
With Terraform we can define complex infrastructure requirements that exist locally or remotely (cloud). Terraform not only enables us to build things initially but also to maintain and update those resources for their lifetime.

We are going to cover the high level here but for more details and loads of resources you can head to [terraform.io](https://www.terraform.io/)

### Write

-Terraform allows us to create declaritive configuration files that will build our environments. The files are written using the HashiCorp Configuration Language (HCL) which allows for concise descriptions of resources using blocks, arguments, and expressions. We will of course be looking into these in detail in deploying VMs, Containers and within Kubernetes.
-
+Terraform allows us to create declarative configuration files that will build our environments. The files are written using the HashiCorp Configuration Language (HCL) which allows for concise descriptions of resources using blocks, arguments, and expressions. We will of course be looking into these in detail in deploying VMs, Containers and within Kubernetes.

### Plan

-The ability to check that the above configuration files are going to deploy what we want to see using specific functions of the terraform cli to be able to test that plan before deploying anything or changing anything. Remember Terraform is a continued tool for your infrastructure if you would like to change aspect of your infrastructure you should do that via terraform so that it is captured all in code.
+This step checks that the above configuration files are going to deploy what we want to see, using specific functions of the terraform cli to test that plan before deploying or changing anything. Remember, Terraform manages your infrastructure on an ongoing basis; if you would like to change an aspect of your infrastructure you should do that via terraform so that it is all captured in code.
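+
+To make those HCL terms concrete, here is a minimal annotated fragment (the values are placeholders, not from the guide):
+
+```
+# A "resource" block with two labels: the resource type and a local name.
+resource "aws_instance" "example" {
+  instance_type = "t2.micro"         # argument = value
+  tags = {
+    Name = "demo-${var.environment}" # an expression interpolating a variable
+  }
+}
+```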
### Apply @@ -52,48 +52,45 @@ Obviously once you are happy you can go ahead and apply this configuration to th Another thing to mention is that there are also modules available, and this is similar to container images in that these modules have been created and shared in public so you do not have to create it again and again just re use the best practice of deploying a specific infrastructure resource the same way everywhere. You can find the modules available [here](https://registry.terraform.io/browse/modules) - -The Terraform workflow looks like this: (*taken from the terraform site*) - +The Terraform workflow looks like this: (_taken from the terraform site_) ![](Images/Day57_IAC3.png) ### Terraform vs Vagrant -During this challenge we have used Vagrant which happens to be another Hashicorp open source tool which concentrates on the development environments. +During this challenge we have used Vagrant which happens to be another Hashicorp open source tool which concentrates on the development environments. - Vagrant is a tool focused for managing development environments -- Terraform is a tool for building infrastructure. +- Terraform is a tool for building infrastructure. A great comparison of the two tools can be found here on the official [Hashicorp site](https://www.vagrantup.com/intro/vs/terraform) +## Terraform Installation -## Terraform Installation +There is really not much to the installation of Terraform. -There is really not much to the installation of Terraform. - -Terraform is cross platform and you can see below on my Linux machine we have several options to download and install the CLI +Terraform is cross platform and you can see below on my Linux machine we have several options to download and install the CLI ![](Images/Day57_IAC2.png) - Using `arkade` to install Terraform, arkade is a handy little tool for getting your required tools, apps and clis onto your system. 
A simple `arkade get terraform` will allow for an update of terraform if available or this same command will also install the Terraform CLI

![](Images/Day57_IAC1.png)

-We are going to get into more around HCL and then also start using Terraform to create some infrastructure resources in various different platforms.
+We are going to get into more detail around HCL and then also start using Terraform to create some infrastructure resources on various different platforms.
+
+## Resources

-## Resources
-I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
+I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with your resources and I will be happy to review and add them to the list.

-- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day58.md b/Days/day58.md index 4ba3af8ec..7798db19c 100644 --- a/Days/day58.md +++ b/Days/day58.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - HashiCorp Configuration Language (HCL) - Day 58' +title: "#90DaysOfDevOps - HashiCorp Configuration Language (HCL) - Day 58" published: false description: 90DaysOfDevOps - HashiCorp Configuration Language (HCL) tags: "devops, 90daysofdevops, learning" @@ -7,22 +7,22 @@ cover_image: null canonical_url: null id: 1048741 --- + ## HashiCorp Configuration Language (HCL) Before we start making stuff with Terraform we have to dive a little into HashiCorp Configuration Language (HCL). 
So far during our challenge we have looked at a few different scripting and programming languages and here is another one. We touched on the [Go programming language](day07.md) then [bash scripts](day19.md) we even touched on a little python when it came to [network automation](day27.md)

Now we must cover HashiCorp Configuration Language (HCL) if this is the first time you are seeing the language it might look a little daunting but its quite simple and very powerful.

-As we move through this section, we are going to be using examples that we can run locally on our system regardless of what OS you are using, we will be using virtualbox, albeit not the infrastructure platform you would usually be using with Terraform. However running this locally, it is free and will allow us to achieve what we are looking for in this post. We could also extend this posts concepts to docker or Kubernetes as well.
+As we move through this section, we are going to be using examples that we can run locally on our system regardless of what OS you are using; we will be using virtualbox, albeit not the infrastructure platform you would usually be using with Terraform. However, running this locally is free and will allow us to achieve what we are looking for in this post. We could also extend this post's concepts to docker or Kubernetes as well.

-In general though, you would or should be using Terraform to deploy your infrastructure in the public cloud (AWS, Google, Microsoft Azure) but then also in your virtualisation environments such as (VMware, Microsoft Hyper-V, Nutanix AHV). In the public cloud Terraform allows for us to do a lot more than just Virtual Machine automated deployment, we can create all the required infrastructure such as PaaS workloads and all of the networking required assets such as VPCs and Security Groups.
+In general though, you would or should be using Terraform to deploy your infrastructure in the public cloud (AWS, Google, Microsoft Azure) but then also in your virtualisation environments (VMware, Microsoft Hyper-V, Nutanix AHV). In the public cloud Terraform allows us to do a lot more than just automated Virtual Machine deployment; we can create all the required infrastructure such as PaaS workloads and all of the required networking assets such as VPCs and Security Groups.

-There are two important aspects to Terraform, we have the code which we are going to get into in this post and then we also have the state. Both of these together could be called the Terraform core. We then have the environment we wish to speak to and deploy into, which is executed using Terraform providers, briefly mentioned in the last session, but we have an AWS provider, we have an Azure providers etc. There are hundreds.
+There are two important aspects to Terraform: the code, which we are going to get into in this post, and the state. Both of these together could be called the Terraform core. We then have the environment we wish to speak to and deploy into, which is reached using Terraform providers, briefly mentioned in the last session; we have an AWS provider, an Azure provider, etc. There are hundreds.

### Basic Terraform Usage

-Let's take a look at a Terraform `.tf` file to see how they are made up. The first example we will walk through will in fact be code to deploy resources to AWS, this would then also require the AWS CLI to be installed on your system and configured for your account.
-
+Let's take a look at a Terraform `.tf` file to see how they are made up. The first example we will walk through will in fact be code to deploy resources to AWS; this would then also require the AWS CLI to be installed on your system and configured for your account.
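+
+One more HCL feature worth a quick, hedged sketch before we look at providers is input variables, which let us avoid hard-coding values such as the region (the names and default below are illustrative):
+
+```
+variable "region" {
+  type        = string
+  description = "AWS region to deploy into"
+  default     = "eu-west-1"
+}
+
+provider "aws" {
+  region = var.region
+}
+```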
### Providers

@@ -38,7 +38,8 @@ terraform {
}
}
```
-We might also add in a region as well here to determine which AWS region we would like to provision to we can do this by adding the following:
+
+We might also add a region here to determine which AWS region we would like to provision to. We can do this by adding the following:

```
provider "aws" {
@@ -46,15 +47,14 @@ provider "aws" {
}
```

-### Resources
+### Terraform Resources

- Another important component of a terraform config file which describes one or more infrastructure objects like EC2, Load Balancer, VPC, etc.
-- A resource block declares a resource of a given type ("aws_instance") with a given local name ("90daysofdevops").
+- A resource block declares a resource of a given type ("aws_instance") with a given local name ("90daysofdevops").
- The resource type and name together serve as an identifier for a given resource.
-
```
resource "aws_instance" "90daysofdevops" {
ami = data.aws_ami.instance_id.id
@@ -78,9 +78,9 @@ resource "aws_instance" "90daysofdevops" {
}
}
```

-You can see from the above we are also running a `yum` update and installing `httpd` into our ec2 instance.
+You can see from the above that we are also running a `yum` update and installing `httpd` into our ec2 instance.

-If we now look at the complete main.tf file it might look something like this.
+If we now look at the complete main.tf file, it might look something like this.

```
terraform {
@@ -123,9 +123,10 @@ resource "aws_instance" "90daysofdevops" {
}
}
```
-The above code will go and deploy a very simple web server as an ec2 instance in AWS, the great thing about this and any other configuration like this is that we can repeat this and we will get the same output every single time. Other than the chance that I have messed up the code there is no human interaction with the above.
-We can take a look at a super simple example, one that you will likely never use but let's humour it anyway. 
Like with all good scripting and programming language we should start with a hello-world scenario.
+The above code will go and deploy a very simple web server as an ec2 instance in AWS. The great thing about this, and any other configuration like it, is that we can repeat it and we will get the same output every single time. Other than the chance that I have messed up the code, there is no human interaction with the above.
+
+We can take a look at a super simple example, one that you will likely never use, but let's humour it anyway. Like with all good scripting and programming languages, we should start with a hello-world scenario.

```
terraform {
@@ -140,66 +141,67 @@ output "hello_world" {
value = "Hello, 90DaysOfDevOps from Terraform"
}
```
-You will find this file in the IAC folder under hello-world, but out of the box this is not going to simply work there are some commans we need to run in order to use our terraform code.
-In your terminal navigate to your folder where the main.tf has been created, this could be from this repository or you could create a new one using the code above.
+You will find this file in the IAC folder under hello-world, but out of the box this is not going to simply work; there are some commands we need to run in order to use our terraform code.
+
+In your terminal, navigate to the folder where the main.tf has been created; this could be from this repository or you could create a new one using the code above.

-When in that folder we are going to run `terraform init`
+When in that folder we are going to run `terraform init`

-We need to perform this on any directory where we have or before we run any terraform code.
+We need to perform this in any directory holding terraform code, before we run that code. 
Initialising a configuration directory downloads and installs the providers defined in the configuration; in this case we have no providers, but in the example above this would download the aws provider for this configuration.

![](Images/Day58_IAC1.png)

-The next command will be `terraform plan`
+The next command will be `terraform plan`

The `terraform plan` command creates an execution plan, which lets you preview the changes that Terraform plans to make to your infrastructure.

-You can simply see below that with our hello-world example we are going to see an output if this was an AWS ec2 instance we would see all the steps that we will be creating.
+You can simply see below that with our hello-world example we are going to see an output; if this was an AWS ec2 instance we would see all the steps involved in what will be created.

![](Images/Day58_IAC2.png)

-At this point we have initialised our repository and we have our providers downloaded where required, we have run a test walkthrough to make sure this is what we want to see so now we can run and deploy our code.
+At this point we have initialised our repository, we have our providers downloaded where required, and we have run a test walkthrough to make sure this is what we want to see, so now we can run and deploy our code.

-`terraform apply` allows us to do this there is a built in safety measure to this command and this will again give you a plan view on what is going to happen which warrants a response from you to say yes to continue.
+`terraform apply` allows us to do this. There is a built-in safety measure to this command: it will again give you a plan view of what is going to happen, which warrants a response from you to say yes to continue.

![](Images/Day58_IAC3.png)

-When we type in yes to the enter a value, and our code is deployed. Obviously not that exciting but you can see we have the output that we defined in our code.
+When we type yes at the "Enter a value" prompt, our code is deployed. 
Obviously not that exciting, but you can see we have the output that we defined in our code.

![](Images/Day58_IAC4.png)

-Now we have not deployed anything, we have not added, changed or destroyed anything but if we did then we would see that indicated also in the above. If however we had deployed something and we wanted to get rid of everything we deployed we can use the `terraform destroy` command. Again this has that safety where you have to type yes although you can use `--auto-approve` on the end of your `apply` and `destroy` commands to bypass that manual intervention. But I would advise only using this shortcut when in learning and testing as everything will dissappear sometimes faster than it was built.
+Here we have not deployed, added, changed or destroyed anything, but if we had then we would see that indicated in the above as well. If, however, we had deployed something and we wanted to get rid of everything we deployed, we can use the `terraform destroy` command. Again this has that safety where you have to type yes, although you can use `--auto-approve` on the end of your `apply` and `destroy` commands to bypass that manual intervention. But I would advise only using this shortcut when learning and testing, as everything will disappear, sometimes faster than it was built.

-From this there are really 4 commands we have covered from the Terraform CLI.
+From this there are really 4 commands we have covered from the Terraform CLI.

-- `terraform init` = get your project folder ready with providers
-- `terraform plan` = show what is going to be created, changed during the next command based on our code.
-- `terraform apply` = will go and deploy the resources defined in our code.
+- `terraform init` = get your project folder ready with providers
+- `terraform plan` = show what is going to be created or changed during the next command, based on our code.
+- `terraform apply` = will go and deploy the resources defined in our code. 
- `terraform destroy` = will destroy the resources we have created in our project

-We also covered two important aspects of our code files.
+We also covered two important aspects of our code files.

-- providers = how does terraform speak to the end platform via APIs
+- providers = how terraform speaks to the end platform via APIs
- resources = what it is we want to deploy with code

-Another thing to note when running `terraform init` take a look at the tree on the folder before and after to see what happens and where we store providers and modules.
+Another thing to note: when running `terraform init`, take a look at the tree of the folder before and after to see what happens and where we store providers and modules.

-### Terraform state
+### Terraform state

-We also need to be aware of the state file that is created also inside our directory and for this hello world example our state file is simple. This is a JSON file which is the representation of the world according to Terraform. The state will happily show off your sensitive data so be careful and as a best practice put your `.tfstate` files in your `.gitignore` folder before uploading to GitHub.
+We also need to be aware of the state file that is also created inside our directory; for this hello-world example our state file is simple. This is a JSON file which is the representation of the world according to Terraform. The state will happily show off your sensitive data, so be careful, and as a best practice add your `.tfstate` files to your `.gitignore` file before uploading to GitHub.

-By default the state file as you can see lives inside the same directory as your project code, but it can also be stored remotely as an option.
+By default the state file, as you can see, lives inside the same directory as your project code, but it can also be stored remotely as an option. 
In a production environment this is likely going to be a shared location such as an S3 bucket.

Another option could be Terraform Cloud, this is a paid for managed service. (Free up to 5 users)

-The pros for storing state in a remote location is that we get:
+The pros of storing state in a remote location are that we get:

-- Sensitive data encrypted
-- Collaboration
-- Automation
+- Sensitive data encrypted
+- Collaboration
+- Automation
- However it could bring increased complexity

-```
+```JSON
{
"version": 4,
"terraform_version": "1.1.6",
@@ -215,17 +217,17 @@ The pros for storing state in a remote location is that we get:
}
```

+## Resources

-## Resources
-I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
+I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources, be sure to raise a PR with your resources and I will be happy to review and add them to the list.

-- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day59.md b/Days/day59.md index 90d7365e1..26c14f50b 100644 --- a/Days/day59.md +++ b/Days/day59.md @@ -1,21 +1,22 @@ --- -title: '#90DaysOfDevOps - Create a VM with Terraform & Variables - Day 59' +title: "#90DaysOfDevOps - Create a VM with Terraform & Variables - Day 59" published: false description: 90DaysOfDevOps - Create a VM with Terraform & Variables -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049051 --- + ## Create a VM with Terraform & Variables -In this session we are going to be creating a VM or two VMs using terraform inside VirtualBox. 
This is not the normal, VirtualBox is a workstation virtualisation option and really this would not be a use case for Terraform but I am currently 36,000ft in the air and as much as I have deployed public cloud resources this high in the clouds it is much faster to do this locally on my laptop.
+In this session we are going to be creating a VM or two VMs using Terraform inside VirtualBox. This is not the norm; VirtualBox is a workstation virtualisation option and this would not really be a use case for Terraform, but I am currently 36,000ft in the air and, as much as I have deployed public cloud resources this high in the clouds, it is much faster to do this locally on my laptop.

-Purely demo purpose but the concept is the same we are going to have our desired state configuration code and then we are going to run that against the virtualbox provider. In the past we have used vagrant here and I covered off the differences between vagrant and terraform at the beginning of the section.
+This is purely for demo purposes, but the concept is the same: we are going to have our desired state configuration code and then we are going to run that against the VirtualBox provider. In the past we have used Vagrant here, and I covered the differences between Vagrant and Terraform at the beginning of the section.

-### Create virtual machine in VirtualBox
+### Create virtual machine in VirtualBox

-The first thing we are going to do is create a new folder called virtualbox, we can then create a virtualbox.tf file and this is going to be where we define our resources. The code below which can be found in the VirtualBox folder as virtualbox.tf this is going to create 2 VMs in Virtualbox.
+The first thing we are going to do is create a new folder called virtualbox, and in it a virtualbox.tf file, which is going to be where we define our resources. The code below, which can be found in the VirtualBox folder as virtualbox.tf, is going to create 2 VMs in VirtualBox. 
You can find more about the community virtualbox provider [here](https://registry.terraform.io/providers/terra-farm/virtualbox/latest/docs/resources/vm)

@@ -54,54 +55,53 @@ output "IPAddr_2" {
}
```

-Now that we have our code defined we can now perform the `terraform init` on our folder to download the provider for virtualbox.
+Now that we have our code defined, we can perform `terraform init` on our folder to download the provider for VirtualBox.

![](Images/Day59_IAC1.png)

-
Obviously you will also need to have virtualbox installed on your system as well. We can then next run `terraform plan` to see what our code will create for us. Followed by `terraform apply` the below image shows your completed process.

![](Images/Day59_IAC2.png)

-In Virtualbox you will now see your 2 virtual machines.
+In VirtualBox you will now see your 2 virtual machines.

![](Images/Day59_IAC3.png)

-### Change configuration
+### Change configuration

-Lets add another node to our deployment. We can simply change the count line to show our newly desired number of nodes. When we run our `terraform apply` it will look something like below.
+Let's add another node to our deployment. We can simply change the count line to our newly desired number of nodes. When we run our `terraform apply` it will look something like below.

![](Images/Day59_IAC4.png)

-Once complete in virtualbox you can see we now have 3 nodes up and running.
+Once complete, in VirtualBox you can see we now have 3 nodes up and running.

![](Images/Day59_IAC5.png)

-When we are finished we can clear this up using the `terraform destroy` and our machines will be removed.
+When we are finished we can clean this up using `terraform destroy` and our machines will be removed.

![](Images/Day59_IAC6.png)

-### Variables & Outputs
+### Variables & Outputs

-We did mention outputs when we ran our hello-world example in the last session. But we can get into more detail here. 
+We did mention outputs when we ran our hello-world example in the last session, but we can get into more detail here.

-But there are many other variables that we can use here as well, there are also a few different ways in which we can define variables.
+But there are many other variables that we can use here as well, and there are a few different ways in which we can define variables:

- We can manually enter our variables with the `terraform plan` or `terraform apply` command

-- We can define them in the .tf file within the block
+- We can define them in the .tf file within a variable block

-- We can use environment variables within our system using `TF_VAR_NAME` as the format.
+- We can use environment variables within our system using `TF_VAR_NAME` as the format.

-- My preference is to use a terraform.tfvars file in our project folder.
+- My preference is to use a terraform.tfvars file in our project folder.

-- There is an *auto.tfvars file option
+- There is an \*auto.tfvars file option

-- or we can define when we run the `terraform plan` or `terraform apply` with the `-var` or `-var-file`.
+- or we can define them when we run `terraform plan` or `terraform apply` with the `-var` or `-var-file` flags.

-Starting from the bottom moving up would be the order in which the variables are defined.
+Starting from the bottom and moving up is the order of precedence in which the variables are defined.

-We have also mentioned that the state file will contain sensitive information. We can define our sensitive information as a variable and we can define this as being sensitive.
+We have also mentioned that the state file will contain sensitive information. We can define our sensitive information as a variable and mark it as sensitive. 
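As a sketch of the options above, the same input variable (`vm_count` is just an illustrative name) could be supplied in several of the ways listed:

```
# variables.tf - declare the variable with a default
variable "vm_count" {
  type    = number
  default = 1
}

# terraform.tfvars - set a project-level value, picked up automatically
vm_count = 3
```

The same value could instead come from an environment variable (`export TF_VAR_vm_count=3`) or on the command line (`terraform apply -var "vm_count=3"`), with the command-line flags taking the highest precedence.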
```
variable "some_resource" {
@@ -112,16 +112,17 @@ variable "some_resource" {
}
```

-## Resources
-I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
+## Resources
+
+I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.

-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
- [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
- [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
- [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
- [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
-- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
+- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s)
- [Terraform Simple Projects](https://terraform.joshuajebaraj.com/)
- [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks)
- 
[Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform)
diff --git a/Days/day60.md b/Days/day60.md
index b88ace47c..6c3bb294f 100644
--- a/Days/day60.md
+++ b/Days/day60.md
@@ -1,19 +1,20 @@
---
-title: '#90DaysOfDevOps - Docker Containers, Provisioners & Modules - Day 60'
+title: "#90DaysOfDevOps - Docker Containers, Provisioners & Modules - Day 60"
published: false
-description: '90DaysOfDevOps - Docker Containers, Provisioners & Modules'
-tags: 'devops, 90daysofdevops, learning'
+description: "90DaysOfDevOps - Docker Containers, Provisioners & Modules"
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049052
---
-## Docker Containers, Provisioners & Modules
-On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE virtualbox environment. In this section we are going to be deploy a Docker container with some configuration to our local Docker environment.
+## Docker Containers, Provisioners & Modules
+
+On [Day 59](day59.md) we provisioned a virtual machine using Terraform to our local FREE VirtualBox environment. In this section we are going to deploy a Docker container with some configuration to our local Docker environment.

### Docker Demo

-First up we are going to use the code block below, the outcome of the below is that we would like a simple web app to be deployed into docker and to publish this so that it is available to our network. We will be using nginx and we will make this available externally on our laptop over localhost and port 8000. We are using a docker provider from the community and you can see the docker image we are using also stated in our configuration.
+First up we are going to use the code block below; the outcome is that a simple web app is deployed into Docker and published so that it is available to our network. We will be using nginx and we will make this available externally on our laptop over localhost and port 8000. 
We are using a docker provider from the community and you can see the docker image we are using also stated in our configuration.

```
terraform {
@@ -42,21 +43,21 @@ resource "docker_container" "nginx" {
}
}
```

-The first task is to use `terraform init` command to download the provider to our local machine.
+The first task is to run the `terraform init` command to download the provider to our local machine.

![](Images/Day60_IAC1.png)

-We then run our `terraform apply` followed by `docker ps` and you can see we have a running container.
+We then run our `terraform apply` followed by `docker ps` and you can see we have a running container.

![](Images/Day60_IAC2.png)

-If we then open a browser we can navigate to http://localhost:8000/ and you will see we have access to our NGINX container.
+If we then open a browser we can navigate to `http://localhost:8000/` and you will see we have access to our NGINX container.

![](Images/Day60_IAC3.png)

-You can find out more information on the [Docker Provider](https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/container)
+You can find out more information on the [Docker Provider](https://registry.terraform.io/providers/kreuzwerker/docker/latest/docs/resources/container)

-The above is a very simple demo of what can be done with Terraform plus Docker and how we can now manage this under the Terraform state. We covered docker compose in the containers section and there is a little crossover in a way between this, infrastructure as code as well as then Kubernetes.
+The above is a very simple demo of what can be done with Terraform plus Docker and how we can now manage this under the Terraform state. We covered Docker Compose in the containers section, and there is some crossover between this, Infrastructure as Code, and Kubernetes. 
For the purpose of showing this and how Terraform can handle a little more complexity, we are going to take the docker compose file for wordpress and mysql that we created with docker compose and we will put this to Terraform. You can find the [docker-wordpress.tf](/Days/IaC/Docker-Wordpress/docker-wordpress.tf)

@@ -120,26 +121,25 @@ resource "docker_container" "wordpress" {
}
}
```

-We again put this is in a new folder and then run our `terraform init` command to pull down our provisioners required.
+We again put this in a new folder and then run our `terraform init` command to pull down the required providers.

![](Images/Day60_IAC4.png)

-We then run our `terraform apply` command and then take a look at our docker ps output we should see our newly created containers.
+We then run our `terraform apply` command and then, taking a look at our `docker ps` output, we should see our newly created containers.

![](Images/Day60_IAC5.png)

-We can then also navigate to our WordPress front end. Much like when we went through this process with docker-compose in the containers section we can now run through the setup and our wordpress posts would be living in our MySQL database.
+We can then also navigate to our WordPress front end. Much like when we went through this process with docker-compose in the containers section, we can now run through the setup and our WordPress posts would be living in our MySQL database.

![](Images/Day60_IAC6.png)

-Obviously now we have covered containers and Kubernetes in some detail, we probably know that this is ok for testing but if you were really going to be running a website you would not do this with containers alone and you would look at using Kubernetes to achieve this, Next up we are going to take a look using Terraform with Kubernetes. 
-
+Obviously now we have covered containers and Kubernetes in some detail, and we probably know that this is OK for testing, but if you were really going to be running a website you would not do this with containers alone and you would look at using Kubernetes to achieve this. Next up we are going to take a look at using Terraform with Kubernetes.

-### Provisioners
+### Provisioners

-Provisioners are there so that if something cannot be declartive we have a way in which to parse this to our deployment.
+Provisioners are there so that if something cannot be declarative we have a way in which to pass this to our deployment.

-If you have no other alternative and adding this complexity to your code is the place to go then you can do this by running something similar to the following block of code.
+If you have no other alternative, and adding this complexity to your code is the way to go, then you can do this by running something similar to the following block of code.

```
resource "docker_container" "db" {
@@ -152,40 +152,41 @@ resource "docker_container" "db" {
}
```

-The remote-exec provisioner invokes a script on a remote resource after it is created. This could be used for something OS specific or it could be used to wrap in a configuration management tool. Although notice that we have some of these covered in their own provisioners.
+The remote-exec provisioner invokes a script on a remote resource after it is created. This could be used for something OS-specific or it could be used to wrap in a configuration management tool. Note that some of these are covered by their own provisioners.

[More details on provisioners](https://www.terraform.io/language/resources/provisioners/syntax)

- file
-- local-exec
-- remote-exec
-- vendor
-  - ansible
-  - chef
-  - puppet
+- local-exec
+- remote-exec
+- vendor
+  - ansible
+  - chef
+  - puppet

-### Modules
+### Modules

-Modules are containers for multiple resources that are used together. 
A module consists of a collection of .tf files in the same directory.
+Modules are containers for multiple resources that are used together. A module consists of a collection of .tf files in the same directory.

-Modules are a good way to separate your infrastructure resources as well as being able to pull in third party modules that have already been created so you do not have to re invent the wheel.
+Modules are a good way to separate your infrastructure resources, as well as letting you pull in third-party modules that have already been created so you do not have to reinvent the wheel.

-For example if we wanted to use the same project to build out some VMs, VPCs, Security Groups and then also a Kubernetes cluster we would likely want to split our resources out into modules to better define our resources and where they are grouped.
+For example, if we wanted to use the same project to build out some VMs, VPCs, Security Groups and then also a Kubernetes cluster, we would likely want to split our resources out into modules to better define our resources and where they are grouped.

-Another benefit to modules is that you can take these modules and use them on other projects or share publicly to help the community.
+Another benefit to modules is that you can take these modules and use them on other projects or share them publicly to help the community.

We are breaking down our infrastructure into components; components are known here as modules.

-## Resources
-I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
+## Resources
+
+I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. 
-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day61.md b/Days/day61.md index 4b159328e..6cb9011fd 100644 --- a/Days/day61.md +++ b/Days/day61.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61' +title: "#90DaysOfDevOps - Kubernetes & Multiple Environments - Day 61" published: false description: 90DaysOfDevOps - Kubernetes & Multiple Environments tags: "devops, 90daysofdevops, learning" @@ -7,25 +7,26 @@ cover_image: null canonical_url: null id: 1048743 --- -## Kubernetes & Multiple Environments + +## Kubernetes 
& Multiple Environments

So far during this section on Infrastructure as code we have looked at deploying virtual machines albeit to virtualbox but the premise is the same really as we define in code what we want our virtual machine to look like and then we deploy. The same for Docker containers and in this session we are going to take a look at how Terraform can be used to interact with resources supported by Kubernetes.

I have been using Terraform to deploy my Kubernetes clusters for demo purposes across the 3 main cloud providers and you can find the repository [tf_k8deploy](https://github.com/MichaelCade/tf_k8deploy)

-However you can also use Terraform to interact with objects within the Kubernetes cluster, this could be using the [Kubernetes provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) or it could be using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) to manage your chart deployments.
+However, you can also use Terraform to interact with objects within the Kubernetes cluster; this could be using the [Kubernetes provider](https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs) or it could be using the [Helm provider](https://registry.terraform.io/providers/hashicorp/helm/latest) to manage your chart deployments.

-Now we could use `kubectl` as we have showed in previous sections. But there are some benefits to using Terraform in your Kubernetes environment.
+Now, we could use `kubectl` as we have shown in previous sections. But there are some benefits to using Terraform in your Kubernetes environment.

- Unified workflow - if you have used terraform to deploy your clusters, you could use the same workflow and tool to deploy within your Kubernetes clusters

-- Lifecycle management - Terraform is not just a provisioning tool, its going to enable change, updates and deletions. 
+- Lifecycle management - Terraform is not just a provisioning tool, it's going to enable change, updates and deletions.

### Simple Kubernetes Demo

Much like the demo we created in the last session we can now deploy nginx into our Kubernetes cluster, I will be using minikube here again for demo purposes. We create our Kubernetes.tf file and you can find this in the [folder](/Days/IaC/Kubernetes/kubernetes.tf)

-In that file we are going to define our Kubernetes provider, we are going to point to our kubeconfig file, create a namespace called nginx, then we will create a deployment which contains 2 replicas and finally a service.
+In that file we are going to define our Kubernetes provider, point to our kubeconfig file, create a namespace called nginx, then create a deployment which contains 2 replicas and finally a service.

```
terraform {
@@ -93,73 +94,77 @@ resource "kubernetes_service" "test" {
}
```

-The first thing we have to do in our new project folder is run the `terraform init` command.
+The first thing we have to do in our new project folder is run the `terraform init` command.

![](Images/Day61_IAC1.png)

-And then before we run the `terraform apply` command, let me show you that we have no namespaces.
+And then, before we run the `terraform apply` command, let me show you that we have no namespaces.

![](Images/Day61_IAC2.png)

-When we run our apply command this is going to create those 3 new resources, namespace, deployment and service within our Kubernetes cluster.
+When we run our apply command, this is going to create those 3 new resources: namespace, deployment and service, within our Kubernetes cluster.

![](Images/Day61_IAC3.png)

-We can now take a look at the deployed resources within our cluster.
+We can now take a look at the deployed resources within our cluster.
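As a sketch (assuming `kubectl` is pointed at the same minikube cluster), the Terraform-created resources could be inspected with:

```
kubectl get namespaces
kubectl get deployments -n nginx
kubectl get svc -n nginx
```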
![](Images/Day61_IAC4.png)

-Now because we are using minikube and you will have seen in the previous section this has its own limitations when we try and play with the docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to http://localhost:30201/ we should see our NGINX page.
+Now, because we are using minikube, as you will have seen in the previous section, this has its own limitations when we try to play with Docker networking for ingress. But if we simply issue the `kubectl port-forward -n nginx svc/nginx 30201:80` command and open a browser to `http://localhost:30201/` we should see our NGINX page.

![](Images/Day61_IAC5.png)

-If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through.
+If you want to try out more detailed demos with Terraform and Kubernetes then the [HashiCorp Learn site](https://learn.hashicorp.com/tutorials/terraform/kubernetes-provider) is fantastic to run through.

+### Multiple Environments

-### Multiple Environments

+If we wanted to take any of the demos we have run through, but wanted to now have specific production, staging and development environments looking exactly the same and leveraging this code, there are two approaches to achieve this with Terraform:

-If we wanted to take any of the demos we have ran through but wanted to now have specific production, staging and development environments looking exactly the same and leveraging this code there are two approaches to achieve this with Terraform

+- `terraform workspaces` - multiple named sections within a single backend

-- `terraform workspaces` - multiple named sections within a single backend

+- file structure - Directory layout provides separation, modules provide reuse.

-- file structure - Directory layout provides separation, modules provide reuse.
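As a sketch, the workspace approach is driven from the Terraform CLI (the environment names here are just illustrative):

```
terraform workspace new staging     # create and switch to a new workspace
terraform workspace new production
terraform workspace select staging  # switch between workspaces
terraform workspace list            # show all workspaces, * marks the current one
```

Within configuration, the `terraform.workspace` expression can then be used to vary names or counts per environment.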
+Each of the above has its pros and cons though.

-Each of the above do have their pros and cons though.

+### terraform workspaces

-### terraform workspaces

+Pros

-Pros
-- Easy to get started
-- Convenient terraform.workspace expression
-- Minimises code duplication
+- Easy to get started
+- Convenient `terraform.workspace` expression
+- Minimises code duplication

Cons
+
- Prone to human error (we were trying to eliminate this by using TF)
-- State stored within the same backend
-- Codebase doesnt unambiguously show deployment configurations.
+- State stored within the same backend
+- Codebase doesn't unambiguously show deployment configurations.
+
+### File Structure

-### File Structure

+Pros

-Pros
-- Isolation of backends
-  - improved security
-  - decreased potential for human error
+- Isolation of backends
+  - improved security
+  - decreased potential for human error
- Codebase fully represents deployed state

-Cons
-- Multiple terraform apply required to provision environments
-- More code duplication, but can be minimised with modules.
+Cons
+
+- Multiple `terraform apply` runs required to provision environments
+- More code duplication, but this can be minimised with modules.
+
+## Resources

-## Resources
-I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
+I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.

-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day62.md b/Days/day62.md index 61ead6767..0d926058f 100644 --- a/Days/day62.md +++ b/Days/day62.md @@ -1,57 +1,58 @@ --- -title: '#90DaysOfDevOps - Testing, Tools & Alternatives - Day 62' +title: "#90DaysOfDevOps - Testing, Tools & Alternatives - Day 62" published: false -description: '90DaysOfDevOps - Testing, Tools & Alternatives' -tags: 'devops, 90daysofdevops, learning' +description: "90DaysOfDevOps - Testing, Tools & Alternatives" +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049053 --- + ## Testing, Tools & Alternatives -As we close out this section on Infrastructure as Code we must mention about testing our code, the 
various different tools available and then some of the alternatives to Terraform to achieve this. As I said at the start of the section my focus was on Terraform because it is firstly free and open source, secondly it is cross platform and agnostic to environments. But there are also alternatives out there that should be considered but the overall goal is to make people aware that this is the way to deploy your infrastructure.
+As we close out this section on Infrastructure as Code we must mention testing our code, the various tools available, and then some of the alternatives to Terraform. As I said at the start of the section, my focus was on Terraform because it is firstly free and open source, and secondly it is cross-platform and agnostic to environments. There are also alternatives out there that should be considered, but the overall goal is to make people aware that this is the way to deploy your infrastructure.

-### Code Rot
+### Code Rot

-The first area I want to cover in this session is code rot, unlike application code, infrastructure as code might get used and then not for a very long time. Lets take the example that we are going to be using Terraform to deploy our VM environment in AWS, perfect and it works first time and we have our environment, but this environment doesnt change too often so the code gets left the state possibly or hopefully stored in a central location but the code does not change.
+The first area I want to cover in this session is code rot. Unlike application code, infrastructure as code might get used and then not touched for a very long time. Let's take the example that we are going to be using Terraform to deploy our VM environment in AWS: it works first time and we have our environment, but this environment doesn't change too often, so the code gets left alone; the state is possibly (hopefully) stored in a central location, but the code does not change.

-What if something changes in the infrastructure? 
But it is done out of band, or other things change in our environment. +What if something changes in the infrastructure? But it is done out of band, or other things change in our environment. -- Out of band changes -- Unpinned versions -- Deprecated dependancies -- Unapplied changes +- Out of band changes +- Unpinned versions +- Deprecated dependencies +- Unapplied changes -### Testing +### Testing -Another huge area that follows on from code rot and in general is the ability to test your IaC and make sure all areas are working the way they should. +Another huge area that follows on from code rot and in general is the ability to test your IaC and make sure all areas are working the way they should. -First up there are some built in testing commands we can take a look at: +First up there are some built in testing commands we can take a look at: -| Command | Description | -| --------------------- | ------------------------------------------------------------------------------------------ | -| `terraform fmt` | Rewrite Terraform configuration files to a canonical format and style. | -| `terraform validate` | Validates the configuration files in a directory, referring only to the configuration | -| `terraform plan` | Creates an execution plan, which lets you preview the changes that Terraform plans to make | -| Custom validation | Validation of your input variables to ensure they match what you would expect them to be | +| Command | Description | +| -------------------- | ------------------------------------------------------------------------------------------ | +| `terraform fmt` | Rewrite Terraform configuration files to a canonical format and style. 
| +| `terraform validate` | Validates the configuration files in a directory, referring only to the configuration | +| `terraform plan` | Creates an execution plan, which lets you preview the changes that Terraform plans to make | +| Custom validation | Validation of your input variables to ensure they match what you would expect them to be | -We also have some testing tools available external to Terraform: +We also have some testing tools available external to Terraform: - [tflint](https://github.com/terraform-linters/tflint) - - Find possible errors - - Warn about deprecated syntax, unused declarations. - - Enforce best practices, naming conventions. + - Find possible errors + - Warn about deprecated syntax, unused declarations. + - Enforce best practices, naming conventions. -Scanning tools +Scanning tools - [checkov](https://www.checkov.io/) - scans cloud infrastructure configurations to find misconfigurations before they're deployed. - [tfsec](https://aquasecurity.github.io/tfsec/v1.4.2/) - static analysis security scanner for your Terraform code. -- [terrascan](https://github.com/accurics/terrascan) - static code analyzer for Infrastructure as Code. +- [terrascan](https://github.com/accurics/terrascan) - static code analyser for Infrastructure as Code. - [terraform-compliance](https://terraform-compliance.com/) - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code. 
-- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues +- [snyk](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) - scans your Terraform code for misconfigurations and security issues -Managed Cloud offering +Managed Cloud offering - [Terraform Sentinel](https://www.terraform.io/cloud-docs/sentinel) - embedded policy-as-code framework integrated with the HashiCorp Enterprise products. It enables fine-grained, logic-based policy decisions, and can be extended to use information from external sources. @@ -59,48 +60,49 @@ Automated testing - [Terratest](https://terratest.gruntwork.io/) - Terratest is a Go library that provides patterns and helper functions for testing infrastructure -Worth a mention +Worth a mention - [Terraform Cloud](https://cloud.hashicorp.com/products/terraform) - Terraform Cloud is HashiCorp’s managed service offering. It eliminates the need for unnecessary tooling and documentation for practitioners, teams, and organizations to use Terraform in production. -- [Terragrunt](https://terragrunt.gruntwork.io/) - Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state. +- [Terragrunt](https://terragrunt.gruntwork.io/) - Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state. 
-- [Atlantis](https://www.runatlantis.io/) - Terraform Pull Request Automation +- [Atlantis](https://www.runatlantis.io/) - Terraform Pull Request Automation -### Alternatives +### Alternatives -We mentioned on Day 57 when we started this section that there were some alternatives and I very much plan on exploring this following on from this challenge. +We mentioned on Day 57 when we started this section that there were some alternatives and I very much plan on exploring this following on from this challenge. -| Cloud Specific | Cloud Agnostic | +| Cloud Specific | Cloud Agnostic | | ------------------------------- | -------------- | -| AWS CloudFormation | Terraform | -| Azure Resource Manager | Pulumi | -| Google Cloud Deployment Manager | | +| AWS CloudFormation | Terraform | +| Azure Resource Manager | Pulumi | +| Google Cloud Deployment Manager | | + +I have used AWS CloudFormation probably the most out of the above list and native to AWS but I have not used the others other than Terraform. As you can imagine the cloud specific versions are very good in that particular cloud but if you have multiple cloud environments then you are going to struggle to migrate those configurations or you are going to have multiple management planes for your IaC efforts. + +I think an interesting next step for me is to take some time and learn more about [Pulumi](https://www.pulumi.com/) -I have used AWS CloudFormation probably the most out of the above list and native to AWS but I have not used the others other than Terraform. As you can imagine the cloud specific versions are very good in that particular cloud but if you have multiple cloud environments then you are going to struggle to migrate those configurations or you are going to have multiple management planes for your IaC efforts. 
+From a Pulumi comparison on their site: -I think an interesting next step for me is to take some time and learn more about [Pulumi](https://www.pulumi.com/) - -From a Pulumi comparison on their site +> "Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stack’s current state and determines what resources need to be created, updated or deleted." -*"Both Terraform and Pulumi offer a desired state infrastructure as code model where the code represents the desired infrastructure state and the deployment engine compares this desired state with the stack’s current state and determines what resources need to be created, updated or deleted."* +The biggest difference I can see is that, unlike the HashiCorp Configuration Language (HCL), Pulumi allows for general-purpose languages like Python, TypeScript, JavaScript, Go and .NET. -The biggest difference I can see is that unlike the HashiCorp Configuration Language (HCL) Pulumi allows for general purpose languages like Python, TypeScript, JavaScript, Go and .NET. +For a quick overview, see [Introduction to Pulumi: Modern Infrastructure as Code](https://www.youtube.com/watch?v=QfJTJs24-JM). I like the ease and choices you are prompted with and want to get into this a little more. -A quick overview [Introduction to Pulumi: Modern Infrastructure as Code](https://www.youtube.com/watch?v=QfJTJs24-JM) I like the ease and choices you are prompted with and want to get into this a little more. +This wraps up the Infrastructure as Code section. Next we move on to that little bit of overlap with configuration management; in particular, once we get past the big picture of configuration management, we are going to be using Ansible for some of those tasks and demos.
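Before leaving testing behind, the "Custom validation" row in the testing table earlier deserves a quick sketch. A minimal example of an input-variable validation block (the variable name and rule here are illustrative, following the `validation` block syntax from the Terraform documentation):

```hcl
variable "region" {
  type        = string
  description = "Region to deploy into"

  # Fail fast at plan/validate time if the input does not match
  validation {
    condition     = can(regex("^(eu|us)-", var.region))
    error_message = "The region value must start with eu- or us-."
  }
}
```

With this in place, `terraform validate` and `terraform plan` reject a bad input with the error message above before any resources are touched.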
-This wraps up the Infrastructure as code section and next we move on to that little bit of overlap with configuration management and in particular as we get past the big picture of configuration management we are going to be using Ansible for some of those tasks and demos. +## Resources -## Resources -I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. +I have listed a lot of resources down below and I think this topic has been covered so many times out there. If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) -- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) +- [KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step
Guide!](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) - [Terraform Simple Projects](https://terraform.joshuajebaraj.com/) - [Terraform Tutorial - The Best Project Ideas](https://www.youtube.com/watch?v=oA-pPa0vfks) - [Awesome Terraform](https://github.com/shuaibiyy/awesome-terraform) diff --git a/Days/day63.md b/Days/day63.md index ab498820e..1f23e1d6e 100644 --- a/Days/day63.md +++ b/Days/day63.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63' +title: "#90DaysOfDevOps - The Big Picture: Configuration Management - Day 63" published: false description: 90DaysOfDevOps - The Big Picture Configuration Management tags: "devops, 90daysofdevops, learning" @@ -7,102 +7,100 @@ cover_image: null canonical_url: null id: 1048711 --- + ## The Big Picture: Configuration Management -Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management. +Coming straight off the back of the section covering Infrastructure as Code, there is likely going to be some crossover as we talk about Configuration Management or Application Configuration Management. -Configuration Management is the process of maintaining applications, systems and servers in a desired state. The overlap with Infrastructure as code is that IaC is going to make sure your infrastructure is at the desired state but after that especially terraform is not going to look after the desired state of your OS settings or Application and that is where Configuration Management tools come in. Making sure that system and applications perform the way it is expected as changes occur over Deane. +Configuration Management is the process of maintaining applications, systems and servers in a desired state. 
The overlap with Infrastructure as Code is that IaC is going to make sure your infrastructure is at the desired state, but after that, Terraform especially is not going to look after the desired state of your OS settings or applications, and that is where Configuration Management tools come in: making sure that systems and applications perform the way they are expected to as changes occur over time. -Configuration management keeps you from making small or large changes that go undocumented. +Configuration management keeps you from making small or large changes that go undocumented. ### Scenario: Why would you want to use Configuration Management The scenario or why you'd want to use Configuration Management, meet Dean he's our system administrator and Dean is a happy camper pretty and -working on all of the systems in his environement. - -What happens if their system fails, if there's a fire, a server goes down well? Dean knows exactly what to do he can fix that fire really easily the problems become really difficult for Dean however if multiple servers start failing particularly when you have large and expanding environments, this is why Dean really needs to have a configuration management tool. Configuration Management tools can help make Dean look like a rockstar, all he has to do is configure the right codes that allows him to push out the instructions on how to set up each of the servers quickly effectively and at scale. +working on all of the systems in his environment. +What happens if their system fails, if there's a fire, or a server goes down? Dean knows exactly what to do; he can fix that fire really easily. The problems become really difficult for Dean, however, if multiple servers start failing, particularly when you have large and expanding environments, and this is why Dean really needs to have a configuration management tool.
Configuration Management tools can help make Dean look like a rockstar; all he has to do is configure the right code that allows him to push out the instructions on how to set up each of the servers quickly, effectively and at scale. -### Configuration Management tools +### Configuration Management tools -There are a variety of configuration management tools available, and each has specific features that make it better for some situations than others. +There are a variety of configuration management tools available, and each has specific features that make it better for some situations than others. ![](Images/Day63_config1.png) -At this stage we will take a quick fire look at the options in the above picture before making our choice on which one we will use and why. +At this stage we will take a quick-fire look at the options in the above picture before making our choice on which one we will use and why. - **Chef** - - Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation. + + - Chef ensures configuration is applied consistently in every environment, at any scale with infrastructure automation. - Chef is an open-source tool developed by OpsCode written in Ruby and Erlang. - - Chef is best suited for organisations that have a hetrogenous infrastructure and are looking for mature solutions. - - Recipes and Cookbooks determine the configuration code for your systems. + - Chef is best suited for organisations that have a heterogeneous infrastructure and are looking for mature solutions. + - Recipes and Cookbooks determine the configuration code for your systems. - Pro - A large collection of recipes are available - Pro - Integrates well with Git which provides a strong version control - - Con - Steep learning curve, a considerable amount of time required. - - Con - The main server doesn't have much control.
- - Architecture - Server / Clients - - Ease of setup - Moderate + - Con - Steep learning curve, a considerable amount of time required. + - Con - The main server doesn't have much control. + - Architecture - Server / Clients + - Ease of setup - Moderate - Language - Procedural - Specify how to do a task - **Puppet** - - Puppet is a configuration management tool that supports automatic deployment. - - Puppet is built in Ruby and uses DSL for writing manifests. - - Puppet also works well with hetrogenous infrastructure where the focus is on scalability. - - Pro - Large community for support. - - Pro - Well developed reporting mechanism. + - Puppet is a configuration management tool that supports automatic deployment. + - Puppet is built in Ruby and uses DSL for writing manifests. + - Puppet also works well with heterogeneous infrastructure where the focus is on scalability. + - Pro - Large community for support. + - Pro - Well developed reporting mechanism. - Con - Advance tasks require knowledge of Ruby language. - - Con - The main server doesn't have much control. - - Architecture - Server / Clients - - Ease of setup - Moderate - - Language - Declartive - Specify only what to do - + - Con - The main server doesn't have much control. + - Architecture - Server / Clients + - Ease of setup - Moderate + - Language - Declarative - Specify only what to do - **Ansible** - - Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration. + + - Ansible is an IT automation tool that automates configuration management, cloud provisioning, deployment and orchestration. - The core of Ansible playbooks are written in YAML. (Should really do a section on YAML as we have seen this a few times) - - Ansible works well when there are environments that focus on getting things up and running fast. + - Ansible works well when there are environments that focus on getting things up and running fast. 
- Works on playbooks which provide instructions to your servers. - Pro - No agents needed on remote nodes. - - Pro - YAML is easy to learn. + - Pro - YAML is easy to learn. - Con - Performance speed is often less than other tools (Faster than Dean doing it himself manually) - - Con - YAML not as powerful as Ruby but less of a learning curve. + - Con - YAML not as powerful as Ruby but less of a learning curve. - Architecture - Client Only - - Ease of setup - Very Easy + - Ease of setup - Very Easy - Language - Procedural - Specify how to do a task - **SaltStack** - - SaltStack is a CLI based tool that automates configuration management and remote execution. - - SaltStack is Python based whilst the instructions are written in YAML or its own DSL. - - Perfect for environments with scalability and resilience as the priority. - - Pro - Easy to use when up and running - - Pro - Good reporting mechanism + - SaltStack is a CLI based tool that automates configuration management and remote execution. + - SaltStack is Python based whilst the instructions are written in YAML or its own DSL. + - Perfect for environments with scalability and resilience as the priority. + - Pro - Easy to use when up and running + - Pro - Good reporting mechanism - Con - Setup phase is tough - - Con - New web ui which is much less developed than the others. + - Con - New web ui which is much less developed than the others. - Architecture - Server / Clients - - Ease of setup - Moderate - - Language - Declartive - Specify only what to do + - Ease of setup - Moderate + - Language - Declarative - Specify only what to do ### Ansible vs Terraform The tool that we will be using for this section is going to be Ansible. (Easy to use and easier language basics required.) -I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further. 
- -| |Ansible |Terraform | -| ------------- | ------------------------------------------------------------- | ----------------------------------------------------------------- | -|Type |Ansible is a configuration management tool |Terraform is a an orchestration tool | -|Infrastructure |Ansible provides support for mutable infrastructure |Terraform provides support for immutable infrastructure | -|Language |Ansible follows procedural language |Terraform follows a declartive language | -|Provisioning |Ansible provides partial provisioning (VM, Network, Storage) |Terraform provides extensive provisioning (VM, Network, Storage) | -|Packaging |Ansible provides complete support for packaging & templating |Terraform provides partial support for packaging & templating | -|Lifecycle Mgmt |Ansible does not have lifecycle management |Terraform is heavily dependant on lifecycle and state mgmt | +I think it is important to touch on some of the differences between Ansible and Terraform before we look into the tooling a little further. 
+| | Ansible | Terraform | +| -------------- | ------------------------------------------------------------ | ---------------------------------------------------------------- | +| Type | Ansible is a configuration management tool | Terraform is an orchestration tool | +| Infrastructure | Ansible provides support for mutable infrastructure | Terraform provides support for immutable infrastructure | +| Language | Ansible follows a procedural language | Terraform follows a declarative language | +| Provisioning | Ansible provides partial provisioning (VM, Network, Storage) | Terraform provides extensive provisioning (VM, Network, Storage) | +| Packaging | Ansible provides complete support for packaging & templating | Terraform provides partial support for packaging & templating | +| Lifecycle Mgmt | Ansible does not have lifecycle management | Terraform is heavily dependent on lifecycle and state mgmt | - -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - See you on [Day 64](day64.md) diff --git a/Days/day64.md b/Days/day64.md index db3aeff6c..ac3a4029f 100644 --- a/Days/day64.md +++ b/Days/day64.md @@ -1,88 +1,87 @@ --- -title: '#90DaysOfDevOps - Ansible: Getting Started - Day 64' +title: "#90DaysOfDevOps - Ansible: Getting Started - Day 64" published: false -description: '90DaysOfDevOps - Ansible: Getting Started' +description: "90DaysOfDevOps - Ansible: Getting Started" tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048765 --- + ## Ansible: Getting Started -We covered a little what Ansible is in the [big picture session yesterday](day63.md) But we are going to get started with a little more information on top of that here. Firstly Ansible comes from RedHat.
Secondly it is agentles, connects via SSH and runs commands. Thirdly it is cross platform (Linux & macOS, WSL2) and open-source (there is also a paid for enterprise option) Ansible pushes configuration vs other models. +We covered a little of what Ansible is in the [big picture session yesterday](day63.md), but we are going to get started with a little more information on top of that here. Firstly, Ansible comes from RedHat. Secondly, it is agentless; it connects via SSH and runs commands. Thirdly, it is cross-platform (Linux & macOS, WSL2) and open-source (there is also a paid-for enterprise option). Ansible pushes configuration, vs other models. -### Ansible Installation -As you might imagine, RedHat and the Ansible team have done a fantastic job around documenting Ansible. This generally starts with the installation steps which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) Remember we said that Ansible is an agentless automation tool, the tool is deployed to a system referred to as a "Control Node" from this control node is manages machines and other devices (possibly network) over SSH. +### Ansible Installation -It does state in the above linked documentation that the Windows OS cannot be used as the control node. +As you might imagine, RedHat and the Ansible team have done a fantastic job around documenting Ansible. This generally starts with the installation steps, which you can find [here](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html). Remember we said that Ansible is an agentless automation tool; the tool is deployed to a system referred to as a "Control Node" and from this control node it manages machines and other devices (possibly network devices) over SSH. -For my control node and for at least this demo I am going to use the Linux VM we created way back in the [Linux section](day20.md) as my control node.
+It does state in the above linked documentation that the Windows OS cannot be used as the control node. -This system was running Ubuntu and the installation steps simply needs the following commands. +For my control node and for at least this demo I am going to use the Linux VM we created way back in the [Linux section](day20.md) as my control node. -``` +This system was running Ubuntu and the installation steps simply need the following commands. + +```Shell sudo apt update sudo apt install software-properties-common sudo add-apt-repository --yes --update ppa:ansible/ansible sudo apt install ansible ``` -Now we should have ansible installed on our control node, you can check this by running `ansible --version` and you should see something similar to this below. + +Now we should have Ansible installed on our control node; you can check this by running `ansible --version` and you should see something similar to the below. ![](Images/Day64_config1.png) -Before we then start to look at controlling other nodes in our environment, we can also check functionality of ansible by running a command against our local machine `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html) and this is a quick way to perform a single task across many different systems. +Before we then start to look at controlling other nodes in our environment, we can also check the functionality of Ansible by running a command against our local machine: `ansible localhost -m ping` will use an [Ansible Module](https://docs.ansible.com/ansible/2.9/user_guide/modules_intro.html) and this is a quick way to perform a single task across many different systems.
I mean it is not much fun with just the local host but imagine you wanted to get something or make sure all your systems were up and you had 1000+ servers and devices. ![](Images/Day64_config2.png) -Or an actual real life use for a module might be something like `ansible webservers --m service -a "name=httpd state=started"` this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command. +Or an actual real-life use for a module might be something like `ansible webservers -m service -a "name=httpd state=started"`; this will tell us if all of our webservers have the httpd service running. I have glossed over the webservers term used in that command. -### hosts +### hosts -The way I used localhost above to run a simple ping module against the system, I cannot specify another machine on my network, for example in the environment I am using my Windows host where VirtualBox is running has a network adapter with the IP 10.0.0.1 but you can see below that I can reach by pinging but I cannot use ansible to perform that task. +Unlike the way I used localhost above to run a simple ping module against the system, I cannot yet specify another machine on my network. For example, in the environment I am using, my Windows host where VirtualBox is running has a network adapter with the IP 10.0.0.1; you can see below that I can reach it by pinging, but I cannot use Ansible to perform that task. ![](Images/Day64_config3.png) -In order for us to specify our hosts or the nodes that we want to automate with these tasks we need to define them. We can define them by navigating to the /etc/ansible directory on your system. +In order for us to specify our hosts or the nodes that we want to automate with these tasks we need to define them. We can define them by navigating to the /etc/ansible directory on your system.
![](Images/Day64_config4.png) -The file we want to edit is the hosts file, using a text editor we can jump in and define our hosts. The hosts file contains lots of great instructions on how to use and modify the file. We want to scroll down to the bottom and we are going to create a new group called [windows] and we are going to add our `10.0.0.1` IP address for that host. Save the file. +The file we want to edit is the hosts file; using a text editor we can jump in and define our hosts. The hosts file contains lots of great instructions on how to use and modify the file. We want to scroll down to the bottom and we are going to create a new group called [windows] and we are going to add our `10.0.0.1` IP address for that host. Save the file. ![](Images/Day64_config5.png) -However remember I said you will need to have SSH available to enable ansible to connect to your system. As you can see below when I run `ansible windows -m ping` we get an unreachable because things failed to connect via SSH. +However, remember I said you will need to have SSH available to enable Ansible to connect to your system. As you can see below, when I run `ansible windows -m ping` we get an unreachable error because things failed to connect via SSH. ![](Images/Day64_config6.png) -I have now also started adding some additional hosts to our inventory, another name for this file as this is where you are going to define all of your devices, could be network devices, switches and routers for example also would be added here and grouped. In our hosts file though I have also added in my credentials for accessing the linux group of systems. +I have now also started adding some additional hosts to our inventory (another name for this file, as this is where you are going to define all of your devices; network devices such as switches and routers, for example, would also be added here and grouped). In our hosts file I have also added in my credentials for accessing the linux group of systems.
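For reference, a hosts file along the lines described above might look something like this (the group names, addresses and credential variables are illustrative rather than the exact values from my lab; outside of a lab you would use SSH keys or Ansible Vault instead of plaintext passwords):

```ini
[windows]
10.0.0.1

[linux]
192.168.169.140
192.168.169.141

# Connection variables applied to every host in the linux group
[linux:vars]
ansible_user=vagrant
ansible_password=vagrant
```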
![](Images/Day64_config7.png) -Now if we run `ansible linux -m ping` we get a success as per below. +Now if we run `ansible linux -m ping` we get a success as per below. ![](Images/Day64_config8.png) -We then have the node requirements, these are the target systems you wish to automate the configuration on. We are not installing anything for Ansible on these (I mean we might be installing software but there is no client from Ansible we need) Ansible will make a connection over SSH and send anything over SFTP. (If you so desire though and you have SSH configured you could use SCP vs SFTP.) +We then have the node requirements, these are the target systems you wish to automate the configuration on. We are not installing anything for Ansible on these (I mean we might be installing software but there is no client from Ansible we need) Ansible will make a connection over SSH and send anything over SFTP. (If you so desire though and you have SSH configured you could use SCP vs SFTP.) -### Ansible Commands +### Ansible Commands You saw that we were able to run `ansible linux -m ping` against our Linux machine and get a response, basically with Ansible we have the ability to run many adhoc commands. But obviously you can run this against a group of systems and get that information back. [ad hoc commands](https://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html) -If you find yourself repeating commands or even worse you are having to log into individual systems to run these commands then Ansible can help there. For example the simple command below would give us the output of all the operating system details for all of the systems we add to our linux group. +If you find yourself repeating commands or even worse you are having to log into individual systems to run these commands then Ansible can help there. For example the simple command below would give us the output of all the operating system details for all of the systems we add to our linux group. 
`ansible linux -a "cat /etc/os-release"`

-Other use cases could be to reboot systems, copy files, manage packers and users. You can also couple ad hoc commands with Ansible modules.
+Other use cases could be to reboot systems, copy files, manage packages and users. You can also couple ad hoc commands with Ansible modules.

Ad hoc commands use a declarative model, calculating and executing the actions required to reach a specified final state. They achieve a form of idempotence by checking the current state before they begin and doing nothing unless the current state is different from the specified final state.

-## Resources
+## Resources

- [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo)
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)

 See you on [Day 65](day65.md)
-
-
-
diff --git a/Days/day65.md b/Days/day65.md
index 2478bf429..63f0d20d6 100644
--- a/Days/day65.md
+++ b/Days/day65.md
@@ -1,13 +1,14 @@
 ---
-title: '#90DaysOfDevOps - Ansible Playbooks - Day 65'
+title: "#90DaysOfDevOps - Ansible Playbooks - Day 65"
 published: false
 description: 90DaysOfDevOps - Ansible Playbooks
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
 cover_image: null
 canonical_url: null
 id: 1049054
 ---
-### Ansible Playbooks
+
+### Ansible Playbooks

 In this section we will take a look at the main reason that I can see at least for Ansible, I mean it is great to take a single command and hit many different servers to perform simple commands such as rebooting a long list of servers and saving the hassle of having to connect to each one individually.

@@ -25,7 +26,7 @@ These playbooks are written in YAML (YAML ain’t markup language) you will find

 Let’s take a look at a simple playbook called playbook.yml.
-```
+```yaml
 - name: Simple Play
   hosts: localhost
   connection: local
@@ -37,30 +38,30 @@ Let’s take a look at a simple playbook called playbook.yml.
       msg: "{{ ansible_os_family }}"
 ```

-You will find the above file [simple_play](days/../Configmgmt/simple_play.yml). If we then use the `ansible-playbook simple_play.yml` command we will walk through the following steps.
+You will find the above file [simple_play](days/../Configmgmt/simple_play.yml). If we then use the `ansible-playbook simple_play.yml` command we will walk through the following steps.

![](Images/Day65_config1.png)

You can see the first task of "gathering steps" happened, but we didn't trigger or ask for this? This module is automatically called by playbooks to gather useful variables about remote hosts. [ansible.builtin.setup](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/setup_module.html)

-Our second task was to set a ping, this is not an ICMP ping but a python script to report back `pong` on successful connectivity to remote or localhost. [ansible.builtin.ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html)
+Our second task was to set a ping; this is not an ICMP ping but a python script to report back `pong` on successful connectivity to a remote host or localhost. [ansible.builtin.ping](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/ping_module.html)

-Then our third or really our second defined task as the first one will run unless you disable was the printing of a message telling us our OS. In this task we are using conditionals, we could run this playbook against all different types of operating systems and this would return the OS name. We are simply messaging this output for ease but we could add a task to say something like:
+Then came our third task, or really our second defined task (as the first one will always run unless you disable it): printing a message telling us our OS.
In this task we are using conditionals; we could run this playbook against all different types of operating systems and it would return the OS name. We are simply messaging this output for ease, but we could add a task to say something like:

-```
-tasks:
+```yaml
+tasks:
   - name: "shut down Debian flavoured systems"
-    command: /sbin/shutdown -t now
+    command: /sbin/shutdown -t now
     when: ansible_os_family == "Debian"
-```
+```

 ### Vagrant to setup our environment

-We are going to use Vagrant to set up our node environment, I am going to keep this at a reasonable 4 nodes but you can hopefully see that this could easily be 300 or 3000 and this is the power of Ansible and other configuration management tools to be able to configure your servers.
+We are going to use Vagrant to set up our node environment. I am going to keep this at a reasonable 4 nodes, but you can hopefully see that this could easily be 300 or 3000, and this is the power of Ansible and other configuration management tools to be able to configure your servers.

You can find this file located here ([Vagrantfile](/Days/Configmgmt/Vagrantfile))

-```
+```ruby
 Vagrant.configure("2") do |config|
   servers=[
     {
@@ -97,7 +98,7 @@ config.vm.base_address = 600
     config.vm.define machine[:hostname] do |node|
       node.vm.box = machine[:box]
       node.vm.hostname = machine[:hostname]
-
+
       node.vm.network :public_network, bridge: "Intel(R) Ethernet Connection (7) I219-V", ip: machine[:ip]

       node.vm.network "forwarded_port", guest: 22, host: machine[:ssh_port], id: "ssh"
@@ -111,49 +112,51 @@ config.vm.base_address = 600
 end
 ```
+Use the `vagrant up` command to spin these machines up in VirtualBox. You might be able to add more memory and you might also want to define a different private_network address for each machine, but this works in my environment. Remember our control box is the Ubuntu desktop we deployed during the Linux section.

-If you are resource contrained then you can also run `vagrant up web01 web02` to only bring up the webservers that we are using here.
+If you are resource constrained then you can also run `vagrant up web01 web02` to only bring up the webservers that we are using here.

### Ansible host configuration

Now that we have our environment ready, we can check ansible and for this we will use our Ubuntu desktop (You could use this but you can equally use any Linux based machine on your network accessible to the network below) as our control, let’s also add the new nodes to our group in the ansible hosts file, you can think of this file as an inventory, an alternative to this could be another inventory file that is called on as part of your ansible command with `-i filename` this could be useful vs using the host file as you can have different files for different environments, maybe production, test and staging. Because we are using the default hosts file we do not need to specify as this would be the default used.

-I have added the following to the default hosts file.
+I have added the following to the default hosts file.

-```
+```ini
 [control]
 ansible-control

-[proxy]
+[proxy]
 loadbalancer

-[webservers]
+[webservers]
 web01
 web02

-[database]
+[database]
 db01
 ```
+
 ![](Images/Day65_config2.png)

Before moving on we want to make sure we can run a command against our nodes, let’s run `ansible nodes -m command -a hostname` this simple command will test that we have connectivity and report back our host names. Also note that I have added these nodes and IPs to my Ubuntu control node within the /etc/hosts file to ensure connectivity.
We might also need to do SSH configuration for each node from the Ubuntu box.

-```
+```text
 192.168.169.140 ansible-control
 192.168.169.130 db01
 192.168.169.131 web01
 192.168.169.132 web02
 192.168.169.133 loadbalancer
 ```
+
 ![](Images/Day65_config3.png)

-At this stage we want to run through setting up SSH keys between your control and your server nodes. This is what we are going to do next, another way here could be to add variables into your hosts file to give username and password. I would advise against this as this is never going to be a best practice.
+At this stage we want to run through setting up SSH keys between your control and your server nodes. This is what we are going to do next; another way here could be to add variables into your hosts file to give a username and password, but I would advise against this as it is never going to be a best practice.

-To set up SSH and share amongst your nodes, follow the steps below, you will be prompted for passwords (`vagrant`) and you will likely need to hit `y` a few times to accept.
+To set up SSH keys and share them amongst your nodes, follow the steps below; you will be prompted for passwords (`vagrant`) and you will likely need to hit `y` a few times to accept.

`ssh-keygen`

@@ -165,28 +168,27 @@ To set up SSH and share amongst your nodes, follow the steps below, you will be

Now if you have all of your VMs switched on then you can run the `ssh-copy-id web01 && ssh-copy-id web02 && ssh-copy-id loadbalancer && ssh-copy-id db01` this will prompt you for your password in our case our password is `vagrant`

-I am not running all my VMs and only running the webservers so I issued `ssh-copy-id web01 && ssh-copy-id web02`
+I am not running all my VMs and only running the webservers so I issued `ssh-copy-id web01 && ssh-copy-id web02`

![](Images/Day65_config7.png)

-Before running any playbooks I like to make sure that I have simple connectivity with my groups so I have ran `ansible webservers -m ping` to test connectivity.
+Before running any playbooks I like to make sure that I have simple connectivity with my groups, so I have run `ansible webservers -m ping` to test connectivity.

![](Images/Day65_config4.png)

-
 ### Our First "real" Ansible Playbook

-Our first Ansible playbook is going to configure our webservers, we have grouped these in our hosts file under the grouping [webservers].
-Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below is showing you the folder and file layout I have created within my ansible control to run this playbook, we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository.
+Our first Ansible playbook is going to configure our webservers; we have grouped these in our hosts file under the grouping [webservers].

-Then we SSH into web01 to check if we have apache installed?
+Before we run our playbook we can confirm that our web01 and web02 do not have apache installed. The top of the screenshot below shows the folder and file layout I have created within my ansible control to run this playbook; we have the `playbook1.yml`, then in the templates folder we have the `index.html.j2` and `ports.conf.j2` files. You can find these files in the folder listed above in the repository.

-![](Images/Day65_config8.png)
+Then we SSH into web01 to check if we have apache installed.

-You can see from the above that we have not got apache installed on our web01 so we can fix this by running the below playbook.
+![](Images/Day65_config8.png)

+You can see from the above that we do not have apache installed on our web01, so we can fix this by running the below playbook.
-```
+```yaml
 - hosts: webservers
   become: yes
   vars:
@@ -224,30 +226,31 @@ You can see from the above that we have not got apache installed on our web01 so
       name: apache2
       state: restarted
 ```
-Breaking down the above playbook:
+
+Breaking down the above playbook:

- `- hosts: webservers` this is saying that our group to run this playbook on is a group called webservers
-- `become: yes` means that our user running the playbook will become root on our remote systems. You will be prompted for the root password.
-- We then have `vars` and this defines some environment variables we want throughout our webservers.
+- `become: yes` means that our user running the playbook will become root on our remote systems. You will be prompted for the root password.
+- We then have `vars` and this defines some environment variables we want throughout our webservers.

-Following this we start our tasks,
+Following this we start our tasks:

- Task 1 is to ensure that apache is running the latest version
-- Task 2 is writing the ports.conf file from our source found in the templates folder.
-- Task 3 is creating a basic index.html file
-- Task 4 is making sure apache is running
+- Task 2 is writing the ports.conf file from our source found in the templates folder.
+- Task 3 is creating a basic index.html file
+- Task 4 is making sure apache is running

Finally we have a handlers section, [Handlers: Running operations on change](https://docs.ansible.com/ansible/latest/user_guide/playbooks_handlers.html)

"Sometimes you want a task to run only when a change is made on a machine. For example, you may want to restart a service if a task updates the configuration of that service, but not if the configuration is unchanged. Ansible uses handlers to address this use case. Handlers are tasks that only run when notified. Each handler should have a globally unique name."
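+The notify/handler relationship that quote describes is the reason our playbook restarts apache only when its configuration actually changes. Stripped down to just the moving parts (using the same task and handler names as our playbook), it looks like this:
+
+```yaml
+  tasks:
+    - name: write the apache2 ports.conf config file
+      template:
+        src: templates/ports.conf.j2
+        dest: /etc/apache2/ports.conf
+      # only queues the handler if the rendered file changed on the host
+      notify: restart apache
+
+  handlers:
+    - name: restart apache
+      service:
+        name: apache2
+        state: restarted
+```
+
+If the template renders identically to what is already on the node, the task reports "ok" rather than "changed" and the handler never fires.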
-At this stage you might be thinking but we have deployed 5 VMs (including our Ubuntu Desktop machine which is acting as our Ansible Control) The other systems will come into play during the rest of the section.
+At this stage you might be thinking: but we have deployed 5 VMs (including our Ubuntu Desktop machine which is acting as our Ansible Control). The other systems will come into play during the rest of the section.

### Run our Playbook

-We are now ready to run our playbook against our nodes. To run our playbook we can use the `ansible-playbook playbook1.yml` We have defined our hosts that our playbook will run against within the playbook and this will walkthrough our tasks that we have defined.
+We are now ready to run our playbook against our nodes. To run our playbook we can use the `ansible-playbook playbook1.yml` command. We have defined the hosts our playbook will run against within the playbook, and this will walk through the tasks that we have defined.

-When the command is complete we get an output showing our plays and tasks, this may take some time you can see from the below image that this took a while to go and install our desired state.
+When the command is complete we get an output showing our plays and tasks; this may take some time, and you can see from the below image that it took a while to install our desired state.
![](Images/Day65_config11.png) @@ -263,13 +266,13 @@ We are going to build on this playbook as we move through the rest of this secti Another thing to add here is that we are only really working with Ubuntu VMs but Ansible is agnostic to the target systems. The alternatives that we have previously mentioned to manage your systems could be server by server (not scalable when you get over a large amount of servers, plus a pain even with 3 nodes) we can also use shell scripting which again we covered in the Linux section but these nodes are potentially different so yes it can be done but then someone needs to maintain and manage those scripts. Ansible is free and hits the easy button vs having to have a specialised script. -## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. See you on [Day 66](day66.md) diff --git a/Days/day66.md b/Days/day66.md index 3a23c3b19..a032177f1 100644 --- a/Days/day66.md +++ b/Days/day66.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66' +title: "#90DaysOfDevOps - Ansible Playbooks Continued... - Day 66" published: false description: 90DaysOfDevOps - Ansible Playbooks Continued... tags: "devops, 90daysofdevops, learning" @@ -7,27 +7,28 @@ cover_image: null canonical_url: null id: 1048712 --- -## Ansible Playbooks Continued... 
-In our last section we started with creating our small lab using a Vagrantfile to deploy 4 machines and we used our Linux machine we created in that section as our ansible control system.
+## Ansible Playbooks (Continued)

-We also ran through a few scenarios of playbooks and at the end we had a playbook that made our web01 and web02 individual webservers.
+In our last section we started with creating our small lab using a Vagrantfile to deploy 4 machines and we used our Linux machine we created in that section as our ansible control system.
+
+We also ran through a few scenarios of playbooks and at the end we had a playbook that made our web01 and web02 individual webservers.

![](Images/Day66_config1.png)

### Keeping things tidy

-Before we get into further automation and deployment we should cover the ability to keep our playbook lean and tidy and how we can separate our taks and handlers into subfolders.
+Before we get into further automation and deployment we should cover the ability to keep our playbook lean and tidy and how we can separate our tasks and handlers into subfolders.

we are basically going to copy our tasks into their own file within a folder.

-```
+```yaml
 - name: ensure apache is at the latest version
   apt: name=apache2 state=latest

 - name: write the apache2 ports.conf config file
-  template:
-    src=templates/ports.conf.j2
+  template:
+    src=templates/ports.conf.j2
     dest=/etc/apache2/ports.conf
   notify: restart apache

@@ -44,9 +45,9 @@ we are basically going to copy our tasks into their own file within a folder.
       state: started
 ```

-and the same for the handlers.
+and the same for the handlers.

-```
+```yaml
 - name: restart apache
   service:
     name: apache2
@@ -59,7 +60,7 @@ You can test this on your control machine. If you have copied the files from the

![](Images/Day66_config2.png)

-Let's find out what simple change I made. Using `curl web01:8000`
+Let's find out what simple change I made.
Using `curl web01:8000` ![](Images/Day66_config3.png) @@ -67,25 +68,25 @@ We have just tidied up our playbook and started to separate areas that could mak ### Roles and Ansible Galaxy -At the moment we have deployed 4 VMs and we have configured 2 of these VMs as our webservers but we have some more specific functions namely, a database server and a loadbalancer or proxy. In order for us to do this and tidy up our repository we can use roles within Ansible. +At the moment we have deployed 4 VMs and we have configured 2 of these VMs as our webservers but we have some more specific functions namely, a database server and a loadbalancer or proxy. In order for us to do this and tidy up our repository we can use roles within Ansible. -To do this we will use the `ansible-galaxy` command which is there to manage ansible roles in shared repositories. +To do this we will use the `ansible-galaxy` command which is there to manage ansible roles in shared repositories. ![](Images/Day66_config4.png) -We are going to use `ansible-galaxy` to create a role for apache2 which is where we are going to put our specifics for our webservers. +We are going to use `ansible-galaxy` to create a role for apache2 which is where we are going to put our specifics for our webservers. ![](Images/Day66_config5.png) -The above command `ansible-galaxy init roles/apache2` will create the folder structure that we have shown above. Our next step is we need to move our existing tasks and templates to the relevant folders in the new structure. +The above command `ansible-galaxy init roles/apache2` will create the folder structure that we have shown above. Our next step is we need to move our existing tasks and templates to the relevant folders in the new structure. ![](Images/Day66_config6.png) -Copy and paste is easy to move those files but we also need to make a change to the tasks/main.yml so that we point this to the apache2_install.yml. 
+Copy and paste makes it easy to move those files, but we also need to make a change to the tasks/main.yml so that it points to the apache2_install.yml.

-We also need to change our playbook now to refer to our new role. In the playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways as we changed these between the two versions. We need to change our playbook to use this role as per below:
+We also need to change our playbook now to refer to our new role. In playbook1.yml and playbook2.yml we determine our tasks and handlers in different ways, as we changed these between the two versions. We need to change our playbook to use this role as per below:

-```
+```yaml
 - hosts: webservers
   become: yes
   vars:
@@ -98,32 +99,32 @@ We also need to change our playbook now to refer to our new role. In the playboo

![](Images/Day66_config7.png)

-We can now run our playbook again this time with the new playbook name `ansible-playbook playbook3.yml` you will notice the depreciation, we can fix that next.
+We can now run our playbook again, this time with the new playbook name: `ansible-playbook playbook3.yml`. You will notice the deprecation warning; we can fix that next.

![](Images/Day66_config8.png)

-Ok, the depreciation although our playbook ran we should fix our ways now, in order to do that I have changed the include option in the tasks/main.yml to now be import_tasks as per below.
+OK, about that deprecation warning: although our playbook ran, we should fix our ways now. In order to do that I have changed the include option in the tasks/main.yml to import_tasks, as per below.
![](Images/Day66_config9.png)

You can find these files in the [ansible-scenario3](Days/Configmgmt/ansible-scenario3)

-We are also going to create a few more roles whilst using `ansible-galaxy` we are going to create:
+We are also going to create a few more roles with `ansible-galaxy`:

- common = for all of our servers (`ansible-galaxy init roles/common`)
- nginx = for our loadbalancer (`ansible-galaxy init roles/nginx`)

![](Images/Day66_config10.png)

-I am going to leave this one here and in the next session we will start working on those other nodes we have deployed but have not done anything with yet.
+I am going to leave this one here; in the next session we will start working on those other nodes we have deployed but have not done anything with yet.

-## Resources
+## Resources

- [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo)
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)
- [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u)

-This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format.
+This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format.
See you on [Day 67](day67.md) diff --git a/Days/day67.md b/Days/day67.md index 545a85363..142f101d0 100644 --- a/Days/day67.md +++ b/Days/day67.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67' +title: "#90DaysOfDevOps - Using Roles & Deploying a Loadbalancer - Day 67" published: false description: 90DaysOfDevOps - Using Roles & Deploying a Loadbalancer tags: "devops, 90daysofdevops, learning" @@ -7,20 +7,22 @@ cover_image: null canonical_url: null id: 1048713 --- + ## Using Roles & Deploying a Loadbalancer -In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders. +In the last session we covered roles and used the `ansible-galaxy` command to help create our folder structures for some roles that we are going to use. We finished up with a much tidier working repository for our configuration code as everything is hidden away in our role folders. -However we have only used the apache2 role and have a working playbook3.yaml to handle our webservers. +However we have only used the apache2 role and have a working playbook3.yaml to handle our webservers. -At this point if you have only used `vagrant up web01 web02` now is the time to run `vagrant up loadbalancer` this will bring up another Ubuntu system that we will use as our Load Balancer/Proxy. +At this point if you have only used `vagrant up web01 web02` now is the time to run `vagrant up loadbalancer` this will bring up another Ubuntu system that we will use as our Load Balancer/Proxy. -We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready. 
+We have already defined this new machine in our hosts file, but we do not have the ssh key configured until it is available, so we need to also run `ssh-copy-id loadbalancer` when the system is up and ready.

### Common role

-I created at the end of yesterdays session the role of `common`, common will be used across all of our servers where as the other roles are specific to use cases, now the applications I am going to install as common as spurious and I cannot see many reasons for this to be the case but it shows the objective. In our common role folder structure, navigate to tasks folder and you will have a main.yml. In this yaml we need to point this to our install_tools.yml file and we do this by adding a line `- import_tasks: install_tools.yml` this used to be `include` but this is going to be depreciated soon enough so we are using import_tasks.
-```
+At the end of yesterday's session I created the `common` role. Common will be used across all of our servers, whereas the other roles are specific to use cases; the applications I am going to install as common are somewhat arbitrary and I cannot see many reasons for them to be needed everywhere, but it shows the objective. In our common role folder structure, navigate to the tasks folder and you will have a main.yml. In this YAML file we need to point to our install_tools.yml file, and we do this by adding the line `- import_tasks: install_tools.yml`. This used to be `include`, but that is going to be deprecated soon enough, so we are using import_tasks.
+
+```yaml
 - name: "Install Common packages"
   apt: name={{ item }} state=latest
   with_items:
@@ -29,9 +31,9 @@ I created at the end of yesterdays session the role of `common`, common will be
    - figlet
 ```

-In our playbook we then add in the common role for each host block.
+In our playbook we then add in the common role for each host block.

-```
+```yaml
 - hosts: webservers
   become: yes
   vars:
@@ -45,13 +47,13 @@
### nginx

-The next phase is for us to install and configure nginx on our loadbalancer vm. Like the common folder structure, we have the nginx based on the last session.
+The next phase is for us to install and configure nginx on our loadbalancer VM. Like the common folder structure, we have the nginx role based on the last session.

-First of all we are going to add a host block to our playbook. This block will include our common role and then our new nginx role.
+First of all we are going to add a host block to our playbook. This block will include our common role and then our new nginx role.

The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scenario4/playbook4.yml)

-```
+```yaml
 - hosts: webservers
   become: yes
   vars:
@@ -62,32 +64,32 @@ The playbook can be found here. [playbook4.yml](Days/../Configmgmt/ansible-scena
     - common
     - apache2

-- hosts: proxy
+- hosts: proxy
   become: yes
-  roles:
+  roles:
     - common
     - nginx
 ```

-In order for this to mean anything, we have to define our tasks that we wish to run, in the same way we will modify the main.yml in tasks to point to two files this time, one for installation and one for configuration.
+In order for this to mean anything, we have to define the tasks that we wish to run. In the same way as before, we will modify the main.yml in tasks to point to two files this time, one for installation and one for configuration.

-There are some other files that I have modified based on the outcome we desire, take a look in the folder [ansible-scenario4](Days/Configmgmt/ansible-scenario4) for all the files changed. You should check the folders tasks, handlers and templates in the nginx folder and you will find those additional changes and files.
+There are some other files that I have modified based on the outcome we desire; take a look in the folder [ansible-scenario4](Days/Configmgmt/ansible-scenario4) for all the files changed.
You should check the folders tasks, handlers and templates in the nginx folder and you will find those additional changes and files.

-### Run the updated playbook
+### Run the updated playbook

-Since yesterday we have added the common role which will now install some packages on our system and then we have also added our nginx role which includes installation and configuration.
+Since yesterday we have added the common role, which will now install some packages on our system, and we have also added our nginx role, which includes installation and configuration.

Let's run our playbook4.yml using the `ansible-playbook playbook4.yml`

![](Images/Day67_config1.png)

-Now that we have our webservers and loadbalancer configured we should now be able to go to http://192.168.169.134/ which is the IP address of our loadbalancer.
+Now that we have our webservers and loadbalancer configured, we should be able to go to http://192.168.169.134/ which is the IP address of our loadbalancer.

![](Images/Day67_config2.png)

-If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below: You would need to update with your webserver IP addresses.
+If you are following along and you do not have this state then it could be down to the server IP addresses you have in your environment. The file can be found in `templates\mysite.j2` and looks similar to the below; you would need to update it with your webserver IP addresses.
-```
+```nginx
 upstream webservers {
     server 192.168.169.131:8000;
     server 192.168.169.132:8000;
@@ -96,24 +98,25 @@ If you are following along and you do not have this state then it could be down

 server {
     listen 80;

-    location / {
+    location / {
         proxy_pass http://webservers;
     }
 }
 ```
+
-I am pretty confident that what we have installed is all good but let's use an adhoc command using ansible to check these common tools installation.
+I am pretty confident that what we have installed is all good, but let's use an ad hoc ansible command to check the installation of these common tools.

`ansible loadbalancer -m command -a neofetch`

![](Images/Day67_config3.png)

-## Resources
+## Resources

- [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo)
- [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ)
- [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s)
- [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u)

-This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format.
+This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format.
See you on [Day 68](day68.md)

diff --git a/Days/day68.md b/Days/day68.md
index c25eb5ced..c95694dcd 100644
--- a/Days/day68.md
+++ b/Days/day68.md
@@ -1,23 +1,24 @@
---
-title: '#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68'
+title: "#90DaysOfDevOps - Tags, Variables, Inventory & Database Server config - Day 68"
published: false
-description: '90DaysOfDevOps - Tags, Variables, Inventory & Database Server config'
+description: "90DaysOfDevOps - Tags, Variables, Inventory & Database Server config"
tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048780
---
+
## Tags, Variables, Inventory & Database Server config

-### Tags
+### Tags

-As we left our playbook in the session yesterday we would need to run every tasks and play within that playbook. Which means we would have to run the webservers and loadbalancer plays and tasks to completion.
+As we left our playbook in the session yesterday, we would need to run every task and play within that playbook, which means we would have to run the webservers and loadbalancer plays and tasks to completion.

-However tags can enable us to seperate these out if we want. This could be an effcient move if we have extra large and long playbooks in our environments.
+However tags can enable us to separate these out if we want. This could be an efficient move if we have extra large and long playbooks in our environments.

In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/ansible-scenario5/playbook5.yml)

-```
+```Yaml
- hosts: webservers
  become: yes
  vars:
@@ -29,39 +30,40 @@ In our playbook file, in this case we are using [ansible-scenario5](Configmgmt/a
    - apache2
  tags: web

-- hosts: proxy
+- hosts: proxy
  become: yes
-  roles:
+  roles:
    - common
    - nginx
  tags: proxy
```

-We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` and the list tags is going to outline the tags we have defined in our playbook.
+
+We can then confirm this by using the `ansible-playbook playbook5.yml --list-tags` command, which is going to outline the tags we have defined in our playbook.

![](Images/Day68_config1.png)

-Now if we wanted to target just the proxy we could do this by running `ansible-playbook playbook5.yml --tags proxy` and this will as you can see below only run the playbook against the proxy.
+Now if we wanted to target just the proxy we could do this by running `ansible-playbook playbook5.yml --tags proxy` and this will, as you can see below, only run the playbook against the proxy.

![](Images/Day68_config2.png)

-tags can be added at the task level as well so we can get really granular on where and what you want to happen. It could be application focused tags, we could go through tasks for example and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is
+Tags can be added at the task level as well so we can get really granular on where and what we want to happen. It could be application-focused tags; we could go through tasks for example and tag our tasks based on installation, configuration or removal. Another very useful tag you can use is

-`tag: always` this will ensure no matter what --tags you are using in your command if something is tagged with the always value then it will always be ran when you run the ansible-playbook command.
+`tags: always` - this will ensure that no matter what `--tags` you are using in your command, if something is tagged with the always value then it will always be run when you run the ansible-playbook command.

-With tags we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously in our instance that would mean the same as running the the playbook but if we had multiple other plays then this would make sense.
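As a sketch of what task-level tags could look like (an illustrative example, not taken from the scenario files - the task names and tag values are assumptions):

```Yaml
- hosts: webservers
  become: yes
  tasks:
    - name: Install apache2
      apt:
        name: apache2
        state: latest
      tags:
        - install
        - web

    - name: Template the index page
      template:
        src: index.html.j2
        dest: /var/www/index.html
      tags:
        - configure
        - web

    - name: Report the host being configured
      debug:
        msg: "Configuring {{ inventory_hostname }}"
      tags:
        - always
```

With the above, running with `--tags configure` would run only the template task, while the debug task runs every time because it is tagged with the always value.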
+With tags we can also bundle multiple tags together and if we choose to run `ansible-playbook playbook5.yml --tags proxy,web` this will run all of the items with those tags. Obviously in our instance that would mean the same as running the playbook but if we had multiple other plays then this would make sense.

-You can also define more than one tag.
+You can also define more than one tag.

-### Variables
+### Variables

-There are two main types of variables within Ansible.
+There are two main types of variables within Ansible.

-- User created
-- Ansible Facts
+- User created
+- Ansible Facts

### Ansible Facts

-Each time we have ran our playbooks, we have had a task that we have not defined called "Gathering facts" we can use these variables or facts to make things happen with our automation tasks.
+Each time we have run our playbooks, we have had a task that we have not defined called "Gathering facts"; we can use these variables or facts to make things happen with our automation tasks.

![](Images/Day68_config3.png)

@@ -69,9 +71,9 @@ If we were to run the following `ansible proxy -m setup` command we should see a

![](Images/Day68_config4.png)

-If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, bios version. A lot of useful information if we want to leverage this and use this in our playbooks.
+If you open this file you can see all sorts of information for our command. We can get our IP addresses, architecture, BIOS version. A lot of useful information if we want to leverage this and use it in our playbooks.

-An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard coded the IP addresses of our webservers.
You can do this by creating a for loop in your mysite.j2 and this is going to cycle through the group [webservers] this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration.
+An idea would be to potentially use one of these variables within our nginx template mysite.j2 where we hard-coded the IP addresses of our webservers. You can do this by creating a for loop in your mysite.j2 that cycles through the group [webservers]; this enables us to have more than our 2 webservers automatically and dynamically created or added to this load balancer configuration.

```
#Dynamic Config for server {{ ansible_facts['nodename'] }}
@@ -84,18 +86,19 @@ An idea would be to potentially use one of these variables within our nginx temp

server {

    listen 80;

-    location / {
+    location / {
        proxy_pass http://webservers;
    }

}
```

-The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.
+
+The outcome of the above will look the same as it does right now but if we added more webservers or removed one this would dynamically change the proxy configuration. For this to work you will need to have name resolution configured.

### User created

-User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.
+User created variables are what we have created ourselves. If you take a look in our playbook you will see we have `vars:` and then a list of 3 variables we are using there.

-```
+```Yaml
- hosts: webservers
  become: yes
  vars:
@@ -107,25 +110,25 @@ User created variables are what we have created ourselves.
If you take a look in - apache2 tags: web -- hosts: proxy +- hosts: proxy become: yes - roles: + roles: - common - nginx tags: proxy ``` -We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well. +We can however keep our playbook clear of variables by moving them to their own file. We are going to do this but we will move into the [ansible-scenario6](Configmgmt/ansible-scenario6) folder. In the root of that folder we are going to create a group_vars folder. We are then going to create another folder called all (all groups are going to get these variables). In there we will create a file called `common_variables.yml` and we will copy our variables from our playbook into this file. Removing them from the playbook along with vars: as well. -``` +```Yaml http_port: 8000 https_port: 4443 html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!" ``` -Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks. +Because we are associating this as a global variable we could also add in our NTP and DNS servers here as well. The variables are set from the folder structure that we have created. You can see below how clean our Playbook now looks. 
-``` +```Yaml - hosts: webservers become: yes roles: @@ -133,20 +136,20 @@ Because we are associating this as a global variable we could also add in our NT - apache2 tags: web -- hosts: proxy +- hosts: proxy become: yes - roles: + roles: - common - nginx tags: proxy ``` -One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below: +One of those variables was the http_port, we can use this again in our for loop within the mysite.j2 as per below: -``` +```J2 #Dynamic Config for server {{ ansible_facts['nodename'] }} upstream webservers { - {% for host in groups['webservers'] %} + {% for host in groups['webservers'] %} server {{ hostvars[host]['ansible_facts']['nodename'] }}:{{ http_port }}; {% endfor %} } @@ -154,44 +157,45 @@ One of those variables was the http_port, we can use this again in our for loop server { listen 80; - location / { + location / { proxy_pass http://webservers; } } ``` -We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on. +We can also define an ansible fact in our roles/apache2/templates/index.html.j2 file so that we can understand which webserver we are on. -``` +```J2

{{ html_welcome_msg }}! I'm webserver {{ ansible_facts['nodename'] }}

``` -The results of running the `ansible-playbook playbook6.yml` command with our variable changes means that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group. + +The results of running the `ansible-playbook playbook6.yml` command with our variable changes means that when we hit our loadbalancer you can see that we hit either of the webservers we have in our group. ![](Images/Day68_config5.png) -We could also add a folder called host_vars and create a web01.yml and have a specific message or change what that looks like on a per host basis if we wish. +We could also add a folder called host_vars and create a web01.yml and have a specific message or change what that looks like on a per host basis if we wish. ### Inventory Files -So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example production and staging. I am not going to create more environments. But we are able to create our own host files. +So far we have used the default hosts file in the /etc/ansible folder to determine our hosts. We could however have different files for different environments, for example production and staging. I am not going to create more environments. But we are able to create our own host files. -We can create multiple files for our different inventory of servers and nodes. We would call these using `ansible-playbook -i dev playbook.yml` you can also define variables within your hosts file and then print that out or leverage that variable somewhere else in your playbooks for example in the example and training course I am following along to below they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message. +We can create multiple files for our different inventory of servers and nodes. 
We would call these using `ansible-playbook -i dev playbook.yml`. You can also define variables within your hosts file and then print them out or leverage them somewhere else in your playbooks. For example, in the training course I am following along to below, they have added the environment variable created in the host file to the loadbalancer web page template to show the environment as part of the web page message.

### Deploying our Database server

-We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access.
+We still have one more machine we have not powered up yet and configured. We can do this using `vagrant up db01` from where our Vagrantfile is located. When this is up and accessible we then need to make sure the SSH key is copied over using `ssh-copy-id db01` so that we can access it.

We are going to be working from the [ansible-scenario7](Configmgmt/ansible-scenario7) folder

-Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql"
+Let's then use `ansible-galaxy init roles/mysql` to create a new folder structure for a new role called "mysql"

-In our playbook we are going to add a new play block for the database configuration. We have our group database defined in our /etc/ansible/hosts file.
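As a sketch of how these groups could look in the hosts file (the host names are assumptions based on this scenario, and the `env` variable is a hypothetical addition like the one used in the training course mentioned above):

```
[webservers]
web01
web02

[proxy]
loadbalancer

[database]
db01

[all:vars]
env=dev
```

A copy of this file saved as `dev` could then be used with `ansible-playbook -i dev playbook.yml`.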
We then instruct our database group to have the role common and a new role called mysql which we created in the previous step. We are also tagging our database group with database, this means as we discussed earlier we can choose to only run against these tags if we wish. -``` +```Yaml - hosts: webservers become: yes roles: @@ -205,7 +209,7 @@ In our playbook we are going to add a new play block for the database configurat roles: - common - nginx - tags: + tags: proxy - hosts: database @@ -216,11 +220,11 @@ In our playbook we are going to add a new play block for the database configurat tags: database ``` -Within our roles folder structure you will now have the tree automatically created, we need to populate the following: +Within our roles folder structure you will now have the tree automatically created, we need to populate the following: -Handlers - main.yml +Handlers - main.yml -``` +```Yaml # handlers file for roles/mysql - name: restart mysql service: @@ -230,9 +234,9 @@ Handlers - main.yml Tasks - install_mysql.yml, main.yml & setup_mysql.yml -install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running. +install_mysql.yml - this task is going to be there to install mysql and ensure that the service is running. -``` +```Yaml - name: "Install Common packages" apt: name={{ item }} state=latest with_items: @@ -254,17 +258,17 @@ install_mysql.yml - this task is going to be there to install mysql and ensure t state: started ``` -main.yml is a pointer file that will suggest that we import_tasks from these files. +main.yml is a pointer file that will suggest that we import_tasks from these files. -``` +```Yaml # tasks file for roles/mysql - import_tasks: install_mysql.yml - import_tasks: setup_mysql.yml ``` -setup_mysql.yml - This task will create our database and database user. +setup_mysql.yml - This task will create our database and database user. 
-``` +```Yaml - name: Create my.cnf configuration file template: src=templates/my.cnf.j2 dest=/etc/mysql/conf.d/mysql.cnf notify: restart mysql @@ -272,8 +276,8 @@ setup_mysql.yml - This task will create our database and database user. - name: Create database user with name 'devops' and password 'DevOps90' with all database privileges community.mysql.mysql_user: login_unix_socket: /var/run/mysqld/mysqld.sock - login_user: "{{ mysql_user_name }}" - login_password: "{{ mysql_user_password }}" + login_user: "{{ mysql_user_name }}" + login_password: "{{ mysql_user_password }}" name: "{{db_user}}" password: "{{db_pass}}" priv: '*.*:ALL' @@ -282,15 +286,15 @@ setup_mysql.yml - This task will create our database and database user. - name: Create a new database with name '90daysofdevops' mysql_db: - login_user: "{{ mysql_user_name }}" - login_password: "{{ mysql_user_password }}" + login_user: "{{ mysql_user_name }}" + login_password: "{{ mysql_user_password }}" name: "{{ db_name }}" state: present ``` -You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file. +You can see from the above we are using some variables to determine some of our configuration such as passwords, usernames and databases, this is all stored in our group_vars/all/common_variables.yml file. -``` +```Yaml http_port: 8000 https_port: 4443 html_welcome_msg: "Hello 90DaysOfDevOps - Welcome to Day 68!" 
@@ -301,48 +305,49 @@ db_user: devops
db_pass: DevOps90
db_name: 90DaysOfDevOps
```

-We also have the my.cnf.j2 file in the templates folder, which looks like below:
-```
-[mysql]
+We also have the my.cnf.j2 file in the templates folder, which looks like below:
+
+```J2
+[mysql]
bind-address = 0.0.0.0
-```
+```

-### Running the playbook
+### Running the playbook

-Now we have our VM up and running and we have our configuration files in place, we are now ready to run our playbook which will include everything we have done before if we run the following `ansible-playbook playbook7.yml` or we could choose to just deploy to our database group with the `ansible-playbook playbook7.yml --tags database` command, which will just run our new configuration files.
+Now that we have our VM up and running and our configuration files in place, we are ready to run our playbook. Running `ansible-playbook playbook7.yml` will include everything we have done before, or we could choose to just deploy to our database group with the `ansible-playbook playbook7.yml --tags database` command, which will just run our new configuration files.

-I ran only against the database tag but I stumbled across an error. This error tells me that we do not have pip3 (Python) installed. We can fix this by adding this to our common tasks and install
+I ran only against the database tag but I stumbled across an error. This error tells me that we do not have pip3 (Python) installed. We can fix this by adding pip3 to our common tasks and installing it.

![](Images/Day68_config6.png)

-We fixed the above and ran the playbook again and we have a successful change.
+We fixed the above and ran the playbook again and we have a successful change.

![](Images/Day68_config7.png)

-We should probably make sure that everything is how we want it to be on our newly configured db01 server. We can do this from our control node using the `ssh db01` command.
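A minimal sketch of what that common-task addition could look like (an assumption about the fix, not the author's exact change):

```Yaml
# Hypothetical addition to roles/common/tasks/main.yml
- name: Install pip3 so the MySQL modules have the Python dependencies they need
  apt:
    name: python3-pip
    state: present
```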
+We should probably make sure that everything is how we want it to be on our newly configured db01 server. We can do this from our control node using the `ssh db01` command.

-To connect to mySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt.
+To connect to MySQL I used `sudo /usr/bin/mysql -u root -p` and gave the vagrant password for root at the prompt.

When we have connected let's first make sure we have our user created called devops. `select user, host from mysql.user;`

![](Images/Day68_config8.png)

-Now we can issue the `SHOW DATABASES;` command to see our new database that has also been created.
+Now we can issue the `SHOW DATABASES;` command to see our new database that has also been created.

![](Images/Day68_config9.png)

-I actually used root to connect but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p` but the password here is DevOps90.
+I actually used root to connect but we could also now log in with our devops account in the same way using `sudo /usr/bin/mysql -u devops -p` but the password here is DevOps90.

-One thing I have found that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` in order to successfully connect to my db01 mysql instance and now everytime I run this it reports a change when creating the user, any suggestions would be greatly appreciated.
+One thing I have found is that in our `setup_mysql.yml` I had to add the line `login_unix_socket: /var/run/mysqld/mysqld.sock` in order to successfully connect to my db01 MySQL instance, and now every time I run this it reports a change when creating the user; any suggestions would be greatly appreciated.
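One possible answer to that question, offered as a suggestion rather than a tested fix: the `community.mysql.mysql_user` module supports an `update_password` parameter, and setting it to `on_create` means the password is only set when the user is first created, which should stop the task reporting a change on every run. A sketch of the adjusted task:

```Yaml
- name: Create database user with name 'devops' and password 'DevOps90' with all database privileges
  community.mysql.mysql_user:
    login_unix_socket: /var/run/mysqld/mysqld.sock
    login_user: "{{ mysql_user_name }}"
    login_password: "{{ mysql_user_password }}"
    name: "{{ db_user }}"
    password: "{{ db_pass }}"
    update_password: on_create
    priv: "*.*:ALL"
```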
-## Resources +## Resources - [What is Ansible](https://www.youtube.com/watch?v=1id6ERvfozo) - [Ansible 101 - Episode 1 - Introduction to Ansible](https://www.youtube.com/watch?v=goclfp6a2IQ) - [NetworkChuck - You need to learn Ansible right now!](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) - [Your complete guide to Ansible](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) -This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. +This final playlist listed above is where a lot of the code and ideas came from for this section, a great resource and walkthrough in video format. See you on [Day 69](day69.md) diff --git a/Days/day69.md b/Days/day69.md index 700f4b198..19d1acbf7 100644 --- a/Days/day69.md +++ b/Days/day69.md @@ -1,17 +1,18 @@ --- -title: '#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69' +title: "#90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault - Day 69" published: false -description: '90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault' +description: "90DaysOfDevOps - All other things Ansible - Automation Controller (Tower), AWX, Vault" tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048714 --- + ## All other things Ansible - Automation Controller (Tower), AWX, Vault -Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible. +Rounding out the section on Configuration Management I wanted to have a look into the other areas that you might come across when dealing with Ansible. -There are a lot of products that make up the Ansible Automation platform. +There are a lot of products that make up the Ansible Automation platform. 
Red Hat Ansible Automation Platform is a foundation for building and operating automation across an organization. The platform includes all the tools needed to implement enterprise-wide automation. @@ -19,40 +20,40 @@ Red Hat Ansible Automation Platform is a foundation for building and operating a I will try and cover some of these in this post. But for more information then the official Red Hat Ansible site is going to have lots more information. [Ansible.com](https://www.ansible.com/?hsLang=en-us) -### Ansible Automation Controller | AWX +### Ansible Automation Controller | AWX -I have bundled these two together because the Automation Controller and AWX are very similar in what they offer. +I have bundled these two together because the Automation Controller and AWX are very similar in what they offer. -The AWX project or AWX for short is an open-source community project, sponsored by Red Hat that enables you to better control your Ansible projects within your environments. AWX is the upstream project from which the automation controller component is derived. +The AWX project or AWX for short is an open-source community project, sponsored by Red Hat that enables you to better control your Ansible projects within your environments. AWX is the upstream project from which the automation controller component is derived. -If you are looking for an enterprise solution then you will be looking for the Automation Controller or you might have previously heard this as Ansible Tower. The Ansible Automation Controller is the control plane for the Ansible Automation Platform. +If you are looking for an enterprise solution then you will be looking for the Automation Controller or you might have previously heard this as Ansible Tower. The Ansible Automation Controller is the control plane for the Ansible Automation Platform. -Both AWX and the Automation Controller bring the following features above everything else we have covered in this section thus far. 
+Both AWX and the Automation Controller bring the following features above everything else we have covered in this section thus far.

-- User Interface
-- Role Based Access Control
-- Workflows
-- CI/CD integration
+- User Interface
+- Role Based Access Control
+- Workflows
+- CI/CD integration

-The Automation Controller is the enterprise offering where you pay for your support.
+The Automation Controller is the enterprise offering where you pay for your support.

-We are going to take a look at deploying AWX within our minikube Kubernetes environment.
+We are going to take a look at deploying AWX within our minikube Kubernetes environment.

-### Deploying Ansible AWX
+### Deploying Ansible AWX

-AWX does not need to be deployed to a Kubernetes cluster, the [github](https://github.com/ansible/awx) for AWX from ansible will give you that detail. However starting in version 18.0, the AWX Operator is the preferred way to install AWX.
+AWX does not need to be deployed to a Kubernetes cluster; the [GitHub repository](https://github.com/ansible/awx) for AWX from Ansible will give you that detail. However, starting in version 18.0, the AWX Operator is the preferred way to install AWX.

-First of all we need a minikube cluster. We can do this if you followed along during the Kubernetes section by creating a new minikube cluster with the `minikube start --cpus=4 --memory=6g --addons=ingress` command.
+First of all we need a minikube cluster. We can create one, as we did during the Kubernetes section, with the `minikube start --cpus=4 --memory=6g --addons=ingress` command.

![](Images/Day69_config2.png)

-The official [Ansible AWX Operator](https://github.com/ansible/awx-operator) can be found here. As stated in the install instructions you should clone this repository and then run through the deployment.
+The official [Ansible AWX Operator](https://github.com/ansible/awx-operator) can be found here.
As stated in the install instructions you should clone this repository and then run through the deployment.

-I forked the repo above and then ran `git clone https://github.com/MichaelCade/awx-operator.git` my advice is you do the same and do not use my repository as I might change things or it might not be there.
+I forked the repo above and then ran `git clone https://github.com/MichaelCade/awx-operator.git`; my advice is that you do the same and do not use my repository, as I might change things or it might not be there.

-In the cloned repository you will find a awx-demo.yml file we need to change `NodePort` for `ClusterIP` as per below:
+In the cloned repository you will find an awx-demo.yml file; we need to change `NodePort` to `ClusterIP` as per below:

-```
+```Yaml
---
apiVersion: awx.ansible.com/v1beta1
kind: AWX
@@ -62,7 +63,7 @@ spec:
  service_type: ClusterIP
```

-The next step is to define our namespace where we will be deploying the awx operator, using the `export NAMESPACE=awx` command then followed by `make deploy` we will start the deployment.
+The next step is to define our namespace where we will be deploying the awx operator, using the `export NAMESPACE=awx` command, followed by `make deploy` to start the deployment.

![](Images/Day69_config3.png)

@@ -74,17 +75,17 @@ Within the cloned repository you will find a file called awx-demo.yml we now wan

![](Images/Day69_config5.png)

-You can keep an eye on the progress with `kubectl get pods -n awx -w` which will keep a visual watch on what is happening.
+You can keep an eye on the progress with `kubectl get pods -n awx -w` which will keep a visual watch on what is happening.

-You should have something that resembles the image you see below when everything is running.
+You should have something that resembles the image you see below when everything is running.
![](Images/Day69_config6.png) -Now we should be able to access our awx deployment after running in a new terminal `minikube service awx-demo-service --url -n $NAMESPACE` to expose this through the minikube ingress. +Now we should be able to access our awx deployment after running in a new terminal `minikube service awx-demo-service --url -n $NAMESPACE` to expose this through the minikube ingress. ![](Images/Day69_config7.png) -If we then open a browser to that address [] you can see we are prompted for username and password. +If we then open a browser to that address [] you can see we are prompted for username and password. ![](Images/Day69_config8.png) @@ -92,19 +93,19 @@ The username by default is admin, to get the password we can run the following c ![](Images/Day69_config9.png) -Obviously this then gives you a UI to manage your playbook and configuration management tasks in a centralised location, it also allows you as a team to work together vs what we have been doing so far here where we have been running from one ansible control station. +Obviously this then gives you a UI to manage your playbook and configuration management tasks in a centralised location, it also allows you as a team to work together vs what we have been doing so far here where we have been running from one ansible control station. -This is another one of those areas where you could probably go and spend another length of time walking through the capabilities within this tool. +This is another one of those areas where you could probably go and spend another length of time walking through the capabilities within this tool. -I will call out a great resource from Jeff Geerling, which goes into more detail on using Ansible AWX. [Ansible 101 - Episode 10 - Ansible Tower and AWX](https://www.youtube.com/watch?v=iKmY4jEiy_A&t=752s) +I will call out a great resource from Jeff Geerling, which goes into more detail on using Ansible AWX. 
[Ansible 101 - Episode 10 - Ansible Tower and AWX](https://www.youtube.com/watch?v=iKmY4jEiy_A&t=752s)

In this video he also goes into great detail on the differences between Automation Controller (Previously Ansible Tower) and Ansible AWX (Free and Open Source).

-### Ansible Vault
+### Ansible Vault

-`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section we have skipped over and we have put some of our sensitive information in plain text.
+`ansible-vault` allows us to encrypt and decrypt Ansible data files. Throughout this section we have skipped over this and put some of our sensitive information in plain text.

-Built in to the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information.
+Built in to the Ansible binary is `ansible-vault` which allows us to mask away this sensitive information.
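As a sketch of how this could be applied to our earlier scenario (the file path comes from the variables file we created; the commands are standard `ansible-vault` usage):

```
ansible-vault encrypt group_vars/all/common_variables.yml

# The playbook then needs the vault password to decrypt the variables at run time
ansible-playbook playbook7.yml --ask-vault-pass
```

`ansible-vault edit` and `ansible-vault decrypt` can be used to work with the file afterwards.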
-This post wraps up our look into configuration management, we next move into CI/CD Pipelines and some of the tools and processes that we might see and use out there to achieve this workflow for our application development and release.
+This post wraps up our look into configuration management; next we move into CI/CD Pipelines and some of the tools and processes that we might see and use out there to achieve this workflow for our application development and release.

See you on [Day 70](day70.md)

diff --git a/Days/day70.md b/Days/day70.md
index 43ea50819..5df69ed7a 100644
--- a/Days/day70.md
+++ b/Days/day70.md
@@ -1,8 +1,8 @@
---
-title: '#90DaysOfDevOps - The Big Picture: CI/CD Pipelines - Day 70'
+title: "#90DaysOfDevOps - The Big Picture: CI/CD Pipelines - Day 70"
published: false
description: 90DaysOfDevOps - The Big Picture CI/CD Pipelines
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048836
@@ -10,13 +10,13 @@ id: 1048836

## The Big Picture: CI/CD Pipelines

-A CI/CD (Continous Integration/Continous Deployment) Pipeline implementation is the backbone of the modern DevOps environment.
+A CI/CD (Continuous Integration/Continuous Deployment) Pipeline implementation is the backbone of the modern DevOps environment.

It bridges the gap between development and operations by automating the build, test and deployment of applications.

-We covered a lot of this Continous mantra in the opening section of the challenge. But to reiterate:
+We covered a lot of this continuous mantra in the opening section of the challenge. But to reiterate:

-Continous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliabily. Automated build and test workflow steps triggered by Contininous Integration ensures that code changes being merged into the repository are reliable.
+Continuous Integration (CI) is a more modern software development practice in which incremental code changes are made more frequently and reliably. Automated build and test workflow steps triggered by Continuous Integration ensure that code changes being merged into the repository are reliable.

That code / Application is then delivered quickly and seamlessly as part of the Continuous Deployment process.

@@ -24,7 +24,7 @@ That code / Application is then delivered quickly and seamlessly as part of the

- Ship software quickly and efficiently
- Facilitates an effective process for getting applications to market as fast as possible
-- A continous flow of bug fixes and new features without waiting months or years for version releases.
+- A continuous flow of bug fixes and new features without waiting months or years for version releases.

The ability for developers to make small impactful changes regular means we get faster fixes and more features quicker.

diff --git a/Days/day71.md b/Days/day71.md
index 869db6d6a..435082a9a 100644
--- a/Days/day71.md
+++ b/Days/day71.md
@@ -1,5 +1,5 @@
---
-title: '#90DaysOfDevOps - What is Jenkins? - Day 71'
+title: "#90DaysOfDevOps - What is Jenkins? - Day 71"
published: false
description: 90DaysOfDevOps - What is Jenkins?
tags: "devops, 90daysofdevops, learning"
@@ -7,87 +7,85 @@ cover_image: null
canonical_url: null
id: 1048745
---
+
## What is Jenkins?

-Jenkins is a continous integration tool that allows continous development, test and deployment of newly created code.
+Jenkins is a continuous integration tool that allows continuous development, test and deployment of newly created code.

-There are two ways we can achieve this with either nightly builds or continous development. The first option is that our developers are developing throughout the day on their tasks and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build of the software.
This could be deemed as the old way to integrate all code.
+There are two ways we can achieve this: either nightly builds or continuous development. The first option is that our developers are developing throughout the day on their tasks, and come the end of the set day they push their changes to the source code repository. Then during the night we run our unit tests and build the software. This could be deemed as the old way to integrate all code.

![](Images/Day71_CICD1.png)

-The other option and the preferred way is that our developers are still committing their changes to source code, then when that code commit has been made there is a build process kicked off continously.
+The other option, and the preferred way, is that our developers are still committing their changes to source code; then, when that code commit has been made, a build process is kicked off continuously.

![](Images/Day71_CICD2.png)

-The above methods means that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.
+The above methods mean that with distributed developers across the world we don't have a set time each day where we have to stop committing our code changes. This is where Jenkins comes in to act as that CI server to control those tests and build processes.

![](Images/Day71_CICD3.png)

-I know we are talking about Jenkins here but I also want to add a few more to maybe look into later on down the line to get an understanding why I am seeing Jenkins as the overall most popular, why is that and what can the others do over Jenkins.
+I know we are talking about Jenkins here, but I also want to add a few more that you could maybe look into later on down the line, to get an understanding of why Jenkins is the overall most popular and what the others can do over Jenkins.
-- TravisCI - A hosted, distributed continous integration service used to build and test software projects hosted on GitHub.
-
-- Bamboo - Can run multiple builds in parallel for faster compilation, built in functionality to connect with repositories and has build tasks for Ant, Maven.
-
-- Buildbot - is an open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
-
-- Apache Gump - Specific to Java projects, designed with the aim to build and test those Java projects every night. ensures that all projects are compatible at both API and functionality level.
+- TravisCI - A hosted, distributed continuous integration service used to build and test software projects hosted on GitHub.
+- Bamboo - Can run multiple builds in parallel for faster compilation, has built-in functionality to connect with repositories, and has build tasks for Ant and Maven.
+- Buildbot - An open-source framework for automating software build, test and release processes. It is written in Python and supports distributed, parallel execution of jobs across multiple platforms.
+- Apache Gump - Specific to Java projects, designed with the aim to build and test those Java projects every night, ensuring that all projects are compatible at both API and functionality level.

-Because we are now going to focus on Jenkins - Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continous integration adn faciliates continous delivery.
+We are now going to focus on Jenkins. Jenkins is again open source like all of the above tools and is an automation server written in Java. It is used to automate the software development process via continuous integration and facilitates continuous delivery.
### Features of Jenkins

-As you can probably expect Jenkins has a lot of features spanning a lot of areas.
+As you can probably expect, Jenkins has a lot of features spanning a lot of areas.

-**Easy Installation** - Jenkins is a self contained java based program ready to run with packages for Windows, macOS and Linux operating systems.
+**Easy Installation** - Jenkins is a self-contained Java-based program ready to run, with packages for Windows, macOS and Linux operating systems.

-**Easy Configuration** - Easy setup and configured via a web interface which includes error checks and built in help.
+**Easy Configuration** - Easy to set up and configure via a web interface, which includes error checks and built-in help.

-**Plug-ins** - Lots of plugins available in the Update Centre and integrates with many tools in the CI / CD toolchain.
+**Plug-ins** - Lots of plugins available in the Update Centre, integrating with many tools in the CI / CD toolchain.

-**Extensible** - In addition to the Plug-Ins available, Jenkins can be extended by that plugin architecture which provides nearly infinite options for what it can be used for.
+**Extensible** - In addition to the Plug-Ins available, Jenkins can be extended by that plugin architecture, which provides nearly infinite options for what it can be used for.

-**Distributed** - Jenkins easily distributes work across multiple machines, helping to speed up builds, tests and deployments across multiple platforms.
+**Distributed** - Jenkins easily distributes work across multiple machines, helping to speed up builds, tests and deployments across multiple platforms.

-### Jenkins Pipeline
+### Jenkins Pipeline

-You will have seen this pipeline but used in a much broader and we have not spoken about specific tools.
+You will have seen this pipeline before, but used in a much broader sense, and we have not spoken about specific tools.
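As a taste of what such a pipeline looks like in Jenkins terms, here is a minimal sketch of a declarative Jenkinsfile — the stage names and `echo` steps are illustrative placeholders, not a real build:

```groovy
// A minimal declarative pipeline: each stage maps to a phase of the CI/CD flow
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'Compiling the application...'
            }
        }
        stage('Test') {
            steps {
                echo 'Running the automated tests...'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Releasing and deploying the build...'
            }
        }
    }
}
```

In a real pipeline, each `echo` would be replaced by the actual build, test and deploy commands for your application.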
-You are going to be committing code to Jenkins, which then will build out your application, with all automated tests, it will then release and deploy that code when each step is completed. Jenkins is what allows for the automation of this process.
+You are going to be committing code to Jenkins, which will then build out your application and run all automated tests; it will then release and deploy that code when each step is completed. Jenkins is what allows for the automation of this process.

![](Images/Day71_CICD4.png)

-### Jenkins Architecture
+### Jenkins Architecture

-First up and not wanting to reinvent the wheel, the [Jenkins Documentation](https://www.jenkins.io/doc/developer/architecture/) is always the place to start but I am going to put down my notes and learnings here as well.
+First up, and not wanting to reinvent the wheel, the [Jenkins Documentation](https://www.jenkins.io/doc/developer/architecture/) is always the place to start, but I am going to put down my notes and learnings here as well.

Jenkins can be installed on many different operating systems, Windows, Linux and macOS but then also the ability to deploy as a Docker container and within Kubernetes. [Installing Jenkins](https://www.jenkins.io/doc/book/installing/)

-As we get into this we will likely take a look at installing Jenkins within a minikube cluster simulating the deployment to Kubernetes. But this will depend on the scenarios we put together throughout the rest of the section.
+As we get into this we will likely take a look at installing Jenkins within a minikube cluster, simulating the deployment to Kubernetes. But this will depend on the scenarios we put together throughout the rest of the section.

-Let's now break down the image below.
+Let's now break down the image below.

Step 1 - Developers commit changes to the source code repository.

Step 2 - Jenkins checks the repository at regular intervals and pulls any new code.
-Step 3 - A build server then builds the code into an executable, in this example we are using maven as a well known build server. Another area to cover.
+Step 3 - A build server then builds the code into an executable; in this example we are using Maven as a well-known build server. Another area to cover.

-Step 4 - If the build fails then feedback is sent back to the developers.
+Step 4 - If the build fails then feedback is sent back to the developers.

-Step 5 - Jenkins then deploys the build app to the test server, in this example we are using selenium as a well known test server. Another area to cover.
+Step 5 - Jenkins then deploys the built app to the test server; in this example we are using Selenium as a well-known test server. Another area to cover.

Step 6 - If the test fails then feedback is passed to the developers.

-Step 7 - If the tests are successful then we can release to production.
+Step 7 - If the tests are successful then we can release to production.

-This cycle is continous, this is what allows applications to be updated in minutes vs hours, days, months, years!
+This cycle is continuous; this is what allows applications to be updated in minutes vs hours, days, months, years!

![](Images/Day71_CICD5.png)

-There is a lot more to the architecture of Jenkins if you require it, they have a master-slave capability, which enables a master to distribute the tasks to slave jenkins environment.
+There is a lot more to the architecture of Jenkins if you require it; there is a master-slave capability, which enables a master to distribute tasks to slave Jenkins environments.

-For reference with Jenkins being open source, there are going to be lots of enterprises that require support, CloudBees is that enterprise version of Jenkins that brings support and possibly other functionality for the paying enterprise customer.
+For reference, with Jenkins being open source there are going to be lots of enterprises that require support; CloudBees is the enterprise version of Jenkins that brings support and possibly other functionality for the paying enterprise customer.

An example of this in a customer is Bosch, you can find the Bosch case study [here](https://assets.ctfassets.net/vtn4rfaw6n2j/case-study-boschpdf/40a0b23c61992ed3ee414ae0a55b6777/case-study-bosch.pdf)

diff --git a/Days/day72.md b/Days/day72.md
index f63838c95..d91916ba3 100644
--- a/Days/day72.md
+++ b/Days/day72.md
@@ -1,5 +1,5 @@
---
-title: '#90DaysOfDevOps - Getting hands on with Jenkins - Day 72'
+title: "#90DaysOfDevOps - Getting hands on with Jenkins - Day 72"
published: false
description: 90DaysOfDevOps - Getting hands on with Jenkins
tags: "devops, 90daysofdevops, learning"
@@ -7,33 +7,34 @@ cover_image: null
canonical_url: null
id: 1048829
---
-## Getting hands on with Jenkins

-The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.
+## Getting hands on with Jenkins

-### What is a pipeline?
+The plan today is to get some hands on with Jenkins and make something happen as part of our CI pipeline, looking at some example code bases that we can use.

-Before we start we need to know what is a pipeline when it comes to CI, and we already covered this in the session yesterday with the following image.
+### What is a pipeline?
+
+Before we start we need to know what a pipeline is when it comes to CI, and we already covered this in the session yesterday with the following image.

![](Images/Day71_CICD4.png)

-We want to take the processes or steps above and we want to automate them to get an outcome eventually meaning that we have a deployed application that we can then ship to our customers, end users etc.
+We want to take the processes or steps above and automate them, so that the eventual outcome is a deployed application that we can then ship to our customers, end users etc.

-This automated process enables us to have a version control through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process confirming that everything is fine without too much manual intervention to ensure our code is good.
+This automated process enables us to take changes under version control all the way through to our users and customers. Every change, feature enhancement, bug fix etc goes through this automated process, confirming that everything is fine without too much manual intervention to ensure our code is good.

This process involves building the software in a reliable and repeatable manner, as well as progressing the built software (called a "build") through multiple stages of testing and deployment.

-A jenkins pipeline, is written into a text file called a Jenkinsfile. Which itself should be committed to a source control repository. This is also known as Pipeline as code, we could also very much liken this to Infrastructure as code which we covered a few weeks back.
+A Jenkins pipeline is written into a text file called a Jenkinsfile, which itself should be committed to a source control repository. This is also known as Pipeline as code; we could very much liken this to Infrastructure as code, which we covered a few weeks back.

-[Jenkins Pipeline Definition](https://www.jenkins.io/doc/book/pipeline/#ji-toolbar)
+[Jenkins Pipeline Definition](https://www.jenkins.io/doc/book/pipeline/#ji-toolbar)

-### Deploying Jenkins
+### Deploying Jenkins

-I had some fun deploying Jenkins, You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins.
+I had some fun deploying Jenkins. You will notice from the [documentation](https://www.jenkins.io/doc/book/installing/) that there are many options on where you can install Jenkins.

-Given that I have minikube on hand and we have used this a number of times I wanted to use this for this task also. (also it is free!) Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here.
+Given that I have minikube on hand and we have used it a number of times, I wanted to use it for this task also (also, it is free!). Although the steps given in the [Kubernetes Installation](https://www.jenkins.io/doc/book/installing/kubernetes/) had me hitting a wall and not getting things up and running, you can compare the two when I document my steps here.

-The first step is to get our minikube cluster up and running, we can simply do this with the `minikube start` command.
+The first step is to get our minikube cluster up and running, which we can simply do with the `minikube start` command.

![](Images/Day72_CICD1.png)

@@ -41,15 +42,15 @@ I have added a folder with all the YAML configuration and values that can be fou

![](Images/Day72_CICD2.png)

-We will be using Helm to deploy jenkins into our cluster, we covered helm in the Kubernetes section. We firstly need to add the jenkinsci helm repository `helm repo add jenkinsci https://charts.jenkins.io` then update our charts `helm repo update`.
+We will be using Helm to deploy Jenkins into our cluster; we covered Helm in the Kubernetes section. We first need to add the jenkinsci helm repository with `helm repo add jenkinsci https://charts.jenkins.io`, then update our charts with `helm repo update`.
![](Images/Day72_CICD3.png)

-The idea behind Jenkins is that it is going to save state for its pipelines, you can run the above helm installation without persistence but if those pods are rebooted, changed or modified then any pipeline or configuration you have made will be lost. We will create a volume for persistence using the jenkins-volume.yml file with the `kubectl apply -f jenkins-volume.yml` command.
+The idea behind Jenkins is that it is going to save state for its pipelines; you can run the above helm installation without persistence, but if those pods are rebooted, changed or modified then any pipeline or configuration you have made will be lost. We will create a volume for persistence using the jenkins-volume.yml file with the `kubectl apply -f jenkins-volume.yml` command.

![](Images/Day72_CICD4.png)

-We also need a service account which we can create using this yaml file and command. `kubectl apply -f jenkins-sa.yml`
+We also need a service account, which we can create using this yaml file and command: `kubectl apply -f jenkins-sa.yml`
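For reference, here is a minimal sketch of what the `jenkins-volume.yml` mentioned above could contain — the capacity, labels and storage class name are assumptions; the repository's actual file may differ:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv
spec:
  storageClassName: jenkins-pv
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 10Gi
  # hostPath points at a directory inside the minikube node; this is the
  # directory we later chown so the Jenkins pod (UID 1000) can write to it
  hostPath:
    path: /data/jenkins-volume/
```

Because this uses a `hostPath`, it is only suitable for a single-node lab cluster like minikube, not for production.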
![](Images/Day72_CICD5.png)

@@ -57,17 +58,17 @@ At this stage we are good to deploy using the helm chart, we will firstly define

![](Images/Day72_CICD6.png)

-At this stage our pods will be pulling the image but the pod will not have access to the storage so no configuration can be started in terms of getting Jenkins up and running.
+At this stage our pods will be pulling the image, but the pod will not have access to the storage, so no configuration can be started in terms of getting Jenkins up and running.

-This is where the documentation did not help me massively understand what needed to happen. But we can see that we have no permission to start our jenkins install.
+This is where the documentation did not massively help me understand what needed to happen. But we can see that we have no permission to start our jenkins install.

![](Images/Day72_CICD7.png)

-In order to fix the above or resolve, we need to make sure we provide access or the right permission in order for our jenkins pods to be able to write to this location that we have suggested. We can do this by using the `minikube ssh` which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume.
+In order to fix the above, we need to make sure we provide access or the right permissions for our jenkins pods to be able to write to this location that we have suggested. We can do this by using `minikube ssh`, which will put us into the minikube docker container we are running on, and then using `sudo chown -R 1000:1000 /data/jenkins-volume` we can ensure we have permissions set on our data volume.

![](Images/Day72_CICD8.png)

-The above process should fix the pods, however if not you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0.
+The above process should fix the pods; however, if not, you can force the pods to be refreshed with the `kubectl delete pod jenkins-0 -n jenkins` command. At this point you should have 2/2 running pods called jenkins-0.

![](Images/Day72_CICD9.png)

@@ -79,25 +80,25 @@ Now open a new terminal as we are going to use the `port-forward` command to all

![](Images/Day72_CICD11.png)

-We should now be able to open a browser and login to http://localhost:8080 and authenticate with the username: admin and password we gathered in a previous step.
+We should now be able to open a browser, log in to `http://localhost:8080` and authenticate with the username admin and the password we gathered in a previous step.
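The login steps described above can be sketched as shell commands — note that the secret name (`jenkins`) and key (`jenkins-admin-password`) are assumptions based on the chart defaults and may vary between chart versions:

```shell
# Grab the generated admin password from the Kubernetes secret
kubectl get secret jenkins -n jenkins \
  -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode

# In a separate terminal, forward the Jenkins service port to localhost
kubectl port-forward svc/jenkins -n jenkins 8080:8080
```

With the port-forward running, the UI is reachable at `http://localhost:8080` as described above.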
![](Images/Day72_CICD12.png)

-When we have authenticated, our Jenkins welcome page should look something like this:
+When we have authenticated, our Jenkins welcome page should look something like this:

![](Images/Day72_CICD13.png)

-From here, I would suggest heading to "Manage Jenkins" and you will see "Manage Plugins" which will have some updates available. Select all of those plugins and choose "Download now and install after restart"
+From here, I would suggest heading to "Manage Jenkins", where you will see "Manage Plugins", which will have some updates available. Select all of those plugins and choose "Download now and install after restart".

![](Images/Day72_CICD14.png)

If you want to go even further and automate the deployment of Jenkins using a shell script this great repository was shared with me on twitter [mehyedes/nodejs-k8s](https://github.com/mehyedes/nodejs-k8s/blob/main/docs/automated-setup.md)

+### Jenkinsfile

-### Jenkinsfile
-Now we have Jenkins deployed in our Kubernetes cluster, we can now go back and think about this Jenkinsfile.
+Now we have Jenkins deployed in our Kubernetes cluster, we can go back and think about this Jenkinsfile.

-Every Jenkinsfile will likely start like this, Which is firstly where you would define your steps of your pipeline, in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages.
+Every Jenkinsfile will likely start like this, which is firstly where you would define the steps of your pipeline; in this instance you have Build > Test > Deploy. But we are not really doing anything other than using the `echo` command to call out the specific stages.

```

@@ -126,27 +127,28 @@ pipeline {
 }
```
-In our Jenkins dashboard, select "New Item" give the item a name, I am going to "echo1" I am going to suggest that this is a Pipeline.
+
+In our Jenkins dashboard, select "New Item" and give the item a name; I am going to use "echo1", and I am going to suggest that this is a Pipeline.

![](Images/Day72_CICD15.png)

-Hit Ok and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline) for a simple test we are only interested in Pipeline. Under Pipeline you have the ability to add a script, we can copy and paste the above script into the box.
+Hit OK and you will then have the tabs (General, Build Triggers, Advanced Project Options and Pipeline); for a simple test we are only interested in Pipeline. Under Pipeline you have the ability to add a script, so we can copy and paste the above script into the box.

As we said above this is not going to do much but it will show us the stages of our Build > Test > Deploy

![](Images/Day72_CICD16.png)

-Click Save, We can now run our build using the build now highlighted below.
+Click Save. We can now run our build using the "Build Now" option highlighted below.

![](Images/Day72_CICD17.png)

-We should also open a terminal and run the `kubectl get pods -n jenkins` to see what happens there.
+We should also open a terminal and run `kubectl get pods -n jenkins` to see what happens there.

![](Images/Day72_CICD18.png)

-Ok, very simple stuff but we can now see that our Jenkins deployment and installation is working correctly and we can start to see the building blocks of the CI pipeline here.
+Ok, very simple stuff, but we can now see that our Jenkins deployment and installation is working correctly and we can start to see the building blocks of the CI pipeline here.

-In the next section we will be building a Jenkins Pipeline.
+In the next section we will be building a Jenkins Pipeline.
## Resources diff --git a/Days/day73.md b/Days/day73.md index bcc58a643..83d410314 100644 --- a/Days/day73.md +++ b/Days/day73.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73' +title: "#90DaysOfDevOps - Building a Jenkins Pipeline - Day 73" published: false description: 90DaysOfDevOps - Building a Jenkins Pipeline tags: "devops, 90daysofdevops, learning" @@ -7,17 +7,18 @@ cover_image: null canonical_url: null id: 1048766 --- -## Building a Jenkins Pipeline -In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline. +## Building a Jenkins Pipeline -You might have also seen that there are some example scripts available for us to run in the Jenkins Pipeline creation. +In the last section we got Jenkins deployed to our Minikube cluster and we set up a very basic Jenkins Pipeline, that didn't do much at all other than echo out the stages of a Pipeline. + +You might have also seen that there are some example scripts available for us to run in the Jenkins Pipeline creation. ![](Images/Day73_CICD1.png) -The first demo script is "Declartive (Kubernetes)" and you can see the stages below. +The first demo script is "Declarative (Kubernetes)" and you can see the stages below. -``` +```Yaml // Uses Declarative syntax to run commands inside a container. pipeline { agent { @@ -58,23 +59,24 @@ spec: } } ``` -You can see below the outcome of what happens when this Pipeline is ran. + +You can see below the outcome of what happens when this Pipeline is ran. 
![](Images/Day73_CICD2.png) -### Job creation +### Job creation -**Goals** +#### Goals -- Create a simple app and store in GitHub public repository (https://github.com/scriptcamp/kubernetes-kaniko.git) +- Create a simple app and store in GitHub public repository [https://github.com/scriptcamp/kubernetes-kaniko.git](https://github.com/scriptcamp/kubernetes-kaniko.git) -- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository) +- Use Jenkins to build our docker Container image and push to docker hub. (for this we will use a private repository) -To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster) It general though if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub. +To achieve this in our Kubernetes cluster running in or using Minikube we need to use something called [Kaniko](https://github.com/GoogleContainerTools/kaniko#running-kaniko-in-a-kubernetes-cluster) It general though if you are using Jenkins in a real Kubernetes cluster or you are running it on a server then you can specify an agent which will give you the ability to perform the docker build commands and upload that to DockerHub. -With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials. +With the above in mind we are also going to deploy a secret into Kubernetes with our GitHub credentials. 
-``` +```Shell kubectl create secret docker-registry dockercred \ --docker-server=https://index.docker.io/v1/ \ --docker-username= \ @@ -82,17 +84,17 @@ kubectl create secret docker-registry dockercred \ --docker-email= ``` -In fact I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here. +In fact I want to share another great resource from [DevOpsCube.com](https://devopscube.com/build-docker-image-kubernetes-pod/) running through much of what we will cover here. -### Adding credentials to Jenkins +### Adding credentials to Jenkins -However if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub. +However if you were on a Jenkins system unlike ours then you will likely want to define your credentials within Jenkins and then use them multiple times within your Pipelines and configurations. We can refer to these credentials in the Pipelines using the ID we determine on creation. I went ahead and stepped through and created a user entry for DockerHub and GitHub. First of all select "Manage Jenkins" and then "Manage Credentials" ![](Images/Day73_CICD3.png) -You will see in the centre of the page, Stores scoped to Jenkins click on Jenkins here. +You will see in the centre of the page, Stores scoped to Jenkins click on Jenkins here. 
![](Images/Day73_CICD4.png) @@ -100,25 +102,25 @@ Now select Global Credentials (Unrestricted) ![](Images/Day73_CICD5.png) -Then in the top left you have Add Credentials +Then in the top left you have Add Credentials ![](Images/Day73_CICD6.png) -Fill in your details for your account and then select OK, remember the ID is what you will refer to when you want to call this credential. My advice here also is that you use specific token access vs passwords. +Fill in your details for your account and then select OK, remember the ID is what you will refer to when you want to call this credential. My advice here also is that you use specific token access vs passwords. ![](Images/Day73_CICD7.png) For GitHub you should use a [Personal Access Token](https://vzilla.co.uk/vzilla-blog/creating-updating-your-github-personal-access-token) -Personally I did not find this process very intuitive to create these accounts, so even though we are not using I wanted to share the process as it is not clear from the UI. +Personally I did not find this process very intuitive to create these accounts, so even though we are not using I wanted to share the process as it is not clear from the UI. ### Building the pipeline -We have our DockerHub credentials deployed to as a secret into our Kubernetes cluster which we will call upon for our docker deploy to DockerHub stage in our pipeline. +We have our DockerHub credentials deployed to as a secret into our Kubernetes cluster which we will call upon for our docker deploy to DockerHub stage in our pipeline. -The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline. +The pipeline script is what you can see below, this could in turn become our Jenkinsfile located in our GitHub repository which you can also see is listed in the Get the project stage of the pipeline. 
-``` +```Yaml podTemplate(yaml: ''' apiVersion: v1 kind: Pod @@ -174,41 +176,41 @@ podTemplate(yaml: ''' } ``` -To kick things on the Jenkins dashboard we need to select "New Item" +To kick things off, on the Jenkins dashboard we need to select "New Item" ![](Images/Day73_CICD8.png) -We are then going to give our item a name, select Pipeline and then hit ok. +We are then going to give our item a name, select Pipeline and then hit OK. ![](Images/Day73_CICD9.png) -We are not going to be selecting any of the general or build triggers but have a play with these as there are some interesting schedules and other configurations that might be useful. +We are not going to be selecting any of the general or build triggers, but have a play with these as there are some interesting schedules and other configurations that might be useful. ![](Images/Day73_CICD10.png) -We are only interested in the Pipeline tab at the end. +We are only interested in the Pipeline tab at the end. ![](Images/Day73_CICD11.png) -In the Pipeline definition we are going to copy and paste the pipeline script that we have above into the Script section and hit save. +In the Pipeline definition we are going to copy and paste the pipeline script that we have above into the Script section and hit save. ![](Images/Day73_CICD12.png) -Next we will select the "Build Now" option on the left side of the page. +Next we will select the "Build Now" option on the left side of the page. ![](Images/Day73_CICD13.png) -You should now wait a short amount of time, less than a minute really. and you should see under status the stages that we defined above in our script. +You should now wait a short amount of time, less than a minute really, and you should see under status the stages that we defined above in our script. ![](Images/Day73_CICD14.png) -More importantly if we now head on over to our DockerHub and check that we have a new build. +More importantly, if we now head on over to our DockerHub we can check that we have a new build.
![](Images/Day73_CICD15.png) -This overall did take a while to figure out but I wanted to stick with it for the purpose of getting hands on and working through a scenario that anyone can run through using minikube and access to github and dockerhub. +This overall did take a while to figure out but I wanted to stick with it for the purpose of getting hands on and working through a scenario that anyone can run through using minikube and access to github and dockerhub. -The DockerHub repository I used for this demo was a private one. But in the next section I want to advance some of these stages and actually have them do something vs just printing out `pwd` and actually run some tests and build stages. +The DockerHub repository I used for this demo was a private one. But in the next section I want to advance some of these stages and actually have them do something vs just printing out `pwd` and actually run some tests and build stages. ## Resources diff --git a/Days/day74.md b/Days/day74.md index 9eddd2542..8a6eb10b3 100644 --- a/Days/day74.md +++ b/Days/day74.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74' +title: "#90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline - Day 74" published: false description: 90DaysOfDevOps - Hello World - Jenkinsfile App Pipeline tags: "devops, 90daysofdevops, learning" @@ -7,79 +7,80 @@ cover_image: null canonical_url: null id: 1048744 --- + ## Hello World - Jenkinsfile App Pipeline -In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository. +In the last section we built a simple Pipeline in Jenkins that would push our docker image from our dockerfile in a public GitHub repository to our private Dockerhub repository. -In this section we want to take this one step further and we want to achieve the following with our simple application. 
+In this section we want to take this one step further and we want to achieve the following with our simple application. -### Objective +### Objective - Dockerfile (Hello World) -- Jenkinsfile +- Jenkinsfile -- Jenkins Pipeline to trigger when GitHub Repository is updated +- Jenkins Pipeline to trigger when GitHub Repository is updated -- Use GitHub Repository as source. +- Use GitHub Repository as source. - Run - Clone/Get Repository, Build, Test, Deploy Stages - Deploy to DockerHub with incremental version numbers - Stretch Goal to deploy to our Kubernetes Cluster (This will involve another job and manifest repository using GitHub credentials) -### Step One +### Step One -We have our [GitHub repository](https://github.com/MichaelCade/Jenkins-HelloWorld) This currently contains our Dockerfile and our index.html +We have our [GitHub repository](https://github.com/MichaelCade/Jenkins-HelloWorld). This currently contains our Dockerfile and our index.html. ![](Images/Day74_CICD1.png) -With the above this is what we were using as our source in our Pipeline, now we want to add that Jenkins Pipeline script to our GitHub repository as well. +The above is what we were using as our source in our Pipeline; now we want to add that Jenkins Pipeline script to our GitHub repository as well. ![](Images/Day74_CICD2.png) -Now back in our Jenkins dashboard, we are going to create a new pipeline but now instead of pasting our script we are going to use "Pipeline script from SCM" We are then going to use the configuration options below. +Now back in our Jenkins dashboard, we are going to create a new pipeline, but now instead of pasting our script we are going to use "Pipeline script from SCM". We are then going to use the configuration options below. -For reference we are going to use https://github.com/MichaelCade/Jenkins-HelloWorld.git as the repository URL. +For reference we are going to use `https://github.com/MichaelCade/Jenkins-HelloWorld.git` as the repository URL.
![](Images/Day74_CICD3.png) -We could at this point hit save and apply and we would then be able to manually run our Pipeline building our new Docker image that is uploaded to our DockerHub repository. +We could at this point hit save and apply, and we would then be able to manually run our Pipeline, building our new Docker image which is uploaded to our DockerHub repository. -However, I also want to make sure that we set a schedule that whenever our repository or our source code is changed, I want to trigger a build. we could use webhooks or we could use a scheduled pull. +However, I also want to make sure that whenever our repository or our source code is changed, a build is triggered. We could use webhooks or we could use a scheduled pull. This is a big consideration because if you are using costly cloud resources to hold your pipeline and you have lots of changes to your code repository then you will incur a lot of costs. We know that this is a demo environment which is why I am using the "poll scm" option. (Also I believe that using minikube I am lacking the ability to use webhooks) ![](Images/Day74_CICD4.png) -One thing I have changed since yesterdays session is I want to now upload my image to a public repository which in this case would be michaelcade1\90DaysOfDevOps, my Jenkinsfile has this change already. And from previous sections I have removed any existing demo container images. +One thing I have changed since yesterday's session is that I now want to upload my image to a public repository, which in this case would be michaelcade1\90DaysOfDevOps; my Jenkinsfile has this change already. And from previous sections I have removed any existing demo container images. ![](Images/Day74_CICD5.png) -Going backwards here, we created our Pipeline and then as previously shown we added our configuration. +Going backwards here, we created our Pipeline and then, as previously shown, we added our configuration.
![](Images/Day74_CICD6.png) -At this stage our Pipeline has never ran and your stage view will look something like this. +At this stage our Pipeline has never run and your stage view will look something like this. ![](Images/Day74_CICD7.png) -Now lets trigger the "Build Now" button. and our stage view will display our stages. +Now let's hit the "Build Now" button, and our stage view will display our stages. ![](Images/Day74_CICD8.png) -If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest because every build that we create based on the "Upload to DockerHub" is we send a version using the Jenkins Build_ID environment variable and we also issue a latest. +If we then head over to our DockerHub repository, we should have 2 new Docker images. We should have a Build ID of 1 and a latest, because for every build the "Upload to DockerHub" stage pushes a version tagged with the Jenkins BUILD_ID environment variable and also pushes a latest tag. ![](Images/Day74_CICD9.png) -Let's go and create an update to our index.html file in our GitHub repository as per below, I will let you go and find out what version 1 of the index.html was saying. +Let's go and create an update to our index.html file in our GitHub repository as per below; I will let you go and find out what version 1 of the index.html was saying. ![](Images/Day74_CICD10.png) -If we head back to Jenkins and select "Build Now" again. We will see our #2 build is successful. +If we head back to Jenkins and select "Build Now" again, we will see our #2 build is successful. ![](Images/Day74_CICD11.png) -Then a quick look at DockerHub, we can see that we have our tagged version 2 and our latest tag. +Then with a quick look at DockerHub, we can see that we have our tagged version 2 and our latest tag.
![](Images/Day74_CICD12.png) -It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated to my repository and account. +It is worth noting here that I have added into my Kubernetes cluster a secret that enables my access and authentication to push my docker builds into DockerHub. If you are following along you should repeat this process for your account, and also make a change to the Jenkinsfile that is associated to my repository and account. ## Resources diff --git a/Days/day75.md b/Days/day75.md index fce804420..335090587 100644 --- a/Days/day75.md +++ b/Days/day75.md @@ -1,63 +1,64 @@ --- -title: '#90DaysOfDevOps - GitHub Actions Overview - Day 75' +title: "#90DaysOfDevOps - GitHub Actions Overview - Day 75" published: false description: 90DaysOfDevOps - GitHub Actions Overview -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049070 --- + ## GitHub Actions Overview -In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is where we will focus on in this session. +In this section I wanted to move on and take a look at maybe a different approach to what we just spent time on. GitHub Actions is where we will focus on in this session. -GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks our pipeline. It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository. +GitHub Actions is a CI/CD platform that allows us to build, test and deploy amongst other tasks our pipeline. 
It has the concept of workflows that build and test against a GitHub repository. You could also use GitHub Actions to drive other workflows based on events that happen within your repository. ### Workflows -Overall, in GitHub Actions our task is called a **Workflow**. +Overall, in GitHub Actions our task is called a **Workflow**. -A **workflow** is the configurable automated process. +A **workflow** is the configurable automated process. - Defined as YAML files. - Contain and run one or more **jobs** -- Will run when triggered by an **event** in your repository or can be ran manually +- Will run when triggered by an **event** in your repository or can be run manually - You can have multiple workflows per repository - A **workflow** will contain a **job** and then **steps** to achieve that **job** -- Within our **workflow** we will also have a **runner** on which our **workflow** runs. +- Within our **workflow** we will also have a **runner** on which our **workflow** runs. For example, you can have one **workflow** to build and test pull requests, another **workflow** to deploy your application every time a release is created, and still another **workflow** that adds a label every time someone opens a new issue. -### Events +### Events -Events are a specific event in a repository that triggers the workflow to run. +Events are specific activities in a repository that trigger the workflow to run. -### Jobs +### Jobs -A job is a set of steps in the workflow that execute on a runner. +A job is a set of steps in the workflow that execute on a runner. ### Steps -Each step within the job can be a shell script that gets executed, or an action. Steps are executed in order and they are dependant on each other. +Each step within the job can be a shell script that gets executed, or an action. Steps are executed in order and they are dependent on each other. -### Actions +### Actions -A repeatable custom application used for frequently repeated tasks.
+A repeatable custom application used for frequently repeated tasks. ### Runners -A runner is a server that runs the workflow, each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on specific OS or hardware. +A runner is a server that runs the workflow; each runner runs a single job at a time. GitHub Actions provides the ability to run Ubuntu Linux, Microsoft Windows, and macOS runners. You can also host your own on a specific OS or hardware. -Below you can see how this looks, we have our event triggering our workflow > our workflow consists of two jobs > within our jobs we then have steps and then we have actions. +Below you can see how this looks: we have our event triggering our workflow > our workflow consists of two jobs > within our jobs we then have steps and then we have actions. ![](Images/Day75_CICD1.png) -### YAML +### YAML -Before we get going with a real use case lets take a quick look at the above image in the form of an example YAML file. +Before we get going with a real use case, let's take a quick look at the above image in the form of an example YAML file. -I have added # to comment in where we can find the components of the YAML workflow. +I have added # comments to show where we can find the components of the YAML workflow. -``` +```Yaml #Workflow name: 90DaysOfDevOps #Event @@ -78,19 +79,19 @@ jobs: - run: bats -v ``` -### Getting Hands-On with GitHub Actions +### Getting Hands-On with GitHub Actions -I think there are a lot of options when it comes to GitHub Actions, yes it will satisfy your CI/CD needs when it comes to Build, Test, Deploying your code and the continued steps thereafter. +I think there are a lot of options when it comes to GitHub Actions; yes, it will satisfy your CI/CD needs when it comes to building, testing and deploying your code and the continued steps thereafter.
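A push is only one of the events a workflow can react to. As an illustrative sketch (the workflow name and cron schedule below are my own assumptions, not part of the original demo), a workflow's `on:` key can combine pushes, schedules and manual runs:

```Yaml
#Hypothetical trigger block - names and schedule are illustrative
name: Example-Triggers
#Events: push to main, a weekly cron schedule, and a manual run from the Actions tab
on:
  push:
    branches: [main]
  schedule:
    - cron: "0 6 * * 1" #every Monday at 06:00 UTC
  workflow_dispatch:
#Jobs
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: echo "Triggered by ${{ github.event_name }}"
```

Schedules run against the default branch, and `workflow_dispatch` adds a "Run workflow" button in the Actions tab, which is handy for testing a pipeline without pushing a commit.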
-I can see lots of options and other automated tasks that we could use GitHub Actions for. +I can see lots of options and other automated tasks that we could use GitHub Actions for. -### Using GitHub Actions for Linting your code +### Using GitHub Actions for Linting your code -One option is making sure your code is clean and tidy within your repository. This will be our first example demo. +One option is making sure your code is clean and tidy within your repository. This will be our first example demo. -I am going to be using some example code linked in one of the resources for this section, we are going to use `github/super-linter` to check against our code. +I am going to be using some example code linked in one of the resources for this section; we are going to use `github/super-linter` to check against our code. -``` +```Yaml name: Super-Linter on: push @@ -115,37 +116,37 @@ You can see from the above that for one of our steps we have an action called gi "This repository is for the GitHub Action to run a Super-Linter. It is a simple combination of various linters, written in bash, to help validate your source code." -Also in the code snippet above it mentions GITHUB_TOKEN so I was interested to find out why and what this does and needed for. +Also, the code snippet above mentions GITHUB_TOKEN, so I was interested to find out what this does and why it is needed. -"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each individual linter run in the Checks section of a pull request. Without this you will only see the overall status of the full run.
**There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**" +"NOTE: If you pass the Environment variable `GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}` in your workflow, then the GitHub Super-Linter will mark the status of each individual linter run in the Checks section of a pull request. Without this you will only see the overall status of the full run. **There is no need to set the GitHub Secret as it is automatically set by GitHub, it only needs to be passed to the action.**" -The bold text being important to note at this stage. We are using it but we do not need to set any environment variable within our repository. +The bold text being important to note at this stage. We are using it but we do not need to set any environment variable within our repository. We will use our repository that we used in our Jenkins demo to test against.[Jenkins-HelloWorld](https://github.com/MichaelCade/Jenkins-HelloWorld) -Here is our repository as we left it in the Jenkins sessions. +Here is our repository as we left it in the Jenkins sessions. ![](Images/Day75_CICD2.png) -In order for us to take advantage we have to use the Actions tab above to choose from the marketplace which I will cover shortly or we can create our own files using our super-linter code above, in order to create your own you must create a new file in your repository at this exact location. `.github/workflows/workflow_name` obviously making sure the workflow_name is something useful for you recognise, within here we can have many different workflows performing different jobs and tasks against our repository. +In order for us to take advantage we have to use the Actions tab above to choose from the marketplace which I will cover shortly or we can create our own files using our super-linter code above, in order to create your own you must create a new file in your repository at this exact location. 
`.github/workflows/workflow_name`, obviously making sure the workflow_name is something useful for you to recognise; within here we can have many different workflows performing different jobs and tasks against our repository. We are going to create `.github/workflows/super-linter.yml` ![](Images/Day75_CICD3.png) -We can then paste our code and commit the code to our repository, if we then head to the Actions tab we will now see our Super-Linter workflow listed as per below, +We can then paste our code and commit it to our repository; if we then head to the Actions tab we will now see our Super-Linter workflow listed as per below. ![](Images/Day75_CICD4.png) -We defined in our code that this workflow would run when we pushed anything to our repository, so in pushing the super-linter.yml to our repository we triggered the workflow. +We defined in our code that this workflow would run when we pushed anything to our repository, so in pushing the super-linter.yml to our repository we triggered the workflow. ![](Images/Day75_CICD5.png) -As you can see from the above we have some errors most likely with my hacking ability vs coding ability. +As you can see from the above, we have some errors, most likely down to my hacking ability vs my coding ability. Although actually it was not my code at least not yet, in running this and getting an error I found this [issue](https://github.com/github/super-linter/issues/2255) -Take #2 I changed the version of Super-Linter from version 3 to 4 and have ran the task again. +For take #2 I changed the version of Super-Linter from version 3 to 4 and ran the task again.
![](Images/Day75_CICD6.png) @@ -155,21 +156,21 @@ I wanted to show the look now on our repository when something within the workfl ![](Images/Day75_CICD7.png) -Now if we resolve the issue with my code and push the changes our workflow will run again (you can see from the image it took a while to iron out our "bugs") Deleting a file is probably not recommended but it is a very quick way to show the issue being resolved. +Now if we resolve the issue with my code and push the changes our workflow will run again (you can see from the image it took a while to iron out our "bugs") Deleting a file is probably not recommended but it is a very quick way to show the issue being resolved. ![](Images/Day75_CICD8.png) -If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel we want to stand on the shoulders of giants and share our code, automations and skills far and wide to make our lives easier. +If you hit the new workflow button highlighted above, this is going to open the door to a huge plethora of actions. One thing you might have noticed throughout this challenge is that we don't want to reinvent the wheel we want to stand on the shoulders of giants and share our code, automations and skills far and wide to make our lives easier. ![](Images/Day75_CICD9.png) -Oh, I didn't show you the green tick on the repository when our workflow was successful. +Oh, I didn't show you the green tick on the repository when our workflow was successful. ![](Images/Day75_CICD10.png) -I think that covers things from a foundational point of view for GitHub Actions but if you are anything like me then you are probably seeing how else GitHub Actions can be used to automate a lot of tasks. 
+I think that covers things from a foundational point of view for GitHub Actions but if you are anything like me then you are probably seeing how else GitHub Actions can be used to automate a lot of tasks. -Next up we will cover another area of CD, we will be looking into ArgoCD to deploy our applications out into our environments. +Next up we will cover another area of CD, we will be looking into ArgoCD to deploy our applications out into our environments. ## Resources diff --git a/Days/day76.md b/Days/day76.md index d4faa476b..9f49617f2 100644 --- a/Days/day76.md +++ b/Days/day76.md @@ -1,12 +1,13 @@ --- -title: '#90DaysOfDevOps - ArgoCD Overview - Day 76' +title: "#90DaysOfDevOps - ArgoCD Overview - Day 76" published: false description: 90DaysOfDevOps - ArgoCD Overview -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048809 --- + ## ArgoCD Overview “Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes” @@ -17,11 +18,11 @@ From an Operations background but having played a lot around Infrastructure as C [What is ArgoCD](https://argo-cd.readthedocs.io/en/stable/) -### Deploying ArgoCD +### Deploying ArgoCD -We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment. +We are going to be using our trusty minikube Kubernetes cluster locally again for this deployment. 
-``` +```Shell kubectl create namespace argocd kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml ``` @@ -32,41 +33,41 @@ Make sure all the ArgoCD pods are up and running with `kubectl get pods -n argoc ![](Images/Day76_CICD2.png) -Also let's check everything that we deployed in the namespace with `kubectl get all -n argocd` +Also let's check everything that we deployed in the namespace with `kubectl get all -n argocd` ![](Images/Day76_CICD3.png) -When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal. +When the above is looking good, we then should consider accessing this via the port forward. Using the `kubectl port-forward svc/argocd-server -n argocd 8080:443` command. Do this in a new terminal. -Then open a new web browser and head to https://localhost:8080 +Then open a new web browser and head to `https://localhost:8080` ![](Images/Day76_CICD4.png) -To log in you will need a username of admin and then to grab your created secret as your password use the `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo` +To log in you will need a username of admin and then to grab your created secret as your password use the `kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d && echo` ![](Images/Day76_CICD5.png) -Once you have logged in you will have your blank CD canvas. +Once you have logged in you will have your blank CD canvas. ![](Images/Day76_CICD6.png) -### Deploying our application +### Deploying our application -Now we have ArgoCD up and running we can now start using it to deploy our applications from our Git repositories as well as Helm. 
+Now that we have ArgoCD up and running, we can start using it to deploy our applications from our Git repositories as well as Helm. -The application I want to deploy is Pac-Man, yes that's right the famous game and something I use in a lot of demos when it comes to data management, this will not be the last time we see Pac-Man. +The application I want to deploy is Pac-Man, yes that's right, the famous game, and something I use in a lot of demos when it comes to data management; this will not be the last time we see Pac-Man. You can find the repository for [Pac-Man](https://github.com/MichaelCade/pacman-tanzu.git) here. -Instead of going through each step using screen shots I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment. +Instead of going through each step using screenshots, I thought it would be easier to create a walkthrough video covering the steps taken for this one particular application deployment. [ArgoCD Demo - 90DaysOfDevOps](https://www.youtube.com/watch?v=w6J413_j0hA) -Note - During the video there is a service that is never satisfied as the app health being healthy this is because the LoadBalancer type set for the pacman service is in a pending state, in Minikube we do not have a loadbalancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game. +Note - During the video there is a service that never reports as healthy; this is because the LoadBalancer type set for the pacman service stays in a pending state, as in Minikube we do not have a load balancer configured. If you would like to test this you could change the YAML for the service to ClusterIP and use port forwarding to play the game.
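As a concrete sketch of that ClusterIP workaround: the service name `pacman` comes from the note above, but the selector and ports below are assumptions for illustration, so check them against the manifests in the repository before using this.

```Yaml
# Illustrative Service change for minikube - selector and ports are assumed,
# verify them against the pacman-tanzu repository's own manifests.
apiVersion: v1
kind: Service
metadata:
  name: pacman
spec:
  type: ClusterIP # changed from LoadBalancer, which stays Pending without a load balancer
  selector:
    name: pacman # assumed label; match your Deployment's pod labels
  ports:
    - port: 80 # assumed service port
      targetPort: 8080 # assumed container port
```

With the service changed, something like `kubectl port-forward svc/pacman 8080:80` would then let you open the game on `http://localhost:8080`, and ArgoCD should report the application as Healthy.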
-This wraps up the CICD Pipelines section, I feel there is a lot of focus on this area in the industry at the moment and you will also hear terms around GitOps also related to the methodologies used within CICD in general. +This wraps up the CICD Pipelines section. I feel there is a lot of focus on this area in the industry at the moment, and you will also hear the term GitOps, related to the methodologies used within CICD in general. -The next section we move into is around Observability, another concept or area that is not new but it is more and more important as we look at our environments in a different way. +The next section we move into is Observability, another concept or area that is not new, but one that is more and more important as we look at our environments in a different way. ## Resources diff --git a/Days/day77.md b/Days/day77.md index ef45231bc..a8e88f935 100644 --- a/Days/day77.md +++ b/Days/day77.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Monitoring - Day 77' +title: "#90DaysOfDevOps - The Big Picture: Monitoring - Day 77" published: false description: 90DaysOfDevOps - The Big Picture Monitoring tags: "devops, 90daysofdevops, learning" @@ -7,76 +7,77 @@ cover_image: null canonical_url: null id: 1048715 --- + ## The Big Picture: Monitoring -In this section we are going to talk about monitoring, what is it why do we need it? +In this section we are going to talk about monitoring: what it is and why we need it. -### What is Monitoring? +### What is Monitoring? -Monitoring is the process of keeping a close eye on the entire infrastructure +Monitoring is the process of keeping a close eye on the entire infrastructure. -### and why do we need it? +### and why do we need it? -Let's assume we're managing a thousand servers these include a variety of specialised servers like application servers, database servers and web servers.
We could also complicate this further with additional services and different platforms including public cloud offerings and Kubernetes. +Let's assume we're managing a thousand servers; these include a variety of specialised servers like application servers, database servers and web servers. We could also complicate this further with additional services and different platforms, including public cloud offerings and Kubernetes. ![](Images/Day77_Monitoring1.png) -We are responsible for ensuring that all the services, applications and resources on the servers are running as they should be. +We are responsible for ensuring that all the services, applications and resources on the servers are running as they should be. ![](Images/Day77_Monitoring2.png) -How do we do it? there are three ways: +How do we do it? There are three ways: -- Login manually to all of our servers and check all the data pertaining to services processes and resources. -- Write a script that logs in to the servers for us and checks on the data. +- Log in manually to all of our servers and check all the data pertaining to services, processes and resources. +- Write a script that logs in to the servers for us and checks on the data. -Both of these options would require considerable amount of work on our part, +Both of these options would require a considerable amount of work on our part. -The third option is easier, we could use a monitoring solution that is available in the market. +The third option is easier: we could use one of the monitoring solutions available in the market. -Nagios and Zabbix are possible solutions that are readily available which allow us to upscale our monitoring infrastructure to include as many servers as we want. +Nagios and Zabbix are readily available solutions which allow us to scale our monitoring infrastructure to include as many servers as we want. ### Nagios Nagios is an infrastructure monitoring tool that is made by a company that goes by the same name.
The open-source version of this tool is called Nagios core while the commercial version is called Nagios XI. [Nagios Website](https://www.nagios.org/) -The tool allows us to monitor our servers and see if they are being sufficiently utilised or if there are any tasks of failure that need addressing. +The tool allows us to monitor our servers and see if they are being sufficiently utilised or if there are any failures that need addressing. ![](Images/Day77_Monitoring3.png) -Essentially monitoring allows us to achieve these two goals, check the status of our servers and services and determine the health of our infrastructure it also gives us a 40,000ft view of the complete infrastructure to see if our servers are up and running, if the applications are working properly and the web servers are reachable or not. +Essentially, monitoring allows us to achieve two goals: check the status of our servers and services, and determine the health of our infrastructure. It also gives us a 40,000ft view of the complete infrastructure, to see if our servers are up and running, if the applications are working properly and whether the web servers are reachable or not. -It will tell us that our disk has been increasing by 10 percent for the last 10 weeks in a particular server, that it will exhaust entirely within the next four or five days and we'll fail to respond soon it will alert us when your disk or server is in a critical state so that we can take appropriate actions to avoid possible outages. +It will tell us that disk usage on a particular server has been increasing by 10 percent for the last 10 weeks and that it will be exhausted entirely within the next four or five days. It will alert us when a disk or server is in a critical state so that we can take appropriate action to avoid possible outages. -In this case we can free up some disk space and ensure that our servers don't fail and that our users are not affected.
+In this case we can free up some disk space and ensure that our servers don't fail and that our users are not affected.

-The difficult question for most monitoring engineers is what do we monitor? and alternately what do we not?
+The difficult question for most monitoring engineers is what do we monitor? And alternatively, what do we not?

-Every system has a number of resources, which of these should we keep a close eye on and which ones can we turn a blind eye to for instance is it necessary to monitor CPU usage the answer is yes obviously nevertheless it is still a decision that has to be made is it necessary to monitor the number of open ports in the system we may or may not have to depending on the situation if it is a general-purpose server we probably won't have to but then again if it is a webserver we probably would have to.
+Every system has a number of resources; which of these should we keep a close eye on, and which ones can we turn a blind eye to? For instance, is it necessary to monitor CPU usage? The answer is obviously yes, but it is still a decision that has to be made. Is it necessary to monitor the number of open ports in the system? We may or may not have to depending on the situation: if it is a general-purpose server we probably won't have to, but then again if it is a web server we probably would.

-### Continous Monitoring
+### Continuous Monitoring

-Monitoring is not a new item and even continous monitoring has been an ideal that many enterprises have adopted for many years.
+Monitoring is not a new concept, and even continuous monitoring is an ideal that many enterprises have adopted for many years.

-There are three key areas of focus when it comes to monitoring.
+There are three key areas of focus when it comes to monitoring:
- Infrastructure Monitoring
-- Application Monitoring
-- Network Monitoring
+- Application Monitoring
+- Network Monitoring

-The important thing to note is that there are many tools available we have mentioned two generic systems and tools in this session but there are lots. The real benefit of a monitoring solution comes when you have really spent the time making sure you are answering that question of what should we be monitoring and what shouldn't we?
+The important thing to note is that there are many tools available; we have mentioned two generic systems in this session, but there are lots more. The real benefit of a monitoring solution comes when you have spent the time making sure you are answering that question of what should we be monitoring and what shouldn't we?

-We could turn on a monitoring solution in any of our platforms and it will start grabbing information but if that information is simply too much then you are going to struggle to benefit from that solution, you have to spend the time to configure.
+We could turn on a monitoring solution in any of our platforms and it will start grabbing information, but if that information is simply too much then you are going to struggle to benefit from that solution; you have to spend the time to configure it.

-In the next session we will get hands on with a monitoring tool and see what we can start monitoring.
+In the next session we will get hands-on with a monitoring tool and see what we can start monitoring.
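Going back to the "write a script" option mentioned earlier, a minimal sketch of such a check might look like this. The 90% threshold is an arbitrary example, and a real version would run the same check over SSH against every server in a host list rather than only locally:

```shell
#!/bin/sh
# Rough sketch of the "write a script" approach: check disk usage on a host
# and warn when a threshold is crossed. Threshold is illustrative only.
DISK_LIMIT=90

check_disk() {
  # df -P gives stable POSIX columns: $5 is use%, $6 is the mount point
  df -P | awk -v limit="$DISK_LIMIT" 'NR > 1 {
    use = $5
    sub(/%/, "", use)
    if (use + 0 >= limit) printf "WARN: %s is at %s%%\n", $6, use
  }'
}

check_disk
```

Scaling this to a thousand servers (and to services, processes and every other resource) is exactly the considerable amount of work that makes the third option, a dedicated monitoring solution, the easier choice.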
-## Resources
+## Resources

- [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/)
-- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
-- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
+- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
+- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
- [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4)
-- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
+- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
- [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8)

See you on [Day 78](day78.md)
diff --git a/Days/day78.md b/Days/day78.md
index 3e44f0c06..b6ba30d51 100644
--- a/Days/day78.md
+++ b/Days/day78.md
@@ -1,77 +1,79 @@
---
-title: '#90DaysOfDevOps - Hands-On Monitoring Tools - Day 78'
+title: "#90DaysOfDevOps - Hands-On Monitoring Tools - Day 78"
published: false
description: 90DaysOfDevOps - Hands-On Monitoring Tools
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049056
---
+
## Hands-On Monitoring Tools

-In the last session, I spoke about the big picture of monitoring and I took a look into Nagios, there was two reasons for doing this. The first was this is a peice of software I have heard a lot of over the years so wanted to know a little more about its capabilities.
+In the last session, I spoke about the big picture of monitoring and took a look into Nagios; there were two reasons for doing this. The first was that this is a piece of software I have heard a lot about over the years, so I wanted to know a little more about its capabilities.
-Today I am going to be going into Prometheus, I have seen more and more of Prometheus in the Cloud-Native landscape but it can also be used to look after those physical resources as well outside of Kubernetes and the like.
+Today I am going to be going into Prometheus. I have seen more and more of Prometheus in the Cloud-Native landscape, but it can also be used to look after physical resources outside of Kubernetes and the like.

### Prometheus - Monitors nearly everything

-First of all Prometheus is Open-Source that can help you monitor containers and microservice based systems as well as physical, virtual and other services. There is a large community behind Prometheus.
+First of all, Prometheus is an open-source tool that can help you monitor containers and microservice-based systems as well as physical, virtual and other services. There is a large community behind Prometheus.

-Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/) The key being to exporting existing metrics as prometheus metrics. On top of this it also supports multiple proagramming languages.
+Prometheus has a large array of [integrations and exporters](https://prometheus.io/docs/instrumenting/exporters/), the key being to export existing metrics as Prometheus metrics. On top of this, it also supports multiple programming languages.

-Pull approach - If you are talking to thousands of microservices or systems and services a push method is going to be where you generally see the service pushing to the monitoring system. This brings some challenges around flooding the network, high cpu and also a single point of failure. Where Pull gives us a much better experience where Prometheus will pull from the metrics endpoint on every service.
+Pull approach - If you are talking to thousands of microservices, systems and services, a push method is where you would generally see each service pushing to the monitoring system.
This brings some challenges around flooding the network, high CPU usage and also a single point of failure. The pull approach gives us a much better experience: Prometheus pulls from the metrics endpoint on every service.

-Once again we see YAML for configuration for Prometheus.
+Once again we see YAML used for the configuration of Prometheus.

![](Images/Day78_Monitoring7.png)

-Later on you are going to see how this looks when deployed into Kubernetes, in particular we have the **PushGateway** which pulls our metrics from our jobs/exporters.
+Later on you are going to see how this looks when deployed into Kubernetes. In particular we have the **PushGateway**, which receives metrics pushed from short-lived jobs/exporters so that Prometheus can scrape them.

-We have the **AlertManager** which pushes alerts and this is where we can integrate into external services such as email, slack and other tooling.
+We have the **AlertManager**, which pushes alerts and is where we can integrate with external services such as email, Slack and other tooling.

-Then we have the Prometheus server which manages the retrieval of those pull metrics from the PushGateway and then sends those push alerts to the AlertManager. The Prometheus server also stores data on a local disk. Although can leverage remote storage solutions.
+Then we have the Prometheus server, which manages the retrieval of those pulled metrics from the PushGateway and then sends push alerts to the AlertManager. The Prometheus server also stores data on a local disk, although it can leverage remote storage solutions.

-We then also have PromQL which is the language used to interact with the metrics, this can be seen later on with the Prometheus Web UI but you will also see later on in this section how this is also used within Data visualisation tools such as Grafana.
+We then also have PromQL, which is the language used to interact with the metrics. This can be seen later on with the Prometheus Web UI, but you will also see later in this section how it is used within data visualisation tools such as Grafana.

-### Ways to Deploy Prometheus
+### Ways to Deploy Prometheus

-Various ways of installing Prometheus, [Download Section](https://prometheus.io/download/) Docker images are also available.
+There are various ways of installing Prometheus; see the [Download Section](https://prometheus.io/download/). Docker images are also available.

`docker run --name prometheus -d -p 127.0.0.1:9090:9090 prom/prometheus`

-But we are going to focus our efforts on deploying to Kubernetes. Which also has some options.
+But we are going to focus our efforts on deploying to Kubernetes, which also has some options:

-- Create configuration YAML files
+- Create configuration YAML files
- Using an Operator (manager of all prometheus components)
-- Using helm chart to deploy operator
+- Using a Helm chart to deploy the operator

-### Deploying to Kubernetes
+### Deploying to Kubernetes

-We will be using our minikube cluster locally again for this quick and simple installation. As with previous touch points with minikube, we will be using helm to deploy the Prometheus helm chart.
+We will be using our minikube cluster locally again for this quick and simple installation. As with previous touch points with minikube, we will be using Helm to deploy the Prometheus Helm chart.

-`helm repo add prometheus-community https://prometheus-community.github.io/helm-charts`
+`helm repo add prometheus-community https://prometheus-community.github.io/helm-charts`

![](Images/Day78_Monitoring1.png)

-As you can see from the above we have also ran a helm repo update, we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command.
+As you can see from the above, we have also run a `helm repo update`; we are now ready to deploy Prometheus into our minikube environment using the `helm install stable prometheus-community/prometheus` command.

![](Images/Day78_Monitoring2.png)

-After a couple of minutes you will see a number of new pods appear, for this demo I have deployed into the default namespace, I would normally push this to its own namespace.
+After a couple of minutes you will see a number of new pods appear. For this demo I have deployed into the default namespace; I would normally push this to its own namespace.

![](Images/Day78_Monitoring3.png)

-Once all the pods are running we can also take a look at all the deployed aspects of Prometheus.
+Once all the pods are running we can also take a look at all the deployed aspects of Prometheus.

![](Images/Day78_Monitoring4.png)

-Now for us to access the Prometheus Server UI we can use the following command to port forward.
+Now for us to access the Prometheus Server UI we can use the following command to port forward:

-```
+```Shell
export POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $POD_NAME 9090
```
+
-When we first open our browser to http://localhost:9090 we see the following very blank screen.
+When we first open our browser to `http://localhost:9090` we see the following very blank screen.

![](Images/Day78_Monitoring5.png)

@@ -79,17 +81,17 @@ Because we have deployed to our Kubernetes cluster we will automatically be pick

![](Images/Day78_Monitoring6.png)

-Short on learning PromQL and putting that into practice this is very much like I mentioned previously in that gaining metrics is great, so is monitoring but you have to know what you are monitoring and why and what you are not monitoring and why!
+We are short on learning PromQL and putting that into practice, but this is very much like I mentioned previously: gaining metrics is great, and so is monitoring, but you have to know what you are monitoring and why, and what you are not monitoring and why!

-I want to come back to Prometheus but for now I think we need to think about Log Management and Data Visualisation to bring us back to Prometheus later on.
+I want to come back to Prometheus, but for now I think we need to look at Log Management and Data Visualisation, which will bring us back to Prometheus later on.

-## Resources
+## Resources

- [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/)
-- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
-- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
+- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
+- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
- [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4)
-- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
+- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
- [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8)
- [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples)

diff --git a/Days/day79.md b/Days/day79.md
index d941e1a2a..1db073544 100644
--- a/Days/day79.md
+++ b/Days/day79.md
@@ -1,41 +1,42 @@
---
-title: '#90DaysOfDevOps - The Big Picture: Log Management - Day 79'
+title: "#90DaysOfDevOps - The Big Picture: Log Management - Day 79"
published: false
description: 90DaysOfDevOps - The Big Picture Log Management
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 
90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049057
---
+
## The Big Picture: Log Management

-A continuation to the infrastructure monitoring challenges and solutions, log management is another puzzle peice to the overall observability jigsaw.
+Continuing on from the infrastructure monitoring challenges and solutions, log management is another puzzle piece in the overall observability jigsaw.

-### Log Management & Aggregation
+### Log Management & Aggregation

-Let's talk about two core concepts the first of which is log aggregation and it's a way of collecting and tagging application logs from many different services and to a single dashboard that can easily be searched.
+Let's talk about two core concepts, the first of which is log aggregation: a way of collecting and tagging application logs from many different services into a single dashboard that can easily be searched.

-One of the first systems that have to be built out in an application performance management system is log aggregation. Application performance management is the part of the devops lifecycle where things have been built and deployed and you need to make sure that they're continuously working so they have enough resources allocated to them and errors aren't being shown to users. In most production deployments there are many related events that emit logs across services at google a single search might hit ten different services before being returned to the user if you got unexpected search results that might mean a logic problem in any of the ten services and log aggregation helps companies like google diagnose problems in production, they've built a single dashboard where they can map every request to unique id so if you search something your search will get a unique id and then every time that search is passing through a different service that service will connect that id to what they're currently doing.
+One of the first systems that has to be built out in an application performance management system is log aggregation. Application performance management is the part of the DevOps lifecycle where things have been built and deployed, and you need to make sure that they're continuously working: that they have enough resources allocated to them and that errors aren't being shown to users. In most production deployments there are many related events that emit logs across services. At Google, a single search might hit ten different services before being returned to the user; if you got unexpected search results, that might mean a logic problem in any of the ten services. Log aggregation helps companies like Google diagnose problems in production: they've built a single dashboard where they can map every request to a unique id. If you search something, your search will get a unique id, and every time that search passes through a different service, that service will connect the id to what it is currently doing.

-This is the essence of a good log aggregation platform efficiently collect logs from everywhere that emits them and make them easily searchable in the case of a fault again.
+This is the essence of a good log aggregation platform: efficiently collect logs from everywhere that emits them and make them easily searchable in the case of a fault.

-### Example App
+### Example App

-Our example application is a web app, we have a typical front end and backend storing our critical data to a MongoDB database.
+Our example application is a web app: we have a typical front end and backend, storing our critical data in a MongoDB database.

-If a user told us the page turned all white and printed an error message we would be hard-pressed to diagnose the problem with our current stack the user would need to manually send us the error and we'd need to match it with relevant logs in the other three services.
+If a user told us the page turned all white and printed an error message, we would be hard-pressed to diagnose the problem with our current stack. The user would need to manually send us the error and we'd need to match it with relevant logs in the other three services.

-### ELK
+### ELK

-Let's take a look at ELK, a popular open source log aggregation stack named after its three components elasticsearch, logstash and kibana if we installed it in the same environment as our example app.
+Let's take a look at ELK, a popular open-source log aggregation stack named after its three components: Elasticsearch, Logstash and Kibana. Imagine we installed it in the same environment as our example app.

-The web application would connect to the frontend which then connects to the backend, the backend would send logs to logstash and then the way that these three components work
+The web application would connect to the frontend, which then connects to the backend, and the backend would send logs to Logstash.

-### The components of elk
+### The components of ELK

-Elasticsearch, logstash and Kibana is that all of services send logs to logstash, logstash takes these logs which are text emitted by the application. For example the web application when you visit a web page, the web page might log this visitor access to this page at this time and that's an example of a log message those logs would be sent to logstash.
+The way Elasticsearch, Logstash and Kibana work together is that all of the services send logs to Logstash, and Logstash takes these logs, which are text emitted by the application. For example, in the web application, when you visit a web page the page might log "this visitor accessed this page at this time"; that's an example of a log message. Those logs would be sent to Logstash.

-Logstash would then extract things from them so for that log message user did **thing**, at **time**.
It would extract the time and extract the message and extract the user and include those all as tags so the message would be an object of tags and message so that you could search them easily you could find all of the requests made by a specific user but logstash doesn't store things itself it stores things in elasticsearch which is a efficient database for querying text and elasticsearch exposes the results as Kibana and Kibana is a web server that connects to elasticsearch and allows administrators as the devops person or other people on your team, the on-call engineer to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, Kibana would query elasticsearch for logs matching whatever you wanted.
+Logstash would then extract things from them: for the log message "user did **thing** at **time**", it would extract the time, the message and the user, and include them all as tags. The message would then be an object of tags and message so that you could search easily; for example, you could find all of the requests made by a specific user. Logstash doesn't store things itself; it stores them in Elasticsearch, which is an efficient database for querying text, and Elasticsearch exposes the results to Kibana. Kibana is a web server that connects to Elasticsearch and allows administrators (the DevOps person, other people on your team, or the on-call engineer) to view the logs in production whenever there's a major fault. You as the administrator would connect to Kibana, and Kibana would query Elasticsearch for logs matching whatever you wanted.

You could say hey Kibana in the search bar I want to find errors and kibana would say elasticsearch find the messages which contain the string error and then elasticsearch would return results that had been populated by logstash. Logstash would have been sent those results from all of the other services.
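The extract-and-tag step described above can be sketched as a minimal Logstash pipeline configuration. This is an illustrative assumption, not this stack's actual config: the port, the grok pattern and the `elasticsearch:9200` host are placeholders for whatever your services and deployment actually use:

```conf
# Hypothetical Logstash pipeline: receive logs, extract time/user/action, store in Elasticsearch
input {
  tcp {
    port  => 5000        # illustrative port the backend would send logs to
    codec => json_lines
  }
}

filter {
  grok {
    # assumes a log line like "2022-05-01T10:00:00Z alice viewed /home"
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:user} %{GREEDYDATA:action}" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]   # host assumed from a typical compose/Kubernetes setup
  }
}
```

With fields like `user` and `timestamp` extracted this way, the per-user and per-time searches described above become simple field queries in Kibana rather than free-text matching.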
@@ -43,39 +44,38 @@ You could say hey Kibana in the search bar I want to find errors and kibana woul

A user says i saw error code one two three four five six seven when i tried to do this with elk setup we'd have to go to kibana enter one two three four five six seven in the search bar press enter and then that would show us the logs that corresponded to that and one of the logs might say internal server error returning one two three four five six seven and we'd see that the service that emitted that log was the backend and we'd see what time that log was emitted at so we could go to the time in that log and we could look at the messages above and below it in the backend and then we could see a better picture of what happened for the user's request and we'd be able to repeat this process going to other services until we found what actually caused the problem for the user.

-### Security and Access to Logs
+### Security and Access to Logs

-An important peice of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access), logs can contain sensitive information like tokens it's important that only authenticated users can access them you wouldn't want to expose Kibana to the internet without some way of authenticating.
+An important piece of the puzzle is ensuring that logs are only visible to administrators (or the users and groups that absolutely need to have access). Logs can contain sensitive information like tokens, so it's important that only authenticated users can access them; you wouldn't want to expose Kibana to the internet without some way of authenticating.
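One common way to achieve this is to keep Kibana off the public internet and put it behind an authenticating reverse proxy. A minimal nginx sketch, where the hostname and htpasswd path are placeholder assumptions and a real deployment would also terminate TLS:

```nginx
# Hypothetical nginx reverse proxy adding HTTP basic auth in front of Kibana
server {
    listen 80;
    server_name kibana.example.com;                 # placeholder hostname

    location / {
        auth_basic           "Restricted logs";
        auth_basic_user_file /etc/nginx/.htpasswd;  # user file created with htpasswd
        proxy_pass           http://localhost:5601; # Kibana's default port
    }
}
```

The Elastic Stack also ships its own authentication and role-based access features, which are a better fit when you need per-user or per-group access to specific logs.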
### Examples of Log Management Tools

Examples of log management platforms there's

-- Elasticsearch
-- Logstash
-- Kibana
+- Elasticsearch
+- Logstash
+- Kibana
- Fluentd - popular open source choice
-- Datadog - hosted offering, commonly used at larger enterprises,
-- LogDNA - hosted offering
-- Splunk
-
-Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging.
+- Datadog - hosted offering, commonly used at larger enterprises
+- LogDNA - hosted offering
+- Splunk
+
+Cloud providers also provide logging such as AWS CloudWatch Logs, Microsoft Azure Monitor and Google Cloud Logging.

-Log Management is a key aspect of the overall observability of your applications and instracture environment for diagnosing problems in production it's relatively simple to install a turnkey solution like ELK or CloudWatch and it makes diagnosing and triaging problems in production significantly easier.
+Log Management is a key aspect of the overall observability of your applications and infrastructure environment. For diagnosing problems in production, it's relatively simple to install a turnkey solution like ELK or CloudWatch, and it makes diagnosing and triaging problems significantly easier.
-## Resources +## Resources - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 80](day80.md) diff --git a/Days/day80.md b/Days/day80.md index 7fd8324c7..855413b9d 100644 --- a/Days/day80.md +++ b/Days/day80.md @@ -1,36 +1,34 @@ --- -title: '#90DaysOfDevOps - ELK Stack - Day 80' +title: "#90DaysOfDevOps - ELK Stack - Day 80" published: false description: 90DaysOfDevOps - ELK Stack -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048746 --- -## ELK Stack -In this session, we are going to get a 
little more hands-on with some of the options we have mentioned. +## ELK Stack -### ELK Stack +In this session, we are going to get a little more hands-on with some of the options we have mentioned. -ELK Stack is the combination of 3 separate tools: +ELK Stack is the combination of 3 separate tools: - [Elasticsearch](https://www.elastic.co/what-is/elasticsearch) is a distributed, free and open search and analytics engine for all types of data, including textual, numerical, geospatial, structured, and unstructured. -- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash." +- [Logstash](https://www.elastic.co/logstash/) is a free and open server-side data processing pipeline that ingests data from a multitude of sources, transforms it, and then sends it to your favorite "stash." -- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps. +- [Kibana](https://www.elastic.co/kibana/) is a free and open user interface that lets you visualize your Elasticsearch data and navigate the Elastic Stack. Do anything from tracking query load to understanding the way requests flow through your apps. ELK stack lets us reliably and securely take data from any source, in any format, then search, analyze, and visualize it in real time. On top of the above mentioned components you might also see Beats which are lightweight agents that are installed on edge hosts to collect different types of data for forwarding into the stack. - -- Logs: Server logs that need to be analyzed are identified +- Logs: Server logs that need to be analysed are identified - Logstash: Collect logs and events data. 
It even parses and transforms data
-- ElasticSearch: The transformed data from Logstash is Store, Search, and indexed.
+- ElasticSearch: The transformed data from Logstash is stored, indexed, and made searchable.
- Kibana uses Elasticsearch DB to Explore, Visualize, and Share

@@ -40,69 +38,69 @@ On top of the above mentioned components you might also see Beats which are ligh

A good resource explaining this [The Complete Guide to the ELK Stack](https://logz.io/learn/complete-guide-elk-stack/)

-With the addition of beats the ELK Stack is also now known as Elastic Stack.
+With the addition of Beats, the ELK Stack is also now known as the Elastic Stack.

-For the hands-on scenario there are many places you can deploy the Elastic Stack but we are going to be using docker compose to deploy locally on our system.
+For the hands-on scenario there are many places you can deploy the Elastic Stack, but we are going to be using Docker Compose to deploy locally on our system.

[Start the Elastic Stack with Docker Compose](https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-stack-docker.html#get-started-docker-tls)

![](Images/Day80_Monitoring1.png)

-You will find the original files and walkthrough that I used here [ deviantony/docker-elk](https://github.com/deviantony/docker-elk)
+You will find the original files and walkthrough that I used here: [deviantony/docker-elk](https://github.com/deviantony/docker-elk)

-Now we can run `docker-compose up -d`, the first time this has been ran will require the pulling of images.
+Now we can run `docker-compose up -d`; the first time this is run it will need to pull the images.

![](Images/Day80_Monitoring2.png)

If you follow either this repository or the one that I used you will have either have the password of "changeme" or in my repository the password of "90DaysOfDevOps". The username is "elastic"

-After a few minutes we can navigate to http://localhost:5601/ which is our Kibana server / Docker container.
+After a few minutes we can navigate to `http://localhost:5601/`, which is our Kibana server / Docker container.

![](Images/Day80_Monitoring3.png)

-Your initial home screen is going to look something like this.
+Your initial home screen is going to look something like this.

![](Images/Day80_Monitoring4.png)

-Under the section titled "Get started by adding integrations" there is a "try sample data" click this and we can add one of the shown below.
+Under the section titled "Get started by adding integrations" there is a "try sample data" option; click this and we can add one of the data sets shown below.

![](Images/Day80_Monitoring5.png)

-I am going to select "Sample web logs" but this is really to get a look and feel of what data sets you can get into the ELK stack.
+I am going to select "Sample web logs", but this is really to get a look and feel for what data sets you can get into the ELK stack.

-When you have selected "Add Data" it takes a while to populate some of that data and then you have the "View Data" option and a list of the available ways to view that data in the drop down.
+When you have selected "Add Data", it takes a while to populate some of that data, and then you have the "View Data" option with a list of the available ways to view that data in the drop-down.

![](Images/Day80_Monitoring6.png)

-As it states on the dashboard view:
+As it states on the dashboard view:

**Sample Logs Data**

-*This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.*
+> This dashboard contains sample data for you to play with. You can view it, search it, and interact with the visualizations. For more information about Kibana, check our docs.

![](Images/Day80_Monitoring7.png)

-This is using Kibana to visualise data that has been added into ElasticSearch via Logstash. This is not the only option but I personally wanted to deploy and look at this.
+This is using Kibana to visualise data that has been added into ElasticSearch via Logstash. This is not the only option, but it is the one I personally wanted to deploy and look at.

-We are going to cover Grafana at some point and you are going to see some data visualisation similarities between the two, you have also seen Prometheus.
+We are going to cover Grafana at some point and you are going to see some data visualisation similarities between the two; you have also seen Prometheus.

-The key takeaway I have had between the Elastic Stack and Prometheus + Grafana is that Elastic Stack or ELK Stack is focused on Logs and Prometheus is focused on metrics.
+The key takeaway I have had between the Elastic Stack and Prometheus + Grafana is that the Elastic Stack (or ELK Stack) is focused on logs while Prometheus is focused on metrics.

-I was reading this article from MetricFire [Prometheus vs. ELK](https://www.metricfire.com/blog/prometheus-vs-elk/) to get a better understanding of the different offerings.
+I was reading this article from MetricFire, [Prometheus vs. ELK](https://www.metricfire.com/blog/prometheus-vs-elk/), to get a better understanding of the different offerings.
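+To make the "added into ElasticSearch via Logstash" step a little more concrete, a minimal Logstash pipeline looks something like the sketch below. This is illustrative only, not the pipeline shipped in the deviantony/docker-elk repository; the index name is hypothetical and the credentials are the defaults mentioned above.
+
+```
+input {
+  beats {
+    port => 5044   # receive events shipped by Filebeat/Metricbeat
+  }
+}
+output {
+  elasticsearch {
+    hosts    => ["elasticsearch:9200"]
+    user     => "elastic"
+    password => "changeme"                 # default from the walkthrough above
+    index    => "web-logs-%{+YYYY.MM.dd}"  # hypothetical daily index name
+  }
+}
+```
+
+Logstash reads events from the input, optionally runs them through filter plugins, and writes them to Elasticsearch, which is exactly the data that Kibana then visualises.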
-## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 81](day81.md) diff --git a/Days/day81.md b/Days/day81.md index f252d3680..95040efa3 100644 --- a/Days/day81.md +++ b/Days/day81.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Fluentd & FluentBit - Day 81' +title: "#90DaysOfDevOps - Fluentd & FluentBit - Day 81" published: false description: 90DaysOfDevOps - Fluentd & FluentBit tags: "devops, 90daysofdevops, learning" @@ -7,9 +7,10 @@ 
cover_image: null
canonical_url: null
id: 1048716
---
+
## Fluentd & FluentBit

-Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/). An Open-Source unified logging layer.
+Another data collector that I wanted to explore as part of this observability section was [Fluentd](https://docs.fluentd.org/), an open-source unified logging layer.

Fluentd has four key features that make it suitable to build clean, reliable logging pipelines:

@@ -23,48 +24,47 @@ Built-in Reliability: Data loss should never happen. Fluentd supports memory- an

[Installing Fluentd](https://docs.fluentd.org/quickstart#step-1-installing-fluentd)

-### How apps log data?
+### How do apps log data?

- Write to files. `.log` files (difficult to analyse without a tool and at scale)
- Log directly to a database (each application must be configured with the correct format)
- Third party applications (NodeJS, NGINX, PostgreSQL)

-This is why we want a unified logging layer.
+This is why we want a unified logging layer.

-FluentD allows for the 3 logging data types shown above and gives us the ability to collect, process and send those to a destination, this could be sending them logs to Elastic, MongoDB, Kafka databases for example.
+FluentD supports the 3 logging approaches shown above and gives us the ability to collect, process and send logs to a destination; this could mean sending them to Elasticsearch, MongoDB or Kafka, for example.

-Any Data, Any Data source can be sent to FluentD and that can be sent to any destination. FluentD is not tied to any particular source or destination.
+Any data, from any data source, can be sent to FluentD, and FluentD can send it on to any destination. FluentD is not tied to any particular source or destination.
-In my research of Fluentd I kept stumbling across Fluent bit as another option and it looks like if you were looking to deploy a logging tool into your Kubernetes environment then fluent bit would give you that capability, even though fluentd can also be deployed to containers as well as servers.
+In my research of Fluentd I kept stumbling across Fluent Bit as another option, and it looks like if you are looking to deploy a logging tool into your Kubernetes environment then Fluent Bit gives you that capability, even though Fluentd can also be deployed to containers as well as servers.

[Fluentd & Fluent Bit](https://docs.fluentbit.io/manual/about/fluentd-and-fluent-bit)

-Fluentd and Fluentbit will use the input plugins to transform that data to Fluent Bit format, then we have output plugins to whatever that output target is such as elasticsearch.
-
-We can also use tags and matches between configurations.
+Fluentd and Fluent Bit use input plugins to transform incoming data into their internal format, and then output plugins to forward it to whatever the output target is, such as Elasticsearch.

-I cannot see a good reason for using fluentd and it sems that Fluent Bit is the best way to get started. Although they can be used together in some architectures.
+We can also use tags and matches between configurations.

-### Fluent Bit in Kubernetes
+I cannot see a good reason for choosing Fluentd here, and it seems that Fluent Bit is the best way to get started, although they can be used together in some architectures.

-Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it will run on each node in the cluster. Each Fluent Bit pod on each node will then read each container on that node and gather all of the logs available. It will also gather the metadata from the Kubernetes API Server.
+### Fluent Bit in Kubernetes

-Kubernetes annotations can be used within the configuration YAML of our applications.
+Fluent Bit in Kubernetes is deployed as a DaemonSet, which means it will run on each node in the cluster. Each Fluent Bit pod on each node will then read each container on that node and gather all of the logs available. It will also gather the metadata from the Kubernetes API Server.

+Kubernetes annotations can be used within the configuration YAML of our applications.

-First of all we can deploy from the fluent helm repository. `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command.
+First of all, we can add the fluent Helm repository with `helm repo add fluent https://fluent.github.io/helm-charts` and then install using the `helm install fluent-bit fluent/fluent-bit` command.

![](Images/Day81_Monitoring1.png)

-In my cluster I am also running prometheus in my default namespace (for test purposes) we need to make sure our fluent-bit pod is up and running. we can do this using `kubectl get all | grep fluent` this is going to show us our running pod, service and daemonset that we mentioned earlier.
+In my cluster I am also running Prometheus in my default namespace (for test purposes). We need to make sure our fluent-bit pod is up and running; we can do this using `kubectl get all | grep fluent`, which is going to show us the running pod, service and DaemonSet that we mentioned earlier.

![](Images/Day81_Monitoring2.png)

-So that fluentbit knows where to get logs from we have a configuration file, in this Kubernetes deployment of fluentbit we have a configmap which resembles the configuration file.
+So that Fluent Bit knows where to get logs from, we have a configuration file; in this Kubernetes deployment of Fluent Bit, a ConfigMap holds that configuration.
![](Images/Day81_Monitoring3.png)

-That ConfigMap will look something like:
+That ConfigMap will look something like:

```
Name: fluent-bit
@@ -141,28 +141,26 @@ fluent-bit.conf:
Events:
```

-We can now port-forward our pod to our localhost to ensure that we have connectivity. Firstly get the name of your pod with `kubectl get pods | grep fluent` and then use `kubectl port-forward fluent-bit-8kvl4 2020:2020` open a web browser to http://localhost:2020/
+We can now port-forward our pod to our localhost to ensure that we have connectivity. Firstly get the name of your pod with `kubectl get pods | grep fluent`, then use `kubectl port-forward fluent-bit-8kvl4 2020:2020` and open a web browser to `http://localhost:2020/`

![](Images/Day81_Monitoring4.png)

I also found this really great Medium article covering more about [Fluent Bit](https://medium.com/kubernetes-tutorials/exporting-kubernetes-logs-to-elasticsearch-using-fluent-bit-758e8de606af)

-## Resources
+## Resources

- [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848)
- [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/)
-- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
-- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
+- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b)
+- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0)
- [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4)
-- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
+- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg)
- [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8)
- [Promql cheat sheet with
examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples)
- [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0)
- [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/)
- [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw)
-- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
-- [ Fluent Bit explained | Fluent Bit vs Fluentd ](https://www.youtube.com/watch?v=B2IS-XS-cc0)
-
+- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s)
+- [Fluent Bit explained | Fluent Bit vs Fluentd](https://www.youtube.com/watch?v=B2IS-XS-cc0)

See you on [Day 82](day82.md)
-
diff --git a/Days/day82.md b/Days/day82.md
index 954c5b38a..23cfb1742 100644
--- a/Days/day82.md
+++ b/Days/day82.md
@@ -1,21 +1,22 @@
---
-title: '#90DaysOfDevOps - EFK Stack - Day 82'
+title: "#90DaysOfDevOps - EFK Stack - Day 82"
published: false
description: 90DaysOfDevOps - EFK Stack
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1049059
---
+
### EFK Stack

In the previous section, we spoke about the ELK Stack, which uses Logstash as the log collector in the stack; in the EFK Stack we swap that out for Fluentd or Fluent Bit.

-Our mission in this section is to monitor our Kubernetes logs using EFK.
+Our mission in this section is to monitor our Kubernetes logs using EFK.

### Overview of EFK

-We will be deploying the following into our Kubernetes cluster.
+We will be deploying the following into our Kubernetes cluster.

![](Images/Day82_Monitoring1.png)

@@ -23,13 +24,13 @@ The EFK stack is a collection of 3 software bundled together, including:

- Elasticsearch: a NoSQL database used to store data, providing an interface for searching and querying logs.
-- Fluentd : Fluentd is an open source data collector for unified logging layer. Fluentd allows you to unify data collection and consumption for a better use and understanding of data.
+- Fluentd: an open-source data collector for a unified logging layer. Fluentd allows you to unify data collection and consumption for better use and understanding of data.

-- Kibana : Interface for managing and statistics logs. Responsible for reading information from elasticsearch .
+- Kibana: an interface for managing and analysing logs, responsible for reading information from Elasticsearch.

-### Deploying EFK on Minikube
+### Deploying EFK on Minikube

-We will be using our trusty minikube cluster to deploy our EFK stack. Let's start a cluster using `minikube start` on our system. I am using a Windows OS with WSL2 enabled.
+We will be using our trusty minikube cluster to deploy our EFK stack. Let's start a cluster using `minikube start` on our system. I am using a Windows OS with WSL2 enabled.

![](Images/Day82_Monitoring2.png)

@@ -37,20 +38,21 @@ I have created [efk-stack.yaml](Days/Monitoring/../../Monitoring/EFK%20Stack/efk

![](Images/Day82_Monitoring3.png)

-Depending on your system and if you have ran this already and have images pulled you should now watch the pods into a ready state before we can move on, you can check the progress with the following command. `kubectl get pods -n kube-logging -w` This can take a few minutes.
+Depending on your system, and whether you have run this already and have the images pulled, you should now watch the pods reach a ready state before we move on. You can check the progress with the following command: `kubectl get pods -n kube-logging -w`. This can take a few minutes.

![](Images/Day82_Monitoring4.png)

-The above command lets us keep an eye on things but I like to clarify that things are all good by just running the following `kubectl get pods -n kube-logging` command to ensure all pods are now up and running.
+The above command lets us keep an eye on things, but I like to confirm that things are all good by running the `kubectl get pods -n kube-logging` command to ensure all pods are now up and running.

![](Images/Day82_Monitoring5.png)

-Once we have all our pods up and running and at this stage we should see
+Once we have all our pods up and running, at this stage we should see:
+
- 3 pods associated with ElasticSearch
- 1 pod associated with Fluentd
- 1 pod associated with Kibana

-We can also use `kubectl get all -n kube-logging` to show all in our namespace, fluentd as explained previously is deployed as a daemonset, kibana as a deployment and Elasticsearch as a statefulset.
+We can also use `kubectl get all -n kube-logging` to show everything in our namespace; Fluentd, as explained previously, is deployed as a DaemonSet, Kibana as a Deployment and Elasticsearch as a StatefulSet.

![](Images/Day82_Monitoring6.png)

@@ -58,55 +60,53 @@ Now all of our pods are up and running we can now issue in a new terminal the po

![](Images/Day82_Monitoring7.png)

-We can now open up a browser and navigate to this address, http://localhost:5601 you will be greeted with either the screen you see below or you might indeed see a sample data screen or continue and configure yourself. Either way and by all means look at that test data, it is what we covered when we looked at the ELK stack in a previous session.
+We can now open up a browser and navigate to `http://localhost:5601`. You will be greeted with either the screen you see below, a sample data screen, or an option to configure things yourself. Either way, by all means look at that test data; it is what we covered when we looked at the ELK stack in a previous session.

![](Images/Day82_Monitoring8.png)

-Next, we need to hit the "discover" tab on the left menu and add "*" to our index pattern. Continue to the next step by hitting "Next step".
+Next, we need to hit the "discover" tab on the left menu and add "\*" to our index pattern. Continue to the next step by hitting "Next step".

![](Images/Day82_Monitoring9.png)

-On Step 2 of 2, we are going to use the @timestamp option from the dropdown as this will filter our data by time. When you hit create pattern it might take a few seconds to complete.
+On Step 2 of 2, we are going to use the @timestamp option from the dropdown, as this will filter our data by time. When you hit create pattern, it might take a few seconds to complete.

![](Images/Day82_Monitoring10.png)

-If we now head back to our "discover" tab after a few seconds you should start to see data coming in from your Kubernetes cluster.
+If we now head back to our "discover" tab, after a few seconds you should start to see data coming in from your Kubernetes cluster.

![](Images/Day82_Monitoring11.png)

-Now that we have the EFK stack up and running and we are gathering logs from our Kubernetes cluster via Fluentd we can also take a look at other sources we can choose from, if you navigate to the home screen by hitting the Kibana logo in the top left you will be greeted with the same page we saw when we first logged in.
+Now that we have the EFK stack up and running and we are gathering logs from our Kubernetes cluster via Fluentd, we can also take a look at other sources we can choose from. If you navigate to the home screen by hitting the Kibana logo in the top left, you will be greeted with the same page we saw when we first logged in.

-We have the ability to add APM, Log data, metric data and security events from other plugins or sources.
+We have the ability to add APM, log data, metric data and security events from other plugins or sources.

![](Images/Day82_Monitoring12.png)

-If we select "Add log data" then we can see below that we have a lot of choices on where we want to get our logs from, you can see that Logstash is mentioned there which is part of the ELK stack.
+If we select "Add log data" then we can see below that we have a lot of choices on where we want to get our logs from, you can see that Logstash is mentioned there which is part of the ELK stack. ![](Images/Day82_Monitoring13.png) -Under the metrics data you will find that you can add sources for Prometheus and lots of other services. +Under the metrics data you will find that you can add sources for Prometheus and lots of other services. ### APM (Application Performance Monitoring) -There is also the option to gather APM (Application Performance Monitoring) which collects in-depth performance metrics and errors from inside your application. It allows you to monitor the performance of thousands of applications in real time. +There is also the option to gather APM (Application Performance Monitoring) which collects in-depth performance metrics and errors from inside your application. It allows you to monitor the performance of thousands of applications in real time. I am not going to get into APM here but you can find out more on the [Elastic site](https://www.elastic.co/observability/application-performance-monitoring) - -## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus 
Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 83](day83.md) - diff --git a/Days/day83.md b/Days/day83.md index 403fbc3a0..4ff84ab0f 100644 --- a/Days/day83.md +++ b/Days/day83.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Data Visualisation - Grafana - Day 83' +title: "#90DaysOfDevOps - Data Visualisation - Grafana - Day 83" published: false description: 90DaysOfDevOps - Data Visualisation - Grafana tags: "devops, 90daysofdevops, learning" @@ -7,57 +7,58 @@ cover_image: null canonical_url: null id: 1048767 --- + ## Data Visualisation - Grafana -We saw a lot of Kibana over this section around Observability. But we have to also take some time to cover Grafana. But also they are not the same and they are not completely competing against each other. +We saw a lot of Kibana over this section around Observability. But we have to also take some time to cover Grafana. But also they are not the same and they are not completely competing against each other. Kibana’s core feature is data querying and analysis. Using various methods, users can search the data indexed in Elasticsearch for specific events or strings within their data for root cause analysis and diagnostics. 
Based on these queries, users can use Kibana’s visualisation features to visualise data in a variety of different ways, using charts, tables, geographical maps and other types of visualisations.

-Grafana actually started as a fork of Kibana, Grafana had an aim to supply support for metrics aka monitoring, which at that time Kibana did not provide.
+Grafana actually started as a fork of Kibana, with the aim of supplying support for metrics, aka monitoring, which at that time Kibana did not provide.

-Grafana is a free and Open-Source data visualisation tool. We commonly see Prometheus and Grafana together out in the field but we might also see Grafana alongside Elasticsearch and Graphite.
+Grafana is a free and open-source data visualisation tool. We commonly see Prometheus and Grafana together out in the field, but we might also see Grafana alongside Elasticsearch and Graphite.

-The key difference between the two tools is Logging vs Monitoring, we started the section off covering monitoring with Nagios and then into Prometheus before moving into Logging where we covered the ELK and EFK stacks.
+The key difference between the two tools is logging vs monitoring; we started the section off covering monitoring with Nagios and then Prometheus, before moving into logging where we covered the ELK and EFK stacks.

-Grafana caters to analysing and visualising metrics such as system CPU, memory, disk and I/O utilisation. The platform does not allow full-text data querying. Kibana runs on top of Elasticsearch and is used primarily for analyzing log messages.
+Grafana caters to analysing and visualising metrics such as system CPU, memory, disk and I/O utilisation. The platform does not allow full-text data querying. Kibana runs on top of Elasticsearch and is used primarily for analysing log messages.

-As we have already discovered with Kibana it is quite easy to deploy as well as having the choice of where to deploy, this is the same for Grafana.
+As we have already discovered, Kibana is quite easy to deploy and you have a choice of where to deploy it; the same is true for Grafana.

-Both support installation on Linux, Mac, Windows, Docker or building from source.
+Both support installation on Linux, Mac, Windows, Docker or building from source.

-There are no doubt others but Grafana is a tool that I have seen spanning the virtual, cloud and cloud-native platforms so I wanted to cover this here in this section.
+There are no doubt others, but Grafana is a tool that I have seen spanning virtual, cloud and cloud-native platforms, so I wanted to cover it in this section.

-### Prometheus Operator + Grafana Deployment
+### Prometheus Operator + Grafana Deployment

-We have covered Prometheus already in this section but as we see these paired so often I wanted to spin up an environment that would allow us to at least see what metrics we could have displayed in a visualisation. We know that monitoring our environments is important but going through those metrics alone in Prometheus or any metric tool is going to be cumbersome and it is not going to scale. This is where Grafana comes in and provides us that interactive visualisation of those metrics collected and stored in the Prometheus database.
+We have covered Prometheus already in this section, but as we see these paired so often, I wanted to spin up an environment that would allow us to at least see what metrics we could have displayed in a visualisation. We know that monitoring our environments is important, but going through those metrics alone in Prometheus or any metrics tool is going to be cumbersome and it is not going to scale. This is where Grafana comes in, providing us with interactive visualisation of the metrics collected and stored in the Prometheus database.

-With that visualisation we can create custom charts, graphs and alerts for our environment. In this walkthrough we will be using our minikube cluster.
+With that visualisation we can create custom charts, graphs and alerts for our environment. In this walkthrough we will be using our minikube cluster.

We are going to start by cloning this down to our local system. Using `git clone https://github.com/prometheus-operator/kube-prometheus.git` and `cd kube-prometheus`

![](Images/Day83_Monitoring1.png)

-First job is to create our namespace within our minikube cluster `kubectl create -f manifests/setup` if you have not been following along in previous sections we can use `minikube start` to bring up a new cluster here.
+The first job is to create our namespace within our minikube cluster with `kubectl create -f manifests/setup`. If you have not been following along in previous sections, we can use `minikube start` to bring up a new cluster here.

![](Images/Day83_Monitoring2.png)

-Next we are going to deploy everything we need for our demo using the `kubectl create -f manifests/` command, as you can see this is going to deploy a lot of different resources within our cluster.
+Next we are going to deploy everything we need for our demo using the `kubectl create -f manifests/` command; as you can see, this is going to deploy a lot of different resources within our cluster.

![](Images/Day83_Monitoring3.png)

-We then need to wait for our pods to come up and being in the running state we can use the `kubectl get pods -n monitoring -w` command to keep an eye on the pods.
+We then need to wait for our pods to come up into the running state; we can use the `kubectl get pods -n monitoring -w` command to keep an eye on the pods.

![](Images/Day83_Monitoring4.png)

-When everything is running we can check all pods are in a running and healthy state using the `kubectl get pods -n monitoring` command.
+When everything is running we can check all pods are in a running and healthy state using the `kubectl get pods -n monitoring` command.
![](Images/Day83_Monitoring5.png)

-With the deployment, we deployed a number of services that we are going to be using later on in the demo you can check these by using the `kubectl get svc -n monitoring` command.
+With the deployment, we deployed a number of services that we are going to be using later on in the demo; you can check these by using the `kubectl get svc -n monitoring` command.

![](Images/Day83_Monitoring6.png)

-And finally lets check on all resources deployed in our new monitoring namespace using the `kubectl get all -n monitoring` command.
+And finally, let's check on all resources deployed in our new monitoring namespace using the `kubectl get all -n monitoring` command.

![](Images/Day83_Monitoring7.png)

@@ -65,19 +66,21 @@ Opening a new terminal we are now ready to access our Grafana tool and start gat

![](Images/Day83_Monitoring8.png)

-Open a browser and navigate to http://localhost:3000 you will be prompted for a username and password.
+Open a browser and navigate to `http://localhost:3000`, where you will be prompted for a username and password.

![](Images/Day83_Monitoring9.png)

-The default username and password to access is
+The default username and password to access are:
+
```
-Username: admin
+Username: admin
Password: admin
```
+
-However you will be asked to provide a new password at first login. The initial screen or home page you will see will give you some areas to explore as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets we will be using them later.
+However, you will be asked to provide a new password at first login. The initial screen or home page will give you some areas to explore as well as some useful resources to get up to speed with Grafana and its capabilities. Notice the "Add your first data source" and "create your first dashboard" widgets; we will be using them later.
![](Images/Day83_Monitoring10.png)

-You will find that there is already a prometheus data source already added to our Grafana data sources, however because we are using minikube we need to also port forward prometheus so that this is available on our localhost, opening a new terminal we can run the following command. `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090` if on the home page of Grafana we now enter into the widget "Add your first data source" and from here we are going to select Prometheus.
+You will find that there is already a Prometheus data source added to our Grafana data sources. However, because we are using minikube, we also need to port forward Prometheus so that it is available on our localhost; opening a new terminal, we can run `kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090`. On the home page of Grafana we now enter the "Add your first data source" widget, and from here we are going to select Prometheus.

![](Images/Day83_Monitoring11.png)

@@ -85,7 +88,7 @@ For our new data source we can use the address http://localhost:9090 and we will

![](Images/Day83_Monitoring12.png)

-At the bottom of the page, we can now hit save and test. This should give us the outcome you see below if the port forward for prometheus is working.
+At the bottom of the page, we can now hit save and test. This should give us the outcome you see below if the port forward for Prometheus is working.
![](Images/Day83_Monitoring13.png) @@ -93,58 +96,58 @@ Head back to the home page and find the option to "Create your first dashboard" ![](Images/Day83_Monitoring14.png) -You will see from below that we are already gathering from our Grafana data source, but we would like to gather metrics from our Prometheus data source, select the data source drop down and select our newly created "Prometheus-1" +You will see from below that we are already gathering from our Grafana data source, but we would like to gather metrics from our Prometheus data source, select the data source drop down and select our newly created "Prometheus-1" ![](Images/Day83_Monitoring15.png) -If you then select the Metrics browser you will have a long list of metrics being gathered from Prometheus related to our minikube cluster. +If you then select the Metrics browser you will have a long list of metrics being gathered from Prometheus related to our minikube cluster. ![](Images/Day83_Monitoring16.png) -For the purpose of the demo I am going to find a metric that gives us some output around our system resources, `cluster:node_cpu:ratio{}` gives us some detail on the nodes in our cluster and proves that this integration is working. +For the purpose of the demo I am going to find a metric that gives us some output around our system resources, `cluster:node_cpu:ratio{}` gives us some detail on the nodes in our cluster and proves that this integration is working. ![](Images/Day83_Monitoring17.png) -Once you are happy with this as your visualisation then you can hit the apply button in the top right and you will then add this graph to your dashboard. Obviously you can go ahead and add additional graphs and other charts to give you the visual that you need. +Once you are happy with this as your visualisation then you can hit the apply button in the top right and you will then add this graph to your dashboard. 
Obviously you can go ahead and add additional graphs and other charts to give you the visual that you need.

![](Images/Day83_Monitoring18.png)

-We can however take advantage of thousands of previously created dashboards that we can use so that we do not need to reinvent the wheel.
+We can however take advantage of thousands of previously created dashboards that we can use so that we do not need to reinvent the wheel.

![](Images/Day83_Monitoring19.png)

-If we do a search for Kubernetes we will see a long list of pre built dashboards that we can choose from.
+If we do a search for Kubernetes we will see a long list of pre-built dashboards that we can choose from.

![](Images/Day83_Monitoring20.png)

-We have chosen the Kubernetes API Server dashboard and changed the data source to suit our newly added Prometheus-1 data source and we get to see some of the metrics displayed as per below.
+We have chosen the Kubernetes API Server dashboard and changed the data source to suit our newly added Prometheus-1 data source, and we get to see some of the metrics displayed as per below.

![](Images/Day83_Monitoring21.png)

### Alerting

-You could also leverage the alertmanager that we deployed to then send alerts out to slack or other integrations, in order to do this you would need to port foward the alertmanager service using the below details.
+You could also leverage the alertmanager that we deployed to send alerts out to Slack or other integrations. In order to do this you would need to port forward the alertmanager service using the below details.
`kubectl --namespace monitoring port-forward svc/alertmanager-main 9093`

-http://localhost:9093
+`http://localhost:9093`

-That wraps up our section on all things observability, I have personally found that this section has highlighted how broad this topic is but equally how important this is for our roles and that be it metrics, logging or tracing you are going to need to have a good idea of what is happening in our broad environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections.
+That wraps up our section on all things observability. I have personally found that this section has highlighted how broad this topic is, but equally how important it is for our roles. Be it metrics, logging or tracing, you are going to need a good idea of what is happening in our environments moving forward, especially when they can change so dramatically with all the automation that we have already covered in the other sections.

-Next up we are going to be taking a look into data management and how DevOps principles also needs to be considered when it comes to Data Management.
+Next up we are going to be taking a look into data management and how DevOps principles also need to be considered when it comes to Data Management.
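Before moving on, a quick way to sanity-check that the port forwards used in this section are actually serving traffic is a small probe script. This is only a sketch: it assumes both `kubectl port-forward` commands are running on your machine, and uses the `/-/ready` readiness endpoints that Prometheus and Alertmanager expose.

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers an HTTP request with a 2xx/3xx."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError, ValueError):
        # Connection refused, timeout or a malformed URL all count as "down".
        return False

if __name__ == "__main__":
    # Assumes the two `kubectl --namespace monitoring port-forward` commands
    # above are running in other terminals.
    for name, url in [("Prometheus", "http://localhost:9090/-/ready"),
                      ("Alertmanager", "http://localhost:9093/-/ready")]:
        print(f"{name}: {'up' if is_up(url) else 'not reachable'}")
```

If either service reports "not reachable", restart the corresponding port forward before heading back to Grafana.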
-## Resources +## Resources - [Understanding Logging: Containers & Microservices](https://www.youtube.com/watch?v=MMVdkzeQ848) - [The Importance of Monitoring in DevOps](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) -- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) -- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) +- [Understanding Continuous Monitoring in DevOps?](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) +- [DevOps Monitoring Tools](https://www.youtube.com/watch?v=Zu53QQuYqJ0) - [Top 5 - DevOps Monitoring Tools](https://www.youtube.com/watch?v=4t71iv_9t_4) -- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) +- [How Prometheus Monitoring works](https://www.youtube.com/watch?v=h4Sl21AKiDg) - [Introduction to Prometheus monitoring](https://www.youtube.com/watch?v=5o37CGlNLr8) - [Promql cheat sheet with examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) - [Log Management for DevOps | Manage application, server, and cloud logs with Site24x7](https://www.youtube.com/watch?v=J0csO_Shsj0) - [Log Management what DevOps need to know](https://devops.com/log-management-what-devops-teams-need-to-know/) - [What is ELK Stack?](https://www.youtube.com/watch?v=4X0WLg05ASw) -- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) +- [Fluentd simply explained](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) See you on [Day 84](day84.md) diff --git a/Days/day84.md b/Days/day84.md index 6397d33a8..27ff48bd0 100644 --- a/Days/day84.md +++ b/Days/day84.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - The Big Picture: Data Management - Day 84' +title: "#90DaysOfDevOps - The Big Picture: Data Management - Day 84" published: false description: 90DaysOfDevOps - The Big Picture Data Management tags: "devops, 
90daysofdevops, learning"
@@ -7,61 +7,61 @@ cover_image: null
canonical_url: null
id: 1048747
---
+
## The Big Picture: Data Management

![](Images/Day84_Data1.png)

-Data Management is by no means a new wall to climb, although we do know that data is more important than it maybe was a few years ago. Valuable and ever changing it can also be a massive nightmare when we are talking about automation and continuously integrate, test and deploy frequent software releases. Enter the persistent data and underlying data services often the main culprit when things go wrong.
+Data Management is by no means a new wall to climb, although we do know that data is more important than it maybe was a few years ago. Valuable and ever changing, it can also be a massive nightmare when we are talking about automation and continuously integrating, testing and deploying frequent software releases. Enter persistent data and the underlying data services, often the main culprits when things go wrong.

-But before I get into the Cloud-Native Data Management, we need to go up a level. We have touched on many different platforms throughout this challenge. Be it Physical, Virtual, Cloud and Cloud-Native obviously including Kubernetes there is none of these platforms that provide the lack of requirement for data management.
+But before I get into Cloud-Native Data Management, we need to go up a level. We have touched on many different platforms throughout this challenge. Be it Physical, Virtual, Cloud or Cloud-Native, obviously including Kubernetes, none of these platforms removes the requirement for data management.

-Whatever our business it is more than likely you will find a database lurking in the environment somewhere, be it for the most mission critical system in the business or at least some cog in the chain is storing that persistent data on some level of system.
+Whatever our business, it is more than likely you will find a database lurking in the environment somewhere, be it for the most mission-critical system in the business, or at least some cog in the chain storing that persistent data on some level of system.

-### DevOps and Data
+### DevOps and Data

-Much like the very start of this series where we spoke about the DevOps principles, in order for a better process when it comes to data you have to include the right people. This might be the DBAs but equally that is going to include people that care about the backup of those data services as well.
+Much like the very start of this series where we spoke about the DevOps principles, in order to have a better process when it comes to data you have to include the right people. This might be the DBAs, but equally that is going to include people that care about the backup of those data services as well.

-Secondly we also need to identify the different data types, domains, boundaries that we have associated with our data. This way it is not just dealt with in a silo approach amongst Database administrators, storage engineers or Backup focused engineers. This way the whole team can determine the best route of action when it comes to developing and hosting applications for the wider business and focus on the data architecture vs it being an after thought.
+Secondly we also need to identify the different data types, domains and boundaries that we have associated with our data. This way it is not just dealt with in a silo approach amongst database administrators, storage engineers or backup-focused engineers. This way the whole team can determine the best route of action when it comes to developing and hosting applications for the wider business and focus on the data architecture vs it being an afterthought.

-Now, this can span many different areas of the data lifecycle, we could be talking about data ingest, where and how will data be ingested into our service or application?
How will the service, application or users access this data. But then it also requires us to understand how we will secure the data and then how will we protect that data.
+Now, this can span many different areas of the data lifecycle. We could be talking about data ingest: where and how will data be ingested into our service or application? How will the service, application or users access this data? But then it also requires us to understand how we will secure the data and then how we will protect that data.

-### Data Management 101
+### Data Management 101

-Data management according to the [Data Management Body of Knowledge](https://www.dama.org/cpages/body-of-knowledge) is “the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.”
+Data management according to the [Data Management Body of Knowledge](https://www.dama.org/cpages/body-of-knowledge) is “the development, execution and supervision of plans, policies, programs and practices that control, protect, deliver and enhance the value of data and information assets.”

-- Data is the most important aspect of your business - Data is only one part of your overall business. I have seen the term "Data is the lifeblood of our business" and most likely absolutely true. Which then got me thinking about blood being pretty important to the body but alone it is nothing we still need the aspects of the body to make the blood something other than a liquid.
+- Data is the most important aspect of your business - Data is only one part of your overall business. I have seen the term "Data is the lifeblood of our business" and it is most likely absolutely true. Which then got me thinking about blood being pretty important to the body, but alone it is nothing; we still need the rest of the body to make the blood something other than a liquid.
-
-- Data quality is more important than ever - We are having to treat data as a business asset, meaning that we have to give it the considerations it needs and requires to work with our automation and DevOps principles.
+- Data quality is more important than ever - We are having to treat data as a business asset, meaning that we have to give it the considerations it needs and requires to work with our automation and DevOps principles.

-- Accessing data in a timely fashion - Nobody has the patience to not have access to the right data at the right time to make effective decisions. Data must be available in a streamlined and timely manher regardless of presentation.
+- Accessing data in a timely fashion - Nobody has the patience to not have access to the right data at the right time to make effective decisions. Data must be available in a streamlined and timely manner regardless of presentation.

-- Data Management has to be an enabler to DevOps - I mentioned streamline previously, we have to include the data management requirements into our cycle and ensure not just availablity of that data but also include other important policy based protection of those data points along with fully tested recovery models with that as well.
+- Data Management has to be an enabler to DevOps - I mentioned streamline previously; we have to include the data management requirements into our cycle and ensure not just availability of that data, but also include other important policy-based protection of those data points, along with fully tested recovery models as well.

-### DataOps
+### DataOps

-Both DataOps and DevOps apply the best practices of technology development and operations to improve quality, increase speed, reduce security threats, delight customers and provide meaningful and challenging work for skilled professionals. DevOps and DataOps share goals to accelerate product delivery by automating as many process steps as possible.
For DataOps, the objective is a resilient data pipeline and trusted insights from data analytics.
+Both DataOps and DevOps apply the best practices of technology development and operations to improve quality, increase speed, reduce security threats, delight customers and provide meaningful and challenging work for skilled professionals. DevOps and DataOps share goals to accelerate product delivery by automating as many process steps as possible. For DataOps, the objective is a resilient data pipeline and trusted insights from data analytics.

-Some of the most common higher level areas that focus on DataOps are going to be Machine Learning, Big Data and Data Analytics including Artifical Intelligence.
+Some of the most common higher-level areas that focus on DataOps are going to be Machine Learning, Big Data and Data Analytics, including Artificial Intelligence.

### Data Management is the management of information

-My focus throughout this section is not going to be getting into Machine Learning or Articial Intelligence but to focus on the protecting the data from a data protection point of view, the title of this subsection is "Data management is the management of information" and we can relate that information = data.
+My focus throughout this section is not going to be getting into Machine Learning or Artificial Intelligence but to focus on protecting the data from a data protection point of view. The title of this subsection is "Data management is the management of information" and we can relate that information = data.

-Three key areas that we should consider along this journey with data are:
+Three key areas that we should consider along this journey with data are:

-- Accuracy - Making sure that production data is accurate, equally we need to ensure that our data in the form of backups are also working and tested against recovery to be sure if a failure or a reason comes up we need to be able to get back up and running as fast as possible.
-
-- Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data, this also spans into data protection when it comes to protecting these data services especially data services we need to ensure consistency at different levels to make sure we are taking a good clean copy of that data for our backups, replicas etc.
+- Accuracy - Making sure that production data is accurate. Equally we need to ensure that our data in the form of backups is also working and tested against recovery, so that if a failure comes up we are able to get back up and running as fast as possible.
+- Consistent - If our data services span multiple locations then for production we need to make sure we have consistency across all data locations so that we are getting accurate data. This also spans into data protection: when protecting these data services we need to ensure consistency at different levels, to make sure we are taking a good clean copy of that data for our backups, replicas etc.

-- Secure - Access Control but equally just keeping data in general is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount, again this leads into data protection where we must make sure that only the required personnel have access to backups and the ability to restore from those as well clone and provide other versions of the business data.
+- Secure - Access control, but equally just keeping data safe in general, is a topical theme at the moment across the globe. Making sure the right people have access to your data is paramount. Again this leads into data protection, where we must make sure that only the required personnel have access to backups and the ability to restore from those, as well as clone and provide other versions of the business data.
-Better Data = Better Decisions +Better Data = Better Decisions -### Data Management Days +### Data Management Days -During the next 6 sessions we are going to be taking a closer look at Databases, Backup & Recovery, Disaster Recovery, Application Mobility all with an element of demo and hands on throughout. +During the next 6 sessions we are going to be taking a closer look at Databases, Backup & Recovery, Disaster Recovery, Application Mobility all with an element of demo and hands on throughout. -## Resources +## Resources - [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) - [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY) @@ -70,7 +70,3 @@ During the next 6 sessions we are going to be taking a closer look at Databases, - [Veeam Portability & Cloud Mobility](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) See you on [Day 85](day85.md) - - - - diff --git a/Days/day85.md b/Days/day85.md index dba8421bb..7ead66140 100644 --- a/Days/day85.md +++ b/Days/day85.md @@ -7,11 +7,12 @@ cover_image: null canonical_url: null id: 1048781 --- + ## Data Services -Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the course of the challenge. +Databases are going to be the most common data service that we come across in our environments. I wanted to take this session to explore some of those different types of Databases and some of the use cases they each have. Some we have used and seen throughout the course of the challenge. -From an application development point of view choosing the right data service or database is going to be a huge decision when it comes to the performance and scalability of your application. 
+From an application development point of view, choosing the right data service or database is going to be a huge decision when it comes to the performance and scalability of your application.

https://www.youtube.com/watch?v=W2Z7fbCLSTw

@@ -19,79 +20,82 @@ https://www.youtube.com/watch?v=W2Z7fbCLSTw

A key-value database is a type of nonrelational database that uses a simple key-value method to store data. A key-value database stores data as a collection of key-value pairs in which a key serves as a unique identifier. Both keys and values can be anything, ranging from simple objects to complex compound objects. Key-value databases are highly partitionable and allow horizontal scaling at scales that other types of databases cannot achieve.

-An example of a Key-Value database is Redis.
+An example of a Key-Value database is Redis.

-*Redis is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices.*
+_Redis is an in-memory data structure store, used as a distributed, in-memory key–value database, cache and message broker, with optional durability. Redis supports different kinds of abstract data structures, such as strings, lists, maps, sets, sorted sets, HyperLogLogs, bitmaps, streams, and spatial indices._

![](Images/Day85_Data1.png)

-As you can see from the description of Redis this means that our database is fast but we are limited on space as a trade off. Also no queries or joins which means data modelling options are very limited.
+As you can see from the description of Redis, this means that our database is fast but we are limited on space as a trade-off. There are also no queries or joins, which means data modelling options are very limited.
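The trade-off described above can be illustrated with a toy key-value store (a plain Python sketch, not Redis itself, and the session keys are made up for illustration): a lookup by key is a single hash lookup, but anything resembling a query means scanning every value.

```python
class ToyKeyValueStore:
    """A minimal in-memory key-value store, mimicking the Redis-style model."""

    def __init__(self):
        self._data = {}

    def set(self, key, value):
        self._data[key] = value

    def get(self, key, default=None):
        # Fast path: direct lookup by a known key, O(1) on average.
        return self._data.get(key, default)

    def scan_for(self, predicate):
        # There is no query language and no joins: finding data by *value*
        # means walking the entire keyspace.
        return [k for k, v in self._data.items() if predicate(v)]

store = ToyKeyValueStore()
store.set("session:42", {"user": "michael", "cart": ["sticker"]})
store.set("session:43", {"user": "dean", "cart": []})

print(store.get("session:42")["user"])          # fast: lookup by key
print(store.scan_for(lambda v: not v["cart"]))  # slow: "query" = full scan
```

This is why key-value stores shine for caches, sessions and leaderboards, where you always know the key you want.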
+
+Best for:

-Best for:
-- Caching
+- Caching
- Pub/Sub
-- Leaderboards
+- Leaderboards
- Shopping carts

-Generally used as a cache above another persistent data layer.
+Generally used as a cache above another persistent data layer.

### Wide Column

A wide-column database is a NoSQL database that organises data storage into flexible columns that can be spread across multiple servers or database nodes, using multi-dimensional mapping to reference data by column, row, and timestamp.

-*Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure.*
+_Cassandra is a free and open-source, distributed, wide-column store, NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure._

![](Images/Day85_Data2.png)

-No schema which means can handle unstructured data however this can be seen as a benefit to some workloads.
+There is no schema, which means it can handle unstructured data; for some workloads this can be seen as a benefit.

+Best for:

-Best for:
-- Time-Series
-- Historical Records
-- High-Write, Low-Read
+- Time-Series
+- Historical Records
+- High-Write, Low-Read

### Document

-A document database (also known as a document-oriented database or a document store) is a database that stores information in documents.
+A document database (also known as a document-oriented database or a document store) is a database that stores information in documents.

-*MongoDB is a source-available cross-platform document-oriented database program. Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License.*
+_MongoDB is a source-available cross-platform document-oriented database program.
Classified as a NoSQL database program, MongoDB uses JSON-like documents with optional schemas. MongoDB is developed by MongoDB Inc. and licensed under the Server Side Public License._

![](Images/Day85_Data3.png)

-NoSQL document databases allow businesses to store simple data without using complex SQL codes. Quickly store with no compromise to reliability.
+NoSQL document databases allow businesses to store simple data without using complex SQL code, storing it quickly with no compromise to reliability.

-Best for:
+Best for:

-- Most Applications
-- Games
-- Internet of Things
+- Most Applications
+- Games
+- Internet of Things

### Relational

-If you are new to databases but you know of them my guess is that you have absolutely come across a relational database.
+If you are new to databases but know of them, my guess is that you have absolutely come across a relational database.

A relational database is a digital database based on the relational model of data, as proposed by E. F. Codd in 1970. A system used to maintain relational databases is a relational database management system. Many relational database systems have an option of using the SQL for querying and maintaining the database.

-*MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language.*
+_MySQL is an open-source relational database management system. Its name is a combination of "My", the name of co-founder Michael Widenius's daughter, and "SQL", the abbreviation for Structured Query Language._

-MySQL is one example of a relational database there are lots of other options.
+MySQL is one example of a relational database; there are lots of other options.
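To make the relational model concrete, here is a small sketch using Python's built-in SQLite (standing in for MySQL, since both are relational and speak SQL; the schema and names are invented for illustration): two related tables and a join across them, exactly the kind of query the key-value model cannot express directly.

```python
import sqlite3

# An in-memory relational database; SQLite stands in for MySQL here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        item TEXT
    );
    INSERT INTO customers VALUES (1, 'Michael'), (2, 'Dean');
    INSERT INTO orders VALUES (10, 1, 'keyboard'), (11, 1, 'monitor'), (12, 2, 'mouse');
""")

# The relational model shines at queries across tables: a join.
rows = conn.execute("""
    SELECT c.name, o.item
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()

for name, item in rows:
    print(f"{name} ordered a {item}")
```

The fixed schema is the price you pay: every row must fit the declared columns, which is why unstructured workloads often reach for the NoSQL options above instead.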
![](Images/Day85_Data4.png)

-Whilst researching relational databases the term or abbreviation **ACID** has been mentioned a lot, (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.
+Whilst researching relational databases, the term or abbreviation **ACID** has come up a lot. ACID (atomicity, consistency, isolation, durability) is a set of properties of database transactions intended to guarantee data validity despite errors, power failures, and other mishaps. In the context of databases, a sequence of database operations that satisfies the ACID properties (which can be perceived as a single logical operation on the data) is called a transaction. For example, a transfer of funds from one bank account to another, even involving multiple changes such as debiting one account and crediting another, is a single transaction.
+
+Best for:
Your data is stored without restricting it to a pre-defined model, allowing a very flexible way of thinking about and using it. -*Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing* +_Neo4j is a graph database management system developed by Neo4j, Inc. Described by its developers as an ACID-compliant transactional database with native graph storage and processing_ -Best for: +Best for: - Graphs - Knowledge Graphs @@ -99,37 +103,37 @@ Best for: ### Search Engine -In the last section we actually used a Search Engine database in the way of Elasticsearch. +In the last section we actually used a Search Engine database in the way of Elasticsearch. A search-engine database is a type of non-relational database that is dedicated to the search of data content. Search-engine databases use indexes to categorise the similar characteristics among data and facilitate search capability. -*Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.* +_Elasticsearch is a search engine based on the Lucene library. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents._ -Best for: +Best for: -- Search Engines -- Typeahead +- Search Engines +- Typeahead - Log search ### Multi-model -A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated.Document, graph, relational, and key–value models are examples of data models that may be supported by a multi-model database. 
+A multi-model database is a database management system designed to support multiple data models against a single, integrated backend. In contrast, most database management systems are organized around a single data model that determines how data can be organized, stored, and manipulated.Document, graph, relational, and key–value models are examples of data models that may be supported by a multi-model database. -*Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL.* +_Fauna is a flexible, developer-friendly, transactional database delivered as a secure and scalable cloud API with native GraphQL._ -Best for: +Best for: - You are not stuck to having to choose a data model - ACID Compliant -- Fast +- Fast - No provisioning overhead - How do you want to consume your data and let the cloud do the heavy lifting -That is going to wrap up this database overview session, no matter what industry you are in you are going to come across one area of databases. We are then going to take some of these examples and look at the data management and in particular the protection and storing of these data services later on in the section. +That is going to wrap up this database overview session, no matter what industry you are in you are going to come across one area of databases. We are then going to take some of these examples and look at the data management and in particular the protection and storing of these data services later on in the section. -There are a ton of resources I have linked below, you could honestly spend 90 years probably deep diving into all database types and everything that comes with this. +There are a ton of resources I have linked below, you could honestly spend 90 years probably deep diving into all database types and everything that comes with this. 
-## Resources +## Resources - [Redis Crash Course - the What, Why and How to use Redis as your primary database](https://www.youtube.com/watch?v=OqCK95AS-YE) - [Redis: How to setup a cluster - for beginners](https://www.youtube.com/watch?v=GEg7s3i6Jak) @@ -145,5 +149,4 @@ There are a ton of resources I have linked below, you could honestly spend 90 ye - [FaunaDB Basics - The Database of your Dreams](https://www.youtube.com/watch?v=2CipVwISumA) - [Fauna Crash Course - Covering the Basics](https://www.youtube.com/watch?v=ihaB7CqJju0) - See you on [Day 86](day86.md) diff --git a/Days/day86.md b/Days/day86.md index 2448c6961..ad50823c9 100644 --- a/Days/day86.md +++ b/Days/day86.md @@ -1,137 +1,138 @@ --- -title: '#90DaysOfDevOps - Backup all the platforms - Day 86' +title: "#90DaysOfDevOps - Backup all the platforms - Day 86" published: false description: 90DaysOfDevOps - Backup all the platforms -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1049058 --- + ## Backup all the platforms During this whole challenge we have discussed many different platforms and environments. One thing all of those have in common is the fact they all need some level of data protection! -Data Protection has been around for many many years but the wealth of data that we have today and the value that this data brings means we have to make sure we are not only resilient to infrastructure failure by having multiple nodes and high availablity across applications but we must also consider that we need a copy of that data, that important data in a safe and secure location if a failure scenario was to occur. 
+Data Protection has been around for many many years but the wealth of data that we have today and the value that this data brings means we have to make sure we are not only resilient to infrastructure failure by having multiple nodes and high availablity across applications but we must also consider that we need a copy of that data, that important data in a safe and secure location if a failure scenario was to occur. -We hear a lot these days it seems about cybercrime and ransomware, and don't get me wrong this is a massive threat and I stand by the fact that you will be attacked by ransomware. It is not a matter of if it is a matter of when. So even more reason to make sure you have your data secure for when that time arises. However the most common cause for data loss is not ransomware or cybercrime it is simply accidental deletion! +We hear a lot these days it seems about cybercrime and ransomware, and don't get me wrong this is a massive threat and I stand by the fact that you will be attacked by ransomware. It is not a matter of if it is a matter of when. So even more reason to make sure you have your data secure for when that time arises. However the most common cause for data loss is not ransomware or cybercrime it is simply accidental deletion! -We have all done it, deleted something we shouldn't have and had that instant regret. +We have all done it, deleted something we shouldn't have and had that instant regret. -With all of the technology and automation we have discussed during the challenge, the requirement to protect any stateful data or even complex stateless configuration is still there, regardless of platform. +With all of the technology and automation we have discussed during the challenge, the requirement to protect any stateful data or even complex stateless configuration is still there, regardless of platform. 
![](Images/Day86_Data1.png)
-But we should be able to perform that protection of the data with automation in mind and being able to integrate into our workflows.
+But we should be able to perform that protection of the data with automation in mind and be able to integrate it into our workflows.
-If we look at what backup is:
+If we look at what backup is:
-*In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup".*
+_In information technology, a backup, or data backup is a copy of computer data taken and stored elsewhere so that it may be used to restore the original after a data loss event. The verb form, referring to the process of doing so, is "back up", whereas the noun and adjective form is "backup"._
-If we break this down to the simplest form, a backup is a copy and paste of data to a new location. Simply put I could take a backup right now by copying a file from my C: drive to my D: drive and I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files. I could revert back to the copy I have on the D: drive. Now if my computer dies where both the C & D drives live then I am not protected so I have to consider a solution or a copy of data outside of my system maybe onto a NAS drive in my house? But then what happens if something happens to my house, maybe I need to consider storing it on another system in another location, maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate against the risk of failure?
+If we break this down to the simplest form, a backup is a copy and paste of data to a new location.
Simply put, I could take a backup right now by copying a file from my C: drive to my D: drive and I would then have a copy in case something happened to the C: drive or something was edited wrongly within the files. I could revert back to the copy I have on the D: drive. Now if my computer dies where both the C & D drives live then I am not protected, so I have to consider a solution or a copy of data outside of my system, maybe onto a NAS drive in my house? But then what happens if something happens to my house? Maybe I need to consider storing it on another system in another location, maybe the cloud is an option. Maybe I could store a copy of my important files in several locations to mitigate against the risk of failure?
-### 3-2-1 Backup Methodolgy
+### 3-2-1 Backup Methodology
-Now seems a good time to talk about the 3-2-1 rule or backup methodology. I actually did a [lightening talk](https://www.youtube.com/watch?v=5wRt1bJfKBw) covering this topic.
+Now seems a good time to talk about the 3-2-1 rule or backup methodology. I actually did a [lightning talk](https://www.youtube.com/watch?v=5wRt1bJfKBw) covering this topic.
-We have already mentioned before some of the extreme ends of why we need to protect our data but a few more are listed below:
+We have already mentioned before some of the extreme ends of why we need to protect our data but a few more are listed below:
![](Images/Day86_Data2.png)
-Which then allows me to talk about the 3-2-1 methodology. My first copy or backup of my data should be as close to my production system as possible, the reason for this is based on speed to recovery and again going back to that original point about accidental deletion this is going to be the most common reason for recovery. But I want to be storing that on a suitable second media outside of the original or production system.
+Which then allows me to talk about the 3-2-1 methodology.
My first copy or backup of my data should be as close to my production system as possible; the reason for this is based on speed to recovery, and going back to that original point about accidental deletion, this is going to be the most common reason for recovery. But I want to be storing that on a suitable second media outside of the original or production system.
-We then want to make sure we also send a copy of our data external or offsite this is where a second location comes in be it another house, building, data centre or the public cloud.
+We then want to make sure we also send a copy of our data external or offsite; this is where a second location comes in, be it another house, building, data centre or the public cloud.
![](Images/Day86_Data3.png)
-### Backup Responsibility
+### Backup Responsibility
-We have most likely heard all of the myths when it comes to not having to backup, things like "Everything is stateless" I mean if everything is stateless then what is the business? no databases? word documents? Obviously there is a level of responsibility on every individual within the business to ensure they are protected but it is going to come down most likely to the operations teams to provide the backup process for the mission critical applications and data.
+We have most likely heard all of the myths when it comes to not having to back up, things like "Everything is stateless". I mean, if everything is stateless then what is the business? No databases? Word documents? Obviously there is a level of responsibility on every individual within the business to ensure they are protected, but it is going to come down most likely to the operations teams to provide the backup process for the mission critical applications and data.
-Another good one is that "High availability is my backup, we have built in multiple nodes into our cluster there is no way this is going down!"
apart from when you make a mistake to the database and this is replicated over all the nodes in the cluster, or there is fire, flood or blood scenario that means the cluster is no longer available and with it the important data. It's not about being stubborn it is about being aware of the data and the services, absolutely everyone should factor in high availability and fault tollerance into their architecture but that does not substitute the need for backup!
+Another good one is that "High availability is my backup, we have built in multiple nodes into our cluster there is no way this is going down!" Apart from when you make a mistake to the database and this is replicated over all the nodes in the cluster, or there is a fire, flood or blood scenario that means the cluster is no longer available and with it the important data. It's not about being stubborn, it is about being aware of the data and the services; absolutely everyone should factor high availability and fault tolerance into their architecture, but that does not substitute the need for backup!
-Replication can also seem to give us the offsite copy of the data and maybe that cluster mentioned above does live across multiple locations, however the first accidental mistake would still be replicated there. But again a Backup requirement should stand alongside application replication or system replication within the environment.
+Replication can also seem to give us the offsite copy of the data and maybe that cluster mentioned above does live across multiple locations, however the first accidental mistake would still be replicated there. But again, a backup requirement should stand alongside application replication or system replication within the environment.
-Now with all this said you can go to the extreme the other end as well and send copies of data to too many locations which is going to not only cost but also increase risk about being attacked as your surface area is now massively expanded.
+Now with all this said you can go to the other extreme as well and send copies of data to too many locations, which is going to not only cost more but also increase the risk of being attacked as your surface area is now massively expanded.
-Anyway, who looks after backup? It will be different within each business but someone should be taking it upon themselves to understand the backup requirements. But also understand the recovery plan!
+Anyway, who looks after backup? It will be different within each business but someone should be taking it upon themselves to understand the backup requirements. But also understand the recovery plan!
-### Nobody cares till everybody cares
+### Nobody cares till everybody cares
-Backup is a prime example, nobody cares about backup until you need to restore something. Alongside the requirement to back our data up we also need to consider how we restore!
+Backup is a prime example: nobody cares about backup until you need to restore something. Alongside the requirement to back our data up, we also need to consider how we restore!
-With our text document example we are talking very small files so the ability to copy back and forth is easy and fast. But if we are talking about 100GB plus files then this is going to take time. Also we have to consider the level in which we need to recover, if we take a virtual machine for example.
+With our text document example we are talking very small files, so the ability to copy back and forth is easy and fast. But if we are talking about 100GB plus files then this is going to take time. We also have to consider the level at which we need to recover, if we take a virtual machine for example.
-We have the whole Virtual Machine, we have the Operating System, Application installation and then if this is a database server then we will have some database files as well.
If we have made a mistake and inserted the wrong line of code into our database I probably don't need to restore the whole virtual machine, I want to be granular on what I recover back.
+We have the whole Virtual Machine, we have the Operating System, the application installation and then, if this is a database server, we will have some database files as well. If we have made a mistake and inserted the wrong line of code into our database, I probably don't need to restore the whole virtual machine; I want to be granular about what I recover.
-### Backup Scenario
+### Backup Scenario
-I want to now start building on a scenario to protect some data, specifically I want to protect some files on my local machine (in this case Windows but the tool I am going to use is in fact not only free and open-source but also cross platform) I would like to make sure they are protected to a NAS device I have locally in my home but also into an Object Storage bucket in the cloud.
+I want to now start building on a scenario to protect some data. Specifically, I want to protect some files on my local machine (in this case Windows, but the tool I am going to use is in fact not only free and open-source but also cross-platform). I would like to make sure they are protected to a NAS device I have locally in my home but also into an Object Storage bucket in the cloud.
-I want to backup this important data, it just so happens to be the repository for the 90DaysOfDevOps, which yes this is also being sent to GitHub which is probably where you are reading this now but what if my machine was to die and GitHub was down? How would anyone be able to read the content but also how would I potentially be able to restore that data to another service.
+I want to back up this important data; it just so happens to be the repository for the 90DaysOfDevOps, which yes is also being sent to GitHub, which is probably where you are reading this now. But what if my machine was to die and GitHub was down?
How would anyone be able to read the content, but also how would I potentially be able to restore that data to another service?
![](Images/Day86_Data5.png)
-There are lots of tools that can help us achieve this but I am going to be using a a tool called [Kopia](https://kopia.io/) an Open-Source backup tool which will enable us to encrypt, dedupe and compress our backups whilst being able to send them to many locations.
+There are lots of tools that can help us achieve this but I am going to be using a tool called [Kopia](https://kopia.io/), an Open-Source backup tool which will enable us to encrypt, dedupe and compress our backups whilst being able to send them to many locations.
-You will find the releases to download [here](https://github.com/kopia/kopia/releases) at the time of writing I will be using v0.10.6.
+You will find the releases to download [here](https://github.com/kopia/kopia/releases); at the time of writing I will be using v0.10.6.
-### Installing Kopia
+### Installing Kopia
-There is a Kopia CLI and GUI, we will be using the GUI but know that you can have a CLI version of this as well for those Linux servers that do not give you a GUI.
+There is a Kopia CLI and GUI; we will be using the GUI, but know that you can have a CLI version of this as well for those Linux servers that do not give you a GUI.
I will be using `KopiaUI-Setup-0.10.6.exe`
-Really quick next next installation and then when you open the application you are greeted with the choice of selecting your storage type that you wish to use as your backup repository.
+A really quick next, next installation, and then when you open the application you are greeted with the choice of selecting the storage type that you wish to use as your backup repository.
![](Images/Day86_Data6.png)
-### Setting up a Repository
+### Setting up a Repository
-Firstly we would like to setup a repository using our local NAS device and we are going to do this using SMB, but we could also use NFS I believe.
+Firstly we would like to set up a repository using our local NAS device and we are going to do this using SMB, but we could also use NFS I believe.
![](Images/Day86_Data7.png)
-On the next screen we are going to define a password, this password is used to encrypt the repository contents.
+On the next screen we are going to define a password; this password is used to encrypt the repository contents.
![](Images/Day86_Data8.png)
-Now that we have the repository configured we can trigger an adhoc snapshot to start writing data to our it.
+Now that we have the repository configured we can trigger an ad hoc snapshot to start writing data to it.
![](Images/Day86_Data9.png)
-First up we need to enter a path to what we want to snapshot and our case we want to take a copy of our `90DaysOfDevOps` folder. We will get back to the scheduling aspect shortly.
+First up we need to enter a path to what we want to snapshot, and in our case we want to take a copy of our `90DaysOfDevOps` folder. We will get back to the scheduling aspect shortly.
![](Images/Day86_Data10.png)
-We can define our snapshot retention.
+We can define our snapshot retention.
![](Images/Day86_Data11.png)
-Maybe there are files or file types that we wish to exclude.
+Maybe there are files or file types that we wish to exclude.
![](Images/Day86_Data12.png)
-If we wanted to define a schedule we could this on this next screen, when you first create this snapshot this is the opening page to define.
+If we wanted to define a schedule we could do this on the next screen; when you first create the snapshot this is the opening page where you define it.
![](Images/Day86_Data13.png)
-And you will see a number of other settings that can be handled here.
+And you will see a number of other settings that can be handled here.
![](Images/Day86_Data14.png)
-Select snapshot now and the data will be written to your repository.
+Select snapshot now and the data will be written to your repository.
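For reference, the GUI steps above map onto the Kopia CLI. The commands below are a sketch based on the Kopia documentation at the time of writing — the retention numbers and ignore pattern are illustrative values, not ones used in this walkthrough, so check `kopia policy set --help` for the exact flags in your version:

```shell
# Ad hoc snapshot of the folder (CLI equivalent of the "Snapshot Now" button).
kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"

# Retention: keep the last 7 daily and 4 weekly snapshots (illustrative values).
kopia policy set "C:\Users\micha\demo\90DaysOfDevOps" --keep-daily 7 --keep-weekly 4

# Exclusions: ignore file patterns we do not want in the backup (illustrative pattern).
kopia policy set "C:\Users\micha\demo\90DaysOfDevOps" --add-ignore "*.tmp"
```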
![](Images/Day86_Data15.png)
-### Offsite backup to S3
+### Offsite backup to S3
-With Kopia we can through the UI it seems only have one repository configured at a time. But through the UI we can be creative and basically have multiple repository configuration files to choose from to achieve our goal of having a copy local and offsite in Object Storage.
+With Kopia, it seems we can only have one repository configured at a time through the UI. But we can be creative and basically have multiple repository configuration files to choose from to achieve our goal of having a copy both local and offsite in Object Storage.
-The Object Storage I am choosing to send my data to is going to Google Cloud Storage. I firstly logged into my Google Cloud Platform account and created myself a storage bucket. I already had the Google Cloud SDK installed on my system but running the `gcloud auth application-default login` authenticated me with my account.
+The Object Storage I am choosing to send my data to is going to be Google Cloud Storage. I firstly logged into my Google Cloud Platform account and created myself a storage bucket. I already had the Google Cloud SDK installed on my system, and running `gcloud auth application-default login` authenticated me with my account.
![](Images/Day86_Data16.png)
-I then used the CLI of Kopia to show me the current status of my repository after we added our SMB repository in the previous steps. I did this using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command.
+I then used the Kopia CLI to show me the current status of my repository after we added our SMB repository in the previous steps. I did this using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command.
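For reference, connecting Kopia to a GCS bucket from the CLI looks along these lines. This is a sketch, not the exact command from the walkthrough — `90daysofdevops-backup` is a hypothetical bucket name, the separate `repository-gcs.config` file name is my own invention to keep the SMB configuration untouched, and the credentials come from the `gcloud auth application-default login` step above; see the Kopia docs for the exact flags in your version:

```shell
# Point the CLI at a second config file so the existing SMB repository config is preserved,
# then create (or later, connect to) a repository inside the GCS bucket.
"C:\Program Files\KopiaUI\resources\server\kopia.exe" \
  --config-file=C:\Users\micha\AppData\Roaming\kopia\repository-gcs.config \
  repository create gcs --bucket 90daysofdevops-backup
```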
![](Images/Day86_Data17.png)
@@ -141,21 +142,21 @@ The above command is taking into account that the Google Cloud Storage bucket we
![](Images/Day86_Data18.png)
-Now that we have created our new repository we can then run the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command again and will now show the GCS repository configuration.
+Now that we have created our new repository we can then run the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config repository status` command again and it will now show the GCS repository configuration.
![](Images/Day86_Data19.png)
-Next thing we need to do is create a snapshot and send that to our newly created repository. Using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"` command we can kick off this process. You can see in the below browser that our Google Cloud Storage bucket now has kopia files based on our backup in place.
+The next thing we need to do is create a snapshot and send that to our newly created repository. Using the `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config kopia snapshot create "C:\Users\micha\demo\90DaysOfDevOps"` command we can kick off this process. You can see in the browser below that our Google Cloud Storage bucket now has kopia files from our backup in place.
![](Images/Day86_Data20.png)
-With the above process we are able to settle our requirement of sending our important data to 2 different locations, 1 of which is offsite in Google Cloud Storage and of course we still have our production copy of our data on a different media type.
+With the above process we are able to satisfy our requirement of sending our important data to two different locations, one of which is offsite in Google Cloud Storage, and of course we still have our production copy of our data on a different media type.
### Restore
-Restore is another consideration and is very important, Kopia gives us the capability to not only restore to the existing location but also to a new location.
+Restore is another consideration and is very important; Kopia gives us the capability to not only restore to the existing location but also to a new location.
-If we run the command `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config snapshot list` this will list the snapshots that we have currently in our configured repository (GCS)
+If we run the command `"C:\Program Files\KopiaUI\resources\server\kopia.exe" --config-file=C:\Users\micha\AppData\Roaming\kopia\repository.config snapshot list` this will list the snapshots that we currently have in our configured repository (GCS).
![](Images/Day86_Data21.png)
@@ -163,13 +164,13 @@ We can then mount those snapshots directly from GCS using the `"C:\Program Files
![](Images/Day86_Data22.png)
-We could also restore the snapshot contents using `kopia snapshot restore kdbd9dff738996cfe7bcf99b45314e193`
+We could also restore the snapshot contents using `kopia snapshot restore kdbd9dff738996cfe7bcf99b45314e193`
-Obviously the commands above are very long and this is because I was using the KopiaUI version of the kopia.exe as explained at the top of the walkthrough you can download the kopia.exe and put into a path so you can just use the `kopia` command.
+Obviously the commands above are very long, and this is because I was using the KopiaUI version of kopia.exe. As explained at the top of the walkthrough, you can download kopia.exe and put it into your path so you can just use the `kopia` command.
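As a recap, with `kopia` on the path the restore flow condenses to a few short commands. This is a sketch — the snapshot ID is the one shown above, but the mount point and restore target folder are illustrative, so substitute the IDs and paths reported on your own system:

```shell
# List snapshots in the currently configured repository and note the snapshot ID.
kopia snapshot list

# Either mount all snapshots for read-only browsing (a drive letter on Windows)...
kopia mount all Z:

# ...or restore one snapshot directly into a new location.
kopia snapshot restore kdbd9dff738996cfe7bcf99b45314e193 C:\Users\micha\restore\90DaysOfDevOps
```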
-In the next session we will be focusing in on protecting workloads within Kubernetes.
+In the next session we will be focusing on protecting workloads within Kubernetes.
-## Resources
+## Resources
- [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s)
- [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY)
diff --git a/Days/day87.md b/Days/day87.md
index 90b725541..089c3f226 100644
--- a/Days/day87.md
+++ b/Days/day87.md
@@ -7,28 +7,29 @@ cover_image: null
canonical_url: null
id: 1048717
---
+
## Hands-On Backup & Recovery
-In the last session we touched on [Kopia](https://kopia.io/) an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud based object storage.
+In the last session we touched on [Kopia](https://kopia.io/), an Open-Source backup tool that we used to get some important data off to a local NAS and off to some cloud-based object storage.
-In this section, I want to get into the world of Kubernetes backup. It is a platform we covered [The Big Picture: Kubernetes](Days/day49.md) earlier in the challenge.
+In this section, I want to get into the world of Kubernetes backup. It is a platform we covered in [The Big Picture: Kubernetes](day49.md) earlier in the challenge.
-We will again be using our minikube cluster but this time we are going to take advantage of some of those addons that are available.
+We will again be using our minikube cluster, but this time we are going to take advantage of some of those addons that are available.
-### Kubernetes cluster setup
+### Kubernetes cluster setup
-To set up our minikube cluster we will be issuing the `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p 90daysofdevops --kubernetes-version=1.21.2` you will notice that we are using the `volumesnapshots` and `csi-hostpath-driver` as we will take full use of these for when we are taking our backups.
+To set up our minikube cluster we will be issuing the `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p 90daysofdevops --kubernetes-version=1.21.2` command; you will notice that we are using the `volumesnapshots` and `csi-hostpath-driver` addons as we will make full use of these when we are taking our backups.
-At this point I know we have not deployed Kasten K10 yet but we want to issue the following command when your cluster is up, but we want to annotate the volumesnapshotclass so that Kasten K10 can use this.
+At this point I know we have not deployed Kasten K10 yet, but once your cluster is up we want to issue the following command to annotate the volumesnapshotclass so that Kasten K10 can use it.
-```
+```Shell
kubectl annotate volumesnapshotclass csi-hostpath-snapclass \
k10.kasten.io/is-snapshot-class=true
```
-We are also going to change over the default storageclass from the standard default storageclass to the csi-hostpath storageclass using the following.
+We are also going to change the default storageclass from the standard default storageclass to the csi-hostpath storageclass using the following.
-``` +```Shell kubectl patch storageclass csi-hostpath-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' @@ -36,7 +37,7 @@ kubectl patch storageclass standard -p '{"metadata": {"annotations":{"storagecla ![](Images/Day87_Data1.png) -### Deploy Kasten K10 +### Deploy Kasten K10 Add the Kasten Helm repository @@ -44,7 +45,7 @@ Add the Kasten Helm repository We could use `arkade kasten install k10` here as well but for the purpose of the demo we will run through the following steps. [More Details](https://blog.kasten.io/kasten-k10-goes-to-the-arkade) -Create the namespace and deploy K10, note that this will take around 5 mins +Create the namespace and deploy K10, note that this will take around 5 mins `helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true --create-namespace` @@ -64,9 +65,9 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` ![](Images/Day87_Data4.png) -To authenticate with the dashboard we now need the token which we can get with the following commands. +To authenticate with the dashboard we now need the token which we can get with the following commands. -``` +```Shell TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) @@ -76,17 +77,17 @@ echo $TOKEN ![](Images/Day87_Data5.png) -Now we take this token and we input that into our browser, you will then be prompted for an email and company name. +Now we take this token and we input that into our browser, you will then be prompted for an email and company name. 
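If the dashboard does not come up, it is worth checking the deployment before retrying the token. These checks are a sketch — the `service/gateway` name and target port 8000 are taken from the K10 documentation at the time of writing, so verify them with `kubectl get svc -n kasten-io` if your release differs:

```shell
# All K10 pods in the kasten-io namespace should be Running before the dashboard is reachable.
kubectl get pods --namespace kasten-io

# Port forward the dashboard gateway to localhost, then browse to http://127.0.0.1:8080/k10/#/
kubectl --namespace kasten-io port-forward service/gateway 8080:8000
```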
![](Images/Day87_Data6.png)
-Then we get access to the Kasten K10 dashboard.
+Then we get access to the Kasten K10 dashboard.
![](Images/Day87_Data7.png)
-### Deploy our stateful application
+### Deploy our stateful application
-Use the stateful application that we used in the Kubernetes section.
+Use the stateful application that we used in the Kubernetes section.
![](Images/Day55_Kubernetes1.png)
@@ -94,51 +95,51 @@ You can find the YAML configuration file for this application here[pacman-statef
![](Images/Day87_Data8.png)
-We can use `kubectl get all -n pacman` to check on our pods coming up.
+We can use `kubectl get all -n pacman` to check on our pods coming up.
![](Images/Day87_Data9.png)
In a new terminal we can then port forward the pacman front end. `kubectl port-forward svc/pacman 9090:80 -n pacman`
-Open another tab on your browser to http://localhost:9090/
+Open another tab on your browser to http://localhost:9090/
![](Images/Day87_Data10.png)
-Take the time to clock up some high scores in the backend MongoDB database.
+Take the time to clock up some high scores in the backend MongoDB database.
![](Images/Day87_Data11.png)
-### Protect our High Scores
+### Protect our High Scores
-Now we have some mission critical data in our database and we do not want to lose it. We can use Kasten K10 to protect this whole application.
+Now we have some mission critical data in our database and we do not want to lose it. We can use Kasten K10 to protect this whole application.
-If we head back into the Kasten K10 dashboard tab you will see that our number of application has now increased from 1 to 2 with the addition of our pacman application to our Kubernetes cluster.
+If we head back into the Kasten K10 dashboard tab you will see that our number of applications has now increased from 1 to 2 with the addition of our pacman application to our Kubernetes cluster.
![](Images/Day87_Data12.png)
-If you click on the Applications card you will see the automatically discovered applications in our cluster.
+If you click on the Applications card you will see the automatically discovered applications in our cluster.
![](Images/Day87_Data13.png)
-With Kasten K10 we have the ability to leverage storage based snapshots as well export our copies out to object storage options.
+With Kasten K10 we have the ability to leverage storage-based snapshots as well as export our copies out to object storage options.
-For the purpose of the demo, we will create a manual storage snapshot in our cluster and then we can add some rogue data to our high scores to simulate an accidental mistake being made or is it?
+For the purpose of the demo, we will create a manual storage snapshot in our cluster and then we can add some rogue data to our high scores to simulate an accidental mistake being made, or is it?
-Firstly we can use the manual snapshot option below.
+Firstly we can use the manual snapshot option below.
![](Images/Day87_Data14.png)
-For the demo I am going to leave everything as the default
+For the demo I am going to leave everything as the default.
![](Images/Day87_Data15.png)
-Back on the dashboard you get a status report on the job as it is running and then when complete it should look as successful as this one.
+Back on the dashboard you get a status report on the job as it is running, and when complete it should look as successful as this one.
![](Images/Day87_Data16.png)
-### Failure Scenario
+### Failure Scenario
-We can now make that fatal change to our mission critical data by simply adding in a prescriptive bad change to our application.
+We can now make that fatal change to our mission critical data by simply adding a prescriptive bad change to our application. As you can see below, we have two inputs that we probably don't want in our production mission critical database.
@@ -146,39 +147,39 @@ As you can see below we have two inputs that we probably dont want in our produc
### Restore the data
-Obviously this is a simple demo and in a way not realistic although have you seen how easy it is to drop databases?
+Obviously this is a simple demo and in a way not realistic, although have you seen how easy it is to drop databases?
-Now we want to get that high score list looking a little cleaner and how we had it before the mistakes were made.
+Now we want to get that high score list looking a little cleaner, back to how we had it before the mistakes were made.
-Back in the Applications card and on the pacman tab we now have 1 restore point we can use to restore from.
+Back in the Applications card and on the pacman tab we now have one restore point we can use to restore from.
![](Images/Day87_Data18.png)
-When you select restore you can see all the associated snapshots and exports to that application.
+When you select restore you can see all the associated snapshots and exports for that application.
![](Images/Day87_Data19.png)
-Select that restore and a side window will appear, we will keep the default settings and hit restore.
+Select that restore and a side window will appear; we will keep the default settings and hit restore.
![](Images/Day87_Data20.png)
-Confirm that you really want to make this happen.
+Confirm that you really want to make this happen.
![](Images/Day87_Data21.png)
-You can then go back to the dashboard and see the progress of the restore. You should see something like this.
+You can then go back to the dashboard and see the progress of the restore. You should see something like this.
![](Images/Day87_Data22.png)
-But more importantly how is our High-Score list looking in our mission critical application.
+But more importantly, how is our High-Score list looking in our mission critical application?
You will have to start the port forward again to pacman as we previously covered. ![](Images/Day87_Data23.png) -A super simple demo and only really touching the surface of what Kasten K10 can really achieve when it comes to backup. I will be creating some more in depth video content on some of these areas in the future. We will also be using Kasten K10 to highlight some of the other prominent areas around Data Management when it comes to Disaster Recovery and the mobility of your data. +A super simple demo and only really touching the surface of what Kasten K10 can really achieve when it comes to backup. I will be creating some more in depth video content on some of these areas in the future. We will also be using Kasten K10 to highlight some of the other prominent areas around Data Management when it comes to Disaster Recovery and the mobility of your data. -Next we will take a look at Application consistency. +Next we will take a look at Application consistency. -## Resources +## Resources - [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) - [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY) diff --git a/Days/day88.md b/Days/day88.md index 684137697..6e8473e6c 100644 --- a/Days/day88.md +++ b/Days/day88.md @@ -1,5 +1,5 @@ --- -title: '#90DaysOfDevOps - Application Focused Backup - Day 88' +title: "#90DaysOfDevOps - Application Focused Backup - Day 88" published: false description: 90DaysOfDevOps - Application Focused Backups tags: "devops, 90daysofdevops, learning" @@ -7,41 +7,42 @@ cover_image: null canonical_url: null id: 1048749 --- + ## Application Focused Backups -We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes application consistency. 
+We have already spent some time talking about data services or data intensive applications such as databases on [Day 85](day85.md). For these data services we have to consider how we manage consistency, especially when it comes to application consistency. -In this post we are going to dive into that requirement around protecting the application data in a consistent manner. +In this post we are going to dive into that requirement around protecting the application data in a consistent manner. In order to do this our tool of choice will be [Kanister](https://kanister.io/) ![](Images/Day88_Data1.png) -### Introducing Kanister +### Introducing Kanister -Kanister is an open-source project by Kasten, that enables us to manage (backup and restore) application data on Kubernetes. You can deploy Kanister as a helm application into your Kubernetes cluster. +Kanister is an open-source project by Kasten that enables us to manage (backup and restore) application data on Kubernetes. You can deploy Kanister as a Helm application into your Kubernetes cluster. -Kanister uses Kubernetes custom resources, the main custom resources that are installed when Kanister is deployed are +Kanister uses Kubernetes custom resources; the main custom resources that are installed when Kanister is deployed are: -- `Profile` - is a target location to store your backups and recover from. Most commonly this will be object storage. +- `Profile` - a target location to store your backups and recover from. Most commonly this will be object storage. - `Blueprint` - steps that are to be taken to backup and restore the database should be maintained in the Blueprint -- `ActionSet` - is the motion to move our target backup to our profile as well as restore actions. - -### Execution Walkthrough +- `ActionSet` - the motion to move our target backup to our profile, as well as restore actions.
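To make the relationship between these three custom resources concrete, below is a sketch of an ActionSet that ties a Blueprint and a Profile together. All of the names here (blueprint, workload, profile and secret) are illustrative only, and later in this walkthrough we will generate the ActionSet with `kanctl` rather than writing it by hand.

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: backup-
  namespace: kanister
spec:
  actions:
    - name: backup
      # The Blueprint holding the backup/restore logic
      blueprint: mysql-blueprint
      # The workload the action runs against
      object:
        kind: StatefulSet
        name: mysql-store
        namespace: my-production-app
      # The Profile (target location) the backup is pushed to
      profile:
        name: s3-profile
        namespace: my-production-app
      # Credentials the Blueprint needs to reach the database
      secrets:
        mysql:
          name: mysql-store
          namespace: my-production-app
```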
+ +### Execution Walkthrough -Before we get hands on we should take a look at the workflow that Kanister takes in protecting application data. Firstly our controller is deployed using helm into our Kubernetes cluster, Kanister lives within its own namespace. We take our Blueprint of which there are many community supported blueprints available, we will cover this in more detail shortly. We then have our database workload. +Before we get hands-on we should take a look at the workflow that Kanister takes in protecting application data. Firstly our controller is deployed using Helm into our Kubernetes cluster; Kanister lives within its own namespace. We take our Blueprint, of which there are many community-supported blueprints available (we will cover this in more detail shortly). We then have our database workload. ![](Images/Day88_Data2.png) -We then create our ActionSet. +We then create our ActionSet. ![](Images/Day88_Data3.png) -The ActionSet allows us to run the actions defined in the blueprint against the specific data service. +The ActionSet allows us to run the actions defined in the blueprint against the specific data service. ![](Images/Day88_Data4.png) -The ActionSet in turns uses the Kanister functions (KubeExec, KubeTask, Resource Lifecycle) and pushes our backup to our target repository (Profile). +The ActionSet in turn uses the Kanister functions (KubeExec, KubeTask, Resource Lifecycle) and pushes our backup to our target repository (Profile). ![](Images/Day88_Data5.png) @@ -49,50 +50,53 @@ If that action is completed/failed the respective status is updated in the Actio ![](Images/Day88_Data6.png) -### Deploying Kanister +### Deploying Kanister -Once again we will be using the minikube cluster to achieve this application backup. If you have it still running from the previous session then we can continue to use this. +Once again we will be using the minikube cluster to achieve this application backup.
If you have it still running from the previous session then we can continue to use this. -At the time of writing we are up to image version `0.75.0` with the following helm command we will install kanister into our Kubernetes cluster. +At the time of writing we are up to image version `0.75.0`; with the following Helm command we will install Kanister into our Kubernetes cluster. `helm install kanister --namespace kanister kanister/kanister-operator --set image.tag=0.75.0 --create-namespace` ![](Images/Day88_Data7.png) -We can use `kubectl get pods -n kanister` to ensure the pod is up and runnnig and then we can also check our custom resource definitions are now available (If you have only installed Kanister then you will see the highlighted 3) +We can use `kubectl get pods -n kanister` to ensure the pod is up and running and then we can also check our custom resource definitions are now available (if you have only installed Kanister then you will see the highlighted 3). ![](Images/Day88_Data8.png) -### Deploy a Database +### Deploy a Database Deploying mysql via helm: -``` +```Shell APP_NAME=my-production-app kubectl create ns ${APP_NAME} helm repo add bitnami https://charts.bitnami.com/bitnami helm install mysql-store bitnami/mysql --set primary.persistence.size=1Gi,volumePermissions.enabled=true --namespace=${APP_NAME} kubectl get pods -n ${APP_NAME} -w ``` + ![](Images/Day88_Data9.png) Populate the mysql database with initial data, run the following: -``` +```Shell MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace ${APP_NAME} mysql-store -o jsonpath="{.data.mysql-root-password}" | base64 --decode) MYSQL_HOST=mysql-store.${APP_NAME}.svc.cluster.local MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" echo MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} ``` -### Create a MySQL CLIENT +### Create a MySQL CLIENT + We will run another container image to act as our client -``` +```Shell APP_NAME=my-production-app kubectl run 
mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash ``` -``` + +```Shell Note: if you already have an existing mysql client pod running, delete with the command kubectl delete pod -n ${APP_NAME} mysql-client @@ -100,7 +104,7 @@ kubectl delete pod -n ${APP_NAME} mysql-client ### Add Data to MySQL -``` +```Shell echo "create database myImportantData;" | mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} MYSQL_EXEC="mysql -h ${MYSQL_HOST} -u root --password=${MYSQL_ROOT_PASSWORD} -DmyImportantData -t" echo "drop table Accounts" | ${MYSQL_EXEC} @@ -116,18 +120,18 @@ echo "insert into Accounts values('rastapopoulos', 377);" | ${MYSQL_EXEC} echo "select * from Accounts;" | ${MYSQL_EXEC} exit ``` -You should be able to see some data as per below. -![](Images/Day88_Data10.png) +You should be able to see some data as per below. +![](Images/Day88_Data10.png) ### Create Kanister Profile -Kanister provides a CLI, `kanctl` and another utility `kando` that is used to interact with your object storage provider from blueprint and both of these utilities. +Kanister provides a CLI, `kanctl`, and another utility, `kando`, which is used to interact with your object storage provider from blueprints; we will make use of both of these utilities. [CLI Download](https://docs.kanister.io/tooling.html#tooling) -I have gone and I have created an AWS S3 Bucket that we will use as our profile target and restore location. +I have created an AWS S3 bucket that we will use as our profile target and restore location.
I am going to be using environment variables so that I am able to still show you the commands I am running with `kanctl` to create our kanister profile. `kanctl create profile s3compliant --access-key $ACCESS_KEY --secret-key $SECRET_KEY --bucket $BUCKET --region eu-west-2 --namespace my-production-app` @@ -135,12 +139,11 @@ I have gone and I have created an AWS S3 bucket that we will use as our profile ### Blueprint time -Don't worry you don't need to create your own one from scratch unless your data service is not listed here in the [Kanister Examples](https://github.com/kanisterio/kanister/tree/master/examples) but by all means community contributions are how this project gains awareness. - -The blueprint we will be using will be the below. +Don't worry, you don't need to create your own from scratch unless your data service is not listed here in the [Kanister Examples](https://github.com/kanisterio/kanister/tree/master/examples), but by all means, community contributions are how this project gains awareness. +The blueprint we will be using is below. -``` +```yaml apiVersion: cr.kanister.io/v1alpha1 kind: Blueprint metadata: @@ -220,46 +223,46 @@ actions: kando location delete --profile '{{ toJson .Profile }}' --path ${s3_path} ``` -To add this we will use the `kubectl create -f mysql-blueprint.yml -n kanister` command +To add this we will use the `kubectl create -f mysql-blueprint.yml -n kanister` command. ![](Images/Day88_Data12.png) -### Create our ActionSet and Protect our application +### Create our ActionSet and Protect our application We will now take a backup of the MySQL data using an ActionSet defining backup for this application. Create an ActionSet in the same namespace as the controller.
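The `kanctl create profile` command earlier relied on a few environment variables that were never shown being set; a sketch of that setup is below. All of the values are placeholders, not real credentials, and the assembled command is only printed so you can sanity-check the flags before running it for real.

```shell
# Placeholder values - substitute your own IAM keys and bucket name
ACCESS_KEY="AKIAIOSFODNN7EXAMPLE"
SECRET_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
BUCKET="90daysofdevops-kanister-demo"

# Build the profile-creation command so it can be reviewed before running
CMD="kanctl create profile s3compliant --access-key ${ACCESS_KEY} --secret-key ${SECRET_KEY} --bucket ${BUCKET} --region eu-west-2 --namespace my-production-app"
echo "${CMD}"
```

Echoing the command rather than executing it directly also means the real secret key never needs to appear in your shell history or in a screenshot.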
-`kubectl get profiles.cr.kanister.io -n my-production-app` This command will show us the profile we previously created, we can have multiple profiles configured here so we might want to use specific ones for different ActionSets +`kubectl get profiles.cr.kanister.io -n my-production-app` This command will show us the profile we previously created; we can have multiple profiles configured here, so we might want to use specific ones for different ActionSets. -We are then going to create our ActionSet with the following command using `kanctl` +We are then going to create our ActionSet with the following command using `kanctl` `kanctl create actionset --action backup --namespace kanister --blueprint mysql-blueprint --statefulset my-production-app/mysql-store --profile my-production-app/s3-profile-dc5zm --secrets mysql=my-production-app/mysql-store` -You can see from the command above we are defining the blueprint we added to the namespace, the statefulset in our `my-production-app` namespace and also the secrets to get into the MySQL application. +You can see from the command above we are defining the blueprint we added to the namespace, the statefulset in our `my-production-app` namespace and also the secrets to get into the MySQL application. ![](Images/Day88_Data13.png) Check the status of the ActionSet by taking the ActionSet name and using this command `kubectl --namespace kanister describe actionset backup-qpnqv` -Finally we can go and confirm that we now have data in our AWS S3 bucket. +Finally we can go and confirm that we now have data in our AWS S3 bucket. ![](Images/Day88_Data14.png) -### Restore +### Restore -We need to cause some damage before we can restore anything, we can do this by dropping our table, maybe it was an accident, maybe it wasn't. +We need to cause some damage before we can restore anything; we can do this by dropping our table. Maybe it was an accident, maybe it wasn't. -Connect to our MySQL pod. +Connect to our MySQL pod.
-``` +```Shell APP_NAME=my-production-app kubectl run mysql-client --rm --env APP_NS=${APP_NAME} --env MYSQL_EXEC="${MYSQL_EXEC}" --env MYSQL_ROOT_PASSWORD=${MYSQL_ROOT_PASSWORD} --env MYSQL_HOST=${MYSQL_HOST} --namespace ${APP_NAME} --tty -i --restart='Never' --image docker.io/bitnami/mysql:latest --command -- bash ``` -You can see that our importantdata db is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` +You can see that our `myImportantData` database is there with `echo "SHOW DATABASES;" | ${MYSQL_EXEC}` -Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}` +Then to drop we ran `echo "DROP DATABASE myImportantData;" | ${MYSQL_EXEC}` -And confirmed that this was gone with a few attempts to show our database. +And confirmed that this was gone with a few attempts to show our database.
We can also issue the `echo "select * from Accounts;" | ${MYSQL_EXEC}` to check the contents of the database and our important data is restored. ![](Images/Day88_Data17.png) -In the next post we take a look at Disaster Recovery within Kubernetes. +In the next post we take a look at Disaster Recovery within Kubernetes. -## Resources +## Resources - [Kanister Overview - An extensible open-source framework for app-lvl data management on Kubernetes](https://www.youtube.com/watch?v=wFD42Zpbfts) - [Application Level Data Operations on Kubernetes](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kanister-application-level-data-operations-on-kubernetes/) diff --git a/Days/day89.md b/Days/day89.md index 5d3e61d12..996449395 100644 --- a/Days/day89.md +++ b/Days/day89.md @@ -1,31 +1,32 @@ --- -title: '#90DaysOfDevOps - Disaster Recovery - Day 89' +title: "#90DaysOfDevOps - Disaster Recovery - Day 89" published: false description: 90DaysOfDevOps - Disaster Recovery -tags: 'devops, 90daysofdevops, learning' +tags: "devops, 90daysofdevops, learning" cover_image: null canonical_url: null id: 1048718 --- + ## Disaster Recovery -We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO). +We have mentioned already how different failure scenarios will warrant different recovery requirements. When it comes to Fire, Flood and Blood scenarios we can consider these mostly disaster situations where we might need our workloads up and running in a completely different location as fast as possible or at least with near-zero recovery time objectives (RTO). 
-This can only be achieved at scale when you automate the replication of the complete application stack to a standby environment. +This can only be achieved at scale when you automate the replication of the complete application stack to a standby environment. -This allows for fast failovers across cloud regions, cloud providers or between on-premises and cloud infrastructure. +This allows for fast failovers across cloud regions, cloud providers or between on-premises and cloud infrastructure. -Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using our minikube cluster that we deployed and configured a few sessions ago. +Keeping with the theme so far, we are going to concentrate on how this can be achieved using Kasten K10 using our minikube cluster that we deployed and configured a few sessions ago. -We will then create another minikube cluster with Kasten K10 also installed to act as our standby cluster which in theory could be any location. +We will then create another minikube cluster with Kasten K10 also installed to act as our standby cluster which in theory could be any location. Kasten K10 also has built in functionality to ensure if something was to happen to the Kubernetes cluster it is running on that the catalog data is replicated and available in a new one [K10 Disaster Recovery](https://docs.kasten.io/latest/operating/dr.html). -### Add object storage to K10 +### Add object storage to K10 -The first thing we need to do is add an object storage bucket as a target location for our backups to land. Not only does this act as an offsite location but we can also leverage this as our disaster recovery source data to recover from. +The first thing we need to do is add an object storage bucket as a target location for our backups to land. Not only does this act as an offsite location but we can also leverage this as our disaster recovery source data to recover from. 
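If you would rather script the bucket preparation than click through the AWS console, a sketch using the AWS CLI could look like the below. The bucket name is a placeholder, and the command is only printed, so you can review it before running it for real.

```shell
# Placeholder bucket name - substitute the bucket used for your location profile
BUCKET="90daysofdevops-kanister-demo"

# Empty the bucket so it is clean before we start the DR demo
CLEANUP="aws s3 rm s3://${BUCKET} --recursive"
echo "About to run: ${CLEANUP}"
# Uncomment to execute for real:
# ${CLEANUP}
```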
-I have cleaned out the S3 bucket that we created for the Kanister demo in the last session. +I have cleaned out the S3 bucket that we created for the Kanister demo in the last session. ![](Images/Day89_Data1.png) @@ -37,9 +38,9 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` ![](Images/Day87_Data4.png) -To authenticate with the dashboard, we now need the token which we can get with the following commands. +To authenticate with the dashboard, we now need the token which we can get with the following commands. -``` +```Shell TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) @@ -49,11 +50,11 @@ echo $TOKEN ![](Images/Day87_Data5.png) -Now we take this token and we input that into our browser, you will then be prompted for an email and company name. +Now we take this token and we input that into our browser; you will then be prompted for an email and company name. ![](Images/Day87_Data6.png) -Then we get access to the Kasten K10 dashboard. +Then we get access to the Kasten K10 dashboard. ![](Images/Day87_Data7.png) @@ -61,27 +62,27 @@ Now that we are back in the Kasten K10 dashboard we can add our location profile ![](Images/Day89_Data2.png) -You can see from the image below that we have choice when it comes to where this location profile is, we are going to select Amazon S3, and we are going to add our sensitive access credentials, region and bucket name. +You can see from the image below that we have a choice when it comes to where this location profile is; we are going to select Amazon S3, and we are going to add our sensitive access credentials, region and bucket name. ![](Images/Day89_Data3.png) -If we scroll down on the New Profile creation window you will see, we also have the ability to enable immutable backups which leverages the S3 Object Lock API. 
For this demo we won't be using that. +If we scroll down on the New Profile creation window you will see we also have the ability to enable immutable backups, which leverages the S3 Object Lock API. For this demo we won't be using that. ![](Images/Day89_Data4.png) -Hit "Save Profile" and you can now see our newly created or added location profile as per below. +Hit "Save Profile" and you can now see our newly created or added location profile as per below. ![](Images/Day89_Data5.png) ### Create a policy to protect Pac-Man app to object storage -In the previous session we created only an ad-hoc snapshot of our Pac-Man application, therefore we need to create a backup policy that will send our application backups to our newly created object storage location. +In the previous session we created only an ad-hoc snapshot of our Pac-Man application; therefore we need to create a backup policy that will send our application backups to our newly created object storage location. -If you head back to the dashboard and select the Policy card you will see a screen as per below. Select "Create New Policy". +If you head back to the dashboard and select the Policy card you will see a screen as per below. Select "Create New Policy". ![](Images/Day89_Data6.png) -First, we can give our policy a useful name and description. We can also define our backup frequency for demo purposes I am using on-demand. +First, we can give our policy a useful name and description. We can also define our backup frequency; for demo purposes I am using on-demand. ![](Images/Day89_Data7.png) @@ -89,36 +90,36 @@ Next, we want to enable backups via Snapshot exports meaning that we want to sen ![](Images/Day89_Data8.png) -Next, we select the application by either name or labels, I am going to choose by name and all resources. +Next, we select the application by either name or labels; I am going to choose by name and all resources. 
![](Images/Day89_Data9.png) -Under Advanced settings we are not going to be using any of these but based on our [walkthrough of Kanister yesterday](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Days/day88.md), we can leverage Kanister as part of Kasten K10 as well to take those application consistent copies of our data. +Under Advanced settings we are not going to be using any of these, but based on our [walkthrough of Kanister yesterday](https://github.com/MichaelCade/90DaysOfDevOps/blob/main/Days/day88.md), we can leverage Kanister as part of Kasten K10 as well to take those application-consistent copies of our data. ![](Images/Day89_Data10.png) -Finally select "Create Policy" and you will now see the policy in our Policy window. +Finally select "Create Policy" and you will now see the policy in our Policy window. ![](Images/Day89_Data11.png) -At the bottom of the created policy, you will have "Show import details" we need this string to be able to import into our standby cluster. Copy this somewhere safe for now. +At the bottom of the created policy, you will have "Show import details"; we need this string to be able to import into our standby cluster. Copy this somewhere safe for now. ![](Images/Day89_Data12.png) -Before we move on, we just need to select "run once" to get a backup sent our object storage bucket. +Before we move on, we just need to select "run once" to get a backup sent to our object storage bucket. ![](Images/Day89_Data13.png) -Below, the screenshot is just to show the successful backup and export of our data. +Below, the screenshot is just to show the successful backup and export of our data. ![](Images/Day89_Data14.png) - ### Create a new MiniKube cluster & deploy K10 -We then need to deploy a second Kubernetes cluster and where this could be any supported version of Kubernetes including OpenShift, for the purpose of education we will use the very free version of MiniKube with a different name. 
+We then need to deploy a second Kubernetes cluster, which could be any supported version of Kubernetes including OpenShift; for the purpose of education we will use the very free version of MiniKube with a different name. -Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster. +Using `minikube start --addons volumesnapshots,csi-hostpath-driver --apiserver-port=6443 --container-runtime=containerd -p standby --kubernetes-version=1.21.2` we can create our new cluster. ![](Images/Day89_Data15.png) @@ -126,11 +126,11 @@ We then can deploy Kasten K10 in this cluster using: `helm install k10 kasten/k10 --namespace=kasten-io --set auth.tokenAuth.enabled=true --set injectKanisterSidecar.enabled=true --set-string injectKanisterSidecar.namespaceSelector.matchLabels.k10/injectKanisterSidecar=true --create-namespace` -This will take a while but in the meantime, we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods getting to the running status. +This will take a while, but in the meantime we can use `kubectl get pods -n kasten-io -w` to watch the progress of our pods getting to the running status. -It is worth noting that because we are using MiniKube our application will just run when we run our import policy, our storageclass is the same on this standby cluster. However, something we will cover in the final session is about mobility and transformation. +It is worth noting that because we are using MiniKube our application will just run when we run our import policy, as our storageclass is the same on this standby cluster. However, something we will cover in the final session is mobility and transformation. -When the pods are up and running, we can follow the steps we went through on the previous steps in the other cluster. 
+When the pods are up and running, we can follow the steps we went through previously on the other cluster. Port forward to access the K10 dashboard, open a new terminal to run the below command @@ -140,9 +140,9 @@ The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/` ![](Images/Day87_Data4.png) -To authenticate with the dashboard, we now need the token which we can get with the following commands. +To authenticate with the dashboard, we now need the token which we can get with the following commands. -``` +```Shell TOKEN_NAME=$(kubectl get secret --namespace kasten-io|grep k10-k10-token | cut -d " " -f 1) TOKEN=$(kubectl get secret --namespace kasten-io $TOKEN_NAME -o jsonpath="{.data.token}" | base64 --decode) @@ -152,55 +152,55 @@ echo $TOKEN ![](Images/Day87_Data5.png) -Now we take this token and we input that into our browser, you will then be prompted for an email and company name. +Now we take this token and we input that into our browser; you will then be prompted for an email and company name. ![](Images/Day87_Data6.png) -Then we get access to the Kasten K10 dashboard. +Then we get access to the Kasten K10 dashboard. ![](Images/Day87_Data7.png) ### Import Pac-Man into the new MiniKube cluster -At this point we are now able to create an import policy in that standby cluster and connect to the object storage backups and determine what and how we want this to look. +At this point we are now able to create an import policy in that standby cluster, connect to the object storage backups and determine what and how we want this to look. -First, we add in our Location Profile that we walked through earlier on the other cluster, showing off dark mode here to show the difference between our production system and our DR standby location. +First, we add in our Location Profile that we walked through earlier on the other cluster, showing off dark mode here to show the difference between our production system and our DR standby location. 
![](Images/Day89_Data16.png) -Now we go back to the dashboard and into the policies tab to create a new policy. +Now we go back to the dashboard and into the policies tab to create a new policy. ![](Images/Day89_Data17.png) -Create the import policy as per the below image. When complete, we can create policy. There are options here to restore after import and some people might want this option, this will go and restore into our standby cluster on completion. We also have the ability to change the configuration of the application as it is restored and this is what I have documented in [Day 90](day90.md). +Create the import policy as per the below image. When complete, we can create the policy. There are options here to restore after import and some people might want this option; this will go and restore into our standby cluster on completion. We also have the ability to change the configuration of the application as it is restored, and this is what I have documented in [Day 90](day90.md). ![](Images/Day89_Data18.png) -I selected to import on demand, but you can obviously set a schedule on when you want this import to happen. Because of this I am going to run once. +I selected to import on demand, but you can obviously set a schedule on when you want this import to happen. Because of this I am going to run once. ![](Images/Day89_Data19.png) -You can see below the successful import policy job. +You can see below the successful import policy job. ![](Images/Day89_Data20.png) -If we now head back to the dashboard and into the Applications card, we can then select the drop down where you see below "Removed" you will see our application here. Select Restore +If we now head back to the dashboard and into the Applications card, we can then select the drop down where you see "Removed" below; you will see our application here. 
Select Restore ![](Images/Day89_Data21.png) -Here we can see the restore points we have available to us; this was the backup job that we ran on the primary cluster against our Pac-Man application. +Here we can see the restore points we have available to us; this was the backup job that we ran on the primary cluster against our Pac-Man application. ![](Images/Day89_Data22.png) -I am not going to change any of the defaults as I want to cover this in more detail in the next session. +I am not going to change any of the defaults as I want to cover this in more detail in the next session. ![](Images/Day89_Data23.png) -When you hit "Restore" it will prompt you with a confirmation. +When you hit "Restore" it will prompt you with a confirmation. ![](Images/Day89_Data24.png) -We can see below that we are in the standby cluster and if we check on our pods, we can see that we have our running application. +We can see below that we are in the standby cluster and if we check on our pods, we can see that we have our running application. ![](Images/Day89_Data25.png) @@ -208,9 +208,9 @@ We can then port forward (in real life/production environments, you would not ne ![](Images/Day89_Data26.png) -Next, we will take a look at Application mobility and transformation. +Next, we will take a look at Application mobility and transformation. 
-## Resources
+## Resources

- [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s)
- [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY)
diff --git a/Days/day90.md b/Days/day90.md
index 2bd9dac4c..36bcde6bb 100644
--- a/Days/day90.md
+++ b/Days/day90.md
@@ -1,37 +1,38 @@
---
-title: '#90DaysOfDevOps - Data & Application Mobility - Day 90'
+title: "#90DaysOfDevOps - Data & Application Mobility - Day 90"
published: false
description: 90DaysOfDevOps - Data & Application Mobility
-tags: 'devops, 90daysofdevops, learning'
+tags: "devops, 90daysofdevops, learning"
cover_image: null
canonical_url: null
id: 1048748
---
+
## Data & Application Mobility

-Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover mobility of our data and applications. I am specifically going to focus on Kubernetes but the requirement across platforms and between platforms is something that is an ever-growing requirement and is seen in the field.
+Day 90 of the #90DaysOfDevOps Challenge! In this final session I am going to cover the mobility of our data and applications. I am specifically going to focus on Kubernetes, but the need to move workloads across and between platforms is ever-growing and is something seen in the field.

-The use case being "I want to move my workload, application and data from one location to another" for many different reasons, could be cost, risk or to provide the business with a better service.
+The use case is "I want to move my workload, application and data from one location to another", and this can be for many different reasons: cost, risk, or to provide the business with a better service.

-In this session we are going to take our workload and we are going to look at moving a Kubernetes workload from one cluster to another, but in doing so we are going to change how our application is on the target location.
+In this session we are going to take our workload and look at moving a Kubernetes workload from one cluster to another, but in doing so we are going to change the configuration of our application in the target location.

It in fact uses a lot of the characteristics that we went through with [Disaster Recovery](day89.md)

### **The Requirement**

-Our current Kubernetes cluster cannot handle demand and our costs are rocketing through the roof, it is a business decision that we wish to move our production Kubernetes cluster to our Disaster Recovery location, located on a different public cloud which will provide the ability to expand but also at a cheaper rate. We could also take advantage of some of the native cloud services available in the target cloud.
+Our current Kubernetes cluster cannot handle demand and our costs are rocketing through the roof, so it is a business decision to move our production Kubernetes cluster to our Disaster Recovery location, located on a different public cloud, which will provide the ability to expand but also at a cheaper rate. We could also take advantage of some of the native cloud services available in the target cloud.

-Our current mission critical application (Pac-Man) has a database (MongoDB) and is running on slow storage, we would like to move to a newer faster storage tier.
+Our current mission-critical application (Pac-Man) has a database (MongoDB) and is running on slow storage; we would like to move to a newer, faster storage tier.

-The current Pac-Man (NodeJS) front-end is not scaling very well, and we would like to increase the number of available pods in the new location.
+The current Pac-Man (NodeJS) front-end is not scaling very well, and we would like to increase the number of available pods in the new location.

### Getting to IT

-We have our brief and in fact we have our imports already hitting the Disaster Recovery Kubernetes cluster.
+We have our brief, and in fact our imports are already hitting the Disaster Recovery Kubernetes cluster.

-The first job we need to do is remove the restore operation we carried out on Day 89 for the Disaster Recovery testing.
+The first job we need to do is remove the restore operation we carried out on Day 89 for the Disaster Recovery testing.

-We can do this using `kubectl delete ns pacman` on the "standby" minikube cluster.
+We can do this using `kubectl delete ns pacman` on the "standby" minikube cluster.

![](Images/Day90_Data1.png)

@@ -43,23 +44,23 @@ We then get a list of the available restore points. We will select the one that

![](Images/Day90_Data3.png)

-When we worked on the Disaster Recovery process, we left everything as default. However these additional restore options are there if you have a Disaster Recovery process that requires the transformation of your application. In this instance we have the requirement to change our storage and number of replicas.
+When we worked on the Disaster Recovery process, we left everything as default. However, these additional restore options are there if you have a Disaster Recovery process that requires the transformation of your application. In this instance we have the requirement to change our storage and number of replicas.

![](Images/Day90_Data4.png)

-Select the "Apply transforms to restored resources" option.
+Select the "Apply transforms to restored resources" option.

![](Images/Day90_Data5.png)

-It just so happens that the two built in examples for the transformation that we want to perform are what we need for our requirements.
+It just so happens that the two built-in examples for the transformation that we want to perform are what we need for our requirements.

![](Images/Day90_Data6.png)

-The first requirement is that on our primary cluster we were using a Storage Class called `csi-hostpath-sc` and in our new cluster we would like to use `standard` so we can make that change here.
+The first requirement is that on our primary cluster we were using a Storage Class called `csi-hostpath-sc`, and in our new cluster we would like to use `standard`, so we can make that change here.

![](Images/Day90_Data7.png)

-Looks good, hit the create transform button at the bottom.
+Looks good; hit the create transform button at the bottom.

![](Images/Day90_Data8.png)

@@ -67,7 +68,7 @@ The next requirement is that we would like to scale our Pac-Man frontend deploym

![](Images/Day90_Data9.png)

-If you are following along you should see both of our transforms as per below.
+If you are following along, you should see both of our transforms as per below.

![](Images/Day90_Data10.png)

@@ -75,25 +76,25 @@ You can now see from the below image that we are going to restore all of the art

![](Images/Day90_Data11.png)

-Again, we will be asked to confirm the actions.
+Again, we will be asked to confirm the actions.

![](Images/Day90_Data12.png)

-The final thing to show is now if we head back into the terminal and we take a look at our cluster, you can see we have 5 pods now for the pacman pods and our storageclass is now set to standard vs the csi-hostpath-sc
+Finally, if we head back into the terminal and take a look at our cluster, you can see we now have 5 Pac-Man pods, and our StorageClass is now set to `standard` instead of `csi-hostpath-sc`.

![](Images/Day90_Data13.png)

-There are many different options that can be achieved through transformation. This can span not only migration but also Disaster Recovery, test and development type scenarios and more.
+There are many different outcomes that can be achieved through transformation. This can span not only migration but also Disaster Recovery, test and development scenarios, and more.
-### API and Automation
+### API and Automation

-I have not spoken about the ability to leverage the API and to automate some of these tasks, but these options are present and throughout the UI there are breadcrumbs that provide the command sets to take advantage of the APIs for automation tasks.
+I have not spoken about the ability to leverage the API to automate some of these tasks, but these options are present, and throughout the UI there are breadcrumbs that provide the command sets needed to take advantage of the APIs for automation tasks.

-The important thing to note about Kasten K10 is that on deployment it is deployed inside the Kubernetes cluster and then can be called through the Kubernetes API.
+The important thing to note about Kasten K10 is that it is deployed inside the Kubernetes cluster and can then be called through the Kubernetes API.

-This then brings us to a close on the section around Storing and Protecting your data.
+This then brings us to a close on the section around Storing and Protecting your data.

-## Resources
+## Resources

- [Kubernetes Backup and Restore made easy!](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s)
- [Kubernetes Backups, Upgrades, Migrations - with Velero](https://www.youtube.com/watch?v=zybLTQER0yY)
@@ -103,23 +104,24 @@ This then brings us to a close on the section around Storing and Protecting your

### **Closing**

-As I wrap up this challenge, I want to continue to ask for feedback to make sure that the information is always relevant.
+As I wrap up this challenge, I want to continue to ask for feedback to make sure that the information is always relevant.

-I also appreciate there are a lot of topics that I was not able to cover or not able to dive deeper into around the topics of DevOps.
+I also appreciate that there are a lot of DevOps topics that I was not able to cover or dive deeper into.
-This means that we can always take another attempt that this challenge next year and find another 90 day's worth of content and walkthroughs to work through.
+This means that we can always take another attempt at this challenge next year and find another 90 days' worth of content and walkthroughs to work through.

-### What is next?
+### What is next?

-Firstly, a break from writing for a little while, I started this challenge on the 1st January 2022 and I have finished on the 31st March 2022 19:50 BST! It has been a slog. But as I say and have said for a long time, if this content helps one person, then it is always worth learning in public!
+Firstly, a break from writing for a little while. I started this challenge on the 1st of January 2022 and finished on the 31st of March 2022 at 19:50 BST! It has been a slog. But as I say and have said for a long time, if this content helps one person, then it is always worth learning in public!

-I have some ideas on where to take this next and hopefully it has a life outside of a GitHub repository and we can look at creating an eBook and possibly even a physical book.
+I have some ideas on where to take this next, and hopefully it will have a life outside of a GitHub repository; we can look at creating an eBook and possibly even a physical book.

-I also know that we need to revisit each post and make sure everything is grammatically correct before making anything like that happen. If anyone does know about how to take markdown to print or to an eBook it would be greatly appreciated feedback.
+I also know that we need to revisit each post and make sure everything is grammatically correct before making anything like that happen. If anyone knows how to take Markdown to print or to an eBook, feedback would be greatly appreciated.

-As always keep the issues and PRs coming.
+As always, keep the issues and PRs coming.

-Thanks!
+Thanks!
@MichaelCade1
+
- [GitHub](https://github.com/MichaelCade)
- [Twitter](https://twitter.com/MichaelCade1)
diff --git a/README.md b/README.md
index 6837ce378..d0d0856f3 100644
--- a/README.md
+++ b/README.md
@@ -6,17 +6,17 @@ English Version | [中文版本](zh_cn/README.md) | [繁體中文版本](zh_tw/README.md)| [日本語版](ja/README.md)

-This repository is used to document my journey on getting a better foundational knowledge of "DevOps". I will be starting this journey on the 1st January 2022 but the idea is that we take 90 days which just so happens to be January 1st to March 31st.
+This repository is used to document my journey of getting a better foundational knowledge of "DevOps". I will be starting this journey on the 1st of January 2022, and the idea is that we take 90 days, which just so happens to be January 1st to March 31st.

-The reason for documenting these days is so that others can take something from it and also hopefully enhance the resources.
+The reason for documenting these days is so that others can take something from it and also hopefully enhance the resources.

-The goal is to take 90 days, 1 hour each a day, to tackle over 13 areas of "DevOps" to a foundational knowledge.
+The goal is to take 90 days, 1 hour each day, to tackle over 13 areas of "DevOps" to a foundational level of knowledge.

-This will **not cover all things** "DevOps" but it will cover the areas that I feel will benefit my learning and understanding overall.
+This will **not cover all things** "DevOps", but it will cover the areas that I feel will benefit my learning and understanding overall.
The quickest way to get in touch is going to be via Twitter, my handle is [@MichaelCade1](https://twitter.com/MichaelCade1) -## Progress +## Progress - [✔️] ♾️ 1 > [Introduction](Days/day01.md) @@ -78,7 +78,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich - [✔️] 📚 40 > [Social Network for code](Days/day40.md) - [✔️] 📚 41 > [The Open Source Workflow](Days/day41.md) -### Containers +### Containers - [✔️] 🏗️ 42 > [The Big Picture: Containers](Days/day42.md) - [✔️] 🏗️ 43 > [What is Docker & Getting installed](Days/day43.md) @@ -91,7 +91,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich ### Kubernetes - [✔️] ☸ 49 > [The Big Picture: Kubernetes](Days/day49.md) -- [✔️] ☸ 50 > [Choosing your Kubernetes platform ](Days/day50.md) +- [✔️] ☸ 50 > [Choosing your Kubernetes platform](Days/day50.md) - [✔️] ☸ 51 > [Deploying your first Kubernetes Cluster](Days/day51.md) - [✔️] ☸ 52 > [Setting up a multinode Kubernetes Cluster](Days/day52.md) - [✔️] ☸ 53 > [Rancher Overview - Hands On](Days/day53.md) @@ -101,7 +101,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich ### Learn Infrastructure as Code - [✔️] 🤖 56 > [The Big Picture: IaC](Days/day56.md) -- [✔️] 🤖 57 > [An intro to Terraform ](Days/day57.md) +- [✔️] 🤖 57 > [An intro to Terraform](Days/day57.md) - [✔️] 🤖 58 > [HashiCorp Configuration Language (HCL)](Days/day58.md) - [✔️] 🤖 59 > [Create a VM with Terraform & Variables](Days/day59.md) - [✔️] 🤖 60 > [Docker Containers, Provisioners & Modules](Days/day60.md) @@ -118,7 +118,7 @@ The quickest way to get in touch is going to be via Twitter, my handle is [@Mich - [✔️] 📜 68 > [Tags, Variables, Inventory & Database Server config](Days/day68.md) - [✔️] 📜 69 > [All other things Ansible - Automation Controller, AWX, Vault](Days/day69.md) -### Create CI/CD Pipelines +### Create CI/CD Pipelines - [✔️] 🔄 70 > [The Big Picture: CI/CD Pipelines](Days/day70.md) - [✔️] 🔄 71 > [What is 
Jenkins?](Days/day71.md)
@@ -161,7 +161,6 @@ This work is licensed under a

[![Star History Chart](https://api.star-history.com/svg?repos=MichaelCade/90DaysOfDevOps&type=Timeline)](https://star-history.com/#MichaelCade/90DaysOfDevOps&Timeline)

-
[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/
[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png
[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg
diff --git a/Resources.md b/Resources.md
index 97ea427ac..0476084dc 100644
--- a/Resources.md
+++ b/Resources.md
@@ -1,787 +1,876 @@
-## Resources
-
+# Resources
Day-01
-- https://www.youtube.com/watch?v=Xrgk023l4lI DevOps in 5 Minutes
-- https://www.youtube.com/watch?v=_Gpe1Zn-1fE&t=43s What is DevOps? Easy Way
-- https://www.youtube.com/watch?v=7l_n97Mt0ko DevOps roadmap 2022 | Success Roadmap 2022
+
+- [https://www.youtube.com/watch?v=Xrgk023l4lI](https://www.youtube.com/watch?v=Xrgk023l4lI) DevOps in 5 Minutes
+- [https://www.youtube.com/watch?v=\_Gpe1Zn-1fE&t=43s](https://www.youtube.com/watch?v=_Gpe1Zn-1fE&t=43s) What is DevOps? Easy Way
+- [https://www.youtube.com/watch?v=7l_n97Mt0ko](https://www.youtube.com/watch?v=7l_n97Mt0ko) DevOps roadmap 2022 | Success Roadmap 2022

Day-02
-- https://www.youtube.com/watch?v=0yWAtQ6wYNM What is DevOps? - TechWorld with Nana
-- https://www.youtube.com/watch?v=kBV8gPVZNEE What is DevOps? - GitHub YouTube
-- https://www.youtube.com/watch?v=UbtB4sMaaNM What is DevOps? - IBM YouTube
-- https://aws.amazon.com/devops/what-is-devops/ What is DevOps? - AWS
-- https://docs.microsoft.com/en-us/devops/what-is-devops What is DevOps? - Microsoft
+
+- [https://www.youtube.com/watch?v=0yWAtQ6wYNM](https://www.youtube.com/watch?v=0yWAtQ6wYNM) What is DevOps? - TechWorld with Nana
+- [https://www.youtube.com/watch?v=kBV8gPVZNEE](https://www.youtube.com/watch?v=kBV8gPVZNEE) What is DevOps?
- GitHub YouTube +- [https://www.youtube.com/watch?v=UbtB4sMaaNM](https://www.youtube.com/watch?v=UbtB4sMaaNM) What is DevOps? - IBM YouTube +- [https://aws.amazon.com/devops/what-is-devops/](https://aws.amazon.com/devops/what-is-devops/) What is DevOps? - AWS +- [https://docs.microsoft.com/en-us/devops/what-is-devops](https://docs.microsoft.com/en-us/devops/what-is-devops) What is DevOps? - Microsoft Day-03 -- https://www.youtube.com/watch?v=UnjwVYAN7Ns Continuous Development I will also add that this is focused on manufacturing but the lean culture can be closely followed with DevOps. -- https://www.youtube.com/watch?v=RYQbmjLgubM Continuous Testing - IBM YouTube -- https://www.youtube.com/watch?v=1er2cjUq1UI Continuous Integration - IBM YouTube -- https://www.youtube.com/watch?v=Zu53QQuYqJ0 Continuous Monitoring -- https://www.notion.so/The-Remote-Flow-d90982e77a144f4f990c135f115f41c6 The Remote Flow -- https://www.finops.org/introduction/what-is-finops/ FinOps Foundation - What is FinOps -- https://www.amazon.co.uk/Phoenix-Project-DevOps-Helping-Business-ebook/dp/B00AZRBLHO NOT FREE The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win + +- [https://www.youtube.com/watch?v=UnjwVYAN7Ns](https://www.youtube.com/watch?v=UnjwVYAN7Ns) Continuous Development - I will also add that this is focused on manufacturing but the lean culture can be closely followed with DevOps. 
+- [https://www.youtube.com/watch?v=RYQbmjLgubM](https://www.youtube.com/watch?v=RYQbmjLgubM) Continuous Testing - IBM YouTube
+- [https://www.youtube.com/watch?v=1er2cjUq1UI](https://www.youtube.com/watch?v=1er2cjUq1UI) Continuous Integration - IBM YouTube
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) Continuous Monitoring
+- [https://www.notion.so/The-Remote-Flow-d90982e77a144f4f990c135f115f41c6](https://www.notion.so/The-Remote-Flow-d90982e77a144f4f990c135f115f41c6) The Remote Flow
+- [https://www.finops.org/introduction/what-is-finops/](https://www.finops.org/introduction/what-is-finops/) FinOps Foundation - What is FinOps
+- [https://www.amazon.co.uk/Phoenix-Project-DevOps-Helping-Business-ebook/dp/B00AZRBLHO](https://www.amazon.co.uk/Phoenix-Project-DevOps-Helping-Business-ebook/dp/B00AZRBLHO) **NOT FREE** The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win

Day-04
-- https://www.youtube.com/watch?v=2JymM0YoqGA DevOps for Developers – Day in the Life: DevOps Engineer in 2021
-- https://www.youtube.com/watch?v=udRNM7YRdY4 3 Things I wish I knew as a DevOps Engineer
-- https://www.youtube.com/watch?v=kDQMjAQNvY4 How to become a DEVOPS Engineer feat. Shawn Powers
+
+- [https://www.youtube.com/watch?v=2JymM0YoqGA](https://www.youtube.com/watch?v=2JymM0YoqGA) DevOps for Developers – Day in the Life: DevOps Engineer in 2021
+- [https://www.youtube.com/watch?v=udRNM7YRdY4](https://www.youtube.com/watch?v=udRNM7YRdY4) 3 Things I wish I knew as a DevOps Engineer
+- [https://www.youtube.com/watch?v=kDQMjAQNvY4](https://www.youtube.com/watch?v=kDQMjAQNvY4) How to become a DEVOPS Engineer feat. Shawn Powers

Day-05
-- https://www.youtube.com/watch?v=a0-uE3rOyeU DevOps for Developers – Software or DevOps Engineer?
-- https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?
-- https://www.youtube.com/watch?v=5pxbp6FyTfk How to become a DevOps Engineer in 2021 - DevOps Roadmap
+
+- [https://www.youtube.com/watch?v=a0-uE3rOyeU](https://www.youtube.com/watch?v=a0-uE3rOyeU) DevOps for Developers – Software or DevOps Engineer?
+- [https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) Techworld with Nana - DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?
+- [https://www.youtube.com/watch?v=5pxbp6FyTfk](https://www.youtube.com/watch?v=5pxbp6FyTfk) How to become a DevOps Engineer in 2021 - DevOps Roadmap

Day-06
-- https://www.youtube.com/watch?v=UTKIT6STSVM How Netflix Thinks of DevOps
-- https://www.upgrad.com/blog/devops-use-cases-applications/ 16 Popular DevOps Use Cases & Real Life Applications [2021]
-- https://www.youtube.com/watch?v=ZzLa0YEbGIY DevOps: The Amazon Story
-- https://www.networkworld.com/article/2886672/how-etsy-makes-devops-work.html How Etsy makes DevOps work
-- https://www.youtube.com/watch?v=gm18-gcgXRY Adopting DevOps @ Scale Lessons learned at Hertz, Kaiser Permanente and lBM
-- https://www.usenix.org/conference/lisa16/technical-sessions/presentation/isla Interplanetary DevOps at NASA JPL
-- https://enterprisersproject.com/article/2017/1/target-cio-explains-how-devops-took-root-inside-retail-giant Target CIO explains how DevOps took root inside the retail giant
+
+- [https://www.youtube.com/watch?v=UTKIT6STSVM](https://www.youtube.com/watch?v=UTKIT6STSVM) How Netflix Thinks of DevOps
+- [https://www.upgrad.com/blog/devops-use-cases-applications/](https://www.upgrad.com/blog/devops-use-cases-applications/) 16 Popular DevOps Use Cases & Real Life Applications [2021]
+- [https://www.youtube.com/watch?v=ZzLa0YEbGIY](https://www.youtube.com/watch?v=ZzLa0YEbGIY) DevOps: The Amazon Story
+- [https://www.networkworld.com/article/2886672/how-etsy-makes-devops-work.html](https://www.networkworld.com/article/2886672/how-etsy-makes-devops-work.html) How Etsy
makes DevOps work
+- [https://www.youtube.com/watch?v=gm18-gcgXRY](https://www.youtube.com/watch?v=gm18-gcgXRY) Adopting DevOps @ Scale Lessons learned at Hertz, Kaiser Permanente and IBM
+- [https://www.usenix.org/conference/lisa16/technical-sessions/presentation/isla](https://www.usenix.org/conference/lisa16/technical-sessions/presentation/isla) Interplanetary DevOps at NASA JPL
+- [https://enterprisersproject.com/article/2017/1/target-cio-explains-how-devops-took-root-inside-retail-giant](https://enterprisersproject.com/article/2017/1/target-cio-explains-how-devops-took-root-inside-retail-giant) Target CIO explains how DevOps took root inside the retail giant

Day-07
-- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey
-- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn
-- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes
-- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins
-- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins
-- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners
-- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist
+
+- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey
+- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn
+- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes
+- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins
+-
[https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins
+- [https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners
+- [https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist

Day-08
-- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey
-- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn
-- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes
-- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins
-- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins
-- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners
-- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist
+
+- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey
+- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn
+- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes
+- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins
+- [https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE**
Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins +- [https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners +- [https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist Day-09 -- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey -- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn -- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes -- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins -- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins -- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners -- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist + +- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey +- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn +- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes +- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins +- [https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins +- 
[https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners +- [https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist Day-10 -- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey -- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn -- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes -- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins -- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins -- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners -- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist + +- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey +- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn +- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes +- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins +- [https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins +- [https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) FreeCodeCamp - Learn Go 
Programming - Golang Tutorial for Beginners +- [https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist Day-11 -- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey -- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn -- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes -- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins -- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins -- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners -- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist + +- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey +- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn +- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes +- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins +- [https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins +- [https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners +- 
[https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist Day-12 -- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey -- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn -- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes -- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins -- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins -- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners -- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist + +- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey +- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn +- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes +- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins +- [https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins +- [https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners +- 
[https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist Day-13 -- https://insights.stackoverflow.com/survey/2021 StackOverflow 2021 Developer Survey -- https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s Why we are choosing Golang to learn -- https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s Jake Wright - Learn Go in 12 minutes -- https://www.youtube.com/watch?v=yyUHQIec83I Techworld with Nana - Golang full course - 3 hours 24 mins -- https://www.pluralsight.com/courses/go-fundamentals NOT FREE Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins -- https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners -- https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N Hitesh Choudhary - Complete playlist -- https://github.com/bregman-arie/devops-exercises -- https://gobyexample.com/ GoByExample - Example based learning -- https://go.dev/tour/list go.dev/tour/list -- https://go.dev/learn/ go.dev/learn + +- [https://insights.stackoverflow.com/survey/2021](https://insights.stackoverflow.com/survey/2021) StackOverflow 2021 Developer Survey +- [https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s](https://www.youtube.com/watch?v=7pLqIIAqZD4&t=9s) Why we are choosing Golang to learn +- [https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s](https://www.youtube.com/watch?v=C8LgvuEBraI&t=312s) Jake Wright - Learn Go in 12 minutes +- [https://www.youtube.com/watch?v=yyUHQIec83I](https://www.youtube.com/watch?v=yyUHQIec83I) Techworld with Nana - Golang full course - 3 hours 24 mins +- [https://www.pluralsight.com/courses/go-fundamentals](https://www.pluralsight.com/courses/go-fundamentals) **NOT FREE** Nigel Poulton Pluralsight - Go Fundamentals - 3 hours 26 mins +- [https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s](https://www.youtube.com/watch?v=YS4e4q9oBaU&t=1025s) 
FreeCodeCamp - Learn Go Programming - Golang Tutorial for Beginners
+- [https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N](https://www.youtube.com/playlist?list=PLRAV69dS1uWSR89FRQGZ6q9BR2b44Tr9N) Hitesh Choudhary - Complete playlist
+- [https://github.com/bregman-arie/devops-exercises](https://github.com/bregman-arie/devops-exercises)
+- [https://gobyexample.com/](https://gobyexample.com/) GoByExample - Example-based learning
+- [https://go.dev/tour/list](https://go.dev/tour/list) go.dev/tour/list
+- [https://go.dev/learn/](https://go.dev/learn/) go.dev/learn

Day-14
-- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1
-- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need be a hacker!)
+
+- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1
+- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!)

Day-15
-- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1
-- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need be a hacker!)
+
+- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1
+- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!)

Day-16
-- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1
-- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need to be a hacker!)
+ +- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1 +- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!) Day-17 -- https://www.youtube.com/watch?v=-txKSRn0qeA Vim in 100 Seconds -- https://www.youtube.com/watch?v=IiwGbcd8S7I Vim tutorial -- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1 -- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need to be a hacker!) + +- [https://www.youtube.com/watch?v=-txKSRn0qeA](https://www.youtube.com/watch?v=-txKSRn0qeA) Vim in 100 Seconds +- [https://www.youtube.com/watch?v=IiwGbcd8S7I](https://www.youtube.com/watch?v=IiwGbcd8S7I) Vim tutorial +- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1 +- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!) Day-18 -- https://remmina.org/ Client SSH GUI - Remmina -- https://www.youtube.com/watch?v=2QXkrLVsRmk The Beginner's guide to SSH -- https://www.youtube.com/watch?v=-txKSRn0qeA Vim in 100 Seconds -- https://www.youtube.com/watch?v=IiwGbcd8S7I Vim tutorial -- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1 -- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need to be a hacker!) 
+ +- [https://remmina.org/](https://remmina.org/) Client SSH GUI - Remmina +- [https://www.youtube.com/watch?v=2QXkrLVsRmk](https://www.youtube.com/watch?v=2QXkrLVsRmk) The Beginner's guide to SSH +- [https://www.youtube.com/watch?v=-txKSRn0qeA](https://www.youtube.com/watch?v=-txKSRn0qeA) Vim in 100 Seconds +- [https://www.youtube.com/watch?v=IiwGbcd8S7I](https://www.youtube.com/watch?v=IiwGbcd8S7I) Vim tutorial +- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1 +- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!) Day-19 -- https://www.youtube.com/watch?v=I4EWvMFj37g Bash in 100 seconds -- https://www.youtube.com/watch?v=TPRSJbtfK4M Bash script with practical examples - Full Course -- https://remmina.org/ Client SSH GUI - Remmina -- https://www.youtube.com/watch?v=2QXkrLVsRmk The Beginner's guide to SSH -- https://www.youtube.com/watch?v=-txKSRn0qeA Vim in 100 Seconds -- https://www.youtube.com/watch?v=IiwGbcd8S7I Vim tutorial -- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1 -- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need to be a hacker!) 
+
+- [https://www.youtube.com/watch?v=I4EWvMFj37g](https://www.youtube.com/watch?v=I4EWvMFj37g) Bash in 100 Seconds
+- [https://www.youtube.com/watch?v=TPRSJbtfK4M](https://www.youtube.com/watch?v=TPRSJbtfK4M) Bash script with practical examples - Full Course
+- [https://remmina.org/](https://remmina.org/) Client SSH GUI - Remmina
+- [https://www.youtube.com/watch?v=2QXkrLVsRmk](https://www.youtube.com/watch?v=2QXkrLVsRmk) The Beginner's guide to SSH
+- [https://www.youtube.com/watch?v=-txKSRn0qeA](https://www.youtube.com/watch?v=-txKSRn0qeA) Vim in 100 Seconds
+- [https://www.youtube.com/watch?v=IiwGbcd8S7I](https://www.youtube.com/watch?v=IiwGbcd8S7I) Vim tutorial
+- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1
+- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!)

Day-20
-- https://www.youtube.com/watch?v=I4EWvMFj37g Bash in 100 seconds
-- https://www.youtube.com/watch?v=TPRSJbtfK4M Bash script with practical examples - Full Course
-- https://remmina.org/ Client SSH GUI - Remmina
-- https://www.youtube.com/watch?v=2QXkrLVsRmk The Beginner's guide to SSH
-- https://www.youtube.com/watch?v=-txKSRn0qeA Vim in 100 Seconds
-- https://www.youtube.com/watch?v=IiwGbcd8S7I Vim tutorial
-- https://www.youtube.com/watch?v=kPylihJRG70 Learn the Linux Fundamentals - Part 1
-- https://www.youtube.com/watch?v=VbEx7B_PTOE Linux for hackers (don't worry you don't need to be a hacker!)
+
+- [https://www.youtube.com/watch?v=I4EWvMFj37g](https://www.youtube.com/watch?v=I4EWvMFj37g) Bash in 100 Seconds
+- [https://www.youtube.com/watch?v=TPRSJbtfK4M](https://www.youtube.com/watch?v=TPRSJbtfK4M) Bash script with practical examples - Full Course
+- [https://remmina.org/](https://remmina.org/) Client SSH GUI - Remmina
+- [https://www.youtube.com/watch?v=2QXkrLVsRmk](https://www.youtube.com/watch?v=2QXkrLVsRmk) The Beginner's guide to SSH
+- [https://www.youtube.com/watch?v=-txKSRn0qeA](https://www.youtube.com/watch?v=-txKSRn0qeA) Vim in 100 Seconds
+- [https://www.youtube.com/watch?v=IiwGbcd8S7I](https://www.youtube.com/watch?v=IiwGbcd8S7I) Vim tutorial
+- [https://www.youtube.com/watch?v=kPylihJRG70](https://www.youtube.com/watch?v=kPylihJRG70) Learn the Linux Fundamentals - Part 1
+- [https://www.youtube.com/watch?v=VbEx7B_PTOE](https://www.youtube.com/watch?v=VbEx7B_PTOE) Linux for hackers (don't worry you don't need to be a hacker!)

Day-21
-- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course
+
+- [https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course

Day-22
-- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course
-- http://www.practicalnetworking.net/ Practical Networking
+
+- [https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course
+- [https://www.practicalnetworking.net/](https://www.practicalnetworking.net/) Practical Networking

Day-23
-- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course
-- http://www.practicalnetworking.net/ Practical Networking
+
+- [https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course
+- [https://www.practicalnetworking.net/](https://www.practicalnetworking.net/) Practical Networking

Day-24
-- 
https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s 3 Necessary Skills for Network Automation -- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course -- http://www.practicalnetworking.net/ Practical Networking -- https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126 Python Network Automation + +- [https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s) 3 Necessary Skills for Network Automation +- [https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course +- [https://www.practicalnetworking.net/](https://www.practicalnetworking.net/) Practical Networking +- [https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) Python Network Automation Day-25 -- https://www.gns3.com/software/download-vm GNS3 VM -- https://www.eve-ng.net/ Eve-ng -- https://unimus.net/ Unimus Not a lab environment but an interesting concept. -- https://www.youtube.com/watch?v=g6B0f_E0NMg Free Course: Introduction to EVE-NG -- https://www.youtube.com/watch?v=9dPWARirtK8 EVE-NG - Creating your first lab -- https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s 3 Necessary Skills for Network Automation -- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course -- http://www.practicalnetworking.net/ Practical Networking -- https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126 Python Network Automation + +- [https://www.gns3.com/software/download-vm](https://www.gns3.com/software/download-vm) GNS3 VM +- [https://www.eve-ng.net/](https://www.eve-ng.net/) Eve-ng +- [https://unimus.net/](https://unimus.net/) Unimus - Not a lab environment but an interesting concept. 
+- [https://www.youtube.com/watch?v=g6B0f_E0NMg](https://www.youtube.com/watch?v=g6B0f_E0NMg) Free Course: Introduction to EVE-NG +- [https://www.youtube.com/watch?v=9dPWARirtK8](https://www.youtube.com/watch?v=9dPWARirtK8) EVE-NG - Creating your first lab +- [https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s) 3 Necessary Skills for Network Automation +- [https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course +- [https://www.practicalnetworking.net/](https://www.practicalnetworking.net/) Practical Networking +- [https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) Python Network Automation Day-26 -- https://www.youtube.com/watch?v=g6B0f_E0NMg Free Course: Introduction to EVE-NG -- https://www.youtube.com/watch?v=9dPWARirtK8 EVE-NG - Creating your first lab -- https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s 3 Necessary Skills for Network Automation -- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course -- http://www.practicalnetworking.net/ Practical Networking -- https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126 Python Network Automation -- https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512 Hands-On Enterprise Automation with Python (Book) + +- [https://www.youtube.com/watch?v=g6B0f_E0NMg](https://www.youtube.com/watch?v=g6B0f_E0NMg) Free Course: Introduction to EVE-NG +- [https://www.youtube.com/watch?v=9dPWARirtK8](https://www.youtube.com/watch?v=9dPWARirtK8) EVE-NG - Creating your first lab +- [https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s) 3 Necessary Skills for Network Automation +- 
[https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course +- [https://www.practicalnetworking.net/](https://www.practicalnetworking.net/) Practical Networking +- [https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) Python Network Automation +- [https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512) Hands-On Enterprise Automation with Python (Book) Day-27 -- https://www.youtube.com/watch?v=g6B0f_E0NMg Free Course: Introduction to EVE-NG -- https://www.youtube.com/watch?v=9dPWARirtK8 EVE-NG - Creating your first lab -- https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s 3 Necessary Skills for Network Automation -- https://www.youtube.com/watch?v=IPvYjXCsTg8 Computer Networking full course -- http://www.practicalnetworking.net/ Practical Networking -- https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126 Python Network Automation -- https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512 Hands-On Enterprise Automation with Python (Book) + +- [https://www.youtube.com/watch?v=g6B0f_E0NMg](https://www.youtube.com/watch?v=g6B0f_E0NMg) Free Course: Introduction to EVE-NG +- [https://www.youtube.com/watch?v=9dPWARirtK8](https://www.youtube.com/watch?v=9dPWARirtK8) EVE-NG - Creating your first lab +- [https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s](https://www.youtube.com/watch?v=KhiJ7Fu9kKA&list=WL&index=122&t=89s) 3 Necessary Skills for Network Automation +- [https://www.youtube.com/watch?v=IPvYjXCsTg8](https://www.youtube.com/watch?v=IPvYjXCsTg8) Computer Networking full course +- [https://www.practicalnetworking.net/](https://www.practicalnetworking.net/) Practical Networking +- 
[https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126](https://www.youtube.com/watch?v=xKPzLplPECU&list=WL&index=126) Python Network Automation +- [https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512](https://www.packtpub.com/product/hands-on-enterprise-automation-with-python/9781788998512) Hands-On Enterprise Automation with Python (Book) Day-28 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- [https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- [https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-29 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- [https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- 
[https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-30 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- [https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- [https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-31 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- 
[https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- [https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-32 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- [https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- [https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-33 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- 
https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- [https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- [https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-34 -- https://www.youtube.com/watch?v=qkj5W98Xdvw Hybrid Cloud and MultiCloud -- https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s Microsoft Azure Fundamentals -- https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s Google Cloud Digital Leader Certification Course -- https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s AWS Basics for Beginners - Full Course + +- [https://www.youtube.com/watch?v=qkj5W98Xdvw](https://www.youtube.com/watch?v=qkj5W98Xdvw) Hybrid Cloud and MultiCloud +- [https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s](https://www.youtube.com/watch?v=NKEFWyqJ5XA&list=WL&index=130&t=12s) Microsoft Azure Fundamentals +- [https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s](https://www.youtube.com/watch?v=UGRDM86MBIQ&list=WL&index=131&t=10s) Google Cloud Digital Leader Certification Course +- [https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s](https://www.youtube.com/watch?v=ulprqHHWlng&t=5352s) AWS Basics for Beginners - Full Course Day-35 -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? 
-- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial + +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? +- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial Day-36 -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? -- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial + +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? 
+- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial Day-37 -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? -- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial -- https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet Git cheatsheet + +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? 
+- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial +- [https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) Git cheatsheet Day-38 -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? -- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial -- https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet Git cheatsheet + +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? 
+- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial +- [https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) Git cheatsheet Day-39 -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? -- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial -- https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet Git cheatsheet -- https://veducate.co.uk/exploring-the-git-command-line/ Exploring the Git command line – A getting started guide + +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? 
+- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial +- [https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) Git cheatsheet +- [https://veducate.co.uk/exploring-the-git-command-line/](https://veducate.co.uk/exploring-the-git-command-line/) Exploring the Git command line – A getting started guide Day-40 -- https://www.youtube.com/watch?v=8aV5AxJrHDg Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners -- https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5 BitBucket Tutorials Playlist -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? 
-- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial -- https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet Git cheatsheet + +- [https://www.youtube.com/watch?v=8aV5AxJrHDg](https://www.youtube.com/watch?v=8aV5AxJrHDg) Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners +- [https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5](https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5) BitBucket Tutorials Playlist +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? +- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial +- [https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) Git cheatsheet Day-41 -- https://www.youtube.com/watch?v=8aV5AxJrHDg Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners -- https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5 
BitBucket Tutorials Playlist -- https://www.youtube.com/watch?v=Yc8sCSeMhi4 What is Version Control? -- https://www.youtube.com/watch?v=kr62e_n6QuQ Types of Version Control System -- https://www.youtube.com/watch?v=8JJ101D3knE&t=52s Git Tutorial for Beginners -- https://www.youtube.com/watch?v=Uszj_k0DGsg Git for Professionals Tutorial -- https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s Git and GitHub for Beginners - Crash Course -- https://www.youtube.com/watch?v=apGV9Kg7ics Complete Git and GitHub Tutorial -- https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet Git cheatsheet + +- [https://www.youtube.com/watch?v=8aV5AxJrHDg](https://www.youtube.com/watch?v=8aV5AxJrHDg) Learn GitLab in 3 Hours | GitLab Complete Tutorial For Beginners +- [https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5](https://www.youtube.com/watch?v=OMLh-5O6Ub8&list=PLaD4FvsFdarSyyGl3ooAm-ZyAllgw_AM5) BitBucket Tutorials Playlist +- [https://www.youtube.com/watch?v=Yc8sCSeMhi4](https://www.youtube.com/watch?v=Yc8sCSeMhi4) What is Version Control? 
+- [https://www.youtube.com/watch?v=kr62e_n6QuQ](https://www.youtube.com/watch?v=kr62e_n6QuQ) Types of Version Control System +- [https://www.youtube.com/watch?v=8JJ101D3knE&t=52s](https://www.youtube.com/watch?v=8JJ101D3knE&t=52s) Git Tutorial for Beginners +- [https://www.youtube.com/watch?v=Uszj_k0DGsg](https://www.youtube.com/watch?v=Uszj_k0DGsg) Git for Professionals Tutorial +- [https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s](https://www.youtube.com/watch?v=RGOj5yH7evk&t=8s) Git and GitHub for Beginners - Crash Course +- [https://www.youtube.com/watch?v=apGV9Kg7ics](https://www.youtube.com/watch?v=apGV9Kg7ics) Complete Git and GitHub Tutorial +- [https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet](https://www.atlassian.com/git/tutorials/atlassian-git-cheatsheet) Git cheatsheet Day-42 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? Introduction to Containers + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? 
Introduction to Containers Day-43 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? Introduction to Containers -- https://www.youtube.com/watch?v=5RQbdMn04Oc WSL 2 with Docker getting started + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? Introduction to Containers +- [https://www.youtube.com/watch?v=5RQbdMn04Oc](https://www.youtube.com/watch?v=5RQbdMn04Oc) WSL 2 with Docker getting started Day-44 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? 
Introduction to Containers -- https://www.youtube.com/watch?v=5RQbdMn04Oc WSL 2 with Docker getting started + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? Introduction to Containers +- [https://www.youtube.com/watch?v=5RQbdMn04Oc](https://www.youtube.com/watch?v=5RQbdMn04Oc) WSL 2 with Docker getting started Day-45 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? Introduction to Containers -- https://www.youtube.com/watch?v=5RQbdMn04Oc WSL 2 with Docker getting started -- https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/ Blog on gettng started building a docker image -- https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ Docker documentation for building an image + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? 
Introduction to Containers +- [https://www.youtube.com/watch?v=5RQbdMn04Oc](https://www.youtube.com/watch?v=5RQbdMn04Oc) WSL 2 with Docker getting started +- [https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/) Blog on getting started building a docker image +- [https://docs.docker.com/develop/develop-images/dockerfile_best-practices/](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) Docker documentation for building an image Day-46 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? Introduction to Containers -- https://www.youtube.com/watch?v=5RQbdMn04Oc WSL 2 with Docker getting started -- https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/ Blog on gettng started building a docker image -- https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ Docker documentation for building an image -- https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started YAML Tutorial: Everything You Need to Get Started in Minute + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? 
Introduction to Containers +- [https://www.youtube.com/watch?v=5RQbdMn04Oc](https://www.youtube.com/watch?v=5RQbdMn04Oc) WSL 2 with Docker getting started +- [https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/) Blog on getting started building a docker image +- [https://docs.docker.com/develop/develop-images/dockerfile_best-practices/](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) Docker documentation for building an image +- [https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started) YAML Tutorial: Everything You Need to Get Started in Minutes Day-47 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? 
Introduction to Containers -- https://www.youtube.com/watch?v=5RQbdMn04Oc WSL 2 with Docker getting started -- https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/ Blog on gettng started building a docker image -- https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ Docker documentation for building an image -- https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started YAML Tutorial: Everything You Need to Get Started in Minute + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? Introduction to Containers +- [https://www.youtube.com/watch?v=5RQbdMn04Oc](https://www.youtube.com/watch?v=5RQbdMn04Oc) WSL 2 with Docker getting started +- [https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/) Blog on getting started building a docker image +- [https://docs.docker.com/develop/develop-images/dockerfile_best-practices/](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) Docker documentation for building an image +- [https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started) YAML Tutorial: Everything You Need to Get Started in Minutes Day-48 -- https://www.youtube.com/watch?v=3c-iBn73dDE TechWorld with Nana - Docker Tutorial for Beginners -- https://www.youtube.com/watch?v=pTFZFxd4hOI Programming with Mosh - Docker Tutorial for Beginners -- 
https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s Docker Tutorial for Beginners - What is Docker? Introduction to Containers -- https://www.youtube.com/watch?v=5RQbdMn04Oc WSL 2 with Docker getting started -- https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/ Blog on gettng started building a docker image -- https://docs.docker.com/develop/develop-images/dockerfile_best-practices/ Docker documentation for building an image -- https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started YAML Tutorial: Everything You Need to Get Started in Minute -- https://www.youtube.com/watch?v=Za2BqzeZjBk Podman | Daemonless Docker | Getting Started with Podman -- https://www.youtube.com/watch?v=cqOtksmsxfg LXC - Guide to building a LXC Lab + +- [https://www.youtube.com/watch?v=3c-iBn73dDE](https://www.youtube.com/watch?v=3c-iBn73dDE) TechWorld with Nana - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=pTFZFxd4hOI](https://www.youtube.com/watch?v=pTFZFxd4hOI) Programming with Mosh - Docker Tutorial for Beginners +- [https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s](https://www.youtube.com/watch?v=17Bl31rlnRM&list=WL&index=128&t=61s) Docker Tutorial for Beginners - What is Docker? 
Introduction to Containers +- [https://www.youtube.com/watch?v=5RQbdMn04Oc](https://www.youtube.com/watch?v=5RQbdMn04Oc) WSL 2 with Docker getting started +- [https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/](https://stackify.com/docker-build-a-beginners-guide-to-building-docker-images/) Blog on getting started building a docker image +- [https://docs.docker.com/develop/develop-images/dockerfile_best-practices/](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) Docker documentation for building an image +- [https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started](https://www.cloudbees.com/blog/yaml-tutorial-everything-you-need-get-started) YAML Tutorial: Everything You Need to Get Started in Minutes +- [https://www.youtube.com/watch?v=Za2BqzeZjBk](https://www.youtube.com/watch?v=Za2BqzeZjBk) Podman | Daemonless Docker | Getting Started with Podman +- [https://www.youtube.com/watch?v=cqOtksmsxfg](https://www.youtube.com/watch?v=cqOtksmsxfg) LXC - Guide to building a LXC Lab Day-49 -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
+ +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! Day-50 -- https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1 Kubernetes playground – How to choose your platform -- https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2 Kubernetes playground – Setting up your cluster -- https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) -- https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks Getting started with Microsoft Azure Kubernetes Service (AKS) -- https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition Getting Started with Microsoft AKS – Azure PowerShell Edition -- https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke Getting started with Google Kubernetes Service (GKE) -- https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks Kubernetes, How to – AWS Bottlerocket + Amazon EKS -- https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud Getting started with CIVO Cloud -- https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone Minikube - Kubernetes Demo Environment For Everyone -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- 
https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! + +- [https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1) Kubernetes playground – How to choose your platform +- [https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2) Kubernetes playground – Setting up your cluster +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks](https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks) Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks](https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks) Getting started with Microsoft Azure Kubernetes Service (AKS) +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition](https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition) Getting Started with Microsoft AKS – Azure PowerShell Edition +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke](https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke) Getting started with Google Kubernetes Service (GKE) +- [https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks](https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks) Kubernetes, 
 How to – AWS Bottlerocket + Amazon EKS +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud) Getting started with CIVO Cloud +- [https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone) Minikube - Kubernetes Demo Environment For Everyone +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
Day-51 -- https://kubernetes.io/docs/tasks/tools/install-kubectl-linux Linux -- https://kubernetes.io/docs/tasks/tools/install-kubectl-macos macOS -- https://kubernetes.io/docs/tasks/tools/install-kubectl-windows Windows -- https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1 Kubernetes playground – How to choose your platform -- https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2 Kubernetes playground – Setting up your cluster -- https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) -- https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks Getting started with Microsoft Azure Kubernetes Service (AKS) -- https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition Getting Started with Microsoft AKS – Azure PowerShell Edition -- https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke Getting started with Google Kubernetes Service (GKE) -- https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks Kubernetes, How to – AWS Bottlerocket + Amazon EKS -- https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud Getting started with CIVO Cloud -- https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone Minikube - Kubernetes Demo Environment For Everyone -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
+ +- [https://kubernetes.io/docs/tasks/tools/install-kubectl-linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux) Linux +- [https://kubernetes.io/docs/tasks/tools/install-kubectl-macos](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos) macOS +- [https://kubernetes.io/docs/tasks/tools/install-kubectl-windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows) Windows +- [https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1) Kubernetes playground – How to choose your platform +- [https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2) Kubernetes playground – Setting up your cluster +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks](https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks) Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks](https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks) Getting started with Microsoft Azure Kubernetes Service (AKS) +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition](https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition) Getting Started with Microsoft AKS – Azure PowerShell Edition +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke](https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke) Getting started with Google Kubernetes Service (GKE) +- 
[https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks](https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks) Kubernetes, How to – AWS Bottlerocket + Amazon EKS +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud) Getting started with CIVO Cloud +- [https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone) Minikube - Kubernetes Demo Environment For Everyone +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
Day-52 -- https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1 Kubernetes playground – How to choose your platform -- https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2 Kubernetes playground – Setting up your cluster -- https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) -- https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks Getting started with Microsoft Azure Kubernetes Service (AKS) -- https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition Getting Started with Microsoft AKS – Azure PowerShell Edition -- https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke Getting started with Google Kubernetes Service (GKE) -- https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks Kubernetes, How to – AWS Bottlerocket + Amazon EKS -- https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud Getting started with CIVO Cloud -- https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone Minikube - Kubernetes Demo Environment For Everyone -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
+ +- [https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-1) Kubernetes playground – How to choose your platform +- [https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2](https://vzilla.co.uk/vzilla-blog/building-the-home-lab-kubernetes-playground-part-2) Kubernetes playground – Setting up your cluster +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks](https://vzilla.co.uk/vzilla-blog/getting-started-with-amazon-elastic-kubernetes-service-amazon-eks) Getting started with Amazon Elastic Kubernetes Service (Amazon EKS) +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks](https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-azure-kubernetes-service-aks) Getting started with Microsoft Azure Kubernetes Service (AKS) +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition](https://vzilla.co.uk/vzilla-blog/getting-started-with-microsoft-aks-azure-powershell-edition) Getting Started with Microsoft AKS – Azure PowerShell Edition +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke](https://vzilla.co.uk/vzilla-blog/getting-started-with-google-kubernetes-service-gke) Getting started with Google Kubernetes Service (GKE) +- [https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks](https://vzilla.co.uk/vzilla-blog/kubernetes-how-to-aws-bottlerocket-amazon-eks) Kubernetes, How to – AWS Bottlerocket + Amazon EKS +- [https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud](https://vzilla.co.uk/vzilla-blog/getting-started-with-civo-cloud) Getting started with CIVO Cloud +- [https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone](https://vzilla.co.uk/vzilla-blog/project_pace-kasten-k10-demo-environment-for-everyone) 
Minikube - Kubernetes Demo Environment For Everyone +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! Day-53 -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! + +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
Day-54 -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! + +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! Day-55 -- https://www.youtube.com/watch?v=pPQKAR1pA9U Kubernetes StatefulSet simply explained -- https://www.youtube.com/watch?v=0swOh5C3OVM Kubernetes Volumes explained -- https://www.youtube.com/watch?v=80Ew_fsV4rM Kubernetes Ingress Tutorial for Beginners -- https://kubernetes.io/docs/home/ Kubernetes Documentation -- https://www.youtube.com/watch?v=X48VuDVv0do TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] -- https://www.youtube.com/watch?v=s_o8dwzRlu4 TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners -- https://www.youtube.com/watch?v=KVBON1lA9N8 Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! 
+ +- [https://www.youtube.com/watch?v=pPQKAR1pA9U](https://www.youtube.com/watch?v=pPQKAR1pA9U) Kubernetes StatefulSet simply explained +- [https://www.youtube.com/watch?v=0swOh5C3OVM](https://www.youtube.com/watch?v=0swOh5C3OVM) Kubernetes Volumes explained +- [https://www.youtube.com/watch?v=80Ew_fsV4rM](https://www.youtube.com/watch?v=80Ew_fsV4rM) Kubernetes Ingress Tutorial for Beginners +- [https://kubernetes.io/docs/home/](https://kubernetes.io/docs/home/) Kubernetes Documentation +- [https://www.youtube.com/watch?v=X48VuDVv0do](https://www.youtube.com/watch?v=X48VuDVv0do) TechWorld with Nana - Kubernetes Tutorial for Beginners [FULL COURSE in 4 Hours] +- [https://www.youtube.com/watch?v=s_o8dwzRlu4](https://www.youtube.com/watch?v=s_o8dwzRlu4) TechWorld with Nana - Kubernetes Crash Course for Absolute Beginners +- [https://www.youtube.com/watch?v=KVBON1lA9N8](https://www.youtube.com/watch?v=KVBON1lA9N8) Kunal Kushwaha - Kubernetes Tutorial for Beginners | What is Kubernetes? Architecture Simplified! Day-56 -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code? Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! 
-- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform + +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO! +- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) Day-57 -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code?
Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! -- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform + +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO!
+- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) Day-58 -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code? Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!
-- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform + +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO! +- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) Day-59 -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code?
Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! -- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform + +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO!
+- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) Day-60 -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code? Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide!
-- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform + +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO! +- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) Day-61 -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code?
Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! -- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform + +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO!
+- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) Day-62 -- https://www.checkov.io/ checkov - scans cloud infrastructure configurations to find misconfigurations before they're deployed. -- https://aquasecurity.github.io/tfsec/v1.4.2/ tfsec - static analysis security scanner for your Terraform code. -- https://github.com/accurics/terrascan -- https://terraform-compliance.com/ terraform-compliance - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code. -- https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files snyk - scans your Terraform code for misconfigurations and security issues -- https://www.terraform.io/cloud-docs/sentinel Terraform Sentinel - embedded policy-as-code framework integrated with the HashiCorp Enterprise products. It enables fine-grained, logic-based policy decisions, and can be extended to use information from external sources.
-- https://terratest.gruntwork.io/ Terratest - Terratest is a Go library that provides patterns and helper functions for testing infrastructure -- https://www.youtube.com/watch?v=POPP2WTJ8es What is Infrastructure as Code? Difference of Infrastructure as Code Tools -- https://www.youtube.com/watch?v=m3cKkYXl-8o Terraform Tutorial | Terraform Course Overview 2021 -- https://www.youtube.com/watch?v=l5k1ai_GBDE Terraform explained in 15 mins | Terraform Tutorial for Beginners -- https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s Terraform Course - From BEGINNER to PRO! -- https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s HashiCorp Terraform Associate Certification Course -- https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s Terraform Full Course for Beginners -- https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! -- https://terraform.joshuajebaraj.com/ Terraform Simple Projects -- https://www.youtube.com/watch?v=oA-pPa0vfks Terraform Tutorial - The Best Project Ideas -- https://github.com/shuaibiyy/awesome-terraform -- https://www.youtube.com/watch?v=vIjeiDcsR3Q&t=51s Pulumi - IaC in your favorite programming language! + +- [https://www.checkov.io/](https://www.checkov.io/) checkov - scans cloud infrastructure configurations to find misconfigurations before they're deployed. +- [https://aquasecurity.github.io/tfsec/v1.4.2/](https://aquasecurity.github.io/tfsec/v1.4.2/) tfsec - static analysis security scanner for your Terraform code. +- [https://github.com/accurics/terrascan](https://github.com/accurics/terrascan) +- [https://terraform-compliance.com/](https://terraform-compliance.com/) terraform-compliance - a lightweight, security and compliance focused test framework against terraform to enable negative testing capability for your infrastructure-as-code. 
+- [https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files](https://docs.snyk.io/products/snyk-infrastructure-as-code/scan-terraform-files/scan-and-fix-security-issues-in-terraform-files) snyk - scans your Terraform code for misconfigurations and security issues +- [https://www.terraform.io/cloud-docs/sentinel](https://www.terraform.io/cloud-docs/sentinel) Terraform Sentinel - embedded policy-as-code framework integrated with the HashiCorp Enterprise products. It enables fine-grained, logic-based policy decisions, and can be extended to use information from external sources. +- [https://terratest.gruntwork.io/](https://terratest.gruntwork.io/) Terratest - a Go library that provides patterns and helper functions for testing infrastructure +- [https://www.youtube.com/watch?v=POPP2WTJ8es](https://www.youtube.com/watch?v=POPP2WTJ8es) What is Infrastructure as Code? Difference of Infrastructure as Code Tools +- [https://www.youtube.com/watch?v=m3cKkYXl-8o](https://www.youtube.com/watch?v=m3cKkYXl-8o) Terraform Tutorial | Terraform Course Overview 2021 +- [https://www.youtube.com/watch?v=l5k1ai_GBDE](https://www.youtube.com/watch?v=l5k1ai_GBDE) Terraform explained in 15 mins | Terraform Tutorial for Beginners +- [https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) Terraform Course - From BEGINNER to PRO!
+- [https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) HashiCorp Terraform Associate Certification Course +- [https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) Terraform Full Course for Beginners +- [https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s](https://www.youtube.com/watch?v=YcJ9IeukJL8&list=WL&index=16&t=11s) KodeKloud - Terraform for DevOps Beginners + Labs: Complete Step by Step Guide! +- [https://terraform.joshuajebaraj.com/](https://terraform.joshuajebaraj.com/) Terraform Simple Projects +- [https://www.youtube.com/watch?v=oA-pPa0vfks](https://www.youtube.com/watch?v=oA-pPa0vfks) Terraform Tutorial - The Best Project Ideas +- [https://github.com/shuaibiyy/awesome-terraform](https://github.com/shuaibiyy/awesome-terraform) +- [https://www.youtube.com/watch?v=vIjeiDcsR3Q&t=51s](https://www.youtube.com/watch?v=vIjeiDcsR3Q&t=51s) Pulumi - IaC in your favorite programming language! Day-63 -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! + +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now!
Day-64 -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! + +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now! Day-65 -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! -- https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u Your complete guide to Ansible + +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now! +- [https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) Your complete guide to Ansible Day-66 -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! 
-- https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u Your complete guide to Ansible + +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now! +- [https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) Your complete guide to Ansible Day-67 -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! -- https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u Your complete guide to Ansible + +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now! +- [https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) Your complete guide to Ansible Day-68 -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! 
-- https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u Your complete guide to Ansible + +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now! +- [https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) Your complete guide to Ansible Day-69 -- https://docs.ansible.com/ansible/latest/index.html Ansible Documentation -- https://www.youtube.com/watch?v=1id6ERvfozo What is Ansible -- https://www.youtube.com/watch?v=goclfp6a2IQ Ansible 101 - Episode 1 - Introduction to Ansible -- https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s NetworkChuck - You need to learn Ansible right now! -- https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u Your complete guide to Ansible + +- [https://docs.ansible.com/ansible/latest/index.html](https://docs.ansible.com/ansible/latest/index.html) Ansible Documentation +- [https://www.youtube.com/watch?v=1id6ERvfozo](https://www.youtube.com/watch?v=1id6ERvfozo) What is Ansible +- [https://www.youtube.com/watch?v=goclfp6a2IQ](https://www.youtube.com/watch?v=goclfp6a2IQ) Ansible 101 - Episode 1 - Introduction to Ansible +- [https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s](https://www.youtube.com/watch?v=5hycyr-8EKs&t=955s) NetworkChuck - You need to learn Ansible right now!
+- [https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u](https://www.youtube.com/playlist?list=PLnFWJCugpwfzTlIJ-JtuATD2MBBD7_m3u) Your complete guide to Ansible Day-70 -- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy -- https://www.jenkins.io/ Jenkins.io -- https://argo-cd.readthedocs.io/en/stable/ ArgoCD -- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners -- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins? -- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial -- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions -- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD + +- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy +- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io +- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD +- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners +- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins? +- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial +- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions +- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD Day-71 -- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy -- https://www.jenkins.io/ Jenkins.io -- https://argo-cd.readthedocs.io/en/stable/ ArgoCD -- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners -- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins? 
-- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial
-- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions
-- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD
+
+- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy
+- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io
+- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD
+- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners
+- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins?
+- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial
+- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions
+- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD
 Day-72
-- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy
-- https://www.jenkins.io/ Jenkins.io
-- https://argo-cd.readthedocs.io/en/stable/ ArgoCD
-- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners
-- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins?
-- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial
-- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions
-- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD
+
+- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy
+- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io
+- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD
+- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners
+- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins?
+- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial
+- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions
+- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD
 Day-73
-- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy
-- https://www.jenkins.io/ Jenkins.io
-- https://argo-cd.readthedocs.io/en/stable/ ArgoCD
-- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners
-- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins?
-- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial
-- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions
-- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD
+
+- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy
+- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io
+- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD
+- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners
+- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins?
+- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial
+- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions
+- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD
 Day-74
-- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy
-- https://www.jenkins.io/ Jenkins.io
-- https://argo-cd.readthedocs.io/en/stable/ ArgoCD
-- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners
-- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins?
-- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial
-- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions
-- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD
+
+- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy
+- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io
+- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD
+- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners
+- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins?
+- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial
+- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions
+- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD
 Day-75
-- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy
-- https://www.jenkins.io/ Jenkins.io
-- https://argo-cd.readthedocs.io/en/stable/ ArgoCD
-- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners
-- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins?
-- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial
-- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions
-- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD
+
+- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy
+- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io
+- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD
+- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners
+- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins?
+- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial
+- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions
+- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD
 Day-76
-- https://youtu.be/_MXtbjwsz3A Jenkins is the way to build, test, deploy
-- https://www.jenkins.io/ Jenkins.io
-- https://argo-cd.readthedocs.io/en/stable/ ArgoCD
-- https://www.youtube.com/watch?v=MeU5_k9ssrs ArgoCD Tutorial for Beginners
-- https://www.youtube.com/watch?v=LFDrDnKPOTg What is Jenkins?
-- https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s Complete Jenkins Tutorial
-- https://www.youtube.com/watch?v=R8_veQiYBjI GitHub Actions
-- https://www.youtube.com/watch?v=mFFXuXjVgkU GitHub Actions CI/CD
+
+- [https://www.youtube.com/watch?v=\_MXtbjwsz3A](https://www.youtube.com/watch?v=_MXtbjwsz3A) Jenkins is the way to build, test, deploy
+- [https://www.jenkins.io/](https://www.jenkins.io/) Jenkins.io
+- [https://argo-cd.readthedocs.io/en/stable/](https://argo-cd.readthedocs.io/en/stable/) ArgoCD
+- [https://www.youtube.com/watch?v=MeU5_k9ssrs](https://www.youtube.com/watch?v=MeU5_k9ssrs) ArgoCD Tutorial for Beginners
+- [https://www.youtube.com/watch?v=LFDrDnKPOTg](https://www.youtube.com/watch?v=LFDrDnKPOTg) What is Jenkins?
+- [https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s](https://www.youtube.com/watch?v=nCKxl7Q_20I&t=3s) Complete Jenkins Tutorial
+- [https://www.youtube.com/watch?v=R8_veQiYBjI](https://www.youtube.com/watch?v=R8_veQiYBjI) GitHub Actions
+- [https://www.youtube.com/watch?v=mFFXuXjVgkU](https://www.youtube.com/watch?v=mFFXuXjVgkU) GitHub Actions CI/CD
 Day-77
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
+
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
 Day-78
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
-- https://www.containiq.com/post/promql-cheat-sheet-with-examples Promql cheat sheet with examples
+
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
+- [https://www.containiq.com/post/promql-cheat-sheet-with-examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) PromQL cheat sheet with examples
 Day-79
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
-- https://www.containiq.com/post/promql-cheat-sheet-with-examples Promql cheat sheet with examples
-- https://www.youtube.com/watch?v=J0csO_Shsj0 Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
-- https://devops.com/log-management-what-devops-teams-need-to-know/ Log Management what DevOps need to know
-- https://www.youtube.com/watch?v=4X0WLg05ASw What is ELK Stack?
-- https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s Fluentd simply explained
+
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
+- [https://www.containiq.com/post/promql-cheat-sheet-with-examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) PromQL cheat sheet with examples
+- [https://www.youtube.com/watch?v=J0csO_Shsj0](https://www.youtube.com/watch?v=J0csO_Shsj0) Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
+- [https://devops.com/log-management-what-devops-teams-need-to-know/](https://devops.com/log-management-what-devops-teams-need-to-know/) Log Management: What DevOps Teams Need to Know
+- [https://www.youtube.com/watch?v=4X0WLg05ASw](https://www.youtube.com/watch?v=4X0WLg05ASw) What is ELK Stack?
+- [https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) Fluentd simply explained
 Day-80
-- https://www.youtube.com/watch?v=MMVdkzeQ848 Understanding Logging: Containers & Microservices
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
-- https://www.containiq.com/post/promql-cheat-sheet-with-examples Promql cheat sheet with examples
-- https://www.youtube.com/watch?v=J0csO_Shsj0 Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
-- https://devops.com/log-management-what-devops-teams-need-to-know/ Log Management what DevOps need to know
-- https://www.youtube.com/watch?v=4X0WLg05ASw What is ELK Stack?
-- https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s Fluentd simply explained
+
+- [https://www.youtube.com/watch?v=MMVdkzeQ848](https://www.youtube.com/watch?v=MMVdkzeQ848) Understanding Logging: Containers & Microservices
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
+- [https://www.containiq.com/post/promql-cheat-sheet-with-examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) PromQL cheat sheet with examples
+- [https://www.youtube.com/watch?v=J0csO_Shsj0](https://www.youtube.com/watch?v=J0csO_Shsj0) Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
+- [https://devops.com/log-management-what-devops-teams-need-to-know/](https://devops.com/log-management-what-devops-teams-need-to-know/) Log Management: What DevOps Teams Need to Know
+- [https://www.youtube.com/watch?v=4X0WLg05ASw](https://www.youtube.com/watch?v=4X0WLg05ASw) What is ELK Stack?
+- [https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) Fluentd simply explained
 Day-81
-- https://www.youtube.com/watch?v=MMVdkzeQ848 Understanding Logging: Containers & Microservices
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
-- https://www.containiq.com/post/promql-cheat-sheet-with-examples Promql cheat sheet with examples
-- https://www.youtube.com/watch?v=J0csO_Shsj0 Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
-- https://devops.com/log-management-what-devops-teams-need-to-know/ Log Management what DevOps need to know
-- https://www.youtube.com/watch?v=4X0WLg05ASw What is ELK Stack?
-- https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s Fluentd simply explained
-- https://www.youtube.com/watch?v=B2IS-XS-cc0 Fluent Bit explained | Fluent Bit vs Fluentd
+
+- [https://www.youtube.com/watch?v=MMVdkzeQ848](https://www.youtube.com/watch?v=MMVdkzeQ848) Understanding Logging: Containers & Microservices
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
+- [https://www.containiq.com/post/promql-cheat-sheet-with-examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) PromQL cheat sheet with examples
+- [https://www.youtube.com/watch?v=J0csO_Shsj0](https://www.youtube.com/watch?v=J0csO_Shsj0) Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
+- [https://devops.com/log-management-what-devops-teams-need-to-know/](https://devops.com/log-management-what-devops-teams-need-to-know/) Log Management: What DevOps Teams Need to Know
+- [https://www.youtube.com/watch?v=4X0WLg05ASw](https://www.youtube.com/watch?v=4X0WLg05ASw) What is ELK Stack?
+- [https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) Fluentd simply explained
+- [https://www.youtube.com/watch?v=B2IS-XS-cc0](https://www.youtube.com/watch?v=B2IS-XS-cc0) Fluent Bit explained | Fluent Bit vs Fluentd
 Day-82
-- https://www.youtube.com/watch?v=MMVdkzeQ848 Understanding Logging: Containers & Microservices
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
-- https://www.containiq.com/post/promql-cheat-sheet-with-examples Promql cheat sheet with examples
-- https://www.youtube.com/watch?v=J0csO_Shsj0 Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
-- https://devops.com/log-management-what-devops-teams-need-to-know/ Log Management what DevOps need to know
-- https://www.youtube.com/watch?v=4X0WLg05ASw What is ELK Stack?
-- https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s Fluentd simply explained
+
+- [https://www.youtube.com/watch?v=MMVdkzeQ848](https://www.youtube.com/watch?v=MMVdkzeQ848) Understanding Logging: Containers & Microservices
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
+- [https://www.containiq.com/post/promql-cheat-sheet-with-examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) PromQL cheat sheet with examples
+- [https://www.youtube.com/watch?v=J0csO_Shsj0](https://www.youtube.com/watch?v=J0csO_Shsj0) Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
+- [https://devops.com/log-management-what-devops-teams-need-to-know/](https://devops.com/log-management-what-devops-teams-need-to-know/) Log Management: What DevOps Teams Need to Know
+- [https://www.youtube.com/watch?v=4X0WLg05ASw](https://www.youtube.com/watch?v=4X0WLg05ASw) What is ELK Stack?
+- [https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) Fluentd simply explained
 Day-83
-- https://www.youtube.com/watch?v=MMVdkzeQ848 Understanding Logging: Containers & Microservices
-- https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/ The Importance of Monitoring in DevOps
-- https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b Understanding Continuous Monitoring in DevOps?
-- https://www.youtube.com/watch?v=Zu53QQuYqJ0 DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=4t71iv_9t_4 Top 5 - DevOps Monitoring Tools
-- https://www.youtube.com/watch?v=h4Sl21AKiDg How Prometheus Monitoring works
-- https://www.youtube.com/watch?v=5o37CGlNLr8 Introduction to Prometheus monitoring
-- https://www.containiq.com/post/promql-cheat-sheet-with-examples Promql cheat sheet with examples
-- https://www.youtube.com/watch?v=J0csO_Shsj0 Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
-- https://devops.com/log-management-what-devops-teams-need-to-know/ Log Management what DevOps need to know
-- https://www.youtube.com/watch?v=4X0WLg05ASw What is ELK Stack?
-- https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s Fluentd simply explained
+
+- [https://www.youtube.com/watch?v=MMVdkzeQ848](https://www.youtube.com/watch?v=MMVdkzeQ848) Understanding Logging: Containers & Microservices
+- [https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/](https://www.devopsonline.co.uk/the-importance-of-monitoring-in-devops/) The Importance of Monitoring in DevOps
+- [https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b](https://medium.com/devopscurry/understanding-continuous-monitoring-in-devops-f6695b004e3b) Understanding Continuous Monitoring in DevOps?
+- [https://www.youtube.com/watch?v=Zu53QQuYqJ0](https://www.youtube.com/watch?v=Zu53QQuYqJ0) DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=4t71iv_9t_4](https://www.youtube.com/watch?v=4t71iv_9t_4) Top 5 - DevOps Monitoring Tools
+- [https://www.youtube.com/watch?v=h4Sl21AKiDg](https://www.youtube.com/watch?v=h4Sl21AKiDg) How Prometheus Monitoring works
+- [https://www.youtube.com/watch?v=5o37CGlNLr8](https://www.youtube.com/watch?v=5o37CGlNLr8) Introduction to Prometheus monitoring
+- [https://www.containiq.com/post/promql-cheat-sheet-with-examples](https://www.containiq.com/post/promql-cheat-sheet-with-examples) PromQL cheat sheet with examples
+- [https://www.youtube.com/watch?v=J0csO_Shsj0](https://www.youtube.com/watch?v=J0csO_Shsj0) Log Management for DevOps | Manage application, server, and cloud logs with Site24x7
+- [https://devops.com/log-management-what-devops-teams-need-to-know/](https://devops.com/log-management-what-devops-teams-need-to-know/) Log Management: What DevOps Teams Need to Know
+- [https://www.youtube.com/watch?v=4X0WLg05ASw](https://www.youtube.com/watch?v=4X0WLg05ASw) What is ELK Stack?
+- [https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s](https://www.youtube.com/watch?v=5ofsNyHZwWE&t=14s) Fluentd simply explained
 Day-84
-- https://www.youtube.com/watch?v=01qcYSck1c4&t=217s Kubernetes Backup and Restore made easy!
-- https://www.youtube.com/watch?v=zybLTQER0yY Kubernetes Backups, Upgrades, Migrations - with Velero
-- https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s 7 Database Paradigms
-- https://www.youtube.com/watch?v=07EHsPuKXc0 Disaster Recovery vs. Backup: What's the difference?
-- https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s Veeam Portability & Cloud Mobility
+
+- [https://www.youtube.com/watch?v=01qcYSck1c4&t=217s](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) Kubernetes Backup and Restore made easy!
+- [https://www.youtube.com/watch?v=zybLTQER0yY](https://www.youtube.com/watch?v=zybLTQER0yY) Kubernetes Backups, Upgrades, Migrations - with Velero
+- [https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s](https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s) 7 Database Paradigms
+- [https://www.youtube.com/watch?v=07EHsPuKXc0](https://www.youtube.com/watch?v=07EHsPuKXc0) Disaster Recovery vs. Backup: What's the difference?
+- [https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) Veeam Portability & Cloud Mobility
 Day-85
-- https://www.youtube.com/watch?v=OqCK95AS-YE Redis Crash Course - the What, Why and How to use Redis as your primary database
-- https://www.youtube.com/watch?v=GEg7s3i6Jak Redis: How to setup a cluster - for beginners
-- https://www.youtube.com/watch?v=JmCn7k0PlV4 Redis on Kubernetes for beginners
-- https://www.youtube.com/watch?v=YjYWsN1vek8 Intro to Cassandra - Cassandra Fundamentals
-- https://www.youtube.com/watch?v=ofme2o29ngU MongoDB Crash Course
-- https://www.youtube.com/watch?v=-bt_y4Loofg MongoDB in 100 Seconds
-- https://www.youtube.com/watch?v=OqjJjpjDRLc What is a Relational Database?
-- https://www.youtube.com/watch?v=qw--VYLpxG4 Learn PostgreSQL Tutorial - Full Course for Beginners
-- https://www.youtube.com/watch?v=7S_tz1z_5bA MySQL Tutorial for Beginners [Full Course]
-- https://www.youtube.com/watch?v=REVkXVxvMQE What is a graph database? (in 10 minutes)
-- https://www.youtube.com/watch?v=ZP0NmfyfsoM What is Elasticsearch?
-- https://www.youtube.com/watch?v=2CipVwISumA FaunaDB Basics - The Database of your Dreams
-- https://www.youtube.com/watch?v=ihaB7CqJju0 Fauna Crash Course - Covering the Basics
+
+- [https://www.youtube.com/watch?v=OqCK95AS-YE](https://www.youtube.com/watch?v=OqCK95AS-YE) Redis Crash Course - the What, Why and How to use Redis as your primary database
+- [https://www.youtube.com/watch?v=GEg7s3i6Jak](https://www.youtube.com/watch?v=GEg7s3i6Jak) Redis: How to setup a cluster - for beginners
+- [https://www.youtube.com/watch?v=JmCn7k0PlV4](https://www.youtube.com/watch?v=JmCn7k0PlV4) Redis on Kubernetes for beginners
+- [https://www.youtube.com/watch?v=YjYWsN1vek8](https://www.youtube.com/watch?v=YjYWsN1vek8) Intro to Cassandra - Cassandra Fundamentals
+- [https://www.youtube.com/watch?v=ofme2o29ngU](https://www.youtube.com/watch?v=ofme2o29ngU) MongoDB Crash Course
+- [https://www.youtube.com/watch?v=-bt_y4Loofg](https://www.youtube.com/watch?v=-bt_y4Loofg) MongoDB in 100 Seconds
+- [https://www.youtube.com/watch?v=OqjJjpjDRLc](https://www.youtube.com/watch?v=OqjJjpjDRLc) What is a Relational Database?
+- [https://www.youtube.com/watch?v=qw--VYLpxG4](https://www.youtube.com/watch?v=qw--VYLpxG4) Learn PostgreSQL Tutorial - Full Course for Beginners
+- [https://www.youtube.com/watch?v=7S_tz1z_5bA](https://www.youtube.com/watch?v=7S_tz1z_5bA) MySQL Tutorial for Beginners [Full Course]
+- [https://www.youtube.com/watch?v=REVkXVxvMQE](https://www.youtube.com/watch?v=REVkXVxvMQE) What is a graph database? (in 10 minutes)
+- [https://www.youtube.com/watch?v=ZP0NmfyfsoM](https://www.youtube.com/watch?v=ZP0NmfyfsoM) What is Elasticsearch?
+- [https://www.youtube.com/watch?v=2CipVwISumA](https://www.youtube.com/watch?v=2CipVwISumA) FaunaDB Basics - The Database of your Dreams
+- [https://www.youtube.com/watch?v=ihaB7CqJju0](https://www.youtube.com/watch?v=ihaB7CqJju0) Fauna Crash Course - Covering the Basics
 Day-86
-- https://www.youtube.com/watch?v=01qcYSck1c4&t=217s Kubernetes Backup and Restore made easy!
-- https://www.youtube.com/watch?v=zybLTQER0yY Kubernetes Backups, Upgrades, Migrations - with Velero
-- https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s 7 Database Paradigms
-- https://www.youtube.com/watch?v=07EHsPuKXc0 Disaster Recovery vs. Backup: What's the difference?
-- https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s Veeam Portability & Cloud Mobility
+
+- [https://www.youtube.com/watch?v=01qcYSck1c4&t=217s](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) Kubernetes Backup and Restore made easy!
+- [https://www.youtube.com/watch?v=zybLTQER0yY](https://www.youtube.com/watch?v=zybLTQER0yY) Kubernetes Backups, Upgrades, Migrations - with Velero
+- [https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s](https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s) 7 Database Paradigms
+- [https://www.youtube.com/watch?v=07EHsPuKXc0](https://www.youtube.com/watch?v=07EHsPuKXc0) Disaster Recovery vs. Backup: What's the difference?
+- [https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) Veeam Portability & Cloud Mobility
 Day-87
-- https://www.youtube.com/watch?v=01qcYSck1c4&t=217s Kubernetes Backup and Restore made easy!
-- https://www.youtube.com/watch?v=zybLTQER0yY Kubernetes Backups, Upgrades, Migrations - with Velero
-- https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s 7 Database Paradigms
-- https://www.youtube.com/watch?v=07EHsPuKXc0 Disaster Recovery vs. Backup: What's the difference?
-- https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s Veeam Portability & Cloud Mobility
+
+- [https://www.youtube.com/watch?v=01qcYSck1c4&t=217s](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) Kubernetes Backup and Restore made easy!
+- [https://www.youtube.com/watch?v=zybLTQER0yY](https://www.youtube.com/watch?v=zybLTQER0yY) Kubernetes Backups, Upgrades, Migrations - with Velero
+- [https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s](https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s) 7 Database Paradigms
+- [https://www.youtube.com/watch?v=07EHsPuKXc0](https://www.youtube.com/watch?v=07EHsPuKXc0) Disaster Recovery vs. Backup: What's the difference?
+- [https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) Veeam Portability & Cloud Mobility
 Day-88
-- https://www.youtube.com/watch?v=wFD42Zpbfts Kanister Overview - An extensible open-source framework for app-lvl data management on Kubernetes
-- https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kanister-application-level-data-operations-on-kubernetes/ Application Level Data Operations on Kubernetes
-- https://www.youtube.com/watch?v=01qcYSck1c4&t=217s Kubernetes Backup and Restore made easy!
-- https://www.youtube.com/watch?v=zybLTQER0yY Kubernetes Backups, Upgrades, Migrations - with Velero
-- https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s 7 Database Paradigms
-- https://www.youtube.com/watch?v=07EHsPuKXc0 Disaster Recovery vs. Backup: What's the difference?
-- https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s Veeam Portability & Cloud Mobility
+
+- [https://www.youtube.com/watch?v=wFD42Zpbfts](https://www.youtube.com/watch?v=wFD42Zpbfts) Kanister Overview - An extensible open-source framework for app-lvl data management on Kubernetes
+- [https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kanister-application-level-data-operations-on-kubernetes/](https://community.cncf.io/events/details/cncf-cncf-online-programs-presents-cncf-live-webinar-kanister-application-level-data-operations-on-kubernetes/) Application Level Data Operations on Kubernetes
+- [https://www.youtube.com/watch?v=01qcYSck1c4&t=217s](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) Kubernetes Backup and Restore made easy!
+- [https://www.youtube.com/watch?v=zybLTQER0yY](https://www.youtube.com/watch?v=zybLTQER0yY) Kubernetes Backups, Upgrades, Migrations - with Velero
+- [https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s](https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s) 7 Database Paradigms
+- [https://www.youtube.com/watch?v=07EHsPuKXc0](https://www.youtube.com/watch?v=07EHsPuKXc0) Disaster Recovery vs. Backup: What's the difference?
+- [https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) Veeam Portability & Cloud Mobility
 Day-89
-- https://www.youtube.com/watch?v=01qcYSck1c4&t=217s Kubernetes Backup and Restore made easy!
-- https://www.youtube.com/watch?v=zybLTQER0yY Kubernetes Backups, Upgrades, Migrations - with Velero
-- https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s 7 Database Paradigms
-- https://www.youtube.com/watch?v=07EHsPuKXc0 Disaster Recovery vs. Backup: What's the difference?
-- https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s Veeam Portability & Cloud Mobility
+
+- [https://www.youtube.com/watch?v=01qcYSck1c4&t=217s](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) Kubernetes Backup and Restore made easy!
+- [https://www.youtube.com/watch?v=zybLTQER0yY](https://www.youtube.com/watch?v=zybLTQER0yY) Kubernetes Backups, Upgrades, Migrations - with Velero +- [https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s](https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s) 7 Database Paradigms +- [https://www.youtube.com/watch?v=07EHsPuKXc0](https://www.youtube.com/watch?v=07EHsPuKXc0) Disaster Recovery vs. Backup: What's the difference? +- [https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) Veeam Portability & Cloud Mobility Day-90 -- https://www.youtube.com/watch?v=01qcYSck1c4&t=217s Kubernetes Backup and Restore made easy! -- https://www.youtube.com/watch?v=zybLTQER0yY Kubernetes Backups, Upgrades, Migrations - with Velero -- https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s 7 Database Paradigms -- https://www.youtube.com/watch?v=07EHsPuKXc0 Disaster Recovery vs. Backup: What's the difference? -- https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s Veeam Portability & Cloud Mobility + +- [https://www.youtube.com/watch?v=01qcYSck1c4&t=217s](https://www.youtube.com/watch?v=01qcYSck1c4&t=217s) Kubernetes Backup and Restore made easy! +- [https://www.youtube.com/watch?v=zybLTQER0yY](https://www.youtube.com/watch?v=zybLTQER0yY) Kubernetes Backups, Upgrades, Migrations - with Velero +- [https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s](https://www.youtube.com/watch?v=W2Z7fbCLSTw&t=520s) 7 Database Paradigms +- [https://www.youtube.com/watch?v=07EHsPuKXc0](https://www.youtube.com/watch?v=07EHsPuKXc0) Disaster Recovery vs. Backup: What's the difference? +- [https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s](https://www.youtube.com/watch?v=hDBlTdzE6Us&t=3s) Veeam Portability & Cloud Mobility diff --git a/ja/Days/day02.md b/ja/Days/day02.md index dfac0577c..f0e414604 100644 --- a/ja/Days/day02.md +++ b/ja/Days/day02.md @@ -61,7 +61,7 @@ DevOpsエンジニアとして、あなたはアプリケーションをプロ - [What is DevOps? 
- TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM) - [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE) - [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM) -- [What is DevOps? - AWS ](https://aws.amazon.com/devops/what-is-devops/) +- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/) - [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops) -ここまで来れば、ここが自分の望むところかどうかが分かるはずです。それでは、[3日目](day03.md)でお会いしましょう。 \ No newline at end of file +ここまで来れば、ここが自分の望むところかどうかが分かるはずです。それでは、[3日目](day03.md)でお会いしましょう。 diff --git a/ja/Days/day05.md b/ja/Days/day05.md index 85f35e14e..8d73b693b 100644 --- a/ja/Days/day05.md +++ b/ja/Days/day05.md @@ -77,7 +77,7 @@ CIリリースが成功した場合 = 継続的デプロイメント = デプロ ### リソース: - [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU) -- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps? ](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) +- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) - [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk) ここまで来れば、ここが自分の居場所かどうかが分かるはずです。 diff --git a/ja/Days/day56.md b/ja/Days/day56.md index c4bad6cd9..d90c0931b 100644 --- a/ja/Days/day56.md +++ b/ja/Days/day56.md @@ -112,9 +112,9 @@ Next up we will start looking into Terraform with a 101 before we get some hands ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/ja/Days/day57.md b/ja/Days/day57.md index 1b82dbf30..764924d10 100644 --- a/ja/Days/day57.md +++ b/ja/Days/day57.md @@ -87,9 +87,9 @@ We are going to get into more around HCL and then also start using Terraform to ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/ja/Days/day58.md b/ja/Days/day58.md index cdb9b6eeb..53d5ecd0f 100644 --- a/ja/Days/day58.md +++ b/ja/Days/day58.md @@ -219,9 +219,9 @@ The pros for storing state in a remote location is that we get: ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/ja/Days/day59.md b/ja/Days/day59.md index 90d7365e1..7ec0bf5c4 100644 --- a/ja/Days/day59.md +++ b/ja/Days/day59.md @@ -115,9 +115,9 @@ variable "some resource" { ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/ja/Days/day60.md b/ja/Days/day60.md index b88ace47c..61b4dbf73 100644 --- a/ja/Days/day60.md +++ b/ja/Days/day60.md @@ -179,9 +179,9 @@ We are breaking down our infrastructure into components, components are known he ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/ja/Days/day61.md b/ja/Days/day61.md index 4b159328e..c3c819b69 100644 --- a/ja/Days/day61.md +++ b/ja/Days/day61.md @@ -153,9 +153,9 @@ Cons ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/ja/Days/day62.md b/ja/Days/day62.md index 61ead6767..5ee56a53c 100644 --- a/ja/Days/day62.md +++ b/ja/Days/day62.md @@ -94,9 +94,9 @@ This wraps up the Infrastructure as code section and next we move on to that lit ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day02.md b/zh_cn/Days/day02.md index 8889e19cf..0b9ac7ddb 100644 --- a/zh_cn/Days/day02.md +++ b/zh_cn/Days/day02.md @@ -61,7 +61,7 @@ My advice is to watch all of the below and hopefully you also picked something u - [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM) - [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE) - [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM) -- [What is DevOps? - AWS ](https://aws.amazon.com/devops/what-is-devops/) +- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/) - [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops) If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md). 
diff --git a/zh_cn/Days/day05.md b/zh_cn/Days/day05.md index 721ea6bab..4246b1b27 100644 --- a/zh_cn/Days/day05.md +++ b/zh_cn/Days/day05.md @@ -77,7 +77,7 @@ This last bit was a bit of a recap for me on Day 3 but think this actually makes ### Resources: - [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU) -- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps? ](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) +- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) - [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk) If you made it this far then you will know if this is where you want to be or not. diff --git a/zh_cn/Days/day56.md b/zh_cn/Days/day56.md index c4bad6cd9..d90c0931b 100644 --- a/zh_cn/Days/day56.md +++ b/zh_cn/Days/day56.md @@ -112,9 +112,9 @@ Next up we will start looking into Terraform with a 101 before we get some hands ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day57.md b/zh_cn/Days/day57.md index e89a386d6..26bea35e2 100644 --- a/zh_cn/Days/day57.md +++ b/zh_cn/Days/day57.md @@ -86,9 +86,9 @@ We are going to get into more around HCL and then also start using Terraform to ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day58.md b/zh_cn/Days/day58.md index cdb9b6eeb..53d5ecd0f 100644 --- a/zh_cn/Days/day58.md +++ b/zh_cn/Days/day58.md @@ -219,9 +219,9 @@ The pros for storing state in a remote location is that we get: ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day59.md b/zh_cn/Days/day59.md index 90d7365e1..7ec0bf5c4 100644 --- a/zh_cn/Days/day59.md +++ b/zh_cn/Days/day59.md @@ -115,9 +115,9 @@ variable "some resource" { ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day60.md b/zh_cn/Days/day60.md index b88ace47c..61b4dbf73 100644 --- a/zh_cn/Days/day60.md +++ b/zh_cn/Days/day60.md @@ -179,9 +179,9 @@ We are breaking down our infrastructure into components, components are known he ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day61.md b/zh_cn/Days/day61.md index 4b159328e..c3c819b69 100644 --- a/zh_cn/Days/day61.md +++ b/zh_cn/Days/day61.md @@ -153,9 +153,9 @@ Cons ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_cn/Days/day62.md b/zh_cn/Days/day62.md index 61ead6767..5ee56a53c 100644 --- a/zh_cn/Days/day62.md +++ b/zh_cn/Days/day62.md @@ -94,9 +94,9 @@ This wraps up the Infrastructure as code section and next we move on to that lit ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_tw/Days/day02.md b/zh_tw/Days/day02.md index 43c737283..512546333 100644 --- a/zh_tw/Days/day02.md +++ b/zh_tw/Days/day02.md @@ -62,7 +62,7 @@ My advice is to watch all of the below and hopefully you also picked something u - [What is DevOps? - TechWorld with Nana](https://www.youtube.com/watch?v=0yWAtQ6wYNM) - [What is DevOps? - GitHub YouTube](https://www.youtube.com/watch?v=kBV8gPVZNEE) - [What is DevOps? - IBM YouTube](https://www.youtube.com/watch?v=UbtB4sMaaNM) -- [What is DevOps? - AWS ](https://aws.amazon.com/devops/what-is-devops/) +- [What is DevOps? - AWS](https://aws.amazon.com/devops/what-is-devops/) - [What is DevOps? - Microsoft](https://docs.microsoft.com/en-us/devops/what-is-devops) If you made it this far then you will know if this is where you want to be or not. See you on [Day 3](day03.md). 
diff --git a/zh_tw/Days/day05.md b/zh_tw/Days/day05.md index 721ea6bab..4246b1b27 100644 --- a/zh_tw/Days/day05.md +++ b/zh_tw/Days/day05.md @@ -77,7 +77,7 @@ This last bit was a bit of a recap for me on Day 3 but think this actually makes ### Resources: - [DevOps for Developers – Software or DevOps Engineer?](https://www.youtube.com/watch?v=a0-uE3rOyeU) -- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps? ](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) +- [Techworld with Nana -DevOps Roadmap 2022 - How to become a DevOps Engineer? What is DevOps?](https://www.youtube.com/watch?v=9pZ2xmsSDdo&t=125s) - [How to become a DevOps Engineer in 2021 - DevOps Roadmap](https://www.youtube.com/watch?v=5pxbp6FyTfk) If you made it this far then you will know if this is where you want to be or not. diff --git a/zh_tw/Days/day56.md b/zh_tw/Days/day56.md index c4bad6cd9..d90c0931b 100644 --- a/zh_tw/Days/day56.md +++ b/zh_tw/Days/day56.md @@ -112,9 +112,9 @@ Next up we will start looking into Terraform with a 101 before we get some hands ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_tw/Days/day57.md b/zh_tw/Days/day57.md index 1b82dbf30..764924d10 100644 --- a/zh_tw/Days/day57.md +++ b/zh_tw/Days/day57.md @@ -87,9 +87,9 @@ We are going to get into more around HCL and then also start using Terraform to ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es) - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o) -- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE) +- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE) - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s) - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s) - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s) diff --git a/zh_tw/Days/day58.md b/zh_tw/Days/day58.md index cdb9b6eeb..53d5ecd0f 100644 --- a/zh_tw/Days/day58.md +++ b/zh_tw/Days/day58.md @@ -219,9 +219,9 @@ The pros for storing state in a remote location is that we get: ## Resources I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list. -- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es) +- [What is Infrastructure as Code? 
Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
 - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
 - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
 - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
 - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
diff --git a/zh_tw/Days/day59.md b/zh_tw/Days/day59.md
index 90d7365e1..7ec0bf5c4 100644
--- a/zh_tw/Days/day59.md
+++ b/zh_tw/Days/day59.md
@@ -115,9 +115,9 @@ variable "some resource" {
 ## Resources
 I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
 - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
 - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
 - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
 - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
diff --git a/zh_tw/Days/day60.md b/zh_tw/Days/day60.md
index b88ace47c..61b4dbf73 100644
--- a/zh_tw/Days/day60.md
+++ b/zh_tw/Days/day60.md
@@ -179,9 +179,9 @@ We are breaking down our infrastructure into components, components are known he
 ## Resources
 I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
 - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
 - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
 - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
 - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
diff --git a/zh_tw/Days/day61.md b/zh_tw/Days/day61.md
index 4b159328e..c3c819b69 100644
--- a/zh_tw/Days/day61.md
+++ b/zh_tw/Days/day61.md
@@ -153,9 +153,9 @@ Cons
 ## Resources
 I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
 - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
 - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
 - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
 - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)
diff --git a/zh_tw/Days/day62.md b/zh_tw/Days/day62.md
index 61ead6767..5ee56a53c 100644
--- a/zh_tw/Days/day62.md
+++ b/zh_tw/Days/day62.md
@@ -94,9 +94,9 @@ This wraps up the Infrastructure as code section and next we move on to that lit
 ## Resources
 I have listed a lot of resources down below and I think this topic has been covered so many times out there, If you have additional resources be sure to raise a PR with your resources and I will be happy to review and add them to the list.
-- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools ](https://www.youtube.com/watch?v=POPP2WTJ8es)
+- [What is Infrastructure as Code? Difference of Infrastructure as Code Tools](https://www.youtube.com/watch?v=POPP2WTJ8es)
 - [Terraform Tutorial | Terraform Course Overview 2021](https://www.youtube.com/watch?v=m3cKkYXl-8o)
-- [Terraform explained in 15 mins | Terraform Tutorial for Beginners ](https://www.youtube.com/watch?v=l5k1ai_GBDE)
+- [Terraform explained in 15 mins | Terraform Tutorial for Beginners](https://www.youtube.com/watch?v=l5k1ai_GBDE)
 - [Terraform Course - From BEGINNER to PRO!](https://www.youtube.com/watch?v=7xngnjfIlK4&list=WL&index=141&t=16s)
 - [HashiCorp Terraform Associate Certification Course](https://www.youtube.com/watch?v=V4waklkBC38&list=WL&index=55&t=111s)
 - [Terraform Full Course for Beginners](https://www.youtube.com/watch?v=EJ3N-hhiWv0&list=WL&index=39&t=27s)