diff --git a/docs/deploy-postgres-k8.md b/docs/deploy-postgres-k8.md deleted file mode 100644 index 8f98f129..00000000 --- a/docs/deploy-postgres-k8.md +++ /dev/null @@ -1,259 +0,0 @@ -# Deploying PostgreSQL in Kubernetes with ArgoCD using the Apps of Apps Pattern - -PostgreSQL, also known as Postgres, is a widely popular open-source relational database system known for its robustness, advanced features, and strong community support. This article will guide you through deploying PostgreSQL in a Kubernetes cluster using ArgoCD with the Apps of Apps pattern. - -## Why PostgreSQL is Popular - -PostgreSQL's popularity stems from several key features: - -1. **Open Source**: Free and open-source, allowing usage, modification, and distribution without licensing costs. -2. **Advanced Features**: Supports advanced data types, full ACID compliance, complex queries, JSON support, full-text search, and custom data types. -3. **Performance**: Optimized for high performance with large datasets, supporting concurrent transactions efficiently. -4. **Extensibility**: Allows users to define custom functions and operators, and supports a wide range of extensions. -5. **Community Support**: A large, active community provides extensive documentation, plugins, and third-party tools. - -## Prerequisites - -Before deploying PostgreSQL with ArgoCD, ensure you have the following: - -- A Kubernetes cluster running (local or cloud-based). -- `kubectl` command-line tool configured to interact with your cluster. -- ArgoCD installed and configured on your Kubernetes cluster. -- A Git repository to store your Kubernetes manifests. - -## Step 1: Set Up ArgoCD and the Git Repository - -Ensure that ArgoCD is installed and set up correctly on your Kubernetes cluster. You should also have a Git repository where you will store your Kubernetes manifests. - -## Step 2: Create the Root Application - -The Apps of Apps pattern in ArgoCD involves having a root application that manages other applications. 
Let's create a root application manifest. - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: - name: registry - namespace: argocd - finalizers: - - resources-finalizer.argocd.argoproj.io -spec: - project: default - source: - repoURL: 'https://github.com/mrpbennett/home-ops.git' - path: kubernetes/registry - targetRevision: HEAD - directory: - recurse: true - destination: - server: 'https://kubernetes.default.svc' - syncPolicy: - automated: - prune: true - selfHeal: true - syncOptions: - - Validate=true - - CreateNamespace=false - retry: - limit: 5 - backoff: - duration: 5s - maxDuration: 5m0s - factor: 2 -``` - -Now that we have the root application, we can create our Postgres application. The root application will look inside a directory, where we can place more applications like the Postgres one below. - -```yaml -apiVersion: argoproj.io/v1alpha1 -kind: Application -metadata: - name: &app postgres-db - namespace: argocd - finalizers: - - resources-finalizer.argocd.argoproj.io -spec: - project: default - source: - repoURL: 'https://github.com/mrpbennett/home-ops.git' - path: kubernetes/apps/postgres-db - targetRevision: HEAD - directory: - recurse: true - destination: - namespace: *app - server: 'https://kubernetes.default.svc' - syncPolicy: - automated: - prune: true - selfHeal: true - syncOptions: - - CreateNamespace=true - retry: - limit: 5 - backoff: - duration: 5s - maxDuration: 5m0s - factor: 2 -``` - -## Creating the manifest files for Postgres - -### Configuration - -**Config Map** - -In Kubernetes, a ConfigMap is an API object that stores configuration data in key-value pairs, which pods or containers can use in a cluster. ConfigMaps help decouple configuration details from the application code, making it easier to manage and update configuration settings without changing the application’s code. 
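The decoupling described above can be sketched in one place: a Pod can pull every key of a ConfigMap in as environment variables via `envFrom`. The names below are illustrative, not part of this repo:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo              # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: app
      image: busybox
      command: ['env']        # dumps the injected variables to stdout
      envFrom:
        - configMapRef:
            name: app-settings   # hypothetical ConfigMap; each key becomes an env var
```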
- -Let’s create a ConfigMap configuration file to store PostgreSQL connection details such as hostname, database name, username, and other settings. - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: postgres-secret - labels: - app: postgres -data: - POSTGRES_DB: ps_db - POSTGRES_USER: ps_user - POSTGRES_PASSWORD: SecurePassword -``` - -> Storing sensitive data in a ConfigMap is not recommended due to security concerns. When handling sensitive data within Kubernetes, it’s essential to use Secrets and follow security best practices to ensure the protection and confidentiality of your data. - -### Storage - -PersistentVolume (PV) and PersistentVolumeClaim (PVC) are Kubernetes resources that provide and claim persistent storage in a cluster. A PersistentVolume provides storage resources in the cluster, while a PersistentVolumeClaim allows pods to request specific storage resources. - -**Persistent Volume** - -```yaml -apiVersion: v1 -kind: PersistentVolume -metadata: - name: postgres-volume - labels: - type: local - app: postgres -spec: - storageClassName: manual - capacity: - storage: 50Gi - accessModes: - - ReadWriteMany - hostPath: - path: /var/mnt/storage/postgresql -``` - -Here I have set `accessModes` to `ReadWriteMany`, allowing multiple Pods to read and write to the volume simultaneously. This is because we're going to be setting the replicas to more than 1. As I am using [Talos](https://www.talos.dev/) as my OS, the `hostPath` is the extra drive I have mounted on my nodes. - -```yaml -extraMounts: - - destination: /var/mnt/storage # Destination is the absolute path where the mount will be placed in the container. - type: bind # Type specifies the mount kind. - source: /var/mnt/storage # Source specifies the source path of the mount. 
-``` - -In my case any PV will always start under `/var/mnt/storage`. - -**Persistent Volume Claim** - -```yaml -apiVersion: v1 -kind: PersistentVolumeClaim -metadata: - name: postgres-volume-claim - labels: - app: postgres -spec: - storageClassName: manual - accessModes: - - ReadWriteMany - resources: - requests: - storage: 50Gi -``` - -**Deployment** - -Creating a PostgreSQL deployment in Kubernetes involves defining a Deployment manifest to orchestrate the PostgreSQL pods. - -```yaml -apiVersion: apps/v1 -kind: Deployment -metadata: - name: postgres -spec: - replicas: 3 - selector: - matchLabels: - app: postgres - template: - metadata: - labels: - app: postgres - spec: - containers: - - name: postgres - image: 'postgres:16' - imagePullPolicy: IfNotPresent - ports: - - containerPort: 5432 - envFrom: - - configMapRef: - name: postgres-secret - volumeMounts: - - mountPath: /var/lib/postgresql/data - name: postgresdata - volumes: - - name: postgresdata - persistentVolumeClaim: - claimName: postgres-volume-claim -``` - -The Deployment references the `PersistentVolumeClaim` named “postgres-volume-claim” that we created earlier. This claim provides persistent storage to the PostgreSQL container so that data is retained across Pod restarts or rescheduling. - -**Service** - -As I am running 3 replicas and MetalLB, I have chosen to use `LoadBalancer` for my Service; this will give me an external IP from the cluster. The IP is one of the internal IPs on my network. A Service defines a logical set of Pods and lets other Pods in the cluster communicate with them without needing to know their individual IP addresses. 
- -```yaml -apiVersion: v1 -kind: Service -metadata: - name: postgres - labels: - app: postgres -spec: - type: LoadBalancer - ports: - - port: 5432 - selector: - app: postgres -``` - -Once the Service is created, other applications or services within the Kubernetes cluster can communicate with the PostgreSQL database using the `postgres` Service name and port 5432 as the entry point. You can see how the LoadBalancer has provided an internal IP on my network that allows me to connect to it from my local machine. - -All the above manifests should be within their own directory like so: - -``` -📁 -├── configmap.yaml -├── deployment.yaml -├── persistant-vol-claim.yaml -├── persistant-vol.yaml -└── service.yaml -``` - -Now all that is left to do is commit the changes to your repo and let ArgoCD take care of the rest. Once ArgoCD has synced and got everything up and running it should look like this: - -IMAGE---- - -Things will look a little different here, as I only have 2 replicas and I am still playing around with PGAdmin. - -## That's it - -On a basic level this is how I have set up Postgres. If this has helped then please do let me know. I will be exploring how to back up my database as well as implementing things like [PGAdmin](https://www.pgadmin.org/). 
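For the backup exploration mentioned above, one hedged starting point is a Kubernetes CronJob that runs `pg_dump` against the `postgres` Service on a schedule. The schedule, Secret, and backup PVC below are illustrative assumptions, not manifests from this repo:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: postgres-backup              # hypothetical name
spec:
  schedule: '0 2 * * *'              # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: pg-dump
              image: postgres:16     # same major version as the server
              command: ['/bin/sh', '-c']
              args:
                - pg_dump -h postgres -U ps_user ps_db > /backups/ps_db_$(date +%F).sql
              env:
                - name: PGPASSWORD
                  valueFrom:
                    secretKeyRef:    # assumes credentials live in a Secret, per the warning earlier
                      name: postgres-credentials
                      key: password
              volumeMounts:
                - name: backups
                  mountPath: /backups
          volumes:
            - name: backups
              persistentVolumeClaim:
                claimName: postgres-backup-claim   # hypothetical PVC for dump files
```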
- -Inspo - [How to Deploy Postgres to Kubernetes Cluster](https://www.digitalocean.com/community/tutorials/how-to-deploy-postgres-to-kubernetes-cluster) diff --git a/docs/k3s/k3s-config.yaml b/docs/k3s/k3s-config.yaml new file mode 100644 index 00000000..9cabbf4f --- /dev/null +++ b/docs/k3s/k3s-config.yaml @@ -0,0 +1,19 @@ +# Configuration file for k3s + +# Node Configuration +node-name: "my-k3s-node" # The name of the node +token: 95e2850a0e0b505b8b677661885509a2 + +# TLS Configuration +tls-san: 192.168.5.200 + +# ETCD configuration +etcd-snapshot-schedule-cron: "0 */6 * * *" #Schedule ETCD snapshots every 6 hours +etcd-snapshot-retention: 7 # Retain the last 7 snapshots +etcd-snapshot-dir: "/var/lib/rancher/k3s/etcd/snapshots" # Directory to store ETCD snapshots + +# Security Configuration +# List of features to disable +disable: + - "traefik" + - "servicelb" diff --git a/docs/k3s/k3s-registry.yaml b/docs/k3s/k3s-registry.yaml new file mode 100644 index 00000000..af4bcd3d --- /dev/null +++ b/docs/k3s/k3s-registry.yaml @@ -0,0 +1,11 @@ +mirrors: + "192.168.7.210:5000": + endpoint: + - "http://192.168.7.210:5000" +configs: + "192.168.7.210:5000": + tls: + insecure_skip_verify: true + "docker.io": + tls: + insecure_skip_verify: true diff --git a/docs/k3s/specs.md b/docs/k3s/specs.md new file mode 100644 index 00000000..5e1e475b --- /dev/null +++ b/docs/k3s/specs.md @@ -0,0 +1,51 @@ +# Recommended Specifications for a Production k3s Cluster + +## Control Plane Node (Master) + +1. **Boot Drive (ETCD Storage)**: + + - **Type**: SSD (NVMe preferred for higher performance and reliability) + - **Size**: 100 GB or more (depending on the size and number of resources managed by the cluster) + +2. **Storage Drive**: + + - **Type**: SSD (NVMe preferred) + - **Size**: 100 GB or more (separate from the boot drive for storing persistent data and logs) + +3. 
**CPU / Cores**: + + - **Cores**: 4 cores (minimum) + - **Type**: Multi-core processor (modern Intel or AMD processors with high clock speeds) + +4. **Memory**: + - **RAM**: 16 GB (minimum) + +## Worker Node + +1. **Boot Drive**: + + - **Type**: SSD (NVMe preferred) + - **Size**: 50 GB or more (sufficient for OS and k3s runtime) + +2. **Storage Drive**: + + - **Type**: SSD (NVMe preferred) + - **Size**: 100 GB or more (additional storage can be added based on workload requirements) + +3. **CPU / Cores**: + + - **Cores**: 2 cores (minimum) + - **Type**: Multi-core processor (modern Intel or AMD processors with good performance per core) + +4. **Memory**: + - **RAM**: 8 GB (minimum) + +## Additional Considerations + +- **High Availability**: For a highly available control plane, deploy at least three control plane nodes to ensure resilience and fault tolerance. +- **Network**: Ensure high-speed networking (1 Gbps or higher) between nodes for optimal performance. +- **Backup and Recovery**: Implement regular backups of ETCD and other critical data to ensure disaster recovery capabilities. +- **Monitoring and Logging**: Deploy comprehensive monitoring and logging solutions to track the health and performance of the cluster. +- **Load Balancing**: Consider using a load balancer in front of the control plane nodes to distribute traffic evenly and provide redundancy. + +These recommendations aim to provide a robust and reliable k3s production cluster. Adjustments may be necessary based on specific workload requirements and performance benchmarks. 
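The high-availability note above maps onto k3s configuration fairly directly: the first server bootstraps an embedded etcd cluster with `cluster-init`, and the remaining servers join it via `server`. This is a sketch; the addresses and token placeholder are illustrative:

```yaml
# /etc/rancher/k3s/config.yaml on the first control plane node
cluster-init: true
token: <shared-cluster-token>   # placeholder; use one strong token on all servers
tls-san: 192.168.5.200          # address clients use to reach the API
```

```yaml
# /etc/rancher/k3s/config.yaml on the second and third control plane nodes
server: https://192.168.5.200:6443
token: <shared-cluster-token>
```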
diff --git a/kubernetes/apps/homepage-dashboard/config-map.yaml b/kubernetes/apps/homepage-dashboard/config-map.yaml index 410f7b07..6b043ecb 100644 --- a/kubernetes/apps/homepage-dashboard/config-map.yaml +++ b/kubernetes/apps/homepage-dashboard/config-map.yaml @@ -111,6 +111,12 @@ data: description: PGAdmin for Postgres target: _blank + - Redis: + icon: redis.svg + href: http://redis-insight.pnfb.home + description: World’s fastest data platform + target: _blank + - Grafana: icon: grafana.png href: http://grafana.pnfb.home @@ -123,6 +129,12 @@ data: description: Open-source monitoring system target: _blank + - Loki: + icon: loki.png + href: http://loki.pnfb.home + description: Fully featured logging stack + target: _blank + - Longhorn: icon: longhorn.png href: http://longhorn.pnfb.home @@ -135,11 +147,6 @@ data: description: Domain and network tunnel target: _blank - - Redis: - icon: redis.svg - href: http://redis-insight.pnfb.home - description: World’s fastest data platform - target: _blank - Docker: diff --git a/kubernetes/apps/monitoring/loki/config-map.yaml b/kubernetes/apps/monitoring/loki/config-map.yaml new file mode 100644 index 00000000..22a9462f --- /dev/null +++ b/kubernetes/apps/monitoring/loki/config-map.yaml @@ -0,0 +1,36 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: loki-config + namespace: monitoring +data: + loki-config.yaml: |- + + auth_enabled: false + + server: + http_listen_port: 3100 + + ingester: + lifecycler: + address: 127.0.0.1 + ring: + store: inmemory + replication_factor: 1 + + schema_config: + configs: + - from: 0 + store: boltdb + object_store: filesystem + schema: v9 + index: + prefix: index_ + period: 168h + + storage_config: + boltdb: + directory: /tmp/loki/index + + filesystem: + directory: /tmp/loki/chunks diff --git a/kubernetes/apps/monitoring/loki/deployment.yaml b/kubernetes/apps/monitoring/loki/deployment.yaml new file mode 100644 index 00000000..c681c7ed --- /dev/null +++ 
b/kubernetes/apps/monitoring/loki/deployment.yaml @@ -0,0 +1,40 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: loki + namespace: monitoring + labels: + app: loki + group: grafana +spec: + replicas: 1 + selector: + matchLabels: + app: loki + group: grafana + template: + metadata: + labels: + app: loki + group: grafana + spec: + serviceAccountName: loki + containers: + - name: loki + image: grafana/loki:master + imagePullPolicy: Always + args: ["-config.file=/etc/loki/loki-config.yaml"] + resources: + requests: + memory: "64Mi" + cpu: "10m" + limits: + memory: "128Mi" + cpu: "500m" + volumeMounts: + - name: loki-config + mountPath: /etc/loki/ + volumes: + - name: loki-config + configMap: + name: loki-config diff --git a/kubernetes/apps/monitoring/loki/ingress.yaml b/kubernetes/apps/monitoring/loki/ingress.yaml new file mode 100644 index 00000000..047c7604 --- /dev/null +++ b/kubernetes/apps/monitoring/loki/ingress.yaml @@ -0,0 +1,19 @@ +apiVersion: networking.k8s.io/v1 +kind: Ingress +metadata: + name: loki-ingress + namespace: monitoring + annotations: + nginx.ingress.kubernetes.io/rewrite-target: / +spec: + rules: + - host: loki.pnfb.home + http: + paths: + - path: / + pathType: Prefix + backend: + service: + name: loki-svc + port: + number: 3100 diff --git a/kubernetes/apps/monitoring/loki/service-account.yaml b/kubernetes/apps/monitoring/loki/service-account.yaml new file mode 100644 index 00000000..c0775274 --- /dev/null +++ b/kubernetes/apps/monitoring/loki/service-account.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: loki + namespace: monitoring + labels: + app: loki + group: grafana \ No newline at end of file diff --git a/kubernetes/apps/monitoring/loki/service.yaml b/kubernetes/apps/monitoring/loki/service.yaml new file mode 100644 index 00000000..97ab7b14 --- /dev/null +++ b/kubernetes/apps/monitoring/loki/service.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: loki-svc + namespace: 
monitoring + labels: + app: loki + group: grafana +spec: + type: NodePort + ports: + - port: 3100 + targetPort: 3100 + protocol: TCP + selector: + app: loki + group: grafana diff --git a/kubernetes/apps/monitoring/promtail/cluster-role-binding.yaml b/kubernetes/apps/monitoring/promtail/cluster-role-binding.yaml new file mode 100644 index 00000000..f2d66930 --- /dev/null +++ b/kubernetes/apps/monitoring/promtail/cluster-role-binding.yaml @@ -0,0 +1,15 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: promtail + labels: + app: promtail + group: grafana +subjects: + - kind: ServiceAccount + name: promtail + namespace: monitoring +roleRef: + kind: ClusterRole + name: promtail + apiGroup: rbac.authorization.k8s.io diff --git a/kubernetes/apps/monitoring/promtail/cluster-role.yaml b/kubernetes/apps/monitoring/promtail/cluster-role.yaml new file mode 100644 index 00000000..7f7988d8 --- /dev/null +++ b/kubernetes/apps/monitoring/promtail/cluster-role.yaml @@ -0,0 +1,16 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + app: promtail + group: grafana + name: promtail +rules: + - apiGroups: [""] + resources: + - nodes + - nodes/proxy + - services + - endpoints + - pods + verbs: ["get", "watch", "list"] diff --git a/kubernetes/apps/monitoring/promtail/config-map.yaml b/kubernetes/apps/monitoring/promtail/config-map.yaml new file mode 100644 index 00000000..cc234197 --- /dev/null +++ b/kubernetes/apps/monitoring/promtail/config-map.yaml @@ -0,0 +1,84 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: promtail-config + namespace: monitoring +data: + promtail-config.yaml: |- + server: + http_listen_port: 0 + grpc_listen_port: 0 + + positions: + filename: /tmp/positions.yaml + + client: + url: http://loki-svc:3100/api/prom/push + + scrape_configs: + - job_name: kubernetes-pods + kubernetes_sd_configs: + - role: pod + relabel_configs: + - source_labels: + - __meta_kubernetes_pod_node_name + target_label: 
__host__ + - action: drop + regex: ^$ + source_labels: + - __meta_kubernetes_pod_label_name + - action: replace + replacement: $1 + separator: / + source_labels: + - __meta_kubernetes_namespace + - __meta_kubernetes_pod_label_name + target_label: job + - action: replace + source_labels: + - __meta_kubernetes_namespace + target_label: namespace + - action: replace + source_labels: + - __meta_kubernetes_pod_name + target_label: instance + - replacement: /var/log/pods/$1 + separator: / + source_labels: + - __meta_kubernetes_pod_uid + - __meta_kubernetes_pod_container_name + target_label: __path__ + - job_name: kubernetes-pods-app + kubernetes_sd_configs: + - role: pod + relabel_configs: + - source_labels: + - __meta_kubernetes_pod_node_name + target_label: __host__ + - action: drop + regex: ^$ + source_labels: + - __meta_kubernetes_pod_label_app + - action: replace + replacement: $1 + separator: / + source_labels: + - __meta_kubernetes_namespace + - __meta_kubernetes_pod_label_app + target_label: job + - action: replace + source_labels: + - __meta_kubernetes_namespace + target_label: namespace + - action: replace + source_labels: + - __meta_kubernetes_pod_name + target_label: instance + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - replacement: /var/log/pods/$1 + separator: / + source_labels: + - __meta_kubernetes_pod_uid + - __meta_kubernetes_pod_container_name + target_label: __path__ diff --git a/kubernetes/apps/monitoring/promtail/deployment.yaml b/kubernetes/apps/monitoring/promtail/deployment.yaml new file mode 100644 index 00000000..d1657a10 --- /dev/null +++ b/kubernetes/apps/monitoring/promtail/deployment.yaml @@ -0,0 +1,48 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: promtail + namespace: monitoring + labels: + app: promtail + group: grafana +spec: + replicas: 1 + selector: + matchLabels: + app: promtail + group: grafana + template: + metadata: + labels: + app: promtail + group: grafana + spec: + containers: + - name: 
promtail + image: grafana/promtail:make-images-static-26a87c9 + imagePullPolicy: Always + args: ["-config.file=/etc/promtail/promtail-config.yaml"] + volumeMounts: + - name: promtail-config + mountPath: /etc/promtail/ + - name: varlog + mountPath: /var/log + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + env: + - name: HOSTNAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + volumes: + - name: promtail-config + configMap: + name: promtail-config + - name: varlog + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers diff --git a/kubernetes/apps/monitoring/promtail/service-account.yaml b/kubernetes/apps/monitoring/promtail/service-account.yaml new file mode 100644 index 00000000..73428eb6 --- /dev/null +++ b/kubernetes/apps/monitoring/promtail/service-account.yaml @@ -0,0 +1,8 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: promtail + namespace: monitoring + labels: + app: promtail + group: grafana diff --git a/kubernetes/apps/postgres-db/deployment.yaml b/kubernetes/apps/postgres-db/deployment.yaml index ae44a980..bc185cc2 100644 --- a/kubernetes/apps/postgres-db/deployment.yaml +++ b/kubernetes/apps/postgres-db/deployment.yaml @@ -3,7 +3,7 @@ kind: Deployment metadata: name: postgres spec: - replicas: 3 + replicas: 1 selector: matchLabels: app: postgres
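A likely reason for the `replicas: 3` to `1` change above: several plain `postgres` containers pointed at one shared data directory will corrupt it, because each assumes exclusive ownership of the files. If multiple replicas are ever needed, the usual shape is a StatefulSet with per-replica volumes plus an actual replication setup; the sketch below only shows the storage side and uses illustrative names:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:       # one PVC per replica instead of one shared RWX volume
    - metadata:
        name: data
      spec:
        accessModes: ['ReadWriteOnce']
        resources:
          requests:
            storage: 50Gi
```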