Compare commits


28 commits

Author SHA1 Message Date
Jun-te Kim
461f5b5f8d
Merge pull request #96 from MealCraft/feature/git
Feature/git
2026-03-11 23:27:58 +00:00
dcd0d6a589 added rook and git 2026-03-11 23:27:13 +00:00
aa112572a3 lets update storage together 2026-03-11 07:16:56 +00:00
Jun-te Kim
f8172e36ea
Merge pull request #95 from MealCraft/feature/rook
Feature/rook
2026-03-11 07:14:32 +00:00
6e8bd2eafd lets go to gym 2026-03-11 07:12:33 +00:00
4571e00268 frontned updated 2026-03-11 06:59:21 +00:00
00724910b2 updated storage 2026-03-11 06:17:37 +00:00
Jun-te Kim
067ec5d5f3
Merge pull request #94 from MealCraft/feature/rook
Feature/rook
2026-03-10 22:07:38 +00:00
ddd2ed26a0 added 2026-03-10 21:54:37 +00:00
664b8e4bb1 added rook and donetick 2026-03-10 21:50:28 +00:00
Jun-te Kim
45dec474b0
Merge pull request #93 from MealCraft/feautre/make_juntekim_com_better
donetick
2026-03-08 15:50:52 +00:00
Jun-te Kim
166ed11de9
Merge pull request #92 from MealCraft/feautre/make_juntekim_com_better
added new functionatlty and profressionalism
2026-03-08 15:34:00 +00:00
Jun-te Kim
f446fd9902
Merge pull request #91 from MealCraft/feautre/make_juntekim_com_better
Feautre/make juntekim com better
2026-03-08 14:57:19 +00:00
Jun-te Kim
ddca573c3c
Merge pull request #90 from MealCraft/feautre/make_juntekim_com_better
Feautre/make juntekim com better
2026-03-08 14:34:58 +00:00
Jun-te Kim
641db79407
Merge pull request #89 from MealCraft/feautre/make_juntekim_com_better
self hosted
2026-03-08 13:53:16 +00:00
Jun-te Kim
de2ea14ddb
Merge pull request #88 from MealCraft/feature/deploy_chaptgpt
Feature/deploy chaptgpt
2026-03-07 15:47:59 +00:00
Jun-te Kim
9a557971c6
Merge pull request #87 from MealCraft/faeture/stripe_to_invoice
Faeture/stripe to invoice
2026-03-01 17:27:20 +00:00
Jun-te Kim
a9262c8c3d
Merge pull request #85 from MealCraft/faeture/stripe_to_invoice
build
2026-02-21 15:02:19 +00:00
Jun-te Kim
60b7d54d7f
Merge pull request #84 from MealCraft/faeture/stripe_to_invoice
use the correct key
2026-02-21 14:49:08 +00:00
Jun-te Kim
7547ba4ac4
Merge pull request #83 from MealCraft/faeture/stripe_to_invoice
add aws things
2026-02-21 13:53:01 +00:00
Jun-te Kim
19a6183627
Merge pull request #82 from MealCraft/faeture/stripe_to_invoice
missing secxrets
2026-02-21 13:38:46 +00:00
Jun-te Kim
f33b67e732
Merge pull request #81 from MealCraft/faeture/stripe_to_invoice
more env things
2026-02-21 13:23:34 +00:00
Jun-te Kim
ae2ad8491d
Merge pull request #80 from MealCraft/faeture/stripe_to_invoice
Faeture/stripe to invoice
2026-02-21 13:14:31 +00:00
Jun-te Kim
539bab4c89
Merge pull request #79 from MealCraft/faeture/stripe_to_invoice
add it in and check later
2026-02-21 12:53:15 +00:00
Jun-te Kim
0afefab269
Merge pull request #78 from MealCraft/faeture/stripe_to_invoice
final check
2026-02-21 12:50:39 +00:00
Jun-te Kim
3ba1eb1c5c
Merge pull request #77 from MealCraft/faeture/stripe_to_invoice
Faeture/stripe to invoice
2026-02-21 12:33:33 +00:00
Jun-te Kim
16ca69f98a
Merge pull request #76 from MealCraft/faeture/stripe_to_invoice
recommit to check if billing page shows up
2026-02-21 12:13:21 +00:00
Jun-te Kim
44b56ba0b5
Merge pull request #75 from MealCraft/faeture/stripe_to_invoice
Faeture/stripe to invoice
2026-02-21 11:51:22 +00:00
9 changed files with 16039 additions and 7 deletions

19 TODO.md Normal file

@@ -0,0 +1,19 @@
figure out how to back up a small PVC and PV, using Traefik as the example (see the CronJob sketch after this file)
how would I back up everything in Ceph storage to AWS, like the cron job I used to run on mist when it was just the local host
unmount the storage class once I have gotten rid of everything on it
## Services still using mist local storage (need to migrate to Ceph)
- Uptime Kuma (uptime-kuma-pvc, 500Mi)
- n8n (n8n-pvc, 5Gi)
- Home Assistant (homeassistant-pvc, 10Gi)
- DBeaver (dbeaver-pvc, 5Gi)
- Postgres Prod (postgres-prod-pvc, 20Gi)
- Postgres Dev (postgres-dev-pvc, 20Gi)
- Monica (monica-storage-pvc 1Gi + monica-db-pvc 2Gi)
- Tandoor (tandoor-media-pvc 5Gi + tandoor-postgres-pvc 2Gi)
- Donetick (donetick-pvc, 1Gi)
- Papra (papra-pvc, 10Gi)
- Databasus (databasus-pvc, 500Mi)
- wger (wger-media-pvc 5Gi + wger-postgres-pvc 2Gi + wger-static-pvc 2Gi)
- Certs (certs-pvc, 1Mi)
- Pihole (pihole-pv, 5Gi - Released/unused)
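
The first two TODO items sketch naturally as a Kubernetes CronJob that mounts the target PVC and syncs it to S3, in the spirit of the old mist cron job. This is only a sketch under assumed names: the bucket, the aws-backup-creds Secret, and the traefik-pvc claim name are placeholders, not taken from the repo.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: traefik-pvc-backup            # hypothetical name
spec:
  schedule: "0 3 * * *"               # nightly
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: amazon/aws-cli:2.17.0
              command: ["sh", "-c"]
              args:
                - aws s3 sync /data "s3://$BUCKET/traefik/$(date +%F)/"
              env:
                - name: BUCKET
                  value: my-backup-bucket       # assumed bucket name
              envFrom:
                - secretRef:
                    name: aws-backup-creds      # assumed Secret holding AWS keys
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
          volumes:
            - name: data
              persistentVolumeClaim:
                claimName: traefik-pvc          # assumed claim name

For an RWO volume the Job has to land on the node where the claim is already attached, so for Ceph RBD a VolumeSnapshot-based copy is the safer long-term answer.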

[changed file: Donetick deployment manifest; filename not captured]

@@ -67,10 +67,16 @@ spec:
env:
- name: DT_ENV
value: "selfhosted"
- name: DT_SERVER_PORT
value: "2021"
- name: DT_DATABASE_TYPE
value: "sqlite"
- name: DT_SQLITE_PATH
value: "/donetick-data/donetick.db"
- name: TZ
value: "America/Toronto"
- name: DT_JWT_SECRET
value: "4-wowwMZ14zK2zSpbzCwva5s2zA1LnAQqjhWbvjMfFc="
resources:
requests:
cpu: "100m"
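
DT_SQLITE_PATH points the app at /donetick-data, which implies that path is backed by the donetick-pvc claim listed in TODO.md (1Gi). The hunk above cuts off before the volume wiring, so the following is assumed rather than quoted:

# assumed wiring for the SQLite path above; not shown in the hunk
volumeMounts:
  - name: donetick-data
    mountPath: /donetick-data
volumes:
  - name: donetick-data
    persistentVolumeClaim:
      claimName: donetick-pvc   # 1Gi, per TODO.md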

211 forgejo/forgejo.yaml Normal file

@@ -0,0 +1,211 @@
# ================================
# FORGEJO - SELF-HOSTED GIT
# https://forgejo.org/
# ================================
---
apiVersion: v1
kind: Secret
metadata:
name: forgejo-db-secret
type: Opaque
stringData:
POSTGRES_USER: forgejo
POSTGRES_PASSWORD: changeMePleaseOtherwiseSomeoneWillKnow
POSTGRES_DB: forgejo
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: forgejo-db-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: rook-ceph-block
resources:
requests:
storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: forgejo-postgres
labels:
app: forgejo-postgres
spec:
replicas: 1
selector:
matchLabels:
app: forgejo-postgres
template:
metadata:
labels:
app: forgejo-postgres
spec:
containers:
- name: postgres
image: postgres:16-alpine
ports:
- containerPort: 5432
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
envFrom:
- secretRef:
name: forgejo-db-secret
volumeMounts:
- name: forgejo-db-data
mountPath: /var/lib/postgresql/data
volumes:
- name: forgejo-db-data
persistentVolumeClaim:
claimName: forgejo-db-pvc
---
apiVersion: v1
kind: Service
metadata:
name: forgejo-postgres
spec:
selector:
app: forgejo-postgres
ports:
- port: 5432
targetPort: 5432
# -------------------------
# FORGEJO APP
# -------------------------
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: forgejo-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: rook-ceph-block
resources:
requests:
storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: forgejo
labels:
app: forgejo
spec:
replicas: 1
selector:
matchLabels:
app: forgejo
template:
metadata:
labels:
app: forgejo
spec:
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /data"]
volumeMounts:
- name: forgejo-data
mountPath: /data
containers:
- name: forgejo
image: codeberg.org/forgejo/forgejo:10
ports:
- containerPort: 3000
name: http
- containerPort: 22
name: ssh
env:
- name: FORGEJO__server__DOMAIN
value: git.juntekim.com
- name: FORGEJO__server__ROOT_URL
value: https://git.juntekim.com
- name: FORGEJO__server__HTTP_PORT
value: "3000"
- name: FORGEJO__server__SSH_PORT
value: "2222"
- name: FORGEJO__server__SSH_DOMAIN
value: git.juntekim.com
- name: FORGEJO__database__DB_TYPE
value: postgres
- name: FORGEJO__database__HOST
value: forgejo-postgres:5432
- name: FORGEJO__database__NAME
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_DB
- name: FORGEJO__database__USER
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_USER
- name: FORGEJO__database__PASSWD
valueFrom:
secretKeyRef:
name: forgejo-db-secret
key: POSTGRES_PASSWORD
- name: FORGEJO__security__INSTALL_LOCK
value: "true"
volumeMounts:
- name: forgejo-data
mountPath: /data
volumes:
- name: forgejo-data
persistentVolumeClaim:
claimName: forgejo-pvc
---
apiVersion: v1
kind: Service
metadata:
name: forgejo
spec:
selector:
app: forgejo
ports:
- name: http
port: 3000
targetPort: 3000
---
# SSH exposed via LoadBalancer on port 2222 (MetalLB)
apiVersion: v1
kind: Service
metadata:
name: forgejo-ssh
spec:
type: LoadBalancer
selector:
app: forgejo
ports:
- name: ssh
port: 2222
targetPort: 22
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
name: forgejo-ingressroute
spec:
entryPoints:
- websecure
routes:
- match: Host(`git.juntekim.com`)
kind: Rule
services:
- name: forgejo
port: 3000
tls:
certResolver: myresolver
domains:
- main: git.juntekim.com
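
The forgejo-ssh Service only gets an external address if MetalLB already has a pool to hand out. A minimal sketch of that prerequisite, with the pool name and address range assumed rather than taken from the cluster:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool                  # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250     # assumed LAN range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool

With SSH_PORT and SSH_DOMAIN set as above, clone URLs take the form ssh://git@git.juntekim.com:2222/owner/repo.git.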

[changed file: juntekim.com frontend services list; filename not captured]

@@ -1,10 +1,4 @@
const services = [
{
name: "Ollama",
description: "Local LLM server powering my AI experiments",
url: "https://ollama.juntekim.com",
icon: "🧠",
},
{
name: "Uptime Kuma",
description: "Service monitoring for my homelab and projects",
@@ -33,7 +27,7 @@ const services = [
name: "DBeaver",
description: "Another Web interface I use to manage PostgreSQL databases. - Deciding which one I prefer",
url: "https://dbeaver.juntekim.com",
icon: "🐿️",
icon: "🦫",
},
{
name: "Monica",

[new file: Rook CephCluster manifest; filename not captured]

@@ -0,0 +1,38 @@
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
name: rook-ceph
namespace: rook-ceph
spec:
cephVersion:
image: quay.io/ceph/ceph:v18
dataDirHostPath: /var/lib/rook
mon:
count: 1
allowMultiplePerNode: false
mgr:
count: 1
dashboard:
enabled: true
storage:
useAllNodes: false
useAllDevices: false
nodes:
- name: unity
devices:
- name: sda
- name: jerry
devices:
- name: /dev/mapper/ubuntu--vg-ceph--osd0
- name: caspian
devices:
- name: sdb
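
One caveat in this spec: with mon count: 1, losing that single monitor's node stalls the whole cluster. Since unity, jerry, and caspian all carry OSDs anyway, a three-mon quorum is a one-stanza change; a sketch of just the altered block:

mon:
  count: 3                    # odd quorum across unity, jerry, caspian
  allowMultiplePerNode: false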

[new file: Rook CephBlockPool and StorageClass manifest; filename not captured]

@@ -0,0 +1,29 @@
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
name: replicapool
namespace: rook-ceph
spec:
failureDomain: host
replicated:
size: 2
requireSafeReplicaSize: false
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
clusterID: rook-ceph
pool: replicapool
imageFormat: "2"
imageFeatures: layering
csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
reclaimPolicy: Retain
allowVolumeExpansion: true
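
With size: 2 the pool keeps two copies on different hosts (failureDomain: host), so data survives one of the three OSD nodes going down. Migrating the workloads in TODO.md onto Ceph then means creating claims against the new class; a sketch reusing the Uptime Kuma name and size from TODO.md:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: rook-ceph-block   # the class defined above
  resources:
    requests:
      storage: 500Mi                  # size listed in TODO.md

storageClassName is immutable on a bound claim, so the move is a new PVC plus a data copy, not an edit of the existing one.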

1284 rook/common.yaml Normal file
File diff suppressed because it is too large

13763 rook/crds.yaml Normal file
File diff suppressed because it is too large

688 rook/operator.yaml Normal file

@@ -0,0 +1,688 @@
#################################################################################################################
# The deployment for the rook operator
# Contains the common settings for most Kubernetes deployments.
# For example, to create the rook-ceph cluster:
# kubectl create -f crds.yaml -f common.yaml -f operator.yaml
# kubectl create -f cluster.yaml
#
# Also see other operator sample files for variations of operator.yaml:
# - operator-openshift.yaml: Common settings for running in OpenShift
###############################################################################################################
# Rook Ceph Operator Config ConfigMap
# Use this ConfigMap to override Rook-Ceph Operator configurations.
# NOTE! Precedence will be given to this config if the same Env Var config also exists in the
# Operator Deployment.
# To move a configuration(s) from the Operator Deployment to this ConfigMap, add the config
# here. It is recommended to then remove it from the Deployment to eliminate any future confusion.
kind: ConfigMap
apiVersion: v1
metadata:
name: rook-ceph-operator-config
# should be in the namespace of the operator
namespace: rook-ceph # namespace:operator
data:
# The logging level for the operator: ERROR | WARNING | INFO | DEBUG
ROOK_LOG_LEVEL: "INFO"
# MicroK8s uses a non-standard kubelet path — required for CSI mounts to work
ROOK_CSI_KUBELET_DIR_PATH: "/var/snap/microk8s/common/var/lib/kubelet"
# The address for the operator's controller-runtime metrics. 0 is disabled. :8080 serves metrics on port 8080.
ROOK_OPERATOR_METRICS_BIND_ADDRESS: "0"
# Allow using loop devices for osds in test clusters.
ROOK_CEPH_ALLOW_LOOP_DEVICES: "false"
# Enable CSI Operator
ROOK_USE_CSI_OPERATOR: "false"
# Enable the CSI driver.
# To run the non-default version of the CSI driver, see the override-able image properties in operator.yaml
ROOK_CSI_ENABLE_CEPHFS: "true"
# Enable the default version of the CSI RBD driver. To start another version of the CSI driver, see image properties below.
ROOK_CSI_ENABLE_RBD: "true"
# Enable the CSI NFS driver. To start another version of the CSI driver, see image properties below.
ROOK_CSI_ENABLE_NFS: "false"
# Disable the CSI driver.
ROOK_CSI_DISABLE_DRIVER: "false"
# Set to true to enable Ceph CSI pvc encryption support.
CSI_ENABLE_ENCRYPTION: "false"
# Set to true to enable host networking for CSI CephFS and RBD nodeplugins. This may be necessary
# in some network configurations where the SDN does not provide access to an external cluster or
# there is significant drop in read/write performance.
# CSI_ENABLE_HOST_NETWORK: "true"
# Deprecation note: Rook uses "holder" pods to allow CSI to connect to the multus public network
# without needing hosts to the network. Holder pods are being removed. See issue for details:
# https://github.com/rook/rook/issues/13055. New Rook deployments should set this to "true".
CSI_DISABLE_HOLDER_PODS: "true"
# Set to true to enable adding volume metadata on the CephFS subvolume and RBD images.
# Not all users might be interested in getting volume/snapshot details as metadata on CephFS subvolume and RBD images.
# Hence enable metadata is false by default.
# CSI_ENABLE_METADATA: "true"
# cluster name identifier to set as metadata on the CephFS subvolume and RBD images. This will be useful in cases
# like for example, when two container orchestrator clusters (Kubernetes/OCP) are using a single ceph cluster.
# CSI_CLUSTER_NAME: "my-prod-cluster"
# Set logging level for cephCSI containers maintained by the cephCSI.
# Supported values from 0 to 5. 0 for general useful logs, 5 for trace level verbosity.
# CSI_LOG_LEVEL: "0"
# Set logging level for Kubernetes-csi sidecar containers.
# Supported values from 0 to 5. 0 for general useful logs (the default), 5 for trace level verbosity.
# CSI_SIDECAR_LOG_LEVEL: "0"
# csi driver name prefix for cephfs, rbd and nfs. if not specified, default
# will be the namespace name where rook-ceph operator is deployed.
# search for `# csi-provisioner-name` in the storageclass and
# volumesnapshotclass and update the name accordingly.
# CSI_DRIVER_NAME_PREFIX: "rook-ceph"
# Set replicas for csi provisioner deployment.
CSI_PROVISIONER_REPLICAS: "2"
# OMAP generator will generate the omap mapping between the PV name and the RBD image.
# CSI_ENABLE_OMAP_GENERATOR need to be enabled when we are using rbd mirroring feature.
# By default OMAP generator sidecar is deployed with CSI provisioner pod, to disable
# it set it to false.
# CSI_ENABLE_OMAP_GENERATOR: "false"
# set to false to disable deployment of snapshotter container in CephFS provisioner pod.
CSI_ENABLE_CEPHFS_SNAPSHOTTER: "true"
# set to false to disable deployment of snapshotter container in NFS provisioner pod.
CSI_ENABLE_NFS_SNAPSHOTTER: "true"
# set to false to disable deployment of snapshotter container in RBD provisioner pod.
CSI_ENABLE_RBD_SNAPSHOTTER: "true"
# set to false to disable volume group snapshot feature. This feature is
# enabled by default as long as the necessary CRDs are available in the cluster.
CSI_ENABLE_VOLUME_GROUP_SNAPSHOT: "true"
# Enable cephfs kernel driver instead of ceph-fuse.
# If you disable the kernel client, your application may be disrupted during upgrade.
# See the upgrade guide: https://rook.io/docs/rook/latest/ceph-upgrade.html
# NOTE! cephfs quota is not supported in kernel version < 4.17
CSI_FORCE_CEPHFS_KERNEL_CLIENT: "true"
# (Optional) policy for modifying a volume's ownership or permissions when the RBD PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_RBD_FSGROUPPOLICY: "File"
# (Optional) policy for modifying a volume's ownership or permissions when the CephFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_CEPHFS_FSGROUPPOLICY: "File"
# (Optional) policy for modifying a volume's ownership or permissions when the NFS PVC is being mounted.
# supported values are documented at https://kubernetes-csi.github.io/docs/support-fsgroup.html
CSI_NFS_FSGROUPPOLICY: "File"
# (Optional) Allow starting unsupported ceph-csi image
ROOK_CSI_ALLOW_UNSUPPORTED_VERSION: "false"
# (Optional) control the host mount of /etc/selinux for csi plugin pods.
CSI_PLUGIN_ENABLE_SELINUX_HOST_MOUNT: "false"
# The default version of CSI supported by Rook will be started. To change the version
# of the CSI driver to something other than what is officially supported, change
# these images to the desired release of the CSI driver.
# ROOK_CSI_CEPH_IMAGE: "quay.io/cephcsi/cephcsi:v3.12.0"
# ROOK_CSI_REGISTRAR_IMAGE: "registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.11.1"
# ROOK_CSI_RESIZER_IMAGE: "registry.k8s.io/sig-storage/csi-resizer:v1.11.1"
# ROOK_CSI_PROVISIONER_IMAGE: "registry.k8s.io/sig-storage/csi-provisioner:v5.0.1"
# ROOK_CSI_SNAPSHOTTER_IMAGE: "registry.k8s.io/sig-storage/csi-snapshotter:v8.0.1"
# ROOK_CSI_ATTACHER_IMAGE: "registry.k8s.io/sig-storage/csi-attacher:v4.6.1"
# To indicate the image pull policy to be applied to all the containers in the csi driver pods.
# ROOK_CSI_IMAGE_PULL_POLICY: "IfNotPresent"
# (Optional) set user created priorityclassName for csi plugin pods.
CSI_PLUGIN_PRIORITY_CLASSNAME: "system-node-critical"
# (Optional) set user created priorityclassName for csi provisioner pods.
CSI_PROVISIONER_PRIORITY_CLASSNAME: "system-cluster-critical"
# CSI CephFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# A maxUnavailable parameter of CSI cephFS plugin daemonset update strategy.
# Default value is 1.
# CSI_CEPHFS_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE: "1"
# CSI RBD plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_RBD_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# A maxUnavailable parameter of CSI RBD plugin daemonset update strategy.
# Default value is 1.
# CSI_RBD_PLUGIN_UPDATE_STRATEGY_MAX_UNAVAILABLE: "1"
# CSI NFS plugin daemonset update strategy, supported values are OnDelete and RollingUpdate.
# Default value is RollingUpdate.
# CSI_NFS_PLUGIN_UPDATE_STRATEGY: "OnDelete"
# kubelet directory path, if kubelet configured to use other than /var/lib/kubelet path.
# ROOK_CSI_KUBELET_DIR_PATH: "/var/lib/kubelet"
# Labels to add to the CSI CephFS Deployments and DaemonSets Pods.
# ROOK_CSI_CEPHFS_POD_LABELS: "key1=value1,key2=value2"
# Labels to add to the CSI RBD Deployments and DaemonSets Pods.
# ROOK_CSI_RBD_POD_LABELS: "key1=value1,key2=value2"
# Labels to add to the CSI NFS Deployments and DaemonSets Pods.
# ROOK_CSI_NFS_POD_LABELS: "key1=value1,key2=value2"
# (Optional) CephCSI CephFS plugin Volumes
# CSI_CEPHFS_PLUGIN_VOLUME: |
# - name: lib-modules
# hostPath:
# path: /run/current-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# (Optional) CephCSI CephFS plugin Volume mounts
# CSI_CEPHFS_PLUGIN_VOLUME_MOUNT: |
# - name: host-nix
# mountPath: /nix
# readOnly: true
# (Optional) CephCSI RBD plugin Volumes
# CSI_RBD_PLUGIN_VOLUME: |
# - name: lib-modules
# hostPath:
# path: /run/current-system/kernel-modules/lib/modules/
# - name: host-nix
# hostPath:
# path: /nix
# (Optional) CephCSI RBD plugin Volume mounts
# CSI_RBD_PLUGIN_VOLUME_MOUNT: |
# - name: host-nix
# mountPath: /nix
# readOnly: true
# (Optional) CephCSI provisioner NodeAffinity (applied to both CephFS and RBD provisioner).
# CSI_PROVISIONER_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
# (Optional) CephCSI provisioner tolerations list(applied to both CephFS and RBD provisioner).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_PROVISIONER_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/control-plane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) CephCSI plugin NodeAffinity (applied to both CephFS and RBD plugin).
# CSI_PLUGIN_NODE_AFFINITY: "role=storage-node; storage=rook, ceph"
# (Optional) CephCSI plugin tolerations list(applied to both CephFS and RBD plugin).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_PLUGIN_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/control-plane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) CephCSI RBD provisioner NodeAffinity (if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_RBD_PROVISIONER_NODE_AFFINITY: "role=rbd-node"
# (Optional) CephCSI RBD provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_RBD_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/rbd
# operator: Exists
# (Optional) CephCSI RBD plugin NodeAffinity (if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_RBD_PLUGIN_NODE_AFFINITY: "role=rbd-node"
# (Optional) CephCSI RBD plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_RBD_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/rbd
# operator: Exists
# (Optional) CephCSI CephFS provisioner NodeAffinity (if specified, overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_CEPHFS_PROVISIONER_NODE_AFFINITY: "role=cephfs-node"
# (Optional) CephCSI CephFS provisioner tolerations list(if specified, overrides CSI_PROVISIONER_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_CEPHFS_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/cephfs
# operator: Exists
# (Optional) CephCSI CephFS plugin NodeAffinity (if specified, overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_CEPHFS_PLUGIN_NODE_AFFINITY: "role=cephfs-node"
# NOTE: Support for defining NodeAffinity for operators other than "In" and "Exists" requires the user to input a
# valid v1.NodeAffinity JSON or YAML string. For example, the following is valid YAML v1.NodeAffinity:
# CSI_CEPHFS_PLUGIN_NODE_AFFINITY: |
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: myKey
# operator: DoesNotExist
# (Optional) CephCSI CephFS plugin tolerations list(if specified, overrides CSI_PLUGIN_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_CEPHFS_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/cephfs
# operator: Exists
# (Optional) CephCSI NFS provisioner NodeAffinity (overrides CSI_PROVISIONER_NODE_AFFINITY).
# CSI_NFS_PROVISIONER_NODE_AFFINITY: "role=nfs-node"
# (Optional) CephCSI NFS provisioner tolerations list (overrides CSI_PROVISIONER_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI provisioner would be best to start on the same nodes as other ceph daemons.
# CSI_NFS_PROVISIONER_TOLERATIONS: |
# - key: node.rook.io/nfs
# operator: Exists
# (Optional) CephCSI NFS plugin NodeAffinity (overrides CSI_PLUGIN_NODE_AFFINITY).
# CSI_NFS_PLUGIN_NODE_AFFINITY: "role=nfs-node"
# (Optional) CephCSI NFS plugin tolerations list (overrides CSI_PLUGIN_TOLERATIONS).
# Put here list of taints you want to tolerate in YAML format.
# CSI plugins need to be started on all the nodes where the clients need to mount the storage.
# CSI_NFS_PLUGIN_TOLERATIONS: |
# - key: node.rook.io/nfs
# operator: Exists
# (Optional) CEPH CSI RBD provisioner resource requirement list, Put here list of resource
# requests and limits you want to apply for provisioner pod
#CSI_RBD_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-resizer
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-snapshotter
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-rbdplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : csi-omap-generator
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) CEPH CSI RBD plugin resource requirement list, Put here list of resource
# requests and limits you want to apply for plugin pod
#CSI_RBD_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# - name : csi-rbdplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) CEPH CSI CephFS provisioner resource requirement list, Put here list of resource
# requests and limits you want to apply for provisioner pod
#CSI_CEPHFS_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-resizer
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-snapshotter
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-cephfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) CEPH CSI CephFS plugin resource requirement list, Put here list of resource
# requests and limits you want to apply for plugin pod
#CSI_CEPHFS_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# - name : csi-cephfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : liveness-prometheus
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# (Optional) CEPH CSI NFS provisioner resource requirement list, Put here list of resource
# requests and limits you want to apply for provisioner pod
# CSI_NFS_PROVISIONER_RESOURCE: |
# - name : csi-provisioner
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# - name : csi-nfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# - name : csi-attacher
# resource:
# requests:
# memory: 128Mi
# cpu: 100m
# limits:
# memory: 256Mi
# (Optional) CEPH CSI NFS plugin resource requirement list, Put here list of resource
# requests and limits you want to apply for plugin pod
# CSI_NFS_PLUGIN_RESOURCE: |
# - name : driver-registrar
# resource:
# requests:
# memory: 128Mi
# cpu: 50m
# limits:
# memory: 256Mi
# - name : csi-nfsplugin
# resource:
# requests:
# memory: 512Mi
# cpu: 250m
# limits:
# memory: 1Gi
# Configure CSI CephFS liveness metrics port
# Set to true to enable Ceph CSI liveness container.
CSI_ENABLE_LIVENESS: "false"
# CSI_CEPHFS_LIVENESS_METRICS_PORT: "9081"
# Configure CSI RBD liveness metrics port
# CSI_RBD_LIVENESS_METRICS_PORT: "9080"
# CSIADDONS_PORT: "9070"
# Set CephFS Kernel mount options to use https://docs.ceph.com/en/latest/man/8/mount.ceph/#options
# Set to "ms_mode=secure" when connections.encrypted is enabled in CephCluster CR
# CSI_CEPHFS_KERNEL_MOUNT_OPTIONS: "ms_mode=secure"
# (Optional) Duration in seconds that non-leader candidates will wait to force acquire leadership. Default to 137 seconds.
# CSI_LEADER_ELECTION_LEASE_DURATION: "137s"
# (Optional) Deadline in seconds that the acting leader will retry refreshing leadership before giving up. Defaults to 107 seconds.
# CSI_LEADER_ELECTION_RENEW_DEADLINE: "107s"
# (Optional) Retry Period in seconds the LeaderElector clients should wait between tries of actions. Defaults to 26 seconds.
# CSI_LEADER_ELECTION_RETRY_PERIOD: "26s"
# Whether the OBC provisioner should watch on the ceph cluster namespace or not, if not default provisioner value is set
ROOK_OBC_WATCH_OPERATOR_NAMESPACE: "true"
# Custom prefix value for the OBC provisioner instead of ceph cluster namespace, do not set on existing cluster
# ROOK_OBC_PROVISIONER_NAME_PREFIX: "custom-prefix"
# Whether to start the discovery daemon to watch for raw storage devices on nodes in the cluster.
# This daemon does not need to run if you are only going to create your OSDs based on StorageClassDeviceSets with PVCs.
ROOK_ENABLE_DISCOVERY_DAEMON: "false"
# The timeout value (in seconds) of Ceph commands. It should be >= 1. If this variable is not set or is an invalid value, it defaults to 15.
ROOK_CEPH_COMMANDS_TIMEOUT_SECONDS: "15"
# Enable the csi addons sidecar.
CSI_ENABLE_CSIADDONS: "false"
# Enable watch for faster recovery from rbd rwo node loss
ROOK_WATCH_FOR_NODE_FAILURE: "true"
# ROOK_CSIADDONS_IMAGE: "quay.io/csiaddons/k8s-sidecar:v0.9.0"
# The CSI GRPC timeout value (in seconds). It should be >= 120. If this variable is not set or is an invalid value, it defaults to 150.
CSI_GRPC_TIMEOUT_SECONDS: "150"
# Enable topology based provisioning.
CSI_ENABLE_TOPOLOGY: "false"
# Domain labels define which node labels to use as domains
# for CSI nodeplugins to advertise their domains
# NOTE: the value here serves as an example and needs to be
# updated with node labels that define domains of interest
# CSI_TOPOLOGY_DOMAIN_LABELS: "kubernetes.io/hostname,topology.kubernetes.io/zone,topology.rook.io/rack"
# Whether to skip any attach operation altogether for CephCSI PVCs.
# See more details [here](https://kubernetes-csi.github.io/docs/skip-attach.html#skip-attach-with-csi-driver-object).
# If set to false it skips the volume attachments and makes the creation of pods using the CephCSI PVC fast.
# **WARNING** It's highly discouraged to use this for RWO volumes. for RBD PVC it can cause data corruption,
# csi-addons operations like Reclaimspace and PVC Keyrotation will also not be supported if set to false
# since we'll have no VolumeAttachments to determine which node the PVC is mounted on.
# Refer to this [issue](https://github.com/kubernetes/kubernetes/issues/103305) for more details.
CSI_CEPHFS_ATTACH_REQUIRED: "true"
CSI_RBD_ATTACH_REQUIRED: "true"
CSI_NFS_ATTACH_REQUIRED: "true"
# Rook Discover toleration. Will tolerate all taints with all keys.
# (Optional) Rook Discover tolerations list. Put here list of taints you want to tolerate in YAML format.
# DISCOVER_TOLERATIONS: |
# - effect: NoSchedule
# key: node-role.kubernetes.io/control-plane
# operator: Exists
# - effect: NoExecute
# key: node-role.kubernetes.io/etcd
# operator: Exists
# (Optional) Rook Discover priority class name to set on the pod(s)
# DISCOVER_PRIORITY_CLASS_NAME: "<PriorityClassName>"
# (Optional) Discover Agent NodeAffinity.
# DISCOVER_AGENT_NODE_AFFINITY: |
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: myKey
# operator: DoesNotExist
# (Optional) Discover Agent Pod Labels.
# DISCOVER_AGENT_POD_LABELS: "key1=value1,key2=value2"
# Disable automatic orchestration when new devices are discovered
ROOK_DISABLE_DEVICE_HOTPLUG: "false"
# The duration between discovering devices in the rook-discover daemonset.
ROOK_DISCOVER_DEVICES_INTERVAL: "60m"
# DISCOVER_DAEMON_RESOURCES: |
# - name: DISCOVER_DAEMON_RESOURCES
# resources:
# limits:
# memory: 512Mi
# requests:
# cpu: 100m
# memory: 128Mi
# (Optional) Burst to use while communicating with the kubernetes apiserver.
# CSI_KUBE_API_BURST: "10"
# (Optional) QPS to use while communicating with the kubernetes apiserver.
# CSI_KUBE_API_QPS: "5.0"
---
# OLM: BEGIN OPERATOR DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
name: rook-ceph-operator
namespace: rook-ceph # namespace:operator
labels:
operator: rook
storage-backend: ceph
app.kubernetes.io/name: rook-ceph
app.kubernetes.io/instance: rook-ceph
app.kubernetes.io/component: rook-ceph-operator
app.kubernetes.io/part-of: rook-ceph-operator
spec:
selector:
matchLabels:
app: rook-ceph-operator
strategy:
type: Recreate
replicas: 1
template:
metadata:
labels:
app: rook-ceph-operator
spec:
tolerations:
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 5
serviceAccountName: rook-ceph-system
containers:
- name: rook-ceph-operator
image: docker.io/rook/ceph:v1.15.0
args: ["ceph", "operator"]
securityContext:
runAsNonRoot: true
runAsUser: 2016
runAsGroup: 2016
capabilities:
drop: ["ALL"]
volumeMounts:
- mountPath: /var/lib/rook
name: rook-config
- mountPath: /etc/ceph
name: default-config-dir
env:
# If the operator should only watch for cluster CRDs in the same namespace, set this to "true".
# If this is not set to true, the operator will watch for cluster CRDs in all namespaces.
- name: ROOK_CURRENT_NAMESPACE_ONLY
value: "false"
# Whether to start pods as privileged that mount a host path, which includes the Ceph mon, osd pods and csi provisioners(if logrotation is on).
# Set this to true if SELinux is enabled (e.g. OpenShift) to workaround the anyuid issues.
# For more details see https://github.com/rook/rook/issues/1314#issuecomment-355799641
- name: ROOK_HOSTPATH_REQUIRES_PRIVILEGED
value: "false"
# Provide customised regex as the values using comma. For eg. regex for rbd based volume, value will be like "(?i)rbd[0-9]+".
# In case of more than one regex, use comma to separate between them.
# Default regex will be "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Add regex expression after putting a comma to blacklist a disk
# If value is empty, the default regex will be used.
- name: DISCOVER_DAEMON_UDEV_BLACKLIST
value: "(?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+"
# Time to wait until the node controller will move Rook pods to other
# nodes after detecting an unreachable node.
# Pods affected by this setting are:
# mgr, rbd, mds, rgw, nfs, PVC based mons and osds, and ceph toolbox
# The value used in this variable replaces the default value of 300 secs
# added automatically by k8s as Toleration for
# <node.kubernetes.io/unreachable>
# The total amount of time to reschedule Rook pods in healthy nodes
# before detecting a <not ready node> condition will be the sum of:
# --> node-monitor-grace-period: 40 seconds (k8s kube-controller-manager flag)
# --> ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS: 5 seconds
- name: ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS
value: "5"
# The name of the node to pass with the downward API
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
# The pod name to pass with the downward API
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
# The pod namespace to pass with the downward API
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# Recommended resource requests and limits, if desired
#resources:
# limits:
# memory: 512Mi
# requests:
# cpu: 200m
# memory: 128Mi
# Uncomment it to run lib bucket provisioner in multithreaded mode
#- name: LIB_BUCKET_PROVISIONER_THREADS
# value: "5"
# Uncomment it to run rook operator on the host network
#hostNetwork: true
volumes:
- name: rook-config
emptyDir: {}
- name: default-config-dir
emptyDir: {}
# OLM: END OPERATOR DEPLOYMENT