# ================================
# DATABASUS - ONE-TIME MIGRATION JOB
# Copies data from old local PVC → new ceph PVC
#
# Steps:
#   1. Scale down the databasus deployment:
#        kubectl scale deployment databasus --replicas=0
#   2. Apply databasus-storage.yaml (creates the new ceph PVC):
#        kubectl apply -f databasus-storage.yaml
#   3. Apply this file:
#        kubectl apply -f databasus-migration-job.yaml
#   4. Wait for the job to complete:
#        kubectl wait --for=condition=complete job/databasus-migration --timeout=120s
#   5. Verify the data in the new PVC:
#        kubectl logs job/databasus-migration
#   6. Apply the updated databasus.yaml (uses the ceph PVC, drops the local PV/PVC):
#        kubectl apply -f databasus.yaml
#   7. Delete the old local PV and its temporary PVC alias:
#        kubectl delete pvc databasus-pvc-local
#        kubectl delete pv databasus-pv
#   8. Delete this job:
#        kubectl delete job databasus-migration
# ================================
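# Note (assumption about the PV's reclaim policy): if databasus-pv was
# created with persistentVolumeReclaimPolicy: Retain, deleting the old
# databasus-pvc leaves the PV in the Released phase, and the alias claim
# below will not bind to it until the stale claimRef is cleared first:
#
#   kubectl patch pv databasus-pv -p '{"spec":{"claimRef":null}}'
#
# Skip this if `kubectl get pv databasus-pv` already shows it Available.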
---
# Temporary PVC alias for the old local PV, so the old and new volumes can
# be mounted side by side during migration. The old local PVC and the new
# ceph PVC both use the name databasus-pvc, so the old claim must be
# deleted (or renamed) before databasus-storage.yaml is applied; the claim
# below then re-binds the retained local PV under the name
# databasus-pvc-local.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: databasus-pvc-local
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: databasus-local-storage
  resources:
    requests:
      storage: 500Mi
  # Bind to the existing local PV explicitly
  volumeName: databasus-pv
---
apiVersion: batch/v1
kind: Job
metadata:
  name: databasus-migration
spec:
  # Do not retry on failure: a failed copy should be inspected, not rerun blindly
  backoffLimit: 0
  template:
    spec:
      # Local PVs are node-bound; run on the node that hosts databasus-pv
      nodeSelector:
        kubernetes.io/hostname: mist
      restartPolicy: Never
      containers:
        - name: migrate
          image: busybox
          command:
            - sh
            - -c
            - |
              set -e  # abort (and fail the job) if any command fails
              echo "=== Starting migration ==="
              echo "Source contents:"
              ls -lah /old-data/
              echo ""
              echo "Copying data..."
              cp -av /old-data/. /new-data/
              echo ""
              echo "=== Migration complete ==="
              echo "New data contents:"
              ls -lah /new-data/
          volumeMounts:
            - name: old-data
              mountPath: /old-data
              readOnly: true
            - name: new-data
              mountPath: /new-data
      volumes:
        - name: old-data
          persistentVolumeClaim:
            claimName: databasus-pvc-local
        - name: new-data
          persistentVolumeClaim:
            claimName: databasus-pvc
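# Note: step 1 scaled the databasus deployment to 0 imperatively. If
# databasus.yaml does not pin spec.replicas, the deployment may remain at
# 0 after step 6; once the migrated data is verified, scale it back up
# (assuming a single replica here; adjust to your normal replica count):
#
#   kubectl scale deployment databasus --replicas=1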