This guide explains how to clean up Veeam Kasten for Kubernetes provisioned PVs that fail to delete on GKE Kubernetes 1.28, leaving orphaned volumes behind.
On GKE running k8s 1.28, PVs provisioned by Veeam Kasten for Kubernetes through the in-tree provisioner cannot be deleted via "kubectl delete pvc <pvcname>", resulting in volume sprawl that requires manual remediation. Restoring from snapshots/backups still functions.
There is no risk of data loss or inability to recover from existing backups.
This issue affects only the in-tree provisioner "kubernetes.io/gce-pd" on GKE k8s 1.28.x.
This section demonstrates how to check for orphaned volumes in the environment and clean them up.
Environment
GKE k8s 1.28.x
Kasten K10 version: 6.5.2
In-tree provisioner "kubernetes.io/gce-pd"
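To confirm whether a StorageClass uses the affected in-tree provisioner, inspect its provisioner field. A minimal check, assuming the StorageClass is named standard as in the output later in this article (substitute your own name):
kubectl get storageclass standard -o jsonpath='{.provisioner}'
Expected output for an affected configuration: kubernetes.io/gce-pd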
Although the Dashboard reports the export as successful, the Kasten-provisioned (kio-) PVs left behind after the export are orphaned and show a Failed status:
status:
  message: 'error getting deleter volume plugin for volume "kio-4e1c8777bcd411eeb71cde3cda5267f6-0":
    no volume plugin matched'
  phase: Failed
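The same failure is recorded on the PV object itself. A hedged way to pull just the phase and message for a given PV:
kubectl get pv <pv-name> -o jsonpath='{.status.phase}{"\n"}{.status.message}{"\n"}'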
To identify orphaned volumes, list failed PVs and inspect one of them:
kubectl get pv | grep -i failed
kubectl describe pv <pv-id>
Example output:
**kubectl get pv | grep -i failed**
kio-4e1c8777bcd411eeb71cde3cda5267f6-0 8589934592 RWO Delete Failed kasten-io/kio-4e1c8777bcd411eeb71cde3cda5267f6-0 standard 39m
kio-e747f57abcd111eeb71cde3cda5267f6-0 8589934592 RWO Delete Failed kasten-io/kio-e747f57abcd111eeb71cde3cda5267f6-0 standard 57m
**kubectl describe pv kio-4e1c8777bcd411eeb71cde3cda5267f6-0**
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning VolumeFailedDelete 15m persistentvolume-controller error getting deleter volume plugin for volume "kio-4e1c8777bcd411eeb71cde3cda5267f6-0": no volume plugin matched
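Kasten-provisioned PVs carry a k10pvmatchid label, which the cleanup commands later in this article use as a selector. To confirm the label is present on a failed PV (output formatting may vary):
kubectl get pv kio-4e1c8777bcd411eeb71cde3cda5267f6-0 --show-labels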
Attempt to delete an identified orphaned PVC using the following command:
kubectl delete pvc <pvc-id> -n kasten-io
Example
**kubectl delete pvc kio-4e1c8777bcd411eeb71cde3cda5267f6-0 -n kasten-io**
The deletion leaves the PV in the Failed phase with the same error:
status:
  message: 'error getting deleter volume plugin for volume "kio-4e1c8777bcd411eeb71cde3cda5267f6-0":
    no volume plugin matched'
  phase: Failed
Restoring the workload from a Veeam Kasten for Kubernetes snapshot/export still succeeds.
The following steps outline the process to clean up any orphaned volumes.
First, list the GCE persistent disk names backing the failed Kasten-provisioned PVs:
kubectl get pv --selector k10pvmatchid \
  -o jsonpath='{.items[?(@.status.phase == "Failed")].spec.gcePersistentDisk.pdName}'
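Optionally, cross-check that the corresponding disks still exist in the GCP project before deleting them. A sketch, assuming gcloud is authenticated against the correct project and the Kasten disk names keep the kio- prefix shown above:
gcloud compute disks list --filter="name~^kio-"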
Next, delete the underlying disks in GCP:
# Collect the disk (pdName) of every Failed Kasten-provisioned PV.
disks=$(kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].spec.gcePersistentDisk.pdName}')
# Delete each disk; --quiet suppresses the confirmation prompt.
for disk in $disks; do
  gcloud compute disks delete "$disk" --quiet
done
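The loop above relies on gcloud's configured default zone; because --quiet disables prompting, the delete may fail if a zone cannot be resolved. A hedged variant that derives each disk's zone from the PV's zone label, assuming the in-tree provisioner set failure-domain.beta.kubernetes.io/zone on the PV (verify the label exists in your cluster first):
for pv in $(kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].metadata.name}'); do
  # Look up the backing disk name and its zone from the PV object.
  disk=$(kubectl get pv "$pv" -o jsonpath='{.spec.gcePersistentDisk.pdName}')
  zone=$(kubectl get pv "$pv" -o jsonpath='{.metadata.labels.failure-domain\.beta\.kubernetes\.io/zone}')
  gcloud compute disks delete "$disk" --zone "$zone" --quiet
done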
Finally, delete the Failed PV objects from the cluster:
# Collect the names of all Failed Kasten-provisioned PVs.
failedpvs=$(kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].metadata.name}')
for failedpv in $failedpvs; do
  kubectl delete pv "$failedpv"
done
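Re-run the selector query to verify the cleanup; empty output means no Failed Kasten-provisioned PVs remain:
kubectl get pv --selector k10pvmatchid -o jsonpath='{.items[?(@.status.phase == "Failed")].metadata.name}'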