Services deployed in a Kubernetes cluster generally use liveness and readiness probes for health checking. When a service is deployed in an Istio mesh, its health checks need to be adjusted accordingly. There are two main cases:
·mTLS disabled: when the service does not have mTLS enabled, Command (exec), HTTP request, and TCP port health checks all work normally.
·mTLS enabled: when the service has mTLS enabled, Command and TCP port health checks still work, but HTTP request checks do not. A Command probe runs inside the application container and never touches the network, so mTLS does not affect it; an HTTP probe is sent in plaintext by the kubelet, which is outside the mesh, so the Envoy sidecar rejects the connection.
A TCP port check only verifies that the port is open. Because the Envoy proxy listens on the ports configured for the service, the TCP check succeeds whether or not the Pod has a problem. A TCP port check therefore cannot detect the real health of a service instance, and this type of health check is not recommended.
[Experiment 1] Health checks with the service's mTLS disabled.
1) Create the test Pod:
$ kubectl apply -f kubernetes/dns-test.yaml
2) Disable mTLS for the service-go service:
$ kubectl apply -f istio/security/mtls-service-go-off.yaml
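The contents of mtls-service-go-off.yaml are not reproduced in the text; a minimal sketch, assuming the Istio 1.0-era authentication Policy API, might look like the following (the resource names and API versions are assumptions):

```yaml
# Hypothetical sketch of istio/security/mtls-service-go-off.yaml:
# a Policy without a peers section accepts plaintext traffic, and the
# DestinationRule tells client sidecars not to originate mTLS.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: service-go
spec:
  targets:
  - name: service-go
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    tls:
      mode: DISABLE
```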
3) Deploy the service-go service with a Command-type health check:
$ kubectl apply -f kubernetes/service-go-liveness-command.yaml
$ kubectl get pod -l app=service-go
NAME                             READY   STATUS    RESTARTS   AGE
service-go-v1-5d98689766-6rcv2   2/2     Running   0          64s
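The manifest itself is not shown in the text; the liveness section of service-go-liveness-command.yaml presumably looks something like the sketch below (the /status endpoint on port 80 is taken from the probe output later in this section, and the exact command is an assumption):

```yaml
# Hypothetical Command (exec) liveness probe: the check runs inside
# the application container, so it bypasses the Envoy sidecar entirely.
livenessProbe:
  exec:
    command:
    - sh
    - -c
    - wget -q -O /dev/null http://127.0.0.1/status
  initialDelaySeconds: 5
  periodSeconds: 5
```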
4) Deploy the service-go service with an HTTP-request-type health check:
$ kubectl apply -f kubernetes/service-go-liveness-http.yaml
$ kubectl get pod -l app=service-go
NAME                             READY   STATUS    RESTARTS   AGE
service-go-v1-67dffc6768-tbftf   2/2     Running   0          9s
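A plausible sketch of the liveness section in service-go-liveness-http.yaml, with the path and port matching the kubelet probe URL seen in the Pod events later in this section:

```yaml
# Hypothetical HTTP liveness probe: the kubelet sends a plaintext
# GET to <pod-ip>:80/status from outside the mesh.
livenessProbe:
  httpGet:
    path: /status
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
```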
5) Deploy the service-go service with a TCP-port-type health check:
$ kubectl apply -f kubernetes/service-go-liveness-tcp.yaml
$ kubectl get pod -l app=service-go
NAME                             READY   STATUS    RESTARTS   AGE
service-go-v1-6ff45f7cbc-mf7jf   2/2     Running   0          54s
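The corresponding liveness section of service-go-liveness-tcp.yaml is presumably along these lines (port 80 is an assumption based on the HTTP probe):

```yaml
# Hypothetical TCP liveness probe: it only checks that the port accepts
# connections, which Envoy's listener will do once the sidecar is up,
# regardless of the application's actual state.
livenessProbe:
  tcpSocket:
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
```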
6) Clean up:
$ kubectl delete -f istio/security/mtls-service-go-off.yaml
$ kubectl delete -f kubernetes/service-go-liveness-tcp.yaml
[Experiment 2] Health checks with the service's mTLS enabled.
1) Enable mTLS for the service-go service:
$ kubectl apply -f istio/security/mtls-service-go-on.yaml
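Mirroring the earlier sketch, mtls-service-go-on.yaml might look like this under the same Istio 1.0-era API assumption:

```yaml
# Hypothetical sketch of istio/security/mtls-service-go-on.yaml:
# require mTLS on the server side and originate it on the client side.
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: service-go
spec:
  targets:
  - name: service-go
  peers:
  - mtls: {}
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: service-go
spec:
  host: service-go
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```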
2) Deploy the service-go service with a Command-type health check:
$ kubectl apply -f kubernetes/service-go-liveness-command.yaml
$ kubectl get pod -l app=service-go
NAME                             READY   STATUS    RESTARTS   AGE
service-go-v1-5d98689766-t4mdd   2/2     Running   0          7s
3) Deploy the service-go service with an HTTP-request-type health check:
$ kubectl apply -f kubernetes/service-go-liveness-http.yaml
$ kubectl get pod -l app=service-go
NAME                             READY   STATUS             RESTARTS   AGE
service-go-v1-67dffc6768-sjqmk   1/2     CrashLoopBackOff   6          4m19s
4) Check the events of the service-go Pod:
$ SERVICE_GO_POD=$(kubectl get pod -l app=service-go -o jsonpath={.items..metadata.name})
$ kubectl describe pod $SERVICE_GO_POD | grep -A50 Events
Events:
  Type     Reason     Age   From           Message
  ----     ------     ---   ----           -------
  ...
  Normal   Started    14m   kubelet, lab3  Started container
  Normal   Created    14m   kubelet, lab3  Created container
  Warning  Unhealthy  14m   kubelet, lab3  Liveness probe failed: Get http://10.244.2.13:80/status: read tcp 10.244.2.1:51260->10.244.2.13:80: read: connection reset by peer
  Warning  Unhealthy  14m   kubelet, lab3  Liveness probe failed: Get http://10.244.2.13:80/status: read tcp 10.244.2.1:51268->10.244.2.13:80: read: connection reset by peer
  Warning  Unhealthy  14m   kubelet, lab3  Liveness probe ...
5) Deploy the service-go service with a TCP-port-type health check:
$ kubectl apply -f kubernetes/service-go-liveness-tcp.yaml
$ kubectl get pod -l app=service-go
NAME                             READY   STATUS    RESTARTS   AGE
service-go-v1-6ff45f7cbc-4cbbj   2/2     Running   0          2m23s
6) Clean up:
$ kubectl delete -f istio/security/mtls-service-go-on.yaml
$ kubectl delete -f kubernetes/service-go-liveness-tcp.yaml