Istio specifies a service's call timeout via the timeout field of a VirtualService.
An example:
 1 apiVersion: networking.istio.io/v1alpha3
 2 kind: VirtualService
 3 metadata:
 4   name: service-node
 5 spec:
 6   hosts:
 7   - service-node
 8   http:
 9   - route:
10     - destination:
11         host: service-node
12     timeout: 500ms
Line 12 specifies that a call to the service must complete within 500 milliseconds: when service-node is called and the request has not finished within 500ms, the caller immediately receives a timeout error.
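The mechanism the sidecar proxy applies here can be sketched as a deadline around the upstream call: if the response does not arrive in time, the caller gets a 504 Gateway Timeout instead. A minimal simulation of that behavior (not Istio's actual implementation, just the deadline logic):

```python
import asyncio

TIMEOUT_S = 0.5  # mirrors the 500ms timeout in the VirtualService above


async def upstream_call(latency_s: float) -> int:
    # Simulated upstream service: replies with HTTP 200 after latency_s seconds.
    await asyncio.sleep(latency_s)
    return 200


async def call_with_deadline(latency_s: float) -> int:
    # Like the proxy, enforce the deadline and turn an overrun
    # into a 504 Gateway Timeout for the caller.
    try:
        return await asyncio.wait_for(upstream_call(latency_s), timeout=TIMEOUT_S)
    except asyncio.TimeoutError:
        return 504


print(asyncio.run(call_with_deadline(0.1)))  # fast request -> 200
print(asyncio.run(call_with_deadline(0.8)))  # slow request -> 504
```

Note that the slow request still returns to the caller after roughly 500ms, not after the full upstream latency; the timeout bounds the caller's worst-case wait.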
[Experiment]
1) Deploy the service-node service:
$ kubectl apply -f service/node/service-node.yaml
$ kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
service-go-v1-7cc5c6f574-lrp2h     2/2     Running   0          4m
service-go-v2-7656dcc478-svn5c     2/2     Running   0          4m
service-node-v1-d44b9bf7b-ppn26    2/2     Running   0          24s
service-node-v2-86545d9796-rgmb7   2/2     Running   0          24s
2) Start the Pod used for load testing:
$ kubectl apply -f kubernetes/fortio.yaml
3) Create the timeout rule for the service-node service:
$ kubectl apply -f istio/resilience/virtual-service-node-timeout.yaml
4) Access the service-node service:
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -curl http://service-node/env
HTTP/1.1 200 OK
content-type: application/json; charset=utf-8
content-length: 77
date: Wed, 16 Jan 2019 10:33:57 GMT
x-envoy-upstream-service-time: 18
server: envoy

{"message":"node v1","upstream":[{"message":"go v1","response_time":"0.01"}]}

# concurrency 10
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 10 -qps 0 -n 100 -loglevel Error http://service-node/env
11:08:24 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 100 calls: http://service-node/env
Aggregated Function Time : count 100 avg 0.19270902 +/- 0.1403 min 0.009657651 max 0.506141264 sum 19.2709017
# target 50% 0.173333
# target 75% 0.3
# target 90% 0.421429
# target 99% 0.505118
# target 99.9% 0.506039
Sockets used: 15 (for perfect keepalive, would be 10)
Code 200 : 94 (94.0 %)
Code 504 : 6 (6.0 %)
All done 100 calls (plus 0 warmup) 192.709 ms avg, 45.4 qps

# concurrency 20
$ kubectl exec fortio -c fortio /usr/local/bin/fortio -- load -c 20 -qps 0 -n 200 -loglevel Error http://service-node/env
11:08:47 I logger.go:97> Log level is now 4 Error (was 2 Info)
Fortio 1.0.1 running at 0 queries per second, 2->2 procs, for 200 calls: http://service-node/env
Aggregated Function Time : count 200 avg 0.44961158 +/- 0.122 min 0.006904922 max 0.524347684 sum 89.9223153
# target 50% 0.50864
# target 75% 0.516494
# target 90% 0.521206
# target 99% 0.524034
# target 99.9% 0.524316
Sockets used: 163 (for perfect keepalive, would be 20)
Code 200 : 46 (23.0 %)
Code 504 : 154 (77.0 %)
All done 200 calls (plus 0 warmup) 449.612 ms avg, 39.2 qps
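The "Code" lines in fortio's summary are simple per-status-code tallies with percentages. A small sketch reproducing that calculation (the summarize helper is illustrative, not part of fortio):

```python
from collections import Counter


def summarize(codes):
    # Tally response codes as fortio reports them,
    # e.g. "Code 504 : 6 (6.0 %)".
    counts = Counter(codes)
    total = len(codes)
    return {code: (n, round(100.0 * n / total, 1))
            for code, n in sorted(counts.items())}


# 100 calls at concurrency 10: 94 succeeded, 6 hit the 500ms deadline.
print(summarize([200] * 94 + [504] * 6))  # {200: (94, 94.0), 504: (6, 6.0)}
```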
As the concurrency increases, the response time of the service-node service grows, and the proportion of timeout responses (HTTP 504) rises accordingly. This shows that the configured service timeout has taken effect.
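Why the latency grows with concurrency follows from Little's law: with c concurrent clients in a closed loop, the average latency is roughly c divided by the achieved throughput. A quick check of the figures above (measured averages of 0.193s and 0.450s):

```python
def littles_law_latency(concurrency: int, qps: float) -> float:
    # Little's law for a closed-loop load test:
    # average latency ~= in-flight requests / throughput.
    return concurrency / qps


# Concurrency 10 run achieved 45.4 qps; concurrency 20 run achieved 39.2 qps.
print(round(littles_law_latency(10, 45.4), 2))  # ~0.22 s, vs 0.193 s measured
print(round(littles_law_latency(20, 39.2), 2))  # ~0.51 s, vs 0.450 s measured
```

Doubling the concurrency did not double the throughput, so per-request latency climbs toward and past the 500ms deadline, which is why the 504 share jumps from 6% to 77%.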
5) Clean up:
$ kubectl delete -f kubernetes/fortio.yaml
$ kubectl delete -f service/node/service-node.yaml
$ kubectl delete -f istio/resilience/virtual-service-node-timeout.yaml