heidsoft / cloud-bigdata-book

write book

k8s production practice #41

Open heidsoft opened 6 years ago

heidsoft commented 6 years ago

istio

Getting-started deployment links

Install and configure the Bookinfo application: Ingress Gateway, Gateway, VirtualService, and Sidecar setup

https://skyao.io/learning-istio/installation/minikube.html
https://readailib.com/2019/02/22/kubernetes/istio-minikube/
https://emacoo.cn/devops/istio-tutorial/
https://medium.com/faun/istio-step-by-step-part-10-installing-istio-1-4-in-minikube-ebce9a4e99c
https://www.jianshu.com/p/314500cce146

minikube usage

minikube

minikube addons list
➜  istio-1.7.1 minikube addons list
|-----------------------------|----------|--------------|
|         ADDON NAME          | PROFILE  |    STATUS    |
|-----------------------------|----------|--------------|
| dashboard                   | minikube | enabled ✅   |
| default-storageclass        | minikube | enabled ✅   |
| efk                         | minikube | disabled     |
| freshpod                    | minikube | disabled     |
| gvisor                      | minikube | disabled     |
| helm-tiller                 | minikube | disabled     |
| ingress                     | minikube | disabled     |
| ingress-dns                 | minikube | disabled     |
| istio                       | minikube | disabled     |
| istio-provisioner           | minikube | disabled     |
| logviewer                   | minikube | disabled     |
| metrics-server              | minikube | enabled ✅   |
| nvidia-driver-installer     | minikube | disabled     |
| nvidia-gpu-device-plugin    | minikube | disabled     |
| registry                    | minikube | disabled     |
| registry-aliases            | minikube | disabled     |
| registry-creds              | minikube | disabled     |
| storage-provisioner         | minikube | enabled ✅   |
| storage-provisioner-gluster | minikube | disabled     |
|-----------------------------|----------|--------------|
➜  istio-1.7.1

Test whether the deployed services are reachable

➜  istio-1.7.1 kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-558b8b4b76-bgw67       2/2     Running   0          123m
httpbin-66cdbdb6c5-tqvlf          2/2     Running   0          13m
productpage-v1-6987489c74-rfj8k   2/2     Running   0          123m
ratings-v1-7dc98c7588-62ljw       2/2     Running   0          123m
reviews-v1-7f99cc4496-pqkc7       2/2     Running   0          123m
reviews-v2-7d79d5bd5d-wfqsl       2/2     Running   0          123m
reviews-v3-7dbcdcbc56-jnzjj       2/2     Running   0          123m
➜  istio-1.7.1 kubectl get pods
➜  istio-1.7.1 curl -s http://${GATEWAY_URL}/productpage | grep -o "<title>.*</title>"
➜  istio-1.7.1 kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

<title>Simple Bookstore App</title>

httpbin-gateway access demo

Get the ingress gateway ports

istio-1.7.1 export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
➜  istio-1.7.1 export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
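Get the ingress host — a minimal sketch for this minikube setup (not part of the original transcript); on minikube the gateway is reachable at the node IP reported by minikube ip:

export INGRESS_HOST=$(minikube ip)
echo $INGRESS_HOST:$INGRESS_PORT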

Create the Gateway and VirtualService

➜  istio-1.7.1 kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "httpbin.example.com"
EOF

gateway.networking.istio.io/httpbin-gateway created
➜  istio-1.7.1 kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "httpbin.example.com"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /status
    - uri:
        prefix: /delay
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
EOF

virtualservice.networking.istio.io/httpbin created
➜  istio-1.7.1 curl -I -HHost:httpbin.example.com http://$INGRESS_HOST:$INGRESS_PORT
➜  istio-1.7.1 kubectl get VirtualService
NAME       GATEWAYS             HOSTS                   AGE
bookinfo   [bookinfo-gateway]   [*]                     30m
httpbin    [httpbin-gateway]    [httpbin.example.com]   21s
➜  istio-1.7.1 kubectl get Gateway
NAME               AGE
bookinfo-gateway   30m
httpbin-gateway    42s
➜  istio-1.7.1 curl -I -HHost:httpbin.example.com http://$INGRESS_HOST:$INGRESS_PORT/status/200
HTTP/1.1 200 OK
server: istio-envoy
date: Tue, 15 Sep 2020 15:38:36 GMT
content-type: text/html; charset=utf-8
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 0
x-envoy-upstream-service-time: 23

➜  istio-1.7.1 kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
  - "*"
  gateways:
  - httpbin-gateway
  http:
  - match:
    - uri:
        prefix: /headers
    route:
    - destination:
        port:
          number: 8000
        host: httpbin
EOF

gateway.networking.istio.io/httpbin-gateway configured
virtualservice.networking.istio.io/httpbin configured
➜  istio-1.7.1 echo http://$INGRESS_HOST:$INGRESS_PORT/headers
http://172.16.154.129:32540/headers
➜  istio-1.7.1
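To verify the /headers route end to end, a hedged follow-up (not in the original transcript) is to curl the URL echoed above; httpbin's /headers endpoint returns the request headers as JSON:

curl -s http://$INGRESS_HOST:$INGRESS_PORT/headers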

image

heidsoft commented 6 years ago

https://blog.qikqiak.com/post/kubernetes-resource-quota-usage/

heidsoft commented 6 years ago

https://jimmysong.io/kubernetes-handbook/practice/storage.html

heidsoft commented 6 years ago

http://docs.kubernetes.org.cn/728.html#CPU-2

heidsoft commented 6 years ago

https://jimmysong.io/kubernetes-handbook/guide/configure-liveness-readiness-probes.html

heidsoft commented 6 years ago

https://k8smeetup.github.io/docs/tasks/administer-cluster/cpu-memory-limit/

Setting Pod CPU and memory limits

By default, pods run with no CPU or memory limits. This means that any Pod in the system can consume as much CPU and memory as is available on the node it runs on.

This example shows how to use a Kubernetes Namespace to constrain the minimum and maximum resources each Pod may use, and how default resource limits are applied when the end user does not set any.

Contents: Before you begin · Create a Namespace · Apply limits to the Namespace · Enforcing limits at Pod creation · Cleanup · Motivation for setting resource limits · Summary · What's next

Before you begin

You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. If you do not already have a cluster, you can create one by using Minikube, or you can use one of these Kubernetes playgrounds:

Katacoda or Play with Kubernetes. To check the version, enter kubectl version.

Create a Namespace

This example works in a custom Namespace to demonstrate the concepts involved.

Create a Namespace named limit-example:

$ kubectl create namespace limit-example
namespace "limit-example" created

Note that kubectl prints the type and name of the resource it created or modified, which can then be used in subsequent commands:

$ kubectl get namespaces
NAME            STATUS    AGE
default         Active    51s
limit-example   Active    45s

Apply limits to the Namespace

Create a simple limit in the Namespace:

$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/limits.yaml --namespace=limit-example
limitrange "mylimits" created

Describe the limits imposed in this Namespace:

$ kubectl describe limits mylimits --namespace=limit-example
Name:       mylimits
Namespace:  limit-example
Type        Resource  Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---   ---  ---------------  -------------  -----------------------
Pod         cpu       200m  2    -                -              -
Pod         memory    6Mi   1Gi  -                -              -
Container   cpu       100m  2    200m             300m           -
Container   memory    3Mi   1Gi  100Mi            200Mi          -

In this scenario, the following limits are specified:

If a resource has a maximum constraint (in this example 2 CPU and 1Gi of memory), then a limit for that resource must be specified across all containers; attempting to create a Pod without one results in a validation error. Note that a default limit is set via the default field in limits.yaml (300m CPU and 200Mi of memory).
If a resource has a minimum constraint (in this example 100m CPU and 3Mi of memory), then a request for that resource must be specified across all containers; attempting to create a Pod without one results in a validation error. Note that a default request is set via the defaultRequest field in limits.yaml (200m CPU and 100Mi of memory).
For any Pod, the sum of all container memory requests must be >= 6Mi and the sum of all container memory limits must be <= 1Gi; the sum of all container CPU requests must be >= 200m and the sum of all container CPU limits must be <= 2.
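For reference, a sketch of what the referenced limits.yaml likely contains, reconstructed from the min/max/default values above (not copied from the actual file):

kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: mylimits
spec:
  limits:
  - type: Pod                 # per-Pod aggregate bounds
    min:
      cpu: 200m
      memory: 6Mi
    max:
      cpu: "2"
      memory: 1Gi
  - type: Container           # per-container bounds and defaults
    min:
      cpu: 100m
      memory: 3Mi
    max:
      cpu: "2"
      memory: 1Gi
    defaultRequest:
      cpu: 200m
      memory: 100Mi
    default:
      cpu: 300m
      memory: 200Mi
EOF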

Enforcing limits at Pod creation

The limits listed in a Namespace are enforced when Pods are created or updated in the cluster. If the limits are later changed to a different value range, Pods created earlier in that Namespace are not affected.

If a resource (CPU or memory) is being restricted by a limit, the user gets an error at Pod creation time explaining why.

First, start a Deployment that creates a single-container Pod, to demonstrate how default values are applied to each Pod:

$ kubectl run nginx --image=nginx --replicas=1 --namespace=limit-example
deployment "nginx" created

Note that on Kubernetes clusters >= v1.2, kubectl run creates a Deployment named "nginx"; on older clusters it creates a ReplicationController instead. To get the old behavior, use the --generator=run/v1 option to create a ReplicationController. See kubectl run for more details. The Deployment manages 1 replica of a single-container Pod. Let's look at how it manages the Pod. First, find the Pod's name:

$ kubectl get pods --namespace=limit-example
NAME                     READY     STATUS    RESTARTS   AGE
nginx-2040093540-s8vzu   1/1       Running   0          11s

Print the Pod in yaml output format and grep its resources field. Note that your own Pod's name will differ from the one shown above:

$ kubectl get pods nginx-2040093540-s8vzu --namespace=limit-example -o yaml | grep resources -C 8
  resourceVersion: "57"
  selfLink: /api/v1/namespaces/limit-example/pods/nginx-2040093540-ivimu
  uid: 67b20741-f53b-11e5-b066-64510658e388
spec:
  containers:

Now create a Pod that exceeds the allowed limits, by giving it a container that requests 3 CPU cores:

$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/invalid-pod.yaml --namespace=limit-example
Error from server: error when creating "http://k8s.io/docs/tasks/configure-pod-container/invalid-pod.yaml": Pod "invalid-pod" is forbidden: [Maximum cpu usage per Pod is 2, but limit is 3., Maximum cpu usage per Container is 2, but limit is 3.]

Now create a Pod that is within the allowed limit range:

$ kubectl create -f https://k8s.io/docs/tasks/configure-pod-container/valid-pod.yaml --namespace=limit-example
pod "valid-pod" created

Now look at the Pod's resources field:

$ kubectl get pods valid-pod --namespace=limit-example -o yaml | grep -C 6 resources
  uid: 3b1bfd7a-f53c-11e5-b066-64510658e388
spec:
  containers:

Note: on a default Kubernetes installation on physical nodes, CPU limits are enforced on running containers, unless the administrator deployed the kubelet with the following flag:

$ kubelet --help
Usage of kubelet
....
  --cpu-cfs-quota[=true]: Enable CPU CFS quota enforcement for containers that specify CPU limits
$ kubelet --cpu-cfs-quota=false ...

Cleanup

To clean up the resources used by this example, delete the limit-example Namespace:

$ kubectl delete namespace limit-example
namespace "limit-example" deleted
$ kubectl get namespaces
NAME      STATUS    AGE
default   Active    12m

Motivation for setting resource limits

For various reasons related to resource usage, users may want to place a hard cap on the total resources a single Pod can consume.

For example:

Every node in the cluster has 2GB of memory. The cluster operator does not want to accept Pods that need more than 2GB of memory, because no node in the cluster can support them. To prevent such Pods from sitting forever unschedulable, the operator rejects Pods requesting more than 2GB of memory at admission time.
A cluster is shared by two communities in the same organization, one running production workloads and one running development workloads. Production workloads may consume up to 8GB of memory, while development workloads may consume up to 512MB. The cluster operator creates a separate Namespace for each workload and sets limits on each Namespace.
Users may create Pods whose resource consumption sits just below the capacity of a machine. The leftover space may be too small to be useful, yet the cost of that waste across the whole cluster adds up. So the cluster operator sets limits: to keep scheduling uniform and limit waste, a Pod must consume at least 20% of the memory and CPU of the average node size.

Summary

Cluster operators who want to restrict the total resources a single container or Pod can consume are able to define allowable ranges per Kubernetes Namespace. In the absence of any explicit assignment, the Kubernetes system applies default resource limits and requests, and can cap the total resources of Pods on a node if needed.

What's next: see the LimitRange design document for more information, and see Resources for a detailed description of the Kubernetes resource model.

heidsoft commented 6 years ago

journalctl --since 15:00:00 -u kubelet

heidsoft commented 6 years ago

View jobs: kubectl get jobs --watch -n my-cn

Find the pods belonging to a job: kubectl get pods -n mw-protege-cn --selector=job-name=wallet-cn-curl-cn-1529630400 --output=jsonpath={.items..metadata.name}

View a pod's execution output: kubectl logs -f wallet-cn-curl-cn-1529484000-w5rdv -n my-cn

heidsoft commented 5 years ago

Label operations


create labels for the nodes:

kubectl label node <nodename> <labelname>=allow

delete the above labels from their respective nodes:

kubectl label node <nodename> <labelname>-
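A hedged follow-up sketch (node name and label are hypothetical, not from the note above): once a node carries a label, a Pod can be pinned to matching nodes with nodeSelector.

kubectl label node worker-1 dedicated=allow    # hypothetical node name and label
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: label-demo
spec:
  nodeSelector:
    dedicated: allow    # only schedule onto nodes carrying this label
  containers:
  - name: main
    image: nginx
EOF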
heidsoft commented 5 years ago

Find the container name/ID from a process ID

docker ps -q | xargs docker inspect --format '{{.State.Pid}}, {{.Name}}'|grep 27088
docker ps -q | xargs docker inspect --format '{{.State.Pid}}, {{.Name}}'|grep 22667
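The reverse lookup (a small addition, not in the original note) goes from a known container to its main process PID:

docker inspect --format '{{.State.Pid}}' <container-id-or-name>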
heidsoft commented 5 years ago

Annotation used to control request body size with the k8s ingress nginx-ingress-controller:0.9.0-beta.5

registry.cn-hangzhou.aliyuncs.com/acs/nginx-ingress-controller:0.9.0-beta.5
annotations:
   ingress.kubernetes.io/proxy-body-size: 50m
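For context, a sketch of a complete Ingress carrying this annotation (host, service name and port are hypothetical; the apiVersion matches the era of the 0.9.0-beta.5 controller):

kubectl apply -f - <<EOF
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: upload-demo
  annotations:
    ingress.kubernetes.io/proxy-body-size: 50m   # allow request bodies up to 50m
spec:
  rules:
  - host: upload.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: upload-svc
          servicePort: 8080
EOF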
heidsoft commented 5 years ago

Enabling new k8s features via feature gates

https://k8smeetup.github.io/docs/reference/feature-gates/
https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
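A minimal sketch of how feature gates are switched on (the gate name below is an era-appropriate example, not taken from the linked pages): each component accepts a comma-separated --feature-gates list of key=value pairs.

kubelet --feature-gates=CustomPodDNS=true ...        # on a component started by hand
minikube start --feature-gates=CustomPodDNS=true     # or for a local minikube cluster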

heidsoft commented 5 years ago

JVM parameter settings for Java applications running in pods

-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -Xms512M

Set -XX:+UnlockExperimentalVMOptions to unlock the experimental flags

Java 8 and Java 9
Starting with Java 8u131, the JVM supports Docker CPU and memory limits.
CPU limit
If -XX:ParallelGCThreads or -XX:CICompilerCount is not explicitly specified, the JVM derives these values from the Docker CPU limit. If the Docker CPU limit is set and the flags are also given explicitly, the explicit flags take precedence.
Memory limit

On Java 8u131+ and Java 9, -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap must be added for -Xmx to respect the Docker memory limit.
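A quick way to see the effect (a hedged sketch; the image tag is an assumption): run the JVM under a Docker memory limit and check the resulting MaxHeapSize.

docker run -m 512m --rm openjdk:8-jre \
  java -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 \
       -XX:+PrintFlagsFinal -version | grep -i maxheapsize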
heidsoft commented 5 years ago

View JVM default flag values

java -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal
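To narrow the very long flag dump down to the values that matter for container sizing, a hedged variant:

java -XX:+UnlockDiagnosticVMOptions -XX:+UnlockExperimentalVMOptions -XX:+PrintFlagsFinal -version 2>/dev/null \
  | grep -Ei 'maxheapsize|maxram'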
heidsoft commented 5 years ago

jvm Max memory = [-Xmx] + [-XX:MaxPermSize] + number_of_threads * [-Xss]
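A worked example under assumed values (hypothetical: -Xmx512m, -XX:MaxPermSize=128m, 200 threads, -Xss1m):

echo $((512 + 128 + 200 * 1))   # => 840 (MB), before code cache, GC structures and direct buffers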

heidsoft commented 5 years ago

JVM parameter debugging

java -XX:+PrintGCDetails  -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -version

/ # java -XX:+PrintGCDetails  -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Heap
 PSYoungGen      total 9728K, used 522K [0x00000000f5580000, 0x00000000f6000000, 0x0000000100000000)
  eden space 8704K, 6% used [0x00000000f5580000,0x00000000f5602a98,0x00000000f5e00000)
  from space 1024K, 0% used [0x00000000f5f00000,0x00000000f5f00000,0x00000000f6000000)
  to   space 1024K, 0% used [0x00000000f5e00000,0x00000000f5e00000,0x00000000f5f00000)
 ParOldGen       total 22016K, used 0K [0x00000000e0000000, 0x00000000e1580000, 0x00000000f5580000)
  object space 22016K, 0% used [0x00000000e0000000,0x00000000e0000000,0x00000000e1580000)
 Metaspace       used 2226K, capacity 4480K, committed 4480K, reserved 1056768K
  class space    used 243K, capacity 384K, committed 384K, reserved 1048576K
/ #  
heidsoft commented 5 years ago

View JVM default parameters

Use the java -XX:+PrintFlagsInitial command to view the machine's initial flag values
heidsoft commented 5 years ago

Maximum memory the JVM will allocate, based on MaxRAMFraction

Java 8/9 brought support for -XX:+UseCGroupMemoryLimitForHeap (with -XX:+UnlockExperimentalVMOptions). This sets -XX:MaxRAM to the cgroup memory limit. Per default, the JVM allocates roughly 25% of the max RAM, because -XX:MaxRAMFraction defaults to 4.

Example:

MaxRAM = 1g
MaxRAMFraction = 4
JVM is allowed to allocate: MaxRAM / MaxRAMFraction = 1g / 4 = 256m

Using only 25% of the quota seems like waste for a deployment which (usually) consists of a single JVM process. So now people set -XX:MaxRAMFraction=1, so the JVM is theoretically allowed to use 100% of the MaxRAM.

For the 1g example, this often results in heap sizes around 900m. This seems a bit high - there is not a lot of free room for the JVM or other stuff like remote shells or out-of-process tasks.

So is this configuration (-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1) considered safe for prod or even best practice? Or should I still hand pick -Xmx, -Xms, -Xss and so on?
heidsoft commented 5 years ago

JVM parameter tuning after containerization

https://dzone.com/articles/how-to-decrease-jvm-memory-consumption-in-docker-u
heidsoft commented 5 years ago

k8s container network troubleshooting tool

https://github.com/heidsoft/netshoot
heidsoft commented 5 years ago

/sys/fs/cgroup/memory/memory.stat

[root@xxxxxx ~]# cat /sys/fs/cgroup/memory/memory.stat
cache 203841536
rss 72060928
rss_huge 12582912
mapped_file 43094016
swap 0
pgpgin 13652593
pgpgout 13609251
pgfault 39577330
pgmajfault 753
inactive_anon 401408
active_anon 73506816
inactive_file 892928
active_file 201101312
unevictable 0
hierarchical_memory_limit 9223372036854771712
hierarchical_memsw_limit 9223372036854771712
total_cache 11854675968
total_rss 1916837888
total_rss_huge 243269632
total_mapped_file 233168896
total_swap 0
total_pgpgin 33702061203
total_pgpgout 33699549832
total_pgfault 127611255701
total_pgmajfault 17816
total_inactive_anon 651264
total_active_anon 1919160320
total_inactive_file 5354594304
total_active_file 6497079296
total_unevictable 0
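A hedged way to read this data roughly the way the kubelet does (working set ≈ usage minus inactive file cache; the cgroup path below is a placeholder for the pod/container sub-cgroup):

cgdir=/sys/fs/cgroup/memory            # or the pod/container cgroup underneath it
usage=$(cat $cgdir/memory.usage_in_bytes)
inactive=$(awk '/^total_inactive_file / {print $2}' $cgdir/memory.stat)
echo "working set: $((usage - inactive)) bytes"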
heidsoft commented 5 years ago

https://stackoverflow.com/questions/49854237/is-xxmaxramfraction-1-safe-for-production-in-a-containered-environment

heidsoft commented 5 years ago

Pod monitoring

https://sysdig.com/blog/kubernetes-monitoring-prometheus/
https://itnext.io/kubernetes-monitoring-with-prometheus-in-15-minutes-8e54d1de2e13
https://dzone.com/articles/monitoring-kubernetes-in-production-how-to-guide-p
https://logz.io/blog/kubernetes-monitoring/
https://akomljen.com/get-kubernetes-cluster-metrics-with-prometheus-in-5-minutes/
https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66

heidsoft commented 5 years ago

Security configuration

https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/
https://k8smeetup.github.io/docs/tasks/configure-pod-container/configure-pod-configmap/
https://purewhite.io/2017/12/28/kubernetes-configmap-and-secret/
https://www.cnblogs.com/cocowool/p/kubernetes_configmap_secret.html
https://kubernetes.io/docs/concepts/configuration/secret/

heidsoft commented 5 years ago

JVM parameter configuration when containerizing Spring Boot

https://medium.com/@cl4r1ty/docker-spring-boot-and-java-opts-ba381c818fa2
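A hedged sketch of the JAVA_OPTS pass-through pattern (image name hypothetical, not copied from the article): the image's entrypoint expands $JAVA_OPTS, so memory flags can be injected per environment without rebuilding.

# assumes the image's entrypoint runs something like: exec java $JAVA_OPTS -jar /app.jar
docker run -m 1g -e JAVA_OPTS="-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2" my-spring-boot-app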

heidsoft commented 5 years ago

Delete completed jobs/pods

kubectl get jobs --all-namespaces | sed '1d' | awk '{ print $2, "--namespace", $1 }' | while read line; do kubectl delete jobs $line; done

kubectl delete job --namespace heidsoft $(kubectl get jobs --namespace heidsoft | awk '$3 ~ 1' | awk '{print $1}')
heidsoft commented 5 years ago

k8s serviceaccount

User accounts are designed for humans, while service accounts are designed for processes inside Pods that call the Kubernetes API.
User accounts work across namespaces, while a service account is limited to the namespace it lives in.
Every namespace automatically gets a default service account.
The Token controller watches for service account creation and creates a secret for each one.
With the ServiceAccount admission controller enabled:
Every Pod gets spec.serviceAccount set to default after creation (unless another ServiceAccount is specified).
The service account referenced by a Pod must already exist, otherwise the Pod is rejected.
If the Pod does not specify ImagePullSecrets, the service account's ImagePullSecrets are added to the Pod.
After each container starts, the service account's token and ca.crt are mounted at /var/run/secrets/kubernetes.io/serviceaccount/.
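A minimal creation/usage sketch (account and pod names are hypothetical, not from the notes above):

kubectl create serviceaccount build-robot
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo
spec:
  serviceAccountName: build-robot   # instead of the namespace's default account
  containers:
  - name: main
    image: nginx
EOF
# the account's token and ca.crt are mounted automatically:
kubectl exec sa-demo -- ls /var/run/secrets/kubernetes.io/serviceaccount/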

can-i-connect-one-service-account-to-multiple-namespaces-in-kubernetes · service-account · Managing Service Accounts

heidsoft commented 5 years ago

How to do kubernetes TCP health checks on a container?
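A hedged answer sketch for this question (image, pod name and port are hypothetical): a tcpSocket probe passes if the kubelet can open a TCP connection to the container port.

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: tcp-probe-demo
spec:
  containers:
  - name: main
    image: redis
    readinessProbe:
      tcpSocket:
        port: 6379           # probe succeeds when this port accepts a TCP connection
      initialDelaySeconds: 5
      periodSeconds: 10
EOF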

service type

helm list : cannot list configmaps in the namespace “kube-system”

service

heidsoft commented 5 years ago

calico

http://zhouxi.io/blog/post/zhouxi/k8s-calico-BGP-%E7%BD%91%E7%BB%9C%E9%AA%8C%E8%AF%81

heidsoft commented 5 years ago

calico

https://www.lijiaocn.com/%E9%A1%B9%E7%9B%AE/2017/04/11/calico-usage.html#calico
https://www.yangcs.net/posts/calico-rr/
http://hustcat.github.io/setup-rr-for-calico-node/
https://support.huawei.com/enterprise/zh/knowledge/EKB1000048982
https://kubernetes.io/zh/docs/concepts/services-networking/service/#%E5%AE%9A%E4%B9%89-service
http://zhouxi.io/blog/post/zhouxi/k8s-calico-BGP-%E7%BD%91%E7%BB%9C%E9%AA%8C%E8%AF%81

heidsoft commented 5 years ago

calico

http://zhouxi.io/blog/post/zhouxi/k8s-calico-BGP-%E7%BD%91%E7%BB%9C%E9%AA%8C%E8%AF%81
https://www.lijiaocn.com/%E9%A1%B9%E7%9B%AE/2017/04/11/calico-usage.html#calico
https://www.yangcs.net/posts/calico-rr/
http://hustcat.github.io/setup-rr-for-calico-node/
https://www.projectcalico.org/learn/

Video walkthrough

https://youtu.be/hqzUfefL1ek

heidsoft commented 5 years ago

helm

helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.0 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts

heidsoft commented 5 years ago

Docker storage drivers and storage layout

https://zhangchenchen.github.io/2018/03/09/record-for-docker-storage-driver/
http://hustcat.github.io/docker-devicemapper/
http://www.cnblogs.com/hustcat/p/3908985.html
https://www.troyying.xyz/index.php/IT/6.html
http://www.senra.me/docker-switch-storage-driver-to-overlay2-to-optimize-performance/
https://stackoverflow.com/questions/37672018/clean-docker-environment-devicemapper/37681340
http://www.projectatomic.io/blog/2015/06/notes-on-fedora-centos-and-docker-storage-drivers/
https://github.com/moby/moby/issues/3182
https://docs.docker.com/storage/storagedriver/device-mapper-driver/
https://github.com/snitm/docker/tree/master/daemon/graphdriver/devmapper

heidsoft commented 5 years ago

etcd

https://github.com/doczhcn/etcd
https://jin-yang.github.io/post/golang-raft-etcd-sourcode-details.html
https://jin-yang.github.io/about.html
http://sealblog.com/2018/09/14/etcd-raft/
https://draveness.me/etcd-introduction
https://linux.cn/article-4810-1.html

heidsoft commented 5 years ago

http://codemacro.com/2018/05/30/kube_apiserver_sample/

heidsoft commented 5 years ago

Set ingress proxy timeouts

https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
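A sketch of the per-Ingress timeout annotations documented for ingress-nginx (the Ingress name and values are examples; values are seconds, quoted as strings):

kubectl annotate ingress my-ingress \
  nginx.ingress.kubernetes.io/proxy-connect-timeout="30" \
  nginx.ingress.kubernetes.io/proxy-send-timeout="120" \
  nginx.ingress.kubernetes.io/proxy-read-timeout="120"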