cp deployment-user-v1.yaml deployment-user-v2.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
+ name: user-v2 # resource name
spec:
  selector:
    matchLabels:
+     app: user-v2 # tells the Deployment which Pods to control and manage; the matchLabels field matches against the Pods' label values
  replicas: 3 # number of Pod replicas
  template:
    metadata:
      labels:
+       app: user-v2 # the Pod's label
    spec: # spec of the Pods created by this Deployment
      containers:
      - name: nginx # container name
+       image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v2
        ports:
        - containerPort: 80 # port exposed inside the container
service-user-v2.yaml
apiVersion: v1
kind: Service
metadata:
+ name: service-user-v2
spec:
selector:
+ app: user-v2
ports:
- protocol: TCP
port: 80
targetPort: 80
type: NodePort
kubectl apply -f deployment-user-v2.yaml -f service-user-v2.yaml
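After applying, look up the NodePort that was assigned so the Service can be reached from outside the cluster (31234 in the curl commands further down):
kubectl get svc service-user-v2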
- nginx.ingress.kubernetes.io/canary: "true" or "false"; whether canary (gray release) routing is enabled for this Ingress.
- nginx.ingress.kubernetes.io/canary-by-cookie: the cookie key used for canary routing. When the cookie's value is always, the request is routed to the canary; any other value is routed to the production environment.
ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-canary
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-by-cookie: "vip_user"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: service-user-v2
          servicePort: 80
  backend:
    serviceName: service-user-v2
    servicePort: 80
kubectl apply -f ./ingress-gray.yaml
kubectl -n ingress-nginx get svc
curl http://172.31.178.169:31234/user
curl http://118.190.156.138:31234/user
curl --cookie "vip_user=always" http://172.31.178.169:31234/user
vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-canary
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
+ nginx.ingress.kubernetes.io/canary-by-header: "name"
+ nginx.ingress.kubernetes.io/canary-by-header-value: "vip"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: service-user-v2
          servicePort: 80
  backend:
    serviceName: service-user-v2
    servicePort: 80
kubectl apply -f ingress-gray.yaml
curl --header "name:vip" http://172.31.178.169:31234/user
- nginx.ingress.kubernetes.io/canary-weight: a string holding a number from 0 to 100; the probability that a request hits the canary. 0 means no traffic goes to the canary; the larger the value, the higher the probability, and 100 sends all traffic to the canary.
vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: user-canary
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/canary: "true"
+ nginx.ingress.kubernetes.io/canary-weight: "50"
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: service-user-v2
          servicePort: 80
  backend:
    serviceName: service-user-v2
    servicePort: 80
kubectl apply -f ingress-gray.yaml
for ((i=1; i<=10; i++)); do curl http://172.31.178.169:31234/user; done
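With the weight at 50, roughly half of these requests should hit the canary. Assuming the v1 and v2 responses differ (which is how the cookie and header tests above are verified), a quick tally makes the split visible:
for ((i=1; i<=100; i++)); do curl -s http://172.31.178.169:31234/user; echo; done | sort | uniq -c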
kubectl get deploy
kubectl scale deployment user-v1 --replicas=10
deployment-user-v1.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
+ strategy:
+   type: RollingUpdate
+   rollingUpdate:
+     maxSurge: 1
+     maxUnavailable: 0
+ selector:
+   matchLabels:
+     app: user-v1 # tells the Deployment which Pods to control and manage; the matchLabels field matches against the Pods' label values
  replicas: 10 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pod's label
    spec: # spec of the Pods created by this Deployment
      containers:
      - name: nginx # container name
+       image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        ports:
        - containerPort: 80 # port exposed inside the container
Parameter | Meaning |
---|---|
minReadySeconds | Delay, in seconds, before a new container receives traffic (default 0). Without it, k8s treats a container as usable as soon as it starts; setting a value delays the traffic switch-over |
strategy.type = RollingUpdate | ReplicaSet release type; declares a rolling update, which is also the default |
strategy.rollingUpdate.maxSurge | Maximum number of extra Pods, as a number or percentage. With maxSurge set to 1 and replicas set to 10, at most 10 + 1 Pods exist during the rollout (the extras being old-version Pods, unavailable during the transition). Must not be 0 when maxUnavailable is 0 |
strategy.rollingUpdate.maxUnavailable | Maximum number of Pods that may be unavailable during the upgrade, as a number or percentage. Must not be 0 when maxSurge is 0 |
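To watch maxSurge and maxUnavailable being honored, follow the Pods from a second terminal while the rollout below runs; the total should never exceed replicas + maxSurge (11 here) and the ready count should never fall below replicas - maxUnavailable (10 here):
kubectl get pods -l app=user-v1 -w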
kubectl apply -f ./deployment-user-v1.yaml
deployment.apps/user-v1 configured
kubectl rollout status deployment/user-v1
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
deployment "user-v1" successfully rolled out
Probe | When it runs | Purpose | Reaction to a failed check |
---|---|---|---|
Startup probe | While the Pod is running | Checks whether the service started successfully | Pod is killed and restarted |
Liveness probe | While the Pod is running | Checks whether the service has crashed and needs a restart | Pod is killed and restarted |
Readiness probe | While the Pod is running | Checks whether the service may receive traffic | Pod is removed from traffic scheduling; it is not killed or restarted |
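The sections below demonstrate liveness and readiness probes with exec, TCP, and HTTP checks. A startup probe is declared the same way; here is a minimal sketch (the pod name, image, and thresholds are illustrative, not part of this tutorial's examples):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-demo
spec:
  containers:
  - name: app
    image: nginx
    startupProbe:
      tcpSocket:
        port: 80
      failureThreshold: 30 # allow up to 30 * 10 = 300 seconds for startup
      periodSeconds: 10
EOF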
vi shell-probe.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.aliyuncs.com/google_containers/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
kubectl apply -f shell-probe.yaml
kubectl get pods | grep liveness-exec
kubectl describe pods liveness-exec
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m44s default-scheduler Successfully assigned default/liveness-exec to node1
Normal Pulled 2m41s kubelet Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 1.669600584s
Normal Pulled 86s kubelet Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 605.008964ms
Warning Unhealthy 41s (x6 over 2m6s) kubelet Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
Normal Killing 41s (x2 over 116s) kubelet Container liveness failed liveness probe, will be restarted
Normal Created 11s (x3 over 2m41s) kubelet Created container liveness
Normal Started 11s (x3 over 2m41s) kubelet Started container liveness
Normal Pulling 11s (x3 over 2m43s) kubelet Pulling image "registry.aliyuncs.com/google_containers/busybox"
Normal Pulled 11s kubelet Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 521.70892ms
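The RESTARTS column confirms the kill-and-restart loop:
kubectl get pod liveness-exec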
tcp-probe.yaml
apiVersion: v1
kind: Pod
metadata:
name: tcp-probe
labels:
app: tcp-probe
spec:
containers:
- name: tcp-probe
image: nginx
ports:
- containerPort: 80
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
periodSeconds: 10
kubectl apply -f tcp-probe.yaml
kubectl get pods | grep tcp-probe
kubectl describe pods tcp-probe
kubectl exec -it tcp-probe -- /bin/sh
apt-get update
apt-get install vim -y
vi /etc/nginx/conf.d/default.conf
# change "listen 80;" to "listen 8080;"
nginx -s reload
kubectl describe pod tcp-probe
Warning Unhealthy 6s kubelet Readiness probe failed: dial tcp 10.244.1.47:80: connect: connection
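Unlike a failed liveness probe, a failed readiness probe does not restart the Pod; it only drops out of the Service endpoints and shows 0/1 in the READY column:
kubectl get pod tcp-probe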
vi http-probe.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
test: http-probe
name: http-probe
spec:
containers:
- name: http-probe
image: registry.cn-beijing.aliyuncs.com/zhangrenyang/http-probe:1.0.0
livenessProbe:
httpGet:
path: /liveness
        port: 3000 # the app in this image listens on 3000 (see the Dockerfile below)
httpHeaders:
- name: source
value: probe
initialDelaySeconds: 3
periodSeconds: 3
kubectl apply -f ./http-probe.yaml
kubectl describe pods http-probe
Normal Killing 5s kubelet Container http-probe failed liveness probe, will be restarted
docker pull registry.cn-beijing.aliyuncs.com/zhangrenyang/http-probe:1.0.0
kubectl replace --force -f http-probe.yaml
Dockerfile
FROM node
COPY ./app /app
WORKDIR /app
EXPOSE 3000
CMD ["node", "index.js"]
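To try the image locally before pushing (the tag is illustrative, and index.js is assumed to sit in ./app next to the Dockerfile):
docker build -t http-probe:1.0.0 .
docker run -d -p 3000:3000 http-probe:1.0.0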
let http = require('http');
let start = Date.now();
http.createServer(function (req, res) {
  if (req.url === '/liveness') {
    let value = req.headers['source'];
    if (value === 'probe') {
      // Probe traffic: healthy for the first 10 seconds, then fail,
      // so the liveness probe eventually triggers a restart
      let duration = Date.now() - start;
      if (duration > 10 * 1000) {
        res.statusCode = 500;
        res.end('error');
      } else {
        res.statusCode = 200;
        res.end('success');
      }
    } else {
      // Requests without the probe header are always served normally
      res.statusCode = 200;
      res.end('liveness');
    }
  } else {
    res.statusCode = 200;
    res.end('liveness');
  }
}).listen(3000, function () { console.log('http server started on 3000'); });
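Running the server locally shows both branches: probe-tagged requests flip from success to error after 10 seconds, while plain requests always answer liveness:
node index.js &
curl -H "source: probe" http://localhost:3000/liveness
curl http://localhost:3000/liveness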
kubectl create secret generic mysql-account --from-literal=username=zhufeng --from-literal=password=123456
kubectl get secret
Field | Meaning |
---|---|
NAME | Name of the Secret |
TYPE | Type of the Secret |
DATA | Number of stored entries |
AGE | Time since creation |
# edit the values
kubectl edit secret mysql-account
# output in YAML format
kubectl get secret mysql-account -o yaml
# output in JSON format
kubectl get secret mysql-account -o json
# decode the Base64 value
echo MTIzNDU2 | base64 -d
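You can also pull a single field straight out of the Secret and decode it in one line:
kubectl get secret mysql-account -o jsonpath='{.data.username}' | base64 -d
kubectl get secret mysql-account -o jsonpath='{.data.password}' | base64 -d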
mysql-account.yaml
apiVersion: v1
kind: Secret
metadata:
name: mysql-account
stringData:
username: root
password: root
type: Opaque
kubectl apply -f mysql-account.yaml
secret/mysql-account created
kubectl get secret mysql-account -o yaml
kubectl create secret docker-registry private-registry \
--docker-username=[username] \
--docker-password=[password] \
--docker-email=[email] \
--docker-server=[private registry address]
# inspect the private registry secret
kubectl get secret private-registry -o yaml
echo [value] | base64 -d
vi private-registry-file.yaml
apiVersion: v1
kind: Secret
metadata:
name: private-registry-file
data:
.dockerconfigjson: eyJhdXRocyI6eyJodHRwczo
type: kubernetes.io/dockerconfigjson
kubectl apply -f ./private-registry-file.yaml
kubectl get secret private-registry-file -o yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; the matchLabels field matches against the Pods' label values
+ replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pod's label
    spec: # spec of the Pods created by this Deployment
+     volumes:
+     - name: mysql-account
+       secret:
+         secretName: mysql-account
      containers:
      - name: nginx # container name
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
+       volumeMounts:
+       - name: mysql-account
+         mountPath: /mysql-account
+         readOnly: true
        ports:
        - containerPort: 80 # port exposed inside the container
kubectl describe pods user-v1-b88799944-tjgrs
kubectl exec -it user-v1-b88799944-tjgrs -- ls /mysql-account
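Each key in the Secret appears as a file under the mount path; reading one prints the decoded value (the pod-name hash is whatever your cluster generated):
kubectl exec -it user-v1-b88799944-tjgrs -- cat /mysql-account/username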
deployment-user-v1.yaml
apiVersion: apps/v1 # API version
kind: Deployment # resource type
metadata:
  name: user-v1 # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; the matchLabels field matches against the Pods' label values
  replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pod's label
    spec: # spec of the Pods created by this Deployment
      volumes:
      - name: mysql-account
        secret:
          secretName: mysql-account
      containers:
      - name: nginx # container name
+       env:
+       - name: USERNAME
+         valueFrom:
+           secretKeyRef:
+             name: mysql-account
+             key: username
+       - name: PASSWORD
+         valueFrom:
+           secretKeyRef:
+             name: mysql-account
+             key: password
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
        ports:
        - containerPort: 80 # port exposed inside the container
kubectl apply -f deployment-user-v1.yaml
kubectl get pods
kubectl describe pod user-v1-5f48f78d86-hjkcl
kubectl exec -it user-v1-688486759f-9snpx -- env | grep USERNAME
vi v4.yaml
image: [private registry address]/[image name]:[image tag]
kubectl apply -f v4.yaml
kubectl get pods
kubectl describe pods [POD_NAME]
vi v4.yaml
+     imagePullSecrets:
+     - name: private-registry-file
      containers:
      - name: nginx
kubectl apply -f v4.yaml
Service discovery
kubectl -n kube-system get all -l k8s-app=kube-dns -o wide
kubectl exec -it [PodName] -- [Command]
kubectl get pods
kubectl get svc
kubectl exec -it user-v1-688486759f-9snpx -- /bin/sh
curl http://service-user-v2
namespace
[ServiceName].[NameSpace].svc.cluster.local
ServiceName is the name of the Service we created; NameSpace is the namespace it lives in (default when not specified).
curl http://service-user-v2.default.svc.cluster.local
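If the image ships nslookup (busybox does; slim nginx images may not), you can also confirm the name resolves to the Service's ClusterIP:
kubectl exec -it user-v1-688486759f-9snpx -- nslookup service-user-v2.default.svc.cluster.local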
kubectl create configmap [config_name] --from-literal=[key]=[value]
kubectl create configmap mysql-config --from-literal=MYSQL_HOST=192.168.1.172 --from-literal=MYSQL_PORT=3306
A ConfigMap name must match the regex [a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*
kubectl get cm
kubectl describe cm mysql-config
mysql-config-file.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-config-file
data:
MYSQL_HOST: "192.168.1.172"
MYSQL_PORT: "3306"
kubectl apply -f ./mysql-config-file.yaml
kubectl describe cm mysql-config-file
--from-file loads a ConfigMap entry from a file: key is the key the file is stored under inside the ConfigMap, and file_path is the path to the file.
kubectl create configmap [config_name] --from-file=[key]=[file_path]
env.config
HOST: 192.168.0.1
PORT: 8080
kubectl create configmap env-from-file --from-file=env=./env.config
configmap/env-from-file created
kubectl get cm env-from-file -o yaml
kubectl create configmap [configname] --from-file=[dir_path]
mkdir env && cd ./env
echo 'local' > env.local
echo 'test' > env.test
echo 'prod' > env.prod
kubectl create configmap env-from-dir --from-file=./
kubectl get cm env-from-dir -o yaml
      containers:
      - name: nginx # container name
+       env:
+       - name: MYSQL_HOST
+         valueFrom:
+           configMapKeyRef:
+             name: mysql-config
+             key: MYSQL_HOST
kubectl apply -f ./v1.yaml
# kubectl exec -it [POD_NAME] -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_PORT
      containers:
      - name: nginx # container name
        env:
+       envFrom:
+       - configMapRef:
+           name: mysql-config
+           optional: true
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
        ports:
        - containerPort: 80 # port exposed inside the container
  template:
    metadata:
      labels:
        app: user-v1 # the Pod's label
    spec: # spec of the Pods created by this Deployment
      volumes:
      - name: mysql-account
        secret:
          secretName: mysql-account
+     - name: envfiles
+       configMap:
+         name: env-from-dir
      containers:
      - name: nginx # container name
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: mysql-account
              key: username
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-account
              key: password
        envFrom:
        - configMapRef:
            name: mysql-config
            optional: true
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
+       - name: envfiles
+         mountPath: /envfiles
+         readOnly: true
        ports:
        - containerPort: 80 # port exposed inside the container
kubectl apply -f deployment-user-v1.yaml
kubectl get pods
kubectl describe pod user-v1-79b8768f54-r56kd
kubectl exec -it user-v1-744f48d6bd-9klqr -- ls /envfiles
    spec: # spec of the Pods created by this Deployment
      volumes:
      - name: mysql-account
        secret:
          secretName: mysql-account
      - name: envfiles
        configMap:
          name: env-from-dir
+         items:
+         - key: env.local
+           path: env.local
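After re-applying, the mount should contain only env.local rather than all three files:
kubectl apply -f deployment-user-v1.yaml
kubectl exec -it [POD_NAME] -- ls /envfiles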
A taint is a key=value pair whose content you define yourself, much like a tag:
kubectl taint nodes [Node_Name] [key]=[value]:NoSchedule
# add a taint
kubectl taint nodes node1 user-v4=true:NoSchedule
# view the taints
kubectl describe node node1
kubectl describe node master
Taints: node-role.kubernetes.io/master:NoSchedule
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-v4
spec:
minReadySeconds: 1
selector:
matchLabels:
app: user-v4
replicas: 1
template:
metadata:
labels:
app: user-v4
spec:
containers:
- name: nginx
image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3
ports:
- containerPort: 80
kubectl apply -f deployment-user-v4.yaml
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-v4
spec:
minReadySeconds: 1
selector:
matchLabels:
app: user-v4
replicas: 1
template:
metadata:
labels:
app: user-v4
spec:
+ tolerations:
+ - key: "user-v4"
+ operator: "Equal"
+ value: "true"
+ effect: "NoSchedule"
containers:
- name: nginx
image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3
ports:
- containerPort: 80
Modify a node's taint
kubectl taint nodes node1 user-v4=1:NoSchedule --overwrite
Remove a node's taint
kubectl taint nodes node1 user-v4-
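Confirm the taint is gone before expecting Pods to schedule onto the node again:
kubectl describe node node1 | grep Taints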
Deploying Pods on the master node
kubectl taint nodes node1 user-v4=true:NoSchedule
kubectl describe node node1
kubectl describe node master
vi deployment-user-v4.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-v4
spec:
minReadySeconds: 1
selector:
matchLabels:
app: user-v4
replicas: 1
template:
metadata:
labels:
app: user-v4
spec:
+ tolerations:
+ - key: "node-role.kubernetes.io/master"
+ operator: "Exists"
+ effect: "NoSchedule"
containers:
- name: nginx
image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3
ports:
- containerPort: 80
kubectl apply -f deployment-user-v4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image:
  imagePullSecrets:
  - name: har
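Once the truncated image and imagePullSecrets name above are filled in, apply the manifest (the filename here is illustrative) and confirm the image pull succeeds with the private credentials:
kubectl apply -f private-reg.yaml
kubectl get pod private-reg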