When I run a Go HTTPS server locally with self-signed certificates, everything works fine.
When I push the same code to a Docker container (via skaffold, or to Google GKE), ListenAndServeTLS hangs and the container loops on recreation.
The certificate was created via:
openssl genrsa -out https-server.key 2048
openssl ecparam -genkey -name secp384r1 -out https-server.key
openssl req -new -x509 -sha256 -key https-server.key -out https-server.crt -days 3650
(Note: the second command overwrites the RSA key from the first with an EC key, so the certificate is actually backed by the secp384r1 key.)
main.go contains:
if IsSSL {
    err := http.ListenAndServeTLS(addr+":"+srvPort, os.Getenv("CERT_FILE"), os.Getenv("KEY_FILE"), handler)
    if err != nil {
        log.Fatal(err)
    }
} else {
    log.Fatal(http.ListenAndServe(addr+":"+srvPort, handler))
}
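Since ListenAndServeTLS prints nothing before the container dies, here is a minimal sketch of how the key pair could be validated at startup so that a bad path or unreadable file fails loudly (it assumes the same CERT_FILE/KEY_FILE/PORT environment variables as in the manifest below; the rest is illustrative):

package main

import (
    "crypto/tls"
    "log"
    "net/http"
    "os"
)

func main() {
    certFile := os.Getenv("CERT_FILE")
    keyFile := os.Getenv("KEY_FILE")

    // Loading the pair explicitly surfaces path, permission, and PEM
    // parsing errors before the listener starts, instead of an opaque hang.
    if _, err := tls.LoadX509KeyPair(certFile, keyFile); err != nil {
        log.Fatalf("cannot load TLS key pair (%s, %s): %v", certFile, keyFile, err)
    }

    // Bind on all interfaces; PORT is "8080" per the manifest.
    log.Fatal(http.ListenAndServeTLS(":"+os.Getenv("PORT"), certFile, keyFile, nil))
}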
The .crt and .key files are passed in via Kubernetes secrets, and my YAML manifest contains the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      annotations:
        sidecar.istio.io/rewriteAppHTTPProbers: "true"
    spec:
      volumes:
        - name: google-cloud-key
          secret:
            secretName: ecomm-key
        - name: ssl-cert
          secret:
            secretName: ecomm-cert-server
        - name: ssl-key
          secret:
            secretName: ecomm-cert-key
      containers:
        - name: frontend
          image: gcr.io/sca-ecommerce-291313/frontend:latest
          ports:
            - containerPort: 8080
          readinessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: "/_healthz"
              port: 8080
              httpHeaders:
                - name: "Cookie"
                  value: "shop_session-id=x-readiness-probe"
          livenessProbe:
            initialDelaySeconds: 10
            httpGet:
              path: "/_healthz"
              port: 8080
              httpHeaders:
                - name: "Cookie"
                  value: "shop_session-id=x-liveness-probe"
          volumeMounts:
            - name: ssl-cert
              mountPath: /var/secrets/ssl-cert
            - name: ssl-key
              mountPath: /var/secrets/ssl-key
          env:
            - name: USE_SSL
              value: "true"
            - name: CERT_FILE
              value: "/var/secrets/ssl-cert/cert-server.pem"
            - name: KEY_FILE
              value: "/var/secrets/ssl-key/cert-key.pem"
            - name: PORT
              value: "8080"
I see the same behaviour when referencing the files directly in the code, like:
err := http.ListenAndServeTLS(addr+":"+srvPort, "https-server.crt", "https-server.key", handler)
The strange and unhelpful thing is that ListenAndServeTLS gives no log output as to why it hangs, nor any hint about the problem (checked with kubectl logs).
Looking at the kubectl describe pod output:
Name:         frontend-85f4d9cb8c-9bjh4
Namespace:    ecomm-ns
Priority:     0
Start Time:   Fri, 01 Jan 2021 17:04:29 +0100
Labels:       app=frontend
              app.kubernetes.io/managed-by=skaffold
              pod-template-hash=85f4d9cb8c
              skaffold.dev/run-id=44518449-c1c1-4b6c-8cc1-406ac6d6b91f
Annotations:  sidecar.istio.io/rewriteAppHTTPProbers: true
Status:       Running
IP:           192.168.10.7
IPs:
  IP:  192.168.10.7
Controlled By:  ReplicaSet/frontend-85f4d9cb8c
Containers:
  frontend:
    Container ID:   docker://f867ea7a2f99edf891b571f80ae18f10e261375e073b9d2007bbff1600d272c7
    Image:          gcr.io/sca-ecommerce-291313/frontend:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22
    Image ID:       docker://sha256:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 01 Jan 2021 17:05:08 +0100
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Fri, 01 Jan 2021 17:04:37 +0100
      Finished:     Fri, 01 Jan 2021 17:05:07 +0100
    Ready:          False
    Restart Count:  1
    Limits:
      cpu:     200m
      memory:  128Mi
    Requests:
      cpu:     100m
      memory:  64Mi
    Liveness:   http-get http://:8080/_healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:8080/_healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:       /var/secrets/google/key.json
      CERT_FILE:                            /var/secrets/ssl-cert/cert-server.crt
      KEY_FILE:                             /var/secrets/ssl-key/cert-server.key
      PORT:                                 8080
      USE_SSL:                              true
      ONLINE_PRODUCT_CATALOG_SERVICE_ADDR:  onlineproductcatalogservice:4040
      ENV_PLATFORM:                         gcp
      DISABLE_TRACING:                      1
      DISABLE_PROFILER:                     1
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-tm62d (ro)
      /var/secrets/google from google-cloud-key (rw)
      /var/secrets/ssl-cert from ssl-cert (rw)
      /var/secrets/ssl-key from ssl-key (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  google-cloud-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ecomm-key
    Optional:    false
  ssl-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  https-cert-server
    Optional:    false
  ssl-key:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  https-cert-key
    Optional:    false
  default-token-tm62d:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-tm62d
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  46s                default-scheduler  Successfully assigned ecomm-ns/frontend-85f4d9cb8c-9bjh4
  Warning  Unhealthy  17s (x2 over 27s)  kubelet            Readiness probe failed: HTTP probe failed with statuscode: 400
  Normal   Pulled     8s (x2 over 41s)   kubelet            Container image "gcr.io/frontend:5110aa8a87655b07cc71ffb2c46fd8739e3c25c222a637b2f5a7a1af1bfccc22" already present on machine
  Normal   Created    8s (x2 over 39s)   kubelet            Created container frontend
  Warning  Unhealthy  8s (x3 over 28s)   kubelet            Liveness probe failed: HTTP probe failed with statuscode: 400
  Normal   Killing    8s                 kubelet            Container frontend failed liveness probe, will be restarted
  Normal   Started    7s (x2 over 38s)   kubelet            Started container frontend
The liveness and readiness probes are getting a 400 response.
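I wonder whether the probes are simply speaking plain HTTP to the TLS port (Go's TLS listener answers plaintext requests with a 400). As a possible workaround, here is a minimal, untested sketch of serving /_healthz on a separate plain-HTTP listener while the main listener stays on TLS (port 8081 and the handler wiring are illustrative assumptions, not what my code currently does):

package main

import (
    "log"
    "net/http"
    "os"
)

func main() {
    // Plain-HTTP mux just for the kubelet's httpGet probes, which
    // speak plain HTTP unless the probe is configured otherwise.
    health := http.NewServeMux()
    health.HandleFunc("/_healthz", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
    })
    go func() {
        log.Fatal(http.ListenAndServe(":8081", health))
    }()

    // Main TLS listener, as before.
    log.Fatal(http.ListenAndServeTLS(":"+os.Getenv("PORT"),
        os.Getenv("CERT_FILE"), os.Getenv("KEY_FILE"), nil))
}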