I just migrated my Kubernetes Cluster to a new one on Digital Ocean, as I had used Helm previously and it was just too complicated to fix up. One of the things I was working on was an installation of Matomo, an open-source, self-hosted website analytics solution. I manage the finer details for my clients, and I didn't want to keep them on Google Analytics because they won't set it up themselves, and I didn't want my own account being polluted with their statistics.
As part of this, I had to enable proxy-protocol on my nginx-ingress DaemonSet and the Digital Ocean Load Balancer. However, once I'd done this I noticed that all of my WordPress Scheduled Tasks had stopped working. I gave it a few days, and after I realised it still wasn't working I knew I had to look into it. After firing up the WordPress Troubleshooting Tool, I noticed that I was getting the following error:
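For reference, enabling proxy-protocol touches two places: the ingress controller's ConfigMap and the Load Balancer Service. A minimal sketch of what I applied (the resource names and namespace are illustrative and depend on how nginx-ingress was installed; the annotation is the one Digital Ocean documents for its Load Balancers):

```yaml
# nginx-ingress ConfigMap: tell nginx to expect the PROXY protocol header
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration        # name depends on your install
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
---
# Load Balancer Service: tell the DO Load Balancer to send PROXY protocol
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```

Both halves are needed: if only the Load Balancer sends the PROXY header, nginx will reject the connections; if only nginx expects it, plain connections will fail.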
curl: (35) Unknown SSL protocol error in connection to www.mywebhost.com:443
I scratched my head, because TLS was terminating fine when I browsed to the site and ran a curl from my own computer. My status monitor was also reporting that my sites were up. I jumped onto one of the pods and tried the command, where I got an interesting result:
root@wordpress-ddb9594fc-qx9kh:/# curl -v https://www.mywebhost.com/
* Rebuilt URL to: https://www.mywebhost.com/
*   Trying #.#.#.#...
* TCP_NODELAY set
* Connected to www.mywebhost.com (#.#.#.#) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
    CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to www.mywebhost.com:443
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to www.mywebhost.com:443
I ran an SSL test using Qualys' SSL Labs SSL Server Test and noticed some interesting results, predominantly that I had neither perfect forward secrecy nor TLSv1.3 enabled. I enabled perfect forward secrecy using the list of recommended ciphers from a DigiCert article and added TLSv1.3 to the accepted protocols. Once this was done, I was still getting the errors. Further Googling led me to a few sites advising that I needed to update OpenSSL and cURL, however both were already on the latest versions.

It was then I realised I needed to figure out where the problem lay: the pod, nginx-ingress, or the Digital Ocean Load Balancer. I jumped back onto the pod and ran the cURL against localhost, which was successful. I then checked the nginx-ingress logs and noticed that none of the unsuccessful traffic was making it to my ingress controllers. The problem was with the Load Balancer.
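The isolation steps above can be sketched as a couple of kubectl commands (the pod name is the one from my cluster and the namespace and label selector assume a standard ingress-nginx install; yours will differ):

```shell
# 1. From inside the application pod, curl the site locally. The --resolve
#    flag forces the hostname to 127.0.0.1, so the request never leaves the
#    pod. If this succeeds, the pod itself is fine.
kubectl exec -it wordpress-ddb9594fc-qx9kh -- \
  curl -vk --resolve www.mywebhost.com:443:127.0.0.1 https://www.mywebhost.com/

# 2. Watch the ingress controller logs while reproducing the failure. If the
#    failing requests never appear here, they are dying at the Load Balancer.
kubectl logs -n ingress-nginx \
  -l app.kubernetes.io/name=ingress-nginx --tail=100 -f
```

Working from the inside out like this rules out each layer in turn, leaving the Load Balancer as the only suspect.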
With this in mind, I remembered I had to add some annotations to enable proxy-protocol, so I went to check the article that lists all of the DO-specific Load Balancer annotations. Near the bottom of the page I saw:
service.beta.kubernetes.io/do-loadbalancer-hostname Specifies the hostname used for the Service status.Hostname instead of assigning status.IP directly. This can be used to workaround the issue of kube-proxy adding external LB address to node local iptables rule, which will break requests to an LB from in-cluster if the LB is expected to terminate SSL or proxy protocol. See the examples/README for more detail.
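Applied to the Service, that annotation looks something like this (the Service name and namespace are illustrative; the hostname must be a DNS name that resolves to the Load Balancer's IP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    # Workaround: publish a hostname in status.Hostname instead of the LB IP,
    # so kube-proxy does not add an iptables rule that short-circuits
    # in-cluster traffic past the Load Balancer.
    service.beta.kubernetes.io/do-loadbalancer-hostname: "www.mywebhost.com"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```

With the hostname published, requests from pods actually traverse the Load Balancer, so proxy-protocol and TLS termination behave the same for in-cluster callers as for external clients.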
I added the annotation to my Load Balancer Service, and voilà, it worked! Problem solved. If you have issues with calls from your internal services to your Load Balancer not working and you are using proxy-protocol, be sure to check the Load Balancer annotations that are detailed by your Cloud Provider.