25 Jan 2024 · Software Engineering

    Securing Front-end Applications in Kubernetes With SSL/TLS

    14 min read

    If security was ever important, it is even more so today. Imagine sending a confidential letter through a public mail system where anyone can read or alter its content.

That is what happens when we transmit data online without securing it. That is where SSL/TLS comes in; you can think of SSL/TLS as the security envelope of the digital world.

    With the rapid rise of cloud-native applications, microservices architecture, and Software as a Service (SaaS), ensuring secure communication between front-end applications and users is more critical than ever. This is where SSL/TLS (Secure Sockets Layer/Transport Layer Security) comes into play.

    In this article, we’ll explore how to secure front-end applications in a Kubernetes environment, using SSL/TLS.

Implementing SSL/TLS is not only a security best practice but also a necessity for safeguarding your data. It also ensures the integrity of the communication between front-end and back-end components.

    Let’s begin by understanding what SSL/TLS is and why it is so important.

    SSL/TLS: A Foundation of Security

SSL (Secure Sockets Layer) and its upgraded version, TLS (Transport Layer Security), are cryptographic protocols that establish secure communication over the internet. In other words, they provide encryption and authentication mechanisms that enable sensitive data to travel securely between a client (e.g., a web browser) and a server. SSL is the older version, while TLS is the newer and more secure iteration.

For the purposes of this article, we’ll use the two terms interchangeably and refer to them collectively as SSL/TLS.

    How does it work?

The load balancer typically sits outside the cluster and directs internet traffic to the Ingress controller running inside it. The Ingress controller (or edge proxy) accepts the traffic, reads the routing information in the Ingress resource, and then directs the traffic to the appropriate Services and Pods based on the requests. The Ingress controller also continuously monitors Pods and automatically updates its load-balancing rules whenever Pods are added or removed.
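As a quick illustration (the hostname and Service name here are placeholders, not resources we will create, and the API version matches the manifests used later in this article), an Ingress resource simply declares which host and path should be routed to which Service:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com          # requests for this host...
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service   # ...are routed to this Service
              servicePort: 80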

    Prerequisites:

    Before we get into configuring SSL/TLS for our front-end applications in Kubernetes, these are some assumptions to keep in mind:

    1. You have a functioning Kubernetes cluster.
2. You have Helm installed.
3. You have a solid understanding of the Kubernetes Ingress concept.
4. You have the kubectl tool set up on your system.
5. You own or have control over a domain name.
6. We will need SSL/TLS certificates. If you already have them, you can skip the certificate request steps below. If you don’t, don’t worry; we’ll use Let’s Encrypt to obtain them.

1. Understanding SSL/TLS in Kubernetes

Before we go any further, it is necessary to understand the role of SSL/TLS in Kubernetes:

• Encryption: SSL/TLS protocols ensure that the communication between your front-end applications and the outside world remains secure and protected. They encrypt the data exchanged between the client and server, thereby preventing unauthorized access and data tampering.
• Authentication: Beyond encryption, SSL/TLS also verifies the authenticity of the server. It ensures that the client communicates with the intended server and not an imposter, much like sending mail to a verified physical address.
• Data Integrity: SSL/TLS guarantees that the data remains unaltered during transit, much like a wax seal showing that a letter has not been tampered with.

Together, these properties elevate the overall security of your applications, build trust with your users, and help you meet compliance requirements by safeguarding sensitive data.

    2. Deploying our sample front-end application

I will be using my Dog Pic website as the sample front-end application. Below is our Kubernetes manifest:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: doggie-web
      labels:
        app.kubernetes.io/name: doggie-web
    spec:
      replicas: 1
      selector:
        matchLabels:
          app.kubernetes.io/name: doggie-web
      template:
        metadata:
          labels:
            app.kubernetes.io/name: doggie-web
        spec:
          containers:
            - name: doggie-container
              image: learncloudnative/dogpic-service:0.1.0
              ports:
                - containerPort: 3000
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: doggie-service
      labels:
        app.kubernetes.io/name: doggie-web
    spec:
      selector:
        app.kubernetes.io/name: doggie-web
      ports:
        - port: 3000
          name: http

We will go ahead and save the above YAML file as dog-pic-application.yaml and run the command below to create the Deployment and Service:

    kubectl apply -f dog-pic-application.yaml
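To confirm that the Deployment and Service were created, we can query them by the label we set in the manifest (a quick sanity check; output is not shown here):

# List the Deployment, Pods, and Service for the Dog Pic app
kubectl get deployment,pods,service -l app.kubernetes.io/name=doggie-web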

    2.1 Deploying cert-manager

We’ll proceed by deploying cert-manager inside our Kubernetes cluster. As the name suggests, cert-manager specializes in handling certificates. Whenever we need a new certificate or the renewal of an existing one, cert-manager can help us with that.

    Our first step is to create a namespace for deploying the cert-manager in.

    $ kubectl create ns cert-manager
    namespace/cert-manager created

    Next, we will add the jetstack Helm repository and refresh the local repository cache:

First, we run the command below to add it:

    $ helm repo add jetstack https://charts.jetstack.io
    "jetstack" has been added to your repositories

    Next, we will update our local helm repo:

    $ helm repo update
    Hang tight while we grab the latest from your chart repositories...
    ...Successfully got an update from the "jetstack" chart repository
    Update Complete. ⎈ Happy Helming!⎈

We can now go ahead and install cert-manager by running the following Helm command:

    helm install \
      cert-manager jetstack/cert-manager \
      --namespace cert-manager \
      --version v0.15.1 \
      --set installCRDs=true

In the output of the command, you will notice a message confirming that cert-manager was deployed successfully.
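As an extra sanity check, we can also list the pods in the cert-manager namespace; you should see the cert-manager, cainjector, and webhook pods in the Running state (exact pod names will differ):

kubectl get pods -n cert-manager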

However, before we can use it, we need to create and configure either an Issuer or a ClusterIssuer resource. This resource represents a CA (Certificate Authority) and allows cert-manager to issue certificates.

A ClusterIssuer operates at the cluster level, while an Issuer operates at the namespace level. A ClusterIssuer can be used to issue certificates for any application in the cluster, regardless of which namespace it is in. An Issuer, on the other hand, can only be used to issue certificates for applications in the namespace in which it is created.
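To make the distinction concrete, here is a minimal sketch of a namespaced Issuer (the name and namespace are placeholders; we will not use this resource in the rest of the article):

apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: my-app        # only applications in this namespace can reference this Issuer
spec:
  acme:
    email: hello@thedomain.com
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            class: nginx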

cert-manager supports multiple issuer types. We’ll be configuring an ACME issuer since Let’s Encrypt relies on the ACME protocol. The ACME protocol offers various challenge mechanisms to establish and confirm domain ownership.

    2.2 Challenges

For the ACME protocol, cert-manager supports two challenge types for verifying domain ownership: the HTTP-01 challenge and the DNS-01 challenge. You can read more about them on the Let’s Encrypt website.

• The HTTP-01 challenge is the most common challenge. It requires you to place a file containing a token at a specific location on your server, for example http://[my-domain]/.well-known/acme-challenge/[token-file].
• The DNS-01 challenge requires you to modify a DNS record for your domain. To pass this challenge, we need to create a TXT DNS record with a specific value under the domain we want to claim. It only makes sense to use the DNS-01 challenge if your domain registrar provides an API for updating DNS records automatically. You can view the complete list of service providers that have integrated Let’s Encrypt DNS validation.

    We will be using the HTTP-01 challenge since it is more generic than the DNS-01 challenge, which depends on your domain registrar.
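To make the HTTP-01 flow concrete, this is roughly the request the CA makes during validation; the domain and token below are placeholders, and the solver we deploy later is what responds with the expected key:

# Let's Encrypt fetches the token file over plain HTTP on port 80
curl http://my-domain.example.com/.well-known/acme-challenge/<token>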

Now let us deploy the ClusterIssuer we will be using. Make sure you replace the email with your own email address.

    apiVersion: cert-manager.io/v1alpha2
    kind: ClusterIssuer
    metadata:
      name: letsencrypt-prod
    spec:
      acme:
        email: hello@thedomain.com
        server: https://acme-v02.api.letsencrypt.org/directory
        privateKeySecretRef:
          name: letsencrypt-prod
        solvers:
          - http01:
              ingress:
                class: nginx
            selector: {}
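Save the manifest above (the filename cluster-issuer.yaml below is just an example) and apply it:

kubectl apply -f cluster-issuer.yaml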

Before we go any further, let us run the command below to make sure that our ClusterIssuer has been created and that our ACME account was registered with the email we provided in the ClusterIssuer above.

    $ kubectl describe clusterissuer
    
    ...
    Status:
      Acme:
        Last Registered Email:  hello@thedomain.com
        Uri:                    https://acme-v02.api.letsencrypt.org/acme/acct/89498526
      Conditions:
        Last Transition Time:  2023-10-22T20:36:04Z
        Message:               The ACME account was registered with the ACME server
        Reason:                ACMEAccountRegistered
        Status:                True
        Type:                  Ready
    Events:                    <none>

Similarly, if we run the command kubectl get clusterissuer, we will see an indication that the ClusterIssuer is ready:

    $ kubectl get clusterissuer
    NAME               READY   AGE
    letsencrypt-prod   True    5m30s

    Later on, once we have deployed our Ingress controller and set up the DNS record on our domain, we will then create a Certificate resource.

    2.3 Ambassador Gateway

We will install the Ambassador Gateway, an open-source, Kubernetes-native API gateway for microservices. We will use it as a reverse proxy to manage external access to services within our Kubernetes cluster.

To install the Ambassador Gateway, let us run the commands below. The first one installs the CRDs (custom resource definitions).

    $ kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-crds.yaml

    The second command will install RBAC (Role-Based Access Control) resources and will create the Ambassador deployment.

    $ kubectl apply -f https://www.getambassador.io/yaml/ambassador/ambassador-rbac.yaml
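Before continuing, it is worth confirming that the Ambassador pods are up. They carry the service: ambassador label, which is also the selector we will use in the Service below (pod names will vary):

kubectl get pods -l service=ambassador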

    Let us now create our LoadBalancer service that will expose port 80 for HTTP traffic and port 443 for HTTPS.

    apiVersion: v1
    kind: Service
    metadata:
      name: ambassador
    spec:
      type: LoadBalancer
      ports:
        - name: http
          port: 80
          targetPort: 8080
        - name: https
          port: 443
          targetPort: 8443
      selector:
        service: ambassador

As we did before, save the YAML file above as ambassador-svc.yaml. Then run the command below:

    $ kubectl apply -f ambassador-svc.yaml

Please note that deploying the Service above will create a load balancer in your cloud provider’s account.

Let us now check that our ambassador service has been created:

    $ kubectl get svc
    NAME               TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
    ambassador         LoadBalancer   10.0.78.66    51.143.120.54   80:31365/TCP     97s
    ambassador-admin   NodePort       10.0.65.191   <none>          8877:30189/TCP   4m20s
    kubernetes         ClusterIP      10.0.0.1      <none>          443/TCP          30d 

    Great job so far.

Now that we have an external IP address, we can go to the website where we registered our domain and create a DNS record that points the domain to this external IP. This allows us to enter our domain (e.g. http://our-domain.com) in the browser and have it resolve to the IP address of the ingress controller inside the cluster.

I will be using my domain k8s4today.com and will set up a subdomain, dogs.k8s4today.com, to point to the IP address of our load balancer (51.143.120.54 in the example output above) using an A record. Irrespective of where you registered your domain, you should be able to update its DNS records. Refer to your domain registrar’s documentation for guidance on how to do that.
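Once the record is in place, we can verify that it has propagated before moving on (replace the name with your own subdomain; the returned IP should match the load balancer’s external IP):

dig +short dogs.k8s4today.com
# or: nslookup dogs.k8s4today.com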

We will now set up our Ingress resource so we can reach the Dog Pic website we deployed on our subdomain (make sure you replace dogs.k8s4today.com with your own domain or subdomain name):

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        kubernetes.io/ingress.class: ambassador
    spec:
      rules:
        - host: dogs.k8s4today.com
          http:
            paths:
              - backend:
                  serviceName: doggie-service
                  servicePort: 3000
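As before, save the manifest (for example as my-ingress.yaml) and apply it:

kubectl apply -f my-ingress.yaml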

    With our ingress deployed, the Dog Pic website can be accessed at http://dogs.k8s4today.com.

    2.4 Requesting a certificate

To obtain a new certificate, we must first create a Certificate resource. This resource contains the issuer reference (the ClusterIssuer we defined before), the DNS domains for which we want certificates (dogs.k8s4today.com), and the name of the Secret where the certificate will be stored.

    apiVersion: cert-manager.io/v1alpha2
    kind: Certificate
    metadata:
      name: ambassador-certs
      namespace: default
    spec:
      secretName: ambassador-certs
      issuerRef:
        name: letsencrypt-prod
        kind: ClusterIssuer
      dnsNames:
        - dogs.k8s4today.com

Make sure you replace dogs.k8s4today.com with your own domain name.

    Now we will go ahead and save the YAML above in cert.yaml and create the certificate using:

kubectl apply -f cert.yaml

This command will create the Certificate resource. Now, if we list the pods, we will notice a new pod called cm-acme-http-solver:

    $ kubectl get po
    NAME                          READY   STATUS    RESTARTS   AGE
    ambassador-9db7b5d76-jlcdg    1/1     Running   0          22h
    ambassador-9db7b5d76-qcwgk    1/1     Running   0          22h
    ambassador-9db7b5d76-xsfw4    1/1     Running   0          22h
    cm-acme-http-solver-qzh6l     1/1     Running   0          25m
    doggie-web-7bf547bd54-f2pff   1/1     Running   0          22h

This pod was created by cert-manager to serve the token file, as described in the Challenges section above, so that the domain can be validated.

We can also look at the logs from the pod to see the values it expects for the challenge:

    $ kubectl logs cm-acme-http-solver-qzh6l
    I0622 20:39:26.712391       1 solver.go:39] cert-manager/acmesolver "msg"="starting listener"  "expected_domain"="dogs.k8s4today.com" "expected_key"="iqUZlG9v1K8czpAKaTpLfL278piwf-mN4VZNvuwD0Ks.xonKHFvEQg2Ox_mI0cPM7UpCUHfu6H4aKtRcdrpiLik" "expected_token"="iqUZlG9v1K8czpAKaTpLfL278piwf-mN4VZNvuwD0Ks" "listen_port"=8089

However, because this pod is not exposed, Let’s Encrypt cannot reach it to perform the challenge. As a result, we must expose the pod via our ingress. This entails creating a Kubernetes Service and updating the routing to point to the pod. We will use Ambassador’s Mapping resource for this: it routes requests with the prefix /.well-known/acme-challenge/ to the Kubernetes Service that fronts the solver pod.

    apiVersion: getambassador.io/v2
    kind: Mapping
    metadata:
      name: challenge-mapping
    spec:
      prefix: /.well-known/acme-challenge/
      rewrite: ''
      service: challenge-service
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: challenge-service
    spec:
      ports:
        - port: 80
          targetPort: 8089
      selector:
        acme.cert-manager.io/http01-solver: 'true'

We will save the YAML above as challenge.yaml and deploy it using kubectl apply -f challenge.yaml. Cert-manager will then retry the challenge and issue the certificate.
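If the certificate does not become ready right away, cert-manager’s intermediate resources are a good place to look (a troubleshooting sketch; the exact resource names in your cluster will differ):

# High-level status of the certificate request
kubectl describe certificate ambassador-certs

# The ACME order and challenge resources created by cert-manager
kubectl get orders,challenges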

    Now we can run kubectl get cert and confirm the READY column shows True as below:

    $ kubectl get cert
    NAME               READY   SECRET             AGE
    ambassador-certs   True    ambassador-certs   35m
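We can also confirm that the issued certificate and key were stored in the Secret we named in the Certificate resource; it should be of type kubernetes.io/tls and contain tls.crt and tls.key entries:

kubectl describe secret ambassador-certs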

To recap, here are the steps involved in requesting a certificate:

    1. Create the Certificate resource to request the certificate.
2. Cert-manager creates the http-solver pod (exposed via the challenge-service we created).
3. Cert-manager uses the issuer referenced in the Certificate to request certificates for our dnsNames from the authority (Let’s Encrypt).
4. The authority (CA) issues the http-solver challenge to prove that we own the domains and verifies that the challenge is solved (by downloading the file from /.well-known/acme-challenge/).
5. The issued certificate and key are stored in the Secret referenced by the Certificate resource (secretName).

Below is a diagram visualizing the process:

2.5 Configuring TLS in Ingress

To secure our Ingress, we have to specify the Secret that contains the certificate and the private key. We defined the ambassador-certs secret name in the Certificate resource we created earlier.

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: my-ingress
      annotations:
        kubernetes.io/ingress.class: ambassador
    spec:
      tls:
        - hosts:
            - dogs.k8s4today.com
          secretName: ambassador-certs
      rules:
        - host: dogs.k8s4today.com
          http:
            paths:
              - path: /
                backend:
                  serviceName: doggie-service
                  servicePort: 3000

    Under resource specification (spec), we use the tls key to specify the hosts and the secret name where the certificate and private key are stored.

We will save the above YAML as ingress-tls.yaml and apply it with:

    kubectl apply -f ingress-tls.yaml

    3. Verification

We can now test our app. If we navigate to our subdomain using HTTPS (e.g. https://dogs.k8s4today.com), we will see that the connection is secure, using a valid certificate from Let’s Encrypt.
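For a quick check from the command line, we can also inspect the certificate the server presents (replace the domain with your own; openssl is used here only to print the certificate details):

# Show the response headers and the TLS handshake details
curl -vI https://dogs.k8s4today.com

# Print the certificate subject, issuer, and validity dates
openssl s_client -connect dogs.k8s4today.com:443 -servername dogs.k8s4today.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates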

    Conclusion

    Configuring SSL/TLS for your Kubernetes-deployed applications is much more than just encryption. It is about ensuring the integrity of data, establishing trust, and paving a smooth road for user interactions. Some things to keep in mind are to prioritize safety, renew certificates when necessary, and maintain a secure environment.

Today, we learned how to do this by protecting our Ingress gateway. We did that using the Ambassador Gateway; however, the same process can be followed with any other Kubernetes ingress controller.
