Angular: Integrating Micro-Frontends at Build Time Using Libraries and Runtime Using Angular Elements — Part-III

AngularEnthusiast · Published in Stackademic · 12 min read · Mar 1, 2024


In this story, we will deploy the container application and the micro-frontend developed using angular elements to a Kubernetes cluster and check if it's working as expected.

Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.

We have already published the library to npm. Hence, no changes are required for the micro-frontend created using the library.

You can check out the first two parts of this series at the links below.

Below is a short demo of the container application running on the Kubernetes Service Node Port 30547 and the micro-frontend running on Node Port 31508.

On application load, the micro-frontend's bundle has not yet been loaded.

When I click on the ToDos link in the navigation bar, the bundle is downloaded and the angular element shows up in the browser via lazy loading.

Let's begin.

We need to only make changes to the container application and the micro-frontend application created using angular elements.

  1. Nginx Configuration. The nginx webserver is running on port 80 and the angular application will be deployed to the path /usr/share/nginx/html.

Below is the nginx.config for both the applications.
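In essence, it is a standard single-page-application server block whose listen port comes from the containerPort variable. The sketch below captures the shape; every directive other than the listen line is illustrative.

server {
    # Substituted at container startup from the containerPort environment variable
    listen       ${containerPort};
    server_name  localhost;

    location / {
        root   /usr/share/nginx/html;
        index  index.html;
        # Send unknown paths back to index.html so Angular routing survives a refresh
        try_files $uri $uri/ /index.html;
    }
}

The nginx image's template step only substitutes variables that are actually defined in the container's environment, so nginx's own variables such as $uri are left untouched.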

Observe the use of variable containerPort in the file. These variables will be substituted at runtime with appropriate values using nginx templates.

In both the applications, I have added an nginx/nginx.config file in the root of the project.

2. Docker Configuration.

Again, in both the applications, I have added a docker/Dockerfile and a docker/startup.envsh in the root of the project. The contents of the Dockerfile in the two applications are almost the same, except for one minor change. The contents of startup.envsh are the same for both applications.

Below is the Dockerfile for the container application.
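In essence, it is a two-stage build: the Angular app is built in a Node stage named node, and the output is copied into an nginx image. The sketch below captures the shape; the base-image tags and build commands are illustrative, while the COPY lines match the ones discussed next.

# Stage 1: build the Angular application (the Node image tag is illustrative)
FROM node:18 AS node
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
# Assumption: the build outputs to dist/container-app
RUN npm run build

# Stage 2: serve the build output with nginx
FROM nginx:stable
# Copy the Angular build output into the nginx web root
COPY --from=node /app/dist/container-app /usr/share/nginx/html/
# nginx template; its variables are substituted into /etc/nginx/conf.d/default.conf at startup
COPY nginx/nginx.config /etc/nginx/templates/default.conf.template
# Startup script picked up by the nginx image entrypoint
COPY docker/startup.envsh /docker-entrypoint.d/
RUN chmod +x /docker-entrypoint.d/startup.envsh
EXPOSE 80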

Dockerfile for the micro-frontend.

Both files are exactly the same except for the COPY --from=node instruction. In the container application, we have specified

COPY --from=node /app/dist/container-app /usr/share/nginx/html/

In the micro-frontend, we have given it as

COPY --from=node /app/dist/angular-element-micro /usr/share/nginx/html/

Only the dist folder path differs between the two instructions.

Please observe the below line in the Dockerfile of both applications. We are copying the contents of nginx.config to a default.conf.template file within /etc/nginx/templates. The environment variables in the default.conf.template file will be substituted with the appropriate values internally using the envsubst command. The contents of default.conf.template, post substitution, are written to the default.conf file within the /etc/nginx/conf.d folder.

COPY nginx/nginx.config /etc/nginx/templates/default.conf.template

Let's now check the purpose and contents of the startup.envsh file. Its main purpose is to access the runtime environment variables we have set in the Docker container, as well as the contents of volumes mounted onto the container. I have set multiple keys via the Kubernetes ConfigMap. The purpose of setting these variables is to make environment-specific configuration available to the application at runtime. Our runtime environment-specific configuration will be available in assets/configurations/config.json for access in any component, service, or directive in the Angular application.

#!/bin/bash
echo "Starting container."
# Substitute the runtime environment variables into the config file read by the Angular app
envsubst < /usr/share/nginx/html/assets/configurations/config-temp.json > /usr/share/nginx/html/assets/configurations/config.json
echo "Runtime environment variables. target environment using volume=$(cat /config/env); target environment using environment variables: ${env}"
# Start nginx in the foreground so the container keeps running
nginx -g 'daemon off;'

Finally, we start the nginx webserver in the foreground using nginx -g 'daemon off;', which keeps the container running.

Where is startup.envsh executed? Observe the last few lines of the Dockerfile: the official nginx image's entrypoint picks up any script placed in /docker-entrypoint.d/ and runs it when the container starts.

COPY docker/startup.envsh /docker-entrypoint.d/
RUN chmod +x /docker-entrypoint.d/startup.envsh

Please do check the below story if you want a more detailed explanation of how to pass runtime configuration data to the Docker container and how we access it in the Angular application.

3. Kubernetes Configuration

I prefer to manage the Kubernetes objects using Helm Charts. Helm is a package manager for Kubernetes. Helm uses a packaging format called charts.

If you are new to Helm, you can install Helm using the below link.

I installed Helm globally via Chocolatey using the below command:

choco install kubernetes-helm

A few points about Helm charts before proceeding further:

  1. A Helm chart is organized as a collection of files inside of a directory. The directory name is the name of the chart. These files describe a related set of Kubernetes resources.
  2. Charts can be packaged into versioned archives to be deployed.

I have enclosed the configuration files for all the Kubernetes objects of the container application within a folder angular-micro-container inside the project root. This means the Helm chart name will be angular-micro-container.

I have enclosed the configuration files for all Kubernetes objects of the micro-frontend application within a folder angular-micro-front-end-1 inside the project root. This means the Helm chart name will be angular-micro-front-end-1.
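Assuming the conventional Helm layout, where templates are rendered from a templates folder, each chart is organized roughly as shown below for the container application; the micro-frontend chart mirrors it. The exact file placement is illustrative.

angular-micro-container/          the chart folder; its name is the chart name
├── Chart.yaml
├── values.yaml
└── templates/
    ├── common-config.yaml        ConfigMaps (section I below)
    ├── dev-config.yaml
    ├── uat-config.yaml
    ├── prod-config.yaml
    ├── deployment.yaml           Deployment (section III below)
    └── service.yaml              Service (section III below)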

Let's go through all the configuration files now.

I. ConfigMaps.

I have created 3 ConfigMaps for the dev, uat, and prod environments in both applications, and a 4th ConfigMap for configurations common to all environments. We reference the ConfigMap name in the deployment.yaml to indicate which one must be used.

Below are 2 examples of the 4 ConfigMaps I created for the container application.

dev-config.yaml

common-config.yaml

Below are 2 examples of the 4 ConfigMaps I created for the Micro-frontend application.

dev-config.yaml

common-config.yaml
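All four files follow the same pattern. The sketch below shows the general shape of a dev ConfigMap and the common ConfigMap; the metadata names and data keys here are illustrative, apart from the env key that startup.envsh reads.

apiVersion: v1
kind: ConfigMap
metadata:
  name: dev-config                # illustrative name, referenced from the deployment.yaml
data:
  env: dev                        # read by startup.envsh as ${env} and as /config/env via a volume mount
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: common-config             # illustrative name
data:
  containerPort: "80"             # assumption: the port substituted into the nginx template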

II. values.yaml and Chart.

We are dynamically passing values from the values.yaml and Chart.yaml to the service.yaml and deployment.yaml using the {{ }} interpolation syntax.

=> values.yaml in the container application.

=> values.yaml in the micro-frontend application.
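Both files expose the same keys; only the values differ. The sketch below shows the shape implied by the keys referenced in this story; the defaults and port numbers are illustrative.

environment: ""                   # target environment, overridden with --set environment=<dev|uat|prod>
pod:
  imageName: ""                   # Docker image, overridden with --set pod.imageName=<user>/<repo>:<tag>
services:
  app:
    dev: 8081                     # per-environment service ports (placeholder values);
    uat: 8082                     # only relevant when the LoadBalancer service type is used
    prod: 8083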

I have not used the values of the keys services.app.dev, services.app.uat, or services.app.prod while accessing either application, because we have used the NodePort service type and not LoadBalancer.

If you are using the LoadBalancer service type, then these ports can be used to access the application in the browser.

The Chart.yaml file is required for a chart.

The apiVersion field describes the chart API version; we are using Helm v3.

The name field is the Helm chart name. It should match the chart folder name.

The version field is the semantic version of the chart.

The type field describes the type of chart; it can be an application or a library chart.

The description field tells us the purpose of the chart.

=> Chart.yaml in the container application.

apiVersion: v3
name: angular-micro-container
version: 1.0
type: application
description: A helm chart for the angular app

=> Chart.yaml in the micro-frontend application

apiVersion: v3
name: angular-micro-front-end-1
version: 1.0
type: application
description: A helm chart for the angular app

III. Service and Deployment files

The service.yaml and the deployment.yaml files are the same in both applications. There is nothing hardcoded in the service.yaml or the deployment.yaml files; hence, they are reusable across environments.

deployment.yaml

In this file, please note that we have created multiple volumes that reference the ConfigMaps we created earlier. These volumes are then mounted onto the container via the volumeMounts field.

Ensure the below 2 points are satisfied to avoid errors.

=> The containerPort field in the deployment.yaml must match the port on which the nginx webserver is listening, i.e., in this example both must have the value 80. You can use any port, but ensure it is the same in both places.

=> spec.selector must match template.metadata.labels, otherwise the Deployment will be rejected by the Kubernetes API. We are identifying the Pods using the below key-value pair.

app: {{.Chart.Name}}-deployment-{{.Values.environment}}

Below is the deployment.yaml for the container application.

deployment.yaml for the micro-frontend application
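Both files render from the same template. The trimmed sketch below shows its general shape, consistent with the two points above; the ConfigMap names and the /config mount are illustrative.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Chart.Name }}-deployment-{{ .Values.environment }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {{ .Chart.Name }}-deployment-{{ .Values.environment }}
  template:
    metadata:
      labels:
        app: {{ .Chart.Name }}-deployment-{{ .Values.environment }}   # must match spec.selector above
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: {{ .Values.pod.imageName }}
          ports:
            - containerPort: 80                          # must match the nginx listen port
          envFrom:
            - configMapRef:
                name: {{ .Values.environment }}-config   # illustrative name, supplies ${env} and ${containerPort}
            - configMapRef:
                name: common-config                      # illustrative name
          volumeMounts:
            - name: env-config
              mountPath: /config                         # startup.envsh reads /config/env from here
      volumes:
        - name: env-config
          configMap:
            name: {{ .Values.environment }}-config       # illustrative name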

service.yaml

This file is the same for both applications.

You will face unexpected errors if the wrong Pods are selected in the selector field. Ensure you are selecting the desired Pods using the key-value pairs provided under spec.selector in the deployment.yaml.

Also ensure the value of targetPort in the service.yaml matches the port on which the nginx webserver is listening.
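Keeping those two points in mind, the shared service.yaml has roughly the shape sketched below; the values key driving the port is illustrative.

apiVersion: v1
kind: Service
metadata:
  name: {{ .Chart.Name }}-service-{{ .Values.environment }}
spec:
  type: NodePort
  selector:
    app: {{ .Chart.Name }}-deployment-{{ .Values.environment }}   # selects the Pods created above
  ports:
    - port: {{ index .Values.services.app .Values.environment }}  # assumption: 8081 / 8086 in the demo
      targetPort: 80                                              # the port nginx listens on
      # nodePort is left for Kubernetes to assign from 30000-32767 (30547 and 31508 in the demo)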

Why do we need a service?

Kubernetes assigns Pods private IPs as soon as they are created on a node within the cluster. These IP addresses are not permanent. If you delete or recreate a Pod, it gets a new IP address, different from the one it had before. If the IP address keeps changing, which one would the client keep track of and connect to?

A Service helps a client reach one (or more) of the Pods that can fulfill its request. The Service can be reached at the same place, at any point in time. So it serves as a stable destination that the client can use to get access to what it needs. The client doesn’t have to worry about the Pods’ dynamic IP addresses anymore.

What is the Service Type?

In Kubernetes, there are three commonly used Service types: ClusterIP, NodePort, and LoadBalancer. These Services provide different ways to make Pods accessible to other Pods within the cluster, as well as to clients outside of it.

We are using the NodePort Service type in this example to enable clients external to the cluster to access the application running inside a container within a Pod in the cluster. Note that the NodePort Service type does not provide any load balancing of traffic amongst the nodes in the cluster. If you are looking for load balancing of traffic amongst the nodes, go for the LoadBalancer Service type.

4. Jenkins Declarative Pipeline

I have created 2 pipelines, one for each application, to automate the build and the deployment to dev and higher environments.

These are the steps I have followed:

  1. Clone the specific branch of the git repository. Branch name provided as a parameter to the pipeline.
  2. Build the project, create the docker image and push it to the DockerHub Registry.
  3. Step 2 is done only once, when deploying to the dev environment. For deployments to higher environments, we provide the image tag as a pipeline parameter, and the Docker image with that tag is pulled for deployment.
  4. We are creating a separate Kubernetes namespace for each environment. For example, when deploying to the dev environment, we create a separate Kubernetes namespace (if it doesn't already exist) and set it as the current context.
  5. Finally, we use the helm package and helm upgrade commands to deploy the application to the Kubernetes cluster. Please observe that we are overriding the values of certain keys within the values.yaml via the helm upgrade command below.
bat "helm upgrade ${releaseName} ./${releaseName}-${buildNumber}.tgz --install --debug \
--values ./${releaseName}/values.yaml \
--set pod.imageName=${DOCKER_HUB_CRED_USR}/${repositoryName}:${buildNumber} \
--set environment=${env.environment}"

In this command, we have updated the value of the key pod.imageName with the Docker image we just built (or pulled from the Docker registry). In the values.yaml, the key pod.imageName has an empty string as its default value, which we override while executing helm upgrade.

We have also updated the value of the key environment with the target environment we want to deploy the application to. In the values.yaml, the key environment likewise defaults to an empty string and is overridden while executing helm upgrade.

Both applications use similar pipeline definitions, except for minor changes in the namespace, the Helm release name, and the git repository used.

I have used the same name for the Helm chart and the Helm release to avoid confusion.

Below is the pipeline definition for the micro-frontend application.

Below is the pipeline definition for the container application.
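Both definitions follow the outline from the numbered list above. The condensed sketch below captures that outline; the stage names, parameter names, repository URL, and credential ID are illustrative placeholders, and the helm upgrade step is the one shown earlier.

pipeline {
    agent any
    parameters {
        string(name: 'branch', defaultValue: 'main', description: 'Git branch to build')
        string(name: 'environment', defaultValue: 'dev', description: 'Target environment: dev, uat or prod')
        string(name: 'imageTag', defaultValue: '', description: 'Existing image tag (used for non-dev deployments)')
    }
    environment {
        DOCKER_HUB_CRED = credentials('docker-hub')     // illustrative credential ID
        RELEASE_NAME    = 'angular-micro-container'     // chart and release name; differs per application
    }
    stages {
        stage('Checkout') {
            steps {
                git branch: params.branch, url: 'https://github.com/<user>/<repo>.git'   // placeholder URL
            }
        }
        stage('Build and push image') {
            when { expression { params.environment == 'dev' } }
            steps {
                // docker login with the DOCKER_HUB_CRED credentials is assumed to happen here
                bat "docker build -t ${env.DOCKER_HUB_CRED_USR}/${env.RELEASE_NAME}:${env.BUILD_NUMBER} -f docker/Dockerfile ."
                bat "docker push ${env.DOCKER_HUB_CRED_USR}/${env.RELEASE_NAME}:${env.BUILD_NUMBER}"
            }
        }
        stage('Deploy') {
            steps {
                script {
                    // dev uses the image built above; higher environments reuse the tag passed as a parameter
                    def tag = (params.environment == 'dev') ? env.BUILD_NUMBER : params.imageTag
                    bat "kubectl create namespace ${params.environment} --dry-run=client -o yaml | kubectl apply -f -"
                    bat "kubectl config set-context --current --namespace=${params.environment}"
                    bat "helm package ./${env.RELEASE_NAME} --version ${tag}"
                    bat "helm upgrade ${env.RELEASE_NAME} ./${env.RELEASE_NAME}-${tag}.tgz --install --debug " +
                        "--values ./${env.RELEASE_NAME}/values.yaml " +
                        "--set pod.imageName=${env.DOCKER_HUB_CRED_USR}/${env.RELEASE_NAME}:${tag} " +
                        "--set environment=${params.environment}"
                }
            }
        }
    }
}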

How do you debug issues?

Use the kubectl get all command to get the list of Kubernetes objects created or updated.

NAME                                                           READY   STATUS    RESTARTS   AGE
pod/angular-micro-container-deployment-dev-7bfdb77dd8-7fqvk     1/1     Running   0          26m
pod/angular-micro-front-end-1-deployment-dev-6d7dd8894f-57mh9   1/1     Running   0          26m

NAME                                             TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/angular-micro-container-service-dev      NodePort   10.104.7.165    <none>        8081:30547/TCP   26m
service/angular-micro-front-end-1-service-dev    NodePort   10.110.54.158   <none>        8086:31508/TCP   2d3h

NAME                                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/angular-micro-container-deployment-dev      1/1     1            1           26m
deployment.apps/angular-micro-front-end-1-deployment-dev    1/1     1            1           2d3h

NAME                                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/angular-micro-container-deployment-dev-7bfdb77dd8      1         1         1       26m
replicaset.apps/angular-micro-front-end-1-deployment-dev-6d7dd8894f    1         1         1       26m
replicaset.apps/angular-micro-front-end-1-deployment-dev-d6c988f7f     0         0         0       2d3h

Check the status of your Pod. If it is not in the Running state, execute the below command to understand the events that have prevented the container from running.

kubectl describe pod <pod-name>

If the container is in the Running state but your application is not loading in the browser for some reason, we can shell into the Pod and try accessing the application to look for possible issues.

Take the container application as an example:

=> First, get hold of the Pod IP and the Node IP. I have executed the below command to get more information on the container application Pod.

C:\Users\User>kubectl get pod/angular-micro-container-deployment-dev-7bfdb77dd8-7fqvk -o yaml

You will get a lot of useful information about the Pod.

Details about the container running within the Pod:

At the end of the YAML, you can find the Pod IP and the Node IP.

So let's note them down. We already have the ports and the Cluster IP from the kubectl get all output.

Node IP: 192.168.65.3, Node Port: 30547

Pod IP: 10.1.0.200, Pod Port: 80 (the same port on which the container is listening)

Cluster IP: 10.104.7.165, Cluster IP Port: 8081

Let's shell into the Pod now and access the application via the Pod IP and port.

I have used the below command to shell into the Pod.

C:\Users\User>kubectl exec pod/angular-micro-container-deployment-dev-7bfdb77dd8-7fqvk -it sh

After that I try to hit http://10.1.0.200:80 and I instantly get the index.html of the container application.

Let's repeat the same experiment for the Node IP and port. I try to access the container app via http://192.168.65.3:30547 and again obtain the index.html of the container application.

Finally, for the Cluster IP and port, I hit http://10.104.7.165:8081 and obtain the same index.html file.

This shows that we have configured the Kubernetes objects correctly. You can repeat the same checks for the micro-frontend application as well.

Below are the git repositories for the container and the micro-frontend application.

If you found this story useful, do check out the 4th part of this series below, where we access static assets from the micro-frontends within the container application.
