
Containerizing Node.js Event-Driven Microservices with Docker & Kubernetes

In an era defined by rapid development and massive deployments, the pressure for high availability is relentless. Without the right tools, "daily developer life" quickly spirals into a cycle of maintenance nightmares, broken syncs, and missed deadlines.

To stay competitive in 2026, we need more than just code - we need isolation, portability, and infinite scalability. This is where the power of a Kubernetes orchestrator paired with Docker becomes non-negotiable. By containerising our environment, we transform chaotic deployments into a streamlined, predictable engine.

In this guide, we will build a backend payment system made up of three Node.js + TypeScript microservices - Product, Order, and Payment - containerise each with Docker, and orchestrate them with Kubernetes.

Quick Summary

  • Docker packages each microservice with everything it needs to run, ensuring consistent behavior across environments. Kubernetes adds production-grade orchestration on top.
  • Each service needs a Dockerfile and .dockerignore file at its root directory to define how it builds and what to exclude from the image.
  • Images are built locally and pushed to Docker Hub so Kubernetes can pull and deploy them via declarative YAML configuration files.
  • Kubernetes YAML files define Deployments and Services for each microservice, stored in an infra/k8s/ folder at the project root.
  • Use kubectl get pods, get services, and get deployments to verify your cluster state, and kubectl logs for real-time debugging of individual pods.

Overview

Imagine we’re building a backend payment system that is made up of three services:

  • Product Service – manages products and inventory
  • Order Service – creates and tracks orders
  • Payment Service – handles payments

Each service is written in Node.js with TypeScript and runs independently.
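
To make this concrete, here is a minimal sketch of what the Product service's core logic might look like. The names (Product, createProduct, reserveStock) are illustrative, not taken from a real codebase, and the HTTP layer each service would expose is omitted:

```typescript
// Minimal in-memory product store for the Product service (sketch).
interface Product {
  id: string;
  name: string;
  stock: number;
}

const products = new Map<string, Product>();

// Register a product with an initial stock level.
function createProduct(id: string, name: string, stock: number): Product {
  const product: Product = { id, name, stock };
  products.set(id, product);
  return product;
}

// Reserve stock for an order; returns false if the product is
// missing or has insufficient stock.
function reserveStock(id: string, quantity: number): boolean {
  const product = products.get(id);
  if (!product || product.stock < quantity) return false;
  product.stock -= quantity;
  return true;
}
```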

Why Docker + Kubernetes?

At first, managing a few microservices feels simple: you run each service locally and connect them together. But as you add more services, the setup quickly becomes messy. Every new service adds more configuration, more processes to manage, and more opportunities for failures before development even begins.

Docker solves this by packaging each service with everything it needs to run into containers. This ensures consistent behavior across environments, making it ideal for running a single microservice in an isolated space. However, once your app grows into multiple services, Docker alone does not provide production-grade orchestration.

What Kubernetes Adds

That’s where Kubernetes comes in. It provides the orchestration layer that keeps containers running reliably at scale, offering:

  • Automatic restarts for crashed services
  • Traffic routing between services
  • Scaling services up or down based on demand
  • Rolling updates for zero-downtime deployments

In short, it enables you to operate multi-service applications with minimal manual intervention.
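
To make the "scales on demand" point concrete, here is a sketch of a HorizontalPodAutoscaler targeting the Product deployment we create later in this guide. The name and thresholds are illustrative, and CPU-based autoscaling also requires the cluster's metrics server to be installed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: product-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: product
  minReplicas: 1
  maxReplicas: 5             # illustrative upper bound
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above ~70% CPU
```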

Now that we’ve got that out of the way, let’s get started!

Prerequisites

  • Node.js (Download Node.js®)
  • Docker Desktop (Get Started | Docker)
  • Kubernetes (Download Kubernetes)

1. Creating Docker Files

In the root directory of each service, create two files:

  • Dockerfile
  • .dockerignore

Dockerfile

The Dockerfile contains instructions that Docker uses to automatically build a Docker image. The .dockerignore file excludes files from the Docker build context.

Dockerfile example:

FROM node:<version>-alpine

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY ./ ./

CMD ["npm", "start"]

Replace <version> with your actual Node.js version (e.g., 20).
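
The Dockerfile above assumes npm start can run the TypeScript sources directly (for example via ts-node). A common alternative, sketched here under the assumption of a "build" script that compiles to dist/, is a multi-stage build that ships only compiled JavaScript and production dependencies, which keeps the image smaller:

```dockerfile
# Stage 1: build -- install all dependencies and compile TypeScript
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build            # assumes a "build" script emitting dist/

# Stage 2: runtime -- production dependencies and compiled JS only
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev
COPY --from=build /app/dist ./dist
CMD ["node", "dist/index.js"]
```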

.dockerignore

.dockerignore example:

node_modules

For simplicity, we exclude only node_modules. The Dockerfile already runs npm install, so there is no need to copy locally installed dependencies into the image; they are installed fresh during the build.

2. Build & Push the Docker Image

Once the Dockerfile is ready, the next step is to build the Docker image and push it to Docker Hub so Kubernetes can pull and deploy it. (It is possible to deploy without publishing the image, but for the sake of simplicity we will keep it this way.)

Make sure you know your Docker username before proceeding.

Build the Image

We build an image of our service using:

docker build -t <username>/<service>-srv:latest .

<username> – Your Docker Hub username

<service> – Your service name

Push the Image

After the image is successfully built, push it to Docker Hub using:

docker push <username>/<service>-srv:latest

After completing these steps:

  • Your image is stored in Docker Hub
  • Kubernetes can pull it for deployment
  • You can reference it in your Kubernetes YAML files
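
Repeating the build-and-push pair by hand for all three services is error-prone. A small shell loop keeps the tags consistent; this dry run uses a hypothetical Docker Hub username, acme, and only prints the commands:

```shell
# Dry run: print the build/push commands for each service.
# Replace "acme" with your Docker Hub username and drop the echo to execute.
for svc in product order payment; do
  echo "docker build -t acme/${svc}-srv:latest ./${svc}"
  echo "docker push acme/${svc}-srv:latest"
done
```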

3. Create Kubernetes Configuration Files

Now that the Docker images are built and pushed to Docker Hub, the next step is to create Kubernetes YAML files that define and manage the microservice deployments.

Create an infra/k8s folder inside the project root to store all Kubernetes configuration files. Your project structure should look something like this:

microservice-app/
├── product/
│   ├── index.js
│   ├── package.json
│   ├── Dockerfile
│   └── ...
├── order/
│   ├── index.js
│   ├── package.json
│   ├── Dockerfile
│   └── ...
├── payment/
│   ├── index.js
│   ├── package.json
│   ├── Dockerfile
│   └── ...
└── infra/
    └── k8s/                      # Your Kubernetes YAMLs go here
        ├── product-deployment.yaml
        ├── order-deployment.yaml
        └── payment-deployment.yaml

product-deployment.yaml example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: product
  template:
    metadata:
      labels:
        app: product
    spec:
      containers:
        - name: product
          image: <username>/product-srv:latest
          ports:
            - containerPort: <port>
---
apiVersion: v1
kind: Service
metadata:
  name: product-srv
spec:
  selector:
    app: product
  ports:
    - port: <port>
      targetPort: <port>

<username> – Your Docker Hub username

<port> – Your service port
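
Note that a Service without an explicit type defaults to ClusterIP, which is reachable only from inside the cluster. If you want to call the Product service from your machine while testing, one option is a NodePort service; this is a sketch, and the port values are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: product-srv-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: product
  ports:
    - port: <port>
      targetPort: <port>
      nodePort: 30001          # must fall in the 30000-32767 range
```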

The Kubernetes configuration file for the Product service is now complete, and we can deploy it with:

kubectl apply -f infra/k8s/product-deployment.yaml

Note that we should repeat steps 2 and 3 for the Order and Payment services as well.

4. Testing

Now let’s verify that everything we created is running correctly.

Get all created pods

kubectl get pods

Example output:

(screenshot of example output omitted)

If the pod names start with your deployment names, you're good to go.

Pod names follow the format [deployment-name]-[replicaset-hash]-[pod-id], which Kubernetes uses to track which ReplicaSet and Deployment each pod belongs to.
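
Given that naming scheme, the deployment name can be recovered from a pod name with plain shell parameter expansion (the pod name below is a made-up example):

```shell
pod="product-7d9f8b6c5d-x2k4q"   # hypothetical pod name from `kubectl get pods`
deploy="${pod%-*-*}"             # strip the trailing replicaset hash and pod id
echo "$deploy"                   # → product
```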

Get all created services

kubectl get services

Example output:

(screenshot of example output omitted)

Get all created deployments

kubectl get deployments

Example output:

(screenshot of example output omitted)

Congratulations, your deployments are now up and running! 🎉

Applying Deployments

To deploy or update a service, use:

kubectl apply -f infra/k8s/product-deployment.yaml

You can also apply everything in the infra/k8s/ folder at once:

kubectl apply -f infra/k8s/

Viewing Logs

To view real-time logs from a pod, use:

kubectl logs -f <pod-name>

As a reminder, you can get the pod name with:

kubectl get pods

Cleanup deployment

Deleting a deployment also deletes all pods managed by that deployment.

kubectl delete -f infra/k8s/product-deployment.yaml

Or delete everything in the infra/k8s/ folder:

kubectl delete -f infra/k8s/

Conclusion

Over the course of this guide, we took the concept from theory to practice. We built and containerised services using Docker, deployed them into a Kubernetes cluster, validated deployments, monitored runtime status, and used logs for troubleshooting.

That's it! You have successfully deployed your services to Kubernetes and learned the essential workflows to keep them running smoothly. With these tools, deployments are no longer a struggle; they are a controlled, predictable process, and a must-have in 2026.

Deployment chaos is no longer the default, and what once felt fragile and unpredictable is now resilient by design.

By containerising our services and handing orchestration to Kubernetes, we've moved from reactive firefighting to controlled execution. This is how modern backend systems are engineered, and it's only the beginning: with a solid foundation in place, the real optimization, and real scalability, can begin.
