© Energma 2026. All rights reserved.
Železnička 94, 11300 Smederevo, Serbia

The JavaScript Diet: 33% Bloat Loss

That infuriating moment. The page shifts and you click the wrong button. The spinning wheel of doom mocks you for a while. You rage-quit.

The richer the app, the bigger the problems. As more front-end code is written to support more complex features, more bytes must be sent to the browser to be parsed and executed, strangling performance.

At Energma, we understand how incredibly annoying such experiences are. Over the past year, our web performance engineering team diagnosed our performance issues and traced them to a single core pathogen: an archaic module bundler.

Quick Summary

  • 33% Lighter Bundles: Unlocked significantly faster downloads for end-users
  • 15% Fewer Scripts: Reduced network requests and processing overhead
  • Liberated Developers: Eliminated tedious, error-prone manual bundle management
  • Conquered CI Challenges: Overcame memory exhaustion and caching issues
  • Saved Engineering Months: Avoided building equivalent features by adopting Rollup

How a Tool for Order Became a Source of Chaos

Miller's Law states that the human brain can only hold so much information at any given time, which is partly why most modern codebases (including ours) are broken into smaller, manageable modules. A module bundler is meant to be the solution. Its job is to take hundreds of JavaScript and CSS fragments and amalgamate them into efficient, cohesive bundles for the browser: in essence, minified JavaScript files that deliver your application's logic.

Ours was a relic. Conceived in 2018, our custom solution had become our bottleneck. While the industry charged ahead with performance-first tools like Webpack and Rollup, ours stagnated. It was barebones, missing critical performance optimizations, and notoriously onerous to work with. It was an active drag, hampering user experience and crippling front-end development velocity.

As it became clear our existing bundler was showing its age, we decided the best way to optimize performance was to replace it. Since we were in the middle of migrating our pages to our new web serving stack, the timing couldn't have been better: it let us piggyback on an existing migration plan, and the new stack's architecture made it simpler to re-engineer our asset pipeline for the modern web.

Existing Architecture: A House of Cards

Build-time speed was the only virtue of our legacy bundler. But that single advantage came at a cost: massive bundle sizes and a maintenance burden for engineers, who manually defined which scripts to bundle with a package, while we simply shipped all packages involved in rendering a page with few optimizations. Over time, the problems with this approach didn't just become crystal clear. They became unavoidable.

Issue #1: Multiple Versions of Bundled Code

Our custom architecture shattered every page into independent pagelets (i.e. subsections of pages), resulting in multiple JS entry points per page, with each pagelet served by its own controller on the backend. This empowered teams to deploy faster and more independently, but the trade-off was architectural insanity: different sections of a single page could run on different backend code versions.

It required our architecture to support delivering separate versions of packaged code on the same page, which resulted in consistency hell (e.g. multiple instances of a singleton being loaded on the same page). Eliminating the pagelet architecture was our non-negotiable first step; it gave us the flexibility and stability needed to adopt an industry-standard bundling scheme.

Issue #2: Manual Code-splitting

Code splitting is the essential process of slicing a massive JavaScript bundle into smaller chunks, so that the browser only loads the parts of the codebase that are absolutely necessary for the current page. For example, assume a user visits Energma's homepage and then our services page. Without code-splitting, the entire bundle.js is downloaded and forced on the user upfront, which can significantly hurt performance.

[Figure: all code for all pages is served via a single file]

After code-splitting, the browser only downloads what's essential. This unlocks nearly instant navigation to our homepage, as the browser parses and executes a fraction of the code. But the benefits compound.

  • Critical scripts are loaded immediately, rendering content faster
  • Non-essential scripts are loaded asynchronously without blocking the user
  • Shared code is cached by the browser, making subsequent page transitions fast
  • Reduced amount of JS downloaded

The collective impact is drastically reduced load times, a fluid user experience, and a foundation built for speed at scale.
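The arithmetic behind these benefits can be sketched with a toy model (all module names and sizes below are invented for illustration, not measurements from our codebase):

```javascript
// Toy model of first-load cost, with and without code-splitting.
// Sizes in KB are invented for illustration.
const modules = {
  core: 120,     // framework + shared UI, needed on every page
  homepage: 40,  // homepage-only widgets
  services: 65,  // services-page-only widgets
  checkout: 90,  // rarely visited flow
};

// Without code-splitting: one bundle.js carries everything, everywhere.
const monolithKb = Object.values(modules).reduce((sum, kb) => sum + kb, 0);

// With code-splitting: the homepage downloads only core + its own chunk,
// and the shared core chunk is cached for later navigations.
const homepageKb = modules.core + modules.homepage;

console.log(monolithKb, homepageKb); // 315 vs 160
```

Even in this tiny sketch, the first page load shrinks by roughly half; on a real codebase with hundreds of modules, the gap widens further.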

[Figure: only the chunks needed for the page are downloaded]

Since our existing bundler didn't have any built-in code-splitting, engineers had to manually define packages. More specifically, our packaging map was a massive 6,000+ line dictionary that specified which modules were included in which package.

As you can imagine, this became incredibly complex to maintain over time. To avoid sub-optimal packaging, we enforced a rigorous set of tests, the packager tests, which became dreaded by engineers since they would often require a manual reshuffling of modules with each change.

This also meant shipping far more code than certain pages needed. For instance, assume we have the following package map:

[Figure: example package map]

If a page depends on modules a, b, and c, the browser would only need to make two HTTP calls (i.e. to fetch pkg-a and pkg-b) instead of three separate calls, one per module. While this reduced HTTP call overhead, it often meant loading unnecessary modules (in this case, module d). Not only were we loading unnecessary code due to a lack of tree shaking, but we were also loading entire modules that weren't needed for a page, resulting in an overall slower user experience.
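The resolution logic behind that example can be sketched as follows (the map below is one consistent possibility for the package map pictured above, not our real 6,000-line dictionary):

```javascript
// Manual package map: which modules ship inside which package (hypothetical).
const packageMap = {
  'pkg-a': ['a', 'b'],
  'pkg-b': ['c', 'd'],
};

// Every package containing at least one needed module must be fetched whole.
function packagesFor(neededModules) {
  return Object.entries(packageMap)
    .filter(([, mods]) => mods.some((m) => neededModules.includes(m)))
    .map(([pkg]) => pkg);
}

const needed = ['a', 'b', 'c'];
const fetched = packagesFor(needed);                        // two HTTP calls
const shipped = fetched.flatMap((pkg) => packageMap[pkg]);  // a, b, c, and d
const wasted = shipped.filter((m) => !needed.includes(m));  // d rides along unused
```

The waste here is one module; multiplied across hundreds of pages and packages, it added up to meaningful dead weight on every load.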

Issue #3: No Tree Shaking

Tree shaking is an optimization technique that reduces bundle sizes by eliminating unused code. Let's assume your app imports a third-party library that contains several modules. Without tree shaking, much of the bundled code is unused.

[Figure: all code is bundled, whether it's used or not]

With tree shaking, the static structure of the code is analyzed and any code that is not directly referenced is removed. This results in a much leaner final bundle.

[Figure: only used code is bundled]

Since our existing bundler was barebones, there wasn't any tree shaking functionality either. The resulting packages would often contain large swaths of unused code, especially from third-party libraries, which translated to unnecessarily longer wait times for page loads. Also, since we used Protobuf definitions for efficient data transfer from the front-end to the back-end, instrumenting certain observability metrics would often end up introducing several additional megabytes of unused code!
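At the export level, the bookkeeping tree shaking performs can be sketched like this (library exports and sizes are invented for illustration; real bundlers analyze ASTs, not lookup tables):

```javascript
// A library's exports and their approximate sizes in KB (invented numbers).
const libraryExports = {
  debounce: 2,
  throttle: 2,
  deepClone: 11,
  localeTables: 40, // heavy data the page never imports
};

// Keep only the exports the application statically imports.
function shake(usedExports) {
  return Object.entries(libraryExports)
    .filter(([name]) => usedExports.includes(name))
    .reduce((kb, [, size]) => kb + size, 0);
}

const withoutShaking = Object.values(libraryExports).reduce((a, b) => a + b, 0); // 55 KB
const withShaking = shake(['debounce']);                                         // 2 KB
```

This is why the wins are largest for third-party libraries: an app that uses one helper from a utility library otherwise pays for every export the library contains.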

Why Rollup

Although we considered many solutions over the years, our primary requirements were automatic code-splitting, tree shaking, and, optionally, plugins for further optimizing the bundling pipeline. Rollup was the most mature tool at the time and the most flexible to incorporate into our existing build pipeline, which is mainly why we settled on it.

Another reason: less engineering overhead. Since we were already using Rollup for bundling our NPM modules (albeit without many of its useful features), expanding our adoption of Rollup would require less overhead than integrating an entirely foreign tool in our build process. Additionally, we had more engineering expertise with Rollup's quirks in our codebase versus that of other bundlers, reducing the likelihood of "unknown unknowns". Also, replicating Rollup's features within our existing module bundler would require significantly more engineering time than if we just integrated Rollup more deeply in our build process.
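As a rough illustration of how little configuration those features require, a Rollup setup of this shape enables automatic code-splitting and tree shaking out of the box (entry names and paths below are hypothetical, not our actual config):

```javascript
// rollup.config.js — a minimal sketch, not our production configuration.
export default {
  // Multiple entry points (plus any dynamic import()s in the code)
  // give Rollup what it needs to split chunks automatically.
  input: {
    home: 'src/pages/home.js',
    services: 'src/pages/services.js',
  },
  output: {
    dir: 'dist',
    format: 'es', // ES modules keep imports statically analyzable for tree shaking
    chunkFileNames: 'chunks/[name]-[hash].js',
  },
  // Tree shaking is on by default; the presets trade safety against output size.
  treeshake: 'recommended',
};
```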

Rollup Rollout

We knew that rolling out a module bundler safely and gradually would be no easy feat. We had to support two different bundlers and sets of bundles simultaneously. Our primary concerns included ensuring stable, bug-free bundled code, managing the increased load on our build systems and CI, and figuring out how to incentivize teams to opt in to Rollup bundles for the pages they owned.

With reliability and scalability in mind, we divided the rollout process into four stages:

  • The developer preview stage allowed engineers to opt in to Rollup bundles in their dev environment. This let us effectively crowdsource QA testing by having developers surface any unexpected application behavior introduced by Rollup bundles early on, giving us plenty of time to address bugs and scope changes.
  • The preview stage involved serving Rollup bundles to all internal employees, which allowed us to gather early performance data and collect further feedback on any application behavioral changes.
  • The general availability stage involved gradually rolling out to all users, both internal and external. This only happened once our Rollup packaging was thoroughly tested and deemed stable enough for users.
  • The maintenance stage involved addressing any tech debt left over in the project and iterating on our use of Rollup to further optimize performance and the developer experience. We realized that projects of such a massive scale will inevitably end up accumulating some tech debt, and we should proactively plan to address it at some stage instead of sweeping it under the rug.

To support each of these stages, we used a mix of cookie-based gating and our in-house feature-gating system. Historically, most rollouts at Energma have been done exclusively through our in-house feature-gating system; here, however, we allowed cookie-based gating so we could quickly toggle between Rollup and legacy packages, which accelerated debugging. Nested within each rollout stage were gradual ramps from 1% to 10%, 25%, 50%, and finally 100%. This gave us the flexibility to collect early performance and stability results, and to seamlessly roll back any breaking changes, while minimizing impact to both internal and external users.
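A deterministic percentage ramp with a cookie override can be sketched as follows (our real feature-gating system is in-house; the hash and names here are purely illustrative):

```javascript
// Stable 0-99 bucket per user, so cohort membership survives repeat visits.
function bucketOf(userId) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h % 100;
}

// Cookie override wins, so engineers can flip bundlers instantly while debugging;
// otherwise the user's bucket is compared against the current ramp percentage.
function servesRollupBundles(userId, rampPercent, cookies = {}) {
  if (cookies.bundler === 'rollup') return true;
  if (cookies.bundler === 'legacy') return false;
  return bucketOf(userId) < rampPercent;
}
```

Because bucketing is a pure function of the user ID, ramping from 1% to 10% only ever adds users to the Rollup cohort; nobody flaps between bundlers mid-rollout.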

Because of the large number of pages we had to migrate, we not only needed a strategy to switch pages over to Rollup safely, but also to incentivize page owners to switch in the first place. Since our web stack was about to undergo a major renovation with Edison, we realized that piggybacking on Edison's rollout could solve both our problems. If Rollup was an Edison-only feature, developer teams would have greater incentive to migrate to both Rollup and Edison, and we could tightly couple our migration strategy with Edison's too.

Edison was also expected to have its own performance and development velocity improvements. We figured that coupling Edison and Rollup together would have a transformational synergy strongly felt throughout the company.

Challenges and Roadblocks

We expected to run into some unexpected challenges, but daisy-chaining one build system (Rollup) onto another (our existing Bazel-based infrastructure) proved even harder than anticipated.

Firstly, running two different module bundlers at the same time proved more resource-intensive than we estimated. Rollup's tree-shaking algorithm, while quite mature, still had to load all modules into memory and generate the abstract syntax trees needed to analyze relationships and shake code out. Moreover, our integration of Rollup into Bazel prevented us from caching intermediary build results, forcing our CI to rebuild and re-minify all Rollup chunks on each build. This caused our CI builds to time out due to memory exhaustion, and delayed the rollout significantly.

We also found several bugs in Rollup's tree-shaking algorithm that resulted in overly aggressive tree shaking. Thankfully, these only caused minor bugs that were caught and fixed during the developer preview phase without ever impacting our users. Additionally, we found that our legacy bundler was serving some third-party library code that was incompatible with JavaScript's strict mode. Serving that same code via the new bundler, with strict mode enabled, produced hard runtime errors in the browser. This required a one-time audit of our entire codebase to patch code that was incompatible with strict mode.
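The class of failure we audited for looks like this (the function is illustrative, not code from our repository):

```javascript
// Sloppy-mode code: assigning to an undeclared variable silently creates
// a global. Under strict mode the same line throws at runtime instead.
function legacyInit() {
  'use strict'; // what the new bundler's output effectively enforced
  leakedGlobal = 42; // ReferenceError: leakedGlobal is not defined
  return leakedGlobal;
}
```

Code like this passes every build step, which is why the failures only surfaced at runtime in the browser and why a codebase-wide audit was the only reliable fix.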

Finally, during the preview phase, we found that our A/B telemetry comparing Rollup and the legacy bundler wasn't showing as much of a TTVC improvement as we expected. We eventually narrowed this down to Rollup producing far more chunks than our legacy packager. Although we initially hypothesized that HTTP/2 multiplexing would negate any performance degradation from a greater number of chunks, we found that too many chunks made the browser spend significantly more time discovering all the modules needed for the page. Increasing the number of chunks also lowered compression efficiency, since algorithms such as zlib's rely on LZ77-style back-references, which compress one large file more effectively than many smaller ones.
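One lever for the too-many-chunks problem is Rollup's manualChunks hook, which merges related modules into fewer, larger chunks (a sketch only; the vendor heuristic below is a common community pattern, not our exact grouping):

```javascript
// rollup.config.js — sketch of chunk consolidation, not our production config.
export default {
  input: 'src/main.js',
  output: {
    dir: 'dist',
    format: 'es',
    // Route all third-party modules into one shared chunk: fewer files for
    // the browser to discover, and one large file compresses better under
    // LZ77-style codecs than many small ones.
    manualChunks(id) {
      if (id.includes('node_modules')) return 'vendor';
    },
  },
};
```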

Results

After rolling out Rollup to all users, we found that this project reduced our JavaScript bundle sizes by 33% and our total script count by 15%, and yielded modest TTVC improvements. We also significantly improved front-end development velocity through automatic code-splitting, which liberated developers from manually shuffling bundle definitions with each change. Lastly, and perhaps most importantly, we brought our bundling infrastructure into modernity and slashed years of tech debt accumulated since 2018, reducing our maintenance burden going forward.

In addition to having a highly impactful rollout, the Rollup project revealed several bottlenecks in our existing architecture—for example, several render-blocking RPCs, excessive function calls to third-party libraries, and inefficiencies in how the browser loads our module dependency graph. Given Rollup's rich plugin ecosystem, addressing such bottlenecks has never been easier in our codebase.

Overall, adopting Rollup fully as our module bundler has not only resulted in immediate performance and productivity gains, but will also unlock significant performance improvements down the road.
