Speeding up web application builds with webpack

As your application develops and grows, its build time increases: from a few minutes for a rebuild in development mode to tens of minutes for a "cold" production build. This is completely unacceptable. We developers do not like switching context while waiting for the bundle to be ready, and we want feedback from the application as early as possible, ideally by the time we switch from the IDE to the browser.

How to achieve this? What can we do to optimize build time?

This article surveys the tools available in the webpack ecosystem for speeding up builds, shares experience using them, and offers tips.

Optimization of the bundle size and performance of the application itself is not covered in this article.

The project referenced throughout the text, against which build speed is measured, is a relatively small application written on the JS + Flow + React + Redux stack using webpack, Babel, PostCSS, Sass, and others, consisting of roughly 30 thousand lines of code and 1500 modules. Dependency versions are current as of April 2019.

The measurements were made on a Windows 10 machine with Node.js 8, a 4-core processor, 8 GB of RAM, and an SSD.


Glossary:

  • Build: the process of converting project source files into a set of related assets that make up a web application.
  • Dev mode: a build with the mode: 'development' option, usually using webpack-dev-server and watch mode.
  • Prod mode: a build with the mode: 'production' option, usually with the full set of bundle optimizations.
  • Incremental build: in dev mode, rebuilding only the files that changed.
  • "Cold" build: a build from scratch, without any caches, but with dependencies installed.


Caching

Caching saves the results of computations for later reuse. The first build may be slightly slower than usual because of the caching overhead, but subsequent builds will be much faster thanks to reusing the compilation results of unchanged modules.

By default, in watch mode webpack caches intermediate build results in memory so that it does not have to rebuild the whole project on every change. For a regular build (not in watch mode), this setting makes no sense. You can also try enabling resolver caching to make module lookup easier for webpack, and check whether it has a noticeable effect on your project.
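As a minimal sketch, both options can be set explicitly in a webpack 4 configuration (whether resolver caching helps is project-specific, so treat this as something to measure):

```javascript
// webpack.config.js — a sketch of the built-in caching options (webpack 4)
module.exports = {
  mode: 'development',
  // In-memory caching of intermediate build results;
  // enabled by default in watch mode, irrelevant for one-off builds.
  cache: true,
  resolve: {
    // Aggressively cache module resolution results;
    // can also be a RegExp to limit caching to certain paths.
    unsafeCache: true
  }
};
```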

Webpack does not yet have a persistent cache (one saved to disk or other storage), although it is promised in version 5. In the meantime, we can use the following tools:

- Caching in TerserWebpackPlugin settings


Disabled by default. Even on its own it has a noticeable positive effect: 60.7 s → 39 s (-36%), and it combines perfectly with other caching tools.

Enabling and using it is very simple:

  optimization: {
    minimizer: [
      new TerserJsPlugin({
        terserOptions: { ... },
        cache: true
      })
    ]
  }
- cache-loader

Cache-loader can be placed in any loader chain to cache the results of the loaders that run before it.

By default, it stores the cache in the .cache-loader folder in the project root. You can override the path using the cacheDirectory option in the loader settings.

Usage example:

  use: [
    {
      loader: 'cache-loader',
      options: {
        cacheDirectory: path.resolve('node_modules/.cache/cache-loader')
      }
    },
    // ...the loaders whose results should be cached
  ]
A safe and reliable solution. It works without problems with almost any loaders: for scripts (babel-loader, ts-loader), styles (scss-, less-, postcss-, css-loader), images and fonts (image-webpack-loader, react-svg-loader, file-loader), and others.


Nuances:

  • When using cache-loader together with style-loader or MiniCssExtractPlugin.loader, it should be placed after them:
    ['style-loader', 'cache-loader', 'css-loader', ...].
  • Contrary to the documentation's recommendation to use this loader only for caching the results of time-consuming computations, it can also give a small but measurable performance gain with more "lightweight" loaders; try it and measure.


Results:

  • dev: 35.5 s → (enable cache-loader) → 36.2 s (+2%) → (rebuild) → 7.9 s (-78%)
  • prod: 60.6 s → (enable cache-loader) → 61.5 s (+1.5%) → (rebuild) → 30.6 s (-49%) → (enable the cache in Terser) → 15.4 s (-75%)

- HardSourceWebpackPlugin

A more thorough and intelligent solution that caches at the level of the entire build process rather than individual loader chains. In the basic use case it is enough to add the plugin to the webpack configuration; the default settings should be sufficient for correct operation. It suits those who want maximum performance and are not afraid of running into difficulties.

  plugins: [
    new HardSourceWebpackPlugin()
  ]

The documentation contains advanced usage examples and tips for solving possible problems. Before adopting the plugin permanently, it is worth thoroughly testing it in various situations and build modes.


Results:

  • dev: 35.5 s → (enable the plugin) → 36.5 s (+3%) → (rebuild) → 3.7 s (-90%)
  • prod: 60.6 s → (enable the plugin) → 69.5 s (+15%) → (rebuild) → 25 s (-59%) → (enable the cache in Terser) → 10 s (-83%)


Pros:

  • compared to cache-loader, it speeds up rebuilds even more;
  • it does not require duplicate declarations in different parts of the configuration, unlike cache-loader.


Cons:

  • compared to cache-loader, it slows down the first build (when there is no disk cache) more;
  • it may slightly increase incremental rebuild time;
  • it can cause problems when used with webpack-dev-server and may require fine-grained configuration of cache partitioning and invalidation (see the documentation);
  • there are quite a lot of bug reports in its GitHub issues.

- Caching in the babel-loader settings. Disabled by default. The effect is a few percent worse than with cache-loader.
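Enabling it is a one-line change; a sketch (the test/exclude patterns are illustrative):

```javascript
// A sketch of enabling the babel-loader cache
module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        loader: 'babel-loader',
        options: {
          // true uses node_modules/.cache/babel-loader;
          // a string value overrides the cache directory
          cacheDirectory: true
        }
      }
    ]
  }
};
```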

- Caching in the eslint-loader settings. Disabled by default. If you use this loader, the cache will help avoid wasting time linting unchanged files on rebuilds.
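A sketch of such a rule (the test/exclude patterns are illustrative):

```javascript
// A sketch of enabling the eslint-loader cache
module.exports = {
  module: {
    rules: [
      {
        enforce: 'pre', // lint before other loaders run
        test: /\.jsx?$/,
        exclude: /node_modules/,
        loader: 'eslint-loader',
        options: {
          // caches lint results so unchanged files are skipped on rebuilds
          cache: true
        }
      }
    ]
  }
};
```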

When using cache-loader or HardSourceWebpackPlugin, you should disable the built-in caching mechanisms of other plugins and loaders (except TerserWebpackPlugin), since they are no longer useful for repeated and incremental builds and will even slow down "cold" ones. The same applies to cache-loader itself if HardSourceWebpackPlugin is already in use.

When setting up caching, the following questions may arise:

Where should the caching results be saved?

Caches are usually stored in the node_modules/.cache/<cache name>/ directory. Most tools use this path by default and allow overriding it if you want to store the cache somewhere else.

When and how to invalidate the cache?

It is very important to reset the cache whenever the build configuration changes in a way that affects the output. Using a stale cache in such cases is harmful and can lead to errors of mysterious origin.

Factors to consider:

  • the list of dependencies and their versions: package.json, package-lock.json, yarn.lock, .yarn-integrity;
  • the contents of webpack, Babel, PostCSS, browserslist, and other configuration files that are explicitly or implicitly used by loaders and plugins.
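For instance, HardSourceWebpackPlugin lets you describe which files form the build fingerprint via its environmentHash option. A sketch (the file list is illustrative; check the plugin's documentation for the exact option shape):

```javascript
const HardSourceWebpackPlugin = require('hard-source-webpack-plugin');

module.exports = {
  plugins: [
    new HardSourceWebpackPlugin({
      environmentHash: {
        root: process.cwd(),
        directories: [],
        // changes to any of these files invalidate the cache
        files: ['package-lock.json', 'yarn.lock', '.babelrc', 'webpack.config.js']
      }
    })
  ]
};
```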

If you do not use cache-loader or HardSourceWebpackPlugin, which let you redefine the list of sources used to form the build fingerprint, npm scripts that clear the cache when dependencies are added, updated, or removed will help a little:

  "prunecaches": "rimraf ./node_modules/.cache/",
  "postinstall": "npm run prunecaches",
  "postuninstall": "npm run prunecaches"

nodemon can also help: configure it to clear the cache and restart webpack-dev-server when it detects changes in the configuration files:

  "start": "cross-env NODE_ENV=development nodemon --exec \"webpack-dev-server --config webpack.config.dev.js\""


  {
    "watch": [
      "more configs ..."
    ],
    "events": {
      "restart": "yarn prunecaches"
    }
  }

Do I need to save the cache in the project repository?

Since the cache is essentially a build artifact, there is no need to commit it to the repository. Locating the cache inside the node_modules folder, which is usually listed in .gitignore, helps with this.

It is worth noting that with a caching system able to reliably determine cache validity under any conditions, including a change of OS or Node.js version, the cache could be shared between developers' machines or used in CI, which would drastically reduce the time even of the first build after switching between branches.

In which build modes should the cache be used?

There is no definitive answer: it all depends on how intensively you use and switch between dev and prod modes during development. In general, nothing prevents you from enabling caching everywhere, but remember that it usually makes the first build slower. In CI you probably always want a "clean" build, in which case caching can be disabled via an appropriate environment variable.
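One possible sketch of that toggle (the DISABLE_CACHE variable name is hypothetical; most CI systems also set CI=true, which can serve the same purpose):

```javascript
// Toggle caching via environment variables, e.g. for "clean" CI builds
const useCache = !process.env.DISABLE_CACHE && !process.env.CI;

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        // prepend cache-loader only when caching is desired
        use: [...(useCache ? ['cache-loader'] : []), 'babel-loader']
      }
    ]
  }
};
```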



Parallelization

With parallelization you can get a performance boost by using all available processor cores. The final effect is specific to each machine.

By the way, here is a simple piece of Node.js code to get the number of available processor cores (it can be useful when configuring the tools listed below):

  const os = require('os');
  const cores = os.cpus().length;

- Parallelization in TerserWebpackPlugin settings


Disabled by default. Like the plugin's built-in caching, it is easy to enable and significantly speeds up the build.

  optimization: {
    minimizer: [
      new TerserJsPlugin({
        terserOptions: { ... },
        parallel: true
      })
    ]
  }

- thread-loader

Thread-loader can be placed in a chain in front of loaders that perform heavy computations; the loaders that follow it in the chain will then run in a pool of Node.js subprocesses ("workers").

It has a set of options for fine-tuning the worker pool, although the defaults look quite reasonable. Pay special attention to poolTimeout and workers; see the example.

It can be used together with cache-loader as follows (the order matters): ['cache-loader', 'thread-loader', 'babel-loader']. If warmup is enabled for thread-loader, re-check the stability of rebuilds that use the cache: webpack may hang and fail to exit after successfully completing a build. In that case, simply turn off the warmup.
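The warmup looks roughly like this (the worker counts are illustrative):

```javascript
const threadLoader = require('thread-loader');

const poolOptions = {
  workers: 2,       // number of spawned worker processes
  poolTimeout: 2000 // kill idle workers after 2 s
};

// Boots the worker pool in advance so the first compilation does not pay
// the startup cost; turn this off if rebuilds start hanging
threadLoader.warmup(poolOptions, ['babel-loader']);

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        use: [
          'cache-loader',
          { loader: 'thread-loader', options: poolOptions },
          'babel-loader'
        ]
      }
    ]
  }
};
```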

If the build hangs after adding thread-loader to the Sass compilation chain, this advice may help.

- HappyPack

A plugin that intercepts loader calls and distributes their work across multiple threads. It is currently in maintenance mode (no further development is planned), and its creator recommends thread-loader as a replacement. So if your project keeps up with the times, it is better to refrain from HappyPack, although it is certainly worth trying it and comparing the results with thread-loader.

HappyPack has clear configuration documentation, which is itself rather unusual: loader configurations are moved into the plugin's constructor call, and the loader chains themselves are replaced with HappyPack's own loader. Such a non-standard approach can be inconvenient when assembling a custom webpack configuration out of pieces.
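A sketch of that restructuring (the id and thread count are illustrative):

```javascript
const HappyPack = require('happypack');

module.exports = {
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        // the usual loader chain is replaced with HappyPack's own loader,
        // referencing the plugin instance by id
        use: 'happypack/loader?id=js'
      }
    ]
  },
  plugins: [
    new HappyPack({
      id: 'js',
      threads: 4,
      // the original loader configuration moves into the plugin
      loaders: ['babel-loader']
    })
  ]
};
```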

HappyPack supports a limited list of loaders; the main and most widely used ones are on the list, but others are not guaranteed to work due to possible API incompatibilities. More information can be found in the project's issues.

Avoiding unnecessary work

Any work takes time. To spend less of it, avoid work that brings little benefit, can be postponed, or is not needed at all in a given situation.

- Apply loaders to the minimum possible number of modules

The test, exclude, and include properties set the conditions under which a module is processed by the loader. The point is to avoid transforming modules that do not need the transformation.

A popular example is excluding node_modules from Babel transpilation:

  rules: [
    {
      test: /\.jsx?$/,
      exclude: /node_modules/,
      loader: 'babel-loader'
    }
  ]

Another example: regular CSS files do not need to be handled by the preprocessor:

  rules: [
    {
      test: /\.scss$/,
      use: ['style-loader', 'css-loader', 'sass-loader']
    },
    {
      test: /\.css$/,
      use: ['style-loader', 'css-loader']
    }
  ]

- Do not enable optimization of the size of the bundle in the dev-mode

On a powerful developer machine with a stable internet connection, a locally deployed application usually starts quickly even if it weighs several megabytes. Optimizing the bundle during the build can cost much more precious time than it saves on loading.

This advice concerns JS (Terser, Uglify, and others), CSS (cssnano, optimize-css-assets-webpack-plugin), SVG and images (SVGO, Imagemin, image-webpack-loader), HTML (html-minifier, the option in html-webpack-plugin), and so on.

- Do not include polyfills and transformations in dev-mode

If you use babel-preset-env, postcss-preset-env, or Autoprefixer, add a separate Browserslist configuration for dev mode that includes only the browsers you use during development. Most likely these are the latest versions of Chrome or Firefox, which support modern standards perfectly without polyfills and transformations. This avoids unnecessary work.

Example .browserslistrc:

  [production]
  your supported browsers go here ...

  [development]
  last 2 Chrome versions
  last 2 Firefox versions
  last 1 Safari version

- Revise the use of source maps

Generating the most accurate and complete source maps takes considerable time (on our project, about 30% of the prod build time with the devtool: 'source-map' option). Consider whether you really need source maps in the prod build (locally and in CI). It may be worth generating them only when necessary, for example based on an environment variable or a commit tag.
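A sketch of gating source maps behind an environment variable (the GENERATE_SOURCEMAP variable name is hypothetical):

```javascript
// Generate full source maps in prod only when explicitly requested
module.exports = {
  mode: 'production',
  devtool: process.env.GENERATE_SOURCEMAP ? 'source-map' : false
};
```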

In dev mode, a fairly lightweight variant will do in most cases: 'cheap-eval-source-map' or 'cheap-module-eval-source-map'. Learn more in the webpack documentation.

- Set up compression in Terser

According to the Terser documentation (the same applies to Uglify), most of the minification time is spent in the mangle and compress phases. By fine-tuning their options, you can speed up the build at the cost of a slight increase in bundle size. There is an example in the vue-cli sources and another from an engineer at Slack. In our project, the first variant of Terser tuning reduces build time by about 7% in exchange for a 2.5% increase in bundle size. Whether the game is worth the candle is up to you.
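A sketch in the spirit of those examples (the particular set of disabled compress options is illustrative; measure the time/size trade-off on your own project):

```javascript
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  optimization: {
    minimizer: [
      new TerserPlugin({
        cache: true,
        parallel: true,
        terserOptions: {
          mangle: true, // cheap and responsible for most of the size win
          compress: {
            // skip some of the more expensive analyses
            collapse_vars: false,
            reduce_vars: false,
            inline: false
          }
        }
      })
    ]
  }
};
```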

- Exclude external dependencies from parsing

With module.noParse and resolve.alias you can redirect imports of library modules to their already-compiled versions and simply insert them into the bundle without spending time on parsing. In dev mode this should noticeably speed up the build, including incremental builds.

The algorithm is roughly the following:

(1) Make a list of modules that need to be skipped when parsing.

Ideally these are all runtime dependencies that end up in the bundle (or at least the heaviest, such as react-dom or lodash), and not only direct (first-level) ones, but also transitive ones (dependencies of dependencies). You will have to maintain this list yourself.

(2) For the selected modules, write down the paths to their compiled versions.

Instead of the skipped dependencies, you need to provide the bundler with an alternative, and this alternative must not depend on the environment, that is, contain references to module.exports, require, process, import, and so on. Precompiled (not necessarily minified) single-file modules, which usually live in the dist folder inside a dependency's sources, fit this role. To find them, you will have to dig into node_modules. For example, for axios the path to the compiled module looks like this: node_modules/axios/dist/axios.js.

(3) In the webpack configuration, use the resolve.alias option to replace imports by dependency name with direct imports of the files whose paths were written down in the previous step.

For example:

  resolve: {
    alias: {
      axios: path.resolve(__dirname, 'node_modules/axios/dist/axios.js')
    }
  }

There is a big pitfall here: if your code or your dependencies' code refers not to the standard entry point (the index file or the main field in package.json) but to a specific file inside the dependency's sources, or if the dependency is exported as an ES module, or if something interferes with resolving (for example, babel-plugin-transform-imports), the whole idea may fail. The bundle will build, but the application will be broken.

(4) In the webpack configuration, use the module.noParse option to skip, by regular expression, the parsing of the precompiled modules requested via the paths from step 2.

For example:

  module: {
    noParse: [
      new RegExp('node_modules/axios/dist/axios.js')
    ]
  }

Bottom line: on paper the method looks promising, but the non-trivial setup with its pitfalls at the very least raises the cost of adoption, and at worst negates the benefit entirely.

An alternative with a similar working principle is the externals option. In this case you will have to embed the links to the external scripts into the HTML file yourself, making sure their versions match the dependency versions in package.json.
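A sketch of the externals variant, assuming React is loaded from external script tags (the globals below match the UMD builds of react and react-dom):

```javascript
// import React from 'react' will resolve to the global window.React,
// so the library is neither parsed nor bundled
module.exports = {
  externals: {
    react: 'React',
    'react-dom': 'ReactDOM'
  }
};
```

The HTML then needs the matching script tags, with versions kept in sync with package.json by hand.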

- Allocate rarely changing code into a separate bundle and compile it only once

You have probably heard about DllPlugin. With it, you can split the actively changing code (your application) and the rarely changing code (for example, dependencies) into different builds. Once built, the dependency bundle (the DLL itself) is then simply attached to the application build, saving time.

In general, it looks like this:

  1. A separate webpack configuration is created for building the DLL, and the required modules are added as entry points.
  2. A build with this configuration is run. DllPlugin generates the DLL bundle and a manifest file with the mapping of module names and paths.
  3. DllReferencePlugin is added to the main build's configuration, and the manifest is passed to it.
  4. During the build, imports of the dependencies moved into the DLL are mapped, using the manifest, to the already-compiled modules.
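The steps above can be sketched as two configurations (the entry list, paths, and names are illustrative):

```javascript
const path = require('path');
const webpack = require('webpack');

// webpack.dll.config.js — a separate configuration that builds the DLL
const dllConfig = {
  mode: 'development',
  entry: { vendor: ['react', 'react-dom', 'lodash'] },
  output: {
    path: path.resolve(__dirname, 'dll'),
    filename: '[name].dll.js',
    library: '[name]_dll'
  },
  plugins: [
    // emits the bundle plus a manifest mapping module paths to ids
    new webpack.DllPlugin({
      name: '[name]_dll',
      path: path.resolve(__dirname, 'dll', '[name].manifest.json')
    })
  ]
};

// webpack.config.js — the main build consumes the generated manifest
const mainConfig = {
  mode: 'development',
  plugins: [
    new webpack.DllReferencePlugin({
      manifest: path.resolve(__dirname, 'dll', 'vendor.manifest.json')
    })
  ]
};
```

The DLL build must be run first (and re-run whenever the dependency list changes); only then can the main build resolve imports through the manifest.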

For a little more detail, see the linked article.

Once you start using this approach, you will quickly discover a number of shortcomings:

  • The DLL build is separate from the main build and needs to be managed separately: preparing a dedicated configuration and re-running it every time a branch is switched or dependencies change.
  • Since the DLL does not belong to the main build's artifacts, it has to be copied manually to the folder with the other assets and attached to the HTML file using one of these plugins: 1 , 2 .
  • You need to manually keep the list of dependencies included in the DLL bundle up to date.
  • Saddest of all, tree-shaking is not applied to the DLL bundle. In theory the entryOnly option addresses this, but it was left undocumented.

You can get rid of the boilerplate and solve the first problem (and the second, if you use html-webpack-plugin v3; it does not work with version 4) with AutoDllPlugin. However, it still does not support the entryOnly option of the DllPlugin it uses under the hood, and the plugin's author doubts the usefulness of his brainchild in light of the upcoming webpack 5.


Other tips

Update your software and dependencies regularly. Newer versions of Node.js, npm/yarn, and build tools (webpack, Babel, and others) often contain performance improvements. Of course, before adopting a new version you should carefully read the changelog, issues, and security reports, and make sure it is stable by testing it.

When using PostCSS and postcss-preset-env, pay attention to the stage setting, which controls the set of supported features. For example, our project had stage-3 configured but used only Custom Properties from it, and switching to stage-4 reduced build time by 13%.
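A sketch of such a configuration (assuming Custom Properties remain the only extra feature you need; see the preset's documentation for the exact feature names):

```javascript
// postcss.config.js — restrict postcss-preset-env to stable features
module.exports = {
  plugins: [
    require('postcss-preset-env')({
      stage: 4, // only stable features: less work than stage 3
      features: {
        // explicitly keep the one feature the project relies on
        'custom-properties': true
      }
    })
  ]
};
```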

If you use Sass (node-sass, sass-loader), try Dart Sass (an implementation of Sass in Dart, compiled to JS) and fast-sass-loader. They may improve build performance on your project. But even if they do not, dart-sass at least installs faster than node-sass, because it is pure JS rather than a binding to libsass.

An example of using Dart Sass can be found in the sass-loader documentation. Pay attention to the explicit choice of the Sass preprocessor implementation and the use of the fibers module.
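A sketch based on sass-loader v7-style options (check the documentation for your version; the fiber option and option names have changed across releases):

```javascript
const Fiber = require('fibers');

module.exports = {
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: [
          'style-loader',
          'css-loader',
          {
            loader: 'sass-loader',
            options: {
              implementation: require('sass'), // Dart Sass instead of node-sass
              fiber: Fiber // speeds up Dart Sass's synchronous rendering
            }
          }
        ]
      }
    ]
  }
};
```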

If you use CSS Modules, try disabling the hashes added to generated class names in dev mode. Generating unique identifiers takes time, which can be saved if including file paths in the class names is enough to avoid collisions.


  {
    loader: 'css-loader',
    options: {
      modules: true,
      localIdentName: isDev
        ? '[path][name][local]'
        : '[hash:base64:5]'
    }
  }

The benefit, however, is small: on our project it is less than half a second.

You may have come across the mysterious PrefetchPlugin in the webpack documentation, which seems to promise to speed up the build, but does not say how. The creator of webpack briefly described the problem this plugin solves in one of the issues. But how do you use it?

  1. Dump the build statistics to a file. This is done with the --json CLI option; see the documentation for details. Most likely, it is the dev build mode that is of interest here.
  2. Upload the file to the special online analyzer and open the Hints tab.
  3. Find the section entitled "Long module build chains". If it is not there, you can stop here: PrefetchPlugin is not needed.
  4. For the long chains found, use PrefetchPlugin. For a starting example, see the topic on StackOverflow.
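A sketch of step 4 (the module path is a hypothetical example of the start of a long build chain found by the analyzer):

```javascript
const webpack = require('webpack');

module.exports = {
  plugins: [
    // tells webpack to start resolving and building this module early,
    // instead of waiting for the long chain to reach it
    new webpack.PrefetchPlugin('./src/components/HeavyChain.js')
  ]
};
```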

Bottom line: a poorly documented method with no guarantee of a noticeable positive result.

In conclusion

If you have additions, especially with examples on other technologies (TypeScript, Angular, etc.) - write in the comments!


