Nuxt 3 and NGINX: Efficient Serving of Static Files

In the world of modern web technologies, the latest version of Nuxt introduces significant innovations in terms of developer experience.

Utilizing its proprietary Nitro server for rendering applications as well as serving static files, Nuxt significantly eases the workload for developers. Thanks to this integration, developers can focus on application development without worrying about configuring and managing an additional HTTP server. Nitro automates and standardizes processes that would typically require external solutions or detailed server configuration. This approach has several key advantages:

  • Elimination of the need for configuring an additional HTTP server.
  • Focus on creating and refining applications instead of managing a server.
  • Simplified deployment process through integrated and optimized solutions.
  • Consistent workflow, increasing productivity and facilitating collaboration in teams.

While working on one of our projects, which handles significant amounts of traffic, we noticed that although Node.js performs very well in many respects, Nginx is significantly faster at serving static files. Our applications are already served through Nginx, which led us to conclude that proxying traffic to a Node.js application only for it to serve static files is not an optimal solution.

Key observations in this matter include:

Higher Nginx Performance in Serving Static Files: Nginx is specifically optimized for serving static resources, such as HTML, CSS, and images. Thanks to its event-driven architecture, it can handle large amounts of traffic with minimal server load.

Unnecessary Load on Node.js: Proxying requests for static files through Nginx to Node.js, when Nginx could serve them directly, creates unnecessary load on the Node.js application. This not only consumes server resources but can also affect the overall performance of the application.

Importantly, every request for a non-existent static file, which ends in a 404 error, is unnecessarily forwarded to Node.js. This consumes valuable server resources that could be better used to handle other, more important requests.

An additional limitation is that, at runtime, the application can only serve those static files that existed in the public folder at build time. Any files added to the public folder while the application is running will not be available.

Traffic Optimization: Directly serving static files through Nginx eliminates additional processing steps by Node.js. This makes network traffic more efficient and reduces the loading time for the end user.

Simplifying Architecture: Using Nginx as a direct point for serving static files simplifies the system's architecture. This allows for a more transparent configuration and easier management of network traffic.

Focusing on Node.js's Strengths: Leaving Node.js to handle tasks it is best suited for, such as dynamic rendering and application logic, fully utilizes its potential while taking advantage of Nginx's efficiency in serving static resources.

How to Serve Static Files Using NGINX

Here's how we can configure NGINX to serve static files from a Nuxt application. First, an example base NGINX configuration:

server {
    listen 80;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

This NGINX configuration acts as a reverse proxy, forwarding all incoming requests to the Node.js server running locally at http://localhost:3000. In this way, NGINX handles all network traffic but delegates the actual request processing, including static files, to the Node.js server.

We can slightly tune the above configuration so that all static files are served directly by NGINX, meaning requests are not delegated to the Node.js application.

   location /_nuxt/ {
       root /var/www/html/front/.output/public;
       try_files $uri $uri/ =404;
   }

   location / {
       root /var/www/html/front/.output/public;
       try_files $uri $uri @nuxt;
   }

   location @nuxt {
       proxy_pass http://localhost:3000;
       proxy_http_version 1.1;
       proxy_set_header Upgrade $http_upgrade;
       proxy_set_header Connection 'upgrade';
       proxy_set_header Host $host;
       proxy_cache_bypass $http_upgrade;
   }

location /_nuxt/

The first location /_nuxt/ block in the NGINX configuration specifies how the server should handle requests that hit the path containing /_nuxt/. This is a typical path used in Nuxt applications for storing compiled resources, such as JavaScript, CSS, or images. Here's what this block does:

  • root /var/www/html/front/.output/public;: Specifies the root directory from which NGINX should serve files for this path. In this case, it is /var/www/html/front/.output/public. This means all requests to /_nuxt/ will look for files in this directory.
  • try_files $uri $uri/ =404;: The try_files directive checks whether a file at the path indicated by $uri (the request URI) exists under the directory specified by root. If the file exists, it is served. If not, NGINX then checks for a directory ($uri/). If neither the file nor the directory exists, NGINX returns a 404 error (page not found). Here we see a major advantage of this configuration: 404 requests never burden the Node.js server.
  • Note that the location /_nuxt/ block should be placed above the location / block.

location /

The second location / block in the NGINX configuration specifies how to handle all other requests to the server that do not match any other defined location blocks.

Here's how it works:

  • root /var/www/html/front/.output/public;: As in the previous block, this specifies the root directory from which NGINX should serve files. For all requests matching this block, NGINX will look for files in /var/www/html/front/.output/public. In this folder, Nuxt stores files that should be publicly accessible - i.e., all files placed in the public folder after compilation will be in this folder.
  • try_files $uri $uri/ @nuxt;: This directive checks if a file or directory specified by $uri exists in the directory indicated by root. If so, NGINX serves that file or directory. If the file or directory does not exist, control is passed to the @nuxt identifier.
  • @nuxt: This is the name of the location block that specifies how NGINX should forward requests to the application server (in this case, the Node.js server with the Nuxt application). This way, requests not directly related to static files (such as dynamic page requests) are forwarded to the application backend for further handling.

location @nuxt

The third block is our earlier configuration saved in the form of a so-called "named location," to which NGINX will delegate handling if all previous conditions are not met.

What does the traffic look like before and after configuration?

Using a simple Nitro plugin, we can track which requests are handled by our application:

export default defineNitroPlugin((nitroApp) => {
    nitroApp.hooks.hook('request', (event) => {
        console.log('nitro request:', event.path)
    })
})

This is the list of requests our application must handle to correctly render our page. It is a very simple application with the nuxt-typo3 module installed. In a real project, this list extends to hundreds of files during a single request.

nitro request: /
nitro request: /_nuxt/entry.8dbc29b4.js
nitro request: /_nuxt/T3Page.ae78a545.js
nitro request: /_nuxt/T3BackendLayout.vue.e990d826.js
nitro request: /_nuxt/useT3DynamicComponent.9f5e9126.js
nitro request: /_nuxt/vue.f36acd1f.dd941d21.js
nitro request: /_nuxt/T3BackendLayout.7ab09a15.js
nitro request: /_nuxt/T3BlDefault.c0432432.js
nitro request: /_nuxt/T3Renderer.vue.72118a2e.js
nitro request: /_nuxt/T3Renderer.94a76c91.js
nitro request: /_nuxt/T3Frame.5eb6794d.js
nitro request: /_nuxt/T3CeTextpic.c2018245.js
nitro request: /_nuxt/T3CeTextpic.vue.fa80ffe1.js
nitro request: /_nuxt/T3CeHeader.vue.99047ba5.js
nitro request: /_nuxt/T3Link.vue.467931b2.js
nitro request: /_nuxt/nuxt-link.984d5a23.js
nitro request: /_nuxt/T3HtmlParser.vue.224ec59b.js
nitro request: /_nuxt/T3MediaGallery.vue.703aeb70.js
nitro request: /_nuxt/MediaFile.vue.bc0215d4.js
nitro request: /_nuxt/useMediaFile.dfba60a9.js
nitro request: /_nuxt/T3MediaGallery.cccbba3f.js
nitro request: /_nuxt/MediaFile.d8fbd6ee.js
nitro request: /_nuxt/T3CeHeader.5a4ad5c4.js
nitro request: /_nuxt/T3Link.8dd7a6f2.js
nitro request: /_nuxt/T3HtmlParser.2b7a107c.js
nitro request: /_nuxt/T3CeBullets.9fe1cd9d.js
nitro request: /_nuxt/T3CeMenuPages.d3cd596a.js
nitro request: /_nuxt/MediaImage.56fec44a.js
nitro request: /_nuxt/T3CeMenuPages.vue.7125810e.js
nitro request: /_nuxt/T3CeMenuPagesList.vue.4fa2fe8f.js
nitro request: /_nuxt/i18n.config.604a9d1c.js
nitro request: /_nuxt/i18n.config.604a9d1c.js
nitro request: /_nuxt/error-404.84c21dd9.js
nitro request: /_nuxt/error-500.fe3c6d26.js
nitro request: /_nuxt/builds/meta/176ec5cf-2b20-416b-96c4-bdb9cacced6e.json
nitro request: /favicon.ico

In the above list, we see a request for the homepage "nitro request: /"; the rest are static file requests.

What does such traffic look like after our proposed configuration?

nitro request: /

All other requests are handled at the NGINX server level.

Additionally, in the Nitro configuration you can disable serving static files through Nitro, as well as the generation of a separate .output/public folder; however, we do not do this. We leave these options enabled because, in the absence of a configured NGINX server (for example, locally), the application can still serve static files via Nitro.
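As a sketch, disabling static file serving in Nitro could look like the following nuxt.config.js fragment. The serveStatic option is part of Nitro's configuration; verify the exact option names against the Nitro documentation for your version.

```javascript
// nuxt.config.js — a minimal sketch, assuming Nitro's `serveStatic` option;
// verify option names against the Nitro docs for your version.
export default defineNuxtConfig({
  nitro: {
    // When false, Nitro will not serve files from the public/ directory itself,
    // leaving static assets entirely to NGINX.
    serveStatic: false,
  },
})
```

As noted above, we leave this option at its default so that the application can still serve static files on its own when no NGINX server is configured.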

Quick Comparison

Below we present the results of a load test performed using the k6 tool. This allows us to compare how the server behaves with a given number of concurrent connections.

Test configuration:

 export const options = {
     stages: [
         { duration: '15s', target: 1000 },
         { duration: '15s', target: 2000 },
         { duration: '15s', target: 3000 },
         { duration: '15s', target: 3500 },
     ],
 }

This configuration means that the test ramps up the number of VUs over time:

  • In the first 15 seconds, the number of VUs increases to 1000.
  • In the next 15 seconds, the number increases to 2000.
  • Then it increases to 3000 VUs in the following 15 seconds.
  • Finally, it reaches 3500 VUs in the last 15-second stage.

Each VU simulates a user making requests to the server as defined by the test script. The total number of requests made during each stage will depend on how quickly each VU can send requests, which is influenced by the server’s response time and the complexity of each request.

In the test, we performed a request for a static file (favicon) served via the Nuxt application. Tests were performed on the same machine, with the same application and the same cache settings for static files - no caching.
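Combined with the stages configuration above, the request itself can be sketched as a minimal k6 script. The favicon URL below is a placeholder; point it at the host serving the setup under test.

```javascript
// load-test.js — a minimal sketch; run with `k6 run load-test.js`.
import http from 'k6/http';

export default function () {
    // Each VU repeatedly requests the same static file (placeholder URL).
    http.get('http://localhost/favicon.ico');
}
```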

Results for serving static files through Nitro (Node.js)

Nitro (Node.js) HTTP performance test results showing response times and errors.

And for NGINX

NGINX HTTP performance test results showing response times and errors.

What we can notice:

  • Request duration (average): The Nginx server is significantly faster than the Node.js server, with an average request duration of only 44.7 ms compared to 481.16 ms for Node.js.
  • Waiting time (average): Also, in terms of waiting time, Nginx is significantly faster, with an average time of 43.74 ms compared to 462.61 ms for Node.js.
  • Number of requests: The Nginx server processed significantly more requests per second, achieving an average of 1257.75/s compared to 904/s for Node.js.


In conclusion, the NGINX configuration we discussed significantly improves application performance by directly serving static files and efficiently managing network traffic. This reduces the load on the Node.js server, eliminates unnecessary request processing, especially in the case of 404 errors, and centralizes cache management and other performance settings. As a result, the entire application operates faster and more efficiently.

Additionally, it's worth noting that in the absence of an NGINX server, for example, in local development environments, the application still operates as it did previously. This means that if NGINX is not configured or not used, the Node.js server itself handles all requests, including serving static files and handling 404 errors. This flexibility allows for easy transition between development and production environments without the need to change the application logic.

For those interested, I invite you to check out an interesting thread on StackOverflow discussing the advantages of Node.js vs NGINX in the context of serving static files.