[Translation] HTTP headers for the responsible developer


Today, being online is a familiar condition for many people. We all buy, communicate, read articles, look for information on various topics. The network connects us to the world, but above all, it connects people. I myself have been using the Internet for 20 years now, and my relationship with it changed eight years ago when I became a web developer.

Developers connect people.
Developers help people.
Developers give people the opportunity.

Developers can create a network for everyone, but this ability must be used responsibly. After all, it is important to create things that help people and empower them. In this article I want to talk about how HTTP headers can help you build better products that work well for everyone on the web.

HTTP - hypertext transfer protocol


Let's talk about HTTP first. HTTP is a protocol used by computers to request and send data over the Internet.

When the browser requests a resource from the server, it uses HTTP. This request includes a set of key-value pairs containing information such as the browser version or file formats that it understands. These pairs are called request headers.

The server responds with the requested resource, but also sends response headers containing information about the resource or the server itself.

  Request:
  GET https://the-responsible.dev/
  Accept: text/html, application/xhtml+xml, application/xml
  Accept-Encoding: gzip, deflate, br
  Accept-Language: en-GB, en-US;q=0.9, en;q=0.8, de;q=0.7
  ...

  Response:
  Connection: keep-alive
  Content-Type: text/html; charset=utf-8
  Date: Mon, 11 Mar 2019 12:59:38 GMT
  ...
  Response Body

Today, HTTP is the foundation of the Internet and offers many ways to optimize user experience. Let's see how you can use HTTP headers to create a secure and accessible network.

Network must be secure


Before, I never sensed any danger when searching for something on the Internet. But the more I learned about the world wide web, the more I worried. You can read how hackers changed libraries on a global CDN, how random sites mine cryptocurrency in their visitors' browsers, and how people regularly gain access to popular open source projects using social engineering. This is not good. But why should you care?

If you are developing for the web today, you are not just writing code. In modern web development, many people work on a single site. You probably also rely on a lot of open source, and for marketing purposes you may include several third-party scripts. Hundreds of people provide code that runs on your site, and developers have to work in these realities.

Can you trust all these people and all of their code?

Personally, I do not trust any third-party code. Fortunately, there are ways to protect your site and make it more secure. In addition, tools such as helmet can be useful, for example, for Express applications.
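As a rough sketch, wiring helmet into an Express application looks something like this (the route and port are just for illustration):

  // Sketch: enabling helmet's default security headers in an Express app.
  import express from "express";
  import helmet from "helmet";

  const app = express();

  // helmet() applies a sensible default set of security headers,
  // for example X-Content-Type-Options and Strict-Transport-Security.
  app.use(helmet());

  app.get("/", (_req, res) => {
    res.send("Hello, responsible web!");
  });

  app.listen(3000);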

If you want to analyze how much third-party code runs on your site, you can look at the browser developer tools or try Request Map Generator.

HTTPS and HSTS - make sure your connection is safe


A secure connection is the foundation of a secure Internet. Without encrypted requests going over HTTPS, you cannot be sure that there is no one else between your website and its visitors. Anyone can quickly set up a public Wi-Fi network and perform a man-in-the-middle attack on everyone who connects to it. How often do you use public Wi-Fi? And how often do you check whether it is trustworthy?

Fortunately, today TLS certificates are free; HTTPS has become the standard, and browsers provide advanced features only over secure connections and even mark non-HTTPS websites as insecure, which encourages adoption of the protocol. Unfortunately, we are not always safe on the Internet. When someone wants to open a website, they do not type the protocol into the address bar (and why should they?). This creates an unencrypted HTTP request. Sites that take security seriously redirect the user to HTTPS. But what if someone intercepts that first unprotected request?

You can use the HSTS (HTTP Strict Transport Security) response header to tell browsers that your site works only via HTTPS.

  Strict-Transport-Security: max-age=1000; includeSubDomains; preload

This header tells the browser that you do not want to receive HTTP requests, and from then on it will automatically issue requests to the same origin over a secure connection. If the user tries to open the same URL via HTTP, the browser will switch to HTTPS and redirect them.

You can configure how long this setting should remain active (max-age, in seconds) in case you want to allow HTTP again later. If you want to cover subdomains, you can configure this with includeSubDomains.

If you want to do everything possible so that the browser never requests your site via HTTP, you can also set the preload directive and submit your site to the global preload list. If your site's HSTS configuration sets a max-age of at least one year and is active for subdomains, it can be included in browsers' built-in list of sites that work only via HTTPS.

Have you ever wondered why you can no longer open local domains like my-site.dev in your browser via HTTP? The reason is this built-in list: .dev is automatically included in it, since it became a real top-level domain in February 2019.

The HSTS header not only makes your site a little safer, but also speeds it up. Imagine someone visiting your site over a slow mobile connection. If the first request is made via HTTP only to receive a redirect, the user may not see anything on the screen for a few seconds. With HSTS you save those seconds, and the browser uses HTTPS automatically.
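If your server or CDN does not set this header for you, sending it yourself is a one-liner. A minimal sketch for an Express app, with a one-year max-age as an illustrative value:

  // Sketch: sending the HSTS header from Express middleware.
  import express from "express";

  const app = express();

  app.use((_req, res, next) => {
    // One year, covering subdomains; add "preload" only once you are
    // sure you want to end up on the browsers' preload list.
    res.setHeader(
      "Strict-Transport-Security",
      "max-age=31536000; includeSubDomains"
    );
    next();
  });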

CSP - clearly state what is allowed on your site


Now that your site is running over a secure connection, you may encounter a problem when browsers start blocking requests that go to an unprotected address due to mixed content policies. The Content-Security-Policy (CSP) header offers a great way to handle these situations. You can define your set of CSP rules via a meta element in the served HTML or via HTTP headers.

  Content-Security-Policy: upgrade-insecure-requests  

The upgrade-insecure-requests directive causes the browser to magically convert all HTTP requests into HTTPS requests.

However, CSP is not only about the protocol used. It offers detailed ways to determine which resources and activities are allowed on your site. You can, for example, specify which scripts should be executed or where to download images from. If something is not allowed, the browser blocks this action and prevents potential attacks on your site.



At the time of writing, there were 24 different configuration options for CSP. They range from scripts through style sheets all the way to service workers.



You can find a full review on MDN.

Using CSP, you can specify what your site should include and what not.

  Content-Security-Policy: default-src 'self'; script-src 'self' just-comments.com www.google-analytics.com production-assets.codepen.io storage.googleapis.com; style-src 'self' 'unsafe-inline'; img-src 'self' data: images.contentful.com images.ctfassets.net www.gravatar.com www.google-analytics.com just-comments.com; font-src 'self' data:; connect-src 'self' cdn.contentful.com images.contentful.com videos.contentful.com images.ctfassets.net videos.ctfassets.net service.just-comments.com www.google-analytics.com; media-src 'self' videos.contentful.com videos.ctfassets.net; object-src 'self'; frame-src codepen.io; frame-ancestors 'self'; worker-src 'self'; block-all-mixed-content; manifest-src 'self'; disown-opener; prefetch-src 'self'

The above rule set is for my personal site, and if you think this example of a CSP definition is very complex, you are absolutely right. I implemented this set on my third attempt, deploying and rolling back again, because it broke the site several times. But there is a better way.

To avoid breaking your site this way, CSP also provides a report-only mode.

  Content-Security-Policy-Report-Only: default-src 'self'; ... report-uri https://stefanjudis.report-uri.com/r/d/csp/reportOnly

In Content-Security-Policy-Report-Only mode, browsers simply log the resources that would have been blocked instead of actually blocking them. This reporting mechanism lets you check and adjust your rule set.

Both the Content-Security-Policy and Content-Security-Policy-Report-Only headers also offer a way to define an endpoint to which violation reports are sent (report-uri). You can set up a logging server and use the submitted reports to adjust the CSP rules until they are ready to ship.

The recommended process looks like this: first run CSP in report-only mode, analyze incoming violations against real traffic, and only turn enforcement on once no more violations for your monitored resources are coming in.
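A sketch of this rollout in an Express app might look like the following; the /csp-report path, the ENFORCE_CSP environment flag, and the tiny policy are illustrative assumptions:

  // Sketch: rolling out CSP in report-only mode first, enforcing later.
  import express from "express";

  const app = express();
  const policy = "default-src 'self'; report-uri /csp-report";

  app.use((_req, res, next) => {
    // Flip the header name once reports stop showing violations.
    const header = process.env.ENFORCE_CSP
      ? "Content-Security-Policy"
      : "Content-Security-Policy-Report-Only";
    res.setHeader(header, policy);
    next();
  });

  // Browsers POST violation reports as JSON (application/csp-report).
  app.post(
    "/csp-report",
    express.json({ type: ["application/csp-report", "application/json"] }),
    (req, res) => {
      console.log("CSP violation:", JSON.stringify(req.body));
      res.sendStatus(204);
    }
  );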

If you are looking for a service to help you deal with these logs, I recommend Report URI; it helps me a lot.

CSP adoption today


Today's browsers support CSP well, but unfortunately not many sites use it. To see how many sites serve content with a CSP, I queried HTTP Archive and found that only 6% of the crawled sites use this policy. I think we can make the Internet more secure and protect our users from involuntary cryptocurrency mining.



Network should be accessible


While writing this article, I am sitting in front of a relatively new MacBook on a fast home Wi-Fi connection. Developers often forget that this situation is not standard for most of our users. People visiting our sites use old phones and dubious connections. Heavy, overloaded sites making hundreds of requests leave them with a bad impression.

And it is not just an impression. People pay different amounts for traffic depending on where they live. Imagine you are creating a website for a hospital. Information on it can be crucial and save lives. If a page on the hospital site weighs 5 MB, it will not only work slowly, but may also be too expensive for those who need it most. The price of five megabytes of traffic in Europe or the United States is negligible compared to the price in Africa. Developers are responsible for making web pages accessible to all. This responsibility includes serving the right resources, choosing the right tools (do you really need a JS framework for a landing page?), and avoiding unnecessary requests.

Cache-Control - avoid requests for immutable resources


Today a site can contain hundreds of resources, from CSS to scripts and images. Using the Cache-Control header, developers can indicate how long a resource should be considered "fresh" and may be served from the browser cache.

  Cache-Control: max-age=30, public

If you set up Cache-Control properly, data transfer is saved, and files can be served from the browser cache for a certain number of seconds (max-age). Browsers re-check cached resources only after this period expires.
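With express.static, for example, this is a single option; the thirty-second lifetime below just mirrors the header above:

  // Sketch: letting express.static send Cache-Control: public, max-age=30.
  import express from "express";

  const app = express();
  app.use(express.static("public", { maxAge: "30s" }));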

However, if visitors refresh the page, browsers will still revalidate it, including the referenced resources, to ensure that the cached data is still valid. Servers respond either with a 304 status code, indicating that the cached data is still valid, or with a 200 while sending updated data. This saves transferred data, but not necessarily requests.

This is where immutable comes into play.

Immutable - never request a resource twice


In modern frontend applications, CSS and script files usually have unique names, for example, styles.123abc.css . The name of this file depends on the content. And when the contents of files change, their names also change.

These unique files can potentially be cached forever, including when the user refreshes the page. The immutable directive prevents the browser from revalidating the resource within the given time interval. This is very important for objects with checksums in their names and helps avoid repeated revalidation requests.

  Cache-Control: max-age=31536000, public, immutable
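Assuming your build writes content-hashed file names into a dist/assets folder, a sketch with express.static (which supports an immutable option) could look like this:

  // Sketch: long-lived, immutable caching for content-hashed assets.
  import express from "express";

  const app = express();
  app.use(
    "/assets",
    express.static("dist/assets", {
      maxAge: "1y",    // 31536000 seconds, as in the header above
      immutable: true, // appends the immutable directive to Cache-Control
    })
  );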

Implementing optimal caching is very difficult; browser caching in particular is not very intuitive, since it has many different configurations. I recommend reading the following materials:


Accept-Encoding - compress as much as possible


With Cache-Control, we can save requests and reduce the amount of data repeatedly transmitted over the network. But we can not only save requests, we can also shrink what is transferred.

When serving resources, developers should take care to send as little data as possible. For text resources like HTML, CSS, and JavaScript, compression plays an important role in saving transferred data.

The most popular compression method today is GZIP. Servers have enough power to compress text files on the fly and serve compressed data when requested. But GZIP is no longer the best option.

If you take a look at the browser requests for text files such as HTML, CSS, and JavaScript and analyze their headers, you will find Accept-Encoding among them.

  Accept-Encoding: gzip, deflate, br  

This header tells the server which compression algorithms it understands. The little-known parameter br stands for Brotli compression and is used on high traffic sites such as Google and Facebook. To use Brotli, your site must work via HTTPS.

This compression algorithm was designed with small file sizes in mind. If you try to compress a file manually on your local device, you will find that Brotli really does compress better than GZIP.



You may have heard that Brotli compression is slower. The reason is that Brotli has 11 compression levels, and by default it selects the one that produces the smallest files, which lengthens the procedure. GZIP, on the other hand, has 9 levels, and its default balances compression speed and file size. As a result, Brotli's default level is unsuitable for on-the-fly compression, but if you lower the level, small files can be compressed about as fast as with GZIP. You can then use it for on-the-fly compression and see it as a potential replacement for GZIP in supporting browsers.

In addition, if you want to compress your files as much as possible, you can forget about dynamic compression and pre-generate optimized GZIP files using zopfli, along with Brotli files, and serve them statically.
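Node's built-in zlib module ships Brotli support, so a pre-compression build step might look roughly like this (the file paths are illustrative; zopfli itself would require a separate package):

  // Sketch: pre-generating .br and .gz variants of a static asset.
  import { brotliCompressSync, constants, gzipSync } from "zlib";
  import { readFileSync, writeFileSync } from "fs";

  const source = readFileSync("dist/app.js");

  // Quality 11 is Brotli's slowest, smallest setting - fine at build time.
  const br = brotliCompressSync(source, {
    params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
  });
  writeFileSync("dist/app.js.br", br);

  const gz = gzipSync(source, { level: 9 });
  writeFileSync("dist/app.js.gz", gz);

  console.log(`gzip: ${gz.length} B, brotli: ${br.length} B`);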

If you want to read more about Brotli compression and its comparison with GZIP, Akamai employees have conducted an extensive study on this topic.

Accept and Accept-CH - serve tailored resources to the user


Optimizing text resources saves important kilobytes, but what about heavier resources like images, where even more data can be saved?

Accept - serving images in the right format


Browsers do not just tell us which compression algorithms they understand. When the browser requests an image, it also provides information about which file formats it understands.

  Accept: image/webp, image/apng, image/*, */*;q=0.8

For several years there was a struggle around new image formats, and WebP has won. WebP is an image format invented by Google, and its support is now very good.

Using this request header, developers can serve a WebP image even if the browser requested image.jpg, resulting in a smaller file size. Dean Hume has written a good guide on how to use this. Very cool!
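On the server, the negotiation can be as simple as checking the Accept header before picking a file. A naive sketch (the paths are made up for illustration):

  // Sketch: serving WebP to browsers that advertise support for it.
  import express from "express";

  const app = express();

  app.get("/img/header.jpg", (req, res) => {
    const accept = req.headers.accept ?? "";
    // Tell caches that the response depends on the Accept header.
    res.setHeader("Vary", "Accept");
    if (accept.includes("image/webp")) {
      res.sendFile("img/header.webp", { root: "public" });
    } else {
      res.sendFile("img/header.jpg", { root: "public" });
    }
  });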



Accept-CH - serving images of the right size


You can also enable client hints for browsers that support this feature. Client hints are a way to tell browsers to send additional information about the viewport width, the image width, and even network conditions such as RTT (round-trip time) and connection type, for example 2g.
You can activate hints by adding a meta element:

  <meta http-equiv="Accept-CH" content="Viewport-Width, Downlink">
  <meta http-equiv="Accept-CH-Lifetime" content="86400">

Or by sending headers with the response to the initial HTML request:

  Accept-CH: Width, Viewport-Width
  Accept-CH-Lifetime: 100

In subsequent requests, browsers will send the additional information for a certain period (Accept-CH-Lifetime, in seconds), which can help developers adapt images to the user's conditions without changing the HTML.

To get additional information, such as the image width, on the server side, you can give your images the sizes attribute, which tells the browser how these images will be laid out.

  <!-- this image will be displayed at 100vw -->
  <img class="w-100" src="/img/header.jpg" alt="" sizes="100vw">

With the Accept-CH response header and images carrying the sizes attribute, browsers will include the Viewport-Width and Width headers in image requests, showing you which image fits best.
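On the server, those hints arrive like any other header. A sketch of snapping the hinted width to a fixed set of sizes (the bucket values are illustrative and match the caching advice further below):

  // Sketch: picking an image size bucket from the Width / Viewport-Width
  // client hint headers.
  import express from "express";

  const app = express();
  const BUCKETS = [200, 400, 800, 1200, 1600, 2000];

  app.get("/image/thing", (req, res) => {
    const hinted =
      Number(req.headers["width"] ?? req.headers["viewport-width"]) || 800;
    const width = BUCKETS.find((b) => b >= hinted) ?? 2000;
    res.redirect(`/image/thing-${width}.jpg`);
  });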



With the supported image format and sizes available, you can send tailored data without having to write unwieldy picture elements that only account for file format and size, as shown below.

  <picture>
    <!-- serve WebP to Chrome, Edge, Firefox and Opera -->
    <source
      media="(min-width: 50em)"
      sizes="50vw"
      srcset="/image/thing-200.webp 200w, /image/thing-400.webp 400w,
              /image/thing-800.webp 800w, /image/thing-1200.webp 1200w,
              /image/thing-1600.webp 1600w, /image/thing-2000.webp 2000w"
      type="image/webp">
    <source
      sizes="(min-width: 30em) 100vw"
      srcset="/image/thing-crop-200.webp 200w, /image/thing-crop-400.webp 400w,
              /image/thing-crop-800.webp 800w, /image/thing-crop-1200.webp 1200w,
              /image/thing-crop-1600.webp 1600w, /image/thing-crop-2000.webp 2000w"
      type="image/webp">
    <!-- serve JPEG to others -->
    <source
      media="(min-width: 50em)"
      sizes="50vw"
      srcset="/image/thing-200.jpg 200w, /image/thing-400.jpg 400w,
              /image/thing-800.jpg 800w, /image/thing-1200.jpg 1200w,
              /image/thing-1600.jpg 1600w, /image/thing-2000.jpg 2000w">
    <source
      sizes="(min-width: 30em) 100vw"
      srcset="/image/thing-crop-200.jpg 200w, /image/thing-crop-400.jpg 400w,
              /image/thing-crop-800.jpg 800w, /image/thing-crop-1200.jpg 1200w,
              /image/thing-crop-1600.jpg 1600w, /image/thing-crop-2000.jpg 2000w">
    <!-- fallback for browsers that don't support picture -->
    <img src="/image/thing.jpg" width="50%">
  </picture>

If you have access to the viewport width and the image sizes, you can move the image resizing logic to your servers.

However, keep in mind that you should not generate images for every possible width just because you know the exact image width. Sending images for a fixed range of sizes (image-200, image-300, ...) makes better use of CDN caching and saves computation time.

In addition, with modern technologies such as service workers, you can even intercept and modify requests directly on the client to serve the best image files. With client hints enabled, service workers get access to layout information, and in combination with an image API such as Cloudinary they can adjust image URLs right in the browser to fetch properly sized images.

If you are looking for more information about client hints, you can read the articles by Jeremy Wagner or Ilya Grigorik on this topic.

Network should be respectful


Since each of us spends many hours a day online, there is one final aspect that I consider very important: the network should be respectful of its users.

Preload - reduced wait time


As developers, we value our users' time. No one wants to waste time. As discussed in the previous sections, serving the right data plays a big role in saving time and traffic. It is not only about which requests are made, but also about their timing and order.

Let me give an example: if you add a stylesheet to your site, browsers will not render anything until it has loaded. While nothing is displayed on the screen, the browser keeps parsing the HTML looking for other resources to request. Once loaded and parsed, the stylesheet may itself reference other important resources, such as fonts, which then also have to be requested. This sequential process can increase page load time for your visitors.

Using rel=preload, you can tell the browser which resources will be requested soon.

You can preload resources through HTML elements:

  <link rel="preload" href="/font.woff2" as="font" type="font/woff2" crossorigin="anonymous">

Or via headers:

  Link: </font.woff2>; rel=preload; as=font; no-push

Thus, the browser receives the header or finds the link element and immediately requests the resources, so that they are already in the cache when needed. This process saves your visitors time.
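If you prefer the header variant, it is a single setHeader call in Express; the font path mirrors the example above:

  // Sketch: sending a preload hint as an HTTP Link header.
  import express from "express";

  const app = express();

  app.use((_req, res, next) => {
    res.setHeader("Link", "</font.woff2>; rel=preload; as=font");
    next();
  });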

For optimal pre-loading of resources and an understanding of all configurations, I recommend paying attention to the following materials:


Feature-Policy - don't annoy others


The last thing I want to see is sites that ask me for permissions without any reason. On this I can only quote my colleague Phil Nash.

Do not ask for permissions on page load. Developers should be respectful and not build websites that annoy visitors. People just dismiss all permission dialogs. If we do not use permissions correctly, websites and developers lose trust, and shiny new features lose their appeal.

But what if your site has to include a lot of third-party code, and all these scripts run a lot of permission dialog boxes? How to make sure that all included scripts behave correctly?

This is where the Feature-Policy header comes into play. With it, you can specify which features are allowed and rein in the permission dialogs that third-party code executed on your site can trigger.

  Feature-Policy: vibrate 'none'; geolocation 'none'
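Sent from an Express app, that could look like this sketch (the directive list is the same as above):

  // Sketch: restricting vibration and geolocation site-wide.
  import express from "express";

  const app = express();

  app.use((_req, res, next) => {
    res.setHeader("Feature-Policy", "vibrate 'none'; geolocation 'none'");
    next();
  });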

You can set this behavior for the whole site using the header. You can also set it for embedded content such as iframes, which may be required for third-party integrations.

  <iframe allow="camera 'none'; microphone 'none'">

At the time of this writing, the Feature-Policy header was rather experimental, but it is interesting to look at its future possibilities. In the near future, developers will not only be able to keep themselves in check and prevent annoying dialogs, but also block non-optimized content. These capabilities will significantly improve user experience.



You can find a full review at MDN.

Looking at the list above, you may recall the most annoying one of all: push notifications. It turned out that using Feature-Policy for push notifications is harder than expected. If you want to know more, you can follow the corresponding issue on GitHub.

Thanks to Feature-Policy, you can be sure that neither you nor third-party resources will turn your site into the permissions race that, unfortunately, has already become familiar on many sites.

Network must be for everyone


In this article, I talked about only a few of the headers that can help improve user experience. If you want an almost complete overview of headers and their capabilities, I recommend the presentation by Christian Schaefer, "HTTP headers - the hidden champions".

I know that creating a great site today is a very difficult task. Developers must take into account design, devices, frameworks, and yes... headers also play a role. I hope this article gives you some ideas, and that you will consider security, accessibility, and respect in your next web projects, because those are the factors that make the network a truly great place for everyone.
