Improving HTML Time to First Byte
The time to first byte (TTFB) of a website is the time from when a user starts navigating to a page until the first bytes of that page's HTML arrive. In more than ten years of running WebPageTest, a slow TTFB has consistently been one of the most common problems I see. In a recent sample of test data, 20% of pages had a TTFB over 3 seconds, 80% took more than 5 seconds to start rendering (with 10% taking more than 10 seconds), and 500 of the pages were larger than 15 MB.

A fast TTFB matters because it feeds directly into every other metric: each millisecond shaved off the TTFB is a millisecond saved in everything measured after it. A fast TTFB does not guarantee a fast overall experience, but a slow one certainly guarantees a slow experience.

Several things contribute to the TTFB: redirects, DNS lookups, connection setup, SSL negotiation, and the time the server takes to respond with the HTML itself. Most of these are easy to fix by putting the site behind a service like Cloudflare; the server response time for the HTML is usually the hardest to resolve.

In a waterfall chart, the server response time shows up as the light-blue bar on the first request, and it can be embarrassingly obvious when it is slow. Ideally, it should not be much longer than the orange socket-connect bar that comes just before it.

A slow origin response can have many causes: server configuration, load on the machine, backend databases and services the application talks to, or the application code itself. Tracking down the root cause usually means development teams working with Application Performance Management tools to trace the slowest parts of the application and improve them. Many site owners, however, have neither the resources nor the expertise for that kind of investigation. In most cases they hired a developer to build the site, or built it themselves on WordPress, and host it on the cheapest hosting they could find.
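To make the metric concrete, a page can measure its own TTFB in the browser with the Navigation Timing API. The helper below is a minimal sketch: the entry fields (startTime, responseStart) are standard PerformanceNavigationTiming properties, but the function name is my own.

```javascript
// Sketch: compute TTFB from a PerformanceNavigationTiming-style entry.
// responseStart marks when the first byte of the response arrived and
// startTime marks the start of navigation, so the difference is the
// time to first byte in milliseconds.
function timeToFirstByte(entry) {
  return entry.responseStart - entry.startTime;
}

// In a real page this would read from the browser's performance timeline:
//   const [nav] = performance.getEntriesByType("navigation");
//   console.log(`TTFB: ${timeToFirstByte(nav)} ms`);
```

Everything that happens before responseStart (redirects, DNS, connection setup, SSL negotiation, server think time) is inside this number, which is why each of those factors shows up directly in the TTFB.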
Hosting is generally built to run as many sites as possible, not to run any one site as fast as possible. But most HTML content is not especially dynamic. It needs to change relatively quickly when the site is updated, yet for most of the web the content is effectively static for months or years at a time. There are special cases, such as logged-in users (administrators or otherwise) who see different content, but the overwhelming majority of visits come from anonymous users.

If the HTML can be cached and served directly from the edge, the performance improvement can be significant (up to 3 seconds faster across all metrics in this case).

There are dozens of caching plugins for WordPress, but they require configuration (where to store the pages) and their performance still depends heavily on the hosting. Moving the content to an edge cache reduces complexity, eliminates the extra round trip back to the origin, and takes the hosting's performance out of the equation entirely. It can also significantly reduce the load on the hosting by offloading all anonymous traffic.

Cloudflare supports caching static HTML at the edge, and Business and Enterprise customers can let users with active sessions skip the cache by enabling "bypass cache on cookie". This works together with the Cloudflare plugin for WordPress so the cache can be cleared when content is updated. Other plugins integrate with various content delivery networks, but they all require configuration with API keys and CDN-specific integrations.

For edge caching of HTML to be broadly adopted, we need a way to cache it automatically (or as close to automatically as possible). That calls for a communication path between an origin (like a WordPress site) and an edge cache (like Cloudflare's edge nodes) to manage a remote cache that can be explicitly purged. The origin should be able to:

- Detect when an edge cache that supports the protocol is in the path.
- Specify what content should be cached and for which visitors (for example, visits without login cookies).
- Purge cached content when it changes (globally, across all edges).

Rather than requiring the origin to call an API to purge changes and be manually configured to decide what to cache and when, everything can be done with HTTP headers on the requests that already flow back and forth between the edge and the origin:

1. Requests from the edge to the origin carry a header announcing that an edge cache is present, along with its capabilities:

x-HTML-Edge-Cache: supports=cache|purgeall|bypass-cookies

2. When the origin responds with a cacheable HTML page, it adds a response header indicating that the page should be cached, along with the rules for when the cached version must not be used (so logged-in users can bypass the cache):

x-HTML-Edge-Cache: cache,bypass-cookies

With this header, the HTML is cached, but any request carrying a cookie whose name starts with "wordpress" or "wp-" bypasses the cache and goes to the origin.

3. When a request modifies the site's content (updating a post, changing a theme, adding a comment), the origin adds a response header indicating that the cache should be purged:

x-HTML-Edge-Cache: purgeall

The only tricky part is that the purge has to clear the cache globally. The Workers cache is local to each edge and provides no interface for global operations. One option is to use Cloudflare's API to purge the global cache, but that is heavy-handed (it purges everything, including scripts and images) and requires some configuration. If you know exactly which URLs change when content is updated, a targeted API purge of just those URLs would probably be the best solution. Using the new KV store for Workers, though, the cache can be purged in a different way.
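A minimal sketch of how an edge Worker might handle this header exchange. The header name and commands come from the scheme above; the helper functions and their names are my own illustration, not the actual plugin or Worker code.

```javascript
const EDGE_CACHE_HEADER = "x-HTML-Edge-Cache";

// Capabilities the edge announces on requests to the origin (step 1).
const EDGE_CAPABILITIES = "supports=cache|purgeall|bypass-cookies";

// Parse the origin's response header (steps 2 and 3) into commands,
// e.g. "cache,bypass-cookies" -> { cache: true, bypassCookies: true }.
function parseEdgeCacheHeader(value) {
  const commands = { cache: false, purgeAll: false, bypassCookies: false };
  if (!value) return commands;
  for (const part of value.split(",")) {
    const token = part.trim().toLowerCase();
    if (token === "cache") commands.cache = true;
    else if (token === "purgeall") commands.purgeAll = true;
    else if (token === "bypass-cookies") commands.bypassCookies = true;
  }
  return commands;
}

// Decide whether a request must skip the cache because it carries a
// login cookie (names starting with "wordpress" or "wp-").
function shouldBypassCache(cookieHeader, prefixes = ["wordpress", "wp-"]) {
  if (!cookieHeader) return false;
  return cookieHeader.split(";").some((cookie) => {
    const name = cookie.split("=")[0].trim().toLowerCase();
    return prefixes.some((prefix) => name.startsWith(prefix));
  });
}
```

In a Worker's fetch handler, the edge would add the `EDGE_CACHE_HEADER` capability announcement to each origin request, consult `shouldBypassCache(request.headers.get("Cookie"))` before serving a cached copy, and trigger a global purge whenever a response header parses to `purgeAll`.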
The Worker script uses a cache versioning scheme in which every URL gets a version number appended to it (for example, http://www.example.com/?cf_edge_cache_ver=32). The modified URL is only used locally by the Worker, as the key for cached responses, and the current version number is stored in KV, which is a global store. When the cache is purged, the version number is incremented, which changes the cache key for every URL. Old entries naturally fall out of the cache because they are never accessed again. Wiring KV up to the Worker takes a little setup today, but hopefully in the future it can be automatic.

I believe there is real value in standardizing a way for an edge cache and an origin to communicate about caching dynamic content. That would encourage content management systems to build support directly into their platforms and provide a standard interface that works across providers (even for local edge caching in load balancers or other reverse proxies). After some more testing with different kinds of sites, I plan to bring the concept to the HTTP Working Group at the IETF to see if we can turn the control headers into an official standard (with different names). If you have feedback on how it should work or what features it should expose (purging specific URLs, varying content for mobile/desktop or by region, extending it to cover all content types, etc.), I would love to hear from you.
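The versioning scheme can be sketched with an in-memory stand-in for KV. A real Worker would call the KV namespace's get/put API instead of a Map; the Map, the key name, and the function names below are illustrative, with the cf_edge_cache_ver parameter matching the example above.

```javascript
// In-memory stand-in for Workers KV; a real Worker would use a bound
// KV namespace, e.g. EDGE_CACHE_KV.get(...) / EDGE_CACHE_KV.put(...).
const kv = new Map();
const VERSION_KEY = "html_cache_version";

function getCacheVersion() {
  return kv.get(VERSION_KEY) ?? 0;
}

// The versioned URL is only used locally as the cache key; clients
// never see the cf_edge_cache_ver parameter.
function cacheKeyFor(url) {
  const keyUrl = new URL(url);
  keyUrl.searchParams.set("cf_edge_cache_ver", String(getCacheVersion()));
  return keyUrl.toString();
}

// "Purging" increments the global version, which changes every cache
// key at once; stale entries stop being referenced and age out.
function purgeAll() {
  kv.set(VERSION_KEY, getCacheVersion() + 1);
}
```

For example, `cacheKeyFor("http://www.example.com/")` yields a key ending in `cf_edge_cache_ver=0`; after one call to `purgeAll()`, the same URL maps to a key ending in `cf_edge_cache_ver=1`, so every previously cached response is effectively invisible.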
Company
Cloudflare
Date published
Dec. 24, 2018
Author(s)
Patrick Meenan