Why Cache Isn't Shared Across Sites: Double-keyed Caching Explained


Have you ever wondered: “Why do static resources remain cached on one website, but need to be redownloaded when accessed from another site?”

This is fundamentally caused by Double-keyed Caching.

So today, let’s dive into what Double-keyed Caching is, how it works, and how we can optimize it.

What Is Double-keyed Caching?

In traditional browser caching, resources are typically stored based on their URL.

For example, when you visit https://cdn.example.com/script.js, the browser caches this script.js file. When another website references the same URL, the browser directly reuses the cached version without needing to download it again.

This traditional caching method works the way you'd expect: once a resource is cached, any site can access it.

However, this approach has a significant security risk—cross-site tracking and data leakage.

For example:

  • A website can probe whether a shared resource is already in the cache (for example, by timing how quickly it loads) to infer which other sites a user has visited, enabling cross-site tracking (e.g., for ad targeting).
  • Hackers can exploit cache poisoning attacks to serve users compromised resources.

To mitigate these security risks, many browsers (e.g., Chrome and Firefox) introduced Double-keyed Caching.

The core rule of Double-keyed Caching is: when caching a resource, the browser considers not just the URL but also the top-level site from which the resource is loaded. The cache key is therefore “top-level site + URL” rather than the URL alone.

In other words:

  • Previously, a resource cached by Site A could be reused by Site B ✅
  • Now, a resource cached by Site A must be redownloaded by Site B ❌

How Does Double-keyed Caching Work?

Double-keyed Caching: cache key = top-level site + resource URL

Let’s use an example to illustrate:

Assume you visit Page A and Page B, which belong to two different sites and both use the same CDN resource:

https://cdn.example.com/script.js

Traditional Caching (Single-keyed Caching)

  1. You load script.js on Page A, and the browser caches the file.
  2. When you visit Page B, the browser sees that it requests the same script.js, so it loads it directly from the cache (reducing network requests and improving load speed).

Double-keyed Caching

  1. You load script.js on Page A, and the browser caches it only for Page A.
  2. When you visit Page B, even though it requests the same script.js, the browser treats it as a completely new resource and requires a fresh download.

Different sites, even when requesting the same resource, must cache it separately!

This enhances security but also causes the confusion we mentioned at the start: resources cannot be shared across sites and must be downloaded multiple times.
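
To make the difference concrete, here is a simplified sketch of the two keying schemes. It only illustrates the idea; it is not how browsers actually implement their HTTP cache:

// Simplified illustration of the two cache keys (not real browser internals).
const resourceUrl = 'https://cdn.example.com/script.js';

// Single-keyed cache: the URL alone identifies the entry,
// so every site hits the same cached copy.
const singleKeyedCache = new Map();
singleKeyedCache.set(resourceUrl, 'cached response');

// Double-keyed cache: the top-level site is part of the key,
// so Site A and Site B each get their own entry for the same URL.
const doubleKeyedCache = new Map();
doubleKeyedCache.set(`https://site-a.com ${resourceUrl}`, 'cached for Site A');
doubleKeyedCache.set(`https://site-b.com ${resourceUrl}`, 'cached for Site B');

console.log(singleKeyedCache.size); // 1 entry, shared by all sites
console.log(doubleKeyedCache.size); // 2 entries, one per top-level site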

Downsides of Double-keyed Caching

While Double-keyed Caching improves security, it also introduces several problems:

  • Lower cache reuse rate: Even for identical resources, different sites must redownload them.
  • Reduced effectiveness of public CDNs: Traditionally, CDNs (such as jsDelivr and UNPKG) helped multiple sites share cached resources. Now, this advantage is greatly diminished.
  • Higher first-time access costs: When a user visits a site, even if they have a cached copy of the same resource from another site, they still need to download it again, slowing down the initial page load.

How to Optimize the Impact of Double-keyed Caching?

Now that we understand how Double-keyed Caching works and its potential downsides, let’s explore ways to optimize its impact.

Use Service Workers

Service Workers can intercept requests on the client side and leverage local cache to reduce dependence on network requests.

For example, we can use the Cache API to store and serve certain resources ourselves, so requests for them don't have to go through the partitioned HTTP cache at all:

// Intercept every fetch from pages controlled by this Service Worker.
self.addEventListener('fetch', (event) => {
  event.respondWith(
    // Serve from the Cache API when we have a match; otherwise fall back to the network.
    caches.match(event.request).then((response) => {
      return response || fetch(event.request);
    })
  );
});

Because the Cache API is scoped to your own origin and fully controlled by your code, the entries you store there are not managed by the browser's double-keyed HTTP cache. It won't let two different sites share a single download, but it does let you manage frequently used static resources yourself instead of relying entirely on HTTP caching.
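
To actually put resources into that cache, you can pre-cache them when the Service Worker is installed. A minimal sketch, assuming same-origin assets of your own (the cache name and paths below are placeholders):

const CACHE_NAME = 'static-v1'; // placeholder cache name

// Pre-populate the Cache API with frequently used static resources at install time.
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      // Placeholder paths: replace them with your own critical assets.
      cache.addAll(['/scripts/app.js', '/styles/main.css'])
    )
  );
});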

Use HTTP/3 to Reduce Redundant Requests

Due to Double-keyed Caching, even if the same user visits different websites, common CDN resources may be downloaded multiple times.

However, HTTP/3 (QUIC) can make those repeated downloads considerably cheaper: it multiplexes streams without head-of-line blocking and supports 0-RTT connection resumption, so re-fetching a resource costs far less than it would over HTTP/1.1 or HTTP/2.

How to check if your CDN supports HTTP/3?

Open Chrome DevTools, go to the Network panel, and check the Protocol column.

If it shows h3, that means the resource is using HTTP/3.
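
You can also check from the page itself with the Resource Timing API, which exposes the negotiated protocol for each resource:

// Log the negotiated protocol (e.g. 'h2', 'h3') for every resource the page loaded.
// Cross-origin entries report an empty string unless the server sends Timing-Allow-Origin.
performance.getEntriesByType('resource').forEach((entry) => {
  console.log(entry.nextHopProtocol, entry.name);
});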

Preload Critical Resources

Since we can’t rely entirely on browser caching, we can proactively preload critical resources.

For example, use <link rel="preload"> to preload fonts, scripts, or CSS:

<link
  rel="preload"
  href="https://your-cdn.com/fonts/Roboto.woff2"
  as="font"
  type="font/woff2"
  crossorigin="anonymous"
/>

This ensures that even if resources need to be redownloaded due to Double-keyed Caching, they load faster.

By applying these strategies, we can reduce the impact of Double-keyed Caching while maintaining both security and performance.

We are Leapcell, your top choice for hosting backend projects.


Leapcell is the Next-Gen Serverless Platform for Web Hosting, Async Tasks, and Redis:

Multi-Language Support

  • Develop with Node.js, Python, Go, or Rust.

Deploy unlimited projects for free

  • Pay only for usage: no requests, no charges.

Unbeatable Cost Efficiency

  • Pay-as-you-go with no idle charges.
  • Example: $25 supports 6.94M requests at a 60ms average response time.

Streamlined Developer Experience

  • Intuitive UI for effortless setup.
  • Fully automated CI/CD pipelines and GitOps integration.
  • Real-time metrics and logging for actionable insights.

Effortless Scalability and High Performance

  • Auto-scaling to handle high concurrency with ease.
  • Zero operational overhead — just focus on building.

Explore more in the Documentation!

Try Leapcell

Follow us on X: @LeapcellHQ

Read on our blog