How to Reduce Navigation Latency in Your Web App with Client-Side Caching and Service Workers

Last updated: 2026-05-17 07:22:55 · Open Source

Introduction

When working through a backlog—opening an item, jumping to a linked thread, then back to the list—latency isn't just a metric; it's a context switch. Even small delays accumulate and hit hardest when developers are trying to stay in flow. The bottleneck isn't that your app is “slow” in isolation; it's that too many navigations still pay the cost of redundant data fetching, breaking flow repeatedly. This guide walks you through the same approach used to modernize GitHub Issues navigation: shifting work to the client, optimizing perceived latency, and making navigation feel instant without a full backend rewrite.

How to Reduce Navigation Latency in Your Web App with Client-Side Caching and Service Workers
Source: github.blog

What You Need

  • Basic understanding of web performance metrics (e.g., First Contentful Paint, Time to Interactive)
  • Familiarity with JavaScript, Service Workers, IndexedDB, and the Fetch API
  • A data-heavy web application with navigation paths that repeatedly load similar data
  • Browser developer tools for profiling network and cache behavior
  • Patience for iterative testing and tradeoff analysis

Step-by-Step Guide

Step 1: Identify Navigation Bottlenecks

Before changing anything, measure the current performance of your most common navigation paths. Use the Performance tab in DevTools to record user flows—opening a detail page from a list, going back, and cross-linking. Look for repeated server roundtrips, redundant API calls, and client boot times. The key metric you'll optimize for is perceived latency: the time from user action to meaningful on-screen content. In the GitHub Issues case, the team found that navigations paid the full cost of server rendering, network fetches, and client boot even when data hadn't changed.
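As a starting point, the breakdown above can be sketched with a small helper over `PerformanceNavigationTiming` fields. The field groupings and the `navigationBreakdown` name are illustrative assumptions, not part of the original article:

```javascript
// Sketch: summarize where a navigation's time goes, using fields from a
// PerformanceNavigationTiming entry. The groupings are illustrative; adapt
// them to the phases that matter in your app.
function navigationBreakdown(entry) {
  return {
    network: entry.responseEnd - entry.startTime,                   // fetch + response
    domProcessing: entry.domContentLoadedEventEnd - entry.responseEnd,
    render: entry.loadEventEnd - entry.domContentLoadedEventEnd,
    total: entry.loadEventEnd - entry.startTime,
  };
}

// In the browser, feed real navigation entries to the helper.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes.includes('navigation')) {
  new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      console.log(navigationBreakdown(entry));
    }
  }).observe({ type: 'navigation', buffered: true });
}
```

Recording a few of these breakdowns per flow makes it obvious whether the cost is network, server rendering, or client boot.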

Step 2: Implement a Client-Side Caching Layer with IndexedDB

The core of the solution is a local cache that stores fetched data so subsequent navigations can load instantly. Use IndexedDB because it supports large amounts of structured data and survives page reloads. Build a cache interface with methods like get(key), set(key, data, ttl), and invalidate(pattern). Store each resource (issue, pull request, etc.) with its original timestamp and an expiration time. When a navigation occurs, serve data from cache first—rendering instantly—then revalidate in the background by fetching fresh data from the server and updating the cache. This “stale-while-revalidate” pattern hides loading states for any resource already in the cache: users see content immediately, and fresh data quietly replaces it if anything changed.
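A minimal sketch of that interface, with a `Map` standing in for IndexedDB so the TTL and revalidation logic are easy to follow (in production you would back `CacheLayer` with IndexedDB—directly or via a wrapper library—so entries survive reloads; the names here are illustrative):

```javascript
// In-memory sketch of the Step 2 cache interface. Swap the Map for
// IndexedDB in a real app; the get/set/invalidate contract stays the same.
class CacheLayer {
  constructor(now = () => Date.now()) {
    this.store = new Map();
    this.now = now; // injectable clock, handy for testing
  }
  set(key, data, ttlMs) {
    this.store.set(key, { data, storedAt: this.now(), expiresAt: this.now() + ttlMs });
  }
  get(key) {
    const rec = this.store.get(key);
    if (!rec) return undefined;
    return { data: rec.data, stale: this.now() > rec.expiresAt };
  }
  invalidate(pattern) {
    for (const key of this.store.keys()) {
      if (pattern.test(key)) this.store.delete(key);
    }
  }
}

// Stale-while-revalidate: render cached data immediately, then refresh in
// the background and re-render if the data was missing or expired.
async function loadResource(cache, key, fetcher, render, ttlMs = 60_000) {
  const hit = cache.get(key);
  if (hit) render(hit.data);            // instant paint from cache
  if (!hit || hit.stale) {
    const fresh = await fetcher(key);   // background revalidation
    cache.set(key, fresh, ttlMs);
    render(fresh);
  }
}
```

The injectable clock is a small design choice that makes TTL behavior deterministic in tests without waiting on real time.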

Step 3: Add a Preheating Strategy to Improve Cache Hit Rates

To minimize cache misses, preheat the cache with data likely to be needed soon. Use heuristics based on user behavior: when a user views a list page, fetch the details for the first few items in the background. In GitHub Issues, the team examined real usage patterns to predict which issues would be opened next. Implement a work queue that prioritizes preheating based on signals like viewport position, hover, or recent activity. Preheating must be efficient—avoid spamming requests by batching and throttling. The goal is to have data ready before the user clicks.
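One way to structure that work queue is below. The signal weights and batch size are made-up tuning knobs—real values should come from your own usage data, as the article suggests:

```javascript
// Sketch of a Step 3 preheat queue: deduplicates keys, accumulates priority
// from behavioral signals, and drains the top entries as one throttled batch.
const SIGNAL_WEIGHT = { viewport: 3, hover: 5, recent: 1 }; // illustrative weights

class PreheatQueue {
  constructor(fetcher, batchSize = 4) {
    this.fetcher = fetcher;       // e.g. a function that warms CacheLayer
    this.batchSize = batchSize;   // throttle: max requests per drain
    this.pending = new Map();     // key -> accumulated priority
    this.seen = new Set();        // never preheat the same key twice
  }
  request(key, signal) {
    if (this.seen.has(key)) return;
    const p = (this.pending.get(key) || 0) + (SIGNAL_WEIGHT[signal] || 1);
    this.pending.set(key, p);
  }
  // Drain the highest-priority keys as a single batch of fetches.
  async drain() {
    const batch = [...this.pending.entries()]
      .sort((a, b) => b[1] - a[1])
      .slice(0, this.batchSize)
      .map(([key]) => key);
    for (const key of batch) {
      this.pending.delete(key);
      this.seen.add(key);
    }
    await Promise.all(batch.map((key) => this.fetcher(key)));
    return batch;
  }
}
```

Calling `drain()` on an interval (or via `requestIdleCallback`) keeps preheating off the critical path while still getting data ready before the click.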

Step 4: Integrate a Service Worker for Offline and Hard Navigations

A service worker intercepts network requests and can serve cached responses even when the browser navigates to a new page (hard navigation). Register a service worker that, on install, caches essential app shell assets. On fetch, use a cache-first strategy for API endpoints that return data used in navigation. In GitHub Issues, the service worker made cached data available on hard navigations—when the user types a URL directly, clicks a back button (which may trigger a page reload), or navigates from an external link. The service worker also helps with network failures, showing cached content with a note that it may be outdated. Ensure your service worker respects cache headers and provides a way to clear stale data.
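The shape of such a worker is sketched below. The cache names, asset list, and `/api/` URL pattern are placeholders—substitute the shell assets and data endpoints your app actually uses:

```javascript
// Sketch of the Step 4 service worker: precache the app shell on install,
// then serve navigation data cache-first with a background refresh.
const SHELL_CACHE = 'app-shell-v1';   // placeholder cache names
const DATA_CACHE = 'nav-data-v1';
const SHELL_ASSETS = ['/', '/app.js', '/app.css']; // placeholder asset list

// Pure helper: decide which requests the data cache should handle.
function isNavigationData(url) {
  return new URL(url).pathname.startsWith('/api/');
}

if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    event.waitUntil(
      caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS))
    );
  });

  self.addEventListener('fetch', (event) => {
    if (!isNavigationData(event.request.url)) return;
    event.respondWith(
      caches.match(event.request).then((cached) => {
        // Cache-first: answer from cache when possible, and refresh the
        // cached copy in the background so the next hit is up to date.
        const fresh = fetch(event.request).then((resp) => {
          caches.open(DATA_CACHE).then((c) => c.put(event.request, resp.clone()));
          return resp;
        });
        return cached || fresh;
      })
    );
  });
}
```

Bumping the cache name (e.g. `app-shell-v2`) on deploy is the usual way to let a new worker discard stale entries during its activate step.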

Step 5: Measure and Optimize Perceived Performance

Use Real User Monitoring (RUM) tools to track metrics like Largest Contentful Paint (LCP) and First Input Delay (FID) for navigation paths. Compare before and after: you should see a dramatic reduction in time-to-interactive for repeat visits. In GitHub Issues, the results showed that navigations that previously took hundreds of milliseconds often felt instant. But beware of tradeoffs: client-side caching increases memory/disk usage, requires cache invalidation logic, and can serve stale data if not revalidated quickly. Monitor cache hit rates and adjust TTLs accordingly. Also, test edge cases—hard reloads, incognito mode, and multiple tabs.
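To capture perceived latency specifically—click to meaningful content, not raw network time—bracket client-side navigations with custom user-timing marks. The mark and measure names below are illustrative:

```javascript
// Sketch: user-timing marks around a client-side navigation. Call the
// first function on the triggering click and the second once meaningful
// content is on screen; ship the returned duration to your RUM backend.
function markNavigationStart(perf = globalThis.performance) {
  perf.mark('nav-click');
}

function markNavigationRendered(perf = globalThis.performance) {
  perf.mark('nav-rendered');
  const m = perf.measure('perceived-nav', 'nav-click', 'nav-rendered');
  return m.duration; // perceived latency in milliseconds
}
```

Because these marks also show up in the DevTools Performance timeline, the same instrumentation serves both local profiling and production RUM.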

Tips

  • This approach isn't free. IndexedDB reads/writes have overhead. Keep your cache schema simple and avoid storing large blobs. Use a background task to trim expired entries.
  • Preheating must be smart. Don't preheat everything—analyze user journeys and prioritize the most common next steps. Use probabilistic models if possible.
  • Service workers are powerful but tricky. Test thoroughly across browsers and network conditions, and give users an explicit control to clear cached data when they encounter stale content.
  • Combine with backend improvements for best results. Client-side caching won't fix a slow API; it only masks it for repeat requests. Optimize your endpoints as well.
  • Measure the right thing: perceived latency, not just network time. Use tools like PerformanceNavigationTiming and custom user timing marks.
  • Iterate based on real usage. Deploy to a subset of users first and compare behavior. The patterns that worked for GitHub Issues may need adaptation for your app.

By following these steps, you can transform a data-heavy web app from feeling “slow” to feeling “instant” for the most common navigation paths—without a full rewrite. The principles are directly transferable: shift work to the client, render from local data, and revalidate asynchronously. Your users will thank you with fewer context switches and more productive flow.