Introduction
When you navigate through a web application, every delay breaks your mental flow. Developers working in GitHub Issues know this all too well: opening a ticket, jumping to a linked comment, then returning to the list often incurred redundant data fetches, and the resulting context switches made the whole experience feel sluggish. The solution isn't making individual backend calls faster; it's redesigning the loading sequence so navigation feels instant. This guide walks through the approach used to modernize GitHub Issues navigation performance, step by step. You'll learn how to shift rendering to the client, cache data locally, and use a service worker to eliminate perceived latency. By the end, you'll have a replicable model for your own data-heavy web app.

What You Need
- Familiarity with client-side JavaScript (ES6+).
- Understanding of browser storage (especially IndexedDB).
- Knowledge of Service Worker API and how to register it.
- A web application with frequent navigations (e.g., a listing page to detail pages and back).
- Basic performance measurement tools (e.g., browser DevTools, Lighthouse, or custom timing).
- A backend API that returns data in a structured format (JSON).

Step-by-Step Guide
Step 1: Shift Rendering to the Client
The first step is to stop relying on server-rendered pages for every navigation. Instead, render pages on the client using locally available data. This eliminates the round trip to the server for common paths. Implementation tip: Use a JavaScript router (e.g., React Router or a custom hash-based router) to intercept navigation events. When a user clicks an issue, the client immediately renders content from a local cache while the real data is fetched in the background. This makes the page appear instant—the user sees something right away, even if it's slightly stale. You'll need to structure your front-end components to accept both cached and fresh data seamlessly.
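The cached-first, revalidate-later flow above can be sketched as a small helper. All names here (`renderWithCache`, the `cache`/`fetchFn`/`render` parameters) are illustrative, not GitHub's actual code:

```javascript
// Sketch of a stale-while-revalidate render helper (illustrative names).
// `cache` is any async key/value store, `fetchFn` hits the real API, and
// `render` paints the UI; it may run twice: cached data first, fresh data later.
async function renderWithCache(key, { cache, fetchFn, render }) {
  const cached = await cache.get(key);
  if (cached !== undefined) {
    render(cached, { stale: true }); // instant paint from local data
  }
  try {
    const fresh = await fetchFn(key); // background revalidation
    await cache.set(key, fresh);
    render(fresh, { stale: false }); // replace stale content when fresh arrives
    return fresh;
  } catch (err) {
    if (cached !== undefined) return cached; // keep showing stale data on fetch failure
    throw err;
  }
}
```

A router's navigation handler would call this with the issue ID as the key, so the detail view appears immediately from cache while the network request proceeds.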
Step 2: Build a Client-Side Caching Layer Backed by IndexedDB
To support instant rendering, you need a reliable local cache. IndexedDB is ideal because it can store large amounts of structured data, including the full responses from your API. How to implement: Create a caching module that intercepts API calls. On success, store the response with a key (e.g., issue ID) and a timestamp. Before any new fetch, check the cache first. If the data is present and not too old (you decide the freshness threshold), use it immediately and then trigger a background revalidation. Use transactions and proper error handling to avoid blocking the main thread. This layer becomes the source of truth for instant renders.
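A minimal sketch of such a layer follows, assuming a database named `issue-cache` with a `responses` object store (both names are illustrative). The pure `isFresh` helper encodes the freshness threshold; the IndexedDB wrappers show the transaction pattern:

```javascript
// Sketch of an IndexedDB-backed cache module (all names are illustrative).
const MAX_AGE_MS = 5 * 60 * 1000; // freshness threshold: tune for your app

// Pure helper: is a cached entry still fresh enough to render immediately?
function isFresh(entry, now = Date.now(), maxAge = MAX_AGE_MS) {
  return !!entry && typeof entry.storedAt === 'number' && now - entry.storedAt < maxAge;
}

// Open (and lazily create) the database.
function openDb() {
  return new Promise((resolve, reject) => {
    const req = indexedDB.open('issue-cache', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('responses');
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}

// Store an API response with a timestamp, keyed by e.g. issue ID.
async function cachePut(key, data) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const tx = db.transaction('responses', 'readwrite');
    tx.objectStore('responses').put({ data, storedAt: Date.now() }, key);
    tx.oncomplete = () => resolve();
    tx.onerror = () => reject(tx.error);
  });
}

// Read an entry back; callers check isFresh() before rendering from it.
async function cacheGet(key) {
  const db = await openDb();
  return new Promise((resolve, reject) => {
    const req = db.transaction('responses', 'readonly')
      .objectStore('responses').get(key);
    req.onsuccess = () => resolve(req.result);
    req.onerror = () => reject(req.error);
  });
}
```

The fetch interceptor would try `cacheGet` first, render immediately when `isFresh` passes, and always call `cachePut` after a successful network response.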
Step 3: Implement a Preheating Strategy to Improve Cache Hit Rates
A cache is only useful if it contains the data users actually need next. Preheating predicts what the user will request and fetches it before they click. How to implement: Monitor navigation patterns. When a user is on an issue list, pre-fetch details of the most likely next issue (like the one they're hovering over or the top item). You can also preheat after a successful fetch: if an issue mentions several linked issues, fetch those in the background as well. Use a priority queue to avoid flooding the network. Over time, this improves cache hit rates significantly without adding noticeable overhead, turning idle moments into productive loading time.
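The priority-queue idea can be sketched as a small best-effort scheduler (class and parameter names are illustrative). Higher priority means "more likely next click": a hovered row might outrank the top of the list, which outranks linked issues:

```javascript
// Sketch of a bounded prefetch queue for preheating (illustrative names).
class PrefetchQueue {
  constructor(fetchFn, maxInFlight = 2) {
    this.fetchFn = fetchFn;       // warms the cache as a side effect
    this.maxInFlight = maxInFlight; // concurrency cap to avoid flooding the network
    this.inFlight = 0;
    this.pending = [];            // [{ key, priority }]
    this.seen = new Set();        // avoid duplicate prefetches
  }

  enqueue(key, priority = 0) {
    if (this.seen.has(key)) return;
    this.seen.add(key);
    this.pending.push({ key, priority });
    this.pending.sort((a, b) => b.priority - a.priority); // highest priority first
    this.drain();
  }

  drain() {
    while (this.inFlight < this.maxInFlight && this.pending.length > 0) {
      const { key } = this.pending.shift();
      this.inFlight++;
      // Failures are swallowed: prefetching is best-effort by design.
      Promise.resolve(this.fetchFn(key)).catch(() => {}).finally(() => {
        this.inFlight--;
        this.drain();
      });
    }
  }
}
```

Hover handlers and post-fetch link discovery would both feed the same queue, so one concurrency cap governs all speculative traffic.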
Step 4: Introduce a Service Worker for Hard Navigations and Offline Support
Even with client-side caching, hard navigations (like full page reloads) would bypass the cache and re-fetch everything. A service worker can intercept these requests and serve cached data directly, making even cold starts feel fast. Implementation steps: Register a service worker in your website’s root. In the install and activate events, cache essential resources (like the app shell). In the fetch event, check if the URL matches an API endpoint you cache. If yes, respond with cached data if available; otherwise, fetch from network. For navigations to issue detail pages, the service worker can serve the cached HTML or JSON, and the client-side code then hydrates the page. This ensures that even when the user presses reload, they get instant feedback from local data while the app updates in the background.
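A sketch of the worker-side fetch handling, using a stale-while-revalidate strategy, is below. The route prefix `/api/issues/` and the cache name are assumptions for illustration; the page would register the worker separately with `navigator.serviceWorker.register('/sw.js')`:

```javascript
// Sketch of a service-worker fetch strategy (names are illustrative).
const API_CACHE = 'issue-api-v1';

// Which requests the worker should answer from cache (adjust to your routes).
function shouldHandle(url) {
  const { pathname } = new URL(url);
  return pathname.startsWith('/api/issues/');
}

// Respond instantly from cache when possible; refresh the cache in the background.
async function staleWhileRevalidate(request) {
  const cache = await caches.open(API_CACHE);
  const cached = await cache.match(request);
  const network = fetch(request).then((response) => {
    if (response.ok) cache.put(request, response.clone()); // background refresh
    return response;
  });
  return cached || network; // instant on a warm cache, network on a cold miss
}

// Register the handler only inside a real service-worker context.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('fetch', (event) => {
    if (shouldHandle(event.request.url)) {
      event.respondWith(staleWhileRevalidate(event.request));
    }
  });
}
```

With this in place, even a full reload of an issue page gets its data from `caches` without waiting on the network, and the client-side code hydrates from the fresh response when it lands.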

Step 5: Measure and Optimize Perceived Latency
Finally, you need to verify that your changes actually improve the user experience. Instead of focusing solely on traditional metrics like Time to First Byte or Total Page Load Time, measure perceived latency—the time between user action and the moment they see meaningful content. What to track: Use the performance API to mark moments like “user clicked”, “first paint”, and “content rendered from cache”. Compare these to a control group. Also monitor cache hit rates and revalidation staleness. Tools like Lighthouse User Flow or custom RUM (Real User Monitoring) can give you accurate data. Optimize further by tuning cache expiration, preheating logic, and service worker strategies based on real-world usage. The goal is to make the 99th percentile feel as fast as the median.
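The marking approach can be sketched with the Performance API, together with a percentile helper for comparing the median against the 99th percentile (function names and mark formats are illustrative):

```javascript
// Sketch of perceived-latency instrumentation (illustrative names).
// Mark the moment the user acts.
function markClick(id) {
  performance.mark(`click:${id}`);
}

// Mark the moment meaningful content appears, tagged by its source
// ('cache' vs 'network'), and return the perceived duration in ms.
function markRendered(id, source) {
  performance.mark(`rendered:${id}`);
  const name = `perceived:${source}:${id}`;
  performance.measure(name, `click:${id}`, `rendered:${id}`);
  const entries = performance.getEntriesByName(name);
  return entries[entries.length - 1].duration;
}

// Pure helper: nearest-rank percentile over collected durations.
function percentile(durations, p) {
  if (durations.length === 0) return NaN;
  const sorted = [...durations].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.min(sorted.length - 1, Math.max(0, rank))];
}
```

Shipping the measured durations to your RUM pipeline lets you track `percentile(durations, 50)` against `percentile(durations, 99)` over time, which is exactly the gap the optimizations in Steps 1 through 4 should close.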

Tips for Success
- Test with real user patterns: Simulate actual workflows (opening an issue, going back, jumping to a linked thread) rather than just synthetic page loads. This reveals where latency really hurts.
- Handle stale data gracefully: Always show a loading indicator or “refreshing” state when data is being revalidated. Users prefer stale but instant over blank and waiting.
- Be mindful of storage limits: IndexedDB quotas vary by browser, and data may be evicted under storage pressure. Implement eviction policies (e.g., least recently used) and avoid caching overly large payloads.
- Prioritize critical navigations: Not every page needs to be instant. Focus on the most common paths—like list-to-detail—and gradually expand.
- Monitor both client and server: Client-side optimizations can mask server issues. Keep an eye on backend response times to ensure revalidation is fast enough.
- Consider accessibility: Instant rendering can confuse screen readers if changes happen without announcement. Use ARIA live regions or focus management to signal updates.
- Iterate based on feedback: Roll out changes gradually using A/B testing. Let real metrics guide your next steps—what feels fast to you may not be fast for users on slower devices or networks.
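The least-recently-used eviction tip above can be sketched as a pure selection function over a cache index (a simplified model; in practice the `lastUsed` timestamps would be persisted alongside each IndexedDB entry and refreshed on every read):

```javascript
// Sketch of LRU eviction selection (illustrative; not a full cache manager).
// entries: [{ key, lastUsed }]; returns the keys to delete, oldest first,
// so the cache shrinks back down to maxEntries.
function selectEvictions(entries, maxEntries) {
  if (entries.length <= maxEntries) return [];
  return [...entries]
    .sort((a, b) => a.lastUsed - b.lastUsed) // least recently used first
    .slice(0, entries.length - maxEntries)
    .map((entry) => entry.key);
}
```

Running this periodically (or after each write) and deleting the returned keys keeps the cache within a predictable footprint before the browser's own eviction kicks in.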
By following these steps, you can transform a data-heavy web app from “loads in a second” to “feels instant.” The same patterns that modernized GitHub Issues are directly applicable to your own projects. Start small, measure impact, and watch your users' flow state improve.