Practical Memoization Patterns in JavaScript

You’ve profiled your app and found a function running thousands of times with identical inputs. Memoization seems like the obvious fix. But before you wrap everything in a cache, you should know: memoization done wrong creates bugs that are harder to find than the performance problems you started with.

This article covers practical JavaScript memoization patterns, common memoization pitfalls JavaScript developers encounter, and how to apply these techniques safely in production code—including async memoization and React useMemo best practices.

Key Takeaways

  • Memoization caches function results based on arguments, trading memory for speed—but only works reliably with pure functions and primitive arguments.
  • Object references cause stale cache hits when data changes; use libraries like fast-memoize for custom keying or stick to primitives.
  • Unbounded caches leak memory in long-running apps; implement LRU eviction, TTL expiration, or scope caches to component lifecycles.
  • Async memoization requires caching promises immediately and deleting failed entries to prevent duplicate requests and enable retries.
  • React’s useMemo is a targeted optimization, not a default—profile first and only apply it when computations are measurably slow.

What Memoization Actually Does

Memoization caches function results based on their arguments. Call the function again with the same inputs, and you get the cached result instead of recomputing.

JavaScript has no built-in memoization. TC39 has discussed proposals (for example, Function.prototype.memo), but none has reached production readiness. You’ll implement this yourself or use a library.

Here’s a basic pattern for single-argument functions:

function memoize(fn) {
  const cache = new Map()
  return (...args) => {
    const key = args[0]
    if (cache.has(key)) return cache.get(key)
    const result = fn(...args)
    cache.set(key, result)
    return result
  }
}

This works for primitive arguments. It breaks in subtle ways for everything else.
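To see the cache at work, a call counter makes hits visible (the memoize helper from above is repeated here so the snippet runs standalone):

```javascript
// Same single-argument memoize helper as above, repeated so this runs standalone
function memoize(fn) {
  const cache = new Map()
  return (...args) => {
    const key = args[0]
    if (cache.has(key)) return cache.get(key)
    const result = fn(...args)
    cache.set(key, result)
    return result
  }
}

let computations = 0
const square = memoize((n) => {
  computations++
  return n * n
})

square(4) // computes: 16
square(4) // cache hit, no recomputation
square(5) // computes: 25
console.log(computations) // 2
```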

The Object Reference Problem

Objects are cached by reference, not by value. This trips up developers constantly:

const memoizedFn = memoize(processData)

const config = { threshold: 10 }
memoizedFn(config) // Computes
config.threshold = 20
memoizedFn(config) // Returns stale cached result

Same reference means cache hit, even though the data changed.

Some developers try to fix this by using JSON.stringify(args) as a cache key. That can work for simple data, but it fails on circular references, drops functions and symbols, and can be slow for large objects.

The fix: Only memoize functions with primitive arguments, or use a library like fast-memoize that supports custom key resolvers/serializers for more complex cases.
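A sketch of the key-resolver approach: the caller supplies a function that derives a cache key from the arguments. (fast-memoize offers a similar serializer option; `memoizeWithKey` here is an illustrative name, not that library’s API.)

```javascript
// Hypothetical helper: cache entries are keyed by whatever keyFn returns
function memoizeWithKey(fn, keyFn) {
  const cache = new Map()
  return (...args) => {
    const key = keyFn(...args)
    if (cache.has(key)) return cache.get(key)
    const result = fn(...args)
    cache.set(key, result)
    return result
  }
}

let runs = 0
const process = memoizeWithKey(
  (config) => { runs++; return config.threshold * 2 },
  (config) => String(config.threshold) // key on the field that matters, not the reference
)

process({ threshold: 10 }) // computes: 20
process({ threshold: 10 }) // cache hit, even though this is a new object
console.log(runs) // 1
```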

When Memoization Causes Problems

Impure Functions

Memoizing impure functions creates impossible-to-debug issues:

// Never memoize this for long-term caching
const getData = memoize(() => {
  return fetch('/api/data').then(r => r.json())
})

The first call caches the promise. Every subsequent call returns that same promise—even if the server data changed.

Async memoization is only safe when you intentionally want to deduplicate concurrent requests, or when you also implement invalidation or TTL-based expiration (covered below).

Unbounded Cache Growth

Without cache eviction strategies, your cache grows forever:

// Memory leak waiting to happen
const processUserInput = memoize((input) => expensiveOperation(input))

Every unique input adds to the cache. In a long-running app, this leaks memory.

Solutions:

  • Set a maximum cache size (LRU eviction)
  • Add TTL (time-to-live) expiration
  • Scope caches to component lifecycles

A size-limited version that evicts the least recently used entry (Maps preserve insertion order, so re-inserting a key on each hit keeps the oldest key first):

function memoizeWithLimit(fn, maxSize = 100) {
  const cache = new Map()
  return (...args) => {
    const key = args[0]
    if (cache.has(key)) {
      // Re-insert so this key becomes the most recently used
      const value = cache.get(key)
      cache.delete(key)
      cache.set(key, value)
      return value
    }
    if (cache.size >= maxSize) {
      // Evict the least recently used entry (the first key in insertion order)
      const oldestKey = cache.keys().next().value
      cache.delete(oldestKey)
    }
    const result = fn(...args)
    cache.set(key, result)
    return result
  }
}
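TTL expiration layers onto the same pattern by storing an expiry timestamp next to each value; a minimal sketch:

```javascript
// Minimal TTL cache sketch: entries expire ttlMs after being stored
function memoizeWithTTL(fn, ttlMs = 60000) {
  const cache = new Map()
  return (...args) => {
    const key = args[0]
    const entry = cache.get(key)
    if (entry && Date.now() < entry.expires) return entry.value
    const value = fn(...args)
    cache.set(key, { value, expires: Date.now() + ttlMs })
    return value
  }
}
```

Expired entries are only replaced when their key is requested again; a production version would also sweep stale keys periodically so they don’t linger in memory.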

Async Memoization Done Right

Async memoization needs special handling for concurrent calls and failures:

function memoizeAsync(fn) {
  const cache = new Map()
  return async (...args) => {
    const key = args[0]
    if (cache.has(key)) return cache.get(key)
    
    const promise = fn(...args).catch(err => {
      cache.delete(key) // Don't cache failures
      throw err
    })
    
    cache.set(key, promise)
    return promise
  }
}

Cache the promise immediately. This prevents duplicate concurrent requests. Delete on failure so retries work.
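A quick check that deduplication works: concurrent calls with the same key share one in-flight promise, so the underlying function runs once. (This standalone version makes the wrapper non-async so both calls return the identical promise object; behavior is otherwise the same as above.)

```javascript
// Non-async wrapper: the cached promise itself is returned on a hit
function memoizeAsync(fn) {
  const cache = new Map()
  return (...args) => {
    const key = args[0]
    if (cache.has(key)) return cache.get(key)
    const promise = fn(...args).catch(err => {
      cache.delete(key) // don't cache failures
      throw err
    })
    cache.set(key, promise)
    return promise
  }
}

let requests = 0
const fetchUser = memoizeAsync(async (id) => {
  requests++ // runs synchronously, before any await
  return { id }
})

const p1 = fetchUser(1)
const p2 = fetchUser(1) // cache hit: the same in-flight promise
console.log(p1 === p2, requests) // true 1
```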

React useMemo Best Practices

useMemo and React.memo are targeted optimizations, not defaults. They add complexity and can hurt performance when misused (see the official React docs: https://react.dev/reference/react/useMemo).

Use useMemo when:

  • Computing derived data from props/state is measurably slow
  • You’re passing objects to memoized children

Skip useMemo when:

  • The computation is trivial
  • You haven’t measured a performance problem

// Probably unnecessary
const doubled = useMemo(() => value * 2, [value])

// Potentially useful
const sortedItems = useMemo(
  () => items.slice().sort((a, b) => a.name.localeCompare(b.name)),
  [items]
)

React’s dependency comparison uses Object.is—reference equality. New object literals break memoization every render.
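The comparison is easy to see outside React:

```javascript
// Object.is compares references, not contents
const a = { id: 1 }
const b = { id: 1 }
console.log(Object.is(a, b)) // false: same shape, different references
console.log(Object.is(a, a)) // true

// This is why an object literal created inline on every render
// never compares equal to the previous render's version
```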

The Decision Framework

Before memoizing, ask:

  1. Is the function pure? No side effects, same inputs always produce same outputs.
  2. Is it actually slow? Profile first. Don’t guess.
  3. Are arguments primitive or stable references? Object arguments need careful handling.
  4. What’s the cache lifetime? Unbounded caches leak memory.

Conclusion

Memoization trades memory for speed. Make sure you’re getting a good deal. Start by profiling to confirm a real performance problem exists, then apply memoization selectively to pure functions with predictable arguments. Implement cache limits to prevent memory leaks, handle async operations carefully, and remember that React’s useMemo is an optimization tool—not a default pattern. When done right, memoization eliminates redundant computation. When done wrong, it introduces subtle bugs that outlast the performance gains.

FAQs

Why is memoizing API calls risky?

Memoizing API calls is risky because server data changes over time. If you cache the promise, subsequent calls return stale data. Only memoize API calls when you explicitly want to deduplicate concurrent requests and you implement cache invalidation or TTL expiration to refresh data periodically.

Why does a memoized function return stale results with object arguments?

Objects are compared by reference, not value. If you mutate an object and call the memoized function again, it returns the cached result because the reference is unchanged. Use immutable data patterns, create new objects instead of mutating, or serialize arguments with JSON.stringify for simple cases.

How do I know whether memoization is actually helping?

Profile before and after using browser DevTools or Node.js profiling tools. Measure execution time and memory usage. If the function runs infrequently or computes quickly, memoization overhead may exceed the savings. Cache hit rate matters too—low hit rates mean wasted memory with minimal benefit.
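A rough sketch of before-and-after measurement with performance.now(); fib here is a stand-in for any expensive pure function:

```javascript
// Rough timing sketch; fib is a stand-in for any expensive pure function
function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2)
}

function memoize(fn) {
  const cache = new Map()
  return (n) => {
    if (cache.has(n)) return cache.get(n)
    const result = fn(n)
    cache.set(n, result)
    return result
  }
}

const memoFib = memoize(fib)

let t0 = performance.now()
fib(30)
const plainMs = performance.now() - t0

memoFib(30) // warm the cache
t0 = performance.now()
memoFib(30) // cache hit
const cachedMs = performance.now() - t0

console.log({ plainMs, cachedMs }) // the cached call should be far faster
```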

Should I wrap every calculation in useMemo?

No. useMemo adds overhead for dependency tracking and comparison. For simple calculations like basic math or string concatenation, the memoization cost exceeds the computation cost. Reserve useMemo for expensive operations like sorting large arrays, complex filtering, or creating objects passed to memoized child components.
