Pattern · Difficulty: Advanced

Memoization

Cache expensive computation results to avoid recalculating with identical inputs.

By Den Odell

Problem

Your component filters 10,000 products, and every time the parent re-renders for any reason—even if the products array hasn’t changed—you run that expensive filter again, causing lag when users type and scroll stutters when they navigate.

I’ve watched performance profiles where a simple sort ran 47 times during a single user interaction because unrelated state changes triggered re-renders, producing the same sorted result 47 times.

The problem extends to callbacks too: every inline onClick={() => handleClick(id)} creates a new function reference, triggering re-renders in any memoized children that receive it. Your carefully optimized virtualized list just re-rendered every visible item because of a function that does exactly the same thing as before.

Solution

Cache the results of expensive computations so that when inputs haven’t changed, you return the cached value instead of recalculating.

In React, useMemo caches computed values until dependencies change. Vue’s computed properties handle caching automatically. Svelte’s reactive statements do the same through its compiler. For callbacks passed to children, use useCallback to create stable references that don’t trigger unnecessary re-renders.

My advice is to avoid memoizing everything—memoization has overhead from dependency checking and cache management, which can exceed the cost of simply recalculating cheap operations. Profile first to confirm you have an actual performance problem. That sort on 10,000 items is a great candidate; adding two numbers should just be calculated directly.

Example

The concept is the same across frameworks: check if inputs changed, return cached result if they didn’t.
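Stripped of framework specifics, that check can be sketched as a small wrapper (a hypothetical memoizeLast helper, not part of any framework API) that remembers the last arguments and result:

```javascript
// Remember only the most recent call, roughly what a single useMemo
// hook does: recompute only when an argument reference changes.
function memoizeLast(fn) {
  let lastArgs = null;
  let lastResult;
  return (...args) => {
    const unchanged =
      lastArgs !== null &&
      lastArgs.length === args.length &&
      lastArgs.every((arg, i) => arg === args[i]);
    if (!unchanged) {
      lastResult = fn(...args);
      lastArgs = args;
    }
    return lastResult;
  };
}

// Repeated calls with the same references return the cached array
const filterByCategory = memoizeLast((products, category) =>
  products.filter(p => p.category === category)
);
```

Note that the comparison is reference equality (===), which is also why React's dependency arrays need stable references to be effective.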

Basic Memoization

function ProductList({ products, category, sortBy }) {
  // Only recomputes when products or category change
  const filteredProducts = useMemo(() => {
    return products.filter(p => p.category === category);
  }, [products, category]);

  // Chain memoizations for multi-step transformations
  const sortedProducts = useMemo(() => {
    return [...filteredProducts].sort((a, b) =>
      a[sortBy] > b[sortBy] ? 1 : a[sortBy] < b[sortBy] ? -1 : 0
    );
  }, [filteredProducts, sortBy]);

  return (
    <ul>
      {sortedProducts.map(p => <li key={p.id}>{p.name}</li>)}
    </ul>
  );
}

Callback Memoization

When passing callbacks to memoized children, stable references prevent unnecessary re-renders:

function TodoList({ todos, onToggle }) {
  // Stable reference prevents child re-renders
  const handleToggle = useCallback((id) => {
    onToggle(id);
  }, [onToggle]);

  return (
    <ul>
      {todos.map(todo => (
        <TodoItem key={todo.id} todo={todo} onToggle={handleToggle} />
      ))}
    </ul>
  );
}

// React.memo skips re-render if props unchanged
const TodoItem = React.memo(({ todo, onToggle }) => (
  <li onClick={() => onToggle(todo.id)}>{todo.text}</li>
));

Component Memoization

Wrapping entire components prevents re-renders when their props haven’t changed:

// Wrap component to prevent re-renders when props unchanged
const ExpensiveComponent = React.memo(({ data }) => {
  const processedData = data.map(item => ({
    ...item,
    computed: expensiveCalculation(item)
  }));

  return processedData.map(item => <div key={item.id}>{item.computed}</div>);
});

// Custom comparison for complex props
const CustomCompare = React.memo(
  ({ data, config }) => { /* ... */ },
  (prev, next) => prev.data.length === next.data.length && prev.config.id === next.config.id
);

LRU Cache for Limited Memory

For unbounded input spaces, an LRU cache evicts the least recently used entries to prevent memory bloat:

class LRUCache {
  constructor(maxSize = 100) {
    this.cache = new Map();
    this.maxSize = maxSize;
  }

  get(key) {
    if (!this.cache.has(key)) return undefined;
    const value = this.cache.get(key);
    this.cache.delete(key);
    this.cache.set(key, value); // Move to end (most recent)
    return value;
  }

  set(key, value) {
    if (this.cache.has(key)) this.cache.delete(key);
    this.cache.set(key, value);
    if (this.cache.size > this.maxSize) {
      this.cache.delete(this.cache.keys().next().value); // Evict oldest
    }
  }
}
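To see how this plugs into memoization, here is an illustrative memoizeLRU wrapper (a sketch, not a library function) that keys cached results by serialized arguments and caps the entry count:

```javascript
// Memoization with LRU eviction: results keyed by JSON-stringified
// arguments, least recently used entry evicted past maxSize.
// (Serializing arguments is a simplification that only suits small,
// JSON-safe inputs.)
function memoizeLRU(fn, maxSize = 100) {
  const cache = new Map();
  return (...args) => {
    const key = JSON.stringify(args);
    if (cache.has(key)) {
      const value = cache.get(key);
      cache.delete(key);
      cache.set(key, value); // move to end (most recent)
      return value;
    }
    const value = fn(...args);
    cache.set(key, value);
    if (cache.size > maxSize) {
      cache.delete(cache.keys().next().value); // evict oldest
    }
    return value;
  };
}
```

This works because a JavaScript Map iterates in insertion order, so the first key is always the least recently used.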

Selector Pattern

Redux selectors with reselect create memoized derived state that only recomputes when inputs change:

import { createSelector } from 'reselect';

const getProducts = state => state.products;
const getCategory = state => state.category;

// Memoized selector - only recomputes when inputs change
const getFilteredProducts = createSelector(
  [getProducts, getCategory],
  (products, category) => products.filter(p => p.category === category)
);

// Chained selectors
const getSortedProducts = createSelector(
  [getFilteredProducts, state => state.sortBy],
  (filtered, sortBy) => [...filtered].sort((a, b) => a[sortBy] > b[sortBy] ? 1 : a[sortBy] < b[sortBy] ? -1 : 0)
);
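Under the hood, createSelector amounts to the same last-call caching. A simplified sketch (not the real reselect implementation) looks like this:

```javascript
// Run the input selectors against state, and only call the result
// function when at least one input changed (by reference equality).
function createSelectorSketch(inputSelectors, resultFn) {
  let lastInputs = null;
  let lastResult;
  return state => {
    const inputs = inputSelectors.map(sel => sel(state));
    const changed =
      lastInputs === null ||
      inputs.some((value, i) => value !== lastInputs[i]);
    if (changed) {
      lastResult = resultFn(...inputs);
      lastInputs = inputs;
    }
    return lastResult;
  };
}
```

Because only the extracted inputs are compared, the selector can receive a brand-new state object on every dispatch and still return the cached result, as long as the slices it reads are unchanged.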

Fibonacci with Memoization

The classic example of memoization turning exponential time into linear time:

// Exponential time -> linear time
const fibonacci = (() => {
  const cache = {};
  return function fib(n) {
    if (n in cache) return cache[n];
    if (n <= 1) return n;
    return cache[n] = fib(n - 1) + fib(n - 2);
  };
})();

fibonacci(50); // Instant with memoization

Benefits

  • Expensive operations only run when data actually changes—that sort on 10,000 items stops running on every render.
  • Visible lag disappears from search boxes, scroll interactions, and other frequently updated UI elements.
  • Stable callback references prevent cascading re-renders in memoized child components.
  • Complex derived state becomes practical since aggregates and transformations are cached automatically.
  • Battery life improves on mobile devices due to fewer wasted CPU cycles.

Tradeoffs

  • Memoization trades memory for CPU time, because every cached value lives in memory until its dependencies change or the component unmounts.
  • For cheap operations, the memoization overhead can actually hurt performance more than it helps, because comparing dependencies and managing the cache takes time that might exceed the cost of simply recalculating.
  • Dependency arrays in React are easy to get wrong: forgetting a dependency results in stale cached values, while including an unstable reference means you get no effective caching at all.
  • Shallow comparison has inherent limitations and can’t detect nested changes, so if user.name changes but the user object reference stays the same, memoization will miss the update entirely.
  • Debugging memoized code is different—your breakpoints only hit sometimes because cache hits skip the function body and return the cached value directly.
  • Custom comparison functions in React.memo are notoriously bug-prone, and getting them wrong means you either re-render too much and waste performance, or re-render too little and display stale data.
  • It’s easy to over-memoize by wrapping everything in useMemo “just in case,” adding complexity and overhead without helping performance.
  • Only pure functions should be memoized, because if your function has side effects, those effects will either be skipped when the cache is hit or duplicated when dependencies change unexpectedly, both of which lead to subtle bugs.
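The shallow-comparison pitfall in particular is easy to reproduce with a minimal one-argument memoizer (a hypothetical memoizeOne, shown only to illustrate the tradeoff):

```javascript
// Caches the last result, keyed by reference equality on the argument
function memoizeOne(fn) {
  let called = false;
  let lastArg;
  let lastResult;
  return arg => {
    if (!called || arg !== lastArg) {
      lastResult = fn(arg);
      lastArg = arg;
      called = true;
    }
    return lastResult;
  };
}

const getGreeting = memoizeOne(user => `Hello, ${user.name}`);

const user = { name: 'Ada' };
getGreeting(user);        // computes "Hello, Ada"
user.name = 'Grace';      // in-place mutation: same reference
getGreeting(user);        // still "Hello, Ada" (stale)
getGreeting({ ...user }); // new reference recomputes: "Hello, Grace"
```

This is why immutable updates (spreading changes into a new object) pair naturally with memoization: they guarantee a changed value means a changed reference.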

Summary

Memoization caches expensive computation results so repeated calls with the same inputs return instantly from cache instead of recalculating. Use it for computationally heavy operations that get called frequently with the same arguments, but profile first to confirm you actually have a performance problem. The overhead of caching isn’t free, and premature memoization often costs more than it saves.
