Frontend Pattern

Retry

Re-attempt failed requests automatically with exponential backoff strategies.

Difficulty Advanced

By Den Odell


Problem

Temporary network blips, momentary server timeouts, and brief service disruptions show users error messages immediately, even though the exact same request would succeed if tried again seconds later. Operations that could succeed on retry instead force users to refresh the page or click "try again" manually, creating friction and abandonment. Transient failures get treated as permanent errors, degrading the experience during ordinary internet hiccups like brief WiFi dropouts, DNS resolution delays, or server restarts. Mobile users on unreliable connections see constant errors instead of a seamless experience, and rate-limited API calls fail permanently instead of waiting and retrying after the limit resets.

On the backend, database connection pool exhaustion causes requests to fail even though connections will free up moments later, while load balancer failovers, container restarts, and deployment rollouts cause brief unavailability that looks permanent to users. Retryable errors like 503 Service Unavailable, 429 Too Many Requests, and connection timeouts aren't distinguished from permanent errors like 404 Not Found or 401 Unauthorized, so every failure feels catastrophic regardless of whether a simple retry would resolve it.

Solution

Automatically reattempt failed requests with exponential backoff so transient errors resolve without user intervention:

  • Identify which errors are retryable (timeouts, 5xx server errors, network failures) versus permanent (4xx client errors except 429, authentication failures).
  • Use exponential backoff, doubling the wait between retries (1s, 2s, 4s, 8s), to avoid overwhelming struggling servers while still retrying promptly.
  • Add jitter (randomization) to backoff delays to prevent a thundering herd of clients all retrying simultaneously.
  • Set a maximum retry count to prevent infinite loops on persistent failures.
  • Implement a circuit breaker that stops retrying after too many consecutive failures to avoid wasting resources.
  • Provide feedback during retries so users know the application is working rather than frozen.

Together these measures make applications resilient to temporary network issues, server hiccups, and transient service disruptions without requiring manual intervention.
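The backoff arithmetic and the retryable/non-retryable split can be factored into small pure functions before wiring them into fetch calls. This is a sketch with illustrative defaults; the status list and constants here are common choices, not values prescribed by any standard.

```javascript
// Statuses generally safe to retry: timeouts, rate limits, and 5xx errors
const RETRYABLE_STATUSES = new Set([408, 429, 500, 502, 503, 504]);

function isRetryable(status) {
  return RETRYABLE_STATUSES.has(status);
}

// Delay for a zero-based attempt: base * 2^attempt, capped at max,
// with up to jitterRatio of the delay added or subtracted at random
function backoffDelay(attempt, { base = 1000, max = 30000, jitterRatio = 0.25 } = {}) {
  const exponential = Math.min(base * Math.pow(2, attempt), max);
  const jitter = exponential * jitterRatio * (Math.random() * 2 - 1);
  return Math.max(0, exponential + jitter);
}
```

Keeping these pure makes them trivial to unit test in isolation, independent of any network layer.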

Example

This example demonstrates automatic retry logic with exponential backoff, waiting progressively longer between each retry attempt to avoid overwhelming struggling servers.

Basic Exponential Backoff

async function fetchWithRetry(url, maxRetries = 3) {
  let lastError;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url);
      
      // Success - return result
      if (response.ok) {
        return await response.json();
      }
      
      // Retryable statuses: 5xx server errors and 429 rate limits
      if (response.status >= 500 || response.status === 429) {
        throw new Error(`Server error: ${response.status}`);
      }
      
      // Non-retryable error (4xx except 429) - mark it so we fail fast
      const clientError = new Error(`Client error: ${response.status}`);
      clientError.retryable = false;
      throw clientError;
      
    } catch (error) {
      lastError = error;
      
      // Fail immediately on non-retryable errors or the last attempt
      if (error.retryable === false || attempt === maxRetries - 1) {
        throw error;
      }
      
      // Exponential backoff: 1s, 2s, 4s, 8s...
      const delay = Math.pow(2, attempt) * 1000;
      console.log(`Retry attempt ${attempt + 1} after ${delay}ms`);
      
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  
  throw lastError;
}

Exponential Backoff with Jitter

async function fetchWithRetry(url, options = {}) {
  const {
    maxRetries = 3,
    baseDelay = 1000,
    maxDelay = 30000,
    retryableStatuses = [408, 429, 500, 502, 503, 504],
    onRetry
  } = options;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url);
      
      if (response.ok) {
        return await response.json();
      }
      
      // Non-retryable statuses fail fast rather than being retried
      if (!retryableStatuses.includes(response.status)) {
        const error = new Error(`Non-retryable error: ${response.status}`);
        error.retryable = false;
        throw error;
      }
      
      throw new Error(`Retryable error: ${response.status}`);
      
    } catch (error) {
      if (error.retryable === false || attempt === maxRetries - 1) {
        throw error;
      }
      
      // Exponential backoff, capped at maxDelay
      const exponentialDelay = Math.min(
        baseDelay * Math.pow(2, attempt),
        maxDelay
      );
      
      // Add random jitter (±25%)
      const jitter = exponentialDelay * 0.5 * (Math.random() - 0.5);
      const delay = exponentialDelay + jitter;
      
      console.log(`Retry ${attempt + 1}/${maxRetries} after ${Math.round(delay)}ms`);
      
      // Let callers observe retries, e.g. to show feedback in the UI
      onRetry?.(attempt + 1);
      
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

React Hook with Retry

function useDataWithRetry(url) {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState(null);
  const [retryCount, setRetryCount] = useState(0);

  useEffect(() => {
    let cancelled = false;

    async function fetchData() {
      setLoading(true);
      setError(null);
      
      try {
        const result = await fetchWithRetry(url, { maxRetries: 3 });
        
        if (!cancelled) {
          setData(result);
        }
      } catch (err) {
        if (!cancelled) {
          setError(err);
        }
      } finally {
        if (!cancelled) {
          setLoading(false);
        }
      }
    }

    fetchData();

    return () => {
      cancelled = true;
    };
  }, [url, retryCount]);

  const retry = () => setRetryCount(c => c + 1);

  return { data, loading, error, retry };
}

// Usage
function UserProfile({ userId }) {
  const { data, loading, error, retry } = useDataWithRetry(`/api/users/${userId}`);

  if (loading) return <div>Loading...</div>;
  
  if (error) {
    return (
      <div>
        <p>Error: {error.message}</p>
        <button onClick={retry}>Try Again</button>
      </div>
    );
  }

  return <div>{data.name}</div>;
}

Retry with AbortController

async function fetchWithRetryAndTimeout(url, options = {}) {
  const {
    maxRetries = 3,
    timeout = 5000,
    baseDelay = 1000
  } = options;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const controller = new AbortController();
    const timeoutId = setTimeout(() => controller.abort(), timeout);

    try {
      const response = await fetch(url, {
        signal: controller.signal
      });
      
      clearTimeout(timeoutId);
      
      if (response.ok) {
        return await response.json();
      }
      
      // Retry on server errors
      if (response.status >= 500) {
        throw new Error(`Server error: ${response.status}`);
      }
      
      // Don't retry client errors
      throw new Error(`Client error: ${response.status}`);
      
    } catch (error) {
      clearTimeout(timeoutId);
      
      // Last attempt - throw error
      if (attempt === maxRetries - 1) {
        throw error;
      }
      
      // Retry on timeout or server error
      if (error.name === 'AbortError' || error.message.includes('Server error')) {
        const delay = baseDelay * Math.pow(2, attempt);
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      
      // Don't retry other errors
      throw error;
    }
  }
}

Retry with Circuit Breaker

class CircuitBreaker {
  constructor(threshold = 5, timeout = 60000) {
    this.failureCount = 0;
    this.threshold = threshold;
    this.timeout = timeout;
    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = Date.now();
  }

  async execute(fn) {
    // Circuit is OPEN - fail fast
    if (this.state === 'OPEN') {
      if (Date.now() < this.nextAttempt) {
        throw new Error('Circuit breaker is OPEN');
      }
      // Try to close circuit
      this.state = 'HALF_OPEN';
    }

    try {
      const result = await fn();
      
      // Success - reset circuit
      this.failureCount = 0;
      this.state = 'CLOSED';
      
      return result;
    } catch (error) {
      this.failureCount++;
      
      // Open circuit if threshold exceeded
      if (this.failureCount >= this.threshold) {
        this.state = 'OPEN';
        this.nextAttempt = Date.now() + this.timeout;
      }
      
      throw error;
    }
  }
}

// Usage
const breaker = new CircuitBreaker();

async function fetchWithCircuitBreaker(url) {
  return breaker.execute(() => fetchWithRetry(url));
}

Retry-After Header Support

async function fetchWithRetryAfter(url, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url);
      
      if (response.ok) {
        return await response.json();
      }
      
      // Handle 429 Too Many Requests with Retry-After header
      if (response.status === 429) {
        const retryAfter = response.headers.get('Retry-After');
        
        if (retryAfter && attempt < maxRetries - 1) {
          // Retry-After can be seconds or an HTTP date; clamp to non-negative
          const delay = Math.max(0, isNaN(retryAfter)
            ? new Date(retryAfter) - Date.now()
            : parseInt(retryAfter, 10) * 1000);
          
          console.log(`Rate limited, retrying after ${delay}ms`);
          await new Promise(resolve => setTimeout(resolve, delay));
          continue;
        }
      }
      
      // Other server errors
      if (response.status >= 500) {
        throw new Error(`Server error: ${response.status}`);
      }
      
      // Client errors - mark as non-retryable so they fail fast
      const error = new Error(`Client error: ${response.status}`);
      error.retryable = false;
      throw error;
      
    } catch (error) {
      if (error.retryable === false || attempt === maxRetries - 1) {
        throw error;
      }
      
      // Exponential backoff for other errors
      const delay = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
}

Idempotent Request Retry

async function safeRetryForIdempotentRequest(url, options = {}) {
  const {
    method = 'GET',
    maxRetries = 3
  } = options;

  // Only retry safe/idempotent methods
  const idempotentMethods = ['GET', 'HEAD', 'OPTIONS', 'PUT', 'DELETE'];
  
  if (!idempotentMethods.includes(method)) {
    console.warn(`Method ${method} is not idempotent - single attempt only`);
    const response = await fetch(url, options);
    if (!response.ok) {
      throw new Error(`Request failed: ${response.status}`);
    }
    return response.json();
  }

  // Note: fetchWithRetry as defined above only issues GET requests
  return fetchWithRetry(url, { maxRetries });
}

User Feedback During Retry

function DataLoader({ url }) {
  const [attempt, setAttempt] = useState(0);
  const [data, setData] = useState(null);

  useEffect(() => {
    async function load() {
      try {
        // Requires a fetchWithRetry that accepts an onRetry callback
        const result = await fetchWithRetry(url, {
          maxRetries: 3,
          onRetry: (attemptNum) => {
            setAttempt(attemptNum);
          }
        });
        setData(result);
      } catch (error) {
        console.error('All retries failed:', error);
      }
    }

    load();
  }, [url]);

  if (!data) {
    return (
      <div>
        Loading...
        {attempt > 0 && (
          <p className="retry-notice">
            Connection issue detected, retrying (attempt {attempt})...
          </p>
        )}
      </div>
    );
  }

  return <div>{data.content}</div>;
}

Benefits

  • Makes applications resilient to temporary network issues, WiFi dropouts, and transient service disruptions without user intervention.
  • Improves user experience by handling transient failures automatically, eliminating manual refresh or “try again” button clicks.
  • Reduces error messages from temporary blips that resolve quickly, preventing user frustration from spurious failures.
  • Works particularly well for read operations that are idempotent (same request can be made multiple times safely).
  • Handles rate limiting gracefully by respecting Retry-After headers and backing off appropriately.
  • Enables applications to survive brief service disruptions like deployments, restarts, or failovers transparently.
  • Improves mobile app experience where network conditions fluctuate frequently.

Tradeoffs

  • Can delay error feedback if retries keep failing - users wait through multiple retry attempts before seeing the error message.
  • May overwhelm struggling servers with repeated requests if many clients retry simultaneously without jitter, creating thundering herd problems.
  • Requires careful tuning of retry count and backoff strategy - too aggressive wastes resources, too conservative provides poor UX.
  • Can hide persistent problems by masking them with retries - developers may not notice broken endpoints if retries always eventually succeed.
  • Exponential backoff can lead to very long delays (32s, 64s) on later retries, making the application appear frozen.
  • Retry logic adds code complexity around error handling, timeout management, and state tracking.
  • Non-idempotent operations (POST creating resources) should not be retried automatically as they may create duplicates or inconsistent state.
  • Circuit breakers add complexity and require tuning threshold and timeout parameters appropriate for specific failure modes.
  • Retrying mutations without idempotency keys risks duplicate operations like double-charging credit cards or creating duplicate records.
  • Network costs increase with retries - mobile users on metered connections pay for failed attempts even if they eventually succeed.
  • Retry logic makes debugging harder because errors may occur multiple times with different timing before surfacing to users.
  • Difficult to distinguish between truly transient errors worth retrying and persistent failures that need different handling.
  • Jitter randomization helps with thundering herd but makes retry timing unpredictable and harder to reason about.
  • Testing retry logic requires mocking failures at specific attempts and verifying timing, which is awkward in most test frameworks.
  • Aggressive retry strategies can mask underlying infrastructure problems that should be fixed rather than worked around.
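The testing concern above is easier to manage when the operation is injected rather than hard-coded to fetch, because a test can then fail deterministically on chosen attempts. Below is a sketch of an injectable retry helper plus a reusable test double; both `retry` and `makeFlaky` are hypothetical helpers, not part of the examples above.

```javascript
// Generic retry helper that takes the operation as a parameter, making it
// easy to substitute a deterministic fake in tests
async function retry(operation, { maxRetries = 3, baseDelay = 1000 } = {}) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await operation(attempt);
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      await new Promise(resolve => setTimeout(resolve, baseDelay * Math.pow(2, attempt)));
    }
  }
}

// Test double: fails the first `failures` attempts, then succeeds
function makeFlaky(failures) {
  let attempts = 0;
  const operation = async () => {
    attempts++;
    if (attempts <= failures) throw new Error('transient failure');
    return 'ok';
  };
  return { operation, getAttempts: () => attempts };
}
```

Setting `baseDelay` near zero keeps such tests fast; fake-timer utilities like Jest's can instead keep production delays while advancing the clock artificially.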