API Integration Examples: Ready-to-Use Snippets for REST and GraphQL

Daniel Mercer
2026-05-14
21 min read

Production-ready API integration examples in JavaScript and Python with auth, pagination, retries, caching, and error handling.

Modern software teams rarely fail because they cannot call an API. They fail because the integration is brittle, inconsistent, slow under load, or hard to maintain six months later. This guide gives you practical API integration examples in both JavaScript and Python, with authenticated requests, pagination, retries, caching, and real error handling patterns you can adapt to your own services. If you are building a migration from a legacy API to a modern integration, or designing a new workflow that needs to be production-safe, the difference is in the details: auth refresh, backoff, observability, and safe defaults.

Think of this as a field-ready script library for engineers who want runnable code examples, not toy demos. You will see how to structure reusable integration workflows, how to handle failure like a production system, and how to make your data flows interoperable without breaking policy constraints. The same patterns also apply when your service is feeding dashboards, internal automation scripts, or customer-facing features that need reliable data access.

1. What Makes a Good API Integration Example?

Reusable, not just runnable

A good example should teach a pattern, not just a single request. Developers need code snippets that can be dropped into a service, tested locally, and extended with minimal rewrite. That means the example should separate concerns: authentication, transport, retries, response parsing, and application logic. The best integration snippets feel like a miniature production module rather than a one-off request.

This matters because most teams end up copying snippets into many repos and then maintaining them for years. If the example is structured poorly, every new endpoint becomes a special case. If it is structured well, you get reusable developer scripts that can handle common tasks like fetching records, posting updates, and dealing with pagination across both REST and GraphQL APIs.

Authentication and failure modes must be explicit

Auth is where many examples become misleading. If you only show an API key in a header, you are skipping the realities of OAuth2 access tokens, refresh flows, short-lived credentials, scoped permissions, and secret rotation. A trustworthy example makes these mechanics visible and explains what happens when tokens expire or permissions are insufficient. That transparency reduces surprises in production.

Failure handling is equally important. APIs can return 429 rate limits, 401 invalid credentials, 403 permission errors, 5xx transient failures, malformed JSON, or slow responses. If your example does not cover these, it is incomplete. When teams evaluate tools or snippets for production use, they are really asking one question: can this code be trusted under pressure?

Performance, observability, and safety

Even a small API client should include sane timeouts, logging, retries with jitter, and caching where appropriate. The goal is not only to make requests succeed, but to avoid cascading failures: throughput matters, but so does consistency under stress.

In practice, a production-ready snippet should also include a place to inject custom headers, support retry policies, and handle response validation. This is where many libraries fall short. When you keep those concerns visible in your own code, it becomes much easier to adapt the integration to different services, from payments and CRM systems to analytics APIs and internal admin tools.

2. REST API Integration in JavaScript: Authenticated GET with Retries

Core fetch wrapper with timeout and retries

The most common REST integration pattern is a GET request that needs authorization and resilience. The snippet below uses the native Fetch API available in modern Node.js, adds timeout support through AbortController, retries transient failures, and returns parsed JSON. It is intentionally small, but it includes the decisions that matter in real services.

async function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

async function fetchWithRetry(url, options = {}, { retries = 3, baseDelay = 300, timeoutMs = 8000 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    // Abort the request if it exceeds the timeout budget.
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), timeoutMs);

    try {
      const response = await fetch(url, {
        ...options,
        signal: controller.signal,
        headers: {
          'Accept': 'application/json',
          ...(options.headers || {})
        }
      });

      clearTimeout(timeout);

      if (!response.ok) {
        // Capture the body for diagnostics, then surface the status
        // so the catch block can decide whether to retry.
        const body = await response.text().catch(() => '');
        const error = new Error(`HTTP ${response.status}: ${body}`);
        error.status = response.status;
        throw error;
      }

      return await response.json();
    } catch (err) {
      clearTimeout(timeout);
      // Retry timeouts and transient statuses; fail fast on everything
      // else (400, 401, 403) so bad requests are not hammered.
      const retryable = err.name === 'AbortError' || [429, 500, 502, 503, 504].includes(err.status);
      if (!retryable || attempt === retries) throw err;

      // Exponential backoff with jitter spreads retries out over time.
      const jitter = Math.floor(Math.random() * 100);
      const delay = baseDelay * Math.pow(2, attempt) + jitter;
      await sleep(delay);
    }
  }
}

This wrapper gives you a simple base for automation scripts that call external services without making every call a risk. Notice how the code distinguishes between retryable and non-retryable failures. That prevents endless retries on authentication errors, which would waste time and possibly trigger account lockouts.

Authenticated request example

Here is how to use the wrapper with a bearer token. In production, the token should come from a secret manager or runtime environment, not hard-coded. The example also demonstrates how to send custom query parameters while preserving a clean client interface.

const token = process.env.API_TOKEN;

async function getOrders() {
  const url = new URL('https://api.example.com/v1/orders');
  url.searchParams.set('limit', '50');
  url.searchParams.set('status', 'open');

  return fetchWithRetry(url.toString(), {
    headers: {
      'Authorization': `Bearer ${token}`
    }
  });
}

getOrders()
  .then(data => console.log(data))
  .catch(err => console.error('Request failed:', err.message));

This pattern is ideal when you need code snippets that are easy to audit, and it keeps integration behavior predictable as the systems around it evolve.

Practical error-handling rules

Do not parse errors only from HTTP status. Some APIs return useful structured error objects even when the status is 200. Others return a JSON envelope with an error field that indicates a soft failure. A robust client should inspect both the transport response and the payload. It should also normalize errors into a format your app understands, such as { code, message, status, retryable }.

For teams shipping production features, this is where trustworthy examples stand apart from generic tutorials. A good snippet doesn’t just show the happy path; it teaches the contract you should enforce in your application.
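As a sketch of that normalization rule in Python — the envelope fields `error`, `message`, and `code` are assumptions here; map them to whatever your provider actually returns:

```python
RETRYABLE_STATUSES = {429, 500, 502, 503, 504}

def normalize_error(status, payload):
    """Return None on success, or a dict {code, message, status, retryable}."""
    body = payload if isinstance(payload, dict) else {}

    # Transport-level failure: classify by HTTP status.
    if status >= 400:
        return {
            'code': 'HTTP_ERROR',
            'message': body.get('message', f'HTTP {status}'),
            'status': status,
            'retryable': status in RETRYABLE_STATUSES,
        }

    # Payload-level soft failure: a 200 that still carries an error field.
    err = body.get('error')
    if isinstance(err, dict):
        return {
            'code': err.get('code', 'SOFT_FAILURE'),
            'message': err.get('message', ''),
            'status': status,
            'retryable': False,
        }
    return None
```

Because the function returns one shape regardless of where the failure originated, callers can branch on `retryable` without knowing the vendor's conventions.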

3. REST API Integration in Python: Session, Auth, and Retries

Using requests with a resilient session

Python remains a favorite for backend jobs, internal tools, and automation scripts for repetitive operations. A strong REST client in Python should reuse connections, configure retries centrally, and make auth easy to rotate. The requests library plus urllib3 retry adapters gives you a production-friendly foundation.

import os
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def build_session(token: str) -> requests.Session:
    session = requests.Session()
    session.headers.update({
        'Authorization': f'Bearer {token}',
        'Accept': 'application/json'
    })

    retry = Retry(
        total=3,
        backoff_factor=0.5,
        status_forcelist=[429, 500, 502, 503, 504],
        # POST and PATCH are retried here for convenience; keep them only
        # if your API supports idempotency keys, since they are not
        # idempotent by default.
        allowed_methods=["GET", "POST", "PUT", "PATCH", "DELETE"]
    )

    adapter = HTTPAdapter(max_retries=retry)
    session.mount('http://', adapter)
    session.mount('https://', adapter)
    return session


def get_customers():
    token = os.environ['API_TOKEN']
    session = build_session(token)
    response = session.get('https://api.example.com/v1/customers', timeout=8)
    response.raise_for_status()
    return response.json()


print(get_customers())

The practical advantage here is connection reuse. That matters when your script library may be called dozens or hundreds of times in a batch process. It also keeps your integration closer to a service-ready design, which pays off when a script later graduates into a long-running service.

Handling token expiry cleanly

If your API uses OAuth2, the access token will eventually expire. Your client should detect 401 responses, refresh the token once, and then retry the original request. Do not blindly retry 401 in a loop. That usually means credentials are invalid, the refresh token is broken, or the scope has changed. In those cases, fail fast and emit an actionable error message for operators.

A helpful pattern is to create a token provider function that your request layer can call whenever it detects expiry. This keeps the auth logic separate from the API operation itself. It also makes the code easier to test because you can mock token refresh without rewriting the request code.
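A minimal sketch of that token-provider pattern, assuming you supply a `token_provider(force_refresh)` callable that returns a current access token — the class and method names here are illustrative, not a library API:

```python
class RefreshingClient:
    """Retry a request once after refreshing the token on 401."""

    def __init__(self, session, token_provider):
        self.session = session
        self.token_provider = token_provider

    def get(self, url, **kwargs):
        token = self.token_provider(force_refresh=False)
        response = self.session.get(
            url, headers={'Authorization': f'Bearer {token}'}, **kwargs
        )
        if response.status_code == 401:
            # Refresh exactly once; a second 401 means the credentials
            # themselves are broken, so fail fast for operators.
            token = self.token_provider(force_refresh=True)
            response = self.session.get(
                url, headers={'Authorization': f'Bearer {token}'}, **kwargs
            )
        response.raise_for_status()
        return response
```

Because the token provider is injected, tests can mock refresh behavior without touching the request code.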

Why this Python pattern is production-friendly

This structure mirrors the way seasoned engineers approach integration work: isolate transport concerns, keep business logic thin, and make failures observable. That same mindset is reflected in integration architecture guidance, where the point is not merely connectivity but dependable interoperability. When teams adopt this style, they spend less time debugging transient issues and more time shipping product features.

Pro tip: Use a single session per service integration, not a new session per request. Reusing connections reduces latency, improves throughput, and gives retry logic a consistent place to live.

4. Pagination Patterns: Offset, Cursor, and Link Headers

Offset pagination in JavaScript

Many APIs still use offset pagination because it is simple to reason about. The downside is that offsets can become slow on large datasets and can shift if records change during traversal. Still, for moderate data volumes, it is perfectly workable. The key is to wrap pagination in a reusable loop so the calling code gets a single aggregated result rather than managing page state manually.

async function listAllPages(baseUrl, token) {
  let page = 1;
  const allItems = [];

  while (true) {
    const url = new URL(baseUrl);
    url.searchParams.set('page', String(page));
    url.searchParams.set('per_page', '100');

    const data = await fetchWithRetry(url.toString(), {
      headers: { 'Authorization': `Bearer ${token}` }
    });

    if (!data.items || data.items.length === 0) break;
    allItems.push(...data.items);

    if (data.items.length < 100) break;
    page += 1;
  }

  return allItems;
}

Cursor pagination in Python

Cursor pagination is usually the more reliable option for large or frequently changing datasets. Instead of requesting page 17, you request the next cursor returned by the API. This avoids missing or duplicating records when the dataset changes between calls. It is also the preferred model for many modern APIs, especially where performance and consistency matter.

def list_all_items(session, url):
    items = []
    cursor = None

    while True:
        params = {'limit': 100}
        if cursor:
            params['cursor'] = cursor

        response = session.get(url, params=params, timeout=8)
        response.raise_for_status()
        payload = response.json()

        items.extend(payload.get('data', []))
        cursor = payload.get('next_cursor')
        if not cursor:
            break

    return items

Choosing the right pagination strategy

If you are designing your own API, cursor pagination usually scales better and is safer for clients. Offset pagination is easier for reporting, admin lists, and simpler integrations. Link-header pagination is a nice HTTP-native option when you want clients to follow standard next and prev relations. Whichever model you choose, document it clearly and include examples, because pagination is one of the most common sources of integration bugs.
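For the link-header case, `requests` already parses the `Link` header into `response.links`, so a follower loop is short. This sketch assumes each page's body is a plain JSON array; adjust the extraction if your API wraps items in an envelope:

```python
def list_via_link_header(session, url):
    """Follow Link headers (rel="next") until the server stops sending one.

    requests parses the Link header into response.links for us;
    response.links is an empty dict when the header is absent.
    """
    items = []
    while url:
        response = session.get(url, timeout=8)
        response.raise_for_status()
        items.extend(response.json())
        url = response.links.get('next', {}).get('url')
    return items
```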

For teams that value repeatability, the principle is the same as any infrastructure lifecycle decision: choose the mechanism that best balances operational cost, scale, and change frequency.

5. GraphQL Integration Examples: Queries, Variables, and Error Inspection

JavaScript GraphQL query with authenticated POST

GraphQL reduces overfetching and underfetching, but the request shape is different from REST. Instead of multiple endpoints, you typically send a POST request with a query document and variables. A clean client should still include retries, auth, and structured error handling.

async function graphqlRequest(query, variables = {}) {
  // fetchWithRetry returns the parsed JSON body, not a Response object.
  const result = await fetchWithRetry('https://api.example.com/graphql', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.API_TOKEN}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ query, variables })
  });

  // GraphQL reports failures in an errors array, often alongside HTTP 200.
  if (result.errors) {
    throw new Error(JSON.stringify(result.errors));
  }

  return result.data;
}

const query = `
  query User($id: ID!) {
    user(id: $id) {
      id
      name
      email
    }
  }
`;

graphqlRequest(query, { id: '123' })
  .then(data => console.log(data.user))
  .catch(err => console.error(err.message));

Python GraphQL query with response validation

In Python, the pattern is similar. Use the session to post JSON, check both HTTP status and GraphQL errors, and return a clean object. This makes the calling code simpler and more consistent.

def graphql_request(session, query, variables=None):
    payload = {
        'query': query,
        'variables': variables or {}
    }
    response = session.post(
        'https://api.example.com/graphql',
        json=payload,
        timeout=8
    )
    response.raise_for_status()
    result = response.json()

    if 'errors' in result:
        raise RuntimeError(result['errors'])

    return result['data']

GraphQL-specific integration tips

GraphQL errors can be partial. A single response may contain valid data plus an errors array. That means your client should not assume all failures are hard failures. You may need to inspect the error path and determine whether a fallback is possible. For higher-volume teams, partial-error handling deserves a place in your observability strategy.
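One way to make that distinction explicit is a small splitter that treats errors as partial whenever data is still present. This is a heuristic, not a spec rule — check the `path` field of each error against your own schema before trusting the split:

```python
def split_graphql_result(result):
    """Separate usable data from partial errors in a GraphQL response.

    Returns (data, hard_errors, partial_errors). An error counts as
    partial when the response still carries data alongside it.
    """
    data = result.get('data')
    errors = result.get('errors', [])
    if data is None and errors:
        # No data at all: the whole operation failed.
        return None, errors, []
    return data, [], errors
```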

If you are using GraphQL extensively, also consider persisted queries, query cost limits, and schema-aware code generation. These reduce request size, improve security, and make integration bugs easier to detect before deployment.

6. Caching API Responses Without Creating Stale Data Problems

When caching helps

Caching is one of the easiest ways to improve API performance, but it should be used intentionally. Cache data that changes slowly, or data that is repeatedly requested by many users. Avoid caching sensitive or per-user authorization responses unless you understand the privacy implications. Cache keys should include all request dimensions that affect the output, such as query parameters, locale, and user scope.

Good caching is not just about speed. It is about reducing rate-limit pressure and making downstream systems more predictable. If an API is expensive or subject to quotas, caching can be the difference between a stable app and an unreliable one. This is why engineering teams often evaluate caching strategy with the same seriousness they apply to product delivery systems and operational planning.

Simple in-memory cache in JavaScript

const cache = new Map();

async function cachedFetch(url, options = {}, ttlMs = 30000) {
  const key = JSON.stringify({ url, options });
  const cached = cache.get(key);

  if (cached && cached.expiresAt > Date.now()) {
    return cached.value;
  }

  const value = await fetchWithRetry(url, options);
  cache.set(key, {
    value,
    expiresAt: Date.now() + ttlMs
  });

  return value;
}

File-based cache in Python

import json
import time
from pathlib import Path

CACHE_FILE = Path('/tmp/api_cache.json')


def load_cache():
    if CACHE_FILE.exists():
        return json.loads(CACHE_FILE.read_text())
    return {}


def save_cache(cache):
    CACHE_FILE.write_text(json.dumps(cache))


# Note: this simple file cache is not safe for concurrent writers; use a
# proper store (Redis, memcached) for multi-process jobs.
def cached_get(session, url, ttl=30):
    cache = load_cache()
    key = url
    now = time.time()

    if key in cache and cache[key]['expires_at'] > now:
        return cache[key]['value']

    response = session.get(url, timeout=8)
    response.raise_for_status()
    value = response.json()

    cache[key] = {'value': value, 'expires_at': now + ttl}
    save_cache(cache)
    return value

Cache invalidation and correctness

The hard part is not storing data; it is knowing when to invalidate it. If you are caching user-specific or transactional data, use a short TTL and avoid sharing across identities. If the API supports ETags or Last-Modified, use conditional requests to reduce payload size while still staying fresh. That approach is often better than inventing a custom cache when the server can help.
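A sketch of the conditional-request approach with `requests`, assuming the server supports `ETag` and `If-None-Match` — the plain-dict cache here is illustrative:

```python
def conditional_get(session, url, etag_cache):
    """Use If-None-Match so the server can answer 304 with no body.

    etag_cache maps url -> (etag, cached_value).
    """
    headers = {}
    if url in etag_cache:
        headers['If-None-Match'] = etag_cache[url][0]

    response = session.get(url, headers=headers, timeout=8)
    if response.status_code == 304:
        # Not modified: reuse the cached body and skip the download.
        return etag_cache[url][1]

    response.raise_for_status()
    value = response.json()
    etag = response.headers.get('ETag')
    if etag:
        etag_cache[url] = (etag, value)
    return value
```

The server decides freshness, so you get cache-like savings without inventing your own invalidation rules.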

For product teams, this decision should be documented just like licensing or compliance requirements. A code snippet is only trustworthy if it is clear where stale data can appear and how that impacts business behavior.

7. Handling Errors, Rate Limits, and Backoff Like a Pro

Normalize errors into predictable categories

Strong integration code should not leak raw transport complexity into every caller. Instead, it should convert vendor-specific errors into a small set of categories, such as AUTH, RATE_LIMIT, TRANSIENT, and VALIDATION. This makes downstream behavior easier to design, test, and monitor. It also helps support teams triage incidents faster because they can see what kind of failure occurred.

For instance, a 429 should usually trigger a retry after a delay. A 400 might indicate invalid request construction and should fail immediately. A 500 often deserves retry with jitter, while a 401 should prompt token refresh or credential checks. If you classify responses well, your application logic stays clean and your alerting becomes more useful.
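That classification can be as small as one function. The category names follow the AUTH, RATE_LIMIT, TRANSIENT, and VALIDATION scheme above; the suggested actions are sensible defaults to adapt, not provider guarantees:

```python
def classify_status(status):
    """Map an HTTP status code to (category, suggested_action)."""
    if status == 401:
        return 'AUTH', 'refresh token or check credentials'
    if status == 403:
        return 'AUTH', 'check scopes and permissions'
    if status == 429:
        return 'RATE_LIMIT', 'retry after backoff'
    if 400 <= status < 500:
        return 'VALIDATION', 'fail fast; fix the request'
    if status >= 500:
        return 'TRANSIENT', 'retry with jitter'
    return 'OK', 'none'
```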

Exponential backoff with jitter

Backoff prevents a thundering herd when many clients retry at once. Jitter adds randomness so those retries spread out over time. This is especially important when your service depends on a shared provider or a fragile upstream dependency. In practice, it is one of the simplest ways to improve reliability.

Pro tip: Retry only idempotent requests by default. For POST requests that create records, use idempotency keys so a retry cannot accidentally duplicate the operation.
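A sketch of that idempotency-key pattern. The `Idempotency-Key` header name follows the convention popularized by payment APIs; confirm your provider's exact header before relying on it:

```python
import uuid

def post_with_idempotency(session, url, payload, idempotency_key=None):
    """POST with an Idempotency-Key so retries cannot double-create.

    Reusing the same key on retry lets the server deduplicate the
    operation instead of creating a second record.
    """
    key = idempotency_key or str(uuid.uuid4())
    response = session.post(
        url,
        json=payload,
        headers={'Idempotency-Key': key},
        timeout=8,
    )
    response.raise_for_status()
    return key, response.json()
```

Persist the key alongside the pending operation so a crashed job can retry with the same key on its next run.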

Observability for integration failures

Every API client should log request identifiers, response codes, duration, and retry count. Do not log secrets or full payloads if they contain sensitive data. Instead, log safe metadata that helps you correlate problems across services. If possible, propagate trace IDs so the request can be followed through your entire stack.
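A minimal sketch of that logging discipline, recording only safe metadata — the wrapper name and logger name are illustrative:

```python
import logging
import time

logger = logging.getLogger('api_client')

def timed_request(session, method, url, **kwargs):
    """Log method, status, and duration -- never payloads or secrets."""
    start = time.monotonic()
    response = session.request(method, url, **kwargs)
    duration_ms = (time.monotonic() - start) * 1000
    logger.info(
        'api_call method=%s url=%s status=%s duration_ms=%.1f',
        method, url, response.status_code, duration_ms,
    )
    return response
```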

This is the operational difference between a script that works on your laptop and a module that belongs in production. It is also why engineers often prefer a curated snippet library over random internet examples: the curated version is more likely to include the operational details that prevent outages, similar to the way field-tested debugging guidance outperforms generic advice.

8. Real-World Integration Patterns by Use Case

Marketing and CRM syncs

When syncing customer data to a CRM or marketing platform, you usually need pagination, retries, and upsert logic. These jobs often run as scheduled automation scripts, so they must handle partial failures gracefully and resume without duplicating records. If the API is rate-limited, batching and backoff are essential. If the source data changes frequently, cursor-based traversal is usually the safest option.

For these workflows, your client should also support checkpointing. A checkpoint records the last successful cursor or timestamp, allowing the next run to continue where the previous run stopped. This is one of the easiest ways to make recurring scripts reliable enough for business use.
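A sketch of that checkpointing idea, assuming a `fetch_page(cursor)` callable shaped like the cursor-pagination helper above. The file path is illustrative; production jobs usually checkpoint to a database or object store:

```python
import json
from pathlib import Path

CHECKPOINT_FILE = Path('/tmp/sync_checkpoint.json')  # illustrative path

def load_checkpoint():
    if CHECKPOINT_FILE.exists():
        return json.loads(CHECKPOINT_FILE.read_text()).get('cursor')
    return None

def save_checkpoint(cursor):
    CHECKPOINT_FILE.write_text(json.dumps({'cursor': cursor}))

def sync_batch(fetch_page):
    """Resume from the last saved cursor; fetch_page(cursor) is assumed
    to return (items, next_cursor)."""
    cursor = load_checkpoint()
    processed = []
    while True:
        items, next_cursor = fetch_page(cursor)
        processed.extend(items)
        if not next_cursor:
            break
        cursor = next_cursor
        save_checkpoint(cursor)  # a crash after this point resumes here
    return processed
```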

Internal dashboards and admin tools

Dashboard integrations often need low-latency reads, cached responses, and clear error messages for operators. In this setting, the most important concern is usually correctness with enough freshness, not absolute real-time data. Small TTL caches and background refresh jobs can significantly reduce load and improve page responsiveness. That is especially useful if the dashboard makes many parallel API calls.

When teams build admin tools, they should treat API clients as part of the product surface. Good default states, retry feedback, and empty-state handling matter because the operator experience determines whether the tool is trusted. This is why a polished snippet library can save hours across multiple teams, not just one feature crew.

AI and data pipelines

API integrations increasingly feed retrieval, enrichment, and automation pipelines. In those cases, the same principles still apply, but the blast radius is bigger because downstream models or automations may depend on the output. That makes validation, response schema checks, and cache correctness more important than ever. It is similar in spirit to choosing the right dataset for a high-stakes decision: the input quality dictates the output quality.

If you are stitching together services across teams, the integration should be documented as carefully as any external dependency, with clear workflows and standardized handoffs between the teams that own each side.

9. Comparison Table: REST vs GraphQL, JavaScript vs Python

The best implementation choice depends on your team’s stack, the API provider, and how often the interface changes. Use the table below to decide which snippet pattern fits your current project. If you are building a reusable script library, you may even support both JavaScript and Python versions for different teams.

| Scenario | Best Option | Why It Works | Risk to Watch |
| --- | --- | --- | --- |
| Simple list/read operations | REST + GET | Easy to cache, debug, and test | Pagination inconsistencies |
| Flexible field selection | GraphQL | Reduces overfetching and extra endpoints | Partial errors can be missed |
| Batch automation job | Python + requests | Great for scripts, cron jobs, and ETL | Connection handling if sessions are not reused |
| Frontend or serverless integration | JavaScript + fetch | Natural fit for Node and edge runtimes | Timeouts and retries must be added manually |
| High-volume sync workflow | REST/GraphQL + caching + checkpoints | Balances throughput with resilience | Stale data if invalidation is weak |
| Third-party rate-limited API | Retry with exponential backoff | Reduces pressure on upstream systems | Retry storms if idempotency is ignored |

10. Production Checklist Before You Ship

Security and secrets

Store tokens in environment variables, secret managers, or managed workload identity systems. Rotate keys regularly and scope them to the smallest permissions possible. If the API supports short-lived tokens, prefer them. Never commit credentials into a repository, even for a demo, because examples get copied longer than expected.

Testing and mocking

Every integration should have tests that mock both success and failure cases. At minimum, test 200, 401, 429, and 500 responses, plus malformed JSON and timeout behavior. If the API is critical, add contract tests against a staging environment so you can detect breaking changes before production. This is especially important for APIs that change their schema or pagination behavior without warning.
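As a sketch of that mocking discipline with `unittest.mock` — the `get_customers` helper is re-declared here so the example is self-contained; real tests would import it from your client module:

```python
import unittest
from unittest import mock

import requests

def get_customers(session):
    response = session.get('https://api.example.com/v1/customers', timeout=8)
    response.raise_for_status()
    return response.json()

def make_response(status, body=b'{}'):
    """Build a real requests.Response so raise_for_status and json work."""
    resp = requests.Response()
    resp.status_code = status
    resp._content = body  # private attribute; acceptable inside tests
    return resp

class TestGetCustomers(unittest.TestCase):
    def test_success(self):
        session = mock.Mock()
        session.get.return_value = make_response(200, b'{"data": [1]}')
        self.assertEqual(get_customers(session), {'data': [1]})

    def test_auth_failure(self):
        session = mock.Mock()
        session.get.return_value = make_response(401)
        with self.assertRaises(requests.HTTPError):
            get_customers(session)

    def test_rate_limit(self):
        session = mock.Mock()
        session.get.return_value = make_response(429)
        with self.assertRaises(requests.HTTPError):
            get_customers(session)
```

Extend the same pattern with a 500 response, a body of invalid JSON, and a `session.get` side effect of `requests.Timeout` to cover the remaining failure modes.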

Documentation and ownership

Document the endpoint, auth type, rate limits, retry policy, cache TTL, and known error conditions in one place. Also document who owns the integration and how incidents should be escalated. The best integration workflows are not just code; they are operational systems with explicit ownership and maintenance plans.

Pro tip: If your integration affects revenue, compliance, or customer-visible data, treat the client module like production infrastructure. That means code review, observability, test coverage, and a rollback plan.

Conclusion: The Best API Snippets Are Small, Explicit, and Durable

Great API integration examples do more than show syntax. They show how to build trustworthy integration code that survives real traffic, real auth failures, real pagination quirks, and real operational pressure. The JavaScript and Python snippets in this guide are designed to be adapted, not copied blindly. If you keep the separation between transport, retries, caching, and business logic clear, your code becomes easier to test, easier to debug, and easier to hand off to another engineer.

That is the practical advantage of a vetted snippet library: faster delivery with less reinvention. Whether you are wiring up REST endpoints, GraphQL queries, or internal automation scripts, the patterns are the same. Build for failure, document assumptions, and make each integration a reusable asset instead of a one-off workaround.

FAQ

Should I use REST or GraphQL for new integrations?

Use REST when you want straightforward caching, debugging, and broad compatibility. Use GraphQL when clients need flexible field selection or the API has many related resources that would otherwise require multiple calls. In practice, the best choice depends on the provider’s ecosystem and your team’s experience. If you are unsure, start with the provider’s recommended approach and wrap it with consistent retries, auth, and validation.

How many retries are safe for API calls?

Usually 2 to 4 retries is enough for transient issues like 429s or 5xx responses. More retries can create latency spikes and worsen upstream load. Use exponential backoff with jitter, and avoid retrying non-idempotent requests unless you have idempotency keys. The goal is resilience, not repeated noise.

What should I cache from API responses?

Cache data that changes slowly, is expensive to fetch, or is requested repeatedly. Good examples include reference data, product metadata, and public configuration. Avoid caching user-sensitive or highly dynamic transactional results unless you have strict scoping and invalidation rules. A short TTL is often safer than a complex invalidation design.

How do I handle pagination in a reusable way?

Wrap the pagination loop in a helper function so calling code gets a full list or an iterator. Prefer cursor pagination if the API supports it, because it is less prone to missing or duplicating records during updates. If you use offset pagination, document page size limits and ordering assumptions clearly. Always test with empty pages, last pages, and partial failures.

What is the best way to handle API errors in application code?

Normalize external errors into a small set of internal categories. For example, map 401 to auth, 429 to rate limit, 5xx to transient, and 4xx validation errors to input issues. Then make the application respond based on category instead of raw vendor codes. This keeps your service logic cleaner and your logs more actionable.

How do JavaScript and Python snippets differ in practice?

JavaScript snippets often fit best in Node-based services, serverless functions, and edge environments. Python snippets shine in cron jobs, ETL, data pipelines, and internal tooling. The main differences are around HTTP client libraries, session reuse, and how you structure async behavior. The underlying integration principles are the same: auth, retries, pagination, caching, and clear error handling.
