Generate a GraphQL client SDK for any GraphQL API

Package Exports

  • buro26-graphql-codegen-client
  • buro26-graphql-codegen-client/dist/index.js
  • buro26-graphql-codegen-client/dist/index.mjs

This package does not declare an exports field, so the exports above have been automatically detected and optimized by JSPM instead. If any package subpath is missing, it is recommended to post an issue to the original package (buro26-graphql-codegen-client) to support the "exports" field. If that is not possible, create a JSPM override to customize the exports field for this package.
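For package authors, a minimal `exports` field covering the detected subpaths above might look like the following (a sketch; the conditions may need adjusting for your actual build output):

```json
{
  "name": "buro26-graphql-codegen-client",
  "exports": {
    ".": {
      "import": "./dist/index.mjs",
      "default": "./dist/index.js"
    }
  }
}
```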

GraphQL Codegen Client

Generate a secure, high-performance client SDK for your GraphQL API using GraphQL Codegen. Features include per-user caching, rate limiting, error handling, and middleware support.

Getting started

npm i --save-dev buro26-graphql-codegen-client buro26-graphql-codegen-zod @graphql-codegen/cli @graphql-codegen/schema-ast @graphql-codegen/typescript @graphql-codegen/typescript-operations @graphql-codegen/typed-document-node

Usage

Create a codegen.ts file in the root of your project with the following content:

import type {CodegenConfig} from '@graphql-codegen/cli'

const config: CodegenConfig = {
    schema: 'http://localhost:1337/graphql',
    documents: 'src/lib/strapi/queries/**/!(*.generated).{ts,tsx}',
    debug: true,
    overwrite: true,
    verbose: true,
    generates: {
        // optional: generate schema types
        './schema.graphql': {
            plugins: ['schema-ast']
        },
        'src/lib/my-api/generated/types.ts': {
            plugins: ['typescript', 'typescript-operations', 'typed-document-node'],
            config: {
                withHooks: false,
                maybeValue: 'T | null',
                avoidOptionals: false
            }
        },
        'src/lib/my-api/generated/client-factory.ts': {
            plugins: ['buro26-graphql-codegen-client'],
            config: {
                logger: true,
                typesImportPath: '@/lib/strapi/generated/types',
                schemaImportPath: '@/lib/strapi/generated/schema'
            }
        },
        'src/lib/my-api/generated/schema.ts': {
            plugins: ['buro26-graphql-codegen-zod'],
            config: {
                onlyWithValidation: false,
                lazy: true,
                zodTypesMap: {
                    DateTime: 'string',
                    JSON: 'string',
                    ID: 'string',
                    Int: 'number',
                    Float: 'number',
                    Long: 'number',
                    String: 'string',
                    Boolean: 'boolean',
                    I18NLocaleCode: 'string'
                },
                zodSchemasMap: {
                    DateTime: 'z.string()',
                    JSON: 'z.any()',
                    Long: 'z.coerce.number()',
                    I18NLocaleCode: 'z.string()'
                }
            }
        }
    }
}
export default config

Add a script to your package.json to run the codegen:

{
  "scripts": {
    "codegen": "graphql-codegen"
  }
}

Run the codegen script:

npm run codegen

Create a client from the generated code:

import {createClient} from '@/lib/my-api/generated/client-factory'

export const client = createClient({
    url: process.env.MY_API_URL!,
    logging: true,
    
    // Optional: Global error handling
    onError: (error, operation) => {
        console.error('GraphQL Error:', error.message)
        // Send to error tracking service (Sentry, etc.)
    },
    
    // Optional: Rate limit handling
    onRateLimitExceeded: (type, token) => {
        console.warn('Rate limit exceeded:', type)
        // Show user notification or implement backoff strategy
    }
})

Make a request

Import the client and make a request:

import {client} from '@/lib/my-api/client'

const {data} = await client.query.adres.fetch({
    authToken: ctx.session?.accessToken,
    variables: {
        filters: {
            id: {
                eq: id
            }
        }
    }
})

Context-Aware Client

Manage request-scoped context using AsyncLocalStorage. This lets you provide context that is available throughout your GraphQL operations without passing it explicitly.

Note: Context functions are automatically re-exported in your generated client factory file, so you can import everything from there instead of the main library package.

Two Usage Patterns

Pattern 1: Direct createClient (No Breaking Changes)

import { createClient } from 'buro26-graphql-codegen-client'

export const client = createClient({
  url: 'http://localhost:4000/graphql',
  devMode: process.env.NODE_ENV === 'development',
  middlewares: [
    // Your existing middleware
  ],
})

Pattern 2: Context-Aware Factory

import { createContextAwareClient, getContext, contextStore } from '@/lib/my-api/generated/client-factory'

const createClient = createContextAwareClient(async () => {
  const session = await getServerSession()
  return {
    userId: session?.user?.id,
    email: session?.user?.email,
    accessToken: session?.accessToken
  }
})

export const client = createClient({
  url: 'http://localhost:4000/graphql',
  devMode: process.env.NODE_ENV === 'development',
  middlewares: [
    // Your middleware - context is available here
    async (request, context, next) => {
      const requestContext = getContext()
      if (requestContext?.userId) {
        request.headers.append('X-Requested-By', requestContext.userId)
      }
      return next(request, context)
    },
  ],
})

Setting Context

In Your Auth Logic

import { contextStore } from '@/lib/my-api/generated/client-factory'

export async function getServerSession() {
  const session = await nextGetServerSession(authOptions)
  
  if (!session?.accessToken) {
    return null
  }

  // Set context for this request
  contextStore.enterWith({
    userId: session.user.id,
    email: session.user.email,
    accessToken: session.accessToken
  })

  const meResponse = await getMe(session.accessToken)
  return { ...session, user: { ...session?.user, ...meResponse } }
}

In Your Middleware

// Context is automatically available in all middleware
async (request, context, next) => {
  const requestContext = getContext()
  
  if (requestContext?.userId) {
    request.headers.append('X-Requested-By', requestContext.userId)
  }
  
  const response = await next(request, context)
  
  // Validate response
  if (requestContext?.userId) {
    const responseRequestedBy = response.headers.get('X-Requested-By')
    if (responseRequestedBy !== requestContext.userId) {
      throw new Error('User context mismatch')
    }
  }
  
  return response
}

Context API Reference

createContextAwareClient<T>(createContext: () => Promise<T>) (Generated)

Creates a context-aware client factory. This is the recommended way to create context-aware clients.

Parameters:

  • createContext: Function that creates the context for each request

Returns: A function that creates a client with context support

createContextAwareClientFactory<T>(createContext: () => Promise<T>, createClient: (options: ClientOptions) => any) (Advanced)

The underlying factory function for advanced usage.

Parameters:

  • createContext: Function that creates the context for each request
  • createClient: The generated createClient function from your codegen

Returns: A function that creates a client with context support

getContext<T>(): T | undefined

Gets the current request context.

Returns: The current context or undefined if not set

contextStore: AsyncLocalStorage<any>

The underlying AsyncLocalStorage instance for advanced usage.
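As a rough sketch of how getContext and contextStore fit together (assuming the standard Node.js node:async_hooks API; this is not the package's actual source):

```typescript
import { AsyncLocalStorage } from 'node:async_hooks'

// A store holding arbitrary request context (the shape is up to the caller).
const contextStore = new AsyncLocalStorage<Record<string, unknown>>()

// getContext simply reads whatever the current async scope has set.
function getContext<T>(): T | undefined {
  return contextStore.getStore() as T | undefined
}

// Everything inside run() (including awaited continuations) sees the context;
// code outside run() gets undefined.
contextStore.run({ userId: 'u1' }, () => {
  console.log(getContext<{ userId: string }>()?.userId) // 'u1'
})
```

This is why context set in your auth logic via `contextStore.enterWith(...)` is visible later in middleware for the same request: both execute within the same async scope.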

Import Options

You can import context functions from either location:

// Option 1: From generated client factory (recommended)
import { createContextAwareClient, getContext, contextStore } from '@/lib/my-api/generated/client-factory'

// Option 2: From main library package (advanced usage)
import { createContextAwareClientFactory, getContext, contextStore } from 'buro26-graphql-codegen-client'

Recommendation: Use the generated client factory import for the cleanest API. The generated createContextAwareClient function automatically uses your generated createClient function.

Per-operation error handling

You can also handle errors for specific operations:

const {data} = await client.query.adres.fetch({
    authToken: ctx.session?.accessToken,
    variables: { filters: { id: { eq: id } } },
    
    // Custom error handler for this specific operation
    onError: (error) => {
        console.log('Custom error handler:', error.message)
        // This will be called in addition to the global onError callback
    }
})

Use the schema

Import the schema and use it to validate data. This is useful, for example, for form validation or for use with tRPC.

procedure
    .input(
        client.query.adres.schema()
    )
    .query(async ({input, ctx}) => {
        const {data} = await client.query.adres.fetch({
            authToken: ctx.session?.accessToken,
            variables: input
        })

        return data?.adressen?.data[0] ?? null
    })

Features

🔒 Security Features

  • Per-user cache isolation: Each user's data is cached separately to prevent cross-user data leakage
  • Rate limiting: Built-in rate limiting with configurable limits (100 requests/minute per user, 10k global)
  • Token-based authentication: Secure token handling with proper isolation
  • Request deduplication: Prevents duplicate requests while maintaining security
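Request deduplication as listed above can be sketched as a map of in-flight promises, keyed by operation, variables, and user token so deduplication never crosses user boundaries (names here are illustrative, not the package's internals):

```typescript
// Identical concurrent requests share one promise instead of hitting the
// network twice; the entry is removed once the request settles.
const inFlight = new Map<string, Promise<unknown>>()

function dedupe<T>(key: string, run: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key)
  if (existing) return existing as Promise<T>
  const promise = run().finally(() => inFlight.delete(key))
  inFlight.set(key, promise)
  return promise
}
```

Including the user token in the key is what keeps deduplication from leaking one user's response to another.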

⚡ Performance Features

  • Intelligent caching: configurable cache duration (3 seconds by default) with automatic cleanup
  • Request deduplication: Identical requests are deduplicated to reduce server load
  • Cached requests don't count against rate limits: Better user experience
  • Memory efficient: Per-user cache instances with automatic cleanup
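Per-user cache isolation boils down to keying cache instances by token; a minimal sketch (illustrative names, not the package's internals):

```typescript
// One Map of cache entries per user token; users can never read each other's entries.
const userCaches = new Map<string, Map<string, unknown>>()

function cacheFor(token: string): Map<string, unknown> {
  let cache = userCaches.get(token)
  if (!cache) {
    cache = new Map()
    userCaches.set(token, cache)
  }
  return cache
}

cacheFor('token-a').set('me', { id: 1 })
cacheFor('token-b').has('me') // false: separate instance
```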

🛠️ Developer Experience

  • TypeScript support: Full type safety with generated types
  • Error handling: Global and per-operation error callbacks
  • Rate limit monitoring: Callbacks for rate limit events
  • Middleware support: Custom middleware for request/response processing
  • Logging: Built-in logging with configurable levels

📊 Monitoring & Observability

const client = createClient({
    url: process.env.API_URL!,
    logging: true,
    
    // Monitor errors
    onError: (error, operation) => {
        // Send to error tracking service
        analytics.track('graphql_error', {
            message: error.message,
            operation: operation.kind,
            operationName: operation.context.operationName
        })
    },
    
    // Monitor rate limits
    onRateLimitExceeded: (type, token) => {
        // Track rate limit events
        analytics.track('rate_limit_exceeded', {
            type,
            userId: extractUserIdFromToken(token)
        })
    }
})

Configuration Options

Client Options

interface ClientOptions {
    url: string                    // GraphQL endpoint URL
    logging?: boolean             // Enable/disable logging (default: false)
    devMode?: boolean             // Enable debug mode for cache and rate limit logging (default: false)
    middlewares?: Middleware[]    // Custom middleware functions
    onError?: ClientErrorCallback // Global error handler
    onRateLimitExceeded?: RateLimitExceededCallback // Rate limit handler
    cache?: CacheConfig           // Cache configuration options
}

interface CacheConfig {
    cacheDuration?: number        // How long to keep cache entries (default: 3 seconds)
    maxCacheSize?: number         // Maximum number of cache entries per user (default: 100)
    enableCache?: boolean         // Enable/disable caching entirely (default: true)
}

Debug Mode & Logging

The client supports configurable logging with different verbosity levels to help you debug cache behavior and monitor performance.

Basic Debug Mode

const client = createClient({
  url: 'http://localhost:4000/graphql',
  devMode: true, // Enable debug logging
  logging: true, // Enable logging with defaults (NORMAL level, 60s summaries)
  cache: {
    cacheDuration: 3 * 1000,       // 3 seconds cache time
    maxCacheSize: 100,             // 100 entries per user
    enableCache: true              // Enable caching
  }
});

Simple Logging Options

// Disable logging entirely
logging: false

// Enable with defaults (NORMAL level, 60s summaries)
logging: true

// Custom configuration
logging: {
  level: 'VERBOSE',
  summaryInterval: 10000,
  enablePeriodicSummaries: true
}

Advanced Logging Configuration

const client = createClient({
  url: 'http://localhost:4000/graphql',
  devMode: true,
  logging: {
    level: 'NORMAL',           // QUIET | NORMAL | VERBOSE
    summaryInterval: 30000,    // 30 seconds
    enablePeriodicSummaries: true
  }
});

Logging Levels

QUIET - Only errors and critical issues

⚠️  RATE LIMIT EXCEEDED for user eyJhbGci...JP4U

NORMAL - Periodic summaries every 60s + errors (recommended)

📊 Cache Summary (last 0.5m):
  📈 Overall: 45 hits, 3 misses, 2 invalidations (90.0% hit rate)
  🌐 Network requests: 5
  👥 Users (2):
    👤 eyJhbGci...JP4U: 42 hits, 2 misses (95.5% hit rate) - me(15), getUserProfile(12), getPosts(8) - 5s ago
    👤 anonymou...mous: 3 hits, 1 misses (75.0% hit rate) - getContent(4) - 12s ago

VERBOSE - All cache decisions + summaries + errors

[CACHE MISS] TestQuery (abc12345...) #1 (not cached)
[CACHE HIT] TestQuery (abc12345...) #2
[CACHE HIT] TestQuery (abc12345...) #3

Request-Level Verbose Logging

You can also enable verbose logging for individual requests without changing the global logging level:

// Global logging: NORMAL (periodic summaries)
const client = createClient({
  url: 'http://localhost:4000/graphql',
  devMode: true,
  logging: true
})

// Specific request with verbose logging
const result = await query(client, {
  query: GET_USER_PROFILE,
  variables: { id: '123' },
  verboseLogging: true // This request will log all cache decisions
})

// Output for this specific request:
// [CACHE MISS] getUserProfile (eyJhbGci...JP4U) #1 (not cached) [OVERRIDE]
// [CACHE HIT] getUserProfile (eyJhbGci...JP4U) [OVERRIDE]

// Mutations also support verbose logging
const mutationResult = await mutate(client, {
  mutation: UPDATE_USER_PROFILE,
  variables: { id: '123', name: 'New Name' },
  verboseLogging: true // This will log mutation execution details
})

// Output for mutation:
// [MUTATION] Executing updateUserProfile with token: eyJhbGci...

This is perfect for debugging specific operations without overwhelming your logs with verbose output for all requests.

🚨 RATE LIMIT EXCEEDED
  Type: USER
  User: abc12345...

📊 User Request Statistics:
  TestQuery: 15 requests (last: 2s ago)
  GetUser: 8 requests (last: 5s ago)
  GetPosts: 3 requests (last: 10s ago)

Total requests: 26


Rate Limiting Configuration

The client includes built-in rate limiting that is fully configurable:

Default Configuration:

  • Per-user limit: 100 requests per minute
  • Global limit: 10,000 requests per minute
  • Window: 60 seconds

Custom Configuration:

const client = createClient({
  url: 'http://localhost:4000/graphql',
  rateLimit: {
    maxRequestsPerMinute: 200,        // Per user limit
    maxGlobalRequestsPerMinute: 5000, // Global limit
    windowMs: 60000                   // Time window in milliseconds
  }
})

Rate limits apply to both queries and mutations, but cached requests don't count against limits.

Important: The rate limiter only counts actual network requests to the server. Cached requests (cache HITs) are not counted against rate limits, which is the correct behavior for efficient caching.
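A fixed-window limiter with per-key counters, in the spirit of the configuration above (a sketch under assumed names, not the library's implementation):

```typescript
// Each key (user token, or a global key) gets a counter that resets when
// its time window expires.
class RateLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>()

  constructor(private maxRequests: number, private windowMs: number) {}

  allow(key: string, now = Date.now()): boolean {
    const entry = this.counts.get(key)
    // No entry yet, or the window has elapsed: start a fresh window.
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: now, count: 1 })
      return true
    }
    if (entry.count >= this.maxRequests) return false
    entry.count++
    return true
  }
}
```

Calling `allow()` only on actual network requests, never on cache hits, gives the behavior described above: cached responses are free.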

Cache Configuration

The cache system is highly configurable and can be customized per client:

const client = createClient({
  url: 'http://localhost:4000/graphql',
  cache: {
    cacheDuration: 3 * 1000,       // 3 seconds cache time
    maxCacheSize: 100,             // 100 entries per user
    enableCache: true              // Enable caching
  }
});

Real-world examples for Next.js App Router:

// Default configuration (good for most cases)
const client = createClient({
  url: 'http://localhost:4000/graphql',
  cache: {
    cacheDuration: 3 * 1000,       // 3 seconds - prevent stale data after mutations
    maxCacheSize: 100,             // 100 entries per user
    enableCache: true
  }
});

// For very dynamic data (user-specific, changes frequently)
const dynamicClient = createClient({
  url: 'http://localhost:4000/graphql',
  cache: {
    cacheDuration: 1 * 1000,       // 1 second - quick expiration for dynamic data
    maxCacheSize: 50,              // 50 entries - smaller cache
    enableCache: true
  }
});

// For static/semi-static data (configuration, settings)
const staticClient = createClient({
  url: 'http://localhost:4000/graphql',
  cache: {
    cacheDuration: 10 * 1000,      // 10 seconds - longer for static data
    maxCacheSize: 200,             // 200 entries - larger cache
    enableCache: true
  }
});

Default Configuration (Optimized for Request Deduplication):

  • Cache duration: 3 seconds (prevents stale data after mutations)
  • Maximum cache size: 100 entries per user
  • Per-user isolation: Each user has separate cache instances
  • Automatic cleanup: Expired entries are automatically removed
  • Memory efficient: Only caches requests with valid authentication tokens
  • LRU eviction: When cache size limit is reached, oldest entries are removed

Simple Cache Logic:

  • Cache entries are valid immediately after being created
  • Cache expires after cacheDuration to prevent stale data
  • Example: With cacheDuration: 3000ms (3 seconds):
    • Request at 10:00:00.000 AM → cached
    • Request at 10:00:00.100 AM → Cache HIT (immediately valid)
    • Request at 10:00:01.000 AM → Cache HIT (within 3 seconds)
    • Request at 10:00:04.000 AM → Cache MISS (expired after 3 seconds)
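The timeline above reduces to a single freshness check; a sketch with assumed names:

```typescript
type CacheEntry<T> = { value: T; createdAt: number }

// An entry is a HIT while it is younger than cacheDuration, a MISS afterwards.
function isFresh<T>(entry: CacheEntry<T>, cacheDuration: number, now = Date.now()): boolean {
  return now - entry.createdAt < cacheDuration
}

const entry: CacheEntry<string> = { value: 'adres', createdAt: 0 }
isFresh(entry, 3000, 100)  // true: HIT at +100ms
isFresh(entry, 3000, 4000) // false: MISS after 3 seconds
```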

Cache Behavior:

  • Cache entries are automatically cleaned up when they exceed the maximum duration
  • When the cache size limit is reached, the oldest entries are evicted (LRU)
  • Cached requests don't count against rate limits
  • Cache is disabled for requests without authentication tokens
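Oldest-first eviction can lean on Map's insertion order, since a JavaScript Map iterates keys oldest-first; a sketch with assumed names:

```typescript
// Drop the oldest entries until the cache fits maxCacheSize again.
function evictOldest<K, V>(cache: Map<K, V>, maxCacheSize: number): void {
  while (cache.size > maxCacheSize) {
    const oldestKey = cache.keys().next().value as K
    cache.delete(oldestKey)
  }
}
```

Strictly speaking this is FIFO; a true LRU additionally deletes and re-inserts an entry on every read so recently used keys move to the back of the iteration order.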

Cache Monitoring

The cache system includes built-in monitoring to help you track cache usage and performance:

import { cacheMonitor } from 'buro26-graphql-codegen-client'

// Get overall cache statistics
const stats = cacheMonitor.getStats()
console.log(`Total entries: ${stats.totalEntries}`)
console.log(`Total users: ${stats.totalUsers}`)
console.log(`Oldest entry: ${stats.oldestEntryAge}ms ago`)
console.log(`Newest entry: ${stats.newestEntryAge}ms ago`)

// Get cache stats for a specific user
const userStats = cacheMonitor.getUserStats(userToken)
console.log(`User cache: ${userStats.entryCount} entries`)
console.log(`Oldest: ${userStats.oldestEntryAge}ms, Newest: ${userStats.newestEntryAge}ms`)

// Log cache stats to console (requires devMode)
cacheMonitor.logStats(client) // Log all cache stats
cacheMonitor.logStats(client, userToken) // Log specific user stats

Cache Statistics Include:

  • Total entries: Number of cached requests across all users
  • Total users: Number of users with active caches
  • Oldest entry age: How long ago the oldest cached entry was created
  • Newest entry age: How long ago the newest cached entry was created
  • Largest user cache: Which user has the most cached entries

Example Cache Stats Output:

📊 Cache Statistics:
  Total entries: 45
  Total users: 3
  Oldest entry: 2.5s ago
  Newest entry: 150ms ago
  Largest user cache: abc12345... (25 entries)

Test and Deploy

Running tests

To run tests, run the following command:

bun test

Contributing

Wish to contribute to this project? Clone the repository and create a merge request.

Authors and acknowledgment

Buro26 - https://buro26.digital
Special thanks to all contributors and the open-source community for their support and contributions.

License

This project is licensed under the MIT License - see the LICENSE file for details.

Project status

The project is currently in active development. We are continuously working on adding new features and improving the existing ones. Check the issues section for the latest updates and planned features.

Feel free to reach out if you have any questions or suggestions!