
Rate Limits

Understanding and working with API rate limits to ensure reliable service.

Overview

The MsGine API uses rate limiting to protect infrastructure and ensure fair usage. Rate limits are applied per API token.

Default Limits

| Tier | Requests/Minute | Requests/Second (Burst) |
| --- | --- | --- |
| Free | 60 | 10 |
| Starter | 100 | 20 |
| Professional | 500 | 50 |
| Enterprise | Custom | Custom |
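When throttling client-side, it can help to mirror the published limits in code. A minimal sketch (the tier names and numbers come from the table above; the `TIER_LIMITS` constant and `minIntervalMs` helper are illustrative, not part of the SDK):

```typescript
// Published per-tier limits (Enterprise limits are custom, so they are
// omitted here). Mirroring them client-side lets you throttle before
// the API has to reject requests.
type Tier = 'free' | 'starter' | 'professional'

const TIER_LIMITS: Record<Tier, { perMinute: number; burstPerSecond: number }> = {
  free: { perMinute: 60, burstPerSecond: 10 },
  starter: { perMinute: 100, burstPerSecond: 20 },
  professional: { perMinute: 500, burstPerSecond: 50 }
}

// Minimum spacing between requests that stays under the per-minute limit
function minIntervalMs(tier: Tier): number {
  return 60000 / TIER_LIMITS[tier].perMinute
}
```

Spacing requests by `minIntervalMs(tier)` keeps sustained traffic under the per-minute cap, independently of burst behavior.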

Rate Limit Headers

Every API response includes rate limit information in headers:

```http
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1640000000
```

| Header | Description |
| --- | --- |
| `X-RateLimit-Limit` | Total requests allowed per minute |
| `X-RateLimit-Remaining` | Requests remaining in the current window |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |

Rate Limit Exceeded

When you exceed the rate limit, you'll receive a 429 Too Many Requests response:

```json
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Too many requests. Please try again later.",
    "retryAfter": 60
  }
}
```

The `Retry-After` header indicates how many seconds to wait before retrying:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1640000060
```
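If you parse error bodies yourself, a small type guard can distinguish this payload from other errors. A sketch based on the JSON body shown above (`RateLimitError` and `isRateLimitError` are illustrative names, not SDK exports):

```typescript
// Shape of the 429 error body shown above
interface RateLimitError {
  error: {
    code: 'rate_limit_exceeded'
    message: string
    retryAfter: number
  }
}

// Narrow an arbitrary parsed response body to a rate-limit error
function isRateLimitError(body: unknown): body is RateLimitError {
  return (
    typeof body === 'object' &&
    body !== null &&
    (body as any).error?.code === 'rate_limit_exceeded'
  )
}
```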

Handling Rate Limits

SDK Automatic Retry

The official SDK automatically handles rate limits with exponential backoff:

```typescript
import { MsGineClient } from '@msgine/sdk'

const client = new MsGineClient({
  apiToken: process.env.MSGINE_API_TOKEN!,
  retryConfig: {
    maxRetries: 3,
    initialDelayMs: 1000,
    maxDelayMs: 10000,
    backoffMultiplier: 2
  }
})

// SDK automatically retries on rate limit errors
await client.sendSms({
  to: '+256701521269',
  message: 'Hello!'
})
```

Manual Retry Logic

If using the REST API directly, implement retry logic:

```typescript
async function sendWithRetry(url: string, data: unknown, retries = 3) {
  for (let i = 0; i < retries; i++) {
    const response = await fetch(url, {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${process.env.MSGINE_API_TOKEN}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(data)
    })

    if (response.status !== 429) {
      return response.json()
    }

    // Honor the server's Retry-After hint, falling back to 60 seconds
    const retryAfter = parseInt(response.headers.get('Retry-After') ?? '60', 10)
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000))
  }

  throw new Error('Max retries exceeded')
}
```

Check Rate Limit Status

Monitor your rate limit status:

```typescript
async function checkRateLimit() {
  const response = await fetch('https://api.msgine.net/api/v1/account', {
    headers: {
      'Authorization': `Bearer ${process.env.MSGINE_API_TOKEN}`
    }
  })

  const limit = response.headers.get('X-RateLimit-Limit')
  const remaining = response.headers.get('X-RateLimit-Remaining')
  const reset = response.headers.get('X-RateLimit-Reset')

  console.log(`Rate Limit: ${remaining}/${limit}`)
  if (reset) {
    console.log(`Resets at: ${new Date(parseInt(reset, 10) * 1000)}`)
  }
}
```

Best Practices

1. Implement Exponential Backoff

```typescript
async function exponentialBackoff(attempt: number) {
  const delay = Math.min(1000 * Math.pow(2, attempt), 10000)
  await new Promise(resolve => setTimeout(resolve, delay))
}
```
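A common refinement is to add random jitter so that many clients retrying at once do not synchronize their retries. A sketch using the full-jitter strategy (a general technique; the API does not mandate it):

```typescript
// Full jitter: sleep a random duration between 0 and the capped
// exponential delay, which spreads retries from many clients apart.
function jitteredDelayMs(attempt: number, baseMs = 1000, capMs = 10000): number {
  const exponential = Math.min(baseMs * Math.pow(2, attempt), capMs)
  return Math.random() * exponential
}

async function backoffWithJitter(attempt: number) {
  await new Promise(resolve => setTimeout(resolve, jitteredDelayMs(attempt)))
}
```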

2. Use Request Queuing

For high-volume applications, implement a request queue:

```typescript
class RequestQueue {
  private queue: Array<() => Promise<any>> = []
  private processing = false
  private readonly requestsPerMinute = 100

  async enqueue<T>(request: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          const result = await request()
          resolve(result)
        } catch (error) {
          reject(error)
        }
      })

      if (!this.processing) {
        this.process()
      }
    })
  }

  private async process() {
    this.processing = true
    const interval = 60000 / this.requestsPerMinute

    while (this.queue.length > 0) {
      const request = this.queue.shift()!
      await request()
      await new Promise(resolve => setTimeout(resolve, interval))
    }

    this.processing = false
  }
}
```

3. Batch Operations

Use batch endpoints when available:

```typescript
// ❌ Not optimal - Multiple requests
for (const phone of phones) {
  await client.sendSms({ to: phone, message: 'Hello!' })
}

// ✅ Better - Single batch request
await client.sendSms({
  to: phones,
  message: 'Hello!'
})
```

4. Cache Responses

Cache responses when appropriate:

```typescript
const cache = new Map()

async function getAccountInfo() {
  const cached = cache.get('account')
  if (cached && Date.now() - cached.timestamp < 60000) {
    return cached.data
  }

  const data = await client.getAccount()
  cache.set('account', {
    data,
    timestamp: Date.now()
  })

  return data
}
```

5. Monitor Usage

Track your rate limit usage:

```typescript
const client = new MsGineClient({
  apiToken: process.env.MSGINE_API_TOKEN!,
  onError: (error) => {
    if (error.code === 'rate_limit_exceeded') {
      // Alert monitoring service
      console.warn('Rate limit exceeded:', error.details)
    }
  }
})
```

Increasing Rate Limits

To increase your rate limits:

  1. Upgrade your plan: Higher tiers have higher limits
  2. Contact sales: Enterprise customers can request custom limits
  3. Optimize usage: Implement batching and caching

Rate Limit by Endpoint

Different endpoints may have different rate limits:

| Endpoint | Limit (req/min) |
| --- | --- |
| `POST /messages/sms` | Plan-dependent |
| `GET /messages` | Plan-dependent |
| `GET /account` | Plan-dependent |
| `POST /webhooks` | 20 |

Burst Limits

Burst limits allow short spikes in traffic:

  • Burst window: 1 second
  • Burst limit: Varies by plan (10-50 requests/second)

Example: the Professional plan allows 50 requests in a single second, but is still limited to 500 requests/minute overall.
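The combination behaves like two sliding windows: a request must fit under both the per-second burst cap and the per-minute cap. A minimal client-side sketch (the `DualWindowLimiter` class is illustrative, not part of the SDK):

```typescript
// Sliding-window limiter that enforces both a burst (per-second) and a
// sustained (per-minute) cap, mirroring the behavior described above.
class DualWindowLimiter {
  private timestamps: number[] = []

  constructor(
    private readonly perSecond: number,  // burst limit
    private readonly perMinute: number   // sustained limit
  ) {}

  // Returns true if a request at time `now` (ms) is allowed, recording it.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps older than the one-minute window
    this.timestamps = this.timestamps.filter(t => now - t < 60000)
    const lastSecond = this.timestamps.filter(t => now - t < 1000).length
    if (lastSecond >= this.perSecond || this.timestamps.length >= this.perMinute) {
      return false
    }
    this.timestamps.push(now)
    return true
  }
}
```

An in-memory limiter like this only protects a single process; a multi-instance deployment would need a shared store to enforce the same budget across instances.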

Monitoring and Alerts

Set up monitoring for rate limit issues:

```typescript
function checkRateLimitHealth(remaining: number, limit: number) {
  const percentage = (remaining / limit) * 100

  if (percentage < 10) {
    console.error('Critical: Less than 10% rate limit remaining')
  } else if (percentage < 25) {
    console.warn('Warning: Less than 25% rate limit remaining')
  }
}
```

Released under the MIT License.