API Basics

The Slicker API is a RESTful HTTP API that gives you programmatic access to Slicker’s retry data.

  • Protocol: HTTPS only (unencrypted HTTP is not supported)
  • Base URL: https://api.slickerhq.com
  • Data Format: All requests and responses use JSON format
  • Character Encoding: UTF-8

Authentication

The API uses Bearer token authentication. Include your Slicker API key in every API request via the Authorization header:

Authorization: Bearer YOUR_API_KEY

You can see and manage your organization’s API keys at https://auth.slickerhq.com/org/api_keys.

Your API key grants access to sensitive data and should be kept secure. Do not share your API key in publicly accessible areas such as GitHub, client-side code, or in API requests to other services.
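
As an illustration, here is a minimal sketch of an authenticated request using Python’s requests library; reading the key from a SLICKER_API_KEY environment variable is an incidental choice, and any HTTP client works the same way.

import os
import requests

# Read the API key from an environment variable (illustrative; store it
# wherever your secrets normally live, never in source control).
API_KEY = os.environ["SLICKER_API_KEY"]

response = requests.get(
    "https://api.slickerhq.com/v1/recovery_actions",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
response.raise_for_status()
print(response.json())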

Request Limits

  • Maximum Batch Size: Each API request can contain up to 100 items.
  • Rate Limiting: The API implements rate limiting to ensure stable performance for all users. The default rate limit is 100 requests per second. If this does not meet your requirements, please contact us directly.
  • Concurrency: We recommend keeping no more than 5 requests in flight at a time.

If you exceed these limits, the API will return a 429 Too Many Requests response. Implement appropriate backoff strategies in your integration.
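
One possible backoff strategy is sketched below in Python; the retry count and delays are illustrative defaults, not values prescribed by the API.

import time
import requests

def get_with_backoff(url, headers, params=None, max_retries=5):
    # Retry a GET request with exponential backoff when the API
    # responds with 429 Too Many Requests.
    delay = 1.0
    for _ in range(max_retries):
        response = requests.get(url, headers=headers, params=params)
        if response.status_code != 429:
            response.raise_for_status()
            return response
        time.sleep(delay)  # wait before retrying
        delay *= 2         # double the wait each attempt
    raise RuntimeError("Rate limited: retries exhausted")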

API Versioning

The API version is included in the URL path (e.g., /v1/recovery_actions). When new versions are released, we’ll provide appropriate migration guidance and timelines.

Pagination

The Slicker API uses token-based pagination to handle large datasets efficiently. Each paginated endpoint returns a subset of results along with a pagination token for retrieving subsequent pages.

Pagination Parameters

  • pageSize: Number of items to return per page (1-100, defaults to 100)
  • pageToken: Token for retrieving the next page of results. Use the nextPageToken from the previous response.

Pagination Response

Each paginated response includes:

  • Resource data (e.g., recoveryActions): array of objects for the current page
  • nextPageToken: Token to fetch the next page (empty if no more pages)
  • totalSize: Total number of items available across all pages

Best Practices

Avoiding Duplicates During Pagination

When paginating through results, new recovery actions may be created while you’re processing pages. To avoid missing or duplicating entries:

  1. Use ascending sort order: Sort by executed_at or created_at in ascending order (asc)
  2. Why ascending? If you sort in descending order (desc) and new entries are created mid-pagination, they are inserted at the beginning of the result set and push existing entries onto later pages, so you can end up processing the same entries more than once.
# Recommended: Ascending order to avoid duplicates
GET /v1/recovery_actions?orderBy=executed_at&orderDirection=asc&pageSize=100

# Not recommended for continuous processing
GET /v1/recovery_actions?orderBy=executed_at&orderDirection=desc&pageSize=100

Example Pagination Flow

# First request
GET /v1/recovery_actions?orderBy=executed_at&orderDirection=asc&pageSize=100

# Response includes nextPageToken
{
  "recoveryActions": [...],
  "nextPageToken": "150",
  "totalSize": 250
}

# Subsequent request using the token
GET /v1/recovery_actions?pageToken=150
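
Putting this together, a pagination loop might look like the Python sketch below. It assumes the recoveryActions and nextPageToken fields shown in the example response above.

import os
import requests

API_KEY = os.environ["SLICKER_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

def iter_recovery_actions():
    # Yield every recovery action, one page at a time, following
    # nextPageToken until it comes back empty.
    params = {"orderBy": "executed_at", "orderDirection": "asc", "pageSize": 100}
    while True:
        response = requests.get(
            "https://api.slickerhq.com/v1/recovery_actions",
            headers=HEADERS,
            params=params,
        )
        response.raise_for_status()
        body = response.json()
        yield from body.get("recoveryActions", [])
        token = body.get("nextPageToken")
        if not token:        # empty token means no more pages
            break
        params["pageToken"] = token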

Incremental Data Syncing

For regular data synchronization or warehousing:

  1. Initial sync: Fetch all recovery actions up to a specific timestamp
  2. Subsequent syncs: Use time-based filters with executedAfter to only fetch new or updated entries
  3. Always use ascending sort to maintain consistency
# Initial sync
GET /v1/recovery_actions?orderBy=executed_at&orderDirection=asc&executedBefore=2024-01-31T23:59:59Z

# Subsequent sync (fetch entries after last sync)
GET /v1/recovery_actions?orderBy=executed_at&orderDirection=asc&executedAfter=2024-01-31T23:59:59Z
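
A simple incremental sync might look like the sketch below. The local cursor file and the executed_at field on each returned action are illustrative assumptions, and the sketch fetches a single page for brevity; combine it with the pagination loop above for full syncs.

import json
import os
import requests

API_KEY = os.environ["SLICKER_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}
CURSOR_FILE = "last_sync.json"  # illustrative local file for the sync cursor

def incremental_sync():
    # Load the timestamp of the last synced entry, if any.
    cursor = None
    if os.path.exists(CURSOR_FILE):
        with open(CURSOR_FILE) as f:
            cursor = json.load(f)["executedAfter"]

    params = {"orderBy": "executed_at", "orderDirection": "asc", "pageSize": 100}
    if cursor:
        params["executedAfter"] = cursor

    response = requests.get(
        "https://api.slickerhq.com/v1/recovery_actions",
        headers=HEADERS,
        params=params,
    )
    response.raise_for_status()
    actions = response.json().get("recoveryActions", [])

    if actions:
        # Persist the newest timestamp as the cursor for the next run
        # (assumes each action carries an executed_at field).
        with open(CURSOR_FILE, "w") as f:
            json.dump({"executedAfter": actions[-1]["executed_at"]}, f)
    return actions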

Error Handling

  • Invalid tokens: Expired or invalid pageToken values will return a 400 Bad Request error
  • Rate limiting: If you exceed rate limits, implement exponential backoff before retrying
  • Connection issues: Always store the last successful pageToken to resume pagination after connection failures
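
To make the last point concrete, here is one way to keep track of the token to resume from if the connection drops mid-pagination; it is a sketch that assumes the response fields shown earlier.

import requests

def paginate_with_resume(url, headers, start_token=None):
    # Walk through pages, remembering the token to fetch next so the
    # caller can resume from it after a connection failure.
    params = {"orderBy": "executed_at", "orderDirection": "asc", "pageSize": 100}
    if start_token:
        params["pageToken"] = start_token

    results = []
    last_token = start_token
    while True:
        try:
            response = requests.get(url, headers=headers, params=params)
            response.raise_for_status()
        except requests.ConnectionError:
            # Return what we have plus the token to resume from later.
            return results, last_token
        body = response.json()
        results.extend(body.get("recoveryActions", []))
        token = body.get("nextPageToken")
        if not token:
            return results, None
        last_token = token
        params["pageToken"] = token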