Rate Limits & Bulk Processing
Understanding rate limits and implementing bulk processing strategies are essential for efficient API integration.
Rate Limits
Thought Industries implements rate limiting to ensure platform stability and fair usage.
Standard Rate Limits
- Requests per minute: 60
- Requests per hour: 1,000
- Concurrent connections: 5
Enterprise Rate Limits
Higher limits are available for enterprise customers:
- Requests per minute: 300
- Requests per hour: 10,000
- Concurrent connections: 20
Rate Limit Headers
All API responses include rate limit information:
```
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 45
X-RateLimit-Reset: 1640995200
```
Headers Explained:
- `X-RateLimit-Limit` - Maximum requests allowed in the current window
- `X-RateLimit-Remaining` - Requests remaining in the current window
- `X-RateLimit-Reset` - Unix timestamp at which the limit resets
Handling Rate Limits
Exponential Backoff
When a request returns 429, retry with an increasing delay, honoring the Retry-After header whenever the server provides one:
```javascript
async function makeAPIRequest(url, options, retries = 3) {
  try {
    const response = await fetch(url, options);
    if (response.status === 429) {
      if (retries > 0) {
        // Honor Retry-After when the server provides it; otherwise
        // back off exponentially: 2s, then 4s, then 8s.
        const retryAfter = parseInt(response.headers.get('Retry-After'), 10);
        const delayMs = Number.isNaN(retryAfter)
          ? 2 ** (4 - retries) * 1000
          : retryAfter * 1000;
        await sleep(delayMs); // sleep() helper is defined below
        return makeAPIRequest(url, options, retries - 1);
      }
      throw new Error('Rate limit exceeded');
    }
    return response;
  } catch (error) {
    console.error('API request failed:', error);
    throw error;
  }
}
```
Monitor Rate Limit Headers
Always check the rate limit headers on each response so you can slow down before hitting the limit:
```javascript
function checkRateLimit(headers) {
  const remaining = parseInt(headers.get('X-RateLimit-Remaining'), 10);
  const reset = parseInt(headers.get('X-RateLimit-Reset'), 10);

  // Warn when fewer than 5 requests remain in the current window
  if (remaining < 5) {
    const waitTime = reset - Math.floor(Date.now() / 1000);
    console.warn(`Approaching rate limit. Wait ${waitTime}s`);
  }
}
```
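The two helpers combine naturally: retry on 429s, and warn before the next one arrives. A minimal usage sketch, with the endpoint path borrowed from the caching example later in this guide and `API_TOKEN`/`courseId` as placeholders:

```javascript
// Placeholder endpoint and credentials; adapt to your instance.
const response = await makeAPIRequest(`/api/v1/courses/${courseId}`, {
  headers: { Authorization: `Bearer ${API_TOKEN}` }
});
checkRateLimit(response.headers); // warn if the window is nearly exhausted
const course = await response.json();
```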
Creating Multiple Courses
The API supports creating up to 100 courses in a single request using the `courseAttributes` array.
Bulk Create in Single Request
Create multiple courses at once:
Endpoint:
```
POST https://{your-instance-url}/incoming/v2/content/course/create
```
Request:
```json
{
  "courseAttributes": [
    {
      "title": "Course 1",
      "kind": "courseGroup",
      "sections": [...]
    },
    {
      "title": "Course 2",
      "kind": "article",
      "articleVariant": {...}
    }
  ]
}
```
Maximum: 100 courses per request
Benefits:
- Reduced API calls
- Better performance
- Lower chance of hitting rate limits
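As a sketch, the single-request flow looks like this. Authentication follows the batch example below; the per-course response shape isn't documented here, so the parsed body is simply logged:

```javascript
// One request creates up to 100 courses; API_TOKEN is a placeholder.
const response = await fetch(
  'https://{your-instance-url}/incoming/v2/content/course/create',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${API_TOKEN}`
    },
    body: JSON.stringify({ courseAttributes: courses }) // courses.length <= 100
  }
);
console.log(await response.json());
```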
Batch Processing Strategy
For more than 100 courses, implement batch processing:
```javascript
async function batchCreateCourses(courses, batchSize = 50) {
  const results = [];

  for (let i = 0; i < courses.length; i += batchSize) {
    const batch = courses.slice(i, i + batchSize);
    try {
      const response = await fetch(
        'https://{your-instance-url}/incoming/v2/content/course/create',
        {
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${API_TOKEN}`
          },
          body: JSON.stringify({ courseAttributes: batch })
        }
      );
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }

      // Assumes the endpoint returns an array of per-course results
      const data = await response.json();
      results.push(...data);

      // Watch the rate limit headers after each batch
      checkRateLimit(response.headers);

      // Add a delay between batches to spread requests over time
      if (i + batchSize < courses.length) {
        await sleep(1000);
      }
    } catch (error) {
      console.error(`Batch ${i / batchSize + 1} failed:`, error);
      results.push({ error: error.message, batch });
    }
  }

  return results;
}
```
```javascript
// Simple promise-based delay helper used throughout these examples
function sleep(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}
```
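Calling the function is then a single await; `allCourses` here is a hypothetical array of course definitions built elsewhere:

```javascript
const results = await batchCreateCourses(allCourses, 50);

// Failed batches were recorded as { error, batch } objects
const failed = results.filter(r => r && r.error);
console.log(`Done with ${failed.length} failed batch(es)`);
```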
Best Practices
1. Implement Request Queuing
Queue requests and process them within rate limits:
```javascript
class APIQueue {
  constructor(rateLimit = 60) {
    this.queue = [];
    this.rateLimit = rateLimit; // requests per minute
    this.processing = false;
  }

  async add(request) {
    this.queue.push(request);
    if (!this.processing) {
      await this.process();
    }
  }

  async process() {
    this.processing = true;
    while (this.queue.length > 0) {
      const request = this.queue.shift();
      try {
        await request();
      } catch (error) {
        // Keep draining the queue even if one request fails
        console.error('Queued request failed:', error);
      }
      // Space requests evenly across the window (1s apart at 60/min)
      await sleep(60000 / this.rateLimit);
    }
    this.processing = false;
  }
}
```
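A usage sketch, assuming the `makeAPIRequest` helper from earlier; `courseIds` and `API_TOKEN` are placeholders:

```javascript
const queue = new APIQueue(60); // standard tier: 60 requests/minute

// Each queued entry is a function returning a promise; the queue
// runs them one second apart regardless of how fast they are added.
for (const id of courseIds) {
  queue.add(() => makeAPIRequest(`/api/v1/courses/${id}`, {
    headers: { Authorization: `Bearer ${API_TOKEN}` }
  }));
}
```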
2. Cache Responses
Cache API responses to reduce unnecessary requests:
```javascript
const cache = new Map();

async function getCourseWithCache(courseId) {
  if (cache.has(courseId)) {
    return cache.get(courseId);
  }

  const response = await fetch(`/api/v1/courses/${courseId}`);
  const course = await response.json(); // cache the parsed body, not the Response

  cache.set(courseId, course);
  // Expire the cache entry after 5 minutes
  setTimeout(() => cache.delete(courseId), 300000);

  return course;
}
```
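Pairing this cache with the webhook events below lets you invalidate an entry the moment a course changes, rather than waiting out the five-minute timer.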
3. Use Webhooks
Instead of polling, use webhooks for event-driven updates:
```json
{
  "webhook_url": "https://your-app.com/webhooks/ti-courses",
  "events": [
    "course.created",
    "course.updated",
    "course.deleted"
  ]
}
```
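On the receiving side, a minimal Express handler might look like the sketch below. The payload shape (`event` and `data` fields) is an assumption for illustration, not the documented format:

```javascript
const express = require('express');
const app = express();
app.use(express.json());

app.post('/webhooks/ti-courses', (req, res) => {
  // Hypothetical payload shape: { event: 'course.updated', data: { id } }
  const { event, data } = req.body;

  if (event === 'course.updated' || event === 'course.deleted') {
    cache.delete(data.id); // invalidate the cache entry from the example above
  }

  res.sendStatus(200); // acknowledge promptly so the sender doesn't retry
});

app.listen(3000);
```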
Troubleshooting
429 Too Many Requests
Causes:
- Exceeded rate limit
- Too many concurrent connections
- Burst of requests
Solutions:
- Implement exponential backoff
- Use bulk endpoints
- Spread requests over time
- Request a rate limit increase (contact support)
Timeout Errors
Causes:
- Large batch sizes
- Network issues
- Server load
Solutions:
- Reduce batch size
- Use asynchronous processing
- Implement retry logic
- Contact support for platform issues
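For the timeout case specifically, a per-request deadline via `AbortController` pairs well with the retry logic above; the 30-second value here is an arbitrary illustration, not a documented platform limit:

```javascript
// Abort any request that exceeds timeoutMs; callers can then retry
async function fetchWithTimeout(url, options = {}, timeoutMs = 30000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(url, { ...options, signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```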
Next Steps
- Review the Field Reference
- Explore the format-specific examples