API4:2019 Lack of Resources and Rate Limiting

Every API request has a cost: at a minimum, bandwidth, computation cycles, memory, and storage, consumed not only by the API back-end server but, in most cases, by several other systems as well, such as database servers. API requests compete for these resources to be fulfilled as quickly as possible, but improper resource management may cost you additional money or even put you out of business.

What is the issue?

When there is no limit on the number of objects returned in a single request (e.g., database records), the database server may take too long, or even get stuck, computing which records to return. Moreover, the API back-end server may not be able to process all those records to properly fulfill the request due to, for example, a lack of available memory. The problem gets even worse if the number of requests a single API client can perform per unit of time is not limited: when the system is already unstable and starving for computational resources, a flood of new requests only makes recovery harder.
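To make the failure mode concrete, here is a minimal sketch of such an endpoint, assuming a Flask application backed by SQLite (both are illustrative stand-ins, not taken from any particular API): nothing bounds the result set and nothing throttles the caller.

```python
import sqlite3
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/tasks/all")
def list_all_tasks():
    conn = sqlite3.connect("tasks.db")
    # No LIMIT clause and no pagination: every matching row is fetched,
    # loaded into memory, and serialized into a single response.
    rows = conn.execute("SELECT id, title FROM tasks").fetchall()
    conn.close()
    return jsonify(tasks=[{"id": r[0], "title": r[1]} for r in rows])
```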

What does it look like?

The lack of resource limits is the easier of the two issues to illustrate: you're probably familiar with API requests and responses like the ones below:

```
GET /api/tasks?page=1&limit=5 HTTP/1.1
Host: api.example.com

HTTP/1.1 200 OK
Content-Type: application/json

{
  "page": 1,
  "tasks": [
    { "id": 1, "title": "task 1" },
    { "id": 2, "title": "task 2" },
    { "id": 3, "title": "task 3" },
    { "id": 4, "title": "task 4" },
    { "id": 5, "title": "task 5" }
  ]
}
```

Instead of retrieving all user tasks at once, the client uses the page and limit query string parameters to paginate the objects to be returned. Unsurprisingly, the tasks array in the returned JSON object has length 5. To get the next 5 tasks, the client sends another request, changing the page query string parameter to 2. Often, clients can also choose the number of objects per page, e.g., limit=10.
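On the server side, such pagination typically maps page and limit onto the database query. A sketch, reusing the hypothetical Flask/SQLite setup from above:

```python
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/tasks")
def list_tasks():
    # Read pagination parameters from the query string, defaulting to
    # page 1 with 5 objects per page, as in the example request above.
    page = request.args.get("page", default=1, type=int)
    limit = request.args.get("limit", default=5, type=int)
    offset = (page - 1) * limit

    conn = sqlite3.connect("tasks.db")
    rows = conn.execute(
        "SELECT id, title FROM tasks LIMIT ? OFFSET ?", (limit, offset)
    ).fetchall()
    conn.close()
    return jsonify(page=page, tasks=[{"id": r[0], "title": r[1]} for r in rows])
```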

When the API does not properly cap the maximum number of objects that can be retrieved at once (limit), bad actors may request arbitrarily large values, e.g., 1000000. It is fair to assume the database server will take much longer to retrieve such a number of records, if the operation doesn't simply time out. Moreover, the memory required to process the data returned by the database server and the bandwidth needed to transfer the API response grow accordingly. The whole operation takes longer, and meanwhile the API server may be unable to handle other requests. These are the basics of any Denial of Service (DoS) attack.
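A common mitigation, sketched below, is to clamp client-supplied pagination values to server-side bounds before they ever reach the database; the MAX_LIMIT value here is an illustrative choice to be tuned against your own benchmarks:

```python
MAX_LIMIT = 100  # illustrative ceiling, to be tuned to the system's benchmarks

def sanitize_pagination(page: int, limit: int) -> tuple[int, int]:
    # Reject nonsense values and cap the page size, so a request such as
    # ?limit=1000000 can never translate into an unbounded database read.
    page = max(1, page)
    limit = min(max(1, limit), MAX_LIMIT)
    return page, limit
```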

This is a simple example to illustrate the issue and the risk. If you're familiar with GraphQL APIs, query depth and amount, together with query batching, are also common exploitation techniques.
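For illustration only, here is how an abusive GraphQL request might be assembled and sent with Python's requests library; the endpoint URL and schema fields are hypothetical, and the point is simply that a single HTTP request can fan out into a disproportionate amount of server-side work:

```python
import requests

# Hypothetical schema: every nesting level forces the resolver to walk
# further relationships, multiplying the work done per request.
nested = "user { tasks { comments { author { tasks { comments { id } } } } } }"

# Query batching via aliases (q0, q1, ...) packs many copies of the
# expensive query into one HTTP request, sidestepping per-request limits.
batched = "{ " + " ".join(f"q{i}: {nested}" for i in range(50)) + " }"

resp = requests.post("https://api.example.com/graphql", json={"query": batched})
print(resp.status_code, len(resp.content))
```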

Where have we seen this issue lately?

This is a common issue, but its impact is not always easy to demonstrate without causing some harm to the API server and thus compromising service availability.

SoundCloud's API had several issues regarding resources and rate limiting: on the one hand, the number of tracks returned by some endpoints for a single request was not subject to a limit, leading to increasingly longer response times and larger responses; on the other, the sign-in rate-limiting mechanism could be easily bypassed, failing to tackle brute-force attacks. Meetup's API also suffered from resource and rate-limiting issues.

Conclusion

Even if an API is ready to scale, resources (computational and financial) are not unlimited. During the design and development phases, it is important to benchmark the system's performance frequently. Limits should be established based on knowledge of the system, the available infrastructure, and the expected demand.
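As a minimal sketch of enforcing such a limit, here is a fixed-window, in-memory rate limiter; real deployments usually place this logic in an API gateway or back it with a shared store such as Redis, and the window and threshold below are illustrative values:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 60   # illustrative: allow at most MAX_REQUESTS
MAX_REQUESTS = 100    # per client per 60-second window

_counters = defaultdict(lambda: [0.0, 0])  # client_id -> [window_start, count]

def allow_request(client_id: str) -> bool:
    """Return True if the client is still within its rate limit."""
    now = time.time()
    window_start, count = _counters[client_id]
    if now - window_start >= WINDOW_SECONDS:
        _counters[client_id] = [now, 1]  # start a fresh window
        return True
    if count < MAX_REQUESTS:
        _counters[client_id][1] = count + 1
        return True
    return False  # over the limit: the API should answer 429 Too Many Requests
```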

Regarding rate limiting, authentication-related endpoints usually have different requirements to enforce security: you can read more about this in our previous article, API3:2019 Excessive Data Exposure.
