APIs are an integral part of CloudStack usage. Even legitimate users can occasionally hammer the server with heavy API request loads, causing undesirable results such as killing the server or degrading performance for other CloudStack users. API access can also become a vector for malicious users to attack the CloudStack service and cause a cloud outage. To prevent this, we will introduce an API request throttling feature that limits the number of API calls each account can place within a given time interval, and blocks further API requests from an account that exceeds its limit, so that the user has to retry later.
This API throttling feature will be implemented as an APIChecker adapter as well as a pluggable service that performs an API limit check when each API command is invoked. The adapter will be invoked by APIServer on each command invocation, chained with the current ACL access checks but running before each ACL access checker.
This API Rate Limit pluggable service also implements the APIChecker adapter interface to perform API limit checking on API commands. The adapter can be chained before the current ACL access checker adapter to control whether an API invocation may proceed. APIServer will invoke this adapter first on each command invocation, so that resources are not wasted performing access checks on requests that will be rejected anyway.
...
By default, we will provide an implementation that queries an Ehcache-based, in-memory rate limit store to check whether the given account has exceeded the API limit set in the plugin configuration, and then passes through or denies the request accordingly. In case of denial, we will throw a ServerApiException with HttpErrorCode = 429 and a clear error message that is indicative to the user, for example: "You have reached the API limit per second, please re-try after x seconds". For custom behavior, such as setting different limits for different accounts based on business needs, other custom API rate limit plugins can be written to serve the purpose.
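The check-and-deny behavior described above can be sketched as follows. This is an illustrative sketch only: the class names, the exception carrying the 429 code, and the fixed-window counting logic are assumptions for demonstration, not the plugin's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative sketch of a per-account fixed-window API rate check.
// All names here (ApiRateLimitSketch, ApiLimitException, checkAccess)
// are assumptions, not CloudStack's real classes.
public class ApiRateLimitSketch {

    static final int MAX_ALLOWED = 25;    // hypothetical api.throttling.max
    static final int INTERVAL_SECS = 1;   // hypothetical api.throttling.interval

    // Minimal stand-in for ServerApiException with HttpErrorCode = 429.
    public static class ApiLimitException extends RuntimeException {
        public final int httpErrorCode = 429;
        public ApiLimitException(String msg) { super(msg); }
    }

    // Per-account state: [0] = request count, [1] = window expiry (epoch millis).
    private final Map<Long, long[]> counters = new ConcurrentHashMap<>();

    /** Returns normally if the account is under its limit, otherwise throws. */
    public void checkAccess(long accountId) {
        final long now = System.currentTimeMillis();
        // Start a fresh window when none exists or the old one has expired.
        long[] entry = counters.compute(accountId, (k, v) ->
            (v == null || now >= v[1]) ? new long[]{0, now + INTERVAL_SECS * 1000L} : v);
        long count = ++entry[0]; // sketch only: this increment is not atomic
        if (count > MAX_ALLOWED) {
            long waitSecs = Math.max(1, (entry[1] - now) / 1000);
            throw new ApiLimitException(
                "You have reached the API limit per second, please re-try after "
                + waitSecs + " seconds");
        }
    }
}
```

A real checker would read the limit and interval from the plugin configuration and throw the actual ServerApiException; the point here is only the pass-through-or-deny shape of the check.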
We will introduce two new APIs for limit reset and query:
...
To allow the UI to retrieve the globally configured api.throttling.interval and api.throttling.max values, we also modified the existing listCapabilitiesCmd to return apilimitinterval and apilimitmax.
We have defined the following Rate Limit Store interface to provide a contract among different implementations of the API limit store. Contributors can provide their own implementations based on different technologies, such as DB, Memcached, Redis, or Ehcache. Here we have provided a sample implementation of this interface using Ehcache in this pluggable service; see details next.
```java
public interface LimitStore {
    StoreEntry get(Long account);
    StoreEntry create(Long account, int timeToLiveInSecs);
    void resetCounters();
}

public interface StoreEntry {
    int getCounter();
    int incrementAndGet();
    boolean isExpired();
    long getExpireDuration();
}
```
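To make the contract concrete, here is a minimal in-memory implementation sketch of the two interfaces. The shipped implementation is Ehcache-based; this class and its time-to-live emulation are assumptions for illustration only (the interfaces are repeated so the sketch compiles standalone).

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Interfaces repeated from the plugin's LimitStore contract.
interface StoreEntry {
    int getCounter();
    int incrementAndGet();
    boolean isExpired();
    long getExpireDuration();
}

interface LimitStore {
    StoreEntry get(Long account);
    StoreEntry create(Long account, int timeToLiveInSecs);
    void resetCounters();
}

// Illustrative in-memory store; the plugin actually ships an Ehcache-backed
// store, and this class name is an assumption.
public class InMemoryLimitStore implements LimitStore {

    static class Entry implements StoreEntry {
        private final AtomicInteger counter = new AtomicInteger(0);
        private final long expireAtMillis; // absolute expiry, emulating Ehcache time_to_live

        Entry(int ttlSecs) {
            this.expireAtMillis = System.currentTimeMillis() + ttlSecs * 1000L;
        }

        public int getCounter() { return counter.get(); }
        public int incrementAndGet() { return counter.incrementAndGet(); }
        public boolean isExpired() { return System.currentTimeMillis() >= expireAtMillis; }
        public long getExpireDuration() { // seconds until this entry expires
            return Math.max(0, (expireAtMillis - System.currentTimeMillis()) / 1000);
        }
    }

    private final ConcurrentHashMap<Long, Entry> entries = new ConcurrentHashMap<>();

    public StoreEntry get(Long account) {
        Entry e = entries.get(account);
        return (e == null || e.isExpired()) ? null : e; // treat expired entries as absent
    }

    public StoreEntry create(Long account, int timeToLiveInSecs) {
        Entry e = new Entry(timeToLiveInSecs);
        entries.put(account, e);
        return e;
    }

    public void resetCounters() { entries.clear(); }
}
```

A caller would typically `get` the entry for an account, `create` one with the configured interval as TTL if none exists, and deny the request once `incrementAndGet` exceeds the configured maximum.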
With scalability and simplicity in mind, we are planning to use Ehcache to keep track of the API limit counters in memory. With Ehcache's time_to_live feature, items in the cache automatically expire based on the time_to_live value set for each cache element, saving us from resetting counters in our business logic. For this release, we are implementing the counter cache per management server.
...
...
The ideal implementation of the rate limit store would use Memcached (see http://simonwillison.net/2009/jan/7/ratelimitcache/), where a Memcached server can be set up for counter tracking. We did not pursue this route because Memcached is not currently under the Apache license. With our current abstraction of API rate limiting as a pluggable service and the clear definition of the LimitStore interface, this implementation should be straightforward for any contributor once the Memcached license issue is resolved.
We have done the following two kinds of testing during the development cycle: