When generating > 2 million access tokens per day, what settings should we use for Cassandra?
The concern is that we'd rapidly accumulate tombstones, causing performance to suffer, or perhaps run into the 100k-tombstone query limit.
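To put rough numbers on that concern: Cassandra's defaults are gc_grace_seconds = 864000 (10 days) and a per-query tombstone failure threshold of 100,000, so at 2 million expirations per day the table could carry on the order of 20 million standing tombstones. A back-of-envelope sketch (the daily rate comes from this thread; the defaults are standard Cassandra settings):

```python
# Back-of-envelope tombstone estimate. Daily rate is from this thread;
# the other two values are standard Cassandra defaults.
tokens_per_day = 2_000_000                 # expired/deleted tokens per day
gc_grace_seconds = 864_000                 # Cassandra default: 10 days
tombstone_failure_threshold = 100_000      # default per-query abort limit

gc_grace_days = gc_grace_seconds // 86_400
standing_tombstones = tokens_per_day * gc_grace_days  # not yet eligible for GC
print(standing_tombstones)                             # 20000000
print(standing_tombstones // tombstone_failure_threshold)  # 200x the query limit
```

So a query that scanned even a small fraction of those tombstones would abort long before reaching them all.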
My initial thoughts are:
Would that be a good idea? Is there any documentation on this?
Hello,
I came across this post and wanted to provide more feedback on what you can do to avoid this issue.
To help with the number of access tokens and tombstones, we recommend the following:
Upon further research, it looks like the 100k limit applies specifically to queries that scan many rows. I suspect Apigee's queries use the indexes to retrieve only one token record at a time rather than many - in which case, the tombstones may not be an issue. Can anyone help clarify this?
http://docs.apigee.com/api-services/content/oauthv2-policy explains how to purge expired tokens. By default they are purged 180 days after expiry (for access tokens as well as refresh tokens).
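Note the scale that default implies: if expired tokens are purged only 180 days after expiry, then at the rate in the question the store retains a large backlog of expired rows before any purge (and its tombstones) even begins. A rough sketch, assuming a steady 2M expirations/day from this thread:

```python
tokens_per_day = 2_000_000   # from the question
purge_delay_days = 180       # default purge delay after expiry (per the linked doc)

# Expired-but-unpurged rows retained at steady state before purging starts.
expired_but_unpurged = tokens_per_day * purge_delay_days
print(expired_but_unpurged)  # 360000000
```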
Thanks, but that does not answer the question. That document does describe how to enable the automatic token purge, but purging still creates tombstones, and possible performance problems as a result.
Both approaches are right for the scenario. However, we cannot officially document or support either option.
Option 1 will work as long as you can guarantee that a node failure will be handled/restored within a few hours.
Option 2 could potentially inflict huge latency, as reads now have to traverse up to 1M tombstones.
I would start by looking at the legitimate business case for generating and expiring X number of OAuth tokens per day (presuming it is over a few hundred thousand).
> Option 2 could potentially inflict huge latency, as reads now have to traverse up to 1M tombstones.
My understanding is that the huge latency would occur only for queries that must scan through many records - i.e., queries that don't look up a row directly by its primary key (the token). Does Apigee issue queries like that?
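The distinction matters because a point lookup can seek straight to one live cell, while a scan must read and discard every tombstone it passes. A toy in-memory model of that difference - purely illustrative, not Cassandra's actual read path:

```python
# Toy model: a "partition" as a mapping of token -> value, where value None
# marks a tombstone. Illustrative only; not Cassandra internals.
TOMBSTONE = None

cells = {f"token-{i}": TOMBSTONE for i in range(100_000)}  # 100k deleted tokens
cells["token-live"] = "payload"

def point_lookup(key):
    # Direct primary-key seek: one cell examined; tombstones elsewhere are irrelevant.
    return cells[key], 1

def range_scan():
    # Full scan: every tombstone must be read and skipped to find live rows.
    examined, live = 0, []
    for key, value in cells.items():
        examined += 1
        if value is not TOMBSTONE:
            live.append(key)
    return live, examined

_, examined = point_lookup("token-live")
print(examined)              # 1
live, examined = range_scan()
print(examined)              # 100001 - every tombstone read and discarded
```

If Apigee's token lookups are all keyed point reads, the tombstone count mostly affects compaction and disk, not individual query latency.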
> Both are the right approaches for the scenario. However we cannot officially document/support both options.
Are there no customers who expire more than 100k tokens in a 10-day span?
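For context on where the 100k-per-10-days figure comes from: with the default gc_grace_seconds of 10 days, a worst-case scan stays under the default 100k tombstone threshold only if expirations average about 10k per day. A quick check, assuming those defaults:

```python
tombstone_failure_threshold = 100_000  # Cassandra default per-query limit
gc_grace_days = 10                     # 864000 s, the Cassandra default

# Maximum average expiry rate before a full scan could hit the threshold.
max_sustainable_per_day = tombstone_failure_threshold / gc_grace_days
print(max_sustainable_per_day)         # 10000.0 - vs. 2M/day in the question
```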