What method/tool did you use to size your Cloud Spanner database based on the source database info?

Hello All,

What method or tool did you use to size your Cloud Spanner database based on the source database info? My source database is in Azure Cloud, which expresses resources in DTUs, while Cloud Spanner takes input as nodes or compute capacity. How did you arrive at the initial sizing of your Cloud Spanner database?

regards,

Sridhar


Hi @slaksh10,

Welcome to Google Cloud Community!

You may want to check Google's blog posts on granular instance sizing.

Granular instance sizing basically lets you scale your resources in "processing units" in addition to "nodes".

Please be advised that this feature is still in early access. You may request it by signing up using this form.
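For reference, processing units relate to nodes at a fixed ratio: 1,000 processing units equal one node, and sub-node instances can be provisioned in increments of 100 processing units. A minimal sketch of the conversion (the helper names are illustrative, not part of any Google SDK):

```python
# Sketch: converting between Cloud Spanner nodes and processing units.
# The 1 node == 1,000 processing units ratio is documented by Spanner;
# these helper functions are illustrative only.

PROCESSING_UNITS_PER_NODE = 1000

def nodes_to_processing_units(nodes: int) -> int:
    """One node corresponds to 1,000 processing units."""
    return nodes * PROCESSING_UNITS_PER_NODE

def processing_units_to_nodes(processing_units: int) -> float:
    """Sub-node instances (e.g. 100-900 PUs) come out as fractions of a node."""
    return processing_units / PROCESSING_UNITS_PER_NODE

print(nodes_to_processing_units(3))    # 3000
print(processing_units_to_nodes(500))  # 0.5
```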

Thanks, Robertcarlos. I am looking for a tool or method to size the Cloud Spanner database based on input from my source SQL Server database. Let's say the SQL Server database uses X resources; what node/compute capacity will I need to provision for Cloud Spanner?

Thanks.

Hi @slaksh10, there is no Spanner sizing tool for Azure SQL DB, but there are some basic guidelines for getting started based on read/write throughput requirements. See https://cloud.google.com/spanner/docs/instance-configurations#multi-region-best-practices for guidance on the recommended load level for your Spanner instance and the maximum read/write queries per second supported per node in each region. Keep in mind this is just a starting point, as different schemas, queries, and data patterns will behave differently. Once you have an instance created, changing your node count is a one-line script or a few clicks in the console. You can also use the Autoscaler to change it automatically based on workload demand. This ability to increase or decrease compute capacity online is very different from other databases, where such operations are limited and often intrusive.
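One way to turn that throughput guidance into a first estimate is simple arithmetic: take the peak read QPS, write QPS, and storage you measure on the source SQL Server database and divide by per-node capacity figures from the Spanner docs for your chosen instance configuration. A rough sketch, with placeholder per-node figures that you should replace with the current documented numbers for your configuration, then validate with a load test:

```python
import math

# Rough first-pass Spanner sizing sketch. The per-node capacity figures
# below are illustrative placeholders, NOT authoritative; check the
# current Spanner documentation for your instance configuration.
READS_PER_SEC_PER_NODE = 10_000   # assumed peak read QPS per node
WRITES_PER_SEC_PER_NODE = 2_000   # assumed peak write QPS per node
STORAGE_TIB_PER_NODE = 4          # assumed storage limit per node
TARGET_UTILIZATION = 0.65         # leave headroom below max recommended CPU

def estimate_nodes(read_qps: float, write_qps: float, storage_tib: float) -> int:
    """Return the largest node count implied by reads, writes, or storage."""
    by_reads = read_qps / (READS_PER_SEC_PER_NODE * TARGET_UTILIZATION)
    by_writes = write_qps / (WRITES_PER_SEC_PER_NODE * TARGET_UTILIZATION)
    by_storage = storage_tib / STORAGE_TIB_PER_NODE
    return max(1, math.ceil(max(by_reads, by_writes, by_storage)))

# Example: 20k reads/sec, 3k writes/sec, 1.5 TiB of data
print(estimate_nodes(20_000, 3_000, 1.5))  # 4
```

Whichever dimension (reads, writes, or storage) demands the most nodes sets the initial size; since node count can be changed online, erring slightly low and scaling up after observing real traffic is usually safe.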