Hello,
We are experimenting with AlloyDB for a new project. We created a table, and when we try to load it into the columnar engine we get this error:
Hi @morpheus,
This isn't my area of expertise, but I believe the AlloyDB columnar engine can only hold tables (or columns) that fit in the memory allocated to it. The engine keeps a columnar copy of selected data in memory and runs analytical queries against that copy. Queries that can be served entirely from the column store are very fast, but the store itself is limited by the memory you give it.
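A rough sketch of how you might check the column store's memory allocation and manually populate it, assuming the `google_columnar_engine.memory_size_in_mb` flag and `google_columnar_engine_add` function described in the AlloyDB docs (the table name is a placeholder):

```sql
-- Check how much memory the columnar engine has been allocated
-- (assumes the google_columnar_engine.* database flags are set).
SHOW google_columnar_engine.memory_size_in_mb;

-- Manually add a (hypothetical) table to the in-memory column store.
SELECT google_columnar_engine_add('my_table');
```

If the table's columnar representation exceeds that allocation, the load is expected to fail or be only partially applied.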
If you have a table that is too large to fit in the columnar engine's memory, you can use AlloyDB's partitioned tables feature, i.e. standard PostgreSQL declarative partitioning, which AlloyDB supports since it is PostgreSQL-compatible. Partitioning splits one large table into smaller child tables, so you can load only the partitions you query most often into the column store rather than the whole table.
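For reference, standard PostgreSQL declarative partitioning looks roughly like this (the table and column names are made up for illustration):

```sql
-- Hypothetical schema: split a large events table by month so that
-- only recent, frequently queried partitions need to be cached.
CREATE TABLE events (
    event_id   bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
```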
However, partitioning also comes with trade-offs: queries that span many partitions can be slower than against an unpartitioned table, and partitioned tables are more complex to manage.
If you are using AlloyDB for a real-world use case, you need to carefully consider the trade-offs between performance, complexity, and cost. You may need to use a combination of techniques, such as partitioning tables, to get the best performance and cost-effectiveness for your application.
Hi @Roderick, you suggested using "AlloyDB's partitioned tables feature". I have gone through the documentation and haven't come across partitions anywhere; could you please share the link? Also, instead of loading the entire table, isn't it possible to load only some columns into the columnar store?
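For context, the column-level load asked about here would look roughly like this, assuming the optional columns argument of `google_columnar_engine_add` from the AlloyDB columnar engine docs (table and column names are illustrative):

```sql
-- Load only two columns of a (hypothetical) orders table into the
-- column store, instead of the whole table.
SELECT google_columnar_engine_add('orders', 'order_date,total_amount');
```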