We're building a new data ingestion pipeline that stores its data in BigQuery. This is streaming data (think IoT) at about 5K messages/sec.
Due to business requirements, we are partitioning the data across multiple datasets: there will be thousands of datasets, with about 50 tables per dataset.
Our implementation language of choice is Node.js, and we'll be running multiple instances of our services to handle the load. We have a running proof-of-concept that successfully handles the load and writes the data to BigQuery. However, this proof-of-concept uses only one dataset and one table for simplicity.
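For reference, our POC writer follows the JSON-writer pattern from the `managedwriter` samples, roughly like the sketch below (`makeWriter` is just our own helper name, and the project/dataset/table IDs are placeholders):

```js
// Sketch of our current single-destination writer, based on the
// managedwriter samples in @google-cloud/bigquery-storage.
const {adapt, managedwriter} = require('@google-cloud/bigquery-storage');
const {WriterClient, JSONWriter} = managedwriter;

async function makeWriter(projectId, datasetId, tableId) {
  const destinationTable =
    `projects/${projectId}/datasets/${datasetId}/tables/${tableId}`;
  const client = new WriterClient({projectId});

  // Fetch the table schema and derive the proto descriptor JSONWriter needs.
  const writeStream = await client.getWriteStream({
    streamId: `${destinationTable}/streams/_default`,
    view: 'FULL',
  });
  const protoDescriptor = adapt.convertStorageSchemaToProto2Descriptor(
    writeStream.tableSchema,
    'root'
  );

  // One bidirectional gRPC connection to this table's default stream.
  const connection = await client.createStreamConnection({
    streamId: managedwriter.DefaultStream,
    destinationTable,
  });
  return new JSONWriter({connection, protoDescriptor});
}

// Usage: append a batch of JSON rows and wait for the server's ack.
// const writer = await makeWriter('my-project', 'my_dataset', 'my_table');
// await writer.appendRows(jsonRows).getResult();
```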
We're now about to expand the proof-of-concept to multiple datasets and tables, and that's where we've hit a snag.
From looking at the documentation, samples, and source code, it appears to me that the Node.js bigquery-storage SDK (`@google-cloud/bigquery-storage`) does *not* support multiplexing, i.e. appending rows destined for different datasets/tables over a single stream connection. The Java and Go SDKs do appear to support this.
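Absent multiplexing, the only Node.js approach we can see is keeping one stream connection per destination table and routing each message to the right writer. A sketch of what we have in mind (the `writers` cache, `getWriter`, `ingest`, and the message shape are all our own assumptions, reusing the hypothetical `makeWriter` helper above):

```js
// Sketch of a per-destination writer pool: one stream connection per
// dataset/table, reused across messages.
const writers = new Map();

function getWriter(projectId, datasetId, tableId) {
  const key = `${datasetId}.${tableId}`;
  if (!writers.has(key)) {
    // Cache the promise so concurrent messages don't open duplicate streams.
    writers.set(key, makeWriter(projectId, datasetId, tableId));
  }
  return writers.get(key);
}

async function ingest(projectId, message) {
  // Assumes each message carries its destination plus a JSON row payload.
  // In production we'd batch rows per destination instead of one at a time.
  const writer = await getWriter(projectId, message.datasetId, message.tableId);
  await writer.appendRows([message.row]).getResult();
}
```

Our worry with this approach is the sheer number of concurrent stream connections it implies (~50 tables across thousands of datasets), which is exactly the problem multiplexing exists to solve.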
If required, we could write this one component in Go or Java, but that would not be our preference.
Is my assessment of this limitation in the Node.js bigquery-storage SDK correct? What options do we have?
Thanks!