We have an application that queries a Postgres DB. The tables are partitioned into Postgres schema(s). I'm tasked with getting some sort of cost ( dollars, cpu, memory, anything ) for executed queries on a per-schema basis. Looking through the log files, the only way I see to do this is to find log entries that contain the Postgres query text, since the schema name appears in the query itself. That's how I identify which query accessed which schema.
Now, using Logs Explorer, I'm seeing these Postgres queries in log entries whose logName contains either "data_access" or "postgres_log". I'm not sure why a given query shows up in one or the other, or what the difference between them is. Each of these has its own fields describing the logged action, but neither has anything to hang my hat on as to the amount of resources the query took. Gemini gives me various methods that aren't consistent or accurate. So:
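For reference, this is roughly the kind of Logs Explorer filter I'm using to pull the Postgres entries (the project ID and schema name are placeholders for my real values):

```
resource.type="cloudsql_database"
logName="projects/my-project/logs/cloudsql.googleapis.com%2Fpostgres.log"
textPayload:"my_schema."
```

Swapping the logName for the cloudaudit.googleapis.com%2Fdata_access log shows the other set of entries, but with a completely different field layout.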
Knowing the difference between "data_access" and "postgres_log" files might be helpful.
Is the best I can do just to count the number of queries referencing a particular schema? Is there a join field ( insertId? ) that can be used to link an entry to a cost? I'm not seeing one.
Is there a way to adjust Google's log files to always log a consistent cost value ( of any type )? chunkCount isn't always there, and when it is, it's always "1" for me. I see references in the docs to statement_latency_ms but never actually see it in an entry. Lots of docs talk about BigQuery queries, but these are queries to a PG database, not BQ queries.
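One thing I've considered (not sure it's the right approach): turning on per-statement duration logging via a Cloud SQL database flag, which in plain postgresql.conf terms would be:

```
# Log every statement along with its execution time;
# 0 means "no minimum", so every query gets a duration line.
log_min_duration_statement = 0
```

My understanding is this makes each statement emit a "duration: N ms" line into the postgres_log entries, which would at least give me latency as a consistent per-query cost proxy, but I'm worried about the log volume on a busy instance.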
Am I SOL? If there's a guru to ask about this, we can hire them, but any advice is appreciated.
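In case it helps, here's roughly what I'm doing today to count queries per schema from exported log entries. Everything here is a simplified sketch: the sample entries, the field name, and the regex are assumptions, and real entries may carry the query in protoPayload rather than textPayload.

```python
import re
from collections import Counter

# Hypothetical sample of exported Cloud Logging entries; my real
# postgres_log entries are much noisier than this.
SAMPLE_ENTRIES = [
    {"textPayload": "statement: SELECT * FROM tenant_a.orders WHERE id = 1"},
    {"textPayload": "statement: SELECT count(*) FROM tenant_a.orders"},
    {"textPayload": "statement: UPDATE tenant_b.users SET name = $1"},
    {"textPayload": "checkpoint complete"},  # non-query line, ignored
]

# Matches schema-qualified identifiers like "tenant_a.orders".
# Crude: misses quoted identifiers and anything resolved via search_path.
SCHEMA_REF = re.compile(r"\b([a-z_][a-z0-9_]*)\.[a-z_][a-z0-9_]*")

def count_queries_per_schema(entries):
    """Count how many log entries reference each schema."""
    counts = Counter()
    for entry in entries:
        payload = entry.get("textPayload", "")
        # De-duplicate so one query counts once per schema it touches.
        counts.update(set(SCHEMA_REF.findall(payload)))
    return counts

if __name__ == "__main__":
    print(dict(count_queries_per_schema(SAMPLE_ENTRIES)))
```

This gives me query *counts* per schema, which is exactly my question: counts are all I can extract, with no notion of how expensive each query actually was.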