I'm puzzled by a discrepancy: the automatic calculation tells me the query will process more than 20 GB, but when I look at the history under job information it shows much less, which is the figure that would actually matter.
So how much does it really process, 22 GB or 400 MB?
Thank you very much.
In your example, the bytes processed by the query were 498 MB. With that in mind, you are likely wondering where the 22 GB comes into the story. The answer is that 22 GB is the maximum amount of storage that might have been scanned to give you an answer. The reason the actual scan is so much smaller than the possible scan is typically filters, join "on" conditions, partitions, and clusters. What you need to understand is that the estimate is calculated without taking the actual parameters of your query into account. For example, if you have a table of StackOverflow questions with clustering applied and you filter down to just the questions you asked, then short of actually running the query, BigQuery can't predict how much storage needs to be examined, so it reports the worst case, which would be examining every row in the table.
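As a rough illustration of that difference, here is a minimal sketch using the google-cloud-bigquery Python client: a dry run returns the worst-case estimate, while the statistics of the executed job show what was actually scanned. The query below is only a placeholder built on the public StackOverflow dataset; whether pruning reduces the real cost depends on how your own table is partitioned or clustered.

```python
# Minimal sketch: compare the dry-run estimate with the bytes actually
# processed. The query/table below are placeholders; swap in your own SQL.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT title, view_count
    FROM `bigquery-public-data.stackoverflow.posts_questions`
    WHERE creation_date >= '2022-01-01'
"""

# Dry run: BigQuery validates the query and returns a worst-case byte
# estimate without executing it (no cost is incurred).
dry_cfg = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
dry_job = client.query(sql, job_config=dry_cfg)
print(f"Estimated (worst case): {dry_job.total_bytes_processed / 1e9:.2f} GB")

# Real run: after execution, the job statistics report what was actually
# scanned, which can be far smaller if partition or cluster pruning applies.
job = client.query(sql)
job.result()  # wait for the query to finish
print(f"Actually processed:     {job.total_bytes_processed / 1e6:.2f} MB")
```

The first number corresponds to what the query editor shows before you run the query; the second corresponds to what you see afterwards in the job information, and that second figure is the one you are billed for.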
THANK YOU VERY MUCH