Hello! I have a problem: the logical size of my table is about 20x bigger than the physical size. What could be the problem?
BigQuery compresses data where applicable, so we should expect the physical size (size on disk) to be smaller than the logical size (the size of the data uncompressed). From a billing perspective, my understanding is that by default you are charged based on the logical size ... unless you switch to "Dataset storage billing", in which case you are charged based on the physical size (which is smaller). It appears that the reason you wouldn't immediately switch to "Dataset storage billing" is that this model also charges for "time travel" storage.
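You can see exactly how big the gap is per table by querying the `INFORMATION_SCHEMA.TABLE_STORAGE` view, which exposes both logical and physical byte counts. A sketch, assuming your dataset lives in the `us` multi-region (adjust the region qualifier) and is named `your_dataset` (a hypothetical name):

```sql
SELECT
  table_name,
  total_logical_bytes,
  total_physical_bytes,
  time_travel_physical_bytes,
  SAFE_DIVIDE(total_logical_bytes, total_physical_bytes) AS compression_ratio
FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
WHERE table_schema = 'your_dataset'  -- hypothetical dataset name
ORDER BY compression_ratio DESC;
```

A `compression_ratio` around 20 would explain the 20x difference you are seeing, and `time_travel_physical_bytes` shows how much of the physical footprint is change history rather than current data.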
For example ... imagine you have 1 GByte of data (uncompressed) in your table. Now imagine you completely change 50% of that data. Through time travel, Google has to keep both the current table data and the data as it was before the change, so Google is now managing 1.5 GByte of data. The distinction in billing is whether you are charged for the data currently in your tables (e.g. 1 GByte at any given time) or for your data AND all of its change history (more than 1 GByte at a time), but with compression applied.
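Putting numbers on that example: with a 20x compression ratio, physical billing can come out cheaper even though it also charges for time travel and its per-GiB rate is higher. A sketch, assuming the published US multi-region rates ($0.02/GiB/month active logical, $0.04/GiB/month active physical) — check current pricing before relying on these:

```python
LOGICAL_PRICE = 0.02   # $ per GiB per month, active logical storage (assumed rate)
PHYSICAL_PRICE = 0.04  # $ per GiB per month, active physical storage (assumed rate)

logical_gib = 1.0        # uncompressed data currently in the table
time_travel_gib = 0.5    # uncompressed history kept after changing 50% of the data
compression_ratio = 20   # the ~20x ratio observed in the question

# Logical billing: charged on uncompressed table data only (no time travel charge).
logical_cost = logical_gib * LOGICAL_PRICE

# Physical billing: charged on compressed bytes, current data AND time travel.
physical_cost = (logical_gib + time_travel_gib) / compression_ratio * PHYSICAL_PRICE

print(f"logical billing:  ${logical_cost:.4f}/month")
print(f"physical billing: ${physical_cost:.4f}/month")
```

Under these assumptions the physical model is cheaper, because the 20x compression more than offsets both the doubled rate and the extra 0.5 GByte of time-travel history.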