We had our application deployed on the App Engine standard environment (first generation), built on the Python 2.7 runtime with the webapp2 framework.
We're currently migrating the framework from webapp2 to Flask and, along with it, updating the Python runtime from python27 to python310.
We've migrated the code base; however, we're facing some unexpected issues:
- With webapp2, the App Engine build size was around 140-160 MB, but after migrating to Flask on Python 3.10 it rose to 480-500 MB.
- Upon debugging, we found that this is the same size the project occupies locally on the machine. Did webapp2 apply compression when deploying to App Engine, and does Flask not?
- For the F1 instance class we declared, the build deploys successfully, but our front end does not load. We believe it's because the build size is about 480 MB, greater than the F1 memory limit of 384 MB in the second-generation App Engine standard runtime.
- We ran a few experiments to evaluate this build-size increase: adding requirements.txt to .gcloudignore reduced the size to 200 MB, and adding the static folder as well reduced it by a further 136 MB. Of course, we can't actually exclude these in .gcloudignore, as both are required.
Also, if we go live with such a build size on, say, 20 F2 instances with automatic scaling, when will App Engine spin up another instance, and what effect will that have on billing?
> Upon debugging, we found that this is the same size the project occupies locally on the machine
This probably means you didn't include your virtual env folder in your .gcloudignore file, which would mean it was uploaded when you deployed your App. Add the virtual env folder to .gcloudignore so that it doesn't get deployed; Google will install the contents of your requirements.txt file when it's starting your App.
> adding requirements.txt to .gcloudignore reduced the size to 200 MB
If you're saying that you added requirements.txt to your .gcloudignore, then that is most likely the reason why your App isn't working (without the requirements.txt file, none of your libraries will be installed by Google).
> it's because the build size is about 480 MB, greater than the F1 memory limit of 384 MB
I believe you're mixing up two things. Your build size counts against data storage, and you get 1 GB for that (see quotas and limits at the bottom of the documentation). 384 MB is the memory limit (the amount of memory, i.e. RAM, required to run your App).
> adding the static folder as well reduced it by a further 136 MB. Of course, we can't actually exclude these in .gcloudignore, as both are required
I believe you meant that removing the static folder further reduced your deployed size. The way to do that is to upload the static files to Cloud Storage and then reference the Cloud Storage URLs in your App (see doc here).
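As a minimal sketch of that approach (assuming a hypothetical bucket named 'your-app-static' that you've already synced your static folder to, e.g. with gsutil rsync), a Flask app could expose a small template helper that builds the Cloud Storage URL:

# Sketch only: reference static assets from Cloud Storage instead of
# bundling them with the deployment. 'your-app-static' is a hypothetical
# bucket name; sync your files there first, e.g.:
#   gsutil -m rsync -r ./static gs://your-app-static/static
from flask import Flask

app = Flask(__name__)

GCS_STATIC_BASE = "https://storage.googleapis.com/your-app-static/static"

@app.context_processor
def inject_gcs_static():
    # Makes gcs_static() available in Jinja templates, e.g.:
    #   <script src="{{ gcs_static('dist/app.js') }}"></script>
    def gcs_static(path):
        return f"{GCS_STATIC_BASE}/{path}"
    return {"gcs_static": gcs_static}

With something like that in place, the static folder can go into .gcloudignore without breaking the front end.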
Hi @NoCommandLine.
Thanks for the reply.
@NoCommandLine wrote: This probably means you didn't include your virtual env folder in your .gcloudignore file.
Just to add a clarification here: my .gcloudignore already includes the venv directory, i.e.
.gcloudignore
.git
.gitignore
# Python pycache:
__pycache__/
# Ignored by the build system
/setup.cfg
/dump.rdb
venv
/packages
/static/dist/.DS_Store
/static/dist/images/ping.mp3
/scripts
You're right. Adding requirements.txt to .gcloudignore prevents App Engine from running the application, since the packages won't be installed. I only added it as an experiment to check whether it affected the size, and it drastically did.
Let's suppose my build size is 423 MB; on an F1 instance (384 MB), the app gives this error when it runs:
'Exceeded hard memory limit of 384 MiB with 423 MiB after servicing 0 requests total. Consider setting a larger instance class in app.yaml'
If I set a larger instance class, it would definitely increase the cost as well.
1) Build size != the amount of memory needed to run your App.
2) If your memory consumption is way higher than it was in Python 2, you'll have to do some troubleshooting to figure out whether you're leaking memory. Also check whether you made changes in the Python 3 version that result in reading and holding large amounts of data in memory, or whether you changed libraries and are now using something that consumes a lot of memory.
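One quick, generic way to do that troubleshooting (a sketch using only the standard library's tracemalloc; nothing App Engine specific) is to snapshot allocations while exercising the App:

# Temporary debugging aid: log the top memory allocation sites.
import tracemalloc

tracemalloc.start()

def log_top_allocations(limit=5):
    # Call this after serving some requests; each line looks roughly like
    # "main.py:42: size=12.3 MiB, count=1024, average=12.6 KiB".
    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("lineno")[:limit]:
        print(stat)

If the reported sizes keep growing across requests, you're likely holding on to data somewhere.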
Try explicitly setting an entrypoint element in the yaml file, specifically including the number of gunicorn workers that Google suggests in that table for your instance class.
https://cloud.google.com/appengine/docs/standard/python3/runtime#application_startup
Based on our testing, when we specified a gunicorn entrypoint in our yaml file with the number of workers suggested for our instance class, the initial memory consumption dropped from when we did not specify an entrypoint. We further halved the number of workers, and this further reduced the initial memory consumption.
There was also another thread that mentioned this.
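For reference, a minimal app.yaml sketch of this (assuming an F2 instance and a Flask object named 'app' in main.py; the worker count is the one Google's table suggests for F2):

runtime: python310
instance_class: F2
# 4 workers is the suggested value for F2; halving it (-w 2) further
# reduced our initial memory consumption, as described above.
entrypoint: gunicorn -b :$PORT -w 4 main:app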
Thanks @dnswrsrx.
This did help: adding an entrypoint dropped the initial memory consumption.
However, the number of workers seems to have no effect.
Along the lines of what @dnswrsrx mentions, I have posted an article about our migration experience with respect to billing at https://gae123.com/article/gae-py3-billing
As I mention at the end of that article, we were able to lower our bill below what it was on Python 2.