
Pub/Sub takes a long time in Cloud Functions (Java 11)

I'm publishing a Pub/Sub message in an HttpFunction; here's how I measure the execution time:

val builder: Publisher.Builder
measureTimeMillis {
    builder = Publisher.newBuilder(
        ProjectTopicName.of(projectId, topicName),
    )
}.also {
    LOGGER.info("Pub/Sub publisher newBuilder() took ${it}ms.")
}

val publisher: Publisher
measureTimeMillis {
    publisher = builder.build()
}.also {
    LOGGER.info("Pub/Sub publisher build() took ${it}ms.")
}

measureTimeMillis {
    publisher.publish(pubsubMessage)
}.also {
    LOGGER.info("Pub/Sub publish() took ${it}ms.")
}

measureTimeMillis {
    publisher.shutdown()
}.also {
    LOGGER.info("Pub/Sub shutdown() took ${it}ms.")
}

And the result is:

Pub/Sub publisher newBuilder() took 302ms.
Pub/Sub publisher build() took 3103ms.
Pub/Sub publish() took 93ms.
Pub/Sub shutdown() took 2006ms.

The function is deployed with the Java 11 runtime and 512 MB of memory. The Slack tutorial recommends publishing to a topic precisely to avoid the 3-second timeout, yet the publishing itself takes longer than that. Am I doing anything wrong?

Solved
1 ACCEPTED SOLUTION

Make sure you’re not loading dependencies that your function doesn’t use.

As described in the Performance section [1] of the Tips & Tricks document I shared in my last post:

Because functions are stateless, the execution environment is often initialized from scratch (during what is known as a cold start). 

If your functions import modules, the load time for those modules can add to the invocation latency during a cold start.

If your application is latency-sensitive, note what is stated here [2]:

You can avoid cold starts for your application and reduce application latency by setting a minimum number of instances.

Cloud Functions scales by creating new instances of your function. Each of these instances can handle only one request at a time, so large spikes in request volume often cause longer wait times as new instances are created to handle the demand.

Take a look at this Stack Overflow post [3] for more guidance on reducing cold-start times.

[1]: https://cloud.google.com/functions/docs/bestpractices/tips#performance

[2]: https://cloud.google.com/functions/docs/configuring/min-instances

[3]: https://stackoverflow.com/a/51790072/17544309
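Concretely, the expensive Publisher.build() can be done once per instance, at object scope, so warm invocations reuse the same client instead of rebuilding it on every request. Below is a minimal sketch of the pattern only: since the real Publisher.newBuilder(...).build() needs GCP credentials, a plain stand-in function marks where that call would go.

```kotlin
// Sketch: pay the expensive construction cost once per instance,
// not once per request. In a Cloud Function, the lazy property lives
// at object/class scope, so warm invocations reuse the client.
object PublisherHolder {
    var buildCount = 0
        private set

    // Stand-in for Publisher.newBuilder(...).build(), which is the
    // call that takes ~3s on a cold start.
    private fun expensiveBuild(): String {
        buildCount++
        return "publisher"
    }

    // Initialized on first access only; later requests reuse it.
    val publisher: String by lazy { expensiveBuild() }
}

fun handleRequest(): String {
    // Every request uses the shared client; no per-request build().
    return "published via ${PublisherHolder.publisher}"
}
```

With this shape, only the first request on a fresh instance pays the build cost; combined with min-instances, most requests never see it at all.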

 


3 REPLIES

It depends on the commands you're using. What are you trying to achieve? Are you expecting your function to respond within 3 seconds of receiving a webhook request? These Cloud Functions tips & tricks may be helpful [1].

[1]:https://cloud.google.com/functions/docs/bestpractices/tips#ensure_http_functions_send_an_http_respon...

I'm implementing a Slack Slash Command, which needs to respond within 3 seconds.

I have 2 functions, A and B.

Function A receives the HTTP request, publishes a Pub/Sub event, then responds immediately.

Function B receives the Pub/Sub event from A, then calls third-party APIs. After that, it makes a POST request to the Slack response URL.

I'm following the Pub/Sub tutorial. It works, but Function A takes a long time (15 seconds or more), especially on a cold start.
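Roughly, Function A only enqueues and acknowledges. Sketched here with the Pub/Sub client hidden behind a hypothetical EventPublisher interface (an assumption for illustration; the real handler would wrap a shared com.google.cloud.pubsub.v1.Publisher), so the request path stays inside Slack's 3-second budget:

```kotlin
// Hypothetical abstraction over the Pub/Sub client so the handler
// can be shown (and tested) without GCP credentials.
interface EventPublisher {
    fun publish(payload: String)
}

// Function A: acknowledge the slash command immediately, then let
// Function B (the Pub/Sub subscriber) do the slow third-party calls.
class SlashCommandHandler(private val events: EventPublisher) {
    fun handle(body: String): Pair<Int, String> {
        events.publish(body)            // hand off to Function B
        return 200 to "Working on it"   // respond within Slack's 3s limit
    }
}
```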
