**Subject:** Optimizing Cloud Function Usage for Google Prompt Optimizer Custom Metrics
**Body:**
Hello Google Cloud Community,
I am currently working with the Google Prompt Optimizer, which provides support for custom metrics. The service enables users to specify a custom metric name and a corresponding Cloud Function for metric calculation, using the following arguments:
```python
custom_metric_name="custom_engagement_personalization_score" # Metric name as key in dictionary returned from Cloud Function
custom_metric_cloud_function_name="custom_engagement_personalization_metric" # Cloud Function name
```
**Current Setup and Limitations**
The Cloud Function itself only receives the following inputs from Google Prompt Optimizer:
- `question`: The original query.
- `response`: The output from the LLM.
- `target`: The expected answer.
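To make the setup concrete, here is a minimal sketch of such a Cloud Function, assuming a 2nd-gen Python HTTP function using `functions_framework` and that the fields arrive as a flat JSON payload; the scoring logic is only a placeholder:

```python
import functions_framework


@functions_framework.http
def custom_engagement_personalization_metric(request):
    # Assumption: the Prompt Optimizer payload arrives as a flat JSON object.
    data = request.get_json(silent=True) or {}
    question = data.get("question", "")   # the original query
    response = data.get("response", "")   # the output from the LLM
    target = data.get("target", "")       # the expected answer

    # Placeholder scoring logic; replace with the real metric computation.
    score = 1.0 if target and target in response else 0.0

    # The key must match custom_metric_name so GPO can pick the value up.
    return {"custom_engagement_personalization_score": score}
```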
However, this setup lacks flexibility because `custom_metric_name` is not passed as an input to the Cloud Function, which makes it hard to determine dynamically which specific metric to compute. Without this information, I am left with one of the following workarounds:
1. Compute all metrics on every invocation and let GPO take the correct one from the returned dictionary.
2. Deploy a separate Cloud Function for each metric.
**Possible Solution**
Therefore I suggest passing `custom_metric_name` (or an equivalent identifier) to the Cloud Function as an additional input.
Hello @SayyorYusupov, Prompt Optimizer actually sends the entire row of the provided data plus the `response` field to the Cloud Function. For example, if the provided data has the `question`, `metric_to_compute`, and `target` fields, the data received in the Cloud Function will contain the following fields: `question`, `metric_to_compute`, `target`, and `response`, where the `response` field corresponds to the LLM generation.
On the other hand, `custom_metric_name` must be unique, since this is how we retrieve the metric from the Cloud Function and aggregate it over the entire dataset.
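To illustrate, here is a minimal sketch of how a single Cloud Function could dispatch on an extra `metric_to_compute` column in the dataset row while still returning one uniquely named metric; the payload shape and the scoring functions are assumptions for illustration only:

```python
import functions_framework


def engagement_score(question: str, response: str, target: str) -> float:
    # Hypothetical engagement scoring logic.
    return 1.0 if target and target in response else 0.0


def personalization_score(question: str, response: str, target: str) -> float:
    # Hypothetical personalization scoring logic.
    return 1.0 if question and question.split()[0].lower() in response.lower() else 0.0


METRICS = {
    "engagement": engagement_score,
    "personalization": personalization_score,
}


@functions_framework.http
def custom_engagement_personalization_metric(request):
    # Prompt Optimizer forwards the whole dataset row plus the LLM response,
    # so any extra column such as metric_to_compute is available here.
    row = request.get_json(silent=True) or {}
    metric_fn = METRICS.get(row.get("metric_to_compute", ""), engagement_score)
    score = metric_fn(row.get("question", ""), row.get("response", ""), row.get("target", ""))

    # The key must still be the single, unique custom_metric_name.
    return {"custom_engagement_personalization_score": score}
```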
Hi @SayyorYusupov,
Welcome to Google Cloud Community!
Thanks for your feedback regarding Google Prompt Optimizer. As this feature is still in its Preview launch stage, it is intended for use in test environments only and has limited support. Given this, I suggest filing a feature request so our engineering team can review it. Please note that I can't provide any details or timelines as to when this will be implemented. However, you may keep an eye on the release notes for the latest updates or new features related to Vertex AI.
Understood, thank you for the response, Cassandra.