
Adding a preamble when fine-tuning a classification model using the Gemini API

While reading the Gemini API fine-tuning docs (https://ai.google.dev/gemini-api/docs/model-tuning), I came across this part:

Adding a prompt or preamble to each example in your dataset can also help improve the performance of the tuned model. Note, if a prompt or preamble is included in your dataset, it should also be included in the prompt to the tuned model at inference time.

But the training data format is just a JSON array of objects, each consisting of text_input and output, e.g.:

training_data = [
{"text_input": "1", "output": "2"},
{"text_input": "3", "output": "4"},
{"text_input": "-3", "output": "-2"},
...
]
  1. In this case, how do I enter the preamble? An example would be appreciated.
  2. Regarding the statement 'if a prompt or preamble is included in your dataset, it should also be included in the prompt to the tuned model at inference time': does this mean I have to provide the preamble in every prompt? Will this cause the tuned model to be unable to answer items that are not in my training data?

Thanks


Hi @kokhoor,

Welcome to Google Cloud Community!

The idea behind including a preamble in your training data is to guide the model towards a specific task or format. It's like giving the model a consistent instruction set to follow for each example. Think of it like this:

  • Without a Preamble: You're training the model on raw input-output pairs, and the model has to infer the general rule.
  • With a Preamble: You're explicitly telling the model how to interpret the input and format the output. This can improve consistency and accuracy, especially for complex tasks.

To incorporate a preamble in your training data, the key is that the text_input field should contain the preamble and the actual input text concatenated into a single string.

Here’s an example. Suppose we want to fine-tune a model to add 1 to a given number.


[
   {"text_input": "Please add 1 to the following number: 1", "output": "2"},
   {"text_input": "Please add 1 to the following number: 3", "output": "4"},
   {"text_input": "Please add 1 to the following number: -3", "output": "-2"}
]


Here’s a breakdown of the example above:

  • Preamble: The preamble is "Please add 1 to the following number: ".
  • Concatenation: In each training example, the preamble is concatenated with the actual input value (e.g., "1", "3", "-3"). The whole string is included in text_input (see the sketch after this list).
  • Consistency: The key is that the preamble is consistent across all your training examples.
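
If you're building the dataset programmatically, a minimal sketch of that concatenation might look like this (the variable names are just illustrative):

PREAMBLE = "Please add 1 to the following number: "

# Raw (input, output) pairs without the preamble.
raw_pairs = [("1", "2"), ("3", "4"), ("-3", "-2")]

# Prepend the same preamble to every input so each example carries a
# consistent instruction in text_input.
training_data = [
    {"text_input": PREAMBLE + text_in, "output": out}
    for text_in, out in raw_pairs
]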

Additionally, the statement "if a prompt or preamble is included in your dataset, it should also be included in the prompt to the tuned model at inference time" means that you must use the same preamble when you prompt the model after fine-tuning.

Here’s an example:

Training Data:


[
   {"text_input": "Translate the following English phrase to French: Hello", "output": "Bonjour"},
   {"text_input": "Translate the following English phrase to French: Goodbye", "output": "Au revoir"},
   {"text_input": "Translate the following English phrase to French: Thank you", "output": "Merci"}
]
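
As a side note, here's a minimal sketch of how a dataset like this can be passed to the tuning API with the google-generativeai Python SDK; the base model name, tuned-model id, and hyperparameter values below are just illustrative:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # assumes you have an API key set up

training_data = [
    {"text_input": "Translate the following English phrase to French: Hello", "output": "Bonjour"},
    {"text_input": "Translate the following English phrase to French: Goodbye", "output": "Au revoir"},
    {"text_input": "Translate the following English phrase to French: Thank you", "output": "Merci"},
]

operation = genai.create_tuned_model(
    source_model="models/gemini-1.0-pro-001",  # illustrative base model
    training_data=training_data,
    id="en-fr-preamble-demo",                  # illustrative tuned-model id
    epoch_count=20,
    batch_size=4,
    learning_rate=0.001,
)
tuned_model = operation.result()  # blocks until tuning completes
print(tuned_model.name)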


Inference:


import google.generativeai as genai

# Load the tuned model; tuned model names take the form "tunedModels/<id>".
model = genai.GenerativeModel('tunedModels/your-tuned-model-name')
prompt = "Translate the following English phrase to French: How are you?"
response = model.generate_content(prompt)
print(response.text)


This is important because if you trained the model with a prompt prefix, your tuned model has learned to "expect" that prefix at inference time. If you omit it, the model will likely misinterpret the input and return nonsensical or incorrect responses. The model was trained to recognize that specific combination of preamble and text to produce the correct response. A consistent preamble ensures the fine-tuned model can reliably apply what it learned from your training examples.
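
One practical way to guarantee that consistency is to define the preamble once and reuse it at inference time; here's a minimal sketch (the helper name is just illustrative):

import google.generativeai as genai

PREAMBLE = "Translate the following English phrase to French: "

def translate(model: genai.GenerativeModel, text: str) -> str:
    # Always prepend the exact preamble used during fine-tuning.
    response = model.generate_content(PREAMBLE + text)
    return response.text

# Usage, assuming `model` is your tuned GenerativeModel:
# print(translate(model, "How are you?"))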

Lastly, to answer your question about whether the model will fail when the input is not in the training data: no, not necessarily. Fine-tuning generally enhances a pre-trained model's ability to handle data similar to the training data, but the pre-trained model still retains its knowledge of English, French, and general topics. So if you give it a similar task with a new input it has not seen, it will try its best to fulfill that task.

Also, the fine-tuned model is not limited to inputs from the training data. It can generalize to unseen inputs provided they are in the same distribution as the training data (i.e., follow the same general patterns) and contain the same preamble. However, if the prompt or the concept being asked about is very different from what the model was fine-tuned on, it might not give a good answer.

Was this helpful? If so, please accept this answer as “Solution”. If you need additional assistance, reply here within 2 business days and I’ll be happy to help.