
Fine-tuning the text-bison model

Hi everyone,

I currently have a pipeline using OpenAI where I pass information about my internal company database tables in the prompt, ask a user-defined question, and get back an SQL query and a response.

As you might have guessed, this uses a lot of tokens, since I need to describe my tables in the prompt, and it costs a lot.

I am now trying to fine-tune a text-bison model by passing it training examples of input text along with an appropriate output response. For training, I can use the same prompt as in the OpenAI pipeline, where I describe my tables and then ask the model to generate a query.

But the Vertex AI page on fine-tuning says to use training examples that match the input you would get in production. This would mean that I also pass the whole table description in the production pipeline, which is exactly what I am trying to avoid.

As an example:

For training:

Context: You have the following tables to gather data from:

Table 1 description
Table 2 description

Input: What is the price of Lockheed Martin stocks?

Output: The price of stock is X.

In the above example, the model knows the tables through the context and then finds the appropriate table for the text it was given and generates a response.
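For reference, here is a minimal sketch of how a training example like the one above could be written out as a record for Vertex AI supervised tuning, which expects a JSON Lines file with `input_text` and `output_text` fields. The context and question strings are just the placeholders from my example, not real schema:

```python
import json

# Placeholder table descriptions (stand-ins for the real schema)
context = (
    "Context: You have the following tables to gather data from:\n"
    "Table 1 description\n"
    "Table 2 description\n"
)

# One supervised-tuning record: Vertex AI text model tuning expects
# JSONL with "input_text" and "output_text" keys.
record = {
    "input_text": context + "\nInput: What is the price of Lockheed Martin stocks?",
    "output_text": "The price of stock is X.",
}

with open("tuning_data.jsonl", "w") as f:
    f.write(json.dumps(record) + "\n")
```

In practice the file would contain many such records, one JSON object per line.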

But in a production environment, I want to give only the 'input', not the table descriptions (i.e., the context), since those take up tokens and cost more, which is what I am trying to avoid in the first place.

Any idea how to go about this or am I approaching the problem in the wrong way?

Thanks!
