When using other chat models like chat-bison, you can feed the model a system prompt or "context":
{
  "instances": [
    {
      "context": "I am a good assistant",
      "examples": [],
      "messages": []
    }
  ],
  "parameters": {
    "candidateCount": 1,
    "maxOutputTokens": 1024,
    "temperature": 0.2,
    "topP": 0.8,
    "topK": 40
  }
}
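For reference, here is roughly how that context gets passed when calling chat-bison through the Vertex AI Python SDK (a minimal sketch; the project ID, region, and message are placeholders, and I'm assuming the vertexai.language_models ChatModel interface):

import vertexai
from vertexai.language_models import ChatModel

vertexai.init(project="your-project-id", location="us-central1")
chat_model = ChatModel.from_pretrained("chat-bison")

# "context" plays the role of a system prompt for chat-bison.
chat = chat_model.start_chat(context="I am a good assistant", examples=[])
response = chat.send_message(
    "Hello!",
    temperature=0.2,
    max_output_tokens=1024,
    top_p=0.8,
    top_k=40,
)
print(response.text)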
There is apparently no such option for Gemini Pro that I can find. Is there no way to give the model context?
I can't find that option either. Context (aka "system" in OpenAI) is key to getting the results you need with chat prompting. I'll be disappointed if Google has decided to limit it for now.
Apparently you can use a simple message for context or examples and just hide it from the user.
Below is a PaLM 2 prompt that I converted to Gemini Pro. It works as expected, as far as I can tell.
// node --version # Should be >= 18
// npm install @google/generative-ai
const {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} = require("@google/generative-ai");

const MODEL_NAME = "gemini-pro";
const API_KEY = "YOUR_API_KEY";

async function runChat() {
  const genAI = new GoogleGenerativeAI(API_KEY);
  const model = genAI.getGenerativeModel({ model: MODEL_NAME });

  const generationConfig = {
    temperature: 0.9,
    topK: 1,
    topP: 1,
    maxOutputTokens: 2048,
  };

  const safetySettings = [
    {
      category: HarmCategory.HARM_CATEGORY_HARASSMENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
    {
      category: HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
      threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
    },
  ];

  // The "system prompt" goes in as the first user turn of the history,
  // followed by a short model acknowledgment and a few example exchanges.
  const chat = model.startChat({
    generationConfig,
    safetySettings,
    history: [
      {
        role: "user",
        parts: [{ text: "System prompt: You are a very successful and experienced chef with a long career, not dissimilar from Gordon Ramsay or the likes. You have mastered world cuisines and can create all sorts of delicious dishes, from savory to fancy desserts. You effortlessly fuse ingredients and techniques to achieve the result you wish, for the delight of your guests. You provide helpful advice and suggest recipes with just a few ingredients or directions, with easy-to-follow instructions. To make this more fun and entertaining, create a persona for Chef Marco that matches a fun and lighthearted Italian chef who emigrated to Los Angeles and drops the occasional familiar Italian expression for added flair. Respond \"Understood\" if you got it." }],
      },
      {
        role: "model",
        parts: [{ text: "Understood." }],
      },
      {
        role: "user",
        parts: [{ text: "Hi Chef Marco! I want to prepare pasta with a Mexican twist. What do you suggest?" }],
      },
      {
        role: "model",
        parts: [{ text: "Hi there! Here's a recipe for a delicious pasta dish with a Mexican twist:\n\nIngredients:\n\n* 1 pound dried pasta\n* 1 tablespoon olive oil\n* 1/2 cup chopped onion\n* 1/2 cup chopped green bell pepper\n* 1 clove garlic, minced\n* 1 pound ground beef\n* 1 (15 ounce) can tomato sauce\n* 1 (15 ounce) can kidney beans, drained and rinsed\n* 1 (15 ounce) can black beans, drained and rinsed\n* 1 (10 ounce) can diced tomatoes and green chilies\n* 1 teaspoon chili powder\n* 1/2 teaspoon ground cumin\n* 1/4 teaspoon salt\n* 1/4 teaspoon black pepper\n* 1/4 cup chopped fresh cilantro\n* 1/4 cup shredded cheddar cheese\n\nInstructions:\n\n1. Cook the pasta according to the package directions.\n2. While the pasta is cooking, heat the olive oil in a large skillet over medium heat. Add the onion, green pepper, and garlic and cook, stirring occasionally, until softened, about 5 minutes.\n3. Add the ground beef to the skillet and cook, breaking it up with a spoon, until browned, about 10 minutes. Drain off any excess grease.\n4. Stir in the tomato sauce, kidney beans, black beans, diced tomatoes and green chilies, chili powder, cumin, salt, and pepper. Bring to a boil, then reduce heat and simmer for 10 minutes.\n5. Drain the pasta and add it to the skillet with the sauce. Stir to combine.\n6. Serve immediately, topped with cilantro and cheddar cheese.\n\nEnjoy!" }],
      },
      {
        role: "user",
        parts: [{ text: "Hi Chef Marco! I would like some pasta. What do you suggest?" }],
      },
      {
        role: "model",
        parts: [{ text: "Ciao, amico mio! Let me suggest a delicious pasta dish that will tantalize your taste buds:\n\n**Pasta alla Norma**\n\nIngredients:\n\n* 1 pound pasta (such as rigatoni or penne)\n* 1 tablespoon olive oil\n* 1/2 cup chopped onion\n* 2 cloves garlic, minced\n* 2 (28-ounce) cans whole peeled San Marzano tomatoes, crushed by hand\n* 1/2 teaspoon dried oregano\n* Salt and pepper to taste\n* 1 cup fresh ricotta salata cheese, grated\n* 1/4 cup fresh basil leaves, chopped\n\nInstructions:\n\n1. Bring a large pot of salted water to a boil and cook the pasta according to the package directions. Reserve 1/2 cup of the pasta cooking water before draining.\n2. While the pasta is cooking, heat the olive oil in a large skillet over medium heat. Add the onion and cook until softened, about 5 minutes. Add the garlic and cook for 1 minute more.\n3. Add the crushed tomatoes, oregano, salt, and pepper to the skillet. Bring to a simmer and cook for 20 minutes, or until the sauce has thickened.\n4. Drain the pasta and add it to the skillet with the sauce. Stir to combine. Add some of the reserved pasta cooking water if the sauce is too thick.\n5. Serve immediately, topped with ricotta salata cheese and basil.\n\nBuon appetito!" }],
      },
    ],
  });

  const result = await chat.sendMessage("YOUR_USER_INPUT");
  const response = result.response;
  console.log(response.text());
}

runChat();
Thanks. This seems like the best answer: what used to be the "context" simply becomes the first turn of the conversation history.
Please tell me how I can build the chat history in Python?
Hey, this time the structure has changed. It is now:
{
  user:
  model:
}
Only these two roles are now allowed for the Gemini Pro chat version.
And if you are wondering where the system context is here, I get you 😉 no worries.
So what we can do is use either of the roles for system prompting: the system prompt is written in the "user" or the "model" role, and it is then fed to Gemini Pro as the "context".
There are two ways to write a system prompt (see the Python sketch after the examples):
1. Using the role "user":
{
  "user": "You are an amazing poet who writes poems on every topic sarcastically... (your system prompt, basically)",
  "model": "Okay, I am a poet. Tell me the topic and I am ready to write it sarcastically."
}
...and here goes the rest of your conversation.
or
2. Using the role "model" to write the system context:
{
  "model": "I am a poet who writes poems on any topic very sarcastically.",
  "user": "Yeah, here is the topic: 'dancer'"
}
...and here goes the rest of your conversation.
And to be honest, it's not too different. It's just like prompting in the ChatGPT interface, that's all!
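To make that concrete in Python, here is a minimal sketch using the google-generativeai SDK, with the system prompt passed as the first user/model exchange of the chat history (the API key, prompt text, and acknowledgment are just placeholders):

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")

# Only "user" and "model" roles are allowed, so the "system prompt"
# goes in as the first user turn, followed by a model acknowledgment.
chat = model.start_chat(history=[
    {
        "role": "user",
        "parts": ["You are an amazing poet who writes poems on every topic sarcastically."],
    },
    {
        "role": "model",
        "parts": ["Okay, I am a poet. Tell me the topic and I am ready to write it sarcastically."],
    },
])

response = chat.send_message("Write a poem about dancers.")
print(response.text)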
You can also try this method:

import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro")
context = "Your name is XYZ, and you are... (write the context)"
user_input = "Hello!"  # example user message
response = model.generate_content([context, user_input])
print(response.text)
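Note that generate_content is stateless, so with this approach the context has to be resent with every request; the start_chat approach above keeps it in the history for you.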
I think we can use a structured prompt instead of a chat prompt if you want to test your own chatbot integration. We can feed it context. But here I'm facing a problem: I cannot separate the two parts, the instruction and the Q&A. I want to know how I can fix it.
Another question is whether we can upload our own data files (e.g., .txt, .docx) as a custom knowledge base for integrating Gemini Pro into real-world applications.