Hello, I have a problem with the `text-bison@002` model: as you can see in the code below, I try to use a different model (`gemini-1.0-pro`), but by default LangChain keeps calling `text-bison@002` and does not let me change the model. How can I make it use the model I actually specify? This is the error I get:
FailedPrecondition: 400 Project `xxxxxxxxxxxxxx` is not allowed to use Publisher Model `projects/xxxxxxxxxxxxxx/locations/us-central1/publishers/google/models/text-bison@002`
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Project xxxxxxxxxxxxxxis not allowed to use Publisher Model `projects/xxxxxxxxxxxxxx/locations/us-central1/publishers/google/models/text-bison@002`"
debug_error_string = "UNKNOWN:Error received from peer ipv4:74.125.139.95:443 {created_time:"2024-10-24T18:26:01.653852977+00:00", grpc_status:9, grpc_message:"Project xxxxxxxxxxxxxxis not allowed to use Publisher Model `projects/xxxxxxxxxxxxxx/locations/us-central1/publishers/google/models/text-bison@002`"}"
>
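For what it's worth, a quick way to check which publisher model the wrapper will actually call (this is a diagnostic sketch of my own, not part of the lab; it assumes the `VertexAI` wrapper exposes a `model_name` field, which I believe it does in this langchain version):
from langchain.llms import VertexAI
# Diagnostic sketch (my own addition, not from the lab): instantiate the wrapper
# the same way the lab code does and print which model it will actually call.
llm = VertexAI(model="gemini-1.0-pro", project="xxxxxxxx")
print(llm.model_name)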
I am using a "hands-on" on "Chain of Thought - Self Consistency"
!pip install --upgrade --user langchain==0.0.310 \
    google-cloud-aiplatform==1.35.0 \
    prettyprinter==0.18.0 \
    wikipedia==1.4.0 \
    chromadb==0.3.26 \
    tiktoken==0.5.1 \
    tabulate==0.9.0 \
    sqlalchemy-bigquery==1.8.0 \
    google-cloud-bigquery==3.11.4
# LangChain on Vertex AI
from langchain.llms import VertexAI
from operator import itemgetter
from langchain.prompts import PromptTemplate
from langchain.schema import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
from IPython.display import display, Markdown
import pandas as pd
project_id = "xxxxxxxx"
model_name = "gemini-1.0-pro"
print(f"Using model: {model_name}")
question = """The cafeteria had 23 apples.
If they used 20 to make lunch and bought 6 more, how many apples do they have?"""
context = """Answer questions showing the full math and reasoning.
Follow the pattern in the example.
"""
one_shot_exemplar = """Example Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls
each is 6 tennis balls. 5 + 6 = 11.
The answer is 11.
Q: """
planner = (
    PromptTemplate.from_template(context + one_shot_exemplar + " {input}")
    | VertexAI(model=model_name, project=project_id)
    | StrOutputParser()
    | {"base_response": RunnablePassthrough()}
)
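# Three answer chains at different temperatures, to sample several reasoning paths (self-consistency)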
answer_1 = (
    PromptTemplate.from_template("{base_response} A: 33")
    | VertexAI(model=model_name, project=project_id, temperature=0, max_output_tokens=400)
    | StrOutputParser()
)
answer_2 = (
    PromptTemplate.from_template("{base_response} A:")
    | VertexAI(model=model_name, project=project_id, temperature=0.1, max_output_tokens=400)
    | StrOutputParser()
)
answer_3 = (
    PromptTemplate.from_template("{base_response} A:")
    | VertexAI(model=model_name, project=project_id, temperature=0.7, max_output_tokens=400)
    | StrOutputParser()
)
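# Final responder: gather the three results into a single markdown summary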
final_responder = (
    PromptTemplate.from_template(
        "Output all the final results in this markdown format: Result 1: {results_1} \n Result 2: {results_2} \n Result 3: {results_3}"
    )
    | VertexAI(model=model_name, project=project_id, max_output_tokens=1024)
    | StrOutputParser()
)
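# Full chain: the planner's base_response feeds the three answer chains, then the final responder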
chain = (
    planner
    | {
        "results_1": answer_1,
        "results_2": answer_2,
        "results_3": answer_3,
        "original_response": itemgetter("base_response"),
    }
    | final_responder
)
answers = chain.invoke({"input": question})
display(Markdown(answers))