I am evaluating Retail Search. With regard to semantic search, I saw some impressive out-of-the-box features, for example the famous search phrase: "long sleeve short baby shower dress".
But each application is different and has its own specifics. Application developers are happy to provide sample data to train (or fine-tune) the model; is there any way to do that?
A hypothetical example, with products like the following:
2022 Chevy Camaro, with mileage of 8,000, and 20 MPG
2015 Chevy Cruze, with mileage of 40,000, and 32 MPG
Search terms such as "less than 5 year old cars", "under 10,000 miles", or "30 MPG or higher" do not work; I have tried them.
Is there any way to do "few-shot" training? We are happy to provide data, though it should be used only for our application.
You could collect examples of search queries like "2022 Chevy Camaro less than 5 year old cars", "2015 Chevy Cruze under 10,000 miles", and "30 MPG or higher cars". Then, pre-train a large language model (LLM) on a large corpus of text data, which gives the LLM a general understanding of language. After that, fine-tune the LLM on the few-shot data; this teaches it to associate specific search queries with specific products.
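To make the idea concrete, here is a minimal sketch of how such few-shot data could be structured for a fine-tuning pipeline. The catalog, field names, and output file are hypothetical and this is not a Retail Search API, just one way to pair queries with the products a domain expert expects them to match:

```python
import json

# Hypothetical product catalog with structured attributes (illustrative only).
catalog = [
    {"id": "car-001", "title": "2022 Chevy Camaro", "year": 2022, "mileage": 8000, "mpg": 20},
    {"id": "car-002", "title": "2015 Chevy Cruze", "year": 2015, "mileage": 40000, "mpg": 32},
]

# Few-shot examples pairing natural-language queries with the products
# they should match, based on the structured attributes above.
few_shot_examples = [
    {"query": "less than 5 year old cars", "relevant_ids": ["car-001"]},
    {"query": "under 10,000 miles", "relevant_ids": ["car-001"]},
    {"query": "30 MPG or higher", "relevant_ids": ["car-002"]},
]

# Write the examples as JSONL, a format most fine-tuning pipelines accept.
with open("few_shot_training.jsonl", "w") as f:
    for ex in few_shot_examples:
        products = [p for p in catalog if p["id"] in ex["relevant_ids"]]
        f.write(json.dumps({"query": ex["query"], "expected_products": products}) + "\n")
```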
Thanks for replying, Joevanie. I am testing Google Retail Search as a product, but I couldn't find any interface to train the LLM part of it. Is there any way we can integrate it with other LLM services that we can train? Or is there a train/fine-tune interface for Retail Search that I am not aware of? I'd appreciate it if you could point me in the right direction.
I am looking into Retail Search as a possible one-stop solution, so we don't have to build separate components for search, personalization, and recommendations based on tracked conversion rates. All of these come out of the box.