Hi AppSheet Community,
If you've logged into AppSheet recently and enabled access to preview features, some of you eagle-eyed creators might have already noticed a new option popping up in AppSheet Automations!
Now, let's make it official: We're thrilled to announce that Gemini in AppSheet Solutions is now available in Public Preview for Enterprise users! As announced during the AppSheet breakout session at Google Cloud Next 2025 just a few weeks ago, this powerful new capability allows AppSheet Enterprise Plus users to integrate Google's Gemini models directly into their automation workflows.
The Extract and Categorize AI Tasks in action: Quickly get info about a book from its cover and assign its genre
And, to help you build with confidence as you explore these new capabilities, we're also excited to announce that the in-editor testing capability for these AI Tasks (AI Task Step Testing) is now Generally Available (GA)!
AI Task Testing is now GA for AppSheet Enterprise customers
Imagine automatically extracting key information from uploaded photos, parsing complex PDFs, or categorizing incoming requests based on their content, all within your existing AppSheet apps. The new AI Task (Preview), powered by Gemini, makes this a reality. And with the now-GA AI Task Step Testing feature, you can bring these powerful AI capabilities into your apps and iterate with speed and confidence, knowing your Gemini-powered solutions work as you intended.
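For readers curious about what happens behind a step like this: an extract or categorize task is essentially a multimodal prompt to a Gemini model. Below is a minimal, hypothetical Python sketch of the same pattern using the public Gemini REST API directly, outside AppSheet. The file name, prompt text, and JSON keys in the reply are invented for illustration; AppSheet's internal implementation is not public.

```python
import base64
import os

import requests

# Hypothetical sketch of an "extract and categorize" call: send a book-cover
# image plus short instructions to the Gemini REST API and ask for structured
# fields back. This is NOT AppSheet's implementation (which is not public);
# it only illustrates the general multimodal-prompt pattern.
API_KEY = os.environ["GEMINI_API_KEY"]  # assumes you have your own API key
URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent?key=" + API_KEY
)

with open("book_cover.jpg", "rb") as f:  # hypothetical input file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "contents": [{
        "parts": [
            {"text": "Extract the title and author from this book cover "
                     "and assign a genre. Reply as JSON with the keys "
                     "title, author, genre."},
            {"inline_data": {"mime_type": "image/jpeg", "data": image_b64}},
        ]
    }]
}

resp = requests.post(URL, json=body, timeout=60)
resp.raise_for_status()

# The generated text sits in the first candidate's first part.
print(resp.json()["candidates"][0]["content"]["parts"][0]["text"])
```

In AppSheet itself none of this plumbing is exposed: as the announcement describes, you pick the task type, point it at your columns, and optionally add instructions.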
Here's how this combination helps you:
Important Note on Usage: Currently, during the Public Preview phase for the Gemini AI Task feature, AppSheet Enterprise Plus users enabling this feature have complimentary access to explore and learn Gemini capabilities. While we plan to track usage against entitled credit quotas when this feature becomes Generally Available (expected around June 2025), we're giving preview users open access now to learn and explore.
Ready to try the AI Task Preview and use the GA testing feature? If you have an AppSheet Enterprise Plus plan:
AI task testing allows you to test at the automation step level. Rating the results improves Gemini in AppSheet's performance in the future.
Here are just a few ideas you can build and test today:
Remember, you can use the now-GA testing feature to thoroughly check how the AI Task (Preview) performs on your specific data examples and refine your instructions within the configuration before letting it run live in your automation.
From AI task testing, rate your result and send us your insights:
Rating your results allows us to improve Gemini in AppSheet to provide better responses.
This is just the beginning for Gemini in AppSheet, and we're incredibly excited to see what you build. For those already experimenting and those just diving in, please share your questions, thoughts, and feedback on both the AI Task preview and the GA testing capability in the comments below!
Warmly,
Rachel on behalf of the Gemini in AppSheet Team
I believe that AI tasks should be available starting from the AppSheet Core license or higher to broadly highlight AppSheet's advantage as a no-code tool. What do you all in the community think about this? Even with a limit on execution tokens, I'd like to see AI tasks accessible with the AppSheet Core license.
We looked at this, but similar to the question in another thread about video support, the cost risk given how many Core users we have was just too high to justify. In the future we might roll out a more nuanced quota system that lets us give some capacity to Core users, or AI costs may get low enough that we're not worried about the risks for certain use cases, but right now it was tough to justify.
Believe me, we would love to bring this to Core users, so it's not for lack of interest.
Thank you for the announcement on this matter!
I think AI Task will be a very powerful feature and look forward to future updates.
Now, yesterday I asked a question about data learning in AI Task in the following post.
I received the following answer from @zito
"File and prompt information is not used for gemini training"
However, in this post, it is stated as follows:
"From AI task testing, rate your result and send us your insights: Rating your results allows us to improve Gemini in AppSheet to provide better responses."
Isn't this so-called "reinforcement learning"?
I am very sorry to repeat my question, but if AI Task really does not use user-uploaded information for training, could you please state that clearly in the official AI Task documentation?
Don't worry, there's a lot of nuance and complexity here, and we're getting these types of questions from customers across Google Cloud and Workspace. In the AI Task documentation:
https://support.google.com/appsheet/answer/16106353
Depending on how you bought AppSheet, it's subject to either the AppSheet Generative AI terms or the Workspace generative AI terms, which say:
"12.11 Training Restriction. Google will not use Customer Data to train or fine-tune any of its generative artificial intelligence models supporting the Google Workspace Generative AI Services without Customer's prior permission or instruction."
The thumbs up/thumbs down is powered by an internal tool called "gFeedback" (it's amazing how many tools at Google are just "g" followed by what the thing does). When you submit information via gFeedback, we capture the thumbs up/down, and then if you choose to provide more detail, we capture that information as well (you can see what data is being submitted before you submit it).
If a user opts to submit that information, there's a notice that we can use that information to improve our services. For AppSheet, today, we are using that additional information to build an evaluation set of actual examples so we can assess performance of the models and our prompts over time, as well as identify areas where the AI Tasks perform better or worse. Basically we're using it for ongoing QA so we can identify regressions or improvements over time.
Reinforcement learning from human feedback (RLHF) is where humans score an evaluation set and provide updated responses, which are used to fine-tune or train a model. We are not doing that.
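To make that distinction concrete, here is a hypothetical sketch of the evaluation-set approach described above: stored feedback examples are replayed against the current model and prompt and scored, with no fine-tuning or weight updates anywhere in the loop. All names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical illustration of the difference described above: feedback is
# used as an EVALUATION set for ongoing QA, not for RLHF. Nothing in this
# loop fine-tunes a model or updates weights; stored examples are simply
# replayed and scored so regressions show up between releases.

@dataclass
class FeedbackExample:
    task_input: str       # what the user sent to the AI Task
    approved_output: str  # the answer the user marked as good

def run_current_task(task_input: str) -> str:
    """Stand-in for calling the current model + prompt (hypothetical)."""
    raise NotImplementedError

def pass_rate(examples: list[FeedbackExample]) -> float:
    """Replay every stored example and report the fraction that still pass."""
    passed = sum(
        run_current_task(ex.task_input).strip() == ex.approved_output.strip()
        for ex in examples
    )
    return passed / len(examples)

# A drop in pass_rate between model or prompt versions flags a regression;
# a rise confirms an improvement. No training data is derived from it.
```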
@zito
Thank you for the detailed explanation.
It is a relief to know that the text clearly states that the data will not be used to train the model.
I will register the URL you provided in my NotebookLM.
A lot of positive movements here, which is very welcome to see. Kudos to the team for all their hard work.
If you wanted to make all of this truly usable and something app builders can trust, please give us the ability to select different language model providers and the model we want to use. Unfortunately these Gemini models are absolutely terrible at following instructions, which makes it incredibly difficult to integrate these steps into automation because you can't trust the AI behind it.
I built an agentic workflow app a few years ago, and one of the chains I've built out is a vanilla-LLM comparison: whenever somebody asks a question in my Answer Portal app, it takes their question and poses it to regular LLMs (without my Appster instructions or any of the resources from the RAG system I've built) like Gemini, Claude, and the like. The instructions are simple: you are an AppSheet expert, answer the question. (There are a few more, but that's the essence of the instructions: answer the question.)
If you would give us the ability to select other language-model providers, app builders could then use the models they trust, really empowering them to bring AI into their workflows. Being restricted to just the Gemini models makes this a very lackluster release.
I'm sure there are plans for expansion, as this is just an initial release, and I really look forward to where this is going to go.
Thanks for the feedback - one of the things we discussed a fair bit internally was - how much LLM do we want to expose to creators? If we think of it as a spectrum, there's one extreme where the creator gets a black box: "shove in an image/pdf and we'll fill in fields", and the other extreme is, "hey, you tell us what model and write the prompt, set the temperature, provide some few-shot prompting and away you go".
For the latter case - from interviews with users, the vast majority of them did not want to have to think about models or prompts or any of the "ML operations" that come with what you describe. There's the secondary consideration which is that for many non-Gemini models, there's a cost associated with them above and beyond the compute cost, which impacts our costs/margins. Then there's the complexity with AI Tasks supporting multiple models that we have to QA and test every task against every model and manage prompts for each one. Finally, if you're experienced enough to write a prompt, research models, experiment, etc. - you're probably experienced enough to write a webhook task that can do the things you described.
For the first case, the complete black box, it just didn't work well. Gemini needed some guidance about what to do and where to focus. So when we added the ability to give some additional instructions, that seemed to hit the sweet spot - Gemini got a lot smarter, users didn't have to write the whole prompt, but just give a bit of explanation, and it worked pretty well for a lot of use cases.
So that's where we landed - somewhere in between the extremes with a bias towards the "black box" side of the equation. However, we have absolutely talked about something we have been calling the "pro task", which is what you describe: it's a task where the user (or someone in IT/a power user) writes the prompt, picks the model, tweaks everything, and then makes it available for integration into bots. Given that we wouldn't have any control over input or output tokens for that, we would likely have the user (or an admin) connect AppSheet to Vertex with billing enabled, and the charges would be directly between the customer and Vertex. Not currently something we are building, but I would like to find a way for us to staff that in the future.
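Until a "pro task" exists, the webhook route mentioned above is a workable pattern for experienced builders. Here is a minimal, hypothetical sketch of a small service that a bot's webhook step could call: it forwards a fully custom prompt to a Gemini model and returns the answer for a later step to write back. The endpoint path, payload fields, and model name are all assumptions for illustration, not AppSheet features.

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

GEMINI_URL = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    "gemini-1.5-flash:generateContent?key=" + os.environ["GEMINI_API_KEY"]
)

@app.post("/appsheet-task")  # hypothetical path a bot's webhook step calls
def appsheet_task():
    # Assumed webhook body, configured by you in the bot's webhook step:
    # {"prompt": "<your full custom prompt>", "record_text": "<row data>"}
    payload = request.get_json(force=True)
    body = {
        "contents": [{
            "parts": [{
                "text": payload["prompt"] + "\n\n" + payload["record_text"]
            }]
        }]
    }
    resp = requests.post(GEMINI_URL, json=body, timeout=60)
    resp.raise_for_status()
    answer = resp.json()["candidates"][0]["content"]["parts"][0]["text"]
    # Return the model's answer; a later bot step (or a call to the
    # AppSheet REST API) can write it back to the record.
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(port=8080)
```

This puts model choice, prompting, and billing entirely in the builder's hands, which is exactly the trade-off described above.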
(As a side note - regarding your commentary about Gemini, if you haven't tried the latest Gemini models, I encourage you to try 2.5. Previously, in my personal life, I used Gemini, ChatGPT, and Claude, depending on the use case, but since 2.5 came out, I've found myself using Gemini more and more. It's really good at following instructions, it's thoughtful about when it doesn't have enough information, it's got a nice neutral tone - really great. Just my two cents)
Played about with this feature and my god, what a game changer this is! My app centers around change requests and ideas in my business area - some of which are an essay long. I've now got Gemini to insert a summary at the top of each submission and it's marvelous!
I can now see at a glance what my colleagues want to suggest.
The following is a suggestion.
Currently, the Additional instructions field is text-based, but in "importance" and "urgency" classification tasks, for example, a company often wants to classify according to its own criteria.
Therefore, I think it would be more convenient if there were a way to upload that knowledge as PDF files or documents.
Thank you for the suggestion. I have created a feature request internally and will discuss it with our product team.
Thank you very much!
I am looking forward to it!
The "Category" task surely needs to be improved if you want more users to adopt this functionality.
As it stands, it supports only a limited set of use cases out of millions, because we cannot cover real-world scenarios when the functionality is as constrained as it is now.
Why?
We have to "HARD CODE" the Enum / Enumlist columns where the returned values are saved. We have to put the available values for the options of the selectable items for enum/enumlist column where the returned values are saved. This limits the use cases.
When we pass the fixed/hard-coded values for the availavble options , it is always just a few of available options, such as Low/Mid/High etc. However, in the real-world use cases, the "Options" are more.
Therefore, naturally we put the "master table" to make a dropdown list for the available items for Enum/Enumlist. However, with the current limitation, this would not be accepted by AppSheet.
Once we do not put any hard-coded list of the items, we get turned down (errors).
This does not utterly make sense.
Leaving the hard coded values for option null, then we got such error. Then we test to add "Expression" for the valid if/suggested values part to generate the available options, but it is also turned down (Errors).
To accelerate adoption of the "Category" task:
1. Enum/EnumList columns without hard-coded options should be supported, with Valid If / Suggested Values expressions generating the options.
2. Ref-type columns should also be supported.
Ref (or Enum with base type Ref) could be challenging, but the AI would naturally check the LABEL value of those columns instead of the KEY value to return the values we want. Technically it should be feasible, as this is not an AI problem but purely an AppSheet one.
If this does not happen, the Category task's usefulness is quite limited; sadly, only a small fraction of users and use cases will be able to employ it.
Thank you for the feedback. We are aware of this use case and are actively working on it. This will be available soon.
Thank you for your comment. Very good to hear that you are aware of this and working to deliver more advanced features for AI Task. Looking forward to further news from your end!
Hi @aneeshatgoogle and AppSheet Team.
I would like to be able to get EXIF information and metadata in an AI Task.
However, it seems that EXIF information cannot be obtained with Gemini, possibly for safety reasons. (With ChatGPT I could get it.)
So I understand this would be feedback for the Gemini team.
EXIF metadata is a natural fit for AppSheet use cases.
I would very much like to see this addressed in conjunction with the Gemini team.
Thanks @Rachelgmoore for this very important information. One question: Is there a limit to the number of images Gemini can process?
Hi @Rachelgmoore.
I'd greatly appreciate your help with the following. Today I received an email from Google Cloud saying that Gemini in AppSheet will carry a charge starting at the end of July 2025, but it's unclear how that charge will be calculated. Not knowing how much we'll be charged is preventing us from using the feature. Could you explain, here and in a separate post, how this charge is calculated?
Thanks in advance.
Thanks for reaching out!
You're likely referring to our recent email, "Gemini in AppSheet Solutions launches in late June 2025," sent to AppSheet Enterprise account admins.
The most important thing for AppSheet Enterprise users to know is that you automatically receive the full, maximum AppSheet credit entitlement with your existing Enterprise agreement, at no additional charge.
Here's a bit more detail to clarify:
Regarding the specifics of credit consumption and your quota:
We'll announce these updates right here in the Community when they're live, so please stay tuned.
Hope this helps to clarify how your included credits work! The key takeaway is that your Enterprise licenses automatically come with a significant AI credit entitlement at no extra cost.
Thanks a lot @Rachelgmoore for your answer. I realize this is all very new and that we'll only have complete cost information in the coming days, so we'll know then how much we can use it. I'll be keeping an eye out for the information.