Gemini in AppSheet now in Public Preview for Enterprise users! (And AI task testing is GA!)

Hi AppSheet Community,

If you’ve recently logged into AppSheet and enabled access to preview features, some of you eagle-eyed creators might have already noticed a new option popping up in AppSheet Automations!

Now, let’s make it official: We’re thrilled to announce that Gemini in AppSheet Solutions is now available in Public Preview for Enterprise users! As announced during the AppSheet breakout session at Google Cloud Next 2025 just a few weeks ago, this powerful new capability allows AppSheet Enterprise Plus users to integrate Google's Gemini models directly into their automation workflows.

The Extract and Categorize AI Tasks in action: Quickly get info about a book from its cover and assign its genre

And, to help you build with confidence as you explore these new capabilities, we're also excited to announce that the in-editor testing capability for these AI Tasks – AI Task Step Testing – is now Generally Available (GA)!

AI Task Testing is now GA for AppSheet Enterprise customers

Why It Matters: Supercharge Your Processes & Build with Confidence

Imagine automatically extracting key information from uploaded photos, parsing complex PDFs, or categorizing incoming requests based on their content – all within your existing AppSheet apps. The new AI Task (Preview), powered by Gemini, makes this a reality. And with the now-GA AI Task Step Testing feature, you can bring these powerful AI capabilities into your apps and iterate with speed and confidence, knowing your Gemini-powered solutions work as intended.

Here’s how this combination helps you:

  • Automate Smarter: Leverage Gemini to handle tasks like data extraction and categorization, freeing up your team for higher-value work.
  • Deploy AI Safely & Scalably: Bring powerful AI capabilities into your team's processes quickly. AppSheet admins retain control over which creators can use these Gemini features, ensuring governance and compliance.
  • Keep Humans in the Loop: While Gemini accelerates tasks, you still control the workflow. Easily incorporate human review steps where needed using AppSheet's flexible interface (mobile, web, chat, Gmail).
  • Easy to Use (No Machine Learning Degree Required): We've created pre-built AI capabilities that work out-of-the-box. You can still provide additional context or instructions to tailor the results to your specific business needs, but getting started is simple.
  • Build with Confidence using GA Step Testing: We know that when configuring workflows, especially using AI, robust testing is crucial. You want to be sure you're building the right solution for your teams. That's why the now-GA in-editor AI Task Step Testing is so important. It allows you to tweak prompts and settings, see immediate results on sample data, and iterate much more quickly – building trust in your AI-powered automations before you deploy.

How to Get Access & Start Building

Important Note on Usage: During the Public Preview phase for the Gemini AI Task feature, AppSheet Enterprise Plus users who enable this feature have complimentary access to explore and learn Gemini capabilities. While we plan to track usage against entitled credit quotas when this feature becomes Generally Available (expected around June 2025), we’re giving preview users open access now to learn and explore.

Ready to try the AI Task Preview and use the GA testing feature? If you have an AppSheet Enterprise Plus plan:

  1. Open one of your existing apps in the AppSheet editor (or create a new one).
  2. Navigate to Settings > General.
  3. Scroll down and ensure "Preview new features" is turned ON. (This enables access to the AI Task).
  4. Go to the Automation section of the editor and open or create a Bot.
  5. When adding a step to a process, you'll now see the option to add an "AI Task" (Preview). Choose this to begin configuring your Gemini-powered step!
  6. Test Your Configuration (Now GA!): Inside the AI Task setup panel, use the new Testing section. Input sample data to preview results, refine your prompts or settings before saving the automation step, and use the ratings feature to help improve AppSheet’s answers in the future.

AI task testing allows you to test at the automation step level. Rating the results improves Gemini in AppSheet's performance in the future.

Use Cases (Leveraging Preview AI Tasks + GA Testing)

Here are just a few ideas you can build and test today:

Extract Information:
  • Have technicians snap a photo of equipment; use the AI Task to automatically extract the Serial Number, Model Number, or Meter Reading into your AppSheet table.
  • Process uploaded purchase orders (PDFs) or photos of shipping labels to extract PO numbers, company names, tracking numbers, or addresses.
  • Extract key details like location, date, or names from incident reports.
Categorize Records:
  • Analyze the description in employee expense submissions and automatically categorize them by type ("Travel", "Meals", "Software", "Training").
  • Read incoming facility maintenance requests and categorize them by urgency ("High", "Medium", "Low") or equipment type ("HVAC", "Plumbing", "Electrical").
  • Classify customer survey responses or feedback form submissions into types like "Bug Report", "Feature Request", "Positive Feedback", or "Billing Inquiry".

Remember, you can use the now-GA testing feature to thoroughly check how the AI Task (Preview) performs on your specific data examples and refine your instructions in the configuration before letting it run live in your automation.
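Since the Categorize task writes its result into a column with a fixed set of values, it can help to think of the pattern as "validate the model's answer against an allowed set, and route anything unexpected to a human". The following is a minimal illustrative sketch of that idea in Python – it is not AppSheet code, and the category names and function are hypothetical:

```python
# Hypothetical sketch (not AppSheet code): guard an AI-generated category
# against a fixed allowed set before saving it to a record, mirroring the
# validation an Enum column enforces.

ALLOWED_CATEGORIES = {"Travel", "Meals", "Software", "Training"}
FALLBACK = "Needs Review"  # route unexpected output to a human reviewer


def normalize_category(raw: str) -> str:
    """Trim and case-fold the model output, then map it onto the allowed set."""
    cleaned = raw.strip()
    for allowed in ALLOWED_CATEGORIES:
        if cleaned.lower() == allowed.lower():
            return allowed
    return FALLBACK  # keep humans in the loop for anything unrecognized


print(normalize_category("  travel "))      # -> Travel
print(normalize_category("Entertainment"))  # -> Needs Review
```

The fallback value is the "human in the loop" step from the list above: rather than guessing, an unrecognized answer lands in a review queue.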

Learn More & Share Your Feedback



From AI task testing, rate your result and send us your insights:

Rating your results allows us to improve Gemini in AppSheet to provide better responses.


This is just the beginning for Gemini in AppSheet, and we're incredibly excited to see what you build. For those already experimenting and those just diving in, please share your questions, thoughts, and feedback on both the AI Task preview and the GA testing capability in the comments below!

Warmly,
Rachel on behalf of the Gemini in AppSheet Team



I believe that AI tasks should be available starting from the AppSheet Core license or higher to broadly highlight AppSheet's advantage as a no-code tool. What do you all in the community think about this? Even with a limit on execution tokens, I'd like to see AI tasks accessible with the AppSheet Core license.

We looked at this, but similar to the question in another thread about video support, the cost risk given how many Core users we have was just too high to justify. In the future we might roll out a more nuanced quota system that allows us to give some capacity to Core users, and/or AI costs may get low enough that we're not worried about the risks for certain use cases, but right now it was tough to justify.

Believe me, we would love to bring this to Core users, so it's not for lack of interest. 

Thank you for the announcement on this matter!
I think AI Task will be a very powerful feature and look forward to future updates.

Yesterday I asked a question about data learning in AI Task in another post, and I received the following answer from @zito: "File and prompt information is not used for Gemini training."

However, this post states the following:


From AI task testing, rate your result and send us your insights:

Rating your results allows us to improve Gemini in AppSheet to provide better responses.


Isn't this so-called “reinforcement learning”?

I am sorry to keep repeating my question, but if AI Task really does not use user-uploaded information for training, could you please state that clearly in the official AI Task documentation?

Don't worry, there's a lot of nuance and complexity here, and we're getting these types of questions from customers across Google Cloud and Workspace.  In the AI Task documentation:

https://support.google.com/appsheet/answer/16106353

Depending on how you bought AppSheet, it's subject to either the AppSheet Generative AI terms or the Workspace Generative AI terms, which say:

"12.11 Training Restriction. Google will not use Customer Data to train or fine-tune any of its generative artificial intelligence models supporting the Google Workspace Generative AI Services without Customer's prior permission or instruction."

The thumbs up/thumbs down is powered by an internal tool called "gFeedback" (it's amazing how many tools at Google are just "g" followed by what the thing does).  When you submit information via gFeedback, we capture the thumbs up/down, and then if you choose to provide more detail, we capture that information as well (you can see what data is being submitted before you submit it).

If a user opts to submit that information, there's a notice that we can use that information to improve our services.  For AppSheet, today, we are using that additional information to build an evaluation set of actual examples so we can assess performance of the models and our prompts over time, as well as identify areas where the AI Tasks perform better or worse.  Basically we're using it for ongoing QA so we can identify regressions or improvements over time.

Reinforcement learning through human feedback (RLHF) is where humans score an evaluation set and provide updated responses, which are used to fine-tune or train a model.  We are not doing that.  

@zito 
Thank you for the detailed explanation.
It is a relief to know that the text clearly states that the data will not be used to train the model.🙂

I will register the URL you provided in my NotebookLM.😅

A lot of positive movements here, which is very welcome to see. 😃 Kudos to the team for all their hard work.🙏

If you wanted to make all of this truly usable and something app builders can trust, please give us the ability to select different language model providers and the model we want to use.  Unfortunately these Gemini models are absolutely terrible at following instructions, which makes it incredibly difficult to integrate these steps into automation because you can't trust the AI behind it.

I built an agentic workflow app a few years ago, and one of the chains that I've built out is a vanilla LLM comparison whenever somebody asks a question in the Answer Portal app I've got; it takes their question and poses it to regular LLMs (without my Appster instructions or any of the resources from the RAG system I've built) like Gemini, Claude, and the like. The instructions are simple: you are an AppSheet expert, answer the question. (There's a few more but that's the essence of the instructions, answer the question.)

  • Consistently, Gemini refuses to follow that instruction. Instead it does an analysis of the situation, and within the analysis it gives an answer – but the point is I didn't tell it to analyze the situation, I said answer the question. It's not following my instructions.
  • When I have a very basic situation like that, and the model is incapable of following even that basic of request, how can I trust this model to do anything that's got more nuance and subtlety?

If you would give us the ability to select other language model providers, app builders could then use the language models that they trust - really empowering them to bring in AI into their workflows.  Restricted to just the Gemini models makes this a very lackluster release. 

I'm sure there are plans for expansion, as this is just an initial release, and I really look forward to where this is going to go. 🙂 

Thanks for the feedback - one of the things we discussed a fair bit internally was - how much LLM do we want to expose to creators?  If we think of it as a spectrum, there's one extreme where the creator gets a black box: "shove in an image/pdf and we'll fill in fields", and the other extreme is, "hey, you tell us what model and write the prompt, set the temperature, provide some few-shot prompting and away you go".

For the latter case - from interviews with users, the vast majority of them did not want to have to think about models or prompts or any of the "ML operations" that come with what you describe.  There's the secondary consideration which is that for many non-Gemini models, there's a cost associated with them above and beyond the compute cost, which impacts our costs/margins.  Then there's the complexity with AI Tasks supporting multiple models that we have to QA and test every task against every model and manage prompts for each one.  Finally, if you're experienced enough to write a prompt, research models, experiment, etc. - you're probably experienced enough to write a webhook task that can do the things you described.

For the first case, the complete black box, it just didn't work well.  Gemini needed some guidance about what to do and where to focus.  So when we added the ability to give some additional instructions, that seemed to hit the sweet spot - Gemini got a lot smarter, users didn't have to write the whole prompt, but just give a bit of explanation, and it worked pretty well for a lot of use cases.

So that's where we landed - somewhere in between the extremes with a bias towards the "black box" side of the equation.  However, we have absolutely talked about something we have been calling the "pro task", which is what you describe: it's a task where the user (or someone in IT/a power user) writes the prompt, picks the model, tweaks everything, and then makes it available for integration into bots.  Given that we wouldn't have any control over input or output tokens for that, we would likely have the user (or an admin) connect AppSheet to Vertex with billing enabled, and the charges would be directly between the customer and Vertex. Not currently something we are building, but I would like to find a way for us to staff that in the future. 
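To make the webhook-task alternative mentioned above concrete: a creator who wants full control today could have a webhook step POST a request body like the one built below to the Gemini API's generateContent endpoint. This is only an illustrative sketch – the model id and endpoint shown here are examples, and you should confirm the current values in the official Gemini API documentation before using them:

```python
# Illustrative sketch of the "pro task" idea, done today via a webhook-style
# call: the creator owns the prompt, model choice, and temperature.
# The model id and endpoint are examples only (check the Gemini API docs).
import json

MODEL = "gemini-2.5-flash"  # example model id
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/"
    f"models/{MODEL}:generateContent"
)


def build_request(prompt: str, temperature: float = 0.2) -> str:
    """Return the JSON body a webhook task could POST to the endpoint."""
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "generationConfig": {"temperature": temperature},
    }
    return json.dumps(body)


payload = build_request("Categorize this expense: 'Flight to Denver'")
```

Note that, as described above, this route puts prompt writing, model selection, and billing entirely in the creator's hands – exactly the trade-off the "pro task" discussion is about.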

(As a side note - regarding your commentary about Gemini, if you haven't tried the latest Gemini models, I encourage you to try 2.5. Previously, in my personal life, I used Gemini, ChatGPT, and Claude, depending on the use case, but since 2.5 came out, I've found myself using Gemini more and more.  It's really good at following instructions, it's thoughtful about when it doesn't have enough information, it's got a nice neutral tone - really great.  Just my two cents)

Played about with this feature and my god, what a game changer this is! 😃 My app centers around change requests and ideas in my business area - some of which are an essay long. I've now got Gemini to insert a summary at the top of each submission and it's marvelous!

I can now see at a glance what my colleagues are wanting to suggest

The following is a suggestion.

Currently, the Additional instructions field accepts only text input. For classification tasks such as "importance" and "urgency", however, companies often want to apply their own criteria.

It would therefore be convenient to have a way to upload that knowledge as PDF files or documents.

Thank you for the suggestion. I have created a feature request internally and will discuss with our product team.

Thank you very much!
I am looking forward to it!

The "Categorize" task surely needs improvement if you want more users to adopt this functionality.

As it stands, it supports only a limited slice of the millions of possible use cases, because we cannot address real-world scenarios when the functionality is constrained like this.

Why?

We have to hard-code the options of the Enum/Enumlist column where the returned values are saved; the available values must be entered as a fixed list on the column. This limits the use cases.

When we pass fixed, hard-coded values for the available options, it is always just a handful, such as Low/Mid/High. In real-world use cases, there are many more options.

Naturally, we therefore use a "master table" to drive the dropdown of available items for the Enum/Enumlist column. With the current limitation, however, AppSheet does not accept this.

2025-05-01_18-16-23.png

If we do not supply a hard-coded list of items, we get errors.

This simply does not make sense.

Leaving the hard-coded option values empty produces an error, and adding an expression in the Valid If / Suggested Values settings to generate the available options is also rejected with errors.

2025-05-01_18-12-51.PNG

To accelerate adoption of the Categorize task:

1. Enum/Enumlist columns without hard-coded options, using expressions in Valid If / Suggested Values, should be supported.

2. REF type columns should also be supported.

REF (and/or Enum with base type Ref) could be challenging, but the AI would naturally check the LABEL value of those columns rather than the KEY value to return the values we want. Technically, it should be feasible, as this is not an AI matter but purely an AppSheet one.

If this does not happen, the Categorize AI use case remains very limited; sadly, only a small percentage of users and use cases could employ this functionality.
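The REF support requested here essentially amounts to matching the label the AI returns back to a row key in the master table before saving. A hypothetical sketch of that lookup (the table contents and function are illustrative, not AppSheet behavior):

```python
# Hypothetical sketch of the requested REF behavior: the AI returns a LABEL
# value, and we look up the corresponding KEY in a master table before
# saving. Table contents are illustrative only.
from typing import Optional

MASTER_TABLE = [  # (key, label) rows, as a master table might hold
    ("CAT-001", "HVAC"),
    ("CAT-002", "Plumbing"),
    ("CAT-003", "Electrical"),
]

# Build a case-insensitive label -> key index once.
LABEL_TO_KEY = {label.lower(): key for key, label in MASTER_TABLE}


def label_to_ref(label: str) -> Optional[str]:
    """Map an AI-returned label onto the master table's key, or None if unknown."""
    return LABEL_TO_KEY.get(label.strip().lower())
```

An unknown label returning None would naturally surface as a record needing human review rather than a bad reference being written.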

Thank you for the feedback. We are aware of this use case and are currently actively working on developing this. This will be available soon.

@aneeshatgoogle 

Thank you for your comment. It is very good to hear you are aware of this and working to deliver more advanced features for AI Task. Looking forward to hearing further news from your end!

Hi @aneeshatgoogle and AppSheet Team.

I would like to be able to get Exif information and metadata in the AI Task.

2025-05-02_09h58_20.png

However, it seems that Exif information cannot be obtained with Gemini, possibly due to a safety restriction. (With ChatGPT I could get it.)
So I understand that this would be feedback to the Gemini team.

2025-05-02_09h57_10.png
2025-05-02_09h57_36.png

Exif information is highly compatible with the AppSheet use case.
I would very much like to see this addressed in conjunction with the Gemini team.

Thanks @Rachelgmoore  for this very important information. One question: Is there a limit to the number of images Gemini can process?

Hi @Rachelgmoore .

I'd greatly appreciate your help with the following. Today I received an email from Google Cloud informing me that Gemini in AppSheet will have a charge starting at the end of July 2025, but it's unclear how this charge will be calculated, which is preventing us from using it because we don't know how much we'll be charged. Could you explain here and in a separate post how this charge is calculated?

Thanks in advance.

Thanks for reaching out!

You're likely referring to our recent email, "Gemini in AppSheet Solutions launches in late June 2025," sent to AppSheet Enterprise account admins.

The most important thing for AppSheet Enterprise users to know is that you automatically receive the full, maximum AppSheet credit entitlement with your existing Enterprise agreement, at no additional charge.

Here's a bit more detail to clarify:

  • Your Included Credit Entitlement: You're correct that AI tasks performed by Gemini features will consume credits once it's generally available (GA). However, the good news is that AppSheet Enterprise licenses automatically include a full quota of AppSheet credits as part of your existing Enterprise agreement. These credits are provided with your current Enterprise licenses at no additional charge, and the number of credits corresponds to the number of AppSheet Enterprise licenses you hold.
  • No Separate Google Workspace License Needed: Accessing Gemini in AppSheet Solutions does not require a separate Google Workspace license. 

Regarding the specifics of credit consumption and your quota:

  • Detailed Credit Information is Coming Soon: We want to be transparent about credit usage, and we'll be providing detailed visibility well before GA. Specifically, within the next week or so, we're rolling out new views in the Admin Console. These will allow you to see your organization's included credit quota, monitor credit consumption per app, and understand your overall usage.
  • Help Resources on the Way: Alongside the new Admin Console views, we'll also be publishing new help articles. These will provide comprehensive details on how your included credit quota is calculated, how credits are pooled and consumed, and will also outline options for acquiring additional credits if your organization's needs were to exceed your included entitlement in the future.

We'll announce these updates right here in the Community when they're live, so please stay tuned.

Hope this helps to clarify how your included credits work! The key takeaway is that your Enterprise licenses automatically come with a significant AI credit entitlement at no extra cost. 


Thanks a lot @Rachelgmoore for your answer. I realize this is all very new and that complete cost information will arrive in the coming days, so we'll know how much we can use it. I'll be keeping an eye out for the information.