
Safe Search Vision moderation API for detecting child porn

Hello everyone,

We just started using the Google Safe Search Vision API to provide image upload moderation on our platform.

At first, we moderated images flagged as "adult", "medical" and "racy".

Then we realized "racy" is a little too strict: it easily rejects images with fully clothed cleavage (perhaps because the image was a drawing, I'm not sure). So we started allowing "racy".
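For context, here's a minimal sketch of the check we ended up with (the `moderate_upload` helper and the threshold constant are our own policy choices, not anything the API prescribes):

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Threshold is our own policy choice; we currently deny at VERY_LIKELY.
DENY_AT = vision.Likelihood.VERY_LIKELY

def moderate_upload(image_bytes: bytes) -> bool:
    """Return True if the upload passes moderation, False if it is rejected."""
    image = vision.Image(content=image_bytes)
    annotation = client.safe_search_detection(image=image).safe_search_annotation

    # We now check only "adult" and "medical"; "racy" proved too strict
    # for fully clothed images, so we allow it.
    return all(
        likelihood < DENY_AT
        for likelihood in (annotation.adult, annotation.medical)
    )
```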

I want to know which categories "child porn" and "inappropriately dressed children" images get flagged under. We had one instance where a user uploaded child porn images that could be interpreted as a scantily clad pre-teen/baby. Will images like that be flagged as "adult" or "racy"?

I want to make sure that, by denying images moderated as "adult" (VERY_LIKELY), we will be able to detect and filter out images that are:

1. Child porn that includes sex
2. Child porn that includes no sex but inappropriately dressed children
3. Nude photos of children in any pose (even if it's something like a baby being powdered or diapered)
4. Clothed children in inappropriate poses

Furthermore, I'm curious whether there's an option to detect "offensive" images, such as violent vomiting or literal toilet/diarrhea photos, etc. Which of the existing labels would detect those kinds of images (if any)? If such images are not detectable with the Cloud Vision Safe Search API, what are our best options?

Thank you for your help, as this is very important to us.

I look forward to your reply.


Good day @jasonsaeho,

Welcome to Google Cloud Community!

I'll try to answer your questions:

1. Since it contains adult sexual content, Google Safe Search should be able to classify it as adult (VERY_LIKELY).

2. If it does not contain explicit sexual content but is still sexually suggestive, it may be classified as racy (LIKELY).

3. This may also be classified under the adult category (VERY_LIKELY).

4. Inappropriate poses may fall under the racy category (LIKELY).

Please note that interpretations of what constitutes an unacceptable pose or act may vary, so there may be times when additional moderation steps are required.

You can also check this blog, which discusses Safe Search classification: https://cloud.google.com/blog/products/ai-machine-learning/filtering-inappropriate-content-with-the-...

You can also try performing Safe Search detection on a local file; see this link to learn more: https://cloud.google.com/vision/docs/samples/vision-safe-search-detection
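A minimal sketch along the lines of that sample (the file path below is just a placeholder):

```python
import io

from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Read a local file and send its bytes to the API.
with io.open("path/to/image.jpg", "rb") as f:
    content = f.read()

response = client.safe_search_detection(image=vision.Image(content=content))
safe = response.safe_search_annotation

# Each field is a Likelihood enum, from VERY_UNLIKELY to VERY_LIKELY.
print("adult:", safe.adult.name)
print("medical:", safe.medical.name)
print("violence:", safe.violence.name)
print("racy:", safe.racy.name)
```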

Alternatively, you can train a model that classifies these pictures based on your objectives. You can check this link for more information: https://cloud.google.com/vertex-ai/docs/training-overview
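As a rough sketch of what that could look like with the Vertex AI SDK for Python (the project, bucket, and file names below are placeholders, and this assumes you have already prepared a labeled image import file in Cloud Storage):

```python
from google.cloud import aiplatform

aiplatform.init(project="your-project", location="us-central1")

# The import file maps image URIs in GCS to your own moderation labels,
# e.g. "allowed" / "rejected".
dataset = aiplatform.ImageDataset.create(
    display_name="moderation-images",
    gcs_source="gs://your-bucket/moderation_labels.csv",
    import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)

job = aiplatform.AutoMLImageTrainingJob(
    display_name="moderation-classifier",
    prediction_type="classification",
)

model = job.run(
    dataset=dataset,
    model_display_name="moderation-classifier",
    budget_milli_node_hours=8000,  # 8 node hours, the AutoML minimum
)
```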

Hope this helps!

Thank you for your reply.

I don't think we want to train a model just for moderation since there are so many competent models out there ready to use, and image upload is just a tiny portion of our business.

However, I'd like to give your team the feedback that, in my opinion, more content moderation categories are needed.

Thank you so much for your answer.

Hi Jason,

*Shameless plug warning*

Check out Hive (https://docs.thehive.ai/docs/visual-content-moderation). We have more than 50 granular classes including a solution specifically geared towards detecting this type of content. Feel free to reach out to me at nick AT thehive.ai and I'd be happy to share some additional information!

Best,

Nick 

Hey Jason! I was dealing with a similar problem a while back, and found fine-grained content moderation (with custom labels) to be pretty challenging with the Safe Search Vision API. 

It's why we're building DirectAI (https://directai.io). We let you create custom labels to detect and classify images, with zero training and no custom deployment. Just tell us what you want to find, and our API will do it out of the box. 

We've worked with clients who have needed bespoke detection & moderation tools in the past. If any of this sounds relevant, please schedule some time to chat with us. We'd love to hear how we can meet your needs.