Image Recognition for Image Moderation

Image moderation uses a set of labels to identify potentially inappropriate content. Messages with images that match the label list get flagged and displayed in the moderation dashboard.

The key contributions of this work are: the generation of a well-labeled obscene image dataset (GGOI) via a Pix2Pix GAN model for training deep learning models; the modification of model architectures by integrating batch normalization and a mixed pooling strategy to improve training stability; and the selection of the best-performing models for end-to-end obscene content detection.
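The mixed pooling strategy mentioned above is commonly defined as a convex combination of max- and average-pooling. The sketch below shows the idea for a single pooling window; the function name and the blending factor `alpha` are illustrative, not taken from the work itself, which applies this inside convolutional layers.

```python
def mixed_pool(window, alpha=0.5):
    """Mixed pooling over one pooling window: a convex combination
    of the max-pooled and average-pooled values, blended by alpha."""
    return alpha * max(window) + (1 - alpha) * (sum(window) / len(window))
```

With `alpha=1.0` this reduces to max-pooling and with `alpha=0.0` to average-pooling, which is why the blend can stabilize training between the two extremes.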

NSFW Recognition

The client wanted to build a solution that can rapidly scan and identify not-safe-for-work (NSFW) images and video frames. They also needed it to run in real time, even on mobile devices with low CPU specifications and no GPU support.

NSFW content recognition is a challenging task because what counts as explicit is subjective. What is offensive to some people may be acceptable, or even artistic, to others in a different context.

Using state-of-the-art image classification technology, Imagga's NSFW categorizer can instantly detect nudity and adult content in images and GIF frames. It returns a score that developers can use to flag or censor (e.g., apply a blur filter to) the target picture or frame on the fly when NSFW content is detected. This solution suits e-commerce sites, marketplaces, dating websites, and stock apps that want to automatically moderate user image or GIF uploads.
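The flag-or-censor logic described above can be sketched as a simple decision function over the returned score. The thresholds and action names here are illustrative assumptions, not Imagga's defaults:

```python
def moderation_action(nsfw_score, blur_threshold=0.5, block_threshold=0.85):
    """Map a classifier confidence score in [0, 1] to a moderation action.
    Thresholds are illustrative and should be tuned per application."""
    if nsfw_score >= block_threshold:
        return "block"   # reject the upload outright
    if nsfw_score >= blur_threshold:
        return "blur"    # serve the image with a blur filter applied
    return "allow"       # score is low enough to pass through unchanged
```

A stricter site might lower both thresholds; a stock-photo app might keep a wide "blur" band so borderline images still reach human reviewers.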

Obscene Content Detection

This capability detects explicit content such as drugs, gore, and explicit nudity in images, helping reduce manual moderation costs. It also flags messages containing potentially NSFW images for review by your team.

Explicit content detection is a deep learning solution for identifying NSFW media (images or videos). It uses skin-tone detection, pattern recognition, and image processing techniques to estimate the explicitness of a given image.

A residual network model is applied to the image to return a probability score for the likelihood of it being NSFW. This score is compared with a threshold of 0.6, chosen through extensive trial and error, to determine the appropriate level of explicitness to detect.
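The thresholding step, and one plausible way to extend it from single frames to whole videos, can be sketched as follows. The 0.6 cut-off comes from the text; the frame-aggregation rule and `min_fraction` parameter are assumptions for illustration, since the source does not describe how per-frame scores are combined:

```python
THRESHOLD = 0.6  # explicitness cut-off quoted in the text

def is_nsfw(probability, threshold=THRESHOLD):
    """Binary decision from the residual network's probability score."""
    return probability >= threshold

def video_is_explicit(frame_scores, threshold=THRESHOLD, min_fraction=0.1):
    """Hypothetical aggregation: count a video as explicit when at least
    min_fraction of its sampled frames exceed the per-frame threshold."""
    if not frame_scores:
        return False
    flagged = sum(1 for p in frame_scores if p >= threshold)
    return flagged / len(frame_scores) >= min_fraction
```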

The method was tested on a dataset of 500 explicit videos. The system correctly detected the explicitness of a video 97% of the time, with a false-positive rate of only 7%. This is a good result for a deep learning based solution.

Adult Content Detection

Adult content detection is a common use case for image recognition. It involves recognizing inappropriate imagery such as pornography, sex scenes, gore, and drugs. This capability is important for companies and schools because inappropriate content on their systems can be a liability.

The detection of explicit content in images and videos is a challenging task. Various methods are used for this task including deep learning. One method uses a bag of visual words and skin color features whereas another combines GoogleNet architecture and a recurrent neural network for feature extraction and classification.

Clarifai offers a vision API that can detect the presence of explicit content in images and videos. The API returns a probability score on a 0-to-1 scale for each content category and applies matching labels to the image, making it easy for developers to integrate this functionality into their products. Explicit content detection can be used by businesses, schools, and churches to remove pornographic images and videos from their systems.
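Turning per-category probabilities into labels, as described above, amounts to filtering the scores against a cutoff. This is a generic sketch, not Clarifai's actual client library; the category names and the 0.5 cutoff are assumptions:

```python
def apply_labels(scores, cutoff=0.5):
    """Convert per-category probabilities (0..1), as returned by a
    moderation API, into a sorted list of labels above the cutoff."""
    return sorted(category for category, p in scores.items() if p >= cutoff)
```

For example, a response scoring high on "explicit" and moderately on "drug" would receive both labels, while a low "gore" score would be dropped.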

Gore Detection

Gore detection uses a pre-trained model to flag images and video content depicting gory violence such as bloody wounds, self-harm, human skulls, and mutilated bodies. It can also identify weapons such as firearms and knives. The gore score returned by the model is compared against a user-set threshold; content above the threshold is flagged as gory and potentially inappropriate for children. Azure Computer Vision's gore detection can be integrated into a website or app for image moderation, and Clarifai's Explicit Content API is another example of this type of technology.
