Meta's artificial intelligence (AI) content detection tool on Instagram has been spotted marking real images with a ‘Made with AI' label. One such incorrectly labelled post was from the official Instagram account of Kolkata Knight Riders, which recently won the Indian Premier League (IPL) 2024 cricket tournament. One of the images posted by the account, showing the team lifting the trophy, was labelled by the platform as AI-generated. Several photographers on the platform have reported the same issue.
In February, Meta said it was in the process of introducing an AI-generated content detection feature that would protect users from misinformation and flag instances of deepfakes (AI-generated or digitally altered images and videos made to resemble another individual, location, or event). The feature recently went live on Instagram, and it appears to also be labelling real photos as AI-generated content. At present, these labels can only be seen on the iOS and Android apps and not on the web.
While KKR's photo is one of the most high-profile instances of this error, users have called out many other such mislabels. Among them is former White House photographer Pete Souza, who posted a photo from an old basketball game.
After the incorrect label was added, he edited the caption to write, “I'm not clear why Instagram is using the ‘made with AI' on my post. There is no AI with my photos.” He also highlighted that he was unable to remove the label, as the platform kept adding it back.
Frustrated users have also begun flooding Threads, Meta's text-based social media platform, tagging Instagram head Adam Mosseri to highlight the issue. One user said, “Not one single photographer nor artist on all of Facebook and Instagram has the first clue what triggers the ‘Made with AI' label. Even though Mosseri clarified all they're doing is reading C2PA labels, nobody understands how to avoid it.”
Earlier, Meta's President of Global Affairs Nick Clegg had said that the company was working with “industry partners to align on common technical standards that signal when a piece of content has been created using AI.” He also claimed that the detection tool could correctly label images generated with tools from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
However, the implementation of the feature appears to be flawed. In a report, PetaPixel found that removing even a tiny speck from an image using Adobe's AI-powered Generative Fill gives it the ‘Made with AI' tag, likely because Photoshop embeds Content Credentials (C2PA) metadata when its generative AI features are used. Non-generative tools such as the Spot Healing Brush or Clone Stamp did not add the label, despite producing the same result.
The publication also found that when a previously flagged image was opened again in Photoshop, copy-pasted onto a blank document, and saved, the AI label did not appear, presumably because the process strips the provenance metadata.
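This behaviour is consistent with a check that looks for provenance metadata embedded in the file rather than analysing the pixels themselves. The sketch below is a rough heuristic, not Meta's actual detection logic: it scans an image's raw bytes for markers commonly associated with AI provenance, such as a C2PA Content Credentials manifest or the IPTC ‘trainedAlgorithmicMedia' digital source type. The marker list and function names are illustrative assumptions.

import sys

# Illustrative byte patterns associated with AI-provenance metadata
# (assumed for this sketch): C2PA manifests are stored in JUMBF boxes
# ("jumb", "c2pa"), and IPTC uses "trainedAlgorithmicMedia" as the
# digital source type for generative-AI imagery.
MARKERS = [b"c2pa", b"jumb", b"trainedAlgorithmicMedia"]

def has_ai_provenance_metadata(path: str) -> bool:
    # Crude check: report True if any known marker string appears anywhere
    # in the file. A real implementation would parse the metadata properly.
    with open(path, "rb") as f:
        data = f.read()
    return any(marker in data for marker in MARKERS)

if __name__ == "__main__":
    for image_path in sys.argv[1:]:
        found = has_ai_provenance_metadata(image_path)
        print(f"{image_path}: {'provenance markers found' if found else 'no markers found'}")

Under a check like this, an image edited with Generative Fill would be flagged because Photoshop writes such markers into the file, while copy-pasting the pixels into a new document and saving produces a file without that metadata, which would explain why the label disappears.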
Meta spokesperson Kate McLaughlin told The Verge that the company is taking the recent user feedback into account and evaluating its approach. “We rely on industry standard indicators that other companies include in content from their tools, so we're actively working with these companies to improve the process so our labeling approach matches our intent,” McLaughlin said.