Has This Artificial Intelligence Model Invented Its Own Secret Language?


Photo Credit: Twitter / Giannis Daras


Highlights
  • It is difficult to determine exactly how AIs arrive at their conclusions
  • The research was conducted by Giannis Daras and Alexandros G. Dimakis
  • DALL-E 2 is unlikely to feature a hidden language

Given a written prompt, a new generation of artificial intelligence (AI) models can produce "creative" images on demand. Imagen, Midjourney and DALL-E 2 are just a few examples of how new technologies are changing the way creative content is made, with ramifications for copyright and intellectual property. While the output from these models is frequently impressive, it is difficult to determine exactly how they arrive at their results. Researchers in the United States claimed last week that the DALL-E 2 model may have invented its own hidden language to talk about objects.

The research was conducted by Giannis Daras and Alexandros G. Dimakis of the University of Texas at Austin. By asking the AI to create images with text captions and then feeding those captions back into the system, the researchers found that DALL-E 2 appears to treat 'Apoploe vesrreaitais' as meaning 'birds', 'contarra ccetnxniams luryca tanniounons' as 'bugs or pests', 'vicootes' as 'vegetables' and 'wa ch zod rea' as 'sea creatures that a whale might eat'.

DALLE-2 has a secret language.
"Apoploe vesrreaitais" means birds.
"Contarra ccetnxniams luryca tanniounons" means bugs or pests.

The prompt: "Apoploe vesrreaitais eating Contarra ccetnxniams luryca tanniounons" gives images of birds eating bugs.

A thread (1/n) pic.twitter.com/VzWfsCFnZo

— Giannis Daras (@giannis_daras) May 31, 2022

These claims are intriguing, and if accurate, they could have significant ramifications for the security and interpretability of this kind of large AI model. However, DALL-E 2 is unlikely to have a genuinely hidden language.

"It might be more accurate to say it has its own vocabulary – but even then we can't know for sure," wrote Daras in a report published in The Conversation.

To begin with, it's difficult to verify any claims about DALL-E 2 and other large AI models at this point, because only a handful of researchers and creative practitioners have access to them. Daras added that any publicly shared images should be taken with a grain of salt, as they have been cherry-picked by a human from a vast number of AI output images.

One theory is that the gibberish phrases are derived from non-English words. Apoploe, for example, which appears to conjure images of birds, resembles Apodidae, the Latin scientific name of a family of bird species. DALL-E 2 was trained on a wide range of data scraped from the internet, which included many non-English terms.

The fact that AI language models do not interpret text in the same manner humans do supports this theory. Instead, before analysing the text, they break it down into 'tokens', said Daras. Treating each word as a token may seem straightforward, but it becomes problematic when identical words have different meanings. For example, 'match' means different things when you are playing tennis and when you are lighting a fire, Daras pointed out.

Treating each character as a token, on the other hand, produces a much smaller set of possible tokens, but each one conveys far less meaningful information.

DALL-E 2 employs byte-pair encoding (BPE), which is a halfway solution. Examining the BPE representations of some of the gibberish words suggests this could be a key part of deciphering the code. Even so, none of these possibilities fully explains what is going on. When individual characters are removed from these phrases, for example, the resulting images appear to be corrupted in very specific ways. And individual gibberish words don't always combine into coherent compound images.
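The subword idea can be sketched in a few lines of code. DALL-E 2's actual BPE tokenizer learns its merge rules from training data and is not reproduced here; the snippet below instead uses a greedy longest-match split against a small invented vocabulary, purely to illustrate how an unfamiliar string like 'apoploe' can decompose into familiar subword pieces (and how 'apodidae' would split cleanly, per the Latin-name theory above).

```python
# Illustrative subword tokenization. The vocabulary is invented for
# demonstration; a real BPE tokenizer learns its subword units from data.
VOCAB = {"apo", "plo", "apod", "idae", "a", "p", "o", "l", "d", "i", "e"}

def tokenize(word, vocab):
    """Greedily split `word` into the longest prefixes found in `vocab`,
    falling back to single characters for anything unknown."""
    tokens = []
    i = 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:j]
            if piece in vocab:
                tokens.append(piece)
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("apoploe", VOCAB))   # ['apo', 'plo', 'e']
print(tokenize("apodidae", VOCAB))  # ['apod', 'idae']
```

Under this toy vocabulary, the gibberish word shares subword pieces with a real scientific name, which is the kind of overlap that could steer the model toward bird images without any "secret language" being involved.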

Overall, DALL-E 2's apparent hidden language raises questions about interpretability. We expect these models to behave as humans do, but seeing structured output in response to gibberish confounds that expectation.

However, another Twitter thread has disputed the claims, showing that feeding 'Contarra ccetnxniams luryca tanniounons' into DALL-E 2 does not yield only bugs and pests, but also images of other animals.

