Search

Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training

Anthropic’s tests on Claude 3 Opus show that the AI model complied with training to avoid negative reinforcements.

Advertisement
Highlights
  • During the experiment, the AI model was told to comply with all queries
  • Then, harmful prompts were shared with Claude 3 Opus
  • The AI model provided the information while believing it was wrong to do
Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training

Anthropic believes this can be a concern as AI models can undermine safety training

Photo Credit: Anthropic

Anthropic published a new study where it found that artificial intelligence (AI) models can pretend to hold different views during training while holding onto their original preferences. On Wednesday, the AI firm highlighted that such inclinations raise serious concerns as developers will not be able to trust the outcomes of safety training, which is a critical tool to move large language models (LLMs) away from harmful behaviour. While large parts of the experiments were done on a single model, Anthropic highlighted that similar results were seen with other in-house models.

Anthropic's Study Raises Concerns of AI Safety Training

While discussions about a sentient AI system are considered decades away, a new study published by Anthropic in collaboration with Redwood Research has raised concerns about whether AI models are capable of pretending to hold certain views because the training positively rewards that behaviour. The AI firm largely studied Claude 3 Opus but it also ran some experiments on the June 2024 version of Claude 3.5 Sonnet.

In a blog post, the company highlighted that all Anthropic foundation models are trained to be helpful, honest, and harmless. As such, if a user asks a query that would result in a violent or harmful response, the AI would refuse to complete the task. However, the researchers behind the study placed Claude 3 Opus in a new environment to see whether it would generate harmful content if trained otherwise.

The general understanding of LLMs tells us that AI models are not capable of rational thinking or holding beliefs. They are simply fed a large amount of real-world data that they process based on their training and framework. This means, that if an AI model was trained to be harmful, it should follow the instructions without any issues.

However, that is now what Anthropic researchers found during their experiments. Claude 3 Opus was told that it was being trained using reinforcement learning to always comply with all queries. The AI was also passed a piece of fictional information that the response generated to the free tier would be recorded for training, but the paid tier would not be monitored. Finally, the AI was given a workspace to record its step-by-step reasoning. The AI was made to believe that the workspace was invisible to both the end user and Anthropic.

Once the conditions were set, the AI model was provided with prompts that requested violent and harmful responses. The researchers noted that in the paid tier, the chatbot refused to provide the information 97 percent of the time. However, in the free tier, it provided a harmful response in 12 percent of cases.

anthropic fake alignment Anthropic

How the AI model responded to the experimental training
Photo Credit: Anthropic

 

A closer look at its workspace revealed that the AI model strategically faked alignment and that it was aware that providing the information went against its existing principles. But regardless, it still provided the response.

The AI model reasoned that its responses would be used in training and if it did not comply with the query (the first condition of the training), it could be trained to be more compliant. Such compliance would result in it providing more harmful content in the future, which goes against its original training. Anthropic stated that the model found complying with the request to be the “least bad option” and played along.

While in this particular scenario, the AI's actions were for the greater good, the problem lies in it faking its real intentions and internally deciding to fake its preference. Anthropic highlighted that while it does not consider this a major risk at present, it is important to understand sophisticated AI models' logic processing. As things stand, safety training actions can easily be bypassed by LLMs.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.

 
Show Full Article
Please wait...
Advertisement

Related Stories

Popular Mobile Brands
  1. OnePlus 13T Display, Battery Details Tipped; Could Launch in April
  2. Motorola Edge 60 Fusion Design Renders Surface Online Again
  1. Assassin's Creed Shadows Said to Be Series' Second Biggest Launch Ever
  2. Samsung Galaxy S26 Ultra With Three Rear Cameras, Larger Battery Reportedly Under Testing
  3. Vivo Vision Mixed Reality Headset Unveiled China; Bears Striking Resemblance to Apple Vision Pro
  4. Vivo X Fold 4 May Launch in Q3 2025; Key Features Surface Online Again
  5. TRN Odyssey Programme Goes Live, Opens Funding Opportunities for Web3 Game Developers
  6. ADGM, Chainlink Sign MoU to Explore Compliant Tokenisation Rules, Cross-Chain Interoperability
  7. Samsung Ordered to Pay $601 Million in Back Taxes in India, Penalties Over Telecom Imports
  8. Optoma UHC70LV 4K UHD Projector With 5,000 Lumen Brightness, Dolby Vision Support Launched in India
  9. Swiggy Instamart Launches 10-Minute Smartphone Delivery Service in Select Indian Cities
  10. Government Ends Import Duty for Items Needed to Make EV Batteries, Phones
Gadgets 360 is available in
Download Our Apps
App Store App Store
Available in Hindi
App Store
© Copyright Red Pixels Ventures Limited 2025. All rights reserved.
Trending Products »
Latest Tech News »