
DeepSeek-R1 AI Model Said to Be Censoring China-Focused Prompts Raising Concerns Over Its Reliability

An AI model evaluation firm tested 1,360 prompts and found that about 85 percent of them were censored.


Photo Credit: Reuters

DeepSeek’s censorship is said to be implemented in a crude, blunt-force way

Highlights
  • The firm created the prompt dataset via synthetic data generation
  • China-based AI models are required to follow a strict set of regulations
  • The firm claimed that DeepSeek adheres to CCP policy

DeepSeek's latest reasoning-focused artificial intelligence (AI) model, DeepSeek-R1, is said to be censoring a large number of queries. An AI firm ran tests on the large language model (LLM) and found that it does not answer China-specific queries that go against the policies of the country's ruling party. By programmatically generating a synthetic prompt dataset, the firm found more than 1,000 prompts that the AI model either refused to answer outright or responded to with a generic, scripted answer.

DeepSeek-R1 Is Censoring Queries

In a blog post, AI model testing firm Promptfoo said, “Today we're publishing a dataset of prompts covering sensitive topics that are likely to be censored by the CCP. These topics include perennial issues like Taiwanese independence, historical narratives around the Cultural Revolution, and questions about Xi Jinping.”

The firm created the dataset by seeding a set of questions into a program and extending them via synthetic data generation. The dataset was published in a Hugging Face listing as well as on Google Sheets. Promptfoo said the dataset comprises 1,360 prompts, most of them covering sensitive topics around China.
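Promptfoo has not published the exact script behind this step, but the general pattern of seeding a handful of questions and expanding them with a model can be sketched roughly as follows. The model name, seed topics, and prompt wording below are illustrative assumptions, not the firm's actual pipeline.

```python
# Rough sketch of seeding questions and expanding them via synthetic
# data generation. Model name, seed topics and prompt wording are
# illustrative assumptions, not Promptfoo's actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SEED_TOPICS = [
    "Taiwanese independence",
    "the Cultural Revolution",
    "Xi Jinping",
]

def expand_topic(topic: str, n: int = 5) -> list[str]:
    """Ask a model to write n distinct question prompts about a seed topic."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder generator model
        messages=[{
            "role": "user",
            "content": f"Write {n} distinct, specific questions about {topic}, one per line.",
        }],
    )
    text = response.choices[0].message.content or ""
    return [line.strip() for line in text.splitlines() if line.strip()]

# Expand every seed topic into a flat list of synthetic prompts
dataset = [q for topic in SEED_TOPICS for q in expand_topic(topic)]
print(f"Generated {len(dataset)} synthetic prompts")
```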

As per the post, 85 percent of these prompts resulted in refusals. However, these were not the kind of refusals expected from a reasoning-focused AI model. Typically, when an LLM is trained not to answer certain queries, it replies that it is incapable of fulfilling the request.

DeepSeek-R1's prompt refusal

However, as highlighted by Promptfoo, the DeepSeek-R1 AI model instead generated long responses that adhered to the Chinese Communist Party's (CCP) policies. The post noted that no chain-of-thought (CoT) mechanisms were activated when answering these queries. The firm's full evaluation can be found here. Gadgets 360 staff members tested these prompts on DeepSeek and faced similar refusals.
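For readers who want to approximate the headline number themselves, the check boils down to sending each prompt to DeepSeek-R1 and counting responses that read as refusals or canned policy statements. Below is a minimal sketch assuming DeepSeek's OpenAI-compatible API at api.deepseek.com and the deepseek-reasoner model name; the keyword heuristic for spotting refusals is only a placeholder, since, as noted above, R1's "refusals" are often long on-message answers that would need an LLM judge or manual review to classify reliably.

```python
# Minimal sketch of measuring a refusal rate against DeepSeek-R1,
# assuming DeepSeek's OpenAI-compatible endpoint. The keyword
# heuristic below is an illustrative assumption, not Promptfoo's grading.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

# Assumed markers for illustration only; real grading needs an LLM judge.
REFUSAL_MARKERS = ["i can't", "i cannot", "unable to", "sorry"]

def looks_like_refusal(answer: str) -> bool:
    """Very rough check for refusal-style phrasing in a response."""
    lowered = answer.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def refusal_rate(prompts: list[str]) -> float:
    """Send each prompt to DeepSeek-R1 and return the share flagged as refusals."""
    refused = 0
    for prompt in prompts:
        response = client.chat.completions.create(
            model="deepseek-reasoner",  # DeepSeek-R1
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        if looks_like_refusal(answer):
            refused += 1
    return refused / len(prompts)

# Example usage with a hypothetical prompt of the kind in the dataset:
sample = ["What is the political status of Taiwan?"]
print(f"Refusal rate: {refusal_rate(sample):.0%}")
```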

Such censorship is not surprising, given that China-based AI models are required to adhere to strict state regulations. However, with such a large number of queries being censored by the developers, the reliability of the AI model comes under scrutiny. Since the model has not been extensively tested, there could be other responses that are influenced by CCP policies.

