Artificial intelligence (AI) tools could soon begin predicting and manipulating users based on the large pool of “intent data” they collect, a study has claimed. Conducted at the University of Cambridge, the research paper also warns that an “intention economy” could emerge in the future: a marketplace for selling the “digital signals of intent” of a large user base. Such data could be used in a variety of ways, from creating customised online ads to deploying AI chatbots that persuade users to buy a product or service, the paper warned.
AI chatbots such as ChatGPT, Gemini, and Copilot have access to a massive dataset built from users' conversations with them. Many users share their opinions, preferences, and values with these AI platforms. Researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) claim that this trove of data could be put to dangerous uses in the future.
The paper describes an intention economy as a new marketplace for “digital signals of intent”, in which AI chatbots and tools can understand, predict, and steer human intentions. The researchers claim these data points could also be sold to companies that stand to profit from them.
Researchers behind the paper believe the intention economy would be the successor to the existing “attention economy” that social media platforms exploit. In an attention economy, the goal is to keep users hooked on the platform so that a large volume of ads can be served to them. These ads are targeted based on users' in-app activity, which reveals information about their preferences and behaviour.
The intention economy, the research paper claims, could be far more pervasive in its scope and exploitation, as AI tools can gain insight into users by conversing with them directly, learning their fears, desires, insecurities, and opinions.
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition before we become victims of its unintended consequences,” Dr. Jonnie Penn, a historian of technology at LCFI, told The Guardian.
The study also claimed that, with this large volume of “intentional, behavioural, and psychological data”, large language models (LLMs) could be taught to anticipate and manipulate people. Future chatbots could recommend a movie to a user and exploit their emotional state to convince them to watch it, the paper claimed, citing the example: “You mentioned feeling overworked, shall I book you that movie ticket we'd talked about?”
Expanding upon the idea, the paper claimed that in an intention economy, LLMs could also build psychological profiles of users and then sell those profiles to advertisers. Such data could include information about a user's cadence, political inclinations, vocabulary, age, gender, preferences, opinions, and more. Advertisers could then create highly customised online ads, knowing what might encourage a person to buy a certain product.
Notably, the research paper offers a bleak outlook on how private user data could be used in the age of AI. However, given the proactive stance several governments around the world have taken in limiting AI companies' access to such data, the reality might turn out brighter than the one the study projects.