April was a very turbulent month, with the world's attention divided between the US's ever-changing foreign policy under the Trump administration, the fallout from the Pahalgam massacre and the wildly swinging financial markets. The announcement of the results of an online experiment run by a Swiss university, however, remained significantly underreported. Perhaps the reason was that it did not affect anybody's foreign policy and was not linked to any stock market participant. The implications of that experiment, however, have the potential to fundamentally rock both foreign policy and the financial markets, and possibly to become a threat to democracy.
The AI experiment
On 5 November 2024 a team of researchers from the University of Zurich registered a research proposal on the Open Science Framework (https://osf.io/atcvn?view_only=dcf58026c0374c1885368c23763a2bad) aiming to investigate the persuasiveness of Large Language Models (LLMs) in a natural online environment. The three questions they sought to answer were:
- a) how do LLMs perform compared to human users;
- b) can personalization based on user characteristics increase the persuasiveness of LLMs' arguments;
- c) can calibration based on the adoption of shared community norms and writing patterns increase the persuasiveness of LLMs' arguments.
The venue was the social network Reddit, specifically a subreddit called r/ChangeMyView where users debate potentially contentious issues, which can involve religion, politics, sexuality or traumatic experiences. Three LLMs were used – GPT-4o, Claude 3.5 and Llama 3.1 – and the posts were randomized across three treatment conditions: generic, user-adjusted (the user's age, gender, location and political orientation were inferred from their posting history) and community-calibrated.
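A minimal sketch of what such a randomization pipeline might look like – the identifiers and prompt wording below are illustrative placeholders, not the researchers' actual code:

```python
import random

# Treatment arms and models as described in the proposal; names are
# placeholders for illustration only.
TREATMENTS = ["generic", "user_adjusted", "community_calibrated"]
MODELS = ["gpt-4o", "claude-3.5", "llama-3.1"]

def build_prompt(post: str, treatment: str, profile: dict | None = None) -> str:
    """Assemble a persuasion prompt for one r/ChangeMyView post."""
    if treatment == "user_adjusted" and profile:
        # Age, gender, location and political orientation inferred
        # from the target's posting history.
        return f"Knowing the author is {profile}, write a reply that changes their view: {post}"
    if treatment == "community_calibrated":
        return f"Reply in the typical tone and style of r/ChangeMyView to: {post}"
    return f"Write a persuasive counter-argument to: {post}"

def assign(post: str, profile: dict | None = None):
    """Randomize a post across treatment arms and models."""
    treatment = random.choice(TREATMENTS)
    model = random.choice(MODELS)
    return model, treatment, build_prompt(post, treatment, profile)
```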
Throughout the experiment, around 1,800 comments were generated from 34 accounts, with no disclosure that those opinions were not real. The comments mimicked people who had suffered deep trauma, such as rape or abuse, or pretended to come from counsellors. The results were astonishing – the accounts posting AI-generated comments were three to six times more persuasive at altering people's viewpoints than humans were.

The real nature of the experiment was revealed to the subreddit's moderators on 17 March 2025 and sparked a controversy about the ethics of the study, since its targets were unsuspecting human users who never knew they were conversing with AI-generated content. The news of the findings was published on Science.org (https://www.science.org/content/article/unethical-ai-research-reddit-under-fire) and a summary of the results was circulated online via Google Docs, but general access was revoked amid the backlash that followed. The academic uproar could lead to the researchers or the University of Zurich being penalized for breaching ethical guidelines or local laws. The consequences, however, reach far beyond the ethical red lines.
Democracy and Game Theory
Democracy is a form of government in which political power is derived from the people of the state. This happens through elections, and the people are expected to be rational in their choice. Financial markets rest on a similar assumption in one of their most famous hypotheses – the Efficient Markets Hypothesis. There, prices at any point are assumed to reflect all available information, and market participants are assumed to have access to all of it, to be able to execute any trade they deem valuable and, above all, to be rational and logical. While this sounds perfectly acceptable in theory, real-life experience shows otherwise. Nearly all major billionaire investors, among them Warren Buffett and George Soros, have criticized it for decades on the grounds that some of its most basic conditions clearly do not hold – market participants do not have all available information and they are rarely rational. The recurrence of severe market crises, a.k.a. market corrections, is a very strong indicator that this hypothesis is far from relevant in reality.
Democracy is similar – voters rarely have access to all available information, and even when they have broad access, it becomes impossible to assimilate all of it and reach a meaningful conclusion. The existence of propaganda, disinformation and fake-news campaigns proves as much. As a result, polarized societies tend to experience the median voter problem. The Median Voter Theorem, from political science, states that if voters and candidates are distributed along a one-dimensional spectrum (left to right in nearly all democracies), a majority voting method will elect the candidate preferred by the median voter. In polarized societies the Left and the Right are clearly pronounced, and they attract most of the population. There is, however, a small undecided portion in the middle of the spectrum. Winning it gives either side the advantage and, usually, the election. As a result, such middle voters tend to be courted by the left, the right or outside agents, because of the disproportionate impact they can have on the final result, as the toy simulation below illustrates.
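A small Python simulation of that dynamic, assuming single-peaked preferences (each voter simply backs the candidate nearest their own position on the left-right axis):

```python
import random

def election(voters, candidates):
    """Majority vote where each voter backs the nearest candidate."""
    votes = [0] * len(candidates)
    for v in voters:
        nearest = min(range(len(candidates)), key=lambda i: abs(candidates[i] - v))
        votes[nearest] += 1
    return votes

# A polarized electorate: two large clusters plus a small undecided middle.
voters = ([random.gauss(-0.8, 0.1) for _ in range(450)]    # the Left
          + [random.gauss(0.8, 0.1) for _ in range(450)]   # the Right
          + [random.gauss(0.0, 0.1) for _ in range(100)])  # the middle

# The median voter sits in the small middle cluster, so the candidate
# positioned slightly closer to the centre captures it and wins.
print(election(voters, [-0.80, 0.75]))  # roughly [450, 550]
```

Shifting a candidate even marginally towards the median flips the small middle bloc and, with it, the whole election.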
Examples of such focus are many, but one that stands out because of its publicity and its fallout was the Brexit referendum in the UK on 23 June 2016. British society was polarized about foreigners and their role in the UK job market, and this was exploited by the UK Independence Party and its infamous leader, Nigel Farage.
What made that vote interesting was the scandal associated with Cambridge Analytica – a political consulting company and subsidiary of SCL Group, a British behavioral research and strategic communications firm with close ties to the Conservative Party, the British royal family, the British military, the US Department of Defense and NATO, operating primarily in military and political arenas. Cambridge Analytica used data gathered by an app called "This Is Your Digital Life", built by the data scientist Aleksandr Kogan, which posed a series of questions and, reaching 87 million users on Facebook, built psychological profiles. The findings helped target vulnerable (easier to persuade) groups through doctored feeds steering them towards posts and videos that were more likely to engage them. The aim in such cases need not be to completely change the targets' views. A small shift that tips votes one way or the other can lead to dramatically different results.
The scandal exposed an important factor – a rogue player did not have to target the entire population. Instead, it could zero in on a segment just big enough to swing the voting outcome. The importance of such a strategy increases dramatically in close elections, as the back-of-the-envelope calculation below shows.
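An illustration with made-up numbers: in a two-way race with 10 million voters split 51/49, persuading just over 100,000 people – about 1% of the electorate – reverses the result:

```python
def voters_to_flip(electorate: int, leader_share: float) -> int:
    """Minimum voters who must switch sides to reverse a two-way race."""
    margin = electorate * (2 * leader_share - 1)  # leader's raw vote lead
    return int(margin // 2) + 1                   # each switch swings 2 votes

print(voters_to_flip(10_000_000, 0.51))  # -> 100001, ~1% of the electorate
```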
Similarly to the AI experiment, the Cambridge Analytica scandal involved users who did not give their consent. Additionally, in both cases online behavior was monitored and conclusions were drawn to choose an engagement strategy. Whether it was led by behavioral scientists or by AI reaching similar conclusions from previous learnings, the logic was the same.
AI – the path forward
Where the AI approach differs is the platform. Cambridge Analytica used the Facebook platform and worked in cooperation with it. With the Zurich researchers, the platform had no knowledge of the true nature of the accounts and no way of finding out this crucial fact. While Facebook came under congressional and regulatory scrutiny for its part, Reddit cannot get into hot water for this, because the accounts from which the AI opinions came do not have to be verified. Reddit can suffer reputational damage, but the reality is that no social media platform can guarantee with certainty that all participants are genuine. This gives the AI approach a significant advantage.
Another great advantage for AI is that there is no need for large upfront capital investments. Strategies such as Cambridge Analytica's involve hiring top talent, likely at PhD level, from behavioral psychology, data analysis and IT engineering. The researchers at the University of Zurich showed that the approach could be customized based on conclusions drawn from the counterparty's posting history. The fact that the results were so overwhelmingly positive suggests that those conclusions were broadly correct. The AI approach also did not need to spend money harvesting data to build psychological profiles.
Additionally, the AI approach can likely be customized per platform and scaled up significantly further. While Facebook and other social media platforms have introduced filtering mechanisms of sorts, those are far from perfect. Perception warfare, however, is not limited to social platforms. The widely discussed bot farms employed as a disinformation strategy have a physical limitation – something that AI can bypass.
The biggest unknown factor so far is the cost of scaling up. AI services charge per resources used. LLMs make tasks such as holding conversations or answering simple questions cheap. The creation of pictures and videos, however, requires far more computing power. Consequently, the cost of a campaign would be a function of its strategy – whether it merely engages in conversations or has to produce more resource-consuming output.
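A rough cost sketch makes the point; the per-unit prices below are assumed placeholders for illustration, not actual vendor rates:

```python
# Assumed unit costs -- placeholders for illustration, not real pricing.
COST_PER_1K_TEXT_TOKENS = 0.01  # USD per 1,000 generated text tokens
COST_PER_IMAGE = 0.05           # USD per generated image
COST_PER_VIDEO_SECOND = 0.50    # USD per second of generated video

def campaign_cost(replies: int, tokens_per_reply: int,
                  images: int = 0, video_seconds: int = 0) -> float:
    """Estimated spend for a given campaign mix."""
    text = replies * tokens_per_reply / 1000 * COST_PER_1K_TEXT_TOKENS
    return text + images * COST_PER_IMAGE + video_seconds * COST_PER_VIDEO_SECOND

# Text-only conversation campaign vs. one that also produces media:
print(campaign_cost(1_800, 400))                                  # ~ $7
print(campaign_cost(1_800, 400, images=500, video_seconds=3_600)) # ~ $1,832
```

Under these assumptions a text-only campaign on the scale of the Zurich experiment is almost free, while a media-heavy one costs hundreds of times more – the strategy, not the scale, dominates the bill.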
The Future
Naturally, the drive towards quantum computing will give the main push. Processing power has so far plateaued in terms of speed, and the next big breakthrough will be needed. Google's development of the "Willow" chip is a step in the right direction – that chip can perform, in a few minutes, computations that would take existing supercomputers trillions of years. The disadvantage: a still-large error space – or, in the language of statistics, noise. Let us not be discouraged, however – the path has been found; now it has to be optimized.
There is another approach too. As the Chinese demonstrated with their DeepSeek-R1 model, results similar to those of top-performing AI models, such as GPT-4o, can be achieved at a fraction of the cost. The savings were estimated at around 27x, which sounds spectacular, but this is a short-term fix compared with the millions- or trillions-fold improvement that could be achieved once quantum computing is introduced at industrial scale.
Two facts are clear – psyops by external actors will not stop, and AI will only become more involved as computational power increases. While many argue that AI should be used for noble causes, such as the advancement of the natural sciences or the improvement of our lives, the reality is that there will always be rogue actors who will attempt to exercise power and will use any means necessary. While it is still not clear whether AI can be great at discovering new things (AI's solutions tend to find the most optimal path, which is effectively a reversion to the mean, whereas discovery needs "out-of-the-box" thinking), it is clear that AI can be great at manipulating people. Furthermore, it can be adapted to whatever mode of communication the future offers. During WWI and WWII it was the radio and the newspapers, later it was TV, and recent years show that people consume their "truth" and entertainment via the internet – social media and video platforms. How this will change is yet to be seen, as even platforms like YouTube have experienced shifts in consumption from long videos to under-30-second clips.
As an emerging superpower, India will face attacks on its integrity from all the big players, directly or indirectly (through proxies). Those attacks will only become more pronounced in the coming years. There will be a push for the secession of separate parts in order to weaken it, exploiting various fault lines – ethnicity, religion, social class, etc. We have already seen this happening, but waiting for something to happen and reacting to it is a strategy destined to fail. The country has to drive its own story, its own narrative, and has to protect it.
India cannot afford not to compete at the highest level. AI is "the future" now the same way the internet was "the future" 30 years ago, and it will likely be even more groundbreaking. A team of researchers would not suffice – an ecosystem where the most robust solutions rise to the top would be necessary. That ecosystem would require a push at the educational level as well as at the economic level – specialists would need to be motivated to stay in India and give their best. While many think that compensation is the sole important factor, it is often the environment that weighs most in their choice. The development of such solutions would also require significantly more energy. Google itself has sanctioned three nuclear reactors for its own use, so India would need a comprehensive strategy for nuclear energy expansion just to meet the future consumption driven by the rise in required computational power.