Major bot networks promote anti-Israel influencers on X, report finds

Cyabra found that 25% of the users interacting with and boosting prominent anti-Israel influencers such as Motaz Azaiza, Anastasia Maria Loupis, and Jackson Hinkle are fake.

Hamas bot posts promoting the narrative that taking dozens of Israeli civilians hostage was justified. Posts like this had a potential reach of 230,000,000 views. (photo credit: Cyabra)

A report by disinformation detection firm Cyabra found that, on average, 25% of the users interacting with popular anti-Israel influencers on the social media platform X are fake, and that a quarter of those fake accounts were created after October 7.

Some of the main accounts promoted by these bot networks include Jackson Hinkle, Muhammad Smiry, Motaz Azaiza, Dr. Anastasia Maria Loupis, and Mohamad Safa, many of whom experienced a rapid and unusual rise in followers, views, and appearances on the news feed of X, formerly Twitter. The report found that some of the bot networks promoting these influencers overlap, implying that a single group, institution, or even a state is responsible for amplifying these users despite their divergent viewpoints.

A visualization of part of the bot network found by the Cyabra report (credit: Cyabra)

Many of these fake profiles also actively interact with one another to boost the visibility of both the anti-Israel influencers and themselves. The report found that most of the content from the fake profiles was created solely to amplify the influencers' content by generating more comments on their posts.

These findings by Cyabra, based on an analysis of tens of thousands of X accounts to determine whether they were fake or authentic, revealed that hundreds of the profiles found to be fake were linked to one another and commented on each other's posts; a quarter of them were created between October 2023 and January 2024.

According to the report, many of the fake users used similar language, keywords, and hashtags, sometimes posting virtually identical comments to different influencers, abnormal behavior typically associated with bots and bot networks.
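Near-identical comments posted by different accounts are, in principle, detectable with simple text-similarity checks. The sketch below is purely illustrative and is not Cyabra's method; the account names, comment texts, and 0.9 similarity threshold are all invented for the example, which uses Python's standard `difflib`.

```python
from difflib import SequenceMatcher
from itertools import combinations

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide duplication."""
    return " ".join(text.lower().split())

def near_duplicate_pairs(comments, threshold=0.9):
    """Return pairs of accounts that posted near-identical comments.

    `comments` is a list of (account, text) tuples; the threshold is an
    illustrative choice, not a value from the report.
    """
    flagged = []
    for (acc_a, txt_a), (acc_b, txt_b) in combinations(comments, 2):
        if acc_a == acc_b:
            continue  # one account repeating itself is a different signal
        ratio = SequenceMatcher(None, normalize(txt_a), normalize(txt_b)).ratio()
        if ratio >= threshold:
            flagged.append((acc_a, acc_b))
    return flagged

comments = [
    ("user_001", "Thank you for showing the world the truth!!"),
    ("user_002", "Thank you for showing the world the truth !!"),
    ("user_003", "Interesting thread, I hadn't seen this angle before."),
]
print(near_duplicate_pairs(comments))  # only the first two accounts are flagged
```

A production system would also weigh posting times and account metadata, but even this crude pairwise comparison surfaces the copy-paste pattern the report describes.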

Dubious following, fake conversations

Cyabra found that 40% of the followers of Jackson Hinkle, a US-based social media influencer sympathetic to Russia, China, Iran, and Hamas, were fake, based on a sample of 12,510 accounts analyzed using the firm's methodology.

In addition, of another sample of 5,182 users who interacted with Hinkle's profile between December 20, 2023, and January 20, 2024, 17% were found to be fake. Many of these fake profiles were also found to be interacting with each other's content and exhibiting similar behavioral patterns, such as shared creation dates. According to the report, these shared traits suggest coordinated and organized activity.

Another account found to feature unusual activity was that of Qatari-backed Gazan influencer Motaz Azaiza, whose following skyrocketed from 31,000 users to over one million in less than a month; around 23% of his followers and 25% of his commenters were found by the Cyabra report to be fake.

Some of the almost identical tweets published by the botnet (credit: Cyabra)

Likewise, the account of Anastasia Maria Loupis, who regularly shares antisemitic and anti-Israel content, also showed abnormal activity: 18% of the profiles interacting with her posts between December 20, 2023, and January 20, 2024, were found to be fake.

The report also featured additional anti-Israel or pro-Hamas influencers such as Sulaiman Ahmed, Omar Suleiman, Mohamad Safa, and Muhammad Smiry, as well as the anonymous accounts MissFalasteenia, Sabina, and Khalissee; for all of these, the share of fake users interacting with them ranged between 21% and 33%, according to the Cyabra report.

'Users must be more aware of whom they're interacting with'

Led by former information-warfare and cybersecurity experts, Cyabra describes itself as offering “an AI-driven intelligence platform to detect inauthentic accounts that are spreading disinformation and fake news.” The firm uses publicly available information and has worked with governments and organizations around the world, analyzing billions of online conversations in real time to provide insights and expose the intentions and agendas behind online campaigns.

The Jerusalem Post reached out to Rafi Mendelsohn, Cyabra’s VP of marketing, for an exclusive interview to hear more about the findings, the company, and the issue of inauthentic discourse on social media.

“Our main focus is online disinformation discourse in its wider sense, including phishing and other campaigns,” said Mendelsohn. “Cyabra’s CEO is an expert in information warfare, and our team developed top-notch technologies to gauge the authenticity of users, or lack thereof, which can be attributed to bots, trolls, avatars, and more.”

“We are not a fact-checking organization,” he stressed. “Our focus is not on the truthfulness of discourse, but rather on identifying behavior indicative of fake users, which are usually created for malicious activity.”

Cyabra developed a special algorithm to analyze the behavior of users and discern whether it can be considered “regular” human behavior. “We look at hundreds of parameters. These can include when accounts were created, whom they follow and who follows them, what type of content they post, at what times of day and how frequently they post, and more,” added Mendelsohn. “We then have an additional technology that clusters users into different communities based on their follows and comments, to surface what may be orchestrated or coordinated activity.”
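Mendelsohn's description, many behavioral parameters combined into an authenticity judgment, can be illustrated with a toy rule-based scorer. This is a hypothetical sketch and not Cyabra's proprietary algorithm; the three rules, their weights, and the reference date are invented assumptions, and a real system would weigh hundreds of signals statistically.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    created: date          # account creation date
    posts_per_week: float  # average posting frequency
    followers: int
    following: int

def suspicion_score(acc: Account, today: date = date(2024, 1, 20)) -> float:
    """Combine a few behavioral red flags into a 0..1 suspicion score.

    The cut-offs and weights below are illustrative assumptions only.
    """
    score = 0.0
    if (today - acc.created).days < 120:   # created in the last ~4 months
        score += 0.4
    if acc.posts_per_week > 100:           # posting hundreds of times a week
        score += 0.4
    if acc.following > 0 and acc.followers / acc.following < 0.01:
        score += 0.2                       # follows many, followed by few
    return score

bot_like = Account(created=date(2023, 11, 1), posts_per_week=250,
                   followers=3, following=1800)
print(suspicion_score(bot_like))  # all three flags trip: 1.0
```

The clustering step Mendelsohn mentions would then group accounts whose scores and interaction graphs overlap, which is where coordinated networks become visible.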

“This unique technology allowed us to gain the trust of companies looking to scrutinize smear campaigns directed against them, as well as other agencies and organizations across the globe,” Mendelsohn said. “Our series of tests and red flags are examined constantly, and our algorithm is continuously being updated.”

Much of the user activity on social media is made up of bots anyway. When is it considered exaggerated or unusual?

“When around 4-7% of the activity is bot-related, this can be considered normal. When it gets to 10% it’s odd, but when it’s 25% then the bells start ringing, signifying that this is worth checking in-depth. By the way, even with 25% of fake accounts following an account, there were times when we saw that discourse itself was led by a higher percentage of fakes. Some fake accounts post hundreds of times a week, which is highly abnormal, and one can see that they’ve been set up for a specific goal.”
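The thresholds Mendelsohn cites translate directly into a simple triage rule. A minimal sketch, where the cut-offs come from the interview but the category labels and function name are invented for illustration:

```python
def triage_bot_share(fake_fraction: float) -> str:
    """Map the share of fake accounts in a conversation to a triage level.

    Cut-offs follow the interview: roughly 4-7% is background noise,
    around 10% is odd, and 25% or more warrants in-depth investigation.
    """
    if fake_fraction >= 0.25:
        return "investigate"
    if fake_fraction >= 0.10:
        return "unusual"
    return "normal"

print(triage_bot_share(0.05))  # normal
print(triage_bot_share(0.25))  # investigate
```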

Is there a way to know who's behind these accounts?

Mendelsohn smiled. “That’s the most important question, isn’t it?”

“What we have are mere ‘breadcrumbs’ of the users’ behavior, and unfortunately you can’t tell where exactly they originate. This is probably why such campaigns are so successful: because it is impossible to get to the bottom of this question. A good framing would be looking at it through the lens of crime. There’s opportunistic crime and there’s organized crime; there’s crime for financial gain, and there’s also state-directed crime outsourced to criminal actors.

“What is striking about this specific report is the fact that we found five different influencers, hailing from different ideologies, all being promoted by the same overlapping bot network. What they all have in common, most saliently, is their anti-Westernism and hatred of Israel. This leads to the belief that there’s a high probability that a single source stands behind these networks, coordinating their actions,” added Mendelsohn.

On the issue of enforcement, Mendelsohn replied with a sigh. “There is a serious problem of enforcement. This is one of the biggest problems in our time: the ability to manipulate discourse and spread false narratives unchecked. It also manifests in real life, on campuses, and in parliaments. It’s a daunting social problem,” he said.

According to Mendelsohn, no country has so far managed to tackle the issue successfully. “Fake accounts and impersonation are not only malicious,” he added, “they also go against the policy of the platforms themselves. The accounts must be taken off, and the platforms should also participate in this effort.”

How can I, as an ordinary user of social media, make sure that I'm not interacting with a bot?

“For regular, individual users it is more difficult than ever to notice these malicious behaviors, because of the many AI and language tools available. My advice to users is that, when dealing with more sensitive subjects, they should take another minute to try to understand what they are seeing and who the user they are interacting with is. It requires us to be more aware: click on the accounts we’re interacting with, see when they were created, what content they have uploaded, how many followers they have, and more.

“Elections are coming up in the US, and these discourse-amplification techniques will undoubtedly be deployed. This contaminates our discourse on every subject, from politics to consumerism, and we all must be more aware,” he concluded.