Redefining freedom of speech: Protecting the right to the truth

Could a marketplace with too many ideas put access to truth at risk?

A keyboard is placed in front of a displayed Facebook logo in this picture illustration. (photo credit: DADO RUVIC/REUTERS)

The right to factual certainty is becoming one of the biggest challenges of democracies and open societies, according to at least one Israeli expert.

“The cornerstone of freedom of speech is based on the concept of shortage of information – that you need to ensure enough ideas get into the marketplace. But what happens when there is too much information?” asks Dr. Tehilla Shwartz Altshuler, head of the Media Reform and Democracy in the Information Age programs at the Israel Democracy Institute.

In the United States, freedom of speech is protected by the Constitution’s First Amendment. It protects information from being censored by the government, capital markets or the media. At the same time, it calls for the free flow of ideas on the premise that if enough ideas enter the marketplace, eventually the truth will surface. 

“We are today experiencing the widest variety of ideas in the history of humanity. There is no mediation. Government censorship is minimal. Social networks have no responsibility for anything, so they hardly censor anything. So, suddenly we find ourselves in an era of a huge variety of ideas but not achieving truth,” Shwartz Altshuler says. 

The question becomes whether a market with a shortage of ideas acts the same as a market with too many, she explains. That’s because with such an avalanche of information, people could find it difficult to sort truth from lies.

A woman holds a smartphone as a TikTok logo is displayed behind in this picture illustration. (credit: DADO RUVIC/REUTERS)

“People are overburdened by information, tired, and do not have the time nor ability to allow truth to come up,” Shwartz Altshuler says. “And some actors in the market take strategic advantage of that.”

She proposes that “we need to start rethinking what we want from freedom of speech” and that it might be time to talk about a new speech right: the right to factual certainty.

Who would determine what is true and what isn’t is one of the questions that needs to be answered, she says. 

“When we spoke about traditional media, that was the job of a newspaper editor to determine what is true and what is not true, the order of importance, what is new and what is valuable to the public,” Shwartz Altshuler contends. “Sometimes the media got it wrong; but when they found something incorrect, they corrected it themselves.”


Today, untrained or self-proclaimed journalists often report the news via social networks. And these networks have little responsibility for what is being published. As such, Shwartz Altshuler stresses that owners and operators do not mediate these conversations and sometimes allow scientifically incorrect information to go viral. 

A report published by the World Health Organization last year in the aftermath of the COVID-19 pandemic found that the negative repercussions of misinformation on social media include “an increase in erroneous interpretation of scientific knowledge, opinion polarization, escalating fear and panic or decreased access to health care.” WHO called for “developing legal policies, creating and promoting awareness campaigns, improving health-related content in mass media and increasing people’s digital and health literacy” to counter the misinformation. 

Shwartz Altshuler posits that government, traditional media and social media platforms could each play a role in sorting truth from lies. First, however, she says “the public needs to believe we can sort truth from lies.”

‘No traditional gatekeepers’

When the Internet was introduced, it was seen as a dream come true for freedom of speech: an open market of ideas where everyone can express their opinions and ideas, and there could be a free flow of information. But, according to Gabriel Weimann, a professor of communication (emeritus) at the Department of Communication at the University of Haifa and a senior researcher at the International Institute for Counterterrorism, the reality has proven different.

“In many ways, freedom of speech online, despite the positive aspects of an open stage and platforms, has led to the abuse of freedom of speech,” he says. “While in conventional media there are red lines and gatekeepers, in modern online media there are no traditional gatekeepers or visible regulations.

“So, when it comes to freedom of speech online, the trouble is the abuse of the liberal character of online platforms by hate groups, terrorists and violent extremists.”

For more than a decade, this abuse has been known and documented. The challenge, Weimann says, is that it is becoming more alarming. According to multiple statistical reports, the presence of hate speech and incitement on online platforms is growing every year.

Why? Weimann gives two reasons: the constant emergence of new platforms, which social network providers cannot sufficiently moderate; and the growing awareness and know-how of the abusive actors. 

“In the past, Twitter, Facebook, Google, Gmail, YouTube and, to some extent, even WhatsApp were the leading social media platforms, and they were used for promoting hate speech,” Weimann explains. “Due to political pressure on these conventional social media platforms, some content was removed. 

“As a result, terrorists, extremists, neo-Nazi groups, fascists and racists started moving to new platforms,” he continued. “The first was Telegram, which had a Russian operator that did not care much. Then came TikTok, owned by a Chinese company that did not pay much attention to enforcing regulations.

“But once these platforms started feeling pressure and cracking down, the abusers moved elsewhere.”

Today, they are using platforms like TomTom. 

Weimann and his team have been researching TomTom for the past several months. He says that they found thousands of hate speech posts within three months. 

The irony is that most alternative social media networks claim to be havens of freedom of speech, according to Pew Research Center. A recent report looked at seven “alternative” social media platforms, all identifying as enemies of censorship. 

“Each explicitly says that it supports free speech, and four (BitChute, Gettr, Parler and Rumble) specifically declare their opposition to censorship,” the Pew report says. “In expressing their support for freedom of speech, some sites criticize what they describe as ‘cancel culture.’ Rumble, for example, advises readers that as a result of ‘cancel culture,’ it supports ‘diverse opinions, authentic expression and the need for open dialogue.’

“Similarly, Gettr states that the site ‘champions free speech, rejects cancel culture and provides a … platform for the marketplace of ideas.’”

Weimann says that even when social networks are called out for allowing hate speech and want to remove the content, they cannot do so: first, because of the sheer volume of content – millions of messages, texts, pictures and videos – posted online every minute; and second, because doing so is expensive. 

“These are companies that want to profit,” Weimann says. “They are not educational companies or even guided by ethical or moral principles.”

As such, when they try to remove this harmful content – and some do – the posts will often reemerge in days or weeks. 

Finally, hate and terror groups are becoming more sophisticated social media users.

According to Interpol, “Terrorists use social media for radicalization, recruitment, funding, planning and execution of terror activities.” A separate report by the Institute for National Security Studies found that “One of the most remarkable aspects of the Islamic State is its extensive use of social media and its presence on social media. The organization’s meteoric rise to global awareness in the summer of 2014 was accompanied not only by its conquest of vast territories in Iraq and Syria but also by an impressive and well-planned, multilingual campaign on social media.”

ISIS campaigns included “Hollywood-quality” video clips and images. 

A smartphone with a displayed ChatGPT logo is placed on a computer motherboard in this illustration taken February 23, 2023. (credit: DADO RUVIC/REUTERS)

ChatGPT: ‘So irresponsible’

The situation is only expected to get worse, Shwartz Altshuler believes. She says the world is witnessing a “destructive practice” in which private companies release platforms with serious factual problems to the market, knowing the platforms have issues but that the companies will not be held responsible for the problems their platforms cause. 

ChatGPT, developed by OpenAI, is a prime example, Shwartz Altshuler says. 

“You let millions of people have access to a platform that looks very convincing but suffers from severe factuality problems. Then, you tell them to use it as a search engine – so irresponsible,” she says. “The same exception that allows social media platforms to create huge digital fears without responsibility is now being applied to OpenAI. To me, it is unthinkable.” 

She says that generative AI technology like ChatGPT will lead to a world where people can spread fake information in much more convincing, more accessible and less expensive ways. 

“If you ask a generative AI platform to write you a scientific paper that claims a connection between strokes and COVID shots, it will look exactly like a real scientific article, and you can spread it around like the wind,” Shwartz Altshuler says. “And it is not fake. It is not like I took something original and twisted it. No, I created something new but artificial. It is not fake news. It is an artificial fact.”

She says those “facts” will be extremely difficult to detect. 

“We need to put some kind of watermark on artificial content,” Shwartz Altshuler offers. “This is where you need to start thinking about factual certainty and the right to it.”

She says the tipping point could be the metaverse. If today people “watch” the Internet, in the metaverse they will be in the Internet. By combining generative AI and the metaverse, a person can create a whole universe for themselves, their friends or the public.

“What if I create a concentration camp that claims everything was beautiful in the Holocaust?” she asks. “I put a mask on my head and walk through the beautiful alleys of Auschwitz lined with flowers. There is no smell there. And it is as if everything the Jews were discussing never happened.”

Shwartz Altshuler says that people tend to believe things that look authentic and sound good. 

“We have a cognitive tendency to connect something well written with the truth,” she says. “This is the power of ChatGPT.”

‘Awareness is key’

To counter these challenges will require an arsenal of weapons, Weimann says – defensive and offensive. 

The first is education. 

“When I was young and we watched a lot of television, we were taught about the tricks of the trade – how cameras and other techniques were used to create violence and that it was not real even if it looked realistic. I think the same approach should be applied to online forums,” Weimann says. “From early childhood, children should become educated consumers of social media, taught to identify hate speech and how they could be targeted or victimized…. Awareness is key.”

At the same time, he says there should be international collaboration on pressuring social media to remove content, even if they cannot be 100% effective. And finally, “We should look to the future.”

“Right now, we are playing a game of cat and mouse, chasing the abusers from platform to platform,” Weimann says. “I suggest looking at emerging platforms – the Facebooks of tomorrow – and trying to find ways to support those companies to implement and apply measures to protect freedom of speech – true and ethical freedom of speech – even before they launch.” ■