
Fragmented truth: How AI is distorting and challenging our reality

When OpenAI first released ChatGPT, it seemed to me like an oracle. Trained on vast swaths of data, loosely representing the sum of human interests and knowledge available online, this statistical prediction machine might, I thought, serve as a single source of truth.
AchieveAI
AchieveAI Author
February 24, 2025
8 min read
32,425 views

Anthropic, Meta approaches

Anthropic took a somewhat different approach. They implemented a “constitution” for their Claude (and now Claude 2) chatbots. As reported in VentureBeat, the constitution outlines a set of values and principles that Claude must follow when interacting with users, including being helpful, harmless and honest. According to a blog post from the company, Claude’s constitution includes ideas from the U.N. Declaration of Human Rights, as well as other principles included to capture non-western perspectives. Perhaps everyone could agree with those.
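
To make the idea more concrete, here is a minimal sketch of how a constitution can steer a chatbot through a critique-and-revise loop. The principles listed and the generate function are illustrative placeholders, not Anthropic's actual constitution or implementation.

```python
# Minimal sketch of a constitution-guided critique-and-revise loop.
# `generate` stands in for any chat model call; it is a placeholder here.

CONSTITUTION = [
    "Choose the response that is most helpful to the user.",
    "Choose the response least likely to cause harm.",
    "Choose the response that is most honest and does not mislead.",
]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; returns a canned reply in this sketch."""
    return f"(model reply to: {prompt})"

def constitutional_reply(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle.
        critique = generate(
            f"Principle: {principle}\nResponse: {draft}\n"
            "Does the response violate this principle? If so, explain how."
        )
        # Revise the draft in light of the critique before the next check.
        draft = generate(
            f"Original response: {draft}\nCritique: {critique}\n"
            "Rewrite the response so it satisfies the principle."
        )
    return draft

if __name__ == "__main__":
    print(constitutional_reply("How should I respond to an angry customer?"))
```

Anthropic's published approach folds this kind of principle-guided critique into training rather than running it on every request; the loop above only conveys the basic idea.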

Meta also recently released their LLaMA 2 large language model (LLM). In addition to apparently being a capable model, it is noteworthy for being made available as open source, meaning that anyone can download and use it for free and for their own purposes. There are other open-source generative AI models available with few guardrail restrictions. Using one of these models makes the idea of guardrails and constitutions somewhat quaint.
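
For a sense of what "download and use it for free" means in practice, the following is a minimal sketch using the Hugging Face transformers library. The checkpoint name and generation settings are assumptions for illustration, and access to Meta's Llama 2 weights requires accepting the license on the model page.

```python
# Minimal sketch: running an open-weight chat model locally with transformers.
# The checkpoint is illustrative; Llama 2 weights require accepting Meta's license.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # any open-weight causal LM works here

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize today's top technology news in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate a short continuation; the sampling settings are arbitrary for the sketch.
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```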

Fractured truth, fragmented society

Perhaps, though, all the efforts to eliminate potential harms from LLMs are moot. New research reported by The New York Times revealed a prompting technique that effectively breaks the guardrails of any of these models, whether closed-source or open-source. Fortune reported that this method had a near 100% success rate against Vicuna, an open-source chatbot built on top of Meta's original LLaMA.

This means that anyone who wants to get detailed instructions for how to make bioweapons or to defraud consumers would be able to obtain this from the various LLMs. While developers could counter some of these attempts, the researchers say there is no known way of preventing all attacks of this kind.

Beyond the obvious safety implications of this research, there is a growing cacophony of disparate results from multiple models, even when responding to the same prompt. A fragmented AI universe, like our fragmented social media and news universe, is bad for truth and destructive for trust. We are facing a chatbot-infused future that will add to the noise and chaos. The fragmentation of truth and society has far-reaching implications not only for text-based information but also for the rapidly evolving world of digital human representations.

Produced by author with Stable Diffusion.

AI: The rise of digital humans

Today chatbots based on LLMs share information as text. As these models increasingly become multimodal — meaning they could generate images, video and audio — their application and effectiveness will only increase.

One possible use case for multimodal application can be seen in “digital humans,” which are entirely synthetic creations. A recent Harvard Business Review story described the technologies that make digital humans possible: “Rapid progress in computer graphics, coupled with advances in artificial intelligence (AI), is now putting humanlike faces on chatbots and other computer-based interfaces.” They have high-end features that accurately replicate the appearance of a real human.

According to Kuk Jiang, cofounder of Series D startup company ZEGOCLOUD, digital humans are “highly detailed and realistic human models that can overcome the limitations of realism and sophistication.” He adds that these digital humans can interact with real humans in natural and intuitive ways and “can efficiently assist and support virtual customer service, healthcare and remote education scenarios.”

Digital human newscasters

One additional emerging use case is the newscaster, and early implementations are already underway. Kuwait News has started using a digital human newscaster named "Fedha," a popular Kuwaiti name. "She" introduces herself: "I'm Fedha. What kind of news do you prefer? Let's hear your opinions." By asking, Fedha introduces the possibility of newsfeeds customized to individual interests. China's People's Daily is similarly experimenting with AI-powered newscasters.

Meanwhile, startup Channel 1 is planning to use gen AI to create a new type of video news channel, what The Hollywood Reporter described as an AI-generated CNN. As reported, Channel 1 will launch this year with a 30-minute weekly show whose scripts are developed using LLMs. Their stated ambition is to produce newscasts customized for every user. The article notes: "There are even liberal and conservative hosts who can deliver the news filtered through a more specific point of view."
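
To illustrate how a script might be tailored to an individual viewer, here is a brief sketch using the OpenAI Python client. The model name, viewer profile, and prompt are assumptions for the example, not a description of Channel 1's actual pipeline.

```python
# Illustrative sketch: generating a viewer-tailored news script with an LLM.
# The model name and viewer profile are assumptions for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

viewer_profile = {
    "interests": ["local politics", "renewable energy"],
    "tone": "neutral",          # could also be "liberal" or "conservative"
    "length_seconds": 60,
}

story_summary = (
    "The city council approved funding for a new solar farm on the outskirts of town."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You write broadcast news scripts for a digital anchor."},
        {"role": "user",
         "content": (
             f"Write a {viewer_profile['length_seconds']}-second news script in a "
             f"{viewer_profile['tone']} tone for a viewer interested in "
             f"{', '.join(viewer_profile['interests'])}.\n\nStory: {story_summary}"
         )},
    ],
)
print(response.choices[0].message.content)
```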

Can you tell the difference?

Channel 1 cofounder Scott Zabielski acknowledged that, at present, digital human newscasters do not look quite like real humans. He added that it will take a while, perhaps up to 3 years, for the technology to be seamless: "It is going to get to a point where you absolutely will not be able to tell the difference between watching AI and watching a human being."

Why might this be concerning? A study reported last year in Scientific American found “not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” according to study co-author Hany Farid, a professor at the University of California, Berkeley. “The result raises concerns that ‘these faces could be highly effective when used for nefarious purposes.’”

There is nothing to suggest that Channel 1 will use the convincing power of personalized news videos and synthetic faces for nefarious purposes. That said, technology is advancing to the point where others who are less scrupulous might do so.

As a society, we are already concerned that what we read could be disinformation, what we hear on the phone could be a cloned voice and the pictures we look at could be faked. Soon video — even that which purports to be the evening news — could contain messages designed less to inform or educate than to manipulate opinions more effectively.

Truth and trust have been under attack for quite some time, and this development suggests the trend will continue. We are a long way from the evening news with Walter Cronkite.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.

Computer scientist Alan Kay once said, "People who are really serious about software should make their own hardware," and that observation has never felt more relevant. Artificial intelligence has advanced rapidly over the past few years and is now a major part of everyday life. AI is used to detect faces, operate driverless cars, and diagnose diseases from patterns it extracts from data. But the technology also has an uncomfortable side effect: the emergence of "fragmented truth."

The rise of AI technology has brought an increased ability to create convincing digital art, deepfakes, and other computer-generated images and videos. These recreations can be so realistic that it is difficult to determine whether something is real. The result is a new, computer-generated layer of reality that can be distorted to manipulate people. One example is how recommendation algorithms shape what a person sees on social media by filtering out information that does not fit the algorithm's model of that person.
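
As an illustration of how little effort such imagery now requires, here is a minimal sketch that generates a photorealistic image with an open Stable Diffusion checkpoint via the Hugging Face diffusers library; the checkpoint and prompt are arbitrary choices, and a CUDA-capable GPU is assumed.

```python
# Minimal sketch: generating a photorealistic image with an open Stable Diffusion model.
# The checkpoint and prompt are arbitrary; a CUDA-capable GPU is assumed.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a photorealistic portrait of a television news anchor in a studio"
image = pipe(prompt).images[0]   # a single PIL image
image.save("synthetic_anchor.png")
```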

Additionally, AI has enabled companies to sell personalized products and services to consumers through one-on-one marketing campaigns. This form of marketing can be manipulative, nudging someone toward particular products or advice and creating a fragmented truth of its own.

Finally, AI is being used to generate fake user accounts that spread disinformation, allowing these accounts to reach many people without the need for a real person's involvement. This can have serious repercussions, as fake accounts can misrepresent facts or even take on political identities. AI-generated accounts can also interfere in legitimate conversations, creating noise that erodes trust in genuine sources of information.

AI has the potential to revolutionize our lives, but it has also brought with it the concept of fragmented truth. AI-generated media can distort reality by bypassing editorial filters and invading personal space. It also has the potential to manipulate opinions and entice people to buy products or services. In addition, AI can generate vast amounts of fake user accounts, which can create confusion and undermine truth.

As the negative impacts of AI-generated technology become increasingly apparent, it is important to be cautious when consuming information. We must be vigilant about the information we consume and its source, understanding that reality is often not as straightforward as it seems. With that understanding, we can move closer to a clearer picture of truth and reality.

AchieveAI

AchieveAI uses AI to transform extensive information into concise, easy-to-digest content for efficient learning. We focus on practical steps before theory, offering courses in various skills and subjects without unnecessary repetition. Our platform provides quick access to vast knowledge, with customizable content levels. We aim to simplify skill acquisition, empowering personal growth and success.
