Jordan Peterson speaks at the 2018 Student Action Summit in West Palm Beach, Florida

Do you recognize misinformation when you hear it or read it? What has Jordan Peterson’s experience been, and what does he think should be done about it?

According to Jordan Peterson, misinformation is a growing problem that has impacted him personally. On Theo Von’s This Past Weekend podcast, Peterson discussed the vulnerability of platforms such as Twitter to manipulation, as well as the threat posed by “deepfakes.”

Keep reading to get a summary of this conversation, along with some context about the complex challenges of navigating truth in the digital age.

Jordan Peterson on Misinformation

In the conversation between Theo Von and Jordan Peterson, misinformation came up in a broader discussion of free speech and technology. Peterson specifically mentioned Twitter’s vulnerability to manipulation by users, and he expressed concern about the potential for misinformation due to unchecked ideological biases embedded in AI tools. He even offered techniques to navigate these platforms cautiously.

Theo Von raised concerns about the exploitation facilitated by anonymity on social media platforms, questioning the effectiveness of potential restrictions. Peterson highlighted the problem of criminal exploitation of vulnerable groups online, suggesting a possible solution of separating anonymous users from identifiable users. But he acknowledged the inherent difficulty of verifying digital identities in this context.

Adding another layer of complexity, Peterson pointed to the “dark tetrad”—a cluster of four traits comprising Machiavellianism, narcissism, psychopathy, and sadism—and the individuals who exhibit them. He views these individuals as major contributors to the toxic online environment.

Misinformation Through Deepfake Technologies

While acknowledging the potential of deepfakes to recreate historical figures and even facilitate face-to-face interactions virtually, Peterson and Von raised major concerns about AI technology’s potential to spread misinformation and manipulate public opinion. This threat, they emphasized, could have serious consequences, including influencing elections and eroding trust in information sources.

Peterson shared his chilling experience with a deepfake call mimicking Ben Shapiro, demonstrating the unsettling realism and potential for misuse. While they explored hypothetical benefits such as “virtual Nietzsches” for education, the conversation delved into the dangers of deepfakes becoming powerful tools for misinformation campaigns.

Von touched on the humorous potential of AI-generated historical performances in the broader context of the sheer manipulative power this technology holds. Their discussion underscores the urgent need for solutions to mitigate the threat of deepfakes, prioritizing the protection of public discourse and ensuring the responsible use of this powerful technology.


Context

Communication technology has become an undeniable force in our lives, shaping how we connect and share information. Understanding this dynamic landscape, including digital platforms, social media, and online interactions, is crucial to navigating its complexities. The ongoing challenges of free speech in the digital age also remain highly relevant.

This podcast episode touched on several key themes, including:

  • Misinformation and Free Speech: The conversation explored the threats that misinformation and manipulation pose to free speech on digital platforms. Understanding these tactics and their impact is critical to maintaining informed discourse.
  • The “Dark Tetrad”: Peterson and Von examined the influence of anonymous users and introduced the concept of the “dark tetrad”—individuals with traits like manipulation and narcissism who can contribute to a negative online environment.
  • Deepfake Technologies: These AI-powered tools create realistic fake videos and audio, raising concerns about potential misuse in various fields.

Peterson and Von discussed the present state of technology and acknowledged ongoing issues such as online harassment, regulation, and free speech challenges in an increasingly digitized world.

Looking ahead, several areas merit further exploration:

  • The ethical implications of deepfakes. How can these technologies be used responsibly across industries such as entertainment, politics, and journalism?
  • The impact of algorithms and AI on online experiences. How do these shape information dissemination and user behavior?
  • The psychological effects of online interactions. How can we balance responsibility for combating misinformation with protecting user privacy?
  • Alternative perspectives. Considering views from free speech advocates, cybersecurity experts, and victims of online harassment can offer valuable insights.

Peterson brought up Ben Shapiro and Nietzsche. Shapiro is a conservative American political commentator known for his provocative views. Nietzsche is a 19th-century German philosopher recognized for works on morality and religion, including the concept of the “Übermensch” (often translated as “superman” or “overman”).

More Perspectives

Our digital world offers a dynamic landscape, brimming with both potential and pitfalls. While it’s true that online spaces can sometimes harbor negativity and differ from face-to-face interactions, let’s not forget the positive: They also provide platforms for constructive discourse and empower marginalized voices. Oversimplifying this complex issue by focusing solely on the negative neglects the immense potential of digital platforms to foster connection and inclusivity.

Misinformation is indeed a challenge, but singling out specific platforms might ignore its pervasiveness across various online spaces. Addressing this issue requires a collaborative approach, where platforms, fact-checkers, and users themselves work together. Blaming specific platforms distracts from the real solution: promoting media literacy and responsible information sharing among all users.

Attributing disruptive online behavior solely to individuals with certain traits is an oversimplification. Anonymity, lack of consequences, and social dynamics all play significant roles. Blaming specific groups ignores the need for systemic solutions, rather than targeting individuals based on personality alone.

Deepfake technologies present potential risks, particularly to electoral integrity, but let’s not ignore their broader context and potential benefits. These technologies have the power to revolutionize creative expression, entertainment, virtual reality experiences, and even historical preservation. Rather than focusing solely on the risks, people can work together to develop ethical guidelines and regulations that mitigate negative impacts while harnessing the positive potential of deepfakes.

Ultimately, navigating the digital era requires a nuanced and balanced perspective. Recognizing both the challenges and opportunities allows for comprehensive discussions that lead to effective solutions. We can address issues such as online hostility and misinformation without overlooking the positive aspects of digital platforms—and without stifling innovation in emerging technologies like deepfakes.

Deepfakes – The Good, the Bad, and the Ugly – Forbes

Deepfake technology, which uses AI-generated imitations, is considered a major challenge by cybersecurity experts. Deepfakes have been used to create pornographic images and manipulate political speeches, raising concerns about privacy and misuse. While deepfake technology has potential value, it also poses risks and challenges for fraud investigators.

Jordan Peterson: Misinformation & Deepfakes Are on the Rise

Elizabeth Whitworth

Elizabeth has a lifelong love of books. She devours nonfiction, especially in the areas of history, theology, and philosophy. A switch to audiobooks has kindled her enjoyment of well-narrated fiction, particularly Victorian and early 20th-century works. She appreciates idea-driven books—and a classic murder mystery now and then. Elizabeth has a blog and is writing a book about the beginning and the end of suffering.
