In this episode of On Purpose, Jay Shetty examines how social media algorithms influence user behavior and mental health. He explores the mechanics behind these systems, which track user interactions to create personalized content feeds, and explains how they tend to amplify negative content, including how moral outrage can increase content sharing by 5 to 8% and how false news spreads six times faster than true stories.
Shetty discusses the psychological impact of these algorithms, from how they affect different demographics to the echo chambers that form even on platforms without recommendation systems. The episode covers potential solutions to these challenges, including implementing chronological feeds, adding engagement friction points, and requiring algorithm transparency. It also addresses how users can develop better social media habits through practices like meditation and critical thinking.

Social media algorithms are sophisticated systems designed to monitor and influence user behavior, as explained by Jay Shetty and various researchers. These algorithms track every interaction, from how long a user watches a video to how long they hover over a comment, creating personalized content feeds that keep users engaged.
The algorithms excel at amplifying content that triggers emotional responses, particularly negative ones. According to Yale researchers, moral outrage in posts can increase shares and retweets by 5-8%. This creates a feedback loop where negative content gets boosted, leading to what Shetty describes as a cycle of negativity and division.
The algorithmic promotion of certain content types has significant psychological effects. For women, the algorithms often push appearance-focused content that can trigger insecurity. For men, studies of TikTok show that exposure to misogynistic content can increase feelings of loneliness within just five days. The constant encouragement of self-comparison leads to elevated anxiety and self-doubt across users.
Even without algorithmic recommendations, human psychology plays a crucial role in content spread. Shetty notes that false news stories are 70% more likely to be retweeted than true ones and reach people six times faster. The University of Amsterdam's research showed that even on platforms without recommendation algorithms, users naturally gravitate toward like-minded individuals, forming echo chambers.
To address these challenges, Shetty recommends several solutions: implementing chronological feeds by default, adding friction points to encourage mindful engagement, and requiring algorithm transparency. For instance, Twitter's experiment with prompting users to read articles before retweeting led to a 40% increase in reading behavior. Additionally, Shetty suggests that users can practice emotional mastery through meditation and critical thinking to better navigate social media's challenges.
1-Page Summary
Social Media Algorithms and Their Impact on Behavior
Jay Shetty and other sources reveal that social media algorithms are intricately designed to amplify engaging content, exploit psychological biases, and ultimately have significant effects on users’ behavior, mental health, and well-being.
Social media platforms like TikTok are known for closely monitoring user interactions such as pauses, clicks, likes, shares, and even the duration of hovering over videos or comments. These algorithms record behaviors to predict and promote content users are likely to engage with, thereby creating a feedback loop to retain users.
The algorithm prioritizes user engagement, gauging interest by tracking watch time to the second, rewatches, and other interactions. Content that garners more emotional engagement is promoted to more people, reinforcing their preferences and behaviors.
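To make these mechanics concrete, here is a minimal sketch of how an engagement-driven ranker of this kind might work. The signals mirror the ones described above, but the field names, weights, and scoring formula are illustrative assumptions, not any platform's actual system:

```python
# Illustrative sketch of an engagement-driven feed ranker. All weights and
# signal names are assumptions chosen to mirror the behaviors described above.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    watch_seconds: float   # watch time, tracked to the second
    rewatches: int         # repeat views signal strong interest
    likes: int
    shares: int
    hover_seconds: float   # time spent hovering over the video or comments

def engagement_score(post: Post) -> float:
    """Combine tracked interactions into one ranking score.

    Shares are weighted most heavily here because shared content reaches
    new users, which is how emotionally charged posts get amplified.
    """
    return (0.5 * post.watch_seconds
            + 2.0 * post.rewatches
            + 1.0 * post.likes
            + 4.0 * post.shares
            + 0.3 * post.hover_seconds)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-engagement content rises to the top, closing the feedback
    # loop: engagement drives reach, and reach drives more engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```

Note that a score like this measures only the intensity of engagement, not its quality: a post that provokes outrage ranks just as well as one that delights.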
Content with emotional resonance is likely to be amplified by the algorithm, potentially going viral. Features like autoplay extend watch time, often without direct user choice, keeping users consuming engaging content continuously. Jay Shetty notes that the result is a cycle of negativity, as doomscrolling and the contagious nature of anger amplify divisive content.
Social media algorithms are designed to exploit innate psychological tendencies and biases, which often increases engagement but can also lead to unintended consequences.
Shetty discusses how algorithms exploit the negativity bias by boosting engagement with negative content. Yale researchers found that posting moral outrage online leads to social rewards, prompting individuals to post more outrage content. Each additional negative word in a post is linked with a 5 to 8% increase in shares and retweets.
Outrage online is often rewarded with likes and retweets, which encourages people to post even more outraged content. This behavior signals loyalty to one's social group and fosters a feeling of belonging.
Steering users towards extremist and divisive content can create echo chambers. Mozilla's research indicated that volunteers were directed to extremist content even from neut ...
Role of Psychology and Choice in Reinforcing Algorithms
Psychology and individual choices play a crucial role in the reinforcement cycle driven by social media algorithms, showing how user behavior itself can fuel the spread of sensational and biased content.
Algorithms are engineered to maximize user retention by presenting content that keeps individuals on the platform for extended periods. Design features like autoplay strip out active decision-making, increasing watch time without conscious user choice.
Jay Shetty remarks that false news stories are 70% more likely to be retweeted than true ones and reach 1,500 people six times faster, illustrating that user engagement with sensational or false content is a significant factor in algorithmic amplification. This cyclic relationship between users' interactions and algorithms contributes to the widespread dissemination of such content.
Algorithms reinforce users' biases by presenting them with content that similar users are engaging with, on the assumption that it will hold their attention. Studies show that liberals are 21% more likely, and conservatives 30% more likely, to click on links that confirm their biases than on content that challenges their views.
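The "similar users" mechanism resembles user-based collaborative filtering. The sketch below, using invented interaction data and a simple cosine-similarity measure, shows how recommending whatever like-minded users engaged with naturally narrows what each person sees:

```python
# Minimal sketch of "similar users" recommendation (user-based collaborative
# filtering). The data and the cosine-similarity approach are illustrative
# assumptions, not a specific platform's method.
from math import sqrt

# user -> {post_id: engagement strength}
interactions = {
    "alice": {"p1": 1.0, "p2": 1.0, "p4": 1.0},
    "bob":   {"p1": 1.0, "p2": 1.0, "p3": 1.0},
    "carol": {"p5": 1.0},
}

def cosine(u: dict, v: dict) -> float:
    shared = set(u) & set(v)
    dot = sum(u[p] * v[p] for p in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(user: str, k: int = 3) -> list[str]:
    """Suggest posts that similar users engaged with but `user` hasn't seen.

    Because similarity is computed from past engagement, users who already
    agree get funneled toward the same content, reinforcing existing biases.
    """
    seen = interactions[user]
    scores: dict[str, float] = {}
    for other, their_posts in interactions.items():
        if other == user:
            continue
        sim = cosine(seen, their_posts)
        if sim <= 0:
            continue  # dissimilar users contribute nothing to the feed
        for post, strength in their_posts.items():
            if post not in seen:
                scores[post] = scores.get(post, 0.0) + sim * strength
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # -> ['p3'] (bob is similar; carol never surfaces)
```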
Even in the absence of recommendation algorithms, user psychology exhibits tendencies that facilitate the formation of echo chambers and the spread of extreme content.
In a study by the University of Amsterdam, social media platform ...
Solutions to Address Negative Effects of Social Media
In response to the widespread issues linked to social media, including political polarization and the spread of misinformation, Jay Shetty proposes various solutions aimed at combating these negative effects. These solutions involve changes to how social media platforms operate and how users interact with them.
Shetty suggests that social media platforms should default to chronological feeds, which have been shown to reduce political polarization and misinformation exposure. While this change might decrease user engagement, it could benefit users by providing an unfiltered view of the content from those they follow.
Chronological feeds simply show posts in the order they are published, rather than in an order curated by an engagement-driven algorithm. Seeing posts in sequence makes users less likely to be served content that aggravates political divisions or propagates false information.
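In code, the contrast with an engagement ranker is stark. The sketch below, assuming a created_at timestamp on each post, ranks purely by recency and leaves no lever for engagement-based amplification:

```python
# Minimal chronological feed. The `created_at` field is an assumed schema;
# the point is that ranking depends only on recency, never on engagement.
from datetime import datetime

def chronological_feed(posts: list[dict]) -> list[dict]:
    # Newest posts from followed accounts first; no engagement weighting.
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

feed = chronological_feed([
    {"id": "p1", "created_at": datetime(2024, 5, 1, 9, 0)},
    {"id": "p2", "created_at": datetime(2024, 5, 1, 12, 0)},
])
print([p["id"] for p in feed])  # -> ['p2', 'p1']
```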
Social media platforms can encourage more mindful engagement by adding friction points that disrupt automatic consumption of content. For example, disabling autoplay features could lead to decreased session lengths, ultimately resulting in less exposure to misinformation and more intentional use of the platform.
One approach is to introduce prompts that encourage users to read or view content before sharing it. For instance, Twitter's experiment asking users to read articles before retweeting led to a 40% increase in reading before sharing. Additionally, WhatsApp's implementation of forwarding limits significantly reduced the spread of misinformation in India by adding a simple friction point to sharing.
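Both experiments boil down to inserting a cheap check before a share completes. The sketch below models them with hypothetical function names and constants; neither reflects Twitter's or WhatsApp's actual implementation:

```python
# Hypothetical friction points modeled on the two experiments above.
MAX_FORWARDS = 5  # assumed WhatsApp-style cap on forwards per message

def confirm_share(article_opened: bool, prompt=input) -> bool:
    """Read-before-share prompt: if the user never opened the article,
    make them confirm before the share goes through."""
    if article_opened:
        return True
    answer = prompt("You haven't opened this article. Share anyway? [y/N] ")
    return answer.strip().lower() == "y"

def can_forward(times_already_forwarded: int) -> bool:
    # Forwarding limit: block resharing once a message has been passed
    # along too many times, slowing the viral spread of misinformation.
    return times_already_forwarded < MAX_FORWARDS

# Example: a reflexive reshare of an unread article now requires a pause.
if confirm_share(article_opened=False):
    print("shared")
else:
    print("share cancelled")
```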
Social media companies should be transparent about their algorithms and how they prioritize content. This information should be available for external research to evaluate the impact of algorithms on user behavior and information dissemination. As an example, the European Union's Digital Services Act req ...
