Are Social Media Algorithms The Ones Choosing Your Preferences?

The Good, The Bad, And The Algorithmic

Take a moment to envision waking up one day and scrolling through your phone—only to discover that those shoes you were thinking about yesterday are now being advertised on your feed. At first glance, this shoe advertisement might not seem like a big deal, but when it appears alongside adorable cat videos, dance tutorials, and passionate political rants, it starts to feel highly suspicious. Love them or hate them, the advertisements paired with the videos you consume are not random; they are the product of carefully crafted social media algorithms. These
intricate rules and codes not only determine what appears on your feed but also influence how you engage with the platform—and, ultimately, even what you believe and how you feel. In this blog, we examine the dual nature of these algorithms: while they
personalize our experiences, they can also facilitate the spread of misinformation. In our modern digital landscape, one must ask: Who among us truly controls our choices?

How Algorithms Work: The Art and Science of “Technologically Autonomous” Systems

Social media platforms are designed to keep us engaged, and at the heart of this engagement lie sophisticated algorithms. These are not mere background processes; they are active
systems that learn from our behavior and strive to maximize the time we spend online. Let’s break down their modus operandi:

Acquisition of Information
Every interaction you have—whether it’s a click, a hover, or a scroll—is documented.
Platforms such as Instagram and Twitter have built robust systems that analyze these behavioral actions to develop detailed profiles of your interests, habits, and even your mood. This data collection is the foundation upon which the entire personalization process is built.

Predicting Engagement
Using machine learning techniques, these algorithms predict the type of content you’re likely to engage with next. Ever found yourself doom-scrolling through negative news? That’s not accidental; it’s the result of algorithms capitalizing on your past behavior to serve up similar content, often amplifying its most provocative elements.

Information Foraging
When your feed is bombarded with a steady stream of fitness tips, motivational videos, or even conspiracy theories, the algorithm isn’t passive. Each piece of content you consume feeds back into the system, reinforcing a narrowed digital profile that diminishes your
exposure to diverse viewpoints. This process of continuous feedback can lead to a “zoomed in” perspective where your digital world is ever more constrained.
These systems are not waiting for you to make a single move—they actively shape your information environment based on your past digital footprint.
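The three-step loop described above—collecting signals, predicting what you’ll engage with, and feeding each click back into your profile—can be sketched in a few lines of Python. To be clear, this is a toy illustration under simplified assumptions, not any platform’s actual code: the topic names, the scoring rule, and the `explore` rate are all invented for the example.

```python
import random
from collections import Counter

# Toy engagement-driven recommender: illustrative only, not any
# platform's real algorithm. Topic names are made up for the sketch.
TOPICS = ["fitness", "politics", "cats", "cooking", "conspiracy"]

def recommend(weights, explore=0.1):
    """Mostly serve the highest-weight topic; occasionally explore."""
    if random.random() < explore:
        return random.choice(TOPICS)
    return max(weights, key=weights.get)

def simulate(past_clicks, steps=100):
    # Step 1: acquisition -- past interactions seed the interest profile.
    weights = Counter({topic: 1.0 for topic in TOPICS})
    weights.update(past_clicks)
    for _ in range(steps):
        # Step 2: prediction -- serve the content we expect to engage.
        topic = recommend(weights)
        # Step 3: feedback -- consuming it reinforces the profile,
        # narrowing future recommendations (the echo-chamber effect).
        weights[topic] += 1.0
    return weights

profile = simulate(["politics", "politics", "cats"])
print(profile.most_common(3))
```

Even with a small initial bias (two clicks on "politics"), the exploit-heavy loop quickly concentrates almost all weight on that one topic, which is exactly the narrowing effect the passage describes.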

The Good: Algorithms as Connectors and Curators

When used with care, social media algorithms can be a genuine boon. There are undeniable benefits to having a curated online experience:

    1. Customized Online Interaction
      Thanks to these algorithms, your digital interaction feels uniquely tailored to you. Think of Spotify’s “Discover Weekly,” which introduces you to new artists, or Netflix’s recommendation engine that suggests your next binge-worthy show. For
      niche creators and small businesses, platforms like Instagram provide unprecedented access to audiences that might otherwise be unreachable.
    2. Amplifying Grassroots Movements: In times of social upheaval, algorithms have demonstrated an unexpected role in promoting grassroots movements. For example, during the Black Lives Matter protests in 2020, TikTok’s algorithm played a crucial role in amplifying educational content and mobilizing millions, even among users who never actively sought this information.²
    3. Supporting Mental Health: Perhaps one of the most profound benefits is the way social media platforms are now harnessing algorithms for mental health initiatives. Meta’s platforms, for instance, have implemented methods to detect early signals of distress and guide users toward supportive resources.³

The Bad: How Algorithms Hurt Society

However, while these systems can connect us, they also harbor significant risks. When misused or left unregulated, algorithms can cause real harm:

    1. Social Division and Echo Chambers
      To maximize engagement, algorithms often prioritize shocking or polarizing content.
      A study conducted by MIT in 2018 revealed that fake news spreads faster than real news on platforms like Twitter, as sensationalism grabs more attention. Each click
      reinforces existing beliefs, creating echo chambers that deepen societal polarization.⁴
    2. The Psychological Impact: The very formulas designed to keep us engaged can have grave psychological consequences. Endless scrolling, the constant urge to compare oneself with idealized images on Instagram, and features like TikTok’s beauty filters or YouTube’s autoplay function can lead to heightened anxiety, depression, and body dysmorphia—especially among vulnerable groups such as children and teenagers.⁵
    3. The Economy of Manipulation: Beyond individual well-being, algorithms can be weaponized for social control. The infamous Cambridge Analytica scandal, which came to light in 2018, showed how Facebook data could be exploited to sway public opinion and even influence election outcomes through disinformation. Today, with AI-powered deepfakes and increasingly sophisticated misinformation techniques, the potential for manipulation poses a serious threat to the integrity of public discourse.⁶

The Ugly: Who Is at Fault?

The issues stemming from algorithmic decision-making are not inherent to the technology itself; they arise from the absence of robust regulation and ethical oversight.

Regulatory Gaps
While regions like the European Union have taken steps—such as the adoption of the Digital Services Act—to enhance transparency and accountability among tech firms, many parts of the world (including the United States) still lack unified regulatory measures. This patchwork approach increases the risk of algorithmic abuse.⁷

Corporate Priorities Over User Wellbeing
As whistleblowers like Frances Haugen have revealed, many companies prioritize profit over user protection. In this capital-first model, ethical considerations are an afterthought, leaving users exposed to potential harm.⁸

Opaque Decision Making
Often referred to as “black boxes,” these algorithms make it exceptionally difficult to pinpoint exactly how content is promoted. Without transparency, it becomes nearly
impossible to hold anyone accountable for the spread of harmful or biased content.

Fighting Back: Reclaiming Control

So, how can we reclaim our digital autonomy in an era where algorithms seem to dictate our online experiences? Here are some strategies:

    1. Demand Transparency: Users, legislators, and advocacy organizations must press platforms to open up their “black boxes.” Greater transparency would allow independent researchers to audit
      these algorithms and ensure they are not being exploited for harmful ends.
    2. Promote Algorithmic Literacy: Education is key. As digital literacy grows, so too does the public’s ability to recognize and resist the creation of echo chambers. Teaching young people how algorithms work can empower them to make more informed choices online.
    3. Advocate for Responsible Design: There is a growing movement among tech designers to build what some call “humane algorithms.” These systems prioritize societal well-being over mere engagement metrics. By supporting companies that commit to ethical design practices, we can help shift the focus from profit to the public good.
    4. Engage in Political Activities: Active participation in the legislative process is crucial. Whether through voting or advocacy, supporting laws that protect privacy, mental health, and democratic values can help ensure that algorithms serve society rather than dominate it.

Final Thoughts: The Future of Social Media Is in Your Hands

There is nothing inherently wrong with social media algorithms. They are tools that can yield tremendous benefits if designed and regulated properly. The challenge is not in the technology itself, but in how it is deployed. With proper laws, increased transparency, and a united global effort toward responsible design, we can enjoy the perks of a personalized digital world without falling prey to its darker side.
So, the next time you find yourself scrolling through your feed, pause for a moment and reflect: Your choices matter—but who really controls them? Is it you, or the systems operating behind the screen? By demanding accountability, fostering digital literacy, and advocating for ethical technology, we can reclaim our digital autonomy and pave the way for a future where social media connects, educates, and empowers us all.

Footnotes

    1. Shoshana Zuboff, The Age of Surveillance Capitalism (Profile Books 2019).
    2. The Guardian, TikTok and the Black Lives Matter Movement (2020).
    3. Facebook Wellbeing Initiative, Suicide Prevention Tools (2022).
    4. Soroush Vosoughi et al., The Spread of True and False News Online (MIT, 2018).
    5. American Psychological Association, Social Media and Teen Mental Health (2023).
    6. The New York Times, Cambridge Analytica and Facebook: The Scandal and the Fallout (2018).
    7. European Commission, Digital Services Act (2023).
    8. The Wall Street Journal, The Facebook Files: Frances Haugen’s Testimony (2021).
    9. MIT Technology Review, The Rise of Ethical AI Engineers (2023).
    10. Netflix Tech Blog, How Netflix’s Recommendations System Works (2021).
    11. Eli Pariser, The Filter Bubble (Penguin 2011).
    12. Forbes, How Small Businesses Can Beat Instagram’s Algorithm (2023).
