
Zoomed Out | How user choices and corporate responsibility shape online safety

The responsibility for a safer, less polarised online space isn't just up to social media companies; users also play a key role, write Jindal Law School faculty members GV Anand Bhushan and Sunidhi Setia.


By Anand Bhushan G V | Sunidhi Setia | Jan 5, 2024 10:54:45 AM IST (Published)

The prevailing view of the internet suggests a challenging landscape filled with fake news, foreign interference, and echo chambers that seem to intensify extreme viewpoints. Social media platforms are often highlighted in this discussion, criticised for algorithms that appear to prioritise user engagement over factual accuracy.


While it's true that these algorithms can create a feedback loop, serving content that aligns with a user's existing beliefs, these platforms are not the sole creators of the digital environment. They offer a space where a variety of opinions can coexist, and it is ultimately for users to decide what opinions and information they consume, and to what extent.
The responsibility that users bear for the digital content they interact with was underscored by the Hon’ble Madras High Court in its recent decision refusing to quash the criminal petitions against S Ve Shekar. Justice Venkatesh likened sharing a message on social media to releasing an arrow: once sent, it cannot be taken back, and the sender must be accountable for its potential impact.
This sense of responsibility becomes even more significant when considering the findings of a study by Bakshy et al. (2015). Nearly eight years ago, their research highlighted how users were more likely to engage with and be exposed to content that matched their ideological stance, creating a filter bubble that isolated them from contrasting perspectives.
This lack of exposure to differing opinions can reinforce pre-existing biases and contribute to a deepening polarisation. The study by Bakshy et al. highlights how algorithms can create echo chambers on social media, leading us to question whether tweaking these algorithms could solve the issue. 
However, some recent studies published in journals such as Science and Nature suggest otherwise, making a compelling case that consuming similar kinds of algorithmically curated content does not necessarily change political beliefs. The four papers, which examined the 2020 US elections, indicate that while algorithms and reshared content do shape what users see, they do not significantly shift users' political beliefs.
The first of these studies, conducted by Andrew M. Guess et al., challenges the notion that algorithmic adjustments alone can serve as a panacea for polarisation and ideological extremity. It found that algorithms have a significant impact on what people see in their feeds, but that changing the algorithm for even a few months is unlikely to change people's political attitudes.
The study also found that reducing the prevalence of politically like-minded content in participants' feeds during the 2020 US presidential election had no measurable effect on attitudinal measures such as affective polarisation, ideological extremity, candidate evaluations, and belief in false claims.
Another study, conducted by Sandra González-Bailón et al., looked at how people interact with political news on social media. It found that users are mostly exposed to news that matches their own beliefs, and that this tendency grows stronger the more they interact with the content. The study indicates that switching to chronological feeds is not a panacea for ideological segregation on the platform; while it reduced user engagement, it did not substantially mitigate polarisation.
A similar conclusion was drawn in the study conducted by Brendan Nyhan et al. It found that content from like-minded sources constitutes the majority of what people see on the platform, although political information and news represent only a small fraction of these exposures.
The study also found that reducing exposure to content from like-minded sources during the 2020 US presidential election did not reduce polarisation in beliefs or attitudes. In other words, exposure to like-minded content on social media is common, but curbing its prevalence during an election campaign is unlikely to reduce polarisation.
The final study conducted by Andrew M. Guess et al examined the impact of reshared content on user behaviour during the 2020 US election. Researchers randomly assigned a group of consenting U.S. users to feeds without reshared content for three months. The study found that this change reduced exposure to political news, including unreliable sources, and led to fewer clicks and reactions. However, it did not affect levels of political polarisation or individual beliefs. The study concludes that while resharing amplifies political content, it doesn't significantly change people's opinions. 
The underlying theme across these studies is that users have the power to shape their online experience and do not form their views solely on the basis of the algorithmically driven content shown in their feeds. We have often discussed the methods and tools that social media companies should adopt to drive a safer online experience, and how this should be regulated.
The research above also suggests that regulating platforms alone will not solve the problem. Undeniably, social media companies can contribute to a safer and less polarised user experience by improving content moderation. While algorithms are good at flagging explicit content, they often fall short in identifying more nuanced issues such as hate speech, misinformation, or content that promotes radicalisation.
Another avenue for improvement lies in algorithmic transparency and user control. Currently, most users have little understanding of how their feeds are curated. Social media platforms can offer features that allow users to customise their algorithmic experience, providing options to diversify their news feed or even view it chronologically.
Transparency reports detailing how algorithms work can also be published regularly to educate users. This not only empowers users to take control of their digital experience but also builds trust between the platform and its user base. Further, social media companies can invest in digital literacy campaigns aimed at educating users about the risks of misinformation and the importance of cross-referencing information.
Platforms can also collaborate with third-party fact-checkers to provide real-time verification of trending news stories, displaying this information prominently in users' feeds. By taking a proactive role in educating users, social media companies can equip them with the tools they need to navigate the digital landscape safely, thereby contributing to a less polarised and more informed online community. Increasingly, countries are introducing these enhanced obligations through legislative intervention, which is a welcome move. Any upcoming regulation can therefore require platforms to ensure algorithmic transparency, user control, and similar safeguards.
However, the responsibility for a safer, less polarised online space isn't just up to social media companies; users also play a key role. First, it's important for users to realise they have the power to shape their own online experiences. This means actively seeking out a variety of viewpoints to avoid getting stuck in an echo chamber. Being aware of one's own biases and open to different opinions is a crucial first step. Second, critical thinking and media literacy are essential for anyone using social media. Before sharing information, it's important to fact-check and consider the credibility of the source. This simple step can help stop the spread of misinformation and reduce the impact of extreme or biased content. Lastly, civil discourse matters. Engaging respectfully with people who hold different opinions can help bridge divides. If a conversation becomes toxic, it's okay to step back and even report content that crosses into hate speech.
By setting these boundaries, users contribute to a more positive online environment. Thus, any regulation should acknowledge and appreciate the role of users in the creation and spread of content and should not blur the lines between a publisher of content and a social media platform.
 
GV Anand Bhushan is a Fulbright Scholar and a Visiting Professor at Jindal Law School, and Sunidhi Setia is a faculty member at Jindal Law School. The views expressed are personal.
