Assessing current platforms’ attempts to curb misinformation

In 2020, Forbes published a story crowning Facebook as the social media site where misinformation spreads the fastest. At this stage of the game, most of us have likely encountered a piece of misinformation, perhaps via screenshot, that spread like wildfire with Facebook as its host. Facebook has historically been a hotbed for misinformation; after researching its policies, I believe that is because its procedures lack depth beyond the surface level. Still, as the dangers of misinformation continue to evolve, the platform has recently taken measures to keep misinformation off the site as much as possible.

On its policy rationale page, Meta clearly states that there is no way to articulate a comprehensive list of everything that is prohibited. It does, however, draw a few firm boundaries, implemented through four categories of speech that Facebook explicitly defines as prohibited.

The first category is information Facebook removes outright: content that risks physical harm or violence. Facebook says it collaborates with a plethora of non-governmental, not-for-profit, humanitarian, and international organizations to combat and remove any information, such as unverifiable rumors, that would directly contribute to the risk of imminent harm or violence to people.

In a world still at the tail end of a global pandemic, Facebook is also partnering with leading health organizations that have implemented strict measures to keep false health information from spreading across the internet. The site lists an abundance of examples of how harmful health information spreads around the world. In addition, Facebook has a section addressing general misinformation circulating during a public health emergency. That short paragraph ends with a line that reads, “Click here for a complete set of rules regarding what misinformation we do not allow regarding COVID-19 and vaccines.” It is interesting to consider that Facebook and Twitter are amid ongoing litigation centered on COVID-19.

The United States has recently dealt with many “miracle cure” claims, amplified by former U.S. President Donald Trump telling people to take hydroxychloroquine. This has pushed the site to ban any promotion or advertising of a treatment that is “likely to directly contribute to the risk of serious injury or death, and the treatment has no legitimate health use.”

Staying on the political course, Meta has implemented regulations meant to maintain the integrity of elections, including prohibiting missing, false, or misleading information about upcoming elections. Examples include misinformation about the dates, locations, times, and methods of voting; voter registration; census participation; whether a candidate is running; and more.

The media manipulation section is the last clearly defined section on the Meta policy rationale page. It leaves users free to edit their pictures for cosmetic purposes, but not to deceive. In 2015, Facebook added a feature that allows minor photo edits. Meta says manipulated media is one facet of misinformation that cannot be corrected through discourse, so it has implemented sharp measures prohibiting edits that go beyond adjustments for clarity or quality, as well as video generated by artificial intelligence. Ironically, Meta now possesses an AI tool that helps it combat misinformation.

Mastodon is a social networking site established in 2016 with a design eerily similar to Twitter’s. It is a fairly obscure site I had never heard of, so in choosing my second site I was curious to see whether any of these websites flying under the radar take advantage of that lack of attention by allowing users to speak freely under few rules or regulations. After spending time on the site and scouring its information pages, I learned that is exactly what is taking place.

Mastodon has no rules defined specifically for misinformation. On the About page, I found a section labeled “server rules,” but upon clicking, instead of paragraphs defining and offering examples of prohibited misinformation, there were just a few bullet points under that highlighted heading. The site lists five server rules, only one of which touches on misinformation: “Do not share intentionally false or misleading information.” Nothing more is offered about combating misinformation. With that said, this site needs to be held accountable and, at a minimum, create a foundation where misinformation is quickly combated rather than seemingly embraced, whether through negligence or not. Thorough explanations and examples of what misinformation is and how it spreads would be a basic necessity. As it stands, Mastodon, and any social media site like it that tries to thrive in a Wild West environment with no rules or regulations, is a weapon of mass destruction in misinformation warfare.

Overall, I am not impressed by the strength of either Meta’s or Mastodon’s misinformation policies. Mastodon does nothing at all, while Facebook comes off as proactively combating misinformation, but when you read through its procedures, the loopholes are glaring. Simply put, there is much that Facebook tries to halt, but at the same time, there is much that it admits it “cannot clearly define” as prohibited or not.





