How social media affects voters

by Liam Mauger
The impact of social media on the U.S. election goes deeper than just the sharing of information.

As the U.S. moves toward Election Day, social media platforms such as Twitter, Instagram, and Facebook continue to be the primary source of information for many voters. However, users and accounts on these platforms can also sway opinions, publish fake news or fabricated numbers, and mislead viewers.

Even the platforms themselves shape how users see content: built-in algorithms determine which accounts people do and don't see, which can turn a timeline into an echo chamber. Instagram, specifically, does have a feature that displays voting information, fact-checking, and general context related to a post, so claims are easier to verify on that platform. For the most part, though, fact-checking and other verification must be done by users themselves.

Multiple social media accounts are currently focused purely on following the election, such as @WhenWeAllVote, @TheDemocrats, @GOP, and @Politics_Polls, all of which have large followings. Popular celebrity accounts have also been commenting on, and therefore influencing, the election, including @50Cent, @JenniferAnnistn, and @KanyeWest. West probably has the most impact of these, as he is an independent candidate, though he formerly supported Trump. These accounts are all legitimate, but troll accounts are also prevalent, and they are the cause of much of the incorrect information being spread. Trolls are sometimes bots and sometimes real people, but either way, their main purpose is to spread misinformation. In the end, though, these accounts normally align with one of the two major parties, Democratic or Republican, and feed the aforementioned echo chamber, in which those with dissenting opinions are ignored or ostracized, leading to more division.

In light of all this, a number of questions arise about accountability for these platforms. Are they doing enough to prevent the spread of misinformation and anger? And can they be held accountable for what their users post at a time when fake news can influence elections?

I have been at least partly misinformed about the election by social media myself. On Instagram, I read a post claiming that presidential candidate Joe Biden had made pro-segregation comments in the past, which many commenters claimed was untrue. With the responses largely skeptical, and having seen many posts on social media championing Biden as a lover of diversity, I did not believe the post and assumed the poster was a troll. My opinion of Biden would have remained unchanged if I hadn't looked it up, but I did, only to find that Biden had in fact made these comments at a 1977 hearing on the desegregation of public schools, where he worried about his children growing up in a “racial jungle with tensions having built so high that it is going to explode at some point.” I thought the post itself was misleading, only to find that I was actually misled by the responses: there were layers of deception in a single Instagram post.

I don't blame the platform itself for this, though; I think the commenters and I are both at fault. The commenters either willfully spread false information or failed to verify whether it was legitimate, and I didn't do my own fact-checking. This example serves as a reminder of how easy it is to be influenced by lies on social media, that some posts, comments, and actions are specifically tailored to falsely influence, and that a reputable platform does not guarantee that everyone on it is reputable.

There is also non-anecdotal evidence on both sides of the argument. Freedom of speech is relevant here: the opinions of a platform's users, however hate-filled some of them may be, should not be censored. Report features are present on all major social media platforms, often with specific options for flagging trolls or false information. The verification system is another point in the platforms' favor, as it lets users easily distinguish real people with followings from bots or imitation accounts, a frequent issue. As stated, Instagram has built-in services to help combat the spread of lies, which is a big step in the right direction (although my own experience with misleading content lacked these services).

On the other hand, when opinion or false information is presented as fact, the platforms could begin to face culpability, as they are allowing fake news to be posted, shared, and hosted on their websites and apps. The algorithm can also promote or highlight troll posts, depending on whether they align with a user's views, and wrongly influence that user. The report feature can be abused to silence posts or users over political disagreements. Some of the features designed to prevent fake news end up enabling it, and some blame the platforms for that. Whether the platform or its users are responsible is a multi-faceted problem without an easy solution; any answer is more a matter of personal opinion than of fact.

It cannot be denied, though, that these companies are using legitimate methods to stop fake news and get voters the information they are looking for. Perhaps they just need to find ways that can’t be reverse-engineered to do the exact opposite.
