ORIGINAL SCIENTIFIC PAPER / DOI: 10.20901/ms.10.19.8 / SUBMITTED: 30.04.2019
ABSTRACT The recent rise of concepts such as 'fake news' and 'post-truth' reveals the importance of digital literacy, especially on social media. In the digital era, disinformation and fake news are used in attempts to manipulate people's views on different topics, and fake content is rapidly replacing reality among new media users. Concepts such as 'filter bubbles' and 'echo chambers' indicate that people tend to be fed content that is ideologically aligned with their own views and to believe the fake news within it. This article analyzes the structure and functioning of fact-checking organizations in the context of preventing the propagation of fake news and improving digital literacy. The research is based on a content analysis of the verification activities of Teyit.org, a fact-checking organization in Turkey and a member of the International Fact-Checking Network, between January 1 and June 30, 2018. In-depth interviews with the verification team reveal how fake news propagates on social networks, how fact-checking processes work, and which methods are used to combat fake news. We found that fake content spreading through the Internet predominantly concerns political issues.
KEYWORDS
DISINFORMATION, FAKE NEWS, FACT-CHECKING ORGANIZATIONS, TEYIT.ORG, TURKEY
INTRODUCTION
New media have brought radical changes to news production and consumption. Along with this, users' preferences have started shifting from traditional media to digital platforms, especially social media.
Ben Parr (2008) defines social media as an efficient way to share and discuss information and experiences with other users via the Internet by means of electronic devices (computers, smartphones, etc.). A different view, which emphasizes user-generated content, defines social media as "mobile and web-based technologies to create highly interactive platforms via which individuals and communities share, discuss, and modify user-generated content" (Kietzmann et al., 2011). According to another definition, which takes a commercial perspective, social media are "a variety of ... online information that are created, initiated, circulated and used by consumers intent on educating each other about products, brands, [problems or experiences with a service]" (Blackshaw and Nazzaro, 2006).
Social media platforms are widely used in Turkey, as in the rest of the world. According to Simon Kemp (2018), there are 51 million social media users in Turkey, which amounts to 63% of the country's population; 44 million of them access social media platforms from mobile devices. Research on new media users in Turkey conducted by Çiğdem Bozdağ (2017) found that social media is the first thing that comes to users' minds when they think of the Internet, and that social media use accounts for 61% of Internet use.
However, new media in general, and social media networks in particular, are also often used as platforms where false and misleading information spreads, because their nature enables content to spread rapidly and allows user-generated content (Lazer et al., 2017).
CONCEPT AND TYPES OF FAKE NEWS
The concept of fake news is certainly not a new phenomenon. However, discussions of the issue intensified once again during the 2016 U.S. presidential election and the Brexit vote in the UK.
Before addressing the effects of fake news and the work of verification platforms, it is useful to define 'fake news'. In this study, 'fake news' is defined as content that is delivered to mislead individuals, regardless of its motivation.
There are various definitions of fake news in the related literature. Hunt Allcott and Matthew Gentzkow (2017: 213) define 'fake news' as "news articles that are intentionally and verifiably false, and could mislead readers". According to Axel Gelfert (2018: 84) "fake news is the deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design."
'Fake news' is also defined as "the presentation of false claims that purport to be about the world in a format and with a content that resembles the format and content of legitimate media organisations" (Levy, 2017: 20).
David M. J. Lazer et al. (2018: 1094) characterize 'fake news' as "fabricated information that mimics news media content in form but not in organizational process or intent". Jana Laura Egelhofer and Sophie Lecheler (2019: 3) state that "most authors agree that fake news contains false information". They also suggest that 'fake news' alludes to two dimensions of political communication: "the fake news genre (i.e. the deliberate creation of pseudojournalistic disinformation) and the fake news label (i.e. the instrumentalization of the term to delegitimize news media)".
Xinyi Zhou and Reza Zafarani (2018: 3) state that "there has been no universal definition for fake news, even in journalism." Thus, it is more useful to understand the forms the concept of 'fake news' takes than to seek a single definition.
Claire Wardle (2017) emphasizes that the term 'fake news' is neither clear nor inclusive enough, because the problem goes beyond news itself: it concerns the entire information ecosystem. Moreover, according to Wardle, the different types of misinformation (the unintended sharing of false information) and disinformation (the deliberate creation and sharing of information known to be false) cannot be captured by the concept of 'fake news' alone.
According to Wardle and Hossein Derakhshan (2017), the term 'fake news' is simply inadequate to describe the phenomenon of information pollution. The researchers therefore introduce a new conceptual framework for examining information disorder, distinguishing three types of information along the dimensions of falseness and harm: 'mis-information' is false information shared by mistake (good intention); 'dis-information' is false information created and shared with the purpose of harm (bad intention); and 'mal-information' is true information spread with the purpose of harm (bad intention).
Edson C. Tandoc Jr. et al. (2018), who examined academic articles in which the term fake news is used, identify satire, parody, fabrication, manipulation, propaganda and advertising as the most frequently used related terms.
For instance, 'news satire' refers to the more exaggerated and entertaining forms of fake news that are part of "mock news programs". Websites like The Onion likewise focus on entertainment. Satire-like productions are classified as 'news parody'; this kind of fake news can also be found in Turkey, e.g. on the Zaytung website. On such websites, both the reader and the producer understand that the content is fake. However, this is not the case for 'news fabrication', a different type of fake news: it is very hard for readers to identify content written in the form of a news template, sometimes imitating the visual identity of news organizations. Financial and political motivations, as well as the expansion of news bots, are common reasons for news fabrication. In addition, creating a false narrative through the manipulation of videos and photographs is also called fake news. In the realm of advertising and public relations, hyperbolic, eye-catching and sometimes untrue "clickbait", presented in the form of news to win consumers' trust and drive them to further websites, is also considered fake news. Finally, some content produced to influence public opinion as part of propaganda activities can also be categorized as 'fake news' (Tandoc Jr. et al., 2018).
In their research on the 2016 U.S. presidential election, Allcott and Gentzkow (2017) state that two main motivations drive the sharing of fake news. The first is commercial profit: individuals or institutions can earn money when fake news, posts and content draw users to their websites, generate likes on their social media accounts, or bring in subscribers. The second motivation is ideological: whether in power or in opposition, individuals, parties and similar formations that aim to spread or consolidate their own ideology, and to appeal to the corresponding masses by manipulating them, do not refrain from sharing fake news as truth.
FACTORS CAUSING PROPAGATION OF FAKE NEWS
According to Ralph Keyes (2017: 266), the term 'post-truth era' implies that deception has become ordinary in every sphere of contemporary life. For Keyes, the Internet is a remarkable vehicle for fake news presented as true, for advertisements designed to deceive users and for malicious rumors. The Internet, powerful as it is in creating and disseminating data quickly, also has many problems regarding the security and reliability of those data. Lazer et al. (2017) state that social media systems provide an efficient basis for misinformation to spread, which is dangerous especially for political discussion in a democratic society. Social media platforms provide a communication space for anyone who attracts followers, and this new power structure enables a small number of individuals with technical, social or political experience to distribute disinformation or 'fake news' in large volumes. In the broadest sense, 'fake news' covers all kinds of false and misleading content, such as fabricated reports, hoaxes, rumors, conspiracy theories, clickbait and satire (Shao et al., 2017).
Echo Chambers and Filter Bubbles
Jan Van Dijk (2006) states that the amount of information has increased significantly with new media. To cope with this intensity, mediators such as search engines that regulate the volume of information and communication, or algorithms that regulate content, have been incorporated into communication processes. According to Van Dijk, however, this also carries risks: the continuous use of these mediators can weaken individuals' judgment skills or deprive them of many other beneficial sources.
Thanks to filtering systems, new media users can easily determine whether a piece of information interests them or not. At first glance this filtering seems highly efficient, but its consequences are not always positive. Because of the filters they create, users rarely encounter information that could help them overcome their prejudices; instead, they encounter news close to their own ideas and beliefs. People become trapped in partisan groups and bubbles that admit only views similar to their own, losing their sense of shared reality and their ability to communicate across social and religious lines. At the same time, nationalism, tribalism, fear of immigration and social change, and hatred of the different are on the rise again (Kakutani, 2018).
The echo chamber effect arises when websites allow users to employ filtering features and thereby create their own echo chambers. In this way, users do not encounter opposing views in the virtual world: by forming homogeneous groups, they follow only accounts and Internet sources close to their own opinions (Colleoni et al., 2014: 319). Eli Pariser (2011) coined the term 'filter bubble' to describe how online personalization leads users to isolate themselves from diverse views and content. The term implies that users on social media such as Facebook and Twitter interact with individuals who share their political tendencies (Hess, 2017). Researchers also worry about the misconceptions that may arise when users live inside filter bubbles closed to different ideas and thoughts (Resnick et al., 2013: 95).
In her fieldwork, Suncem Koçer (2019) observes a similar situation in Turkey. Koçer states that "users follow news platforms and journalists close to their own views on social media and believe in news close to their own ideas" and emphasizes that "this is not surprising considering the dimensions of social polarization, foundations and reflections of polarization in the media".
On the other hand, some studies propose that the effects of these ideas should be evaluated skeptically. As An Nguyen and Hong Vu (2019) argue on the basis of reception studies, it is oversimplified and futile to evaluate concepts like the 'echo chamber' without dealing with socio-psychological dynamics and the ties between audiences and news media content.
Fake Accounts, Bots and Trolls
Bot programs are pieces of software created to capture users' private information (files, passwords, etc.) by infiltrating their personal computers. Bot accounts used on social media platforms, by contrast, are automated accounts that can be created anywhere, independent of time and place.
There are bot accounts on all social media platforms, but their presence and effect are most evident on Twitter. One study of Twitter found that two-thirds of tweets containing links were posted by bot accounts (Wojcik et al., 2018).
One of the malicious activities carried out by social bots is propaganda. Through this activity, referred to in the literature as the "artificial creation of grassroots" (astroturfing), politicians can use propaganda to their own benefit. In other words, it is an attempt to create a false impression of public support for a policy, a person or a product campaign.
Chengcheng Shao et al. (2017: 96) reviewed 14 million messages posted on Twitter during the 2016 U.S. presidential election and "find evidence that social bots play a key role in the spread of fake news".
Trolls
The term 'trolling' refers to being consciously antagonistic or offensive in computer-mediated communication (Hardaker, 2013). Rotimi Taiwo (2014) defines trolling as provocative behavior that aims to provoke emotional reactions in other people.
Trolls aim to annoy users, cause discomfort, spread news that can harm people, and damage individuals' reputation and dignity in the public eye (Coleman, 2012: 113). In this context, the 'troll' can be seen as an important driver of the propagation of fake news intended to harm others. Matthew Hindman and Vlad Barash (2018: 16) define troll accounts as "human-run accounts that usually seek to provoke or to spread disinformation".
Susan Herring et al. (2002), who discuss ways of dealing with trolls and trolling, recommend creating a system that enables messages to be blocked by filtering, informing users about the online behavior of trolls, and building a strong, centrally managed moderation team.
Given all these factors, improving digital media literacy as a defense against the propagation of fake news has gained ground. According to Laura Malita and Gabriela Grosseck (2018: 343), "digital media literacy has a major role to help people to avoid becoming victims of 'fake news' and disinformation." Some activities against fake news can be said to contribute to digital media literacy; in this context, verification platforms stand out as organizations that deserve close attention.
COMBATING FAKE NEWS AND VERIFICATION PLATFORMS
Development of Verification Platforms
The influence of new media (specifically social media platforms), the misinformation of the public through fake news, the concealment of truth and the serious consequences of these acts have revealed the need to combat fake news in new media. For this purpose, verification platforms have emerged to check information circulating in traditional media and especially on social media platforms.
Internet verification/fact-checking platforms were founded in the USA at the beginning of the 2000s to check and verify suspicious political statements and news (Graves and Cherubini, 2016: 6). According to another source, fact-checking platforms can be traced back to Ronald Reagan's presidential campaign in the USA in the 1980s (Lowrey, 2015: 377). Hoax-busting websites, which emerged in the early 1990s to debunk false information created to deceive or amuse, are also considered early examples in this field (Lowrey, 2015: 377). In general terms, Internet fact-checking platforms, which can be described as services that check "claims made in public statements through investigation of primary and secondary sources", are one of the structures that emerged to meet this need (Brandtzaeg and Folstad, 2017: 65).
Following their popularity and proliferation in the USA, verification/fact-checking platforms also started operating in Europe in the mid-2000s. The first European fact-checking platform was established to cover the general elections in the UK; similar platforms came into operation in the Netherlands and France in 2008. European fact-checking platforms were soon used to cover not only election processes but other agendas as well (Graves and Cherubini, 2016: 6).
Today, some fact-checking platforms operate independently while others are tied to an NGO. Fact-checking platforms can be examined in three categories according to their areas of concern (Brandtzaeg and Folstad, 2017: 65):
1. Those focusing on online rumors, hoaxes and stories (e.g. Snopes.com, Hoax-Slayer, HoaxBusters, TruthOrFiction.com, Viralgranskaren-Metro)
2. Those focusing on political and public claims (e.g. FactCheck.org, PolitiFact, The Washington Post Checker, CNN Reality Check, Full Fact)
3. Those focusing on specific topics or studies (e.g. #RefugeeCheck, Climate Feedback, StopFake, Truth Be Told)
The work of verification/fact-checking platforms covers not only texts, but also photos and videos. The number of platforms established in the USA and Europe has grown in parallel with the volume and circulation of fake news: there are currently 160 verification platforms active worldwide (Duke Reporters' Lab, 2019).
Discussions About Verification Platforms
When Internet verification/fact-checking platforms are discussed, the first prominent issue is the "reliability problem". Petter Bae Brandtzaeg and Asbjorn Folstad (2017) conclude that fact-checking platforms are beneficial, but that they cannot fully win people's trust. Their research indicates four factors that affect the perceived reliability of fact-checking platforms: ownership structure, financial sources, the structure and aim of the organization, and the transparency of the fact-checking process. The reason people approach verification/fact-checking platforms with reservation lies in their concern that platforms may have a 'political bias'. The existence of partisan verification platforms in the past, and especially their attitudes towards rival political candidates and parties during elections, formed the basis for this concern (Dobbs, 2012: 11). There are various other concerns as well: that these platforms cannot ensure the 'objectivity criteria' (covering parties equally in the news, standing at an equal distance from them) (Kavaklı, 2019: 401), that the verification process is open to human error, and that verification platforms will be insufficient for people who tend to believe fake news.
Fundamental Principles for Verification/Fact-Checking Platforms
Verification platforms that aim to inform the public correctly should leave no doubt about their objectivity, transparency, openness and reliability. In this context, the International Fact-Checking Network, established in 2015 as a body of the Poynter Institute to bring together the rapidly growing number of verification/fact-checking platforms across the world, has created fundamental principles to ensure their reliability.
These principles are: 'commitment to the principle of nonpartisanship and fairness', 'commitment to the principle of transparency of sources', 'commitment to the principle of transparency of funding and organization', 'commitment to the principle of transparency of methodology' and 'commitment to the principle of open and honest correction of analyses'.
VERIFICATION PLATFORMS IN TURKEY: TEYİT.ORG CASE
Fake news is considered an important problem in Turkey. According to the Reuters Institute Digital News Report 2018: Turkey Supplementary Report, printed news is users' last choice, while interest in the Internet and social media is growing steadily. The report also indicates that misinformation has become the most important issue in Turkey in recent years because of polarization in politics and the media. The report states that "49% of respondents stated that they have come across 'stories that are completely made up for political or commercial reasons'. This places Turkey at the top of the list compared with the average of all countries of 26%" (Yanatma, 2018). Hence, the need for fact-checking platforms is strongly felt in Turkey as well.
A survey about who should combat misinformation was conducted within the same project. The results show that "78% of respondents thought that media companies and journalists should do more, with 76% choosing technology companies like Facebook and Google". Furthermore, 68% answered that "the government should do more to separate what is real and what is fake on the Internet". Beyond these actors, however, verification platforms offer another option in the struggle against fake news.
In this context, as elsewhere in the world, various verification platforms operate in Turkey. The platforms in Turkey that describe themselves as verification platforms are listed below:
*Dogrulukpayi.com (www.dogrulukpayi.com)
*Dogrula.org (www.dogrula.org)
*Gununyalanlari.com (www.gununyalanlari.com)
*Malumatfurus.org (www.malumatfurus.org)
*Factcheckingturkey.com (www.factcheckingturkey.com)
*Teyit.org (www.teyit.org)
*Yalansavar.org (www.yalansavar.org)
The Foundation of Teyit.org and its Principles
Teyit.org, which describes itself as a "platform helping Internet users to reach correct information by fact-checking in various fields, from misconceptions and suspicious information on the social media agenda to the media's allegations and urban myths", went online on October 26, 2016, founded by the journalist Mehmet Atakan Foça. Teyit.org is a member of the International Fact-Checking Network.
Foça describes the motivation behind the platform as follows:
During 2015 and 2016 there were many crisis moments, such as explosions. In those years, the medium people used to consume news was Twitter; everything was there in times of crisis. We saw that users were having difficulty distinguishing what was right and wrong at the time of a crisis. People were trying to help each other. I was trying to show people on my personal page that not everything they see is true, but it was not enough; we could not reach that many people. However, I had started talking about verification methods here and there. We aimed to make it professional and bring it to more people (Foça, 2019).
Teyit.org consists of a team of ten (Çavuş, 2019), working as a content team, a project team and a video team. After the verification process, the platform shares the analysis of suspicious news with the public on teyit.org, facebook.com/teyitorg, twitter.com/teyitorg, instagram.com/teyit.org and youtube.com/teyitorg. Users can also receive a weekly newsletter if they subscribe to the mailing list on teyit.org.
It is stated in the "methodology and principles" section of the platform's website that the verification process has four phases. These phases are as follows:
a) Scanning
Teyit.org editors check news, social media trending topics and news sent by readers. They also use different software such as Dubito.
b) Choosing
Given the amount of suspicious news the editors of Teyit.org receive, the reports are evaluated and prioritized. The decision on which suspicious news to examine and analyze first is based on whether the news has at least one of three features: virality, importance and urgency (https://teyit.org/methodology/). Teyit.org editor Gülin Çavuş explains how they select the news to be verified:
Apart from the reports received, our editors and writers scan for suspicious news by examining the agenda. The process actually proceeds through volunteering. We share the work and prioritize the news, making a division of labor over who will take care of what. Of course, verification itself involves many processes. Videos may be watched for hours, or books reviewed at libraries. Digital tools are used. We need to know what to suspect: a text or a technical detail in a video can take you to a different point. Expert opinions can also be helpful during verification. The real issue is finding content to suspect and going after it (Çavuş, 2019).
Foça, who notes that it is difficult to predict what people will and will not believe, explains the process as follows:
The critical thing in prioritizing is virality. We try to predict what people would believe and what they would not. There is no detector for this. It is enough to analyze the suspicious content that is shared and liked the most (Foça, 2019).
c) Investigation
Basic journalistic tools are used by the editors to verify suspicious content. They also use the digital tools and principles from the Verification Handbook.
d) Result and Analysis
As a result of the investigation, an analysis consisting only of tangible data and facts is prepared. Once all phases are completed, one of four conclusions is drawn for the analyzed claim:
1. True: the data obtained show that the analyzed claim is true.
2. False: the data obtained show that the analyzed claim is false.
3. Mixed: the analyzed claim is a multiple proposition containing both true and false information (or true and ambiguous, or false and ambiguous information).
4. Uncertain: data about the analyzed claim were obtained, but they are not sufficient to conclude whether the claim is true, false or mixed.
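The selection rule and the verdict taxonomy described above can be summarized as a small data model. The sketch below is purely illustrative: the class and field names are our own invention, not Teyit.org's actual tooling. It encodes the rule that an item qualifies for examination if it has at least one of the three features (virality, importance, urgency) and enumerates the four possible verdicts.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    """The four conclusions an analysis can reach."""
    TRUE = "true"
    FALSE = "false"
    MIXED = "mixed"          # both true and false (or ambiguous) parts
    UNCERTAIN = "uncertain"  # data insufficient for a conclusion


@dataclass
class SuspiciousItem:
    claim: str
    viral: bool      # widely shared and liked
    important: bool  # matters to public debate
    urgent: bool     # tied to a breaking event

    def qualifies(self) -> bool:
        # An item enters the queue if it has at least one of the
        # three features named in the methodology.
        return self.viral or self.important or self.urgent


# Hypothetical usage: items meeting more criteria are examined first.
queue = [
    SuspiciousItem("photo allegedly from last night's event", True, False, True),
    SuspiciousItem("decades-old urban myth", False, False, False),
]
to_examine = sorted(
    (item for item in queue if item.qualifies()),
    key=lambda item: (item.viral, item.important, item.urgent),
    reverse=True,
)
```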
Teyit.org was launched as a nonprofit social enterprise that focuses on social impact and does not distribute income. Its activities are run under multiple institutional structures because Turkey lacks a proper legal infrastructure for social enterprises. In addition to informing users and society, the platform publishes reports, in which it also communicates its own activities, and publishes translated books. These reports are: "Insight report: what do we doubt on the web?", "Verification handbook: a definitive guide to verifying digital content for emergency coverage", "Verification handbook for investigative reporting: a guide to online search and research techniques for using UGC and open source information in investigations" and "Media usage and news consumption: trust, verification, political polarization". The aim of these reports is to test the platform's assumptions: to reveal whether people consume news, how they consume it, how they read it and how they verify it (Foça, 2019).
The increase in fake news during election periods is striking in Turkey. The Fake News Report prepared by Teyit.org focused on the local elections held on March 31, 2019. According to the report, 61% of Internet users in this period said that in the previous week they had come across news they thought was completely made up. In the same period, Teyit.org received more interaction and more reports of suspicious news than during the previous election. However, the report also shows that the reach of Teyit.org's analyses is still low compared to the interaction that fake news receives.
Paul Mena (2019) finds that news labeled as "fake" by verification platforms is less likely to be shared on Facebook than news carrying no verification information. To prevent the propagation of fake news on its platform and to provide a reliable information flow, Facebook cooperates with organizations approved by the International Fact-Checking Network in 35 countries. In Turkey, Teyit.org carries out the verification of suspicious content spreading via Facebook. Foça (2018) points out that, according to Facebook, the reach of a post decreases by around 80% after it has been flagged as fake.
Teyit.org detected more than 500 items of fake news between 2016 and 2018. In 2017, for example, an attack on the Reina nightclub in Istanbul took the lives of 39 people. Afterwards, photographs of innocent people appeared in both mainstream and social media; the editors of Teyit.org revealed that these people were not involved in the attack (Lowen, 2018).
Two reports published by Teyit.org in 2017 and 2018 show growing interest in the platform. While 7,628 suspicious content notifications were received in the 2016-2017 period, the number rose to 11,518 in 2017-2018; the daily average of notifications sent for verification increased by 12.67 percent compared to the previous period. In times of crisis, Internet users turned to Teyit.org more often, and the amount of suspicious news propagated during the election period increased by 80 percent (Avşar, 2019).
The Main Principles of Teyit.org
Teyit.org publicly shares its three main principles (https://teyit.org/methodology/):
Objectivity and Openness
Teyit.org claims to present the verifiable truth of the news "without being a side of any political discussion". Suspicious news is analyzed by Teyit.org editors. To prioritize the news to be analyzed, they consider three features, which are also shared with the public in the Methodology section of their website: the suspicious news should be important, widespread and urgent.
It is stated that, in accordance with internal verification processes, the gathered sources and analyses are also checked by editors other than the one who wrote the analysis (Foça, 2019).
Correction Policy
Teyit.org emphasizes that all processes are demonstrated clearly in its analyses. The platform states that its most important goal, while averting the false propagation of news or images, is "to enable following the truth and developing critical thinking reflexes into a habit for all users who get their news on the Internet" (https://teyit.org/methodology/).
If a mistake occurs in these processes, the analyses are revised on the basis of "correction requests" received through the platform's social media accounts and WhatsApp hotline.
Economic Transparency
Information on the supporters of Teyit.org is given on its web page in the Supporters section. Teyit.org has received support, such as funding and in-kind aid, from various non-governmental organizations. To collect individual contributions, it has also used crowdfunding through the Patreon platform since February 2018. In addition, some collaborations provide income for Teyit.org: after signing a contract with Facebook in May 2018, Teyit.org started flagging false news on Facebook, with the aim of reducing Facebook users' exposure to false news (https://teyit.org/about/).
However, Teyit.org officers state that no funding institution has any influence on the analyses or articles published, and that any attempt to intervene in the platform's content policy or methodology would never be accepted (Foça, 2019).
METHODOLOGY
In order to understand the structure and functioning of Teyit.org, in-depth interviews were conducted with the founder and the editors of the platform. The verification process was also observed at the platform's office in Ankara, the capital of Turkey.
Furthermore, in order to obtain quantitative results about the content of these verification activities, three basic research questions were prepared:
1. What categories of content does Teyit.org evaluate for the verification process?
2. What types of media (photo, video, text, etc.) does this content consist of?
3. Through which medium does fake news or false information propagate, and how much interaction does the content receive on social media?
Quantitative analysis was used to answer these questions. In this study, content analysis, a quantitative research technique, was used to understand the structure of the verification practices of Teyit.org.
Kevin Coe and Joshua M. Scacco (2017: 1) describe quantitative content analysis as "a research method in which features of textual, visual, or aural material are systematically categorized and recorded so that they can be analyzed."
Teyit.org's verification analyses published between January 1 and June 30, 2018 were collected from the website (www.teyit.org), and the data were examined through content analysis guided by the research questions of the study.
First, the suspicious content delivered to Teyit.org was categorized by subject. The categories are politics, life, health, science, sport, urban myth, magazine, technology, art, jurisdiction, education and economy.
Another variable is content type: the items were coded according to whether the suspicious content consisted of visuals such as photos and videos, news texts, or social media messages circulating on Twitter and Facebook. In addition, each item was coded as created for the Internet and social media networks or for traditional media, in order to identify the medium in which the suspicious content circulated.
Teyit.org also shares screenshots of social media content in its analyses, so the number of likes, dislikes, comments and views the content has reached can be seen even before the review is completed. Based on these data, total interaction numbers for the content were also recorded. Needless to say, these data have limitations and may vary over time, but they still give an idea of how much interaction the content receives. This number is considered especially important for identifying the circulation and interaction of information found to be inaccurate as a result of the examination.
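To illustrate the coding scheme, the tallying that produces the kind of figures reported in the findings can be expressed in a few lines of code. This is a minimal sketch with invented example records and field names, not the authors' actual analysis software.

```python
from collections import Counter

# Hypothetical coded records, one per analyzed item:
# (subject, content_type, medium, interactions, verdict)
coded_items = [
    ("politics", "photo", "Twitter", 120_000, "false"),
    ("health", "video", "Facebook", 45_000, "mixed"),
    ("life", "news text", "Internet news site", 8_000, "true"),
]

# Frequency counts by subject and by content type.
items_by_subject = Counter(subject for subject, *_ in coded_items)
items_by_type = Counter(ctype for _, ctype, *_ in coded_items)

# Total interactions aggregated per verdict.
interactions_by_verdict = Counter()
for *_, interactions, verdict in coded_items:
    interactions_by_verdict[verdict] += interactions

print(items_by_subject.most_common())  # which subjects dominate
print(interactions_by_verdict)         # total reach per verdict
```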
FINDINGS AND DISCUSSION
The Subject of the Content
Within the scope of the research, Teyit.org analyzed 164 items of suspicious content between January 1 and June 30, 2018. In this period, suspicious content was analyzed most within the politics category, with a total of 112 items. The other categories follow: life (23), health (11), science (5), sport (3), urban myth (3), magazine (2), technology (1), art (1), jurisdiction (1), education (1) and economy (1).
After the verification processes, 89 of the 112 items of suspicious content in the politics category were found to contain false information, thirteen were found to be true, and ten were mixed, containing both true and false information. Sixteen items of life news, nine items of health news, three items each of sport and urban myth news, and two items of magazine news contained false information. All of the analyzed content in the technology and art categories was false, the items in the jurisdiction and education categories were mixed, and the item in the economy category was true.
Content Type
Teyit.org's analyses show that the content deemed suspicious on websites and social networks consisted mostly of photos and videos. During the analyzed period, 79 photos, 40 videos, 23 news texts and 22 social media messages were examined as suspicious content.
Of these, 62 of the photos, 36 of the videos, 14 of the social media messages and 15 of the news texts were determined to be false.
Medium of Propagation
Although the analyses conducted by Teyit.org mostly concern suspicious content spreading on new media, five items of content broadcast on TV channels and one column published in a newspaper were also evaluated. Some suspicious items were put into circulation on different social networks simultaneously. Facebook (95), Twitter (93), Internet news sites (56) and Instagram (6) are the media in which fake news and false information were most frequently encountered. In addition, five items of content spreading on WhatsApp and four YouTube videos were analyzed by the editors.
Interaction Numbers
The total number of interactions pertaining to the suspicious content analyzed in the six-month period is 39,530,247. The interaction count for analyses with 'false' results is 36,401,391; for analyses with 'mixed' results, 716,221; and for analyses with 'true' results, 2,412,635.
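These subtotals are internally consistent: they sum exactly to the reported total, and the 'false' subtotal amounts to roughly 92% of all recorded interactions. A quick check, using only the figures above:

```python
false_i, mixed_i, true_i = 36_401_391, 716_221, 2_412_635
total = false_i + mixed_i + true_i
assert total == 39_530_247  # matches the reported six-month total
print(f"share of 'false' content: {false_i / total:.1%}")  # -> 92.1%
```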
CONCLUSION
The Internet is considered, especially by young users in Turkey, more credible and reliable than conventional media such as television and newspapers; news is followed on social media in particular (Bozdağ, 2017). However, the content based on false information and the fake news frequently encountered on the Internet give rise to discussions about the reliability of information in new media, and solutions to the problem are being sought.
Apart from the activities of reporters, media and technology companies and states, this research discusses what verification platforms do to combat fake news, which spreads rapidly especially on new media. In this context, Teyit.org, a member of the International Fact-Checking Network in Turkey that carries out news verification regularly, is analyzed with the aim of revealing the structure and functioning of verification platforms.
In the first six months of 2018, 164 shared items, most of which spread on social media and were identified by the editors of Teyit.org as suspicious, were examined using the content analysis technique.
Our first research question concerned the categories in which the content handled during the verification process was concentrated. It was found that more than half of the content that spreads on the Internet and is flagged as false concerns political issues. This finding can also be associated with the discussions around concepts such as 'echo chambers' and 'filter bubbles', which imply that users mainly follow people closer to their own views and tend strongly to believe content they find ideologically closer to them. Indeed, researchers draw attention to people's tendency to seek information that validates their existing beliefs, even when that information is not clearly understood (Flynn et al., 2017).
Regarding our second research question, it was concluded that the suspicious content consists mostly of images such as videos and photographs. Teyit.org examined 79 photographs and found that 62 of them contained false information; of the 40 videos examined, 36 were found to contain fake or false information.
The third research question focused on the diffusion environment and the interaction of fake news. The content reviewed by Teyit.org was mainly composed of information disseminated through social media (Facebook: 95 items, Twitter: 93 items). Moreover, the content determined to be false as a result of the examination received more interaction on social media.
Interest in the verification activities of Teyit.org evidently increases especially in periods of crisis (an armed attack, a natural disaster, an election, etc.). Collaboration projects with international organizations such as Facebook are also important for extending the reach of verification processes. The spread and prevention of false news is, of course, a large issue, and Teyit.org's activities alone are not sufficient to eliminate the problem. Still, it is beneficial for Internet users to be able to share information with a platform that verifies suspicious content; indeed, the number of requests to verify suspicious content has increased every year since 2016. In addition, Teyit.org's efforts to draw attention to the issue through the reports published on its website and social media accounts may increase public knowledge of fact-checking processes.
As in many countries around the world, fake news is an important issue in Turkey. Because the growing number of verification platforms contributes to Internet users' digital literacy, namely their ability to identify fake news in new media, the further development of such platforms is also required for the vitality of democracy. Fake news jeopardizes the credibility of the news media and thereby causes problems in political decision making among citizens in democracies, which depend intensely on the media to inform their citizens (Jones, 2004; Balmas, 2014).
1 This study is part of the thesis titled "The Verification Platforms in Turkey in the Context of Verification of Spreading the News in New Media Environment: Teyit.org Example".
References
*Allcott, Hunt and Gentzkow, Matthew (2017) Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives 31 (2): 211-236. DOI: 10.1257/jep.31.2.211.
*Avşar, Burak (2019) Teyit'in Son 15 Ayına Verilerle Bir Bakış. https://teyit.org/teyitin-son-15-ayina-verilerle-bir-bakis/ (12/07/2019).
*Balmas, Meital (2014) When Fake News Becomes Real. Communication Research 41 (3): 430-454. DOI: 10.1177/0093650212453600.
*Blackshaw, Pete and Nazzaro, Mike (2006) Consumer-Generated Media (CGM) 101: Word-Of-Mouth in the Age Of the Web-Fortified Consumer, New York, NY: Nielsen BuzzMetrics.
*Bozdağ, Çiğdem (2017) Türkiye'de Yeni Medya Kullanım Eğilimleri Araştırma Raporu. http://ctrs.khas.edu.tr/sources/Yeni%20Medya%20Egilimler%20Rapor.pdf (24/01/2019).
*Brandtzaeg, Petter Bae and Folstad, Asbjorn (2017) Trust and Distrust in Online Fact-Checking Services. Communications of the ACM 60 (9): 65-71. DOI: 10.1145/3122803.
*Coe, Kevin and Scacco, Joshua M. (2017) Quantitative Content Analysis, pp. 346-356 in Matthes, Jörg (ed.) The International Encyclopedia of Communication Research Methods. Hoboken, NJ: Wiley-Blackwell. DOI: 10.1002/9781118901731.iecrm0045.
*Coleman, Gabriella (2012) Phreaks, Hackers, and Trolls: The Politics of Transgression and Spectacle, pp. 99-119 in Mandiberg, Michael (ed.) The Social Media Reader. New York: New York University Press.
*Colleoni, Elanor; Rozza, Alessandro and Arvidsson, Adam (2014) Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data. Journal of Communication 64 (2): 317-332. DOI: 10.1111/jcom.12084.
*Çavuş, Gülin (2019) In-depth interview, Ankara (13/02/2019).
*Dobbs, Michael (2012) The Rise of Political Fact Checking: How Reagan Inspired a Journalistic Movement: A Reporter's Eye View. The New America Foundation. https://www.issuelab.org/resources/15318/15318.pdf (16/11/2019).
*Duke Reporters Lab (2019) Global Fact-Checking Sites. https://reporterslab.org/fact-checking/ (06/03/2019).
*Egelhofer, Jana Laura and Lecheler, Sophie (2019) Fake News as a Two-Dimensional Phenomenon: A Framework and Research Agenda. Annals of the International Communication Association 43 (2): 97-116. DOI: 10.1080/23808985.2019.1602782.
*Flynn, D. J.; Nyhan, Brendan and Reifler, Jason (2017) The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics. Advances in Political Psychology 38 (1): 127-50. DOI: 10.1111/pops.12394.
*Foça, Mehmet Atakan (2019) In-depth interview, Ankara (13/02/2019).
*Foça, Mehmet Atakan (2018) Facebook - Teyit İş Birliğinin Yayıncılar Üzerindeki Etkisi Ve Öneriler. https://teyit.org/facebook-teyit-is-birliginin-yayincilar-uzerindeki-etkisi-ve-oneriler/ (06/06/2018).
*Gelfert, Axel (2018) Fake News: A Definition. Informal Logic 38 (1): 84-117. DOI: 10.22329/il.v38i1.5068.
*Graves, Lucas and Cherubini, Federica (2016) The Rise of Fact-Checking Sites in Europe. http://reutersinstitute.politics.ox.ac.uk/publication/rise-fact-checking-sites-europe (06/06/2018).
*Hardaker, Claire (2013) "Uh. ...not to be nitpicky,,,,,but...the past tense of drag is dragged, not drug.": An overview of trolling strategies. Journal of Language Aggression and Conflict 1 (1): 58-86. DOI: 10.1075/jlac.1.1.04har.
*Herring, Susan; Job-Sluder, Kirk; Scheckler, Rebecca and Barab, Sasha (2002) Searching For Safety Online: Managing "Trolling" In A Feminist Forum. The Information Society 18 (5): 371-384. DOI: 10.1080/01972240290108186.
*Hess, Amanda (2017) How to Escape Your Political Bubble for a Clearer View. https://www.nytimes.com/2017/03/03/arts/the-battle-over-your-political-bubble.html?_r=0 (06/06/2017).
*Hindman, Matthew and Barash, Vlad (2018) Disinformation, "Fake News" and Influence Campaigns on Twitter. Knight Foundation. https://kf-site-production.s3.amazonaws.com/media_elements/files/000/000/238/original/KFDisinformationReport-final2.pdf (23/06/2019).
*Jones, David A. (2004) Why Americans Don't Trust the Media. Press/Politics 9 (2): 60-75. DOI: 10.1177/1081180x04263461.
*Kakutani, Michiko (2018) The Death of Truth: Notes on Falsehood in the Age of Trump. New York: Tim Duggan Books.
*Kavaklı, Nurhan (2019) Yalan Haberle Mücadele ve İnternet Teyit/Doğrulama Platformları. Erciyes İletişim Dergisi 6 (1): 663-682. DOI: 10.17680/erciyesiletisim.453398.
*Kemp, Simon (2018) Digital in 2018: World's Internet Users Pass the 4 Billion Mark. https://wearesocial.com/blog/2018/01/global-digital-report-2018 (02/03/2019).
*Keyes, Ralph (2017) Hakikat Sonrası Çağ. İzmir: Delidolu Yayıncılık.
*Kietzmann, Jan H.; Hermkens, Kristopher; McCarthy, Ian and Silvestre, Bruno (2011) Social Media? Get Serious! Understanding the Functional Building Blocks of Social Media. Business Horizons 54 (3): 241-251. DOI: 10.1016/j.bushor.2011.01.005.
*Koçer, Suncem (2019) Türkiye'de Sosyal Medya Ve Yalan Haber: Sahadan Notlar. https://www.newslabturkey.org/turkiyede-sosyal-medya-ve-yalan-haber-sahadan-notlar/ (07/06/2019).
*Lazer, David M. J.; Baum, Matthew A.; Benkler, Yochai; Berinsky, Adam J.; Greenhill, Kelly M.; Menczer, Filippo; Metzger, Miriam J.; Nyhan, Brendan; Pennycook, Gordon; Rothschild, David; Schudson, Michael; Sloman, Steven A.; Sunstein, Cass R.; Thorson, Emily A.; Watts, Duncan J. and Zittrain, Jonathan L. (2018) The Science of Fake News. Science 359 (6380): 1094-1096. DOI: 10.1126/science.aao2998.
*Lazer, David M. J.; Baum, Matthew A.; Grinberg, Nir; Friedland, Lisa; Joseph, Kenneth; Hobbs, Will and Mattsson, Carolina (2017) Combating Fake News: An Agenda for Research and Action. https://shorensteincenter.org/wp-content/uploads/2017/05/Combating-Fake-News-Agenda-for-Research-1.pdf?x78124 (23/01/2019).
*Levy, Neil (2017) The Bad News About Fake News. Social Epistemology Review and Reply Collective 6 (8): 20-36.
*Lowen, Mark (2018) Türkiye'de Sahte Haberler: Komplo Teorilerinin Gezdiği Topraklarda Doğrunun Avı. https://www.bbc.com/turkce/haberler-turkiye-46221257 (06/08/2019).
*Lowrey, Wilson (2015) The Emergence and Development of News Fact-checking Sites. Journalism Studies 18 (3): 376-394. DOI: 10.1080/1461670x.2015.1052537.
*Malita, Laura and Grosseck, Gabriela (2018) Tackling Fake News in a Digital Literacy Curriculum. The 14th International Scientific Conference eLearning and Software for Education. Bucharest, Romania, April 19-20, 2018.
*Mena, Paul (2019) Cleaning Up Social Media: The Effect of Warning Labels on Likelihood of Sharing False News on Facebook. Policy & Internet. DOI: 10.1002/poi3.214.
*Nguyen, An and Vu, Hong (2019) Testing Popular News Discourse on the "Echo Chamber" Effect: Does Political Polarisation Occur Among Those Relying on Social Media as Their Primary Politics News Source? First Monday 24 (6). DOI: 10.5210/fm.v24i6.9632.
*Pariser, Eli (2011) The Filter Bubble: What the Internet is Hiding From You. New York: Penguin.
*Parr, Ben (2008) It's Time We Defined Social Media. No More Arguing. Here's the Definition. http://benparr.com/2008/08/its-time-we-defined-social-media-no-more-arguing-heres-the-definition/ (11/03/2019).
*Resnick, Paul; Garrett, R. Kelly; Kriplean, Travis; Munson, Sean and Stroud, Natalie Jomini (2013) Bursting Your (Filter) Bubble: Strategies for Promoting Diverse Exposure, pp. 95-100 in Proceedings of the 2013 Conference on Computer Supported Cooperative Work Companion. New York, USA: ACM. DOI: 10.1145/2441955.2441981.
*Shao, Chengcheng; Ciampaglia, Giovanni Luca; Varol, Onur; Yang, Kai-Cheng; Flammini, Alessandro and Menczer, Filippo (2018) The Spread of Low-Credibility Content by Social Bots. Nature Communications 9 (1): 1-9. DOI: 10.1038/s41467-018-06930-7.
*Taiwo, Rotimi (2014) Impoliteness in Asynchronous Online Discussion Forum: A Case Study of Trolling in Nairaland.com, pp. 67-76 in Chiluwa, Innocent, Ifukor, Presley and Taiwo, Rotimi (eds) Pragmatics of Nigerian English in Digital Discourse. Munich: LINCOM Europa.
*Tandoc Jr., Edson C.; Lim, Zheng Wei and Ling, Richard (2018) Defining 'Fake News': A Typology of Scholarly Definitions. Digital Journalism 6 (2): 137-153. DOI: 10.1080/21670811.2017.1360143.
*Van Dijk, Jan (2006) The Network Society. London: SAGE Publications Ltd.
*Wardle, Claire (2017) Fake News. It's Complicated. First Draft. https://firstdraftnews.org/fake-news-complicated/ (16/01/2019).
*Wardle, Claire and Derakhshan, Hossein (2017) Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Council of Europe Report. https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c (28/03/2019).
*Wojcik, Stephen; Messing, Solomon; Smith, Aaron; Rainie, Lee and Hitlin, Paul (2018) Bots in the Twittersphere. Pew Research Center. https://www.pewinternet.org/wp-content/uploads/sites/9/2018/04/PI_2018.04.09_Twitter-Bots_FINAL.pdf (21/01/2019).
*Yanatma, Servet (2018) Reuters Institute Digital News Report 2018: Turkey Supplementary Report. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2018-11/Digital%20News%20Report%20-%20Turkey%20Supplement%202018%20FINAL.pdf (06/03/2019).
*Zhou, Xinyi and Zafarani, Reza (2018) Fake News: A Survey of Research, Detection Methods and Opportunities. ACM Computing Surveys 1 (1): 1-40.