Disinformation: Policy Responses to Building Citizen Resiliency
Publication Type: Journal Article
Source: Connections: The Quarterly Journal, Volume 20, Issue 2, p. 47-55 (2021)
Keywords: citizen resilience, digital literacy, disinformation
Abstract:
Malign actors use fake social media accounts and automated tools, also called computational propaganda, to launch disinformation operations. While technology companies and researchers continue to advance computational propaganda detection, they also know that eradicating social bots and disinformation is impossible. Since computational propaganda continues to increase, governments need to focus their efforts on developing policies that decrease citizen demand for disinformation. The purpose of this article is to explore disinformation at the intersection of technology and citizen resiliency. First, the current landscape is explored to understand the impact of disinformation on society and its citizens. Second, the effect of technology on the supply of disinformation is examined. Third, methods to decrease the demand for disinformation are considered as a way to increase citizen resiliency.
Introduction
With the growth of social media, a flood of unregulated content is available on the Internet. Gone are the socially responsible publishers, editors, and subject matter experts who evaluated information in the era of traditional media.[1] Instead, citizens are left to decide what is fake or real, while malign actors leverage this opportunity, along with the openness of democracies, to influence societies with disinformation. Disinformation is false information created and spread deliberately to confuse or mislead; it may blend truth and untruth or purposefully omit context.[2] Governments need to focus their efforts on developing policies that decrease citizen demand for disinformation because controlling the supply is a formidable task when machines increasingly create the content.
Governments, civil society groups, and technology companies recognize disinformation as a global problem, but they struggle with their responses. Malign actors sow discord and distrust using newer and better tools, leaving citizens, who are the target of disinformation operations, worried about the impact of disinformation on the Internet. Knuutila and colleagues found that 53 % of regular internet users (154,195 respondents in 142 countries) were concerned about encountering disinformation online, with the highest concern (65 %) coming from North America.[3] They were more concerned about disinformation than online fraud or harassment.
This article examines disinformation at the intersection of technology and citizen resiliency. First, the current landscape is explored to understand the impact of disinformation on society and its citizens. Second, after examining the impact of technology on the supply-side of disinformation, methods to reduce the demand-side are considered as the basis of citizen resiliency. Finally, the article concludes with policy recommendations for starting a citizen resiliency program.
Computational Propaganda
Malign actors use fake social media accounts and automated tools, collectively called computational propaganda, to launch disinformation operations. Woolley and Howard (2016) define computational propaganda as the use of “algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.” [4] Computational propaganda tools include bots, sock puppets, robo-trolls, and deepfake videos.
First, bots—short for robots—are software programs with legitimate uses, such as automating tasks on websites. In disinformation operations, social media bots impersonate humans by communicating and interacting with people and systems. They can be social bots, which are fake, fully automated accounts, or cyborgs, which are accounts operated by a human with bot technology assistance. Malign actors also use massive numbers of social media bots to create the illusion of large-scale consensus for online propaganda.[5]
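To make the mechanics concrete, the sketch below shows the core loop of a simple posting bot. It is an illustrative toy, not code from any actual operation: the messages, schedule, and post function are invented, and a real bot would authenticate against a platform's API rather than print to the screen.

```python
# A minimal, hypothetical sketch of a social media bot's core loop.
# A real bot would call an authenticated platform API; here we "post" to stdout.
import random
import time

CANNED_MESSAGES = [
    "Everyone is talking about this!",
    "Can't believe what the media is hiding.",
    "This deserves far more attention.",
]

def run_bot(post_fn, n_posts: int, posts_per_hour: float) -> None:
    """Post canned messages on a fixed schedule."""
    interval = 3600.0 / posts_per_hour
    for _ in range(n_posts):
        post_fn(random.choice(CANNED_MESSAGES))
        time.sleep(interval)

# Thousands of such loops, each running under a different fake account,
# create the illusion of large-scale consensus described above.
run_bot(print, n_posts=3, posts_per_hour=3600)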
Second, sock accounts or sock puppets are fictitious online accounts created by an individual or group with intent to deceive. For example, an individual or group creates multiple accounts on a social media platform to generate followers and to “like” or vote on posts. Sock puppets can also slant or distort an online discussion or support a particular online account. As a case in point, Russian intelligence operated a Twitter sock account under the name Jenna Abrams, which had 70,000 followers, to influence conservative voters during the 2016 US elections.[6]
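One common way researchers and platforms surface sock-puppet networks is to look for clusters of accounts with nearly identical behavior. The sketch below is a simplified illustration with invented accounts and an arbitrary threshold: it flags account pairs whose sets of “liked” posts overlap suspiciously, measured by Jaccard similarity.

```python
# Illustrative sketch: flag account pairs with near-duplicate "like" behavior.
# Accounts, data, and the threshold are all invented for this example.
from itertools import combinations

likes = {
    "acct_a": {"post1", "post2", "post3", "post4"},
    "acct_b": {"post1", "post2", "post3", "post5"},  # near-duplicate of acct_a
    "acct_c": {"post7", "post9"},
}

def jaccard(s1: set, s2: set) -> float:
    """Overlap between two sets: 0.0 (disjoint) to 1.0 (identical)."""
    return len(s1 & s2) / len(s1 | s2)

THRESHOLD = 0.5  # hypothetical cutoff for "suspiciously similar"

for a, b in combinations(likes, 2):
    similarity = jaccard(likes[a], likes[b])
    if similarity >= THRESHOLD:
        print(f"{a} and {b} show coordinated liking (similarity {similarity:.2f})")
```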
Third, trolls are real individuals who intentionally provoke others online by posting inflammatory or offensive messages. When their accounts are automated through software, they are called robo-trolls and are capable of generating content.[7] Researchers are concerned about the use of robo-trolls by extremists or terrorists and are therefore testing the text-generating artificial intelligence (AI) software that robo-trolls could employ in the future.[8] Text-generating AI would be a powerful tool for extremists or terrorists because it could speedily create propaganda that today must be written manually by humans, a time-intensive process.
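The researchers' concern is easy to appreciate: with off-the-shelf libraries, generating endless text variants takes only a few lines of code. The sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model purely to illustrate the low barrier to entry; it is not drawn from any actual robo-troll tool, and the prompt is invented.

```python
# Illustrative only: off-the-shelf text generation with a small open model.
# pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# One prompt yields many variants in seconds, automating what the article
# notes is otherwise a time-intensive manual process.
outputs = generator(
    "Local residents say",
    max_length=40,
    num_return_sequences=3,
    do_sample=True,  # sampling is required to get multiple distinct sequences
)
for out in outputs:
    print(out["generated_text"])
```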
Fourth, AI-enabled tools allow the creation of deepfake videos – digitally altered videos used for deceptive purposes. According to Sensity AI (formerly DeepTrace), the number of deepfake videos is increasing, with 96 % of online deepfakes consisting of non-consensual celebrity pornography.[9] Experts believe these videos will continue to grow in number and sophistication as more deepfake services and tools become available to the public.[10] Even now, high-quality deepfake videos are difficult to detect.[11]
In response to increasing computational propaganda, technology companies began deploying AI-enabled countermeasures. As companies became better at detecting and blocking bots, bot developers adopted more sophisticated techniques, such as AI-generated images, text, and videos.[12] Because synthetically generated content mimics a human’s style, distinguishing AI content from human-generated content is challenging.[13] Recent social bots are also more similar to human-operated accounts because AI is being used to create “a hybrid of automated and human-driven behaviors.” [14] Compounding this problem, malign actors are able to weave true information with false information, making it even more difficult for technology companies to label content as truthful or untruthful.[15] Consequently, citizens may soon find it nearly impossible to determine the veracity of information or the legitimacy of accounts.
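For a sense of how first-generation bot detection worked, and why AI-driven hybrid accounts evade it, consider the rule-based scoring sketch below. The features, weights, and cutoffs are invented for illustration; real detectors such as Botometer instead rely on machine-learned classifiers over far richer feature sets.

```python
# Illustrative rule-based bot scoring. Feature choices, weights, and cutoffs
# are hypothetical; production systems learn these from labeled data.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    has_profile_photo: bool
    account_age_days: int

def bot_score(a: Account) -> float:
    """Crude 0..1 score; higher means more bot-like (hypothetical weights)."""
    score = 0.0
    if a.posts_per_day > 50:                            # inhumanly high posting rate
        score += 0.35
    if a.following and a.followers / a.following < 0.01:
        score += 0.25                                   # follows many, followed by few
    if not a.has_profile_photo:
        score += 0.15
    if a.account_age_days < 30:                         # newly created account
        score += 0.25
    return min(score, 1.0)

print(bot_score(Account(120, 12, 4000, False, 7)))   # 1.0 -> strongly bot-like
print(bot_score(Account(3, 800, 400, True, 2000)))   # 0.0 -> human-like
```

Note that a cyborg account posting at a human pace from a curated profile scores low on every rule above, which is precisely the evasion problem this paragraph describes.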
Meanwhile, computational propaganda is increasing globally. Bradshaw et al. noted that state and political actors in 81 countries are using social media to spread computational propaganda.[16] This increase is problematic because computational propaganda is a “powerful tool that can undermine democracy.” [17],[18] While technology companies and researchers continue to advance computational propaganda detection, they also know that eradicating social bots and disinformation is impossible. Instead, a whole-of-society approach is necessary to build citizen resilience against a growing threat that is undermining societal trust.
Governments are responding to disinformation from both sides of the supply-demand equation. The supply-side of disinformation involves limiting the flow of disinformation into the information ecosystem. The demand-side involves addressing citizen consumption of disinformation.[19] Next, this article explores both sides of the supply-demand equation of disinformation.
Supply-Side of Disinformation
Without a doubt, tackling the supply-side of disinformation requires government, technology companies, and civil society to work together on a whole-of-society response. From a policymaker’s perspective, countering supply-side disinformation is challenging because there may be no lead agency responsible for countering disinformation operations, and therefore no coordinated policy response. Consequently, when a disinformation attack targets domestic affairs (e.g., election security, disasters, pandemic response and vaccinations), the functional agency may not be equipped to respond. And when there are overlapping equities or responsibilities, determining which government agency should lead a response becomes a problem (e.g., homeland security, defense department, justice department, election authority, or another agency). Malign actors understand these seams between government agencies and leverage them to launch their attacks.
Supply-side approaches to curbing the spread of disinformation include legislation, government fact-checkers, and information troops; however, it is still too early to know which ones are most effective.[20] For example, in 2017, Germany passed the Network Enforcement Act, compelling social media companies to remove hate speech and other illegal content. The downside of this type of law is that it can lead to censorship and curtail free speech.[21]
Another supply-side approach is the European Union’s implementation of a voluntary, self-regulatory standard for technology companies, such as Google, Facebook, Mozilla, and Twitter. In 2018, these companies signed the European Commission’s Code of Practice on Disinformation and committed to increasing the transparency of political ads, closing fake accounts, and addressing the malicious use of bots. However, preliminary assessments of the Code of Practice were mixed: there continues to be a lack of trust among social media companies, governments, and civil society, primarily because technology companies give only limited access to their data.[22] In 2020, the European Commission launched a comprehensive response to disinformation, the European Democracy Action Plan.[23] One of its initiatives is to overhaul the Code of Practice into a co-regulatory framework.
In contrast, Estonia, a target of Russian disinformation since 2007, involves civil society in its approach. The government created a voluntary security force, the Estonian Defense League, within the Ministry of Defense. The Estonian Defense League supports cyber defense but also monitors the Internet for disinformation and uses an anti-propaganda blog to counter distorted narratives. Estonia also relies on an internet activist group called the Baltic Elves to respond to Russian trolls, report bots, and provide counter-narratives.[24] In addition, since Estonia has a sizeable ethnic-Russian population, it operates a Russian-language television station to counter disinformation.
Taiwan is another country with a whole-of-society approach to curbing the supply-side of disinformation. Since appointing its first Digital Minister, Taiwan has instituted several civic-tech initiatives to build citizen and civil society trust. The Digital Minister not only made government more transparent but also combined the efforts of government teams, technology companies, and private citizens to counter disinformation. Taiwan deployed several successful initiatives, including an Internet Fact-Checking Network, chatbots for social media fact-checking, and memes that challenge disinformation narratives.[25]
The greatest strength of the Estonian and Taiwanese approaches is the involvement of citizens in combatting disinformation. The battle against disinformation can only be won by starting with the citizens who consume and spread it. When citizens can recognize and ignore disinformation, its spread decreases. The next section explores methods to address the demand-side of disinformation.
Demand-Side of Disinformation
One way to achieve demand-side reduction is through digital literacy education and disinformation awareness.[26] There is evidence that digital literacy can be an effective strategy to help counter disinformation.[27] Since there is no universal definition of digital literacy, in this article, digital literacy includes media, news, and information literacy and is defined as “the ability to use information and communication technologies to find, evaluate, create and communicate information, requiring both cognitive and technical skills.” [28]
A common misconception is that older citizens are more susceptible to disinformation than younger citizens because of their lack of comfort with digital technology. There is evidence that senior citizens are more likely to share disinformation on social media.[29] However, younger citizens, who may be more comfortable with technology, are also susceptible because they lack digital literacy skills. The Stanford History Education Group found that middle school, high school, and college students had difficulty evaluating the credibility of social media information. They incorrectly judged information as trustworthy based on superficial cues: a search engine result appearing at the top, a website using the .org domain, or a Twitter account with many followers.[30] These gaps demonstrate a societal need for digital literacy.
Policymakers and educators are rethinking the framework of digital literacy to ensure that critical thinking and civics are included in the curriculum. In the past, governments focused on developing the digital skills needed for “digital transformation” initiatives, which did not necessarily include critical thinking and civics. Newer programs, however, address citizen resiliency directly. For example, in 2019, Canada created a Digital Citizen Initiative using a multi-stakeholder approach. The initiative supports citizen-focused activities, such as the development of learning materials, investment in research programs, and promotion of media literacy (civic, news, and digital).[31] There are also non-government-led programs. For example, two institutes at the University of South Florida (the Florida Center for Cybersecurity and the Florida Center for Instructional Technology) partnered with New America (a non-profit, non-partisan think tank) to develop cyber citizenship skills for primary and secondary students. They aim to create a Cyber Citizenship Working Group to collaborate with civil society stakeholders and a Cyber Citizenship Portal to provide an educational toolkit for the public.[32]
It is still too soon to determine the effectiveness of these digital literacy education and awareness programs. Moreover, digital literacy is only the first step toward other knowledge and skillsets, such as the algorithmic literacy and data literacy that AI will make necessary.[33] For the challenges ahead, policymakers need to use strategic foresight to better prepare citizens for next-generation disinformation attacks. The policy recommendations below are a starting point for developing citizen resiliency.
Policy Recommendation #1: Improve the Digital Literacy of All Citizens
Governments must educate all citizens in digital literacy, starting by establishing a standard or framework. Many existing frameworks can serve as a foundation for a digital literacy program, including the United Nations Educational, Scientific and Cultural Organization (UNESCO) Digital Literacy Global Framework, the European Union Digital Competence Framework for Citizens, and Dr. Yuhyun Park’s Digital Intelligence (DQ) Framework.
Once the framework is adopted, the government should create a digital literacy curriculum that meets the needs of citizens at different stages of life (primary, secondary, and tertiary levels). By developing curricula for different levels, educators and trainers can quickly adapt the material to their educational programs. Methods to make the content accessible to adults include producing massive open online courses and creating online videos that support lifelong, self-paced learning. These digital literacy skills will not only build citizen resilience to disinformation but also prepare citizens for the impending digital transformation, that is, the adoption of digital technology to transform society.
Policy Recommendation #2: Include Disinformation in Annual Cybersecurity Awareness Campaigns
Citizen awareness begins with public awareness campaigns. Many governments already use an annual cybersecurity awareness month or week to promote online safety and advocate security practices. Since a core component of cybersecurity is understanding the online threats that jeopardize citizen safety, disinformation is an appropriate topic for these campaigns. For example, topics could include a lesson on social bots or on evaluating the sources of online information. An awareness campaign provides yet another opportunity to sensitize citizens to disinformation.
Policy Recommendation #3: Empower Civil Society by Building Trust and Sharing Information on State and Political Actors Using Computational Propaganda
Empowering citizens by building trust and sharing information builds citizen resilience. Citizens cannot grasp the volume and intensity of the computational propaganda attacks against their country unless they are equipped with information. They need to know who, what, where, when, and how disinformation attacks occur and what they can do to counter them. Since political computational propaganda can be state-sponsored, the government may not fully share the details of an attack for classification reasons. To achieve trust, governments must find a way to be transparent about attacks while balancing the need for security. Also, when sharing information, plain language should be used, free of technical and government jargon.
Governments can also foster public-private partnerships to share information and collaborate on the technical challenges of computational propaganda and citizen resiliency. Because technology companies possess the data that governments, civil society groups, and researchers need to develop countermeasures, such partnerships provide an opportunity to create innovative solutions through crowdsourcing and to build trust through information sharing and open dialogue. Now, more than ever, government, technology companies, and civil society must work together to build collective trust and citizen resiliency.
Disclaimer
The views expressed are solely those of the author and do not represent official views of the PfP Consortium of Defense Academies and Security Studies Institutes, participating organizations, or the Consortium’s editors.
About the Author
Inez Miyamoto is a cybersecurity professor at the Daniel K. Inouye Asia Pacific Center for Security Studies. E-mail: miyamotoi@dkiapcss.net
Acknowledgment
Connections: The Quarterly Journal, Vol. 20, 2021, is supported by the United States government.