<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by First Draft on Medium]]></title>
        <description><![CDATA[Stories by First Draft on Medium]]></description>
        <link>https://medium.com/@FirstDraft?source=rss-ac53bd7c7430------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*b5bb0srA6qeTY3-qAT0jEQ.png</url>
            <title>Stories by First Draft on Medium</title>
            <link>https://medium.com/@FirstDraft?source=rss-ac53bd7c7430------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Wed, 06 May 2026 14:28:04 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@FirstDraft/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Update from First Draft]]></title>
            <link>https://medium.com/1st-draft/update-from-first-draft-942649687fb3?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/942649687fb3</guid>
            <category><![CDATA[first-draft]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Tue, 14 Jun 2022 16:48:52 GMT</pubDate>
            <atom:updated>2022-06-14T16:48:52.063Z</atom:updated>
            <content:encoded><![CDATA[<p>Today we are announcing that First Draft is closing its doors to make way for the next chapter — its mission will continue at the newly launched Information Futures Lab, an initiative from Brown’s School of Public Health. <a href="https://firstdraftnews.org/first-draft-update-june2022/">https://firstdraftnews.org/first-draft-update-june2022/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=942649687fb3" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/update-from-first-draft-942649687fb3">Update from First Draft</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Update from First Draft]]></title>
            <link>https://medium.com/@FirstDraft/update-from-first-draft-2d352d817d1d?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/2d352d817d1d</guid>
            <category><![CDATA[first-draft]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Tue, 14 Jun 2022 16:48:36 GMT</pubDate>
            <atom:updated>2022-06-14T16:48:36.310Z</atom:updated>
            <content:encoded><![CDATA[<p>Today we are announcing that First Draft is closing its doors to make way for the next chapter — its mission will continue at the newly launched Information Futures Lab, an initiative from Brown’s School of Public Health. <a href="https://firstdraftnews.org/first-draft-update-june2022/">https://firstdraftnews.org/first-draft-update-june2022/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2d352d817d1d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to improve our analysis of ‘coordinated inauthentic behavior’]]></title>
            <link>https://medium.com/1st-draft/how-to-improve-our-analysis-of-coordinated-inauthentic-behavior-a4ec62ce9bff?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/a4ec62ce9bff</guid>
            <category><![CDATA[influence-operations]]></category>
            <category><![CDATA[disinformation]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Mon, 13 Sep 2021 13:55:08 GMT</pubDate>
            <atom:updated>2021-09-13T18:03:25.870Z</atom:updated>
            <content:encoded><![CDATA[<p><em>First Draft researchers Carlotta Dotto and Seb Cubbon, along with Stefano Cresci, Serena Tardelli and Leonardo Nizzoli from the Institute of Informatics and Telematics of Pisa, explore new approaches to the complex phenomenon of online coordination.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*5NXqNKfHVj79CQR6" /></figure><p>As researchers of disinformation, we inherit terms and concepts from platforms that can shape the way disinformation is understood and detected.</p><p>But often they reflect the needs of policy communications more than high-quality, independent research.</p><p><a href="https://m.facebook.com/communitystandards/inauthentic_behavior/">Coordinated Inauthentic Behavior</a> (CIB) is a case in point — a term devised by Facebook that has shaped our understanding of misinformation, but one that has been <a href="https://slate.com/technology/2020/07/coordinated-inauthentic-behavior-facebook-twitter.html">criticized</a> for its ambiguity.</p><p>For robust research, we need our own concepts and detection methods — ones that are transparent, precisely defined and can be reproduced by other researchers.</p><blockquote>we need our own concepts and detection methods — ones that are transparent, precisely defined and can be reproduced by other researchers.</blockquote><p>What features of a community should define it as “coordinated?” How can we compare degrees of coordination, or examples of coordination from different contexts and events?</p><p>We set out one way to do this: quantitative indicators. In an investigation of coordinated online activity observed during the run-up to the US 2020 election, we used a set of quantitative indicators to precisely define and detect coordination. We show how it can lead to a better approach.</p><h3>Quantitative indicators for measuring coordination</h3><p>Platform-defined metrics such as CIB are not designed for independent research. 
And arguably they shouldn’t be: They exist to support platforms’ policies and their communication, which often requires definitions to be flexible.</p><p>It is therefore up to disinformation and social media manipulation experts to put forward independent frameworks for assessing online coordination, as organizations such as EU Disinfo Lab <a href="https://www.disinfo.eu/publications/cib-detection-tree-4th-branch/">have begun to do</a>. These frameworks should outline specific criteria that can be measured empirically.</p><p>This is where quantitative indicators are helpful. Detection models that use explicit quantitative benchmarks are not only more likely to identify coordination with greater accuracy, but also make research methodologies more transparent than an unexplained qualitative assessment. And they provide findings that are reproducible by others.</p><p>An example of a quantitative indicator would be something like this:</p><ul><li>When approximately X% of all retweets are identical, a community is defined as extremely coordinated</li></ul><p>Methods and findings that are reproducible may attract greater participation and constructive collaboration in relation to the analysis of online coordination, encouraging the development of more widely adopted definitions and measurements.</p><p>Transparent and quantitative measures can also provide the foundation for difficult, qualitative judgements about whether coordination matters, and what to do about it. 
With precise measurements, the degree of coordination can be compared across communities and events (for example, elections) to inform action.</p><blockquote>quantitative measures can also provide the foundation for difficult, qualitative judgements about whether coordination matters, and what to do about it</blockquote><p>Relative measurements of the degree of coordination among users (for example, the degree of coordination within a community, rather than an absolute number of coordinated users) can be particularly helpful: They can help to uncover the extent to which a group of actors may be sophisticated, well-resourced and dedicated, even if the group is smaller in number.</p><p>In turn, this can contribute to a better theoretical understanding of the multifaceted role and impact of coordination in online information.</p><h3>Case study: Quantitative indicators for coordinated communities on Twitter in the lead-up to the US 2020 elections</h3><p>Leading up to the US 2020 elections, there were many communities coordinating online, to varying extents and with a variety of goals. To uncover what kinds of coordination were taking place, researchers at the Institute of Informatics and Telematics (IIT-CNR) in Pisa adopted precisely the kind of quantitative metrics we have been considering so far.</p><p>The goal was to map different communities on a continuous scale and identify those coordinating most intensively and with the greatest sophistication.</p><p>Their method works by detecting coordinated communities on Twitter, based on the extent to which large sets of users repeatedly share (retweet) the same tweets across an extended period of time. 
Given that coordination is not clear-cut, numerical indicators estimate the extent of coordination for each community detected as “coordinated.”</p><p>The quantitative indicators that the team used were:</p><ul><li><em>When approximately </em><strong><em>30–50%</em></strong><em> of all retweets are identical, a community is likely to be mildly coordinated</em></li><li><em>When approximately </em><strong><em>90%</em></strong><em> of all retweets are identical, a community is likely to be extremely coordinated</em></li></ul><p>The model also takes into account the likelihood that a tweet is retweeted in a coordinated manner. If a tweet has received many thousands of shares, the likelihood of two users both sharing that tweet is high and, in turn, the likelihood that they coordinated to share the tweet is low. As a result, common shares of highly engaged-with tweets weigh less than shares of low-engagement tweets when it comes to the final calculation of the coordination scores.</p><p>The model was applied to 70 million tweets shared in the run-up to the US election. It could detect both mildly coordinated groups of users and extremely coordinated ones.</p><p>These quantitative measures were applied to identify groups of users that were coordinating their activity to consistently share the same US 2020-related tweets between October 3 and December 3, 2020. These groups of users or “communities” were then labeled based on the types of hashtags that featured most frequently in the tweets they commonly shared. We then represented these communities visually through a network visualization graph (see Figure 1).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*0Agu95dVE9UdJFZv" /><figcaption><em>Figure 1: A network graph displaying the groups of users measured by the IIT-CNR coordination framework as the most coordinated during the US 2020 electoral debate. Each node represents a user who participated in the online debate. 
Nodes that are connected to one another with a line (edge) signify that both nodes have reshared at least one common tweet. The thickness of the lines is proportional to the number of common shares. Each colored cluster of nodes depicts a community that has been detected as “coordinated” based on the relative numerical thresholds. Communities may be relatively concentrated or more spread out. The density of a community reflects the extent to which its users are coordinating based on the relative numerical thresholds.</em></figcaption></figure><p>Among other findings, the model enabled the discovery of a small yet highly coordinated network of users that immediately stood out from others.</p><p>As shown in Figure 1, two clusters of coordinated users are significantly bigger than the others. The largest one, which appears in blue, is composed of users who supported Donald Trump.</p><p>The second-biggest cluster, in orange, is also composed of Trump supporters, but these users shared many conspiracy theories as opposed to generic pro-Trump or pro-Republican tweets. For example, an analysis of the tweets shared by this community revealed widespread support for QAnon and the “Stop The Steal” narrative.</p><p>We can see in Figure 1 that most communities appear to be sharply separated from one another, indicating there might be strong coordination within a community, but little coordination (i.e., little sharing of the same tweets) among different communities.</p><p>We can also see that one small community, highlighted in purple, appears to be the most densely concentrated. 
It lies between both pro-Republican groups, yet is also closely linked to the large pro-Trump group.</p><p>These unusual characteristics, revealed by the quantitative measurements of the detection model, prompted us to conduct a deep dive into the community’s individual user profiles, the types of messages they were promoting and the sources of the tweets they were sharing most frequently.</p><p>This analysis revealed that unlike other traditional, hyper-partisan groups of users who coordinated their online activity to push pro-conservative or pro-Democrat messages in the run-up to the election, this network used the electoral debate as an opportunity to generate support for a political cause seemingly far removed from US domestic politics: the independence of Biafra, a small former secessionist state that was reintegrated into Nigeria in 1970.</p><p>This pro-Biafran community repeatedly shared tweets that contained pro-independence messages alongside generic US 2020-related hashtags and generic pro-Trump hashtags such as #MAGA and #KAG2020.</p><p>Since Trump’s endorsement of Brexit in 2016, Biafran separatists have considered Trump a supporter of their cause and his presidency an opportunity to attract international support for renewed <a href="https://www.theguardian.com/world/2020/oct/31/he-just-says-it-as-it-is-why-many-nigerians-support-donald-trump">Biafran</a> independence.</p><h3>Setting standards for a consistent approach</h3><p>The detection of this pro-Biafran community illustrates how models such as <a href="https://arxiv.org/abs/2008.08370">IIT-CNR</a>’s (or others such as the University of Urbino’s <a href="http://coornet.org/">CooRnet</a>) can be used to identify highly coordinated communities.</p><p>There are three key points we draw from our reflections:</p><ol><li>Relative indicators can be extremely useful. 
They help enhance transparency and can point to the extent to which these communities may be well-resourced, dedicated and sophisticated in their behavior. Moreover, methodologies that support fine-grained analyses also make it possible to identify small coordinated networks that might otherwise go unnoticed.</li><li>Methodologies that rely on quantitative indicators can help standardize how we measure, and by extension understand, coordination. This could allow rigorous comparisons between communities involved in a debate, and between analyses of coordination at play in multiple contexts, such as elections in different years or countries. We need more research to fully explore how this could work.</li><li>Far from being the be-all and end-all, quantitative indicators provide a starting point from which additional qualitative analyses can be carried out. It is only through further investigation that the most critical questions concerning authenticity, harmfulness and legitimacy can be answered.</li></ol><p><em>The original version of this post incorrectly referred to EU Disinfo Lab as EU vs Disinfo. This has been corrected.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a4ec62ce9bff" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/how-to-improve-our-analysis-of-coordinated-inauthentic-behavior-a4ec62ce9bff">How to improve our analysis of ‘coordinated inauthentic behavior’</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[We need to know more about political ads. But can transparency be a trap?]]></title>
            <link>https://medium.com/1st-draft/we-need-to-know-more-about-political-ads-but-can-transparency-be-a-trap-542df2a52f21?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/542df2a52f21</guid>
            <category><![CDATA[social-media]]></category>
            <category><![CDATA[transparency]]></category>
            <category><![CDATA[misinformation]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[advertising]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Thu, 25 Mar 2021 15:51:49 GMT</pubDate>
            <atom:updated>2021-03-25T17:35:27.196Z</atom:updated>
            <content:encoded><![CDATA[<p><em>First Draft researchers Madelyn Webb and Bethan John explore the complexities and contradictions of calls for increased transparency around online advertising.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*T7VmpQu5D848UlOoCr0XOQ.png" /><figcaption>Image: First Draft Illustration</figcaption></figure><p>As misinformation researchers, we spend a lot of time thinking about online advertising. We dig through ad libraries, monitor platforms’ announcements, and publish investigations into how disinformation agents are bending the rules.</p><p>We rely on social media platforms to give us information to do this. But the experience of working within platforms’ parameters has left us with a question: Can transparency be a trap?</p><p>In 2017, Facebook <a href="https://about.fb.com/news/2017/10/update-on-our-advertising-transparency-and-authenticity-efforts/">announced</a> it was building a searchable archive of US federal election-related ads that would include some spending and targeting data. Various iterations culminated in the <a href="https://www.facebook.com/ads/library/?active_status=all&amp;ad_type=political_and_issue_ads&amp;country=US">Ad Library</a>, which set the standard for ad transparency. Later, Google also <a href="https://blog.google/technology/ads/introducing-new-transparency-report-political-ads/">began sharing some information</a> about political ads with researchers. <a href="https://snap.com/en-US/political-ads">Snapchat did the same</a>, and Twitter eventually opted to <a href="https://business.twitter.com/en/help/ads-policies/ads-content-policies/political-content.html">get rid of political advertising altogether</a>. By setting policy on it, social platforms have demonstrated they know transparency matters when it comes to political advertising. But they’re also able to control the terms of that transparency. 
Here are eight big questions that arose when we began scrutinizing the current landscape for advertising transparency.</p><h3><strong>1. What is obscured by the platforms’ definitions?</strong></h3><p>What counts as “political” and how is that decided? Election and media law in the US generally defines political ads as those purchased by or on behalf of a candidate for public office, or those relating to a matter of national importance; most major social media platforms use a similar definition.</p><p>Facebook <a href="https://www.facebook.com/business/help/214754279118974?id=288762101909005">calls </a>these “social issue” ads, and defines them as ads messaging about anything “heavily debated, [that] may influence the outcome of an election or result in/relate to existing or proposed legislation.” But who determines what is “heavily debated” or what messaging has the power to influence an election? Advertisements promoting ultrasound services may appear apolitical to most, <a href="https://www.theguardian.com/world/2019/aug/19/google-loophole-anti-abortion-clinics-deceptive-ads">but if they’re paid for by an anti-abortion organization</a>, they may warrant further scrutiny. On Twitter, political issue ads are banned in the United States, including those from climate advocacy groups. On the other hand, oil companies such as ExxonMobil have been <a href="https://heated.world/p/twitters-big-oil-ad-loophole">allowed to run ads</a> on the platform. Given the room for interpretation as to what is and isn’t “political,” is the distinction really useful? Should political issue-related ads, such as ads about climate change, count as “political”? 
And who makes that determination?</p><p>As part of a stated effort to protect the US election’s integrity, Facebook <a href="https://www.facebook.com/business/news/facebook-ads-restriction-2020-us-election">did not allow new political ads</a> to run on its platform from October 27, 2020 to March 4, 2021 (<a href="https://www.facebook.com/gpa/blog/resuming-ads-in-georgia">with a brief exception</a> made for political ads targeting Georgia’s Senate runoff election in January). But ads about vaccines, <a href="https://twitter.com/mdywebb/status/1325824535707455491">ads about election fraud</a> and ads from politically motivated groups including <a href="https://www.motherjones.com/politics/2018/03/inside-right-wing-youtube-turning-millennials-conservative-prageru-video-dennis-prager/">PragerU</a>, the self-described “leading conservative nonprofit,” all ran during this time. Because of the norms established by the platforms, ads deemed non-political are not held to the same transparency standards, so they remain visible to the public, with less scrutiny from researchers. <a href="https://firstdraftnews.org/latest/opinion-twitters-ban-on-political-advertising-is-easier-said-than-done/">When platforms aren’t thoughtful</a> with their definitions, powerful issue lobbies are able to exploit loopholes to promote their message.</p><h3><strong>2. Who gets to access and interpret the transparency data?</strong></h3><p>There are barriers to entry for every mechanism of transparency the platforms have provided us. A researcher looking to explore Snapchat’s political ads archive must be able to open and interpret a CSV file. Facebook provides more data to researchers with the advanced skills to access their API. There is also no standardization across the platforms’ databases, making meaningful cross-platform comparisons difficult. 
So while platforms are increasingly giving researchers access to data, should it only be trained researchers who can scrutinize how social media is used to target communities? How could we open this up for all interested people?</p><p>The platforms also fully control what data they make public, and how, and it’s not always particularly useful. For example, Facebook provides impression data for political ads, but it is given in ranges. So an ad could be listed as having garnered &lt;1000 impressions, but there’s no way to know if this means 998 impressions or none. Many advocacy organizations have <a href="https://citapdigitalpolitics.com/?page_id=1665">called for more granular data</a>, which platforms could conceivably provide in a standardized format that allows comparison, or in a user-friendly public interface.</p><h3><strong>3. Can we be confident that pro-transparency measures are effective?</strong></h3><p>It is crucial to verify whether nominal pro-transparency measures are having a positive effect. For example, many platforms provide some kind of label that indicates who paid for a political ad. This is an effort to increase transparency, but do the labels being used accomplish that? Facebook has been criticized for its lax advertiser verification requirements that allow advertisers to hide their identity behind shell pages. In <a href="https://twitter.com/mdywebb/status/1280530936434692096">this example</a>, Students for Life, an anti-abortion advocacy group, is running ads through a page innocuously called “standingwithyou.org.”</p><h3><strong>4. Can we rely on these measures being enforced?</strong></h3><p>Are the tools built by the platforms suitable to deliver on their stated transparency goals? 
Researchers at the Online Political Transparency Project <a href="https://medium.com/online-political-transparency-project/audit-of-facebook-ad-transparency-finds-missed-political-ads-603f95027cc6">were surprised</a> to see that ads containing Joe Biden’s name and image were not being picked up as “political” by Facebook’s AI. They were only able to determine this by setting up their own Ad Observer browser extension. How can we know that the tools offered by platforms are working as they are meant to? Platforms could provide more transparency around the methodology used to create these tools, so researchers could audit them for potential issues or errors.</p><h3><strong>5. Will they be evenly enforced?</strong></h3><p>A January 2021 <a href="https://privacyinternational.org/news-analysis/4370/online-political-ads-study-inequality-transparency-standards">study</a> from Privacy International suggested that heightened transparency standards are unevenly applied around the world — authors dubbed this the “transparency divide.” The 2020 US presidential election saw unprecedented measures taken by the platforms that far outweighed their efforts elsewhere. Facebook, for example, <a href="https://about.fb.com/news/2019/10/update-on-election-integrity-efforts/">publicized</a> what it said was its largest effort to date to protect the election’s integrity. At the same time, in India’s Bihar state, with a population of around 104 million people, a critical election for the state legislature garnered <a href="https://about.fb.com/?s=Bihar">no blog posts</a> or <a href="https://twitter.com/search?q=from%3AFacebook%20bihar&amp;src=typed_query">announcements</a> from Facebook about protecting its integrity. 
Facebook and Twitter <a href="https://firstdraftnews.org/latest/how-important-is-the-integrity-of-indias-elections-to-facebook-and-twitter/">treated</a> the rampant misinformation during these two elections differently, labeling more misleading posts in the US than they did in India. Transparency measures must meet equal standards globally and be subject to the same levels of enforcement.</p><h3><strong>6. Is the data reliable?</strong></h3><p>Researchers have consistently reported errors in the data provided as part of transparency efforts. For example, during the 2019 election in the UK, <a href="https://www.buzzfeednews.com/article/rorysmith/the-uk-election-showed-just-how-unreliable-facebooks">thousands of ads went missing</a> from the Facebook ad archive because of an error. <a href="https://www.wsj.com/articles/google-archive-of-political-ads-is-fraught-with-missing-content-delays-11563355800">Similar complaints</a> were made about Google’s ad archive in the US in 2019. What mechanisms are in place to ensure the data we’re getting is reliable?</p><p>There is good reason to be skeptical. In 2019, Facebook <a href="https://www.law360.com/articles/1206890/facebook-cuts-40m-deal-to-end-suit-over-video-ad-metrics">agreed to pay $40 million</a> to settle a lawsuit alleging it had concealed inaccuracies in its video view metrics that led to a massive and misguided industry shift. <a href="https://www.theatlantic.com/technology/archive/2018/10/facebook-driven-video-push-may-have-cost-483-journalists-their-jobs/573403/">Media outlets laid off print staffers in favor of investing in video content</a> based on incorrect information. Why should we take Facebook’s data at face value now? Without <a href="https://firstdraftnews.org/latest/independent-platform-oversight/">independent oversight</a>, there is no reason researchers should consider the data from platforms to be reliable.</p><h3><strong>7. 
How does transparency direct our attention?</strong></h3><p>A new tool for transparency auditing is an exciting thing for researchers, and so it is only right that it should become the subject of academic and journalistic research. But what is being missed when we focus on a particular type of information because of the transparency measures behind it?</p><p>Take, for example, how the increased access to information around ads marked by social media platforms as “political” has meant that less attention is paid to non-political or commercial advertising. Facebook has given researchers <a href="https://techcrunch.com/2021/01/25/facebooks-ad-library-targeting-political-ad-election-data/">unprecedented access</a> to advertising data around the 2020 US election, possibly the most scrutinized campaign to date. What about elections where that level of oversight was not in place? This concept is <a href="https://medium.com/1st-draft/searching-for-the-misinformation-twilight-zone-63aea9b61cce">neatly captured</a> as a “feature bias” by our colleague Tommy Shane. The features to which we already have access influence our perspective and, therefore, what we study.</p><h3><strong>8. What’s transparency for?</strong></h3><p>Kate Dommett, a lecturer at the UK’s University of Sheffield who studies digital campaigning, <a href="https://onlinelibrary.wiley.com/doi/epdf/10.1002/poi3.234">wrote in <em>Policy and Internet</em></a><em> </em>about calls for more transparency in her field of study in the UK<em>. </em>She found that “despite using common terminology, calls for transparency focus on the disclosure of very different types of information.” Some organizations were calling for financial transparency, others for transparency around targeting data, and only some considered the specifics of how this information would be presented.</p><p>Dommett’s research illustrates the pitfalls of demanding transparency for its own sake. 
When researchers and advocates aren’t specific enough about the outcomes desired, platforms are able to provide an incomplete form of “transparency” as a fig leaf that blunts the political will for positive change. Take, for example, calls for transparency in political spending. If the desired outcome is to monitor the spread of particular messages, and social media companies only offer ad spending data, and not information about impressions and engagement, there are gaps we must seek to fill. Transparency is a tool, not an end in itself; we must reflect carefully on what we want to achieve when we call for it. If we don’t, we’ll keep falling into the trap of false transparency.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=542df2a52f21" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/we-need-to-know-more-about-political-ads-but-can-transparency-be-a-trap-542df2a52f21">We need to know more about political ads. But can transparency be a trap?</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Finding misinformation with ‘rumor cues’]]></title>
            <link>https://medium.com/1st-draft/finding-misinformation-with-rumor-cues-ee1355fb82ae?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/ee1355fb82ae</guid>
            <category><![CDATA[misinformation]]></category>
            <category><![CDATA[conspiracy-theories]]></category>
            <category><![CDATA[social-science]]></category>
            <category><![CDATA[news]]></category>
            <category><![CDATA[disinformation]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Thu, 25 Feb 2021 15:09:48 GMT</pubDate>
            <atom:updated>2021-02-25T15:09:48.283Z</atom:updated>
            <content:encoded><![CDATA[<p>First Draft’s head of impact and policy, Tommy Shane, explores how keywords related to rumor can help us understand and respond to dangerous activities online.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*eUokhcaBJHjWlsIq" /></figure><p>If you’re a reporter, getting your queries right really matters.</p><p>In 2015, Daniel Victor of <em>The</em> <em>New York Times</em> <a href="https://medium.com/@bydanielvictor/the-one-word-reporters-should-add-to-twitter-searches-that-you-probably-haven-t-considered-fadab1bc34e8#.x35wjn85k">was searching</a> for witnesses to an incident on a plane involving a female passenger and a Hasidic Jewish man who didn’t feel comfortable sitting next to her. Victor found that querying for “hasidic” and “flight” on social media brought up a lot of people talking about the incident, but not people who were actually there.</p><p>But then he discovered something. There were three words that could identify genuine eyewitness accounts: “me,” “my” and “I.”</p><p>“Most people relating a personal experience — [also known as] good sources — will use [them],” Victor explained. “Most people observing from afar — aka, useless sources — won’t.”</p><p>For anyone researching social media, skillful query design is critical. Get it wrong and you won’t find what you’re looking for. 
Get it right and you can discover surprising things that others are missing.</p><h3>Rumor cues</h3><p>In this post, we introduce “rumor cues” to describe an approach to query design that, like first-person pronouns, can be a powerful but overlooked entry point into online conversation.</p><p>The term builds on insights from research into how rumors spread, and is designed to help reporters and researchers find truth-seeking behaviors online that contain, or are vulnerable to, misinformation.</p><p>To explore them, let’s look at <a href="https://www.usenix.org/conference/enigma2021/presentation/starbird">a rumor that spread</a> in the early phase of the pandemic, claiming Washington state would go into lockdown.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tcX2DkGbRWQym2af" /><figcaption>A screengrab of a rumor that circulated in March 2020. Source: <a href="https://www.usenix.org/conference/enigma2021/presentation/starbird">Kate Starbird</a></figcaption></figure><p>In this post there are no hashtags, no conspiracy watchwords, no dog whistles. There isn’t even the word “covid” or “coronavirus.” So how would you find it?</p><p>One answer is the word “grapevine.” Another is “heard.” Both introduce the rumor by referring to sources of information.</p><p>These kinds of verbal cues typically accompany rumors; researchers have found that others do too, such as “apparently,” “reportedly,” “really?” and “is this true?”</p><p>What these words have in common is that they relate to truth-seeking — discussing, evidencing, persuading or questioning what’s true. 
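</p><p>As a quick illustration, cues like these can be turned directly into a filter. The Python sketch below is a minimal, hypothetical example — the cue list is just the handful of words quoted in this post, not a vetted lexicon, and real monitoring would run against a platform’s search or API rather than a list of strings.</p>

```python
import re

# Cue words drawn from the examples in this post (illustrative only).
RUMOR_CUES = [
    r"\bapparently\b", r"\breportedly\b", r"\bheard\b",
    r"\bgrapevine\b", r"\breally\?", r"\bis this true\b",
]
CUE_PATTERN = re.compile("|".join(RUMOR_CUES), re.IGNORECASE)

def has_rumor_cue(post: str) -> bool:
    """Return True if a post contains at least one rumor cue."""
    return bool(CUE_PATTERN.search(post))

posts = [
    "Just heard through the grapevine that the state is about to shut down.",
    "Official statement from the governor's office at 5pm today.",
    "Apparently schools are closing tomorrow. Is this true?",
]
flagged = [p for p in posts if has_rumor_cue(p)]  # first and third posts
```

<p>A filter like this only surfaces candidates: cues flag posts for human review, they don’t determine whether something actually is a rumor. </p><p>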
Like the first-person pronouns “me,” “my” and “I,” words related to truth-seeking — which we’re calling “rumor cues” — can help to monitor misinformation.</p><p>Tracking rumor cues is especially important at this particular moment, when networked rumoring <a href="https://onezero.medium.com/reflecting-on-the-covid-19-infodemic-as-a-crisis-informatics-researcher-ce0656fa4d0a">can drive life-threatening</a> misinformation, and when routine searches for information online are being <a href="https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2">weaponized</a> by conspiracy theorists. Both are major social vulnerabilities connected to truth-seeking that we need to identify and better understand.</p><p>In this post, we show how reporters and researchers can use rumor cues for three purposes: identifying rumor in real time, newsgathering with shadow queries and understanding the rhetoric of conspiracy theorists.</p><h3><strong>Identifying rumor</strong></h3><p>One of the great challenges in tackling misinformation is identifying rumors before they spread. This can help to address <a href="https://medium.com/1st-draft/identifying-data-deficits-can-pre-empt-the-spread-of-disinformation-93bd6f680a4e">deficits</a>: demand for information — often through rumors or unanswered questions — that is not met with adequate supply, creating a vacuum for misinformation.</p><p>Crisis researchers have discovered useful insights to help with this. 
Zhe Zhao and her colleagues, for example, <a href="http://www-personal.umich.edu/~qmei/pub/www2015-zhao.pdf">found that</a> expressions such as “Is this true?” “Really?” and “What?” were common in online rumors following the Boston marathon bombings; similarly, Kate Starbird and her team <a href="https://dl.acm.org/doi/10.1145/2858036.2858551">found that</a> the words “apparently,” “reportedly,” “alleged” and other forms of “expressed uncertainty” were commonly associated with rumors.</p><p>Other examples include what researchers call “non-specific authority references,” such as “experts” or “doctors,” which <a href="https://misinforeview.hks.harvard.edu/article/misinformation-more-likely-to-use-non-specific-authority-references-twitter-analysis-of-two-covid-19-myths/">are more associated</a> with misinformation. Linguistics research <a href="https://medium.com/1st-draft/the-difference-between-the-facts-and-the-truth-59e23c6185d">also indicates</a> that terms like “disguised,” “hiding,” “show,” “exposes” and “uncovers” feature prominently in discussion of disinformation.</p><p>Combining words like “covid” or “lockdown” with rumor cues can help us to sift through enormous numbers of posts to find the rumor in the haystack, and support interventions from credible voices.</p><h3><strong>Newsgathering with shadow queries</strong></h3><p>Rumor cues can also be used for news monitoring with shadow queries: slightly inflected search queries, such as “coronavirus truth” <a href="https://medium.com/1st-draft/the-difference-between-the-facts-and-the-truth-59e23c6185d">instead of</a> “coronavirus facts.” Topic keywords like “covid” can also be combined with rumor cues, such as “won’t cover this” or “mainstream view,” and <a href="https://medium.com/1st-draft/the-difference-between-the-facts-and-the-truth-59e23c6185d">words such as</a> “hidden,” “suppressing” and “concealed” to locate unverified counter-narratives.</p><p>Shadow queries like these can reveal distinct 
truth-seeking networks. Working with researchers at King’s College London, we found that two seemingly similar queries, #covidfacts and #covidtruth, uncovered two very different hashtag networks: #covidfacts was linked to fact-checking hashtags such as #factchecking, #factsmatter and #debunking, while #covidtruth was linked with conspiracy hashtags like #dontbelievethehype, #covidhoax19 and #wakeup. The minor inflection of the query with different rumor cues revealed two separate networks that both sought to establish the truth, but in very different ways and with very different claims.</p><h3><strong>Analyzing conspiracy rhetoric</strong></h3><p>Another use for rumor cues is the exploration of conspiracy rhetoric. Researchers <a href="https://datasociety.net/wp-content/uploads/2018/05/Data_Society_Searching-for-Alternative-Facts.pdf">have warned</a> <a href="https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2">for some time</a> that the search for accurate information online is being weaponized by conspiracy theorists in ways that are frightfully difficult to counteract.</p><p>To understand the rhetoric that powers this manipulation, <a href="https://wiki.digitalmethods.net/Dmi/WinterSchool2021InfodemicInstagram">I worked with a team of researchers</a> to examine 600,000 conspiracy-related Instagram posts by filtering for rumor cues.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Yt3cs--vzABSX91n" /><figcaption><em>A word tree visualizing the most common words before and after the epistemic keywords “trust your”; First Draft and Digital Methods Initiative analysis, created with </em><a href="https://www.jasondavies.com/wordtree/"><em>Jason Davies’ Word Tree.</em></a></figcaption></figure><p>We looked for words such as “truth,” “research,” “evidence” and “trust.” One phrase was particularly common: “trust your.” We found that it was often followed by bodily references — trust your “gut,” “body,” “eyes,” 
“heart,” “immune system,” “instincts” and “intuition” — indicating a private, embodied form of knowledge, quite a different authority from that of experts.</p><p>Other rumor cues we explored were Bible references, which were often used to evidence and support narratives. We found that Revelation 13:16–17, which makes reference to “the mark of the beast,” was particularly common. <a href="https://www.bibleref.com/Revelation/13/Revelation-13-16.html">According to BibleRef.com</a>, a possible <a href="https://datasociety.net/wp-content/uploads/2018/05/Data_Society_Searching-for-Alternative-Facts.pdf">knowledge source</a> for Christians, this may refer to “implanted computer chips or other technology,” echoing a major conspiracy narrative about vaccines.</p><p>These alternative authorities to institutional expertise — the body and the Bible — can be uncovered and explored with rumor cues, and can yield insights about how to create effective counter-messaging.</p><h3><strong>What we mean by ‘rumor cues’</strong></h3><p>Rumor cues, which we also describe with the more technical term “epistemic keywords,” are any words that can be used to query online spaces or datasets for truth-seeking behaviors, such as rumor or conspiracy theory. These might be words or phrases related to knowing, discussing, evidencing, persuading or questioning what’s true. More precisely, they are <em>queryable traces of epistemic activity in online spaces.</em></p><p>Rumor cues are not a silver bullet. But they can be another tool in your kit when looking for misinformation online.</p><p><em>Thanks to </em><a href="https://wiki.digitalmethods.net/Dmi/WinterSchool2021InfodemicInstagram"><em>the research team</em></a><em> at the Digital Methods Initiative Winter School 2021 for their support in testing out this concept. 
Thanks also to students at King’s College London, led by Jonathan Gray, for their experimentation with epistemic keywords in relation to Covid-19 conspiracy theories in winter 2020 as part of an </em><a href="https://www.kcl.ac.uk/research/engaged-research-led-teaching"><em>engaged research-led teaching</em></a><em> initiative, with input and support from researchers on the </em><a href="http://infodemic.eu/"><em>infodemic</em></a><em> project and the Public Data Lab.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ee1355fb82ae" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/finding-misinformation-with-rumor-cues-ee1355fb82ae">Finding misinformation with ‘rumor cues’</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Identifying ‘data deficits’ can pre-empt the spread of disinformation]]></title>
            <link>https://medium.com/1st-draft/identifying-data-deficits-can-pre-empt-the-spread-of-disinformation-93bd6f680a4e?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/93bd6f680a4e</guid>
            <category><![CDATA[disinformation]]></category>
            <category><![CDATA[misinformation]]></category>
            <category><![CDATA[social-science]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Tue, 15 Dec 2020 16:41:16 GMT</pubDate>
            <atom:updated>2020-12-15T16:41:16.177Z</atom:updated>
<content:encoded><![CDATA[<p><em>First Draft’s research analyst, Seb Cubbon, explores how data deficits get exploited by disinformation actors, and how we can get ahead of them.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Dxr9xDWqPG9kRMVvz-kdXQ.png" /></figure><p>Until very recently, mRNA (messenger ribonucleic acid) vaccine technology and vaccine-derived poliovirus (VDPV) were still considered highly specialized, niche topics. But following the recent Pfizer and Moderna announcements, as well as the subsequent VDPV outbreaks in central Africa, these topics have suddenly risen to prominence. And their rise has been accompanied by worryingly high levels of mis- and disinformation.</p><p>We call these situations “data deficits”: where high levels of demand for information about a specific topic are not adequately matched by a supply of credible information. Unlike <a href="https://datasociety.net/library/data-voids/">data voids</a>, where search engine queries turn up little to no results, deficits are situations in which much information exists but it is misleading, confusing, false or even harmful.</p><p>These deficits are not the result of deliberate actions by bad actors. In fact, they typically occur when quality information providers are unaware of the demand for information on a given topic or are unable to provide the information in an effective, compelling manner.</p><p>However, bad actors can step in and exploit these deficits, filling them with content meant to deceive or that fits their agenda.</p><p>So how do certain data deficits get exploited by disinformation actors? 
How can reporters, policymakers and civil society spot them before that happens?</p><h3>How data deficits can be exploited</h3><p>Last summer, First Draft revealed the presence of multiple vaccine-related data deficits through <a href="https://firstdraftnews.org/long-form-article/under-the-surface-covid-19-vaccine-narratives-misinformation-and-data-deficits-on-social-media/">our analysis of the online vaccine information ecosystem</a>, including mRNA- and VDPV-related ones.</p><p>Since then, we found that several online messages published by sources <a href="https://content.govdelivery.com/attachments/USSTATEBPA/2020/08/05/file_attachments/1512230/Pillars%20of%20Russias%20Disinformation%20and%20Propaganda%20Ecosystem_08-04-20%20%281%29.pdf?mc_cid=25194831ee&amp;mc_eid=51cf5c0863">identified as</a> “key players in [foreign actors’] disinformation and propaganda ecosystem” exploited these deficits by incorporating them into wider disinformation and conspiratorial narratives. Their apparent aim was to undermine trust in people and institutions connected to the vaccines.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*DQ3PMmzxPd6lbrl6" /><figcaption>How deficits emerge, and are then exploited. Image: First Draft</figcaption></figure><p>These messages were then disseminated throughout the online information space thanks to a combination of laundering techniques, which are outlined below. Such techniques are <a href="https://www.stratcomcoe.org/russias-footprint-nordic-baltic-information-environment-20192020">frequently</a> <a href="https://www.stratcomcoe.org/information-laundering-germany">employed</a> as part of disinformation campaigns to influence public discourse while obscuring the intentions and identities of the actors involved. 
Articles mentioned mRNA and VDPV to amplify the narratives that a) US and Western Covid-19 vaccines more broadly are unsafe “experiments” and b) Bill Gates and the institutions connected to him and the vaccines they produce are untrustworthy. These messages were then:</p><ol><li><strong>Duplicated and translated into multiple languages</strong> by a loose and/or concentrated network of purported news websites and blogs that regularly syndicate each other’s content. The resulting multiplicity of reports artificially enhances the noteworthiness — and, by extension, the credibility — of the messages, thereby exploiting the <a href="https://philosophy.lander.edu/logic/popular.html#apf_bandwagon">bandwagon fallacy</a>. The overall audience reached is also maximized as a result.</li><li><strong>Slightly modified to obfuscate the source and reduce traceability</strong>. Small changes are made to the headline and text content, and different pictures are used as the articles’ preview images. Images can also be added to these messages, many of which tend to be graphic and emotionally charged. The sources quoted at the end of the article may also be changed.</li><li><strong>Spread across multiple platforms, including through simultaneous sharing “spurts”</strong> on Facebook Groups to reach a wide range of target communities within an extremely short amount of time. Some links were posted to as many as six Groups in under 40 seconds.</li><li><strong>Artificially amplified</strong> by accounts that exhibit a high number of <a href="https://firstdraftnews.org/latest/how-to-spot-a-bot-or-not-the-main-indicators-of-online-automation-co-ordination-and-inauthentic-activity/">indicators of inauthenticity</a>.</li></ol><h3>How to spot data deficits</h3><p>So how do we identify data deficits before they’re exploited? Here we offer a set of qualitative indicators that can help inform which deficits need to be addressed first. 
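</p><p>Of the laundering techniques listed above, the sharing “spurts” are the most mechanically detectable. Below is a minimal, hypothetical Python sketch: given the Unix timestamps at which one link was shared, it finds the largest number of shares falling inside any 40-second window (the window size simply mirrors the six-Groups-in-40-seconds example above).</p>

```python
def max_shares_in_window(share_times, window_seconds=40):
    """Largest number of shares of a single link that fall within
    any sliding window of `window_seconds` seconds."""
    times = sorted(share_times)
    best = 0
    start = 0
    for end in range(len(times)):
        # Shrink the window from the left until it spans at most
        # `window_seconds`.
        while times[end] - times[start] > window_seconds:
            start += 1
        best = max(best, end - start + 1)
    return best

# A link posted to six Groups within 40 seconds, then once an hour later.
share_times = [0, 5, 11, 18, 25, 38, 3600]
burst = max_shares_in_window(share_times)  # 6
```

<p>A high burst count is only an indicator of possible coordination, not proof; it would need to be weighed alongside the other signals described above. </p><p>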
These indicators build on the <a href="https://firstdraftnews.org/long-form-article/data-deficits/">quantitative indicators</a> and <a href="https://medium.com/1st-draft/searching-for-the-misinformation-twilight-zone-63aea9b61cce">research methods</a> that First Draft has previously used to identify data deficits. If addressed proactively and with quality, accessible information, these deficits may be less likely to be exploited by malicious actors.</p><p><strong>Novelty</strong></p><p>Is the subject new or previously unknown to a wider audience? This may mean quality information is less likely to exist or to have been disseminated in a compelling and accessible manner. Conversely, the production and distribution of bad quality but equally compelling information is a far more expedient process and may therefore benefit from a first-mover advantage.</p><p><strong>Technical complexity</strong></p><p>Is the topic characterized by highly-specific information whose comprehension may only come naturally to experts in the field? In this case, easily accessible information may be particularly difficult to produce. On the other hand, messages that simplify the topic in a misleading manner and incorporate it within already-popular narratives are likely to resonate with receptive audiences.</p><p><strong>Alignment with pre-existing narratives</strong></p><p>Does the subject demonstrate a clear potential to fit into pre-existing, long-standing disinformation narratives? If so, it may be easy to instrumentalize these topics as part of wider misleading messages aimed at exploiting fears, eroding trust and increasing polarization. 
For example, novel vaccine technologies (such as mRNA ones) could be used to stoke fears over the safety of vaccines, and thereby bolster narratives portraying all vaccines as untrustworthy.</p><p><strong>Political saliency and emotive dimension</strong></p><p>Does the data deficit clearly fall within an emotionally charged issue or wider topic with high political or geopolitical stakes? If so, the incentive for bad actors to exploit the data deficit with disinformation narratives may be high. The vast body of literature on information operations suggests that opportunities to sow discord, undermine democratic processes and amplify emotive tensions are more likely to be exploited by malicious actors.</p><p><strong>Legitimate questioning</strong></p><p>Is the topic the subject of high levels of legitimate questioning? If so, misleading explanations that address natural concerns may appeal to mainstream communities and thus reach a larger audience. Of course, this heightens incentives for malicious actors as their window of opportunity to exert greater influence widens. While it may be difficult to distinguish legitimate from illegitimate questioning based on debunked misinformation tropes, one tangible indicator of high levels of questioning can be found using certain <a href="http://www-personal.umich.edu/~qmei/pub/www2015-zhao.pdf">pre-emptive research techniques</a>. By searching for social media posts containing interrogative phrases such as “is this true?”, “really?” and “what?” and then clustering these posts based on the similarity of the rest of their content, topics subjected to questioning and rumoring can be identified early on.</p><h3>We can identify threats ahead of time</h3><p>We must continuously undertake pre-emptive research that can inform proactive messaging aimed at competing with wider narratives, as opposed to just with individual pieces of content. 
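</p><p>The questioning-detection technique described in the previous section — filter for interrogative phrases, then group what remains — can be sketched in a few lines. This is a deliberately simplified, hypothetical Python version: instead of clustering posts by content similarity, it just counts the content words left over once the interrogative cue and common stopwords are removed, to surface what is being questioned.</p>

```python
import re
from collections import Counter

INTERROGATIVES = re.compile(
    r"\bis this true\b|\breally\?|\bwhat\?", re.IGNORECASE
)
# A tiny stopword list for illustration; real pipelines use larger ones.
STOPWORDS = {"the", "a", "is", "this", "true", "really", "what",
             "i", "it", "to", "that", "who", "into"}

def questioned_topics(posts, top_n=3):
    """Keep posts carrying an interrogative cue, then count the remaining
    content words to surface the topics being questioned."""
    counts = Counter()
    for post in posts:
        if not INTERROGATIVES.search(post):
            continue
        words = re.findall(r"[a-z']+", post.lower())
        counts.update(w for w in words if w not in STOPWORDS)
    return [word for word, _ in counts.most_common(top_n)]

posts = [
    "Is this true? Heard the whole state is going into lockdown tomorrow.",
    "Lockdown starting Monday?? really? who said that",
    "Great weather for a walk today.",
]
questioned_topics(posts)  # "lockdown" ranks first
```

<p>With real data, the counting step would be replaced by the content-similarity clustering the researchers describe, but the shape of the pipeline — cue filter first, grouping second — is the same. </p><p>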
By collecting and analyzing social media data that falls within the “middle” of social web activity — between verified media outlets or influencer accounts and those found on 4chan messaging boards, private Facebook Groups or other semi-closed anonymous online spaces — with a particular focus on identifying influential narratives, we can identify emerging data deficits early on. Qualitative indicators of data deficits can be used to prioritize responses and in turn maximize impact. The cases of mRNA technology and VDPV suggest the pre-emptive identification of key vulnerabilities is possible.</p><blockquote>Qualitative indicators of data deficits can be used to prioritize responses and in turn maximize impact.</blockquote><p>The upwards trend in Covid-19 vaccine misinformation and <a href="https://www.aspistrategist.org.au/covid-19-disinformation-campaigns-shift-focus-to-vaccines/">state-linked</a> <a href="https://www.thetimes.co.uk/article/fake-news-factories-churning-out-lies-over-monkey-vaccine-qhhmxt2g5">disinformation</a> is poised to persist. Now is the time to forge greater collaboration mechanisms involving research and monitoring organizations, media outlets, subject-matter experts, platforms and policymakers to ensure data deficits can be identified early and filled with accessible, evidence-based information. Doing so can prevent the successful spread of harmful disinformation narratives.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=93bd6f680a4e" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/identifying-data-deficits-can-pre-empt-the-spread-of-disinformation-93bd6f680a4e">Identifying ‘data deficits’ can pre-empt the spread of disinformation</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Searching for the misinformation ‘twilight zone’]]></title>
            <link>https://medium.com/1st-draft/searching-for-the-misinformation-twilight-zone-63aea9b61cce?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/63aea9b61cce</guid>
            <category><![CDATA[social-science]]></category>
            <category><![CDATA[research]]></category>
            <category><![CDATA[misinformation]]></category>
            <category><![CDATA[social-media]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Tue, 24 Nov 2020 14:27:57 GMT</pubDate>
            <atom:updated>2020-11-24T14:27:57.602Z</atom:updated>
<content:encoded><![CDATA[<p><em>First Draft’s head of policy and impact, Tommy Shane, explores how our technology affects what misinformation we do and don’t see.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GMGCAeFBUWXlFpxIDZtU0Q.jpeg" /></figure><blockquote>“The ocean twilight zone is a layer of water that stretches around the globe … below the ocean surface, just beyond the reach of sunlight … the twilight zone is cold and its light is dim, but with flashes of bioluminescence — light produced by living organisms. The region teems with life.”</blockquote><blockquote>— <a href="https://www.ted.com/talks/heidi_m_sosik_the_discoveries_awaiting_us_in_the_ocean_s_twilight_zone/transcript?language=en">Heidi M. Sosik, 2018</a></blockquote><p>There’s been a lot of debate recently about “<a href="https://twitter.com/FacebooksTop10">Facebook’s Top 10</a>,” a Twitter account that lists “the top-performing link posts by U.S. Facebook pages in the last 24 hours,” managed by <em>The New York Times’</em> <a href="https://mobile.twitter.com/kevinroose">Kevin Roose</a>.</p><p>Given that conservative pages tend to dominate the results, the lists <a href="https://twitter.com/ewarren/status/1326656425259569153">have been used</a> to argue that Facebook is biased in favor of conservatives. Facebook, in turn, has <a href="https://about.fb.com/news/2020/11/what-do-people-actually-see-on-facebook-in-the-us/">pushed back</a>, arguing that engagement doesn’t equal reach.</p><p>Irrespective of this argument, “Facebook’s Top 10” points to wider issues about what we see and don’t see in misinformation research. And they go beyond what data we can access, and which metrics we look at.</p><blockquote><p>The top-performing link posts by U.S. Facebook pages in the last 24 hours are from:</p><ol><li>Franklin Graham</li><li>Donald J. Trump</li><li>The Dodo</li><li>Newsmax</li><li>Donald J. Trump</li><li>Donald J. Trump</li><li>Fox News</li><li>Fox News</li><li>The White House</li><li>Donald J. Trump</li></ol></blockquote><p>How do analytics dashboards shape what we see online? What if, by focusing on posts with the greatest engagement, we are missing the things bubbling underneath? Could we be looking in the wrong places and missing real harm, simply because our tools make some things harder to investigate and study?</p><p>These were questions our researchers grappled with in our recent <a href="https://firstdraftnews.org/long-form-article/under-the-surface-covid-19-vaccine-narratives-misinformation-and-data-deficits-on-social-media/">research into vaccine misinformation</a>.</p><p>To find a solution, we took inspiration from the history of marine biology.</p><h3>The twilight zone</h3><p>In 2004, marine biologist Richard Pyle delivered <a href="https://www.ted.com/talks/richard_pyle_a_dive_into_the_reef_s_twilight_zone/transcript?language=en">a talk</a> about his research into the ocean “twilight zone.” Pyle discovered that researchers had been focusing on the very top layer of the ocean’s depths. “[We] know a lot about that part up near the top. The reason we know so much about it is scuba divers can very easily go down there and access it.”</p><p>The problem was that conventional scuba diving only reaches about 200 feet. Biologists were well aware of this, and so used submersible vehicles to go deeper. 
But this created another problem, as Pyle explains: “If you’re going to spend $30,000 a day to use one of these things and it’s capable of going 2,000 feet … you’re going to go way down deep.”</p><p>What Pyle discovered was a middle “twilight zone” — so named because of the limited sunlight that pierces to that level — that researchers had neglected because it was easier to look at the surface, and more enticing to go down deep.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*-VVYswe4gau8MSeL" /><figcaption><em>Image Credit: NOAA</em></figcaption></figure><p>This twilight zone, once recognized, became a huge source of discovery for ocean biologists, at one point leading to discoveries of seven new species for every hour spent in that region.</p><h3>Misinformation’s twilight zone</h3><p>There are a number of lessons here for social media research. We tend to study the accounts with the largest number of followers, the ones responsible for huge engagement metrics. We see network graphs of trending hashtags, dumps of scraped social media data shared by researchers trying to look for evidence of “coordinated inauthentic activity.”</p><p>Or we see qualitative researchers lurking in private Facebook Groups, Discord servers or 8kun boards, trying to spot disinformation campaigns before they make their way onto more popular social media platforms.</p><p>Both are valuable, but neither is sufficient for understanding the ecosystem as a whole.</p><p>The ocean’s twilight zone is, first and foremost, a reminder that our understanding of misinformation online is severely lacking because of limited data: platforms deny access; ethical guidelines prevent researchers from entering or reporting on certain spaces online.</p><p>But more importantly, this maritime comparison is a reminder that our technology can draw us toward seeing some things and not others. 
CrowdTangle and Twitter’s API are not passive databases that we access, but products with affordances that influence our activity. Some features exist, others do not, and this affects what we see.</p><blockquote>the interests of platforms are baked into not just the data they share, but the features they allow for querying it</blockquote><p>And critically, the interests of platforms are baked into not just the data they share, but the features they allow for querying it. For example, on CrowdTangle you cannot filter for labeled or fact-checked posts.</p><p>Beyond hard limitations such as these, we also need to consider friction — where accessing certain metrics or items is simply made much harder than others. This includes ranked lists that draw us toward the most engaging posts and away from those in the middle zone.</p><blockquote><strong>More work is needed to surface feature biases, because we might be missing a critical part of the picture without realizing it.</strong></blockquote><p>The problem of feature bias has been raised before. Richard Rogers, a key figure in the development of digital methods, has observed that social media platforms <a href="https://ijoc.org/index.php/ijoc/article/view/6407">can lead researchers to focus on “vanity metrics”</a> such as engagement scores, rather than “voice, concern, commitment, positioning and alignment.”</p><p>But more work is needed to surface feature biases, because we might be missing a critical part of the picture without realizing it.</p><h3>Applying this to our research</h3><p>Engaging with the concept of the twilight zone led our researchers Rory Smith and Seb Cubbon to take two critical methodological decisions in their research into vaccine misinformation.</p><p>The first, and most fundamental, was to focus on how narratives were evolving and competing rather than on highly engaged posts. 
The units of analysis in analytics dashboards are individual posts, but narratives are much more powerful than individual pieces of misinformation, shape how people think and can’t be simply debunked.</p><p>They also chose to exclude posts from verified accounts as a way of accessing “the middle” of social media activity. The most engaged-with posts were generally from official, often pro-vaccine accounts, such as professional media outlets. Filtering out verified accounts cut through the noise and found more of the anti-vaccine discourse bubbling underneath.</p><p>But this was only feasible because there was a feature to filter out verified accounts; otherwise, it would have been very costly to manually exclude them at scale. The filter illustrates our dependence on not just data, but features, and how this affects what we do and don’t see.</p><p>In the end, searching for the twilight zone is not a fixed process or location, but a reminder and an endeavor: to think outside the logic of analytics dashboards, and, where we can, look for the neglected parts of the ecosystem.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=63aea9b61cce" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/searching-for-the-misinformation-twilight-zone-63aea9b61cce">Searching for the misinformation ‘twilight zone’</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The questions we need to ask before the next infodemic]]></title>
            <link>https://medium.com/1st-draft/the-questions-we-need-to-ask-before-the-next-infodemic-ff68671d07aa?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/ff68671d07aa</guid>
            <category><![CDATA[infodemic]]></category>
            <category><![CDATA[fact-checking]]></category>
            <category><![CDATA[coronavirus]]></category>
            <category><![CDATA[misinformation]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Mon, 05 Oct 2020 13:48:42 GMT</pubDate>
            <atom:updated>2020-10-05T13:48:42.653Z</atom:updated>
            <content:encoded><![CDATA[<p><em>First Draft’s head of policy and impact, </em><a href="https://twitter.com/tommyshane"><em>Tommy Shane</em></a><em>, explores what questions we need to ask to provide better information during the next pandemic.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*ednUurRBJX6t5aET" /></figure><blockquote>“Covid-19 is neither the first nor the last health emergency we will face. My fellow scientists estimate that we will face a pandemic or health emergency at least once every five years from here on. There is a chance that this is the optimistic scenario. The reality could be far worse.”</blockquote><blockquote>Sally Davies, former Chief Medical Officer for England, <a href="https://www.theguardian.com/commentisfree/2020/sep/26/next-pandemic-coronavirus-prepare">writing for The Guardian</a></blockquote><p>What could we do better next time?</p><p>This is the question we need to be asking now. As we emerge from the immediate aftermath of the crisis, and the infodemic begins to stabilize into a new normal, we must take this opportunity to reflect.</p><p>Where did we succeed in getting the right information to the right people at the right time? Where did we fail?</p><p>At First Draft, we have spent much of the summer <a href="https://firstdraftnews.org/long-form-article/tracking-the-infodemic/">tracking the infodemic</a> and its numerous implications, including <a href="https://firstdraftnews.org/latest/the-first-six-months-of-the-pandemic-as-told-by-the-fact-checks/">an analysis of 9,722 fact checks</a> related to coronavirus between January and June.</p><p>What has emerged is a series of questions.</p><p>We lay them out here to spark conversations among reporters, fact checkers, platforms and researchers, focusing on how we get the right information to the right people at the right time during a pandemic.</p><h3>1. 
How do we best respond to questions about origin, when limited information is available?</h3><p>What do “<a href="https://twitter.com/ashakiiii/status/1222903869639778304">The Simpsons</a>,” <a href="https://www.facebook.com/photo.php?fbid=721732751688288&amp;set=a.102905126904390&amp;type=3&amp;theater">Dettol</a> and Dean Koontz’s novel “<a href="https://twitter.com/NickHintonn/status/1228896027987660800">The Eyes of Darkness</a>” have in common?</p><p>They were all thought to have predicted the coronavirus pandemic.</p><p>Though of varying concern, each speaks to a need for an origin story — an answer to the question: Where did this come from? How did it get here?</p><p>As Claire Wardle, First Draft’s US director, <a href="https://firstdraftnews.org/latest/3-lessons-on-the-coronavirus-infodemic-from-experts-and-tech-companies/">told the UK Parliament</a> during a session on coronavirus misinformation: “It’s easy to dismiss conspiracies, but we have to understand why they’re taking hold.</p><p>“There isn’t a good origin story for the virus, and so this information vacuum is allowing misinformation to circulate.”</p><p>The problem is that during a pandemic, we don’t know the answers to these questions. Limited information, and possible <a href="https://www.nytimes.com/2020/04/30/us/politics/trump-administration-intelligence-coronavirus-china.html">government interference</a>, mean fact checkers may lack the evidence to provide a simple answer. In its place, wild speculation and strategic disinformation will triumph.</p><blockquote>“There isn’t a good origin story for the virus, and so this information vacuum is allowing misinformation to circulate.”</blockquote><p>How can we best address the origin problem? In some cases, might it be better to speculate credibly about what the answer could be, rather than leave the gaps unfilled?</p><h3>2. 
How can we best support collective sensemaking, when at first all we can do is falsify claims?</h3><p>In March, crisis informatics researcher Kate Starbird shared <a href="https://onezero.medium.com/reflecting-on-the-covid-19-infodemic-as-a-crisis-informatics-researcher-ce0656fa4d0a">an explanation</a> of a social process that follows crises called collective sensemaking: the endeavor to make sense of a crisis by filling in information gaps at an individual and group level. Importantly, it is driven by anxiety and often panic, leading some to take life-threatening actions.</p><p>One of the problems with falsifying claims, such as rejecting claims about the origin of the virus or its alleged treatments, is that doing so does not fill the gaps in people’s understanding. On the contrary, it may stifle that process by rejecting the information being used to make sense of things.</p><p>By contrast, <a href="https://dl.acm.org/doi/abs/10.1145/3134696">as researchers have noted</a> in relation to other public health crises, rumors will often better serve people’s emotional needs than the accurate information available at the time.</p><p>Social media undoubtedly accelerates and amplifies rumor in these scenarios, but it may also be able to help. Just as researchers have explored <a href="https://dl.acm.org/doi/10.1145/1718918.1718976">how to use technology to facilitate distributed sensemaking in hospitals</a>, can social media platforms develop features to safely support the natural and inevitable process of sensemaking following a crisis? How can reporters and fact checkers support this process?</p><h3>3. How do information needs change over time, and how can they best be met?</h3><p>The information needs during a pandemic <a href="https://firstdraftnews.org/latest/the-first-six-months-of-the-pandemic-as-told-by-the-fact-checks/">change over time</a>. 
Put simply, at first we want to know about origin, then about treatments, and finally attention turns toward public policy.</p><p>Connected to this trajectory is a transition in the source and format of misinformation: as the focus moves from online rumors to policy decisions, the misinformation itself shifts from memes to statements by politicians. These require different skill sets and procedures to verify, and in some cases more resources. As we noted in <a href="https://firstdraftnews.org/latest/the-first-six-months-of-the-pandemic-as-told-by-the-fact-checks/">our analysis of 9,722 fact checks</a> related to coronavirus, “As the months wore on, the topics that fact checkers addressed increasingly drew on complex political and social phenomena.”</p><p>We need a more nuanced understanding of these trajectories, something crisis informatics researchers <a href="https://www.ideals.illinois.edu/bitstream/handle/2142/47381/259_ready.pdf">have already begun to explore</a>.</p><blockquote>As the months wore on, the topics that fact checkers addressed increasingly drew on complex political and social phenomena.</blockquote><p>We also need to understand whether constraints (such as skills and resources) and incentives (such as Facebook’s payments for fact checks) disrupt fact checkers’ ability <a href="https://onlinelibrary.wiley.com/doi/10.1111/1467-923X.12896">to focus on their missions</a>, how this dynamic plays out through the phases of a pandemic, and if the public need can be better served with preparation or emergency support.</p><h3>4. 
Which online communities use fact checks during a pandemic, and for what purposes?</h3><p>It is possible to measure which fact checks in the dataset received the highest number of interactions — the sum of reactions, comments and shares on posts in public groups and pages — using data from CrowdTangle, a Facebook-owned data analytics tool.</p><p>But these interactions only tell us that people interacted in some way.</p><p>Many questions remain: Who were they, and why were they interacting? Was it in the way fact checkers would want? And what would that even be?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*EBxu0vbf8NUkfmiBig3cnQ.png" /><figcaption><em>The top ten fact checks related to coronavirus published between Jan-June 2020, ranked by number of interactions (likes, comments, shares, etc.) in public spaces on Facebook. Data from CrowdTangle. Read </em><a href="https://firstdraftnews.org/long-form-article/data-deficits/"><em>full methodology</em></a><em> for more information.</em></figcaption></figure><h3>5. What kinds of translation procedures would help critical information to reach communities around the world?</h3><p>As Poynter <a href="https://www.poynter.org/fact-checking/2020/a-rumor-about-helicopters-disinfecting-cities-just-wont-die-misinformation-demands-our-vigilance/">has remarked</a> in relation to a viral rumor, “This is a perfect illustration of what ‘infodemic’ is… Just like viruses, misinformation knows no borders, especially during such a crisis.”</p><p>But while viruses and viral misinformation might not acknowledge borders, credible information does.</p><p>Over half (54.3 per cent) of the fact checks we studied were in either Spanish or English. 
This breakdown to some extent reflects the relative number of speakers globally, with the notable exception of Chinese, which accounted for fewer than 1 per cent of the fact checks.</p><p>However, almost half (54) of <a href="https://en.wikipedia.org/wiki/Google_Translate#Supported_languages">109 major languages</a> were not represented at all, together accounting for tens of millions of speakers.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wO6gYa_Rf9Mh9sRZ" /></figure><p>The fact checkers we studied were not the only source of credible information during the pandemic. But questions remain.</p><p>Were some people unable to access fact checks in their spoken languages? What was the impact of this? Can emergency human translation support help communities in the immediate aftermath of an outbreak?</p><h3>6. What elements of the Covid-19 experience can we draw on to tell better stories next time?</h3><p>As a society, we know more than we did in January. We know what it’s like to gradually learn of a pandemic, for the numbers to be confusing, what “R” means (or at least that it matters), and that, in the end, the origin of a virus may just be a market, as was originally suspected.</p><p>The coronavirus will inevitably be used as a biological and social benchmark for comparison with future outbreaks. How can we best anticipate this?</p><p>How can we draw on the new understanding of pandemics and infodemics? Are there numbers, concepts, metaphors, visuals or tools we should re-use and build upon? What worked that we should replicate?</p><p>Equally, how will the experience of coronavirus be used against efforts to achieve calm and effective action? Can we explore risks through <a href="https://firstdraftnews.org/project/live-simulations/">simulations</a>, or “<a href="https://en.wikipedia.org/wiki/Red_team">red teaming”</a>?</p><p>— —</p><p>These questions will take some time to answer, and will require people with different skills. 
They are also not exhaustive; they are a starting point for the process of reflection that we must now confront.</p><p>But I hope they will contribute to a thorough examination of what happened, and what we need to do next.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ff68671d07aa" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/the-questions-we-need-to-ask-before-the-next-infodemic-ff68671d07aa">The questions we need to ask before the next infodemic</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The difference between the facts and the truth]]></title>
            <link>https://medium.com/1st-draft/the-difference-between-the-facts-and-the-truth-59e23c6185d?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/59e23c6185d</guid>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[misinformation]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[disinformation]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Mon, 03 Aug 2020 16:30:51 GMT</pubDate>
            <atom:updated>2020-10-06T08:27:58.450Z</atom:updated>
            <content:encoded><![CDATA[<p><em>First Draft’s head of policy and impact, Tommy Shane, explores how different ways of knowing can create vulnerabilities online.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*9aU7QquDG2gC5MGOMIWA6A.png" /></figure><p><em>This article is part of First Draft’s new </em>Footnotes<em> publication. </em><a href="https://medium.com/1st-draft/introducing-footnotes-a-place-for-ideas-5ff33d6f0257"><em>Read more about what we want to achieve and inspire with this new body of work</em></a><em>.</em></p><p>When we think about disinformation, we tend to focus on narratives.</p><p>5G causes coronavirus. Bill Gates is trying to depopulate the planet. We’re being controlled by lizards.</p><p>But while narratives are concerning and compelling, there is another way of thinking about online disinformation. All narratives, no matter how bizarre, are an expression of something that underlies them: a way of knowing the world.</p><p>Contrary to claims that we live in a post-truth era, research suggests that people engaging with disinformation care deeply about the truth. William Dance, a disinformation researcher who specializes in linguistics, has found that people engaging with disinformation are more likely to use words related to the truth, such as “disingenuous,” “nonsense,” “false,” “charade,” “deception,” “concealed,” “disguised,” “hiding,” “show,” “find,” “reveals,” “exposes” and “uncovers.”</p><p>People engaging with false news stories are not uninterested in truth, but are hyper-concerned with it — especially the idea that it’s being hidden.</p><blockquote>Contrary to claims that we live in a post-truth era, research suggests that people engaging with disinformation care deeply about the truth.</blockquote><p>Because they can seem bizarre, misinformed narratives can sometimes lead others to assume their proponents are simply irrational or uninterested in truth. 
It can also distract from the ways of knowing that lead people to conspiracy narratives. Not everyone is interested in accounts of the world based on institutionalized processes and the perspectives of experts.</p><p>Some people may value different methods, rely on different evidence, recognize different qualifications, speak in different vernaculars, pursue different logics, and meet different needs. In <a href="https://boingboing.net/2017/02/25/counternarratives-not-fact-che.html">the words of tech journalist and author Cory Doctorow</a>, “we’re not living through a crisis about what is true, we’re living through a crisis about how we know whether something is true.” Part of that crisis stems from not understanding other ways that people know, and why.</p><p>The challenge we face is that we won’t know if trends exist, or if vulnerabilities are growing, unless we ask the right questions. What are the different ways people seek knowledge online? What assumptions underpin them? How are they changing? Are they being manipulated?</p><p>We also need to work out how to answer these questions: to find a route from the abstract questions of knowledge to the measurable traces of online behaviors.</p><p>During the pandemic, we’ve been experimenting with ways to do exactly that. We looked at search-engine results using different keywords related to knowing about coronavirus: “facts” and “truth.”</p><p>On Google, searching for “coronavirus facts” gives you a full overview of official statistics and visualizations. 
That’s not the case for “coronavirus truth.” There you’ll get results referring to cover-ups and reports that China has questions to answer about the Wuhan lab — one of the major early conspiracy theories about the origin of the virus.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*40LFZFeja8Rz7sa9" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*2hEu8mskAOo8Cnif" /><figcaption><em>Screenshots of Bing search results for “coronavirus facts” compared to results for “coronavirus truth” (May 2020)</em></figcaption></figure><p>On Bing, we saw something similar but more pronounced. Where searches for “facts” yielded stories from official sources and fact checkers, one of the top results for “truth” was a website called The Dark Truth, which claims the coronavirus is a Chinese bioweapon. Its suggested stories for “coronavirus truth” prompted people to search for terms such as “the real truth about coronavirus,” “the truth behind the coronavirus,” “coronavirus truth hidden” and “coronavirus truth conspiracy theory.” (Since our screenshots were taken, Bing has updated its results for “coronavirus truth” to display data visualizations, but The Dark Truth still appeared as of July, and the related searches were broadly the same.)</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/754/0*DLqcJ9Vasdg7-9QA" /><figcaption><em>Screenshots of related searches in Bing search results for “coronavirus truth” (June 2020)</em></figcaption></figure><p>These results offer a glimpse into different ways of knowing online, and how platforms are responding with algorithms’ understanding (and production) of the different connotations of “facts” and “truth.”</p><p>Even though “facts” and “truth” might feel the same, they appear to engage with different ideas about information: for example, what we know versus what they’re not telling us; official statistics versus unofficial discovery.</p><p>When we’ve applied this same 
principle to the emerging threat of vaccine hesitancy, we see a similar pattern. “Vaccine facts” leads to official sources promoting pro-vaccine messages, while “vaccine truth” leads to anti-vaccination books and resources.</p><blockquote>Even though “facts” and “truth” might feel the same, they appear to engage with different ideas about information: for example, what we know versus what they’re not telling us; official statistics versus unofficial discovery.</blockquote><p>These results are snapshots of different knowledge-seeking behaviors, drawn from just two entry points: “facts” and “truth.” But as this and other research suggests, there is much more to examine about ways of knowing, and how this plays into disinformation.</p><p>What might a framework of digital ways of knowing look like? Using facts and truth as illustrative concepts, we’ve proposed several dimensions (scale, process, causal logic, qualifications, method, tone and vernacular) as a starting point for considering different assumptions about what “knowing” might involve and to what narratives it might lead. Such an approach is designed to avoid the judgmental framing that conspiracy theories easily fall into. It is intended to show that certain ways of knowing, like lived experience, have their own validity that <a href="https://www.technologyreview.com/2020/06/02/1002505/black-lives-matter-protest-misinformation-advice/">might not be recognized</a> by the paradigm of fact checks.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1002/1*pvvNYkxvIQtpbkk4Oa7Rxg.png" /></figure><p>Attention to ways of knowing as well as narratives can help us to ask different kinds of questions. How are ways of knowing changing? How might they be manipulated? How should reporters, educators and platforms respond?</p><p>If we fail to ask these questions, there is a risk that we won’t account for — or respect — the different assumptions people make when seeking knowledge. 
We may fail to speak across divides, ignoring how other people’s information needs can differ from our own.</p><p>We might also fail to understand how certain ways of knowing, <a href="https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2">such as media literacy</a>, can be manipulated and weaponized. We know that some people are more likely to seek alternative, all-explaining narratives — those with <a href="https://global.oup.com/academic/product/american-conspiracy-theories-9780199351800?cc=us&amp;lang=en&amp;">low social status</a>, <a href="https://journals.sagepub.com/doi/10.1177/01461672992511003">victims of discrimination</a> or <a href="https://kar.kent.ac.uk/61995/1/Douglas%20Sutton%20Cichocka%202017.pdf">people who feel politically powerless</a>. As well as witnessing the rise in 5G conspiracy theories, we may be experiencing the rise of certain ways of knowing and their manipulation, especially in the context of a resistance to institutions and elites.</p><p>Donald Trump’s 2020 campaign has begun to engage with the idea of “truth over facts” with its campaign website <a href="https://www.thetruthoverfacts.com/">thetruthoverfacts.com</a>, which mocks a series of gaffes by Democratic candidate Joe Biden. Though the website is satirical, it primes the idea of the truth being something more fundamental — and Trumpian — than Biden’s misremembered facts.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Wo5jTjALirZb6P0V" /><figcaption><em>Sign-up form from Trump 2020 election campaign site ‘thetruthoverfacts.com’</em></figcaption></figure><p>At First Draft, we plan to develop techniques for monitoring and analyzing these behaviors in the coming months. We want to speak to others interested in this line of research as we experiment with new techniques. If you are interested in the study of online ways of knowing, or have something to tell us that we can use, we want to hear from you. 
Please comment below or get in touch <a href="https://twitter.com/tommyshane">on Twitter</a>.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=59e23c6185d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/the-difference-between-the-facts-and-the-truth-59e23c6185d">The difference between the facts and the truth</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Introducing Footnotes: A place for ideas]]></title>
            <link>https://medium.com/1st-draft/introducing-footnotes-a-place-for-ideas-5ff33d6f0257?source=rss-ac53bd7c7430------2</link>
            <guid isPermaLink="false">https://medium.com/p/5ff33d6f0257</guid>
            <category><![CDATA[disinformation]]></category>
            <category><![CDATA[misinformation-research]]></category>
            <category><![CDATA[information-disorder]]></category>
            <category><![CDATA[misinformation]]></category>
            <dc:creator><![CDATA[First Draft]]></dc:creator>
            <pubDate>Wed, 29 Jul 2020 17:31:10 GMT</pubDate>
            <atom:updated>2020-07-29T17:42:29.046Z</atom:updated>
            <content:encoded><![CDATA[<p><em>First Draft director and co-founder, Claire Wardle, introduces our revamped Medium publication as the organisation turns five years old.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*arJmgZ24K7TLwtbaUJXcYQ.png" /></figure><p>In the summer of 2015, First Draft published <a href="https://medium.com/1st-draft/introducing-the-first-draft-coalition-e557fdacd1a6">its first Medium post</a>. The website came slightly later, and with it the flurry of <a href="https://firstdraftnews.org/">articles, guides, videos, resources, studies, courses and quizzes</a> that have come to define the organisation’s work at the forefront of understanding information disorder.</p><p>On the one hand, the topics that inspired the creation of the First Draft coalition five years ago are as relevant as ever. The three tags accompanying that first post are ‘verification,’ ‘UGC’ and ‘eyewitness media.’ There is no doubt that eyewitness footage continues to shape history, as the devastating video of George Floyd’s death demonstrated. And as the global protests continue, videos and images swirl incessantly with associated claims and counter-claims, requiring journalists to apply strict verification protocols in order to document what is happening on the ground.</p><p>What’s new are the tactics of media manipulation, the sheer volume of lies being pushed so brazenly by official sources, and the scale of our polluted information environment. 2015 seems quaint in comparison. The terms misinformation, disinformation and coordinated inauthentic activity all feel thoroughly inadequate to describe this moment. 
And similar patterns are being seen around the world, from expected places such as Brazil, the Philippines and India to countries where the tactics feel less familiar — the UK, France, Australia and Canada, for example.</p><blockquote>If the agents of disinformation borrow tactics and techniques from each other, which they do, then so must we.</blockquote><p>It’s been three years since I created the <a href="https://firstdraftnews.org/latest/fake-news-complicated/">seven types of m/disinformation</a> and spent a summer with Hossein Derakhshan writing the report “<a href="https://rm.coe.int/information-disorder-report-version-august-2018/16808c9c77">Information Disorder: Toward an Interdisciplinary Framework for Research and Policy making</a>.” The world looks very different now; the frameworks we’re using, even the terminology and research we’re drawing from, all feel hopelessly out of date. None of us is able to keep up with the speed at which things are moving.</p><p>In June, a <a href="https://www.pnas.org/content/117/27/15536">new piece of research</a> was published that analyzed the impact of the “10 Tips on how to Spot False News” that Facebook rolled out in newsfeeds across 14 countries in April 2017. I’m so glad we have some empirical measure of the impact of that initiative, but it’s over three years after the fact. Do we have to wait until 2023 to find out the impact of Twitter’s new manipulated media labels?</p><blockquote>We really hope you will join the growing community of people thinking deeply about these issues so the early ideas published here can be taken apart and made even stronger</blockquote><p>The <a href="https://misinforeview.hks.harvard.edu/">Harvard Misinformation Review</a>, which I helped set up, is doing an amazing job of speeding up the process of getting peer-reviewed research out much more quickly. 
It’s been an incredible addition to this emerging field.</p><p>And in that same vein, we wanted to create a space for our own staff and guest contributors to test out some early thinking and ideas. So we’ve created Footnotes, a dedicated online space for people to publish new ideas, preliminary research findings, and innovative methodologies. The aim is to open up some of our own inspirations and processes to the wider community in the hope that they can be picked apart and borrowed from, driving the conversation ever forwards. If the agents of disinformation borrow tactics and techniques from each other, which they do, then so must we.</p><p>We want to encourage people to share early ideas so there is an opportunity to pull them apart and build on them. We hope it might be a place to work through research questions and design, before the process of starting to collect and analyze data. We want it to be a place to suggest, critique and compare notes on ideas and concepts.</p><p>We hope the blog will be useful for an expert audience including researchers, disinformation beat reporters, fact checkers and policy makers. 
Initially we aim to publish once per month, but maybe that will increase with guest contributors.</p><p>Our first piece will be out later this week, in which our head of policy and impact, Tommy Shane, shares some early thinking about how different ways of knowing — such as the differences between searching for ‘facts’ and ‘truth’ — may be fuelling misinformation online.</p><p>We hope Footnotes is useful, and we really hope you will join the growing community of people thinking deeply about these issues so the early ideas published here can be taken apart and made even stronger.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5ff33d6f0257" width="1" height="1" alt=""><hr><p><a href="https://medium.com/1st-draft/introducing-footnotes-a-place-for-ideas-5ff33d6f0257">Introducing Footnotes: A place for ideas</a> was originally published in <a href="https://medium.com/1st-draft">First Draft Footnotes</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>