
Interference 2024

The 2024 Foreign Interference Attribution Tracker

A Project of the Digital Forensic Research Lab (DFRLab) of the Atlantic Council

The DFRLab’s Foreign Interference Attribution Tracker (FIAT) is an interactive, open-source database that tracks allegations of foreign interference or foreign malign influence relevant to the 2024 U.S. presidential election. We map the actors, methods, and impact associated with each campaign. We also independently evaluate the credibility, objectivity, evidence, and transparency of the underlying claim. Explore the data by scrolling through the visualization and table below. Hover over a point to see details about a particular case.

FIAT 2024 builds public attribution standards, provides an independent and reliable record of foreign interference claims in the 2024 U.S. presidential election, serves as a resource for stakeholders about the evolving threat, and helps to build resilience against future foreign interference efforts. FIAT 2024 has been created in service of the DFRLab’s mission to identify, expose, and explain disinformation and to promote objective fact as the basis for governance worldwide. It expands upon a similar dashboard created by the DFRLab to track foreign interference allegations during the 2020 U.S. presidential election.

The FIAT 2024 dataset contains {{number_of_cases}} allegations of foreign interference originating from {{number_of_nations}} nations. The dataset was last updated on {{last_modified}}.

This tool will be regularly updated as further allegations or attributions of foreign interference in the 2024 U.S. presidential election are made public. If you have questions regarding the tool or would like to submit a case for consideration, please contact the DFRLab.

FIAT 2024 consists of five elements that work together to tell the complete story of foreign interference allegations in the 2024 U.S. presidential election (some elements may not be viewable on mobile).

Filters enable users to adjust the visibility of cases by Attribution Score, Actor Nation, Platform, Method, Source, Source Category, Campaign, and Attribution Date. Free text search is also supported.

The Case Timeline displays cases as a series of points, arranged chronologically from left to right by Attribution Date. The row and color of each point correspond to the three most commonly mentioned Actor Nations: Russia, Iran, or China (additional Actor Nations appear in the “Other” row). The radius of each point corresponds to the case’s estimated severity on the Breakout Scale. The opacity of each point corresponds to the case’s estimated Attribution Score. Finally, cases in which Offline Mobilization occurred are indicated by a border around the corresponding point.
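
Read together, this encoding amounts to a simple mapping from case fields to point attributes. The sketch below is illustrative only; the field names and color values are assumptions, not the tracker's actual implementation.

    # Illustrative sketch of the Case Timeline encoding described above.
    # Field names ("actor_nation", "breakout_scale", etc.) are assumptions,
    # not the tracker's actual schema.
    ROW_COLORS = {"Russia": "#c0392b", "Iran": "#27ae60", "China": "#e67e22"}

    def encode_point(case: dict) -> dict:
        nation = case["actor_nation"] if case["actor_nation"] in ROW_COLORS else "Other"
        breakout = case.get("breakout_scale") or 0        # 1-6, or 0 if "Not Applicable"
        return {
            "x": case["attribution_date"],                # chronological position
            "row": nation,                                # Russia, Iran, China, or Other
            "color": ROW_COLORS.get(nation, "#7f8c8d"),   # gray for the "Other" row
            "radius": 4 + 2 * breakout,                   # severity on the Breakout Scale
            "opacity": case["attribution_score"] / 18,    # 18-point Attribution Score
            "border": bool(case["offline_mobilization"]), # Offline Mobilization indicator
        }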

The Discourse Timeline maps the volume of English-language media conversation regarding foreign interference by the three most commonly mentioned Actor Nations: Russia, Iran, and China. More information about these structured queries may be found in the Methodology section. The Discourse Timeline consists of two views:

  • X Posts (default) aggregates the number of posts made daily on X (formerly Twitter) about foreign interference by Russia, Iran, or China. This data was generated by querying an API provided by Meltwater, a social media monitoring tool. The DFRLab collected this data from January 1, 2024.
  • Television News Mentions aggregates the amount of airtime given to discussing foreign interference by Russia, Iran, or China across CNN, Fox News, and MSNBC. This data was generated by querying the Television Explorer of the GDELT project, with each instance representing a 15-second window of airtime. The DFRLab collected this data from January 1, 2022.

Key Events plots key events in the 2024 U.S. presidential election cycle.

A Case View may be accessed by hovering the cursor over a given case on the Case Timeline or by toggling the Data View to “Cases.” This view provides the Source of Attribution, Date of Attribution, the Date(s) of Activity, and a Description of the given case. Users may also see a breakdown of a case’s Attribution Score by its four subsections (Credibility, Objectivity, Evidence, and Transparency); clicking on the question mark in the right-hand corner of this view expands the full scorecard. Platforms, Methods, Source, Source Category, and Campaign are also presented in this view and can be clicked to filter the data accordingly.

The Data View presents a simplified table of the FIAT 2024 dataset. Cases are affected by all applied filters and can be sorted according to each column. The full dataset can also be downloaded from this view. By toggling from “Table” to “Cases,” users may access the Case View of any case in the currently filtered data.
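
For readers working with the downloaded file offline, a minimal sketch of the kind of filtering and sorting the Data View performs is shown below; the file name and column labels are assumptions and may differ from the actual export.

    # Minimal sketch of offline filtering on the downloaded FIAT dataset.
    # The file name and column labels are assumptions; check the actual export
    # from the Data View for the real schema.
    import pandas as pd

    df = pd.read_csv("fiat_2024_cases.csv", parse_dates=["Date of Attribution"])

    # Example: Russia-attributed cases with an Attribution Score of 12 or higher,
    # sorted chronologically by attribution date.
    subset = df[
        (df["Actor Nation"] == "Russia") & (df["Attribution Score"] >= 12)
    ].sort_values("Date of Attribution")

    print(f"{len(df)} cases total; {len(subset)} after filtering")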


Case Selection

In order to be included, cases must meet three criteria.

First, cases must involve allegations of foreign interference or foreign malign influence by primarily digital means. The Australian Government Department of Home Affairs defines foreign interference as an activity that is “coercive, corrupting, deceptive, or clandestine” in nature. The U.S. Office of the Director of National Intelligence defines foreign malign influence as “subversive, undeclared, coercive, or criminal activities” undertaken to affect another nation’s political attitudes, perceptions, or behaviors. These definitions exclude more benign examples of foreign influence, like lobbying, as well as overt and declared foreign propaganda activities.

Second, cases must be novel. A novel case is one which involves a fresh foreign interference claim or which reveals new evidence to reinvigorate an old one. A novel case is also one in which significant newsworthiness is attached to the individual or organization making the claim. In general, a president or ex-president’s claim is novel regardless of the evidence presented. Meanwhile, an op-ed or report by a mid-level US official is only novel if it contains previously undisclosed information.

Third, cases must be relevant to the 2024 U.S. election. Cases should include allegations of activity intended to influence voting behaviors, denigrate particular candidates, or engage in political or social debates of direct relevance to the election. Cases should also have been recorded after the November 8, 2022 U.S. midterm elections.

Attribution Score

The Attribution Score is a framework of eighteen binary statements (true or false) that assess foreign interference claims made by governments, technology companies, the media, and civil society organizations. The measure is intended to capture the reliability of the attribution as discernible through public sources rather than to serve as a fact-check of the attribution itself. If a statement is deemed applicable, a point is awarded. If a statement is deemed inapplicable or irrelevant, no point is awarded. Each case was coded twice and reconciled by a third reviewer.

This scoring system is based on the experience of DFRLab experts in assessing—and making—such attributions. It is also based on a review of work produced by the wider disinformation studies community, and particularly resources compiled by attribution.news.

The Attribution Score is composed of four subsections:

Credibility

  • The source of the attribution does not have a direct financial interest in a certain attribution outcome.
  • The source of the attribution has a diversified and transparent funding stream.
  • The source of the attribution does not strongly endorse a specific political ideology.
  • The source of the attribution is in no way affiliated with a political campaign.
  • The source of the attribution has not previously promoted mis- or disinformation.

Objectivity

  • The attribution avoids biased wording and high-inference or emotive language.
  • The headline accurately conveys the content of the attribution.
  • The attribution clearly distinguishes factual information from argumentative analysis.

Evidence

  • The attribution provides a clear illustration of the methods, tactics, and platforms involved in the alleged information operation.
  • The attribution contextualizes the engagement with, and impact of, the alleged information operation.
  • The attribution identifies actors and states allegedly responsible.
  • The attribution clearly explains the strategic goal and rationale of the actors who conducted the alleged information operation.
  • The attribution relies on information which is unique to, or can only be procured by, the relevant actor. (e.g. classified information for US federal agencies, back-end/developer information for technology companies)

Transparency

  • The attribution provides open access to a dataset or archived links of alleged assets.
  • The attribution methodology is clearly explained.
  • The attribution is replicable through open-source evidence.
  • The attribution acknowledges relevant limitations or mitigating factors in its assessment.
  • The attribution has been corroborated by a third party or independent investigation.
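
Because every statement is binary and worth one point, the overall score is simply the count of applicable statements: five for Credibility, three for Objectivity, five for Evidence, and five for Transparency, for a maximum of eighteen. A minimal sketch of the arithmetic, with statement wording abbreviated:

    # Minimal sketch of the Attribution Score arithmetic: each of the eighteen
    # binary statements contributes one point if applicable. Statement wording
    # is abbreviated here; the authoritative list is above.
    def attribution_score(statements: dict[str, list[bool]]) -> dict[str, int]:
        """Return per-subsection subtotals plus the overall score (maximum 18)."""
        subtotals = {section: sum(answers) for section, answers in statements.items()}
        subtotals["total"] = sum(subtotals.values())
        return subtotals

    example = {
        "credibility":  [True, True, False, True, True],    # 5 statements
        "objectivity":  [True, True, True],                  # 3 statements
        "evidence":     [True, False, True, True, False],    # 5 statements
        "transparency": [False, True, True, False, True],    # 5 statements
    }
    print(attribution_score(example))
    # {'credibility': 4, 'objectivity': 3, 'evidence': 3, 'transparency': 3, 'total': 13}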

The Breakout Scale

The Breakout Scale is a comparative model for estimating the reach and potential impact of influence operations based on data that is “observable, replicable, verifiable, and available from the moment they were posted.” The model was developed by Ben Nimmo, former DFRLab Research Director.

The framework, described in Nimmo’s paper “The Breakout Scale: Measuring the Impact of Influence Operations,” categorizes each case’s reach and potential impact based on its spread across platforms, communities, and media types.

The Breakout Scale is divided into six categories:

  • Category One: The case is confined to one platform with no breakout (i.e. the messaging does not spread beyond the community at the insertion point).
  • Category Two: The case is either confined to one platform with breakout (the messaging spreads beyond the community at the insertion point) or present on many platforms with no breakout (insertion points on multiple platforms, but messaging does not spread beyond them).
  • Category Three: The case has insertion points and breakout moments on multiple platforms, but it does not spread onto mainstream media.
  • Category Four: The case features cross-medium breakout beyond social media. It is reported by mainstream media as embedded posts or as reports.
  • Category Five: The case is amplified or endorsed by a celebrity or other high-profile individual.
  • Category Six: The case prompts a policy response or other concrete action, or includes a call for violence, carrying a risk of offline harm.

Attributions lacking sufficient evidence to justify a Breakout Scale classification are scored as “Not Applicable.” These claims only refer to foreign interference in general terms and do not describe any specific operations.

Discourse Timeline

The Discourse Timeline displays X data captured via Meltwater and television airtime data captured via GDELT. In both cases, we used a structured search consisting of an “Interference Term” and a “Country Term,” outlined below. In the case of Meltwater, we also applied the “Platform and Post Type Filters” to limit results to the X platform. The GDELT query differs slightly to accommodate the absence of wildcard character support.

Interference Term (shared across all three queries):
(amplif* OR bot OR bots OR collu* OR conspir* OR disinfo* OR disseminat* OR fake* OR financ* OR foreign OR fraud* OR fund* OR implicat* OR inauthentic OR influenc* OR intelligence OR interfer* OR malign OR manipulat* OR meddl* OR money OR narrative* OR polariz* OR promot* OR propagand* OR psyop* OR sponsor* OR tamper* OR undermin*)

Country Terms:
  • Iran: (Iran OR Iranian OR Khamenei)
  • Russia: (Kremlin OR Putin OR Russia OR Russian)
  • China: (Beijing OR China OR Chinese OR Xi OR Xi Jinping)

Platform and Post Type Filters (Meltwater only):
AND (NOT postType:rp) AND (socialType:twitter)
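
As a rough illustration of how the television series can be reproduced, the sketch below queries GDELT's public Television API (the service behind the Television Explorer). The query string, station identifiers, mode, and response handling are simplified assumptions rather than the project's exact configuration, which is given above.

    # Rough sketch of querying GDELT's public TV API (the service behind the
    # Television Explorer) for mention volume. Query terms, station identifiers,
    # mode, and response layout below are illustrative assumptions; the project's
    # full query is given above.
    import requests

    GDELT_TV = "https://api.gdeltproject.org/api/v2/tv/tv"

    params = {
        "query": "(interference OR meddling) (Russia OR Russian OR Putin OR Kremlin) "
                 "(station:CNN OR station:FOXNEWS OR station:MSNBC)",
        "mode": "timelinevolraw",          # assumed: raw counts of 15-second clips
        "format": "json",
        "startdatetime": "20220101000000",
        "enddatetime": "20241231235959",
    }

    resp = requests.get(GDELT_TV, params=params, timeout=60)
    resp.raise_for_status()
    data = resp.json()
    print(list(data))                      # inspect top-level keys before relying on layout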

Allegations of foreign interference in US elections that met the case selection criteria were recorded by DFRLab coders using a codebook of variables. Seven text variables, 52 multi-variable options, and four other variables were used to describe who made the allegation of interference against whom, what the attribution was, when it occurred, the platforms where it occurred, and how the interference was conducted. Some cases contain multiple allegations, either referring to interference attempts by different nation-states or to specific actors/campaigns originating from a single nation. To accommodate these cases, five additional variables are included to describe each “sub-attribution” in a given case.

What was the attribution?
  • Short Title (free text).
  • Short Description (free text).
  • Link to Attribution (link).
When did the interference and attribution occur?
  • Date(s) of Activity. Date or range of purported activity.
    • Start (date). Input if start date is known; if not, omit.
    • End (date). Input if end date is known; if not, omit.
  • Date of Attribution (date). The date of the attribution, corresponding to the date of the linked source.
Who is making the attribution, against whom?
  • Source of Allegation (free text). The original source of the interference allegation.
  • Source Nation (free text). The country where the source of the interference allegation originates. Since the scope of this dataset is interference in the US, the most common source nation for allegations is the United States. The source nation does not necessarily denote that the source is associated with a national government.
  • Source Category (select all that apply).
    • Civil Society Organization. A nonprofit, non-governmental, non-media entity, typically a university or think tank.
    • Foreign Government Body. A non-US government entity.
    • Government. Government agencies, elected representatives, and officials, even if quoted anonymously.
    • Influential Individual. A noteworthy individual, not currently affiliated with another category, who is deemed nationally recognizable or operating in the public sphere.
    • Media. Only applies if a news organization makes the allegation on the basis of its own investigation. A media organization reporting on an allegation made by someone else (e.g. an anonymous government official) is not included.
    • Private Consultancy. A company engaged in private monitoring and risk consulting, typically in the field of cybersecurity.
    • Technology Company. A company that operates a social media platform or offers a technology service.
  • Actor (free text). Brief description of the actor purportedly responsible for the interference attempt.
  • Actor Nation (free text). The country where the interference originates, according to the source. When the alleged activity is attributed to a non-state political actor, this field records that actor’s nation of origin. This does not necessarily denote that the actor is associated with the national government.
  • Attribution Type (select all that apply).
    • Direct Attribution. The source directly accuses the actor of malicious political behavior.
    • Proxy/Inferred Attribution. The source does not make a direct attribution, but clearly states that the activity is likely associated with the actor or strongly implies the accusation is directed at the actor.
    • Non-Aligned Commercial Activity. The interference consists of malicious commercial activity rather than a politically motivated information operation.
  • Campaign (free text). An identifying tag used to relate attributions to one another when they revolve around the same emergent narratives, tactics, or subjects. Discrete tags indicate that the activities were part of a larger pattern of behavior or a concerted effort.
On what platforms did the interference purportedly take place?
  • Media (select all that apply).
    • State Media. A media outlet controlled by a government or government proxy, which is not editorially independent.
    • Independent Media. Media outlets that are generally regarded as reputable, balanced, and independent of direct government control.
    • "Junk News" Media. Unreliable, skewed, openly propagandistic, or fringe media outlets that lack discernable government ties.
  • Platform (select all that apply). Platform(s) on which alleged interference occurred.
    • Facebook
    • Instagram
    • X
    • YouTube
    • LinkedIn
    • Reddit
    • Discord
    • VK
    • Forum Board
    • WhatsApp
    • Telegram
    • Signal
    • WeChat
    • SMS
    • TikTok
    • Unspecified
    • Other (free text)
  • Other Platforms (select all that apply).
    • Advertisement (binary).
    • Email (binary).
How was the interference purportedly conducted?
  • Method (select all that apply). Methods used in both the creation and the amplification of content related to the alleged foreign interference.
    • Brigading. Authentic social media accounts with evidence of coordinated amplification or harassment.
    • Sockpuppets. Inauthentic social media accounts; evidence suggests a high likelihood of human operation.
    • Third-Party Automation. Inauthentic social media accounts; evidence suggests a high likelihood of automation by third-party program.
    • DDoS. Distributed denial-of-service attack; malicious attempt to disrupt server traffic.
    • Domain Spoofing. Manipulation of search queries and results; typosquatting.
    • Influencer Payola. Clandestine or indirect payment to an organization or influential individual for the purposes of content creation or amplification.
    • Hacking (select all that apply). Unauthorized and clandestine access to or manipulation of digital systems, networks, or data, often for the purpose of information gathering, system disruption, or data manipulation.
      • Data Manipulation. The clandestine manipulation of computer systems or accounts; account hijacking or the co-optation of users' social media profiles.
      • Data Exfiltration. Unauthorized movement of data; spearphishing; hack-and-release.
      • Other. Forms of hacking not captured by the categories above.
    • Cheapfakes. Deceptively edited content; decontextualization of existing media, passed off as current; deceptive co-option of existing brands; does not include use of Generative AI.
    • Generative AI. Augmented or fabricated content produced using artificial intelligence; "deep fakes"; textual generation. Sometimes referred to as "synthetic media," although this term does not adequately distinguish between the use of deep learning and use of more basic manipulative techniques.
How far did the interference effort spread?
  • Breakout Scale (select one). Methodology is described above; categorizes the influence operation's reach and potential impact based on its spread across platforms, communities, and media types.
    • Category One
    • Category Two
    • Category Three
    • Category Four
    • Category Five
    • Category Six
    • Not Applicable (Allegation is too vague to categorize)
  • Offline Mobilization (binary). Tangible, real-world events and activities ascribed to the influence operation.
How credible, objective, well-evidenced, and transparent is the allegation?
  • Attribution Score. Methodology is described above; the goal of this score is to critically assess the validity of the allegation from multiple perspectives.
    • Credibility
    • Objectivity
    • Evidence
    • Transparency
For each sub-attribution in a given case, the following data is included:
Who, specifically, carried out the interference?
  • Sub-Actor (free text). Expressly named Actor included in the primary attribution to whom specific activities are linked.
  • Sub-Actor Nation (free text). National affiliation of the named sub-actor.
  • Sub-Actor Parent Organization or Affiliation (free text). The organization, institution, or affiliation within which the sub-actor operates. This does not include Actor Nation. (Examples would include IRGC, 8200, CENTCOM, Ministry of Public Security.)
  • Campaign Tag (free text). Does this sub-attribution connect to any larger pattern of behavior? See the existing list of "Campaign" Tags before making a new tag.
  • Date of Activity (free text). Date or range of purported activity.
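
Taken together, the codebook implies a nested record per case. A rough sketch of how one coded case with a single sub-attribution might look, using field names paraphrased from the codebook above (the dataset's actual column names and values may differ):

    # Rough sketch of one coded case with a single sub-attribution, using field
    # names paraphrased from the codebook above; the dataset's actual column
    # names and values may differ.
    example_case = {
        "short_title": "Example influence operation",
        "short_description": "Inauthentic accounts amplifying divisive election narratives.",
        "link_to_attribution": "https://example.org/report",        # placeholder URL
        "date_of_attribution": "2024-08-01",
        "dates_of_activity": {"start": "2024-06-01", "end": None},  # end date unknown, omitted
        "source_of_allegation": "Example technology company",
        "source_nation": "United States",
        "source_category": ["Technology Company"],
        "actor": "Network of inauthentic accounts",
        "actor_nation": "Russia",
        "attribution_type": ["Direct Attribution"],
        "platforms": ["X", "Telegram"],
        "methods": ["Sockpuppets", "Generative AI"],
        "breakout_scale": "Category Two",
        "offline_mobilization": False,
        "attribution_score": {"credibility": 5, "objectivity": 3,
                              "evidence": 4, "transparency": 2},     # 14 of 18
        "sub_attributions": [
            {
                "sub_actor": "Example named operator",
                "sub_actor_nation": "Russia",
                "sub_actor_parent_organization": "Example ministry",
                "campaign_tag": "ExampleCampaign",
                "date_of_activity": "June-July 2024",
            }
        ],
    }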

About This Project

The core FIAT research team is composed of Max Rizzuto, Dina Sadek, Meredith Furbish, Julien Fagel, and Emerson T. Brooking.

The tool was developed by Maarten Lambrechts, based on the Interference 2020 Tracker developed by Mathias Stahl.

This project was directed by Graham Brookie and Emerson T. Brooking and edited by Andy Carvin.

Invaluable counsel and coordination were provided by Nicholas Yap, Andy Carvin, Dominique Ramsawak, and Heather Kunin.

About The DFRLab

The Digital Forensic Research Lab (DFRLab) at the Atlantic Council is a first-of-its-kind organization with technical and policy expertise on disinformation, connective technologies, democracy, and the future of digital rights. Incubated at the Atlantic Council in 2016, the DFRLab is a field-builder, studying, defining, and informing approaches to the global information ecosystem and the technology that underpins it.

The DFRLab pursues this mission through three main efforts:

  • Producing timely primary open source (OSINT) research on disinformation, online harms, foreign interference, platform policy and approaches, and other aspects of the information ecosystem globally;
  • Setting research standards and training others around the world in techniques and practices, enabling more people to do work like the DFRLab in their own backyards, or to mainstream an understanding of the digital ecosystem into their fields; and
  • Leveraging the DFRLab’s unique insights from work across governments, companies, media, and civil society to craft policy recommendations and to collaborate with the global community working to ensure the digital world is a rights-reinforcing and democratic one.

About the Atlantic Council

The Atlantic Council promotes constructive leadership and engagement in international affairs based on the Atlantic Community’s central role in meeting global challenges. The Council provides an essential forum for navigating the dramatic economic and political changes defining the twenty-first century by informing and galvanizing its uniquely influential network of global leaders. The Atlantic Council—through the papers it publishes, the ideas it generates, the future leaders it develops, and the communities it builds—shapes policy choices and strategies to create a more free, secure, and prosperous world.